I've been seeing a number of posts about the death, or at least increasing irrelevance, of the RDBMS. First I read Nitin Borwankar's article about how Web 2.0 disrupts our database world. Then I found this post by Alex Bundardzic saying that Rails 1.2 drives the final nail into the RDBMS coffin. Then just today Arnon Rotem-Gal-Oz says that the RDBMS is dead.
What the heck is going on here? Why is everyone claiming the irrelevance or actual death of the RDBMS?
Alex's blog is one I don't fully understand, but then again, I don't think I fully understand his Resource-Oriented Architecture, and I've never used Ruby on Rails. I can't see how REST support in Ruby on Rails somehow gets rid of the need for an RDBMS.
But let's look at the other two. Both Nitin and Arnon are talking about the impact of scale. Nitin also talks about the impact of hierarchical organizations such as social networks that are very hard to translate to a relational model and still get good performance.
To be honest, I think the biggest challenge is scale. Web applications are attracting a user base that is not just your own company's employees, but an ever-increasing vast sea of global consumers, millions and millions of them. If your web application is successful, you may need to hit levels of scale, reliability, and availability that you never thought possible, and definitely far beyond what relational database vendors thought possible when these systems were first architected.
With this kind of scale, you have to try to eliminate every point of contention and every single point of failure in your system. You need to be able to distribute it across thousands of machines where every request can run independently and on any machine. And the problem is that the database is a big point of contention, because it implements transactional semantics, which require centralized locking and synchronous writes to disk, creating bottlenecks and slowing everything down.
So I agree with these authors that Web Scale is forcing us to do a major re-thinking of how and when to persist data, and ask questions about how important, really, are ACID semantics and normalized data? Where and when can I let go of atomicity, consistency, isolation and durability (ACID)?
This reminds me of what airlines do. You book a flight, and you have the impression that they have reserved you a seat on the plane. Well, they have, and they haven't. In their case they need to optimize for the fullest flight possible, and people are unpredictable. We commit to being there for the flight, but sometimes we don't show up or cancel at the last minute. So, they overbook. They take the risk that everybody or almost everybody shows up, and if necessary they do a compensating transaction (literally) by paying people to take another flight. This potential cost is worth it to them in the long run.
In the same way, you start making calculated risks because scalability and performance are so important. You risk some loss of data and use a "lazy cache" (at the disk level or above) that batches up writes rather than writing to disk for every transaction. You use more relaxed modes of maintaining consistency with things like optimistic concurrency and compensating transactions. You let the client manage some of the durability by sending it a representation of the current state and letting it ship that back to you for the next operation (this is the REST model).
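To make the optimistic concurrency idea concrete, here's a minimal sketch (all names are made up for illustration): each record carries a version number, a write only succeeds if the version hasn't changed since the record was read, and a conflict tells the caller to re-read and retry, or run a compensating action, instead of holding a lock the whole time.

```python
# Sketch of optimistic concurrency: no locks are held while the
# caller thinks; conflicts are detected at write time via a version
# number. All class and key names here are illustrative.

class ConflictError(Exception):
    """Raised when someone else wrote the record first."""

class Store:
    def __init__(self):
        self._data = {}  # key -> (version, value)

    def read(self, key):
        return self._data.get(key, (0, None))

    def write(self, key, expected_version, value):
        current_version, _ = self._data.get(key, (0, None))
        if current_version != expected_version:
            # Stale read: caller must re-read and retry, or run a
            # compensating transaction (like the airline paying you off).
            raise ConflictError(key)
        self._data[key] = (current_version + 1, value)

store = Store()
version, _ = store.read("seat-12A")
store.write("seat-12A", version, "alice")   # succeeds: version matched
try:
    store.write("seat-12A", version, "bob") # same stale version: conflict
except ConflictError:
    pass                                    # bob re-reads and retries
```

The point is that the common, uncontended case pays no locking cost at all; only the rare conflict pays the retry.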
Another issue of scaling is the fact that databases tend to be centralized, and this creates a bottleneck. Oracle RAC is clustered, but it can't scale to hundreds of nodes because it's busy managing locks and shipping data around.
The centralized bottleneck of an RDBMS can often be solved at the application level by partitioning your data across multiple databases. This is what eBay does - each request is keyed off of the user id and sent to the appropriate database/app server partition. But this does require a domain model where as little data as possible is shared across requests. Otherwise you just end up duplicating the bottleneck at the application tier by shipping data between application servers. This is another way where the REST model of shipping relevant state back to the client comes in handy -- the client can provide all the context it needs for each request, and the server doesn't have to keep this stuff around (and potentially share it with another server in the cluster).
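The routing step in that kind of partitioning scheme is simple enough to sketch in a few lines. This is not eBay's actual implementation, just the general shape: hash the user id to a stable partition so every request for that user lands on the same database.

```python
# Illustrative keyed partitioning: a stable hash of the user id
# picks one shard, so a given user's requests always hit the same
# database and no data needs to be shared across partitions.
# Shard names are made up for the example.

import hashlib

PARTITIONS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def partition_for(user_id: str) -> str:
    # md5 rather than hash() so the mapping is stable across
    # processes and restarts -- routing must be deterministic.
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return PARTITIONS[int(digest, 16) % len(PARTITIONS)]

# The same user always routes to the same shard:
assert partition_for("user-42") == partition_for("user-42")
```

Note the catch the paragraph above describes: this only works if a request keyed on one user never needs another user's data, otherwise you're back to shipping data between servers.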
Finally, maybe not everything has to be stored in the relational model. Maybe you can get by with a key/value store like Berkeley DB. Or maybe you can just keep some of your data in a clustered in-memory hashtable like memcached or Gigaspaces. If you can do this, it can give you some serious performance wins.
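For the memcached-style case, the usual pattern is cache-aside: check the in-memory store first and only fall back to the slow lookup on a miss. Here's a toy sketch with a plain dict standing in for the distributed cache and a made-up `slow_db_lookup` standing in for the database query.

```python
# Toy cache-aside sketch: a dict stands in for a clustered
# in-memory store like memcached; slow_db_lookup is a stand-in
# for an expensive database query.

cache = {}

def slow_db_lookup(key):
    # Pretend this is a round trip to the relational database.
    return f"value-for-{key}"

def get(key):
    if key in cache:
        return cache[key]        # fast path: served from memory
    value = slow_db_lookup(key)  # slow path: hit the database...
    cache[key] = value           # ...and remember the result
    return value

get("profile:42")  # first call misses and loads from the database
get("profile:42")  # second call is served straight from the cache
```

The win is exactly the one described above: reads that would each have been a database round trip become memory lookups, and the database only sees the misses.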
But does all this make the database "dead" or "irrelevant?" I don't think so, and I don't think these authors really think so either (well, I don't know about Alex). I think it's very very good to have something that will give you the ACID guarantees and the power of SQL when you need it. It's a sense of security and safety, like having a home to come back to with a warm fire and a cup of cocoa after having a wild old time on the town.
It just means we have to re-think any tendencies we may have that everything has to be completely correct and perfect and relational when dealing with our data. Sometimes sloppy but quick is the best way to go.
Gigaspaces is actually a lot more than just an in-memory hashtable. Because it stores objects, you can keep your data and logic close together, and in almost all cases this allows you to process a request on a single machine. This can be a very big performance and scalability win.