We couldn't get this project funded, for various reasons. It's been frustrating because I knew we were on to something but couldn't make it happen - and now, as many of us predicted, the industry is moving in that direction, with a growing belief in denormalization, caching, and eventual consistency. Still, many applications write to the database as part of the transaction cycle.
But this article at HighScalability hit the nail on the head - you evolve your application from database-centric to cache-centric to memory-centric. Money quote:
As scaling and performance requirements for complicated operations increase, leaving the entire system in memory starts to make a great deal of sense. Why use cache at all? Why shouldn't your system be all in memory from the start?
Well, that sounds awfully familiar. And it's true. If you are replicating anyway, your risk of data loss is pretty minimal, and as Pat Helland, ex-Amazon architect, says, computers suck, and you should just plan to apologize sometimes. If you batch your changes every N minutes, then you have an N minute window where some changes may be lost. But for many applications, that's OK. And the wins you get in terms of throughput are significant.
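To make the trade-off concrete, here's a minimal write-behind sketch in Python: reads and writes hit memory, and dirty entries are flushed to the database in one batch every N seconds. The `WriteBehindStore` class and its `persist` callback are hypothetical names, not anything from the article - just one way to illustrate the N-minute loss window.

```python
import threading
import time

class WriteBehindStore:
    """In-memory store that batches writes and flushes them to a backing
    database every `interval` seconds. Anything written inside the current
    flush window can be lost on a crash - that's the trade-off.
    Illustrative only; `persist` stands in for your real database write."""

    def __init__(self, persist, interval=300):
        self.persist = persist   # callable taking a dict of dirty entries
        self.interval = interval
        self.data = {}           # authoritative in-memory state
        self.dirty = {}          # changes not yet persisted
        self.lock = threading.Lock()

    def put(self, key, value):
        with self.lock:
            self.data[key] = value
            self.dirty[key] = value   # fast path: no database round-trip

    def get(self, key):
        with self.lock:
            return self.data.get(key)

    def flush(self):
        with self.lock:
            batch, self.dirty = self.dirty, {}
        if batch:
            self.persist(batch)       # one bulk write instead of many small ones

    def run_forever(self):
        while True:
            time.sleep(self.interval)
            self.flush()
```

The throughput win comes from `put` never touching the database, and from `flush` coalescing repeated writes to the same key into a single persisted value.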
Latency was slashed because logic was separated out of the HTTP request/response loop into a separate process, and database persistence was done offline.
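That separation can be sketched in a few lines: the request handler enqueues the event and returns immediately, while a background worker drains the queue and does the database write outside the request path. The names here (`handle_request`, `save_to_db`) are hypothetical stand-ins, not the article's actual implementation.

```python
import queue
import threading

# Events produced on the hot path, consumed by an offline persistence worker.
events = queue.Queue()

def handle_request(payload):
    """Request handler: serve from in-memory state, enqueue the write.
    No database call happens inside the request/response loop."""
    events.put(payload)            # O(1) enqueue, then return immediately
    return {"status": "accepted"}

def persistence_worker(save_to_db):
    """Background loop that persists events outside any request."""
    while True:
        event = events.get()
        save_to_db(event)          # the slow database write happens here
        events.task_done()
```

Run `persistence_worker` in a daemon thread (or a separate process, as in the article) and the request path never blocks on the database.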
It looks like the way they implemented an all-memory solution was using Gigaspaces, which, by the way, has a solution that deploys and scales automatically on the Amazon EC2 fabric. And the result: near linear scalability and 1 billion events a day. Yeah, that's the ticket.
Actually, I think that statement is misleading. Reading the article more carefully, they are claiming they can get to 1 billion events a day. That hasn't actually been tested, so take it with a grain of salt.