Software Development

RAM is the new SSD

Your data fits in RAM. Yes, it does. Don’t believe it? Visit the hilarious yourdatafitsinram.com website.

But Intel's announcement last week adds an entirely new dimension to this, one that hasn't gotten enough attention in the blogosphere yet.

New 3D XPoint™ technology brings non-volatile memory speeds up to 1,000 times faster than NAND, the most popular non-volatile memory in the marketplace today.

And then:

The companies invented unique material compounds and a cross point architecture for a memory technology that is 10 times denser than conventional memory

This is colossal news, which you can read from the official source here: http://newsroom.intel.com/community/intel_newsroom/blog/2015/07/28/intel-and-micron-produce-breakthrough-memory-technology

What does it mean for software?

SSDs have already had a big impact on how we think about software, especially in the database business. Many an RDBMS's internal optimisations are based on the assumption that the database is installed on a system with few CPUs, a modest amount of RAM, and a large HDD. HDDs are very slow and suffer from high latency caused by their spinning platters. Data therefore needs to be cached on several layers – in the operating system, which accesses blocks on the disk, as well as in the database, which accesses rows from tables or indexes.
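The gap between those layers can be put in numbers. Here's a minimal back-of-envelope sketch; the latency figures are commonly cited ballpark assumptions, not measurements, so adjust them for your own hardware:

```python
# Back-of-envelope: how many random reads per second a single serial
# reader gets at typical access latencies. All figures are rough,
# commonly cited ballpark assumptions -- not measurements.
LATENCY_SECONDS = {
    "HDD (seek + rotation)": 10e-3,   # ~10 ms per random read
    "SATA SSD":              100e-6,  # ~100 us per random read
    "RAM":                   100e-9,  # ~100 ns per random access
}

def random_reads_per_second(latency_s: float) -> float:
    """One serial reader doing back-to-back random accesses."""
    return 1.0 / latency_s

for medium, latency in LATENCY_SECONDS.items():
    print(f"{medium:>22}: {random_reads_per_second(latency):>14,.0f} random reads/s")
```

Three orders of magnitude separate each layer from the next – which is exactly why every layer caches the one below it.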

SSDs changed a lot of this, as the spinning (and its associated rotational latency) is gone, which is most useful for index lookups, as Markus Winand from use-the-index-luke.com explains:

index lookups have a tendency to cause many random IO operations and can thus benefit from the fast response time of SSDs. The fun part is that properly indexed databases get better benefits from SSD than poorly indexed ones
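Why random I/O dominates here is easy to sketch. Assuming (hypothetically, for illustration) a B-tree index of depth 4 – i.e. roughly four random page reads per fully uncached lookup – and the same ballpark latencies as before:

```python
# Sketch: why random-I/O-heavy index lookups benefit so much from SSDs.
# Numbers are illustrative assumptions: a B-tree of depth 4 means about
# 4 random page reads per lookup when nothing is cached.
BTREE_DEPTH = 4             # random page reads per uncached index lookup
HDD_RANDOM_READ_S = 10e-3   # ~10 ms per random read (assumption)
SSD_RANDOM_READ_S = 100e-6  # ~100 us per random read (assumption)

def lookup_time(random_reads: int, latency_s: float) -> float:
    """Total time for one lookup that performs the given random reads serially."""
    return random_reads * latency_s

hdd = lookup_time(BTREE_DEPTH, HDD_RANDOM_READ_S)
ssd = lookup_time(BTREE_DEPTH, SSD_RANDOM_READ_S)
print(f"HDD lookup: {hdd * 1000:.1f} ms, SSD lookup: {ssd * 1000:.2f} ms, "
      f"speed-up: {hdd / ssd:.0f}x")
```

A sequential table scan, by contrast, amortises the seek over many pages, so it gains far less from the switch – hence the observation that well-indexed databases profit more.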

SSDs are still relatively new and not yet fully adopted in enterprise data centers and the software that runs on them. Yet we're already seeing this new trend:

RAM is the new SSD

One of the most impressive exhibits on yourdatafitsinram.com is Stack Exchange, the platform running the popular Stack Overflow. According to their website, the platform transfers 48 TB of data per month to its users, at an average of 225 requests per second.
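Those two published figures imply an average response size. A quick sanity check, assuming a 30-day month and decimal terabytes (both assumptions on our part):

```python
# Cross-checking Stack Exchange's published numbers: 48 TB transferred
# per month at an average of 225 requests per second. A 30-day month
# and decimal terabytes are assumed.
TB = 10**12
bytes_per_month = 48 * TB
requests_per_month = 225 * 60 * 60 * 24 * 30  # req/s * s/min * min/h * h/day * days

avg_response_bytes = bytes_per_month / requests_per_month
print(f"~{avg_response_bytes / 1024:.0f} KiB per response on average")
```

Roughly 80 KiB per response – plausible for HTML pages plus assets, which suggests the published numbers are internally consistent.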

From our perspective, the database metrics are even more interesting: Stack Overflow essentially runs on a single SQL Server instance (with a hot standby), serving 440M queries per day with 384 GB of RAM and a database size of 2.4 TB.
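The same back-of-envelope treatment applies to the database tier (decimal units assumed throughout):

```python
# 440M queries per day on a single instance with 384 GB of RAM and a
# 2.4 TB database. Decimal units assumed.
queries_per_day = 440_000_000
queries_per_second = queries_per_day / 86_400   # seconds per day

ram_gb = 384
db_size_gb = 2_400
ram_fraction = ram_gb / db_size_gb              # share of the DB that fits in RAM

print(f"~{queries_per_second:,.0f} queries/s on average; RAM covers "
      f"{ram_fraction:.0%} of the database")
```

Over 5,000 queries per second on one box, with only 16% of the data resident in RAM – the hot working set evidently fits, which is the whole point of the yourdatafitsinram.com argument.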

The full metrics can be found on this website: http://stackexchange.com/performance

Now, let’s apply Intel’s new 3D XPoint™ technology to this model – perhaps we won’t need any disks anymore, after all (except for logging and backups)?

Don’t scale out. Yet.

A lot of recent hype has been revolving around the need to scale out, as Moore’s Law (at least in terms of single-core clock speed) has come to a halt and we now need to parallelise across many cores. But that doesn’t mean we absolutely need to parallelise across many machines. Keeping all data processing in one place – on a single machine that can be scaled up massively with processors and RAM – spares us hard-to-manage network latency and lets us keep using established, only slightly adapted RDBMS technology. Hardware prices will crumble soon enough.
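The network-latency cost of scaling out is easy to underestimate. A minimal sketch of the worst case – a chain of dependent lookups, where each one needs the previous result before it can start – using assumed ballpark latencies:

```python
# Sketch of the scale-up argument: N dependent lookups served from local
# RAM vs. fetched one at a time over the network. Latency figures are
# ballpark assumptions, not measurements.
N = 1_000                  # dependent lookups; each needs the previous result
RAM_ACCESS_S = 100e-9      # ~100 ns local memory access (assumption)
NETWORK_RTT_S = 500e-6     # ~0.5 ms intra-datacenter round trip (assumption)

local_s = N * RAM_ACCESS_S     # everything on one scaled-up box
remote_s = N * NETWORK_RTT_S   # each lookup crosses the network

print(f"local: {local_s * 1e3:.3f} ms, remote: {remote_s * 1e3:.1f} ms, "
      f"ratio: {remote_s / local_s:.0f}x")
```

Batching and pipelining can hide some of this, but for latency-sensitive, dependent access patterns, one big box with everything in RAM is hard to beat.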

We’re looking forward to an exciting new era of scaling up massively. With SQL, of course!

Reference: RAM is the new SSD from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Lukas Eder

Lukas is a Java and SQL enthusiast and developer. He founded Data Geekery GmbH and is the creator of jOOQ, a comprehensive SQL library for Java. He blogs mostly about three topics: Java, SQL, and jOOQ.