Key-Value Store

Pascal Francq

September 8, 2015 (April 10, 2012)


Most software applications rely on a data management system (files, databases, etc.) to store the data they manipulate. In the late 1970s, relational databases emerged to manage large sets of data. But they struggle to scale to the current data deluge, in particular the one generated by the growth of the Internet. New data management systems, such as key-value stores, have been developed to face this situation.

1 Introduction

While some software applications can load all the necessary data once and for all (mostly when they start), others cannot. The latter therefore rely on separate processes that manage the whole data set outside the memory (on hard disks, for example) and retrieve only the (small) part needed at a given moment. In the late 1970s, applications were specially developed to provide relational databases [A]  [A] They are sometimes wrongly called SQL databases. In fact, SQL databases are a particular type of relational databases: those understanding the Structured Query Language (SQL).. These databases manage tables, where each table stores records (for example persons) defined by a fixed set of fields (name, surname, address, etc.). Table fields can be linked to form a database schema (for example, one table stores customers, another stores articles, and a third, linked to the two others, stores the purchases). Over time, complex relational databases were developed with powerful advanced features, including consistency and concurrency, to cite a few.
Today, we face a data deluge in certain domains. Consider the Internet alone: there are at least a trillion documents online [1]. In this context, the most interesting features provided by relational databases (such as the “understanding” of a database schema) come with a cost, in terms of the computational time needed to perform read and write operations, that prohibits their use. New data management systems, called NoSQL databases, were developed to face this new situation.

2 NoSQL Databases

NoSQL database is a generic term for database management systems that are not relational databases (they do not understand SQL). Concretely, NoSQL databases may not require a fixed database schema. In general, they are optimized for a given type of stored data (key-value pairs, graphs, documents, etc.). Several NoSQL databases were developed by Internet giants, and some of them are freely available.
NoSQL databases are often optimized for particular tasks, mostly retrieving information (as in search engines) or appending data (most social networking applications are based on information continuously added by their users). To ensure rapid access to information, these databases are sometimes replicated in several distributed data centers. While consistency is certainly the most important feature for relational databases, for NoSQL databases it is scalability (the fact that performance does not degrade as fast as the amount of data increases).
This change of priority comes with a major drawback, theorized as the CAP theorem [2]. It states that a distributed data management system (a traditional relational database or a NoSQL one) cannot ensure the three following characteristics at the same time: consistency (all replicas of a database have the same data), availability (every request always receives a response) and partition tolerance (the system continues to operate when some replicas encounter problems). The developers of NoSQL databases must therefore balance these three elements for a particular context.

3 Key-Value Stores

Key-value stores are a particular type of NoSQL database. As their name suggests, they store key/value pairs. For example, for a search engine, a store may associate each keyword (the key) with the list of documents containing it (the corresponding value).
One approach to implement a key-value store is to use a file decomposed into blocks [3]. As Figure 1 shows, each block is associated with a number (ranging from 1 to the total number of blocks). Each block manages a set of key-value pairs: the beginning of the block contains, after some header information, an index of the keys and the positions of the corresponding values. The values themselves are stored starting from the end of the block (like a memory heap). The free space available lies between the end of the index and the last stored value.
Figure 1 Structure of a key-value store.
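A minimal sketch of this block layout may help to fix the ideas. The block size, the header size and the index entry layout below are illustrative assumptions, not the actual on-disk format:

```python
class Block:
    """Illustrative block: the index grows from the front, values from the back."""
    SIZE = 4096    # block size (assumption); bounds the largest storable value
    HEADER = 2     # assumed header information (e.g. number of pairs)
    ENTRY = 8      # assumed index entry: 4-byte key, 2-byte position, 2-byte length

    def __init__(self):
        self.index = []                    # (key, position, length) of each value
        self.data = bytearray(self.SIZE)   # raw block content
        self.values_start = self.SIZE      # values are stored from the end

    def free_space(self):
        # Free space lies between the end of the index and the last stored value.
        index_end = self.HEADER + len(self.index) * self.ENTRY
        return self.values_start - index_end

    def insert(self, key, value):
        if self.free_space() < len(value) + self.ENTRY:
            return False                   # the block cannot hold this pair
        self.values_start -= len(value)
        self.data[self.values_start:self.values_start + len(value)] = value
        self.index.append((key, self.values_start, len(value)))
        return True

    def lookup(self, key):
        for k, pos, length in self.index:
            if k == key:
                return bytes(self.data[pos:pos + length])
        return None
```

For a search engine, the value stored under a key would typically be the serialized list of identifiers of the documents containing that keyword.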
In this implementation, the size of a block is important since it defines the largest value that can be stored (for example, the longest list of identifiers of documents containing a given keyword). Moreover, it supposes that a block number is associated with each key. These block numbers can be assigned in two different ways:
  1. The block number is obtained directly from the key, typically by using a hash function. The size of the file is then defined by the largest block number that can be computed from any possible key.
  2. The block number is assigned incrementally. When a new pair must be stored, the first block that can hold it is chosen. In practice, a given amount of space is reserved in each block in order to manage updates of existing pairs (a new value can replace an older, smaller one). This limits the size of the file to the amount of values to store.
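The first scheme can be sketched with a standard hash function; the number of blocks is an arbitrary assumption for the example:

```python
import hashlib

NB_BLOCKS = 1024  # assumed number of blocks addressable in the file

def block_number(key: str) -> int:
    # Hash the key and map the digest to a block number in [1, NB_BLOCKS]
    # (0 is kept free so it can represent an invalid block).
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NB_BLOCKS + 1
```

Since the function is deterministic, the same keyword always lands in the same block; the file, however, must be large enough to contain NB_BLOCKS blocks even if few keys are ever stored.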

4 Implementation

The key-value store implementation described above is provided by the R Core Library through three classes.
The RBlockFile class manages a file built from several blocks of a given size. It maintains an in-memory cache of a given number of blocks. It supposes that the first block number is 1 (0 represents an invalid block). The block and cache sizes are parameters.
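The caching mechanism can be illustrated with a simplified sketch; this is not the actual R Core Library API, and the class and method names are invented for the example:

```python
from collections import OrderedDict

class BlockFileSketch:
    """Illustrative block file with an in-memory LRU cache of blocks.
    Block numbers start at 1; 0 represents an invalid block."""

    def __init__(self, block_size=4096, cache_size=8):
        self.block_size = block_size
        self.cache_size = cache_size
        self.cache = OrderedDict()   # block number -> block content
        self.storage = {}            # stands in for the blocks on disk

    def get_block(self, nb):
        if nb == 0:
            raise ValueError("0 represents an invalid block")
        if nb in self.cache:
            self.cache.move_to_end(nb)                       # most recently used
        else:
            if len(self.cache) >= self.cache_size:
                old, block = self.cache.popitem(last=False)  # evict the LRU block
                self.storage[old] = block                    # write it back
            self.cache[nb] = self.storage.get(nb, bytearray(self.block_size))
        return self.cache[nb]
```

Keeping the most recently used blocks in memory avoids a disk access for each operation, which matters when many key-value pairs hash to the same few blocks.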
The RKeyValueFile template inherits from RBlockFile. It provides a key-value store where a key is represented by the class given as template parameter. When a key-value pair to store is associated with a null block number, it searches for the first block with sufficient free space. A tolerance parameter specifies the amount of space reserved in each block for future updates of existing pairs. The Seek methods are used to position the file on a specific value, identified by a block number and an index (the key). The basic Read and Write methods can then be used to read or write the value (in practice, several calls to these methods are needed to transfer the whole data representing a value).
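The search for a suitable block under a null block number can be sketched as a first-fit scan. Expressing the tolerance as a fraction of the block size is an assumption made for the example, not necessarily how the actual parameter works:

```python
def choose_block(free_spaces, value_len, block_size=4096, tolerance=0.1):
    """Return the number of the first block able to hold the value while
    keeping a reserve free for future updates of existing pairs;
    return 0 (the invalid block) if no existing block fits."""
    reserve = int(block_size * tolerance)
    for nb, free in enumerate(free_spaces, start=1):  # block numbers start at 1
        if free - reserve >= value_len:
            return nb
    return 0  # a new block must then be appended to the file
```

Storing the chosen block number alongside the key then avoids rescanning the whole file on later reads.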

5 References


[1] Jesse Alpert & Nissan Hajaj, “We Knew the Web Was Big...”, The Official Google Blog, 2008.

[2] Seth Gilbert & Nancy Lynch, “Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-tolerant Web Services”, ACM SIGACT News, 33(2), pp. 51–59, 2002.

[3] Justin Zobel, Alistair Moffat & Ron Sacks-Davis, “Storage Management for Files of Dynamic Records”, In Proceedings of the Australasian Database Conference, pp. 26–38, 1993.