Using Elastic Search as a Key Value store

I have in the past used Solr as a key value store. Doing that provided me with some useful information:

  1. Using Solr as a key value store actually works reasonably well. I have in the past used indexes on a two node Solr master/slave setup with up to 90M json documents of roughly 1-2KB each, a total (unoptimized) index size of well over 100GB, handling queries that returned tens to hundreds of documents at a throughput of 4 queries per second, with writes & replication going on at the same time. In a different project we had 70M compressed (gzip) xml values of roughly 5KB each stuffed in a Solr index that managed to sustain dozens of reads per second in a prolonged load test with sub 10ms response times; total index size was again a bit over 100GB. This was competitive (slightly better, actually) with a MySql based solution that we load tested under identical conditions (same machine, data, and test). So, when I say Solr is usable as a key value store, I mean I have used it and would use it again in a production setting for data intensive applications.
  2. You need to be aware of some limitations with respect to eventual consistency, lack of transactionality, reading your own writes, and a few other things. In short, you need to make sure your writes don’t conflict, be aware of the latency between the moment you write something and the moment that write becomes visible through queries, and thus avoid reading your own writes immediately after writing them.
  3. You need to be aware of the cost of certain operations in Lucene (the framework used by Solr). Getting stuff by id is cheap. Running queries that require Lucene to look at thousands of documents is not, especially if those documents are big. Running queries that produce large result sets is not cheap either. Mixing lots of reads and writes will kill performance due to repeated internal cache invalidation.
  4. Newer versions of Lucene offer vastly improved performance due to more clever use of memory, massive optimizations related to concurrency, and a less disruptive commit phase. Lucene 4 in particular is reportedly a huge improvement.
  5. My experience under #1 is based on Lucene 2.9.x and 3.x, prior to most of the aforementioned improvements. That means I should get better results doing the same things with newer versions.

Recently, I started using Elastic Search, an alternative Lucene based search product, and this makes the above even more interesting. Elastic Search is often depicted as simply a search engine similar to Solr. However, that is a bit of an understatement: it is quite a bit more than that.

It is better described as a schema-less, multi-tenant, replicating & sharding key value store that also implements extensible & advanced search features (geospatial, faceting, filtering, etc.).

In simpler terms: you launch it, throw data at it, query it to find it back, and add more nodes to scale. It’s that simple. Few other products do this, and even fewer do it with as little ceremony as Elastic Search. This includes most common relational and nosql solutions on the market today. I’ve looked at quite a few; none come close to the out of the box utility and usability of Elastic Search.

That’s a big claim. So, let’s go through some of the basics to substantiate it a little:

  • Elastic Search stores/retrieves objects via a REST API. Convenient PUT, POST, GET, and DELETE APIs are provided that implement version checks (optionally, on PUT), generate ids (optionally, on POST), and allow you to read your own writes (on GET). This is what makes it a key value store.
  • Objects have a type and go in an index. So, from a REST point of view, the relative uri to any object is /{index}/{type}/{id}.
  • You create indices and types at runtime via a REST API. You can create, update, retrieve, and delete indices. You can configure sharding and replication on a per index basis. That means Elastic Search is multi-tenant and quite flexible.
  • Elastic Search indexes documents you store using either a dynamic mapping, or a mapping you provide (recommended). That means you can find back your documents via the search API as well.
  • This is where Lucene comes in. Unlike GET, search does not allow you to read your own writes immediately, because it takes time for indices to update and replicate, and doing this in bulk is more efficient.
  • The search API is exposed as a _search resource that is available at the server level (/_search), the index level (/{index}/_search), or the type level (/{index}/{type}/_search). So you can search across multiple indices, and, because Elastic Search is replicating and sharding, across multiple machines as well.
  • When returning search results, Elastic Search includes a _source field that by default contains the object associated with each hit. This means that querying is like doing a multi-get: expensive if your documents are big, your queries expensive, and your result sets large. In other words, you have to carefully manage how you query your dataset.
  • The search API supports the GET and POST methods. POST exists as a fallback for clients that don’t allow a json body as part of a GET request. The reason you need a body is that Elastic Search provides a domain specific language (json based, of course) for specifying complex queries. You can also use the Lucene query language via a q=query parameter in the GET request, but it’s a lot less powerful and only useful for simple stuff.
  • Elastic Search is clustered by default. If you start two nodes on the same network, they will find each other and form a cluster, without any configuration. So, out of the box, Elastic Search shards and replicates across whatever nodes are available in the network. You actually have to configure it not to do that (should you want that). Typically, you configure it differently for different environments. It’s very flexible.
  • Elastic Search is built for big deployments in e.g. Amazon AWS, Heroku, or your own data center. That means it comes with built in monitoring features, a pluggable architecture for adapting to different environments (e.g. discovery using AWS & on the fly backups & restores to/from S3), and a lot of other stuff you need to run in such environments. This is a nice contrast to Solr, which doesn’t do any of these things out of the box (or well, for that matter) without a lot of fiddling with poorly documented stuff. I speak from experience here.
  • Elastic Search is also designed to be developer friendly. Include it as a dependency in your pom file and start it programmatically, or simply take the tar ball and use the script to launch it. Spinning up an Elastic Search node as part of a unit test is fairly straightforward, and it behaves the same as a full blown cluster in e.g. AWS.
  • Configuration is mostly done at runtime. There are only a few settings you have to decide on before launching. The out of the box experience is sensible enough that you don’t have to worry about it during development.
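
To make the addressing scheme and query DSL above concrete, here is a small Python sketch. The host, index, and type names are made up for illustration, and no actual cluster is contacted; it only shows the shape of the URIs and of a minimal query body.

```python
import json

# Hypothetical host; /{index}/{type}/{id} is the relative uri of any object.
BASE = "http://localhost:9200"

def doc_uri(index, doc_type, doc_id):
    # PUT stores/overwrites, GET reads your own write back, DELETE removes.
    return f"{BASE}/{index}/{doc_type}/{doc_id}"

# Searches go to a _search resource at server, index, or type level.
search_uri = f"{BASE}/products/product/_search"

# Complex queries use the json query DSL; a minimal match query might look like:
query = {"query": {"match": {"name": "espresso"}}, "size": 10}

print(doc_uri("products", "product", "42"))
print(json.dumps(query))
```

The same query DSL body works at any of the three _search levels, which is what makes searching across indices (and machines) uniform.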

In summary: Elastic Search is a pretty damn good key value store with a lot of properties that make it highly desirable if you are looking for a scalable solution to store and query your json data without spending a lot of time and effort on configuration, monitoring, and figuring out how to cluster, shard, and replicate.

There are a few caveats of course:

  • It helps to understand the underlying data structures used by Lucene. Some things come cheap, some other things don’t. In the end it is computer science and not magic. That means certain laws of physics still apply.
  • Lucene is memory and IO intensive. That means things are generally a lot faster if everything fits into memory and if you have memory to spare for file caching. This is true for most storage solutions, btw. For example, with MySql you hit a brick wall once your indexes no longer fit in memory: insert times go through the roof and mixed inserts/selects become a nightmare scenario.
  • Mixed key value reads/writes combined with lots of expensive queries is going to require some tuning. You might want to specialize some of the nodes in your cluster for reads, for writes, and for querying load. You might want to think a bit about how many shards you need and how many replicas. You might want to think a bit about how you allocate memory to your nodes, and you might want to think a lot about which parts of your documents actually need to be indexed.
  • Elastic Search is not a transactional data store. If you need a transactional database, you might want to consider using one.
  • Elastic Search is a distributed, non transactional system. That means getting comfortable with the notion of eventual consistency.
  • Using Elastic Search like you would use a relational database is a very bad idea. In particular, you don’t want to do joins or update multiple objects in big transactions. Joins translate to doing multiple expensive queries and then doing some in-memory work to throw away most of the results, which just invalidated your internal caches. Doing many small interdependent writes is not a great idea either, since that tends to be a bit unpredictable in terms of which writes go in first and when they get reflected in your query results. Besides, you want to write in bulk with Lucene and avoid the overhead of doing many small writes.
  • Key value stores are all about de-normalizing and getting clever about how you query. It’s better to have big documents in one index than to have many small documents spread over several indices, because having many different indices probably means you have logic somewhere that fires off a shitload of queries to combine things from those indices. Exactly the kind of thing you shouldn’t be doing. If you start thinking about indices and types in database terms (i.e. tables), that is a good indicator that you are on the wrong track.
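
To make the de-normalization point concrete, here is a sketch (all field names and values are invented) of one fat order document that, in relational thinking, would be three joined tables:

```python
# Hypothetical denormalized order document: instead of separate "orders",
# "customers", and "order_lines" indices combined at query time, everything
# needed to display the order lives in one document. A single GET retrieves
# it all; no cross-index fan-out of queries.
order_doc = {
    "order_id": "o-1001",
    "customer": {"id": "c-7", "name": "Jane Doe", "city": "Berlin"},
    "lines": [
        {"sku": "espresso-beans", "qty": 2, "price": 9.95},
        {"sku": "filter-papers", "qty": 1, "price": 3.50},
    ],
}

def order_total(doc):
    # Derived values like totals can also be precomputed at write time,
    # so reads stay cheap.
    return round(sum(l["qty"] * l["price"] for l in doc["lines"]), 2)
```

The trade-off is that changing the embedded customer data means rewriting every order document that embeds it, which is acceptable for the mostly static data this post is concerned with.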

So, we’re going to use Elastic Search at our startup. We’re a small setup with modest needs for the foreseeable future. Those needs are easily met with a generic Elastic Search setup (bearing in mind the caveats listed above). Most of our data is going to be fairly static, and we like the idea of being able to scale our cluster without too much fuss from day 1.

It’s also a great match for our front end web application, which is built around the Backbone javascript framework. Backbone integrates well with REST APIs, and Elastic Search is a natural fit in terms of API semantics. This means we can keep our middleware very simple: mostly it just passes requests through to Elastic Search after doing a bit of authentication, authorization, and validation. All we have is CRUD and a few hand crafted queries for Elastic Search.



14 Replies to “Using Elastic Search as a Key Value store”

  1. Excellent article/post! I really did enjoy the deep dive on the possible use of ES as a k/v store. I tend to think ElasticSearch as a k/v store is valuable if you attach/associate document content indexing with it; it adds a strong value proposition if you need rich content search associated with k/v mapping. Lucene shines for large content search with reasonably static indexing (no heavy updates); it does not shine for small k/v lookups, IMO. Indexing of static content is fine, but you expose a true limitation when it comes to dynamic and shared content: with no locking, k/v consistency will not be achieved.

    a real NoSQL+ES would be a powerful combination.

    1. k/v consistency in a distributed system is always going to be hard. Not impossible, but hard. Google has been working on transactional distributed (as in globally distributed) systems and they seem to believe it is possible.

      Elastic Search actually does reasonably well on dynamic updates. It’s optimized for (near) real time search. Obviously, you will run into some hard limits here compared to bulk indexing content, which is where it really shines. But it’s still pretty good. Good enough that you can get away with using it as a key value store, at least. You can control how often it refreshes its indexes. By default this is every second, which might be too often during heavy write traffic. And you can add nodes to scale.

      Having a separate system for key value storage and an asynchronous system for indexing creates a lot of problems as well. For example, keeping the two in sync and detecting inconsistencies becomes a problem. It also complicates your infrastructure and adds to total cost.

    2. If you are looking for a true NoSQL DB that integrates automatically with ElasticSearch… it seems Couchbase has done just that.

      1. I don’t know what you mean by ‘true’ nosql :-). Both Couchbase and Elasticsearch are very capable key value stores.

        However, the point of this article is that you don’t really need couchbase since elastic search is a pretty decent key value store by itself. So, why complicate your architecture with an extra component, extra latency for replicating changes, and monitoring for ensuring the two are in sync?

  2. This was an interesting read — I’m using PostgreSQL and now integrating with ElasticSearch (for full text search), and I’m thinking that in the long run it could be easier with one database only (i.e. using only ElasticSearch), and it’d scale better, I suspect.

    How do you deal with race conditions and eventual consistency? For example, when you need to update more than one document at once. (Perhaps that’s too big a question though! I know it arises for other NoSQL stores as well.)

  3. You can use the bulk API to modify multiple documents. The whole cluster has a default refresh of one second, which is typically too long to wait for in a single transaction. This affects bulk update and searches but not get/put/delete of documents. So, you can read your own writes with GET but search might be inconsistent for a short time.

    The way to deal with it is to design around it and not rely on e.g. doing a search right after a GET. In any case, most javascript frontends are asynchronous these days so it is good to think asynchronous.
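
For those wondering what writing in bulk looks like: the _bulk endpoint takes a newline-delimited json body where each action line is followed by its document. A minimal sketch (index and type names are made up):

```python
import json

# Build a _bulk request body: one "index" action line per document,
# followed by the document source itself, all newline-delimited.
def bulk_body(index, doc_type, docs):
    lines = []
    for doc_id, doc in docs:
        lines.append(json.dumps(
            {"index": {"_index": index, "_type": doc_type, "_id": doc_id}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the bulk body must end with a newline

body = bulk_body("products", "product",
                 [("1", {"name": "espresso"}), ("2", {"name": "ristretto"})])
# POST this body to /_bulk: many writes in one round trip instead of
# many small requests.
```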

  4. How did you tune your number of indexes/shards/replicas for the key/value store workload? Did you do various benchmarking tests with different amounts of documents per index and different configurations of shards/replicas per index?

    1. Not on Elasticsearch, but we did extensive randomized concurrent read tests on Solr 3.x at the time, and it consistently outperformed the mysql setup it was replacing by being both faster and having less standard deviation in the response times. Solr of course has no shards, and we weren’t using Solr clustering at the time.

      I wouldn’t spend much time tuning Elasticsearch until it is obvious that you need to. Out of the box performance is fine, and so are the defaults. Most users won’t be planning a huge cluster initially (we run with 3 nodes), so it doesn’t make sense to change the default number of shards. Also, 1 replica is fine until you need to scale out. Otherwise, you should expect Elasticsearch 1.x to run circles around the Solr 3.x setup I benchmarked a long time ago. They’ve done so much optimization work in Lucene 4.x that it’s actually unfair to compare the two at all.

  5. I heard somewhere (elasticsearch training I think) that using around 255 shards per index is very fast if the key-value item sizes are small and similar. But as always, “it depends” and so would need benchmarking against workload/hardware/configuration specifics.

  6. Excellent post. One question: is it possible to use elasticsearch as a key/value storage engine where the value is the webservice response and the key is something like the relevant search parameters or the API call URL/XML in md5() format, to hit the cache when a web user requests the same information? Is that the correct way to create a key for elasticsearch retrieval, or would another approach be more suitable? Thank you.

    1. That should work as you describe. Using an md5 hash for the key means you can bypass querying when looking up, which will definitely be helpful. You should make sure to exclude whichever field you store the raw content in from indexing; that way you don’t waste time indexing potentially huge blobs. I’d also index a timestamp and maybe use rolling indices so I could easily delete older cache entries.
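
A minimal sketch of the commenter’s scheme (the URL and index layout are hypothetical):

```python
import hashlib

# Derive a stable document id from the full request URL (including the
# parameters that affect the response), so a cache hit is a direct GET
# by id rather than a query.
def cache_key(url):
    return hashlib.md5(url.encode("utf-8")).hexdigest()

key = cache_key("https://api.example.com/v1/weather?city=Berlin")
# PUT the response under e.g. /cache/entry/{key}; a repeated request
# computes the same key and GETs it directly.
```

As the reply notes, the field holding the raw response should be excluded from indexing in the mapping, and an indexed timestamp makes purging old entries easy.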
