Why Datomic?

Cross-posted from Zololabs.

Many of you know we’re using Datomic for all our storage needs for Zolodeck. It’s an extremely new database (not even version 1.0 yet), and it’s not open-source. So why would we want to base our startup on something like it, especially when we have to pay for it? I’ve been asked this question a number of times, so I figured I’d blog about my reasons:

  • I’m an unabashed fan of Clojure and Rich Hickey
  • I’ve always believed that databases (and the insane number of optimization options) could be simpler
  • We get basically unlimited read scalability (by upping read throughput in Amazon DynamoDB)
  • Automatic built-in caching (no more code to use memcached; the database becomes effectively local)
  • Datalog as the query language (declarative logic programming, with no explicit joins; see the first sketch after this list)
  • Datalog is extensible through user-defined functions
  • Full-text search (via Lucene) is built right in
  • Query engine on client-side, so no danger from long-running or computation-heavy queries
  • Immutable data – every version of everything is kept automatically, giving a built-in audit trail
  • “As of” queries and “time-window” queries are possible (sketched after this list)
  • Minimal schema (think RDF triples, except Datomic tuples also include the notion of time; see the schema sketch below)
  • Supports cardinality out of the box (has-many or has-one)
  • These reference relationships are bi-directional, so you can traverse the relationship graph in either direction
  • Transactions are first-class (they can be queried or “subscribed to” for db-event-driven designs; sketched below)
  • Transactions can be annotated (with custom meta-data)
  • Elastic 
  • Write scaling without sharding (hundreds of thousands of facts/tuples per second)
  • Supports “speculative” transactions that don’t actually persist to the datastore (sketched below)
  • Out-of-the-box support for an in-memory version (great for unit-testing)
  • All this, and not even v1.0
  • It’s a particularly good fit with Clojure (and with Storm)
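
To make a few of these points concrete, here are some quick sketches against Datomic’s peer API. They assume an open connection `conn` and hypothetical attributes like `:person/name` and `:person/friend`; treat them as illustrations, not production code. First, Datalog itself: implicit joins, user-supplied predicates, and built-in full-text search.

```clojure
(require '[datomic.api :as d])

;; Datalog query: find the names of everyone Jack knows. The join
;; between the clauses is implicit; there is no JOIN syntax anywhere.
(d/q '[:find ?friend-name
       :where
       [?p :person/name "Jack"]
       [?p :person/friend ?f]
       [?f :person/name ?friend-name]]
     (d/db conn))

;; Extensible: ordinary predicates (including your own functions)
;; can appear in :where clauses to filter results.
(d/q '[:find ?name
       :where
       [?p :person/name ?name]
       [?p :person/age ?age]
       [(>= ?age 18)]]
     (d/db conn))

;; Built-in full-text search via Lucene, for attributes declared
;; with :db/fulltext true.
(d/q '[:find ?e ?bio
       :where
       [(fulltext $ :person/bio "climbing") [[?e ?bio]]]]
     (d/db conn))
```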
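Time-travel reads look like this; `t1` is a placeholder for any point in time (a transaction t-value or a `java.util.Date`):

```clojure
;; A placeholder point in time for the sketch.
(def t1 #inst "2013-01-01")

(let [db (d/db conn)]
  ;; "As of" query: run any query against the database value
  ;; exactly as it existed at time t1.
  (d/q '[:find ?name :where [_ :person/name ?name]]
       (d/as-of db t1))
  ;; "Time-window" query: see only facts asserted since t1.
  (d/q '[:find ?name :where [_ :person/name ?name]]
       (d/since db t1)))
```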
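The schema is minimal and is itself just data: an attribute is an ident, a value type, and a cardinality. A sketch, assuming a recent Datomic version (older releases also required an explicit `:db/id` and `:db.install/_attribute`):

```clojure
;; Each attribute declares its ident, value type, and cardinality
;; (has-one vs has-many, straight out of the box).
@(d/transact conn
   [{:db/ident       :person/name
     :db/valueType   :db.type/string
     :db/cardinality :db.cardinality/one
     :db/unique      :db.unique/identity}  ; lets us look people up by name
    {:db/ident       :person/friend        ; a has-many reference
     :db/valueType   :db.type/ref
     :db/cardinality :db.cardinality/many}])

;; Reference attributes are bi-directional: follow :person/friend
;; forward, or :person/_friend in reverse, from the same entity.
(let [jack (d/entity (d/db conn) [:person/name "Jack"])]
  (:person/friend jack)     ; who Jack points to
  (:person/_friend jack))   ; who points to Jack
```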
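Transactions are entities too, so they can carry custom annotations, and peers can subscribe to a queue of transaction reports (`:audit/user` is a made-up attribute here):

```clojure
;; Subscribe to transaction reports for db-event-driven designs;
;; this returns a java.util.concurrent BlockingQueue.
(def tx-queue (d/tx-report-queue conn))

;; Annotate a transaction by asserting extra facts on the
;; transaction entity itself.
@(d/transact conn
   [{:db/id (d/tempid :db.part/tx)     ; the transaction entity
     :audit/user "rob"}
    {:db/id (d/tempid :db.part/user)
     :person/name "Jill"}])

(.take tx-queue)  ; blocks until the next transaction report arrives
```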
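And the in-memory version plus speculative transactions, which pair nicely for unit tests:

```clojure
;; An in-memory database: nothing ever touches durable storage.
(def uri "datomic:mem://scratch")
(d/create-database uri)
(def conn (d/connect uri))

;; A speculative transaction: d/with applies tx-data to a database
;; value and returns the would-be result without persisting anything.
(let [{db-after :db-after}
      (d/with (d/db conn)
              [{:db/id (d/tempid :db.part/user)
                :person/name "Speculative Jack"}])]
  ;; query db-after as if the transaction had actually happened
  (d/q '[:find ?n :where [_ :person/name ?n]] db-after))
```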

This is a long list, but perhaps begins to explain why Datomic is such an amazing step forward. Ping me with questions if you have ’em! And as far as the last point goes, I’ve talked about our technology choices and how they fit in with each other at the Strange Loop conference last year. Here’s a video of that talk.

Make it right, then make it fast

Cross-posted to Zolo Labs

Alan Perlis once said: “A Lisp programmer knows the value of everything, but the cost of nothing.”

I re-discovered this maxim this past week. 

As many of you may know, we’re using Clojure, Datomic, and Storm to build Zolodeck. (I’ve described my ideal tech stack here). I’m quite excited about the leverage these technologies can provide. And I’m a big believer in getting something to work whichever way I can, as fast as I can, and then worrying about performance and so on. I never want to fall prey to the evil of premature optimization and all that… In fact, on this project, I keep telling my colleague (and everyone else who’ll listen) how awesome (and fast) Datomic is, and how its built-in cache will let us stop worrying about database calls.

A function I wrote (that does some fairly involved computation over relationship graphs and so on) was taking 910 seconds to complete. Yes, more than 15 minutes. Of course, I immediately suspected the database calls, thinking my enthusiasm was somehow misplaced or that I didn’t really understand the costs. As it turned out, Datomic is plenty fast. It was my algorithm that was naive and basically sucked… I had knowingly glossed over a lot of functions that weren’t exactly performant, and when they were called within an intensive set of tight loops, the costs added up fast.
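The post doesn’t spell out the eventual fix, but the shape of the problem (an expensive, pure function called over and over inside tight loops) is easy to illustrate in Clojure. A purely hypothetical sketch, not the actual Zolodeck code; `relationship-strength` and its cost are made up:

```clojure
;; Hypothetical: an expensive, pure scoring function called from a
;; tight loop over many pairs of people.
(defn relationship-strength [person-a person-b]
  ;; ... some fairly involved graph computation ...
  (Thread/sleep 10)  ; stand-in for real work (~10 ms per call)
  42)

;; The loop multiplies the cost: 100 people is ~10,000 pairs, and
;; 10 ms per call adds up to ~100 seconds.

;; One cheap fix when the same inputs recur: cache the results.
(def relationship-strength* (memoize relationship-strength))
```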

After profiling with YourKit, I was able to bring the time down to about 900 ms. At nearly a second, this is still quite an expensive call, but certainly less so than when it was ~1000x slower.

I relearnt that tools are great and can help in many ways, just not in making up for my stupidity 🙂