Clojure, the REPL and test-driven development

I’ve been using Clojure for nearly a year now, and something strange has been happening… I still think unit-testing is extremely important, but for some reason I don’t seem to be writing the same number of tests any more. I’m ashamed to say it, but there it is. And it gets stranger – this new lower test count doesn’t seem to matter.

It seems to me that my Clojure code works right the first time more often than my Ruby or Java code ever did. And I seem to find fewer defects in the Clojure code over time, too.

This is not just a fanboy speaking, though I am a huge fan of Clojure. I think what I’m observing is due to an important characteristic of the language. Instead of just talking about it, let me first walk you through an example.

This is something I had to do recently – we wanted to build a kind of reverse index for an HBase table. The row ids of this table are time-stamps. The idea was that this “reverse index” would allow us to answer the question of what the first time-stamp for a given day was. In other words, we needed to convert a list of time-stamps into a lookup of day vs. the first time-stamp of that day. E.g.


["112323123" "1231231231" "123123123" "1231231123" ...]


{"2009-07-01" "123123123"
 "2009-07-02" "123131213"
 "2009-07-03" "123123122"}

(Note: I plucked the numbers out of the air, they aren’t accurate. But the idea is that the input is a long stream of timestamps, and possibly hundreds could correspond to each day.)
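(For concreteness, a conversion from a time-stamp to a day might look something like this. This is a sketch only – day-for-time-stamp isn’t shown here otherwise – and it assumes the time-stamps are epoch milliseconds held as strings:)

```clojure
;; Sketch only: assumes time-stamps are epoch milliseconds as strings.
(defn day-for-time-stamp [time-stamp]
  (.format (java.text.SimpleDateFormat. "yyyy-MM-dd")
           (java.util.Date. (Long/parseLong time-stamp))))
```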

So I get started… thinking to myself – I know how to convert a time-stamp to a day. From there, it’s easy to write a function that returns a hash-map of day vs. time-stamp (I already had a day-for-time-stamp function) –

(defn day-vs-timestamp [time-stamp]
  {(day-for-time-stamp time-stamp) time-stamp})

So now, all I have to do is map the above function across the input. This gives me a list of hash-maps, each with one key-value pair. To ensure that I’m doing this in order of oldest first, I sort the input as well. Inside of a let form, all of this looks like –

(let [all-pairs (map day-vs-timestamp (sort input-list))]

Now, I have this list of hashes, each with one key (the day) and one corresponding value (the time-stamp itself). I want to combine these into one single hash-map which would be the final answer. But I have to deal with the issue of duplicate keys – when I find a duplicate key, I want to keep the first value associated with the key since it would be the oldest.

Clojure has a merge-with function which does just this – it accepts a function of 2 arguments (the two values, in case a duplicate key is found), and its return value is used in the merged hash-map.

(apply merge-with #(first [%1 %2]) all-pairs)

That’s basically it.

Combining everything –

(defn day-vs-timestamp [time-stamp]
  {(day-for-time-stamp time-stamp) time-stamp})

(defn lookup-table [input-timestamps]
  (let [all-pairs (map day-vs-timestamp (sort input-timestamps))]
    (apply merge-with #(first [%1 %2]) all-pairs)))
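At the REPL, the duplicate-key behavior of merge-with is easy to check directly (the time-stamps here are made up, as before):

```clojure
;; merge-with calls the supplied fn only when a key collides;
;; #(first [%1 %2]) keeps the first (i.e. oldest) value.
(merge-with #(first [%1 %2])
            {"2009-07-01" "100"}
            {"2009-07-01" "200"}
            {"2009-07-02" "300"})
;; => {"2009-07-01" "100", "2009-07-02" "300"}
```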

When I write code like this – I often ask myself, what exactly should I test? I end up writing a few happy path tests that prove my code works. And then a couple of tests that test border cases and negative paths. And I sometimes do it test first.

But the REPL has spoilt me. What I used TDD for when coding with Ruby (and still do), I often do at the REPL. I build tiny functions that work – these are often single lines of code. Then I combine these into other functions, often no more than two lines of code each, sometimes three. And it all just works – leaving me wondering what to cover with tests.
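A typical session builds a pipeline like the one above one expression at a time, each piece tried interactively before being composed (toy data here):

```clojure
;; Each expression is verified at the REPL before being combined:
(sort ["3" "1" "2"])
;; => ("1" "2" "3")
(map #(hash-map (count %) %) ["a" "bb"])
;; => ({1 "a"} {2 "bb"})
(apply merge-with #(first [%1 %2]) [{:a 1} {:a 2} {:b 3}])
;; => {:a 1, :b 3}
```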

The main reason I still write tests is for regression – if something breaks in the future, I catch it quickly. However, the other thing – the test *driven* design aspect of TDD – has been somewhat replaced by the REPL. And it’s much more dynamic than a set of static tests. It really brings out the “rapid” in rapid application development – especially when combined with Emacs and SLIME.

One main difference with Clojure vs. Ruby (say) is that Clojure is functional (I use very little of Clojure’s constructs for state). And in the functional world, I just don’t have to worry about state (obviously), and this tremendously simplifies code. I think in terms of map, filter, reduce, some, every, merge, etc. and the actual logic is in tiny functions used from within these other higher level constructs. The idea of first-class functions is also key – I can build up the business logic by writing small functions that do a tiny thing each – and combine them using higher-order functions.
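As a tiny illustration of that style (the domain here is made up), the logic lives in small pure functions and the higher-order constructs do the plumbing:

```clojure
;; Tiny, single-purpose, pure functions...
(defn with-tax [price] (* price 108/100))   ;; ratios keep the math exact
(defn expensive? [price] (> price 100))

;; ...combined via higher-order functions; no state anywhere.
(defn total-of-expensive-items [prices]
  (reduce + (filter expensive? (map with-tax prices))))

(total-of-expensive-items [50 100 200])
;; => 324
```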

This is one reason why we’re so productive with Clojure. We’ve moved to Clojure for 90% of our work. That said, we still use Ruby for parts of our code-base, and it’s still my favorite imperative language 🙂

Startup logbook – v0.2 – distributed Clojure system in production

This past weekend, we pushed another major release into production. We’ve been working on several things and have made a few pushes since the last time I wrote about this – but this release has a bunch of interesting Clojure related stuff.

Long-running processes

The main thing of note is that the majority of our back-end is now written in Clojure. You might recall that our online-merchant customers send us a lot of data, and we run a ton of analytics on that data. Our initial plans involved Ruby, but as we started using Clojure, it turned out to be very well suited for this job as well (long-running, message-driven processes that crunch numbers).

The raw data sits in HBase, and every night a “master” process starts up which kicks off the processing of the previous day’s worth of data. The job of this master is only to coordinate the work (it doesn’t actually do any real work itself); it does this by breaking the work into chunks and dispatching messages, each of which assigns a chunk to whichever worker process picks it up. The master is single-threaded for simplicity, but failure-tolerant – it checkpoints everything in a local MySQL database, and if it crashes, it is automatically re-spawned and recovers from where it left off.


An elastic cloud of worker processes runs in anticipation of the master handing out this work. The worker processes use the MySQL database to keep track of their progress as well. The rest is rather domain-specific. We use intermediate representations of the raw data, also stored in HBase, before finally storing the summarized version in HBase again.
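In spirit, the master’s loop is tiny. The sketch below is purely illustrative – dispatch-message! is a made-up stand-in for the real messaging call, and an atom stands in for the MySQL checkpoint table:

```clojure
;; Illustrative only: an atom stands in for the MySQL checkpoint table,
;; and dispatch-message! for the real message dispatch.
(def checkpoint-store (atom 0))

(defn dispatch-message! [chunk]
  (println "dispatching" chunk))

(defn run-master [chunks]
  ;; on a re-spawn after a crash, already-checkpointed chunks are skipped
  (doseq [chunk (drop @checkpoint-store chunks)]
    (dispatch-message! chunk)          ;; any free worker picks this up
    (swap! checkpoint-store inc)))     ;; record progress
```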


We use an in-house distributed-programming framework called Swarmiji to make such distributed programs very easy to write and run. Swarmiji implements a flavor of staged event-driven architecture (SEDA) to allow server processes that exhibit scalable, predictable throughput. This is especially true in the face of overload, which we can certainly expect in our environment.

The reason I wrote this framework was that I wanted to create distributed, parallel programs which exploited large numbers of machines (like in a data-center) – without being limited by clojure’s in-JVM-threads-based model. So each worker process in Swarmiji gets deployed as a shared-nothing JVM process.

I will write up a post introducing Swarmiji in the next few weeks – once it’s a bit more battle-tested, and I’ve added a few more features (mainly around process management).

The first meetup of the Bay Area Clojure User Group

We held the first meeting of the BACUG (ugh, that’s a tedious acronym) last week, on Thursday the 5th of February. I was expecting about 6-8 people to show up, but we had about 12 – quite a nice turnout!

After introductions and such, Chris Turner presented his unit-testing library for clojure called Clojure Spec. We talked about testing in general, and because he uses a lot of macro-writing macros in his code, we talked about macros in general and the difficulty in debugging macros. It was a good discussion – though I did throw in (tongue-in-cheek) that we’re using Test Is at Cinch. Thanks Chris, it was a good presentation.

After that talk, I gave an introduction to how we’re presently using Clojure in our startup. Since we’re using HBase as our persistent data-store, and processing that data using a bunch of clojure processes, we talked about data-modeling for HBase-like systems (I will write about my implementation of that data-model in another post).

An interesting thread came up – led by Joe Mikhail who works at Google and obviously does a lot of Map/Reduce/BigTable stuff – around how at least in the beginning, multi-threaded clojure processes can be used in place of Hadoop when processing HBase data. And he did mention that using systems like Terracotta, one can scale up such solutions. We’re going to look into that next week.

The remainder of the meeting went by in a buzz of talking about different languages, technologies, and general geek topics. The one thing of interest here was the point that software transactional memory is no panacea (surprise!) to the whole concurrency thing. Here’s an ACM article (there are several actually, just keep turning the pages) that throws some light on the issue. Here’s another.

Overall, it was a very good meeting – we’re hoping the next one will be attended by more people, especially some folks who have used other Lisps in the past – we’re all curious what such folks think about Clojure, and we want to learn from their experience in using idiomatic Lisp.

I’m thinking that we’ll do another Clojure meetup in about 6 weeks or so… join up, and stay tuned!

P.S. – A shout of thanks to my employer Runa for hosting this meeting.

Startup logbook – v0.1 – Clojure in production

Late last night, around 3 AM to be exact, we made one of our usual releases to production (unfortunately, we’re still in semi-stealth mode, so I’m not talking a lot about our product yet – I will in a few weeks). There was nothing particularly remarkable about the release from a business point of view. It had a couple of enhancements to functionality, and a couple of bug-fixes.

What was rather interesting, at least to the geek in me, was that it was something of a technology milestone for us. It was the first production release of a new architecture that I’d been working on over the past 2-3 weeks. Our service contains several pieces, and until this release we had a rather traditional architecture – we were using ruby on rails for all our back-end logic (e.g. real-time analytics and pricing calculations) as well as the user-facing website. And we were using mysql for our storage requirements.

This release has seen our production system undergo some changes. Going forward, the rails portion will continue to do what it was originally designed for – supporting a user-facing web-UI. The run-time service is now a combination of rails and a cluster of clojure processes.

When data needs to be collected (for further analytics down the line), the rails application simply drops JSON messages on a queue (we’re using the excellent erlang-based RabbitMQ), and one of a cluster of clojure processes picks it up, processes it, and stores it in an HBase store. Since each message can result in several actions that need to be performed (and these are mostly independent), clojure’s safe concurrency helps a lot. And since it’s a Lisp, the code is just so much shorter than equivalent ruby could ever be.
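The consumer side can be pictured roughly like this – the functions here are hypothetical stand-ins, not the real RabbitMQ or HBase calls:

```clojure
;; Hypothetical stand-ins: store-in-hbase! and actions-for are made up
;; to show the shape of a consumer, not a real API.
(defn store-in-hbase! [action]
  (println "storing" action))

(defn actions-for [msg]
  ;; each message typically expands into several independent actions
  [[:update-stats msg] [:update-pricing msg]])

(defn handle-message [msg]
  (doseq [action (actions-for msg)]
    (store-in-hbase! action)))
```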

Currently, all business rules, analytics, and pricing calculations are still being handled by the ruby/rails code. Over the next few releases we’re looking to move away from this – to instead let the clojure processes do most of the heavy lifting.

We’re hoping we can continue to do this in a highly incremental fashion, as the risk of trying to get this perfect the first time is too high. We absolutely need to get the feedback that only production can give us – so we’re more sure that we’re building the thing right.

The last few days have been the most fun I’ve had in any job so far. Besides learning clojure, and hadoop/hbase pretty much at the same time (and getting paid for doing that!), it has also been a great opportunity to do this as incrementally as possible. I strongly believe in set-based engineering methods, and this is the approach I took with this as well – currently, we haven’t turned off the ruby/rails/mysql system – it is doing essentially the same thing that the new clojure/hbase system is doing. We’re looking to build the rest of the system out (incrementally), ensure it works (and repeat until it does) – before turning off the (nearly legacy) ruby system.

I’ll keep posting as I progress on this front. Overall, we’re very excited at the possibilities that using clojure represents – and hey, if it turns out to be a mistake – we’ll throw it out instead.

More on Lisp syntax, and language extensions

Following my recent post on the topic, I thought of one more thing that the syntax of Lisp allows you to do. Because the language is homoiconic, and code manipulation is so simple (it’s all lists), layering on “language extensions” becomes possible. For example, if Betty Programmer realizes that OO is a great way to design and write code, but that Lisp by itself doesn’t provide an OO facility (there are no “class” constructs, no inheritance, etc.) – she doesn’t need to despair.

She can write code to add an OOP system to the language. Yes, this means Lisp really blurs the distinction between the language designer and the programmer. In other words, while it’s fairly obvious that Lisp is very well suited to writing DSLs, it is also possible to fundamentally extend the language as well – like adding an OO system, or pattern-matching, or logic-programming (ala Prolog).
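Even a toy flavor of this is just a macro away in Clojure. The defclass below is deliberately simplistic and purely illustrative – a real OO layer (like CLOS) is vastly more involved – but it shows code generating code:

```clojure
;; A toy "class" macro: all it generates is a constructor that builds
;; a map from the declared fields. Illustrative only.
(defmacro defclass [name fields]
  `(defn ~(symbol (str "new-" name)) [~@fields]
     ~(zipmap (map keyword fields) fields)))

(defclass point [x y])

(new-point 3 4)
;; => {:x 3, :y 4}
```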

Now, obviously, I’m not proficient enough yet to do anything of this sort. But, as I said before, it is my intention to learn 🙂

Lisp. A language where being meta is something worth thinking about.

Lisp syntax, and when code is data

Like I said earlier, my friend Ravi introduced me to Lisp several years ago, but it has taken me many years to really want to learn it well enough. I’ll write about my reasons in another post. In any event, at the beginning of this year, I started to pick it up again, promising myself that I’d be serious. This time. So far so good.

I think I’ve started to grok one of the core ideas of Lisp. I had always read that the syntax of Lisp was one of its strengths. And I had always struggled with that idea, knowing it was important, yet was quite unable to really put my finger on it. I think I’m closer to it today.

If you had to create a programming language to write programs that wrote programs (as in, say DSLs) – what design choices would you make?

For one thing, you’d have to be able to generate and manipulate (walk parse-trees, compare and transform nodes etc.) code as though it were just another data-structure. Right? OK, so the code that was being generated would look like and behave like data.

You would then create an EVAL function that could run the generated code. Maybe your generated code would in turn produce generated code, so to keep things easy and simple, your language syntax would be the same as that of the generated language. In other words, you’d end up with a homoiconic programming language. Finally, you would bootstrap your language processor and arrive at your final metacircular evaluator.
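In Clojure, this whole chain is directly observable – a quoted expression is just a list, which can be taken apart, rewritten, and handed to eval:

```clojure
(def code '(+ 10 2))          ;; code, held as plain data (a list)
(first code)                  ;; => + (just a symbol at the head of a list)
(eval code)                   ;; => 12
(eval (cons '* (rest code)))  ;; rewrite the "parse tree", then run it
;; => 20
```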

To recap, this language would have syntax that looked and behaved like data, and because of that it could generate and manipulate that data – which itself could be code. What would this data structure look like? One obvious choice is a tree (because of parse-trees). If you think about it, XML is just like a tree. But it’s kludgey. What we want is something like XML but without all the cruft. For example –


<program>
  <function name="add_to_stock">
    <param name="counter"/>
    <call_function name="increment">
      <argument value="counter"/>
    </call_function>
  </function>

  <function name="remove_from_stock">
    <param name="item"/>
    <call_function name="decrement_from_stock_file">
      <argument value="item"/>
    </call_function>
  </function>
</program>
The syntax is truly disgusting, but useful – especially if you need to programmatically generate it. Let’s now try to make it easier for humans, too. I’m going to remove the ‘program’ tag, because all this stuff is code. I’m going to then change from XML tags to simple ‘(‘ and ‘)’ without the names – and make an assumption – the first word that appears is always a function call. Except for define – which I’ll use to denote a definition for a function. I’ll also lose the XML attribute names, assuming that words that follow the function name are always parameters (unless it’s a code block itself – which would get evaluated first). So, we’re left with –

(define (add_to_stock counter)
    (increment counter))

(define (remove_from_stock item)
    (decrement_from_stock_file item))

Where does this leave us?

It carries the same exact information as the XML – just slightly modified, with a few rules thrown in. Importantly, it’s still as easy to generate as XML. It’s just a list of lists of words. As in, a unit of code in this format would always start and end with parentheses, and they would enclose either zero or more symbols, or other lists.

In fact, a language that was good at list processing and had an eval function would probably do a really good job with this stuff!
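Clojure is exactly such a language, and it will happily treat the stock functions above as data:

```clojure
;; The s-expression form above is just nested lists...
(def fn-def '(define (add_to_stock counter) (increment counter)))

;; ...so ordinary list functions can inspect and rewrite it:
(second fn-def)           ;; => (add_to_stock counter)
(first (second fn-def))   ;; => add_to_stock (the function being defined)
```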