The State of Simulation Testing - 2015 Results

Well, it’s been about a month since I initiated the State of Simulation Testing - 2015 survey, and the results are in. In total, 29 people responded, and their answers were quite informative about the state of simulation testing today.

Since the number of respondents was on the low side, I think the most appropriate summary is a qualitative one. For raw data, I encourage you to peruse the raw responses (names and companies redacted).

General Demographics

In the survey, I asked a number of demographic questions. There weren’t many surprises, but it won’t hurt to go over the responses:

In what capacity do you work?

Capacity Count Percent
At a company 19 66%
At a consulting company 5 17%
Self-employed 5 17%

What industry do you/your clients work in?

Industry Count Percent
E-Commerce 7 23%
Financial Services 6 20%
Healthcare 3 10%
Professional Services 2 7%
Apartment Rentals 1 3%
Automotive 1 3%
Charity 1 3%
Education 1 3%
Business-process Outsourcing 1 3%
IT / Data analytics 1 3%
Media applications 1 3%
Mobile 1 3%
Music 1 3%
Online entertainment 1 3%
Software Development 1 3%
Technology 1 3%

A lot of E-Commerce and Financial applications, as I had expected, but I was surprised by the myriad of other places people are applying the technique.

How big is your company?

Size Count Percent
1-5 employees 10 34%
6-25 employees 7 24%
25-100 employees 4 14%
100-500 employees 2 7%
500+ employees 6 21%

Results here were a surprise to me. I had expected predominantly large organizations, but in fact the most common response was “1-5 employees”.

How would you categorize your company?

Category Count Percent
Startup 13 45%
Small-to-medium Sized Business 11 38%
Large Enterprise 5 17%

A little unexpected. I had thought large enterprise might be the biggest category, but in fact companies of all sorts are using simulation testing.

What is at risk if your systems fail to operate correctly?

At risk Count Percent
We stand to lose customers 14 48%
We stand to lose revenue 9 31%
We could face legal action 1 3%
We stand to lose developer/business time 3 10%
There would be no substantial impact 1 3%
All of the above 1 3%

As I expected, losing customers/revenue occupies the majority of the responses (nearly 80%). Elaborations ranged from direct revenue loss, to less direct, intangible loss. Some interesting quotes:

  • “We are a self-coaching tool […] if we’re down or buggy [then] folks [will] lose confidence in our tool and associate that with our approach and concept.”
  • “If there was any more than a 5 minute outage, we could lose a large contract and subsequently a large portion of our business.”
  • “Our customers' inventory flow would be incorrect, either too much or not enough inventory moving between trading partners.”

Other General Facts

  • 72% of all systems under test were Clojure systems.
  • Most respondents learned about simulation testing via Cognitect’s efforts (not surprising).
  • Most companies were introduced to simulation testing by a developer or team lead.
  • Nearly every company considering simulation testing performs at least unit testing, if not integration testing and other internal QA. Not a big surprise, given simulation testing is a somewhat more rigorous form of testing. I wouldn’t expect a company that does no testing at all to consider going with simulation testing.

Who’s using it?

Of the 29 responses, 21 have considered simulation testing (positively), 5 have implemented one, and 1 considered but rejected it. 2 respondents had not yet considered simulation testing internally.

Why yes?

The 5 respondents who already have simulation tests were some of the most serious customers. In fact, one respondent I happen to know personally would almost certainly make national (possibly international) news if they had a serious failure.

These respondents are also different in that they’re generally larger (25-500 employees), and generally classified their risks as either loss of customers or revenue. Interestingly, only one of these customers uses Simulant (and the one that does, uses a fork of it). I can only presume this is because a) “simulation testing” as a concept is not entirely novel, and b) because these established businesses needed to mitigate risks within their system before the advent of Simulant.

Why no?

The single respondent whose company decided against simulation testing did so primarily because it “seemed like a lot of investment to try the approach.” This foreshadows “What Could Be Better”: I can only presume the decision came down to the lack of maturity around tooling and the lack of success stories.

Why maybe?

The most common thread through respondents positively considering simulation testing was the existence of mission critical systems that were complicated and hard to test. Less common, but still prevalent was a general curiosity about the technique.

The most interesting response to me, by far, was from a respondent who reported their system is an event-sourced system, whose event log they could (relatively) trivially re-feed into a simulation test. I want to work on that system!

Not Considered

Only 2 respondents had yet to consider simulation testing. These two companies were both on the small side (< 25), and noted a lack of familiarity with the technique/benefits.

What could be better?

Rounding out the survey, I asked two general questions:

  • What could make simulation testing easier to implement or use, and
  • Do you have any general comments regarding simulation testing.

The responses to these questions are probably the most informative. Through nearly every response ran one common theme: we need more resources. Being as obscure a technique as it is, simulation testing lacks a lot of things techniques like TDD have going for them. This is supported by responses to “How would you rate your team’s skills…”, with 75% of respondents familiar with only the basics of simulation testing, or not at all.

Respondents identified a lack of:

  • A body of knowledge – think blog articles, tutorials, videos, books, etc.
  • Helpful tools – think libraries, templates, and well-known best practices.
  • Public success stories – A tool in some ways. Let me explain: a few respondents noted the difficulty of making a business-case for simulation testing, primarily based on the lack of public success stories. It’s one thing to sell a fancy new technique on its own merits, it’s another thing entirely to do it without any big companies putting their name behind it.


In general, I’m really happy with how the survey turned out. I had been prepared for maybe 5-10 results, but 29? That’s actually a workable number of responses. I’m also really happy with the depth of the responses I received–nearly everyone elaborated on why they made certain choices and how simulation testing could be better.

Going forward this year, simulation testing is going to be my primary focus. In fact, I’m building Homegrown Labs (my company) around it. If you want to learn more about simulation testing, I encourage you to sign up for my mailing list below. In the near future, I’ll be blogging about techniques, open-sourcing useful tools, and talking about how you can convince your boss to let you take the plunge into simulation testing.


The State of Simulation Testing - 2015

As you probably know, there are always lots of new and exciting developments in Clojure-land. One more recent technique that has me really excited is Simulation Testing. I’ve used Simulant to implement simulation tests on a few projects this last year with great success, and I’m really curious to hear about if/how others are using the technique.

To that end, I’ve put together a survey on sim-testing–The State of Simulation Testing–that I hope can become an annual fixture both inside and outside of the Clojure community. Once the ballots have closed on February 20th, I’ll tally and report upon the results on this blog.

If you’ve considered, implemented or even out-right rejected simulation testing, I’d love to hear more from you:

Take the Survey!

Thanks for your time!


In-Memory SQLite Database In Clojure

Recently, I’ve been working with a SQLite database using Clojure. In this post, I’d like to share what I learned from that challenge.

SQLite is a great tool used almost everywhere. Browsers and mobile devices use it a lot. A SQLite database is represented by a single file that makes it quite easy to share, backup and distribute. It supports most of the production-level databases’ features like triggers, recursive queries and so on.

In addition, SQLite has a killer feature: it can run completely in memory. So instead of keeping an atom of nested maps, why not store some temporary data in well-organized tables?

The documentation says JDBC supports SQLite as well; you only need to install a driver. But then I ran into a problem dealing with an in-memory database:

(def spec
  {:classname   "org.sqlite.JDBC"
   :subprotocol "sqlite"
   :subname     ":memory:"})

(jdbc/execute! spec "create table users (id integer)")
(jdbc/query spec "select * from users")

> SQLiteException [SQLITE_ERROR] SQL error or missing database
> (no such table: users)  org.sqlite.core.DB.newSQLException (

What the… I’ve just created a table, why can’t you find it? Interestingly, if I set a proper file name for the :subname field, everything works fine. But I needed an in-memory database, not a file.

After some hours of googling and reading the code, I found the solution.

The thing is, JDBC does not track DB connections by default. Every time you call a (jdbc/...) function, you create a new connection, perform an operation, and close it. For persistent data stores like Postgres or MySQL, that’s fine, although not efficient (in our project, we use HikariCP to keep a pool of open connections).

But for an in-memory SQLite database, closing the connection wipes the data completely out of RAM. So you need to manage the connection more carefully: create it yourself and close it only when the work is done.

First, let’s setup your project:

:dependencies [...
               [org.xerial/sqlite-jdbc "3.20.0"]
               [org.clojure/java.jdbc "0.7.0"]
               [com.layerware/hugsql "0.4.7"]
               [mount "0.1.11"]]

and the database module:

(ns project.db
  (:require [mount.core :as mount]
            [hugsql.core :as hugsql]
            [clojure.java.jdbc :as jdbc]))

Declare a database URI as follows:

(def db-uri "jdbc:sqlite::memory:")

Our database has two states: set up and ready for work, or not. To manage that state, let’s use the mount library:

(declare db)

(defn on-start []
  (let [spec {:connection-uri db-uri}
        conn (jdbc/get-connection spec)]
    (assoc spec :connection conn)))

(defn on-stop []
  (-> db :connection .close))

(mount/defstate
  ^{:on-reload :noop}
  db
  :start (on-start)
  :stop (on-stop))

Once you call (mount/start #'db), it becomes a map with the following fields:

{:connection-uri "jdbc:sqlite::memory:"
 :connection <SomeJavaConnectionObject at 0x0...>}

When any JDBC function or method accepts that map, it checks for the :connection field. If it’s filled, JDBC uses that connection. If it’s not, a new connection is opened. In my case, every execute/query call created a new in-memory database and dropped it right after the call ended. That’s why the second query could not find the users table: it was performed against a different database.
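A side note: if you only need the in-memory database for a bounded scope rather than the whole application lifetime, clojure.java.jdbc’s with-db-connection macro is a lighter alternative; it opens one connection and threads it through every call in its body, so all the queries see the same database (a sketch, assuming the same SQLite driver as above):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; One connection for the whole body, so every call sees the same
;; in-memory database; it is closed (and the data dropped) on exit.
(jdbc/with-db-connection [conn {:connection-uri "jdbc:sqlite::memory:"}]
  (jdbc/execute! conn "create table users (id integer, name text)")
  (jdbc/insert! conn :users {:id 1 :name "Ivan"})
  (jdbc/query conn "select * from users"))
;; => ({:id 1, :name "Ivan"})
```

The mount approach remains the better fit when many parts of the program need the same live database over time.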

Now with the db started, you are welcome to perform all the standard jdbc operations:

(jdbc/execute! db "create table users (id integer, name text)")
(jdbc/insert! db :users {:id 1 :name "Ivan"})
(jdbc/get-by-id db :users 1) ;; {:id 1 :name "Ivan"}
(jdbc/find-by-keys db :users {:name "Ivan"}) ;; ({:id 1 :name "Ivan"})

Finally, you stop the db by calling (mount/stop #'db). The connection closes, and the data disappears completely.

For more complicated queries with joins, the HugSQL library is a good choice. Create a file queries.sql in your resources folder. Say you want to write a complex query that filters a result by some values that may not be set. Here is an example of what you could put into the queries.sql file:

-- :name get-user-visits :? :*
select v.*
from visits v
join locations l on v.location = l.id
where
    v.user = :user_id
    /*~ (when (:fromDate params) */
    and v.visited_at > :fromDate
    /*~ ) ~*/
    /*~ (when (:toDate params) */
    and v.visited_at < :toDate
    /*~ ) ~*/
    /*~ (when (:toDistance params) */
    and l.distance < :toDistance
    /*~ ) ~*/
    /*~ (when (:country params) */
    and l.country = :country
    /*~ ) ~*/
order by v.visited_at

In your database module, put the following on the top level:

(hugsql/def-db-fns "queries.sql")

Now, every SQL template has become a plain Clojure function that takes a database and a map of additional parameters. To get all the visits in our application, we do:

(get-user-visits db {:user_id 1 :fromDate 123456789 :country "SomePlace"})
> a seq of maps...

Hope this article helps those who’ve gotten stuck with SQLite.


Reasoning About Code

A lot of people talk about "reasoning about code". I certainly do. It's something that I don't think I ever heard when I was an OO programmer. What does it mean?

It's a tough question, because it's not one of those technical terms with a technical definition. It's just something people say when they are talking about the benefits of functional programming. But since I say it, too, I might as well give my understanding of the term.

To me, "reasoning about code" is all about the limitations of the human mind. If we were hyper geniuses, we could read any amount of code and just understand it. But we're not and we can't. As programs get bigger, we can't help but lose track of what's happening in a program.

People are used to interacting with the real world, where effects tend to be local. For instance, if I'm locked in my house, a person ten miles away cannot attack me. When we walk down the street at night, we know there are people who might hurt us somewhere in the world. But what we're concerned about is if they are close by. Instead of starting with the locations of all attackers and calculating the probability of each of them being able to harm us, we look around where we are and we evaluate the people we see. Humans think locally because that's where the action is.

It's this sense of locality that we find in functional programming languages. When you look at functional code, several things help you localize yourself:

  • Scopes are localizing. Definitions cannot "leak in" from elsewhere.
  • Pure functions are localizing. Pure functions will act the same regardless of when they are called or how many times they are called.
  • Immutable values are localizing. No need to worry about other parts of the code modifying it.
  • Isolated and consolidated side effects are localizing. At least they're happening in the same place.

Since everything is relatively local, you have less to read and understand in order to reason about what the code is going to do. Conversely, there are some common things in programming languages that are non-localizing:

  • Global mutable state. Anything, in any part of the code, can change it at any time. The opposite of local.
  • Scope leak. Variables can often be modified outside of their scope, or mutations in one scope are visible outside the scope.
  • Mutable objects. You are referencing an object that is changing out from under you.
  • Side effects. Well, once you send a message across the wire, or receive one, it's not so local.

So what should we do? Push state down from global to local. Make it as local as possible. Factor out pure functions from your code, which should isolate state change. Use immutable values whenever possible. Isolate and consolidate your side effects (so at least you know where they are happening).
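A contrived Clojure sketch of that advice: in the first version, add-item! depends on a global mutable atom, so you can't understand a call without knowing the whole program's history; the pure version is fully local.

```clojure
;; Non-local: anything, anywhere, can swap! this atom at any time.
(def cart (atom []))
(defn add-item! [item]
  (swap! cart conj item))

;; Local: the result depends only on the arguments passed in,
;; so each call can be understood (and tested) in isolation.
(defn add-item [cart item]
  (conj cart item))

(add-item [] :apple)       ;; => [:apple]
(add-item [:apple] :pear)  ;; => [:apple :pear]
```

The state still has to live somewhere, but it's now pushed to the callers, who can keep it as local as they like.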


People talk about "reasoning about code" a lot and it's not clear that it's meaningful. But I do use the term and when I do I mean "things are more local so I can keep them in my head". It's a notion that serves me well. For instance, it acts like a code smell. If I'm not able to keep something in my head, it's time to make it more functional.

If you like this topic, or you'd like to get into functional programming, you might like to try the Newsletter. It's a weekly email about Clojure, Functional Programming, and how they relate to the history of technology.


Beautiful Location

How do you show off the beauty of a place?

I’ve been to many programming conferences and one thing I am tired of are the boring venues. I’m tired of looking at grey, carpeted walls. I know there are good reasons for doing things at a hotel, but I couldn’t imagine inviting people to New Orleans just to be at a big box hotel. I wanted the location to be an important part of the conference.

Last weekend I was in the French Quarter and decided to take the time to show off the wonderful location I have booked for Clojure SYNC. Short of being there, there’s really no better way to explain it than with a video. My daughter was a big help getting this footage and she makes a brief appearance.

And please be warned: there is jazz!

Oh, and the video lies. Tickets are on sale now!


Befunge for Clojurescript

Having written the Beatnik interpreter a few months ago, I was recently reminded that my Befunge interpreter still didn’t have a Clojurescript version and, given it’s been almost exactly five years since that post, it’s about time. For the impatient, it’s hosted on Github pages. By default it loads ‘Hello World’, and there’s a couple of other examples on the ‘choose examples’ list, or you can load an arbitrary Befunge program as well using the ‘load file’ option. For longer programs, try reducing the ‘Time per step’ to get a more reasonable speed.

The core Befunge interpreter is mostly as per the 2012 blog post, but I had to make a number of changes to make it run in-browser with Clojurescript while keeping it a valid Clojure program as well. I’d like to talk a little bit about those changes, the new bits around the interpreter, and a few things that I’ve learnt over the last five years.

The first issue concerns Reader Conditionals. A Clojure program naturally accumulates a few calls into Java methods and other things that are specific to the JVM implementation, and my Befunge interpreter was no different. One that tripped me up in particular was that calling int on a character in Clojure gets you its Unicode value, but in Clojurescript this fails horribly. Javascript, however, has a charCodeAt that does the equivalent, and so you can write

(defn charcode [c]
 #?(:clj  (int c)
    :cljs (.charCodeAt c 0)))

and get a function that does the right thing in each language. This is also useful when you want to print to console in Clojure, but store in a variable for print-to-screen for Clojurescript.
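For example, a single reader-conditional function can cover both cases (this is a hypothetical sketch, assuming a .cljc file and an output atom that the Clojurescript UI renders from):

```clojure
#?(:cljs (def output (atom [])))   ;; hypothetical buffer the UI renders

(defn emit [s]
  #?(:clj  (println s)             ;; JVM: straight to the console
     :cljs (swap! output conj s))) ;; browser: stash for print-to-screen
```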

Second was macros and testing. There’s a copy of the PyFunge test suite in the code base, and I’d previously just said “just run (runPyfungeTests) at your repl, it’s all fine”. This time around though, I understood macros better (in large part thanks to the “Clojure for the Brave and True” macro tutorial). Because of the homoiconicity of Clojure, they’re actually a lot less scary and easier to write than in some languages (yeah, I’m looking at you, Rust macro system). I still have to look up the syntax every time to remember the difference between ' vs. ` vs. ~, but once you get that, and understand that some bits are just so you output a list of those symbols, and some bits are used to generate the bits you’re going to output, they make a lot of sense. This meant I could now write a much better test that automagically generates Clojure tests from the Befunge code in the test folders, which means there’s now a proper Travis CI build.

After that, it was mostly a bog-standard Reagent app, with some use of Figwheel to do live reloading (which is still such a great development experience). The one extra thing that I hadn’t done before was using core.async to deal with things that you need to run in a loop as opposed to reacting to state changes (e.g. uploading a file, or running the Befunge program). Marcin Kulik has a good guide on this, and I also used Michael McClintock’s guide on uploading (although without the CSV parse step).

Newsletter 239: Clojure SYNC tickets on sale

Issue 239 – August 21, 2017

Hi Clojurnators,

A big thing is happening in New Orleans in 2018.

And you can be a part of it.

I just finished (minutes ago) putting up the site for Clojure SYNC. It’s a Clojure conference I’m organizing. And it’s going to be awesome. It’s a lot of work putting one of these together (so they tell me). But all you have to do is buy a ticket, book a flight, and reserve a hotel room.

You can get the Early Bird rate now until September 21. With it you save $50.

The venue is booked and I’ve arranged a great deal at four hotels within walking distance. These are beautiful hotels and they’ve been so helpful getting all of this together. You can find links on the Lodging page to book rooms at the discounted rates. (Well, all except one, which I’m supposed to get today).

That rate does expire and the rooms are limited, so do book your hotel as soon as possible.

Finally, if you aren’t ready to buy just yet, please get on the mailing list to be notified of updates as they happen. I will still talk about it in this Newsletter a little, but I’ll give you more frequent updates on the Clojure SYNC dedicated list.

I hope to see you in New Orleans. Buy a ticket today.

Rock on!
Eric Normand <>

PS I just set up the site, so please let me know if there are any problems. Thanks!

Web Development in Clojure

I mentioned last week that I would spend a good amount of time on revamping courses. Well, this was the first one I tackled, which was my second course ever.

There is now a complete GitHub repo for this course which I have tested to work. I also tagged each section in git to be sure you can never get lost. There’s also a complete written guide for both 30-minute lessons, which explains in more depth and links to documentation, along with copy-pastable code.

I’ve already shown it to some people and have gotten even more great suggestions for improvements, so be on the lookout for those.

Ways of Hearing

We are awash in a sea of media all the time. I use media in McLuhan’s sense, which is more like a synonym for technology. It’s basically anything that can change our relationship to our environment. In this way, the “medium is the message”. That is, the medium changes us regardless of the content.

That idea is rarely talked about, but this audio essay does it well. It shows how digital audio recording and playback have fundamentally changed how we record and how we listen. I can’t wait for the rest of the series.

“Deprecated” tag on

As part of the work to keep courses up-to-date, I’ve coded up a new tag to mark things as deprecated. At least one of the courses (Complete Web App from Scratch) was too far gone to be updatable. I was too ambitious choosing the main library since even at the time it would change between lessons. Being deprecated means you can still watch it if you want to, but I don’t recommend starting the course if you haven’t already.

Barbara Liskov 2010 Grace Hopper Celebration Keynote YouTube

Dr. Liskov gives a personal history of the study of modular programming.


What makes a good REPL?

Dear Reader: although this post mentions Clojure as an example, it is not specifically about Clojure; please do not make it part of a language war. If you know other configurations which allow for a productive REPL experience, please describe them in the comments!

Most comparisons I see of Clojure to other programming languages talk about its programming language semantics: immutability, homoiconicity, data-orientation, dynamic typing, first-class functions, polymorphism 'à la carte'... All of these are interesting and valuable features, but what actually gets me to choose Clojure for projects is its interactive development story, enabled by the REPL (Read-Eval-Print Loop), which lets you evaluate Clojure expressions in an interactive shell (including expressions which let you modify the state or behaviour of a running program).

If you're not familiar with Clojure, you may be surprised that I describe the REPL as Clojure's most differentiating feature: after all, most industrial programming languages come with REPLs or 'shells' these days (including Python, Ruby, Javascript, PHP, Scala, Haskell, ...). However, I've never managed to reproduce the productive REPL workflow I had in Clojure with those languages; the truth is that not all REPLs are created equal.

In this post, I'll try to describe what a 'good' REPL gives you, then list some technical characteristics which make some REPLs qualify as 'good'. Finally, I'll try to reflect on what programming language features give REPLs the most leverage.

What does a good REPL give you?

The short answer is: by providing a tight feedback loop, and making your programs tangible, a REPL helps you deliver programs with significantly higher productivity and quality. If you're wondering why a tight feedback loop is important for creative activities such as programming, I recommend you watch this talk by Bret Victor.

If you have no idea what REPL-based development looks like, I suggest you watch a few minutes of the following video:

Now, here's the long answer: A good REPL gives you...

A smooth transition from manual to automated

The vast majority of the programs we write essentially automate tasks that humans can do themselves. Ideally, to automate a complex task, we should be able to break it down into smaller sub-tasks, then gradually automate each of the subtasks until reaching a fully-automated solution. If you were to build a sophisticated machine like a computer from scratch, you would want to make sure you understand how the individual components work before putting them together, right? Unfortunately, this is not what we get with the typical write/(compile)/run/watch-stdout workflow, in which we essentially put all the pieces together blindly and pray it works the first time we hit 'run'. The story is different with a REPL: you will have played with each piece of code in isolation before running the whole program, which makes you quite confident that each of the sub-tasks is well implemented.

This is also true in the other direction: when a fully-automated program breaks, in order to debug it, you will want to re-play some of the sub-tasks manually.

Finally, not all programs need be fully automated - sometimes the middle ground between manual and automated is exactly what you want. For instance, a REPL is a great environment to run ad hoc queries to your database, or perform ad hoc data analysis, while leveraging all of the automated code you have already written for your project - much better than working with database clients, especially when you need to query several data stores or reproduce advanced business logic to access the data.

How's life without a REPL? Here's a list of things that we do to cope with these issues when we don't have a REPL:

  • Experiment with interactive tools such as cURL or database clients, then reproduce what we did in code. Problem: you can't connect these in any way with your existing codebase. These tools are good at experimenting manually, but then you have to code all the way to bridge the gap between making it work with these tools and having it work in your project.
  • Run scripts which call our codebase to print to standard output or to files. Problem: you need to know exactly what to output before writing the script; you can't hold on to program state and improvise from there, as we'll discuss in the next section.
  • Use unit tests (possibly with auto-reloading), which have a number of limitations in this regard, as we'll see later in this post.

A REPL lets you improvise

Software programming is primarily an exploratory activity. If we had a precise idea of how our programs should work before writing them, we'd be using code, not writing it.

Therefore, we should be able to write our programs incrementally, one expression at a time, figuring out what to do next at each step, walking the machine through our current thinking. This is simply not what the compile/run-the-whole-thing/look-at-the-logs workflow gives you.

In particular, one situation where this ability is critical is fixing bugs in an emergency. When you have to reproduce the problem, isolate the cause, simulate the fix and finally apply it, a REPL is often the difference between minutes and hours.

Fun fact: maybe the most spectacular occurrence of this situation was the fixing of a bug of the Deep Space 1 probe in 1999, which fortunately happened to run a Common Lisp REPL while drifting off course several light-minutes away from Earth.

A REPL lets you write fewer tests, faster

Automated tests are very useful for expressing what your code is supposed to do, and giving you confidence that it works and keeps working correctly.

However, when I see some TDD codebases, it seems to me that a lot of unit tests are mostly here to make the code more tangible while developing, which is the same value proposition as using a REPL. However, using unit tests for this purpose comes with its lot of issues:

  1. Having too many unit tests makes your codebase harder to evolve. You ideally want as few tests as possible to capture as many properties of your domain as possible.
  2. Tests can only ever answer close-ended questions: "does this work?", but not "how does this work?", "what does this look like?" etc.
  3. Tests typically won't run in real-world conditions: they'll use simple, artificial data and mocks of services such as databases or API clients. As a result, they don't typically help you understand a problem that only happens on real-life data, nor do they give you confidence that the real-life implementations of the services they emulate do work.

So it seems to me a lot of unit tests get written for lack of a better solution for interactivity, even though they don't really pull their weight as unit tests. When you have a REPL, you can make the choice to only write the tests that matter.

What's more, the REPL helps you write these tests. Once you have explored from the REPL, you can just copy and paste some of the REPL history to get both example data and expected output. You can even use the REPL to assist you in writing the fixture data for your tests by generating it programmatically (everyone who has written comprehensive fixture datasets by hand knows how tedious this can get). Finally, when writing the tests require implementing some non-trivial logic (as is the case when doing Property-Based Testing), the productivity benefits of the REPL for writing code applies to writing tests as well.
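As a sketch of that copy-and-paste flow (parse-price is a hypothetical function from your own project), a REPL interaction translates almost verbatim into a clojure.test case:

```clojure
(require '[clojure.test :refer [deftest is]])

;; At the REPL you explored:
;; user=> (parse-price "$1,234.50")
;; 1234.5

(deftest parse-price-test
  ;; input and expected value pasted straight from the REPL history
  (is (= 1234.5 (parse-price "$1,234.50"))))
```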

A REPL makes you write accessible code

A REPL-based workflow encourages you to write programs which manipulate values that are easy to fabricate. If you need to set up a complex graph of objects before you can make a single method call, you won't be very inclined to use the REPL.

As a result, you'll tend to write accessible code - with few dependencies, little environmental coupling, high modularity, and tangible inputs and outputs. This is likely to make your code more clear, easy to test, and easy to debug.

To be clear, this is an additional constraint on your code (it requires some upfront thinking to make your code REPL-friendly) - but I believe it's a very beneficial constraint. When my car engine breaks, I'm glad I can just lift the hood and access all the parts - and making this possible has certainly put more work on the plate of car designers.

Another way a REPL makes code more accessible is that it makes it easier to learn, by providing a rich playground for beginners to experiment. This applies to both learning languages and onboarding existing projects.

What makes a good REPL?

As I said above, not all REPLs give you the same power. Having experimented with REPLs in various configurations of language and tooling, here are the main things I believe a REPL should enable you to do to give you the most leverage:

  1. Defining new behaviour / modifying existing behaviour. For instance, in a procedural language, this means defining new functions and modifying the implementation of existing functions.
  2. Saving state in-memory. If you can't hold on to the data you manipulate, you will waste a ton of effort re-obtaining it - it's like doing your paperwork without a desk.
  3. Outputting values which can easily be translated to code. This means that the textual representation the REPL outputs is suitable for being embedded in code.
  4. Giving you access to your whole project code. You should be able to call any piece of code written in your project or its dependencies. As an execution platform, the REPL should reproduce the conditions of running code in production as much as possible.
  5. Putting you in the shoes of your code. Given any piece of code in one of your project files, the REPL should let you put yourself in the same 'context' as that piece of code - e.g. write some new code as if it were on the same line of the same source file, with the same lexical scope, runtime environment, etc. (in Clojure, this is provided by the (in-ns ...) - 'in namespace' - function).
  6. Interacting with a running program. For instance, if you're developing a web server, you want to be able to both run the webserver and interact with it from the REPL at the same time, e.g. changing the implementation of a route and seeing the change in your web browser, or sending a request from your web browser and intercepting it in your REPL.
  7. Synchronizing REPL state with source code files. This means, for instance, 'loading' a source code file in the REPL, and then seeing all the behaviour and state it defines take effect in the REPL.
  8. Being editor-friendly. That is, exposing a communication interface which can be leveraged programmatically by an editor. Desirable features include syntax highlighting, pretty-printing, code completion, sending code from editor buffers to the REPL, pasting REPL output into editor buffers, and offering data visualization tools. (To be fair, this depends at least as much on the tooling around the REPL as on the REPL itself.)

What makes a programming language REPL-friendly?

I said earlier that Clojure's semantics were less valuable to me than its REPL; however, these two issues are not completely separate. Some languages, because of their semantics, are more or less compatible with REPL-based development. Here is my attempt at listing the main programming language features which make a proficient REPL workflow possible:

  1. Data literals. That is, the values manipulated in the programs have a textual representation which is both readable for humans and executable as code. The most famous form of data literals is the JavaScript Object Notation (JSON). Ideally, the programming language should make it idiomatic to write programs in which most of the values can be represented by data literals.
  2. Immutability. When programming in a REPL, you're both holding on to evaluation results and viewing them in a serialized form (text in the output); what's more, since most of the work you're doing is experimental, you want to be able to confine the effects of evaluating code (most of the time, to no other effect than showing the result and saving it in memory). This means you'll tend to program with values, not side-effects. As such, programming languages which make it practical to program with immutable data structures are more REPL-friendly.
  3. Top-level definitions. Working at the REPL consists of (re-)defining data and behaviour globally. Some languages provide limited support for this (especially some class-based languages); sometimes they ship with REPLs that 'patch' some additional features to the language for this sole purpose, but in practice this results in an impedance mismatch between the REPL and an existing codebase - you should really be able to seamlessly transfer code from one to the other. More generally, the language should have semantics for re-defining code while the program is running - interactivity should not be an afterthought in language design!
  4. Expressive power. You may think it's a bit silly to mention this one, but it's not a given. For the levels of sophistication we are aiming for, we need our languages to have clear and concise syntax which can express powerful abstractions that we know how to run efficiently, and there is no level of interactivity that can make up for those needs. This is why we don't write most of our programs as Bash scripts.
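The first point - data literals - is worth a tiny illustration. In JavaScript (chosen here only for familiarity), the REPL's printed representation of a plain-data value is itself valid source code, so values can travel freely between output, programs, and tests:

```javascript
// A value built from data literals only (hypothetical example).
const order = { id: 42, items: ["apple", "pear"], total: 7.5 };

// In the Node REPL, evaluating `order` prints something like:
//   { id: 42, items: [ 'apple', 'pear' ], total: 7.5 }
// which can be pasted back into a program or a test verbatim.
const roundTripped = JSON.parse(JSON.stringify(order));
console.log(JSON.stringify(roundTripped) === JSON.stringify(order)); // true
```

Contrast this with a value whose printed form is an opaque handle (say, a class instance printing as `Order@3f2a`): there is no way to copy it out of the REPL and back into code.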


If you've ever played live music on stage without being able to hear your own instrument, then you have a good idea of how I feel when I program without a REPL - powerless and unconfident.

We like to discuss the merits of programming languages and libraries in terms of the abstractions they provide - yet we have to acknowledge that tooling plays an equally significant role. Most of us have experienced it with advanced editors, debuggers, and version control to name a few, but very few of us have had the chance to experience it with full-featured REPLs. Hopefully this blog post will contribute to righting that wrong :).


Extracting Chrome Cookies with Clojure

Introduction

Did you know that it is possible to extract your Google Chrome cookies to use them in a script? This can be a great way of automating interaction with a website when you don't want to bother automating the login flow. There is a great Python library to do that called browsercookie. Browsercookie works on Linux, Mac OS X, and Windows, and can extract cookies from Chrome and Firefox. I have been learning Clojure for the past month and I decided to reimplement the same functionality as browsercookie as an exercise!


A Reason to Code

A few months ago I read about a new programming language called Reason. It was sold to me as a better JavaScript, like so many compile-to-JavaScript languages were. We've seen them: CoffeeScript, LiveScript, Elm, ClojureScript. So I was a bit reluctant to look into it, but a few days ago I started reading into it and I was blown away.

What is Reason?

Reason is a language that can compile to both JavaScript and native code, like you know from C/C++. So you can use it to write a Node.js application, your front-end, or a plain native app, like people did back in the day ;)

The interesting thing about Reason is that it's not completely new. It can basically be seen as a new syntax for OCaml, so you can convert Reason to OCaml and back without any loss.

Why a new syntax? Well, I guess the target group of Reason is web developers, so they wanted to make it look more like JavaScript.

For example JavaScript has spread syntax for arrays:

    const x = 1;
    const a = [2, 3, 4];
    const b = [x, ...a]; // -> [1,2,3,4];

Doing this with OCaml looks like this:

    let x = 1
    let a = [2; 3; 4]
    let b = x :: a

Doing this with Reason looks like this:

    let x = 1;
    let a = [2, 3, 4];
    let b = [x, ...a];

How does it work with JavaScript?

The OCaml compiler has a plugin system: you can implement your own back-ends for it that create the outputs you need. So devs at Bloomberg built BuckleScript, a back-end for the OCaml compiler that outputs JavaScript.

Now, people weren't too impressed by this because, as I mentioned above, many languages did this before, and OCaml didn't have a very familiar syntax. But the creators of Reason found that the OCaml compiler is pretty good and wanted to harness this power for the Web, so they set out to create this new syntax while trying to stay fully OCaml-compatible.

So what happens is: you write Reason, convert it to OCaml (losslessly), and compile it down to JavaScript with the help of BuckleScript. As far as I know it's one of the fastest to-JavaScript compilers out there, and it even produces human-readable code, which is funny since it compiles from OCaml byte-code to JavaScript.

Why should anyone care?

OCaml is crazy powerful.

First, it has a sound static type system that gives really nice error messages - think Elm-level errors, with things like "you wrote X, did you mean Y?". But yes, some people don't care much about this, and those who do are already using Elm, Flow, or TypeScript. (Btw. Flow is built in OCaml.)

Second, implicit imports and exports for modules. You know the pile of imports we have now in JavaScript? If you don't fold your imports with an IDE, the more complex JS files today show a list of imports, and you have to scroll down to see the real code. Also, we often forget to import something and everything blows up. OCaml does all this for you. Every .re file is a module and exports all its top-level bindings.

    /* */
    let a = 1;
    let b = 2;
      let notExported = 3;

    /* */
    let a = MyModule.a;
    let b = MyModule.b;
    let c = MyModule.notExported; // -> error

Third, OCaml figures out what values it can calculate at compile time and does dead code elimination. This is like tree-shaking on steroids. Prepack tries to do this for JavaScript.

For example this Reason code:

    let z = {
      let x = 10;
      let y = 20;
      x + y
    };

Could naively be converted to this JavaScript:

    let z;
    {
      const x = 10;
      const y = 20;
      z = x + y;
    }
    exports.z = z;

But it's actually converted to (something like) this:

    var z = 30;
    exports.z = z;

It throws out modules you don't use (since you never import them yourself anyway, OCaml does it for you), and then it also tries to pre-calculate values and throw out dead code.

Finally, it has ESLint- and Prettier-like features already included (Prettier was in fact inspired by refmt, the Reason formatter): code hints and automatic formatting, so you don't have to think about indentation or semicolons anymore.

Bonus: it even has a JSX-like syntax, which is a big win for React developers, and it's even a bit more lightweight than JSX.

    // JavaScript JSX
    <Component key={a} ref={b} foo={c} d={d}> {child1} {child2} </Component>

    // Reason JSX
    <Component key=a ref=b foo=c d> child1 child2 </Component>

Also, there are some smaller things that help in everyday coding, like named function arguments, or using the type system instead of strings for decisions - things that compile down to a bool or number in the end instead of string comparisons.


Reason seems to be a pretty interesting piece of software. It addresses a bunch of things I run into in my daily JavaScript coding, which gives me the feeling that it's a practical language. I didn't expect that when I heard that it's basically OCaml - a functional programming language.

So I think it is worth a try, for my private projects at least.


Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.