The Curious Case of Clojure Contrib

Do you know what Clojure Contrib, microservices, and DevOps have in common? Everyone thinks they know and understand them, but most people have a different idea of what they actually are.1

For many years people have believed that Contrib is one of the following:

  • A staging ground for functionality that will eventually make its way to Clojure.
  • A standard library for Clojure.
  • A set of (superior) libraries maintained/blessed by Rich/Cognitect.

Contrib has a long history and I think that over its course the vision for the project changed a few times.2 Unfortunately, for quite a while, the only official description of Contrib was one really outdated article on Clojure’s Confluence.

I’ve wanted to share my perspective on Contrib for a very long time, as I felt it had a somewhat harmful effect on the broader Clojure community, but I kept putting this off as I don’t enjoy writing such articles much. I finally found the inspiration I needed after a recent conversation on GitHub with Sean Corfield, Alex Miller and Stu Halloway about Contrib’s philosophy. I won’t reproduce everything we discussed there - after all, you can just go and read the discussion or peruse the excellent new documentation that conversation resulted in.

The Nature of Contrib

So what is Contrib, really? Let’s see what the official documentation has to say on the subject:

Clojure is an umbrella level project that hosts many additional libraries called Clojure contrib, which all exist under the Clojure organization on GitHub.

These libraries use the same contribution model (Contributor Agreement, license, and copyright) as Clojure itself, and a shared infrastructure maintained by the Clojure core team:

  • Source control - Git libraries on GitHub
  • Issues - JIRA
  • Development model - patches in JIRA, no PRs
  • Continuous integration - Jenkins
  • Doc generation and hosting - Autodoc and GitHub pages
  • Builds - Maven with deployment to Maven Central under the groupId org.clojure

Every project has one or more owners (often community members) who determine the direction of their library, triage patches, etc. Project owners have full commit rights and authenticated access to their project on the build box for releases.

Contrib libraries are open source libraries, like any other open source Clojure library. They vary widely in development activity and quality and may or may not be the best alternative for their functionality in the overall ecosystem. You should evaluate them before use as you would any open source library you intend to use as a dependency.

– Official Contrib Libraries Documentation

I think that’s pretty clear. As you can see, Contrib libraries follow Clojure’s own development model very closely and share its licensing. You should also note that there are no guarantees about the quality and maturity of those libraries.

I think generally too much is made of contrib. It’s just a collection of libs with a shared dev model. That’s no different than clj-commons or ClojureWerkz. They all have different approaches but each has its own view of how to setup and manage projects.

There is no “goal” for contrib beyond providing infrastructure for lib dev. It’s just a bunch of libs with a shared dev model.

– Alex Miller

Even clearer.

A Personal Perspective

Now that we’re on the same page about what Contrib is, I’ll switch gears a bit and share some personal perspective on it - which is why I started writing this article in the first place.

First things first - I’m certainly bothered a bit by the lack of an overarching goal that drives Contrib. It’s hard to make progress if you don’t know where you’re going, right? Likely all of this is just an artifact of the history of Contrib, but it irks me regardless. I actually attribute a lot of the other things that I’ll discuss in the paragraphs to come to this general lack of vision and direction.

I’ve always been a bit frustrated that there’s no clear distinction between projects in Contrib which ended up there for historical reasons (e.g. as part of the restructuring of the original monolithic clojure-contrib), projects that were community-contributed (e.g. tools.trace), and “special” projects that are run by Cognitect themselves (e.g. core.async, tools.deps, etc.). To add further to the confusion, there are some core.something and tools.something projects started by Rich/Cognitect and some that are not (core.async vs core.logic, tools.reader vs tools.deps, etc.). It’s really hard to tell at a glance which are the “Core Clojure” projects in there and which are community-contributed/maintained. That’s not a big deal for me, as I’ve followed Clojure’s development closely since version 1.0, but I know it is a serious source of confusion for casual onlookers. If it were up to me I’d just move the “non-Cognitect” projects to a different GitHub org and keep only the core projects in the current organization.

By the way, writing this reminded me that we don’t even have proper terminology for something like core.async. In theory it’s just another library, but in practice its impact is huge and no one has really tried to replicate core.async, despite a lot of complaints about known problems and a slow development pace (likely because there is still some expectation that it might end up being part of Clojure at some point). I also can’t tell you how to refer to something like tools.deps - it’s definitely not an ordinary community project, but it’s also not a true core project like clojure.spec. Anyway, for lack of better terminology I’ll refer to all “special” libraries as “Core Clojure” in this article.

I think Cognitect are aware of this “blurred lines” problem, as recently they have been favouring their company’s GitHub orgs for non-core Clojure libraries (e.g. transit, REBL, etc.). I also haven’t seen any new non-Cognitect-driven projects join Contrib in a very long time, which is a good thing in my book.

If there’s one area where Cognitect could have done better, it’s community building - engaging Clojurians more actively to help with the growth of the Clojure ecosystem. Here I’m not talking about engaging people to work on Clojure itself, but rather on libraries, frameworks, development tools and all that jazz. This might be just some wild speculation on my end, but I really feel that if the community stewards more actively encouraged work in certain areas, it’s pretty likely that people would rally around their cause. Believe it or not, leaders wield considerable motivational power. :-)

In this line of thinking, the biggest problem with Contrib is that its contribution process failed to attract contributors to most projects hosted under it. Typically the projects would see activity while their original author was involved in them and would stagnate afterwards. Even extremely popular projects like nREPL, tools.namespace and core.logic were affected by this. Currently only 11 out of 42 Contrib libraries are considered in active development, with the majority of the libraries considered to be in a “stable” state. Unfortunately there’s a relatively thin line between stable and abandoned… While all of the libraries that are marked as stable are certainly useful in their current form, there’s also the fact that few of them made it to 1.0, which carries the implication that their author(s) felt they hadn’t reached the point of stability yet. On the other hand, it’s definitely true that abandoned libraries have very stable APIs. :-) I also strongly believe that active development and stability are not fundamentally at odds with one another. I think Clojure itself is a perfect example of this.

I’m not going to try to make the argument that Clojure’s contribution process is bad in general - it’s what Rich wants for Clojure and that’s obviously his decision to make. Still, it seems strange to impose it on projects where he and Cognitect are not really involved. And if the CA is the only really important aspect of Contrib - well, there are certainly less painful ways to enforce it (e.g. you can have a CI check for PRs that verifies the PR author has signed the CA). Going down the rabbit hole, however, you really start to wonder what the point is of having some libraries under the same CA as Clojure if they are not essential in some manner. I definitely think that all of this is just a remnant of a period in time when it was actually possible for some of those external libraries to be promoted to Clojure itself.

The high bar to contribute has likely negatively affected most projects in Contrib, and this ties in with the previous point I made - essential and non-essential projects have different impact and goals. The development rigidity required for something like tools.deps is probably not required for something like tools.namespace. A separation between the two types of Contrib projects might open up the way for the non-essential projects to be developed in a more “relaxed” manner, while still remaining part of Contrib.

I’m assuming that by now many of you, dear readers, believe that I’m advocating for some changes to Clojure Contrib. Let me pause here for a second to assure you that’s definitely not the case. I’ve resigned myself to the fact that Contrib is not going to change and I’ve made my peace with this.

The real question that I want to ask is: do we even need Contrib today? Does it provide any benefits for the projects there at all? The point of this post is certainly not to teach Cognitect how to run Contrib, and we’re finally getting to what it actually is.

Beyond Contrib

I believe it’d be really great if people stopped looking to Cognitect to provide them with all the answers and all the solutions, and embraced the fact that their bandwidth is limited and their goals and priorities are not always going to be aligned with yours. That’s perfectly OK. If we all accept this, the solution to the problems I’ve outlined so far becomes apparent - the needs of a community are generally going to be served best by community-driven organizations. Organizations that share your perspective and your priorities.

There have already been some community-driven success stories in the past (e.g. ClojureWerkz) and we recently saw the creation of Clojure Commons, which aims to host, develop and maintain key libraries in the Clojure ecosystem. That’s a fantastic initiative and I’m certain it’s going to play a vital role in Clojure’s future! But it’s just one step in a long journey. I’m a big believer in laser-focused organizations that house related projects targeting a particular problem/domain. Think Ring, nREPL, Clojure Emacs. It’s much easier to build a team of like-minded people when the scope of something is clearly defined. We need a lot more of those!

The people who know me are aware of my immense passion for free software projects, and that I always want to build healthy communities around my projects, nurture contributors, etc. I hope that down the road the Clojure community will be dominated by organizations of this type.

And that’s the real point I’ve been trying to make all along - there’s a huge difference between community-driven projects and projects with community involvement. Clojure Contrib is not a community-driven project. Plain and simple. For any community to mature and evolve, it needs to take control of its destiny. That means switching from a mentality of following to one of leading. From waiting for something to happen to making things happen. From complaining to fixing and building. That’s where I want to see the Clojure community several years from now. Trust me when I tell you that it’s both more fun and more empowering!


So, let’s summarize the key takeaways:

  • Contrib is just a common way to develop Clojure libraries.
  • It’s pretty unlikely that any of the projects there will ever become a part of Clojure.
  • There are no guarantees about the quality and maturity of the libraries.
  • There’s no real reason for most projects to join Contrib.
  • There’s a big difference between community-driven projects and projects with community involvement.
  • It’s high time for the Clojure community to take stronger ownership of its future.

Contrib played an important part in the history of Clojure, but I think we’re at a point where it has outlived its usefulness and should be considered mostly a curious artifact of the past.3 I’m certain that with time most of the non-Cognitect libraries there will be migrated elsewhere (e.g. to Clojure Commons) or will be replaced by alternative libraries developed outside of Contrib. I can only hope that my article and the success of nREPL, following its migration out of Contrib, will expedite this process and inspire more of you to participate in community-driven projects.

I’ve just noticed that Sean Corfield is considering continuing the development of java.jdbc outside Contrib, which I see as another encouraging sign that the times are changing. Hopefully other library maintainers will move in this direction as well. Fingers crossed!

Keep hacking!

  1. Usually the wrong idea. 

  2. Chas Emerick once told me that the only reason he agreed to transfer nREPL to Clojure Contrib was a promise that it could end up as part of Clojure once it stabilized. 

  3. And I’m really partial to Sean Corfield’s opinion, which he expressed here


The Completely Ordinary Case of Clojure Contrib

I read The Curious Case of Clojure Contrib and unfortunately I can’t help but disagree with a lot of it. Yet strangely, I agree almost entirely with its conclusions.

I do agree with the quotes from me at the beginning. :) None of it should be surprising though. Contrib is just a bunch of libraries with a shared CA and a shared dev infrastructure. Very early on, stuff like clojure.test did move from the original monolithic contrib into core, and the thought at the dawn of split contrib was that this might happen more, in which case having a shared CA would be helpful. I can’t remember anyone ever claiming that stuff in contrib would necessarily move into core, just that it was possible to do so if that became desirable. Some of the tools.deps add-lib stuff might move into core, for example.

There are a bunch of contrib projects that are not “in” Clojure but are used in the Clojure build itself (test.generative, data.generators, test.check, tools.namespace, etc) and we’re far more comfortable depending on those as contrib libraries than relying on fully external libs when building and testing Clojure proper. We now have the opposite situation - parts that started in Clojure and got pushed out, like spec, and that scenario may become more common over time, allowing us to evolve the parts included “in the box” at a faster rate than the language itself. In all these cases, contrib is useful.

There seems to be some desire in the article to force all of the libraries in contrib to have the same role, or ownership, or use, or importance. But, they’re all different, they each have unique histories, purposes, etc. If they don’t all look the same, it’s because they’re not. What they share is: license, CA, dev model, infrastructure.

At Cognitect, we make decisions about where and how to host different libs based on a variety of factors. Whether we put things under Cognitect or Cognitect Labs or Clojure or their own orgs (like Pedestal) are decisions based on a bunch of factors - licensing, support expectations, toolset, whether we consider them to be products of Cognitect or Clojure, who’s working on it, for what purpose, etc.

Contrib projects are community projects - anyone is welcome to contribute under the rules of the project and at the direction of the project owner. Saying “contrib” is not community-driven is meaningless because “contrib” is not one thing - every project is different. Many contrib projects are run and contributed to entirely by members of the community without any involvement from the core team. Saying those projects aren’t community-driven seems disrespectful to those community members doing that work. Certainly, there are projects in contrib closer to the language that are controlled more tightly. Even those are still open to contributions. Spec or tools.deps or core.async have all had community contributions. This distinction between community-driven and community-involved is … weird. All projects have owners that make decisions.

Re “typically the projects would see activity while their original author is involved in them and would stagnate afterwards” - this is true of 99% of open source projects. The benefit of having an organization around these projects is exactly that if interest wanes or life intercedes, we can carry it along for a while until a new maintainer steps up - and this has happened many times! You say “there’s relatively thin line between stable and abandoned” but that goes the other way too - there’s a relatively thin line between stable and active. All it takes is for someone to have new ideas and ask to pick up the project again. Again, this has happened many times.

Re “the high bar to contribute has likely affected negatively most projects in Contrib”, this is, imo, totally bogus. Most contrib projects have a few primary contributors and many more contributors who have provided a fix or two, which is entirely the norm across open source. The real barriers to entry in contrib projects are exactly the same as in any other project - getting people to know it exists, providing useful functionality, creating consistent design over time, building consensus, getting work done. Process stuff like signing a digital CA form (one minute), making a patch file (just add .patch to a github PR url), or using JIRA is trivial compared to the people and technical parts of this work. I make patches and PRs and jiras and github issues every day and it’s all just the frame, not the work.

There seems to be a narrative that nREPL “failed” in contrib but has been wildly successful outside of it. I call BS on that. nREPL has succeeded outside contrib because Bozhidar has been a tireless cheerleader and promoter and nurturer of people AFTER it was migrated. He could have done the exact same work when it was in contrib, as others do in other contrib projects. The contributor curve since migrating looks very familiar - a few primary contributors, lots of people contributing a fix or two, just like most contrib projects. The distribution is a little wider, and that’s the benefit of a lot of people’s work.

Despite all these things I disagree about, I am in complete agreement that if the community wants things to exist, they should make and nurture them. That has been the message for the entire history of Clojure (it’s in the linked history article from the 2011 Conj, it’s in the etiquette newsgroup post from Rich in 2011). Whether you make things and nurture them in contrib or elsewhere is imo entirely irrelevant.

Please, go forth, make more awesome Clojure things, and tell people about them. If you want to be involved, there are an endless number of existing projects with things to fix and improve. You do not need direction, or blessings, or contrib. Just do the work that needs doing.

Newsletter 323: Tip: Keep your workflow simple

Issue 323 – April 22, 2019 · Archives · Subscribe

Clojure Tip 💡

keep your workflow simple

In my Repl-Driven Development in Clojure course, I recommend learning three commands for executing code: one to evaluate the whole file, one for a top-level form, and one for a single expression. With those three, you’ve got all you need for selectively evaluating code. There are more options available, but are they worth learning? It’s doubtful.

It’s always tempting to solve workflow problems with another tool. Is reloading namespaces hard to get right? Add another tool. Do you forget what keys belong in a certain map? Add another tool.

These tools could fix your problems. And they might bring their own new problems. In my opinion, it’s much better to aim for elegance and solve the problems by removal, not addition.

I’ve done a tiny bit of modern JavaScripting, and there were two problems with the tooling. The first one everyone talks about: the churn of tools. New tools replacing old tools. And new versions of existing tools. Very rapid change.

But there was a second problem that people talk about much more rarely: the sheer quantity of tools. Transpilers. Linters. Minifiers. Tree shakers. Dependency tools. Whatever Browserify is. Each one solves real problems. But there is a high cost associated with each.

We don’t have the same problem to the same scale, but the problem is there. We will always be tempted to reach for a new, shiny tool to make our problems disappear. Very often, the problem is still there, and we also have a new problem—the care and management of the tool.

My recommendation is to avoid taking on new tools. Sure, give them a shot, but with a skeptical eye. That won’t completely eliminate new tools since they will tend to slip in, just due to entropy. So you must actively look for tools and dependencies you can eliminate.

Do you have a tip you’d like to share? Let me know. Just hit reply.

Functional Programming Media 🍿

As you may know, I have a podcast. It comes out twice per week. A kind listener recently pointed out that my RSS feed only had ten items in it. That means only 5 weeks were included. I’ve extended it to all items now, so you can find all the episodes back to day one.

Check out my podcast. It’s called Thoughts on Functional Programming.

Brain skill 😎

Take notes by hand. There’s something about holding a pen and moving your hand across an empty page. Paper is two-dimensional, so your notes don’t have to be linear. Your mind slows down, since handwriting is slower than typing. Does it make you learn better? That’s unclear. But using a different part of your brain might help make more connections.

Clojure Puzzle 🤔

Last week’s puzzle

The puzzle in Issue 322 was to count the number of ways to make £2 (200p) out of coins.

You can see the submissions here.

One thing I like about Project Euler problems (and this was one) is that they seem pretty simple. The difficulty comes in their scale. They often don’t fit in memory, or the naive solution is exponential, or some other scaling issue. The simple, straightforward solution doesn’t work. So you have to start using your brain again, even when you’re in a powerful language.

This puzzle was no exception. My naive implementation never returned. You can see mine here. I did manage a recursive, lazy approach which does much better. It was inspired by Duminda Rathnayaka’s submission.
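For readers curious what a workable approach looks like, here is one hedged sketch - not Eric’s actual solution, just a memoized recursion over the standard UK coin denominations:

```clojure
;; Count the ways to make `amount` from the coin types in `coins`.
;; Memoized so the recursion doesn't recompute overlapping subproblems.
(def count-ways
  (memoize
   (fn [amount coins]
     (cond
       (zero? amount)                    1  ; exact change: one way
       (or (neg? amount) (empty? coins)) 0  ; overshot, or no coin types left
       :else
       ;; Either use the first coin type (possibly again), or skip it forever.
       (+ (count-ways (- amount (first coins)) coins)
          (count-ways amount (rest coins)))))))

(count-ways 200 [1 2 5 10 20 50 100 200]) ;=> 73682
```

Without the `memoize`, the naive tree recursion repeats the same subproblems many times over, which is the “never returned” trap mentioned above.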

This puzzle was perhaps a little harder than usual. I only got two submissions. This gives me the idea to teach a little bit about recursion. I do have a podcast episode about it, but it’s very high-level.

This week’s puzzle

range consolidation (from Rosetta Code)

We can represent a range of numbers as a tuple, like this [1 4]. That means all real numbers between 1 and 4, including 1 and 4. The task is to write a function that takes a collection of ranges and consolidates them so that there are no overlapping ranges.

For example, [1 4] and [3 5] overlap, so they would be consolidated to [1 5]. [10.2 15] does not overlap, so it doesn’t change.

(consolidate [[1 4] [3 5] [10.2 15]]) ;=> [[1 5] [10.2 15]]

There can be any number of ranges in that collection, and they could be in any order.
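If you want a starting point, here is one possible sketch (just an illustration, not an official answer): sort the ranges by their start, then merge each range into the previous one whenever they touch or overlap.

```clojure
;; Consolidate a collection of [lo hi] ranges into non-overlapping ranges.
(defn consolidate [ranges]
  (->> (sort-by first ranges)
       (reduce (fn [acc [lo hi]]
                 (let [[plo phi] (peek acc)]
                   (if (and phi (<= lo phi))
                     ;; Overlaps (or touches) the previous range: extend it.
                     (conj (pop acc) [plo (max phi hi)])
                     ;; No overlap: start a new range.
                     (conj acc [lo hi]))))
               [])))

(consolidate [[1 4] [3 5] [10.2 15]]) ;=> [[1 5] [10.2 15]]
```

Sorting first means each incoming range can only overlap the most recently emitted one, so a single `reduce` pass suffices.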

As usual, please send me your implementations. I’ll share them all in next week’s issue. If you send me one, but you don’t want me to share it publicly, please let me know.

Rock on!
Eric Normand



What is point-free style?

Point-free style is a way of defining functions with a very simple constraint: you cannot name arguments or intermediate values. How can you possibly do that? Well, with higher-order functions, of course. For instance, with function composition, you can define a new function without naming the arguments. Some languages, like the APL family, or Haskell, let you do this very easily.


Eric Normand: What is point-free style, and does it have a point? By the end of this episode, you’ll know what point-free style is, how you might already be using it, and why it sometimes makes things hard to read.

Hi, my name is Eric Normand and I help people thrive with functional programming. Point-free style, it’s an important term because you might come across it when you’re reading about functional programming, or in a discussion with someone who is into functional programming.

It’s good to be familiar with it. It’s also an important style for programming, because it helps you think at the right level of abstraction. These things are good to name. This is where my head is.

That said, it’s not like once you learn this you should be doing point-free style everywhere - except maybe as an exercise. It is not “the last style” that we should all be learning; it’s nothing like that.

What is point-free style? It means you’re programming in a way that you don’t need to name your arguments or name any of your intermediate values. When you define a function, normally, you have to name the arguments. Most languages require it, or they make that the really easy way to define a function.

Likewise, when you’re in the body of the function, you’re writing what this function does. Often the easiest way to use the result of calling a function is to save it to a variable, and then pass that variable to the next function. Use it later in the expression.

Point-free style says, “Can we do without that? Naming is hard, it takes a lot of room. It’s a line per variable. Maybe we don’t want to do that?”

How do you do this? The easiest example to really hit home is, let’s say you’re calling map, and you’re calling map with something like toUpperCase, the function toUpperCase. On a list of strings, you’re mapping toUpperCase over those strings.

One thing you could do is you could define…you’re calling map, you’re passing in a function, so you’re going to define a new function, give it an argument, maybe call it S because it’s a string. Then you call toUpperCase on that string and return it. Why did you wrap toUpperCase in a new function? It already is a function.

You could just pass that toUpperCase function directly. It already takes one argument, it already returns the right value, so why would you wrap it at all? Just from my personal experience, I see a lot of JavaScript programmers doing this. A lot of JavaScript code seems to unnecessarily wrap functions in other functions. I don’t know why it’s so common there.

I know in some cases you need it because the map passes both the value and the index, and you’d want to ignore the index. There’s reasons for some of it but a lot of times it’s unnecessary.

Removing that extra layer where you have to name the argument and just pass in the function itself, it’s a way to achieve point-free style. Point-free style has a bunch of little techniques that remove the need for arguments, one little step at a time until you got no arguments. That’s the point-free.
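In Clojure, that contrast looks something like this (a tiny sketch; the episode’s example is JavaScript’s toUpperCase, so `clojure.string/upper-case` stands in for it here):

```clojure
(require '[clojure.string :as str])

;; Wrapping the function in a new function, just to name an argument:
(map (fn [s] (str/upper-case s)) ["point" "free"])
;=> ("POINT" "FREE")

;; Point-free: pass the existing function directly.
(map str/upper-case ["point" "free"])
;=> ("POINT" "FREE")
```

The wrapper adds nothing: `str/upper-case` already has exactly the shape `map` needs.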

You could have a point-less, point-minimal style where you have one or two arguments left, maybe. It’s a spectrum, that’s what I’m trying to say: you can get all the way down to point-free style, or you can just move along in that direction.

Another one that we’ve already talked about in a previous episode is function composition. Function composition in the general case is, you have two functions, you want to take the return value of one and pass it to the other.

So you make a new function, it takes the arguments, passes them to the first function, saves the return value, then calls the second function with that return value, and then returns it.

Well, there’s a special case where the second function only takes one argument. The return value can go right in there. You don’t even need to wrap it in a new function that you write explicitly. You can put all that boilerplate into another function, a higher-order function that takes two functions and returns a new function.

Notice when you do that, you’ve totally eliminated naming the arguments or the intermediate values. You just say, “Compose f g or f.g,” depending on your language and how you do it.

Instead of writing a new function that takes x, calls g of x, and then passes that to f and returns it, you just write: compose f g. You’ve eliminated the argument. Function composition is used a lot in point-free style. Another technique that’s very related to function composition is pipelining. In Clojure, you have a thing called the threading macros - it’s a bunch of things.

It’s a bunch of different macros that do threading. What it lets you do is write pretty long function composition expressions, but in a top-down, like one line at a time way, where the return value of one thing is passed to the next thing, is passed to the next thing, is passed to the next thing. You never have to name those things. This is a way of achieving point-free style.
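Both ideas can be sketched in Clojure like this (a toy pipeline; the particular string helpers are just an illustration):

```clojure
(require '[clojure.string :as str])

;; Function composition: comp builds the new function for you.
;; Note that comp applies right-to-left: trim first, then lower-case.
(def normalize (comp str/lower-case str/trim))

(normalize "  Hello World  ") ;=> "hello world"

;; Threading: the same pipeline written top-down, one step per line,
;; with no named intermediate values.
(-> "  Hello World  "
    str/trim
    str/lower-case)
;=> "hello world"
```

In both versions no argument or intermediate value is ever named - which is exactly the point-free idea being described.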

I believe Elixir has something very similar. I don’t know if they call it threading or pipelining, but it’s the pipe… Oh man, am I going to get this wrong? I think it’s the pipe-greater-than sign (|>). Elm has something similar as well; it just lets you flow values through these functions. You don’t have to name any of those intermediate values. Why would you want to do this? Why?

Why is this a thing worth naming? Two reasons, really. Naming is hard, so having to come up with all these variable names is a thing that if you can avoid it, and without much cost, it might be worth avoiding naming. The names take up more room, so you have longer code.

There are things that could get out of sync more easily. For instance, I once named a variable five days, because the first time I wrote it, it was five days - a unit of time.

Then I changed it. Now the function that I’m assigning to that variable is returning seven days. I need to change the variable name, and then go and find everywhere I use it and change it. I named it wrong. Naming is hard; I named it wrong. I could maybe have avoided that, and not have had to deal with it at all.

Then there’s this other more important reason besides not having to do this hard thing called naming. The other thing is that when you’re doing point-free style, you’re often working at a higher level of abstraction. Let’s call it the right level of abstraction. You’re like really in the zone, you’ve got the program in your head. You’re thinking, “Oh, hey, I need to compose these two functions.”

You don’t want to be thinking, “OK, now I have to define a new function, give it an argument name and thread this argument in here.” You don’t want to do all that. You want to think, “Oh, I’m just composing these two. Done.” In Haskell, it’s one character: f . g, boom. A period. The full stop.

It lets you express yourself at that higher level just like higher-order functions too. It’s done with higher-order functions. Point-free style is done with higher-order functions. This is taking it to the extreme where you’d never name arguments. That’s the style.

Just like you can get into this flow where you’ve got really good concentration that day, you’ve got the whole thing in your head and you’re working really efficiently. You end up writing some very clever code.

That doesn’t mean that tomorrow, you’re going to be in that same headspace, a week from now, or certainly, six months from now. This style can be very terse. It relies often on operator precedence rules and other knowledge of how things work, like how currying will work, things like that. What happens is you come back to it and you don’t understand it anymore.

Looking at some Haskell point-free code for examples (I used to work in Haskell, and I would write stuff like this sometimes), there are a lot of parentheses around things. There’ll be parentheses around the compose operator because you want to force the precedence somehow.

You’re passing the compose and you don’t want the compose to run. You want to pass compose. You put parentheses around it. You get some really, what I would consider, gnarly, gnarly code. There’s something about the redundancy of names that makes it easier to read sometimes. I think it’s like a noble pursuit or it’s a good exercise to try to push the style to its extreme. Just like an architecture.

You might say, “Let’s take some architectural principle and push it to the extreme, see what we get, what building we will design if we just say,” I’m just going to make something up, “All the vertical structure is going to be made of steel and all the horizontal structure is going to be made of wood.”

Just very, very basic rules and constraints like that. You see what buildings you make out of that. It’s a great exercise because it’s forcing you to explore in this new way. That doesn’t mean it will make a good building. It doesn’t mean people will want to live in the building or work in that building. But it’s a good exercise.

I think that’s true of point-free style. It’s a good exercise. It causes you to think of this higher-order function level. It doesn’t mean the code you write is the code you want to live with in your system. That’s my opinion there.

The guy who wrote the Dart programming language has said that he thinks you really can’t do point-free style and currying, which is used in point-free style a lot.

You can’t really do them without a good type system because you lose so much information when you eliminate the arguments and the names of the intermediate values, that you can’t tell yourself when you’re looking at the function whether it’s correct or not, whether you’ve made a mistake.

Just as an example, if you wanted to write a sum function in point-free style in Haskell (because Haskell lets you do that), you say sum = foldr (+) 0, if you’re doing a foldr. OK, that’s point-free style. I haven’t named any arguments. Typically, if you’re going to do a foldr, you’d say sum of the list is equal to foldr (+) 0 list. You’re naming the list argument. You don’t have to, because of currying.

Now, when you read that, you have to know how many arguments foldr takes. You might know it. I’m not saying you don’t know that, but what if it was a different function? What if it wasn’t foldr? What if it’s some other function that you don’t know? You don’t know if there’s some currying going on because you don’t know how many arguments that function has.

You can look at the type, but that’s it. That’s what it’s saying that it relies on good type information that your compiler can check.
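To make that sum example concrete in Clojure (which has no implicit currying, so partial stands in for it; this translation is mine, not from the talk):

```clojure
;; Haskell: sum = foldr (+) 0   -- currying supplies the list later.
;; Clojure: partial pre-applies reduce's function and initial value.
(def sum (partial reduce + 0))

(sum [1 2 3 4]) ; => 10
```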

I tend to agree with that. If you go that far, you lose so much information for your brain to pick up on that you really need the help of the type system. Just look at some of the point-free style stuff that people have done. You have all these operators: dots, and greater-than, less-than operators.

You’re like, “What does this do? How many arguments does it take? What’s the precedence rules? I can’t figure out why you put those parentheses there.” It’s very terse.

That said, I think it’s a spectrum. You can go to this extreme, but also there might be somewhere along that spectrum that is perfect, has very clear code and does not have unnecessary names, but has some names to give you clues what the code is supposed to do.

Another thing about it is because you’re working at this high-level, this is a good thing. You’re working at this high-level, you or the compiler can begin to do algebraic manipulations on the functions. You’re at a higher order. You’re dealing with functions of functions. These functions can have algebraic identities, algebraic properties.

You could say, just as a simple example, that map of f over a list is equivalent to the list of f of each element of the list. That’s an identity. Your compiler might say, “Oh, because I’m working on a higher-order function, I understand how map works.”

I can replace this map with something else, with some other code that is equivalent to it. I don’t have to actually call the function “map” myself.

There’s a bunch of these identities. In fact, that is where the programming language J was going. The idea was that you couldn’t define functions with defn or def function.

There was no way to just make a function. You had to do it with these higher-order combinators. You would be able to have all these algebraic identities that would let the compiler optimize and do all sorts of cool stuff.

Imagine you could only do map, filter and reduce, and not define new functions. You’re just mapping and filtering with existing functions, and compose. Obviously, you can do stuff like that, compose.

The languages that use this the most…this is my experience. My little bit of research has turned these things up. This is where I have encountered it the most.

There’s the APL family of languages. APL, famous for being very terse and not having to name arguments. J is a member of the APL family. Look at some APL code and you’ll see how terse it can get.

Haskell lets you work on the whole spectrum. You can define all your arguments with names and intermediate values with names, or you can move more toward the point-free style if you want.

I already mentioned that in Clojure and Elixir there’s the threading or pipeline stuff. I just want to mention that names can be bad, but they can be really good for readability. They can be a pain to write, but really helpful when you’re reading your code.

Just to recap…I’m going long, so I’m just going to recap real quick. Point-free style means programming where you don’t have to name arguments or intermediate values. The main reason we do it is to be able to code at the right level of abstraction.

You don’t want to be thinking about, “How do I name this function?” or “How do I name this argument?” Stuff like that. You want to be thinking higher order like, “OK. This is the composition of this. This is mapping that over this.” That’s what you’re thinking.

It creates some really terse code that can be obtuse, hard to understand. APL family, Haskell, Clojure, and Elixir, those all use it.

I want to ask you, are you using point-free style anywhere in your code? Have you managed to eliminate an argument here or there that you didn’t need to name?

I’m also curious because there’s some really [laughs] opaque code out there that’s in point-free style. I’m wondering what the worst you’ve seen is.

Send that to me. You can email me at or you can tweet me @ericnormand. Find me on LinkedIn. Please subscribe if you liked this because there’s more coming down the pipe.

I’m Eric Normand. I’ll see you later.

The post What is point-free style? appeared first on LispCast.



seancorfield/next.jdbc 1.0.0-alpha8

I’ve talked about this in a few groups – it’s been a long time coming. This is the “next generation” of – a modern wrapper for JDBC that focuses on reduce/transducers, qualified keywords, and datafy/nav support (so, yes, it requires Clojure 1.10).

The next generation of a new low-level Clojure wrapper for JDBC-based access to databases. It’s intended to be both faster and simpler than and it’s where I intend to focus my future energy, although I have not yet decided whether it will ultimately be a new set of namespaces in the Contrib lib or a separate, standalone OSS library!

At this point, I’m looking for feedback on the API and the approach (as well as bugs, performance issues, etc). Please take it for a spin and let me know what you think via GitHub issues or in the #sql channel on the Clojurians Slack or the #sql stream on the Clojurians Zulip.

The group/artifact ID will change at some point: and the actual namespaces will too, but I will try to make that as painless as possible when I take this out of the alpha phase.


Meta Reduce, Volume 2019.1

Time to recap what I’ve been up to lately. Admittedly this post is a bit overdue1, but there has been a lot on my plate lately and I had little time for blogging.


Dutch Clojure Days

The most interesting thing I’ve done on the Clojure front recently was speaking at the Dutch Clojure Days conference. I spoke there about nREPL’s revival, future and importance. I think the talk went well and resonated with the audience. I even got an unexpected promise of financial backing for the project by Siili!

Here’s the slide deck from my presentation. I hope the recording of the talk will become available soon.

I think it was a great conference overall and I’m happy I got the opportunity to be a part of it. I certainly hope that I’ll find my way back there, even if I’m just a regular attendee. I have to note that even though the conference was completely free it was organized really well! You could clearly tell that it was a product of love and passion and that’s the only real recipe for a great conference!

By the way, Josh Glover recently wrote a nice summary of the conference.


There’s not much to report here. The most important thing that happened was the decision to make op names strings internally. This simplified the implementation of the new EDN transport and the code overall.

I’ve also updated the nREPL section in Lambda Island’s guide to Clojure REPLs to reflect the recent nREPL changes.


On Orchard’s front the big news is that we’re very close to getting rid of almost all third-party dependencies the project currently has - namely tools.namespace and java.classpath. This happened as the side-effect of work to address dynapath-related issues on Java 9+. You can learn more about all of this here. Once we wrap this up, I’ll cut Orchard 0.5 and we’ll focus our attention on Orchard 0.6 and the huge milestone of merging ClojureScript support into Orchard (and deprecating cljs-tooling in the process).

Fun times ahead!


I’ve finally found a bit of time to work on CIDER.

My current focus is getting more functionality working without cider-nrepl. The biggest step in this direction was implementing an eval-based fallback for cider-var-info - a function which powers definition lookup and doc lookup (amongst other CIDER commands). You can read more about the scope of that task here, and I’d certainly appreciate some help with it.

On a related note - I can’t wait for us to implement client dependency injection in nREPL itself, as it would simplify that functionality tremendously.

In other news - I fixed a long-standing problem with checking for the presence of required ClojureScript dependencies before starting a ClojureScript REPL, but at this point I’m thinking that probably this whole dependency validation idea was a bad one and I should just kill those checks completely.

I’ve also looked into a compilation error highlighting regression caused by changes in Clojure 1.10. It’s a trivial problem, but Emacs Lisp regular expressions have a very messed-up syntax that makes them really unpleasant to work with. Here’s the regular expression in question:

(defvar cider-compilation-regexp
  '("\\(?:.*\\(warning, \\)\\|.*?\\(, compiling\\):(\\)\\(.*?\\):\\([[:digit:]]+\\)\\(?::\\([[:digit:]]+\\)\\)?\\(\\(?: - \\(.*\\)\\)\\|)\\)" 3 4 5 (1))
  "Specifications for matching errors and warnings in Clojure stacktraces.
See `compilation-error-regexp-alist' for help on their format.")

How many things do you need to escape???



Only 5 days after DCD I gave a second talk, at the RubyDay conference in Verona, Italy. I spoke about Ruby 3 there and you can check out my slide deck here. Funnily enough, a lot was said about Ruby 3 at RubyKaigi just a week after my talk. I’m pretty glad that I was spot on with most of my predictions, though, despite the limited sources of information I was working with.

You’ve got no idea how stressful it is to work on two new talks for back-to-back conferences! I was so relieved after I wrapped up my RubyDay talk! And I’m never doing this again.2

Apart from the stress RubyDay was a really nice event and I had a lot of fun meeting the local Ruby community. The conference was also a nice excuse for me to travel around Verona and spend a week enjoying Italian food and wines.


RuboCop saw a ton of activity lately and an important new release, which greatly improved its pretty-printing capabilities.

RuboCop 0.68 is already right around the corner and I’m really excited about getting some line length autocorrection support in it. That’s going to be a massive improvement in the formatting department.

I was also quite pleased to see what Flexport are doing to push this even further! I’m really grateful to see them making their work open-source and contributing it upstream!


I continue slowly with my Haskell explorations and I’m having quite a bit of fun overall. Nothing interesting to report, though.


I never met Joe Armstrong in person, but I’ve always admired his work and he was a big source of inspiration for me. I was really saddened by the news of his passing on Saturday and I want to take a moment to honour his memory. I’ve often pondered the legacy of software engineers, as things are so transient and ephemeral in our line of work. I have no doubt, however, that Joe’s legacy will live on for a very long time and his work will continue to inspire software engineers in the years to come!

Rest in Peace, Joe! You’ll be missed, but not forgotten!

Real World

It was really nice to spend some time in Amsterdam and Italy around the two conferences. I finally managed to visit Milan and the nearby Lake Como, after planning to do so for ages, and they didn’t disappoint. I can also heartily recommend to everyone to spend some time in the nearby town of Bergamo (where most low-cost airlines bound for Milan tend to land).

I’ve started reading “Persepolis Rising” (the seventh book in the “The Expanse” sci-fi series) and so far I find it to be pretty disappointing. Generally I’ve noticed that most book series really struggle to keep their momentum past volume #3.

On the movies side, I finally watched “Baby Driver” and I certainly enjoyed it a lot. The movie’s style and soundtrack are quite something and felt very refreshing to me! Yesterday I went to see “Pet Sematary” and it was so-so. I think I liked the first movie better (even if I can barely remember it at this point). It was a reminder that I should probably read the book one of these days. Later today I plan to hit the movies again with “Shazam!”. Fingers crossed!

As usual, I wanted to do many other fun or healthy things, but I miserably failed. Better luck next time!

  1. I was hoping I’d write one each week. 

  2. Although I’ve promised this to myself in the past as well. 


Ep 025: Fake Results, Real Speed

Nate wants to experiment with the UI, but Twitter keeps getting the results.

  • “This thing that we’re making because we’re lazy has now taken 4 or 5 weeks to implement.”
  • Last week: “worker” logic vs “decider” logic. Allowed us to flatten the logic.
  • “You can spend months on the backend and people think you’ve barely got anything done. You can spend two days on the UI and then people think you’re making progress.”
  • (04:20) Now we want to work on the UI, but we don’t want data to post to real Twitter.
  • “Why do you keep posting ‘asdf asdf asdf’ to Twitter?”
  • UI development involves a lot of exploration. Need to try things out with data.
  • We want to be able to control the data we’re working with.
  • “We want carefully-curated abnormal data to test the edges of our UI.”
  • (06:30) We could have a second Twitter account to fill with junk data.
  • Or, we could use the Twitter API sandbox.
  • Problem: we don’t have much control over the data set. Eg. We can’t just “reset” back to what it used to be.
  • Problem: what about when we’re hacking on a plane?
  • Plus, we want to be able to share test data between developers.
  • (09:10) What can we do instead? Let’s make a “fake” Twitter service we run on our machine.
  • “Fake” Twitter gives us something our application can talk to that is under our control.
  • Having a “fake” Twitter service creates new problems
    • Project wants to grow larger and larger
    • Now we have to run more things
    • Creates more obstacles between checking out the code and getting work done
  • (12:35) Rather than a “fake” Twitter service, we want to “fake it” inside the application.
  • What is “faking it”? Is this the same as “mocking”?
  • Use a protocol for our Twitter API “handle”. Allows for an alternative implementation.
  • Not the same as mocking
    • We are not trying to recreate all the possibilities.
    • We are not trying to use it to test the component code.
  • The purpose is different than mocking. The Twitter “faker” is to help us work on the rest of the application.
  • The “faker” is about being productive, not about testing.
  • (17:20) Can have the faker start with useful data.
  • Default data in the faker can launch you straight into dev productivity.
  • “You want automation to support your human-centric exploration of things.”
  • “If your environment is as fast as it can possibly be, then it allows you to be as fast as you can possibly be.”
  • (20:10) Can use the faker for changing the interaction
  • Eg. have a “slow” mode that makes all the requests take longer.
  • Useful to answer the question: “What does this UI feel like when everything gets slow?”
  • “The faker can implement behavior you need, for exploring the space you need to cover, to converge on the right solution.”
  • (22:00) Can have the faker pretend something was posted manually.
  • Allows you to see how the UI behaves when the backend discovers a manual post.
  • (23:00) Goals of faking
    • Support exploration. Not about testing and validation
    • Support the human, creative side of development
    • Support the developer experience
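The protocol-based “faker” idea above can be sketched in a few lines of Clojure. Everything here (TwitterApi, FakeTwitter, the method names) is hypothetical, invented just to illustrate the shape:

```clojure
;; A hypothetical protocol standing in for the Twitter API "handle".
(defprotocol TwitterApi
  (post-status! [this text])
  (timeline [this]))

;; The real implementation would make HTTP calls to Twitter.
(defrecord RealTwitter [creds]
  TwitterApi
  (post-status! [_ text] (comment "HTTP POST to Twitter"))
  (timeline [_] (comment "HTTP GET from Twitter")))

;; The faker keeps tweets in an atom, seeded with curated data,
;; so UI work never touches the real service.
(defrecord FakeTwitter [state]
  TwitterApi
  (post-status! [_ text] (swap! state conj {:text text}))
  (timeline [_] @state))

(def fake (->FakeTwitter (atom [{:text "carefully-curated seed tweet"}])))
(post-status! fake "asdf asdf asdf") ; no real Twitter harmed
(count (timeline fake)) ; => 2
```

Because the application only sees the protocol, swapping the real component for the faker is a one-line change wherever the system is wired together.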

Related episodes:

Clojure in this episode:

  • defprotocol
  • defrecord
  • component/system-map

Related projects:


A Bitemporal tale

For those readers whose learning ability is better when affections are in place, we’re offering a short story writing experience through database transactions and queries.


This tale assumes you have some basic knowledge of Clojure and own a REPL. All you need is to add Crux to your deps. For more configuration details, see here.

   ; lein
   [juxt/crux "19.04-1.0.2-alpha"]
   ; deps.edn
   juxt/crux {:mvn/version "19.04-1.0.2-alpha"}

Fire up a repl and create a namespace

(ns a-tale
  (:require [crux.api :as crux]))

Define a system

(def system
  (crux/start-standalone-system
    {:kv-backend "crux.kv.memdb.MemKv"
     :db-dir "data/db-dir-1"}))

; alternatively, you can go with RocksDB for persistent storage
;   org.rocksdb/rocksdbjni {:mvn/version "5.17.2"} ; add this to your deps
; then define the system as follows
(def system
  (crux/start-standalone-system ; it has clustering out-of-the-box though
    {:kv-backend "crux.kv.rocksdb.RocksKv"
     :db-dir "data/db-dir-1"}))

Letting data in

The year is 1740. We want to transact in our first character – Charles. Charles is a shopkeeper who possesses a truly magical artefact: A Rather Cozy Mug, which he uses in some of the most sacred morning rituals of caffeinated beverage consumption.

(crux/submit-tx
  system
    ; tx type    | id for the transaction (in-memory db or Kafka)
  [[:crux.tx/put   :ids.people/Charles

    {:crux.db/id :ids.people/Charles  ; id again for the document in Crux
     :person/name "Charles"
     ; age 40 at 1740
     :person/born #inst "1700-05-18"
     :person/location :ids.places/rarities-shop
     :person/str  40
     :person/int  40
     :person/dex  40
     :person/hp   40
     :person/gold 10000}

    #inst "1700-05-18"]]) ; valid time (optional)
; yields transaction data like
{:crux.tx/tx-id 1555661957640
 :crux.tx/tx-time #inst "2019-04-19T08:19:17.640-00:00"}

Ingest the remaining part of the set

(crux/submit-tx
  system
  [; rest of characters
   [:crux.tx/put :ids.people/Mary
    {:crux.db/id :ids.people/Mary
     :person/name "Mary"
     ; age  30
     :person/born #inst "1710-05-18"
     :person/location :ids.places/carribean
     :person/str  40
     :person/int  50
     :person/dex  50
     :person/hp   50}
    #inst "1710-05-18"]
   [:crux.tx/put :ids.people/Joe
    {:crux.db/id :ids.people/Joe
     :person/name "Joe"
     ; age  25
     :person/born #inst "1715-05-18"
     :person/location :ids.places/city
     :person/str  39
     :person/int  40
     :person/dex  60
     :person/hp   60
     :person/gold 70}
    #inst "1715-05-18"]])
; yields tx-data, omitted

(crux/submit-tx
  system
  [; artefacts
   ; In our tale there is a Cozy Mug...
   [:crux.tx/put :ids.artefacts/cozy-mug
    {:crux.db/id :ids.artefacts/cozy-mug
     :artefact/title "A Rather Cozy Mug"
     :artefact.perks/int 3}
    #inst "1625-05-18"]

   ; ...some regular magic beans...
   [:crux.tx/put :ids.artefacts/forbidden-beans
    {:crux.db/id :ids.artefacts/forbidden-beans
     :artefact/title "Magic beans"
     :artefact.perks/int 30
     :artefact.perks/hp -20}

    #inst "1500-05-18"]
   ; ...a used pirate sword...
   [:crux.tx/put :ids.artefacts/pirate-sword
    {:crux.db/id :ids.artefacts/pirate-sword
     :artefact/title "A used sword"}
    #inst "1710-05-18"]
   ; ...a flintlock pistol...
   [:crux.tx/put :ids.artefacts/flintlock-pistol
    {:crux.db/id :ids.artefacts/flintlock-pistol
     :artefact/title "Flintlock pistol"}
    #inst "1710-05-18"]
   ; ...a mysterious key...
   [:crux.tx/put :ids.artefacts/unknown-key
    {:crux.db/id :ids.artefacts/unknown-key
     :artefact/title "Key from an unknown door"}
    #inst "1700-05-18"]
   ; ...and a personal computing device from the wrong century.
   [:crux.tx/put :ids.artefacts/laptop
    {:crux.db/id :ids.artefacts/laptop
     :artefact/title "A Tell DPS Laptop (what?)"}
    #inst "2016-05-18"]])
; yields tx-data, omitted

; places
(crux/submit-tx
  system
  [[:crux.tx/put :ids.places/continent
    {:crux.db/id :ids.places/continent
     :place/title "Ah The Continent"}
    #inst "1000-01-01"]
   [:crux.tx/put :ids.places/carribean
    {:crux.db/id :ids.places/carribean
     :place/title "Ah The Good Ol Carribean Sea"
     :place/location :ids.places/carribean}
    #inst "1000-01-01"]
   [:crux.tx/put :ids.places/coconut-island
    {:crux.db/id :ids.places/coconut-island
     :place/title "Coconut Island"
     :place/location :ids.places/carribean}
    #inst "1000-01-01"]]) ; yields tx-data, omitted

Looking Around : Basic Queries

Get a database value and read from it consistently. Crux uses a Datalog query language. I’ll try to explain the required minimum, and I recommend as a follow-up read.

(def db (crux/db system))

; we can query entities by id
(crux/entity db :ids.people/Charles)

; yields
{:crux.db/id :ids.people/Charles,
 :person/str 40,
 :person/dex 40,
 :person/location :ids.places/rarities-shop,
 :person/hp 40,
 :person/int 40,
 :person/name "Charles",
 :person/gold 10000,
 :person/born #inst "1700-05-18T00:00:00.000-00:00"}

; Datalog syntax : query ids
(crux/q db
        '[:find ?entity-id ; datalog's find is like SELECT in SQL
          ; datalog's where is quite different though
          ; datalog's where block combines binding of fields you want with filtering expressions
          ; where-expressions are organised in triplets / quadruplets

          :where
          [?entity-id    ; first  : usually an entity-id
           :person/name  ; second : attribute-id by which we filter OR which we want to pull out in 'find'
           "Charles"]])  ; third  : here it's the attribute's value by which we filter

; yields
#{[:ids.people/Charles]}
; Query more fields
(crux/q db
        '[:find ?e ?name ?int
          :where
          ; where can have an arbitrary number of triplets
          [?e :person/name "Charles"]

          [?e :person/name ?name]
          ; see – now we're pulling out person's name into find expression

          [?e :person/int  ?int]])

; yields
#{[:ids.people/Charles "Charles" 40]}

; See all artefact names
(crux/q db
        '[:find ?name
          :where
          [_ :artefact/title ?name]])
; yields
#{["Key from an unknown door"] ["Magic beans"]
  ["A used sword"] ["A Rather Cozy Mug"]
  ["A Tell DPS Laptop (what?)"]
  ["Flintlock pistol"]}

Undoing the Oopsies : Delete and Evict

Ok yes, magic beans once were in the realm, and we want to remember that, but following advice from our publisher we’ve decided to remove them from the story for now. Charles won’t know that they ever existed!

(crux/submit-tx
  system
  [[:crux.tx/delete :ids.artefacts/forbidden-beans
    #inst "1690-05-18"]])

Sometimes people enter data which just doesn’t belong there or that they no longer have a legal right to store (GDPR, I’m looking at you). In our case, it’s the laptop, which ruins the story consistency. Lets completely wipe all traces of that laptop from the timelines.

(crux/submit-tx
  system
  [[:crux.tx/evict :ids.artefacts/laptop]])

Let’s see what we got now

(crux/q (crux/db system)
        '[:find ?name
          :where
          [_ :artefact/title ?name]])

; yields
#{["Key from an unknown door"] ["A used sword"] ["A Rather Cozy Mug"] ["Flintlock pistol"]}

; Historians will know about the beans though
(def world-in-1599 (crux/db system #inst "1599-01-01"))
(crux/q world-in-1599
        '[:find ?name
          :where
          [_ :artefact/title ?name]])

; yields
#{["Magic beans"]}

Plot Development : DB References

Let’s see how Crux handles references, and give our characters some artefacts. We’ll do it with a function, as we’ll need it again later.

(defn first-ownership-tx []
  [; Charles was 25 when he found the Cozy Mug
   (let [charles (crux/entity (crux/db system #inst "1725-05-17") :ids.people/Charles)]
     [:crux.tx/put :ids.people/Charles
      (update charles
              ; Crux is schemaless, so we can use :person/has however we like
              :person/has
              (comp set conj)
              ; ...such as storing a set of references to other entity ids
              :ids.artefacts/cozy-mug
              :ids.artefacts/unknown-key)
      #inst "1725-05-18"])
   ; And Mary has owned the pirate sword and flintlock pistol for a long time
   (let [mary (crux/entity (crux/db system #inst "1715-05-17") :ids.people/Mary)]
     [:crux.tx/put :ids.people/Mary
      (update mary
              :person/has
              (comp set conj)
              :ids.artefacts/pirate-sword
              :ids.artefacts/flintlock-pistol)
      #inst "1715-05-18"])])

(def first-ownership-tx-response
  (crux/submit-tx system (first-ownership-tx)))

; yields tx-data
{:crux.tx/tx-id 1555661957644
 :crux.tx/tx-time #inst "2019-04-19T08:19:21.640-00:00"}

Note that transactions in Crux rewrite the whole entity: there are no partial updates, and no intention to add them to the core as of yet. This is because the core of Crux is intentionally slim; features like partial updates will live in upcoming convenience projects!

Who Has What : Basic Joins

(def who-has-what-query
  '[:find ?name ?atitle
    :where
    [?p :person/name ?name]
    [?p :person/has ?artefact-id]
    [?artefact-id :artefact/title ?atitle]])

(crux/q (crux/db system #inst "1726-05-01") who-has-what-query)
; yields
#{["Mary" "A used sword"]
  ["Mary" "Flintlock pistol"]
  ["Charles" "A Rather Cozy Mug"]
  ["Charles" "Key from an unknown door"]}

(crux/q (crux/db system #inst "1716-05-01") who-has-what-query)
; yields
#{["Mary" "A used sword"] ["Mary" "Flintlock pistol"]}

A few convenience functions

(defn entity-update
  [entity-id new-attrs valid-time]
  (let [entity-prev-value (crux/entity (crux/db system) entity-id)]
    (crux/submit-tx system
      [[:crux.tx/put entity-id
        (merge entity-prev-value new-attrs)
        valid-time]])))

(defn q
  [query]
  (crux/q (crux/db system) query))

(defn entity
  [entity-id]
  (crux/entity (crux/db system) entity-id))

(defn entity-at
  [entity-id valid-time]
  (crux/entity (crux/db system valid-time) entity-id))

(defn entity-with-adjacent
  [entity-id keys-to-pull]
  (let [db (crux/db system)
        ids->entities
        (fn [ids]
          (cond-> (map #(crux/entity db %) ids)
            (set? ids) set
            (vector? ids) vec))]
    (reduce
      (fn [e adj-k]
        (let [v (get e adj-k)]
          (assoc e adj-k
                 (cond
                   (keyword? v) (crux/entity db v)
                   (or (set? v)
                       (vector? v)) (ids->entities v)
                   :else v))))
      (crux/entity db entity-id)
      keys-to-pull)))

; Charles became more studious as he entered his thirties
(entity-update :ids.people/Charles
  {:person/int  50}
  #inst "1730-05-18")

; Check our update
(entity :ids.people/Charles)

{:person/str 40,
 :person/dex 40,
 :person/has #{:ids.artefacts/cozy-mug :ids.artefacts/unknown-key}
 :person/location :ids.places/rarities-shop,
 :person/hp 40,
 :person/int 50,
 :person/name "Charles",
 :crux.db/id :ids.people/Charles,
 :person/gold 10000,
 :person/born #inst "1700-05-18T00:00:00.000-00:00"}

; Pull out everything we know about Charles and the items he has
(entity-with-adjacent :ids.people/Charles [:person/has])

; yields
{:crux.db/id :ids.people/Charles,
 :person/str 40,
 :person/dex 40,
 :person/has
 #{{:crux.db/id :ids.artefacts/unknown-key,
    :artefact/title "Key from an unknown door"}
   {:crux.db/id :ids.artefacts/cozy-mug,
    :artefact/title "A Rather Cozy Mug",
    :artefact.perks/int 3}},
 :person/location :ids.places/rarities-shop,
 :person/hp 40,
 :person/int 50,
 :person/name "Charles",
 :person/gold 10000,
 :person/born #inst "1700-05-18T00:00:00.000-00:00"}

What Was Supposed To Be The Final

Mary steals The Mug in June

(let [theft-date #inst "1740-06-18"]
  (crux/submit-tx
    system
    [[:crux.tx/put :ids.people/Charles
      (update (entity-at :ids.people/Charles theft-date)
              :person/has
              (comp set disj)
              :ids.artefacts/cozy-mug)
      theft-date]
     [:crux.tx/put :ids.people/Mary
      (update (entity-at :ids.people/Mary theft-date)
              :person/has
              (comp set conj)
              :ids.artefacts/cozy-mug)
      theft-date]]))

(crux/q (crux/db system #inst "1740-06-18") who-has-what-query)
; yields
#{["Mary" "A used sword"]
  ["Mary" "Flintlock pistol"]
  ["Mary" "A Rather Cozy Mug"]
  ["Charles" "Key from an unknown door"]}

So, for now, we think we’re done with the story. We have a picture and we’re all perfectly ready to blame Mary for stealing a person’s beloved mug. Suddenly a revelation occurs when an upstream data source kicks in. We uncover a previously unknown piece of history. It turns out the mug was Mary’s family heirloom all along!

Correct The Past

(crux/submit-tx
  system
  (let [marys-birth-inst #inst "1710-05-18"
        db        (crux/db system marys-birth-inst)
        baby-mary (crux/entity db :ids.people/Mary)]
    [[:crux.tx/cas :ids.people/Mary
      baby-mary
      (update baby-mary :person/has (comp set conj) :ids.artefacts/cozy-mug)
      marys-birth-inst]]))

; ...and she lost it in 1723
(crux/submit-tx
  system
  (let [mug-lost-date #inst "1723-01-09"
        db   (crux/db system mug-lost-date)
        mary (crux/entity db :ids.people/Mary)]
    [[:crux.tx/cas :ids.people/Mary
      mary
      (update mary :person/has (comp set disj) :ids.artefacts/cozy-mug)
      mug-lost-date]]))

(crux/q
  (crux/db system #inst "1715-05-18")
  who-has-what-query)
; yields
#{["Mary" "A used sword"] ["Mary" "Flintlock pistol"]}
; Ah she doesn't have The Mug still.
; Because we store that data in the entity itself
; we now should rewrite its state on "1715-05-18"

(crux/submit-tx system (first-ownership-tx))

(crux/q
  (crux/db system #inst "1715-05-18")
  who-has-what-query)
; yields
#{["Mary" "A used sword"]
  ["Mary" "Flintlock pistol"]
  ["Mary" "A Rather Cozy Mug"]}
; ah, much better

Note that with this particular data model we should also rewrite all the artefact transactions since 1715. But since the result matches the tale, we can skip that labour this time. If acts of ownership were separate documents, no rewriting would be needed at all.


(crux/q
  (crux/db system #inst "1740-06-19")
  who-has-what-query)
; yields
#{["Mary" "A used sword"]
  ["Mary" "Flintlock pistol"]
  ["Mary" "A Rather Cozy Mug"]
  ["Charles" "Key from an unknown door"]}

Now, knowing the corrected picture, we are more ambivalent in our rooting for Charles or Mary.

Also we are still able to see how wrong we were as we can rewind not only the tale’s history but also the history of our edits to it. Just use the tx-time of the first ownership response.

(crux/q
  (crux/db system
           #inst "1715-06-19"
           (:crux.tx/tx-time first-ownership-tx-response))
  who-has-what-query)
; yields
#{["Mary" "A used sword"]
  ["Mary" "Flintlock pistol"]}

What’s Next?

Crux has a few important features that we left out of the scope of this tale.

  • Crux has a first-class Java API, so you can use it from Kotlin, Java, Scala or any other JVM-hosted language.

  • There’s a history API, so for every entity you can get its full history, or a history bounded by valid-time and transaction-time coordinates.

  • Clustering

  • Datalog Rules – powerful query extensions

  • Evict can be scoped
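As a sketch of the history point above (hedged: this assumes the 0.x-era crux.api surface, and the exact names may have shifted since):

```clojure
;; `history` returns an entity's versions in reverse chronological
;; order, each entry carrying its valid-time and transaction-time
;; coordinates.
(require '[crux.api :as crux])

(crux/history system :ids.people/Mary)
;; => a seq of entries along the lines of
;; ({:crux.db/valid-time #inst "1723-01-09", :crux.tx/tx-time ..., ...}
;;  {:crux.db/valid-time #inst "1710-05-18", :crux.tx/tx-time ..., ...})
```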

Learn about these and other features on our docs portal.

If you have any suggestions on how to improve this tutorial or the docs, don’t hesitate to contact us on Zulip or in the #crux channel on the Clojurians Slack. For tutorial corrections, ping @spacegangster.


I want to credit all the Crux authors and contributors for this post. Thanks to Jon Pither for inviting me to the Crux team, and special thanks to Jeremy Taylor for his invaluable input to this tale.


Journal 2019.16 - closed spec checking

Closed spec checking

As I mentioned last week, I’ve been working on adding a new form of closed spec checking and that’s in master for spec 2 now, or at least the first cut of it. The idea is that you can mark one or more (or all) specs as “closed” and they will then act in that mode for valid?, conform, and explain until you open them again. The specs themselves are unaltered - there is no marker in the symbolic spec form indicating “closed”-ness, rather this is a checking mode that you can turn on, kind of like instrument.

(require '[clojure.spec-alpha2 :as s])
(s/def ::f string?)
(s/def ::l string?)
(s/def ::s (s/schema [::f ::l]))
(s/valid? ::s {::x 10})  ;; "extra" keys are ok
;;=> true

;; now "close" the ::s spec (no-arg arity closes all specs)
(s/close-specs ::s)
(s/valid? ::s {::x 10})
;;=> false

(s/explain ::s {::x 10})
;; #:user{:x 10} - failed: (subset? #{:user/f :user/l} (set (keys %))) spec: :user/s

;; now open again
(s/open-specs ::s)

(s/valid? ::s {::x 10})
;;=> true

Still a lot of open questions about the API, behavior, etc, but this should give you an idea.
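For completeness, a sketch of the no-arg arity mentioned in the comment above. This is hedged: the API is explicitly still in flux, and I’m assuming open-specs mirrors close-specs here:

```clojure
;; close every registered spec at once...
(s/close-specs)
(s/valid? ::s {::x 10})
;;=> false

;; ...then return them all to open-world checking
(s/open-specs)
(s/valid? ::s {::x 10})
;;=> true
```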

Clojure 1.10.1

We had some good feedback at the beginning of the week on Clojure 1.10.1-beta2, particularly about the error reporting aspects which go to a file. I continue to think that the default of writing to a temp file is best, but I grew unsatisfied with the configurability of what was there, which was only a new clojure.main option. Instead of a flag, I decided it should be an enumeration of options: file (for temp file), stderr, and none. Also, having it only as a flag on clojure.main meant that external launchers had no way to pass it through, whereas a Java system property is a configuration path that these tools (like Leiningen) already handle. So I made it check the Java system property, then optionally override that from the clojure.main flag, and default to file mode. This is all captured in CLJ-2504, which I expect will go through screening next week.
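For illustration, assuming the property and flag names in the CLJ-2504 proposal (clojure.main.report and --report), the configuration would look something like:

```shell
# Route uncaught-error reports to stderr instead of the default temp file,
# via the Java system property (usable by any launcher)
clj -J-Dclojure.main.report=stderr my_script.clj

# Or via the clojure.main flag, which overrides the system property
java -cp clojure.jar clojure.main --report none my_script.clj
```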

I’m also growing a little concerned at the number of people patching around CLJ-1472 for Graal, so I will probably discuss that with Rich and consider whether to do anything for 1.10.1. The proposals are tricky and may have unexpected perf impacts, so it’s a difficult one to assess.

In any case, we would like to bring this release to an RC soon and get it released.

Rich has a couple api additions he put together this week related to metadata retention and those will get slotted in once we’re looking at 1.11 again.

Other stuff I enjoyed this week…

I haven’t had many music recommendations lately as I went through a bit of a dry spell. But I listened to a lot of music this week! My son is a drummer and has been doing an Aretha Franklin / Stevie Wonder show recently so it’s been a lot of fun watching him listen to a lot of stuff I’ve loved for a long time but that’s new to him. So many great tracks on Songs in the Key of Life, but he’s particularly digging on Sir Duke and the oddball Contusion this week.

In newer music, I really dug Lay Back by CLAVVS (pronounced “claws”), that’s from a 3-song EP and I like all 3 tunes on that.

I also want to give a shout-out to Stuart Sierra’s new podcast No Manifestos - the first 3 episodes were all great! I really enjoyed listening to them.

Also, outside the Clojure world I run this little conference called Strange Loop - the CFP is open now, please submit!


Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.