Machine Learning With Clojure


From the past three blog posts on Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL), you should by now have an idea of the basics. I hope I have whetted your appetite with the potential of ML, while also conveying some of the apprehensions that surround it.

The good news is that you are still interested! So the next natural step is to ask, "How can I start Machine Learning myself, and build an AI that'll do all my coding for me?" (Or ... is that just my dream?) Luckily, there are many readily available resources such as TensorFlow by Google; Lex, Polly and Rekognition by Amazon; and Microsoft's Cortana (are you getting the idea?), to name but a few.


TensorFlow, released in November 2015, is an open-source library by the Google Brain team. Originally called DistBelief, it is used by Google across a range of its products and services: from the obvious, deep neural networks for search rankings, to the subtle, automatic generation of email responses and real-time translations.

For much of its history, the main language for Machine Learning has been Python. Python has a very well-developed and powerful scientific computing library, NumPy, whose core routines are implemented in C, allowing users to build efficient and complex neural networks with relative ease. Due to the longevity of Machine Learning's relationship with Python, the community is strong and the libraries are rich.

But, "what if I don't want to use the poor concurrency models in Python, and would rather use the sane concurrency model in Clojure?" I hear you cry! Luckily, TensorFlow has just released its Java API. Although it is still new and comes with a warning that it is experimental, it gives you the first in-road to TensorFlow from Clojure. Like the majority of Java libraries, TensorFlow is published to a Maven repository, so getting started with TensorFlow in Clojure is as easy as adding

[org.tensorflow/tensorflow "1.1.0-rc1"]
to your project and then importing it into your namespace.

(ns tensor-clj.core
  (:import org.tensorflow.TensorFlow))

(defn my-first-tensor []
  (println (str "I'm using TensorFlow version: " (TensorFlow/version))))
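The version check can be taken a step further. The Java API also exposes Graph, Session and Tensor classes that you can drive through ordinary interop. Here is a hedged sketch of running a single constant op, based on the experimental 1.x Java API (the opBuilder and Session method names may change between releases):

```clojure
(ns tensor-clj.graph
  (:import [org.tensorflow Graph Session Tensor]))

(defn run-constant
  "Builds a graph containing one Const op holding the long 42,
   runs it in a session, and returns the fetched scalar."
  []
  (with-open [g (Graph.)]
    (let [value (Tensor/create (long 42))]
      ;; Register the Const op in the graph under the name "my-const".
      (-> (.opBuilder g "Const" "my-const")
          (.setAttr "dtype" (.dataType value))
          (.setAttr "value" value)
          (.build))
      ;; Run the graph and fetch the op's output.
      (with-open [s (Session. g)]
        (-> (.runner s)
            (.fetch "my-const")
            (.run)
            (.get 0)
            (.longValue))))))
```

Everything here is plain Java interop, so the threading and error-handling habits you already use in Clojure carry over unchanged.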

Comparing the TensorFlow Python API to the TensorFlow Java API, you will immediately see a massive disparity in the functionality available, making operations that are relatively straightforward in Python difficult in Java, and hence in Clojure.

However, for computing matrices and other statistical operations, Java has a wealth of options such as ND4J. If you'd rather use a Clojure library, Incanter, core.matrix and Clatrix offer similar capabilities.
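As a taste of the Clojure side, here is a minimal core.matrix sketch (assuming the net.mikera/core.matrix dependency is on your classpath; the results shown are for the default persistent-vector implementation):

```clojure
(ns matrix-demo.core
  (:require [clojure.core.matrix :as m]))

(def a (m/matrix [[1 2] [3 4]]))
(def b (m/matrix [[5 6] [7 8]]))

(m/mmul a b)      ;; matrix product    => [[19 22] [43 50]]
(m/add a b)       ;; element-wise sum  => [[6 8] [10 12]]
(m/transpose a)   ;; => [[1 3] [2 4]]
```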

An alternative option is to do your TensorFlow (or other Machine Learning) operations in Python and simply consume the results from Clojure. The beauty of programming is that you are not penned into one language; you can choose based on your personal preferences.

For an interesting article that goes into more depth about Clojure, Deep Learning and TensorFlow, follow here.

Amazon Services

Amazon has recently released three different Deep Learning services: Lex, Polly and Rekognition.

Amazon Lex

Amazon Lex is a service for building conversational interfaces using either voice or text. It comes with a ready-made environment allowing you to quickly make a natural-language "chatbot". Amazon Lex comes with Slack integration, so you can build a chatbot for whatever use case you desire (it can respond to any pesky chat messages on your behalf). The technology you'd be using with Lex is the same as that used by Alexa, meaning the text and voice chats are tried and tested in a commercially viable product that can be bought by you and me.

With Lex, the cost is $0.004 per voice request and $0.00075 per text request, so you can run a large number of interactions with your chatbot at low cost.

Amazon Polly

Amazon Polly is a service that turns text into speech. Polly uses deep learning technology to turn text (words and numbers) into speech that sounds like a human voice, and it can speak 24 languages. You can cache and save the speech audio so it can be replayed when you are offline, or distributed if you wish.

For Polly, you pay per character converted from text to speech. As you can save and replay speech, the cost for testing purposes is low.

Amazon Rekognition

Amazon Rekognition is used for image analysis. The "Hello World" equivalent for deep learning is MNIST, classifying images of handwritten digits; Rekognition gives you the ability to do that kind of image classification, with the added benefit of visual search. The technology is actively used by Amazon to analyse billions of images each day for Prime Photos. A deep neural network is used to detect and label the details in your image, with more labels and facial-recognition features continually being added.

Testing with Rekognition allows you to analyse 5,000 images per month and store data for 1,000 faces each month, for 12 months.

Coming from Amazon, Lex, Polly and Rekognition are ready to integrate into your existing Amazon infrastructure, and can coexist with other Amazon Web Services such as S3 and Elastic Beanstalk.

All 3 Amazon products are now available for you to test. All you need is an Amazon account and you are up and running.

The products work out-of-the-box, meaning you can easily get them up and running and have a simple working product in a short amount of time. However, the flip side of this ease of starting up is that the learning curve for customisation is steeper.


Microsoft's Cortana, released in 2014 as a competitor to Google Now and Siri, is a digital personal assistant for Windows 10, Windows 10 Mobile, Windows Phone 8.1, Microsoft Band, Xbox One, iOS and Android. Cortana provides many different services:

  • Reminders and Calendar Updates
  • Package and Flight Tracking
  • Sending Emails and Texts
  • Internet Access

Cortana uses the Microsoft Cognitive Toolkit (CNTK), a deep learning framework with a Python API which allows a machine to recognize photos and videos or understand human speech by mimicking the structure and functions of the human brain. Using CNTK, you have access to ready-made components and the ability to express your own neural networks. You can also train and host your models on Azure, Microsoft's alternative to Amazon Web Services.


A Clojure library for neural networks is Cortex. An excellent article on the capabilities of Cortex can be found here.

As it is written in Clojure, Cortex has the obvious benefit of being easily integrated into your current application. However, it doesn't have the backing of major companies, nor the longevity of other frameworks. On the plus side, Cortex is actively developed, with improvements being made consistently. If you want a more tried-and-tested framework to begin your deep learning journey, Cortex may not be the right choice for now, but it is something to come back to once you have a better understanding.

Other Machine Learning Systems

There are many more ML and DL frameworks, such as Caffe, Theano and Torch. Caffe is targeted at the mass market, for people who want to use deep learning in applications. Torch and Theano are tailored more towards people who want to do research on DL itself.


Caffe is a deep learning framework created by Berkeley AI Research. It is written in C++ with a Python interface, and offers expressive architecture, extensible code, speed, and an established community.


Pros:

  1. High Performance without GPUs
  2. Easy to use: many features come "out-of-the-box"
  3. Fast


Cons:

  1. Outside Caffe's comfort zone, difficulty increases massively
  2. Not well documented
  3. Uses 3rd-party packages, which may lead to version skew


Theano is also a Python library. Although not strictly a Deep Learning framework, it allows you to efficiently evaluate mathematical expressions involving multi-dimensional arrays. Theano works along the same lines as TensorFlow, using multi-dimensional arrays as tensors. Theano uses automatic differentiation on a neural network, which is the basis of backpropagation.


Pros:

  1. Lazy Evaluation
  2. Graph Transformation
  3. Supports Tensor and Linear Algebra Operations
  4. Gives a Good Long-term Understanding
  5. Developed and Helpful Community
  6. Automatic Differentiation (when you look into the mathematics behind neural networks, you begin to realise how much time and effort this saves!)


Cons:

  1. Can Get Stuck in the Nitty-Gritty Details
  2. Graph Optimizations Increase the Compilation Time
  3. Long Overheads due to Optimizations


Torch is a Machine Learning library, released in 2002 and based on Lua. Torch provides an N-dimensional array (tensor) type with the basic operations for indexing, slicing, transposing, type-casting, resizing, sharing storage and cloning. Torch is maintained and used by a variety of companies, including Facebook AI Research, Google's DeepMind and Twitter.


Pros:

  1. Flexibility and Speed
  2. The Torch Read-Eval-Print Loop (TREPL) for debugging. If you are coming from a programming language that already utilises the power of the REPL, then you can see the immediate benefits.
  3. Provides ports to macOS, Linux, Android and iOS
  4. Written in LuaJIT, a JIT-compiled language, so there is almost zero compilation overhead for the computational graph. It is also extremely easy to integrate Lua with existing C/C++/Fortran code.
  5. Very easy to create (deep) neural nets right out of the box, without additional libraries
  6. Actively developed


Cons:

  1. Written in Lua, so not accessible to many people
  2. Doesn't have complete automatic differentiation

For a more comprehensive comparison across multiple deep learning frameworks, go here.

To Conclude ....

Although they have been around for a while, Machine Learning and Deep Learning have only come to the forefront of the programming world in the past five years. People were just starting to test the waters at the beginning of the century, and so much has happened in a small amount of time.

Now we come to Clojure. Clojure is even younger, so the community doing both Clojure and Machine Learning is very small indeed. Luckily, Clojure has Java and all its resources to fall back on. As Machine Learning and Clojure separately become more popular, the community using both technologies will grow alongside them.

The Deep Learning libraries and frameworks I have covered in this article are by no means an exhaustive list, but they have their own subtle differences, enough for you to make your own personal choice. There are many other examples out there, but I feel these are some great choices to get you going.


Why Functional Programming?


11 Reasons to Learn Functional Programming

1. Functional Programming is Fun

I was this close to quitting software engineering. I was tired of jobs writing PHP or Java or JavaScript. I wanted a challenge and something interesting. What drew me back as I was about to leave? Functional Programming.

2. Functional Programming is Growing

I started doing functional programming back in 2001. Back then, if you talked to people online, even those who had been programming for a while, there was no hope for getting a job in Lisp. It was like an urban legend. People knew people who told stories about people who had jobs. But mostly they remembered how easy functional life was in the 1980s.

But it's different now. People are getting paid to work in Clojure, Haskell, Elixir, Erlang, Scala, and more. They're talking about them at conferences. They're blogging and tweeting FP. FP is a viable career path.

Check out this graph of survey results:

Future Languages (Intent to learn)

Scala, Swift, Clojure, Haskell, Erlang, Elixir, and F# are all in the top 15.

Source: O'Reilly 2016 European Software Development Salary Survey

3. Two Ways are Better than One

Some people are going to tell you that Functional Programming is better than the other paradigms. Some people will say Object Oriented is better. They might be right. They might be wrong. But what is clear is that a person knowing Object Oriented and Functional is better off than someone only knowing Object Oriented. It's like having a hammer and a saw. Two different tools let you do more.

4. Functional Programming Makes You Look Smart

When I moved back to New Orleans in 2012, I told someone that I was working in Haskell. He literally dropped down and did this:

We're not worthy!

It was half joking and totally unjustified. But only half joking.

You don't have to be a genius to learn functional programming, though that is the stereotype. It's unfortunate. Functional programming is very accessible. But, of course, you can keep that a secret if you want to impress people 😉

Joking aside, knowing how to program in a different paradigm will help you fill niches others can't. You'll solve problems that were made for FP and it will look like magic.

5. You Will Learn Monads

Monads are not that big of a deal. But learning them is a challenge. There is a law of the universe that says that once you "get" monads, you have an incredible desire to share them with the world but lose all ability to communicate how they work.

The best way to learn monads is to approach them gradually. You can use Haskell very productively without learning monads. Eventually, just by exposure, you'll get them. That's what happened to me.

6. Functional Code is Shorter

You may have heard it before, but I'll say it again: functional code tends to be shorter than equivalent code in non-functional languages. See this paper about a project at Ericsson and its estimates of how much shorter the Erlang code is than an equivalent C++ program. Spoiler: the C++ is between 4 and 10 times longer than the Erlang. Here is a quote:

Some concrete comparisons of source code volume have been made, as applications written in C++ have been rewritten in Erlang, resulting in a ten-fold reduction in the number of lines of uncommented source code. Other comparisons have indicated a fourfold reduction. A reasonable conclusion is that productivity increases by the same factor, given the same line/hour programmer productivity.

When measuring product quality in number of reported errors per 1000 lines of source code, the same relationship seems to exist: similar error density, but given the difference in code volume, roughly 4-10 times fewer errors in Erlang-based products.

7. Functional Code is Simple

In the Clojure world, the concept of "simplicity" has an objective meaning, thanks in large part to Rich Hickey's reasoning and his powers of presentation. In his talk Simple Made Easy, Rich Hickey presents a definition of simplicity that deals with how intertwined things are. It's not about "ease".

That definition is what I mean. But the concept applies to the broader functional programming world as well.

8. Functional Programming will Improve All of Your Programming

Programmers who understand more than one paradigm are better programmers. They have more tools in their toolbox. So even if you're writing Object Oriented code at work, knowing functional programming concepts will help you with that code. In the end, the principles of good code are the same. OO and FP simply approach the same problem from different angles. Seeing in multiple angles will let you solve your problem in a better way.

9. Functional Programming has a Long and Storied History

Lisp was invented in 1958 and has been copied and reimplemented countless times since then. Each time, important lessons were learned. Those lessons are felt when you're coding with it. Functional programming is one of the first paradigms (invented around the time of procedural) and has fed the other paradigms ideas and features since its inception, including the if statement. You will learn a lot just by working in a language with so much history.

10. Functional Programming Makes Concurrency Easier

Immutability and pure functions are a natural fit for working across multiple CPUs: with no shared mutable state, there is nothing to lock.
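A tiny Clojure illustration: because a pure function touches no shared mutable state, swapping map for pmap parallelises the work with no locks and no change in the result (slow-square here is a made-up example function):

```clojure
(defn slow-square
  "A pure function: the same input always yields the same output,
   and nothing outside the function is touched."
  [x]
  (* x x))

;; Sequential and parallel versions give identical results;
;; pmap can spread the calls across threads safely because
;; slow-square is pure.
(= (map slow-square (range 10))
   (pmap slow-square (range 10)))
;; => true
```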

11. Functional Programming Embodies Good Software Practices

In college my professors gave me lots of advice about best practices in software. But the code for the languages we used didn't follow those practices, which made it hard to accept their advice. I would have had to implement the advice through discipline and hard work, with no help from the language. Then when I got into functional programming, I realized that functional programming does all of those good practices by default.

Want to transition your career to Functional Programming? Sign up for my 10-part email course called The Functional Programming Career Guide.


Emerging Technologies for the Enterprise Conference 2017: Day One Recap

Day One of the 12th annual Emerging Technologies for the Enterprise Conference was held on Tuesday, April 18 in Philadelphia, PA. This two-day event included keynotes by Blair MacIntyre (augmented reality pioneer) and Scott Hanselman (podcaster), and featured speakers Monica Beckwith (JVM consultant at Oracle), Yehuda Katz (co-creator of Ember.js), and Jessica Kerr (lead engineer at Atomist).

By Michael Redlich

Newsletter 222: Laziness, core.async, and Re-frame

Issue 222 – April 24, 2017

Hi Clojurers,

Please enjoy the issue.

Rock on!
Eric Normand

PS Want to get this in your email? Subscribe!

What are the most used Clojure libraries?

Jake McCrary analyzed Clojure projects in the GitHub dataset and answered some questions about the popularity of different libraries.

Writing ClojureScript Native Apps is Easy

Expo provides JavaScript bindings for native iOS and Android libraries. I’m very interested in this right now because ClojureScript with React Native is quite an amazing development experience, and this makes it that much easier. This article by Frankie Sardo gives a nice intro. I do want to make a course on this after I finish Re-frame for Web.

On the Judicious Use of core.async

Some good tips from Paul Stadig about the good and bad of core.async.

Auditing with Reified Transactions in Datomic

Whit Chapman gives a nice tip about using transactions to store information about who is transacting the data. That way, you can later query the history.

Numbers as text

Ray McDermott is quite the Beyonce fan, apparently. In this article, Ray runs through an exercise of printing the numbers written out in letters instead of digits.

Roughing It Dev Style: Coding Without a Computer

Chazona Baum shares her experience coding from her iPhone. I’d love to be able to give up my laptop and work exclusively through a window into the cloud. I worry a bit about not being able to work offline, but sometimes I think it’s a blessing to be disconnected from your work.

Semi-eager realization of lazy sequences in ClojureScript

Marcin Kulik presents a cool system for calculating lazy sequences ahead of time while the browser is idle.

Two more Re-frame components

I’ve published two more lessons in the new Re-frame Components course I’m making. I build a sortable table in the first one and then rework it to use the database in the second lesson. We dive into some of the deeper Re-frame ideas. This course is all about skills in the small. I’m planning another course where we go over the big ideas of Re-frame.

LambdaConf Interviews

LambdaConf is coming up and we’ve been interviewing the speakers. There’s some good stuff coming up. I will be speaking there as well. But I think I will be all conferenced-out after that!


Turing Omnibus #1: Algorithm for generating wallpapers

I have started to read The New Turing Omnibus - a book that offers 66 concise, brilliantly written articles on the major points of interest in computer science theory, technology and applications.

From time to time, I will write a blog post presenting a chapter of this book.


Today, I am glad to present an interactive version of Chapter 1 about algorithms in general.

The algorithm

In order to explain what an algorithm is, the author presents a simple recipe for generating wallpapers.

Here is the recipe:


Now, we are going to code this algorithm in ClojureScript, and you will be able to play with it in your browser thanks to the interactive Klipse snippets.


First, we need a function that draws a color pixel on a canvas:

(defn draw-pixel! [canvas x y color]
  (let [ctx (.getContext canvas "2d")
        scale 2]
    (set! (.-fillStyle ctx) color)
    (.fillRect ctx (* scale x) (* scale y) scale scale)))

Then, a function that erases a canvas, i.e. colors it white:

(defn reset-canvas! [canvas]
  (let [ctx (.getContext canvas "2d")]
    (set! (.-fillStyle ctx) "white")
    (.fillRect ctx 0 0 (.-width canvas) (.-height canvas))))

Black and White Wallpaper

The algorithm is controlled by the geometry of a square:

  • its x-position named a
  • its y-position named b
  • the side of the square named side

(defn draw-bw-wallpaper! [canvas a b side]
  (let [points 200]
    (dotimes [i points]
      (dotimes [j points]
        (let [x (+ a (* i (/ side points)))
              y (+ b (* j (/ side points)))
              c (int (+ (* x x) (* y y)))]
          (when (even? c)
            (draw-pixel! canvas i j "black")))))))

(draw-bw-wallpaper! canvas 5 5 9)

The cool thing about this algorithm is that when we modify the side of the square, we get a completely different pattern:

(draw-bw-wallpaper! canvas 5 5 100)

Go ahead, play with the code…

The interactive code snippets are powered by the Klipse plugin.

Three Colors

We can generate a 3-color wallpaper by calculating the remainder of c modulo 4 and choosing a color accordingly:

(defn draw-color-wallpaper! [canvas a b side]
  (let [points 200]
    (dotimes [i points]
      (dotimes [j points]
        (let [x (+ a (* i (/ side points)))
              y (+ b (* j (/ side points)))
              c (int (+ (* x x) (* y y)))
              color (case (mod c 4)
                      0 "red"
                      1 "green"
                      2 "blue"
                      "white")] ; default for remainder 3 (assumed here)
          (draw-pixel! canvas i j color))))))

(draw-color-wallpaper! canvas 5 7 101)

Again, when we modify the side of the square, we get a completely different pattern:

(draw-color-wallpaper! canvas 5 7 57)

Grand Finale

Someone on Reddit suggested looping over the value of side in order to watch all the generated wallpapers like a movie.

Here is the result:

(defonce interval (atom nil))
(defonce side (atom 0))

(def delta 0.5)
(defn step [canvas container]
  (set! (.-innerHTML container) (str "side: " @side))
  (reset-canvas! canvas)
  (draw-color-wallpaper! canvas 5 5 (swap! side + delta)))

(.clearInterval js/window @interval)
(reset! side 0)
(reset! interval (.setInterval js/window step 500 canvas js/klipse-container))

Are you able to provide a simple explanation of this algorithm?

How is it able to generate so many different beautiful patterns?

Have you found a magnificent pattern? Please share its code…


Clojure.core: juxt, some and reduced

I have been coding in Clojure for 5 years now, but still, from time to time, I re-discover interesting functions in the clojure.core namespace. Today, I am going to share with you some interesting usages of juxt, some and reduced.

Select several values from a map - with juxt

We have select-keys to create a submap of a map.

(select-keys {:a 1 :b 2 :c 3} [:a :b])
;; => {:a 1 :b 2}

But what if we want to get only the values?

We could chain select-keys and vals:

(-> (select-keys {:a 1 :b 2 :c 3} [:a :b])
    vals)
;; => (1 2)

But it doesn’t feel idiomatic.

It’s much cleaner to use juxt:

((juxt :a :b) {:a 1 :b 2 :c 3})
;; => [1 2]
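Since (juxt f g) builds a function that returns [(f x) (g x)], it is also handy as a sort key when you want to order by several fields at once, for example (a made-up people list):

```clojure
;; Sort maps by surname first, then by first name.
(sort-by (juxt :last :first)
         [{:first "Rich"   :last "Hickey"}
          {:first "Alan"   :last "Kay"}
          {:first "Alonzo" :last "Church"}])
;; => ({:first "Alonzo" :last "Church"}
;;     {:first "Rich"   :last "Hickey"}
;;     {:first "Alan"   :last "Kay"})
```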

Find an item in a sequence - with some

We can find an item in a sequence with (first (filter ...)):

(first (filter #(= % :c) [:a :b :c :d]))
;; => :c

But it’s much more idiomatic to use some and a set as a predicate:

(some #{:c} [:a :b :c :d])
;; => :c

It works because, in Clojure, a set behaves as a function that receives an argument and returns it if it belongs to the set, and nil otherwise:

(#{:c} :c) ;; => :c
(#{:c} :a) ;; => nil

Terminate a reduce - with reduced

How do you terminate a reduce once you have found the value that you were looking for?

For instance, let’s imagine that you want to sum a sequence of positive numbers with a tweak: if the sum is greater than 1000 you want to return :big-sum instead of the sum.

You could write it with reduce:

(defn my-sum [s]
  (let [res (reduce (fn [sum x]
                      (+ sum x))
                    0
                    s)]
    (if (>= res 1000)
      :big-sum
      res)))

(my-sum (range 100))
;; => :big-sum

But inside the reduce, once you have discovered that the sum is greater than 1000, there is no point in continuing the calculation (because all the numbers are positive).

Let’s terminate our reduce with reduced:

(defn my-sum-opt [s]
  (reduce (fn [sum x]
            (let [res (+ sum x)]
              (if (>= res 1000)
                (reduced :big-sum)
                res)))
          0
          s))

(my-sum-opt (range 100))
;; => :big-sum


Introduction to functional programming with C#

It is no surprise that one of the biggest challenges in enterprise software development is complexity. Change is inevitable, especially when a new feature is introduced that adds complexity. This can make the software difficult to understand and verify. It doesn’t stop there: it increases development time, introduces new bugs, and at some point it becomes impossible to change anything in the software without introducing unexpected behavior or side effects. This can slow the project down and eventually lead to project failure.

Imperative programming styles like object-oriented programming can minimize complexity to a certain level, when done right, by creating abstractions and hiding complexity behind them. An object of a class has certain behavior that can be reasoned about without caring about the complexity of the implementation. Classes, when written properly, have high cohesion and low coupling, increasing code re-usability while keeping complexity reasonable.

Object-oriented programming is in my bloodstream. I’ve been a C# programmer most of my life, and my thought process always tends towards a sequence of statements that reach a certain goal: designing class hierarchies, focusing on encapsulation, abstraction and polymorphism, and changing the state of the program by actively modifying memory. There’s always the possibility that any number of threads can read a memory location without synchronization; if at least one of them mutates it, you have a race condition. Not an ideal situation for a programmer trying to embrace concurrent programming.

It is still possible to write an imperative program with completely thread-safe code that supports concurrency, but one still has to reason about performance, which is difficult in a multi-threaded environment. Even where parallelism improves performance, it is difficult to refactor existing sequential code into parallel code, as a large portion of the codebase has to be modified to use threads explicitly.

What’s the solution?

It is worth considering functional programming. It is a programming paradigm originating from ideas older than the first computers, when mathematicians introduced a theory called the lambda calculus. It provides a theoretical framework that treats computation as the evaluation of mathematical functions, evaluating expressions rather than executing commands, and avoiding changing state and mutating data.

By doing so, it becomes much easier to understand and reason about the code and, most importantly, side effects are reduced. In addition, unit testing becomes much easier.

Functional Languages:
It is interesting to analyze the programming languages that support functional programming such as Lisp, Clojure, Erlang, OCaml and Haskell which have been used in industrial and commercial applications by a wide variety of organizations. Each of them emphasizes different features and aspects of the functional style. ML is a general-purpose functional programming language and F# is the member of ML language family and originated as a functional programming language for the .Net Framework since 2002. It combines the succinct, expressive and compositional style of functional programming with the runtime, libraries, interoperability and object model of .NET

It is worth noting that many general-purpose programming languages are flexible enough to support multiple paradigms. A good example is C#, which has borrowed a lot of features from ML and Haskell, although it is primarily imperative, with a heavy dependence on mutable state. An example is LINQ, which promotes a declarative style and immutability by not modifying the underlying collection it operates on.

As I am already familiar with C#, I favor combining paradigms so that I have control and flexibility over choosing the approach that best suits the problem.

Fundamental principles:

As stated before, functional programming is programming with mathematical functions. The idea is that whenever the same arguments are supplied, a mathematical function returns the same result, and the function’s signature must convey all information about the possible inputs it accepts and the outputs it produces. This is achieved by following two simple principles: referential transparency and function honesty.

Referential Transparency:
Referential Transparency:
Referential transparency, applied to a function, means that you can determine the result of applying that function by looking only at the values of its arguments. The function should operate only on the values that we pass in, and it shouldn’t refer to global state. Refer to the example below:

public int CalculateElapsedDays(DateTime from)
{
    DateTime now = DateTime.Now;
    return (now - from).Days;
}

This function is not referentially transparent. Why? Called today it returns one output; called tomorrow it will return another. The reason is that it refers to the global DateTime.Now property.

Can this function be converted into a referentially transparent function? Yes.

By making the function operate only on the parameters that we pass in.

public static int CalculateElapsedDays(DateTime from, DateTime now) => (now - from).Days;

In the above function, we eliminated the dependency on global state, thus making the function referentially transparent.

Function honesty:

Function honesty states that a mathematical function should convey all information about the possible inputs it takes and the possible outputs it produces. Refer to the example below:

public int Divide(int numerator, int denominator)
{
    return numerator / denominator;
}

Is this function referentially transparent? Yes.

Does it qualify as a mathematical function? Maybe not.

Reason: the signature states that the function takes two integers and returns an integer. Is that true in all scenarios? Nope.

What happens when we invoke the function like:

var result = Divide(1,0);

Yes, you guessed it right. It will throw a “divide by zero” exception. Hence, the function’s signature doesn’t convey enough information about the output of the operation.

How to convert this function into a mathematical function?

Change the type of the denominator parameter as below:

public static int Divide(int numerator, NonZeroInt denominator)
{
    return numerator / denominator.Value;
}

NonZeroInt is a custom type that can hold any integer except zero. Now we have made this function honest, as it conveys all information about the possible inputs it takes and the outputs it produces.

Despite the simplicity of these principles, functional programming involves a lot of practices that might seem intimidating and overwhelming to many programmers. In this article, I will cover some basic aspects to start with: “functions as first-class citizens”, “higher-order functions” and “pure functions”.

Functions as first-class citizens:

When functions are first-class citizens (or first-class values), they can be used as input or output for any other function. They can be assigned to variables and stored in collections, just like values of any other type. For example:

Func<int, bool> isMod2 = x => x % 2 == 0;
var list = Enumerable.Range(1, 10);
var evenNumbers = list.Where(isMod2);

The above code illustrates that functions are indeed first-class values: you can assign the function (x => x % 2 == 0) to the variable isMod2, which in turn is passed as an argument to Where (an extension method on IEnumerable).
Treating functions as values is essential to the functional programming style, as it gives us the power to define higher-order functions.

Higher-order functions (HOF):

HOFs are functions that take one or more functions as arguments, return a function as a result, or both. All other functions are first-order functions.

Let’s consider the same modulo-2 example, in which list.Where does the filtering, determining which numbers are included in the final list based on a predicate provided by the caller. The predicate provided here is the isMod2 function, and the Where extension method of IEnumerable is the higher-order function. The implementation of Where looks like this:

public static IEnumerable<T> Where<T>
   (this IEnumerable<T> ts, Func<T, bool> predicate)
{
   foreach (T t in ts)   // ❶
      if (predicate(t))  // ❷
         yield return t;
}

❶ The task of iterating over the list is an implementation detail of Where.
❷ The criterion determining which items are included is decided by the caller

The Where HOF applies the given predicate to every element in the collection. HOFs can also be designed to conditionally apply a function to the collection; I’ll leave that for you to experiment with. By now you will have realized how concise and powerful higher-order functions are.
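The same higher-order function can be sketched as a Python generator (illustrative only, not the .NET implementation):

```python
def where(items, predicate):
    # Iterating over the collection is an implementation detail of where...
    for item in items:
        # ...while the inclusion criterion is decided by the caller.
        if predicate(item):
            yield item

is_mod2 = lambda x: x % 2 == 0
print(list(where(range(1, 11), is_mod2)))  # [2, 4, 6, 8, 10]
```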

Pure functions:

Pure functions are mathematical functions that follow the two basic principles we discussed before — referential transparency and function honesty. In addition, pure functions don't cause any side effects: they mutate neither global state nor their input arguments. Pure functions are easy to test and reason about, and since the output depends only on the input, the order of evaluation isn’t important. These characteristics are vital for a program to be optimized with parallelization, lazy evaluation, and memoization (caching).
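Memoization is worth seeing concretely. A Python sketch using the standard library's cache decorator — safe precisely because the function is pure:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def multiply_by_2(value: int) -> int:
    # Pure: same input, same output, no side effects, so caching
    # cannot change the program's observable behavior.
    return value * 2

print(multiply_by_2(21))  # 42
print(multiply_by_2(21))  # 42, served from the cache this time
```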

Consider the below example console application, which multiplies a list of numbers by 2 and formats the results into a multiplication table.

// Extensions.cs
public static class Extensions
{
    public static int MultiplyBy2(this int value) // ❶
        => value * 2;
}

// MultiplicationFormatter.cs
public static class MultiplicationFormatter
{
    static int counter;

    static string Counter(int val) => $"{++counter} x 2 = {val}"; // ❷

    public static List<string> Format(List<int> list)
        => list
           .Select(x => x.MultiplyBy2()) // ❸
           .Select(Counter)
           .ToList();
}

// Program.cs
using static System.Console;

static void Main(string[] args)
{
    var list = MultiplicationFormatter.Format(Enumerable.Range(1, 10).ToList());
    foreach (var item in list)
        WriteLine(item);
}
// Output
1 x 2 = 2
2 x 2 = 4
3 x 2 = 6
4 x 2 = 8
5 x 2 = 10
6 x 2 = 12
7 x 2 = 14
8 x 2 = 16
9 x 2 = 18
10 x 2 = 20

❶ A pure function
❷ An impure function as it mutates the global state
❸ Pure and impure functions can be applied similarly

The above code executes without any issue, as we are operating on only 10 numbers. What if we want to do the same operation on a bigger set of data, especially when the processing is CPU-intensive? Would it make sense to do the multiplications in parallel? The elements of the sequence can be processed independently, after all.

Certainly: we can use the power of Parallel LINQ (PLINQ) to get parallelization almost for free. How do we achieve this? Simply by adding AsParallel() to the chain:

public static List<string> Format(List<int> list)
=> list.AsParallel()

But hold on, pure and impure functions don’t parallelize well together.

What do I mean by that? As you know, the Counter function is impure. If you have some experience with multi-threading, this will be familiar to you: the parallel version will have multiple threads reading and updating counter with no locking in place. The program will end up losing some updates, which leads to incorrect counter results like the following:

1 x 2 = 2
2 x 2 = 4
7 x 2 = 6
8 x 2 = 8
6 x 2 = 10
4 x 2 = 12
9 x 2 = 14
10 x 2 = 16
5 x 2 = 18
3 x 2 = 20

Avoiding state mutation:
One possible way to fix this is to avoid the mutable running counter altogether. Can we instead generate the list of counters we need and pair them with the given list of items? Let’s see how:

using static System.Linq.ParallelEnumerable;

public static class MultiplicationFormatter
{
   public static List<string> Format(List<int> list)
      => list.AsParallel() // ❶
         .Zip(Range(1, list.Count), (val, counter) => $"{counter} x 2 = {val}") // ❷
         .ToList();
}

Using Zip and Range, we have rewritten the MultiplicationFormatter. Zip can be used as an extension method, so you can write the Format method in a more fluent style. After this change, the Format method is pure; making it parallel is just the cherry on top of the cake ❶. This is almost identical to the sequential version, except for ❶ and ❷.
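The zip-based fix translates directly to Python (a sketch, not the article's code): pair each value with a precomputed counter instead of mutating shared state.

```python
values = [n * 2 for n in range(1, 11)]
# zip replaces the mutable counter: each value is paired with its own
# index up front, so the formatting step has no shared state to race on.
table = [f"{counter} x 2 = {val}"
         for counter, val in zip(range(1, len(values) + 1), values)]
print(table[0])   # 1 x 2 = 2
print(table[-1])  # 10 x 2 = 20
```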

Of course, it is not always as easy as this simple scenario, but the ideas you have seen so far will put you in a better position to tackle such issues related to parallelism and concurrency.

This article was originally posted on my Medium blog.


in which actors simulate a protocol

I've been on a bit of a yak shave recently on Bussard, my spaceflight programming adventure game. The game relies pretty heavily on simulating various computer systems, both onboard your craft and in various space stations, portal installations, and other craft you encounter. It naturally needs to simulate communications between all these.

I started with a pretty simple method of having each connection spin up its own coroutine running its own sandboxed session. Space station sessions run smash, a vaguely bash-like shell in a faux-unix, while connecting to a portal triggers a small lisp script to check for clearance and gradually activate the gateway sequence. The main loop would allow each session's coroutine a slice of time on each update tick, but a badly-behaved script could make the frame rate suffer. Also, input and output were handled in a pretty ad-hoc way, with Lua tables used as channels to send strings to and from these session coroutines. But most problematic of all was that there wasn't any uniformity or regularity in the implementations of the various sessions.

The next big feature I wanted to add was the ability to deploy rovers from your ship and SSH into them to control their movements or reprogram them. But I really didn't want to add a third half-baked session type; I needed all the different types to conform to a stricter interface. This required some rethinking.

The codebase is written primarily in Lua, but not just any Lua—it uses the LÖVE framework. While Lua's concurrency options are very limited, LÖVE offers true OS threads which run independently of each other. Now of course LÖVE can't magically change the semantics of Lua—these threads are technically in the same process but cannot communicate directly. All communication happens over channels which allow copies of data to be shared, but not actual state.

While these limitations could be annoying in some cases, they turn out to be a perfect fit for simulating communications between separate computer systems. Moving to threads allows for much more complex programs to run on stations, portals, rovers, etc without adversely affecting performance of the game.

Each world has a thread with a pair of input/output channels that gets started when you enter that world's star system. When a login succeeds, a thread is created for that specific session, which also gets its own stdin channel. Each OS can provide its own implementation of what a session thread looks like, but they all exchange stdin and stdout messages over channels. Interactive sessions will typically run a shell like smash or a repl and block on stdin:demand() which will wait until the main thread has some input to send along.

local new_session = function(username, password)
   if(not os.is_authorized(hostname, username, password)) then
      return output:push({op="status", out="Login failed."})
   end
   local session_id = utils.uuid()
   local stdin = love.thread.newChannel()
   sessions[session_id] = os.new_session(stdin, output, username, hostname)
   sessions[session_id].stdin = stdin
   output:push({op="status", ok=true, ["new-session"] = session_id})
   return true
end
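The share-nothing pattern above can be sketched in Python with thread-safe queues standing in for LÖVE channels (a loose analogue, not the game's code):

```python
import threading
import queue

stdin, output = queue.Queue(), queue.Queue()

def session_thread():
    # Like stdin:demand(), block until the main thread sends some input...
    line = stdin.get()
    # ...then reply over the shared output channel as a message.
    output.put({"op": "stdout", "out": f"echo: {line}"})

t = threading.Thread(target=session_thread)
t.start()
stdin.put("hello")
print(output.get())  # {'op': 'stdout', 'out': 'echo: hello'}
t.join()
```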


Semi-eager realization of lazy sequences in ClojureScript

asciinema player represents "frames" as a lazy sequence of screen states.

It starts by fetching the asciicast JSON file and extracting STDOUT print events from it. They form a vector like this:

(def stdout-events [
  [1.2 "hell"]
  [0.1 "o"]
  [6.2 " "]
  [1.0 "world"]
  [2.0 "!"]])

To get a sequence of "frames" we run the reductions function over it, with a blank terminal (vt) as the initial value:

(defn reduce-vt [[_ vt] [curr-time str]]
  [curr-time (vt/feed-str vt str)])

(let [vt (vt/make-vt width height)]
  (reductions reduce-vt [0 vt] stdout-events))

The details of the above code are not really important. The important thing is that the result of reductions is a lazy sequence. The player consumes this sequence as the playback progresses, sleeping the proper amount of time (the number in the stdout-events items) between screen updates.
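For readers more at home outside Clojure, the same shape can be sketched with Python's itertools.accumulate, with terminal feeding simplified to string concatenation (an illustration, not the player's code):

```python
from itertools import accumulate

stdout_events = [(1.2, "hell"), (0.1, "o"), (6.2, " "), (1.0, "world"), (2.0, "!")]

def reduce_vt(acc, event):
    _, screen = acc
    curr_time, text = event
    # Feeding the terminal is simplified here to appending the text.
    return (curr_time, screen + text)

# Like reductions, accumulate is lazy: frames are computed only when consumed.
frames = accumulate(stdout_events, reduce_vt, initial=(0, ""))
print(list(frames)[-1])  # (2.0, 'hello world!')
```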

When the sequence is consumed, the reduce-vt function interprets the text with control sequences (the str arg in reduce-vt) and applies screen transformations to the terminal model (vt). Usually this is not computationally intensive, but it can be pretty heavy for more colorful, animated recordings. In such cases the animation stutters.

Also, when you skip forward to a later position in the recording, the player needs to consume all items from this lazy sequence up to the point you requested. This often requires lots of work, because the computation for all the unrealized items adds up. In some cases it's visible as UI lag: the player doesn't respond to user input because it's busy interpreting tens of kilobytes of text.

Knowing that the sequence will eventually be consumed in full, why not pass it through doall right before the beginning of playback to generate all screen states upfront? That would certainly help with stuttering and make seeking blazing fast, but it would block the UI thread for a couple of seconds or more. Not quite an acceptable solution.

It would be great if we could semi-eagerly realize the lazy sequence, piece by piece, when the browser is idle during playback (e.g. when the player waits for time to pass before displaying the next screen contents).

Cooperative scheduling to the rescue! requestIdleCallback, a younger sibling of requestAnimationFrame, allows code execution to be scheduled when the browser has nothing to do. Perfect fit for the problem.

Here's a requestIdleCallback based version of Clojure's dorun:

(defn dorun-when-idle [coll]
  (when-let [ric (aget js/window "requestIdleCallback")]
    (letfn [(make-cb [coll]
              (fn []
                (when (seq coll)
                  (ric (make-cb (rest coll))))))]
      (ric (make-cb coll)))))

(dorun-when-idle my-lazy-seq)

It goes through all the sequence items, one item per requestIdleCallback invocation. Clojure(Script)'s data structures are immutable, but lazy sequences cache their realized items internally. This allows us to walk through the sequence in one place, realizing it, while consuming it in another place at a slower pace.

The above function reminds me of the little-known seque function, which realizes a lazy sequence in a separate JVM thread, trying to stay ahead of the consumption. dorun-when-idle is pretty hungry, as it goes to the end of the sequence regardless of the consumption pace; however, I think it should be possible to implement seque in ClojureScript through requestIdleCallback with the same semantics as in Clojure.
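The caching behavior that makes this safe can be modeled with a toy Python class (a sketch, not how ClojureScript implements lazy seqs): one party realizes items into a cache while another reads them back.

```python
class LazySeq:
    """Realizes items from an iterator on demand and caches them forever."""
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._cache = []

    def get(self, i):
        # Realize (and cache) items up to index i, exactly once each.
        while len(self._cache) <= i:
            self._cache.append(next(self._it))
        return self._cache[i]

frames = LazySeq(x * 2 for x in range(100))
for i in range(100):      # the idle-time walker realizes the whole sequence...
    frames.get(i)
print(frames.get(50))     # ...so later consumers just read the cache: 100
```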

Anyway, this greatly improved animation smoothness and seeking responsiveness in asciinema player in all major browsers, and it's already running on


Writing ClojureScript native apps is easy

What is wrong with native app development?

I’ve always enjoyed writing mobile apps (although I always considered it way harder than it should be). The combination of complex build tools, monster IDEs, proprietary software, slow feedback cycles and terrible API designs seemed to conjure endless obstacles between me and the app I was developing.

When React Native came out, I hoped it would bring an end to this suffering, but it turned out to be yet one more layer of complexity on top of a broken ecosystem. After the announcement of Create React Native App and Facebook's collaboration with Expo, I can now say the entry barrier to native apps is as low as for writing a web app.


How does Expo fix it?

Expo takes the bold stance of not allowing you to write any native code at all. Together with React Native, it exposes a JS-only SDK wrapping most of the native APIs, and then runs your app inside the Expo client. This means you cannot perform sophisticated hardware control (but most likely you won't need to, if you stick with basic audio/video/bluetooth controls). In exchange, you can write a pure JS app that gets rendered with native components.

The experience has been crafted extremely well. You only need to learn a smaller, more coherent set of APIs, and you can write more compact, easier-to-maintain code. More importantly, you can choose your own development tools, since you don't need to own a MacBook or install Android Studio to run it. Your app is dynamically fetched and updated on the fly, so there is no need to go through a complex release process, and distributing it is as convenient as sharing a link.


In the past I have used Cordova and similar strategies to write apps in pure JS, but the result was never on-par with a native app. With Expo, you really get the best of both worlds.

Just how easy is it to add ClojureScript to Expo?

Let’s see. Start by creating a fresh expo project:

npm i -g exp
mkdir foo && cd foo && exp init -t blank
npm install

Then add the following ClojureScript files:

In foo/src/app/core.cljs

(ns app.core)

(def React (js/require "react"))
(def ReactNative (js/require "react-native"))
(def Expo (js/require "expo"))

(def createFactory (aget React "createFactory"))
(def createClass (aget React "createClass"))
(def registerRootComponent (aget Expo "registerRootComponent"))

(def Text (createFactory (aget ReactNative "Text")))

(def style
  #js {:flex 1
       :fontSize 18
       :paddingTop 80
       :textAlign "center"})

(def App (createClass #js {:render #(Text. #js {:style style} "Hello from Cljs")}))

(registerRootComponent App)

In foo/build.cljs

(require '[lumo.build.api :as b])

(b/build "src"
  {:output-to "main.js"
   :optimizations :advanced})

Compile your cljs files using lumo and serve the app:

npm i -g lumo-cljs
lumo -c src build.cljs
exp start

Congrats! You can now run a ClojureScript app on iOS from a Linux laptop with just node installed!

A good exercise is to try and convert some simple examples to ClojureScript, like those you find in the Snack playground. If you’re feeling lazy, you can simply clone this repo which also shows continuous deployment done via Travis CI.

Where do you go from here?

Just like every other Clojure developer, you're probably very spoiled, because you've come to rely on powerful tools:

  • Fast compilation times
  • Hot code reload
  • Device REPL

In order to use this toolchain you need to bring in Leiningen and/or Boot, and the setup is a bit more convoluted. Fortunately, the community has already developed a template where all these problems have been solved for you. Obviously, it has many more moving parts compared to the simpler setup above, but it just works™ 1.

Is it production ready?

ClojureScript moves fast and there are already various apps in production built this way.

You will find plenty of resources to help you write ClojureScript apps with React Native. Everything from the newest React Navigation library to a GraphQL client can easily be adapted to your needs. It's early days, but I feel ClojureScript + Expo is one of the best options for building large, sophisticated apps on mobile.


1 Big shout out to tiensonqin for putting this together, and to every other person hanging around in the #cljsrn channel for this massive achievement. It's incredible how much has been done by the community with small, incremental contributions.


Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.