What is functional thinking?

My book is coming out soon in early access. It’s called ‘Taming Complex Software: A Friendly Guide to Functional Thinking’. But what is ‘functional thinking’? In this episode, I explain the term and why I’m no longer redefining ‘functional programming’.

Transcript

Eric Normand: What is functional thinking? In this episode, I’ll talk about the topic of my new book. My name is Eric Normand. I help people thrive with functional programming.

I have a new book, which will come out eventually. I’m publishing it with Manning. It’s about functional programming.

The reason I’m talking about it now is that it’s soon going to come out in early access, which means you’ll be able to read it online in ebook format before it’s finished; you’ll be able to read the chapters that are done. I wanted to start talking about it, because it’s very close now.

The title of the book is “Taming Complex Software — A Friendly Guide to Functional Thinking.” I wanted to talk about what functional thinking is, what I mean by it. I started this podcast about 18 months ago as a way to start thinking through this book and the ideas I wanted to put into it.

I proposed a new definition of functional programming. You can go back to those early episodes, where I reason out why I need a new definition of functional programming. As a short explanation, the standard definition of functional programming says that, “It is programming with pure functions and avoiding side effects.”

There’s a lot of truth to that definition, but I feel it makes a lot of assumptions that are not explained in the definition. It is also problematic for other reasons. It scares people, because when they learn what side effects are, they wonder: how can you write software if you’re avoiding the main purpose of running the software in the first place?

I wanted a definition that would clarify it and place the standard definition as a smaller part of a bigger picture. As an example, the definition makes a distinction between pure functions and side effects, but that distinction is not explained in the definition. I feel this distinction is very central to functional programming. It should be part of the definition.

Why should it be part of the definition? Because no other paradigm makes that distinction. No other paradigm talks about the difference between side effects and pure functions. You can double check. They might talk about it later, but not as part of the definition. Usually, it’s with a nod to functional programming.

I believe that distinction should be baked into the definition. I’ve been in discussions with a lot of people. A lot of people agree with me. A lot of people say, “Oh, the definition is just fine. These are just the implications of the definition.” But those implications are exactly what I’m talking about, and in the standard definition they’re not primary.

I’ve been in a lot of discussions. Some people say, “It’s so clarifying to hear you make that distinction as the primary thing.” Anyway, I was continuing along with this idea when the publisher finally came up with a title that everyone at the publishing company working on the book liked, the one I read before: Taming Complex Software — A Friendly Guide to Functional Thinking.

The phrase “functional thinking” got me thinking that maybe I don’t need to be so antagonistic about this definition. I’ll just call what I’m explaining something else. The thing is, no functional programmer would say the distinction between side effects and pure functions is unimportant.

Everyone would say it’s important to functional programming. The disagreement was merely over whether we need a new definition. I don’t know if I really want to have that fight. I don’t want to fight that fight. The fight would be with people who are already sold on functional programming.

In a certain sense, you could say, it’s good to pick fights, because it creates marketing, a buzz about your book. It’s like a little controversy that gets people talking about the book, and so then they might buy it.

What I have been thinking about more and more is that the discussion I would be having would be among people who are already bought in to functional programming. It might be good for marketing, but I decided I didn’t want to do it. I liked the functional thinking idea: it’s not really that different, it just hasn’t been defined.

I can make it whatever I want. Basically, it’s the same stuff as I was saying before, but with a different term. It’s basically how functional programmers think, which is what functional programming is as a paradigm. A paradigm means the ways of thinking, the ways of approaching problems, and the concepts.

It’s the same thing, functional thinking and functional programming, but no one has defined functional thinking yet, so I can do what I want with it. It also focuses more on the thought processes, as opposed to specific functional programming techniques or features, which is where I think a lot of functional programming teaching has gone.

It’s a lot of what people understand when you say the term functional programming. They’re already like, “So, it’s about monads?” I’m like, “No, that’s not where we’re going. We’re going much more fundamental.” It’s about this distinction between pure functions and side effects. That’s the update. That’s what functional thinking is.

When you see the title, you’ll know where I’m coming from with that. The main distinction that functional programmers make, the one that distinguishes them from other programmers and other paradigms, is identifying side effects, identifying pure functions, and identifying the other implied thing, which is the data.

The definition does not say anything about data. It just says pure functions. I’ve asked people about that, and they say, “Oh, well, it’s implied, because you need something for the functions to act on. That’s obviously data.” It’s there. No one would disagree that you need data as separate from pure functions.

They just never mention it in the definition. Functional thinking explicitly calls it out. I’m calling them actions, calculations, and data. Actions are not the same as side effects. They’re more like impure functions.

You’ve got impure functions, which I’m calling actions. You’ve got pure functions, which are calculations. You’ve got data, which should be immutable. Those are my ideas on functional thinking.
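
To make that concrete, here is a rough sketch of the three categories in Clojure (a hypothetical example, not from the book):

;; Data: an immutable fact about the world.
(def order {:customer "Jane" :items ["coffee" "bagel"] :total 7.50})

;; Calculation: a pure function. Same inputs, same output, no effects.
(defn add-tax [total rate]
  (* total (+ 1 rate)))

;; Action: an impure function. When and how often it runs matters.
(defn charge-customer! [order]
  (println "Charging" (:customer order) "amount" (add-tax (:total order) 0.09)))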

Look for the book. It should be coming really soon, in the next few weeks. We’re counting it in weeks now. We had to pull all the little pieces together. Then someone at Manning has to press a button so it goes live. It’s happening.

They don’t like to promise this, but what they’re shooting for is one chapter every month: a new chapter released every month, or at least a major update to an existing chapter. I’m starting with three chapters, chapters one through three. Then I hope to hit that target of one chapter a month. I’ve been working at that pace. It looks good.

I’m going to sign off now. You can find this episode, all of the past episodes, and all future episodes at lispcast.com/podcast. You’ll find video recordings, audio recordings, and transcripts, so you can read it if you prefer reading.

You’ll also find links to subscribe or contact me on social media. If you want to follow me there, get in touch with me. I much prefer a discussion to a simple follow. It’s all there. Go to lispcast.com/podcast to check it out.

All right. I’m Eric Normand. Keep on programming functionally, and rock on.

The post What is functional thinking? appeared first on LispCast.

Permalink

Should functional be the way to go?

This question is directed more towards the practical side of both frontend and backend programming.

I have recently picked up Functional Programming and I am not sure if I should actually start building apps in it.

Also, I don't just mean adding functional sprinkles on top with reduce and map, but actually building on the functional paradigm through lambda calculus and the shared concepts (type theory, category theory) around it.

I am aware that it makes sense in some fields, as proven by Scala, Clojure, and such, but I am not sure whether web projects should go functional, even with some impurities.

Some advice or input will be helpful.

Permalink

The REPL

Tempting fate, lots of journals, and JIRA!

Notes.

Since the last newsletter I’ve been over to America for work, and back home again. Things have been pretty busy with work, travel, family, etc., but life is finally settling down a little bit. While I was in America I got to see a bunch of Clojure folk in San Francisco, New York, and even Chicago after weather stranded me there overnight. It was really great to meet so many people that I’d only interacted with online before; everyone I met was really welcoming and friendly.

Falcon (my employer) has two job openings, a Web Development Engineer (mostly front-end) and a Senior Software Engineer (mostly back-end). If you’d like to work in an empathetic, inclusive, fast-growing team that uses Clojure, please consider applying. We’re looking to hire people remotely (or in the Bay Area) who can work with a significant overlap with the Pacific time zone.

Let me know if you have any questions.

-main

  • Clojure 1.10.1 was released. It’s a pretty small release with a few very targeted fixes. It’s great that they were able to get it released off-cycle and didn’t need to wait for Clojure 1.11. Thanks Alex and everyone else who got it out!
  • Jacek Schae has a new podcast out about ClojureScript and he’s had a stellar bunch of guests so far.
  • Bozhidar Batsov talked about Daniel Higginbotham’s recent tweet about Clojure Heroes. What was most striking to me was how many people I could think of that had contributed greatly to the Clojure community to make it the way it is today.
  • JUXT have been writing journal posts talking about what they’ve been up to lately.
  • Speaking of JUXT, I interviewed Jeremy Taylor and Malcolm Sparks on The REPL podcast about Crux, their new bitemporal database.
  • ClojureScript logging with goog.log from Lambda Island

Libraries & Books.

  • Michiel Borkent helps you tempt fate with finitize. It’s a library that lets you limit and realize possibly infinite sequences.
  • Get Programming with Clojure is a new Clojure book from Yehonathan Sharvit (creator of Klipse). It’s in early access with Manning at the moment.
  • supdate is a new library for transforming nested data structures. It’s similar to Specter.

People are worried about Types.

  • Ambrose Bonnaire-Sergeant successfully defended his thesis on Typed Clojure

Foundations.

Tools.

Recent Developments.

  • There’s a new page on clojure.org about Clojure’s development process. It explains the philosophy behind Clojure’s development process.
  • Alex Miller has more journal posts about what he’s been working on, including a JIRA migration for dev.clojure.org (which must have been a huge amount of work): 17, 18, 19, 20.
  • I’m particularly excited that the Clojure core team is talking about having a separate way to create enhancement requests, similar to PEPs.

Learning

  • Deploying ClojureScript to GitHub Pages. This is from a new site called Between Two Parens; I’m jealous I didn’t think of that name first.
  • Cognitect has a (slightly old by now) blog post talking about the future of the conferences that they put on. They are going to continue running Clojure/conj, but are discontinuing Clojure/west and EuroClojure. They are also going to be giving more support to community-run Clojure conferences, which is awesome.
  • Building a ClojureScript wrapper for React with Hooks
  • Andy Chambers from Funding Circle has a blog post on the Confluent blog about Testing Event-Driven Systems using Clojure

Misc.

I’m Daniel Compton. I maintain public Maven repositories at Clojars, private ones at Deps, and help fund OSS Clojure projects (along with tons of generous members like Pitch, JUXT, Metosin, Adgoji, and Funding Circle) at Clojurists Together. If you’ve enjoyed reading this, tell your friends to sign up at therepl.net, or post a link in your company chatroom. If you’ve seen (or published) a blog post, library, or anything else Clojure/JVM related, please reply to this to let me know about it.

If you’d like to support the work that I’m doing, consider signing up for a trial of Deps, a private, hosted, Maven Repository service that I run.

Thanks!

Copyright © 2019 Daniel Compton, All rights reserved.



Permalink

Ep 033: Cake or Ice Cream? Yes!

Nate needs to parse two different errors and takes some time to compose himself.

  • Previously, we were able to parse out errors and give the parsing function the ability to search as far into the future as necessary.
  • We did this by having the function take a sequence and return a sequence, managed by lazy-seq.
  • (01:30) New Problem: We need to correlate two different kinds of errors.
  • The developers looked at our list of sprinkle errors and they think that they’re caused by the 357 errors.
  • They have requested that we look at the entire log and generate a report of 357 and sprinkle errors, so we can tell if they’re correlated.
  • “When someone says, do I want cake or ice cream, the right answer is: yes, I want both!”
  • Before, we were only parsing out a single type of error and summarizing it, but now we need to parse out both types of errors.
  • If we try to parse both kinds of errors with the same function, we will quickly get ourselves into nested ifs or maybe an infinite cond. Perhaps a complex state machine with backtracking?
  • (05:55) Realization: Each error stands alone. Once you detect the beginning of a sprinkle error, you won’t need to look for a 357 error.
  • You can take each one in turn.
  • (06:30) Solution step 1: What if we had two functions, one for each type of error.
  • Each of these functions would take the entire sequence and tell us if there was an error at the beginning.
  • Previously, our function both recognized errors and handled the sequence generation. If we pull those apart, we can add parsing for more errors easily.
  • Each error parsing function would return nil if no error was found at the head of the sequence.
  • (08:46) Solution step 2: Create a function that uses the two detectors to find out what error is at the head of the sequence.
  • It takes the sequence, and wraps consecutive calls in an or block.
  • The or block will try each one in turn until one matches and then that is the result.
  • Each error’s parsing is in its own function, and the combining function serves as an inventory.
  • (11:35) Solution step 3: Create a lazy sequence that wraps calls to the combined detector function.
  • Last week’s code had parsing and laziness in one function.
  • Now that we’ve pulled the parsing out, we can use the remaining structure to create our lazy sequence.
  • The combined detector function is parse-next, and the function that manages the lazy sequence is parse-all.
  • “Now we’ve fulfilled our obligation to have bike-shedding on naming. Next up, cache consistency. And finally, off-by-one errors.”
  • The top of parse-all has a call to lazy-seq.
  • It will use the result of calling parse-next on the sequence.
    • If it gets something, it will use cons to add that value to the beginning of a recursive call to itself.
    • If it gets nil, it will recursively call itself with the rest of the sequence, thus advancing the parsing forward one step.
  • It’s not a ton of boilerplate, but it is nice to put all the mechanics of the sequence creation into a function by itself.
  • Now we have a heterogeneous sequence of errors, and we can transform it into any report that is useful.
  • Each parsing function doesn’t need to worry about advancing down the sequence, that is handled by the higher parse-all function.
  • Since we have a new lazy sequence, we can build recognizers on top of it that generate an even higher-level sequence.
  • We ruminate more on higher level data in Episode 020.

Related episodes:

Clojure in this episode:

  • seq, cons, rest
  • lazy-seq
  • or, cond

Code sample from this episode:

(ns nate.week-05
  (:require
    [devops.week-01 :refer [parse-line]]
    [devops.week-02 :refer [process-log]]
    [devops.week-03 :refer [sprinkle-errors-by-type]]
    ))

(defn parse-sprinkle
  [lines]
  (let [[first-line second-line] lines
        [_whole donut-id] (some->> first-line :log/message (re-matches #"failed to add sprinkle to donut (\d+)"))
        [_whole error] (some->> second-line :log/message (re-matches #"sprinkle fail reason: (.*)"))]
    (when (and donut-id error)
      (merge first-line
             {:kind :sprinkle
              :sprinkle/donut-id donut-id
              :sprinkle/error error}))))

(defn parse-357-error
  [lines]
  (let [[first-line] lines
        [_whole user] (some->> first-line :log/message (re-matches #"transaction failed while updating user ([^:]+): code 357"))]
    (when user
      (merge first-line
             {:kind :code-357
              :code-357/user user}))))

(defn parse-next
  [lines]
  ;; Try each error detector in turn; the first non-nil result wins.
  (or (parse-357-error lines)
      (parse-sprinkle lines)))

(defn parse-all
  [lines]
  ;; Lazily walk the log: emit a parsed error whenever one starts at the
  ;; head of the sequence, then advance one line either way.
  (lazy-seq
    (when (seq lines)
      (if-some [found (parse-next lines)]
        (cons found (parse-all (rest lines)))
        (parse-all (rest lines))))))

(defn kind?
  ([kind]
   #(kind? % kind))
  ([line kind]
   (= kind (:kind line))))


(comment
  (process-log "sample.log" #(->> % (map parse-line) parse-all (map :kind) doall))
  (process-log "sample.log" #(->> % (map parse-line) parse-all (filter (kind? :sprinkle)) sprinkle-errors-by-type))
  )

Log file sample:

2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | transaction failed while updating user joe: code 357
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | failed to add sprinkle to donut 23948
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | sprinkle fail reason: should never happen
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | failed to add sprinkle to donut 94238
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | sprinkle fail reason: timeout exceeded threshold
2019-05-14 16:48:56 | process-Poster | INFO  | com.donutgram.poster | transaction failed while updating user sally: code 357
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | failed to add sprinkle to donut 24839
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | sprinkle fail reason: too many requests
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | failed to add sprinkle to donut 19238
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | sprinkle fail reason: should never happen
2019-05-14 16:48:57 | process-Poster | INFO  | com.donutgram.poster | transaction failed while updating user joe: code 357
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | failed to add sprinkle to donut 50493
2019-05-14 16:48:55 | process-Poster | INFO  | com.donutgram.poster | sprinkle fail reason: unknown state

Permalink

JUXT Journal 2019-06-14

Crux

  • We’ve released a new version, "19.06-1.1.0-alpha"; the changes are listed below.

  • Malcolm and Jeremy were interviewed by Daniel Compton on episode #24 of The REPL podcast.

  • The Crux team will be giving a talk at the Strange Loop conference in Missouri this September, "Temporal Databases for Streaming Architectures".

Crux night in London

Next Tuesday we are delivering a London Clojurians hands-on workshop for Crux (generously hosted by Funding Circle!), with free pizza and t-shirts featuring John McCarthy: visit the meetup.com page. Already 80 spaces are taken and tickets are available on a first come, first served basis, so get yours before they run out.

Here’s the agenda:

  • Introductions

  • Lightning Talks

    • Tick by Malcolm Sparks

    • Bitemporality by Jon Pither

    • Crux 101 by Jeremy Taylor

  • Q&A Panel

  • Hands-on Tour

  • Food & Drink

You can see the slides online now.

The core development team will be giving a talk describing the architecture and features of Crux, followed by a code-along session giving you a chance to use Crux yourself. You will come away with an understanding of Crux’s datalog, extension points, and the value of bitemporality in a database.

Crux changes

CAS and Put operations no longer require a duplicated ID to be placed into both the document and the op; it’s now only specified once, in the op. This makes the API much friendlier and removes an unnecessary burden of duplication on the user.

There were a number of internal bug fixes which you can find the details of in the changelog.

Edge

There was an investment into solving some of the UX issues that Emacs users experience when using cider-jack-in-clj&cljs. If you typed (go) into your Clojure REPL, you would end up getting a port conflict exception. Eventually it turned out to be a race condition in figwheel-main: when you restarted the server underneath it, it would helpfully try to start another one. Until that issue is resolved, Edge has a workaround for this problem.

Lucio updated the default Edge app template with a nicer set of default CSS.

Edge got a new site and a redesigned UI for the documentation set to go with it. As the templates for the Edge docs are now custom, the hope is that we can do more useful things with them. One thing that’s already implemented is convenient "Suggest and edit" links on every page. We also want to explore the idea of "expanding snippets", which will be more self-contained in order to aid in creating tutorials.

Speaking of Edge docs, several new articles were added:

There were a few minor bug fixes too:

  • Fix for newer Clojure versions, which were broken in some setups, by only loading piggieback into ClojureScript projects (piggieback would otherwise bring in an ancient ClojureScript)

  • Remove the hard-coded "edge" from the test-all function

  • Prevent the dev-extras namespace from being unloaded when reloading initially fails

Aero

We did some major work on the internals of Aero. This restructure resolved all of the open bugs in Aero due to a new way of handling collections. We’re still testing it on some client projects, but initial results look good. If you have a complicated config.edn, we’d be interested in hearing if this pull request works for you.

The new approach enables a new kind of tagged literal in Aero config files; these behave more like macros. The built-in Aero conditionals like #or and #profile have been migrated to this new API. Thanks to this new API, deferreds can now be deprecated. Deferreds were a workaround for the lack of true conditionals in Aero, and they had severe limitations such as non-composability.
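
For readers who haven’t used Aero’s conditionals, here is a minimal, made-up config.edn using the built-in #profile, #or, and #env tagged literals (the keys and values are illustrative, not from a real project):

;; config.edn
{:port   #or [#env PORT 8080]
 :db-url #profile {:dev  "jdbc:postgresql://localhost/app_dev"
                   :prod #env DATABASE_URL}}

;; Reading it with a profile:
;; (require '[aero.core :as aero])
;; (aero/read-config "config.edn" {:profile :dev})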

There’s an alpha namespace available, which will allow you to create custom macros, but the API may still see some revisions.

With this work out of the way, it’s now possible to start looking into namespacing the tagged literals in Aero. Namespacing will be an opportunity to avoid breaking changes while improving behavior. For example #include does not throw if the file being included does not exist, but #aero/include may do so.

Permalink

defn #47 Tommi Reiman

Tommi joined us in Episode 47 to talk about several awesome open source libraries he built, about Metosin, and about the Finnish Clojure community!

Permalink

Podcast Is Happening

I’m five episodes in, so I think it is safe to announce that I have a podcast! No Manifestos is a podcast about people living with technology, because that describes everyone today. Whether you work in the sciences, business, or the software industry itself, everyone has to use and live with software. In each episode,…

Read the full article

Permalink

Grumpy chronicles: Pedestal and routing

As part of an ongoing experiment, I decided to update Grumpy to Pedestal. The main commit we’ll be discussing is here.

Long live middlewares

The biggest difference between Ring and Pedestal is interceptors instead of middlewares. I like the idea and sympathize with the reasoning behind it: more control over execution, which enables multithreading and async requests/responses. Not that I need those for a content site such as Grumpy, but it was fun to play with something different for a change.

Interceptors are inherently less elegant than middlewares: they are records, so you have to rely on the interceptor library to build them. That is a bit annoying, since now you have two types of things floating around: stuff that can be converted to interceptors, and interceptors themselves. They are not the same, and I mistook one for the other a few times.

Unlike middlewares, which are simple functions, interceptors do not naturally compose. That means you’ll need something to run them for you, so you’ll have to bring in the Pedestal dependency whenever you want to play with them.

I was also happy to find out that most interceptors are just standard Ring middlewares converted to the new style. Most of my stuff “just worked” without much conversion effort. E.g. the parameters for the Session middleware and the Session interceptor match perfectly, etc.

(def session
  (middlewares/session
    {:store (session.cookie/cookie-store
              {:key cookie-secret})
     :cookie-name "grumpy_session"
     :cookie-attrs session-cookie-attrs}))

Writing interceptors is as easy as writing middleware. The biggest difference is that you now accept a context map and return a context map, instead of a request/response as in Ring. The context has both the request and the response as keys, though.

(def force-user
  {:name ::force-user
   :enter
   (fn [ctx]
     (if-some [u grumpy/forced-user]
       (assoc-in ctx [:request :session :user] u)
       ctx))
   :leave
   (fn [ctx]
     (if-some [u grumpy/forced-user]
       (update ctx :response assoc
         :cookies {"grumpy_user"
                   (assoc user-cookie-attrs :value u)}
         :session {:user    u
                   :created (grumpy/now)})
       ctx))})

Being Clojure, Pedestal operates on loosely typed maps. This bestiary of map types was pretty helpful.

One of the problems with interceptors is that, despite being a great idea, they are not aiming to be a foundation for the next big Clojure web stack. For now, they are happy just being a part of the Pedestal package. Others are starting to build alternatives with slightly different semantics, which I’m not sure is a good thing. I mean, alternatives are great, but two versions of the same thing with almost the same contract, but not quite? That might lead to segmentation and confusion.

Starting server

For some reason there’s no documentation on how to start a server:

These are supposed to be links...

Even the Jetty page here is a placeholder.

From one of the guides you can figure out that the method you need is io.pedestal.http/create-server which also has an empty API doc:

The only clue here is that the argument is called “service-map”, so you head to a third documentation page, the one that’s easiest to find through Google, and finally you may have your answers:

Luckily, the server I wanted to use (Immutant) was on the list:

I’m not sure how hard it would be to use a custom Servlet container, for example. I have no idea what a “server function” might be, and it doesn’t seem to be documented anywhere.

Running the app

From there everything was smooth enough. The only annoyance was that production logs kept reporting a “Broken pipe” error quite often:

In full alignment with Clojure, stacktraces are enormous.

As far as I understand, this is an error that happens when the server tries to write to a socket that was effectively closed but not yet reported as closed. I’m not certain of the details, but I know enough to say that it happens quite reliably during normal site usage from a browser.

My point here is: why the heck is it reported as an error at all? If you have a web server, a client disconnecting is not something exceptional. It’s not even a warning! This is a normal operation that should not be reported to the logs at all unless specifically asked for. Imagine if the TCP stack logged each lost packet…

It was not easy to get rid of, either. Writing to the socket is something that happens after all user-defined interceptors have finished, so you can’t use your elaborate error-handling routine to suppress this. My first attempt at catching this error in the last interceptor understandably failed.

(defn suppress-error [name class message-re]
   (interceptor/interceptor
     {:name name
      :error
      (fn [ctx ^Throwable e]
        (let [cause (stacktrace/root-cause e)
              message (.getMessage cause)]
          (if (and (instance? class cause) (re-matches message-re message))
            (do
              (println "Ignoring" (type cause) "-" message)
              ctx)
            (assoc ctx :io.pedestal.interceptor.chain/error e))))}))

...

(update ::http/interceptors
  #(cons (suppress-error ::suppress-broken-pipe
           java.io.IOException #"Broken pipe") %))

(the last :leave/:error interceptor has to go first, because logic)

In any other language that would be game over. The error should be handled inside the framework, so it’s either a pull request, with hopes of getting it merged in the next six months in the best case, or cloning and running your own version.

But thank God we are coding in Clojure! Which means we can redefine anything, anywhere at runtime at no cost. Which is exactly what I did to monkey-patch Pedestal’s internals on the fly!

; Filtering out Broken pipe reporting
; io.pedestal.http.impl.servlet-interceptor/error-stylobate
(defn error-stylobate [{:keys [servlet-response] :as context} exception]
  (let [cause (stacktrace/root-cause exception)]
    (if (and (instance? IOException cause)
          (= "Broken pipe" (.getMessage cause)))
      (println "Ignoring java.io.IOException: Broken pipe")
      (io.pedestal.log/error
        :msg "error-stylobate triggered"
        :exception exception
        :context context))
    (@#'io.pedestal.http.impl.servlet-interceptor/leave-stylobate context)))


; io.pedestal.http.impl.servlet-interceptor/stylobate
(def stylobate
  (io.pedestal.interceptor/interceptor
    {:name ::stylobate
     :enter @#'io.pedestal.http.impl.servlet-interceptor/enter-stylobate
     :leave @#'io.pedestal.http.impl.servlet-interceptor/leave-stylobate
     :error error-stylobate}))

...

(with-redefs [io.pedestal.http.impl.servlet-interceptor/stylobate stylobate]
  (-> ...
    (http/create-server)
    (http/start)))

I also filed it to Pedestal upstream; we’ll see how it goes.

Routing

Pedestal comes with routing built in, which is a nice upgrade from Ring, which required you to look for a separate library to handle it.

Another welcome change coming from Compojure: in Compojure, you wrap routes in middlewares, which means middlewares apply before routing happens. That works fine unless you want a different set of middleware for different routes. It certainly can be made to work in Compojure as well, it’s just not the default. In Pedestal, routing happens first and everything else is configured per route.

One minor annoyance with Pedestal routes is that every route requires a unique name. I do agree that names, in general, are great, but not every little thing should have one. With routes, I believe, the method + path themselves make a great name. Forcing the user to invent something else just leads to obscure, arbitrary names.

Another thing, a big one. Route composition and overlapping routes. Imagine the following routes:

/post/:id
/post/create

The first one retrieves a post by id, where id could be anything. BUT if that anything happens to be the word "create", then the second route should be triggered instead.

But is it even correct? Personally, I see no problem here. It’s pretty clear what should happen; the only downside is that you can’t have a post with id == "create", which is an acceptable tradeoff for beautiful URLs.

Compojure can make it work but only if you manually order your routes. That is because /post/:id also matches /post/create, but not the other way around.
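
For illustration, here is roughly what that manual ordering looks like in Compojure (a sketch; the handler bodies are placeholders):

(require '[compojure.core :refer [defroutes GET]])

;; The more specific route must come first, or it will never be reached.
(defroutes post-routes
  (GET "/post/create" []   "render the create form")
  (GET "/post/:id"    [id] (str "render post " id)))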

The Compojure matcher is a linear matcher. It examines routes one by one. That works fine as long as you don’t care about matching performance and as long as you can put your routes in the correct order.

But linear matching breaks encapsulation. Imagine that the final app is built from multiple namespaces each providing their own set of routes. If you have something like this:

(ns A)
(def routes
  ["/post/:id" ...]
  ["/draft/create" ...])

(ns B)
(def routes
  ["/post/create" ...]
  ["/draft/:id" ...])

Then, no matter in which order you include A/routes and B/routes, one of the routes will be shadowed. It also makes changes to routes non-local, as adding a new route to one namespace might accidentally break something in a completely different namespace.

That’s the case against linear routing: slow performance and bad isolation. Happily, Pedestal comes with two more advanced routers that perform better by using tries instead of a linear scan to match routes. This is faster but unfortunately does not support our use case.

The map-tree router only supports static routes. If you need to pass any parameter, do it in the query string. I’m not sure anybody builds applications that way; it seems too limiting to me and not too beautiful. "/post?id=123&action=create" only because our matching algorithm happens to perform better? Nope. Not even once.

The prefix-tree router documentation says:

Wildcard routes always win over explicit paths in the same subtree. E.g., /path/:wild will always match, even if /path/user is defined

which is basically to say that overlapping routes are not supported at all. Better to throw an exception than to “silently win”, because if the user did supply a route that might also be matched as a wildcard, her intentions are pretty clear, and they are not “just ignore this for me, will you?”.

The third router that comes with Pedestal is a linear one, so we are back to ordering routes ourselves.

I was under the impression that reitit claimed to handle this exact problem. It turned out that the only thing they fixed was error reporting: the conflicting routes are now reported to the user. Well, that’s much better than the Pedestal approach, but it still leaves me with nothing.

So here’s what I do: I collect all the routes and sort them myself, then pass the result to the linear matcher. Still slow match performance (the same as Compojure, which I used before, though), but at least encapsulation is respected.

(defn compare-parts [[p1 & ps1] [p2 & ps2]]
  (cond
    (and (nil? p1) (nil? p2)) 0
    (= p1 p2) (compare-parts ps1 ps2)
    (= (type p1) (type p2)) (compare p1 p2)
    (nil? p1) -1
    (nil? p2) 1
    (string? p1) -1
    (string? p2) 1
    :else (compare p1 p2)))

(defn sort [routes]
  (sort-by :path-parts compare-parts routes))

...

{::http/routes
 (routes/sort (concat routes auth/routes authors/routes))}

I also wrote a little DSL that auto-generates route names (required by Pedestal) from the method and path, and allows nested collections of interceptors so that I can add multiple interceptors the same way I add one. Instead of this:

(io.pedestal.http.route/expand-routes
  #{{}
    ["/forbidden" :get :route-name :forbidden (vec (concat populate-session [route/query-params handle-forbidden]))]
    ...})

I can write

(grumpy.routes/expand
  [:get "/forbidden" populate-session route/query-params handle-forbidden]
  ...)

(in both cases populate-session is a vector of multiple interceptors that are convenient to use together).

I’m quite happy with the usability of this, but I would like the performance of a trie matcher too. Maybe one day I’ll release my own opinionated router.

Conclusion

Using Pedestal is no harder than Ring. Some included batteries are a welcome addition (the router), some edges are rough (unnecessary exceptions), and documentation is scarce. Unfortunately, I didn’t get to the async requests where the Pedestal model is supposed to shine, as the Grumpy website has no use for those. Overall an okay experience, but not a deal-breaker for me. I would love to see the Clojure community gather around one true interceptor model, but that doesn’t seem to be happening quite yet.

Permalink

We make information systems

I often have to remind myself that we make information systems when we are programming for a business. My natural tendency, due to years of education, is to begin by creating a simulation. But this is not what we need to do. Instead of simulating a shopper going through a store, we need to model the information transfers that were captured in the pre-digital information system of pen-and-paper inventory and order management.

Transcript

Eric Normand: We make information systems, and we shouldn’t forget that. Hi, my name is Eric Normand. I help people thrive with functional programming. This is going to be one of those thought pieces, I guess, less of a tutorial.

I want to talk about what I was taught in school, what I see other people being taught, and my opinions on some of that. Often, when we’re dealing with modeling a problem, we’re taught to think about the actors in the system. These are the entities that act out the drama in the situation, in the simulation.

The example would be if you’re buying things at a grocery store. There’s the shopper. There’s the apples, oranges, and bread that they’re buying. Those we’re taught to model in our code. You would make a shopper class. You would make a product class. Then you’d subclass it with orange, apple, and bread classes.

Then you would have some kind of methods on them. If you have a scenario like “the shopper buys bread”, a use case that you have to implement, you have to decide: you might put the buy method on the bread, or you might put it on the shopper. It depends.

That’s where the debate is, “Where do you put it? Who’s doing the shop? Who’s the noun? Who’s the verb? Who’s the subject? Who’s the object?” I’ve done those systems. I’ve done them as assignments. Unfortunately, I’ve done it so many times that it’s the way I think when I’m working on side projects and things.

That’s the real issue. I’m struggling to get to the meat of this and not be condescending, because I do think that this is a problem with the way we’re taught and the way the examples are in our books, the examples our professors use.

We’re not modeling. When you get to the real world and you’re working on a business, let’s say, you’re working on a supermarket, and they want you to code up their system. You’re not actually going to be coding up a simulation between a shopper and a broccoli.

That’s not what is happening. What you should be modeling, what you should be implementing, is the information system of that store. If it’s already a successful store, before they’re digitized, before they’re computerized, they’ve probably already got information systems in place.

It might be paper and pencil. It might be in somebody’s head, but it’s already a system where they’re writing down the inventory at the end of the day. They keep track of their sales. They keep track of how much money they think they should have in their cash registers. Maybe they’re figuring out how many tomatoes they need to buy the next time they order tomatoes. That is what we’re modeling.

We’re not modeling the situation of the shopper picking up the broccoli and carrying it to the checkout counter. That almost doesn’t even need to be modeled unless you’re in a totally virtual environment. We’ll get to that. We’re modeling the information system. We’re modeling the transactions of money and product.

There’s shippers coming in, bringing truckloads of food. We’re tracking that. We’re tracking how long it takes for that food to leave the shelves. We’re tracking how much storage space we need to keep our inventory. We’re tracking all the employees and how much we have to pay them and how much profit we’re making, how much revenue.

These are the things that are part of the information system. They’re left out typically when we are taught to think about a problem in software. I just want to reiterate that. We model the information systems, and we often forget that. We treat these exercises as telling us something about how supermarkets work when that’s not what’s going on.

I don’t know if I have anything more to say about this except that I do believe that the way that I’ve been teaching functional thinking, which is to divide things into actions, calculations, and data, can help you see that more clearly.

You see where data is being stored, sent from one place to another, and captured by devices. You think about calculations, what computations are being done. Then you think about actions, especially essential actions, things that have to happen, not incidental ones like storing stuff in a global variable, which doesn’t have to happen even though it is an action.

Say, making a particular computerized order to another fulfillment center: that is an action that you want to happen. You can’t get around it. What are the actions that have to take place?

By doing that, you get a little bit closer to figuring out the pieces that you need to implement in your system. It’s not enough. That’s just the start of the analysis, but it’s somewhat closer than what I consider the…

I’m trying to be so nice here, so forgive me. [laughs] I would not normally be so generous with this stuff. If you caught me alone and bought me a drink or something, I probably would not be so generous.

We’re taught to model things in terms of nouns and verbs, or in terms of type hierarchies, you might say, inheritance hierarchies. But we’re not modeling those aspects of the world. We’re modeling the information system.

Another example — I literally had this exercise in my school. We were supposed to model a university registration system. You have a student class. You have a course class. The whole exercise was about figuring out where to put the methods and how to make it so that the student knows what courses it’s registered for, and the course knows what students are registered for it.

The state of the course has to be maintained in sync with the state of all the students. The problem is, it’s the same fallacy. What you’re simulating is students registering themselves into courses instead of simulating or modeling the registration book or the computer system that exists at the university. That is what you should be doing.

Basically, you just need a single table that joins the student ID with the course ID. Putting some kind of object-oriented front on top of that, like it’s some kind of simulation, where the student calls register on the course and passes itself as an argument, is kind of a farce.
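
As a rough sketch of that information-centric model in Clojure (a hypothetical example, not from the episode), the whole registration book can be a single immutable relation, with pure functions querying it from either side:

;; The registration book: one relation joining student IDs to course IDs.
(def registrations
  #{{:student-id 101 :course-id "CS-201"}
    {:student-id 102 :course-id "CS-201"}
    {:student-id 101 :course-id "MATH-140"}})

(defn courses-for [registrations student-id]
  (set (map :course-id
            (filter #(= student-id (:student-id %)) registrations))))

(defn students-in [registrations course-id]
  (set (map :student-id
            (filter #(= course-id (:course-id %)) registrations))))

There is no question of whether “register” belongs on the student or the course; registering is just adding a map to the set.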

Just don’t forget, we’re modeling the information systems, not this interplay of different entities. I hope this wasn’t too much of a downer.

If you want to listen to more episodes, you can go to lispcast.com/podcast. You’ll see a list of all the past, present, and future, and multiverse episodes. They each have audio, video, and text transcripts. You can pick the way you like to consume the content.

There are also links to subscribe and to find me on social media, including email, Twitter, and LinkedIn.

I love to get into discussions, so fire away those discussions, comments, questions, complaints, anything. We’ll talk about it. Awesome. Rock on.

The post We make information systems appeared first on LispCast.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.