Software Engineer (Haskell, Full Stack) at Capital Match (Full-time)

About Us: Capital Match is a leading P2P lending platform based in Singapore. Founded in 2014, it is backed by an alternative investment management firm with more than US$5 bn AUM. We are looking for experienced developers to lead our tech growth in the Fintech space, expand into surrounding countries, and develop new products on the platform.

Job Description: We are inviting developers with a minimum of 5 years of coding experience. The candidate should have functional programming experience as well as experience developing server and web applications. An interest in all aspects of the creation, growth, and operation of a secure web-based platform (front-to-back feature development, distributed deployment and automation in the cloud, and build and test automation) is highly desirable. A background in fintech, especially the lending space, would be an advantage (but is not essential).

Job Requirements: Our platform is primarily developed in Haskell with a ClojureScript frontend. Candidates should ideally have production experience with Haskell, or strong experience with at least one other functional programming language (for example, OCaml, F#, Scala, Clojure, Lisp, or Erlang).

We use Docker containers and standard cloud infrastructure systems to manage our production rollouts, so familiarity with Linux systems, command-line environments and cloud-based deployments is highly desirable. Exposure to and understanding of XP practices such as TDD, CI, Emergent Design, Refactoring, Peer Review and Continuous Improvement is highly desirable.

Offer: We offer a combination of salary and equity depending on the experience and skills of the candidate. Most expats who relocate to Singapore do not have to pay their home country's taxes, and the local tax rate in Singapore is roughly 5%. Visa sponsorship will be provided. Singapore is a great place to live: a vibrant city with diverse cultures, a very strong financial sector, and a central location in Southeast Asia.

www.capital-match.com

Get information on how to apply for this position.

Permalink

Modern Front-End Development with Ruby on Rails and Puppies

Update: I'm taking this book off the market for the moment. I never finished the Elm chapter because of some health problems I encountered, and the guilt about that is quite frustrating.

If you've found it baffling, stressful, or confusing to get started with modern JavaScript development, I have an easy solution for you. I wrote a book which not only walks you through the process, but does so with an adorable, reassuring puppy on every page.

The first, beta edition of my new book is now available. It costs $41. Here are a few sample pages.

This book has a lot more color and design than the average tech book. I did a lot of production work for ad agencies and design shops before I became a developer, and I did a little design work, too. I'm not going to lie, I'm a little bit rusty, but I think tech books should be fun to look at, especially when they're about the front end. I was also inspired by Lea Verou's terrific book CSS Secrets.

For related reasons, the code in this book uses syntax highlighting. I think syntax highlighting is important in tech books. For a lot of programmers, it's rare on a day-to-day basis to see code that doesn't have syntax highlighting. So if you're used to seeing highlighted code, and a book shows you code without it, you're actually getting less information from the code than normal. Books on code should give you more context, not less. And if you're a newbie buying my book, I want you to get a realistic experience of what being a programmer is like. I can understand why books printed on physical paper in black and white don't use syntax highlighting, because ink costs money, but if you're publishing a PDF — and I am — then syntax highlighting really ought to be a given.

The book also comes with two git repos, because the repos allow you to follow along with my code as you go through the book. If you've ever done this with another book and found out that you had to figure out what the author forgot to tell you in order to make their code work, guess what? Non-issue when the author gives you a git repo. If they forgot to mention something, you just run a git show or a git diff to get the exact changes.
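For example (a hypothetical invocation; these are standard git commands, but the commit you'd inspect depends on the step you're reading):

git log --oneline    # find the commit for the step you're on
git show HEAD        # print exactly what that commit changed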

Plus, a lot of code books will give you code you can download, but they just give you the finished product, or a bare-bones sandbox to play in. A code base isn't a finished product; it's a continuum over time, and with a git repo, it's a carefully indexed continuum over time. I think that's an important aspect of making code clear, so I did all my code-related research in other repos first, and then built these repos so that they would themselves be easy to read.

Also, to continue the theme that I think a tech book should make use of modern technology, the book has about 200 links, in its current state. An upcoming chapter adds 27 more, and future chapters will add more still. So, if at any point you want more detail, there are plenty of opportunities to dive deeper.

What I'm putting on sale today is v0.1, and I'm expecting to do a few updates before I call it 1.0. Despite that, this early-release version has a lot to it. There are the two repos I mentioned: a small repo to help you understand a new CSS standard called Flexbox, and the main example app for the book. The book covers the example app's step-by-step journey from a front-end based on jQuery and CoffeeScript to one built using React, ES6, and JSX. The book also features a puppy on every page.

The example app is a simple flash card app for learning and studying written Japanese. These pages feature screenshots:

This app is simple enough that I can cover every change in detail, but complex enough that it can drive some investigation of different strategies for the front end. In some ways, it's like reading ToDoMVC implementations, except there's an explanation for every decision, and these decisions get compared to each other. I use a 12-point checklist to compare different front-end strategies. (I trim that checklist down for some chapters, however; for instance, the Webpack chapter is about infrastructure, so it skips some unrelated considerations.)

The existing chapters cover setting up this app as a vanilla Rails app with jQuery and CoffeeScript, translating it to ES5 — because jQuery's features are basically built into every browser at this point, and using the language instead of a library has much better performance characteristics — then translating it again to ES6 via Babel, adding CSS styles (including Flexbox), replacing the asset pipeline with Webpack, and rebuilding the app in React. There's also an experimental ES8 feature. Every step in the process is clearly explained, and there's a GitHub repo where you can file issues if you encounter errors, typos, or ambiguities.
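To give a feel for the jQuery-to-vanilla-ES5 translation (a hypothetical snippet; the selector and class names are made up and not from the book):

// jQuery
$('.card').addClass('flipped');

// Vanilla ES5: the same capability is built into the browser
var cards = document.querySelectorAll('.card');
for (var i = 0; i < cards.length; i++) {
  cards[i].classList.add('flipped');
}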

Although I haven't set up a formal roadmap, I have a pretty strong idea what version 0.2 will look like. I've already got the next chapter written. In it, I translate the React app into an Om app, using ClojureScript. I've also translated that app again into an Elm app. I've got the code for that, and I'll be writing an Elm chapter around it soon. That's probably what you get in version 0.3.

(The Om/ClojureScript code is included in the repo, if you want to skim ahead, but I'm expecting to release version 0.2 pretty soon anyway.) (Update: The version 0.2 release just went out!)

To be clear, if you buy now, you get the new versions as the book is updated, unless I ever release a 2.0 or higher version. Buying the 0.1 version means you get all the 0.x and 1.x versions free. You might get grandfathered in for a 2.0 version also, but you might not; I want to leave that open, because it's too far off for me to make any realistic plans around it.

Of course, if you're not satisfied for any reason, just let me know and I'll happily give you a full refund. You literally don't even have to give me a reason. I just want my customers to be glad they bought from me.

Permalink

Provisioning a remote server with Ansible

If you've ever manually provisioned a server, you know the feeling of excitement once you're finished and your application is running on a remote machine. If you've ever had to provision two servers identically, you know the feeling of dread at getting it exactly right the second time. Thankfully, tools like Ansible exist to help us provision multiple servers in exactly the same way. Today, I'll walk you through provisioning a remote machine using Ansible.

Install Ansible

If you haven't yet, you'll want to install Ansible on your local (non-remote) machine. Ansible is installed via pip, so if you don't have pip installed, you'll want to install that as well. And pip is installed via easy_install, so we should probably just start there:

  • If necessary, run easy_install pip.
  • If necessary, run pip install ansible.

If that ran successfully, then you should now have Ansible installed! This gives us access to ansible and ansible-playbook, which we'll use later on in this tutorial.
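To confirm the install, you can check that both executables are on your path:

ansible --version
ansible-playbook --version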

Getting access to a remote server

We'll be provisioning an Ubuntu 16.04 LTS server today, and to do that, we'll create one using Digital Ocean. For those who have never used it, Digital Ocean is a great way to get access to inexpensive virtual machines that you have full control over! That said, if you'd prefer another provider, you can use it; this tutorial should be relatively provider-agnostic.

With Digital Ocean, you'll want to start by creating an account. Then you'll be able to create a Droplet, which is what we want to do. After clicking "Create a Droplet", you should see something like this:

Like I said before, we'll be creating an Ubuntu 16.04 server, so be sure to choose that. As you scroll down, you'll see more options for your droplet.

Here you can choose the standard specs of your VM. Feel free to pick whatever is cost-effective for you. For this tutorial, choosing between the $5/mo and the $640/mo options won't make a difference.

Finally, you'll want to scroll down and hit "Create." Great! Now we can get started on our Ansible work. Just make sure you grab the IP address of your new droplet!

Our First Ansible Playbook

Ansible is built around the idea of "playbooks". A playbook is essentially the policy that you want your provisioned servers to follow. We'll cover a few common scenarios with our playbook: namely, we'll write our own role, use a community-built role, and set up an inventory (a target to provision).

Let's start with the folder structure, though.

You'll want to create the following files (in your project or elsewhere):

  • ./provisioning/inventories/prod/hosts
  • ./provisioning/roles/common/tasks/main.yml
  • ./provisioning/requirements.yml
  • ./provisioning/setup.yml

  • ./provisioning/inventories/prod/hosts defines a target for our servers. We'll specify a few defaults and build a quick inventory. We won't go in depth on this part, as we're only configuring a single server.

  • ./provisioning/roles/common/tasks/main.yml will be where we write our custom role, called common. You can put whatever you want here, or even give the role a better name (and you probably should!).

  • ./provisioning/requirements.yml is our Ansible dependencies file. Think of this as a package.json, project.clj or any other file that manages dependencies.

  • ./provisioning/setup.yml is our playbook. This is our entry point for Ansible and will be where we orchestrate our roles against inventories as well as define config values.

Building an Inventory

We'll start by building our inventory so we don't lose that IP address that's hanging out in our copy/paste buffer. Open ./provisioning/inventories/prod/hosts and add the following lines (substituting your droplet's IP address):

default ansible_ssh_host=192.168.1.101 ansible_ssh_user=root

[webservers]
default  

This sets up our inventory for prod, with defaults for our IP and the user to run commands as. After creating those defaults, we define a group called webservers and apply the defaults to that server. There are better ways to build inventories when you're provisioning multiple machines at once, but for now, this works for us! (See the sketch below.)
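For example, a hypothetical multi-host inventory in the same INI format (the addresses here are made up) might look like:

[webservers]
web1 ansible_ssh_host=192.0.2.10 ansible_ssh_user=root
web2 ansible_ssh_host=192.0.2.11 ansible_ssh_user=root

[dbservers]
db1 ansible_ssh_host=192.0.2.20 ansible_ssh_user=root

Each group can then be targeted separately via the hosts: field of a playbook.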

Building a Task

Ansible tasks can do quite a lot in one go. For simplicity, we have all of our tasks in ./provisioning/roles/common/tasks/main.yml. We'll accomplish a lot with just a little in this role: namely, we'll install Java 8, set our Java environment variables, add wget, vim, and curl, and finally download and install Leiningen and Boot. Here's what our tasks look like:

---
- name: Add Java8 repo to apt
  apt_repository: repo='ppa:openjdk-r/ppa'
  tags:
    - install
  become: yes
  become_method: sudo

- name: Add WebUpd8 apt repo
  apt_repository: repo='ppa:webupd8team/java'

- name: Accept Java license
  debconf: name=oracle-java8-installer question='shared/accepted-oracle-license-v1-1' value=true vtype=select

- name: Update apt cache
  apt: update_cache=yes

- name: Install Java 8
  apt: name=oracle-java8-installer state=latest

- name: Set Java environment variables
  apt: name=oracle-java8-set-default state=latest

- name: System Setup
  apt: pkg="{{ item }}" state=installed update_cache=yes
  with_items:
    - wget
    - vim
    - curl
  tags:
    - install
  become: yes
  become_method: sudo

- name: set user bin directory
  set_fact:
    user_bin_dir: /usr/bin

- name: download leiningen
  get_url:
    url:  https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein
    dest: "{{ user_bin_dir }}"

- name: add executable permission for lein script
  file:
    path: "{{ user_bin_dir }}/lein"
    mode: "a+x"
  become: yes
  become_method: sudo

- name: Install Clj-Boot
  shell: bash -c "cd /usr/local/bin && curl -fsSLo boot https://github.com/boot-clj/boot-bin/releases/download/latest/boot.sh && chmod 755 boot"
  become: yes
  become_method: sudo

Ansible ships with a lot of configurable modules that you can leverage, such as get_url or shell as shown in the example above, but the real power of Ansible comes from leveraging community-built roles to ensure you don't reinvent the wheel.

Leveraging Community-Built Tasks

Realistically, if you want Postgres installed on a server, there are only so many permutations of an installation you could want. This is where well-built community roles come into play. Add the following to your ./provisioning/requirements.yml file, and I'll explain it in a moment:

---
- src: ANXS.postgresql
- src: geerlingguy.nodejs

Alright, this is pretty straightforward. We're defining a few requirements from Ansible Galaxy. We can download these requirements and run them as if we had written the tasks ourselves. In this case, we're depending on a Postgres role and a Node.js role. You can find these roles, and more like them, on Ansible Galaxy.

Let's go ahead and run ansible-galaxy install -r provisioning/requirements.yml. This will pull the roles down from Ansible Galaxy and make them available for use.

Writing our Playbook

Finally, it's time to wire all of this together with our playbook. I generally don't have many playbooks for an application, so I call mine setup.yml, but you could call yours dev.yml or prod.yml or what have you. Let's open ./provisioning/setup.yml and add the following:

---
# This playbook handles setting up Clojure and Postgres

- name: install java, clojure, and devtools
  hosts: all
  remote_user: root
  become: yes
  become_method: sudo
  vars:
    database_name: 'dev'
    database_user: 'dev_user'
    database_password: 'dev_pass'
    database_hostname: 'localhost'

  roles:
    - common

    - role: ANXS.postgresql
      postgresql_version: 9.6
      postgresql_databases:
      - name: "{{database_name}}"
      postgresql_users:
      - name: "{{database_user}}"
        pass: "{{database_password}}"
        encrypted: no
      postgresql_user_privileges:
      - name: "{{database_user}}"
        db: "{{database_name}}"
        priv: "ALL"

    - role: geerlingguy.nodejs

Now we've defined our playbook and declared some vars. We've also specified hosts: all, meaning this playbook applies to all hosts; we could have targeted the webservers group we defined earlier, but for our purposes we're just provisioning a single host. Then we have our roles declared below that: the common role we wrote, plus the two from Ansible Galaxy. You'll notice that we're providing a lot of config options to the Postgres role, but none to the Node.js role. That's down to how each role is built; the Node.js role's defaults are acceptable as-is.

Running our playbook against our Server

A note about Ansible on Ubuntu 16.04 LTS: you'll need to SSH into your server and run sudo apt install python-minimal, because Ubuntu 16.04 ships without the Python 2 interpreter that Ansible expects to find on the target machine.
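If you'd rather not SSH in by hand, one alternative (a sketch, not part of the original setup) is a tiny bootstrap playbook that uses Ansible's raw module, which runs commands over SSH without needing Python on the target:

---
- name: bootstrap python for ansible
  hosts: all
  gather_facts: no
  remote_user: root
  tasks:
    - name: install python 2 so ansible modules can run
      raw: apt-get update && apt-get install -y python-minimal

You'd run this once, before setup.yml, with the same --inventory flag.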

To run our newly built playbook against our server, we'll simply run ansible-playbook --inventory=provisioning/inventories/prod provisioning/setup.yml. That should do it! After Ansible runs, you'll have a new server with Postgres, Node, Java 8, and Leiningen installed! You can SSH into the machine to check that everything is running as well.

Permalink

The REPL

Clojurists Together, MAGIC, and a dose of Clojure

Notes.

The big news for last week, at least from my perspective, was launching Clojurists Together. This is a project designed to support open source Clojure software and keep it sustainable by paying maintainers. We've already had quite a few individuals sign up, but not too many companies yet. If you work at a company that uses Clojure, please consider signing your company up for a Company Membership.

-main

Libraries & Books.

  • Compound is a micro structure for re-frame data

People are worried about Types.

Foundations.

Tools.

Recent Developments.

  • ASYNC-119: Combine the CLJS core.async macros namespace into the main core.async namespace. ASYNC-142: Rename the CLJS core.async namespaces to match the Clojure ones. Resolving these two should make it a lot easier to write cross-platform core.async code.

Learning.

Misc.

Copyright © 2017 Daniel Compton, All rights reserved.



Permalink

All The Cool Kids Are Doing It

Blockchain...

The first time I heard of it was in relation to a cryptocurrency. The second time, another cryptocurrency. And then, several times later, I heard of it in the context of a "decentralized internet" project called LBRY, of which I am an early user.

I've lost count of the number of times I've heard of it since.

Now, I recognize that this word represents some hugely vital concept in programming, which I won't even begin to pretend I understand. I'm not commenting on the viability or usefulness of blockchain for any given project. But eight years studying communication in this industry has taught me how to spot fads... and this one just hatched.

"Fad" is a pretty loaded word, so let me slam on the brakes and explain myself. There are several brilliant innovations I'd classify as technology fads, but never on the basis of their technical details. An innovation only becomes a technology fad when we start mass-adopting it on any basis other than its own merit!

I can point to three other distinct technology fads of today.

For example, look at Javascript, a language whose design is widely criticized in many circles, not entirely without merit. As much as I personally dislike Javascript, I'll grant that it presently has a solid use case in web development, specifically in manipulating HTML and CSS to create smooth, interactive web experiences.

Yet it seems that a major sector of the industry is trying to replace the entire technology stack with Javascript. Applications! Data processing! GUIs! There's nothing it can't do!...or so this crazed bunch tells us. But if we're honest, Javascript is not well suited to these use cases, because no matter how many magical frameworks we pile on it, Javascript was designed for web interactivity. We're now sitting on a volatile pile of fragile spaghetti code, just waiting for an IT to sneeze before it all falls apart.

This isn't Javascript's fault. We're just trying to use it to replace C++, Python, Rust, Ruby, Perl, R, Haskell, Go, and the rest of the gang. We've lost sight of what Javascript is supposed to be. By the same rule, we can't really use any of those other languages to truly replace Javascript, because they weren't built for the same purpose!

Languages need to know why they exist, and stick with those reasons. Writing a statistical computing program in C++ is (usually) as silly as building a game engine in R. You could probably manage to pull off both, but you'd be wasting a lot of energy and time for a suboptimal result.

In other words, Javascript's problem can be summarized in the words of poet John Lydgate: "You can please some of the people all of the time, or all of the people some of the time, but you can't please all of the people all of the time."

If we move away from language debates, we won't go far before we come across some mention of the cloud. It wasn't long after this mystical land of water vapor and data was first mentioned that some astute IT pointed out...

The cloud is just someone else's computer.

The cloud offers some very neat innovations. My company's website migrated from traditional hosting to a Linode VPS, and we couldn't be happier. Some software and services are empowered by an intelligent use of cloud computing and VPS technology (two different concepts that we keep lumping under the same cover-all name). However, the cloud doesn't always have a silver lining.

One online friend described how his company's CTO decided to migrate their entire infrastructure to "The Cloud," with little more explanation than some empty tech babble he no doubt picked up by skimming an article on Slashdot. His plan held less water than the actual clouds over the tech office, but he was obliviously plowing forward with his plan, deaf to the warning cries from half his IT staff. He was insisting on migrating an entire, humming, live infrastructure, databases and all, from their in-house servers to AWS because he had heard the siren-song of The Cloud. The functional infrastructure was to be dismantled, and rebuilt in AWS, with no gain in functionality, a significantly higher operating cost, and years of fixing the problems that come from hammering a square peg into a round hole. And all the IT department wept.

I've also watched programmers "solve" problems in relatively minor app projects by using "The Cloud" when local solutions were faster, cheaper, and more efficient. This included one case (I forget the technical specs) where a college senior project set up no fewer than three cloud-based microservices for a smartphone app that, in retrospect, didn't really need any.

I can still hear the marketing exec who dreamt up the term "the cloud" cackling on his way to the bank.

Finally, we have neural networks. Again, I'm not going to claim expertise here, although I'll admit that I get a little excited seeing some of the stuff they manage with these! AI can pull off some amazing feats with neural networks and machine learning.

However, I'd also classify neural networks as a fad. I can be found lurking on Freenode IRC nearly 10 hours a day, six days a week, and I can't tell you how many times I heard people mention using neural networks for purposes they are entirely unsuited for. It was as if someone mentioned it on Reddit, and now every basement programmer wants to implement their own.

Of the three, this one seems to be losing steam very quickly. Perhaps it is that you can't get very deep into implementing machine learning before you realize you're in over your head, and you could achieve just as good a result in your Yet Another Number Guessing Game with a simpler algorithm.

I think blockchain may be poised to replace neural networks in the technology fad rankings, and it makes me sad. There are a lot of good ideas that we need to be pursuing with this technology, but after the hordes of fair-weather fans finish with it, the tech world will be left with ringing eardrums and a bad taste in its mouth. When that day comes, it will take a brave soul to even suggest blockchain, until the technology fades into a sunset of obscurity.

You may be shaking your head at me. "These aren't fads," you might say, "and even if they are, the damage won't be that bad."

Allow me to remind you of a few fads of yesterday.

Java. Hadoop. Wordpress. Python. Joomla. Shockwave Flash. Extreme programming. Singletons. HTML iframes. Macros. Goto statements.

All of these have uses. There are, or were, proper applications for each. But the frenzied application of these technologies to every problem imaginable wore off the shine and finish, leaving them with a haze of knee-jerk repulsion. Python had to gain a whole new version to regain some of its lost relevance.

Now we have a whole new batch of ideas and technologies, some of which are already suffering the damage of fad status.

Cloud Computing. Neural Networks. IoT. Go. SaaS. Rust. Docker. Haskell. Javascript.

And blockchain.

Our innovations deserve better treatment than this. Please, for the love of all the shiny new technologies, as well as the old reliable ones, follow these three simple rules:

  1. Know why the technology exists, and generally use it only for that purpose. We can certainly explore innovative uses of technologies, but be careful not to force a square peg into a round hole, nor to make one technology do everything.

  2. Choose technology solely on its merit in the context of your project, and never on the basis of its trendiness, popularity, or lack thereof. FORTRAN may well bear consideration in the same breath as Clojure.

  3. Combat fad-based technology decisions. Have a broad understanding of technologies old and new, and be generous with this knowledge. When you spot someone building a desktop solitaire game in Node.js using a neural network, remind them that Python and basic conditional statements are things.

Working together, we might just prevent the day that some technical whitepaper bears the ominous title "Blockchain Considered Harmful".

What are some fad horror stories you've encountered?

Permalink

Elixir Code Quality Tools

My Clojure Code Quality Tools post remains one of the more popular articles on this blog. Since then, I’ve been writing a lot more Elixir code. I thought it’d be fun to write a similar post on what to use with the Elixir programming language.

By default, Elixir does a good job warning you about unmatched patterns and unused functions, as well as giving you deprecation warnings. But as you learn and progress in your mastery of Elixir, you’ll want more feedback. That’s where code quality tools come in.

As indicated in the original post, tools should help us follow best practices. The top issues I run into with Elixir code are checking my function signatures and pattern matching to ensure I’ve caught all cases, enforcing good style, and checking my code coverage. This post introduces the tools that are now a part of my Elixir workflow, depending on the project.

This post will have fewer examples than the last, but you can refer to each project’s README to help you get started. Instead, I’ll cover why you might want to use each of these tools and what problems they solve.

doctests

Built into Elixir itself, doctests are a simple way to keep usage examples right where your function is defined, in the source. They show an example call from an iex prompt along with the expected return value. When you set up a simple test file in your suite to run doctests, it will automatically check that the examples match the actual function call responses. All it takes is a test file such as:

defmodule Alohomora.ResourceTest do
  use ExUnit.Case, async: true
  doctest Alohomora.Resource
end

This will test the doctests of functions in your Alohomora.Resource module. Best of all, there’s nothing extra to add – these tests run when you run mix test.
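For context, here’s what a doctest looks like in the source. This is a hypothetical function (greet/1 is made up for illustration; only the module name comes from the example above):

defmodule Alohomora.Resource do
  @doc """
  Builds a greeting.

      iex> Alohomora.Resource.greet("world")
      "hello, world"
  """
  def greet(name), do: "hello, " <> name
end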

dialyxir

This library is a set of mix tasks on top of the popular Dialyzer tool for Erlang. Dialyzer, if you haven’t encountered it yet, is a DIscrepancy AnaLYZer for ERlang programs. That is to say, it finds dead code, type issues, and unnecessary tests. In my experience, it mostly finds unreachable branches and tuple returns that aren’t matched by a pattern where they’re used. It will also find functions that aren’t defined for the number of parameters you’re passing, though it seems to have trouble with certain libraries I use – those libraries do in fact define functions with that signature.

To really get the power of Dialyzer, you’ll want to start including typespecs in your source.
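A typespec is just an annotation above the function head. A minimal sketch (a hypothetical module, not from any real project):

defmodule MyApp.Users do
  # Dialyzer checks call sites and return values against this spec.
  @spec fetch_user(integer()) :: {:ok, map()} | {:error, :not_found}
  def fetch_user(1), do: {:ok, %{id: 1, name: "Ada"}}
  def fetch_user(_id), do: {:error, :not_found}
end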

dogma and credo

Style guides help a team to write consistent code, more than they specify architecture patterns or idiomatic solutions. If you want to start applying a style guide to your Elixir code, look no further than dogma. As indicated on its README, “It’s highly configurable so you can adjust it to fit your style guide, but comes with a sane set of defaults so for most people it should just work out-of-the-box.” I’d recommend starting with its default style guide and seeing what it yells about.

The kinds of things that dogma will tell you about are lines too long, trailing whitespace, and so on. You’ve probably seen similar output if you use a tool like rubocop in Ruby.

Credo has some overlap with dogma, but it does far more. This tool is more concerned with code smells. Some examples are checking function complexity, negated conditionals, and idiomatic ways to format your pipe operators and one-line functions. You can read up on the credo docs to understand everything it checks for.

Ultimately, you may feel like you only need one or the other. For myself, I use credo with mix credo --strict.

ExCoveralls

Code coverage in your tests is an important metric to keep track of. While striving for 100% code coverage isn’t always worth it, it is good to know whether you’re exercising the code you think you are, and ensuring that all edge cases and error cases that you coded for have been checked with a test.

Installing ExCoveralls in your project will allow you to print out your code coverage for each module to the shell, as HTML, or to post the code coverage to Coveralls.io. ExCoveralls also supports being run from a few different CI tools including CircleCI, Semaphore, and Travis. If you don’t need all this, you could look into ExCov or simply run mix test --cover – although I find this built-in mix task’s output to be the least useful.
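Getting set up is a dependency plus a bit of mix.exs configuration; roughly (a sketch following the ExCoveralls README – check it for the current version number):

def project do
  [
    app: :my_app,
    version: "0.1.0",
    test_coverage: [tool: ExCoveralls],
    preferred_cli_env: [coveralls: :test, "coveralls.html": :test],
    deps: deps()
  ]
end

defp deps do
  [{:excoveralls, "~> 0.7", only: :test}]
end

After that, mix coveralls prints per-module coverage to the shell, and mix coveralls.html writes an HTML report.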

The Erlang observer GUI

This one is built in; simply launch it from your iex terminal:

iex> :observer.start

And it will launch a GUI tool. The GUI has charts of memory and processing done, as well as a graph of application supervisor trees. Honestly, there’s a lot more in this tool than I’ve had time to explore and learn about.

mix profile.fprof

This is another built-in tool that you can use. Profiling might not be the first tool you need when developing, but when you want to understand and trace what your code is doing, the fprof tool’s output can be incredibly handy. Make sure you run it with MIX_ENV=prod. It is also possible to wrap fprof to profile slow test cases, which I have found useful in the past. There are other libraries you can install for profiling and tracing Elixir, but so far, mix profile.fprof has worked well enough for me.
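Usage is a one-liner; for example (MyApp.slow_function/0 is a hypothetical function):

MIX_ENV=prod mix profile.fprof -e "MyApp.slow_function()"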

Final Thoughts

This post covered several tools for your Elixir workflow. If you’ve been working in Elixir for a while, you may already be using them. And there are probably more tools out there that I don’t know about.

Let me know if you found this blog post useful, or if you have any other tools to recommend!

Permalink

PurelyFunctional.tv Newsletter 252: Brand, Distributed, Feelings

Issue 252 – November 20, 2017

Hi Clojurists,

Last week I opened up the Early Access Sale for Understanding Re-frame. You can get 60% off the unfinished course, with access to all new lessons as they come out. Of course, members get access to everything already. Add Understanding Re-frame to the cart, use coupon code REFRAMEPEAP when you’re checking out, and enjoy the new checkmark functionality 🙂 The coupon will expire soon.

I had one free lesson in the course to give a feel for what you’d learn: an overview of the whole Re-frame stack. But because the course is over four hours already (and growing), I decided to make a second lesson free. This one is an overview of the parts of Re-frame.

Please enjoy the issue.

Rock on!
Eric Normand <eric@purelyfunctional.tv>

PS Want to get this in your email? Subscribe!


datawalk

Have you ever found yourself printing large maps, trying to find the nested data you’re looking for? Egg Syntax has created a tool to help you quickly explore complex, nested data structures at the REPL.


Why a Personal Brand is Necessary For Today’s Developer

Neha Batra explains why a brand is important and how to go about building one.


A guide to distributed work

I worked remotely for many years. This article by Jake McCrary has some good guidance.


Top Ten Reasons to Attend Clojure SYNC

Well, I wrote out these reasons. Which is most compelling to you?


Relaunching Clojureverse

Arne Brasseur has been working on making his Discourse forum into a place to build a strong community. I’ve never been one for online discussions, but I think I’ll give Arne’s site a try. I really trust his leadership on this. A community site couldn’t be in better hands.


Some thoughts and feelings on RubyConf 2017

Avdi Grimm was in my home town of New Orleans this past week. After the conference, he reflected on what makes the Ruby community great.


Currently Recording: Understanding Re-frame

Yes, the course continues. Now with over 4 hours of video lessons. Here are the lessons I’ve published since the last newsletter:

Use the coupon REFRAMEPEAP for 60% off.


From the archives: Composable parts

What does it mean to say things are composable?

Permalink

Clojure Tip of the Day – Episode 3: Threading Macros Tracing

The third episode of my Clojure Tip of the Day screencast is out.

You can find the video on YouTube: Clojure Tip of the Day – Episode 3: Threading macros tracing

The episode shows a quick “debugging” technique using the println function to print intermediate values flowing through the threading macros to the standard output.

TL;DR

  • For the thread-first macro (->) you can insert (doto println) to quickly print an intermediate value.
  • Usually, it’s better and more convenient to introduce a little helper function spy: (def spy #(do (println "DEBUG:" %) %)), which then works in all threading macros.
  • If you want to use the doto approach in other threading macros, you need to wrap it in an anonymous function: (#(doto % println)) (notice the extra parentheses – see the macro-expansion of threading macros). A sketch of all three follows below.
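A minimal sketch of all three variants (made-up data; the printed output is shown in the comments):

(def spy #(do (println "DEBUG:" %) %))

;; thread-first: doto works directly
(-> 5 inc (doto println) (* 2))                        ; prints 6, returns 12

;; spy works in any threading macro
(->> [1 2 3] (map inc) spy (reduce +))                 ; prints DEBUG: (2 3 4), returns 9

;; doto via an anonymous function (note the extra parens)
(->> [1 2 3] (map inc) (#(doto % println)) (reduce +)) ; prints (2 3 4), returns 9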

Credit

Thanks to Sean Corfield and Brandon Adams for providing the tips in the Clojurians Slack channel.

Permalink

Copyright © 2009, Planet Clojure. No rights reserved.
Planet Clojure is maintained by Baishampayan Ghose.
Clojure and the Clojure logo are Copyright © 2008-2009, Rich Hickey.
Theme by Brajeshwar.