## Visualizing Data Structures

May 27, 2014.

For a class at MIT, I built an interactive tool for building and exploring data structures based on the *pointer machine* model of computation (e.g. linked lists, binary search trees, etc.). It features an integrated development environment for implementing data structures in JavaScript, and a highly animated interface for making them come to life.

It has a few built-in data structures, including linked lists, ordinary binary search trees, AVL trees, and splay trees.

## The “Safe” Construct

May 20, 2014.

Here’s a feature I wish existed in statically-typed programming languages.

Java has something called “checked exceptions”, which is a mechanism that forces you to handle errors (or explicitly declare them in the function signature). The consensus seems to be that this was a bad idea, but I find the concept somewhat appealing.

It does seem sloppy that some exceptions are deemed “checkworthy” and others are runtime errors, and I don’t get to choose which are which (unless I also wrote the code that throws them). What if I’m writing a quick-and-dirty program to run a one-off job and don’t care about exception handling? Or what if I’m writing an autopilot for a commercial airplane and want every exception to be checked (to completely eliminate runtime errors)?

In the functional programming world, there are other mechanisms for error handling, such as algebraic data types like `Either`/`Option`/`Maybe` and friends, which are used very much like checked exceptions—the type checker forces you to handle the error case. Functional programmers have devised a few tricks to make errors propagate automatically behind the scenes, such as monads with the clever `do` notation in Haskell. Clever as it is, it still forces me to write all my code in the appropriate monad and glue it all together with monad transformers.

Ideally we’d all just formally verify our programs (with smart tools to help us) and not worry about the possibility of any errors, ever. Maybe in 50 years.

Back on planet Earth, I’d rather have it be like this. Every exception is unchecked. But there should be a new language construct—let’s call it `safe`. If any code within the `safe` block has the possibility of throwing an exception, then you get a compile-time error. Roughly:

```
safe {
    // code that can't throw an exception
}
```

And you can specify a whitelist of exceptions that are allowed to escape the `safe` block:

```
safe except FileDoesNotExist {
    file = openFile("foo.txt") // might throw FileDoesNotExist but nothing else
}
```

Let me give some examples of why this is useful. Suppose I’m writing a web server. Check it out:

```
while (true) {
    safe {
        // accept new connection
        // serve response
    }
}
```

Now I can be confident that my web server will never crash. Of course I can still use normal exception handling throughout the program, but none of the exceptions are allowed to reach the outer loop and crash the program.
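No language has this construct yet, but a rough runtime stand-in can be sketched in Python with a context manager (the `safe` name and behavior here are my invention for illustration): exceptions on the whitelist propagate, and everything else is logged and suppressed instead of crashing. A real `safe` would reject the offending code at compile time rather than swallow errors at runtime.

```python
from contextlib import contextmanager
import traceback

@contextmanager
def safe(*allowed):
    # Hypothetical runtime approximation of the proposed `safe` construct:
    # exceptions listed in `allowed` may escape; anything else is logged
    # and suppressed. (The real construct would enforce this statically.)
    try:
        yield
    except allowed:
        raise
    except Exception:
        traceback.print_exc()

# The web-server loop idea, with the runtime stand-in:
events = []
with safe():                              # nothing may escape this block
    events.append("accepted connection")
    raise RuntimeError("handler bug")     # logged and suppressed
events.append("still serving")
assert events == ["accepted connection", "still serving"]
```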

Back to the autopilot example—in that case I would just wrap the entire program in a `safe` block. And if I were writing a quick-and-dirty script, I wouldn’t use `safe` at all.

Library/API designers could also wrap each function in a `safe`/`except` block, and the `except` part documents which exceptions may be thrown. But I don’t have to catch them. Now I, the lazy user of the library, can simply ignore any errors that might be thrown (and let them crash the program). Or, if I want the compiler to exhaustively check that I’ve handled every error, I can use `safe`. It’s all up to me.

## Dynamic Typing: An Oxymoron?

March 19, 2014.

In computer science and programming parlance, there are two distinct interpretations of the phrase *type system*, and it causes all kinds of squabbles that drive me insane. The interpretations are:

1. A framework in which to categorize data.
2. A formal system used to ensure that programs have meaning.

It is tempting to identify #1 with dynamic typing and #2 with static typing, but neither has any notion of *static* vs. *dynamic*.

Almost all programming languages, such as Python, Scheme, Ruby, Java, Haskell, C++, etc., have #1. Some say these are “typed” languages—but many reserve that description only for the ones which are statically-typed. Those who claim that dynamically-typed languages don’t have a true type system employ the term “tag system” instead. A few languages, such as FORTH, assembly languages, and early versions of FORTRAN, only support one sort of data (usually a machine word), and it is a bit barmy (but perhaps still correct) to say they are characterized by #1. These are called “untyped” or “unityped” languages.
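To make interpretation #1 concrete, here is a small Python illustration (my own example): every value carries a runtime tag that operations inspect, and mismatches surface only when the code actually runs—nothing about it is static.

```python
# Every Python value carries a runtime tag (its class) that operations
# consult; mismatches are discovered at run time, not compile time.
x = 3
assert type(x) is int      # the tag is queryable at runtime

try:
    "2" + 2                # the tags disagree...
    mismatch = False
except TypeError:          # ...and Python notices only when this line runs
    mismatch = True
assert mismatch
```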

Of the languages mentioned above, Java, Haskell, and C++ are typically said to also have #2. However, even this is a little suspect, because the proof systems for such languages are often not sound (even Haskell, often lauded for its type system, has an unsafe escape hatch). In the strictest sense, for almost any modern programming language, appeasing the proof checker (“type checker”) does not guarantee that the program won’t crash (e.g., usually these systems do not eliminate the possibility of running out of memory). It is worth mentioning that “type systems” in this camp can be used for both expressing proofs in mathematics and reasoning about computer programs (though they often take very different forms when used for these distinct purposes).

For those who do not obsess over programming language theory like I do, #2 typically shows up in academic papers as a set of funny-looking rules, e.g.,

\[ \frac{ \Gamma \vdash e_0 : \sigma \qquad \Gamma, x : \sigma \vdash e_1 : \tau }{\Gamma \vdash \text{ let } x = e_0 \text{ in } e_1 : \tau}, \]

which are used by the compiler to automatically check or infer the types of expressions in a program to prove that the program makes sense.

Those on this side of the fence might consider it an insult to say that languages like Python or Ruby have a “type system” (after all, type theory was originally designed to be a serious foundation for the entire field of mathematics), whereas those in the other camp might view their contenders as stubborn academic zealots trapped in an ivory tower.

The only reason we have these two contentious viewpoints is because we’ve been using one term to mean two different things. Should we use two distinct terms instead?

## Theorem Proving in Programming

March 19, 2014.

I’m developing a programming language that has a built-in automated theorem prover (rather than a traditional type system). My inspiration comes from Coq, an interactive theorem prover, and Whiley, one of the most intriguing programming languages I’ve ever seen.

Let me tell you a little about it. It uses a cool formal system called the *sequent calculus* (check out this amazing interactive tutorial from my fellow MIT colleague if you’ve never heard of it) and another system called *Hoare logic*. But that’s not important. The punchline is that all of the following are within the realm of possibilities:

- A type system for “free”. For example, in order to add two numbers, you must first prove that they are numbers (because that is the precondition for addition). Sounds annoying? This is where the “automated” part of “automated theorem proving” comes in. It works like a type inferencer—most of the time, the compiler can automatically prove that you’re doing a valid operation.
- Types that represent arbitrary mathematical predicates. Examples: the type of even integers, the type of sorted lists, or the type of non-singular matrices. These are called dependent types in my circle of friends, or, more specifically, refinement types.
- Guaranteed safety. You want to divide two numbers? You must prove that the divisor will never be zero. Except most of the time the compiler will prove it for you. No segfaults, no runtime checks, no nonsense.
- What about when the compiler can’t find a proof? No problem, just give it a high-level proof outline. It will try to fill in the details.
- Memory safety. The language will be garbage-collected (as is common in modern programming languages), but sometimes you want a little bit more control. You can manually allocate memory, but you must prove to the compiler that you free it later. No memory leaks allowed.
- Provable totality. You can prove that a function terminates—no accidental infinite loops. Or, if you like, prove that a function runs in \( O(n \log n) \) time.
- Prove arbitrary theorems. Every once in a while, an engineer might need to do this—and it’s nice to have the proof machine-checked.
- Concurrency. Prove that you have the only reference to a variable, and don’t worry about other threads mutating it. Or, in the face of shared memory, prove that deadlock cannot occur. (This is related to my Masters research.)
- Performance. Prove that you never access an array out-of-bounds, and then you don’t need an expensive runtime check on every array access.
- Security. Prove the absence of CSRF attacks. Or prove that a user can only access his/her own data (i.e., a static version of another MIT colleague’s work).
- Better collaboration. If a coworker breaks your code, the proof-checker will reject it and he/she will not be able to check it in.
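None of this language exists yet, but the flavor of a proof obligation (like the division example above) can be sketched in Python with a hypothetical `requires` decorator—my own stand-in, not part of any real system. The crucial difference: here the obligation is checked at runtime, whereas the proposed language would discharge it statically, so the check never executes at all.

```python
def requires(pred, msg):
    # Hypothetical stand-in for a statically proven precondition.
    # Here the obligation is merely checked when the function runs;
    # the proposed language would prove it at compile time instead.
    def deco(f):
        def wrapped(*args):
            assert pred(*args), msg
            return f(*args)
        return wrapped
    return deco

@requires(lambda a, b: b != 0, "divisor must be provably nonzero")
def divide(a, b):
    return a / b

assert divide(10, 4) == 2.5   # obligation satisfied, call proceeds
```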

It’s the ultimate in static typing. Of course these are ambitious ideas and I certainly panegyrized them a bit (in particular, automated theorem provers are typically quite *dumb*), but I could imagine a future in which all programming languages have a fancy proof system like this—especially if the programmer does not need to think about it most of the time. Wouldn’t that be neat?

## Monads for Dummies

February 28, 2014.

Don’t mind the linkbait title—I know you’re not a dummy. And you’re totally smart enough to understand monads.

There is no shortage of monad tutorials. I’ve even written a couple myself. But these tutorials try to get you from zero to hero at lightning speed. This one will be different. I’m not going to explain categories, T-algebras, functors, or any of that abstract nonsense. I’m also not going to teach you how to use monads effectively, or even try to convince you why they are useful at all.

So what’s the point then? The point is to make you not afraid of the word “monad.” You know the term comes from math. But the way monads are used in functional programming is pretty far removed from their mathematical heritage. So let’s not even bother—your time is precious. This tutorial will be very practical. I just want you to get the big picture and not feel intimidated by them—and you can check out other resources (like my other articles) for the details.

Okay, let’s get started. Oh by the way, I’m not going to use Haskell in this article, which is probably what you think of when you hear about monads. How about something a little more familiar, like Python? Sounds good.

Monads tend to come up a lot in functional programming. I’ve written about functional programming before, but all you need to know for now is that it’s a paradigm in which mutable data is generally discouraged, or even forbidden. Computation generally happens by composing pure functions together. A pure function is one that does nothing but return the result of a computation. It doesn’t print anything, store anything, ask for user input, etc. It just manipulates the data you pass it and returns more data.

You may be more familiar with object-oriented programming. Let me translate monads into OOP-speak. A monad is an object. It has a couple of methods:

```python
class Monad:
    def __init__(self, x): ...
    def bind(self, f): ...
```

`x` can be anything. `f` is a function that takes something of the same type as `x` (presumably from the monad) and returns a new monad. Lastly, `bind` also returns a new monad (often from the result of `f`). Literally that’s it. Maybe you don’t know what it’s used for, but at least now you’ve seen the interface.

Typically, a monad acts as some sort of container, and `bind` applies a function to what it contains. Let’s make a monad that stores a single element:

```python
class SimpleMonad(Monad):
    def __init__(self, x):
        self.x = x

    def bind(self, f):
        return f(self.x)
```

To make this a little more concrete, this is how you use it:

```python
def double(x):
    return SimpleMonad(x * 2)

def add1(x):
    return SimpleMonad(x + 1)

foo = SimpleMonad(5).bind(double).bind(add1)
```

Okay, so you can chain functions together with `bind`. But why would we do that? Equivalently, we could have just written `add1(double(5))`, which is much shorter.

The main idea is that monads don’t have to just call the functions you pass to `bind`. They can change the rules. For example, here is a monad that allows you to chain operations without worrying about null pointer errors:

```python
class MaybeMonad(Monad):
    def __init__(self, x):
        self.x = x

    def bind(self, f):
        if self.x is None:
            return self
        else:
            return f(self.x)
```
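To see the short-circuiting in action, here’s a small usage sketch. The `half` helper is made up for illustration, and the class is repeated so the snippet stands alone:

```python
class MaybeMonad:
    # Same MaybeMonad as above, repeated so this snippet runs standalone.
    def __init__(self, x):
        self.x = x
    def bind(self, f):
        return self if self.x is None else f(self.x)

def half(x):
    # "Fails" (returns None) on odd numbers -- the error case.
    return MaybeMonad(x // 2 if x % 2 == 0 else None)

assert MaybeMonad(8).bind(half).bind(half).x == 2     # 8 -> 4 -> 2
assert MaybeMonad(5).bind(half).bind(half).x is None  # 5 is odd; None sticks
```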

If any intermediate computation in a sequence of operations results in `None`, the entire expression becomes `None` and no further operations are performed (much like how `NaN` works in floating-point arithmetic, if you’re familiar with that). If you spend more time with monads, you’ll find that you can also use them to implement exception handling, mutable state, concurrency, and a bunch of other things.

Remember: functional programming is all about writing programs by composing functions together. *The power of monads is that they allow you to change the rules for how functions are composed.*

When I was learning about monads, the main things I was confused about were:

- It seems like using monads requires you to write a lot of code, e.g., `SimpleMonad(5).bind(double).bind(add1)` instead of `add1(double(5))`. Do people really go through all this trouble?
- How do I get the value *out* of the monad? Can I access it like `monad.x`?
- I hear that Haskell doesn’t let you write “impure” functions that have side effects, but I know for a fact that Haskell programs can print to the screen and write to files. What’s the deal? Do monads somehow resolve this?

Well, I know the answers now, so let me share the love:

- Haskell has a special syntax for monads and Python doesn’t—so that’s why they don’t look very nice in Python. It’s actually quite brilliant. Check it out if you have time. Using it feels very natural and it looks a lot like imperative code.
- Some monads provide a “getter” function that allows you to get the value back out. But a monad is not required to have one, and some monads don’t. For many monads, it doesn’t make sense to take a value out of it. A monad is not required to act like a container. See #3.
- You’re right—all functions in Haskell are pure. Haskell has a neat trick for doing I/O. You can think of a Haskell program as a pure computation that returns a tree-like data structure, and this tree represents another program that does impure things. Now, building this tree can be done in a purely functional way—as is required in Haskell. But when you “run” a Haskell program, two things happen: 1) your pure Haskell computation is executed, resulting in this tree-like data structure; 2) the tree-program from step 1 (which can print to the screen and write to files) is also executed. It just so happens that a certain monad, especially when combined with Haskell’s special monad syntax, makes it easy to build this tree. This monad is called `IO` in Haskell. The bind operation does not call the function you pass it, as it did in our monad examples above. Rather, it just stores it in the tree. The runtime will call it when interpreting the tree. That’s the high-level idea. In this post, I implement this idea in Python, which will probably make it a little more clear for you. Lastly, the `IO` monad does not have a “getter” function. For example, if you have an `IO` action that asks a user for a string of input, you cannot extract the string from the monad. This is because the `IO` action doesn’t actually have a string to give you—it only represents a computation that returns a string. When you `bind` a function `f` to this monad, you’re building a new `IO` action that asks the user for a string and then calls `f` on it.
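That “bind stores the function instead of calling it” idea can be sketched in Python. The `IOAction` and `run` names here are made up for illustration (this is a toy model, not Haskell’s actual machinery):

```python
class IOAction:
    # Toy model of an IO-like monad: `bind` does not run `f`; it builds a
    # new deferred program. Only `run` (the "runtime") executes anything.
    def __init__(self, thunk):
        self.thunk = thunk                 # the impure step, deferred

    def bind(self, f):
        return IOAction(lambda: f(self.thunk()).thunk())

def run(action):
    return action.thunk()                  # only the runtime interprets the program

get_line = IOAction(lambda: "world")       # stands in for reading user input
greet = lambda s: IOAction(lambda: "hello " + s)

program = get_line.bind(greet)             # pure: nothing has executed yet
assert run(program) == "hello world"       # effects happen only at run time
```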

To summarize, monads let you change what it means to compose functions together. You might want to keep track of some extra information, store the functions in a tree, call them in a different order, call them multiple times, skip some functions entirely (based on the value being passed to them), etc.

You don’t have to use them, but now you don’t have to fear them either!

*EDIT: Fixed a typo and added the `MaybeMonad` example at /u/wargotad’s suggestion.*