## Algorithmic scheduling

October 10, 2013

A few weeks ago I had an idea.

All of my scheduling constraints are imposed by other people. For example, I have to go to an interview at 1pm because that’s when the recruiter is available. Or I have to go to class at 2:30 because that’s when my university scheduled it. Everything on my calendar has to happen at a specific, rigid time because that’s when someone else scheduled it—or that’s the time we previously agreed upon.

Sometimes, scheduling is difficult. I remember that trying to create a rehearsal schedule for my music group was a nightmare—it seemed like there was no rehearsal time that worked for everyone. Similarly, I was recently trying to organize a banquet potluck and found it obnoxiously frustrating to coordinate.

I asked myself:

*where do everyone’s time constraints come from?* Surely, there must be some fountain of scheduling constraints that everyone is drinking from. I quickly realized that scheduling constraints are almost always artificial—ultimately, it all comes down to when people agree to meet. There’s a cost associated with rescheduling—and this cost is mostly one of bookkeeping and communication.

For example, the reason I can’t meet with you tomorrow at 3pm is because I have a meeting with my coworker from 2-4pm. But that meeting with my coworker was scheduled for an arbitrary time that worked with both of our schedules—why can’t I just reschedule it? Because that’s the only time that works for him. But why can’t he reschedule his other meetings so I can reschedule my meeting with him? Simply because this is a logistical challenge that takes time and energy to resolve.

Fundamentally, there’s only one strict constraint:

*you can’t be at two events at the same time.* So we want to maximize the number of items on our agenda such that the items never overlap.

This is just a textbook computer science problem in disguise! Optimization, network flow, graph theory, constraint satisfaction, artificial intelligence, something like that. I have the tools to solve this problem.
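That overlap-free version of the problem even has a classic greedy solution: sort events by finish time and repeatedly take the earliest-finishing event that doesn’t conflict with what you’ve already taken. A minimal sketch in Python (the event data are invented for illustration):

```python
def max_non_overlapping(events):
    """Select a maximum-size set of non-overlapping (start, end) events.

    Greedy strategy: sort by end time, then keep each event whose start
    is at or after the end of the last event kept. This is optimal for
    maximizing the *count* of scheduled events.
    """
    selected = []
    last_end = float("-inf")
    for start, end in sorted(events, key=lambda e: e[1]):
        if start >= last_end:
            selected.append((start, end))
            last_end = end
    return selected

# Toy data in hours of the day; (10, 12) conflicts with its neighbors.
events = [(9, 11), (10, 12), (11, 13)]
print(max_non_overlapping(events))  # [(9, 11), (11, 13)]
```

Of course, this only maximizes the count for one person in isolation; the interesting part is doing it jointly across everyone’s calendars.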

So imagine this: a computer program that arbitrates everyone’s schedules to find the globally optimal solution. People are really good at acting like greedy algorithms and finding locally optimal solutions. But we could be so much more efficient if this was done algorithmically—a computer has no problem working out the logistical details of coordinating everyone’s schedules. So you tell it who you want to meet with, rather than when you want to meet. It knows your schedule, it knows their schedules, it knows how to work it out. It frames the scheduling problem in terms of a social graph. Scheduling is a social activity, after all.

I have a million other ideas that piggyback off of this one. Perhaps you have some preference for when you want to meet. Maybe you want to meet in the afternoon, because that’s when you can focus the best. You can associate some cost function with this event that penalizes the algorithm for not scheduling it in the afternoon. Maybe it’s a parabola centered at 2pm. You could control the focal length of this parabola—this lets you tell the algorithm how much you care about this preference.
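The parabola idea is easy to make concrete. Here is one hedged sketch of such a cost function; the function name and parameters are illustrative, and the `weight` knob plays the role of the parabola’s “focal length” (how sharply you are penalized for straying from the preferred time):

```python
def preference_cost(hour, preferred=14.0, weight=1.0):
    """Quadratic penalty for scheduling an event at `hour` instead of the
    preferred hour (14.0 = 2pm). Larger `weight` narrows the parabola,
    meaning you care more about this preference.
    """
    return weight * (hour - preferred) ** 2

# A 5pm slot costs more if you care a lot (weight=2) than a little.
print(preference_cost(17))            # 9.0
print(preference_cost(17, weight=2))  # 18.0
print(preference_cost(14))            # 0.0
```

A scheduler could then minimize the sum of these costs over all events, subject to the hard no-overlap constraint.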

Maybe you can tell it that you have a homework assignment due at 5pm on Friday. You know it will take you four hours to finish—but you don’t care which four hours. The algorithm finds time for you to work on it, or makes time if necessary.

Maybe you have other kinds of constraints. You know that you need to pick up your cousin’s gift from the UPS Store before you can go to her birthday party. The algorithm will make sure these events happen in the correct order. Or maybe you want to have lunch and dinner every day, but not within 4 hours of each other. The algorithm should be able to space those events apart.
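Constraints like these are naturally expressed as predicates over a candidate schedule, which a solver would then try to satisfy. A minimal sketch, with invented event names and times:

```python
# A candidate schedule maps event names to (start_hour, end_hour).
# Constraints are just predicates; a solver's job is to find a
# schedule satisfying all of them.

def precedes(schedule, first, second):
    """`first` must finish before `second` starts (gift pickup before party)."""
    return schedule[first][1] <= schedule[second][0]

def spaced_apart(schedule, a, b, min_gap):
    """At least `min_gap` hours between the end of one event and the start of the other."""
    earlier, later = sorted([schedule[a], schedule[b]])
    return later[0] - earlier[1] >= min_gap

schedule = {"ups_store": (10, 10.5), "party": (15, 18),
            "lunch": (12, 13), "dinner": (19, 20)}
print(precedes(schedule, "ups_store", "party"))      # True
print(spaced_apart(schedule, "lunch", "dinner", 4))  # True
```

The homework example fits the same mold: a four-hour total duration, a deadline, and no preference about which hours are used.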

I could go on forever. But the point is that I’m trying to convince you that the way we think about scheduling is *outdated*. Finding times to do things is an algorithmic process, and computers should do it for us. What we care about is *who* we want to meet with. And that’s the primary input to this algorithm.

A few concerns:

*People will be disgruntled with the idea of a computer shuffling around their calendars.* For example, you don’t want to be in the middle of a 2-hour drive to a board meeting, only to find out that the meeting has been rescheduled for tomorrow because the algorithm was trying to schedule the CEO’s dentist appointment today. Solution: freeze all events that are within, say, 24 hours of the present. This window should be configurable.

*UI/UX challenges.* What is a user-friendly way to input all this data into an application? I have a few ideas. For example, start with a Google Calendar-like interface. When you create a new event, you see a heading entitled “Event Time.” Underneath this heading you see a time/date picker control, but this is grayed out by default. On the right of the time picker, you see a slider control that defaults to the far right—indicating “Don’t care.” If you drag this slider to the left, you are indicating that you have some preference for the time. Then the time picker control becomes enabled. You can drag the slider all the way to the left, indicating that the event has to happen exactly at the time you specify. Or you can leave the slider somewhere in the middle, indicating a mild preference. I think, if done right, this would be fairly intuitive.

*People will be too lazy to input all this information into an app.* That’s fine—they can use it like a traditional calendar if they like. But the more information you give it, the smarter it becomes—and the more it can do for you.

*The “dating site problem”—the app is only useful if everyone uses it already.* This isn’t really true—when you’re the only one using it, it’s no worse than Google Calendar (it’s better: the app can still help automate your schedule). In my vision for the product, the app can act as your secretary to the outside world, even if the world doesn’t use it. If someone wants to schedule a meeting with you, you can point them to some webpage where they can book a time on your calendar (without necessarily revealing what’s on your calendar), and you don’t have to be a part of the process. This saves you time, and also promotes the product.

*How can it be monetized?* Two ideas come to mind. First, advertisements that target you based on your schedule—which should be much more effective than normal ads, because they will be more relevant to your life (e.g., “Hey! Why don’t you go to this concert on Saturday? You’re not doing anything else this weekend!”). Second, an enterprise version of the product. A company could save a lot of man-hours using a product like this—and as a bonus, the app could automatically reserve conference rooms and ensure that they don’t get double-booked. Additionally, it would enable the company to do all kinds of crazy scheduling feats—like scheduling a 400-person conference on a moment’s notice. Everyone’s schedules would be automatically re-optimized to make it possible.

## Things I want from a web framework

October 10, 2013

I’m a minimalist in many ways. I want a web framework that does exactly what I need and no more. Ruby on Rails is my go-to framework these days, but it feels bloated and a bit too magical. These are things that would be in my ideal web framework:

- URL mapping by regular expressions. The regular expressions should be pre-compiled at load time so that routing is instantaneous.
- A templating language that lets me break views up into modular pieces. I should be able to nest views arbitrarily, even recursively. Even better if the templating language validates the well-formedness of my HTML, like Dropbox’s Pyxl.
- An assets packager. It should compile SASS/SCSS/LESS and CoffeeScript into CSS and JavaScript, and minify the result. Everything should be versioned, including other assets like images. The appropriate headers (Content-Type/Expires/Last-Modified/ETag/etc.) should be set. Expired assets should redirect to current ones, if the URL has changed (e.g., if assets are versioned by a hash or timestamp in the filename). I should be able to compile all resources of a particular type into a single file. E.g., all JavaScript is concatenated together, likewise with CSS, and images are bundled into a sprite sheet. Even more aggressively, if the assets are small enough, I might want to inline everything into the HTML using Data URIs—so the client only needs to download a single file for the whole application. This is especially good for mobile applications, where the cost of making a request is high.
- Cookie-based sessions. The cookies should be cryptographically signed—and better yet—encrypted.
- Internationalization and localization support.
- Security stuff: CSRF protection, a typing scheme that prevents injection attacks (e.g., “blessed strings”), etc.
- HTTP stuff: The ability to specify the response headers, support for chunked transfer encoding, configurable implementations of the HEAD and OPTIONS verbs, etc. I should be able to stream video data to the client. HTTPS support is a must.
- Caching. I should be able to specify that some requests are to be cached and their responses will be reused. Or, more generally, I want to be able to store anything in the cache (not just views)—something like a local Memcached instance.
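The first item on the list—regex routing pre-compiled at load time—fits in a few lines. A sketch in Python; the route table and handler names are invented for illustration:

```python
import re

# Routes are compiled once at module load, so dispatch is just a series
# of pre-compiled matches rather than repeated pattern compilation.
ROUTES = [
    (re.compile(r"^/articles/(?P<slug>[\w-]+)$"), "show_article"),
    (re.compile(r"^/users/(?P<id>\d+)$"), "show_user"),
]

def dispatch(path):
    """Return (handler_name, captured_params) for the first matching route."""
    for pattern, handler in ROUTES:
        match = pattern.match(path)
        if match:
            return handler, match.groupdict()
    return "not_found", {}

print(dispatch("/users/42"))  # ('show_user', {'id': '42'})
```

Named capture groups double as URL parameter extraction, so the router needs no separate parameter-parsing layer.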

Things I do NOT want in a web framework:

- An ORM. At most, I want a small DSL for building safe SQL queries. I should be able to control what database I use, what indexes a table has, how data are sharded, how the system scales, etc. I do not want to find that my app doesn’t scale because my web framework writes inefficient SQL. Also, not every web application uses SQL—nowadays, NoSQL databases and other non-relational datastores are common. How persistent data is stored, if there is persistent data at all, is very application-dependent.
- Login / user authentication functionality. The web framework shouldn’t have a notion of “user” or “password”—all of that should be up to the application. Similarly, roles/privileges should not be a built-in concept.
- A JavaScript framework. Admittedly, I use jQuery in most of my web projects, but it seems wrong to have a JavaScript library be a dependency of a web framework. What if I want to use an alternative, such as Prototype, or none at all?

Uh oh—I’m getting tempted to build my own. Not this again.

## Let there be light!

May 27, 2013

I usually write about computer science, engineering, or music, but today I’m going to digress and talk about physics. In particular, I’ll explain how Maxwell’s equations predict the existence of electromagnetic waves.

I will assume you have a basic familiarity with vector calculus and partial differential equations, but no knowledge of electrodynamics.

### Electric and magnetic fields

There are two important vector fields you need to know about: the electric field \( \mathbf{E} \) and the magnetic field \( \mathbf{B} \). What is the significance of these two fields? Mathematically, they describe the forces that are exerted on charged particles due to the charges of other particles. This is called the Lorentz force, and is given by:

\[ \mathbf{F} = q \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right) \]

Here, \( \mathbf{v} \) is the velocity of the particle. Of course, this velocity depends on the choice of reference frame. Due to Faraday’s Law and Ampère’s Law (both described below), we’ll see that the choice of reference frame doesn’t actually matter (phew!). The factor \( q \) is the charge of the particle. For an electron, \( q \approx -1.602 \times 10^{-19} \) Coulombs. Notice that this value is negative, so electrons tend to accelerate in the opposite direction of the electric field.

### Maxwell’s equations

Maxwell’s equations relate electric fields, magnetic fields, charges, and current. They come in various forms—here I’ll describe them in their microscopic, differential form. We’ll start with Gauss’s Law:

\[ \boldsymbol{\nabla} \cdot \mathbf{E} = \frac{ \rho }{ \epsilon_0 } \]

This states that for any point in space, the divergence of the electric field \( \mathbf{E} \) is proportional to the charge density \( \rho \) (the constant factor \( \epsilon_0 \) is called the permittivity of free space). More simply, Gauss’s Law tells us that charges create electric fields. There is also a similar law for the magnetic field \( \mathbf{B} \):

\[ \boldsymbol{\nabla} \cdot \mathbf{B} = 0 \]

In other words, magnetic “charges” do not exist; magnetic field lines are always loops. Speaking of magnetic fields, the next law states that, in addition to charges, a time-varying magnetic field can also affect the electric field:

\[ \boldsymbol{\nabla} \times \mathbf{E} = - \frac{ \partial \mathbf{B} }{ \partial t } \]

This is called Faraday’s Law. For example, if I pick an inertial reference frame \( X \) with a stationary electron and move a nearby magnet (which generates a magnetic field) through space, the magnetic field will be time-varying, so an electric field is induced due to Faraday’s Law. Let’s say this electric field imparts a Lorentz force \( f \) on the electron. From the reference frame \( Y \) of the magnet, the magnetic field appears stationary but the electron is moving and will feel the same Lorentz force \( f \) due to the second term in the Lorentz force. So the choice of reference frame does not matter.

The final equation is Ampère’s Law:

\[ \boldsymbol{\nabla} \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \epsilon_0 \frac{ \partial \mathbf{E} }{ \partial t } \right) \]

The constant \( \mu_0 \) is called the permeability of free space. \( \mathbf{J} \) is the current density, which describes how current is locally flowing at a point in space. In English, Ampère’s Law states that a magnetic field is generated by electric current or by a time-varying electric field (the latter term is called “Maxwell’s correction”). Just as with Faraday’s Law, the derivative term will depend on the choice of reference frame, but in the end the Lorentz force always comes out to be the same.

Amazingly, from these simple laws, you can derive all of classical electrodynamics!

### The Helmholtz wave equation

In a vacuum, there is no charge or current density. Then we can reduce Maxwell’s equations to:

\[ \boldsymbol{\nabla} \cdot \mathbf{E} = 0 \]
\[ \boldsymbol{\nabla} \cdot \mathbf{B} = 0 \]
\[ \boldsymbol{\nabla} \times \mathbf{E} = - \frac{ \partial \mathbf{B} }{ \partial t } \]
\[ \boldsymbol{\nabla} \times \mathbf{B} = \mu_0 \epsilon_0 \frac{ \partial \mathbf{E} }{ \partial t } \]

Before we continue, we need the following vector calculus identity:

\[ \boldsymbol{\nabla} \times \left( \boldsymbol{\nabla} \times \mathbf{E} \right) = \boldsymbol{\nabla} \left( \boldsymbol{\nabla} \cdot \mathbf{E} \right) - \boldsymbol{\nabla}^2 \mathbf{E} \]

In other words, the curl of the curl is equal to the gradient of the divergence minus the vector Laplacian (the proof follows directly from the definition of the vector Laplacian). For this identity, I chose the letter \( \mathbf{E} \) because we will be taking the curl of the curl of the electric field:

\[ \boldsymbol{\nabla} \times \left( \boldsymbol{\nabla} \times \mathbf{E} \right) \]

But first, let’s take the curl of Faraday’s Law to get this in terms of the magnetic field:

\[ \boldsymbol{\nabla} \times \left( \boldsymbol{\nabla} \times \mathbf{E} \right) = - \frac{ \partial \left( \boldsymbol{\nabla} \times \mathbf{B} \right) }{ \partial t } \]

Now we apply our vector identity to the left-hand side:

\[ \boldsymbol{\nabla} \left( \boldsymbol{\nabla} \cdot \mathbf{E} \right) - \boldsymbol{\nabla}^2 \mathbf{E} = - \frac{ \partial \left( \boldsymbol{\nabla} \times \mathbf{B} \right) }{ \partial t } \]

By Gauss’s Law in a vacuum, we can eliminate the leftmost term:

\[ \boldsymbol{\nabla}^2 \mathbf{E} = \frac{ \partial \left( \boldsymbol{\nabla} \times \mathbf{B} \right) }{ \partial t } \]

Now we can use Ampère’s Law in a vacuum on the right-hand side:

\[ \boldsymbol{\nabla}^2 \mathbf{E} = \mu_0 \epsilon_0 \frac{ \partial^2 \mathbf{E} }{ \partial t^2 } \]

This is called the *Helmholtz wave equation*. It states that the second spatial derivative (the vector Laplacian) of the electric field is proportional to the second time derivative, where the proportionality constant is \( \mu_0 \epsilon_0 \).

### Electromagnetic waves

In a classical setting (no relativistic or quantum effects), Maxwell’s equations always hold—and we’ve just shown how they imply the Helmholtz wave equation in a vacuum. Let’s solve the wave equation. Once we have a solution, we’ll check that it agrees with Maxwell’s equations (not all do).

One solution is \( \mathbf{E} = \mathbf{0} \) (which implies \( \mathbf{B} = \mathbf{0} \)), but it’s a bit prosaic to call that a *wave*. Still, a quick look at Maxwell’s equations reveals that this solution does not violate the laws of classical electrodynamics, so we can have \( \mathbf{E} = \mathbf{B} = \mathbf{0} \) in a vacuum. In other words, darkness is physically possible, which is great because otherwise I would have difficulty sleeping at night.

The nice thing about Maxwell’s equations is that they are *linear* in \( \mathbf{E} \) and \( \mathbf{B} \). In other words, if we find multiple solutions to these equations, their superposition is also a solution. This fact is very useful in harmonic analysis. In particular, we can represent almost all functions as the superposition of basic sinusoidal waves with the Fourier series. Conveniently, the form of the Helmholtz wave equation suggests a sinusoid might be a solution. If it works and agrees with Maxwell’s equations, then the superposition principle allows us to predict the existence of all kinds of electromagnetic waves.

With that in mind, let’s try a solution of the form \( \mathbf{E} = \hat{x} E_0 \cos \left( \omega t - k z - \phi \right) \), for some unknown \( E_0 \) (amplitude), \( \omega \) (frequency), \( k \) (wavenumber), and \( \phi \) (phase shift). We say that this wave is \( \hat{x} \)-polarized and propagates in the \( \hat{z} \) direction.

First, we’ll try it with the wave equation:

\[ \boldsymbol{\nabla}^2 \mathbf{E} = \mu_0 \epsilon_0 \frac{ \partial^2 \mathbf{E} }{ \partial t^2 } \]
\[ \boldsymbol{\nabla}^2 \left( \hat{x} E_0 \cos \left( \omega t - k z - \phi \right) \right) = \mu_0 \epsilon_0 \frac{ \partial^2 }{ \partial t^2 } \left( \hat{x} E_0 \cos \left( \omega t - k z - \phi \right) \right) \]
\[ - \hat{x} E_0 k^2 \cos \left( \omega t - k z - \phi \right) = - \hat{x} \mu_0 \epsilon_0 E_0 \omega^2 \cos \left( \omega t - k z - \phi \right) \]
\[ k^2 = \mu_0 \epsilon_0 \omega^2 \]
\[ \frac{k}{\omega} = \sqrt{\mu_0 \epsilon_0} \]

So \( \mathbf{E} = \hat{x} E_0 \cos \left( \omega t - k z - \phi \right) \) is a solution to the wave equation as long as \( \frac{k}{\omega} = \sqrt{\mu_0 \epsilon_0} \). The quotient of the frequency \( \omega \) and the wavenumber \( k \) is the speed \( c \) of the wave, so we can see that in a vacuum \(c = \frac{\omega}{k} = \frac{1}{\sqrt{ \mu_0 \epsilon_0 }} \). In SI units, that gives 299,792,458 m/s, which seems just right.
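As a quick numeric sanity check, we can plug the standard values of the vacuum constants into \( c = \frac{1}{\sqrt{\mu_0 \epsilon_0}} \) (here \( \mu_0 \) uses its classical defined value \( 4\pi \times 10^{-7} \) H/m, and \( \epsilon_0 \) the CODATA figure in F/m):

```python
import math

# Vacuum permeability (H/m) and permittivity (F/m).
mu_0 = 4 * math.pi * 1e-7
eps_0 = 8.8541878128e-12

# Speed of an electromagnetic wave in a vacuum.
c = 1 / math.sqrt(mu_0 * eps_0)
print(round(c))  # 299792458
```

The result matches the defined speed of light, 299,792,458 m/s.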

Let’s test our solution against Maxwell’s equations in a vacuum. Taking the divergence of \( \mathbf{E} \) yields \( 0 \) because the wave is \( \hat{x} \)-polarized but has no dependence on \( \hat{x} \). Therefore Gauss’s Law is satisfied. The rest of Maxwell’s equations involve \( \mathbf{B} \), so we must find a \( \mathbf{B} \) that works with our trial solution \( \mathbf{E} \).

Let’s start with Faraday’s law:

\[ \boldsymbol{\nabla} \times \mathbf{E} = - \frac{ \partial \mathbf{B} }{ \partial t } \]
\[ \boldsymbol{\nabla} \times \left( \hat{x} E_0 \cos \left( \omega t - k z - \phi \right) \right) = - \frac{ \partial \mathbf{B} }{ \partial t } \]
\[ \hat{y} E_0 k \sin \left( \omega t - k z - \phi \right) = - \frac{ \partial \mathbf{B} }{ \partial t } \]

This tells us that the time derivative of \( \mathbf{B} \) is a sinusoid polarized in the \( \hat{y} \) direction. Therefore:

\[ \mathbf{B} = \hat{y} E_0 \frac{k}{\omega} \cos \left( \omega t - k z - \phi \right) + \mathbf{C} \]

The vector \( \mathbf{C} \) is the constant of integration. From the Helmholtz equation we saw that \( \frac{k}{\omega} = \sqrt{\mu_0 \epsilon_0} \), so we can rewrite \( \mathbf{B} \) as:

\[ \mathbf{B} = \hat{y} E_0 \sqrt{\mu_0 \epsilon_0} \cos \left( \omega t - k z - \phi \right) + \mathbf{C} \]

Now let’s test \( \mathbf{E} \) and \( \mathbf{B} \) against Ampère’s Law:

\[ \boldsymbol{\nabla} \times \mathbf{B} = \mu_0 \epsilon_0 \frac{ \partial \mathbf{E} }{ \partial t } \]
\[ \boldsymbol{\nabla} \times \left( \hat{y} E_0 \sqrt{\mu_0 \epsilon_0} \cos \left( \omega t - k z - \phi \right) + \mathbf{C} \right) = \mu_0 \epsilon_0 \frac{ \partial }{ \partial t } \left( \hat{x} E_0 \cos \left( \omega t - k z - \phi \right) \right) \]
\[ - \hat{x} E_0 k \sqrt{\mu_0 \epsilon_0} \sin \left( \omega t - k z - \phi \right) = - \hat{x} \mu_0 \epsilon_0 \omega E_0 \sin \left( \omega t - k z - \phi \right) \]
\[ \frac{k}{\omega} = \sqrt{ \mu_0 \epsilon_0 } \]

This is the same constraint that we got from the Helmholtz equation (so it works). Finally, we check Gauss’s Law for Magnetism:

\[ \boldsymbol{\nabla} \cdot \mathbf{B} = 0 \]
\[ \boldsymbol{\nabla} \cdot \left( \hat{y} E_0 \frac{k}{\omega} \cos \left( \omega t - k z - \phi \right) + \mathbf{C} \right) = 0 \]
\[ 0 = 0 \]

So our sinusoidal \( \mathbf{E} \) and \( \mathbf{B} \) fields agree with all of Maxwell’s equations. Since \( E_0 \), \( \omega \), and \( \phi \) were arbitrary (and we saw that \( k \) depends on \( \omega \)), any amplitude, frequency, and phase shift is valid (but the wave always has constant velocity \(c = \frac{1}{\sqrt{ \mu_0 \epsilon_0 }} \) in a vacuum). By the superposition principle, we can take any linear combination of waves of this form without violating Maxwell’s equations. This means, at the very least, any wave which can be decomposed into basic sinusoids (almost every wave) is valid. Furthermore, the polarization of the wave \( \hat{x} \) and the direction of propagation \( \hat{z} \) were also chosen arbitrarily—Maxwell’s equations do not have a preference, as long as they are perpendicular.

So Maxwell’s equations predict that a lot of electromagnetic waves are possible. Let’s experimentally verify our findings—oh, you’ve done that by reading this article!

## Pirates of the Caribbean and Bach

May 27, 2013

My favorite Classical composer is Bach. One of my favorite contemporary composers is Hans Zimmer, who scored the soundtrack for countless movies. I often play Zimmer’s music on piano, and I would play Bach too if I had the technical ability.

I was listening to one of Zimmer’s pieces for Pirates of the Caribbean: Dead Man’s Chest called *The Kraken*, and I noticed that it sounded very... Baroque!

At 1:29, I figured out why. Amidst the barrage of pirate contention surrounding the *Black Pearl* and the *Flying Dutchman*, I could hear an homage to my second-favorite\( ^1 \) Bach piece—the well-known *Toccata and Fugue in D minor*! You will surely recognize it:

Same key, same instrumentation (pipe organ), same motif. Now I know why I like Hans Zimmer so much—we have the same inspiration.

[1] My top favorite Bach piece is his “Little” Fugue in G minor. You can hear this in *The Kraken* too, if you listen with a bit of imagination.

## Never Seek to Tell Thy Love

May 16, 2013

I wrote a ballad for tenor voice and piano featuring lyrics from William Blake’s poem “Never Seek to Tell Thy Love.” From a music theoretical perspective, this is my most sophisticated composition to date. The piece is written in B major but borrows chords from G#, with a modulation to C major and finally to A minor. Unconventionally, the chord progression *I -> III* is featured prominently throughout, with *III* functioning primarily as a pre-dominant. The sheet music is free and you can get it here.

Bill Cutter, director of the MIT Concert Choir (of which I am an alumnus), offered to sing the piece to the accompaniment of Professor Charles Shadle, and together they gave an incredibly powerful performance: