Steve's Real Blog

irskep, slam jamsen, diordna, etc

Yesterday, I added PNG export to Browserboard, my multiplayer whiteboard web app. About half the effort was spent getting text to render correctly. 90% of what I found about this topic on Google was garbage, especially on Stack Overflow, so here's my attempt at clearing things up for people who want to do this without relying on somebody's half-baked code snippet.

HTML Canvas has not changed in 18 years

Apple created the <canvas> element to enable Mac developers to create widgets for Dashboard, a feature that died in 2019. Canvas replicates a subset of the Core Graphics C API in JavaScript. Naturally, this hack intended for a proprietary OS feature became the foundation of thousands of browser games and the only browser-native way to generate PNG images from JavaScript.

Because <canvas> is based primarily on a macOS graphics API and not the web, it was not designed with great web support in mind. In particular, its text rendering capabilities are extremely poor. Some issues include:

  1. Line breaks are ignored.
  2. Only one font and style can be used per draw call. Mixing styles within a block of text is not possible.
  3. Text will not wrap automatically. There is a “max width” argument, but it stretches the text instead of wrapping.
  4. Only pixel-based font and line sizes are supported.

So the best we can hope for in “multi-line text support in <canvas>” is to support line breaks, text wrapping, and a single font style. Supporting right-to-left languages is an exercise for the reader.

There is a good JavaScript library for drawing multi-line text in <canvas>

There's a library called Canvas-Txt that is as good as it gets, but for some reason doesn't rise above the blog trash on Google.

If you got here just trying to figure out how to accomplish this one task, there is your answer. You can stop here.

Deriving <canvas> text styles from the HTML DOM

For Browserboard's PNG export, I needed a way to configure Canvas-Txt to match my quill.js contenteditable text editor. The key to doing that is CSSStyleDeclaration.getPropertyValue(), which you can use to find out any CSS property value from its computed style.

The TypeScript code snippet below finds the first leaf node in a DOM element and applies its styles to Canvas-Txt. (If you're using JavaScript, you can just delete all the type declarations and it should work.)

import canvasTxt from "canvas-txt";

function findLeafNodeStyle(ancestor: Element): CSSStyleDeclaration {
  let nextAncestor = ancestor;
  while (nextAncestor.children.length) {
    nextAncestor = nextAncestor.children[0];
  }
  return window.getComputedStyle(nextAncestor, null);
}

function renderText(
  element: HTMLElement,
  canvas: HTMLCanvasElement,
  x: number,
  y: number,
  maxWidth: number = Number.MAX_SAFE_INTEGER,
  maxHeight: number = Number.MAX_SAFE_INTEGER
) {
  const ctx = canvas.getContext("2d");
  if (!ctx) return canvas;

  // const format = this.quill.getFormat(0, 1);
  const style = findLeafNodeStyle(element);

  ctx.font = style.getPropertyValue("font");
  ctx.fillStyle = style.getPropertyValue("color");

  canvasTxt.vAlign = "top";
  canvasTxt.fontStyle = style.getPropertyValue("font-style");
  canvasTxt.fontVariant = style.getPropertyValue("font-variant");
  canvasTxt.fontWeight = style.getPropertyValue("font-weight");
  canvasTxt.font = style.getPropertyValue("font-family");
  // This is a hack that assumes you use pixel-based line heights.
  // If you're rendering at something besides 1x, you'll need to multiply this.
  canvasTxt.lineHeight = parseFloat(style.getPropertyValue("line-height"));
  // This is a hack that assumes you use pixel-based font sizes.
  // If you're rendering at something besides 1x, you'll need to multiply this.
  canvasTxt.fontSize = parseFloat(style.getPropertyValue("font-size"));

  // you could probably just assign the value directly, but in TypeScript
  // we try to explicitly handle every possible case.
  switch (style.getPropertyValue("text-align")) {
    case "left":
      canvasTxt.align = "left";
      break;
    case "right":
      canvasTxt.align = "right";
      break;
    case "center":
      canvasTxt.align = "center";
      break;
    case "start":
      // Ignoring right-to-left, per the caveat above
      canvasTxt.align = "left";
      break;
    case "end":
      canvasTxt.align = "right";
      break;
    default:
      canvasTxt.align = "left";
  }

  // Finally, hand the element's text to Canvas-Txt to lay out and draw.
  canvasTxt.drawText(ctx, element.innerText, x, y, maxWidth, maxHeight);
}


So there you go. Good luck.

This is the fifth post in a series about my new app Oscillator Drum Jams. Start here in Part 1.

You can download Oscillator Drum Jams at

Earlier this year I learned that Garageband on my iPhone can do multitrack recording when I plug it into my 16-channel USB audio interface. This is an object less than 6 inches long handling tasks that would have required thousands of dollars of equipment twenty years ago, accessible to most teenagers today.

The audio system running on iPhones was designed in 2003 for desktop Macs running at around 800 MHz, slightly slower than the original iPhone’s processor. It’s a complex system, but the high level APIs are consistent and well-documented. As a result, there are many fantastic audio-related apps on the store: synthesizers, metronomes, music players, and toys that make music creation accessible to anyone. And because there’s a USB audio standard shared between iOS and macOS, there’s no need to install drivers.

I’m really grateful that I’m able to build on the work of past engineers to make Oscillator Drum Jams. It wasn’t easy, but I was ultimately able to ship it because the pieces already exist and can be plugged together by a single person working on occasional nights and weekends.

I’m also grateful that I got the opportunity to work on this project with Jake, whose passion and dedication to his music meant that we had over a hundred loops to share with the drum students of the world.

This is the fourth post in a series about my new app Oscillator Drum Jams. Start here in Part 1.

You can download Oscillator Drum Jams at

This will be a shallower post than the others in this series. I just want to point out a few things.

The original UI was backwards

When I started this project, I thought about it like a programmer: as a view of a collection of data. So I naturally created a hierarchical interface: pages contain exercises, and exercises have a bunch of stuff in them. I worked really hard on an “exercise card” that would slide up from the bottom of the screen and could be swiped up to show more detail or swiped down to be hidden.

Screenshot of old design

After an embarrassing amount of time, I realized I was optimizing for the wrong thing. Really, I was optimizing for nothing. I finally asked myself what people would want to do while using this app. My speculative answers were, in order of frequency:

  1. Stop and start loops
  2. Adjust the tempo
  3. Go to another exercise within the same page
  4. Go to another page

With that insight—and no user research, so never hire me as a PM—I made some wireframes:

Wireframe 1

Wireframe 2

I shed a single tear for my “wasted” work and spent the next couple of weekends replacing all of my UI code.

Although the iPad wireframe was still a bit silly, we ended up in a pretty good place. The important thing is the play/pause button is nice and big. At some point I expect to rearrange all the controls on iPad, though, because the arrangement doesn't have any organizing principle to it.

(It does look much better than it could have due to the efforts of designer Hannah Lipking!)

Final screenshot of iPhone app

Final screenshot of iPad app

AutoLayout is a pain

I did this whole project using nothing but NSLayoutConstraint for layout, and I regret it. Cartography or FlexLayout would have saved me a lot of time and bugs.

Continue to Part 5: Coda

This is the third post in a series about my new app Oscillator Drum Jams. Start here in Part 1.

You can download Oscillator Drum Jams at

With my audio assets in place, I started work on a proof of concept audio player and metronome.

The audio player in Oscillator has three requirements:

  1. It must support multiple audio streams playing exactly in sync.
  2. It must loop perfectly.
  3. It must include a metronome that matches the audio streams at any tempo.

Making the audio player work involved solving a bunch of really easy problems and one really hard problem. I’m going to gloss over lots of detail in this post because I get a headache just thinking about it.


I used AudioKit, a Swift wrapper on top of Core Audio with lots of nice classes and utilities. My computer audio processing skills are above average but unsophisticated, and using AudioKit might have saved me time.

I say “might have saved me time” because using AudioKit also cost me time. They changed their public APIs several times in minor version bumps over the two years I worked on this project, and the documentation about the changes was consistently poor. I figured things out eventually by experimenting and reading the source code, but I wonder if I would have had an easier time learning Core Audio myself instead of dealing with a feature-rich framework that loves rapid change and hates documentation.

Time stretching is easy unless you want a metronome

Playing a bunch of audio tracks simultaneously and adjusting their speed is simple. Create a bunch of audio players, set them to loop, and add a time pitch that changes their speed and length without affecting their pitch.

My first attempt for adding a metronome to these tracks was to keep doing more of the same: record the metronome to an audio track with the same length as the music tracks and play them simultaneously.

This syncs up perfectly, but sounds horrible when you play it faster or slower than the tempo it was recorded at. This is because each tick of a metronome is supposed to be a sharp transient. If you shorten the metronome loop track, each metronome tick becomes shorter, and because the algorithm can’t preserve all the information accurately, it gets distorted and harder to hear. If you lengthen the metronome loop track, the peak of the metronome’s attack is stretched out, so the listener can’t hear a distinct “tick” that tells them exactly when the beat occurs.

My first solution to this was to use AudioKit’s built-in AKMetronome class. This almost worked, but because it was synchronized to beats-per-minute rather than the sample length of the music tracks, it would drift over time due to tiny discrepancies in the number of audio ticks between the two.

My second, third, and fourth solutions were increasingly hacky variations on my first solution.

My fifth and successful metronome approach was to use a MIDI sequencer that triggers a callback function on each beat. On the first beat, the music loops are all triggered simultaneously and a metronome beat is played. On subsequent beats, only the metronome is played.

Metronome timing is hard

With a metronome that never drifted, I still had an issue: the metronome would consistently play too late when the music was sped up, and too early when the music was slowed down.

The reason is obvious when you look at the waveforms:

Illustration of waveforms  The peak of each waveform doesn't match exactly with the mathematical location of each beat, because each instrument’s note has an attack time between the start of the beat and the peak of the waveform. When we slow down a loop, the attack time increases, but the metronome attack time is the same, so the music starts to sound “late” relative to the metronome. If we speed it up, the attack time decreases, and it starts to sound “early.”

To get around this, I did some hand-wavey math that nudges the metronome forward or backward in time relative to the time pitch adjustment applied to the music tracks.
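The hand-wavey math boils down to something like this sketch (Python for clarity; the attack-time value would be a per-loop guess, not a number from the app):

```python
def metronome_offset_seconds(attack_time: float, rate: float) -> float:
    """How far to nudge the metronome so its tick lines up with the
    stretched peak of the music rather than the mathematical beat.

    attack_time: seconds from beat start to waveform peak at 1x speed
    rate: playback rate from the time pitch (2.0 = double speed)
    """
    # At playback rate r, the music's attack is heard at attack_time / r.
    # The metronome's own attack doesn't stretch, so shift the metronome
    # by the difference between the stretched and original attack times.
    return attack_time / rate - attack_time
```

Slowing the music down (rate below 1) yields a positive offset, delaying the metronome; speeding it up yields a negative one, playing it early.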

This approach uses the CPU in real time, which adds risk of timing problems when the system is under load, but in practice it seems to work fine.

Continue to Part 4: The Interface

This is the second post in a series about my new app Oscillator Drum Jams. Start here in Part 1.

You can download Oscillator Drum Jams at

To start making this app, I couldn’t just fire up Xcode and get to work. The raw materials were (1) a PDF ebook, and (2) a Dropbox folder full of single-instrument AIFF tracks exported from Jake’s Ableton sessions. Neither of those things could ship in the app as-is; I needed compressed audio tracks, icons for each track representing the instrument, and the single phrase of sheet music for every individual exercise.

Screenshot of Oscillator with controls for each track

Processing the audio

Each music loop has multiple instruments plus a drum reference that follows the sheet music. We wanted to isolate them so people could turn them on and off at will, so each exercise has 3-6 audio files meant to be played simultaneously.

Jake made the loops in Ableton, a live performance and recording tool, and its data isn’t something you can just play back on any computer, much less an iPhone. So Jake had to export all the exercises by hand in Ableton’s interface.

Ableton Live

We had to work out a system that would minimize his time spent clicking buttons in Ableton’s export interface, and minimize my time massaging the output for use in the app. Without a workflow that minimizes human typing, it’s too easy to introduce mistakes.

The system we settled on looked like this:

p36 50BPM triplet click/
    (exercise folder)/
        Drum Guide.aif
        ...
    (exercise folder)/
        Drum Guide.aif
        ...
The outermost folder contains the page number. Each folder inside a page folder contains audio loops for a single exercise. The page or the exercise folder name may contain a tempo (“50BPM”) and/or a time signature note (“triplet click”, “7/8”). This notation is pretty ad hoc, but we only needed to handle a few cases. We changed the notation a couple of times, so there were a couple more conventions that work the same way with slight differences.

I wrote a simple Python script to walk the directory, read all that messy human-entered data using regular expressions, and output a JSON file with a well-defined schema for the app to read. I wanted to keep the iOS code simple, so all the technical debt related to multiple naming schemes lives in that Python script.
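The core of that script is just a regular expression over folder names. A minimal sketch (the real script handled several historical naming schemes, so its patterns were messier than this):

```python
import re

FOLDER_RE = re.compile(
    r"^p(?P<page>\d+)"          # "p36" -> page number
    r"(?:\s+(?P<bpm>\d+)BPM)?"  # optional tempo, e.g. "50BPM"
    r"(?:\s+(?P<note>.+))?$"    # optional note, e.g. "triplet click"
)

def parse_folder_name(name: str) -> dict:
    match = FOLDER_RE.match(name)
    if not match:
        raise ValueError(f"unrecognized folder name: {name!r}")
    return {
        "page": int(match.group("page")),
        "bpm": int(match.group("bpm")) if match.group("bpm") else None,
        "note": match.group("note"),
    }
```

The parsed dictionaries then get collected into the JSON file the app reads.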

The audio needed another step: conversion to a smaller format. AIFF, FLAC, or WAV files are “lossless,” meaning they contain 100% of the original data, but none of those formats can be made small enough to ship in an app. I’m talking gigabytes instead of megabytes. I needed to convert them to a “lossy” format, one that discards a little bit of fidelity but is much, much smaller.

I first tried converting them to MP3. This got the app down to about 200 MB, but suddenly the beautiful seamless audio tracks had stutters between each loop. When I looked into the problem, I learned that MP3 files contain padding at the start and end of the audio because of how the compression algorithm works, which makes seamless looping very complex. MP3 was off the table.

Fortunately, there are many other lossy audio formats supported on iOS, and M4A/MPEG-4 has perfect looping behavior.

Finally, because Jake’s Ableton session sometimes contains unused instruments, I needed to delete files that contained only silence. This saved Jake a lot of time toggling things on and off during the export process. I asked FFmpeg to find all periods of silence in a file, and if a file had exactly one period of silence exactly as long as the track, I could safely delete the file.

Here’s how you find the silences in a file using FFmpeg:

  ffmpeg \
    -i <PATH> \
    -loglevel 32 \
    -af silencedetect=noise=-90.0dB:d=0.25 \
    -f null -
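silencedetect reports its findings on stderr, so the script's job is to run the command and scrape the log. A sketch (the 0.1-second tolerance is my illustrative choice, not the real script's):

```python
import re
import subprocess

SILENCE_RE = re.compile(r"silence_duration: ([\d.]+)")

def silence_durations(ffmpeg_stderr: str) -> list:
    """Pull every reported silence span out of silencedetect's log."""
    return [float(d) for d in SILENCE_RE.findall(ffmpeg_stderr)]

def is_entirely_silent(path: str, track_duration: float) -> bool:
    """True if the file is one silent span as long as the whole track."""
    result = subprocess.run(
        ["ffmpeg", "-i", path, "-loglevel", "32",
         "-af", "silencedetect=noise=-90.0dB:d=0.25",
         "-f", "null", "-"],
        capture_output=True, text=True,
    )
    spans = silence_durations(result.stderr)  # the filter logs to stderr
    return len(spans) == 1 and abs(spans[0] - track_duration) < 0.1
```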

Here’s how the audio pipeline ended up working once I had worked out all the kinks:

  1. Loop over all the lossless AIFF audio files in the source folder.
  2. Figure out if a file is silent. Skip it if it is.
  3. Convert the AIFF file to M4A and put it in the destination folder under the same path.
  4. Look at all the file names in the destination folder and output a JSON file listing the details for all pages and exercises.

Creating the images

The exercise images were part of typeset sheet music like this:

Sheet music

There were enough edge cases that I never considered automating the identification of exercises on a page, but I never considered doing it by hand in an image editor either. No, I am a programmer, and I would rather spend 4 hours writing a program to solve the problem than spend 4 hours solving it by hand!

I started by using ImageMagick to convert the PDF into PNGs. Then I wrote a single-HTML-file “web app” that could use JavaScript to display each page of sheet music, with a red rectangle following my mouse. The JavaScript code assigned keys 1-9 to different rectangle shapes, so pressing a key would change the size of the rectangle. When I clicked, the rectangle would “stick” and I could add another one. The points were all stored as fractions of the width and height of the page, in case I decided to change the PPI (pixels per inch) of the PNG export. I’m glad I made that choice because I tweaked the PPI two or three times before shipping.

Here’s what that looked like to use:

Red rectangles around sheet music

The positions of all the rectangles on each page were stored in Safari’s local storage as JSON, and when I finished, I simply copied the value from Safari’s developer tools and pasted it into a text file.

Now that I had a JSON file containing the positions of every exercise on every page, I could write another Python script using Pillow to crop all the individual exercise images out of each page PNG.
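The heart of that script is converting the fraction-based rectangles into pixel crop boxes. A sketch, assuming the rectangles are stored as x/y/w/h fractions (my guess at the schema):

```python
def to_pixel_box(rect: dict, page_w: int, page_h: int) -> tuple:
    """Convert a rectangle stored as fractions of the page size into a
    (left, upper, right, lower) pixel box, the format Pillow's
    Image.crop() expects. Storing fractions is what made changing the
    export PPI later painless."""
    return (
        round(rect["x"] * page_w),
        round(rect["y"] * page_h),
        round((rect["x"] + rect["w"]) * page_w),
        round((rect["y"] + rect["h"]) * page_h),
    )

# With Pillow, cropping one exercise out of a rendered page is then:
#   Image.open(page_png).crop(to_pixel_box(rect, page.width, page.height))
```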

But that wasn’t enough. The trouble with hand-crafted data is you get hand-crafted inconsistencies! Each exercise image had a slightly different amount of whitespace on each side. So I added some code to my image trimming script that would detect how much whitespace was around each exercise image, remove it, and then add back exactly 20 pixels of whitespace on each side.
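The trim-and-pad step can be done with Pillow's getbbox(). A sketch, assuming a white background:

```python
from PIL import Image, ImageOps

MARGIN = 20  # pixels of whitespace restored on every side

def normalize_margins(img):
    """Trim the uneven whitespace around an exercise image, then add
    back a uniform margin."""
    # Invert a grayscale copy so the white background becomes black;
    # getbbox() then returns the bounding box of the actual ink.
    bbox = ImageOps.invert(img.convert("L")).getbbox()
    trimmed = img.crop(bbox) if bbox else img
    padded = Image.new(img.mode, (trimmed.width + 2 * MARGIN,
                                  trimmed.height + 2 * MARGIN), "white")
    padded.paste(trimmed, (MARGIN, MARGIN))
    return padded
```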

I still wish I had found a way to remove the number in the upper left corner, but at the end of the day I had to ship.

Diagrams of the asset pipeline

Continue to Part 3: The Audio Player

For the past two years, I’ve been slowly working with my drum teacher Jake Wood on an interactive iOS companion to Oscillator, his drum book for beginners. The app is called Oscillator Drum Jams, and it’s out now!

Jake wrote almost 150 music loops tailored to individual exercises in the book. The app lets you view the sheet music for each exercise and play the corresponding loop at a range of tempos.

Instead of practicing all week to a dry metronome, or spending time making loops in a music app like Figure, students can sit down with nothing but their phone and have all the tools they need to be productive and engaged.

The app supports all iPhone and iPad models that can run iOS 11, in portrait or landscape mode.

This project ties together a lot of skills, and I’m going to unpack them in a series of posts following this one.

If you enjoy this series, you might also want to check out my procedural guitar pedal generator.

Hipmunk, the company that defined a large part of my career, is shutting down in seven days. This is a rambly post about it that I'm optimizing for timeliness over quality.

I joined in 2015 after leaving a failed startup, hoping to find out what life was like as an iOS developer while making travel search slightly nicer for ordinary people. Over the next two and a half years, I evolved from an overeager young engineer to a trusted engineering leader and manager, which enabled me to ultimately move on to Asana as the manager of the iOS team.

Hipmunk was full of people who wanted to make finding flights and hotels just a little bit easier. The entire value proposition was the user experience, and as an engineer, it was very rewarding to have almost all my work be in service of making a common experience a little bit nicer. There were lots of passionate people who were friendly, fun to be around, and good at their jobs.

I'm not the best qualified person to diagnose Hipmunk's failure, but from a business standpoint it was never exactly “crushing it.” Margins are thin for sales lead middlemen (i.e. metasearch sites), and airlines are complete bastards when it comes to their data. A lot of Hipmunk's revenue was coming from Yahoo Travel when I joined, and it shut down in 2016, which made things even tougher.

Users were hard to hang onto. We had to recapture them every time they decided to start searching for flights or hotels. The features we built to keep users around all failed to move the needle, and half the time the people using our search wouldn't click our booking links even though they ended up buying the flights, so we wouldn't get paid.

Hipmunk made some technical decisions that in my experience made development much more challenging than necessary. Our home-grown ORM was poorly understood, our database usage was weird and badly optimized, and we had multiple huge rewrites of web code in React + Redux without understanding best practices, leaving messes for future engineers to clean up under pressure to ship even more changes.

There were some engineering bright spots. Hipmunk's in-house analytics platform was expensive to maintain but extremely accessible to everyone at the company and used by most people in all departments. Hackathons were approached with genuine passion, and many projects shipped, some even being integrated into the product strategy. Here's a video of one of mine:

The program shown in the video is completely functional and you can really book flights and hotels using w3m. The code came together really quickly because I had already rewritten mobile web flight search singlehandedly, as well as writing lots of hotel search code on iOS.

Hipmunk had a ritual of “demo time” every Friday afternoon, where anyone could show the rest of us what they had been working on. I like showing my work and I did a lot of getting-pixels-on-the-screen, so I gained a reputation for being Demo Guy. I liked being Demo Guy and I miss it.

On the product side, the bag was similarly mixed. We wasted thousands of engineer hours building things that never got traction and never made money. We had a full time team of 3+ working on a chat bot for some reason. Meanwhile, as the team shrank first after layoffs and then after the Concur acquisition, we didn't have enough engineers to address tech debt or even maintain existing systems. And honestly, in hindsight, I'm not even sure we should have had native mobile apps.

That said, I did get to do a lot of things I'm proud of as an engineer, alongside Jesus Fernandez, Wojtek Szygiol, Cameron Askew, and Ivan Tse, plus wonderful designers Lauren Porter and Tony Dhimes, and PM Devin Ruppenstein:

  • Built the best reusable, configurable calendar picker the world has ever seen (Concur has since deleted the code)
  • Designed an interesting and practical interview question that I used to hire an excellent engineer
  • Designed a scalable, clean, teachable iOS app architecture and slowly migrated the whole app to it
  • Built some great UI components (Concur has mostly deleted them by killing the Discover feature)
  • Rewrote the entire mobile web flight search site by myself to have a better user experience and use a faster API (it's still up!)
  • Shipped the mobile web flight search site as part of the native app using a wrapper so clever you can barely tell it's secretly a web page

I made all of this:

Could Hipmunk have worked as a business under slightly different circumstances or with a different set of product decisions? Maybe. I'm not sure. Maybe with a small, bootstrapped team, but even then I wouldn't bet on it. The margins are too tight and the users aren't loyal, for good reason—all they want is a good price on a good flight or hotel.

Speaking of hotels, though, I invite you to look at Hipmunk Hotels:

Hipmunk Hotels

And then look at Google Hotels around November 2018:

Google Hotels

And then look at Steve Vargas's LinkedIn profile. 🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔


SillyPond generates unplayable sheet music in the style of John Stump's “Faerie’s Aire and Death Waltz”. It's my entry for PROCJAM 2019! Using a mixture of generative grammars, a linguistics library, noise functions, and a lot of hand-stitched random distributions, SillyPond constructs a description of sheet music, which is then converted to an image using the open-source music engraving software LilyPond.

The source code is one 720-line Ruby file that only depends on basic natural language and noise libraries. It boils down to a simple grammar with a cute library of phrases. Lovely stuff.

Emoji Soup Heritage

A life simulator with an emphasis on simulating learning, communication, and creativity.

It simulates NPCs being born, living, and eventually dying; gaining skills and advancing careers; meeting each other and learning from each other. And, sometimes, creating something that can be called a “Heritage”: a great work that will last forever and can be used by other NPCs to learn, or even to find someone with a common interest in it.

This is my favorite kind of half-finished project. The aesthetic says “I am getting down to business and have no time to do fancy visual crap,” and the content suggests so much to the imagination.

1002. Learned Charisma
1003. Learned Charisma
1003. Met Vicente Hobbs
1004. Will be a Actor
1004. Advanced as Actor
1005. Learned Charisma because of the interest in Actor career
1005. Met Esther Morse
1006. Advanced as Actor
1007. Advanced as Artist because of the interest in Actor career
1098. Met Nellie Trujillo because of the common interest in Actor career
1098. Met Cary Silva
1100. Met Willis Walton
1101. Was introduced to Abel Reese by Bruce Howard
1101. Met Chris Byrd because of the common interest in Police career
1101. Met Dwight Schultz
1103. Met Everett Mitchell because of the common interest in Actor career
1104. Met Julius Santana because of the common interest in Teacher career
1110. Died

The source code confirms that the author is building a rich world model from which they can draw interesting stories, which means they're after my own heart.

Sitcom Generator

Jay's off-color remark puts a sensitivity trainer in the office for a presentation, which prompts Richard to create his own. Bertha gets flashed in the parking lot, and Jay goes all out to secure the premises.

One of Bertha's old college friends comes to The Registrant. Bertha hits the Candelariochester media circuit to promote the Harvest Festival, but Jeremiah freaks out on air when his past is exposed.

Richard decides he can be a sitcom writer and comes up with “nothing.” What he doesn't realize is that he was only chosen because all the big-name players weren't available.

Now Jay must fess up, which will reveal his relationship with Bertha, or risk getting fired. Grace gets a town hall meeting to discuss her park idea, but she finds out that public reception is less than receptive.

A few weeks ago I ordered some drum gear from Sweetwater. They’re the anti-Amazon of music gear shopping: well-categorized, curated, human-run, and reliably legitimate.

When my package arrived, Procjam was in full swing, and I was working on a stories-about-dead-aliens generator. Sitting in the bottom of the box was Sweetwater’s paper catalog, and I found myself flipping through the pages reading descriptions of snare drums.

I’m not a guitarist, but I wondered what the guitar pedal section would look like. It turns out there are, uh, a lot of guitar pedals.

When I sat down to work on my Procjam entry again, I realized I would have a lot more fun generating guitar pedals than stories about dead aliens.

Play with the guitar pedal generator

Download a PDF catalog

Source code

Guitar pedals!


Why procgen? Why guitar pedals?

Everest Pipkin gave a talk at Roguelike Celebration this year about corpora-as-media, and made a great argument for why procedural generation is a good way of making art:

  1. It increases the likelihood of surprise.
  2. It can produce novel ideas that humans cannot or do not.
  3. It can be responsive and unique.
  4. Writing is hard, so we should make the computer do it.

When looking at the body of guitar pedal appearances and descriptions as corpora, several characteristics pop out as making them good procgen candidates. For descriptions, there is a jargony vocabulary describing features and output, a varied but predictable structure, and a large number of examples. For appearances, there is a flexible visual grammar that makes a guitar pedal recognizable no matter what the tiny details are.

Beyond all that theory stuff, I try to float with the currents of inspiration and not think too much. This time I was inspired by a Sweetwater catalog.

Computer as music catalog copywriter

As I was writing this post I kept wanting to quote parts of Everest Pipkin’s talk, but I was in danger of reproducing the whole thing in text, so you should just go watch the talk (it’s only 23 minutes) and I’ll assume you did that from now on. If you don’t have time, it’s fine. You’ll understand well enough.

The goal of my procedural guitar pedal marketing copy generator is to co-create a celebration of the unique language of music with the original copywriters. My corpus is 20 pages of guitar pedals literally torn out of the Sweetwater catalog.

A page torn out of the Sweetwater catalog

Rules, Ethos, Poetics

To create a grammar for generating new copy, it helps to figure out the rules, ethos, and poetics of the copyspace. (Thanks, Everest!)


The rules:

  1. Less than 400 characters.
  2. Always positive, never negative.
  3. Roughly two sentences.


The ethos:

  1. Every guitar pedal is valuable.
  2. Every guitar pedal is unique.
  3. Every guitar pedal is perfect.


The poetics:

  1. Details over comprehensiveness.
  2. Use the full body of musical language.
  3. Revere the guitar legends.
  4. Celebrate jargony features.

Anatomy of guitar pedal marketing copy

For each item in the corpus, I broke it down into its component parts before I entered it into my data files.

Here’s a word-for-word blurb for the Archer (“Classic Transparent Overdrive”) pedal:

The Archer is a high-headroom, transparent overdrive pedal, packing coveted K-style harmonic saturation and plenty of clean output. Archer’s touch-preserving headroom and endless amp-pushing boost are the results of an internal charge pump that boosts its 9-volt operating voltage up to 18 volts.

Broken down into its component parts, it looks like this:

[The :shortname] is a high-headroom, transparent [:purpose] [:pedal],
packing [:benefit] and [:benefit]. [:shortname]’s [:benefit] and
[:benefit] are the results of [:benefit].

Here’s another one, the PlimSoul (“A New Flavor of Overdrive”) pedal:

What’s better than a great overdrive pedal? How about two great overdrives in one pedal? Fulltone’s PlimSoul delivers just that, giving you a smooth, bluesy overdrive on one side and a more biting overdrive on the other. You can even blend the two channels to craft your ultimate signature overdrive.

What’s better than a great [:purpose] [:pedal]? How about _two_ great
[:purposes] in one [:pedal]? [:brand]’s [:shortname] delivers just
that, giving you [a :toneAdjective], [:toneAdjective] [:purpose] on
one side and a more [:toneAdjective] [:purpose] on the other. You can
even blend the two channels to craft your ultimate signature [:purpose].

Between these two, we’ve collected a list of “benefits”:

  • “endless amp-pushing boost”
  • “an internal charge pump that boosts its 9-volt operating voltage up to 18 volts”
  • “a genuine, US-made NOS 6205 preamp tube”

We also have some “tone adjectives”:

  • “smooth”
  • “bluesy”
  • “biting”
  • “clean”

Now let’s do a remix from scratch for “The Stevebox”:

[The :shortname] features [:benefit] for [a :toneAdjective],
[:toneAdjective] sound that everyone will love. And [:benefit] means
you’ll never lose that [:toneAdjective] [:purpose].

The Stevebox features a genuine, US-made NOS 6205 preamp tube for a biting, smooth sound that everyone will love. And an internal charge pump that boosts its 9-volt operating voltage up to 18 volts means you’ll never lose that bluesy overdrive.
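A minimal sketch of this kind of slot-filling looks like the following. The lexicon and the `expand` helper are made up for illustration; the real generator uses Improv and far more data.

```javascript
// Hypothetical slot-filling sketch: pick a random phrase for each
// [:slot] in a template. Lexicon contents are tiny samples from the post.
const lexicon = {
  shortname: ["Stevebox", "Archer"],
  toneAdjective: ["smooth", "bluesy", "biting", "clean"],
  purpose: ["overdrive", "distortion"],
  benefit: [
    "endless amp-pushing boost",
    "a genuine, US-made NOS 6205 preamp tube",
  ],
};

function choose(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// Replace every [:slot], [The :slot], or [a :slot] with a random entry.
function expand(template) {
  return template.replace(/\[(\w+ )?:(\w+)\]/g, (_, article, slot) =>
    (article || "") + choose(lexicon[slot])
  );
}

const out = expand(
  "[The :shortname] features [:benefit] for [a :toneAdjective] sound."
);
```

Each call produces a different remix, which is the whole appeal: a handful of templates times a big lexicon yields thousands of plausible blurbs.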


One last thing worth pointing out in this section is that most descriptions can be cleanly split into “first sentence” and “second sentence,” and it’s usually possible to match any first sentence with any second sentence.

What’s better than a great overdrive pedal? How about two great overdrives in one pedal? Fulltone’s PlimSoul delivers just that, giving you a smooth, bluesy overdrive on one side and a more biting overdrive on the other. And an internal charge pump that boosts its 9-volt operating voltage up to 18 volts means you’ll never lose that clean overdrive.

Not perfect, but more than good enough!

What you’re looking at here is a grammar: recursive rules for putting things together. As I write this, my grammar contains 2,901 words, plus a list of 1,820 famous musicians.


The tool I used to process the grammar into text is called Improv by Bruno Dias. Improv’s core idea is that phrases have tags, and once a tag is used, its value doesn’t change. So if I have a phrase tagged type: distortion, and that phrase is used somewhere in my output, then phrases tagged type: reverb will not appear because their tag has the wrong value.
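A toy version of that tag rule might look like this. It is my own simplification of the idea, not Improv's actual API.

```javascript
// Toy tag-consistency filter: once the model binds a tag like
// ["type", "distortion"], phrases carrying a different value for that
// tag are excluded from all future choices.
const phrases = [
  { text: "crushing distortion", tags: [["type", "distortion"]] },
  { text: "lush reverb", tags: [["type", "reverb"]] },
  { text: "true bypass", tags: [] }, // untagged: always allowed
];

function compatible(phrase, model) {
  return phrase.tags.every(
    ([key, value]) => !(key in model.tags) || model.tags[key] === value
  );
}

function pick(model) {
  const options = phrases.filter((p) => compatible(p, model));
  const phrase = options[Math.floor(Math.random() * options.length)];
  // Bind this phrase's tags so later picks stay consistent.
  for (const [key, value] of phrase.tags) model.tags[key] = value;
  return phrase.text;
}
```

With `model.tags.type` set to `"distortion"`, repeated calls to `pick` will never return the reverb phrase, which is exactly the property that keeps a generated pedal description internally consistent.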

To learn more about Improv, read Bruno's tutorial. My code uses most of Improv’s features, so it might serve as a good example. Feel free to email me if you’re working with Improv and need help.

My main “innovation” around using Improv is that I do some of the text entry in a CSV with LibreOffice, so I can browse word lists very quickly. A Python script converts the CSV to YAML.

Screenshot of LibreOffice editing my CSV file

Computer as guitar pedal designer

Text is cool, but to really sell something like this to internet randos, I needed graphics. I used Vue.js and a massive amount of CSS to create a grammar for guitar pedals that looks something like this:

  • Controls: pair, single row, pair of rows, triangle
    • Single control: knob, finger switch, or LED
  • I/O
    • Inside: labels
    • Outside: rectangles representing jacks
  • Name and brand
  • Bottom: 1-4 foot switches
    • Foot switch: Stomp plate, hex foot switch, or round foot switch

Examples of pedals

The graphical part has no connection to the marketing copy besides the name and brand, so a description can sometimes mention features the pedal doesn’t appear to have. In practice, no one notices.

CSS is a deep technical topic and I don’t have energy left to write about details, but here are some opinions:

  • CSS with Flexbox is a great visual prototyping tool.
  • Use Puppeteer for PDF output.
  • Use ems for sizing so you can scale the whole thing using font size.


I did a PDF version for NaNoGenMo (“write a program that generates 50,000 words of fiction”). The catalog has 760 pedals organized by type and brand.

Tips for PDF output using Puppeteer:

  • Fix background colors using -webkit-print-color-adjust: exact;.
  • Fix box shadows using -webkit-filter: opacity(1);.
  • Text shadows must be done using -webkit-filter rather than text-shadow.
  • There is a minimum font size. Trying to make text smaller than 0.5rem won’t work.
  • CSS columns don’t work well with page breaks. Group pages in divs.

Fun parts of the data files


You can talk to me on Twitter at @irskep or over email. The main feedback I’ve gotten so far is “this is great, do modular synths next.” No promises!

I recently got the procgen bug and the space nerd bug at the same time. So I decided to see if I could write a bit of code that could generate star systems that pass the astronomy nerd sniff test based on recent research.

I'm sure I made a lot of mistakes, and if this is ever posted in a proper space science forum many flaws will be found, but even so, I think I fulfilled the spirit of my mission.

I'm not really an astronomy nerd myself, so I started with Wikipedia pages. I learned about the Kepler mission. I found some charming old-school web sites. All told, I skimmed about 40 articles and papers, and directly used about half of what I came across.

When I finished, I published a JavaScript library called Stellar Dream that programmers can use in their own projects. I also built a cute Windows 95-style web app that lets you browse imaginary star systems as if you were using digital telescope software: The Keplverse! My hope is that people will use this tool to assist in worldbuilding for roleplaying campaigns and works of art.

Play with the imaginary telescope

Keplverse Telescope Software 1.0.png


The first thing you need in a star system is a star. I decided to include only main sequence stars, since some of my sources didn't include information about anything else. Non-main-sequence stars, i.e. giants and white dwarfs, happen to be inhospitable to interesting exoplanets anyway.

Star type is picked using a simple weighted random choice. Each star type is associated with an approximate color, luminosity range, and radius range. The mass can be computed from luminosity.

Colors are from What color are the stars? by Mitchell Charity. Probabilities are from The Real Starry Sky by Glenn LeDrew.

Type  Probability  Color
M     0.7645629    #ffcc6f
K     0.1213592    #ffd2a1
G     0.0764563    #fff4ea
F     0.0303398    #f8f7ff
A     0.0060679    #cad7ff
B     0.0012136    #aabfff
O     0.0000003    #9bb0ff

Radius, temperature, and luminosity are much trickier. Calculating the Radius of a Star from the Sloan Digital Sky Survey explains how to do it properly, but I hate real math, so I just took a random value between 0 and 1 and mapped it linearly onto a min–max range for the star's type. Because I was much more interested in planets than stars, and I wasn't planning to create accurate orbital mechanics, I was eager to hand-wave past this part of the generator.
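As a sketch, the star generator needs only two tricks: a weighted random choice over the probability table above, and a lazy linear mapping for radius, temperature, and luminosity. The structure below is my guess, not Stellar Dream's actual code.

```javascript
// Probabilities from Glenn LeDrew's "The Real Starry Sky," as tabulated
// in this post. They sum to 1.0.
const STAR_TYPES = [
  { type: "M", p: 0.7645629 },
  { type: "K", p: 0.1213592 },
  { type: "G", p: 0.0764563 },
  { type: "F", p: 0.0303398 },
  { type: "A", p: 0.0060679 },
  { type: "B", p: 0.0012136 },
  { type: "O", p: 0.0000003 },
];

// Weighted random choice: walk the list, subtracting each weight from a roll.
function chooseStarType(roll = Math.random()) {
  for (const { type, p } of STAR_TYPES) {
    if (roll < p) return type;
    roll -= p;
  }
  return "M"; // floating-point safety net
}

// The "I hate real math" mapping: a 0-1 roll onto a [min, max] range.
function lerp(min, max, t) {
  return min + (max - min) * t;
}
```

`chooseStarType()` with no argument rolls its own random number; `lerp(min, max, Math.random())` covers the "map 0–1 onto the star type's range" step for each physical property.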

Every star system has a habitable zone: a range of orbital radii close enough for liquid water to exist on a planet's surface, but far enough away for the atmosphere not to burn off. I learned how to compute the habitable zone for each type of star from an amazing Web 1.0 site. You combine the star's luminosity with a magic (to me) number, the normalized solar flux factor, or seff. Each star type has a seffMin and seffMax value representing the beginning and end of the habitable zone. The habitable zone boundaries in AU can be computed using Math.sqrt(luminosity / seff).
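In code, the boundary formula is a one-liner. The seff values in the example below are rough Sun-like illustrative numbers I picked for the sketch, not the site's actual data.

```javascript
// Distance in AU at which a star of a given luminosity (in solar
// luminosities) delivers a given normalized flux (seff). Higher flux
// means closer to the star.
function hzBoundaryAU(luminosity, seff) {
  return Math.sqrt(luminosity / seff);
}

// Illustrative Sun-like example: seff of about 1.1 at the hot edge and
// 0.36 at the cold edge gives a zone of roughly 0.95 to 1.67 AU.
const innerAU = hzBoundaryAU(1, 1.1);
const outerAU = hzBoundaryAU(1, 0.36);
```

Note that the larger seff value produces the *inner* boundary, since it takes a closer orbit to receive more flux.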

The last interesting value I chose to look at was metallicity. Research published in The Astrophysical Journal in 2004 by Fischer and Valenti suggests that gas giants are much more common around stars with high metallicity. My best approximation for picking reasonable metallicity values came from a small figure on page 5 of The Metallicity Distribution of the Milky Way Bulge by Ness and Freeman. I combined two Gaussian distributions that put most stars in the 0–0.5 range, with the rest at -0.5–0. There's a correlation between high metallicity and proximity to the galactic plane, but I explicitly ignored that since I wasn't trying to place any stars in a coordinate space.
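Sampling from a mix of two Gaussians takes only a few lines with the Box-Muller transform. The means, spreads, and 70/30 mixing weight below are my guesses at the description above, not Stellar Dream's actual numbers.

```javascript
// One normally distributed sample via the Box-Muller transform.
function gaussian(mean, stdDev) {
  const u1 = Math.random() || Number.MIN_VALUE; // avoid log(0)
  const u2 = Math.random();
  return (
    mean + stdDev * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2)
  );
}

// Mixture of two Gaussians: most stars land roughly in 0-0.5,
// the rest roughly in -0.5-0.
function sampleMetallicity() {
  return Math.random() < 0.7 ? gaussian(0.25, 0.1) : gaussian(-0.25, 0.1);
}
```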

Binary star systems

Many star systems, perhaps most, contain two or more stars orbiting each other 0–1 light years apart. As far as I could tell, the fraction of star systems in each configuration is not well known. I also didn't want to deal with modeling the effects of the distance between binary stars on orbits and such.

So I decided to pull some heuristics out of thin air: 11% of the time, an extra star will be added to the system as a “close binary”, and the less massive star will be ignored for planet-generating purposes. The less massive star also does not affect the habitable zone of the system, which I realize might be too extreme a simplification.
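That heuristic fits in a few lines. Here `makeStar` and the star objects are stand-ins I invented for the sketch, not Stellar Dream's API.

```javascript
// 11% of the time, add a close-binary companion. Planet generation and
// the habitable zone only look at stars[0] in this sketch, so the more
// massive star is kept first.
function maybeAddCompanion(system, makeStar) {
  if (Math.random() < 0.11) {
    system.stars.push(makeStar());
    system.stars.sort((a, b) => b.mass - a.mass);
  }
  return system;
}
```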


Research on stars has been steady for as long as people have had telescopes, but research on exoplanets took a huge leap forward when NASA launched the Kepler telescope into space, surveying 150,000 stars for exoplanets over four years. Because of this mission, there's a huge difference in the quality and results of research published before and after about the year 2010. I wanted to combine some high-level basic facts from the Kepler mission into a simplistic yet believable model for placing planets in star systems.

What we know about exoplanets

Googling around for Kepler findings is a good way to spend an afternoon. I learned a few interesting and surprising facts, not all directly related to Kepler:

  • Few gas giants have been found in far-out orbits, despite Jupiter and Saturn being quite far out in our solar system.
  • At least one in six stars has an Earth-like planet.
  • Planets tend to have a similar size to those in adjacent orbits. (Many Worlds is a great blog!)
  • Nearly all sun-like stars have Earth-like planets.
  • 70% of stars, regardless of type, have an Earth- or Neptune-like planet in orbits up to a bit over 1 AU.
  • M-dwarf stars have an average of 0.5 Earth-like planets in their habitable zones.

Beyond those neat little nuggets, I also discovered a fun mini-controversy: experts disagree about how to classify planets! Some charts have as many as seven classes: Mars-size, Earth-size, super-Earth-size, Neptune-size, sub-Jupiter-size, Jupiter-size, and super-Jupiter-size. But an article called Sorry, Super-Earth Fans, There Are Only Three Classes Of Planet, a summary of the much drier Probabilistic Forecasting of the Masses and Radii of Other Worlds, convinced me that three classes are enough:

  1. Terran worlds with rocky surfaces and maybe atmospheres
  2. Neptunian worlds with large atmospheres made of hydrogen and helium
  3. Jovian worlds that are big enough to compress on the inside

The only thing that differentiates these classes is size. Once a planet gains a certain mass, two Earths or so, it gains a hydrogen-helium envelope that is inhospitable to life and becomes a Neptunian planet. And when it crosses another mass threshold, the extreme gravity means it can't have a well-defined surface, and becomes a Jovian. Finally, a body might be massive enough to behave like a successful or failed star, no longer even classified as a planet.

Each planet type has a different mass-radius relation. Mass ranges are powers of ten in Earth masses (M⊕), and radius scales as mass raised to the exponent shown.

Type       Mass (10^x M⊕)   Radius exponent
Terran     -1.3 to 0.22      0.28
Neptunian   0.22 to 2        0.59
Jovian      2 to 3.5        -0.04
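A sketch of "pick a mass in the type's range, then derive the radius": the leading constants of the relation are dropped here, so the radii come out relative rather than in Earth radii, and the small negative Jovian slope follows the published Chen & Kipping fit.

```javascript
// Mass ranges as log10 of Earth masses; radius scales as mass^exponent.
const PLANET_TYPES = {
  Terran: { logMassRange: [-1.3, 0.22], radiusExponent: 0.28 },
  Neptunian: { logMassRange: [0.22, 2.0], radiusExponent: 0.59 },
  Jovian: { logMassRange: [2.0, 3.5], radiusExponent: -0.04 },
};

function makePlanet(typeName) {
  const { logMassRange: [lo, hi], radiusExponent } = PLANET_TYPES[typeName];
  // Uniform in log space, so small planets aren't drowned out by big ones.
  const massEarths = Math.pow(10, lo + (hi - lo) * Math.random());
  return {
    type: typeName,
    massEarths,
    radius: Math.pow(massEarths, radiusExponent), // relative units
  };
}
```

The negative Jovian exponent captures the fun fact that heavier Jovians actually get slightly *smaller* as gravity compresses them.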

Imagining solar systems

Putting all these facts together, I made what I thought was a reasonable pass at a set of simplistic rules.

First, if the star type is A, B, or O, then create no planets. These stars either change too quickly to support life, or have stranger rules regarding planet formation. O-type stars become supernovas, B-type stars may only have gas giants, and I couldn't find enough data on A-type stars. (Terraforming Wiki: Stars and other hosting celestial bodies)

Next, there's a 30% chance of stopping, because only about 70% of remaining star systems have planets at all.

Now that it's time to make some planets, use some eyeballed and estimated figures to weight the different kinds of planets.

  • Gas giants have a weight of 0.3 if the star's metallicity is >= 0, otherwise it's 0.04.
  • Terrans have a weight of 0.3.
  • Neptunians have a weight of 0.6. This is the most common type of observed exoplanet.

I decided to ignore the “planets in adjacent orbits have similar sizes” correlation, and instead pick planet type by simple weighted random choice. But where in orbit will they go? I couldn't find any accessible research-supported models for this, so I pieced together a few ideas:

  • Kepler was mostly looking for planets in close orbits, so it might have missed many planets in far orbits.
  • Kepler found a lot of planets in close orbits.
  • Planets are pretty common in the habitable zones of stars.
  • The Titius-Bode relation is pretty accurate.

Combining those ideas, I thought it would be reasonable to create eleven “slots”: one somewhere in the habitable zone, five closer to the star, and five farther away from the star. One would be randomly chosen as the “start,” and then planets could be added inside and outside the existing planets based on some probability, occasionally skipping an orbit just for fun.

I added a special cheat case to match the statistic that M-dwarfs average 0.5 Terran planets in their habitable zones. If the star is an M-dwarf, there's a 40% chance of forcing the start of the planet sequence to be a Terran planet in the habitable zone.

Once a planet is added, there's a 30% chance of adding an additional planet in an orbit closer or farther away from the star. Half the time, an orbit slot is skipped. The 30% value comes from a NASA infographic, Planetary Systems by Number of Known Planets.
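The slot-walking loop described above can be sketched like this. The slot count and probabilities come from the text; everything else, including the `makePlanet` stand-in and the exact slot indexing, is made up for illustration.

```javascript
// Eleven orbit slots: 0-4 inside the habitable zone, 5 in it, 6-10 beyond.
// Start at a random slot, then 30% chance of each additional planet,
// skipping an orbit half the time.
function placePlanets(makePlanet) {
  const slots = new Array(11).fill(null);
  let inner = Math.floor(Math.random() * 11); // randomly chosen "start"
  let outer = inner;
  slots[inner] = makePlanet();
  while (Math.random() < 0.3) {
    const step = Math.random() < 0.5 ? 2 : 1; // occasionally skip an orbit
    if (Math.random() < 0.5 && inner - step >= 0) {
      inner -= step;
      slots[inner] = makePlanet();
    } else if (outer + step <= 10) {
      outer += step;
      slots[outer] = makePlanet();
    }
  }
  return slots;
}
```

The M-dwarf cheat from the text would bolt on here: before the loop, sometimes force the start to be a Terran planet in the habitable-zone slot.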

Each planet gets a random mass based on its type, and its radius can be computed from its mass.

That's the end of the algorithm! The result is a model of a star system that sort of, kind of, if you squint at it, vaguely lines up with what scientists know about the distribution of exoplanets among star systems.

Future work

There are a few facts I didn't account for in my program that would be simple to fix. I made no attempt to make planets in adjacent orbits have similar sizes, since I couldn't find precise explanations of how strong that correlation was. I also didn't try to build a model of planets any deeper than Kepler studies: I have no idea what planets are made of, or anything about moons around exoplanets. I didn't include asteroid belts, even though they are likely quite common according to /r/cosmology.

Since this program is mainly for science fiction worldbuilding, I think the lack of an orbital period calculator is my biggest missing feature. It's probably a simple calculation, but I haven't yet found a good resource for someone as lazy with math as I am. Hopefully someone taking a physics class can help me out with applying Kepler's 3rd Law. I made an attempt, but I got stuck on dealing with units and the Gaussian gravitational constant.
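For what it's worth, the units problem mostly disappears if you stay in solar units: with the semi-major axis in AU and the stellar mass in solar masses, Kepler's third law gives the period in Earth years directly. A sketch, assuming a circular orbit and a planet much lighter than its star:

```javascript
// Kepler's third law in solar units: P^2 = a^3 / M, with a in AU,
// M in solar masses, and P in Earth years. No gravitational constant
// (Gaussian or otherwise) required.
function orbitalPeriodYears(semiMajorAxisAU, starMassSolar) {
  return Math.sqrt(Math.pow(semiMajorAxisAU, 3) / starMassSolar);
}
```

Sanity check: Earth (1 AU, 1 solar mass) comes out to 1 year, and Mars at about 1.524 AU comes out to about 1.88 years.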

If you have a suggestion or notice a problem, please email me or open a GitHub issue on either the Stellar Dream or Keplverse project.




Prior art