Stop using second-class languages and build tools to develop your web application

Please, stop using all this HTML middleware to develop your web application. CoffeeScript, SASS, LESS, HAML, Middleman, Bower, Grunt, etc. Stop it.

No, it doesn’t make things easier for the developer. Sure, it might make things easier for YOU, a developer, but not for all developers or future maintainers. I don’t want to learn yet another silly combination of second-class languages to figure out what your application is doing. Just write vanilla JavaScript. Just write vanilla CSS. Just write vanilla HTML.

Stop with the middleware nonsense. It’s not necessary and it actively harms the maintainability of the application, not to mention that it certainly puts a shelf life on the application as a whole. How long will these second-class languages be around for? Will they be supported in the future? Are they going to completely change their inner/outer workings and break your application in two years?

There are several very valid and very strong reasons against using all this middleware and these second-class languages.

Development Environment

One big strike against using middleware and second-class languages is that the development environment becomes virtually impossible to reproducibly install and run across all platforms.

Installing just one tool like CoffeeScript means that you need to specify exactly which version of CoffeeScript worked for you on the platform you developed on, plus the specific version of node.js required to execute that CoffeeScript version. The standard package managers for other platforms might not carry that version yet, leading you into a yak-shaving exercise to go find the proper version for your platform, maybe compile it, install it, and manage it in a virtual environment of some sort so it doesn’t conflict with a pre-existing installation of a different version required for another project you might be working on, etc.

As for CoffeeScript itself, I feel it’s on its way out in light of recent advancements in JavaScript, starting with ECMAScript 6. We don’t need some silly syntax translator. We can write JavaScript just fine now. Sure, it’s more typing for you, but code is read far more often than it is written.

Maybe you decide to go with some middleware management tool like Middleman. That will require a ruby installation of a very specific version so that all the features you rely on will work, and of course requires that you know a bit of ruby to work with the config.rb configuration file. Well, I’ve never touched anything ruby before, and I really don’t care for it so I’ve never bothered to learn it. Granted it’s not that hard to pick up on and follow some patterns to modify some stuff in the config.rb, but if I have to dig deep and really change the guts of how the application wants to be configured, I’m at a loss for time and effort to go do that.

Let’s say I have no time to dedicate towards learning CoffeeScript, or HAML, or SCSS, and I just want to statically recompile all these second-class languages into their first-class counterparts and ditch the second-class language code entirely. Can I do that with Middleman? Yes and no. I can certainly get all the output into one gigantic messy ball of JavaScript in a single all.js file. Is that maintainable going forward were I to completely drop the CoffeeScript source? Nope. I’ve got to find a way to replicate the dependency-reordering logic of Middleman, compile all the CoffeeScript files individually, and rewrite the HAML to generate the appropriate script inclusion tags. What a pain in the ass. So there’s really no time-feasible way to drop these damn second-class languages that I don’t care for.

Middleman also apparently wants to be run and handled entirely by Bundler, which is some ruby dependency manager if I understand it all correctly, which I probably don’t, and I don’t have enough time to begin to care about any of that. I have to learn what Bundler is and how it works just to execute Middleman. Should I need to worry about this? Certainly not.

Does it all work on Windows? Regrettably, no. Or at least not as far as I tried before giving up and doing it all inside a Linux VM. I tried setting it all up via MSYS2, my currently preferred Windows development “sub-“environment, and it all failed totally miserably. Does it work on OS X? Maybe. Does it work on Linux? Probably. Do I want to use a Linux desktop to develop? Personally, no.

If this were all done with vanilla HTML, JS, and CSS, I wouldn’t have any problems loading it into my preferred development environment, not to mention IDE. That raises another question… will my IDE understand all the second-class languages being used here? Does it recognize HAML or SCSS or CoffeeScript? Maybe. Will it recognize them as first-class citizens with full code-completion support? There’s no guarantee. What is guaranteed is that pretty much any modern IDE WILL understand HTML, JS, and CSS. They pretty much have to if you’re using them for web development.

Learning curve

I touched on this a bit in the previous section, but I just don’t have the time to dedicate to learning all these second-class languages. They look completely foreign to me. My eyes are not trained to notice what is significant and what is insignificant in the code produced in these languages. I can’t just scan through an unfamiliar language and glean semantics from it. Maybe some seemingly inconsequential language sigil completely changes the expected behavior of the affected code and I’m unaware of that, e.g. maybe just adding a ‘@’ or ‘~’ sign here or there completely turns the code’s behavior on its head. Maybe they mean nothing and are accepted as part of identifiers in your language. Maybe it’s a mixture where ‘@’ means nothing but ‘~’ means something very important to the semantics. Do I care? Not right now.


The bottom line that irks me about all this is that there is so much waste generated as a side-effect of installing all the build tools to take advantage of these second-class languages’ touted benefits over their first-class counterparts.

I need to install two extra languages (ruby, node) with their own runtimes that I don’t use on a daily basis JUST to run the build tools to recompile these second-class languages into their first-class counterparts so it all can be run by a browser. Then I need to install these tools into their respective runtimes with their own list of per-runtime dependencies.

Does Middleman install the nodejs runtime? I don’t have a clue. I might have two or three different installations of node.js or ruby or their gems or npm packages sitting around in various places on my system now and I wouldn’t have a clue where they are or how to invoke them.

I’ve had to install I don’t know how many ruby gems just to get Middleman off the ground. So many libraries and useless extra bits of code completely irrelevant to the final product. Most of those gems required a C++ compiler installed to compile some native code for whatever reason. Was I supposed to know that ahead of time? Nobody told me. So I need a C++ compiler to compile some part of some random ruby gem that’s going to run for maybe 10 seconds as part of the build process, if it even gets used at all? No thanks. Oh, and if I didn’t actually need it, it’s still listed as a dependency and it still has to be compiled and the gem manager will totally bomb if it can’t be compiled. Can I clean out the object files from the C++ compilation phase or will those just rot on my disk? What other cruft is ruby installing on my system that I never asked for?

What a complete waste of disk space and time and energy installing all these prerequisites. Not to mention energy required to write all the documentation and list the step-by-step development environment setup procedure for the next poor sod who has to pick up this project and make a simple change. Do I even know the minimal step-by-step procedure required to set up the environment? Nope. I was a blind man groping through a dark alley just trying to resolve one error at a time. Can I back up two steps ago and undo what I just did? Nope. Did I record my procedure? Of course not. I just wanted to get the damn thing off the ground to see what’s what.


You’ve just ballooned my development environment up by 1,000,000 KB or so just to compile maybe 300 KB of web code. No, this isn’t easier for anyone involved. Stop doing it. Stop adding unnecessary dependencies to the Nth degree. Don’t make me require two extra big fat runtimes and who knows how many packages just to run your build tools because they may have allowed you to type quicker. Instead, learn your craft well, and understand it from the ground up. Do things in the most efficient way possible with the least amount of total dependencies. Maximize the utilization of the resources of everyone involved in using your work, not just your own.

Fix animated GIF timing

Have you found a priceless animated GIF but are disappointed with its timing? Maybe it’s too slow or too fast? I’ve got an easy solution for you. You don’t need any complex software like ffmpeg or mencoder or imagemagick or any of that crap (well, they’re not crap; they’re just ridiculously complicated and user hostile). All you need is a simple hex editor.

Here’s a simple before/after example of what I’m talking about:

*(Before and after GIFs embedded here: the slow original on top, the corrected version below.)*

As you can see, the top GIF is much slower (depending on your browser, here in Chrome it animates at the default of 10fps). The bottom GIF is much closer to the original movie’s framerate of approximately 25fps.

To do these kinds of simple corrections, all you need is a hex editor with a search and replace feature that lets you search for hex strings and replace them with other hex strings. The venerable HxD is a fantastic hex editor with exactly such a feature. Go download and install HxD now.

Now download your new favorite timing-challenged GIF to your local computer and open it with HxD.

Search (CTRL-F) from the beginning of the file for the hex string 21 F9 04 05 00 00 (you need to select Hex String from the drop-down; the default is Text String, which is not what you want). If you cannot find any matches using this 6-value string, search instead for 21 F9 04 05 (leaving off the last two 00 00). If even that does not find any matches, then search for just the first three hex values, 21 F9 04. Whatever you do find, copy 6 values out of the display starting at 21 F9 04 and put them back into the search/replace dialog.

The last two values of the 6 value sequence specify the delay time to wait after displaying each frame. You’ll encounter many copies of this sequence throughout the GIF file because it will exist for each frame of the animation.

Most of the problematic GIFs you’ll encounter will just specify 00 for the frame delay time, meaning “don’t care”, which is pretty dumb if you think about it. Browsers will just interpret that as defaulting to something obscenely slow and useless like 10fps, which explains why the GIF appears to be slow in playback.

Once you’ve identified the 6-value sequence to search for, go ahead and replace all occurrences of it with 21 F9 04 XX 07 00 (where XX is the same value as what you searched for, which may or may not be 05). The second-to-last value, 07, is the low byte of the frame delay time, measured in 1/100ths of a second (the final 00 is the high byte of that 16-bit value). Feel free to modify that value to your liking. Choosing the best value here depends entirely on the source material’s frame rate, so I cannot tell you exactly what to fill in here.

I find that useful values are in the range from 04 to 07. Remember that the smaller the number, the faster the animation will run. You can do the math yourself based on the source material’s frame rate: 100 / n, where n is the frame rate.

  • 03 is pretty good for 30fps source content (actual rate will be 33.333fps, a bit too fast and sorta noticeable)
  • 04 is perfect for 25fps source content (most movies)
  • 07 is pretty good for 15fps source content (actual rate will be 14.286fps, a little slow but not very noticeable)
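If you’d rather not do the search-and-replace by hand, the patch is trivial to script. Here’s a sketch in Python (the helper name and the sample bytes are mine, not from any real GIF) that rewrites the delay bytes in every Graphic Control Extension — the 21 F9 04 sequences described above:

```python
def set_gif_delay(data: bytes, delay_cs: int) -> bytes:
    """Set the frame delay (in 1/100ths of a second) in every
    Graphic Control Extension block of a GIF byte string."""
    out = bytearray(data)
    i = 0
    while True:
        # Each GCE starts with the bytes 21 F9 04; the two bytes at
        # offsets +4/+5 are the little-endian delay time.
        i = out.find(b"\x21\xF9\x04", i)
        if i < 0 or i + 6 > len(out):
            break
        out[i + 4] = delay_cs & 0xFF
        out[i + 5] = (delay_cs >> 8) & 0xFF
        i += 6
    return bytes(out)

# 100 / fps gives the delay value; e.g. 25fps source -> 4:
fps = 25
delay = round(100 / fps)

# A fake two-frame GCE sequence with "don't care" (00 00) delays:
gif = b"\x21\xF9\x04\x05\x00\x00\x2C" + b"\x21\xF9\x04\x05\x00\x00\x3B"
patched = set_gif_delay(gif, delay)
```

Same caveat as the replace-all in a hex editor: if the byte sequence 21 F9 04 happens to occur inside compressed pixel data, it will get clobbered too, so keep a backup of the original file.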

Unfortunately, we can’t specify fractional values in this animation delay time field, only integer values. This appears to be an oversight of the GIF animation specification. The 16 bits of space reserved for this animation rate value are horribly underutilized. No one should ever need an animation delay of 655.35 seconds, for instance. They should have instead stored a frequency value here, not a delay time value. Off the top of my head, I would use these 16 bits to store the animation rate in fps at a x100 scale. This would give much finer-grained control over the frame rate, e.g. storing 2,997 as a 16-bit unsigned integer value would yield a playback rate of 29.97fps, or 3,000 for 30.00fps, or 1,500 for 15.00fps, etc.
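Just to make that proposal concrete, the arithmetic would be this simple (these helpers are purely hypothetical; no such field exists in the actual GIF spec):

```python
# Hypothetical: store frames-per-second at x100 scale in the 16-bit field.
def encode_fps(fps: float) -> int:
    value = round(fps * 100)
    assert 0 <= value <= 0xFFFF  # fits in 16 bits for rates up to 655.35fps
    return value

def decode_fps(value: int) -> float:
    return value / 100

print(encode_fps(29.97))  # -> 2997
print(decode_fps(1500))   # -> 15.0
```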

Use ffmpeg to record desktop video on Windows

Want to capture your desktop for screencasts on Windows? Use ffmpeg! It’s totally free and has no watermarks, no ads, etc. Unfortunately, there’s no GUI nor a simple installer for inexperienced users, but I’m here to try to alleviate some of that pain and give you the magical ffmpeg incantation to instantly get good quality results.

Download & Install

NOTE: When you go to download the installers for the tools you need, be sure to consistently choose either the x86 or the x64 version of each. Do not mix and match x86 and x64 installs as that will not work. If you have a 64-bit machine, I recommend the x64 version. If your machine is ancient and somehow only 32-bit, go with the x86 version.

First, download ffmpeg from here. Install it to somewhere obvious like C:\ffmpeg\. I don’t recommend installing into C:\Program Files\. File paths with whitespace in them are annoying to deal with on the command line, which is where we’ll be working.

ffmpeg is the main tool that handles video and audio encoding. It muxes those streams together and records them into a video file. Unfortunately, it does not come bundled with a desktop video capture input source for Windows, which is what we need in order to capture your desktop screen. So…

You’ll need a DirectShow filter to capture your desktop that ffmpeg can use. Go here and follow the steps to download a binary version of the screen-capture-recorder project. (As of this writing you can find those binaries hosted on sourceforge's terrible, terrible site.)


Open a command prompt (Start -> Run -> cmd). Type in the following commands, assuming you installed ffmpeg to C:\ffmpeg:

C:\>SET PATH=%PATH%;C:\ffmpeg\bin

C:\>ffmpeg -f dshow -i video="screen-capture-recorder" -f dshow -ac 2 -i audio="virtual-audio-capturer" -ar 48000 -acodec libmp3lame -ab 192k -r 30 -vcodec libx264 -crf 18 -preset ultrafast -f mpegts desktop.mpg

That will start ffmpeg running and will capture both video from your desktop and audio from your sound card (what you’re hearing). It will encode video on-the-fly in h.264 at 30 frames per second with high quality and encode audio on-the-fly in MP3 at 192kbps (also high quality).

IMPORTANT: Press ‘q’ to stop. DO NOT PRESS CTRL-BREAK OR CTRL-C or you will prematurely abort the process and the file may not be finalized properly. Also, make sure the output file does not exist before you start recording.

Let’s break this command down a bit:

  • -f dshow specifies that the next input source is a DirectShow filter
  • -i video="screen-capture-recorder" specifies the screen-capture-recorder desktop video source you installed earlier
  • -f dshow specifies that the next input source is a DirectShow filter
  • -ac 2 specifies 2 audio channels to capture (i.e. stereo)
  • -i audio="virtual-audio-capturer" specifies the virtual-audio-capturer audio source that comes installed with the screen-capture-recorder (this records the audio you hear through your speakers)
  • -ar 48000 specifies to capture audio at 48000Hz (ideal for audio/video sync)
  • -acodec libmp3lame specifies to use libmp3lame as the audio encoder which implements the MP3 standard
  • -ab 192k specifies to encode MP3 audio at 192kbps (high quality)
  • -r 30 specifies to capture video at 30 frames per second (ideal for YouTube)
  • -vcodec libx264 specifies to encode video using the libx264 encoder which implements the h.264 standard
  • -crf 18 specifies the h.264 encoding quality of 18 which is good (0 = lossless, 30 = crap)
  • -preset ultrafast specifies an ultra-fast encoder setting so that we can reliably record without interruptions
  • -f mpegts to specify that we want to use MPEG-TS as our container format; this is beneficial for live streaming purposes and also for uploading to YouTube.
  • desktop.mpg is our output file

Feel free to tune the parameters to your liking. Enjoy!

Advanced Usage

To get a list of all the DirectShow filters (audio and video) available for you to record from, use this command:

ffmpeg -list_devices true -f dshow -i dummy

This will output something like this (example from my system):

[dshow @ 00000000028f83e0] DirectShow video devices
[dshow @ 00000000028f83e0]  "screen-capture-recorder"
[dshow @ 00000000028f83e0] DirectShow audio devices
[dshow @ 00000000028f83e0]  "1-2 (UA-1000)"
[dshow @ 00000000028f83e0]  "virtual-audio-capturer"
[dshow @ 00000000028f83e0]  "3-4 (UA-1000)"
[dshow @ 00000000028f83e0]  "5-6 (UA-1000)"
[dshow @ 00000000028f83e0]  "7-8 (UA-1000)"
[dshow @ 00000000028f83e0]  "9-10 (UA-1000)"
[dshow @ 00000000028f83e0]  "Mon (UA-1000)"

I use a Roland EDIROL UA-1000 multi-channel USB audio interface which has 8 input and output channels plus a monitor input source for recording what’s going out to the speakers.

You can add more than one audio track to your video if you want to narrate along with your video and also record the speaker output but not prematurely mix the two tracks. Here’s an example incantation (specific to my system) to do so:

ffmpeg -f dshow -i video="screen-capture-recorder" -ac 1 -f dshow -i audio="1-2 (UA-1000)" -ac 2 -f dshow -i audio="Mon (UA-1000)" -map 0 -r 30 -vcodec libx264 -crf 18 -preset ultrafast -map 1:0 -ar 48000 -acodec libmp3lame -ab 192k -map 2 -ar 48000 -acodec libmp3lame -ab 192k -f mpegts raw.mpg

I have my microphone on the “1-2 (UA-1000)” channel pair recorded to the primary audio track (in mono) and then the monitor output “Mon (UA-1000)” recorded to a second audio track (in stereo). Later, I extract the two audio streams from the recorded video file and process the microphone signal to clean it up; add compression, EQ to pull out low end and add high end, etc. Then I mix the two tracks back together and output a final video file including the mixed stereo audio track.

Note that parameter ordering is very important so don’t go rearranging things. The -map options specify how the input sources are to be mapped to output streams in the recorded file.

MiniLISP for C#

MiniLISP is an extremely minimal implementation of a limited yet powerful enough dialect of LISP which I invented for use in C# and .NET applications. It is a dynamically yet strongly typed language with very few primitives: function invocation (func param1 param2), lists [a b c], identifiers hello, integers 1024, and quoted strings 'single quotes with \\ backslash \n escaping and multi-line literals.'. It is implemented across two C# source files and relies on no external dependencies other than the .NET framework, thus making it ideal for direct inclusion into any existing project.

Function invocation (denoted with parentheses or curly braces) and lists (denoted with square brackets) are kept as separate primitives to allow easy distinguishing of data from code, both visually and as an implementation simplification. This may be against common LISP idioms, but I find it both practical and useful for the simplicity of the language. This dialect is intended to make it extremely simple for C# developers to implement external functions for the LISP code to call out to.

Only integers and strings are currently supported as the primitive data types. There is no support for float, double, or decimal types. This will likely be revised in the future.

The reason for the choice of single-quoted strings (as opposed to the more popular double-quoted strings) is so that the language may be embedded in a C# string literal with minimal fuss. C# string literals are denoted with double quotes and one would have to double-up each double quote in order to escape it. Using single quote characters allows us to avoid this nastiness. Also, we gain free backslash escape sequences for our strings since C# raw string literals (e.g. @"raw \ string") do not interpret backslash escape sequences.

Some example MiniLISP code:

(prefix st [StudentID FirstName LastName])
{prefix st [StudentID FirstName LastName]}

Both of these expressions are identical. Function invocation is denoted with either parentheses or curly braces. Curly braces are used to allow embedding of MiniLISP code inside SQL query text, for example. Standard SQL syntax makes little if any use of curly brace characters, so they are an ideal signal to indicate the start and end of a section of MiniLISP code. Java’s JDBC escape syntax demonstrates success with its use of curly braces to escape out of SQL.

Function parameters are separated by whitespace, as are list items.

Identifiers and quoted strings are both treated as strings for data purposes. Identifiers may contain alphanumeric sequences and the hyphen character, but must start with either a hyphen or an alpha character.

Quoted strings must start and end with a single quote character and may contain common backslash escape sequences.

Integers must start with a numeric character and proceed in kind.
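Reading those three token rules together, they might be sketched as regular expressions — an assumption on my part about the grammar as described; the actual Lexer implementation may differ in details:

```python
import re

# Hypothetical patterns matching the token rules described above:
IDENTIFIER = re.compile(r"[A-Za-z-][A-Za-z0-9-]*")   # alpha/hyphen start, alphanumerics and hyphens
INTEGER    = re.compile(r"[0-9]+")                   # starts numeric and proceeds in kind
QUOTED     = re.compile(r"'(?:\\.|[^'\\])*'", re.S)  # single quotes, backslash escapes, multi-line

print(bool(IDENTIFIER.fullmatch("FirstName")))   # -> True
print(bool(INTEGER.fullmatch("1024")))           # -> True
print(bool(QUOTED.fullmatch(r"'it\'s fine'")))   # -> True
```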

To parse the above MiniLISP fragment in C#:

const string code = @"{prefix st [StudentID FirstName LastName]}";
var lex = new Lexer(new StringReader(code));
var prs = new Parser(lex);
var expr = prs.ParseExpr();

This code will give us an SExpr instance representing either the s-expression that was parsed or a parser error. Let’s try to evaluate the s-expression to get a result back:

var ev = new Evaluator();
var result = ev.Eval(expr);

This throws an exception at runtime, "Undefined function 'prefix'", indicating that we did not define the “prefix” function. Let’s fix that by defining our “prefix” function with the evaluator:

var ev = new Evaluator()
{
    { "prefix", (Evaluator v, InvocationExpr e) =>
    {
        if (e.Parameters.Length != 2) throw new ArgumentException("prefix requires 2 parameters");

        // Evaluate parameters:
        var prefix = v.EvalExpecting<string>(e.Parameters[0]);
        var list = v.EvalExpecting<object[]>(e.Parameters[1]);

        var sb = new StringBuilder();
        for (int i = 0; i < list.Length; ++i)
        {
            if (list[i].GetType() != typeof(string)) throw new ArgumentException(String.Format("list item {0} must evaluate to a string", i + 1));
            sb.AppendFormat("[{0}].[{1}] AS [{0}_{1}]", prefix, (string)list[i]);
            if (i < list.Length - 1) sb.Append(", ");
        }
        return sb.ToString();
    } }
};

var result = ev.EvalExpecting<string>(expr);

You can see how easily we’re able to define MiniLISP-invokable functions from C# code. The Evaluator class implements the IEnumerable interface and the Add method required by C# to give us the collection initializer syntactic sugar. Each object to add is a pair of the function name and the C# delegate which is called when the evaluator invokes the function by that name. The s-expression’s function parameters are only evaluated by the delegate on demand.

This “prefix” function we defined expects 2 parameters: the first a string, and the second a list (typed as object[]). We evaluate both of those parameter s-expressions using the Evaluator instance named v passed into the function.

Then, for every item in the list, we make sure it is a string typed value, then append it to our StringBuilder and format it with the prefix appropriately, also inserting commas for separators.

Our resulting output for the example code above is:

[st].[StudentID] AS [st_StudentID], [st].[FirstName] AS [st_FirstName], [st].[LastName] AS [st_LastName]

This should be perfect for inclusion in a SQL query.

const string query = @"SELECT {prefix st [StudentID FirstName LastName]} FROM Student st WHERE st.StudentID = @studentID";

But parsing out that embedded MiniLISP code from the rest of the SQL syntax is left as an exercise for next time.

Thoughts on Go 1.1

I’d like to share a few thoughts I have about the Go programming language after implementing my very first and currently only project in it. This may be a bit premature since I don’t have much experience with it, so if you have some advice to give or some justifications to make then please comment back. I’m always eager to learn new things!

For future readers, it should be known that at the time of this writing (2013-05-22), Go 1.1 was just recently released, so all of this observation is specific to that version and not to any newer version that obviously doesn’t exist yet.

Fair warning: there are some strong opinions expressed here. I make no apology for having strong opinions, but perhaps the tone in which those opinions are expressed might be offensive and I will preemptively apologize for that. It’s hard for me to decouple the passion from the tone.

Language features:

First off, let’s address the biggest elephants in the room:

  1. usage of nil instead of the much more common null to represent the lack of a value for a reference type
  2. non-nullable strings
  3. import with an unused package is a compiler error
  4. identifier case determines package exposure

I don’t think that nil and null, in terms of reference values (or the absence of such), are two different concepts here, so there’s really no reason that I can see for going with nil over null. It seems contrarian in nature. I’ll just dispense with the nillity and say null from now on, and you’ll know what I mean.

Strings in the Go language act like reference types, and since all other reference types are nullable, why not strings? The idea that the empty string is equivalent to the null string is utter nonsense. Anyone who preaches or practices this has no appreciation for the real expressive value of nullability or optionality. Having a way to represent a missing value as opposed to an empty value (or zero value) is a good thing.

Now, if strings were non-nullable AND there were a more general optionality feature of the type system to make any non-nullable type into a nullable one, THEN that would be nice. In that case, the nullability of a type would be decoupled from the type itself and I would agree then that string should be non-nullable, like every other basic type should be. I’ve yet to see this kind of clean type system design in the family of curly-brace languages. An example syntax off the top of my head would be string? (nullable string) vs. string (non-nullable default string) and int? vs. int and bool? vs. bool, etc. You see where I’m going.

The most popular complaint that I’ve seen is that all imported packages must be used or you get a compiler error. This compiler error is just downright stupid. I see the intention, and I can kinda get why this was done. But the developers chose to stick to their guns and suggest workarounds for the obvious deficiency, and this is where things get worse. The suggested workaround is to define a dummy variable in your code using some exported member of the package. This workaround is a worse code smell than the original problem of having an “unclean” import section! What were they thinking?! Nonsense. Give me a compiler option to turn that stupidity off at the very least. I should be the one who decides whether an import list should be exact or not, not my compiler nor its over-zealous authors. We’ll revisit this a little bit later in a dumb little narrative.

Riding on the package import error’s heels is the requirement that public members of packages must start with an uppercase character. Character case should not decide such an important, and somewhat volatile, aspect of a package’s members. During development you might start out with everything private and then wish to expose things later, or even vice versa. Having to change the exposure of a package member means renaming every instance of its usage. What a needless pain. It also makes the export list of the package less discoverable. An export clause at the top of the package file would do fine and serve as better documentation.

There are other issues with forced character casing that arise in marshalling of data to JSON and XML, for instance. Granted there are “tags” that one can apply to struct members in order to provide marshalling hints but the simple fact that you can’t cleanly represent your struct members as close to how you wish to represent the marshalled data is a shame.

Now that the big elephants are out of the way, the rest of the language is more or less competent. The only other major complaint at this point would be the lack of generics. You can’t really cleanly bolt generics onto an already-released language. C# and Java both learned that lesson the hard way. It really has to be baked in from the start. That is, of course, unless you want to just cut a swath of breaking changes in with version 2.0 of your language to get generics in. I guess it depends on the boldness of the language development team. I personally would be fine with breaking changes if they introduced a much more powerful feature that took out a lot of warts and inconsistencies.

There is a bit of silliness that arises from the consequences of how semicolons are elided at the lexer level. For instance, if you separate out a method call expression onto multiple lines where each line is a parameter expression terminated by a comma, then the last parameter line must also terminate with a comma even if the very last line contains the closing paren of the method call expression. Perhaps an example will help:

    result := doSomething(
        param1,
        param2,  // <- this comma is **required**
    )

This isn’t a huge deal, but it does sorta make things look messy. Now, I’m all for acceptable usage of extra trailing commas in things like list initializers because they’re useful there, but for a standard method call expression that doesn’t have a variable number of parameters it’s kind of misleading. Your eye parses the last param line expecting another one and gets misdirected to the ending paren unexpectedly. Where’d the last-last param go? Oh, there isn’t one? Hm, okay. Weird.

Don’t forget that this extra comma is only required IF you format your code in this style. Obvious response is “well don’t format it that way”. My obvious response to that would be “Screw you. I’ll format my code how I think my code should be formatted and how I want to read it. Your idiotic lexer hacks to elide semicolons are getting in my way.” After coding for 20+ years with semicolons I have no objections to them and it’s just second nature at this point to type them in anyway.

(Side-note: Yes I’m only 30 years old and yes I’ve been coding for 20+ years since I was 8 years old. Deal with it.)

Go lacks a native enum type. Its replacement is the somewhat less obvious combination of a type declaration with a const section that describes a series of constant values outside the namespace of that new named type that should act as the enum’s type name. Here’s an example:

type sortBy int

const (
    sortByName sortBy = iota
    sortByDate
)

All that code just to effectively create an enum named sortBy that would’ve been this brief in C# or Java or C++:

enum sortBy { sortByName, sortByDate }

Of course we could make both of those even more brief, but the comparison here is fair I think. The Go version is needlessly more wordy for this most common of cases. Granted, I like the iota concept. That’s really cool, but there’s no reason that we can’t get iota into a native enum type in Go. Furthermore, the lack of the namespace for the enum members means that they end up at your package level with pseudo-namespace identifiers which makes things get a bit wordy. At that point you might as well just go back to writing C code with ENUMNAME_MACROS_LIKE_THIS to define enum members.

There’s the horrid syntax of map[K]V. This just makes my eyes bleed, but given the present lack of generics and the inability to design anything less ugly I guess I’ll deal with it. I just can’t bring myself to type that in here again, so let’s just move on.

Why is len a built-in global function and not a built-in method on slice/array types? len(slice) could just as easily be slice.Length() but it’s not. Granted, my syntax is longer, but is obviously more consistent in appearance with other method calls.

I do like Go’s slice support, but I think they didn’t take it far enough. They should’ve taken a leaf from Python’s book and implemented negative end values to denote positions from the end of the slice instead of having to compute that offset yourself. The D programming language almost got there with its $ token to represent the length of the slice e.g. a[0 .. $ - 1], but I think I’ll give the bronze to Python here for a[0:-1]. Go has neither, and forces you to a[0 : len(a) - 1].

The simpleton will say, “but what’s wrong with that?” And I will reply, “Fine, then try this: package.GetSomething(lots of parameters here)[0 : len(package.GetSomething(lots of parameters here)) - 4].” Did you get lost? Did you recompute something there that you shouldn’t have? Sure, you can pull it out to a separate variable on the line above and refactor the entire expression you just cooked up. Or you could just say package.GetSomething(lots of parameters here)[0 : -4] and be done.

Now if you’re a Go expert and you know something that I don’t about this, then it’s not in the (rather terse) language specs. I checked.


I think the most confusing part of the language is that interface implementation is entirely implicit and not discoverable at all. At first I thought this would be kind of cool, but unless you’re intimately familiar with all implementation details of all packages, you’re never going to know what interfaces a given type implements. This makes using the standard library a nightmare.

Okay, this method wants a Reader … do I have a Reader here? What is that? Oh geez, now I have to look at the type the library exposed to me to check if it even implements that interface… Of course it doesn’t state that obviously anywhere, so I have to read their source code or glean the fact by glancing at ALL their exported methods for ALL their types. If my human-eye parser is off by a token or two then whoops! I guessed wrong. Oh, that interface accepts a POINTER to that type but not a copy of it.

All this is fine, of course, but Go(d) forbid you have a dirty import list! THE HORROR! How could you not know that you don’t need that time package despite the fact that the os.FileInfo has a ModTime() that gives you back a time.Time that may or may not require you to use the format string constant from the time package!? If you don’t need that format string then you don’t need the time package and you’re a bad developer for importing it as a precaution. Oh wait, now you do need that format string constant? Well, you should’ve imported that time package! What’s wrong with you?

Let’s not forget about the fact that interface{} is the preferred way to represent the any type. Which makes me wonder… WHY NOT JUST ALIAS IT AS any AND BE DONE WITH IT? I don’t want to type interface{} everywhere when I could just as easily type any. Save the pinkies!

I do understand why that is done, and it is pretty cool that the language lets you embed an unnamed type declaration where a type is required (unless that’s false, which makes this whole justification moot), but why not just alias that awful syntax to something much simpler and more meaningful? The fact that interface{} is the catch-all interface is cute and all, but I don’t think we need to encode that fact directly in its representation throughout all code.
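All I'm asking for is one line, sketched here with Go's type-alias form (a plain named interface type would mostly do the job too):

```go
package main

import "fmt"

// One line, and the awful syntax disappears from every signature.
type any = interface{}

func describe(v any) string {
	return fmt.Sprintf("%T: %v", v, v)
}

func main() {
	fmt.Println(describe(42))
	fmt.Println(describe("hello"))
}
```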

Standard Library

The terminology present in the standard library is just foreign and awkward. Let’s take a few examples:

html.EscapeString. Escape? No, we’re ENCODING HTML here, not escaping. HTML has its own encoding. It is not a string literal to have certain characters escaped with escape characters, like a "C \"string\" does with the \\ backslash escape char". HTML is a different language, not an escaped string. Point made? Okay, moving on.

net.Dial. Dial? I haven’t heard “dial” in serious use since the good old days of dialing into BBSes with my 57.6k baud modem (if I was even lucky enough to get that baud rate). “Hello, operator? Can you dial a TCP address for me? My fingers are too fat to mash the keypad with.” Nowadays we just “Connect” to things. Try to keep up.

rune for characters? What? No. No. No. No no no. Why not char LIKE EVERY OTHER LANGUAGE ON THE PLANET? What new value does the term “rune” bring to the table other than to just be obscurantist and contrarian like with your usage of nil? My keyboard here does not carve runes into stone tablets for archaeologists to unearth and decipher 2,000 years from now. My keyboard is for typing characters. Let’s get with the times here.

Then there’s the complete lack of support for null strings in the JSON encoder. Really? You can’t call that a JSON encoder in my book. This means that you have to design your JSON-friendly structs to have interface{} where you really just mean a string that could sometimes be null? Awful.

Pile on top of that the idiotic uppercase-letter-means-public decision and you get this rule: “The json package only accesses the exported fields of struct types (those that begin with an uppercase letter). Therefore only the exported fields of a struct will be present in the JSON output.” (emphasis added). That’s quoted right from the JSON documentation.


Let me point out some of the features that I really enjoy so that we don’t end on a completely negative note here.

First, the runtime is extremely solid. I haven’t had my HTTP server process that I wrote in Go go down at all, even when it’s faced with boneheaded developer mistakes. I think that says a lot. Good on you guys for a rock solid implementation.

The concurrency model is solid. I don’t have much experience with channels yet, but that’s definitely the right direction to go. I am getting the benefits of the concurrency model with http.Serve and friends without even having to explicitly deal with it in my code at all. I like that. Keep it up.

The multi-valued-return functions are awesome and reduce a lot of unnecessary control flow boilerplate. Combined with the pragmatic if statement, there’s definitely power there, e.g. if v, err := pkg.GetSomething(); err == nil { yay! }.

Raw string literals are just great. No more really needs to be said here. I like that the back-tick character (not rune) was used for these strings. C# did well enough with @"raw string literals" but the double quote is such a common character that you have to double-up on them to escape them, e.g. @"""". I definitely prefer `back-ticks`. I’m much less likely to require a literal back-tick character in my strings than a double quote character.
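A quick illustration of why the back-tick wins (the path here is just an example):

```go
package main

import "fmt"

func main() {
	// No escaping at all inside back-ticks: backslashes and double
	// quotes come through literally.
	path := `C:\Program Files\"id Software"\doom3.exe`
	fmt.Println(path)
	// The C# equivalent would need doubled quotes:
	// @"C:\Program Files\""id Software""\doom3.exe"
}
```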

Implicit typing is wonderful with the := operator.

Multi-valued assignment is simply awesome, e.g. a, b = b, a to implement a simple a, b swap operation. I need to take more advantage of that in my code.

The lack of required parens for the if statement is great but comes at a high cost of requiring that the statement body be surrounded in curly-braces in all cases. This restriction is a bit annoying for simple for-loop if (filterout) continue; cases.

Grouping function parameters by type is awesome, e.g. func Less(i, j int).

The name-then-type order, contrary to the more common type-then-name order, is a welcome change, e.g. i int vs. int i.

I do agree with Go’s explicit error handling strategy via multi-return values and if statements. I’m mostly against exceptions and their ubiquitous use of handling all error cases. From a reliability standpoint, explicit error handling is far easier to deal with than a virtually unbounded set of exceptions that I can’t easily reason about.


Once you get past the warts and big issues and find the workarounds, you can get really productive in this little language. I am mostly impressed at this point and want to see bigger and better things. So far, it’s the best option I have for writing reliable network services, HTTP or otherwise, and having them execute efficiently.

Home Recording Advice

Here’s a bit of home recording advice I just gave to a fellow YouTuber. If you don’t know, I have a YouTube channel where I post home-recorded guitar cover videos here. And if you do know, good for you buddy. Anyways, I thought this was a valuable collection of knowledge I’ve gained about the subject, summarized fairly well. The question posed was about where to spend your money to get the most bang for your buck, so to speak.

Obviously if you want quality you’ll need to spend a bit of cash, but there are places where you can make acceptable trade-offs. Here’s where you ought to spend your money best, in order of importance:

  1. Guitar instrument, guitar strings, and pick (aka plectrum)
  2. Guitar amplifier (if you don’t like the sound coming out of your amplifier, you won’t like what it sounds like on the recording)
  3. Instrument cables (avoid crackly cabling with poor connectors; Planet Waves is generally good)
  4. Studio monitors (I have a Yamaha HS80M pair and an HS10W subwoofer; the subwoofer is probably optional when starting out)
  5. Recording room treatment (a couple of Auralex foam pads stuck to the wall in strategic locations does wonders)
  6. Microphones ($80 – $100 should suit you fine here, just get a Shure SM57; they’re standard workhorses and sound great on guitar speaker cabinets)
  7. Microphone XLR cables
  8. Computer audio interface (I use Roland’s OCTA-CAPTURE ($800) but there are cheaper variants on that same unit with fewer channels. Check out the DUO-CAPTURE EX)

Disclaimer: This is just my list and there’s nothing inherently right or wrong about it. It’s just a representation of what value I’ve learned to place on things in the chain of everything between your fingers executing a musical performance all the way to the final captured performance in your DAW suitable for mixing with.

These investments will all enable you to capture the sound coming out of your guitar amplifier into some computer software, a digital audio workstation (DAW). I’d recommend Cakewalk Sonar X2 since that’s what I use and am most familiar with.

What seems to matter the most to the quality of the final mix is actually what you do in the mixing and mastering phases. You can completely ruin a good recording with bad mixing. I know; I’ve done it too many times. Conversely, you can’t make a good mix with a bad recording. “Get it right at the source” should be your mantra, where the source is any one of: your fingers on the guitar, the guitar itself, the amplifier, the speaker, the room the speaker is in, and the microphone at the speaker, including all cabling involved. I guess “the source” is considered to be anything in the physical realm that is not a part of your DAW software that leads to producing the digital track.

I also recommend dialing the amplifier gain down quite a bit while recording. Most great recorded tones are recorded with significantly less gain than you’d expect. The real trick to getting a huge guitar sound is in layering lots of lower gain sounds on top of and next to each other in the mix. Also roll off a lot of low end, like below 100Hz. That’ll clear up the low end quite a bit to let you have some thundering bass and kick drum down there. Otherwise it’ll get all muddied up and you’ll be sad.

Finally, for when you get really into this sort of thing, I’d recommend picking up a re-amp unit. This unit allows you to record the guitar performance first and play it back through an amplifier to be recorded later, when you dial in all your settings just right and like what you hear. This is what the pros do and I’ve only just started doing it myself.

One final tidbit is perhaps Windows OS specific, and that is regarding driver modes for how your DAW connects to your audio interface. In Windows, with a high quality audio interface, you’re likely to have the option for using ASIO which is an extremely low-latency driver mode that lets your DAW talk directly to the audio interface without going through the Windows kernel as an intermediary. This offers huge benefits in terms of latency and CPU utilization in that the system no longer has to do a lot of extra copying and processing just to get your audio data to where it has to get to anyway.

You only want to use the true ASIO offering from your audio interface driver. Don’t use the ASIO4ALL driver because that one’s a big phony. It won’t give you the true low latency of real ASIO that the manufacturer’s driver would. Now, ASIO4ALL is useful as a compatibility layer if the software you’re using only supports ASIO, but don’t expect it to be low latency because it simply cannot be, by design.

Custom Directory Listing with Nginx and Go

For the last few years, I’ve been maintaining a large repository of files and folders on my website here using lighttpd‘s default directory index generator. The generator is fine to get the job done, but offers no extra features. I just recently switched to nginx and its directory index generator (the autoindex directive) is a bit worse than lighttpd‘s. This approach worked fine for a while but I really wanted the option to have a custom file ordering for certain directories, e.g. to order by date descending so newer files would automatically float to the top of the file list. So I wrote an HTTP server in Go to do just that, and a little more!

This project was my first real foray into the Go programming language (which I have a few choice opinions about but I’ll express those in another post later). For the most part, the experience has been pleasant, save for a few language warts. The Go runtime is rock solid and my HTTP server has not gone down at all. I keep it running with upstart on my Ubuntu server. If you’re not managing your daemons with upstart, you definitely should start. It’s far easier than the horrible copy/paste/modify workflow of those awful init.d scripts.

What I do is have nginx act as a reverse proxy for /ftp/ requests to my Go HTTP server which is just listening on a localhost port. I intend to change this over to use local Unix sockets for more security and to save my sanity in dealing with TCP port numbers and remembering which one goes where.
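The nginx side of that is just a proxy_pass block, roughly like this (a sketch; the port number is only an example):

```
location /ftp/ {
    # Forward directory-listing requests to the Go server on localhost.
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```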

The main features of this directory listing generator are custom ordering of files per directory and slightly advanced symlink support.

To specify a custom ordering for a directory, just create a file named .index-sort in the directory and have its contents be a single line specifying the sort mode. The available sort modes are documented on the GitHub project’s README. To override the default sort order, you can specify the ?sort=mode query string parameter in the request.

The advanced symlink support helps to translate filesystem symlinks into HTTP 302 redirects. This works for both files and directories. If the symlink target path is within the filesystem jail being served up, the request will be served, otherwise a 400 Bad Request error will be presented.

For example, if you have a set of versions of some file and a symlink that always points to the latest version, the directory listing will 302 redirect from the symlink request to the actual target filename that is the specific version. In other words, a request to file-latest.kind might redirect to file-v1.kind. This way, the downloaded filename will represent the symlink target file-v1.kind and you can be sure which specific file your users have downloaded, instead of the file being served up as file-latest.kind and you having no clue which one that represented at the time the user downloaded the file.

I’m really pleased with this setup and it took me only a few hours to code up and test. Go does allow one to be productive right off the bat. Best of all, there’s no funny business about threading, concurrency, or reliability like you get with other things like Ruby or Python (mostly the concurrency issue here). There’s just fast, compiled, statically typed code here; just the way I like it. Of course Go isn’t perfect, but we’ll get into that later.

Feel free to use this process for hosting your own directory listings. I look forward to the pull requests!

Goodbye lighttpd; hello nginx

It took me a while (collectively ~8 hours), but I’ve finally replaced lighttpd with nginx on this server!

nginx is already using vastly fewer resources than lighttpd ever did on its best day. I’m happy about that considering the limited resources this server has (MemTotal: 1008568 kB). I’m also pleased with the way nginx handles basic things in a zero downtime manner, e.g. reloading configuration files. I hated that I always had to completely kill lighttpd and restart it just to reload the configuration file for a minor change. nginx reloads the configuration file transactionally and will rollback if issues are found. That alone is worth switching for if you’re on the fence.

Getting nginx to match my existing lighttpd configuration was a bit of a challenge but I got it all sorted out in the end. Some issues I faced were in getting PHP requests through to php-fpm. Those issues were mostly due to nginx‘s quirky root and alias directive behavior, especially regarding the request handling cycle and nested location tags and all the internal redirections and regexes required. (I HATE regexes.)

I settled on a very simple albeit repetitive configuration. There’s no global root directive. All the main location directives are independent of one another, which works best for my setup since I have WordPress as the root / with other sites “grafted” on from there. The PHP-specific location directives are copy/pasted and nested into each main location directive as needed.

The trickiest part was getting PHP requests with PATH_INFO (e.g. /index.php/2013/05/article-name) to work. I found the default example in the nginx documentation for fastcgi_split_path_info and it works great.

For those who are curious and just want to see the nginx.conf details, here you are!

server {

    location / {
        root   /var/www-bittwiddlers/wordpress;
        index  index.php;

        location ~ ^.+\.php {
            try_files $uri /index.php;

            fastcgi_split_path_info ^(.+\.php)(/?.+)$;
            fastcgi_pass   unix:/tmp/php5-fpm.sock;
            fastcgi_index  index.php;
            include        fastcgi_params;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
}
There are a few other main location directives, but they’re irrelevant to the WordPress setup so I’ve omitted them here.

My fastcgi_params file is almost exactly the default file that comes with nginx, except the SCRIPT_FILENAME line is commented out. I’ve found that the best way is to specify this param per each location directive. $document_root does not work when you only have an alias directive and no root directive. It will only have a value if a root directive exists.

For my configuration I’ve abandoned aliases entirely because of the PHP configuration issues they caused. This is most unfortunate because it should just be a simple thing to set up, but it is not.

Another minor issue that bit me was configuring HTTP Basic Authentication. lighttpd and nginx handle this differently regarding the passwd files that store the usernames/passwords. nginx is a little more obsecure* (a portmanteau of obscure and secure, implying security via obscurity) than lighttpd in that it requires that passwords in the htpasswd file be “encrypted”, so you have to use the htpasswd tool to create those entries. lighttpd is a little more lax in that it doesn’t care at all.

What also irked me is that nginx has no equivalent to lighttpd‘s "require" => "user=username" feature. I was using that feature in lighttpd to “secure” some parts of the site down to specific users while using one common htpasswd file. For nginx I had to separate the htpasswd file into multiple files, one for each section. This was a little annoying but not really a big deal.

What am I doing “securing” things with HTTP Basic Auth, you ask? I’m taking the most primitive security measures to protect access to those things which deserve only such primitive security measures. In other words, the measure is consistent with the value I place on the secured data. :)

Doom Classic with 24bpp lighting

It’s been a while since I pulled an all-night coding binge, but last night that counter was reset to zero. The fruits of that labor are a modestly improved look to the Doom Classic modes under the Doom 3 BFG edition which was recently open-sourced.

Here’s a before/after screenshot pair demonstrating the improved colors for lighting (click for full view):

It takes a keen eye to spot some differences, but the effect should be apparent overall while playing the game for an extended period of time, especially while visiting darker areas in-game. Take a close look at the entryway on the left side and also at the brighter brown wall on the right side.

The Doom Classic modes under BFG are simply ports of the original Doom engine, complete with the old software renderer. It seems they patched up the renderer to scale the original resolution of 320×200 up by a factor of 3x to 960×600. The main game engine (doom3bfg.exe) simply takes the 8bpp palettized framebuffer rendered each frame from the DoomClassic library and updates a texture with its contents, to be presented to the user in the main game window.

While I was perusing the code, I found, by happenstance, this typedef byte lighttable_t; line with these comments above it:

// This could be wider for >8 bit display.
// Indeed, true color support is possible
// precalculating 24bpp lightmap/colormap LUT.
// from darkening PLAYPAL to all black.
// Could even use more than 32 levels.
typedef byte lighttable_t;

This looks like a conversation between developers via code comments (with my own edits to fix spelling), but the way they did the import to git caused all authoring history to be lost, probably on purpose, so we don’t know who’s talking to whom here.

Regardless, what they’re saying here is essentially that lighttable_t, which is used to store palette index lookups based on light levels, could be made to be larger (e.g. 32 bits) to support true color (24bpp with no alpha), with a few additional code changes to generate said light maps and look up the raw RGB colors instead.

The way the engine works is that there is a 256 color palette stored in the main IWAD file in the PLAYPAL lump. All textures and sprites in the game data refer to colors in this main palette. However, there is lighting to be taken into consideration. The engine has to darken the colors referred to in textures and sprites according to the surrounding light level and z-distance. This is done with a light map, from the COLORMAP lump, which is simply an optimized palette lookup table for 32 distinct light levels. Each light level has a 256-entry lookup table which tells it which color from the 256 color palette best matches the original color darkened to the light level. Of course it won’t be perfect since there are only 256 colors able to be displayed on the screen at one time, so you’ll get some color shifting effects and other quantization effects here. But overall, the result is rather impressive for 1994-era technology!

What I’ve done is (mostly) removed the need for the COLORMAP lump and gone straight to calculating the raw RGB colors from the PLAYPAL palette based on the light levels. This way you get direct 24bpp color from the engine. Of course, our colors are still limited to what’s available in the original palette so the source material hasn’t changed, only our rendering is improved.

The light levels available are from 0 to NUMCOLORMAPS-1, where NUMCOLORMAPS is 32. According to some comments in the code, light level 0 is full brightness and level 31 is full darkness. I was able to easily increase NUMCOLORMAPS from 32 up to 64, giving more distinct colors and a smoother lighting look. I was not able to increase NUMLIGHTLEVELS though; there’s something crazy going on with the code related to that constant.

The part that made this all (relatively) easy was that the neo/framework/common_frame.cpp code which projects the 8bpp screen to the 32bpp texture is very simple and does the palette lookup itself. I left this code mostly the same, except I changed the screens array to store larger integers instead of bytes.

I extended the XColorMap array from 256 entries to 256 * NUMCOLORMAPS entries which essentially makes it a larger palette of 16,384 colors instead of just 256 colors. I modified the I_SetPalette method to precalculate all the 16,384 colors based on the original 256 colors.

The rest of the work involved making sure that all the rendering code could handle a wider screen element integer size than byte. There were lots of hard-coded assumptions that the element size would be a byte, apparent in several memcpy and memset calls.

I did encounter some problems that didn’t allow me to fully skip loading the COLORMAP lump.

The primary problem was with the fuzz effect for spectres and your gun (and also other invisible players in network mode). The problem is that the effect uses a specific colormap (#6) from the COLORMAP lump to “dither” the onscreen colors, which produces an effect that isn’t easy to reproduce with a simple calculation. After failing twice or thrice to reproduce this effect, I finally resorted to just bringing back the original COLORMAP and doing a little bit twiddling on the colormapindex_t values read from the screen to keep the light levels consistent.

The other problem was the inverted color effect (only used when the player picks up an invulnerability sphere). I just had to import the colormap at index 32 from the lump to get this to work and also update the INVERSECOLORMAP to be NUMCOLORMAPS since it’s now 64 instead of 32. Just a little table translation there.

There appear to be two extra colormaps in the lump that I’ve not accounted for so I’m just ignoring them. The game plays and looks great now. Admittedly, the red- and green-tint effects don’t look as good as they used to for some reason. I’ll have to check that out. The effect comes across, but it gets too dark further in the distance.

How I fixed the crash in Doom 3 BFG Edition

Merely 10 hours ago, id Software released the GPL source code to Doom 3 BFG Edition. Unfortunately, when I built the game with VS2012 Premium, the Doom Classic modes crash (both Doom 1 and 2) instantly. Here is the small tale of how I fixed that bug.

The obvious thing to do was to fire up the game in Debug mode and see how far I get. The debugger (under default configuration) wasn’t giving me much when the code bombed out due to an unhandled Access Violation Win32 exception. The key was to force the debugger to break when the access violation exception occurs in the first place rather than letting it pass unhandled. VS2012 gives you a check-box labeled “Break when this exception type is thrown” when the unhandled exception is caught. Turn this on and restart the game and try to start up Doom 1 or 2 from the main menu.

Now we get a first-chance exception occurring in r_things.cpp line 196:

intname = *(int *)namelist[i];

A quick check to the Locals debugger window shows that i is 138. The access violation exception is thrown by the OS when the process tries to read memory at namelist[138]. Let’s try reading from namelist[137] using the Watch window to see if index 137 is safe. Okay, everything looks fine there at index 137. It’s just at 138 where it bombs out. Let’s remember this number.

Now let’s step backwards a bit and try to find our place in the code. Where did this namelist pointer originate from? Jumping back to P_Init in the call stack shows us that P_InitSprites was called with sprnames and P_InitSprites hands that off to P_InitSpriteDefs unchanged. Let’s take a look at this sprnames in info.cpp:

const char * const sprnames[NUMSPRITES] = { "TROO","SHTG", /* ...snip... */ "TLMP","TLP2" };

That’s it? No NULL terminator there? And there’s this constant array size specifier there: NUMSPRITES. Visual Studio tells me that its value is 138. That sounds familiar…

Let’s go back and take a look at that function where our first access violation occurred to see why it’s trying to read past the bounds of the hard-coded array (whose length is 138 elements).

We can see that the size of namelist (assigned to ::g->numsprites) is calculated to be longer than it should because there is no NULL terminator present. That causes the loop below it to try to access memory beyond what’s allowed. Here’s the simple counting code:

// line 173 in p_thing.cpp:
check = namelist;
while (*check != NULL)
    check++;
::g->numsprites = check - namelist;

Perhaps the original developer assumed that the const memory section would be zeroed out and the counting while-loop would just luckily run into an extra zero that just so happened to be found just past the bounds of the array? I can’t see why this is a safe assumption to make under any context whatsoever. Perhaps a random happy coincidence of memory layout and padding made this work in VS2010?

Based on this analysis, it seems obvious to me that these methods should be passing around the array’s known count (NUMSPRITES) instead of trying to calculate it dynamically by scanning for NULL terminators. A quick search through the code shows me that these functions are only used once from P_Init so this should be a safe change to make.

This particular instance makes me wonder what other bugs of this class are lying around elsewhere in the code. I think I got extremely lucky here and could pinpoint a root cause only because the data was hard-coded.

I’m going out on a limb here, but it seems that VS2012 added some extra protections to make sure that access violations were thrown for access beyond the bounds of statically-allocated memory regions, which makes me doubly lucky to find the bug. I’m not sure exactly how they’ve done that, not being too familiar with the Windows memory management APIs, but I’m sure there are all sorts of caveats and gotchas with protecting fixed-size memory regions (page alignment issues, etc.). I wonder if this bug would reproduce in VS2010, or any other compiler for that matter…

The pull request I’ve submitted just appends the NULL terminator to the hard-coded array. From here, the code works great and Doom 1 and 2 start up just fine.