About James Dunne

Senior Software Engineer currently employed by VivaKi.

Stop using second-class languages and build tools to develop your web application

Please, stop using all this HTML middleware to develop your web application. CoffeeScript, SASS, LESS, HAML, Middleman, Bower, Grunt, etc. Stop it.

No, it doesn’t make things easier for the developer. Sure, it might make things easier for YOU, one developer, but not for all developers or future maintainers. I don’t want to learn yet another silly combination of second-class languages just to figure out what your application is doing. Just write vanilla JavaScript. Just write vanilla CSS. Just write vanilla HTML.

Stop with the middleware nonsense. It’s not necessary and it actively harms the maintainability of the application, not to mention that it certainly puts a shelf life on the application as a whole. How long will these second-class languages be around? Will they be supported in the future? Are they going to completely change their inner workings and break your application in two years?

There are several very valid and very strong reasons against using all this middleware and these second-class languages.

Development Environment

One big strike against using middleware and second-class languages is that the development environment becomes virtually impossible to reproducibly install and run across all platforms.

Installing just one tool like CoffeeScript means that you need to specify exactly which version of CoffeeScript worked for you on the platform you developed on, plus the specific version of node.js required to execute that CoffeeScript version. The standard package managers for other platforms might not carry that version yet, leading you into a yak-shaving exercise: go find the proper version for your platform, maybe compile it, install it, and manage it in a virtual environment of some sort so it doesn’t conflict with a pre-existing installation of a different version required by another project you might be working on.

As for CoffeeScript itself, I feel it’s on its way out in light of recent advancements in JavaScript, starting with ECMAScript 6. We don’t need some silly syntax translator. We can write JavaScript just fine now. Sure, it’s more typing for you, but code is written to be read far more often than it is written.
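To make that concrete, the conveniences CoffeeScript is usually praised for (terse functions, classes, default arguments, string interpolation) are plain ES6 now. A quick sketch, with the CoffeeScript equivalents shown as comments:

```javascript
// CoffeeScript:  square = (x) -> x * x
const square = (x) => x * x;

// CoffeeScript:  class Point
//                  constructor: (@x, @y) ->
//                  dist: -> Math.sqrt(@x * @x + @y * @y)
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  dist() { return Math.sqrt(this.x * this.x + this.y * this.y); }
}

// CoffeeScript:  greet = (name = "world") -> "hello #{name}"
const greet = (name = "world") => `hello ${name}`;

console.log(square(4));              // 16
console.log(new Point(3, 4).dist()); // 5
console.log(greet());                // hello world
```

Nothing to compile, nothing to install, and any future maintainer can read it.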

Maybe you decide to go with some middleware management tool like Middleman. That will require a ruby installation of a very specific version so that all the features you rely on will work, and of course requires that you know a bit of ruby to work with the config.rb configuration file. Well, I’ve never touched anything ruby before, and I really don’t care for it, so I’ve never bothered to learn it. Granted, it’s not that hard to pick up and follow some patterns to modify some stuff in config.rb, but if I have to dig deep and really change the guts of how the application wants to be configured, I’m at a loss for the time and effort to go do that.

Let’s say I have no time to dedicate towards learning CoffeeScript, or HAML, or SCSS, and I just want to statically recompile all these second-class languages into their first-class counterparts and ditch the second-class language code entirely. Can I do that with Middleman? Yes and no. I can certainly get all the output into one gigantic messy ball of JavaScript in a single all.js file. Is that maintainable going forward were I to completely drop the CoffeeScript source? Nope. I’d have to find a way to replicate Middleman’s dependency-reordering logic, compile all the CoffeeScript files individually, and rewrite the HAML to generate the appropriate script inclusion tags. What a pain in the ass. So there’s really no time-feasible way to drop these damn second-class languages that I don’t care for.

Middleman also apparently wants to be run and handled entirely by Bundler, which is some ruby dependency manager if I understand it all correctly, which I probably don’t, and I don’t have enough time to begin to care about any of that. I have to learn what Bundler is and how it works just to execute Middleman. Should I need to worry about this? Certainly not.

Does it all work on Windows? Regrettably, no. Or at least not as far as I tried before giving up and doing it all inside a Linux VM. I tried setting it all up via MSYS2, my currently preferred Windows development “sub-”environment, and it all failed totally miserably. Does it work on OS X? Maybe. Does it work on Linux? Probably. Do I want to use a Linux desktop to develop? Personally, no.

If this were all done with vanilla HTML, JS, and CSS, I wouldn’t have any problems loading it into my preferred development environment, not to mention IDE. That raises another question… Will my IDE understand all the second-class languages being used here? Does it recognize HAML or SCSS or CoffeeScript? Maybe. Will it recognize them as first-class citizens with full code-completion support? There’s no guarantee. What is guaranteed is that pretty much any modern IDE WILL understand HTML, JS, and CSS. They pretty much have to if you’re using them for web development.

Learning curve

I touched on this a bit in the previous section, but I just don’t have the time to dedicate to learning all these second-class languages. They look completely foreign to me. My eyes are not trained to notice what is significant and what is insignificant in code written in these languages. I can’t just scan through an unfamiliar language and glean semantics from it. Maybe some seemingly inconsequential language sigil completely changes the expected behavior of the affected code and I’m unaware of that, e.g. maybe just adding a ‘@’ or ‘~’ sign here or there completely turns the code’s behavior on its head. Maybe they mean nothing and are accepted as part of identifiers in your language. Maybe it’s a mixture where ‘@’ means nothing but ‘~’ means something very important to the semantics. Do I care? Not right now.


The bottom line that irks me about all this is the sheer amount of waste generated as a side effect of installing all the build tools needed to take advantage of these second-class languages’ touted benefits over their first-class counterparts.

I need to install two extra languages (ruby, node) with their own runtimes that I don’t use on a daily basis JUST to run the build tools to recompile these second-class languages into their first-class counterparts so it all can be run by a browser. Then I need to install these tools into their respective runtimes with their own list of per-runtime dependencies.

Does Middleman install the nodejs runtime? I don’t have a clue. I might have two or three different installations of node.js or ruby or their gems or npm packages sitting around in various places on my system now and I wouldn’t have a clue where they are or how to invoke them.

I’ve had to install I don’t know how many ruby gems just to get Middleman off the ground. So many libraries and useless extra bits of code completely irrelevant to the final product. Most of those gems required a C++ compiler to build some native code for whatever reason. Was I supposed to know that ahead of time? Nobody told me. So I need a C++ compiler to compile some part of some random ruby gem that’s going to run for maybe 10 seconds as part of the build process, if it even gets used at all? No thanks. Oh, and even if I don’t actually need it, it’s still listed as a dependency, it still has to be compiled, and the gem manager will totally bomb if it can’t be compiled. Can I clean out the object files from the C++ compilation phase or will those just rot on my disk? What other cruft is ruby installing on my system that I never asked for?

What a complete waste of disk space and time and energy installing all these prerequisites. Not to mention the energy required to write all the documentation and list the step-by-step development environment setup procedure for the next poor sod who has to pick up this project and make a simple change. Do I even know the minimal step-by-step procedure required to set up the environment? Nope. I was a blind man groping through a dark alley, just trying to resolve one error at a time. Can I back up two steps and undo what I just did? Nope. Did I record my procedure? Of course not. I just wanted to get the damn thing off the ground to see what’s what.


You’ve just ballooned my development environment by 1,000,000 KB or so just to compile maybe 300 KB of web code. No, this isn’t easier for anyone involved. Stop doing it. Stop adding unnecessary dependencies to the Nth degree. Don’t make me install two extra big fat runtimes and who knows how many packages just to run your build tools because they may have allowed you to type quicker. Instead, learn your craft well, and understand it from the ground up. Do things in the most efficient way possible with the fewest total dependencies. Make the best use of the resources of everyone involved in using your work, not just your own.

IVO-CMS – Part 2 – The New

Any good system architecture is based on the concept of layering. A basic premise of layering is that one layer should not concern itself with the details of any other layer. With the proprietary CMS described in Part 1, my failure to respect that premise was the critical design and implementation flaw of the system: its implementation is obsessed with the revision control part. With IVO-CMS, I’ve designed the content management aspect to be ignorant of the revision control system. Think of it as a CMS wrapped in a revision control system. The entire CMS can still function with a non-versionable implementation of the revision control system.

This is possible because the meat of the revision control system is simply a file system that stores blobs. Virtually any system can be designed with such an organization mechanism. The contents of the blobs and their relation to one another are what the design of the CMS is concerned with.

We’re no longer limited by the relational schema of a database. Our data structures are simply serialized to and from blobs stored in the revision controlled file system.

IVO-CMS uses XML as the serialization format for its blobs. This is a natural choice because an HTML5 document can be serialized as XML. HTML is the primary output of a web content management system, so a clean ability to manage and output it must be central to the design of the system.

IVO-CMS does not define any traditional CMS concepts at its core. Things like “page”, “content”, “navigation”, etc. are never mentioned. At its core, IVO-CMS is simply an HTML renderer with an extensible content processing engine.

The most basic concept at play is the blob, for lack of a better term. A blob is the recipe, written in XML, for rendering an HTML document fragment or even a complete document. IVO-CMS’s blob maps directly onto IVO’s blob.

The content rendering engine for IVO-CMS simply starts up a streaming XML reader on a blob and copies the XML elements read directly to the output, with a defined path for handling custom elements.
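The post doesn’t include the engine’s actual code, but the copy-through-with-dispatch idea can be sketched in a few lines of JavaScript. This is only a sketch under my own assumptions: the blob is pre-parsed into plain objects rather than streamed, and the provider and blob names are hypothetical:

```javascript
// Sketch of the copy-through rendering loop. A node is either a text string
// or { tag, attrs, children }. Plain elements are copied to the output;
// elements whose tag starts with "cms-" are dispatched to a provider map.
function render(node, providers, out = []) {
  if (typeof node === "string") { out.push(node); return out; }
  if (node.tag.startsWith("cms-")) {
    const provider = providers[node.tag];
    if (!provider) throw new Error("no provider registered for " + node.tag);
    provider(node, (child) => render(child, providers, out));
    return out;
  }
  const attrs = Object.entries(node.attrs || {})
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  out.push(`<${node.tag}${attrs}>`);
  for (const child of node.children || []) render(child, providers, out);
  out.push(`</${node.tag}>`);
  return out;
}

// A trivial provider for a hypothetical cms-import, reading blobs from an
// in-memory store instead of a revision-controlled file system.
const blobs = {
  "/fragments/footer": { tag: "p", attrs: {}, children: ["(c) 2012"] },
};
const providers = {
  "cms-import": (node, emit) => emit(blobs[node.attrs.path]),
};

const page = {
  tag: "div", attrs: { class: "page" },
  children: [
    "Hello",
    { tag: "cms-import", attrs: { path: "/fragments/footer" }, children: [] },
  ],
};
console.log(render(page, providers).join(""));
// → <div class="page">Hello<p>(c) 2012</p></div>
```

The important property is that the renderer knows nothing about pages or navigation; everything interesting lives in the provider map.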

All custom processing elements of IVO-CMS start with ‘cms-‘. The most basic processing elements built-in are:

  • cms-import
  • cms-import-template
    • cms-template
    • cms-template-area
    • cms-area
  • cms-scheduled
  • cms-conditional
  • cms-link

Any XML element that starts with ‘cms-‘ is sent to a pluggable provider model to be parsed and handled.

Let’s start with cms-import. When a cms-import element is found in a blob, it should have the form <cms-import path="/absolute/path/to/another/blob" /> or <cms-import path="../relative/path/to/../../another/blob" />. Both absolute and relative paths are allowed to describe the location of the blob to import. The imported blob is sent through the content rendering engine and its output is directly injected into the output of the currently rendering blob. A relative path is resolved against the currently rendering blob’s absolute path.
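For instance, a blob that pulls in shared fragments might look like this (the blob paths here are hypothetical):

```xml
<!-- /pages/home: the currently rendering blob -->
<div class="home">
  <h1>Welcome</h1>
  <cms-import path="/fragments/sidebar" />
  <cms-import path="../fragments/footer" />
</div>
```

Since this blob lives at /pages/home, the relative path ../fragments/footer resolves to the same blob as /fragments/footer.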

An imported blob must be a valid document fragment with one or many root elements that are fully closed. In other words, it cannot contain any unclosed elements, which limits its usefulness in rendering partial HTML content. This is why cms-import-template was invented.

Think of cms-import-template as importing a template which has areas that can be overridden. This is analogous to the Page/Master Page concept of ASP.NET’s web forms. The page is the currently rendering blob and the master page is the imported template blob. Only certain blobs may be imported as templates – those that contain a single root element: cms-template. Unlike ASP.NET’s web forms, multiple templates may be imported into a single blob and templates may even import each other.

The cms-template blob may contain templateable areas with cms-template-area, uniquely identified with an ‘id’ attribute. The blob importing the template may override these template areas’ contents with a cms-area element and an ‘id’ attribute that matches the template. The order the cms-areas are defined in is important since all XML elements are processed in a streaming fashion and there is no back-filling of content.
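A sketch of how this might look follows. The exact nesting of cms-area under cms-import-template is my reading of the description, and the ‘path’ attribute on the import is assumed by analogy with cms-import:

```xml
<!-- /templates/two-column: a blob usable as a template -->
<cms-template>
  <div class="main">
    <cms-template-area id="main">default main content</cms-template-area>
  </div>
  <div class="side">
    <cms-template-area id="side">default sidebar</cms-template-area>
  </div>
</cms-template>

<!-- a blob importing the template, overriding only the "main" area -->
<cms-import-template path="/templates/two-column">
  <cms-area id="main"><p>Page-specific content</p></cms-area>
</cms-import-template>
```

The "side" area falls back to the template’s default content since no matching cms-area overrides it.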

Now we come to cms-scheduled. This is an element that allows part of a blob (or the entire thing, if so desired) to be rendered on a scheduled basis. It must first contain some <range from="date" to="date" /> elements that define the date ranges during which the <content> element should be rendered. An <else> element may also be present to render content when the current date/time does not fall into any of the date ranges.
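Sketched out, a scheduled fragment might look like this (the date format shown is illustrative; the post doesn’t specify one):

```xml
<cms-scheduled>
  <range from="2012-11-23" to="2012-11-27" />
  <range from="2012-12-26" to="2012-12-31" />
  <content><p>The sale is on!</p></content>
  <else><p>Check back soon for our next sale.</p></else>
</cms-scheduled>
```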

Next up is the cms-conditional element which can primarily be used for selectively targeting content to specific audiences. It presents the content author with a simple system of if/else-if/else branching logic to determine which content to render for whichever audience. The inner elements are <if>, <elif>, and <else>. The attributes on the <if> and <elif> elements make up the conditional expressions.

The system evaluates the conditional expressions (a dictionary of key/value pairs pulled directly from the element attributes) to a single true/false value by using a “conditional provider” class. This class may be a custom implementation provided by the site implementer since it is best left up to him/her to define exactly how audiences may be defined and evaluated based on the user that the content should be rendered for.
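For example (the attribute names here are hypothetical; it is entirely up to the conditional provider to interpret them):

```xml
<cms-conditional>
  <if role="manager" dept="23">Hello, managers in dept 23!</if>
  <elif role="manager">Hello, managers in every other dept!</elif>
  <else>Hello, everyone else!</else>
</cms-conditional>
```

The engine hands each element’s attribute dictionary to the conditional provider in order and renders the first branch that evaluates to true.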

However, that may be asking too much of the site implementer because it would potentially involve defining a domain-specific language for evaluating expressions. I may provide a default implementation that allows for defining complex boolean logic expressions, e.g. <if expr="(role = 'manager') and (dept = '23')">Hello, managers in dept 23!</if>. The values of variables ‘role’ and ‘dept’ would be provided by a provider model implementation that the site implementer could more easily develop.

Finally, the cms-link element is responsible for letting the content author easily create anchor links (i.e. the <a> tag) to other blobs without having to worry about the details of how the URL gets mapped to the referenced blob. This is primarily for SEO purposes, so that you don’t have to bake an implementation of your URL rewriting scheme into your site’s content. The site implementer can write a custom provider that takes the linked-to blob’s absolute path and rewrites it into a URL that should pull up that blob as its own page, or as a wrapped article page, or whatever other linking scheme he/she wishes to implement. This lets your content be internally consistent without worrying about URL details. Changing your SEO strategy for your content should be as simple as rewriting the link provider.
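In a blob, that might look like the following (the ‘path’ attribute name is my assumption), with the link provider rewriting it into an <a> tag at render time:

```xml
<cms-link path="/articles/holiday-sale">Read about the holiday sale</cms-link>

<!-- one possible provider might render this as: -->
<a href="/articles/2012/11/holiday-sale.html">Read about the holiday sale</a>
```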

Now that we have the nitty-gritty details of how the content rendering engine and its basic processing elements work, we can talk about how such a low-level engine can be integrated with an existing site. But that’s for next time!

Feather: A new language based on asynchrony

I’ve spent most of my free time over the last few weeks in pursuit of designing a new programming language, one designed for asynchrony from the ground up. I call this language “Feather,” in the hope that it will be lightweight, simple, elegant, and just might possibly enable one to fly.

My core goals for this new language are:

  • asynchronous execution by default with explicit mechanisms to revert to synchronous execution.
  • immutable data and no ability to share mutable state between independent threads of execution.
  • static typing and complete type-safety.

Keeping this goal list very short will allow me to actually achieve all of these goals relatively easily with a final, working reference implementation of a compiler and runtime system.

Asynchronous execution:
This is the primary and most important goal of the language. Where possible, functions must be allowed to execute asynchronously. That is, the completion of one function need not depend on the completion of another function. This assumes execution independence of functions. Of course, this will not always be possible, since one function may rely on the computed results of another function in order to complete, creating a dependency. There may also be times when execution of certain functions needs to take place in a specific sequence, but in general I believe these to be special cases.

No shared, mutable state:
The main problem with allowing just any random set of functions to execute asynchronously with respect to one another has to do with shared mutable state. For instance, if just two functions can modify the same shared state and both are executing asynchronously without any synchronization between the two, the final effects on the shared state are undefined. The simplest solution to this problem is to disallow the sharing of mutable state between functions.
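Feather doesn’t exist yet, so here is the hazard sketched in today’s JavaScript, where an await stands in for the suspension point any real asynchronous call introduces. Two tasks share a mutable counter, and one update is lost:

```javascript
// Two "asynchronous" functions sharing a mutable counter. Each reads the
// counter, yields (as real I/O would), then writes back: the classic lost
// update that disallowing shared mutable state rules out by construction.
let counter = 0;
const yieldTurn = () => new Promise((resolve) => setTimeout(resolve, 0));

async function incrementOnce() {
  const seen = counter;  // read shared state
  await yieldTurn();     // suspension point: the other task runs here
  counter = seen + 1;    // write back a stale value
}

Promise.all([incrementOnce(), incrementOnce()]).then(() => {
  console.log(counter);  // 1, not 2: one of the two updates was lost
});
```

With immutable data and no shared mutable state, this program simply cannot be written, which is the point of baking both goals into the core of the language.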

Combining these two goals at the core of a language has never been done in any programming language I’ve used. Sure, other languages support asynchrony, and fewer still support immutable data, but not to the extent of putting both at the core of the language’s design. Declaring data as immutable is not the same as guaranteeing that mutable data cannot be shared across threads of execution.

I hope that defining these goals up front justifies the language’s existence enough for me to further pursue its design and development. For once, though, I think I’ll end this post before it gets too lengthy. I have much more to talk about regarding this language and its features than one post can contain. So, until next time!