The Deep Roots of Javascript Fatigue

I recently jumped back into frontend development for the first time in months, and I was immediately struck by one thing: everything had changed.

When I was more active in the frontend community, the changes seemed minor. We’d occasionally make switches in packaging (RequireJS → Browserify), or frameworks (Backbone → Components). And sometimes we’d take advantage of new Node/v8 features. But for the most part, the updates were all incremental.

It’s only after taking a step back and then getting back into it that I realized how much Javascript fatigue there really is.

Years ago, a friend and I discussed choosing the ‘right’ module system for his company. At the time, he was leaning towards going with RequireJS, and I urged him to look at Browserify or Component (having just abandoned RequireJS ourselves).

We talked again last night, and he said that he’d chosen RequireJS. By now, his company had built a massive codebase around it: “I guess we bet on the wrong horse there.”

But that got me thinking: with all the Javascript churn, there’s really no right horse to bet on. You either have to continually change your tech stack to keep up with the times, or be content with the horse you’ve got. And as a developer who’s trying to build things for people, it’s difficult to know when it makes sense to invest in new tech.

Like it or not, Javascript is evolving at a faster rate than any widespread language in the history of computing.

Most of the tools we use today didn’t even really exist a year ago: React, JSX, Flux, Redux, ES6, Babel, etc. Even setting up a ‘modern’ project requires installing a swath of brand-new dependencies and build tools. No other language does anything remotely like it. It’s even enough to warrant a “State of the Art” post so everyone knows what to use.

So, why does Javascript change so much?

A long strange journey

To answer that question, let’s take a step back through history. The exact relationship between ECMAScript and Javascript is a bit complex, but it sets the stage for a lot of today’s changes.

As it turns out, there are a ton of interesting dynamics at play: corporate self-interest, cries for open standards, Mozilla pushing language development, lagging IE releases that dominate the market, and tools that pave over a lot of the fragmentation.

The Browser Wars

Javascript first appeared as part of Netscape Navigator in March 1996. The creation story is now the stuff of legend: Brendan Eich, working as a one-man dev team, was tasked with creating a language to run in the browser. In just ten days, he delivered the first working version of Javascript for Navigator.

To make sure they weren’t missing the boat, Microsoft then implemented their own version of Javascript, dubbed JScript to avoid trademark issues (they would continue calling the language “JScript” until 2009). It was a reverse-engineered version of Javascript, designed to be compatible with Netscape’s implementation.

After the initial Navigator releases, Eich pushed for Javascript to be designed as a formal standard and spec with the ECMA committee, so that it would gain even broader adoption. The Netscape team submitted it for official review in November 1996.

Despite the submission to become an ECMA-defined language, Javascript continued iterating through its own versions, adding new features and extensions to the language.

As Eich moved from Netscape to Mozilla, the Firefox teams owned Javascript development, consistently releasing additional specs for developers willing to work with experimental features. Many of those features have since made it into later versions of ECMAScript, driving mainstream adoption.


ECMAScript has a weird and fascinating history of its own, largely broken up into different ‘major’ editions.


ECMA is an international standards organization (originally the European Computer Manufacturers Association), and they handle a number of specs, including C#, JSON, and Dart.

When choosing a name for this new browser language, ECMA had to decide between Javascript, JScript, or an entirely new name. So they took the diplomatic route and adopted the name ECMAScript as the ‘least common denominator’ between the two.

The first edition of ECMA-262 (262 is the id number given to the language spec) was then released in June 1997, to bridge the gap between Javascript and JScript and establish a plan for standardizing updates going forward. It was designed to be compatible with Javascript 1.3, which had added call, apply, and various Date functions.

ES2 was released one year later in June 1998, but the changes were purely cosmetic. They had been added to comply with a corresponding ISO version of the spec without changing the specification for the language itself, so it’s typically omitted from compatibility tables.

ES3, released in December 1999, marked the first major additions to the language. It added support for regexes and try/catch error handling (.bind wouldn’t arrive until ES5). ES3 became the “standard” for what the majority of browsers would support, underscored by the popularity of older IE browsers. Even in February 2012 (13 years after the ES3 release!), ES3-only IE browsers still had over 20% of the browser market.

Then work began on ES4, a version that was designed to revolutionize Javascript. Ideas for classes, generators, modules, and many of the features that eventually ended up in ES6 were first introduced in its drafts. The committee published a draft of the spec in October 2007, aiming for a final release the following year. It was supposed to be the modernized “Javascript 2.0” that would reduce a lot of the cruft of the first implementation.

But just as the spec was nearing completion, trouble struck! On the one hand, Microsoft’s IE platform architect Chris Wilson argued that the changes would effectively “break the web”. He stated that the ES4 changes were so backwards-incompatible that they would significantly hurt adoption.

On the other hand, Brendan Eich, now the Mozilla CTO, argued for the changes. In an open letter to Chris Wilson, he objected to the fact that Microsoft was only now withdrawing support for a spec which had been in the works for years.

After a lot of internal friction, the committee finally set aside its differences. They abandoned ES4 altogether and skipped straight to creating ES5 as an extension of the language features found in ES3. They added specifications for many of the features already added to Javascript 1.x by Mozilla. There was no ES4 release.

After meeting in Oslo, Brendan Eich sent a message to es-discuss (still the primary venue for language development) outlining a plan for a near-term incremental release, along with a bigger reform of the language spec that would be known as ‘harmony’. ES5 was finalized and published in December 2009.

With ES5 finally out the door, the committee started moving full-speed ahead on the next set of language extensions, which they’d hoped to start incorporating nearly a decade earlier.

By now, browser-makers and committee members alike had seen the danger of shipping a giant release without testing the features. So drafts of ES6 were published frequently starting in 2011, and made available behind flags ( --harmony ) in Node and many of the major browsers. Additionally, transpilers became a part of the modern build chain (more on that later), allowing developers to use the bleeding edge of the spec without worrying about breaking old browsers. ES6 was eventually released in June 2015.
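To make that concrete, here’s a sketch of what a transpiler does. The ES6 source below uses an arrow function and a template literal; the second function is roughly what a tool like Babel would emit for ES5-only browsers (the output here is hand-simplified for illustration, not Babel’s exact output):

```javascript
// ES6 source: const, an arrow function, and a template literal.
const greetES6 = (name) => `Hello, ${name}!`;

// Roughly what a transpiler emits for ES5-only browsers:
// `const` becomes `var`, the arrow becomes a function expression,
// and the template literal becomes string concatenation.
var greetES5 = function (name) {
  return 'Hello, ' + name + '!';
};

console.log(greetES6('world')); // Hello, world!
console.log(greetES5('world')); // Hello, world!
```

Both forms behave identically, which is exactly why shipping transpiled output to old browsers was safe.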

So we have:

  • Javascript versions driven by Netscape and then Mozilla (championed by Eich)
  • ECMAScript versions driven by the planning committee to standardize the language
  • JScript versions released by Microsoft which effectively mimicked Javascript and added its own extensions

Experiments and language additions in each dialect pushed the ES standard ahead.


Of course, we can’t talk about the full history of JS without mentioning the platform it runs on: browsers.

(Chart: browser market share over time, via W3Counter.)

Javascript is the only language in history that has so many dominant runtimes, all fighting for their piece of the market share. And it makes total sense, given how ‘bundled’ browsers are. If you build an alternative Python runtime, it’s hard to profit unless you add some sort of proprietary add-ons. But with browsers, the sky is the limit: tracking usage statistics, the ability to set default search engines, email defaults, and even advertising.

And the Javascript arms race certainly wasn’t helped by the collapse of the ES4 spec, or IE6-8’s longstanding vice grip on the market. Through most of the 2000s, Javascript runtimes remained non-standard and stagnant.

(Chart: browser market share through the 2000s, via W3Counter.)

But an interesting thing happened when Chrome first emerged onto the scene in December 2008. While Chrome had numerous features that made it technically interesting (tabs as processes, a new fast js runtime, and others), perhaps the thing that made it most revolutionary was its release schedule.

Chrome shipped from day one with auto-updates, tested first by a wide pool of early adopters on the dev and beta channels. If you wanted to use the bleeding edge, you could easily do so. Nowadays, updates to the stable channel happen every six weeks, with thousands of users testing and automatically reporting bugs they encounter on the dev and beta channels.

It meant that Chrome was able to ship updates far more frequently than its competitors. Where the other browsers would tell users to update every 6-12 months, Chrome updated its dev and beta channels weekly and monthly, and released updates on the stable channel every six weeks.

This put enormous pressure on the other browser makers to follow suit, pushing them to iterate rapidly and release new language features. Gone are the days of the not-to-spec IE Javascript implementations. As the ES6 drafts started to finalize, browsers have steadily been adding those features and fixing bugs in their implementations.

Most of the features are still not “safe” to use and hidden behind flags, but the fact that browsers will continue to auto-update means that the velocity of new JS features has increased dramatically in the last few years.

Builders and Shims

Imagine now that you’re a developer just learning JS, and knowing none of that history. It’d be pretty much impossible to keep track of which features are supported in what browsers, what’s to spec and what’s not, and what parts of the language should even be used.

There’s an entire book on just the ‘good parts’ of Javascript, born of its convoluted past. And before MDN, the predominant resources for getting an intro to Javascript just barely scratched the surface.

Realizing this was a problem, John Resig released the first version of jQuery in 2006. It was a drop-in library designed to pave over a lot of the inconsistencies between the different browsers. And for the most part, developers became comfortable with always bundling jQuery into their projects. It became the de facto library for DOM manipulation, AJAX requests, and more.
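To give a flavor of what “paving over inconsistencies” looked like, here is a simplified sketch of the kind of cross-browser shim such libraries contained (this is illustrative, not jQuery’s actual source): standards-based browsers exposed addEventListener, while old IE exposed attachEvent.

```javascript
// A simplified cross-browser event shim (illustrative, not jQuery source).
function addEvent(el, type, handler) {
  if (el.addEventListener) {
    // W3C standard API (the Netscape/Firefox lineage)
    el.addEventListener(type, handler, false);
  } else if (el.attachEvent) {
    // Old IE's JScript-era API
    el.attachEvent('on' + type, handler);
  } else {
    // Last resort: the DOM0 on-property
    el['on' + type] = handler;
  }
}

// Simulate an old-IE-style element to exercise the fallback path:
var calls = [];
var fakeOldIe = {
  attachEvent: function (name, handler) {
    calls.push(name);
    handler();
  }
};
addEvent(fakeOldIe, 'click', function () { calls.push('handled'); });
console.log(calls); // [ 'onclick', 'handled' ]
```

Every library author either wrote dozens of branches like this or pulled in a library that had already done so.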

But working with many different scripts was tedious. They had to be loaded in a certain order, or else duplicated dependencies would be bundled many times over. Individual libraries would overwrite the global scope, causing conflicts and monkey-patching default implementations.
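The global-scope problem is easy to demonstrate. In this minimal sketch, the two definitions of $ stand in for two hypothetical scripts loaded on the same page; whichever loads last silently wins:

```javascript
// script-a.js: defines a global $ helper
var $ = function (id) { return 'element:' + id; };

// script-b.js, loaded later, clobbers the same global name
var $ = function (sel) { return 'query:' + sel; };

// Any code that expected script A's $ now silently gets script B's behavior.
console.log($('#app')); // query:#app
```

There is no error, no warning; script A’s helper is simply gone. Module systems were largely a response to this.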

So James Burke created RequireJS in 2009, spawned out of his work with Dojo. It was a module loader designed to specify dependencies and load them asynchronously in the browser. It was one of the first frameworks to introduce the idea of a “module system” for isolating pieces of code, dubbed AMD (Asynchronous Module Definition).

While AMD was easy to load on the fly, it was also verbose. It essentially added dependency injection for every single library you’d want to load:

    define('cache', ['db', 'exports'], function(db, exports){
      var cache = {};
      exports.get = function(key, cb){
        if (cache.hasOwnProperty(key)) return cb(null, cache[key]);
        db.get(key, function(err, res){
          if (err) return cb(err);
          cache[key] = res;
          cb(null, res);
        });
      };
    });

This approach had the benefit that the browser could load libraries on the fly, but it was a lot of overhead for the programmer to really understand.

In 2009, Ryan Dahl started working on a project to run Javascript on the server. He liked the simplicity of working in a single-threaded environment, and paired v8 with non-blocking I/O. That project became Node.js.

Since Node ran on the server, making requests for additional scripts was cheap. The scripts were cached, so it was as easy as reading and parsing an additional file.

Instead of using AMD modules, Node popularized the CommonJS format (the competing module format at the time), using synchronous require statements to load dependencies. It mirrored the way other languages worked: grabbing dependencies synchronously and then caching them. Isaac Schlueter built npm around it as the de facto way to manage and install dependencies in Node. Soon, everyone was writing node scripts that looked like this:

    var db = require('db');
    var cache = {};

    exports.get = function(key, cb){
      if (cache.hasOwnProperty(key)) return cb(null, cache[key]);
      db.get(key, function(err, res){
        if (err) return cb(err);
        cache[key] = res;
        cb(null, res);
      });
    };

Yet the frontend still lagged. Developers wanted to require and manage Javascript dependencies with the same first-class CommonJS support we were getting in Node. The TC39 committee heard these complaints, but a fully-baked module system would take time to develop.

So a new set of tools appeared on the frontend developer’s toolchain: Bower for pure dependency management; Browserify and Component / Duo for building scripts and bundling them together; Webpack, which promised a new build system that also handled CSS; and Grunt and Gulp for orchestrating them all and making them play nicely together.


Builders didn’t quite take it far enough on their own. Once people were comfortable with the idea of adding convenience shims and libraries, they started exploring compiling other languages down to Javascript.

After all, there are hundreds of programming languages that can run on the server, but there’s only one widely-supported language for the browser.

Coffeescript was the first major player, created by Jeremy Ashkenas in 2009. Prompted by the surge of programmers building Ruby on Rails apps, Coffeescript provided a way to write Javascript that looked like Ruby. It compiled down to Javascript and, using our handy build tools, could remove many of the gotchas of vanilla js.
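As a taste of the appeal, here is a small Coffeescript snippet (shown in comments) alongside roughly the Javascript it compiles to; the compiled form below is hand-written for illustration rather than the compiler’s verbatim output:

```javascript
// Coffeescript source:
//   square = (x) -> x * x
//   console.log square 4
//
// Roughly the Javascript the compiler emits:
var square = function (x) {
  return x * x;
};
console.log(square(4)); // 16
```

The terse arrow syntax and implicit returns were exactly the kind of sugar that later made its way into ES6 itself.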

But it didn’t end there.

Once transpilers became a standard part of the build chain, we had opened the Pandora’s box of what could be compiled to Javascript.

We started with existing languages that compiled to JS. Gmail was built with GWT (Google Web Toolkit), a toolkit that compiled Java down to Javascript. Clojurescript compiled Clojure to Javascript. Emscripten took C/C++ code and compiled it down to a much more efficient browser runtime using asm.js.

And now there are entirely separate languages built with compilation to Javascript as a first-class citizen. Dart and Typescript both compile down to JS (and Dart can additionally run in its own native VM). Elm was a new functional take on the frontend, compiling both its visual elements and its logic down to Javascript.

Perhaps most interesting, transpilers provided a way to use tomorrow’s ES versions, today! Using tools like Babel, Gnode, and Regenerator, we could transform our ES6 features to run on the existing browser ecosystem. JSX allowed us to write HTML in a way that combined DOM composition and Javascript a bit more naturally, with all the complexity hidden behind a straightforward transpiler.
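The JSX transform, for instance, is just a syntax rewrite: an element becomes a plain function call. The sketch below uses a minimal stand-in for the element-creating function (in real code this is React.createElement) purely to show the shape of the transpiled output:

```javascript
// Minimal stand-in for React.createElement, just to show the shape
// of what the JSX transpiler emits (not React's real implementation).
function createElement(type, props, child) {
  return { type: type, props: props || {}, child: child };
}

// JSX:  <h1 className="title">Hello</h1>
// transpiles to roughly:
var el = createElement('h1', { className: 'title' }, 'Hello');

console.log(el.type);            // h1
console.log(el.props.className); // title
console.log(el.child);           // Hello
```

Because the transform is purely syntactic, any browser that can run function calls can run “JSX” once it has been through the build chain.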

Nearly 10 years after Chris Wilson first argued that ES4 would break the web, Babel and other transpilers have acted as the “missing link” providing backwards compatibility. Babel has worked its way into the standard Javascript toolchain, and now most builders include it as a first-class plugin.

Betting on the Right Horse

The history of the language should indicate that, above all, one thing is clear: there’s an enormously high rate of change in the Javascript world.

I didn’t even go into changes in frameworks (Ember, Backbone, Angular, React) or how we structure asynchronous programming (Callbacks, Promises, Iterators, Async/Await). But those have all churned relatively regularly over the years.
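The async churn alone is worth a quick sketch: the same lookup written callback-style and then promise-style, which is the sort of migration many codebases went through (fetchUser here is a hypothetical function, not a real API):

```javascript
// Callback style, as popularized by Node (fetchUser is hypothetical):
function fetchUserCallback(id, cb) {
  setImmediate(function () {
    cb(null, { id: id, name: 'Ada' });
  });
}

// The same operation, promise style:
function fetchUserPromise(id) {
  return Promise.resolve({ id: id, name: 'Ada' });
}

fetchUserCallback(1, function (err, user) {
  if (err) throw err;
  console.log('callback:', user.name); // callback: Ada
});

fetchUserPromise(1).then(function (user) {
  console.log('promise:', user.name); // promise: Ada
});
```

Each shift changed not just syntax but how errors flow and how code composes, which is why every one of them rippled through entire codebases.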

The most interesting thing about having a language where everyone is comfortable with build tools, transpilers, and syntax additions is that the language can advance at an astonishing rate. Really, the only thing holding back new language features is consensus. As quickly as people agree on and implement the spec, there’s code to support it.

It makes me wonder whether there will ever be another language like Javascript. It’s such a unique environment, where the implementations in the wild lag so far behind the spec, that it just invites new specs as quickly as the community can think of them. Some of them even become first-class citizens, and are adopted by browsers.

Javascript development doesn’t show any signs of stopping either. And why would it? There’s never been a time with more tooling and first-class JS support.

I predict we’ll continue to see the cycle of rapid iteration for the next few years at least. Companies will be forced to update their codebase, or be happy with the horse they have.
