We were having a discussion about languages, prompted by some technology strategy work we are doing around Node.js, and I came away with the impression that there is a general sense we are moving out of a period in which the C family of languages held hegemony (from the mid-nineties onwards) and into one of fragmentation and diversity.
First, I’m going to challenge that notion with a bit of history, and then see where that perspective leaves us in the “language wars” of today.
When I was a baby developer, every single engineer was reasonably proficient in one of the popular assembly language families – typically Motorola’s 680x0 or Intel’s x86, but it could be Z80, 6502, ARM or something more esoteric (IBM mainframes, anyone?).
Here’s some nostalgic Z80 code. Ah, 8-bit architectures. How I miss you.
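Something like this, say: a little routine that prints a zero-terminated string, assuming the ZX Spectrum ROM’s RST $10 print-a-character entry point (other Z80 machines had their own equivalents):

```
        ld   hl, msg        ; HL -> start of the string
print:  ld   a, (hl)        ; fetch the next byte
        or   a              ; sets the Z flag on the 0 terminator
        ret  z              ; done when we reach it
        rst  $10            ; ZX Spectrum ROM: print the char in A
        inc  hl             ; advance the pointer
        jr   print          ; and round again
msg:    defb "HELLO WORLD", 0
```

Every machine had its own dialect of this – different output routines, different assemblers – but the same close-to-the-metal flavour.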
Even then, C was starting to take a firm hold everywhere, while scientists were using FORTRAN, line-of-business devs were still knocking out COBOL (and various mainframe languages like APL and PL/I), and CompSci academics were using languages like ML, LISP and Haskell.
We lived in a world of profound language diversity, with each language specialized to a particular use case. It is often perceived that people used “the right tool for the right job” – but I think the reality was somewhat different. As I said, everyone knew a bit of assembler. You had to if you wanted to be able to debug things at the lowest level on your platforms of choice. But LOB developers knew COBOL, not ML. Scientists knew FORTRAN, not LISP. Language diversity was really programmer diversity.
So where are we today? Well, judging by the TIOBE index, more than half of developers know at least one C-family language, be that C, C++, Java, C# or Objective-C.
Given the importance of the concepts embodied in Node.js on the one hand, and the apparently insatiable industry demand for ever-more elaborate web pages on the other, why hasn’t JavaScript broken that C-family grip?
I think the answer comes down to a risk/reward calculation. I mentioned this in the context of HTML5 a while ago: the tooling is poor, developer education is poor, and the language is deceptive (it looks like C, but has much more in common with LISP and ML). The debugging experience is extremely poor (even, perhaps especially, in the world of Node.js). And although there are many third-party libraries (just look at the 28,000-odd packages on NPM), they are riddled with incompatibilities, and even the base libraries supported across all implementations are barely fit for purpose.
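To see what I mean by deceptive, consider a minimal sketch in plain JavaScript (nothing Node-specific here): the braces and semicolons are straight out of C, but the semantics, first-class functions closing over their environment, come from the LISP/ML family.

```javascript
// C on the surface, LISP/ML underneath: functions are values
// that capture the variables in scope around them.
function makeCounter() {
  var count = 0;            // captured by the closure below
  return function () {      // we return a function, not a result
    count += 1;
    return count;
  };
}

var next = makeCounter();
console.log(next());        // 1
console.log(next());        // 2: the closure remembers count

// ...and the C-like surface misleads in smaller ways, too:
console.log(0.1 + 0.2 === 0.3);   // false: every number is a double
```

A developer who reads that as C will be surprised on a regular basis.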
As a CTO, I certainly wouldn’t bet the farm on that kind of technology at this stage if I didn’t have to. Of course, it is really interesting to work with, and if people don’t work with it, it will never improve, but it is (clearly) not yet ready for mainstream adoption – Mort and Elvis (Microsoft’s personas for the mainstream line-of-business and professional developer) are not in the building. If it really is the way forward, how is this technology going to evolve to meet the constraints of broad adoption?