
Which programming languages should I learn?

by Matthew Adams


We were having a discussion about languages, prompted by a piece of technology strategy work we are doing around Node.js, and I came away with the impression that there is a general sense that we are moving from a period when the C-family of languages had a hegemony (from the mid-nineties onwards) into a period of fragmentation and diversity.

First, I’m going to challenge that notion with a bit of history, and then see where that perspective leaves us in the “language wars” of today.

When I was a baby developer, every single engineer was reasonably proficient in one of the popular Assembly Language families – typically Motorola's 680x0 or Intel's x86, but it could be Z80, 6502, ARM or something more esoteric (IBM Mainframes, anyone?).

Here’s some nostalgic Z80 code. Ah, little-endian 8-bit architectures. How I miss you.
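Something in this vein – a minimal, illustrative fragment in classic Z80 mnemonics, not any particular historical program:

```asm
; Illustrative Z80 sketch: block-copy 16 bytes from one address to another.
; Addresses and length are arbitrary examples.
        LD   HL, 4000h      ; source address
        LD   DE, 5000h      ; destination address
        LD   BC, 0010h      ; byte count (16 bytes)
        LDIR                ; repeat: (DE) <- (HL), inc HL/DE, dec BC, until BC = 0
        RET
```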

Even then, C was starting to take a firm hold everywhere, while scientists were using FORTRAN, line-of-business devs were still knocking out COBOL (and various mainframe languages like APL and PL/I), and CompSci academics were using languages like ML, LISP and Haskell.

We lived in a world of profound language diversity, specialized to a particular use case. It is often perceived that people used “the right tool for the right job” – but I think the reality was somewhat different. As I said, everyone knew a bit of assembler. You had to if you wanted to be able to debug things at the lowest level on your platforms of choice. But LOB developers knew COBOL, not ML. Scientists knew FORTRAN, not LISP. Language diversity was really programmer diversity.

A few years later, and C/C++ are becoming dominant, along with the amazingly successful and long-lived Visual Basic (nearly as ubiquitous as Excel as a populist programming tool). Then, along comes Java, and the rise of Perl and PHP, Python, C# (and VB.NET – a totally different language). More recently, JavaScript moves from being a poorly-understood SFX tool for web designers, to a mainstream language; and ML gives birth to a whole family like F#, Erlang and Scala.

So where are we today? Well, judging by the TIOBE index, more than half of developers know at least one C-family language, be that C, C++, Java, C# or Objective-C.

Many developers are also (we are told) learning JavaScript – driven by the demand for richly interactive web applications (and more fancy SFX in standard websites), and the rise in interest in Node.js. It is interesting to note, though, that on the TIOBE measure, it has had a year-on-year decline in demand and has fallen out of the top 10 languages (usurped by a resurgent Perl and by Ruby).

Given the importance of the concepts embodied in node.js on the one hand, and the apparently insatiable industry demand for ever-more elaborate web pages on the other, why might this be so?

I think the answer is probably influenced by a risk/reward calculation. I mentioned this in the context of HTML5 a while ago: the tooling is poor, developer education is poor, the language is deceptive (it looks like C, but has much more in common with LISP(1) and ML), the debugging experience is extremely poor (even, perhaps especially, in the world of Node.js), and although there are many third-party libraries (just look at the 28,000-odd packages on NPM), they are riddled with incompatibilities, and even the base libraries supported across all implementations are barely fit for purpose.
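To make that LISP/ML kinship concrete, here is a minimal sketch (the names are purely illustrative): despite the C-like braces, JavaScript functions are first-class values with lexical closures – Lambda Calculus territory, not C.

```javascript
// makeCounter returns a function that closes over its own private
// 'count' variable. C has no direct equivalent without hand-rolled
// structs and function pointers; in Lisp/ML this is bread and butter.
function makeCounter() {
  var count = 0;
  return function () {
    count += 1;
    return count;
  };
}

var next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2 -- state survives between calls

// Higher-order functions compose like Lisp's map:
var doubled = [1, 2, 3].map(function (x) { return x * 2; });
console.log(doubled); // [ 2, 4, 6 ]
```

The C-style syntax hides the fact that the semantics – functions as values, closures, higher-order composition – are functional at heart, which is exactly why developers who treat it as “C with looser types” come unstuck.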

As a CTO, I certainly wouldn’t bet the farm on that kind of technology at this stage if I didn’t have to. Of course, it is really interesting to work with, and if people don’t work with it, it will never improve, but it is (clearly) not yet ready for mainstream adoption – Mort and Elvis are not in the building. If it is really the way forward, then how is this technology going to evolve to meet the constraints of broad adoption?

One way in which this evolution is happening is signposted by Node.js: the use of the JavaScript engine as (part of) a platform. A case in point is Microsoft’s own support for (perhaps I’d go so far as to say partial adoption of) Node.js, with IIS/Azure as first-class hosts. Thinking of the JavaScript engine as a platform frees us from JavaScript as a language, and we can start to look at CoffeeScript, TypeScript and others as a partial solution to the language complexity issue. But are such marginal languages (even esoterica such as F# and Erlang top them in the adoption stakes) really going to save JavaScript in the long run?

I’ll push the boat out and say ‘no’. In 10 years’ time, the principle behind Node will be with us, but the .JS bit will be, to all intents and purposes, gone. We may well still call that part of the browser runtime JavaScript, and there will be plenty of classic JavaScript out in the wild, but the languages and tooling it supports must evolve beyond recognition before it can genuinely challenge for the top-5 position its proponents seem to think it already holds.

(1) Eric Lippert said on his blog in 2003: “Those of you who are familiar with more traditional functional languages, such as Lisp or Scheme, will recognize that functions in JScript are fundamentally the Lambda Calculus in fancy dress. (The august Waldemar Horwat — who was at one time the lead Javascript developer at AOL-Time-Warner-Netscape — once told me that he considered Javascript to be just another syntax for Common Lisp. I’m pretty sure he was being serious; Waldemar’s a hard core language guy and a heck of a square dancer to boot.)”


About the author

Matthew was CTO of a venture-backed technology start-up in the UK & US for 10 years, and is now a Founder of Endjin Ltd, which provides technology strategy, experience and development services to its clients who are seeking to take advantage of Microsoft Azure and the Cloud. You can follow Matthew on Twitter.