The developing story of programming languages

This post is slightly more philosophical, without practical takeaways for the reader — it expresses an opinionated view of programming languages’ evolution. I have spent a substantial part of my life with them and have formed a bond with these “lingua francas” of programmers.

In the old days, developers tended to punch holes in paper from nine to five.

The history of spoken languages is tightly interwoven with that of humankind. Some of them disappeared together with their civilizations; others were transformed or influenced by neighbors. Their programming counterparts share a similar fate, although within the much shorter time range of the last eighty years. Many future-shaping things happened in this small timeframe — often unnoticed by our generation of software creators. To understand where programming languages have arrived, we need to rewind the story and fast-forward through some key moments.

Prehistoric age

My college friend once told me how his father read raw dumps of computer instructions. I was amazed to hear that it was a tightly packed listing of consecutive hexadecimal numbers. I couldn’t understand how he could start reading anywhere in the listing and immediately recognize what was going on. To me, it sounded like surviving an encounter with a saber-toothed tiger — our prehistoric ancestors were capable of it, but we have lost the ability. We can’t imagine the hardships and struggles of programmers living in this “prehistoric age” of computing. They were so tightly bound to a specific computer that its replacement rendered a significant portion of their knowledge and experience unusable. They had to think like these computers; there was no other option.

A simple sum coded in a higher-level language and the corresponding machine code (source).

Pathfinders

But soon, people realized that even programming a simple sum of two numbers required a considerable amount of mental work. Why not use the simple plus sign, as in mathematics? Questions like these brought them to the invention of higher-level languages. Switching from computer instructions to more abstract constructs gave our predecessors the long-desired independence from hardware. Now they could universally express and reuse algorithms — sequences of steps executable by different processors.

But this great discovery revealed something bigger — far, unknown horizons of software architecture and design. This was the “age of pioneers,” and like in space exploration, they hit dead ends and failures. Especially infamous is the misuse of the GoTo statement, which produced obscure spaghetti code. Rumor has it that an entire generation of programmers had to retire before its use faded away. Finally, our predecessors admitted it publicly: we are in a software crisis; we need to step back and reinvent our development approach.

Objects are everywhere

Thinking in objects was already there — unnoticed and successfully used in simulations of real-world systems. It took some time to realize that every program is a simulation of possible or existing events. If a car is an object, it has specific attributes, like its current speed, and interfaces to interact with it — e.g., pressing the gas pedal. Consequently, with this abstraction in mind, we can split a team of programmers in half. One half creates the car, and the other can use it for any purpose, ignoring its internal complexity. It’s the same way we use our vehicles without any knowledge of engine ignition timing. This “abstraction approach” was so easy for programmers to grasp that the object-oriented paradigm became the leading paradigm for building software systems. That “modeling age” brought new terms to programming, like encapsulation, inheritance, classes, and interfaces. Using new words had a simple rationale — to help imperfect human minds grasp the complexity of software. We took another step away from the computer’s zeros and ones towards the more natural thinking practiced in our physical world.
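The car metaphor above can be sketched in a few lines — a minimal, hypothetical example (the class and method names are illustrative, not from any real codebase) showing encapsulated state behind a public interface:

```python
class Car:
    """A real-world abstraction: hidden state plus a public interface."""

    def __init__(self) -> None:
        self._speed = 0.0  # encapsulated attribute; callers never touch it directly

    def press_gas_pedal(self, amount: float) -> None:
        """The interface hides how acceleration is computed internally."""
        self._speed += amount * 2.5  # internal detail the caller can ignore

    @property
    def current_speed(self) -> float:
        """Read-only view of the car's state."""
        return self._speed
```

One half of the team maintains `Car`; the other simply calls `press_gas_pedal` without knowing the acceleration formula inside.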

Cleaning up the memory garbage

Working with memory remained a pain for developers. In practice, you work with it everywhere in the code — combining texts, arrays of numbers, or intermediate calculations. But memory is only borrowed, and the operating system wants it back for others. Forgetting to release used memory is one of the programmer’s oldest nightmares. Seeing your program consume more and more memory over a long run, you know what’s going on. Somewhere in those hundreds of thousands of lines of code, someone forgot to clean up, leading to an unavoidable program crash sooner or later. That happened all too often until languages with garbage collectors came onto the scene. They release unused memory automatically during code execution.

The simple operation of merging two arrays is not so simple in a language without a garbage collector.

Being pampered by a platform with a garbage collector has its advantages. First, we stop worrying about cleaning things up everywhere, which means a radically lower cognitive load even for the simplest tasks. Second, your source code will be slimmer — free of programming constructs that add nothing to the desired business logic. More concise code is always more understandable and maintainable. And finally, your program will probably be more secure.
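To make the contrast concrete: in a garbage-collected language such as Python, the array merge mentioned above collapses to a single expression — allocation of the result and reclamation of temporaries are handled entirely by the runtime (the values here are arbitrary illustrative data):

```python
a = [1, 2, 3]
b = [4, 5, 6]

# No malloc/free bookkeeping: the runtime allocates the new list
# and will reclaim any of these lists once they are no longer referenced.
merged = a + b
print(merged)  # [1, 2, 3, 4, 5, 6]
```

The construct expresses only the business intent — “merge these two arrays” — with no memory-management ceremony around it.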

Spirit of the present age

How can we describe the present age of programming? First, let’s look at a generally accepted ranking of programming languages’ popularity. Take the seven most significant languages, each with more than a 2% share. Looking through them, we can make the following statements:

  • all are higher-level languages
  • six incorporate the object-oriented paradigm
  • five have a built-in garbage collector

The result of eighty years of human effort lies in front of us. It rose from the sweat of many brilliant people searching for the developer’s holy grail, often trying crazy things to find it. Programming languages evolved to be more semantically expressive, with less and less need for technical constructs. Nowadays, we can see this throughout the whole domain of our work. For example, instead of wrestling with synchronization primitives, we write asynchronous code that reads like a story. We often replace “for” and “while” loops with semantic one-liners. And there are many other examples.

We are releasing ourselves from the computer’s unnatural technological reality into semantic worlds designed by ourselves. Eighty years ago, we had to adapt our thinking to computers. Now the situation is reversed: computers use their computational power to understand us.

Comparison of expressiveness between natural language and modern code

This evolution has produced two outcomes:

We are undoubtedly in “the age of humanity.” Programming is moving towards the human world — our ideas, expressions, experience, and social interaction.

More than 56 million programmers (as of Sept. 2020) have joined one site dedicated to code sharing. It has a far-reaching economic and technological impact on our everyday work — many inventions originated there thanks to collective effort. But we must not forget the social and cultural aspects of the communities formed around this big code repository. Writing genuine code can have the same effect on readers as a good fiction book. Nowadays, it’s more like art to express yourself and less like the mumbling of nerds.

Long ago, languages became the melting pot for new thinking and the rise of cultures. It is fascinating that this history repeats itself in our technological world, right before our eyes.

Thanks for reading.

I have tackled software architecture and development for two decades and am still amazed by its inspirational approaches, much like in art.
