How could we define an abstraction in the context of programming? Well, in simple words, I would say it is a part of the logic extracted from the whole, keeping details separated from the general implementation. Furthermore, an abstraction should be reusable, decreasing complexity and thus improving code comprehension. After all, it is easier to grasp the big picture when we reason at a higher level and leave some specifics out of scope.
As you can probably see, an abstraction may be a function that calculates a result based on its parameters, but it may also go even further. We can take a number of functions that operate with one another and fashion them into expressions, statements, and programs. With that in place, we can stop expressing our logic in the previous form and start using the abstraction instead.
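To make this concrete, here is a minimal sketch in Python (the names and the pricing scenario are purely illustrative): a calculation is extracted behind a name, and two such abstractions are then composed into a larger expression without the caller knowing their internals.

```python
# A hypothetical example: logic extracted from the whole and hidden
# behind names, so callers deal only with the general idea.

def gross_price(net: float, tax_rate: float) -> float:
    """Hide the tax calculation behind a name; callers need no details."""
    return net * (1 + tax_rate)

def apply_discount(price: float, percent: float) -> float:
    """Another small abstraction we can compose with the first."""
    return price * (1 - percent / 100)

# Composing the two functions into a larger expression.
total = apply_discount(gross_price(100.0, 0.23), 10)
print(total)  # roughly 110.7, give or take floating-point rounding
```

Each function is reusable on its own, and neither caller nor reader has to re-derive the arithmetic every time it is needed.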
That may be perceived as a simplified description of a higher-level language, but let’s start from the beginning.
The abstraction of calculation
Back in the day, when we wanted to repeat some calculations, we had to write them down on paper to make sure we followed each step and did not miss one. In the language of mathematics, we could express various computations and apply them whenever needed.
Later, the computer era came around with the remarkable ability to automate our recipes. With transistors and logic gates, we were able to improve calculation speed dramatically: we could translate our numbers into signals, operate on those signals, and translate the results back into numbers we could understand.
That was a perfect abstraction for speeding up specific calculations and making sure they were easily repeatable without human error. Truth be told, we lost most of the readability because the engineering entry bar was very high, yet the advantage was enormous.
The abstraction of instruction
What happened next? Well, we got electronics, signals, and gates, and everything was high-speed. Still, we needed to make it more abstract, because nobody wanted to generate the proper voltages by hand just to use a calculator.
Right about that time, the processor was created alongside a corresponding language: assembly, translated into machine code by an assembler. Even though every architecture has its own assembly language, we eventually decided that some universal instructions would be common to most of them. From that day on, we could write instructions in human-readable text to tell the computer what we wanted it to do.
The entry bar changed; the work now resembled a combination of the earlier mathematical paperwork and the electronic know-how, bringing in the advantages of both approaches. As a result, we sacrificed a few possibilities and some potential speed to get ourselves a standard that allowed programmers to write instructions in a more human-friendly way.
The abstraction of a programming language
The last step was the invention of higher-level programming languages. They are higher-level because they are usually compiled or translated into lower-level languages or machine instructions.
Things like compilation, code checking, static typing, and automatic memory handling allowed for the creation of safe languages with a low entry bar, designed for specific purposes. We were able to define various paradigms to solve problems with scalability. For instance, object-oriented programming tries to model problems using classes, while pure functional programming forgoes mutable state to improve formal reasoning. We could choose a tradeoff, design a language, and use it to solve a specific category of problems.
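As a rough illustration of the two paradigms mentioned above, here is a sketch in Python, which happens to support both styles (the `Counter` class and `increment` function are made up for this example, not part of any library):

```python
# Object-oriented style: state is modeled and mutated inside a class.
class Counter:
    def __init__(self) -> None:
        self.value = 0

    def increment(self) -> None:
        self.value += 1  # mutation is hidden behind the object's interface

# Pure functional style: no mutation; each call returns a new value,
# which makes the behavior easier to reason about formally.
def increment(value: int) -> int:
    return value + 1

c = Counter()
c.increment()
c.increment()
print(c.value)                  # 2, held as mutable state in the object

print(increment(increment(0)))  # 2, with no shared state involved
```

Both reach the same result; the tradeoff is in what each style restricts (free mutation vs. explicit state threading) and what it gains in return (a modeled domain vs. simpler formal reasoning).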
With every high-level language, we limit our power over the full range of calculation possibilities and, in exchange, create a better standard for error-free implementation, leading to more advanced and more complex programs.
Abstraction is naturally embedded in all programming languages, starting from the very idea of automating manual calculations and ending with high-level specialized languages and frameworks. We no longer need to learn advanced math, electronics, memory addressing, and CPU architecture to write a “Hello World” program. Existing abstractions allow programmers to focus on business logic and application architecture, assuming that everything below will simply work.
The thing to learn here is that every abstraction has two sides. One side is the restriction, where we lose some expressive power. The other is the advantage, where we gain specific traits, partly because of the limits we put on ourselves.
The interplay of pain and gain is at the very core of all abstractions. Some of them are definitely a better fit and more effective than others. So, learn the tradeoffs before creating a new one!