Recently, we discussed the idea of abstraction behind imperative programming. The whole point of an abstraction is to make things more manageable, and when we consider the evolution of programming languages, the imperative paradigm itself seems a very natural approach.
High-level languages implementing imperative abstractions make code easy to manage through explicit commands that modify the state. However, there are other approaches that introduce distinct paths of programming. By deliberately restricting what can be expressed, various paradigms gain different, more sophisticated advantages.
The declarative paradigm
One of the most important approaches is declarative programming, which, in short, defines a program as a set of declarations and logical relations that describe the computation. The vital thing is describing how data flow through the program without modifying external state, thus avoiding side effects in general.
The declarative paradigm is a perfect example of how giving away a certain power of expression buys specific properties of a solution. For instance, when we implement a solution to a problem in a purely declarative manner, we may expect it to be easier to prove correct, to run in parallel, and to scale up.
The most popular approach within the declarative family is the functional paradigm, together with its more conservative cousin, pure functional programming. Writing programs in this manner means composing functions built from conditions and expressions, preferably without side effects. Those functions are treated as first-class citizens and are therefore valid arguments for other functions. It is common to build logic where one parameter (data) is consumed by another parameter (a function). This concept is present in implementations of fold, widespread across the programming world and based on the functional approach.
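To make the idea concrete, here is a minimal Python sketch of fold using the standard-library `functools.reduce`: the combining function is itself an argument, which is exactly the first-class-function idea described above.

```python
from functools import reduce

numbers = [1, 2, 3, 4]

# Fold (reduce) takes a combining function as an argument:
# both the data and the function are parameters.
# Sum via fold: (((0 + 1) + 2) + 3) + 4
total = reduce(lambda acc, x: acc + x, numbers, 0)

# The same fold skeleton with a different function builds a product.
product = reduce(lambda acc, x: acc * x, numbers, 1)

print(total, product)  # → 10 24
```

Swapping the function parameter changes the computation entirely while the traversal logic stays the same; that is the composability the paragraph above refers to.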
All of those composable functions with no mutable state produce code written in a more mathematical manner and thus better suited for formal verification: it is easier to prove its correctness in the universal language of mathematics.
Of course, it is possible to prove the correctness of a program whose instructions modify the state, but that usually involves model checking and verifying all possible state permutations. That approach may become very cumbersome and thus inapplicable to larger codebases.
There are many implementations of this paradigm in different multi-paradigm languages, although I suggest looking at the pure ones to grasp the idea fully. A great example is Haskell, a general-purpose, statically typed functional language; another is Elm, an attractive pure-functional language for building web interfaces.
A specific subparadigm of the declarative approach is called programming in logic. This approach uses logical definitions and relations, allowing a program to validate statements or to find values that satisfy the stated relations. It is like modeling logical problems in the language of mathematics and expecting your program to provide all the solutions.
In some ways, this approach is similar to databases and SQL queries. In that case, our definitions and relations represent the data, and the program itself is a query that computes the result based on the provided definitions. I mention this because SELECT queries are declarative and correspond nicely to programming in logic.
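As a rough illustration (with made-up data), a Python list comprehension plays the same role as a declarative SELECT: we state what we want, not how to loop over rows.

```python
# Hypothetical table of users; the names and ages are invented for the example.
users = [
    {"name": "Ada", "age": 36},
    {"name": "Alan", "age": 41},
    {"name": "Grace", "age": 85},
]

# Roughly "SELECT name FROM users WHERE age < 50", expressed declaratively:
names = [u["name"] for u in users if u["age"] < 50]
print(names)  # → ['Ada', 'Alan']
```

The comprehension describes the desired result set; the iteration strategy is left to the language runtime, just as a SQL engine chooses its own execution plan.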
In real life, logic programming allows one to quickly model and implement various optimization and logical-reasoning problems. One may then find a solution and verify its correctness using mathematical logic.
This paradigm suits particular purposes, so its implementations are designed accordingly to simplify logical operations. The best-known language implementing this approach is Prolog, which is a great place to start programming in logic.
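To give a flavor of the style without leaving Python, here is a toy, hand-rolled sketch (the facts and names are invented): facts are tuples, and a rule asks "which values satisfy the relation?" rather than "how do I compute the answer?". It mimics the Prolog rule `grandparent(X, Z) :- parent(X, Y), parent(Y, Z).` by generate-and-test.

```python
# Facts: parent(X, Y) means X is a parent of Y.
parent = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}

def grandparents(facts):
    # Rule: grandparent(X, Z) holds when there is some Y with
    # parent(X, Y) and parent(Y, Z). We simply enumerate all
    # pairs of facts and keep those that satisfy the relation.
    return {(x, z) for (x, y) in facts for (y2, z) in facts if y == y2}

print(sorted(grandparents(parent)))  # → [('tom', 'ann'), ('tom', 'pat')]
```

A real logic language would derive this by unification and backtracking instead of brute-force enumeration, but the declarative shape is the same: we state the relation, and the engine finds all values that satisfy it.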
There are many more declarative approaches, and all of their implementations try to solve specific categories of problems.
The last one I believe is essential to recognize is reactive programming. The idea is that we define a source of data and declare streams that operate on it. The trick is that whenever the source changes its value, all the logic based on that data re-runs automatically and produces new results.
Where do we see reactive programming? On the machine level, this may resemble the flow of signals through the composition of logic gates. On the system level, it could match elements of the interface based on the state, which operates as the source of truth for the visual representation. In both cases, the reactive idea defines some changeable source and declares streams of logic to produce the required results.
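The idea above can be sketched in a few lines of Python. This is a hand-rolled toy, not a real reactive framework: a source cell notifies its declared streams whenever its value changes.

```python
class Cell:
    """A changeable source of truth that pushes updates to subscribers."""

    def __init__(self, value):
        self.value = value
        self._subscribers = []

    def subscribe(self, fn):
        # Declare a stream: fn runs now and on every future change.
        self._subscribers.append(fn)
        fn(self.value)

    def set(self, value):
        # Changing the source re-runs all dependent logic automatically.
        self.value = value
        for fn in self._subscribers:
            fn(value)

source = Cell(1)
results = []
# Declared stream: results always reflect source * 10.
source.subscribe(lambda v: results.append(v * 10))

source.set(2)
source.set(5)
print(results)  # → [10, 20, 50]
```

Note that the subscriber never polls the source; the declaration alone wires the update flow, which is the essence of the reactive approach. Libraries such as RxPy provide the same pattern with far richer stream operators.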
Using reactive programming for problems that resemble automatic update processes produces reliable and straightforward code that can be easily tested and verified.
Reactive languages are usually domain-specific, designed to solve particular categories of problems. However, many libraries and frameworks for general-purpose languages add reactivity to the programmer's toolbox and make it easy to use.
The declarative paradigm is a comprehensive approach that requires expressing logic using functions or relations. That structure provides the general guidance for running the program and computing results. Not writing explicit instructions that modify the state lies at the root of the declarative approach and differentiates it from imperative programming. The declarative data flow and the lack of side effects simplify the logic in general and help prove the program's correctness formally.
There are many specific subparadigms, and their implementations handle particular categories of problems. That is possible because most declarative trade-offs forgo the general applicability of a language to gain domain-related advantages.
Speaking of advantages, next time we will discuss the real-life differences between imperative and declarative programming. See you!