Writing imperative vs. writing functional code

July 14, 2021

imperative vs. functional

There are many different paradigms in the programming world. Each introduces a specific abstraction that restricts us in some ways and gives us certain advantages in return. When it comes to programming languages and their design, we can distinguish two major approaches that those languages may implement.

One of them is imperative programming, which describes a program as a sequence of instructions that operate on some state. They may read and write values, and they may calculate new values based on the previous ones. This approach is very straightforward, and with the help of modern programming languages supporting various imperative subparadigms, we gain the ability to structure large codebases properly alongside almost limitless expressive power.

The other is declarative programming, which defines a program as a set of relations describing how to obtain the result. Usually, no state mutation is necessary, and the purity of those declarations helps ensure correctness and scale the solution. Declarative subparadigms are present in some general-purpose languages and also in many domain-specific languages, SQL being a well-known example.

Bottom-up vs. top-down

All of this is theory, but how do we actually differentiate the two approaches to programming? The best way to show the differences between those two ideas is to present their practical application.
Before we look into the actual code, we should understand the difference in the general order of solution design.

When we consider a problem and want to obtain the result imperatively, we approach it bottom-up. That means we devise our program so that we calculate basic elements first. Then, we combine those partial results and create more complex representations and calculations. Eventually, we unite every necessary piece and determine the final outcome.
We start from the small bottom elements and build up to the result.

Conversely, when we want to obtain the result in a declarative manner, we approach it top-down. We think about the final step needed to achieve the result and declare what we need to complete that final calculation. Then, whenever our declaration involves some non-trivial items, we explain them next by stating what they in turn require. We follow that path until everything is declared in terms of trivial elements or elements that ultimately resolve into trivial ones.
We start from the top result definition and move down to the basic elements.

Simple arithmetic

The simplest example to properly show the difference between imperative and declarative programming is calculating the arithmetic mean of two values. Let’s say we want to write a program to calculate the average of 3 and 5.

First, we introduce the imperative approach in the C programming language, which is very imperative indeed.

#include <stdio.h>

int main(void) {
    int a = 3;
    int b = 5;
    int sum = a + b;
    int average = sum / 2;
    printf("%d\n", average);
    return 0;
}

You can observe the bottom-up design. We start from the sum of two values, then use the partial outcome and divide it by two to calculate the final result.
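To make the bottom-up order explicit, here is a hand-written trace of the program state after each statement; the trace is only an illustration and not part of the program:

/* State after each statement, from the basic values up:
   a = 3
   b = 5
   sum = 3 + 5 = 8
   average = 8 / 2 = 4   (integer division)
   printf prints: 4
*/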

Now, let’s look at the declarative solution. We use Haskell, which is a purely functional language.

import Prelude hiding (sum)  -- our own sum would otherwise clash with the Prelude's

main = do
    let a = 3
    let b = 5
    print $ average a b
-- declarations
average x y = (sum x y) / 2
sum x y = x + y

You can recognize the top-down design. We define our result and print it in the main program, then declare what average means. The first declaration is not enough: to explain it fully, we also need to declare what sum means.
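If we expand those declarations by hand, the top-down chain becomes visible. The trace below is only an illustration, not part of the program; note that Haskell's / performs fractional division, so the printed result is 4.0 rather than the truncated 4 we got from C:

-- average 3 5
-- = (sum 3 5) / 2     -- the top-level declaration of average
-- = (3 + 5) / 2       -- sum explained in terms of trivial addition
-- = 8 / 2
-- = 4.0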

As you can see, the difference lies in the order of obtaining the final result. Of course, this does not mean that the calculations cannot look similar under the hood once compiled. Still, when comparing programming paradigms, we consider the abstraction of the high-level design.

Iterations

Let’s look into a more complex example. Imagine we have an array of numbers, and our task is to calculate the sum of those numbers. The thing is that we cannot add them all in one go, so we need to design a solution based on iteration.

When we approach this issue from the imperative perspective, it is evident that we need to add the numbers step by step. We start with a provisional sum of 0 and add the first number; then, to that sum, we add the second number, then the third, and so on. Finally, we have the sum of all the numbers. Let’s look at the C implementation.

#include <stdio.h>

int main(void) {
    int a[10] = {4, 7, 0, 5, 2, 9, 8, 3, 1, 6};
    int i, sum = 0;
    for (i = 0; i < 10; ++i) {
        sum += a[i];
    }
    printf("%d\n", sum);
    return 0;
}

You can see that the calculation starts with a single element, and with each step, it obtains a more complex sum until it reaches the final result.
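For comparison with the declarative version that follows, here is a hand-written trace of how the accumulator evolves during the loop; again, it is only an illustration, not part of the program:

/* Value of sum after each iteration:
   i = 0:  sum = 0 + 4   ->  4
   i = 1:  sum = 4 + 7   -> 11
   i = 2:  sum = 11 + 0  -> 11
   i = 3:  sum = 11 + 5  -> 16
   ...
   i = 9:  sum = 39 + 6  -> 45
*/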

The declarative (functional) approach starts from the final result and declares what we need to make the final computation trivial. If what we need is not trivial enough, we declare it further.
We are not allowed to use mutable state or to explicitly define a sequence of instructions. Instead, we need to determine what our abstract result is and declare it properly.

We can say that the sum of an entire collection is the outcome of the addition of two terms: the first number and the sum of the rest of the array. At this stage, we do not care that the second term is not trivial yet; we will consider it later. Still, the second term is at least less complex than the initial one, because it does not include the first value. Let’s look at the Haskell implementation.

import Prelude hiding (sum)  -- again, avoid the clash with the Prelude's sum

main = do
    let a = [4, 7, 0, 5, 2, 9, 8, 3, 1, 6]
    print $ sum a
-- declarations
sum [] = 0
sum (x:xs) = x + sum xs

To define the result, we declared how to address the final calculation using recursion. Another language feature that helps is pattern matching. Using it, we can separate the first value x from the rest of the list, xs. The last thing to do, the base case for our recursive sum, is the definition of the result for the empty list, which is the most trivial case.
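To see the pattern matching and the recursion at work, here is a hand-expanded evaluation of the same declarations on a shortened list (three elements only, to keep the trace readable); it is an illustration, not part of the program:

-- sum [4, 7, 0]
-- = 4 + sum [7, 0]            -- matches (x:xs) with x = 4, xs = [7, 0]
-- = 4 + (7 + sum [0])         -- matches (x:xs) with x = 7, xs = [0]
-- = 4 + (7 + (0 + sum []))    -- matches (x:xs) with x = 0, xs = []
-- = 4 + (7 + (0 + 0))         -- the base case: sum [] = 0
-- = 11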

The contrast between stateful iteration and stateless recursion is evident in the context of the above example. It is worth mentioning that both paradigms allow implementing almost anything we can think of. The differences lie in the design, with all its advantages and disadvantages, which hugely affect complex programs and systems.
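As a small illustration of that point, here is a sketch of the same sum written recursively in C (the sum_rec helper is hypothetical, introduced only for this aside). It computes the same 45 and shows that the boundary between the paradigms is a matter of design rather than capability.

#include <stdio.h>

/* A recursive sum in C, mirroring the shape of the Haskell declarations:
   the empty slice plays the role of the base case sum [] = 0,
   and a[0] + sum_rec(...) plays the role of x + sum xs. */
int sum_rec(const int *a, int n) {
    if (n == 0)
        return 0;
    return a[0] + sum_rec(a + 1, n - 1);
}

int main(void) {
    int a[10] = {4, 7, 0, 5, 2, 9, 8, 3, 1, 6};
    printf("%d\n", sum_rec(a, 10));  /* prints 45 */
    return 0;
}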

Conclusion

Writing programs in the imperative and declarative paradigms differs in crucial elements. One introduces variables and loops, the other pattern matching and recursion. Those variations push us to think about solutions differently.
Imperative programming urges us to calculate small elements and then combine them into bigger ones until we reach the result. On the other hand, declarative programming requires defining elements from the most complex to the trivial ones.

Understanding those differences allows us to make better use of the advantages each programming paradigm offers. That is definitely something worth wishing everyone, cheers!