Functional programming has lately generated so much buzz that even the corporate world is turning its heavy head to have a look.
With Scala and Clojure, the Java Virtual Machine (JVM) now boasts two mature functional languages that can be used in conjunction with legacy Java codebases. The other big corporate player, Microsoft, added F# to its toolchain, another language that features a functional approach. Because of all these recent developments one might be led to believe that FP is a brand new cutting-edge technology, as I did when I first heard about it. But in reality FP is one of the oldest programming paradigms, and even the corporate king Microsoft quietly started financing research on the purely functional language Haskell as far back as 1998, an effort still active to this day.
But if functional programming was born in 1959, why is it only now becoming a viable option? In other words, why is the industry only now becoming interested in the functional approach? Is functional a better paradigm than imperative or Object Oriented (OO)? And if so, will FP be able to repeat or even top the success of OO in becoming the most adopted paradigm?
Is FP better?
In the 1930s it was proved that as long as a language is Turing complete it is as expressive as it can be, or in other terms all Turing complete languages have the same expressive power. This allows the user to solve the same problems in any language: if it can be done in C it can be done in Scheme, and vice versa. With this in mind we can easily prove mathematically that FP as a paradigm is not inherently more powerful than OO or imperative. This tells us immediately that higher power cannot be the reason why FP is now being talked about and adopted more widely.
But if all Turing complete languages are mathematically the same, in practice a language's paradigm, features and libraries do make a big difference in how a user goes about solving a specific problem. On top of that, every programmer has his or her own taste: I like language X, while the workmate next to me likes Y. X and Y are supposed to be the same, but we would never exchange them, because neither of us likes using the other language. Using what we like makes us productive and makes us enjoy what we do. So ultimately the choice of language makes a difference in how productive, enjoyable and easy it is to develop software. In this respect functional languages have an edge over other paradigms, as we'll shortly see.
First things first
All computers have been able to do since their inception is take a sequence of actions and execute them. Actions can be "read data from memory", "do some calculation", "write the result on screen", and so on. It is no surprise then that the first paradigm to evolve was the imperative, featured by languages like Fortran, Cobol or C. This is only logical, because all imperative languages do is make the do-one-thing-at-a-time business of computers a bit more human-like. Imperative languages are relatively easy to grok, at least in their basic functionality. Despite being very different from any natural language, they allow us to express simple concepts rather simply. Assigning a value to a variable is a bit like giving names to things or people; a data type is a bit like different varieties of fruit (apples go with apples, oranges with oranges); and a loop is a bit like going through a phonebook to find the phone number we're after. We also intuitively understand actions that can only be made in sequence: you first break eggs, then beat them, then mix them with flour and then bake the mixture. It is easy to see that changing the order of the actions makes a big difference, which is to say that the result of the whole program depends on when each action is executed. By the same token it is just as easy to understand that the outcome of an action can affect the outcome of all the following actions.
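To make the order-dependence concrete, here is a minimal Python sketch (all the names are mine, purely illustrative): the same three actions on shared state produce different results depending on when each one runs.

```python
# Shared mutable state, as in typical imperative code.
batter = []

def break_eggs():
    batter.append("eggs")

def add_flour():
    batter.append("flour")

def mix():
    # The result of mixing depends on what is already in the bowl.
    batter.append("mixed(" + "+".join(batter) + ")")

# Recipe order: eggs, flour, then mix.
break_eggs(); add_flour(); mix()
result_a = list(batter)

# Same actions, different order: mix an empty bowl first.
batter.clear()
mix(); break_eggs(); add_flour()
result_b = list(batter)

print(result_a == result_b)  # False: the order of actions changed the result
```

The two runs execute exactly the same three actions, yet end up with different batters, which is the essence of order-dependent, stateful computation.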
However, as the programs we write grow in size and have to do more complex things, matters start getting complicated. Because imperative languages tend to be so close to machine-speak, it becomes difficult to express abstract concepts. The capacity to abstract gives us the possibility to write more general programs that can be used not just in one specific case, but in many situations. Generality also makes it easier to break the program down into logical parts that can be composed together to make a complex system. As the human brain struggles to keep many concepts in mind at once, generality and modularity become a big bonus, and this is where the imperative paradigm is found wanting.
It was with this divide et impera intent that in the late 60s researchers came up with the idea of providing programmers with a new built-in abstraction: the object. Fast forward a few years and objects were added to C in what became C++. C++ offered imperative programmers a lower entry barrier into the world of objects, allowing them to keep on using the imperative style while getting accustomed to objects as they went along. The model was so successful that objects are now regularly taught in most university courses, and OO is by far the most used paradigm.
Objects were successful because they solved some of the issues of modularity and generality, but they carried with them an increasingly heavy baggage. The OO programmer's must-read is a book written to help tame the complexity created by the very abstraction that was meant to tame complexity. Despite being widely successful, the OO paradigm is not a cure-all; in fact it has not completely replaced C, and it is found wanting in certain applications that are becoming topical. Of all the problems IT is facing today, concurrency is one of the most talked about, and concurrent computations are notoriously hard to tackle with the imperative or OO approach.
To picture concurrency, imagine having 10 people on the line who constantly need to talk to each other, while you control a switchboard that can only deal with one call at a time. All calls must progress in strict sequence, and the two parties must be matched exactly every time. And oh, by the way, if you mess up even one call the whole program crashes, and if you block calls in the wrong way it may hang forever. In imperative or OO languages this is mostly in the programmer's hands, and just to give a taste of how hard it is I'll cite Java's own documentation:
It is our basic belief that extreme caution is warranted when designing and building multi-threaded applications … use of threads can be very deceptive … in almost all cases they make debugging, testing, and maintenance vastly more difficult and sometimes impossible. Neither the training, experience, or actual practices of most programmers, nor the tools we have to help us, are designed to cope with the non-determinism … this is particularly true in Java … we urge you to think twice about using threads in cases where they are not absolutely necessary …
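To get a feel for what that warning means in practice, here is a sketch in Python (the counter and function names are mine): a shared counter incremented from several threads stays correct only because every single update remembers to take the lock; forget it once, in one place, and the program is silently wrong.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Add 1 to the shared counter n times, guarding each update."""
    global counter
    for _ in range(n):
        with lock:        # correctness rests on every caller doing this
            counter += 1  # read-modify-write is not atomic on its own

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000, but only because no update skipped the lock
```

Nothing in the language forces the lock to be taken: the discipline lives entirely in the programmer's head, which is exactly the fragility the quote above is warning about.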
Which finally brings us back to the subject of this post: functional programming.
Why did it take you so long?
Generality, modularity and concurrency are where FP languages really shine. Because of restrictions imposed by the programming paradigm, the programmer is forced to break the program down into small units, known as functions. Functions are also usually more restrictive than objects, trading flexibility for a guarantee of consistent execution, no matter when or where they are called in the program.
This allows for great modularity, pausing and resuming a part of the program at will (concurrency), or even letting it run on a different machine and collecting the result when done (parallelism). It is important to stress that the last two features do not have to be managed directly by the programmer; the compiler can take care of them. Now you can see why it starts to make sense to use functional languages.
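As a rough illustration (in Python rather than a functional language, with names of my own choosing), a pure function can be handed to a pool of workers unchanged, and the result is identical to running it sequentially, precisely because its output depends only on its input:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    """Pure: same input, same output, no matter when or where it runs."""
    return x * x

nums = [1, 2, 3, 4]

# Run in strict sequence...
sequential = [square(n) for n in nums]

# ...or hand the very same function, untouched, to a pool of workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, nums))

print(sequential == parallel)  # True: purity makes the split safe
```

Had `square` depended on shared mutable state, the parallel run could have produced anything; purity is what lets the runtime, rather than the programmer, decide where and when each call executes.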
So why, despite all these features that address pressing matters, did FP adoption come so late, and why is it still so low? There certainly are a few reasons, but to me the most important is how steep the entry barrier to the functional world is. As we have seen, simple imperative programming is relatively easy, and in most cases you do not have to commit to the object in full, or at least not right away. In other words, the entry barrier of most OO languages is low, and perhaps even more importantly the learning curve is gradual.
FP, on the other hand, is a different story. FP is not based on how a machine works, but on mathematical theories, chiefly lambda calculus. That is to say, something intuitive like a recipe for a cake is replaced by mathematics, which together with contemporary art must be one of the most abstract things mankind has come up with. There still are concepts shared between functional and other paradigms, but many intuitive imperative notions are replaced by much more abstract mathematical ones. Assignments become bindings, loops turn into recursion, and all of a sudden people start talking about state and big O notation.
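The shift can be sketched in a few lines (Python here, function names are mine): the imperative version reassigns an accumulator on every pass, while the functional version mutates nothing and simply binds names to values in each recursive call.

```python
# Imperative: a loop mutating an accumulator variable.
def total_loop(xs):
    acc = 0
    for x in xs:
        acc += x              # 'acc' is reassigned on every pass
    return acc

# Functional: the same computation as recursion; nothing is mutated,
# each call just binds 'head' and 'tail' to new values.
def total_rec(xs):
    if not xs:
        return 0
    head, *tail = xs
    return head + total_rec(tail)

print(total_loop([1, 2, 3, 4]), total_rec([1, 2, 3, 4]))  # 10 10
```

Both compute the same sum, but the recursive version has no notion of "before" and "after" an assignment, which is exactly the mental shift that newcomers to FP have to make.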
All this mind shifting imposes a big burden on the programmer: the entry barrier rises and the learning curve becomes steeper, as it takes a lot more time and effort to start writing meaningful programs.
Being a keen functional programmer myself I should be happy that FP is finally being recognised, and in fact I really am. However, there still is something that spoils the party. "FP is good for tackling complexity and concurrency" sounds a lot like "FP is a necessary evil for tackling complexity and concurrency", and from a corporate CTO's point of view it actually is like that. So although functional languages are excellent candidates to address current issues (and many more), I would really be surprised if even one of those languages got an adoption rate above 5%. What I see in the future are tools and systems being programmed in functional languages, but in such a way that hides the functional nature of the tool to make it available to a wider audience. This is a shame indeed, for functional programming is not just for the programmer elite: it makes you productive and it's jolly good fun to do!