
Is Functional Programming really slow?

There is a widespread understanding that functional programming produces lower performance. After all, several factors key to the concept have undeniable costs: garbage collection is a must, and the commonly used persistent data structures are slower, to name a few. This probably led Alan Perlis to comment that “LISP programmers know the value of everything, and the cost of nothing.”

It is certainly true that imperative code can be faster. Anything done functionally can be mapped to an equivalent imperative program, while many well-optimized imperative programs probably do not map to anything that could pass as idiomatic functional programming. But the interesting question is: do we always write that well-optimized imperative code?

I have recently been doing a lot of data conversion, from one format to another. Since the logic is relatively complex, the code gets split across many classes, each in charge of some part of the format. Nothing surprising. I have seen several approaches, used by different programmers, and the interesting bit is that the approach which can be characterized as functional has been the most performant in practice.

The standard imperative, object-oriented approach, though probably closer to optimal at the level of a single method, ends up doing a lot of allocation. Many small converter objects get created. (This can be avoided by using static methods, though that leads to an unpleasant explosion of function parameters everywhere.) The results are typically passed around in lists or arrays, which means a lot of collection allocations. These collections do not live long; they are soon merged into a larger collection, which may meet a similar fate further up the call chain. To avoid this obvious cost, a single collection can be allocated once and passed down to be filled along the way, but this tends to break encapsulation, so it is often avoided. All of this can be worked out and an optimal solution reached, but it requires careful thought, and it may well violate some basic principles of object-oriented programming.
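To make the shape concrete, here is a minimal, hypothetical sketch of that imperative style (all names and types are invented for illustration, not taken from the actual converters): each converter allocates its own short-lived list, and the caller merges those into yet another one.

```csharp
using System.Collections.Generic;

// Hypothetical converters in the imperative/OOP style described above.
class HeaderConverter
{
    public List<string> Convert(IEnumerable<string> headerFields)
    {
        var result = new List<string>();          // short-lived temporary collection
        foreach (var field in headerFields)
            result.Add("H:" + field);             // stand-in for the real mapping logic
        return result;
    }
}

class BodyConverter
{
    public List<string> Convert(IEnumerable<string> bodyLines)
    {
        var result = new List<string>();          // another temporary collection
        foreach (var line in bodyLines)
            result.Add("B:" + line);
        return result;
    }
}

class DocumentConverter
{
    public List<string> Convert(IEnumerable<string> header, IEnumerable<string> body)
    {
        var all = new List<string>();                          // merged list, possibly merged again upstream
        all.AddRange(new HeaderConverter().Convert(header));   // small converter object allocated per call
        all.AddRange(new BodyConverter().Convert(body));
        return all;
    }
}
```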

The more functional approach was to rely on laziness. C# has long had generator functions: if the function you are writing returns an IEnumerable<T> (the standard .NET interface for a sequence, implemented by all collection classes), the yield return statement lets you return items one by one, as they are requested, while the resulting enumerable is iterated. Some care is required, since enumerating such a sequence twice will execute the whole logic twice. The result is not stored anywhere; in fact, it can easily be different on the second pass! Used correctly, though, generators let you avoid allocating temporary collections.
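As a rough sketch (reusing the invented names from above, not the author's actual code), the same converters written with generators might look like this: no temporary lists, items are produced one by one as the final consumer iterates.

```csharp
using System.Collections.Generic;
using System.Linq;

// The same hypothetical converters, rewritten lazily with yield return.
static class LazyConverters
{
    public static IEnumerable<string> ConvertHeader(IEnumerable<string> headerFields)
    {
        foreach (var field in headerFields)
            yield return "H:" + field;            // produced on demand, nothing stored
    }

    public static IEnumerable<string> ConvertBody(IEnumerable<string> bodyLines)
    {
        foreach (var line in bodyLines)
            yield return "B:" + line;
    }

    public static IEnumerable<string> ConvertDocument(IEnumerable<string> header, IEnumerable<string> body)
        => ConvertHeader(header).Concat(ConvertBody(body));   // still lazy: no list is allocated here
}
```

Note the caveat from the text: iterating the result of ConvertDocument twice runs the conversion twice, so materialize it once (for example with ToList()) if it needs to be read repeatedly.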

The converters written in the functional style were generally much faster than their more imperative counterparts, and consumed less RAM. All of them mixed the two styles to some degree, so this is a rough comparison, of course. Yet it still suggests that simply following FP principles may sometimes produce faster programs.

The OOP/imperative program could have been much faster, but the best practices of OOP do not lead to such a solution. They hardly lead anywhere in particular; rather, they offer many paths to take. Some will be optimal, some will not. A professional programmer working on a project will probably not spend much time worrying about this, nor is it in the project's interest for them to produce an overly complex solution. Other people need to be able to understand and maintain the code. So the pull towards simple, safe solutions that duly follow the SOLID principles will be strong. The optimal solution is less likely to be reached.

The functional approach, on the other hand, tends to push the programmer towards a certain type of code. Static methods (in OOP terms), returning lazy collections, striving for functional purity – these are much stricter rules, leaving less choice and leading towards a particular kind of solution. With simpler problems like these data conversions, it can feel almost as if the program writes itself. And though many individual methods look slow, the overall program is faster, with hardly any effort put into optimizing it.
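A sketch of where this style tends to lead (the file names and stages are hypothetical, chosen only to show the shape): small, pure static steps chained lazily, with the only materialization happening at the final sink.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

static class Pipeline
{
    // One-to-many conversion step; stands in for the real format logic.
    static IEnumerable<string> ExpandRecord(string line)
    {
        yield return line.ToUpperInvariant();
        yield return line.Length.ToString();
    }

    static void Main()
    {
        var records = File.ReadLines("input.txt")   // lazy, line-by-line read
            .Where(line => line.Length > 0)
            .SelectMany(ExpandRecord);               // still lazy, no intermediate collection

        File.WriteAllLines("output.txt", records);   // items stream straight to the output file
    }
}
```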

It is interesting to note that generator methods in C# have a long-standing reputation for being slow. A state-machine object is allocated every time the enumerable is iterated, which compares badly to, say, a for loop over an array. I have personally felt driven away from using too many enumerables just by this piece of knowledge hanging around in my memory, more a silly prejudice than a well-thought-out opinion. Obviously, in situations where they let you avoid allocating that array in the first place, generators must come out ahead.
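A minimal illustration of the trade-off (a hypothetical example, not a benchmark): the generator pays for one small enumerator object per iteration, while the array pays for the whole buffer up front.

```csharp
using System.Collections.Generic;

static class GeneratorCost
{
    static IEnumerable<int> Numbers(int count)
    {
        for (int i = 0; i < count; i++)
            yield return i;        // the compiler generates a small state-machine class for this method
    }

    static void Main()
    {
        // Each foreach over the generator allocates one enumerator/state-machine object...
        foreach (var n in Numbers(1000)) { /* use n */ }

        // ...while a for loop over a pre-built array allocates nothing per iteration,
        // but only because the array itself was allocated (and filled) up front.
        int[] cached = new int[1000];
        for (int i = 0; i < cached.Length; i++) { /* use cached[i] */ }
    }
}
```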

This is all based on a small set of examples, all of them programs of the same, specific kind. It’s impossible to draw any strong, general conclusions. Yet they raise an interesting question. Yes, imperative programming can be much faster, but, honestly – how often will we actually write that optimal imperative program?

In many areas, where every last bit of performance is needed, the imperative approach will win easily. (Though reading John Carmack on the topic of FP, with his background in game programming, makes the question look much more complicated.) But for the everyday programmer, working on everyday problems in that typical enterprise(y) setting, the functional approach may even lead us to produce more performant code, despite not entirely unreasonable expectations to the contrary. Given all the other benefits functional programming brings, which have been written about extensively (“Why Functional Programming Matters” is one of the more famous texts on the subject, and a very good read), being more functional in our work seems a no-brainer. Not because we couldn't write the faster imperative program, but rather because, let's face it – we probably won't.
