I've read some texts about declarative/functional programming languages, tried out Haskell, and even written one myself. From what I've seen, functional programming has several advantages over the classical imperative style:
Stateless programs; No side effects
Concurrency; Plays extremely nice with the rising multi-core technology
Programs are usually shorter and in some cases easier to read
Productivity goes up (example: Erlang)
Imperative programming is a very old paradigm (as far as I know) and possibly not suitable for the 21st century
Why are companies that use functional languages, and programs written in them, still so "rare"?
Why, when looking at the advantages of functional programming, are we still using imperative programming languages?
Maybe it was too early for it in 1990, but today?
Because all those advantages are also disadvantages.
Stateless programs; No side effects
Real-world programs are all about side effects and mutation. When the user presses a button it's because they want something to happen. When they type in something, they want that state to replace whatever state used to be there. When Jane Smith in accounting gets married and changes her name to Jane Jones, the database backing the business process that prints her paycheque had better be all about handling that sort of mutation. When you fire the machine gun at the alien, most people do not mentally model that as the construction of a new alien with fewer hit points; they model that as a mutation of an existing alien's properties.
When the programming language concepts fundamentally work against the domain being modelled, it's hard to justify using that language.
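To make that concrete, here is a minimal Haskell sketch (the Alien type and its fields are invented purely for illustration) of what "constructing a new alien with fewer hit points" looks like in an immutable style:

    -- Hypothetical type, invented for illustration.
    data Alien = Alien { alienName :: String, hitPoints :: Int } deriving Show

    -- "Firing the machine gun" does not mutate anything; it returns a
    -- brand-new Alien value that differs only in its hit points.
    shoot :: Int -> Alien -> Alien
    shoot damage a = a { hitPoints = hitPoints a - damage }

    main :: IO ()
    main = do
      let before = Alien "Zorg" 100
          after  = shoot 30 before
      print before  -- the original value is unchanged
      print after   -- Alien {alienName = "Zorg", hitPoints = 70}

Whether that reads as a natural model of the domain or as working against it is exactly the disagreement being described here.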
Concurrency; Plays extremely nice with the rising multi-core technology
The problem is just pushed around. With immutable data structures you have cheap thread safety at the cost of possibly working with stale data. With mutable data structures you have the benefit of always working on fresh data at the cost of having to write complicated logic to keep the data consistent. It's not like one of those is obviously better than the other.
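A rough sketch of that trade-off in Haskell (a minimal example using only the standard IORef and containers libraries, reusing the Jane Smith example from above; the code itself is invented for illustration): a reader can take a lock-free snapshot of an immutable map, and that snapshot can never be half-updated, but it may already be stale by the time it is used.

    import Control.Concurrent (forkIO, threadDelay)
    import Data.IORef (newIORef, readIORef, atomicModifyIORef')
    import qualified Data.Map.Strict as Map

    main :: IO ()
    main = do
      -- The IORef always points at the current immutable version of the map.
      ref <- newIORef (Map.fromList [("jane", "Smith")])

      -- Reader thread: takes a snapshot with no locking. The snapshot stays
      -- internally consistent, but it may be stale a moment later.
      _ <- forkIO $ do
        snapshot <- readIORef ref
        threadDelay 1000                      -- pretend to do some work
        print (Map.lookup "jane" snapshot)    -- may still say "Smith"

      -- Writer (the main thread): publishes a brand-new map built from the old one.
      atomicModifyIORef' ref (\m -> (Map.insert "jane" "Jones" m, ()))
      threadDelay 10000                       -- crude wait so the reader can print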
Programs are usually shorter and in some cases easier to read
Except in the cases where they are longer and harder to read. Learning how to read programs written in a functional style is a difficult skill; people seem to be much better at conceiving of programs as a series of steps to be followed, like a recipe, rather than as a series of calculations to be carried out.
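To illustrate the difference in reading style (a minimal Haskell sketch; the function names are invented): the same task, summing the squares of the even numbers in a list, written first as a pipeline of calculations and then spelled out as recipe-like steps with an accumulator.

    -- As a series of calculations: keep the evens, square them, sum the results.
    sumEvenSquares :: [Int] -> Int
    sumEvenSquares = sum . map (^ 2) . filter even

    -- The same computation written as explicit, step-by-step recursion.
    sumEvenSquares' :: [Int] -> Int
    sumEvenSquares' = go 0
      where
        go acc []       = acc
        go acc (x : xs)
          | even x      = go (acc + x * x) xs
          | otherwise   = go acc xs

Which of the two is easier to read depends almost entirely on which style the reader has already internalised, which is exactly the point being made here.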
Productivity goes up (example: Erlang)
Productivity has to go up a lot in order to justify the massive expense of hiring programmers who know how to program in a functional style.
And remember, you don't want to throw away a working system; most programmers are not building new systems from scratch, but rather maintaining existing systems, most of which were built in non-functional languages. Imagine trying to justify that to shareholders. Why did you scrap your existing working payroll system to build a new one at the cost of millions of dollars? "Because functional programming is awesome" is unlikely to delight the shareholders.
Imperative programming is a very old paradigm (as far as I know) and possibly not suitable for the 21st century
Functional programming is very old too. I don't see how the age of the concept is relevant.
Don't get me wrong. I love functional programming, I joined this team because I wanted to help bring concepts from functional programming into C#, and I think that programming in an immutable style is the way of the future. But there are enormous costs to programming in a functional style that can't simply be wished away. The shift towards a more functional style is going to happen slowly and gradually over a period of decades. And that's what it will be: a shift towards a more functional style, not a wholesale embracing of the purity and beauty of Haskell and the abandoning of C++.
I build compilers for a living and we are definitely embracing a functional style for the next generation of compiler tools. That's because functional programming is fundamentally a good match for the sorts of problems we face. Our problems are all about taking in raw information -- strings and metadata -- and transforming them into different strings and metadata. In situations where mutations occur, like someone is typing in the IDE, the problem space inherently lends itself to functional techniques such as incrementally rebuilding only the portions of the tree that changed. Many domains do not have these nice properties that make them obviously amenable to a functional style.
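A rough sketch of the "rebuild only the portions that changed" idea (the Tree type below is invented for illustration and is not the actual compiler data structure): replacing one leaf of an immutable tree only copies the nodes on the path to that leaf, and every untouched subtree is shared between the old and new versions.

    -- A toy syntax-tree type, invented for illustration.
    data Tree = Leaf String | Node Tree Tree deriving Show

    -- Replace the leaf reached by a list of directions (False = left, True = right).
    -- Only the nodes along that path are rebuilt; every other subtree of the new
    -- tree is the very same object as in the old tree.
    replaceAt :: [Bool] -> String -> Tree -> Tree
    replaceAt []           s (Leaf _)   = Leaf s
    replaceAt (False : ds) s (Node l r) = Node (replaceAt ds s l) r  -- r is shared
    replaceAt (True  : ds) s (Node l r) = Node l (replaceAt ds s r)  -- l is shared
    replaceAt _            _ t          = t  -- path doesn't match; leave unchanged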
From Masterminds of Programming: Conversations with the Creators of Major Programming Languages:
[Haskell] Why do you think no functional programming language has entered the mainstream?

John Hughes: Poor marketing! I don't mean propaganda; we've had plenty of that. I mean a careful choice of a target market niche to dominate, followed by a determined effort to make functional programming by far the most effective way to address that niche. In the happy days of the 80s, we thought functional programming was good for everything - but calling new technology "good for everything" is the same as calling it "particularly good at nothing". What's the brand supposed to be? This is a problem that John Launchbury described very clearly in his invited talk at ICFP. Galois Connections nearly went under when their brand was "software in functional languages," but they've gone from strength to strength since focusing on "high-assurance software." Many people have no idea how technological innovation happens, and expect that better technology will simply become dominant all by itself (the "better mousetrap" effect), but the world's just not like that.
The stock answer is that neither will or should replace the other - they are different tools that have different sets of pros and cons, and which approach has the edge will differ depending on the project and other "soft" issues like the available talent pool.
I think you're right that the growth of concurrency due to multi-core will increase the percentage (of the global set of development projects) for which functional programming is chosen over other styles.
I think it's rare today because the majority of today's professional talent pool is most comfortable with imperative and object-oriented technologies. For example, I have more than once chosen Java as the language for a commercial project because it was good enough, non-controversial, and I know that I will never run out of people that can program (well enough) in it.
Despite the advantages of functional programming, imperative and object oriented programming will never go away completely.
Imperative and object-oriented programming is a step-by-step description of the problem and its solution. As such, it can be easier to comprehend. Functional programming can be a bit obscure.
Ultimately, a useful program will always have side effects (like providing actual output to the user for consumption), so the purest of functional languages will still need a way to step into the imperative world from time to time.
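In Haskell, for example, that step into the imperative world is fenced off in IO; a minimal sketch of the usual shape, with the side effects at the edge of the program and a pure function in the middle:

    -- Pure core: no side effects; easy to reason about and to test.
    greeting :: String -> String
    greeting name = "Hello, " ++ name ++ "!"

    -- Imperative shell: reading input and printing output live only here,
    -- at the boundary of the program.
    main :: IO ()
    main = do
      name <- getLine
      putStrLn (greeting name)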
The current state of the art is that imperative languages (such as C#) borrow features from the functional world (such as lambda expressions) and vice versa.
Hasn't it?
Smalltalk was a great object-oriented system back in the day. Why hasn't object-oriented programming taken over? Well, it has. It just doesn't look like Smalltalk. Mainstream languages keep getting more Smalltalk-like with C++, Java, C#, etc. Fashion and style change slower than anything, so when OO went mainstream, we got it by gluing parts of OO to old languages so it looked enough like C to swallow.
Functional is the same way. Haskell was a great functional language. But there's an even greater mass of mainstream programmers using C-like syntax today than there was 20 years ago, so it has to look like C. Done: look at any LINQ expression and tell me it isn't functional.
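For instance, a typical LINQ pipeline that filters with Where and projects with Select is, modulo syntax, the same map-and-filter program a Haskell programmer would write (the Person type below is invented for illustration):

    -- A record type invented for illustration.
    data Person = Person { personName :: String, age :: Int }

    -- The shape of a typical LINQ query: filter ("where"), then project ("select").
    adultNames :: [Person] -> [String]
    adultNames people = map personName (filter (\p -> age p >= 18) people)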
As for dynamic: that's the most Smalltalk-y of these features, so it's no surprise to me that it's only present in the latest version of the most modern of these three languages. :-)
I believe that imperative languages are more prevalent simply because that's what more people are used to. Neither functional programming nor the imperative programming model is more obscure or academic than the other. In fact, they are complements.
One poster said that imperative code is easier to understand than functional programming code. This is only true if the reader has already seen imperative code, especially if the prior examples are part of the same "family" (for example, C/C++, Perl, PHP and Java). I would not claim that it holds across arbitrary imperative languages; compare Java and Forth, to take an extreme example.
To a layperson, all programming languages are indecipherable gibberish, except maybe the verbose languages such as HyperTalk and SQL. (Of note, SQL is a declarative and/or functional language and enjoys enormous popularity.)
If we had been initially trained on a Lisp-y or Haskell-y language from the start, we'd all think functional programming languages are perfectly normal.
You've gotten enough answers already that I'll mention only a couple of things I don't see mentioned yet.
First and (in my mind) foremost, procedural languages have benefited greatly from their degree of commonality. For one example, almost anybody who knows almost any of the mainstream procedural (or OO) languages to almost any degree can read most of the others reasonably well. I actively avoid working in Java, C#, Cobol, Fortran, or Basic (for just a few examples) but can read any of them reasonably well -- almost as well, in fact, as people who use them every day.
On the functional side, that's much less true. Just for example, I can write Scheme reasonably well, but that's of little use in reading OCaml or Haskell (for only a couple of examples). Even within a single family (e.g., Scheme vs. Common Lisp), familiarity with one doesn't seem to translate nearly as well to the other.
The claim that functional code is more readable tends to be true only under a narrow range of conditions. For people who are extremely familiar with the language, readability is indeed excellent -- but for everybody else, it's often next to nonexistent. Worse, while differences in procedural languages are largely of syntax and therefore relatively easily learned, differences in functional languages are often much more fundamental, so they require considerable study to really understand (e.g., knowing Lisp is of little help in understanding Monads).
The other major point is that the idea that functional programs are shorter than procedural ones is often based more on syntax than semantics. Programs written in Haskell (for one example) are often quite short, but their being functional is a rather small part of that. A great deal of it is simply that Haskell has a relatively terse syntax.
Few purely functional languages can compete well with APL for terse source code (though, in fairness, APL supports creating higher-level functions as well, so that's not quite as large a difference as in some other cases). Contrariwise, Ada and C++ (for only a couple of examples) can be quite competitive in terms of the number of operations necessary to accomplish a given task, but their syntax is (at least usually) substantially more verbose.
No Perceived Need
I recall my old boss Rick Cline's response when I showed him a copy of John Backus' Turing Award lecture, entitled Can Programming Be Liberated from the von Neumann Style?
His response: "Maybe some of us don't want to be liberated from the von Neumann style!"
Why hasn't functional programming taken over yet?
Functional is better for some things and worse for others so it will never "take over". It is already ubiquitous in the real world though.
Stateless programs; No side effects
Stateless programs are easier to test. This is now widely appreciated and often exploited in industry.
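A quick sketch of why (the function is invented for illustration): testing a pure function needs no fixtures, mocks, or teardown, because the result depends only on the arguments.

    -- A pure function: the result depends only on the inputs.
    netPrice :: Int -> Int -> Int
    netPrice discountPercent price = price - (price * discountPercent) `div` 100

    -- A "test" is just a comparison of values; there is no state to set up.
    main :: IO ()
    main = do
      print (netPrice 10 200 == 180)  -- True
      print (netPrice 0  200 == 200)  -- True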
Concurrency; Plays extremely nice with the rising multi-core technology
Programs are usually shorter and in some cases easier to read
Productivity goes up (example: Erlang)
You're conflating concurrency and parallelism.
Concurrency can be done effectively using communicating sequential processes (CSP). The code inside a CSP can mutate its local state but the messages sent between them should always be immutable.
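A minimal sketch of that style in Haskell, using the Chan type from the standard concurrency libraries: each worker keeps its own local state (the running total threaded through its loop), and the only things that cross thread boundaries are immutable messages.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

    -- Messages are plain immutable values.
    data Msg = Work Int | Done

    -- Worker process: its running total is an argument threaded through the loop,
    -- local to this thread; nothing mutable is shared with other threads.
    worker :: Chan Msg -> Chan Int -> Int -> IO ()
    worker requests replies total = do
      msg <- readChan requests
      case msg of
        Work n -> worker requests replies (total + n)
        Done   -> writeChan replies total

    main :: IO ()
    main = do
      requests <- newChan
      replies  <- newChan
      _ <- forkIO (worker requests replies 0)
      mapM_ (writeChan requests . Work) [1 .. 10]
      writeChan requests Done
      result <- readChan replies
      print result  -- 55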
Purely functional programming plays extremely badly with multicore because it is so cache unfriendly. Cores end up contending for shared memory and parallel programs don't scale.
Why are companies using or programs written in functional languages still so "rare"?
Scala is often regarded as a functional language, but it is no more functional than C#, which is one of the most popular languages in the world today.
Why, when looking at the advantages of functional programming, are we still using imperative programming languages?
Purely functional programming has lots of serious disadvantages so we use impure functional languages like Lisp, Scheme, SML, OCaml, Scala and C#.
When I think about what functional programming might bring to my projects at work I'm always led down the same path of thought:
To get the full advantages of functional programming you need laziness. Yes, there are strict functional languages, but the real benefits of functional programming don't shine as well in strict code. For example, in Haskell it's easy to create a sequence of lazy operations on a list, compose them, and apply them to a list, e.g. op1 $ op2 $ op3 $ op4 $ someList. I know that it's not going to build the entire list; internally I'm just going to get a nice loop that walks through the elements one at a time. This allows you to write really modular code: the interface between two modules can involve handing over potentially vast data structures, and yet you don't have to have the whole structure resident in memory.

But when you have laziness, it becomes hard to reason about memory usage. Changing the Haskell compiler flags frequently changes the amount of memory used by an algorithm from O(N) to O(1), except sometimes it doesn't. This isn't really acceptable when you have applications that need to make maximum use of all available memory, and it's not great even for applications that don't need all the memory.
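As a concrete illustration of that kind of pipeline (a minimal sketch; op1 to op4 here are stand-ins made of ordinary list functions): because the operations are lazy, the composed pipeline below works even on an infinite list, only ever materialising the handful of elements actually demanded.

    -- Four stand-in operations, in the spirit of op1 $ op2 $ op3 $ op4 $ someList.
    pipeline :: [Int] -> [Int]
    pipeline = take 5 . map (* 2) . filter even . map (+ 1)

    main :: IO ()
    main = print (pipeline [1 ..])  -- [4,8,12,16,20]; the full list is never built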
Two things:
1. It just takes time, no matter how good a technology is. The ideas behind FP are about 70 years old, but its mainstream use in software engineering (in the trenches, in industry) is probably less than 10 years old. Asking developers to adopt radically new mindsets is possible, but it just takes time (many, many years). For example, OOP got mainstream usage in the early 1980s, yet it did not gain dominance until the late 1990s.

2. You need people to be forced to face a technology's strengths before it hits it big. Currently, people are using tools that do not make use of parallelism and things work okay. When apps that do not use parallelism become unbearably slow, many people will be forced to use parallelism-friendly tools, and FP may shoot up in popularity. This may also apply to FP's other strengths.