No it isn't. Scala has three complete type systems: an OO, inheritance based type system, a Haskell-like algebraic type system (sealed traits and case classes), and a duck-typed type system (structural types). Each of these type systems has been enough to write good, complex software on its own (in different languages, of course). Scala's got all three, plus macros. Scala is a superset of Java, Haskell, JavaScript and Lisp; such a language is not simple. A language where it takes an expert to understand how the standard library's linked list is implemented is not simple (even the language creators say that understanding, let alone implementing, Scala collection classes is reserved for experts). You may well understand parts of it, most of it or even all of it given enough time, but it is definitely not simple.
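For concreteness, here is a minimal sketch of the three styles being referred to, side by side in one file (names are illustrative, not from any real library):

```scala
// 1. OO, inheritance-based subtyping
abstract class Animal { def sound: String }
class Dog extends Animal { def sound = "woof" }

// 2. Haskell-like algebraic data type: a sealed trait plus case classes,
//    consumed with (compiler-checked) exhaustive pattern matching
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Square(side: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)    => math.Pi * r * r
  case Square(side) => side * side
}

// 3. Structural ("duck") typing: any value with a matching member conforms,
//    no inheritance relationship required
type Closeable = { def close(): Unit }
class Resource { def close(): Unit = () }
val r: Closeable = new Resource // compiles: Resource conforms structurally
```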
EDIT: Scala does have an incredible compiler, though, and it is interesting from a PL research POV (which is to be expected, given its origins). The JVM community will forever be indebted to Scala. But it is not a language that many will feel comfortable using, and most of those that do, use a subset of it (which is basically Kotlin), and ignore the fact that they can't understand how the linked list is implemented. Kotlin takes a lot of inspiration from Scala, obviously, and learns from its mistakes. It takes that subset of Scala that gives people 99.9% of what they want and need with only 10% of the complexity.
You ran with this "three complete type systems" line in the last thread, and it's still totally wrong.
ADTs/pattern matching are not "a complete type system". Structural types are not "a complete type system". Scala unifies the different concepts by expressing them through Java-style classes and inheritance.
Even if they were incompatible non-orthogonal type systems sitting side by side (they aren't), it is immensely useful to have both dynamic dispatch and ADTs/pattern matching in the toolset; they are often useful for different problems.
> ADTs/pattern matching are not "a complete type system". Structural types are not "a complete type system"
Why not? Can you not write any program using either? Haskell uses just the former, and JS the latter (though without type safety).
> Scala unifies the different concepts by expressing them through Java-style classes and inheritance.
I think it "unifies" them only in the sense that the (single) compiler can compile all three. But case classes cannot be part of an inheritance hierarchy, nor can structural (or inheritance-based) types be used as ADTs. The three certainly intersect, but they're not really unified.
> It is immensely useful to have both dynamic dispatch and ADTs/pattern matching in the toolset; they are often useful for different problems.
Obviously, and that is why some languages employ one or the other. The question is, how useful is it (from a cost/benefit perspective) to have all three (plus macros!) in the same language? After all, while each has its benefits sometimes, they also overlap a lot, as most programs are about as easily implemented in all three. As I've said elsewhere, in language design, as in most things, one must choose. A pizza, a steak and ice cream all have their place, but may lose much of their flavor if mixed into the same dish. In the JVM ecosystem we are lucky enough to have easy interoperability between languages. The JVM is a restaurant with a varied menu, and you can have a meal of several courses. The best way to go, IMO, is to pick one simple language you're comfortable with and use it for most tasks, and for those tasks that require specialized skills outside your chosen language's strengths – use another.
While you can make do with either, they have complementary strengths and weaknesses, which are useful in different situations. See "Expression Problem", and "Visitor Pattern".
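Those complementary strengths are the Expression Problem in a nutshell; a minimal sketch (illustrative names):

```scala
// ADT/pattern-matching style: adding a new *operation* is easy
// (just write another function with a match); adding a new *variant*
// forces every existing match to change.
sealed trait Expr
case class Lit(n: Int) extends Expr
case class Add(l: Expr, r: Expr) extends Expr

def eval(e: Expr): Int = e match {
  case Lit(n)    => n
  case Add(l, r) => eval(l) + eval(r)
}

// OO/dynamic-dispatch style: adding a new *variant* is easy
// (one new class); adding a new *operation* forces every class to change.
trait OoExpr { def value: Int }
class OoLit(n: Int) extends OoExpr { def value = n }
class OoAdd(l: OoExpr, r: OoExpr) extends OoExpr { def value = l.value + r.value }
```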
> Why not?
"Single Dispatch Polymorphism" is a language feature. ADTs/pattern matching are a language feature. Subtyping is a type-system feature. "Structural Typing" is a type-system feature. Guaranteeing exhaustive pattern matching is a type-system feature. They are not "type systems", and different languages pick them and combine them a la carte.
> But case classes cannot be part of an inheritance hierarchy nor structural types (or inheritance types) be used as ADTs
Yes they can, and they do. You just can't extend case classes from other case classes. You can use any class as an ADT-like structure if you want, leaving the hierarchy open or closed, and defining pattern matching extractors as you please. It's all part of the same system; whether you call it an "ADT" or a "class hierarchy" is a design pattern thing more than a rigid systemic property.
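As a sketch of that point, a plain (non-case) class can participate in pattern matching just by defining an extractor (illustrative example):

```scala
// An ordinary class: no `case` keyword, hierarchy left open.
class Email(val user: String, val domain: String)

// A hand-written extractor gives it ADT-style pattern matching anyway.
object Email {
  def unapply(e: Email): Option[(String, String)] = Some((e.user, e.domain))
}

def domainOf(e: Email): String = e match {
  case Email(_, d) => d
}
```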
> Obviously, and that is why some languages employ the one or the other
Er... that doesn't follow. What I said is that it's very useful having both together. If you don't think that's valuable, that's fine; Scala's not for you.
Can you write any program with only case classes and sealed traits? Yes; Haskell does it. Can you write any program with just structural types? Yes, JS does it. Can you write any program with just inheritance types? Yes, Java, C++ and C# do it. Ergo, Scala has three type systems, and it doesn't matter whether you make them appear as one.
> It's all part of the same system; whether you call it an "ADT" or a "class hierarchy" is a design pattern thing more than a rigid systemic property.
Obviously it's part of the same system, which just happens to be a superset of three type systems. A type system is not how it's implemented but what it does.
I like that you can even explain why it is complex. I was only able to give an example, and your argument explains it (a type system for a purely functional language as opposed to a mix of everything).
Scala's standard library linked list is complex, sure. But it's also considerably more powerful than any equally safe implementation in any language, and considerably safer than any equally powerful implementation. If anything, the fact that such complex libraries are written in it is evidence of Scala's simplicity - if the language itself were complex, it would consume too much of one's attention for it to be possible to write such complex libraries.
But this is exactly why Scala's designers have their priorities wrong. This safety comes at a price (complexity), right? So, first, they should have asked themselves whether this safety is worth the price. Second, Scala is inherently an unsafe language (unlike Haskell) because it freely allows casts, and doesn't even make them difficult. So then the question is should you pay the cost of complexity in exchange for safety in a language that doesn't really provide safety? I think that the only case where a language designer would answer this question in the affirmative is if they cared more about PL research and compiler technology than about programmers.
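A minimal sketch of the kind of cast in question: both of these compile without complaint, and the failure is deferred to runtime (or, thanks to erasure, past the cast itself):

```scala
// The compiler accepts this cast; it throws ClassCastException when run
// with a non-String argument.
def forceString(x: Any): String = x.asInstanceOf[String]

// Erasure lets this cast through silently at runtime; it only fails later,
// when an element is actually used as a String.
def disguise(xs: List[Int]): List[String] = xs.asInstanceOf[List[String]]
```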
Casts are fine as long as you are explicit about them - Scala is about giving the programmer control and visibility over what they're doing, not satisfying some academic notion of purity. My experience is that Scala makes exactly the right tradeoff - in practice, Scala code is substantially less buggy than most other languages (equivalent to Haskell, I'd say), while the complexity of map's implementation doesn't get in the way unless you need it (things that are easy with Haskell's map are still easy with Scala's map, things that are hard but possible with Haskell's map are still hard but possible with Scala's map, and some things that are impossible with Haskell's map are hard but possible with Scala's map).
I understand that some would agree with you. After all, there are some people who like Scala and not just for its "Kotlin subset". But can you understand why many organizations would think this is the wrong tradeoff? I, for one, do not like using stuff I don't understand, and I am unwilling to put in the time to understand Scala's types, especially because it's so easy to do without them. If I need Haskell-level type safety for some subset of a project, I would use Frege or another Haskell implementation for the JVM that is certain to come along. For the rest, I'm perfectly happy with Kotlin-level type safety, and for those portions of the project (over 98% for sure) I don't have to put up with a class library I don't understand.
> I am unwilling to put in the time to understand Scala's types, especially because it's so easy to do without them. If I need Haskell-level type safety for some subset of a project, I would use Frege or another Haskell implementation for the JVM that is certain to come along.
Learning Scala's type system takes time, yes; it quite possibly takes longer to learn Scala than most languages. But it's still going to be less effort than learning two different languages.
I think that most organizations are already using a lot of code that they don't understand. The JVM itself is many thousands of lines of not especially clear code; I suspect very few java-based organizations really understand it (and I've hit bugs in the JVM in real life, so it's not that there's no need to understand it). Likewise large, popular Java libraries like Spring and Hibernate (particularly given that they do bytecode manipulation, so you can't always even assume that the normal rules of the language apply).
I think standard library implementations are rarely read in any language, so the complexity of the standard library implementation in scala isn't a concern. I think the interfaces are important, and if users are unable to understand the method signatures then that's a problem. But documenting what a method does is often seen as adequate (e.g. in Python or Ruby the documentation is the only interface signature you get), so I don't see a fundamental problem with the "simplified signature in the documentation" approach (though it would be better to have some way of automatedly ensuring these were correct).
(The only alternative would be to give up the power entirely, or to have two distinct collection libraries; neither of those seems like a good answer. If you don't see having Haskell-level language power as an advantage, then yes the complex library interface is not worth it and you'd be better with a less powerful language. But if you see value in writing parts of your system in something like Haskell, I think it's a big win to be able to use the same language for the haskell-like parts and the kotlin-like parts, and learning a complex library is less overhead than learning a second language).
> I think it's a big win to be able to use the same language for the haskell-like parts and the kotlin-like parts, and learning a complex library is less overhead than learning a second language.
I think not, and here's why: we're talking about large organizations here, right? Because if you're a single developer or a small team, then anything from assembly to VisualBasic goes and it all pretty much boils down to personal preference. But in a large team (say 50 people and up), only one or two would need to master the "Haskell part" (assuming there was a real need for one), and the rest would use Kotlin. So there is no overhead in learning two languages. The developers would learn one simple language that they can understand, and the single star developer (there is usually one in any team) could probably master five different languages if the need arises. Throwing everything into a single powerful-yet-very-complex language does not fit with how complex software is actually developed. This is one of the reasons many huge organizations abandoned C++ and flocked to Java pretty much on the day it was released (I'm exaggerating, but in software-timeframe terms that's pretty much what happened).
So you're unwilling to use a library you don't understand the internals of, but you wouldn't mind if part of your project was written in an entire language you didn't know?
The worst company I worked for was the one where the dev team was split into three - the frontend guys working in PHP, the backend guys doing mapreduce stuff, and the middle team in Java. No-one understood what the others were doing, so there was no sense of the overall product, and bugs or features would get thrown over the wall to another team and then come back weeks later, by which time everyone had forgotten about them.
My last job worked in Scala, and had what was essentially a transactional framework using monads. I reckon at most three of the developers there actually understood the transaction library, and probably only two would actually have added new features to it. But everyone was able to write code that used it, and being able to see the code in the repository and their IDE, at least some of the other developers were able to pick it up gradually, from the outside in, learning more and more, starting to fix bugs that were similar to problems they'd seen elsewhere, and eventually reaching a full understanding. If that code had been written in a different language I think they would never have taken the first step.
> So you're unwilling to use a library you don't understand the internals of, but you wouldn't mind if part of your project was written in an entire language you didn't know?
Of course (as long as those responsible for maintaining that portion understand their libraries)! Since we agree that some parts of the code are specialized and reserved for experts anyway, what difference does it make if it's written in a different language?
This is not like your story at all about PHP, Java and mapreduce. In my scenario, everyone is using the same language except for the "expert portion", which comprises maybe 2% of the code. I think it is far better than having everyone use a complex language which they don't fully understand, just so that the experts working on that 2% will be nominally using the same language (I say nominally, because in the Scala case it would hardly be the same language – only the same compiler). I don't want to pay the complexity price for everyone for the power needed only for the 2%.
This reminds me of one of my favourite Stackoverflow threads, where Odersky himself explains in just a few paragraphs why the signature of the map function in Scala is not overly complicated:
http://stackoverflow.com/questions/1722726/is-the-scala-2-8-...
I am not joking when I say that they have actually introduced "simplified signatures" into the API docs after this :)
So yeah, if it takes several paragraphs to explain why something that can be as straightforward as this (Haskell)
(a -> b) -> [a] -> [b]
is actually not that complicated when it looks like this (the Scala 2.8 signature under discussion)

def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That
it is clearly the stupid infidels that don't get it. This doesn't mean that Scala is a bad language in general, just that it is ridiculous to state that it isn't complex, especially compared to Lisp.
As Martin points out in his response, the Scala version of map isn't restricted to returning a list so (a -> b) -> [a] -> [b] isn't directly comparable. To make a similar version in Haskell you'd have to use a language extension (MultiParamTypeClasses?) and the type signature would be significantly more complex.
It's debatable whether the complexity of the API is worth the additional functionality, but the basic version of map (Data.List.map :: (a -> b) -> [a] -> [b]) is utterly trivial in either language.
> To make a similar version in Haskell you'd have to use a language extension (MultiParamTypeClasses?) and the type signature would be significantly more complex.
That can only be used with a fixed functor f; the scala version can send an (a -> b) to a (f a -> that), for any [a, b, that] combination for which a CanBuildFrom is available (this is basically a more generalized typeclass).
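This result-type selection is observable even in trivial code (behavior sketched here for modern Scala; in 2.13 CanBuildFrom was replaced by BuildFrom, but the surface behavior is the same):

```scala
// Mapping a String with a Char => Char function rebuilds a String...
val upper: String = "abc".map(_.toUpper)

// ...but with a Char => Int function no String can be built, so the
// implicit machinery falls back to the nearest collection that fits.
val codes: IndexedSeq[Int] = "abc".map(_.toInt)
```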
There are a lot of counterexamples to this claim, but to keep this short I'll go along with it.
A downside of this is heavy overloading of keywords/symbols. For example, last time I checked, the underscore (_) has six different meanings depending on where it's used.
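Whatever the exact count, a few of the underscore's distinct roles (illustrative snippet):

```scala
// Anonymous-function parameter placeholder:
val doubled = List(1, 2, 3).map(_ * 2)

// Wildcard in pattern matching:
def describe(x: Any): String = x match {
  case _: String => "a string"   // wildcard in a type pattern
  case _         => "other"      // catch-all case
}

// Ignored binding when destructuring a tuple:
val (_, second) = (1, 2)

// Partially applied function built from a method:
val absolute = math.abs(_: Int)

// Wildcard import (Scala 2 syntax; Scala 3 prefers *):
// import scala.collection.immutable._
```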
The number of syntactic rules is not a good way to judge whether a language is difficult to read, otherwise, Brainfuck would be the most readable language on the planet.
> the underscore (_) has six different meanings depending on where it's used.
No it does not; it always expresses some kind of default behavior, just like ! on a Ruby method means mutation, or ? means it returns a boolean.
> The number of syntactic rules is not a good way to judge whether a language is difficult to read, otherwise, Brainfuck would be the most readable language on the planet.
! in ruby means "dangerous", which is actually poorly defined. In some major libraries it is used to mean exception raising, in some it means self-mutation.
> The bang (!) does not mean "destructive" nor lack of it mean non destructive either. The bang sign means "the bang version is more dangerous than its non bang counterpart; handle with care". Since Ruby has a lot of "destructive" methods, if bang signs follow your opinion, every Ruby program would be full of bangs, thus ugly.
Scala has very few syntactic rules. Of course, if you don't take the time to learn them, you are not going to understand the language.