> I can’t help but think that complicated programming paradigms would seem more intuitive to beginners if taught through Ada instead of C and its derivative languages, as is common in computer science and engineering faculties worldwide.
At my university, the first courses you took in CS used Ada. I think it was a really good choice, but I guess I was in the minority, because after my year they switched to either Java or Python, depending on who taught which of the courses in that first series.
People found it frustrating how much work it'd take to get their programs to even compile but that's a good thing in my view. If it wasn't compiling, that was normally because the compiler found an error that would still be there at runtime in another language.
> People found it frustrating how much work it'd take to get their programs to even compile
Makes sense if they're students working on small projects. Ada is explicitly designed to make large programs readable, and willingly trades off on writeability when the two come into conflict. It isn't going to shine if you're writing small 'single shot' applications, that isn't what Ada is for.
(Ada also commits to using many English language words where languages like C use symbols. SQL does this too. I'm not sold on the idea that this improves readability. Of course, there's far more to Ada than the skin-deep matter of its wordy syntax.)
This is similar to the readability/writeability tradeoff of moving from JavaScript (more writeable) to TypeScript (more readable and more refactorable). See the current discussion thread at [0].
Interesting reading (for some of us at least): the original Ada design rationale document, from 1986 [1].
I really think a syntactic facelift would do wonders for Ada, and shouldn't make much of a difference for readability. I know a lot of people are turned off by this.
I suspect overhauling the syntax is the kind of change the committee would see as rather dramatic and with limited payoff. Introducing an optional alternative syntax alongside the existing one would likely be seen as opening the door to syntactic inconsistencies, more likely to harm readability than to enhance it. The Python community has a relevant saying: "There should be one -- and preferably only one -- obvious way to do it." [0]
I don't think it's a serious problem that Ada uses `and then` rather than `&&`, and `or else` rather than `||`. It's the kind of thing you can get used to with time.
In C and C++, the `&&` and `||` operators both short-circuit. For eager semantics in C/C++ you could make do with the bitwise operators `&` and `|`, keeping in mind that they're bitwise and not logical. You could do something like `(!!first_int) | (!!second_int)`, but you'd be better off forcing eager evaluation by saving the results to local variables and then using the short-circuit logical operators.
Ada's plain `and` and `or` operators have no direct equivalents in C or C++.
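The short-circuit vs. eager distinction is easy to demonstrate in Python, whose `and` short-circuits like Ada's `and then`, while `&` on booleans evaluates both operands, roughly like Ada's plain `and` (a loose analogy for illustration, not Ada semantics):

```python
calls = []

def noisy(label, value):
    """Record that this operand was evaluated, then return it."""
    calls.append(label)
    return value

# Short-circuiting, like `and then` / `&&`: the right operand is
# skipped once the result is decided.
result = noisy("a", False) and noisy("b", True)
assert calls == ["a"]       # "b" was never evaluated
assert result is False

# Eager, like Ada's plain `and`: `&` on bools evaluates both sides.
calls.clear()
result = noisy("a", False) & noisy("b", True)
assert calls == ["a", "b"]  # both operands evaluated
assert result == False
```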
> (Ada also commits to using many English language words where languages like C use symbols. SQL does this too. I'm not sold on the idea that this improves readability. Of course, there's far more to Ada than the skin-deep matter of its wordy syntax.)
Wasn't the hypothesis behind COBOL that this would allow managers to gain a certain level of understanding of the code?
I find that it made COBOL code notoriously unreadable.
I help beginner Python students. Python might be a nice easy scripting language for bashing out NumPy scripts, but I'm beginning to suspect it is terrible for teaching. The "what type is this variable, and will this function automatically convert it for me" game is not very fun at all for beginners.
I feel that Python will be known in the future as the language that caused a generation to have a crippled sense of reasoning about how to design computer programs.
Its lack of a normal, standard scoping model alone teaches very flawed reasoning about computer programs.
That for-loops work by assignment rather than creating a new scope, or that if-conditions do not create a new scope for their arms is most unusual.
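Both points are easy to show: the loop variable and a name bound inside an `if` arm both persist in the surrounding scope.

```python
# The for-loop target is an ordinary assignment in the enclosing
# scope, so it survives the loop:
for i in range(3):
    pass
assert i == 2  # the last value assigned by the loop

# An if-arm introduces no scope either:
if True:
    y = "bound inside the if"
assert y == "bound inside the if"  # still visible afterwards
```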
Not only does it fail to teach programmers to properly reason about scope, but it results in subtle bugs that are easy to miss. Consider the following:
list = []
for x in iterator:
    list.append(lambda y: some_code_that_closes_over(x))
This almost certainly does not behave as the programmer intended. The for-loop does not create a new scope, so all iterations of the loop share the same scope, and thus every closure that closes over `x` handles the same `x`, which will hold the value `x` had on the last iteration. The loop effectively mutates each closure after appending it to the list, so the list ends up containing effectively identical closures.
The proper way to do it is by using the fact that functions do create a scope:

list = []
def loop_function(x):
    list.append(lambda y: some_code_that_closes_over(x))

for x in iterator:
    loop_function(x)
Certainly code that looks quite hackey to work around the lack of normal, expected block scoping.
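A runnable sketch of both the bug and the workaround described above (the function names are mine, for illustration):

```python
def make_closures_buggy():
    closures = []
    for x in range(3):
        closures.append(lambda: x)  # every lambda closes over the same x
    return [f() for f in closures]

def make_closures_fixed():
    closures = []
    def add_closure(x):  # each call creates a fresh scope per iteration
        closures.append(lambda: x)
    for x in range(3):
        add_closure(x)
    return [f() for f in closures]

assert make_closures_buggy() == [2, 2, 2]  # all closures share the final x
assert make_closures_fixed() == [0, 1, 2]  # each closure captured its own x
```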
If you replace a lambda with a function definition in the for loop, it's easier to see the scoping issue is actually a closure. And that makes sense because lambda is shorthand for a function.
Because the code that closes over can be arbitrarily complicated. I simply used code that only called a function with a single argument as a minimal example.
Apart from that, your code technically does not do the same thing as it calls the inner function with two arguments, whereas mine simply discards the second argument.
But your argument is about scoping. Replacing the lambda with a function inside the for loop shows the closure plainly. So your complaint can't be about scoping, or maybe this is a bad example of your argument.
Maybe your complaint is about the fact that statements inside a lambda isn't called until the lambda is called. But again, lambda's are just functions, and closures are closures.
Or maybe your argument is that closures are easy to make in python, which is true. But that doesn't mean the language is bad, since lots of languages use closures.
No, my complaint is, as said, that for-loops in Python do not create a block scope as they do in about any other language. The closure is simply a way to illustrate that, but there are many other problems, such as this one:
x = something
# some lines of code
for x in iterator:
    # something
# try to use x again here:
# x has been re-assigned by the loop
In about any other language, the loop would create its own scope, and thus shadow the outer `x` rather than re-assigning it.
So your first example was clearly a bad example. This was a better example.
But even so, this is a lousy example since x is already defined at the top. So the for loop could theoretically use the defined x already.
I think what you're looking for is something akin to the following in C:
int x = 5;
for (int x = 0; x < 10; x++) {
    int y = x;
}
But I would argue that this is terrible C code since the inner x shadows the outer scope leading to confusion as to which x the loop is iterating on. But you could also write the for loop like this:
for (x = 0; x < 10; x++) { ... }
So C doesn't dictate the scope of the loop variable, which could be argued, by your standards, to be even more confusing than Python. Python just says your scope is function level. Subtle C code changes can create scoping bugs.
So even the scoping example you're trying to use is fraught with issues for new developers. Is function-level scoping better than C-style scoping, say? I tend to think that function-level scoping forces developers to simplify their code, as it discourages symbol shadowing.
No, the first example illustrates the fundamental problem that all iterations of the loop share the same scope rather than a new one each.
In your example, the variable `x` is also shared by all iterations of the loop. Rather, in C-like syntax, the common approach is:
while (1) {
    int x = next(iterator);
    if (STOPITER) { break; }
    /* code that uses x */
}
Every iteration of the loop receives a brand new `x` rather than re-assigning the old `x`. This problem is illustrated by creating a closure that closes over the `x`: the expected behavior is not that the `x` is then assigned to another value on the next iteration of the loop.
The only way to achieve this in Python is to create an ad-hoc function which is passed `x` as a formal parameter, for every new function call in Python does create a new scope rather than simply re-assigning to the last one.
I would argue that code that is sensitive to scope is bad code no matter which language it's written in, since now you're constantly asking "When does a variable leave scope." If I change the scope of a variable, it causes a subtle change that breaks the closure.
In that case I would prefer to be explicit that the closure "wraps" a new instance of the variable rather than relying upon implicit language behavior to guarantee the scope is correct.
As python says in "import this": "Explicit is better than implicit."
> I would argue that code that is sensitive to scope is bad code no matter which language it's written in
Then you have argued that using any form of functions or subroutines in any language is bad design.
> since now you're constantly asking "When does a variable leave scope." If I change the scope of a variable, it causes a subtle change that breaks the closure.
Yes, that is what one must ask oneself as a programmer and that is what programmers who have not been taught wrong practices by having used Python as their introduction are instinctively constantly asking themselves.
Python programmers must also ask themselves this whenever they use functions, and Python comes with a variety of ugly hacks, such as `nonlocal`, around its initial wanton design by a programmer who clearly did not understand scope when he originally designed it.
Inside any function in Python there are four kinds of variables in terms of their scope: normal, formal, global, and nonlocal. Each of them has a different scope, and the programmer had best be mindful of which is which as he codes, because they have widely different semantics, and because Python has the same syntax for assignment and initialization, this difference creates very different interpretations for those four types.
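A minimal sketch of the four kinds; without the `global` and `nonlocal` declarations, the assignments below would silently create fresh locals instead (all names here are illustrative):

```python
g = "global"

def outer():
    n = "enclosing"
    def inner(formal):   # `formal` is a parameter, local to inner
        global g         # assignments to g now target module scope
        nonlocal n       # assignments to n now target outer's scope
        local = "local"  # a plain assignment creates a local
        g = "rebound global"
        n = "rebound nonlocal"
        return (local, formal)
    return inner

inner = outer()
assert inner("argument") == ("local", "argument")
assert g == "rebound global"  # the module-level g really was rebound
```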
> In that case I would prefer to be explicit that the closure "wraps" a new instance of the variable rather than relying upon implicit language behavior to guarantee the scope is correct.
I feel that you have a fundamental misunderstanding of what a closure or function is if you think it even possible for it to wrap a new instance of a variable.
You seem to be of the same misunderstanding as another user above that `let x = x;`-esque behavior would affect this issue in any way rather than be an irrelevant line.
> As python says in "import this": "Explicit is better than implicit."
Not only is this very rich coming from a language that uses exceptions for flow control, and the same syntax for initialization and assignment, it's also irrelevant and not a matter of explicitness versus implicitness.
Whether wrapping variables would be implicit or explicit in this context would not have solved the issue at all. Even if Python mandated explicit wrapping or made shadowing illegal, which some languages do, for which there is argument to be had, it would not change this subtle bug at all.
Not a technical counter argument, but surely this is a trifle in the grander scheme of things? The hard things about programming, such as modularization at a high level and verification of user input, are orders of magnitude more important than quirks like these?
(And if we broaden our view to include more of software engineering in general, like solving the right problems and validating hypotheses with fast feedback, it seems even more insignificant.)
Poor syntax or surface-level semantics make a language exponentially harder to use, because every layer of your program becomes more difficult. You have to make your modules smaller which means you have to split them up more and have more interfaces between them. Etc.
But block scope is all about modularization and ensuring that parts of the program that need not interact do not interact.
Python code seems to be written either from a mentality of not caring at all about variable lifetime, making it far longer than it should be, or by programmers who do care and use classes and functions as makeshift tools to limit the lifetime of variables, making up for the language's lack of block scope.
- Lambdas in Python do not allow assignment at all.
- Variables in Python cannot be assigned using themselves in their r.h.s., without first being assigned something else, because Python's scoping is strange.
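The second bullet is Python's "local if assigned anywhere in the function" rule: the assignment makes the name local for the whole body, so reading it first raises `UnboundLocalError`. A sketch:

```python
counter = 0

def bump():
    # Because `counter` is assigned somewhere in this function, it is
    # local for the WHOLE body; reading it before assignment fails.
    try:
        counter = counter + 1
    except UnboundLocalError:
        return "UnboundLocalError"
    return counter

assert bump() == "UnboundLocalError"
```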
You serve well as an example of a programmer that does not understand Python's semantics here, because they are very counter-intuitive, and you also serve as an example of a programmer that does not understand how scope works at all, even if your example be worked into something of valid Python syntax:
thunks = []
for x in "abcd":
    def thunk():
        y = x
        print(y, end='')
    thunks.append(thunk)

for thunk in thunks:
    thunk()
The output is `dddd`, not `abcd`; the `y = x` part is completely irrelevant in this case, for when the thunk is called, `x` has already been re-assigned, and so `y` is assigned the new value.
Again, `x` is re-assigned on every new iteration of the loop, as such the `x` in every single one of those thunks contains the value `x` had at the last iteration of the loop when the loop completes.
There are many ways to solve this issue, such as the one I initially gave, but yours isn't one of them and that you thought it was shows the counter-intuitive nature of Python's behavior here.
> ("list" is a builtin, so not a good variable name.)
Which would be another problem with Python's lack of scope. Shadowing the names of library functions and constants is not problematic in languages with proper block scope.
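For instance, a single module-level assignment (as in the grandparent's `list = []`) disables the builtin for the rest of the module:

```python
first = list("ab")  # the builtin still works here
list = [1, 2, 3]    # a module-level assignment shadows the builtin
try:
    list("cd")      # the name now refers to our list object...
except TypeError:   # ...which is not callable
    shadowed = True
assert first == ["a", "b"]
assert shadowed
```

(`del list` would restore access to the builtin name.)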
Maybe you should try running it before jumping straight to insults? My code is perfectly legal in Python and runs exactly as I said.
Yours:
In [1]: list = []
   ...: for x in range(5):
   ...:     list.append(lambda y: y+x)
   ...: [f(100) for f in list]
Out[1]: [104, 104, 104, 104, 104]
Mine:
In [2]: list = []
   ...: for x in range(5):
   ...:     list.append(lambda y, x=x: y+x)
   ...: [f(100) for f in list]
Out[2]: [100, 101, 102, 103, 104]
I understand the semantics just fine, and it's very clear what's going on. In my version, each lambda has two parameters, one called y, and one called x. The one called x has a default value, and the default value is whatever the loop variable x's current value is: a copy is made of the loop variable's value, and stored as a default parameter value. There is no assignment, and a variable is not assigned to itself. The parameter has the same name, but the scope is different; within the lambda, the parameter shadows external variables, just as in any function.
Shadowing list in a function scope is fine (just confusing), but your code does it in the global scope.
You seem to have a bone to pick with Python for some reason, but to me your attempts at criticism fall completely flat, as they are either factually incorrect or amount to "Python is different from [language x]".
And yes, your example works, and is functionally similar to my initial solution of exploiting the fact that function scope does exist, by creating a function for no other reason than to call it to create a new scope.
That doesn't change the fact that both examples are ugly hacks needed to solve a problem with the language that most languages do not have. And despite your calling it idiomatic, I have never seen your example in the wild, and it is, frankly, simply a hack whose intent few programmers who haven't been explicitly taught this trick would quickly see.
Javascript is too weird to be a teaching language. Lexical scoping, invisible and sometimes unintuitive conversion rules, challenging runtime environments, and asynchronous code are important concepts but I wouldn't put them into an intro to programming class.
IMO the ideal programming track would be as follows:
Intro to programming track:
* Assembly --> to understand the basics of how computers work and learn simple procedural and abstraction rules.
* C --> more procedural code, more abstractions, and higher order memory management
* Java --> to learn interfaces, object oriented code, and an introduction to functional programming concepts like lambdas, pure functions, the value of immutable objects
* {Racket/LISP or Favorite functional language} --> to gain intuition with more pure functional programming
* tools: learn source control systems, IDEs, debuggers, profilers, bug management systems, different dev systems, using Jenkins or other CIT tools.
* skills: learn about secure coding (best practices for writing auditable code with known failure modes), efficient coding (calculating big O for a given code, fixing performance bottlenecks), maintainable coding (best practices for decoupling code, writing testable code, writing unit tests)
* theory: track where you learn advanced algorithm theory, design patterns, hopefully centered around case studies.
With elective tracks for things like OS theory, networking, etc.
Too many people leave college with lots of information about data structures and algorithms but don't know how to write a unit test, how to use an IDE, or how to write secure, maintainable code.
All of my academic computer science instruction was done in C. I was going to mention this in a response to one of the above comments regarding the pedagogical value of Python: I don't feel that taking onboard the concerns that come with programming in C (such as memory management) really made learning about computer science's fundamental algorithms and abstract data structures any easier. Many of the abstract data structures in question, such as trees and hash-maps, when practically implemented are dependent on an understanding of pointers that makes C an ideal pedagogical tool in at least one aspect.
I don't think it's really practical for every scenario. I still think it's a great tool for teaching students about many aspects of how a modern computer, or operating-system functions. You definitely won't get far in gaining a holistic understanding of any modern operating-system without understanding C.
I wouldn't program production code in C if I could avoid it. But sometimes you can't avoid it, and I wouldn't program production code in Pascal, either.
> Java --> to learn interfaces, object oriented code, and an introduction to functional programming concepts like lambdas, pure functions, the value of immutable objects
Jesus Christ, Java as an introduction to functional programming? I don't even know what to say.
Javascript is not so often used as a teaching language.
There are many languages which hammer worse practices than either such as POSIX shell scripts, but they are seldom used as a teaching language.
Python is in the unique space of having made horrible design decisions but somehow often used as a teaching language.
One would assume that programmers are to understand such concepts as block scoping rules and the difference between lexical and dynamic variables; how is Python to teach them that?
I agree that Python is a bad language for beginners. It is conceptually heavyweight and full of opaque behavior. We lost a lot when BASIC stopped being the "standard" introduction to programming.
They may not have "a mental model of statically typed languages," but they sure "agonize" when there are eight different incompatible datatypes that represent a datetime and the library functions they want to use don't even specify which of these they take or return.
Yes. But statically associating types with binding names does not relieve them of making and fixing these mistakes; it just surfaces them at compile time instead of runtime.
I am not sure if this is still the case today, but Computer Engineering in the EE department used to teach Pascal / Ada / C as the first programming language, with the expectation that getting a program to compile correctly is hard, before you move on to something like Perl / Java / Python.
Meanwhile CS tends to start with Python or Java, and you learn all the OO, procedural, or whatever paradigms before going into Web or other areas of development with PHP, where you get some results earlier.
I used to think the EE way of teaching sucked: old fashioned, not following the industry trend, boring. Teaching you about OSI models while CS students were already having some fun with higher-level Web development.
After a few years in industry, I believe that CS should start with either C or Scheme. C to teach you about real machines, Scheme to ignore the machine and do math (algorithms).
I get what you're saying, but learning C teaches you about the C memory model instead of "real machines". C was designed for portability across different architectures and, believe it or not, was considered high-level and abstract at one time.
But I agree that learning C is valuable because I believe that learning about manual memory management is valuable.
And that's fair. I tend to view C as high-level assembler which does mean some loss of fidelity from pure assembly. It seems like a case of being "close enough", I suppose, because switching to pure assembly would make it a lot harder to cover the subject matter in a standard semester.
>I get what you're saying, but learning C teaches you about the C memory model instead of "real machines".
No.
C doesn't do that, not at all; see: https://queue.acm.org/detail.cfm?id=3212479
If you want to get down and dirty, and quickly, without nearly all the "Gotcha!"s inherent in C, FORTH is the way to go.
>C was designed for portability across different architectures
Absolute bullshit.
This is a claim that has been going on for far too long, the whole "portable assembler" myth.
IEEE 694-1985 is a portable assembler; hell, read "Using a high level language as a cross assembler" (DOI: 10.1145/954269.954277), for that matter.
>and, believe it or not, was considered high-level and abstract at one time.
The "gotcha!" factor was increasingly ignored when I was learning about computer languages, but I had an advantage in that my first language was Turbo Pascal, self-taught using the manual and compiler.
> But I agree that learning C is valuable because I believe that learning about manual memory management is valuable.
That's doable much easier in Ada, Forth, or BLISS.
C is becoming more and more of a liability, to the point I would say that it has no real value in any professional setting, and an incredibly limited one in the academic.
How can you say C has no real value in a professional setting? Look at the amount of C code in any Linux distribution, even ignoring the kernel itself.
> How can you say C has no real value in a professional setting?
Fairly easily.
My professional career has been mainly in maintenance and, as such, I get to see the gritty back-end of things, the end-result of all the technical-debt... and being more correctness and security-minded than most, I often note how a good design could help prevent problems, both on the language being used to implement and on the project itself.
From that perspective, most defenses of C as productive or useful fall flat on their faces, especially in recent years as security becomes more and more important a concern — about the only place that C makes any sense anymore is micro-controllers because "all the micro-controllers have a C compiler." — But let's not make the mistake thinking that "having a C compiler" means that C is a good (or even appropriate) language for the task.
Forth, Assembly, and Ada all exceed C's capabilities in many of what have been traditionally claimed as C's strong suits:
* Ada: much, MUCH, better as a systems-language. Any project of medium or large size should seriously consider Ada instead of C.
* Assembly: very fine control, especially important for the severely-constrained controllers.
* Forth: Very fast, very low-level; would recommend for small/medium-small projects on small controllers. (Doesn't have calling-conventions, doesn't manipulate stack-frames; this makes it faster than C.)
> Look at the amount of C code in any Linux distribution, even ignoring the kernel itself.
And?
That says NOTHING about having value in a professional setting, only that it (a) was chosen by a project that got big, and (b) enjoys popularity.
Nearly EVERYTHING that C is claimed to be [very] good about is done much better by some other language. C is particularly bad at large-systems, given the complete lack of modules, and entirely unsuitable for many things that it's commonly used for like multithreaded applications (honestly, take a look at Ada's TASK construct and consider how that might be used in [e.g.] a game-engine).
I think you and I see "value" differently. C is literally everywhere. Other people know it and are able to work with it. Tons of existing code is written in C. Most other languages can interface with it. This combination makes it incredibly valuable in a professional setting.
It is used inappropriately? Sometimes. Can there be better alternatives? Yes. But to say it has "no real value" is a bit extreme.
Pretty much all of my career has been in maintenance, so I tend to see the mess that gets made and while bad-design is always a killer, I notice there are languages that encourage/discourage it to varying degrees.
In the case of C, and other C-like languages, I most recently have four or five custom-made programs [some requiring specialized tools] that have little/nothing in the way of documentation: what it does, the "why-for"/motivation, any sort of high-level architecture-plan, or design-documents.
Fortunately for me, most of the programs actually do have documentation thanks to a true hero that left before I arrived.
I did some maintenance work on a fairly large C++ system that called into a custom lower level library written in C. Much of my work involved extending that lower level library. It processed several billion dollars a year in financial transactions, and was barely documented at all. It was intended to be "portable" so it had bizarre #ifdefs all over the place in case you happened to compile it on an ancient Unix with a K&R compiler from 1986. This was the early 2000's, so nobody ever did that. I doubt it would've worked, yet those #ifdefs remained "just in case."
The lower level C library was pretty clever. The original developer of most of it left about a year into my tenure there. He was one of the smartest guys I ever worked with.
The C++ "app" layer was a different story. The worst part it was a 3000 line switch/case block with about 100 different cases, chock full of copy-and-pasted code. It went on... and on... and on... I still have nightmares about it.
> The C++ "app" layer was a different story. The worst part it was a 3000 line switch/case block with about 100 different cases, chock full of copy-and-pasted code. It went on... and on... and on... I still have nightmares about it.
Ouch. That sounds brutal.
If I had to do something similar, or maintain that, in Ada I'd leverage nested subprograms, local type/subtype definitions, and mandatory case-coverage — and I've done similar with VMs, particular opcodes — so you get something like:
Type Opcode is ( NOP, Add_A, Sub_A, ..., Rem_D );

Procedure Execute_Instruction( State : in out Machine_State; Instructions : in Instruction_Stream ) is
    Subtype A_Series is Opcode range Add_A..Sub_A;
    Subtype B_Series is Opcode range Add_B..Sub_B;
    Subtype C_Series is Opcode range Add_C..Sub_C;
    Subtype D_Series is Opcode range Add_D..Sub_D;

    Procedure Do_Add_A;
    -- other subprograms.

    Current : Opcode renames Decode( Next_Token( Instructions ) );
    --...
Begin
    Case Current is
        when A_Series =>
            case A_Series'(Current) is
                when Add_A => Do_Add_A;
                -- other opcodes in the series.
            end case;
        -- other series.
    end case;
End Execute_Instruction;
Of course you could structure it so that all the Do_OPCODE subprograms are local to the top-level switch, or local to the nested switches, as best suits the design; or decompose along 'families' of operation (Add_A, Add_B, Add_C, Add_D), but the important thing there is keeping things local/nested for maintainability.
There's a lot of Python dislike on here, so I thought I'd add my anecdotal story.
I learned Basic, C, Assembly, and Matlab in college (in that order). However, I was never a very good programmer. After graduating, I bought an intro to Python book and read it cover to cover and did the examples. I then started writing scripts and it all kind of clicked. I found it really simple to build stuff. There's lists, tuples, dictionaries, functions, iterating, easy branching...etc. There wasn't a whole lot to remember. I did find it confusing at first why some things used function syntax min(a) and other things used object oriented notation like mylist.sort(), but it wasn't too hard to remember all the stuff I actually needed. I was able to start using classes when I felt I was ready.
Since then I've read books on and played around with Clojure, Common Lisp, Haskell, APL, Forth, Prolog, Ada, Powershell, Perl, Bash, Awk, Julia, C#, SQL...etc. During that time, I've found that Python holds its ground in being very expressive, while easy for others to understand and performant enough for most uses with libraries like NumPy. Each time I try to move to a "grown-up" language like Java, I'm shocked at how verbose and clunky everything is. Sure, it's great for production systems, but it's awful at just getting stuff done.
Python is used almost exclusively by all engineers at my company (some C#) in hundreds of automation scripts. Some programs are 10,000+ lines of code and are still maintained and easily read by others. We're mostly electrical engineers too, so everyone picked up coding on their own and we find most codebases are still pretty consistent.
I think Ada is a neat language for high reliability systems, but I don't think I'd choose it as a teaching language due to a lot of the friction with getting simple programs to work and wrestling with types. I've had very few issues with types in Python. I generally just use string, int, and float conversions when needed.
I started the other way round and came to Python long after I coded a lot in statically typed languages like Object Pascal, C++, Java, Scala and Rust. My feelings about Python are quite opposite to yours. I'm actually quite surprised how clunky Python is from my perspective.
I expected a small, simple, beginner-friendly language optimized for gluing stuff together, but I've found a huge number of overlapping features and idioms, more ways to do the same thing than in Scala, half-broken libraries with poor inline documentation, a lot of "stringly" typed code everywhere (programmers don't seem to know types other than strings, ints, and dicts), no ADTs / pattern matching, package version conflicts, and all that with no help from the type system and unreliable help from IDE autocomplete.
Type annotations help a bit, but they are still far behind what's available in modern statically typed languages.
I've also run into a few things that were weirdly complex to do compared to other languages, e.g. sending REST requests in parallel, something I'd expect a glue language to shine at.
It just feels like a major step backwards at least vs Scala and Rust which I used most recently.
So maybe it is a matter of earlier experience, familiarity and expectations?
Possibly lol. I find Rust & Scala to also require far too much boilerplate to get simple stuff done.
I suppose it depends on your intent. If you're writing a bunch of glue code, I seriously doubt Rust will be anywhere near as easy to reason about. Based on my own experience with the language, I can't imagine any coworkers ever grokking it. Python, on the other hand, is picked up pretty quick. The beauty of Python is you don't need all that boilerplate nonsense, like even having to know what ADTs are, or pattern matching, or factory factory classes. In Python, you just write code. A lot of people just have a few globals and a bunch of little functions, and others use classes.
Maybe your domain is different? I might hate Python as much as you if I had to build a bunch of large backend code bases.
Not sure what boilerplate in Rust or Scala you're talking about.
I admit Rust is much harder to learn initially, but once learned properly, the amount of code in both languages is very similar (as long as we're not comparing one program calling out to a library and another one doing everything from scratch). Both can be very high level.
Here is a study where they've found Python to be not much less verbose than Java actually, despite Java being generally considered a verbose language:
> I seriously doubt Rust will be anywhere near as easy to reason about.
I find Rust easier to reason about because there are certain constraints on sharing mutable state, forcing developers to keep to very simple data flows (complex data flows / dependencies get hard very quickly, and the compiler will fight you a lot over them). Python has none of these, so reasoning about data structures that can be freely shared and mutated anywhere can be hard. Python needs a lot of self-discipline to not end up with unmaintainable code.
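A tiny sketch of the kind of unrestricted aliasing Python permits (the names here are purely illustrative); Rust's borrow rules would force this sharing of mutable state to be handled explicitly:

```python
# Two names silently alias the same mutable list.
config = [1, 2, 3]
snapshot = config          # no copy is made

def process(data):
    data.append(99)        # mutates the caller's list in place

process(config)

# The "snapshot" changed too, because it was never a copy.
print(snapshot)  # [1, 2, 3, 99]
```

Nothing in the language flags that `snapshot` and `config` are the same object; catching this is left entirely to programmer discipline.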
That study is also based on Rosetta Code tasks. I'm not sure that accurately portrays code in the wild. Java codebases are full of design patterns that are entirely unnecessary in Python. Ever hear someone talk about design patterns for Python? It exists, but it is niche instead of the norm, and I've literally never heard it mentioned in hundreds of hours talking about Python with coworkers.
I'm sure Rust makes lots of sense when it comes to concurrency and systems programming, but that's not where Python shines or is meant to be used. In scripting, task automation, data science, etc., it is really hard to beat. So maybe we're arguing over the usage of an axe and a sword on the battlefield and not about different swords lol.
Is Python not an OOP language?
I think it is, so OOP design patterns still apply to it.
So the differences you observed might not be because of the language itself, but because of the complexity of the projects these languages are applied to and the cultural differences of the teams. So far I haven't worked on Python projects as big (in terms of functionality) as the Java projects I've seen.
As for data science, so far I haven't stumbled upon any Python code that wouldn't look very similar translated to Java, Scala, R or Rust, assuming same libraries existed. Most of the code is very simple really: load data into some vector/matrix, apply some library code on it, get a different vector/matrix back, etc. The only thing that holds me to Python really are libraries.
As for concurrency - gluing systems together sometimes needs concurrency to cut the latency down. And in data science parallelism also means performance, and often it is needed. I'm not that convinced Python is a clear winner here.
> As for concurrency - gluing systems together sometimes needs concurrency to cut the latency down. And in data science parallelism also means performance, and often it is needed. I'm not that convinced Python is a clear winner here.
Ada is really quite good here; the Task construct is strong enough that I would say if your application is inherently going to be dealing with concurrent processes, you should seriously consider Ada. -- The 2020 standard is adding a Parallel keyword/block, so that (in theory) using e.g. CUDA parallelization should be as simple as compiling your code with a CUDA-aware compiler.
I think the winners in data science are Python, R, and Matlab, with Ada's share being non-existent. There is a reason for that: Ada doesn't have the numerical, data frame, or stats libraries, any REPL functionality, charting libraries, etc. Language ecosystems are the thing that matters. I'd rather write Avionics or high speed trading systems in Ada, but data science? Maybe for some very niche problems.
> Ada doesn't have the numerical or data frame or stats libraries
Ada has a pretty nice set of numerics (Ada.Numerics.*), but the "lack of libraries" is almost a non-issue when the foreign-function interface is as simple as:
  function Example_1 (Item : Some_Matrix) return Some_Matrix
    with Import, Convention => Fortran,
         External_Name => "EX1";
> or any REPL functionality
There are a few people coming in from data-science who lament the lack of REPL, while I might do one, it's rather low on my list, though I think HAC is trying for REPL or something like it. (I haven't used HAC yet.)
> Language ecosystems are the thing that matters. I'd rather write Avionics or high speed trading systems in Ada, but data science? Maybe for some very niche problems.
I agree that the ecosystems are what matters, and this alone would be enough to fuel my general hatred of C: the amount of time, effort, and money spent on C, whether "making a better C" or crippling tools (e.g. text-diff vs real semantic diff) or making "it 'mostly' works" accepted, is simply astronomical.
I think you've already stumbled on a little problem, and that is using an FFI. Python's NumPy handles all of that for me, so I never have to leave the walled garden of Python. Many scientists/engineers have similar views and likewise have very little Fortran or C experience, so having the kitchen-sink distribution with (import numpy) is preferable to FFI.
> Is Python not an OOP language? I think it is, so OOP design patterns still apply to it.
“Design Patterns” in software (particularly when talking about what you see in code) are used largely to refer to things that are not just design patterns, but a combination of design pattern plus workaround for limitations in the facilities for reusable abstraction in the target language that prevent implementing the design pattern via reusable abstraction.
In that use, there aren't really “OOP design patterns”.
When I went to school the instructor talked a bit about Ada, but as far as I know there weren't any compilers available that mere students could afford. This was in the mid-90s.
I remember doing a bit of comparison of the syntax between languages, and Ada's lack of anything like a switch/case statement stood out, but the instructor did talk up the idea that if you could get it to compile it would run in Ada, assuming you didn't hit a compiler bug. Apparently getting the compilers working properly was a problem at the time.
I have no idea what my professor was on about then. I remember him showing a slide of this ridiculous looking if/then/elseif... chain compared to a C switch statement.
I'm not sure this was the case for us; there were plenty of other courses in the major that involved both widely used languages and more niche ones.
I think it was more that we didn't have enough professors who were bought-in to Ada to handle the volume of students in those initial courses. The professor who used to teach all the intro courses (and was the biggest advocate for Ada there) became the department chair and let other people take over those courses so he could focus what time he had on some of the upper level courses. Pretty sure they still use Ada in their concurrent programming course too.
Edit: Got curious and I think this is all the languages I used as part of an undergrad CS degree in approximately the order I first used them:
Ada, Racket, C, Make, x86 ASM, Java, Bash, batch script, PHP, JavaScript, MySql, Python 2, Inform 7, Blender Game Engine visual scripting, ICON, Lua, Erlang, and ROBOTC
I'm honestly surprised how much variety there was despite the only academic language being Racket.
There's something wrong with the idea that you go to college for a computer science degree just for the job market, and not the science. If getting a job is all that matters, surely a trade school or boot camp is the better route. But then I'm of the opinion that college shouldn't be in the market of mass producing future workers.
The problem is employers require computer science degrees, especially for entry level jobs. Then there's things like coding tests about inverting binary tries and such that tends to be less thoroughly covered in trade schools. When employers have more realistic demands we might see some change.
At the same time colleges are morphing into expensive trade schools, their advertising is full of phrases like "preparing you for the real world".
In Portugal it used to be relatively easy, do you want a degree with focus on the job market?
Polytechnic, with 3 years duration.
Do you want to learn to learn, and be prepared for any kind of challenge, with a professional title?
University, with 5 years duration
Now with Bologna, the rules changed a bit, however the universities sell the fact that now with 5 years the degree includes the master title, which used to require up to 3 years more on top of the 5.
As far as I am aware, other European countries have similar approaches, and in most of them in what concerns state universities, the biggest expenses are lodging, food and getting the required teaching materials.
Which is fair, but also kind of funny. In theory, what most of us do day to day is software engineering and computer science is a niche corner of mathematics. But the broader expectation is that the math-niche teaches day to day programming.
I think there's a pedagogic void here. Everybody enjoys quick returns. be it a lisp repl, php f5 refresh or bash .. the desire to have clear mental model of your program state comes after (unless you have both brain power, talent and/or passion for that)
Western Washington University. I think they dropped Ada from the first courses in 2012 or 2013. Would be curious to see how those courses heavily featuring Racket have changed in the last decade while Racket was rapidly improving.
They were teaching Ada in the Intro to Computer Science classes at Truman State in the mid-2000's as well. I think they eventually switched to Python sometime after I graduated 2008.
In universities there were steep educational discounts of Sun hardware. At the time administering labs full of PCs was a nightmare, and a lot of departments were culturally predisposed towards Unix anyway, so a natural result in many places was Sun workstations or Sun Ray terminals where Java was well supported and was portable enough to also run on the student personal PC.
This is a well known pattern in services now too, lots of platforms are buying user market share by offering free/cheap service for college students and reaping the revenue when the students move on / generate invites.
I wouldn't call this bribes but you can make an argument that it's buying mindshare that you wouldn't get on your own merits.
In 1996-1997 I ran Java 1.0 or 1.1 on Solaris, MacOS 9, and Windows. I developed for, and on, all three. It was available on a number of other Unix implementations, including Linux and DEC. In addition to HotJava, Java applets ran on Netscape. In 1997 Symantec had a popular free JIT that ran on most everything. These platforms dwarfed Solaris installations. How exactly were these minor Solaris boxes helping sell Java again?
Quite to the contrary. If anything, Java was helping sales of future Sun boxes, rather than the other way around. Sun worked hard to make sure Solaris ran Java efficiently and stably, so if you were a Java developer you might consider Sun boxen professionally.
Our university started to use Java right away in 1996 for compiler design and distributed systems.
Sun didn't need to bribe our teachers; doing the same in C and C++ was such a pain with 1996 compilers and POSIX that they didn't think twice about changing to an interpreted language.
In answer to what appears to be a misunderstanding about Rust:
> Its foreign function interface seems particularly poorly implemented. The official Rust documentation suggests the use of the external third-party libc library (called a 'crate' in Rust parlance) to provide the type definitions necessary to interface with C programs. As of the time of writing, this crate has had 95 releases. Contrast this with Ada’s Interfaces.C package, which was added to the language in Ada 95 and hasn’t needed to change in any fundamental way since.
Rust's libc crate isn't third-party, it's first-party, developed by the Rust project itself: https://github.com/rust-lang/libc/ . It's also not just for type definitions necessary to interface with C programs; here's the first heading and first paragraph of its README:
"libc - Raw FFI bindings to platforms' system libraries"
"libc provides all of the definitions necessary to easily interoperate with C code (or "C-like" code) on each of the platforms that Rust supports. This includes type definitions (e.g. c_int), constants (e.g. EINVAL) as well as function headers (e.g. malloc)."
The fact that this library contains low-level type definitions for every platform that Rust supports explains why it's had more than one release: new platforms get added, platforms add new interfaces, and platforms change the definitions of existing interfaces (possibly incompatibly, which explains why this isn't in the standard library).
> It lacks basic features necessary for the task, like bitfields, and data structure packing.
The latter is achieved via the built-in `repr(packed)` attribute (https://doc.rust-lang.org/nomicon/other-reprs.html#reprpacke...) and the former is provided by the bitflags crate: https://crates.io/crates/bitflags (while unlike libc this does not live under the rust-lang org on Github, it does live under its own org which appears to be populated exclusively by Rust project team members).
Regarding `repr(packed)`: Thank you for posting this. I really like Rust's official documentation on the subject. I stand corrected regarding Rust's support of structure packing. The following statement is a little troubling however: "As of Rust 2018, this still can cause undefined behavior." This greatly affects Rust's suitability for bare-metal programming, where you very often require control over a structure's layout in memory at bit-level granularity.
Regarding bitfields: At the risk of sounding a little old-fashioned, I don't like the idea of having to import external packages to provide these kinds of fundamental features. The article hints as much. It might be a bit of a culture clash however I feel that learning the different styles and interfaces of a bunch of external packages is an extra, undesirable cognitive burden imposed on the developer. Plus, "A macro to generate structures which behave like bitflags" (The crate's official description) doesn't sound very robust to me. It sounds like precisely the kind of hack that a future release could break.
In fairness, it should be mentioned that the C standard does not guarantee the layout and order of individual bitfields either (refer to section 6.7.2.1 paragraph 11 of the C1X standard). Even though the usage of bitfields is common in C, it's not without its issues.
> "As of Rust 2018, this still can cause undefined behavior."
This is referring to the fact that you have to be careful when accessing the fields of a packed struct that everything is aligned correctly. Normally anything where "you have to be careful" in order to uphold memory safety requires use of the `unsafe` keyword, but due to an oversight Rust doesn't currently require it in this instance. So, in typical Rust fashion, this isn't referring to anything sinister like a compiler miscompilation, it's just an error that Rust isn't being as paranoid as it strives to be. :P
The patch to require this `unsafe` annotation was actually just filed last week; it will become a warning first, then might become a hard error for users who opt-in to the upcoming 2021 edition: https://github.com/rust-lang/rust/pull/82525
>> "As of Rust 2018, this still can cause undefined behavior."
>
> This is referring to the fact that you have to be careful when accessing the fields of a packed struct that everything is aligned correctly. Normally anything where "you have to be careful" in order to uphold memory safety requires use of the `unsafe` keyword, but due to an oversight Rust doesn't currently require it in this instance.
No, there's a huge difference between UNDEFINED BEHAVIOR and UNSAFE BEHAVIOR (in the sense of "be careful here").
Take Ada's "Unchecked_Conversion" function, it operates essentially the same as C++'s bitwise-cast, and thus is unsafe ("be careful" sense) but is not undefined.
Rust uses "unsafe" to refer specifically to memory safety, not merely for any operation considered generally undesirable. Because UB can cause any effect at all, anything that is potentially UB is regarded as "unsafe".
That is a mistake. Architectures are being updated to support unaligned access. MIPS, PowerPC, and ARM have all been updated. The machines that can't handle unaligned access are going the way of machines with sign-magnitude integers, trap representations, 9-bit bytes, base-16 floating-point, and so many other terrible things that must have seemed like good ideas at the time.
Basically, Intel goofed back when SSE was first introduced, and some compilers (including both gcc and llvm) got tripped up. Intel made two instructions for loading SSE registers, a normal one and one that would take alignment faults. There was the suggestion that the one with alignment faults would perform better, so people used it. In later processors, the tiny difference went away. So now you have a useless instruction supported by the hardware, and it is getting emitted by LLVM.
All the other instructions that could be emitted by llvm, and all the instructions that should be emitted by llvm, do not take alignment faults.
I don't believe they're being ignored, however, I encourage you to comment if you have something to add, even if it is just to reiterate that comment in a modern context.
This was a good read. I instantly recognized the title of the textbook mentioned in the blog post: I own it!
Having a background in C, I went back and forth with Ada for years, without really jumping all in. In the last couple years in particular, with the growing popularity of Rust, I started to renew my interest.
I'm reminded of a popular reddit thread on r/Ada-- someone called Rust a "toy language", which prompted the valid response that Rust is being used in a lot of commercial products lately. The response[0] kind of brings home the caliber that Ada is capable of, starting with:
> Rust being used in commercial products isn’t really the same ballpark as what I’m talking about. It’s not even the same game.
It seems like the easiest way to trend on HN is to make a post such as "<old software> rewritten in Rust", but each time I see more and more people advertising Rust, I just wonder why Ada didn't get the credit it deserved as being absolutely bulletproof. Eventually, I came across an "Ada Manifesto" of sorts [1] that finally pushed me to "put my money where my mouth is" and start going all in with the language.
(the same author of that "Manifesto" maintained a "Should have Used Ada"[2] series for a while that points out just how using Ada could have stopped certain security vulnerabilities from ever being a problem in the first place)
Ada is anything but dead and there's a lot of interesting things coming out for the 202x specification. I hope to see community enthusiasm grow as people begin to shift their interest more and more to safe languages.
> I just wonder why Ada didn't get the credit it deserved as being absolutely bulletproof
It's not. All languages make trade-offs in the performance-convenience-safety-etc space, and Ada's choice is not "100% safety". It lacks memory safety and has holes in its type system: https://www.enyo.de/fw/notes/ada-type-safety.html
I wouldn't say Ada lacks memory safety. It doesn't go for 100%-in-all-cases but neither does Rust. The main differences are that it doesn't do memory safety by default (which is significant), and also treats memory safety with less granularity.
Aside from simply making manual memory management less frequent (with things like variably sized arrays), it has memory pools and subpools to handle more large-scale memory safety issues. Essentially you can define the scope for all allocations of a type.
It's an interesting tradeoff, though unfortunately I haven't seen much discussion about it.
I think it’s time we start expecting 100% memory safety as table stakes, because any flaws are catastrophic. Moore’s Law has more than paid for it; Android could run an animated display and a Bluetooth stack in a wristwatch seven years ago.
One area where I think Ada has the edge is providing language constructs that make bare metal programming safer. Concepts like 'dangling pointers' and 'memory leaks' aren't relevant in a programming environment without a heap. In bare-metal programming on a microcontroller you're more likely working within a flat memory model where the 'memory safety' provided by some modern programming languages is less relevant. Arguably, this is the context within which safety-critical programming is actually happening.
You can absolutely cause a pointer to dangle without heap allocation. Pointers can point to the stack too. You also have stuff like iterator invalidation, which is sort of a special case of a dangling pointer.
You can write massive programs without a pointer in sight. If you have to import any C, then it's made a lot worse. You're better off doing memory mapped register type stuff in Ada.
I really like Ada's ranged types (VHDL has them also - it inherited them from Ada). You can say:
type OperatingTemp is range 33 .. 90;
And then declare variables of that type and they will be range checked - an exception will be thrown if the variable goes out of that range. Wish more languages had this feature.
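For comparison, here is a minimal Python emulation of that ranged type. The class name mirrors the Ada example above and is otherwise invented; unlike in Ada, nothing in the language forces anyone to actually use it:

```python
class OperatingTemp:
    """Emulates Ada's 'type OperatingTemp is range 33 .. 90'."""
    LOW, HIGH = 33, 90

    def __init__(self, value):
        # The range check Ada performs automatically must be hand-rolled here.
        if not (self.LOW <= value <= self.HIGH):
            raise ValueError(f"{value} outside range {self.LOW}..{self.HIGH}")
        self.value = value

t = OperatingTemp(72)        # fine
try:
    OperatingTemp(120)       # out of range
except ValueError as e:
    print("rejected:", e)
```

The check only fires where the wrapper is used; plain ints can still flow everywhere else, which is exactly the gap Ada's ranged types close.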
I agree. I learned Ada in university (my professor was on the Ada committee) and used it for an embedded development course. The language was really good (many fewer foot guns than C or C++; clearly Ada was holistically designed), but I recall having issues with the tooling and the ecosystem, and the community was super defensive and hostile. If something wasn't working for your use case and the community didn't have a solution, then your use case was invalid. If you politely asked for help, you were incompetent or too lazy to figure it out on your own. This was a decade ago, so maybe things have changed since, but I can honestly understand why the language is niche. The type system and other intra-language features are very interesting but ultimately not the most important aspects of a programming language.
Ranged types are directly from Pascal. You can also use them to specify the valid indexes for an array (so a given array might be zero-based, or one-based, or 1900-based, or even -32768-based). A very positive feature, I agree.
  type
    TPerson = record
      name: string;
      age: integer;
    end;
    TThirteenPersons = array[10..22] of TPerson;

  procedure Check(const aPerson: TPerson);
  var
    vPerson: TThirteenPersons;
  begin
    vPerson[11] := aPerson; // this is ok
    vPerson[1] := aPerson;  // this one gives a compile error
  end;
Alternatively, if runtime range checking is on, then vPerson[i] := aPerson will raise an exception if i is out of range.
Right. I know it works with numeric indexes in Pascal. My point was that (using my previous example) in Ada it works with any discrete type, in that case characters (which are not numbers in disguise in Ada), so I can index an array directly by characters.
In Pascal you can use any ordinal type as the type of the array index. You can create a new type from a range of ordinal values.
So you can have arrays where the index runs from -10 to +20 or from +7 to +26 for instance. Booleans are also ordinal types so you can have an array where the index is boolean.
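In a language without this feature, a non-zero-based array has to be emulated by hand. A minimal Python sketch (the `OffsetArray` wrapper is invented for illustration):

```python
class OffsetArray:
    """Sketch of a Pascal/Ada-style array[low..high] with an arbitrary base."""
    def __init__(self, low, high, fill=None):
        self.low, self.high = low, high
        self._data = [fill] * (high - low + 1)

    def _check(self, i):
        # Emulate the bounds check the compiler/runtime does natively
        # in Pascal and Ada.
        if not (self.low <= i <= self.high):
            raise IndexError(f"index {i} not in {self.low}..{self.high}")
        return i - self.low

    def __getitem__(self, i):
        return self._data[self._check(i)]

    def __setitem__(self, i, value):
        self._data[self._check(i)] = value

a = OffsetArray(-10, 20)   # indexes run from -10 to +20
a[-10] = "first"
a[20] = "last"
```

The offset arithmetic and the bounds check both have to be maintained manually, which is precisely what the built-in feature spares you.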
Well, "range" was not the correct term. What I meant is that you can't treat the source as sparse. You can't skip portions of it. So you can't make an array that only covers every other element of this enumeration:
type Colors is (Red, Green, Blue, Black, Gray, White);
It has to be a consecutive group of them, taken in order, as the specification of even this enumeration has an ordering to it as far as Ada is concerned. So I can make an array using all of Colors as the index, or some subset but it has to be, say, Red..Blue and not Red|Blue skipping over Green.
I much prefer VHDL to Verilog. Mostly because you can define your own types in VHDL but can't in standard Verilog (SystemVerilog is closer to VHDL in capability). Also because the whole reg thing in Verilog just doesn't seem very ergonomic to me - I have to stop and look things up when coding state machines or similar stateful things in Verilog. In VHDL you've got signals and variables. Signals can have state. Variables are only used inside of processes - much easier to remember, for me anyway (maybe my mental block has to do with learning VHDL first).
And finally, VHDL is strongly typed. Verilog is pretty much C-typed, meaning it's pretty weak. About a dozen years ago I worked at an EDA company, and one of my tasks was to run our generated HDL code through a popular industry linting tool. There were hundreds of linting problems with the Verilog code that needed to be fixed. In the VHDL code there were only a couple of things that needed to be addressed - this was mostly due to VHDL's strong type system preventing many of the problems that showed up on the Verilog side.
Yeah, it's very weird that a language developed by the DoD ended up popular in the EU but much less so in the US. I'm not sure why that happened. I did do a stint in Italy about 16 years ago and they preferred VHDL there at that time as well, so it's not exactly a new phenomenon. I did a VHDL project for a US company last year so it's not like it's completely unheard of here. Another way to look at it is that there are fewer people who know VHDL so when you do find places that are doing VHDL projects there's less competition.
In JavaScript it is possible to define getter and setter methods of an object. So this behaviour can be emulated by adding a validation in the setter method.
Right, it's emulated but not universal or built-in. Nothing in JavaScript will enforce using getters/setters, it's 100% up to programmer discipline or adding a linter (if any linter has this as an option) to ensure both the creation of the accessors and their use.
The benefit of this in Ada is that it is universal, and it requires the programmer to do nothing special later on except to not turn off the runtime checks. Using that above example, this would be a runtime error:
Reading : Integer := Get_A_Reading(...); -- In some run Reading gets 0 for some reason
Temperature : OperatingTemp;
...
Temperature := OperatingTemp (Reading); -- raises Constraint_Error at runtime in Ada if Reading is out of range
And because OperatingTemp is a distinct type, you never even have the option to do:
Temperature := Reading; -- doesn't compile
Also, in most OO languages where you'd make temperature an internal, private variable you can prevent external users of the class from interacting with it incorrectly, but you can't control internal users. Consider this Java-ish example:
  class Thermostat {
    private int temperature; // should only be in [33,90]

    public void SetTemperature(int value) { /* check range and set */ }

    public void AnotherMethod(int value) {
      // good discipline would be:
      SetTemperature(f(value));
      // but it's not strictly required, so this is also allowed:
      temperature = f(value);
    }
  }
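The same limitation can be sketched in Python: a property centralizes the check, but code inside the class is still free to bypass it (all names here are illustrative):

```python
class Thermostat:
    def __init__(self):
        self._temperature = 33

    @property
    def temperature(self):
        return self._temperature

    @temperature.setter
    def temperature(self, value):
        # The only place the range is enforced.
        if not (33 <= value <= 90):
            raise ValueError("out of range")
        self._temperature = value

    def bypass(self, value):
        # Nothing prevents internal code from skipping the setter.
        self._temperature = value

t = Thermostat()
t.temperature = 72   # validated
t.bypass(500)        # silently out of range
```

With an Ada ranged type, the 500 would be rejected at the assignment itself, no matter which method performed it.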
> In JavaScript it is possible to define getter and setter methods of an object
If a civil engineer had access to concrete but he chose to build with mashed potato instead he would be considered insane. Yet programmers with access to Ada choose JavaScript and it's considered perfectly normal.
Yes and it would be the same type but there is a risk of overflow (and underflow with subtraction) which could result in a runtime exception (same as the risk with conventional integers). You’d want to include a guard that prevented those results.
I like the "clean feel" of Ada's syntax: it combines the elegance of Python with a bit more structure and does not suffer from Python's significant whitespace issues.
The so-called "Ada comb" structure that is used for packages, subprograms, and even declare blocks makes it easy to find what you are looking for because it makes the source code more regular.
The "Ada comb" is formed by the shape of the source code with the subprogram header / declare, begin, exception, and end forming the "teeth" of the comb and the rest of the source code indented between the "teeth":
function Square_Root (Arg : Float) return Float is
I did Ada back in university. I wrote a toy Ada compiler for my master's thesis. I was so enamored with Ada's exception handling at the time! When I discovered Python and its similar exception handling, I jumped onto Python in a heartbeat.
> I use PL/SQL a fair bit, which was inspired by Ada.
I have a bit of a project-idea here: PL/SQL+Ada+VHDL all together in an IDE.
> Unfortunately, the designers did not bring over
> ShortName is New ReallyLongAndAwkardName;
Do you mean renames?
Package Text renames Ada.Strings.Fixed;
RENAME is a command in Oracle SQL. I see in the documentation that PL/SQL gives it as a "keyword", not as a "reserved word". This means, I gather that you can name a subprogram as RENAME, though Oracle recommends against it. You could not, on the other hand name a subprogram INSERT, unless you double-quoted it--which, again, Oracle recommends against.
> The so-called "Ada comb" structure that is used for packages, subprograms, and even declare blocks makes it easy to find what you are looking for because it makes the source code more regular.
"Declare" blocks are a PITA. I don't mean the declaration part in the subprogram example you quoted (that's fine), but having to use explicit "declare" blocks to create new variables in the scope of a loop or a conditional. You can't just declare variables after the introducing keyword (then, loop, ...), possibly between it and a "begin" that would be fused to become part of the loop/conditional syntax. You have to use a separate "declare" block, with its "begin" and its own "end;", and then you still have to have the "end loop;"/"end if;" of the loop/conditional itself.
Illustration.
A normal conditional:
---------
  if A = 5 then
     B := 7;
     C := A + 1;
  end if;
---------
What you'd think you could at least do (it would already be a bit heavy, but by Ada standard verbosity it would be a good and smooth fit):
---------
  if A = 5 then
     declare
        D : Natural;
     begin
        B := 7;
        D := 1;
        C := A + D;
  end if;
---------
What you actually need to do:
---------
  if A = 5 then
     declare
        D : Natural;
     begin
        B := 7;
        D := 1;
        C := A + D;
     end;
  end if;
---------
So you waste one more indentation level each time you add one such declare block. Talking about combs, this gets hairy quickly even when you have just a few nested loops/conditionals.
Also, the "end;" of the "declare" block is not an "end declare;" or "end block;" like you have "end if;" and "end loop;". So that makes it a bit harder again to know where you are when closing your blocks/loops/conditionals.
Then you can add a label, but that makes it even heavier (and good luck finding a clever name for the block label each time). I'm not sure there exists a good solution for placing the (opening) label. And the closing label comes at the same place as a closing keyword, which may be considered a bit confusing.
---------
  if A = 5 then
     myblock:
     declare
        D : Natural;
     begin
        B := 7;
        D := 1;
        C := A + D;
     end myblock;
  end if;
---------
or
---------
  if A = 5 then
     myblock: declare
        D : Natural;
     begin
        B := 7;
        D := 1;
        C := A + D;
     end myblock;
  end if;
---------
or
---------
  if A = 5 then
     myblock:
        declare
           D : Natural;
        begin
           B := 7;
           D := 1;
           C := A + D;
        end myblock;
  end if;
---------
The last one, which wastes not only 1, but 2 indentations levels(!), and even worse, creates an indentation gap in the end, happens to be the recommended style...
I think everything you say is explained correctly; it's just that you describe a very special fringe case. The variables in your `declare` block only need to exist during the if, so this is one way to manage their lifespan.
In Ada, as in almost every language, you are well advised to write small functions with limited scope and then build on top of them. If you declared your variables as a normal part of such a function, they would have the lifespan of the function, and with the function being small and to the point, you can avoid all the additional effort with `declare`.
I am not saying that there is no use for `declare` but it’s not needed that often or a big hassle in my view.
Ada is a bit more verbose than other languages, and one of its main aims is readability, on the basis that you write code once and read it many times.
HN formatting tip: Put two extra spaces in front of each of those code blocks and remove all those unnecessary blank lines, and your comment will be tolerably formatted. As it is, it's a PITA to read because it spans several screens without any good cause.
These seem contrived. Ada supports encapsulation, at the procedure or package level. I would just declare the A, B, C, D variables up front .... I don't see what the begin/end blocks buy you in these examples.
I took my first university course in Ada in 1981. I also worked for a company that provided an Ada runtime in the mid 80's.
Ada was sabotaged early on because it was 'mandated' by the DOD for new programs. That meant that all the usual suspects, like Lockheed, GD, TI (I don't remember exactly which ones), came up with Ada compilers and runtimes that cost on the order of $10K per seat. The typical military contractor ripoff. So it was impossible for individuals or small companies to use Ada on their own dime. It was only feasible if the cost was rolled into a larger (bloated) defense contract. So it couldn't get a following. Of course much later on free versions became available, but it was too late.
That said, Ada was absolutely no fun to program with. It was awkward and verbose. I hated it from the get-go, compared to the alternatives. If it is so super, why is it rarely used?
Amazing, since the first Ada compiler wasn't released until 1983.
You'll make those HR people happy who want candidates with 10 years of experience in 2 year old languages.
>> That meant that all the usual suspects, like Lockheed, GD , TI (I don't remember exactly which ones) came up with Ada compilers and runtimes that cost on the order of $10K per seat.
I always wondered why I've never seen it used outside government work. Your explanation seems incredibly obvious after reading it.
I went to a summer program at the US Air Force Academy and took a short class using Ada with Lego mindstorm robots. Having never done any programming at the time beyond BASIC (thanks to QBASIC and my TI-83+ calculator), I really enjoyed learning Ada. The instructor talked about the safety aspect of Ada which went over my head at the time :)
I have done a lot of embedded programming in C over the years, and while the reality is most embedded programmers know C best, I am starting to think we would be better served by trying a new language with better features, such as Ada or Rust. C++ is nice as well, but has its own set of problems when used for embedded programming.
Nim has some influence from Ada in its type system and has much of the same feeling that the language is a power tool at your disposal. They have been making changes to make it useful without GC for embedded targets.
I love Ada; unfortunately its real-world use seems to be relegated to old legacy code. I'd like to use it a little more on the side, but I also need to keep my priorities focused on realism, which sadly means ignoring Ada and learning something like C++, which seems unapproachable from any angle.
Ada also seems to have a weirdly negative rep in many circles. I recall looking around for an Ada compiler for a moderately popular platform and came across an old thread where people didn't give any options but instead just joked about how the OP was interested in such a terrible language. Maybe it's the Pascal/Algol type syntax?
> people didn’t give any options but instead just joking about how the OP was interested in such a terrible language.
I think the attitude is mostly a historical artifact and momentum. The language was soundly rejected in the 80s and 90s by many people in favor of C, for numerous reasons. Some valid, others invalid. It's carried a reputation since then (much like the author's take on Fortran, many quick takes here on PHP and Perl) that reduces the potential for adoption today even though the language is actually rather pleasant (IMHO, and also now, may not have been 20 years ago) to work with.
But other languages are better advocated for, and have (mostly) better tooling these days. Alire is helping on the package manager front, but it's still pretty new. When people think "safety" in software they tend to jump straight to Rust and Haskell, mentally, even though Ada also fits within that conceptual space.
You can say this is still 20 years ago, but you have to count from the other side. If you have two solutions to a problem, the earlier one usually wins (except if the later one is orders of magnitude better on a metric that counts). Yes, Rust is also late to the party, but Mozilla's support is a big advantage.
From my experience, it still gets a lot of use in aerospace and defence companies. Outside of those industries I can't think of people even mentioning it really, but again probably biases abound.
Skimming my local jobs list(SE England) I see new listings from both Airbus and BAE Systems looking for Ada developers in the past week.
Again this personal experience only, but one of the projects I'm linked to is a greenfields Ada project.
However, you'd be correct to say there are an awful lot of multi-decade projects in both the companies I mentioned(and the aerospace/defence in general). For example, last year my main project was work stemming from a design that began in 1997.
>I love Ada, unfortunately it’s real world use seems to be relegated to old legacy code. I’d like to use it a little more on the side, but I also need to keep my priorities focused on realism, which sadly means ignoring Ada and learning something like C++ which seems unapproachable from any angle.
Oh, my github must be ancient then!
> Ada also seems to have a weirdly negative rep in many circles it seems. I recall looking around for an Ada compiler for a moderately popular platform and came across and old thread where people didn’t give any options but instead just joking about how the OP was interested in such a terrible language. Maybe it’s the Pascal/Algol type syntax?
That stems from the hatred from the people working at the DoD at the time who'd never even seen the language.
Heh, I must say, wow, impressive GitHub. You the original author of the OSDev wiki's barebones Ada tutorial?
Off topic, but have you ever heard of CHILL? It’s a language from the ITU designed for telephone switches (like Erlang) but is supposedly very similar to older Ada standards.
I have heard of it, and was tempted to grab a copy of the spec. -- The context in which I heard it, language design, Ada was put forth as the only language designed for both maintainability and correctness, someone responded that CHILL was another, but the only other they knew of.
Yes, two different locations for freely getting the spec.
> I can see why it's no longer used, with all those modes! But another language with ranges. Also has module inheritance.
IIRC, "mode" used to mean (in some cases, historically, in CS) the equivalent of what we now call a "type" — I seem to recall Algol using the terminology, but may be misremembering.
> Heh, I must say way, impressive GitHub. You the original author of the OSDev wiki’s barebones Ada tutorial?
Thanks. Yes, you can tell because it doesn't show as a fork on my gh and it should show on the wiki who wrote it.
> Off topic, but have you ever heard of CHILL? It’s a language from the ITU designed for telephone switches (like Erlang) but is supposedly very similar to older Ada standards.
I've never programmed Ada, but I've read PL/SQL and PL/pgSQL are based on it, and those have pretty wide use in their niche.
So its principles may live on for quite some time in those other languages.
His discussion of Ada (not all upper case -- can the HN title be changed?) is interesting, but his criticism of Fortran, although colorful ("eldritch"?), is vague and likely uninformed. I program in Fortran 95 a lot and can easily understand my code years later.
"Admittedly, I had pictured Ada’s syntax resembling the uncompromising verbosity and rigid construction of COBOL, or perhaps the Lovecraftian hieroglyphics of Fortran’s various eldritch incarnations."
My wife used to work as a webdev at a shop that made its bread and butter contracting out some devs to do M/MUMPS development for a bank. Truly horrifying. The language/DB, but also that this thing was involved in handling people's money.
In true Cthulhu fashion, after encountering it at a medical facility where the lead programmer bragged about how amazing it was, I bought a book on the language. I like collecting language books (Icon was fun), but that was rather painful to read through and I have done IBM JCL and found it not unpleasant.
The NY Times a few summers ago wrote up some firm in Wisconsin that handles data-processing for quite a few insurance companies, using MUMPS. A co-worker's son a summer or so before had weighed competing internships from that firm and from a financial company in New York. I said I thought that the latter would offer more portable experience.
> "Lovecraftian hieroglyphics of Fortran" <-- the author never programmed in Perl, I presume.
Or in Fortran, it's a relatively verbose language from what I remember of it (that said, I learned Fortran 90/95, so maybe the older variants were worse)
Modern Fortran (2008/2018) syntax is bliss. You would almost not notice whether you are coding in MATLAB or Fortran, with the advantage of your code being orders of magnitude faster in Fortran. Those who criticize the syntax of Fortran have probably not learned anything beyond F77, or at best F90.
> The writing is on the wall: Ada is here to stay.
OK, this has no relation to the actual content of the article, but I have to point out that that is not what the phrase "the writing is on the wall" means.
In the original biblical usage a Babylonian king held a feast and suddenly a disembodied hand wrote: MENE, MENE, TECKEL, PARSIN on the wall, which translated to:
"Mene: God has numbered the days of your reign and brought it to an end."
"Tekel: You have been weighed on the scales and found wanting."
"Peres: Your kingdom is divided and given to the Medes and Persians.”
So you're right, it doesn't connote that something will endure, but that something will end.
Author here. Thank you for pointing this out! I actually did not know that this phrase had such a specific meaning. I had incorrectly inferred from other usage that the phrase meant that the fact in question was supported by evidence to a degree that warranted no further debate. I feel a little embarrassed for the misuse. However I'm all the better off for having been educated on its proper use! Thank you.
> It is possible to define a struct type in C with bit-fields for the individual elements, however the C standard does not guarantee the layout and order of the individual fields.
As a professional embedded developer who uses bitfields to access registers every day, this doesn't really make a practical difference. On any bare-metal or embedded project you will rely on the behaviour of your compiler, and portability is largely irrelevant if you're accessing memory-mapped registers. Probably, the manufacturer has already provided register maps using bitfields anyway.
Having direct control over this type of thing is important when updating the fields of a persistent data structure. I've had to deal with this mistake before, where the original developer thought the layout matched what they specified, but the actual layout that got persisted didn't match. For compatibility, the broken layout stuck around forever, and special rules were required to detect it.
I suppose the HN software changed it and the submitter didn't notice (or didn't care). It happens silently after submitting, unlike the "13 too long" message that blocks submission.
Many of those rewrites are benign, even good, many others are stupid and infuriating.
Unfortunately there is no way to have yourself declared to be a "well-known submitter with a history of not editorializing or outrage-optimizing submission titles" and get this thing switched off.
In this case there have been so many discussions and submissions about the Americans with Disabilities Act that it sounds plausible to be such an automatic rewrite.
I think the thing that's been under-sold in Ada is the way you can get so much at compile time as properties of things, like, you can say "this type counts between 1 and 30" and then later refer to "the largest value of this type" as a loop bound, say, in a way that won't break if you later set the maximum to 40. And hides the fact it's implemented as an unboxed primitive integer.
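A quick sketch of what that looks like (all the names here are invented for illustration):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Demo is
   --  A constrained integer type: stored as a plain machine
   --  integer underneath, but bounds-checked against 1 .. 30.
   type Player_Count is range 1 .. 30;
begin
   --  'First and 'Last track the type declaration; widen the
   --  range to 1 .. 40 and this loop follows automatically.
   for I in Player_Count'First .. Player_Count'Last loop
      Put_Line ("Player" & Player_Count'Image (I));
   end loop;
end Demo;
```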
I tend to like Ada, but it is a tiring language to read with the all caps. Also, it 'feels' like it has a gatekeeper group and really doesn't come up in any mobile conversation. I still believe someone will do something akin to a syntax substitution and come up with a well liked language.
Also, modern Fortran is not that bad of a language much like the modern parts of C++.
The community is definitely not helping, or has not helped. It seems to be improving but I've seen a lot of negative responses to novices that read like some of the things I saw in the 00s when trying to get into Common Lisp. Some individuals are willing to push past this, for others it's a deal breaker.
The community is becoming more open (AdaCore has been very helpful here), I think, but you still have a fair amount of vocal gatekeepers that are going to continue to keep people out (deliberately or not).
It is, and I should have said that. I pushed past the issues in the larger community when I started because I had a good local community (grad school, prof was pushing it). About the same time the general community behavior was improving, and now it's a very welcoming language community. You still occasionally get curmudgeons but they aren't the loudest voices.
The tradition of all caps keywords (just a convention, not required as others have noted) was common in most languages in the era before most editors had syntax highlighting. It wasn't meant to be "tiring" but rather the opposite, to make reading code easier. Of course modern syntax highlighting makes the convention obsolete.
> Reserved words differing only in the use of corresponding upper and lower case letters are considered as the same (see 2.3).
Interestingly, the manual itself doesn't even use upper case:
> For readability of this manual, the reserved words appear in lower case boldface.
So even the Ada standard recognizes the upper case is less readable and undesired.
Case insensitivity also applies to identifiers (variable names, type names, etc.) (sec 2.3), and that is definitely an archaic feature, but the GNAT compiler has the helpful -gnatya style check which enforces the Pascal_Case style: https://gcc.gnu.org/onlinedocs/gcc-4.6.4/gnat_ugn_unw/Style-...
---
Another common misconception is that function/procedure names (designators) must appear after the `end`. That is optional:
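For instance, both of these sketches are legal:

```ada
procedure Greet is
begin
   null;
end Greet;  --  conventional: designator repeated and compiler-checked

procedure Wave is
begin
   null;
end;        --  also legal: the designator after "end" is optional
```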
All caps haven't been a part of the language since Ada83; go check the follow-up spec, Ada95. Maybe you're thinking of Niklaus Wirth's languages, the Modulas and Oberons?
Wirthian languages seemed to keep the 6bit symbol encoding of the original Pascal, where the symbols and keywords ended up case-nonpreserving. Was that changed at any point with Modula or Oberon? I never have seen any code with all caps.
Oberon is case-sensitive and all keywords are all caps. I was curious to see it because I like Object Pascal/Delphi, that was the most distracting part of the language coming from Delphi standards.
> Oberon is case-sensitive and all keywords are all caps. I was curious to see it because I like Object Pascal/Delphi, that was the most distracting part of the language coming from Delphi standards.
Yeah, it's horrible to see and worse to type. I took Oberon-2's grammar and changed it to a more Ada like one for a project I'm working on.
I took an Ada programming course when I worked at a defense contractor in the late 80s. Coming from C and Pascal it seemed familiar enough to learn quickly but was overkill for what we were doing at the time. I left soon after and I have no idea if they ever actually adopted it.
No discussion of Ada is complete without referencing the cost of commercial development licenses. They're expensive. Very expensive.
Any enthusiasm for this language is inevitably quashed upon encountering the $$,$$$ per-seat price of the compilers for the absolute bare-bones x86 version. It's more if you want to target non x86. I pester AdaCore for info every few years and while they've dropped a little bit, they're still out of reach for companies not in the Fortune 500. I'd love to use SPARK, but I don't see that happening any time soon.
Except that the GNAT compiler is part of GCC and GPL with the linking exception. Why not use that?
And I don’t get what you mean with “pestering for info”. I’ve once discussed buying their compiler and I was by no means in a F500 (decided against buying btw).
Ada syntax is closely related to Pascal and Algol. If you like those languages, then you will enjoy Ada. Unfortunately many people prefer the terseness of C++.
We did Ada at UNSW in the early 90's but for only one subject - parallel programming. I think many dining philosophers starved during the assignments...
If it helps make up your mind at all, I was much the same way until I ended up trying Ada. After a night bashing my head against the compiler... well, now every time I see a new language pop up I quickly check a few of the features I've become too attached to from Ada and usually* leave in disappointment.
You can decide for yourself if this is an endorsement or a warning. Or both.
*Exceptions obviously exist for very different niches or paradigms.
Ranges are a pretty big part of it for me, but what really gets me is the ease of creating useful new types. Having to destructure a new type to use it, or create a bunch of pass-through functions kinda ruins the point. I haven't seen any language really have a strong cultural focus on types like Ada does, and I think the ease of creating and using new types is the cause.
I think the best I've seen in this regard is actually Haxe, with some decent macros for automatically getting pass-through functions. It doesn't even make you list them all out explicitly (as I've seen others do), IIRC.
Most languages just go with machine size "types" and no strict typing on those. I've not seen any other languages do what Ada does. I've got my project to do so though.
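For anyone who hasn't seen it, a small sketch of what that looks like in Ada (type names invented):

```ada
procedure Units is
   --  Two distinct integer types: same machine representation,
   --  but the compiler refuses to mix them implicitly.
   type Meters  is new Integer;
   type Seconds is new Integer;

   D : Meters  := 100;
   T : Seconds := 9;
begin
   D := D + 10;      --  fine: Meters plus an integer literal
   --  D := D + T;   --  rejected at compile time: Meters vs Seconds
   null;
end Units;
```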
There's no real reason for it to annoy me so much, but I was just thinking about how I can't stand when people seemingly randomly capitalize words relating to technology without the intention of drawing emphasis to them: GIT, NODE, JIRA, JAVA... why??
This drives me crazy: whenever I type 'emacs' on my iPad, I get 'emasculated' (and the iPad keeps insisting on it even when I try to correct the word).
Thought this was talking about blockchain tech, Cardano (Token $ADA) which I'm having a hard time giving a chance. Was hoping something could change my mind
They still don't have a smart contract platform launched on the mainnet afaik. The only reason to use it in my mind is as an alternative to Ethereum, NEO, EOS, etc. in the smart contract space. Every year that goes by where it does not have its core competitive feature in a production-ready state is a nail in the coffin from my perspective.
I've been watching for a while. The roadmap said it was going to be out in ~a month, a year or two ago. Excuse my skepticism, but I feel like I've been following the eternally shifting roadmap long enough to warrant it. I like Haskell, I flipped through their Plutus example code, I want to use the product! They should just ship what they have.