
If anything it seems like C++ is the one in decline. C is still the only option for lots of use-cases; C++ is a) succumbing to feature-bloat and b) having different parts of its userbase siphoned off by Go, Rust, Swift, even C# (games; both Unity and Godot).


The only things keeping C around are FOSS UNIX clones and embedded developers who are religiously stuck with C89 and their PIC compiler extensions.

Even Arduino-like boards use C++ nowadays.

Unity and Godot have their graphics engines written in C++.

Rust and Swift depend on LLVM, which is written in C++.

As for Go, it isn't really in the same league of programmer expressiveness.


The "only thing" (FOSS UNIX clones) is kind of a gigantic thing.

The Windows kernel is mostly written in C, too. iOS uses Objective-C.

Modern C++ isn't bad, but robust classes require an enormous effort, together with test suites that approach the size of the test suites required for an untyped Python module.

In practice, segfaults are as prevalent in C++ as they are in C. Type errors via implicit conversions are more frequent in C++ than in C. Full testing of all conversion rules requires a superhuman effort.

So I think well written C is often safer than well written C++, because one does not have to deal with the case explosions introduced by classes and their implicit rules.

The often-quoted safer strings in C++ are just a very tiny part of the equation.

If I had my way, we'd all be writing Ada or Standard ML, but no one listens.


While they are still gigantic, I bet that long-term they will stop being so, thanks to the cloud, type-1 hypervisors, unikernels and serverless computing.

30 years ago no one thought commercial UNIXes would be wiped out by a "toy" UNIX clone project.

The Windows kernel has been moving to C compiled as C++ since Vista introduced support for kernel-mode C++.

iOS uses C++ for its drivers, Metal shaders and compiler toolchain. And the Objective-C implementation itself relies on C++.

Yes, the Trojan horse of C's copy-paste compatibility is also its Achilles heel of security. However, in contrast with C's attitude towards security, C++ provides the library types to actually implement secure code, if one bothers to make use of them.

If I had my way, C would have bounds checking enabled by default, strongly typed enums, and proper arrays and strings, but no one at WG14 listens.


There are lots of performance sensitive tools, language runtimes, and cross-platform libraries written in C.


And even more written in C++. Should we start counting which ones?


Hmm, I’m not sure I’d readily agree.


Starting with Clang and GCC, I bet that for most relevant C projects there is somehow an equivalent written in C++, or even more than one, with the exception of UNIX kernel clones.


Let's start with CPython. What is an equivalent written in C++? (Note: there is an equivalent written in both Java and C#. But not C++.)


I guess Numba might qualify.

Then we have Pyston, and the ill-fated Unladen Swallow.


I have spent the quarantine hacking in C, on FreeCiv.

Still used in lots and lots of places.


The same can be said of any programming language. C is the COBOL of systems programming languages, so as long as there are UNIX FOSS clones it won't go away.


Freeciv fan here. What are you working on in it?


Forked the Attitude AI and am making a rules-based AI.


C++ has been greatly reinvigorated this past decade with language and compiler modernization. Few C++ programmers decamped to Go; most of Go’s adoptees came from Python. Go doesn’t have all that much to offer a C++ programmer.


> Go doesn’t have all that much to offer a C++ programmer.

Concurrency and parallelism without debugging hell is huge.


> Concurrency and parallelism without debugging hell is huge.

There are plenty of libraries in C++ that give you concurrency and parallelism without debugging hell.

Hell in concurrent code comes when you have shared state, and that is valid for both Go and C++. Using a shared Go map in multi-threaded code is a very good example of a deathtrap, even if it's in Go.


C++ will be getting a concurrency model, likely in C++23. As with other language features, the position papers and implementations draw not only on their authors' experience with C++ efforts (libuv, asio, pthreads) but on other languages as well, such as Go.


C++20 offers both with very nice graphical debuggers, which Delve isn't really a match for.


What graphical debuggers for C++ do you recommend?


Visual C++ ones.


That's not really an option for me when I debug C++ that is run on a Linux or Android TV. It also wouldn't be an option when I used to work with C++ that was used on Linux servers. That's a big chunk of C++ use.

While I have your attention, I see your comments here in C++ threads quite often. You seem to like the language and at the same time you have some experience with simpler languages like Oberon. My experience with C++ has been that people happily go on to create giant inheritance hierarchies, with no restraint for multiple inheritance to the point that it's so unmanagable that nobody knows where things go and what actually happens. With all that inheritance, they lose track that the same thing has actually been copied tens of times between different classes and now these tens of classes have code that is basically copy-pasted and who knows what benefit anyone has from all this inheritance. Google engineers are one of the offenders when it comes to overuse of inheritance. Then, modern C++ allegedly fixes some warts, but IME it introduces more of them. E.g. the initialization fiasco (I honestly can't say with 100% confidence what braces do when you initialize std::vector). Then there is this thing that when you create a lambda like so:

  [&] { ... }
And then move it to a function which will create lambdas of this form, just with different captures:

  std::function<void()> create_lambda(args...) {
      return [&] { ... };
  }
The second version will crash later when you run the lambda, because it turns out that "[&]" just copies the stack pointer and isn't syntactic sugar for enumerating all the variables that you capture. This was non-obvious to me, and it was really hard to track down the first time I encountered this problem. I think "[&]" is broken.

Similarly, std::thread crashes in its destructor when the user calls neither std::thread::join nor std::thread::detach. The ambiguity of the syntax... I just don't buy that "modern C++" makes things particularly better. It sometimes makes them worse.

Don't you experience the same fatigue as I do?

EDIT: Oh, I just remembered another thing: at a previous job we discovered that MinGW's implementation of std::random_device just returned 0 every time. It is a major facepalm for MinGW, but also for the C++ standard for allowing this behaviour (yes, it's actually compliant with the standard). And even after it was fixed, there are still systematic issues with this whole randomness API. E.g. you can't actually seed it properly[1].

[1]: https://codingnest.com/generating-random-numbers-using-c-sta...


Well, Android Studio also has the CLion debuggers integrated, which, although not at the same level of visualization capability, are well beyond the bare-bones alternatives.

Fun fact: due to continuous ranting about C++ support from game developers, not only did Google announce at GDC 2020 that they are upping their game regarding C++ tooling in Android Studio, they will also provide Android plugins for Visual Studio.

When people discuss Oberon, they tend to focus on Oberon (1992) or the minimalist approach that Niklaus Wirth has pursued later with Oberon-07.

When I talk about Oberon, I think beyond my early experiences with Native Oberon, and also include Oberon-2, Component Pascal, Zonnon and Active Oberon in the mix. The Oberon language lineage, not all of it done directly by Wirth.

In that sense, for me Oberon is what ETHZ nowadays uses as Oberon, namely Active Oberon.

http://cas.inf.ethz.ch/projects/a2/repository/raw/trunk/Lang...

You will find that minimalist Oberon is long gone and that the language in a certain sense compares to C# and D in complexity, with support for generics, untraced references, exceptions, linkage models and inline Assembly.

Because that is the problem with minimalist languages: library boilerplate just grows to emulate what should have been in the language to start with. In Oberon's case, Active Oberon is the result of many years of research and student theses doing OS implementations in Oberon, while noticing that stuff like generics, untraced references and inline Assembly are quite useful.

As for C++, while I still enjoy using it, nowadays it is mostly a tool for OS integration libraries and GPU shaders (HLSL/Metal/CUDA); I'd rather write the main code in .NET languages, Java or some other managed language.


> You will find that minimalist Oberon is long gone and that the language in a certain sense compares to C# and D in complexity, with support for generics, untraced references, exceptions, linkage models and inline Assembly.

Generics are something I can't live without, to be honest. But I don't like C++'s implementation of generics: the error messages it generates and the whole SFINAE business make for a horrible programmer experience. Rust has the best implementation I currently know of. Inline assembly sure is useful.

Anyway, thanks for the extensive answer. I think I understand your point of view better now.


> I think "[&]" is broken.

I feel like this is consistent with the "pay for what you use" design. Why capture all variables in scope as copies when you can just capture them all as references (which might go out of scope)? If you truly need to capture other stuff, you can do so with more syntax. If "[&]" did what you describe, then "wtf, why is it saying I have a deleted copy constructor for some value I'm not using" or "wtf, why is my code so slow because of unnecessary copies of large objects" would be all sorts of fun.


"&" captures references, not copies. I would like it to be just syntactic sugar for "&x, &y, ..." that only includes the variables I actually use in the body of the lambda. There is no performance cost to that (unless a crash is somehow a performance win --- hey, the program finished earlier, I guess it's faster...).


But the complaint was about lifetimes. You seem to want copies (if I understood your post correctly).


> it turns out that "[&]" just copies the stack pointer and isn't syntactic sugar for enumerating all the variables that you capture

It doesn't copy the stack pointer; it enumerates all the variables you use and captures a reference to each (I think the &, as used in address-of and references, kind of gives it away). [=] {} unsurprisingly captures by value.

Some compilers do warn when you return lambdas capturing local vars by reference, but it is not super reliable.


At one job I wrote the equivalent of the following (it's just that I invoked create_lambda more times):

  #include <functional>
  #include <iostream>
  
  std::function<void()> create_lambda(const int& x, const int& y) {
      return [&] {
          std::cout << "x: " << x << "\n";
          std::cout << "y: " << y << "\n";
      };
  }
  
  int main() {
      int x = 5;
      int y = 7;
  
      const auto f = create_lambda(x, y);
  
      f();
  }
And it crashed. There were no local variables in create_lambda, and the lambda crashed in the same function that create_lambda was called from. When I try to reproduce it now, it doesn't crash. Not sure what's different, but this is basically what happened.


> std::thread crashing in destructor when the user doesn't call std::thread::join, nor std::thread::detach.

This has been 'fixed' with jthread. FWIW I think it is a mistake: blocking in a destructor can easily lead to deadlocks (I have first-hand experience; it is not just a theoretical issue). Detaching is not an option either.


Swift? I didn't know it had much use outside of mac/iOS. It seems like a cool language otherwise.


It just recently got official support for Windows and Linux, and it had low-key Linux support for a while before that. I've heard of people writing web servers in it. Like Go, it compiles directly to native machine code.


Isn't it still slower than Go because of reference counting?


Depends what you mean by slow. Go is garbage-collected, Swift is reference-counted. Each has advantages and disadvantages. But also, that doesn't apply to stack-allocated values. It's not as simple as one being faster than the other.



