
Well, actually it did.

Some of the reasons are:

1) Most applications use very reliable databases as a store. Back in the day, programmers used to write their own one-off B-tree libraries, a practice that largely stopped with mSQL/MySQL, Jet, dbm, and SQLite.
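To illustrate the point, here is a minimal sketch (using Python's stdlib sqlite3; the table and keys are made up) of leaning on SQLite's well-tested B-tree storage instead of a hand-rolled one:

```python
import sqlite3

# SQLite maintains its own battle-tested on-disk B-trees; the application
# never touches a node or a page directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO kv VALUES (?, ?)", ("greeting", "hello"))
conn.commit()
row = conn.execute("SELECT value FROM kv WHERE key = ?", ("greeting",)).fetchone()
print(row[0])  # hello
```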

(A holdover from that era is interviewers asking you to roll your own sort algorithm, which you should almost never do in a production app.)
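Concretely: in production you reach for the standard library's sort, which is heavily tested and tuned, rather than rewriting quicksort yourself. A trivial Python sketch:

```python
# Python's built-in sort (Timsort) is stable, adaptive, and well tested;
# hand-rolling a sort just reintroduces bugs it already solved.
data = [5, 3, 8, 1, 9, 2]
print(sorted(data))                 # [1, 2, 3, 5, 8, 9]
print(sorted(data, reverse=True))   # [9, 8, 5, 3, 2, 1]
```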

2) Some software is fuzzed.
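A toy sketch of the idea: throw random inputs at a parser and check that it only ever fails in the ways you expect. (Real fuzzers like AFL or libFuzzer are coverage-guided; the parser here is invented for illustration.)

```python
import random

def parse_length_prefixed(buf: bytes) -> bytes:
    """Toy parser: first byte is a payload length, the rest is the payload."""
    if not buf:
        raise ValueError("empty input")
    n = buf[0]
    if n > len(buf) - 1:
        raise ValueError("length exceeds buffer")
    return buf[1:1 + n]

rng = random.Random(0)
for _ in range(10_000):
    data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
    try:
        parse_length_prefixed(data)
    except ValueError:
        pass  # expected rejection; any other exception is a bug the fuzzer found
```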

3) Some software has managed memory.

4) Some software has tests.
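For example, even a minimal stdlib unittest suite catches regressions before users do (the slugify helper here is hypothetical):

```python
import unittest

def slugify(title: str) -> str:
    """Hypothetical helper under test: lowercase, whitespace -> single dashes."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  A   B  "), "a-b")

if __name__ == "__main__":
    unittest.main(exit=False)
```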



5) Some software is formally verified.
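To give the flavor of what that means: instead of testing a property on some inputs, you prove it for all of them. A tiny Lean 4 sketch (the `double` function and theorem are invented for illustration):

```lean
-- Prove, not just test, that doubling preserves order for every Nat.
def double (n : Nat) : Nat := n + n

theorem double_monotone (m n : Nat) (h : m ≤ n) :
    double m ≤ double n :=
  Nat.add_le_add h h
```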

Formal verification is huge these days, I would like to think.


Huge? I have worked in this industry for a while now and never heard anyone outside of HN even mention it. It’s decidedly not huge by any definition of huge (outside of academia, maybe?).


The "big production" example that kept being referred to in university was a (Belgian?) metro system. I think it was well suited to formal verification because it had a finite number of valid states, and the cost of failure was sufficient to justify the investment.

EDIT: I now think it was French, source https://www.prover.com/portfolio-items/ratp-paris/



It's used to some extent in the aviation industry for flight-critical software. So maybe huge in some contexts.




