Hacker News

Those sound like unrecoverable errors. Exceptions only make sense for recoverable errors; otherwise, why not just call abort()?


A particular task may not be recoverable, but the process state can be. Photoshop shouldn't abort just because you pulled out the thumb drive it was reading from or writing to. A concurrent network server with 9,999 connected clients shouldn't abort because malloc failed while initializing connection state for the 10,000th.[1] This isn't just a quality-of-implementation issue; it's also a security issue.

[1] Many types of network servers, for example a multimedia streaming server or a SOCKS proxy, only need to allocate dynamic memory during the early phase of a connection. After that, the client can be served indefinitely. If you're writing a library, best practice is to assume your caller can handle malloc failure; if it can't, it can always choose to abort itself.
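A minimal sketch of that library practice, with hypothetical names (`Connection`, `connection_create`): on allocation failure the function cleans up and reports the error to the caller, who can refuse that one client instead of aborting the whole server.

```cpp
#include <cstdlib>

// Hypothetical per-connection state for a streaming server.
// All dynamic memory is allocated up front, during connection setup.
struct Connection {
    int fd;
    char *buffer;
    std::size_t buffer_len;
};

// Returns nullptr on allocation failure instead of aborting, so the
// caller can drop this one client and keep serving the others.
Connection *connection_create(int fd, std::size_t buffer_len) {
    Connection *c = static_cast<Connection *>(std::malloc(sizeof(Connection)));
    if (!c) return nullptr;
    c->buffer = static_cast<char *>(std::malloc(buffer_len));
    if (!c->buffer) {
        std::free(c);  // undo the partial allocation before reporting failure
        return nullptr;
    }
    c->fd = fd;
    c->buffer_len = buffer_len;
    return c;
}

void connection_destroy(Connection *c) {
    if (!c) return;
    std::free(c->buffer);
    std::free(c);
}
```

A caller that wants abort-on-OOM semantics can still get them by checking for nullptr and calling abort() itself; the library just doesn't make that decision for everyone.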


It depends on the context. Just because you can't continue writing to a file that was open doesn't mean the program should crash.

For example, in a user-facing desktop application, maybe you just catch the exception at a high level, log the error, report it to the user, and explain that the action they just attempted failed. They can then try again without having to restart the entire application.
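A sketch of that high-level catch site, with hypothetical names (`save_document`, `run_action`): the low-level code throws, and a single try/catch near the top of the action dispatcher reports the failure while the application keeps running.

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical user action that may throw, e.g. writing to a
// removable drive that was pulled mid-write.
void save_document(const std::string &path) {
    if (path.rfind("/media/", 0) == 0)  // simulate the removable-drive failure
        throw std::runtime_error("device removed while writing " + path);
    // ... actual write would go here ...
}

// High-level dispatch: one catch site logs the error, tells the user the
// action failed, and returns control so they can retry.
bool run_action(const std::string &path) {
    try {
        save_document(path);
        return true;
    } catch (const std::exception &e) {
        std::cerr << "Save failed: " << e.what() << " -- please try again\n";
        return false;  // the application itself keeps running
    }
}
```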


Because only the caller of a function gets to decide whether an error is recoverable. Note that this reasoning applies to all kinds of error handling, not just exceptions.
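One way to see this, as a sketch with hypothetical names (`try_read_first_byte`, `count_readable`): the low-level function only reports that the file couldn't be read; one caller may treat that as fatal, while another, like the one below, skips the file and keeps going.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Reports failure to the caller; it never decides whether the
// error is recoverable.
bool try_read_first_byte(const std::string &path, char *out) {
    std::FILE *f = std::fopen(path.c_str(), "rb");
    if (!f) return false;  // meaning of this failure is the caller's call
    int c = std::fgetc(f);
    std::fclose(f);
    if (c == EOF) return false;
    *out = static_cast<char>(c);
    return true;
}

// This caller decides the error is recoverable: unreadable files are
// skipped. A batch tool could instead decide the same error is fatal.
std::size_t count_readable(const std::vector<std::string> &paths) {
    std::size_t ok = 0;
    char byte;
    for (const auto &p : paths)
        if (try_read_first_byte(p, &byte)) ++ok;
    return ok;
}
```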



