In addition to what everyone else has said, that would presumably prevent you from writing statements like "a++" or "a += 1" - instead you'd have to write "a = (a + 1) as u8", which seems like it would get very tedious, even if it makes the possibility of overflow much clearer.
It wouldn't produce an exception; it would not compile. The nice thing is that you can avoid range checking at runtime.
Exactly, Ada's modular types would be a good option in this case, if that is what you want (my feeling: most likely not, unless you are doing some low-level stuff). An alternative would be to rewrite the for loop in a functional or range-based style.
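To make the range-based alternative concrete, here is a minimal Rust sketch (the summing loop body is just a placeholder of my own, not from the discussion):

```rust
// Sketch: replacing a manually incremented counter with a range
// iterator, so there is no increment left that could overflow.
fn main() {
    // Index-based style: `i` must be incremented by hand, and a
    // u8 counter would wrap if the bound were pushed to 256.
    let mut sum: u32 = 0;
    let mut i: u8 = 0;
    while i < 255 {
        sum += (i as u32) * (i as u32);
        i += 1; // would overflow one step past 255
    }

    // Range-based style: no mutable counter to overflow.
    let sum2: u32 = (0u32..255).map(|i| i * i).sum();
    assert_eq!(sum, sum2);
}
```

The point is that the overflow hazard lives in the mutable counter; once the iteration is expressed as a range, the question of what `i += 1` does at the type's maximum never arises.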
In algorithmic code, you almost never want overflow. If you have a little function to calculate something, you want the intermediate variables to be big enough to perform the calculation, and in the end you cast it down to the size needed (maybe the compiler can do it, but maybe you know from some mathematical principles that the number is in a certain range and do it manually). In any case, I would want to be warned by the compiler if I am:
1. losing precision
2. performing a wrong calculation (overflowing)
3. accidentally losing performance (using bignums when avoidable)
1 and 2 can happen in C if you are not careful. 3 could theoretically happen in Python, I guess, but it handles the int <-> bignum transition transparently enough that it was never an issue for me.
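The "wide intermediates, narrow result" pattern described above can be sketched in Rust; the `average` function here is a made-up example, not from the thread:

```rust
// Sketch: do the arithmetic in a wider type so intermediates
// cannot overflow, then cast down once a mathematical argument
// guarantees the result fits.
fn average(a: u8, b: u8) -> u8 {
    // Widen before adding: 255 + 255 fits easily in u16,
    // while the naive u8 sum would overflow.
    let sum = a as u16 + b as u16;
    // The narrowing cast is justified mathematically: the average
    // of two values in 0..=255 is itself in 0..=255.
    (sum / 2) as u8
}

fn main() {
    assert_eq!(average(250, 254), 252);
    assert_eq!(average(255, 255), 255);
}
```

In `average(250, 254)`, the u16 intermediate is 504, which no u8 could hold, yet the final result 252 fits; that is exactly the "big enough intermediates, cast down at the end" discipline.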
You could increment integers so long as you make it clear what you will do in the overflow case. Either use bigints with no overflow, specify that you do in fact want modular behavior, or specify what you want to do when your fixed width int overflows upon increment. That seems eminently sensible, instead of having overflow just sit around as a silent gotcha enabled by default everywhere.
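Rust's standard integer methods are one concrete instance of "say what you want on overflow"; a minimal sketch:

```rust
// Each method names its overflow policy explicitly, instead of
// leaving overflow as a silent default.
fn main() {
    let a: u8 = 255;

    // Explicitly requested modular behavior:
    assert_eq!(a.wrapping_add(1), 0);

    // Detect overflow and handle it (returns None on overflow):
    assert_eq!(a.checked_add(1), None);

    // Clamp at the type's maximum instead of wrapping:
    assert_eq!(a.saturating_add(1), 255);
}
```

Plain `a + 1` on a u8 at 255 is then the leftover case: it panics in debug builds and wraps in release builds, which is precisely the "silent gotcha" the comment argues against leaving as the default.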
IMO, that looks ugly, but that is probably a matter of getting used to it.
Compared to the %256 option, it has the advantage that, if you change the type of a, you won’t have to remember to change the modulus.
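A quick Rust sketch of that advantage, assuming wrapping behavior is requested through the type's own method rather than a hard-coded modulus:

```rust
// A wrapping operation tied to the type adapts when the type
// changes; a literal `% 256` would silently become wrong for u16.
fn main() {
    let a: u8 = 255;
    assert_eq!(a.wrapping_add(1), 0); // wraps at 256

    // Change the type and the same expression wraps at 65536,
    // with no modulus to remember to update:
    let b: u16 = 255;
    assert_eq!(b.wrapping_add(1), 256);
}
```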
They also chose not to make modular integers separate types. That makes mixing ‘normal’ and modular arithmetic on integers easier (with separate modular types, you’d have to convert between regular and modular integers all the time). (Edit: that is also consistent with the bitwise operators working on regular ints, rather than requiring a special “set of 8/16/32/…” type that’s implemented as an integer.)
I wouldn’t know how common such mixing is and, hence, whether that is a good choice.