That's not how this works. What you're doing here is defining away the scope of the problem to make O(1) complexity analysis redundant. A bound of O(1) conventionally means that we can exploit some structural component of the input and the given problem to find a solution independent of the input size. If you redefine input size to the more narrow sense of inputs which don't have some sort of structural feature like that, then yes of course nothing is constant time. But then you're just shifting the difficulty of the problem around without much of a gain.
I see how you may think that, but I would argue that I am not redefining anything, as things turn nonsensical without this assumption.
It is not an input if it does not affect your output (I am sure a more formal definition exists; I usually hear "meaningful", hence my use of that word). Without this restriction, any input could be arbitrarily padded, inflating the input size and making an algorithm's complexity appear lower than it is.
I find the even/odd example to be an exceptionally clear case of an algorithm that takes a fixed 1-bit input: its implementation on a little-endian machine does not know the bounds of its input, only the location of the least significant byte (due to byte addressing).
Without a defined bound, such an implementation must be assumed either to take a single byte as input (the byte-sized read being an implementation detail, since the algorithm only needs one bit), or to take the full system memory starting from the given location, regardless of contents (since the word "arbitrary" allows inputs that do not fit in registers).
However, I admit that I am now entering deeper theoretical waters than I am normally comfortable with when it comes to definitional details. I do not, however, find any other definition that lines up with practice.
> I see how you may think that, but I would argue that I am not redefining anything, as things turn nonsensical without this assumption.
The point I'm making is that this statement reduces to the claim that defining an algorithm's worst case time complexity as O(1) or constant time is nonsensical.
Do you disagree that adding two numbers is a constant time operation?
> Do you disagree that adding two numbers is a constant time operation?
It is constant time by the crypto-programming definition in both theory and practice, and O(1) only in theory (bigints are a pain).
I did not try to claim that O(1) is nonsensical. Rather, that certain O(1) variable-input algorithms are in fact simply fixed-input algorithms, because they never consider their input.
I also find these "nonsensical" O(1) algorithms to be outliers.
Do you disagree that adding two numbers is a constant time operation?
I do. A general algorithm for adding two numbers, with at least one represented in the conventional way as a series of digits in some base, has a lower bound of Ω(log n) in the value n, i.e., linear in the number of digits.