
"Constant time in the exponent" is nonsense, I'm afraid.

A bounded exponent on n would be the same thing as "polynomial time", but the exponent that's bounded here (and indeed arbitrarily close to 2 for large n) is on log n, not on n.

The running time of the algorithm presented in this paper is n (log n)^(2+o(1)). This ...

... is not constant; it increases with n, a bit faster than linearly.

... has o(1), not O(1), in the exponent; the two mean different things. O(1) means "bounded", o(1) means "tends to zero as n grows". So the claim isn't merely that the running time is <= n times some fixed polynomial in log n; it's that for every eps > 0 the running time is eventually <= n (log n)^(2+eps) -- the exponent gets arbitrarily close to 2.
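To make the o(1) idea concrete, here's a quick sketch in Python with a hypothetical o(1) term, 1/log(log n) (the paper doesn't specify which term it is; any function tending to zero works):

```python
import math

# A hypothetical o(1) term: 1 / log(log n). It is not constant,
# but it shrinks toward 0 as n grows, so an exponent of the form
# 2 + o(1) gets arbitrarily close to 2 for large n.
for n in (2**10, 2**100, 2**1000):
    print(1 / math.log(math.log(n)))
```

Each printed value is smaller than the last, which is exactly what o(1) promises; an O(1) term would only need to stay bounded, not vanish.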

... doesn't in fact depend mostly on that exponent; the most important factor is the n, not the (log n)^(2+o(1)). If that 2 were a 100, the n factor would still (asymptotically) matter more.

For instance, suppose n=2^100 and our logs are to base 2. Then the running time of this algorithm is approximately some constant times 2^100 (that's the n factor) times 100^2 (that's the factor with log n in it). 2^100 is much, much, much bigger than 100^2.
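The arithmetic above is easy to check directly (a sketch in Python, ignoring the o(1) term and the unknown constant factor, since only the growth rates matter):

```python
import math

n = 2 ** 100
polylog_factor = math.log2(n) ** 2  # (log2 n)^2 = 100^2 = 10,000

# The n factor dwarfs the polylog factor by about 26 orders of magnitude.
print(f"{n:.3e}")               # ~1.268e+30
print(f"{polylog_factor:.3e}")  # 1.000e+04
```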


