Hacker News

I think you moved the goal posts and are now talking about philosophy, not math. If you are saying all math is probabilistic even with conventional theorems, how can you predict a future of probabilistic math if it's already here?


> are now talking about philosophy, not math

Call it what you will.

> If you are saying all math is probabilistic even with conventional theorems, how can you predict a future of probabilistic math if it's already here?

One crucial difference is that our confidence level in a given proposition would be made explicit and quantified. I think you are misinterpreting what I'm saying as well: the "experiments" I'm suggesting are not naive Monte Carlo simulations or tests of a non-exhaustive sample of special cases, but the generation of formal logical proofs (although not necessarily limited to that). The uncertainty would arise physically from the sheer size of the computation, and once that barrier is crossed, it's conceivable that other (less rigorous) methods could be added to the battery of techniques, thereby increasing confidence. Also note that the size (amount of information) of the theorems and proofs would be large, and perhaps surpass human comprehension, even after multiple stages of approximation and abstraction. The degree of confidence in such theorems would still be close to certainty. No human-readable proofs would be harmed in the process, except for the false ones!
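The "battery of techniques" idea can be made concrete with a back-of-envelope sketch. If several independent verification methods each have some probability of wrongly accepting a false proof, and their failures are independent, the combined failure probability is the product. All the numbers below are illustrative assumptions, not measured values:

```python
# Sketch: quantifying confidence in a machine-checked theorem by
# combining independent verification methods. Each value is the assumed
# probability that the method wrongly accepts a false proof.
checks = {
    "formal proof, checker A": 1e-12,   # assumed
    "formal proof, checker B": 1e-12,   # independent re-check, assumed
    "heuristic cross-check":   1e-3,    # a less rigorous method, assumed
}

# Under independence, a false theorem survives only if *every* check
# fails, so the individual failure probabilities multiply.
p_all_fail = 1.0
for p in checks.values():
    p_all_fail *= p

confidence = 1.0 - p_all_fail
print(f"P(false theorem accepted): {p_all_fail:.1e}")  # 1.0e-27
```

Even a weak heuristic check lowers the combined failure probability further, which is the sense in which adding less rigorous methods still increases confidence.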


I assume (perhaps incorrectly) that the terabytes of data generated by these computational proofs are created using an algorithm that has been proven correct by a human, so I am further assuming that the potential for uncertainty you are positing comes from machine error (especially since you use the term "physical"). Are you aware of the rate of physical error in modern CPUs, memory, and storage technologies when they are arrayed in a redundant fashion? It is extremely low, and very unlikely to matter even for a data set of this size, especially when compared to human error rates. Furthermore, were this to be replicated by someone else on another set of machines, the probability of the same bit flip occurring due to physical error is astronomically small.
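The replication argument can be quantified with a rough calculation. Assuming an illustrative residual per-bit error rate (after ECC and redundancy; not a measured figure) and roughly a terabyte of proof data, the chance that two independent runs corrupt the very same bit, and so agree on a wrong result, is the following:

```python
# Back-of-envelope estimate with illustrative, assumed numbers.
per_bit_error = 1e-18   # assumed residual undetected error rate per bit
total_bits = 8e12 * 8   # ~1 TB of proof data, expressed in bits

# Expected undetected flips in a single run (union bound):
single_run = per_bit_error * total_bits

# Probability that two independent runs flip the *same* bit:
same_bit_both_runs = total_bits * per_bit_error ** 2

print(f"expected flips per run:         {single_run:.2e}")         # 6.40e-05
print(f"P(same bit flips in both runs): {same_bit_both_runs:.2e}")  # 6.40e-23
```

The point of the sketch is the scaling: a single run's error probability is already tiny, and requiring the same error in an independent replication squares the per-bit rate, driving the joint probability far below any plausible human error rate.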


>the terabytes of data generated by these computational proofs are created using an algorithm that has been proven correct by a human

I was thinking much bigger than that. It probably won't be feasible in the near future, and would of course be directed by very clever agents (mathematicians), themselves having excellent mathematical insight.

>It is extremely low, and very unlikely to matter even for a data set of this size, especially when compared to human error rates.

That's the idea. Replace "practically impossible to discover or know" with "known to an astronomically high degree of certainty." I never suggested otherwise.




