Many "open problems" in mathematics are not actually that interesting on their own. Take the Collatz conjecture: it is essentially the statement that a certain three-line program terminates on every input. In cases like this, it is not the truth or falsity of the statement that interests people; it is the method of arriving at a proof that yields new insights.
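The "three-line program" in question can be sketched as follows (a minimal Python illustration; the conjecture is precisely that the loop below terminates for every positive starting value):

```python
def collatz_steps(n: int) -> int:
    """Iterate the Collatz map until reaching 1; return the step count."""
    steps = 0
    while n != 1:
        # Halve even numbers; map odd numbers to 3n + 1.
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 has a famously long trajectory
```

No starting value has ever been found for which the loop fails to terminate, yet no proof is known that it always does.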
For example, the graph minor theorem states that a certain order on graphs has no infinite antichain. Classically this has some interesting consequences, but in practice it is not very useful. However, the proof contains some real gems, such as the notion of tree decompositions and the graph structure theorem, which have led to mountains of new results throughout graph theory and related disciplines. Tellingly, the graph minor theorem is a short and simple statement, while the graph structure theorem is a deeply technical one that would not have been conjectured on its own; rather, it was discovered in the process of proving the graph minor theorem.
When proofs creep into petabyte or exabyte territory, at some point you will not be able to distinguish them from the complexity of Mother Nature herself.
Proof used to be a distillation of a complex phenomenon, or a complex logical tangle, into a smooth thread that one could follow and agree that, yes, this is obvious when you put it this way.
Sure, at some point computers will take over the work of doing proofs, just like in a different trade hammers took over fists, or wheels took over feet, or engines took over muscles, hearts and lungs. But at that point we will have to re-examine this concept of "proof".
If and when most of the heavy lifting in proofs is done by computers, I suspect a new definition of proof will emerge, not too far from what you're saying.
Rene Descartes or Isaac Newton would throw a fit if they could see this.
I am stockpiling industrial amounts of popcorn for the (not too distant) future debates around the question: should we take into account the inherently limited capabilities of the human mind when defining the concept of "proof"?
To be sure, myself I am very much a classicist of the old school when it comes to such matters. But this is a brave new world we are slowly but surely moving into.
It means the proof is explanatory, i.e., that it provides a framework of thought that extends our capability to reason about related problems within our very limited cognitive envelope. Pure proofs, like purely predictive models in science, often do not provide this.
Is that particularly valuable, though? It seems like a pretty anthropocentric point of view to value an algorithm as more elegant just because inferior hardware is able to run it.
It's been a historically valuable heuristic -- algorithms/theories that provide cognitive scaffolding allow us not only to solve related problems but to reason about the limits of applicability of that theory, successively guiding discovery of new theory. Obviously our cognitive envelope doesn't just include our own brains but also the tools we use, including computational ones, so it's not as if pure proofs/pure predictions don't also have value for guiding discovery -- but it does require a lot of ingenuity to find ways to exploit them.