The original question was:
The question concerns Kurt Gödel’s Incompleteness Theorem.
Question: If no formal logic can ever be both complete and consistent, does that mean that humans are illogical beings?
Further elaboration of the question: Imagine a mathematician who sees a statement and intuitively knows that it is true, even though the statement is unprovable.
Conclusion: If the mathematician were a logical being, he would not be able to conclude that the statement is true, because it is unprovable. But since he knows it is true, the mathematician must be an illogical being, because he doesn’t use logic to reach the conclusion; he uses intuition.
Further clarification: Maths is based on axioms, which are statements that are taken to be true but are unprovable. Does this mean that the creator of the axioms is an illogical being? After all, the axioms would never exist if their creator were a logical being.
To take the question further: Are humans constrained by logic in the sense that we could model the universe with one big equation and then calculate what state the universe will be in at any later time? Our actions would then just be a product of a predefined mathematical path, which we don’t know (and might never know).
Or are humans not limited by logic, so that it would be impossible to model the universe with a big equation, because the equation’s ‘foundation’ would rest on something that cannot ‘contain’ an equation for an illogical being?
I’m trying to hint at the question of free will vs. fate.
Mathematician: Human beings are very, very far from perfectly logical beings (by which I mean that our thinking is often at odds with logic, and we routinely draw false conclusions by misprocessing information). There is no need to reference the incompleteness theorem to show that. Wikipedia’s list of fallacies does a nice job of cataloging many of the most common errors in thinking that humans make. I think it is very safe to assume that most people in the world have made at least a handful of the errors in this list, and everyone has made at least one of these errors at some point.
But the fact that we are frequently illogical says nothing about whether human behavior could be predicted by applying the laws of physics to a detailed description of a human’s current state. It may be possible to make such a prediction in theory (though accuracy would be inherently limited by Heisenberg’s uncertainty principle, imperfections in our measurement tools, and other factors). In practice, however, predicting a person’s behavior using physics is absurdly difficult, and it is possible that we will never come close to being able to do so. It is not inconceivable, though, that one day we will be able to approximately model a person’s short-term behavior using a very detailed simulation of their brain. For now, we can only speculate.
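For reference, the limit being alluded to is the standard position-momentum uncertainty relation (a textbook inequality, not something spelled out in the answer above):

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} .
\]

Pinning down a particle's position more precisely necessarily makes its momentum less well defined, so a perfectly exact "current state" of a brain cannot even be measured in principle, quite apart from any practical limitations.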
Physicist: The most frustrating thing about the Incompleteness Theorem is not just that it shows that there are infinitely many true but unprovable statements, but that it provides no means of figuring out which statements those are. We have a lot of statements that we assume (or define) to be true without proof, like "you can't split a point". To make it sound as though there's more to it than that, we call them "axioms". It's not that they're intuitively true; they're defined to be true, and what we call logic follows afterward. When a mathematician says something is "true", it's always "true, given some set of axioms".
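To make the "true but unprovable" part a little more concrete, here is a rough textbook sketch of what the First Incompleteness Theorem actually delivers (the symbols $T$, $G_T$, and $\mathrm{Prov}_T$ are standard notation, not something from the original question or answer): for any consistent, effectively axiomatized theory $T$ strong enough to do basic arithmetic, one can construct a sentence $G_T$ that in effect says "I am not provable in $T$", and $T$ can prove that equivalence without being able to settle $G_T$ either way:

\[
T \vdash \; G_T \leftrightarrow \neg\,\mathrm{Prov}_T\!\left(\ulcorner G_T \urcorner\right),
\qquad
T \nvdash G_T ,
\qquad
T \nvdash \neg G_T .
\]

(Strictly speaking, ruling out a proof of $\neg G_T$ needs a slightly stronger hypothesis than bare consistency, or Rosser's refinement of the construction.) The theorem guarantees that such sentences exist but gives no general procedure for recognizing them, and calling $G_T$ "true" is itself a judgment made relative to a stronger set of axioms, which is exactly the "true, given some set of axioms" point.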
As far as modeling the universe goes: The universe is, on one level, entirely deterministic. If you had access to the total quantum wave function of the universe you could roll it forward in time (deterministically), no problem. However, this isn’t how we experience the universe. We only experience a tiny fraction of this wave function.
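The deterministic evolution being described here is governed by the Schrödinger equation (standard quantum mechanics rather than anything specific to this post; written for a time-independent Hamiltonian $\hat{H}$ purely as an illustration):

\[
i\hbar\,\frac{\partial}{\partial t}\,|\Psi(t)\rangle \;=\; \hat{H}\,|\Psi(t)\rangle
\qquad\Longrightarrow\qquad
|\Psi(t)\rangle \;=\; e^{-i\hat{H}t/\hbar}\,|\Psi(0)\rangle .
\]

Given the state $|\Psi(0)\rangle$ at one moment, the state at every later time is completely determined; the apparent randomness only shows up when a single observer asks which outcome they will experience.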
Basically, different versions of you experience every possible outcome of every event, so it no longer makes any sense to ask “which will happen?”. They all happen (all that are possible), but each version only experiences one. That isn’t terribly clear, but there’s a post about almost exactly this question here.