In computer science, the efficiency of an algorithm is generally expressed as a function of the input size that bounds the maximum number of operations the algorithm must perform. For example, in the case of a linear search through a list of n items, the computational complexity would be written as O(n), meaning that each additional item adds, at worst, one more operation.
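A minimal sketch of such a linear search in Python (the function name and the example list are my own illustration):

```python
def linear_search(items, target):
    """Scan the list front to back; in the worst case every one
    of the n items is examined, hence O(n)."""
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1  # target not present


print(linear_search([4, 8, 15, 16, 23, 42], 23))  # found at index 4
print(linear_search([4, 8, 15, 16, 23, 42], 99))  # -1, after checking all 6 items
```

Searching for 99 forces the loop to touch every element, which is exactly the worst case the O(n) bound describes.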
However, many problems cannot be solved in O(n) time, or even close to it. For example, take, once again, the traveling salesman problem, in which the algorithm must find the shortest route through several cities without ever visiting any of them more than once. The simplest solution, trying every possible route, runs in O(n!) time, while even a better solution using dynamic programming still only runs in O(n²·2ⁿ). Because, as far as anyone can find, no algorithm can solve this problem in polynomial time (that is, O(n^something)), the problem, in its decision form, is considered to be NP-complete.
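The two approaches above can be sketched as follows; the brute-force version tries all (n−1)! orderings, while the dynamic-programming version is the standard Held-Karp recurrence over subsets of cities (function names and the distance-matrix format are my own illustration):

```python
from itertools import combinations, permutations


def tsp_brute_force(dist):
    """Try every ordering of the cities after city 0 and return the
    cheapest round trip: (n-1)! routes, so O(n!) overall."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        cost = sum(dist[route[i]][route[i + 1]] for i in range(n))
        best = min(best, cost)
    return best


def tsp_held_karp(dist):
    """Held-Karp dynamic program: dp[(S, j)] is the cheapest way to
    start at city 0, visit exactly the cities in bitmask S, and end
    at city j. There are O(2^n) subsets times O(n) endpoints, each
    filled in O(n), giving O(n^2 * 2^n)."""
    n = len(dist)
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)  # S without the final city j
                dp[(S, j)] = min(dp[(prev, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = (1 << n) - 2  # every city except 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

Both return the same optimal cost; the DP is exponentially faster than the factorial brute force, but O(n²·2ⁿ) is still nowhere near polynomial.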
Unlike the aforementioned problem, many problems have solutions that do indeed run in polynomial time; the class of such problems is known as P. However, while no one has found polynomial-time solutions to NP-complete problems, this doesn't necessarily mean that none exist. In fact, no one has yet provided a proof that such problems cannot have polynomial-time solutions; there's even a $1,000,000 prize out there for the first person to provide a correct proof either way! As such, I have created the following infallible proof:
P = NP
P/P = NP/P
1 = N
P = (1)P
P = P
By the Reflexive Property of Equality, this is always true.