The Complex Road From P to NP: The Magic of the Solution Space
2024-08-10 | Source: hackernoon.com

P (polynomial time) vs. NP (nondeterministic polynomial time, often loosely read as "not solvable in polynomial time") is a question about the underlying complexity of specific problem spaces. A P problem is one whose solution time grows polynomially with the size of the input. Say we have an array of numbers, [a, b, c, d, e, f, g], and the task is to sort them. A simple algorithm goes through the numbers one by one, and if the current number is smaller than the one before it (when sorting in ascending order), the number is moved one space back. The more numbers we add to the array, the longer a full sort takes, but that increase is gradual and predictable.
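
To make that concrete, here is a minimal Python sketch of the pass-by-pass method described above (essentially insertion sort); the sample numbers are arbitrary:

```python
# A minimal sketch of the pass-by-pass sort described above
# (essentially insertion sort): each new number is walked backwards
# until it sits in order. Runtime grows roughly with n^2,
# which is polynomial, hence a P problem.

def insertion_sort(numbers):
    numbers = list(numbers)  # work on a copy
    for i in range(1, len(numbers)):
        j = i
        # move the current number back while it is smaller
        # than the one before it
        while j > 0 and numbers[j] < numbers[j - 1]:
            numbers[j], numbers[j - 1] = numbers[j - 1], numbers[j]
            j -= 1
    return numbers

print(insertion_sort([5, 2, 7, 1, 9, 3, 8]))  # [1, 2, 3, 5, 7, 8, 9]
```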

When it comes to NP problems, the complexity is far greater. A classic example is the Traveling Salesman Problem (TSP): a map with a certain number of cities is given, say [a, b, c, d, e, f, g], and the objective is to find the shortest route that visits all of them. Here, each added city makes the time required to find the provably shortest route grow drastically.

For a better intuition, imagine that in a P problem the time grows additively: each new data point adds its position in the dataset to the total. In our sorting problem, with one number the time is 1 (not 0, because a final check is still required); adding a second number brings it to 1 + 2 = 3; the third (+3) brings it to 6, the fourth (+4) to 10, and so on. These are the triangular numbers, n(n + 1)/2, which grow quadratically.

For NP problems, in the case of TSP, imagine instead that each added city multiplies the current required time by that city's number. With a single city the time is 1; with two cities it becomes 1 x 2 = 2; with three, 2 x 3 = 6; with four, 6 x 4 = 24, and so on: factorial growth. This is not a realistic timing model, of course, but it is a great way to visualize how much faster the time explodes as an NP problem's dataset grows compared to a P problem's.
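
A few lines of Python make the contrast visible. Note that these are the toy growth rules from the two paragraphs above, not measured runtimes:

```python
# Illustrative only: compare the additive growth sketched for the
# sorting example (triangular numbers, n * (n + 1) / 2) with the
# multiplicative growth sketched for TSP (factorial, n!).

import math

print(f"{'n':>3} {'sort-like n(n+1)/2':>20} {'TSP-like n!':>15}")
for n in range(1, 11):
    triangular = n * (n + 1) // 2
    print(f"{n:>3} {triangular:>20} {math.factorial(n):>15}")
# by n = 10 the left column reads 55, the right one 3,628,800
```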

Now that we understand the two types of problems, the question we pose is this: is P equal to NP (meaning that with the right tools and algorithms we could efficiently solve every problem, NP or P, in polynomial time), or are they different (meaning that complexity is an inherent property of the problem space, and so there exist problems we can never fully tackle, no matter how advanced our knowledge and understanding become)?

Most of those familiar with the P vs. NP problem suggest that the two are inherently different, and that there are problems we may never be able to solve efficiently. But then, what are our curiosity and understanding for, if not to break that separation between the two problem types?

In the following parts, I will present my view and the angles from which I have found to look at this problem. By the end of the article, I hope to have presented a clear, holistic understanding of these two intertwined problem spaces.

Part 1: Simplicity Against Complexity

In order to better understand the nature of these problems, we are going to walk the roads of philosophy for a bit and ask ourselves about the nature of simplicity and complexity. After all, if complexity is fundamentally different from simplicity, we might reasonably assume that there are problems (NP) for which complex solution spaces (quantum superposition, for instance) are required in order to tackle the problem in anything like polynomial time.

In the case of the TSP, a complex solution space would denote a solution path that takes into account all the cities and their respective positions, and keeps hold of all of them in order to find a reasonable tie between the cities. But even if we account for all the required weights, the city from which we start seems as important as the walk the algorithm performs to find the most efficient route, right? If we start from city a, the most efficient path takes one shape; if we start from city b, it looks different.

Or maybe that reasoning is wrong. After all, the most efficient path is unique, and it is the most efficient simply because it represents the shortest tie between all the cities. We are not looking for the shortest path from city a to city b, but for the shortest path that ties all the cities together. In this view, we could visualize the shortest route as a kind of "collapsed" state in which the total distance between the cities is minimal.

If we use "brute-force" algorithms that form all possible paths and then compare them, every path is the result of the same brute-force reasoning, so each instance of forming a path is ultimately a linear piece of work. And if we were to stumble, by chance, on the shortest route early, the "brute-force" and "chance" aspects of the algorithm would have no way of knowing that this route was the shortest without checking all the rest.
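
To make the brute-force picture concrete, here is a minimal Python sketch that enumerates every possible tour; the city coordinates are invented for illustration:

```python
# A brute-force TSP sketch: enumerate every ordering of the cities
# and keep the shortest round trip. The permutation count grows
# factorially, which is exactly why this approach collapses as the
# map grows. City coordinates are made up for illustration.

import itertools
import math

cities = {"a": (0, 0), "b": (2, 3), "c": (5, 1), "d": (6, 4), "e": (1, 5)}

def tour_length(order):
    # total distance of the closed loop visiting cities in `order`
    total = 0.0
    for here, there in zip(order, order[1:] + order[:1]):
        (x1, y1), (x2, y2) = cities[here], cities[there]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

best = min(itertools.permutations(cities), key=tour_length)
print(best, round(tour_length(best), 2))  # 5 cities -> 120 tours checked
```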

Now, this approach looks like it could benefit from the powers of machine learning, which is ultimately trained to make approximations. Imagine training a model on maps of cities, each labeled with the shortest path drawn between them. Instead of "brute-force" algorithms, we could then switch to "educated-guess" algorithms that show a tangible increase in efficiency.

Ah, but then we would still require an absolute way to arrive at that shortest path. And as of now, there is no way of knowing with 100% certainty that the path in front of us is the shortest, short of comparing it against all the rest. In this sense, heuristics and other mathematical models aim to supply the logical foundation that would tell us which path is most efficient. Yet those approaches are still incomplete, and we do not even know whether, once complete, they would give us the truly optimal answer or just a "brute-educated" guess.
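
As a minimal sketch of such an "educated guess," here is the nearest-neighbour heuristic, one of the simplest for TSP; the city coordinates are the same made-up ones as in the brute-force sketch:

```python
# An "educated guess" at its simplest: the nearest-neighbour
# heuristic. It runs in polynomial time but carries no certificate
# that its tour is the shortest -- exactly the gap the paragraph
# above points at. Same made-up coordinates as before.

import math

cities = {"a": (0, 0), "b": (2, 3), "c": (5, 1), "d": (6, 4), "e": (1, 5)}

def dist(u, v):
    (x1, y1), (x2, y2) = cities[u], cities[v]
    return math.hypot(x2 - x1, y2 - y1)

def nearest_neighbour(start="a"):
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        # greedily hop to the closest city not yet visited
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbour()
length = sum(dist(a, b) for a, b in zip(tour, tour[1:] + tour[:1]))
print(tour, round(length, 2))  # fast, but optimality is unproven
```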

Part 2: Oversimplified

I may have gone a bit astray from the topic of simplicity and complexity. Or at least from tackling them in a truly philosophical way. All I really did was ask whether we could somehow reach a sufficient level of complexity in our approach, and whether we would be handed a "yes" once we found the right solution. But since the shortest path exists on any map with any number of cities, it must have specific values and specific details that make it stand out from the crowd, right?

Or maybe those details only surface after endless loops through different paths, in the form of total traveled distance. But that may simply be irrational to assume. After all, the shortest path is the shortest no matter how many times we traverse it. Indeed, the more loops we compare, the better we understand which is shorter and which is longer. That kind of repetition may only be required when we want to differentiate between atomically small differences with insufficiently accurate measurement tools.

It would make sense, then, that the problem here is not finding the truth of the inquiry but the capability of the tools we use to test it. When it comes to cutting a tree, we use an axe. When it comes to listening to music, we use our headsets. When it comes to formalizing and understanding mathematics, we use logically built tools.

And maybe that is the inherent beauty found in mathematics. We take something simple, merge it with another simple thing, and together they form something complex that lets us move diagonally, for example, or draw a perfect circle. But then, how many such simple tools can be bound to one another? At what point can we mix two complex tools together? And if we can, do we reach that higher complex tool only by merging the two lower complexes, or also by merging all the lower simple tools that form them?

Heuristics, in this sense, are like those tools in whose interplay we might find a way to answer, with 100% accuracy, whether we have found the shortest path between the cities. In this view, heuristics act as a solution-prover; to find the solution in the first place, we may require other approaches. At the end of the day, the roots of P vs. NP are so deeply tied to the nature of complexity itself that we would have to ask whether we could walk two (or more) distinct paths in a single linear time.

Part 3: The Fractal Nature of Complexity

It is an interesting thing to be out here. Out... there. After a thirty-minute break from writing, a break used to place the ideas that follow in the best order and at the most understandable scale. And the fact is that yes, the ideas are clearer than ever; they even collapsed into a fully closing cycle. And then that cycle turned out to be a dot, a shining part of the whole, and it is not shining because it is special in any way with regard to the whole system, but because it is the current space, the current understanding, the place in which we sit. It is a place in which, when we look up, we find both complexity and simplicity. When we look down, we find the same. When we look sideways, it is no different.

In this way, it is so true that we find what we search for. If we look for the nature of NP, the ever-complex, we indeed find it, in its utmost complex nature. We also strip simplicity from it in the process, making sure we throw away the ladder after we climb it. But if we instead look for ways to reconcile the two views, to merge P and NP as mere parts of a holistic understanding, one in which, for a problem to exist, it requires a clear solution, then we can understand that, with enough striving and dedication, a solution can ultimately be found. And no matter how elusive that solution may be, there is always the potential to reach it in the most fluid and tangible way.

And now, to eliminate any confusion from words, I want to say plainly that I advocate that P ultimately equals NP. And that is simply because, if we haven't found the solution, it doesn't mean it is not there, waiting for us to stumble upon it. And if you call me optimistic, I will say that I see myself as realistic.

Maybe I wrote the conclusion before even ending the article. But then, I like this style. It gives a sense of a "living" text, where I am not just building idea upon idea while hoping I expressed myself clearly by the end.

The nature of scientific papers is that you first place your abstract, something like "P equals NP because simplicity and complexity are intertwined," after which you continue to express your points and thoughts about why and how this is true.

An article, however, aims to make the reader understand something; it is akin to teaching. Scientific research, by contrast, is written so that people who already know the subject can weigh in on the "reasoning" presented; and if someone holds knowledge that can tie all those points together, and more, then that "reasoning" is reframed, logically completed, scientifically rooted, and becomes a "discovery."

Imagine merging both styles together. What would result? Something like a gradual growth of ideas, where insight after insight arises. The abstract would lose meaning in this sense, as not even the writer would know where the path leads. The writer may have a vague idea or a self-imposed starting point, like proving that P equals NP, or that P is different from NP. Later, in that building of insights, a small oversight may point in a completely different direction, and then trying to walk it back without deleting the last argument will only result in confusion.

Just like drawing back to my initial structure before intentionally reframing Part 3 into the conclusion, which I held onto and found beautiful to place there. But how would I return? I mean, you, as a reader, may have built idea after idea, trying to grasp an overall form or shape. But then, that's the beauty of it all, isn't it? We can take breaks from our logical reasoning, let creativity bloom up our potential, and then start again, fresh, with new perspectives and more efficient ways to arrive at the answer. In this sense, Part 3 was just a break from it all. And I will take another break now, just for a short walk. After which, we will dwell on Part 4.

Part 4: Inside a Fractal

When we think about a fractal, we imagine a self-repeating pattern that holds the same properties at all scales. The Mandelbrot set, for example, is a fractal that resembles something akin to a cell, and when you zoom into that cell, you find similar structures over and over again. Yet those exactly similar cell-like structures are not as common as you might think. The fractal is, in the end, so magnificent that you can see each detail that makes up that cell with extreme clarity as you zoom in more and more.

There are parts of it similar to blades of grass, and other parts similar to the curvature of light you see when light passes behind a black hole, among many, many other interesting aspects. And as you zoom in further, you will eventually reach the same initial cell, reiterated at an atomically small scale relative to the starting one. And you can keep zooming in from there.

So essentially, a fractal is something like a road from a simple P problem which, when seen in all its potential complexity, becomes a mind-bending NP-looking problem that seems unsolvable simply due to the astronomical amount of computational power required, even though the path to solve it is a linear one. You can, for example, make a P problem out of "draw the Mandelbrot set at 3000x zoom," and the solution is linear: the program simply walks the fractal space, collects the data piece by piece, and copies it onto the other paper. But the time required to finish the full drawing may simply be enormous, unless we give the program enough memory and efficiency to hold all of it and paste it just as efficiently.
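
As a sketch of that "walk the space piece by piece" idea, here is a tiny escape-time renderer; the resolution and iteration cap are arbitrary choices:

```python
# A sketch of walking the fractal space piece by piece: a tiny
# escape-time renderer for the Mandelbrot set. Each pixel is a
# linear, independent computation -- the work is enormous only
# because of how many pixels (and how much depth) we ask for.

def mandelbrot_char(c, max_iter=60):
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:              # escaped: c is outside the set
            return " .:-=+*#"[min(i // 8, 7)]
    return "@"                      # still bounded: treat as inside

for im in range(20, -21, -2):       # rows, top to bottom
    row = ""
    for re in range(-40, 21):       # columns, left to right
        row += mandelbrot_char(complex(re / 20, im / 20))
    print(row)
```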

Now, would a problem such as "copy the Mandelbrot set completely onto this paper" be considered an NP problem? After all, given the infinite zoom we can achieve, it would take an infinite amount of time to finish even the first pixel, wouldn't it? But then, how do we see the fractal at any scale at all, if beneath it sits an infinite complexity still to be drawn? Maybe the algorithm that draws the fractal creates the first image and then keeps working indefinitely toward greater and greater levels of complexity and depth. And that makes you wonder: what if, from a certain semi-infinite depth (or complexity), we find a different figure? Or maybe we would pass a point from which the Mandelbrot fractal starts to be represented in other ways, perhaps opposite ones.

In the face of such mind-bending questions, we surely feel like we need a break, as if our brains had simply overloaded trying to process those scales. But then, we are not doing scientific research here; our goal is simply to explore the complexity and enormity of it all, not to process it. Maybe it becomes easier once you form relative weights, or find different types of infinities you can use to make sense of the scale of things.

For example, if my assumption is that on the other side of infinity the Mandelbrot set appears mirrored, then it would make sense for the mirroring effect to start occurring from some semi-infinite zoom (or depth). But that semi-infinity is not real. Infinity, in the true sense, would suggest that the Mandelbrot set holds every state, shape, and form that has ever existed, can ever exist, and will ever exist. But there are boundaries, aren't there? It is clear that this fractal is just a pattern. A pattern that may take many shapes, but remains bound within itself, within its own structure and rules. And in any case, that "just a pattern" is incredibly beautiful and complex in and of itself.

Part 5: Complexity

As I previously stated, in the building of ideas we may arrive at a point where we are simply drawn to conclude the opposite of our starting assumption. I mean, how could one believe that P equals NP, and that NP problems simply do not exist, after all this explosion of complexity? But as I said in the last article, when we express ideas we merely "point" to a certain concept. And as a required building block, the enormity of the complexity found in the fractal had to be offered as a potential zenith of complexity, a peak of understanding when it comes to defining what three-dimensional infinity really looks like. And now that we have all that infinity around us, where can we go?

Where we always go when we want to think. We take a vantage point and look at the very first iteration of the fractal. All that three-dimensional infinity sits in front of us. We reason that if we throw a needle and watch where it lands, we may face a rather strange phenomenon: the smaller the tip of the needle, the longer it takes to fall, the more expanded the ground-space becomes, and at the same time, the more "chaotic," or less predictable, the point of impact. But could we, with enough infinitely small needles, reconstruct the whole image of the fractal, no matter how much space and how many needles it takes? After all, from this vantage point we can clearly see the limits, and unless there is perfect self-similarity at play, some loss must occur after each iteration.
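
The needle-throwing image can be made literal as Monte Carlo sampling. A sketch, under the assumption that "where the needle lands" means testing random points for membership in the set (the sampling box and iteration cap are arbitrary):

```python
# The needle-throwing thought experiment, made literal: throw random
# "needles" (points) at the plane and count how many land inside the
# Mandelbrot set. More needles give a sharper picture -- here, a
# rough estimate of the set's area.

import random

def in_mandelbrot(c, max_iter=200):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

random.seed(0)
for needles in (1_000, 10_000, 100_000):
    hits = sum(
        in_mandelbrot(complex(random.uniform(-2, 0.75),
                              random.uniform(-1.25, 1.25)))
        for _ in range(needles))
    box_area = 2.75 * 2.5            # area of the sampling box
    print(needles, "needles ->", round(box_area * hits / needles, 3))
```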

But complexity reaches even beyond this map. For each needle size, a unique map is formed. But aren't the smaller-needle maps merely a more complex (and higher-quality) representation of the bigger-needle maps? In this sense, complexity represents a kind of unfolding of a more detailed space, a space which holds polynomial roads of exploration within it. And contrary to the usual assumption, this expansion of complexity allows for a more accurate and efficient exploration than a lack of complexity would.

For example, if instead of the infinitely complex map of the fractal we held a less complex map, and we wanted to reach a certain point that exists only on the more complex map, we would first choose a point on the less complex map and zoom in on it, revealing the more complex point we were after.
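
A minimal sketch of that coarse-to-fine idea, assuming the "maps" are grids over the plane and each zoom halves the window (the target point is arbitrary):

```python
# Coarse map to fine map: to reach a precise point, we never search
# the fine map directly. We pick the right cell on a coarse map,
# zoom into it, and repeat -- each zoom doubles the precision, so
# the cost grows with the depth wanted, not with the number of
# fine-map points.

target = (-0.743, 0.131)                        # the fine point we want
x_lo, x_hi, y_lo, y_hi = -2.0, 1.0, -1.5, 1.5   # the coarse map

for depth in range(20):
    x_mid, y_mid = (x_lo + x_hi) / 2, (y_lo + y_hi) / 2
    # choose the quadrant of the current map that holds the target
    if target[0] < x_mid: x_hi = x_mid
    else:                 x_lo = x_mid
    if target[1] < y_mid: y_hi = y_mid
    else:                 y_lo = y_mid

print("20 zooms pin the point to a box of width", x_hi - x_lo)
```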

And this idea turns the whole NP space on its head, while also acknowledging that the polynomial time required to solve specific problems could still span thousands of years, even along a polynomial road. And in all honesty, maybe the next question is whether quantum computing could hold a kind of superposition able to cut the time from x to x divided by the number of qubits used.
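
One known anchor for this speculation: Grover's algorithm searches N items in about (pi/4) * sqrt(N) steps, a quadratic speedup rather than a clean division by the qubit count. A classical state-vector simulation (plain Python, no quantum hardware) sketches the effect:

```python
# A classical simulation of Grover's search: find one marked item
# among N in ~ (pi/4) * sqrt(N) iterations instead of ~ N checks.

import math

N, marked = 256, 42
state = [1 / math.sqrt(N)] * N                   # uniform superposition

iterations = round(math.pi / 4 * math.sqrt(N))   # 13 for N = 256
for _ in range(iterations):
    state[marked] = -state[marked]               # oracle: flip marked sign
    mean = sum(state) / N                        # diffusion: invert about mean
    state = [2 * mean - a for a in state]

print(iterations, "iterations; P(marked) =", round(state[marked] ** 2, 3))
```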

Part 6: Conclusions and Thoughts

Before delving into the possible implications of the quantum approach, I find it fitting to lay out the map of claims I have made so far.

  • P and NP are one and the same, meaning that all problems could eventually be solved in polynomial time once we find the right problem space and the right solution space

  • NP problems are more like extensive-polynomial problems, where their solution space is so massive and complex that finding a solution would take a long time

  • Complexity and simplicity are interwoven, and in their interplay, our perspective and achieved depth are what make a thing appear as one or the other

  • The complex tools we build are used to solve detailed problem spaces more efficiently, using the interplay between simplicity and complexity to their advantage

And after all is said and done, when we step into the realm of quantum computing, things might take a radical shift. Here are some possible avenues of exploration.

  • Despite all that is said here, quantum computing could have its own unique NP problems, inherently different from anything classical computing has to offer

  • The nature of quantum computing could at the same time be a complementary, intertwined aspect of classical computing, ultimately offering the tools that quantum NP problems require to be solved polynomially

  • Those quantum tools could work alongside classical algorithms to provide an even greater efficiency, one that promises to bypass the maximum efficiency of either paradigm

  • Current quantum computing algorithms (I don't know how they are built) might require classical computing aspects as a precondition for functioning. In that case, we would need to catalogue classical and quantum perspectives as two distinct types of computing in order to better understand and merge them

Part 7: Quantum-Barrier

Given the enormous potential hidden in quantum power, the systems that hold together our privacy are under constant threat. ZKP (zero-knowledge proof) systems offer a possibly viable escape route. After all, their basis is the presumption that the holder of the key gives away no information whatsoever about the key during the unlocking process. In this view, the key is hidden in plain sight, right under the nose of those who wish to interfere and steal it. And at the same time, the foundations on which such a system is built and operates are capable of hiding the entire system from the sight of outsiders.
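
As a toy illustration of that presumption, here is one round of the Schnorr identification protocol, a classic zero-knowledge-style construction; the group parameters are deliberately tiny schoolbook numbers, not anything deployable:

```python
# A toy sketch of "prove you hold the key without revealing it":
# one round of Schnorr identification. The parameters below are
# tiny and illustrative; real systems use large, carefully chosen
# groups.

import random

p, q, g = 23, 11, 2           # toy group: g = 2 has order q = 11 mod 23
x = random.randrange(1, q)    # the prover's secret key
y = pow(g, x, p)              # the public key, y = g^x mod p

r = random.randrange(1, q)    # prover's one-time secret nonce
t = pow(g, r, p)              # commitment, sent to the verifier
c = random.randrange(q)       # verifier's random challenge
s = (r + c * x) % q           # response: blinds x with the nonce r

# the verifier checks g^s == t * y^c (mod p) without ever seeing x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified: knowledge of x shown, x itself never revealed")
```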

It would be like walking through an ever-changing, ever-blurry maze of computational space, whether with your classical, quantum, or even quantum-classical computer, where all you see around you is an ever-shifting space which, if you want to make sense of it, requires you to hold the information about the very first instance of its creation, to have access to the building blocks that started and shaped the system ever since.

And in a sea of fuzziness and systems, even if you had access to the building blocks of one specific system, you would never know which system to apply them to, as the sea of interconnected systems is far too big, and too many systems interchange, taking each other's shape at specific intervals of time.

For the systems themselves, it would be easy to know which information to accept and which to reject, but at the same time, extreme synchronicity would be required for each system to hold its own unique perspective. Given, however, the infinity found in the gradients of colors, each building block could have its own origin and ongoing goal, bound never to reach the color state of another system. You could imagine it akin to how radio waves operate on separate frequencies.

Maybe the chaotic elements of such a system could give rise to a kind of inner order which, seen as a whole, simply makes no sense. And to decipher it, you would have to guess building blocks formed by a cipher holding hundreds or thousands of numbers that constantly change within their own boundaries.

In this view, the more systems there are, the lower the chances that an attacker enters any one system; but at the same time, the more systems there are, the more targets an attacker can choose from. Maybe quantum computing would allow one to test a single key against all available systems at once, constantly generating keys and testing them against all the systems simultaneously.

But once the current key is found, the "first-ever" key would still be required to actually enter the system. Or better yet, by storing the first ten keys in the system, a random key out of those ten could be required once the current state-key is guessed.

Part 8: Layers Within Layers Within Layers

Or puzzles within puzzles within puzzles. One thing is certain: out there, complexity blooms, expanding on all layers at the same time and at polynomial speed. But then, the system itself must, from some point on, become so advanced and chaotic that not even advanced alien decryption systems would be able to tap into it, right?

When we look at our place now, seeing this complete explosion of complexity as a big bang, or more formally a singularity, we also acknowledge that these advancements mark only the first stepping stone of all there is to come. We sit at a point where thinking about the future actually counts toward brightening it more than ever. And yes, it always counted. But now it counts more than ever, and that will remain the case for centuries to come. Maybe even millennia.

Who knows what we may find? One thing is certain: the decisions we take now will guide the future in ways future generations never chose. So we might as well keep an eye open for their perspective. In the recent past (and even in current times), people have been sent to war against their will. People have been forced to create destructive weapons, and even to test them.

But then, what if we only theorized about the weapons and instead built the shields required to defend against them? Why waste time destroying what we haven't yet built? Once again, you may call me optimistic when I say the universe may, in the end, be inherently good. But after all, the universe offered us no fight except the fight against hunger, which ultimately lets us feel the beauty and taste of each bite. And that is especially true when it comes to knowledge.

In my view, it is foolish to assume that an ultra-powerful laser, or an array of smaller lasers, is the best defense of our planet against a meteorite when, if we scratch just beneath the surface, we might find the power to use quantum-gravity effects to our advantage as a propelling force, akin to a bomb, but one that spreads only anti-gravity around. Or we might focus on a rocket powerful enough to push a meteorite off course from behind. And at the same time, we could use that rocket to lift entire trains and place them on the moon.

And ultimately, isn’t this the magic of the solution space? We can either see it from a limited perspective, under the assumption that there are things that we may never know, or we can acknowledge the power of free will and its true potential to shape entire destinies and hearts.


Source: https://hackernoon.com/the-complex-road-from-p-to-np-the-magic-of-the-solution-space?source=rss