Before we dive into NP, NP-completeness, and polynomial reductions, we first need a shared language:
What is “algorithmic complexity”? What is “polynomial time”? And why do some algorithms that look fast fail to be poly-time?
We begin with three foundational complexity notions:
Polynomial Time
Pseudo-Polynomial Time
Weakly Polynomial Time
These concepts will appear repeatedly later.
Note
In later chapters, “polynomial time” may be abbreviated as poly-time, pseudo-polynomial as pseudo-poly-time, and weakly polynomial as weakly-poly-time.
These abbreviations are only for convenience and are not formal terminology.
In theoretical CS, poly-time is universally considered the threshold of efficient computation.
Because:
The growth of polynomial-time algorithms (e.g., $O(n^2)$, $O(n^3)$) is moderate and predictable.
Changing computers, languages, or constant factors will not change an algorithm's complexity class.
Poly-time problems are scalable in real engineering.
Examples:
| Time Complexity | Practical? | Notes |
|---|---|---|
| $O(n)$ | ✔ | very fast, linear time |
| $O(n \log n)$ | ✔ | sorting-level fast |
| $O(n^2)$ | ✔ | acceptable |
| $O(n^3)$ | ✔ | still poly-time |
| $O(2^n)$ | ✘ | exponential blow-up |
| $O(n!)$ | ✘ | catastrophic |
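To make the last two rows concrete, here is a tiny sketch (the helper names `cube` and `pow2` are my own, purely illustrative) evaluating cubic versus exponential growth. The exponential term starts out smaller but overtakes almost immediately.

```cpp
#include <cstdint>

// Evaluate the two growth rates from the table at a concrete input size n.
std::uint64_t cube(std::uint64_t n) { return n * n * n; }              // O(n^3)-style growth
std::uint64_t pow2(std::uint64_t n) { return std::uint64_t{1} << n; }  // O(2^n)-style growth
```

At $n = 9$, $2^n = 512 < 729 = n^3$; by $n = 40$, $2^n$ is already about $1.1 \times 10^{12}$ while $n^3$ is only $64{,}000$.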
Thus, P problems (Polynomial-Time Problems) are the problems solvable in polynomial time, the standard formal proxy for “practical running time.”
A problem belongs to P if there exists an algorithm whose running time is
$O(n^k)$
for some constant $k$, where $n$ is the size of the input.
Important
Note: it is the problem that is in P, not the algorithm.
A problem is in P because at least one poly-time algorithm for it exists, even if slower algorithms for the same problem also exist.
Classic poly-time algorithm examples
1. Dijkstra Shortest Path (Non-negative weights)
Code: Graph.hpp
Running time: $O((V + E)\log V)$ with a binary heap, or $O(E + V\log V)$ with a Fibonacci heap.
Therefore, Shortest Path is a classic P problem.
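A minimal sketch of the binary-heap version (an illustrative stand-in for the course's Graph.hpp, which is not shown here):

```cpp
#include <vector>
#include <queue>
#include <utility>
#include <limits>

// Dijkstra with a binary heap: O((V + E) log V) for non-negative weights.
// Graph: adjacency list of (neighbor, weight) pairs.
std::vector<long long> dijkstra(
    const std::vector<std::vector<std::pair<int, long long>>>& adj, int src) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    using State = std::pair<long long, int>;  // (distance, vertex)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d != dist[u]) continue;  // stale heap entry, skip
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```

The "stale entry" check replaces a decrease-key operation, which `std::priority_queue` does not support; duplicates in the heap do not change the asymptotic bound.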
2. Maximum Flow
Notes: Network Flow
| Algorithm | Time | Type |
|---|---|---|
| Ford–Fulkerson | $O(E \cdot f_{\max})$ | pseudo-polynomial |
| Capacity Scaling | $O(E^2 \log U)$ | weakly polynomial |
| Edmonds–Karp | $O(V E^2)$ | strongly polynomial |
| Orlin + KRT (improved) | $O(V E)$ | strongly polynomial |
| Modern Approximation Methods | $E^{1+o(1)}$ | almost linear |
Thus, Max Flow is also a P problem.
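A compact Edmonds–Karp sketch (Ford–Fulkerson with BFS-shortest augmenting paths, giving the strongly polynomial $O(VE^2)$ bound). The capacity-matrix representation is my own simplification for brevity; an adjacency-list version scales better on sparse graphs.

```cpp
#include <vector>
#include <queue>
#include <algorithm>
#include <limits>

// Edmonds–Karp: repeatedly augment along a BFS-shortest path in the
// residual graph. cap is taken by value and mutated as the residual network.
long long max_flow(std::vector<std::vector<long long>> cap, int s, int t) {
    int n = static_cast<int>(cap.size());
    long long flow = 0;
    while (true) {
        std::vector<int> parent(n, -1);
        parent[s] = s;
        std::queue<int> q;
        q.push(s);
        while (!q.empty() && parent[t] == -1) {  // BFS for an augmenting path
            int u = q.front();
            q.pop();
            for (int v = 0; v < n; ++v)
                if (parent[v] == -1 && cap[u][v] > 0) {
                    parent[v] = u;
                    q.push(v);
                }
        }
        if (parent[t] == -1) break;  // no augmenting path left: done
        long long aug = std::numeric_limits<long long>::max();
        for (int v = t; v != s; v = parent[v])
            aug = std::min(aug, cap[parent[v]][v]);  // bottleneck capacity
        for (int v = t; v != s; v = parent[v]) {
            cap[parent[v]][v] -= aug;  // consume forward capacity
            cap[v][parent[v]] += aug;  // add residual (backward) capacity
        }
        flow += aug;
    }
    return flow;
}
```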
3. Minimum Spanning Tree (MST)
Code: Graph.hpp
Kruskal / Prim run in: $O(E \log V)$
Hence MST is also a P problem.
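A minimal Kruskal sketch with union-find (again, an illustrative stand-in for the Graph.hpp version). Sorting the edges dominates, giving $O(E \log E) = O(E \log V)$.

```cpp
#include <vector>
#include <algorithm>
#include <numeric>
#include <tuple>

// Union-find with path compression (union by rank omitted for brevity).
struct DSU {
    std::vector<int> parent;
    explicit DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;  // edge would create a cycle
        parent[a] = b;
        return true;
    }
};

// Edges are (weight, u, v); returns total MST weight of a connected graph.
long long mst_weight(int n, std::vector<std::tuple<long long, int, int>> edges) {
    std::sort(edges.begin(), edges.end());  // dominant O(E log E) step
    DSU dsu(n);
    long long total = 0;
    for (auto [w, u, v] : edges)
        if (dsu.unite(u, v)) total += w;    // greedily keep cycle-free edges
    return total;
}
```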
These algorithms form the foundation of tractable computation.
One-sentence definition:
A pseudo-polynomial algorithm depends on the numeric value $W$
of the input, not its bit-length $\log W$.
The best example comes from dynamic programming.
Classic Example: 0/1 Knapsack DP
(570 course notes: DP Part 1)
(Carl’s tutorial: 01 Knapsack)
DP complexity: $O(nW)$
Where:
$n$ = number of items, $W$ = knapsack capacity.
Why is it not polynomial?
Because $W$ is encoded in only $\log W$ bits, so $W = 2^{\text{bit-length}}$ can be exponential in the input size, and $O(nW)$ is therefore exponential in the bit-length of $W$.
Thus it is not poly-time.
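A minimal sketch of the classic 1D-array form of this DP (not the course's own code): note that both the array size and the inner loop bound are $W$ itself, not $\log W$.

```cpp
#include <vector>
#include <algorithm>

// 0/1 knapsack DP in O(nW) time and O(W) space.
// best[c] = maximum value achievable within capacity c.
long long knapsack(const std::vector<long long>& weight,
                   const std::vector<long long>& value, long long W) {
    std::vector<long long> best(W + 1, 0);          // W+1 entries: pseudo-poly!
    for (std::size_t i = 0; i < weight.size(); ++i)
        for (long long c = W; c >= weight[i]; --c)  // downward: each item used at most once
            best[c] = std::max(best[c], best[c - weight[i]] + value[i]);
    return best[W];
}
```

Iterating capacities downward is what makes this 0/1 rather than unbounded knapsack: each item sees only states that exclude it.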
A more intuitive explanation
You asked:
Different Paths DP is poly-time, but Knapsack DP is not. Why?
My explanation:
For grid DP (e.g., “Unique Paths”), the size of the DP table grows with the input size (the grid).
For Knapsack, the DP table height grows with the numeric value W, not with input size.
Even a unit change in the value of W (e.g., W → W+1) adds another entire DP row, and one extra bit in W doubles the table height.
This disconnect between “input length” and “DP table size” makes it pseudo-poly.
Example table
| W | bit-length | DP Time | Feasible? |
|---|---|---|---|
| 1000 | 10 bits | $\approx 1000\,n$ | OK |
| $10^9$ | 30 bits | $\approx 10^9\,n$ | Not OK |
| $2^{100}$ | 100 bits | $\approx 2^{100}\,n$ | Impossible |
Pseudo-poly algorithms frequently appear in NP-hard optimization:
Knapsack
Subset Sum
We'll revisit this when discussing NP-hardness.
A weakly polynomial algorithm lies between true polynomial and pseudo-polynomial.
One-sentence definition:
Weakly polynomial time = polynomial in $n$
and the bit-length $\log U$ of the input numbers, but not in the value $U$ itself.
Thus:
weak-poly ⇒ in P
pseudo-poly ⇒ not necessarily in P
Classic example: Linear Programming (LP)
LP solvers (ellipsoid / interior-point) run in:
$\mathrm{poly}(n, L)$
Where:
$n$ = number of variables/constraints, $L$ = total bit-length of the input coefficients.
This is genuinely polynomial in the input size (so LP is in P), and far better than pseudo-polynomial.
Definition:
The running time depends only on the structural input size (e.g., $V$, $E$, $n$), not on the magnitudes of the numbers in the input.
Examples:
BFS / DFS
Prim / Kruskal
Hopcroft–Karp matching
Some max-flow algorithms (e.g., Orlin)
Strongly poly-time algorithms are the most robust.
| Type | Depends On | Example | In P? |
|---|---|---|---|
| Strongly Polynomial | structural size only ($V$, $E$, $n$) | MST, Matching | ✔ Yes |
| Weakly Polynomial | structural size + bit-lengths | Linear Programming | ✔ Yes |
| Pseudo-Polynomial | numeric values (e.g., $W$) | Knapsack DP | ✘ No (unless numbers are small) |
Tip
Pseudo-polynomial algorithms may look fast, but because their dependence is on the numeric value $W$ rather than on its bit-length $\log W$, they become exponential in the input size once the numbers get large.