Dynamic Programming
Shortest Acyclic Path
Let $G = (V, E)$ be a directed acyclic graph, where each edge $(u, v)$ has a weight $wt(u, v)$.
Topological sorting: the vertices are numbered $v_1, v_2, \ldots, v_n$ such that for each edge $(v_i, v_j)$, we have $i < j$.
Let $s = v_1$ and $t = v_n$.
How do we compute the shortest path from $s$ to $t$?
Can we do better than Dijkstra's algorithm, using the topological ordering of $G$?
Step 1: Structure of the Optimal Solution
Let $u_1, u_2, \ldots, u_k$ be all vertices that have an edge to $t$.
The last edge of the shortest path from $s$ to $t$ is $(u_i, t)$ for some $i$.
If we know this index $i$, then the shortest path from $s$ to $t$ is equal to the shortest path from $s$ to $u_i$, followed by the edge $(u_i, t)$.
Since we don't know the index $i$, the length of the shortest path from $s$ to $t$ is equal to $\min_{1 \le i \le k} \{$ (length of the shortest path from $s$ to $u_i$) $+ \; wt(u_i, t) \}$. In other words, the shortest path from $s$ to $t$ contains the shortest path from $s$ to one of $u_1, \ldots, u_k$.
Step 2: Set up a Recurrence for the Optimal Solution
For $1 \le j \le n$, define $d(v_j) =$ length of a shortest path from $s$ to $v_j$. Since $t = v_n$, we want to compute $d(v_n)$.
Recurrence:
- $d(v_1) = 0$
- For $2 \le j \le n$, $d(v_j) = \min \{ d(u) + wt(u, v_j) : (u, v_j) \in E \}$
Step 3: Solve the Recurrence "Bottom-Up"
First idea:
- To compute $d(v_n)$, take all edges $(u, v_n)$, and recursively compute $d(u)$. From this, compute $d(v_n)$ as $\min \{ d(u) + wt(u, v_n) \}$.
This is slow: without memoization, the same subproblems are recomputed over and over, and the running time can be exponential in $n$.
Better approach:
```
d(v_1) = 0
for j = 2 to n do
    Let u_1, u_2, ..., u_k be all the vertices that have an edge to v_j
    d(v_j) = ∞
    for i = 1 to k do
        d(v_j) = min { d(v_j), d(u_i) + wt(u_i, v_j) }
return d(v_n)
```
The running time here is $O(|V| + |E|)$: each vertex is processed once, and each edge is inspected exactly once.
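The bottom-up pass above can be sketched as runnable Python. The graph representation (a predecessor list `pred` and an edge-weight map `wt`, with vertices `1..n` already in topological order) is my choice for illustration, not fixed by the notes:

```python
import math

def dag_shortest_path(n, pred, wt):
    """Shortest-path length from vertex 1 to vertex n in a DAG whose
    vertices 1..n are numbered in topological order.
    pred[j]   : list of vertices u with an edge (u, j)
    wt[(u,j)] : weight of edge (u, j)
    """
    d = {1: 0}
    for j in range(2, n + 1):
        # d(v_j) = min over incoming edges (u, v_j) of d(u) + wt(u, v_j)
        d[j] = min((d[u] + wt[(u, j)] for u in pred.get(j, [])),
                   default=math.inf)
    return d[n]

# Example: 1->2 (1), 1->3 (4), 2->3 (1), 3->4 (2), 2->4 (10).
# The shortest path 1 -> 2 -> 3 -> 4 has length 4.
pred = {2: [1], 3: [1, 2], 4: [2, 3]}
wt = {(1, 2): 1, (1, 3): 4, (2, 3): 1, (3, 4): 2, (2, 4): 10}
print(dag_shortest_path(4, pred, wt))  # prints 4
```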
Matrix Chain Multiplication
Let $A$ be a $p \times q$ matrix and $B$ a $q \times r$ matrix. The product $A \cdot B$ is a $p \times r$ matrix with $p \cdot r$ entries, each of which can be computed in $O(q)$ time. So $A \cdot B$ can be computed in $O(p \cdot q \cdot r)$ time. We define the cost of multiplying $A$ and $B$ to be $p \cdot q \cdot r$.
In general,
- $p_0, p_1, \ldots, p_n$: positive integers
- $A_1, A_2, \ldots, A_n$: matrices such that $A_i$ has $p_{i-1}$ rows and $p_i$ columns.
Compute the best order to compute the product $A_1 \cdot A_2 \cdots A_n$ of the matrices, by minimizing the total cost.
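To see why the order matters, here is the cost arithmetic for three hypothetical matrices with dimensions $10 \times 100$, $100 \times 5$, and $5 \times 50$ (i.e. $p_0, p_1, p_2, p_3 = 10, 100, 5, 50$; the dimensions are my example, not from the notes):

```python
p = [10, 100, 5, 50]  # p[0..3]: A1 is 10x100, A2 is 100x5, A3 is 5x50

# Order (A1*A2)*A3: first a 10x100 by 100x5 product, then 10x5 by 5x50.
cost_left = p[0] * p[1] * p[2] + p[0] * p[2] * p[3]   # 5000 + 2500 = 7500

# Order A1*(A2*A3): first a 100x5 by 5x50 product, then 10x100 by 100x50.
cost_right = p[1] * p[2] * p[3] + p[0] * p[1] * p[3]  # 25000 + 50000 = 75000

print(cost_left, cost_right)  # prints 7500 75000
```

The same matrices, multiplied in a different order, cost ten times more.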
Step 1: Structure of the Optimal Solution
Consider the best order to compute $A_i \cdot A_{i+1} \cdots A_j$. In the last step, we multiply $(A_i \cdot \ldots \cdot A_k)$ and $(A_{k+1} \cdot \ldots \cdot A_j)$ for some $k$ with $i \le k < j$, both of which are already computed.
Step 2: Set Up a Recurrence for the Optimal Solution
For $1 \le i \le j \le n$, define $m(i, j)$ = minimum cost to compute $A_i \cdot A_{i+1} \cdots A_j$.
We want to compute $m(1, n)$.
If we know $k$, then $m(i, j) = m(i, k) + m(k+1, j) + p_{i-1} \cdot p_k \cdot p_j$.
But we do not know $k$, so try all values of $k$, and take the best one.
Recurrence:
- For $i = j$: $m(i, i) = 0$.
- For $i < j$: $m(i, j) = \min_{i \le k < j} \{ m(i, k) + m(k+1, j) + p_{i-1} \cdot p_k \cdot p_j \}$
Step 3: Solve the Recurrence Bottom-Up
Compute, in this order,
- $m(1, 1), m(2, 2), \ldots, m(n, n)$ (all chains of length 1),
- $m(1, 2), m(2, 3), \ldots, m(n-1, n)$ (all chains of length 2),
- $m(1, 3), m(2, 4), \ldots, m(n-2, n)$ (all chains of length 3), and so on, up to $m(1, n)$.
Algorithm
```
for i = 1 to n do
    m(i, i) = 0
for l = 2 to n do
    // Compute m(1, l), m(2, l + 1), ..., m(n - l + 1, n)
    for i = 1 to n - l + 1 do
        // Compute m(i, i + l - 1)
        j = i + l - 1
        // Compute m(i, j) using the recurrence
        m(i, j) = ∞
        for k = i to j - 1 do
            m(i, j) = min { m(i, j), m(i, k) + m(k + 1, j) + p_{i-1} * p_k * p_j }
return m(1, n)
```
Running time is $O(n^3)$: there are $O(n^2)$ entries $m(i, j)$, and each is computed in $O(n)$ time.
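The recurrence and the evaluation order translate directly into Python. This is a sketch (the function name `matrix_chain` is mine), using a 1-indexed table as in the pseudocode:

```python
def matrix_chain(p):
    """Minimum multiplication cost for A1 * ... * An, where Ai is
    p[i-1] x p[i].  Implements
        m(i, j) = min_{i <= k < j} m(i, k) + m(k+1, j) + p[i-1]*p[k]*p[j].
    """
    n = len(p) - 1
    # m[i][j] = minimum cost to compute Ai*...*Aj (row/column 0 unused).
    # Initializing to 0 also sets the base case m(i, i) = 0.
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):           # chain length, shortest first
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

# 10x100, 100x5, 5x50 matrices: the best order (A1*A2)*A3 costs 7500.
print(matrix_chain([10, 100, 5, 50]))  # prints 7500
```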
Longest Common Subsequence
We have two sequences: $X = x_1 x_2 \ldots x_m$ and $Y = y_1 y_2 \ldots y_n$.
A subsequence of $X$ is obtained by deleting zero or more elements of $X$. For example, the sequence BCD is a subsequence of ABCDE, but not of DCBAE.
LCS($X$, $Y$) is a longest common subsequence of $X$ and $Y$: a longest sequence that is a subsequence of both $X$ and $Y$. It need not be unique: for $X =$ AB and $Y =$ BA, both A and B are longest common subsequences.
Step 1: Structure of the Optimal Solution
Let $Z = z_1 z_2 \ldots z_k =$ LCS($X$, $Y$). Therefore:
- Case 1: $x_m = y_n$
  - Then $z_k = x_m = y_n$, and $z_1 \ldots z_{k-1} =$ LCS($x_1 \ldots x_{m-1}$, $y_1 \ldots y_{n-1}$)
- Case 2: $x_m \ne y_n$
  - Case 2a): if $z_k \ne x_m$, then $Z =$ LCS($x_1 \ldots x_{m-1}$, $Y$)
  - Case 2b): if $z_k \ne y_n$, then $Z =$ LCS($X$, $y_1 \ldots y_{n-1}$)
  - We compute both 2a and 2b, as we don't know which case we fall under
Step 2: Set up a recurrence for the Optimal Solution
For $0 \le i \le m$ and $0 \le j \le n$, define $c(i, j) =$ length of LCS($x_1 \ldots x_i$, $y_1 \ldots y_j$).
We want to compute $c(m, n)$.
Recurrence:
- If $i = 0$ or $j = 0$, $c(i, j) = 0$
- If $i, j \ge 1$ and $x_i = y_j$, $c(i, j) = 1 + c(i - 1, j - 1)$
- If $i, j \ge 1$ and $x_i \ne y_j$, $c(i, j) = \max \{ c(i - 1, j), c(i, j - 1) \}$
Step 3: Solve the recurrence "Bottom-Up"
Fill in the matrix $c(i, j)$ for $0 \le i \le m$ and $0 \le j \le n$.
- First row: $c(0, j) = 0$ for all $j$
- First column: $c(i, 0) = 0$ for all $i$
Then fill in the matrix, row by row, in each row from left to right.
Algorithm
```
for i = 0 to m do
    c(i, 0) = 0
for j = 0 to n do
    c(0, j) = 0
for i = 1 to m do
    for j = 1 to n do
        if x_i = y_j then
            c(i, j) = 1 + c(i - 1, j - 1)
        else
            c(i, j) = max { c(i - 1, j), c(i, j - 1) }
return c(m, n)
```
Running time: $O(m \cdot n)$.
Space: $O(m \cdot n)$.
But if we only want to compute the length $c(m, n)$, we only need the current row and the previous row. Hence, Space: $O(n)$ (or $O(\min\{m, n\})$, by letting the shorter sequence index the columns).
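A sketch of this two-row optimization in Python (the function name `lcs_length` is mine; the notes give only the full-table version):

```python
def lcs_length(x, y):
    """Length of a longest common subsequence of x and y, keeping only
    two rows of the c-table instead of the full (m+1) x (n+1) matrix."""
    m, n = len(x), len(y)
    prev = [0] * (n + 1)            # row i-1 of the table; row 0 is all zeros
    for i in range(1, m + 1):
        curr = [0] * (n + 1)        # row i; curr[0] = 0 is the first column
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                curr[j] = 1 + prev[j - 1]
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]

print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4 (e.g. "BCBA")
```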
General Structure
In general, when we solve a problem using dynamic programming, we go through the following steps:
- Understand the structure of the optimal solution. Show that there is an optimal substructure: an optimal solution “contains” optimal solutions for subproblems (which are smaller problems).
- Set up a recurrence for the optimal solution. We know from Step 1 that an optimal solution can be obtained from optimal solutions for subproblems. Use this to derive recurrence relations.
- Solve the recurrence bottom-up. First solve the smallest subproblems (usually the base case of the recurrence). Then solve the second smallest subproblems. Then solve the third smallest subproblems, etc.
Do not use a recursive algorithm!
All-Pairs Shortest Paths
Let $G = (V, E)$ be a directed graph, where $V = \{1, 2, \ldots, n\}$. Each edge $(i, j)$ has a weight $wt(i, j)$.
For all $i$ and $j$, compute the weight of a shortest path in $G$ from $i$ to $j$, which we denote by $dist(i, j)$.
Step 1: Structure of the Optimal Solution
Consider the shortest path from $i$ to $j$, and assume this path has at least one interior vertex. Let $k$ be the largest interior vertex. Then this path consists of:
- the shortest path from $i$ to $k$, all of whose interior vertices are $< k$, followed by
- the shortest path from $k$ to $j$, all of whose interior vertices are $< k$.
Step 2: Set up a recurrence for the Optimal Solution
For $1 \le i \le n$, $1 \le j \le n$, and $0 \le k \le n$, let $dist(i, j, k)$ be the length of the shortest path from $i$ to $j$, all of whose interior vertices are in $\{1, 2, \ldots, k\}$.
We want to compute $dist(i, j, n)$ for all $i$ and $j$.
Recurrence:
- For $k = 0$ and $i = j$: $dist(i, i, 0) = 0$.
- For $k = 0$ and $i \ne j$: $dist(i, j, 0) = wt(i, j)$ if $(i, j) \in E$, and $+\infty$ otherwise.
- For $k \ge 1$: $dist(i, j, k) = \min \{ dist(i, j, k - 1), \; dist(i, k, k - 1) + dist(k, j, k - 1) \}$.
Algorithm: Floyd-Warshall
```
for i = 1 to n do
    for j = 1 to n do
        if i = j then
            dist(i, j, 0) = 0
        else
            dist(i, j, 0) = +∞
for all edges (i, j) do
    dist(i, j, 0) = wt(i, j)
for k = 1 to n do
    for i = 1 to n do
        for j = 1 to n do
            dist(i, j, k) = min { dist(i, j, k - 1), dist(i, k, k - 1) + dist(k, j, k - 1) }
```
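The three nested loops give an $O(n^3)$ running time. A compact runnable sketch in Python (function name mine) stores a single matrix instead of all layers $k$, a standard simplification: the update with index $k$ can be done in place because $dist(i, k, k) = dist(i, k, k - 1)$ and $dist(k, j, k) = dist(k, j, k - 1)$:

```python
import math

def floyd_warshall(n, edges):
    """All-pairs shortest-path weights for vertices 1..n.
    edges: dict mapping (i, j) -> wt(i, j). Assumes no negative cycles.
    Returns a (n+1) x (n+1) matrix; row/column 0 are unused."""
    dist = [[0 if i == j else math.inf for j in range(n + 1)]
            for i in range(n + 1)]
    for (i, j), w in edges.items():
        dist[i][j] = min(dist[i][j], w)
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                # Either avoid vertex k, or go i -> k -> j
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist

# Example: 1->2 (3), 2->3 (4), 1->3 (10); going via 2 is shorter.
d = floyd_warshall(3, {(1, 2): 3, (2, 3): 4, (1, 3): 10})
print(d[1][3])  # prints 7
```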
Final Remarks
Dynamic programming works well for problems with optimal substructure, such as shortest paths, minimum cost, and the longest common subsequence. But not every "longest" or "max" version has this structure: for example, the longest simple path problem in a general graph is NP-hard, so no efficient dynamic programming solution is known for it.