Dynamic programming

Dynamic programming is one of the most powerful design techniques for solving optimization problems. Such problems generally have many possible solutions. The basic idea of dynamic programming builds on the intuition behind the divide-and-conquer technique: we explore the space of all possible solutions by decomposing the problem into a series of sub-problems and then combining their results to compute the correct solution to the larger problem. The difference is that divide and conquer solves a problem by combining the solutions of non-overlapping (disjoint) sub-problems, whereas dynamic programming applies when the sub-problems overlap, meaning that they share sub-sub-problems. Like divide and conquer, dynamic programming breaks a problem down into smaller problems, and each sub-problem has to be solved before its result can be used to solve a bigger problem. However, a dynamic programming-based technique solves each sub-problem only once and never recomputes the solution to an already-encountered sub-problem; instead, it remembers previously computed results to avoid the re-computation.

Dynamic programming problems have two important characteristics:

  • Optimal substructure: A problem is said to have an optimal substructure if its optimal solution can be obtained by combining the optimal solutions of its sub-problems. For example, the ith Fibonacci number can be computed from the (i-1)th and (i-2)th Fibonacci numbers; for instance, fib(6) can be computed from fib(5) and fib(4).
  • Overlapping sub-problems: If an algorithm has to solve the same sub-problem repeatedly, then the problem has overlapping sub-problems. For example, computing fib(5) requires computing fib(3) and fib(2) multiple times.

If a problem has these characteristics, then the dynamic programming approach is useful, since the implementation can be improved by reusing solutions computed before. In a dynamic programming strategy, the problem is broken down into smaller sub-problems, and the intermediate results are cached so that they can be used in subsequent computations.

In the dynamic programming approach, we divide a given problem into smaller sub-problems. Plain recursion also divides a problem into sub-problems, but the difference is that, in recursion, the same sub-problem may be solved any number of times, whereas in dynamic programming we keep track of previously solved sub-problems and take care not to recompute any of them. One property that makes a problem an ideal candidate for dynamic programming is an overlapping set of sub-problems: once we realize that a sub-problem repeats during the computation, we need not compute it again. Instead, we return the precomputed result for that previously encountered sub-problem.

Dynamic programming takes advantage of the fact that each sub-problem needs to be solved only once; to ensure that we never re-evaluate a sub-problem, we need an efficient way to store the result of each sub-problem. The following two techniques are readily available:

  • Top-down with memoization: This technique starts from the initial problem and divides it into smaller sub-problems. After the solution to a sub-problem has been determined, we store the result of that particular sub-problem. In the future, whenever this sub-problem is encountered again, we simply return its precomputed result. Therefore, if the solution to a given problem can be formulated recursively using the solutions of its sub-problems, then the solutions of the overlapping sub-problems can easily be memoized.

Memoization means storing the solution of a sub-problem in an array or hash table. Whenever the solution to a sub-problem is needed, we first check the saved values; if it has already been computed, we return the stored result, and if not, we compute it in the usual manner and save it. A procedure modified in this way is said to be memoized, meaning it “remembers” the results of operations that have been computed before.

  • Bottom-up approach: This approach works upward by the “size” of the sub-problems. We solve the smallest sub-problems first, so that while solving a particular sub-problem, we already have the solutions of the smaller sub-problems on which it depends. Each sub-problem is solved only once, and whenever we solve any sub-problem, the solutions to all the prerequisite smaller sub-problems are already available. The solutions are combined in a bottom-up fashion to solve bigger and bigger sub-problems until we reach the final solution (a short sketch follows this list).
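
To make the bottom-up idea concrete, here is a minimal sketch for the Fibonacci computation discussed in the rest of this section (the function name fib_bottom_up is illustrative, not from the chapter's code). The table of smaller sub-problems is filled in first, and each new entry is built only from entries that are already available:

def fib_bottom_up(n):
    # Base cases: the smallest sub-problems, following func(0) = func(1) = 1
    if n <= 1:
        return 1
    table = [1, 1]
    # Fill the table upward; each entry depends only on already-solved entries
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print([fib_bottom_up(i) for i in range(5)])   # prints [1, 1, 2, 3, 5]

Note that this version never recurses: by the time table[i] is computed, table[i-1] and table[i-2] have already been solved.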

To understand how dynamic programming works, let’s solve an example problem: computing the Fibonacci series.

Calculating the Fibonacci series

The Fibonacci series can be defined using a recurrence relation. Recurrence relations are recursive equations that define mathematical functions or sequences. For example, the following recurrence relation defines the Fibonacci sequence [1, 1, 2, 3, 5, 8 ...]:

func(0) = 1 
func(1) = 1  
func(n) = func(n-1) + func(n-2) for n>1

Note that the Fibonacci sequence is generated by substituting the values n = 0, 1, 2, 3, 4, ... into this relation. For example, the Fibonacci series up to the fifth term is:

    1 1 2 3 5 

A recursive-style program to generate the sequence would be as follows:

def fib(n):
    if n <= 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)

for i in range(5):
    print(fib(i))

This will produce output like the following:

1
1
2
3
5

In this code, we can see that recursive calls are made to solve the problem. The base case is met when n is less than or equal to 1, in which case the fib() function returns 1; otherwise, fib() calls itself again. The recursion tree for computing up to the fifth term of the Fibonacci sequence is shown in Figure 3.5:

Figure 3.5: Recursion tree for fib(5)

We can observe the overlapping sub-problems in the recursion tree shown in Figure 3.6: the call to fib(3) happens twice, the call to fib(2) happens three times, and the call to fib(1) happens five times. The return value of a call with the same argument never changes; for example, the return value for fib(2) will always be the same whenever we call it, and likewise for fib(1) and fib(3). These are therefore overlapping sub-problems, and computational time is wasted if we recompute the same function every time it is encountered. Repeated calls to a function with the same parameters and the same output indicate such an overlap: the same computations recur within the smaller sub-problems.

Figure 3.6: Overlapping sub-problems shown in the recursion tree for fib(5)

In dynamic programming using the memoization technique, we store the results of the computation of fib(1) the first time it is encountered. Similarly, we store return values for fib(2) and fib(3). Later, whenever we encounter a call to fib(1), fib(2), or fib(3), we simply return their respective results. The recursive tree diagram is shown in Figure 3.7:

Figure 3.7: Recursion tree for fib(5) showing re-use of the already computed values

Thus, in dynamic programming, we have eliminated the need to recompute fib(3), fib(2), and fib(1) when they are encountered multiple times. This is the memoization technique at work: when breaking a problem down into its sub-problems, we never recompute overlapping function calls.

Hence, the overlapping function calls in our Fibonacci example are fib(1), fib(2), and fib(3). Below is the code for the dynamic programming-based implementation of the Fibonacci series:

# Lookup table for storing the solutions of already-solved sub-problems
lookup = [None] * 1000

def dyna_fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    # Return the stored result if this sub-problem was solved before
    if lookup[n] is not None:
        return lookup[n]
    # Otherwise, compute it once, store it, and return it
    lookup[n] = dyna_fib(n-1) + dyna_fib(n-2)
    return lookup[n]

for i in range(6):
    print(dyna_fib(i))

This will produce an output like the following:

0
1
1
2
3
5

In the dynamic programming implementation of the Fibonacci series, we store the results of previously solved sub-problems in a list (the lookup list in this example code). We first check whether the Fibonacci number for n has already been computed; if it has, we return the stored value from lookup[n]. This check is done through the following code:

    if lookup[n] is not None:
        return lookup[n]

Otherwise, after computing the solution of the sub-problem, we store it in the lookup list and return it, as shown in the following code snippet:

lookup[n] = dyna_fib(n-1) + dyna_fib(n-2)

Furthermore, we create the lookup list with 1,000 elements so that the results of up to 1,000 sub-problems can be stored:

    lookup = [None] * 1000

So, in dynamic programming-based solutions, we use the precomputed solutions in order to compute the final results.
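
As an aside, Python’s standard library provides the functools.lru_cache decorator, which implements this same memoization idea without a hand-rolled lookup list. The following is a minimal sketch (the name fib_cached is illustrative, not from the chapter’s code):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_cached(n):
    # Results are cached automatically, keyed by the argument n
    if n <= 1:
        return n   # matches dyna_fib: fib_cached(0) == 0, fib_cached(1) == 1
    return fib_cached(n - 1) + fib_cached(n - 2)

print([fib_cached(i) for i in range(6)])   # prints [0, 1, 1, 2, 3, 5]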

Dynamic programming improves the running time complexity of the algorithm. In the recursive approach, every call spawns two further calls; for example, fib(5) calls fib(4) and fib(3), then fib(4) calls fib(3) and fib(2), and so on. Thus, the time complexity of the recursive approach is O(2^n), whereas in the dynamic programming approach we never recompute a sub-problem, so for fib(n) there are only n+1 values to compute, namely fib(0), fib(1), fib(2), ..., fib(n). Each of these is solved exactly once, so the total running time complexity is O(n). Dynamic programming thus improves performance in general.
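
To see this difference empirically, we can instrument both versions to count how many calls they make (the helper names below are illustrative, not from the chapter’s code):

def fib_counted(n, counter):
    # Plain recursion: counts every call, including repeated sub-problems
    counter[0] += 1
    if n <= 1:
        return 1
    return fib_counted(n - 1, counter) + fib_counted(n - 2, counter)

def fib_memo_counted(n, counter, lookup):
    # Memoized recursion: repeated sub-problems return immediately
    counter[0] += 1
    if n <= 1:
        return 1
    if lookup[n] is None:
        lookup[n] = fib_memo_counted(n - 1, counter, lookup) + \
                    fib_memo_counted(n - 2, counter, lookup)
    return lookup[n]

plain, memo = [0], [0]
fib_counted(20, plain)
fib_memo_counted(20, memo, [None] * 21)
print(plain[0], memo[0])   # prints 21891 39

Even at n = 20, the plain recursion makes tens of thousands of calls, while the memoized version makes only a few dozen.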

In this section, we have discussed the dynamic programming design technique, and in the next section, we discuss the design techniques for greedy algorithms.
