Each recursive call makes a new copy of that method (more specifically, of its local variables) in memory. Once a method ends (i.e., returns some data), the copy of that returning method is removed from memory. Recursive solutions look simple, but visualizing and tracing them takes time. For better understanding, let us see the following example: if we call the print function with n = 4, our memory assignment may visually look like this. Now, let us consider our factorial function. The visualization of this will be:
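The two functions traced above can be sketched as follows (a minimal Python sketch, assuming the usual definitions of the print and factorial functions). Each call pushes a new frame, with its own copy of n, onto the call stack; each return pops that frame off:

```python
def print_numbers(n):
    # Base case: nothing left to print, so this frame returns immediately.
    if n == 0:
        return
    print_numbers(n - 1)  # recursive call on a smaller problem
    print(n)              # runs after the recursive call returns, using this frame's copy of n

def factorial(n):
    # Base case: 0! = 1 terminates the chain of recursive calls.
    if n == 0:
        return 1
    return n * factorial(n - 1)  # each frame waits for the smaller result
```

Calling `factorial(4)` creates four nested frames in memory, which collapse one by one as each call returns its result (4 × 3 × 2 × 1 = 24).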
nikoo28


What is Recursion? Any function that calls itself is called a recursive function. A recursive method solves a problem by calling a copy of itself to work on a smaller problem. This is called the recursion step. The recursion step can result in many more such recursive calls. It is important to ensure that the recursion terminates. Each time, the function calls itself with a slightly simpler version of the original problem. The sequence of smaller problems must eventually converge on the base case. Why Recursion? Recursion is a useful …
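As a minimal sketch of the recursion step and base case described above (a hypothetical example, not one from the post itself): each call works on a smaller problem until the sequence of smaller problems converges on the base case.

```python
def sum_to(n):
    # Base case: the recursion terminates here.
    if n == 0:
        return 0
    # Recursion step: a copy of the function works on a slightly smaller problem.
    return n + sum_to(n - 1)
```

Without the `n == 0` check, the calls would never converge and the recursion would not terminate.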

All divide and conquer algorithms divide the problem into subproblems, each of which is part of the original problem, and then perform some additional work to compute the final answer. As an example, the merge sort algorithm operates on two subproblems, each of which is half the size of the original, and performs O(n) additional work for merging. This gives the running-time recurrence T(n) = 2T(n/2) + O(n). The following theorem can be used to determine the running time of divide and conquer algorithms. For a given program (algorithm), …
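A sketch of merge sort makes the recurrence concrete: the two recursive calls on halves of the input account for the 2T(n/2) term, and the merge loop does the O(n) additional work.

```python
def merge_sort(arr):
    # Base case: a list of 0 or 1 items is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # T(n/2)
    right = merge_sort(arr[mid:])   # T(n/2)
    # O(n) additional work: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Solving T(n) = 2T(n/2) + O(n) gives the familiar O(n log n) running time.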

To analyze a given algorithm, we need to know on which inputs the algorithm takes less time (performs well) and on which inputs it takes a long time. We know that an algorithm can be represented in the form of an expression. That means we represent the algorithm with multiple expressions: one for the case where it takes less time and another for the case where it takes more time. In general: Worst Case: Defines the input for which the algorithm takes the longest time. Input is the one …
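A hypothetical example of how the same algorithm behaves differently on different inputs: linear search through n items. Counting comparisons shows the two extremes that best-case and worst-case analysis capture.

```python
def linear_search(arr, target):
    # Scan left to right, counting comparisons made.
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons   # best case: target is the very first item
    return -1, comparisons          # worst case: target absent, all n items checked
```

Best case: the target is the first element, so only 1 comparison is made. Worst case: the target is missing, so all n comparisons are made.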

The rate at which running time increases as a function of input is called the Rate of Growth. Let us assume that you went to a shop to buy a car and a cycle. If your friend sees you there and asks what you are buying, then in general we say "buying a car." This is because the cost of a car is very large compared to the cost of a cycle (we approximate the cost of the cycle away against the cost of the car). Total Cost = cost_of_car + cost_of_cycle; Total Cost ≈ cost_of_car (approximation). For …
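The same approximation applies to running times. As a sketch, take an assumed cost function f(n) = n² + n: the n² term dominates as n grows, just as the car's cost dominates the total.

```python
def exact_cost(n):
    # Total cost: the "car" term (n**2) plus the "cycle" term (n).
    return n**2 + n

def approx_cost(n):
    # Keep only the dominant (highest-order) term.
    return n**2
```

As n grows, the ratio exact_cost(n) / approx_cost(n) approaches 1, which is why we drop lower-order terms when describing the rate of growth.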

Let us consider the problem of preparing an omelet. To prepare an omelet, the general steps we follow are:
1. Get the frying pan.
2. Get the oil.
   - Do we have oil?
     - If yes, put it in the pan.
     - If no, do we want to buy oil?
       - If yes, then go out and buy oil.
       - If no, then we can terminate.
3. Turn on the stove, etc…
What we are doing is, for a given problem (preparing an omelet), giving a step-by-step procedure for solving it. A formal definition of an algorithm can be given …
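The decision steps above can be sketched as ordinary control flow (a hypothetical illustration only, to show that an algorithm is just an unambiguous, step-by-step procedure):

```python
def prepare_omelet(have_oil, want_to_buy_oil):
    # Follow the steps in order, branching exactly as the procedure describes.
    steps = ["get frying pan"]
    if have_oil:
        steps.append("put oil in pan")
    elif want_to_buy_oil:
        steps.append("go out and buy oil")
        steps.append("put oil in pan")
    else:
        return steps + ["terminate"]   # no oil and no plan to buy: we stop here
    steps.append("turn on the stove")
    return steps
```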

Welcome to Study Algorithms. Here we will try to learn some very basic algorithms used in everyday coding practices. Useful for students of all kinds and helpful with interviews as well. You can always post ideas and suggestions in the comments section. All queries will be well entertained. Happy Learning!