
How do you compare two algorithms?

by nikoo28

We compare things all the time to determine which one is better than the other. The best example is buying stuff online: we compare prices across websites, two different mobile phones, or even two different outfits. We also know that, in computer science, there are usually several different techniques for solving the same problem.

Naturally, when we solve a problem with different methods, we want to compare them. Once they are put side by side, we can analyze which one performs better, and we would obviously pick the better one.

“Quality” of an Algorithm

When it comes to different outfits or gadgets, we have some criteria on which we evaluate them. Similarly, we want to define some parameters on which we can compare two algorithms. The two most important factors are:

  1. Time: Ever since the invention of computers, programs have been written to reduce the time taken to get a job done. Hence, an algorithm is considered better if it runs faster than the other one.
  2. Space: Memory is cheap nowadays, but everyone loves an application that takes the least amount of RAM. Don’t we all hate it when 70 open browser tabs slow the entire PC down? Also, when writing system-level code, every extra byte of memory matters. Thus, an algorithm that uses less RAM is considered the better one.
Execution Time:

To put it in plain words, this is the time taken for an algorithm to complete successfully. How do you compare two algorithms based on the execution time? You need to follow a certain set of rules for that:

Experimental:
  • A foolproof way to compare 2 different algorithms is to actually run them and observe the results. The one that gives you the output in less time is said to be the better one.
  • But when running these algorithms, you need to ensure that you use the same hardware for both. A faster processor can give better results even for a poorly written algorithm. Also, you cannot rely on a single run: your computer might be running other workloads in the background, and that can affect the measurement. Hence, it is recommended that you run several iterations of both techniques and take an average value before comparing (see the timing sketch after this list).
  • We also need to make sure that the input workload for both algorithms remains the same. Using random input data cannot give us reliable results, because certain inputs may work unreasonably fast. Example: trying to sort an already sorted list.
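
As a rough illustration, here is a minimal Java sketch of such an experiment. The two Runnable tasks, the input array, and the iteration count are placeholder assumptions; substitute the algorithms you actually want to compare:

import java.util.Arrays;

public class TimingExperiment {

  // Run the algorithm several times on the same input and return the
  // average wall-clock time in milliseconds.
  static long averageMillis(Runnable algorithm, int iterations) {
    long totalNanos = 0;
    for (int run = 0; run < iterations; run++) {
      long start = System.nanoTime();
      algorithm.run(); // same workload on every run
      totalNanos += System.nanoTime() - start;
    }
    return (totalNanos / iterations) / 1_000_000;
  }

  public static void main(String[] args) {
    int[] input = new int[100_000]; // identical input for both algorithms
    Runnable algorithmA = () -> Arrays.sort(input.clone());
    Runnable algorithmB = () -> Arrays.sort(input.clone());

    System.out.println("Algorithm A avg (ms): " + averageMillis(algorithmA, 10));
    System.out.println("Algorithm B avg (ms): " + averageMillis(algorithmB, 10));
  }
}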

Even with all these conditions in mind, you cannot reach an exact verdict, as it is practically impossible to run 2 algorithms under the exact same conditions. You would not be able to separate two algorithms that perform nearly the same. Thus, you need to take a deeper dive into the actual flow of each algorithm.

Analytical:

Your algorithm may contain a lot of conditional and control blocks that determine its runtime. Suppose you are given 2 code blocks. Forget about the implementation details, and assume that both of them give the correct answer.

Algorithm 1:

for (int i = 0; i < 10; i++) {
  for (int j = 0; j < 5; j++) {
    // do something...
  }
}

Algorithm 2:

for (int i = 0; i < 10; i++) {
  // do something...
}

for (int j = 0; j < 5; j++) {
  // do something...
}

Let us analyze both the algorithms:

Algorithm 1:

  • We start a loop i that runs 10 times.
  • Inside this loop, we start another loop j that runs 5 times.
  • This means that for each iteration of i, the loop j runs 5 times.
  • Hence, the total number of times we do something is 10 * 5 = 50.

Algorithm 2:

  • We start a loop i that runs 10 times.
  • We do something 10 times.
  • We exit the loop.
  • We start another loop j that runs 5 times.
  • We do something 5 more times.
  • Hence, the total number of times we do something is 10 + 5 = 15 (the sketch below verifies both counts).
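
A quick way to check these counts is to replace “do something” with a counter. The following sketch (not part of the original code blocks) prints 50 for algorithm 1 and 15 for algorithm 2:

public class IterationCount {
  public static void main(String[] args) {
    // Algorithm 1: nested loops -> 10 * 5 = 50 operations
    int count1 = 0;
    for (int i = 0; i < 10; i++) {
      for (int j = 0; j < 5; j++) {
        count1++; // stands in for "do something"
      }
    }

    // Algorithm 2: sequential loops -> 10 + 5 = 15 operations
    int count2 = 0;
    for (int i = 0; i < 10; i++) {
      count2++;
    }
    for (int j = 0; j < 5; j++) {
      count2++;
    }

    System.out.println("Algorithm 1: " + count1); // prints 50
    System.out.println("Algorithm 2: " + count2); // prints 15
  }
}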
What did we infer from this?

At first glance it may seem that both code blocks have 2 loops: one runs 10 times and the other runs 5 times. So, it is natural to feel that both will take the same time to run. But when we compare them analytically, we see that algorithm 1 does something 50 times, while algorithm 2 does something just 15 times. This means that algorithm 2 would be the better choice.

This phenomenon is also known as Runtime Complexity.

Space Consumption:

We just discussed in our quality criteria that ‘space’ is an important factor in determining which algorithm performs better. What does that mean? Let us understand this with a very basic example. To set up some context, assume that a single node in a Linked List takes a block of 10kb.

Fig: Block size of a Linked List node (10kb).

Question: You are given 2 linked lists that are already sorted. Merge the two lists and return a single sorted list.

According to the question, we are given 2 lists. Each node in a linked list takes a block of 10kb, so the setup would look something like this:

Fig: The 2 lists and their total sizes.

The size of the first list would be 5 * 10 = 50kb, and the size of the second list would be 3 * 10 = 30kb. Hence, you are currently using 80kb of memory. Let us skip the implementation details and look at 2 algorithms to solve this problem.

Algorithm 1:
  • Create a new list of 5 + 3 = 8 nodes.
  • Walk through the two (already sorted) lists, picking the smaller element each time.
  • Fill up the new list with these elements.
  • Return the result.

This algorithm would look something like:

Fig: Merging 2 sorted linked lists with extra space.
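
A minimal Java sketch of this extra-space approach is given below. The ListNode class is an assumed helper holding just an int value and a next pointer; here the new nodes are allocated while merging rather than all 8 upfront, but the extra memory used is the same:

class ListNode {
  int value;
  ListNode next;
  ListNode(int value) { this.value = value; }
}

class MergeWithExtraSpace {
  // Algorithm 1: build a brand-new list, so the extra memory grows with
  // the combined size of both input lists.
  static ListNode merge(ListNode a, ListNode b) {
    ListNode head = new ListNode(0); // dummy head of the new list
    ListNode tail = head;
    while (a != null && b != null) {
      if (a.value <= b.value) {
        tail.next = new ListNode(a.value); // copying a node costs extra space
        a = a.next;
      } else {
        tail.next = new ListNode(b.value);
        b = b.next;
      }
      tail = tail.next;
    }
    ListNode rest = (a != null) ? a : b; // one list may still have nodes left
    while (rest != null) {
      tail.next = new ListNode(rest.value);
      rest = rest.next;
      tail = tail.next;
    }
    return head.next; // skip the dummy head
  }
}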

In the above algorithm, we create a new list that takes up an extra 80kb of space. Hence, the total space required to run this algorithm would be:

50 (list 1) + 30 (list 2) + 80 (new list) = 160kb

Thus, we need a total of 160kb memory for algorithm 1 to run successfully. Let us look at another approach to solve the same problem.

Algorithm 2:
  • Scan the shorter list node by node.
  • Take up an element.
  • Scan the longer list and insert the element at the right place.
  • Repeat the process for all the elements of the shorter list.

This algorithm when completed would give us a result something like:

Fig: Merging 2 sorted linked lists without taking extra space.
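
A corresponding sketch of the in-place approach, reusing the same assumed ListNode class from the earlier sketch, could look like this. No new nodes are allocated; each node of the shorter list is detached and spliced into the longer list (assuming the longer list is non-empty):

class MergeInPlace {
  // Algorithm 2: reuse the existing nodes instead of allocating new ones.
  static ListNode merge(ListNode longer, ListNode shorter) {
    while (shorter != null) {
      ListNode node = shorter; // take up one element of the shorter list
      shorter = shorter.next;
      node.next = null;

      if (node.value <= longer.value) {
        node.next = longer; // the new smallest element becomes the head
        longer = node;
      } else {
        ListNode current = longer; // scan the longer list for the insertion point
        while (current.next != null && current.next.value < node.value) {
          current = current.next;
        }
        node.next = current.next; // splice the node in at the right place
        current.next = node;
      }
    }
    return longer;
  }
}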

You can see that in the above algorithm we do not consume any extra space; we simply reuse the existing nodes. This algorithm needs just 80kb of space to run, which is the space already occupied by the two existing lists.

What did we infer from this?

When we compare these 2 algorithms, we can easily say that algorithm 2 is the clear winner when it comes to saving space. 80kb of extra space may seem very small, but consider writing code for gigantic lists. You should always keep the space consumed in mind when comparing different ways of writing your code.

This is also known as the Space Complexity.


