Friday 13 April 2012

Complexities with examples

Hi, welcome back.  In this post we'll look at the common complexity classes, with examples for each.

As usual we'll start with the best available complexity and move towards the worst.
For each complexity we'll consider both space and time.

Constant O(1): An algorithm runs in constant complexity when the input size has no effect on its running time (or on the amount of space it allocates).

Examples:
1. Code returning the square of a number.
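A minimal sketch of this in Python (the function name is just illustrative):

```python
def square(n):
    # A single multiplication, regardless of how large n is:
    # O(1) time and O(1) space.
    return n * n
```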

Logarithmic O(log n):  An algorithm is said to be logarithmic if it iterates log n times for a given input of size n (time complexity), or allocates log n units of space (space complexity).

Example:
1. Binary Search:  In binary search we look for a number (the key) in a series of given numbers.  The constraint here is that the given numbers must be in sorted order. Note:  the worst case of binary search is O(log n).

Suppose the given numbers are 1,2,3,4,5,6,7,8 and we are asked to find whether the series contains the element 1 (the key element).  The logic of binary search is simple: we check whether the middle element in the series is equal to the key.  If it is, we return its position.  If it is not, we compare the key element with the middle
element.  If the key element is greater than the middle element we can safely ignore the elements less than the middle element (the left half) and continue with the right half.  If the key element is less than the middle element we continue with the left half, safely ignoring the right half.

Here is the procedure for 1,2,3,4,5,6,7,8 (counting positions from 0).  Since there are 8 elements, the middle position is 4, and the element there is 5. Comparing the key element 1 with 5, the key is less than the middle element, so we can ignore the 5,6,7,8 part. --------------------> 1st iteration.

Now the input is 1,2,3,4.  The middle element is at position 2, and the element at that position is 3.  Since 1 is less than 3, we can ignore the 3,4 part. ---------------------------> 2nd iteration.

Now the input is 1,2.  The middle element is at position 1, and the element there is 2.  Since 1 (the key element) is less than 2, we can ignore the 2 part. ----------------------> 3rd iteration.

Now the input is just 1.  Comparing the key with 1, they match, so we return the index of element 1.
----------------------->  4th iteration


Interestingly, log 8 base 2 = 3.  Thus binary search takes about log n iterations to reach the candidate location for n given elements.  Whether the element at that location equals the key or not, we stop iterating: we return the index if it is equal, or some sentinel such as -1 or MAXINT to symbolically say we couldn't find it.

Note:  Thus if you can write code which safely ignores half of the input possibilities and recurses through the other half with the same logic, then yes, you are writing code in O(log n) complexity.
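The procedure above can be sketched in Python like this (returning -1 as the "not found" sentinel):

```python
def binary_search(arr, key):
    """Return the index of key in the sorted list arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == key:
            return mid
        elif key < arr[mid]:
            high = mid - 1   # safely ignore the right half
        else:
            low = mid + 1    # safely ignore the left half
    return -1
```

Each pass of the loop halves the search range, which is exactly why the number of iterations is about log n.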

I hope that was clear; if you have any questions, please leave them in the comment section.

Linear O(n): If we need to consider every value in the input series, we are dealing with O(n).  Since in O(log n) complexity we ignored half of the values at every iteration, we can safely say O(log n) is better than O(n).

Example: Linear Search.  Unlike with binary search, if the input series is not in sorted order we have to rely on linear search.  In linear search we compare the key value with every element in the input series.  Assuming the worst case, i.e., the element is not present or is the last element in the series, we need to traverse to the end, covering all n elements.  Thus the worst-case complexity is O(n).
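A simple Python sketch of linear search, again using -1 as the "not found" sentinel:

```python
def linear_search(arr, key):
    # Compare the key with every element in turn; in the worst case
    # we scan all n elements, hence O(n) time.
    for i, value in enumerate(arr):
        if value == key:
            return i
    return -1
```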

Linearithmic O(n log n):  This is the combination of the linear and logarithmic complexities.  If, for example, for every element in the series you run a binary search over the remaining elements, then you are dealing with a complexity of O(n log n).

Example: Quick sort, merge sort.  We shall look at the important sorting algorithms in a separate post.
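As a preview, here is a minimal merge sort sketch: splitting the input in half gives about log n levels, and merging at each level touches all n elements, for O(n log n) overall.

```python
def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # sort each half recursively
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves: O(n) work per level.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```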

Quadratic O(n²):  If we need to iterate over every element n times, then we are dealing with quadratic complexity.

Example: Bubble sort, insertion sort.  As said earlier, we'll see those concepts in a separate post.
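As a preview, bubble sort shows the quadratic pattern directly: two nested loops over the input, roughly n × n comparisons.

```python
def bubble_sort(arr):
    # Two nested passes over the input: O(n^2) comparisons in total.
    a = list(arr)  # work on a copy
    n = len(a)
    for i in range(n):
        # After pass i, the last i elements are already in place.
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```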

Cubic O(n³):  If we need logic that iterates over an iteration which in turn iterates over another, then we are in cubic complexity.
Example: the Floyd-Warshall algorithm.  We will deal with this concept in a post dedicated to graph algorithms.
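As a preview, here is a minimal Floyd-Warshall sketch; the three nested loops over the n vertices are what make it O(n³).

```python
def floyd_warshall(dist):
    # dist is an n x n matrix of direct edge weights,
    # with float('inf') where there is no edge.
    n = len(dist)
    d = [row[:] for row in dist]  # work on a copy
    # Three nested loops over n vertices: O(n^3) time.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Is going through vertex k shorter than the current path?
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```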

Exponential O(2ⁿ): We'll deal with this through a very common example, a rabbit example.  Imagine that there are 2 rabbits in the first year (2¹):

first year:         2 ----------> 2¹
second year:        4 ----------> 2²
third year:         8 ----------> 2³
fourth year:       16 ----------> 2⁴
.
.
.
10th year:       1024 ----------> 2¹⁰
Thus the population doubles itself every year, growing enormously large in a short time.
The series 2, 4, 8, 16, 32, etc. is said to be an exponential series.
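The doubling above can be sketched in a few lines of Python (the function name is just illustrative):

```python
def rabbit_population(years):
    # The population doubles every year, so after `years` years
    # there are 2**years rabbits: exponential growth.
    population = 1
    for _ in range(years):
        population *= 2
    return population
```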

Hope the examples we have gone through were helpful.
 
