Locking : Mutex vs Spinlocks


Let me start with a very basic technical interview question: what is the difference between a mutex and a spinlock?
First of all, let's understand what a mutex is.
A mutex is a kind of key needed to use a resource in the system. If you hold the mutex, you can use the resource; if you don't, you wait till the mutex is released. The process goes to the wait queue for that particular resource.

What is a spinlock? A spinlock is a mechanism in which the process needing a resource polls the lock on that resource till it gets it. This is also called busy-waiting: the process spins in a loop till it gets the resource.

So we get the first difference right there: the way the two wait for the lock. With a mutex, the process goes into a wait state and does not use the processor, while with a spinlock, the process keeps looping and hence uses the processor even while waiting.
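To make the contrast concrete, here is a minimal user-space sketch using POSIX threads. It assumes a Linux system with glibc, where pthread_spinlock_t is available; kernel-space locking uses different APIs. Each worker increments its own counter under its own lock: a thread blocked on the mutex sleeps, while a thread blocked on the spinlock spins.

#include <pthread.h>
#include <stdio.h>

static long m_count = 0, s_count = 0;
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_spinlock_t spin;

void *mutex_worker(void *arg)
{
    (void)arg;
    /* If mtx is held, this thread sleeps in a wait queue. */
    pthread_mutex_lock(&mtx);
    m_count++;
    pthread_mutex_unlock(&mtx);
    return NULL;
}

void *spin_worker(void *arg)
{
    (void)arg;
    /* If spin is held, this thread busy-waits, using CPU cycles. */
    pthread_spin_lock(&spin);
    s_count++;
    pthread_spin_unlock(&spin);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);

    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, i % 2 ? spin_worker : mutex_worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);

    printf("mutex count = %ld, spinlock count = %ld\n", m_count, s_count);
    pthread_spin_destroy(&spin);
    return 0;
}

Compile with gcc -pthread. With only a few increments the output is the same either way; the difference is in the waiting behavior, which shows up under real contention.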

The second difference comes from the question: when should we use a spinlock and when a mutex?

Spinlocks are best used when a piece of code cannot go to sleep. The best example of such code is an interrupt request handler.

Mutexes are best used in user space programs, where a sleeping process does not affect the system in a catastrophic way.

Spinlocks make sense on multi-processor systems, or on uni-processor systems with preemption enabled. On a uni-processor system without preemption, a spinlock may lead to deadlock, where the process goes on spinning forever.

The following table summarizes the above points.

Mutex Vs Spinlock


Mechanism
  Mutex: Test for the lock. If it is available, use the resource; if not, go to the wait queue.
  Spinlock: Test for the lock. If it is available, use the resource; if not, loop and test the lock again till you get it.

When to use
  Mutex: When putting the process to sleep is not harmful, as in user space programs, and when considerable time may pass before the process gets the lock.
  Spinlock: When the process should not be put to sleep, as in interrupt service routines, and when the lock will be granted in a reasonably short time.

Drawbacks
  Mutex: Incurs process context switch and scheduling cost.
  Spinlock: The processor is busy doing nothing till the lock is granted, wasting CPU cycles.

In the next post, we will list some of the APIs used for working with mutexes and spinlocks in the Linux kernel.

Convert a decimal number to binary number.


This article explains basic usage of a stack with the help of a simple problem: converting a decimal number to a binary number.

We know the algorithm for the problem:
1. Divide the given number by 2.
2. Store the remainder.
3. Repeat step 1 with the quotient.
4. When the quotient is zero, print the stored remainders in reverse order.

Example: let's say we want to convert 27 to binary.

Step 1: 27/2 = 13, remainder 1
Step 2: 13/2 = 6, remainder 1
Step 3: 6/2 = 3, remainder 0
Step 4: 3/2 = 1, remainder 1
Step 5: 1/2 = 0, remainder 1

When we print the remainders in reverse order, we get 11011, which is the binary representation of 27. Since we want the last remainder first, this is a perfect case for using a stack.


Code

The implementation of the push and pop functions is discussed in the post on the stack data structure; a sketch is given below.
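This is a self-contained sketch in C; the fixed-size array stack and the MAX bound of 64 are assumptions for illustration.

#include <stdio.h>

#define MAX 64

static int stack[MAX];
static int top = -1;                     /* -1 means the stack is empty */

static void push(int x) { stack[++top] = x; }
static int  pop(void)   { return stack[top--]; }
static int  is_empty(void) { return top == -1; }

void decimal_to_binary(int n)
{
    if (n == 0) {
        printf("0");
        return;
    }
    while (n > 0) {                      /* push remainders, LSB first */
        push(n % 2);
        n /= 2;
    }
    while (!is_empty())                  /* pop them back in reverse */
        printf("%d", pop());
}

int main(void)
{
    decimal_to_binary(27);               /* prints 11011 */
    printf("\n");
    return 0;
}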

Complexity

The complexity of the algorithm to convert a decimal number to its binary representation is O(log n), since the number is halved at every step.

Stack as data structure


What is a stack?

A stack is an abstract data type with a unique property called LIFO: last in, first out. Whatever goes in last is the first to come out, and the item that went in first stays in the stack the longest. A stack is used when you have to track all previously visited elements and process them in reverse order. This is typical in problems like non-recursive implementations of binary tree traversals, parenthesis matching, printing a word in reverse order, etc.

Whenever we encounter a problem which requires some sort of reversal of an already existing arrangement, we go for a stack.

Basic operations allowed on a stack:
1. push(): adds an element to the stack.
2. pop(): takes out the element at the top of the stack.

(Figure: stack push and pop operations)

Other operations commonly used with a stack are:

1. is_empty(): checks whether there are any elements present in the stack.
2. peek(): returns the element at the top but, unlike pop, does not remove it from the stack.

Since a stack can dynamically grow and shrink with push and pop operations, we need to keep track of the top of the stack. Notice that a stack is a recursive structure: if we remove the top of the stack, the remaining N-1 elements still form a stack.

Let's see an example to understand the reversal order of a stack, i.e. whatever is entered into the stack is taken out in reverse order.

(Figure: reversal order of a stack)

This reversal property is used to maintain calling and called function relationships, the back button history of web browsers, etc.

Implementation of stack data structure

Stacks can be implemented in two ways.

1. Array based implementation of stack.

In this implementation, the underlying data structure is an array. We keep an extra variable (top) to track the number of elements present in the stack at a given time. When we push an element, we increment top; when we pop an element, we decrement it.
Here, top being -1 represents an empty stack, and top equal to N-1, N being the size of the array, represents a full stack.
Push and pop operations are of O(1) complexity.

The drawback of this implementation is that the stack size must be defined beforehand, and growing or shrinking it dynamically is an overhead.

2. Linked List based implementation of stack.

In this implementation, the underlying data structure is a linked list. Every node in the list represents an element of the stack. Every push operation adds a node at the head of the list, and every pop operation removes the node at the head.
In this implementation too, push and pop are O(1) operations, and the stack can grow as much as required. The head being NULL represents an empty stack.
The drawback is that we need to store an extra next pointer (e.g. 4 bytes on a 32-bit system) for each element. A sketch of this implementation follows below.
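A minimal sketch of the linked list based stack, assuming an int payload for illustration:

#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    int data;
    struct node *next;
} node;

static node *head = NULL;        /* NULL head means the stack is empty */

void push(int x)
{
    node *n = malloc(sizeof(node));
    n->data = x;
    n->next = head;              /* new node points to the old head */
    head = n;
}

int pop(void)                    /* caller must check is_empty() first */
{
    node *n = head;
    int x = n->data;
    head = n->next;
    free(n);
    return x;
}

int is_empty(void) { return head == NULL; }

int main(void)
{
    push(1); push(2); push(3);
    while (!is_empty())
        printf("%d ", pop());    /* prints 3 2 1, the reversal order */
    printf("\n");
    return 0;
}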
Stacks also support the recursive paradigm in programming languages: during program execution, parameters passed to functions, return values and return pointers are stored on the stack. Many other problems can be solved efficiently using the stack data structure.

Dynamic programming : Longest Common Subsequence

A subsequence of a string is a set of its characters taken in left-to-right order, not necessarily contiguous. For example, the string ABCDEG has ABC, ADG, EG and BCDEG as subsequences, whereas BDA is not a subsequence of it.

Problem statement

Given two strings X and Y, find the longest common subsequence Z.

For example 
X = ABCDSEFGD
Y = ACFEFXVGAB
LCS Z would be ACEFG.

Analysis

Can we divide this problem into sub-problems?

Let's say the length of X is N and the length of Y is M.
We start from the end of both strings and check if X[N] == Y[M].
If yes, our problem reduces to finding the longest common subsequence of X[1...N-1] and Y[1...M-1].

What if they are not equal? Then, one by one, we exclude a character from X or from Y.

If we exclude the character from X, the problem reduces to X[1...N-1] and Y[1...M].
If we exclude the character from Y, the problem reduces to X[1...N] and Y[1...M-1].

We take the maximum of both cases, so the recurrence relation comes out as:


C[i,j] = 0                           if i or j is 0
       = 1 + C[i-1, j-1]             if X[i] == Y[j]
       = max(C[i-1, j], C[i, j-1])   if X[i] != Y[j]

The above relation can be implemented directly using the recursion construct of a programming language, as shown below.
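A direct transcription of the recurrence might look like this sketch (the name lcs_rec and the index convention are illustrative choices):

#include <stdio.h>
#include <string.h>

/* Plain recursion: exponential time, since subproblems such as
   lcs_rec(i-1, j-1) are recomputed many times. */
int lcs_rec(const char *X, const char *Y, int i, int j)
{
    if (i == 0 || j == 0)
        return 0;
    if (X[i - 1] == Y[j - 1])
        return 1 + lcs_rec(X, Y, i - 1, j - 1);
    int a = lcs_rec(X, Y, i - 1, j);
    int b = lcs_rec(X, Y, i, j - 1);
    return a > b ? a : b;
}

int main(void)
{
    const char *X = "ABCDSEFGD", *Y = "ACFEFXVGAB";
    printf("%d\n", lcs_rec(X, Y, (int)strlen(X), (int)strlen(Y)));  /* 5 */
    return 0;
}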


Looking at that implementation, we notice that some subproblems are solved multiple times. To avoid solving them again and again, we can store their values.

This gives us a perfect case for application of dynamic programming.

Code
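Below is a bottom-up sketch in C of the same recurrence; the MAX bound on string length is an assumption for illustration.

#include <stdio.h>
#include <string.h>

#define MAX 100

static int max(int a, int b) { return a > b ? a : b; }

/* C[i][j] holds the LCS length of X[0..i-1] and Y[0..j-1]. */
int lcs(const char *X, const char *Y)
{
    int n = (int)strlen(X), m = (int)strlen(Y);
    int C[MAX][MAX] = {{0}};         /* row 0 and column 0 stay zero */

    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++) {
            if (X[i - 1] == Y[j - 1])
                C[i][j] = 1 + C[i - 1][j - 1];
            else
                C[i][j] = max(C[i - 1][j], C[i][j - 1]);
        }
    return C[n][m];
}

int main(void)
{
    printf("%d\n", lcs("ABCDSEFGD", "ACFEFXVGAB"));  /* 5, for ACEFG */
    return 0;
}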

Complexity analysis

The above implementation has time and space complexity of O(N*M), where N and M are the lengths of the two strings.

Dynamic Programming : Matrix chain multiplication


The problem: we are given N matrices. The dimensions of each matrix are given in an array P, where P[i-1] and P[i] denote the rows and columns respectively of the ith matrix. We need to find the order in which these matrices should be multiplied so that the total number of scalar multiplications is the least. Matrix chain multiplication is a typical problem used to explain dynamic programming, and the approach learnt here can easily be applied to many other problems.

Before going further, let's note some points about matrix multiplication:
  1. Matrix multiplication is associative, i.e. A*(B*C) = (A*B)*C.
  2. It is not commutative, i.e. A*B is in general not equal to B*A.
  3. To multiply two matrices, they should be compatible, i.e. the number of columns of the first matrix should be equal to the number of rows of the second.
Let's get back to the problem. An easy way to state it: we need to figure out a way to put parentheses around the matrices so that the total number of multiplications is the least.

The brute force way is to try every possible combination of parentheses and check which one gives the minimum.
This is exponential in time and hence of no use for even moderately large inputs.

Let's see if dynamic programming fits in.

We can see that the cost of multiplying matrices Ai to Aj is the cost of (Ai to Ak), plus the cost of (Ak+1 to Aj), plus the cost P[i-1] * P[k] * P[j] of the final multiplication. The idea is to find the k for which this expression becomes minimum. Let M[i,j] represent the least cost of multiplying matrices i to j:

M[i,j] = min over k in [i, j-1] of ( M[i,k] + M[k+1,j] + P[i-1] * P[k] * P[j] )

So when we calculate M[i,j], we should already have M[i,k] and M[k+1,j].
We also know that M[i,i] = 0, as the cost of "multiplying" a single matrix is zero.

Looking at what is required to calculate M[i,j], we can say that it depends on the length of the chain. The length of the chain from matrix i to matrix j is j - i + 1.

We start with length 2 and go on to solve the table entries for lengths up to N; M[1,N] gives the final cost. Since we need to start at every position i for every length L, we have j = i + L - 1, and since j <= N, it follows that i <= N - L + 1.

Code for the matrix chain multiplication problem
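The following is a sketch of the bottom-up algorithm described above; the example dimensions in main are made up for the demonstration.

#include <stdio.h>
#include <limits.h>

/* P has n+1 entries; matrix i (1-based) has dimensions P[i-1] x P[i]. */
int matrix_chain_order(int P[], int n)
{
    int M[n + 1][n + 1];             /* C99 variable-length array */

    for (int i = 1; i <= n; i++)
        M[i][i] = 0;                 /* a single matrix costs nothing */

    for (int L = 2; L <= n; L++) {   /* L is the chain length */
        for (int i = 1; i <= n - L + 1; i++) {
            int j = i + L - 1;
            M[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {
                int cost = M[i][k] + M[k + 1][j]
                         + P[i - 1] * P[k] * P[j];
                if (cost < M[i][j])
                    M[i][j] = cost;
            }
        }
    }
    return M[1][n];
}

int main(void)
{
    int P[] = {10, 20, 30, 40, 30};  /* matrices 10x20, 20x30, 30x40, 40x30 */
    printf("%d\n", matrix_chain_order(P, 4));   /* prints 30000 */
    return 0;
}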

Complexity analysis

The time complexity of matrix chain multiplication using dynamic programming is O(N^3), because of the three nested loops; the space complexity is O(N*N).


What is dynamic programming?


Dynamic programming is a way of solving bigger problems with the help of the results of their sub-problems. The difference between divide and conquer and dynamic programming is that in the divide and conquer approach the sub-problems are mutually exclusive, whereas in dynamic programming the sub-problems overlap.

Conditions for application of dynamic programming approach

There are two properties which should be fulfilled by a problem for it to be classified as a dynamic programming problem.

First is optimal substructure. The problem should ask for an optimal value as the result, like the maximum value for a given weight or the maximum number of applications served with a given bandwidth, and optimal solutions of the sub-problems can be combined to get the optimal solution of the bigger problem.

Second, there should be overlapping sub-problems, whose solutions can be reused to derive the solution of the bigger problem.

There are two approaches in which dynamic programming can be used :

1. Bottom-up approach
In this, the problem is divided into smaller problems; the smallest problems are solved first and their results are used upward to solve the bigger problems.

2. Top-down approach
When we have a recurrence like T(n) = 2T(n-1) + n, where a sub-problem is not significantly smaller than the original problem, and going down the recursion tree many sub-problems are the same, it is better to store the result of each sub-problem we have seen so far and reuse it, so that already-calculated sub-problems are not computed again. This technique is called memoization.

Any dynamic programming problem can be solved using recursion, reducing the candidate input each time depending on the required output. However, that solution unnecessarily solves already-solved sub-problems again and again, increasing the time complexity. In the dynamic programming approach, we store the results of the sub-problems so that each sub-problem is evaluated only once. Hence in DP we trade space for time.
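As a small illustration of memoization, here is the classic Fibonacci example (not one of the problems listed below, but it shows the idea in a few lines):

#include <stdio.h>

#define MAX 91

static long long memo[MAX];   /* 0 means "not yet computed" */

long long fib(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] == 0)         /* each subproblem is solved only once */
        memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    printf("%lld\n", fib(50));   /* 12586269025 */
    return 0;
}

Without the memo array this recursion takes exponential time; with it, each fib(n) is computed once, trading O(n) space for the speedup.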

There are many problems which are good candidates for dynamic programming. I have made a list of such problems and will code them one by one in my following posts.

1. 0-1 knapsack problem
2. Longest increasing subsequence
3. Longest palindromic substring
4. Matrix chain multiplication
5. Longest common subsequence
6. Coin change

Dynamic programming : 0-1 knapsack problem


0-1 knapsack is a typical problem used to demonstrate the application of the greedy approach as well as dynamic programming; for this variant there are cases where the greedy algorithm does not give the optimal solution.
The knapsack problem can be asked in many flavors.
1. A thief enters a museum and wants to steal artifacts from there. Every artifact has a weight and a value associated with it, and the thief carries a knapsack (bag) which can hold only a specific weight.

The problem is to find the combination of artifacts the thief should take so that he gets the maximum value while the total weight of the taken artifacts is less than the capacity of his bag.
The thief cannot take any artifact partially: either he takes it or leaves it. Hence the problem is 0-1 knapsack.

2. The second flavor: we have N files, each of size Si, and a total storage capacity of W bytes. For each file, the re-computation cost is Vi. The problem is to store files such that their combined size is at most W and their total re-computation value is maximum.

We can either store a file or leave it out; we cannot store a partial file. Hence this is again a case of the 0-1 knapsack problem.
In pure mathematical terms: maximize sum(Vi * Xi) subject to sum(Wi * Xi) <= W, where each Xi is either 0 (exclude item i) or 1 (include it).

Analysis

The brute force method would try all subsets of the set of items and see which one gives the maximum value. This method is of exponential order.
Can we do better? If we consider an item, there are two possibilities associated with it.
First, the item is included in the optimal subset. Then we need to find, among the remaining N-1 items, those which optimize the sub-problem for weight W-Wk; the value of this item is added to the candidate maximum value.
Second, the item is not included in the set. In that case, we need to find, among the remaining N-1 items, those which optimize the original problem; the value of this item is not added to the candidate maximum value. Inclusion depends on two conditions: 1. the weight of the item is less than the remaining allowed weight; 2. including the item increases the value obtained with the first K-1 items and W-Wk weight.
When the remaining allowed weight is zero (case 1) or we have considered all items (case 2), we have reached the solution.

Dynamic programming Implementation of knapsack problem

In the implementation, we define a two-dimensional array of size (N+1) * (W+1).
An element of this array can be interpreted as follows: for a given capacity w (w <= W) and a given number of items i (i <= N), the best achievable value is array[i][w].
For i = 0 or w = 0 the value is zero, so the first row and the first column are filled with zeros; we build on top of that.
For each item in the set, we check the maximum value we can get for every capacity w.

As explained in the analysis, based on the two conditions we include or exclude the item.
We can easily keep track of which items are included in the optimal solution by keeping a boolean two-dimensional array, where element a[i][j] is true if the ith item was included at capacity j.

Code
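Below is a sketch of the implementation just described; the example values and weights in main are made up.

#include <stdio.h>

static int max(int a, int b) { return a > b ? a : b; }

/* K[i][w] = best value using the first i items with capacity w. */
int knapsack(int val[], int wt[], int n, int W)
{
    int K[n + 1][W + 1];                 /* C99 variable-length array */

    for (int i = 0; i <= n; i++)
        for (int w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                K[i][w] = 0;             /* no items or no capacity */
            else if (wt[i - 1] > w)
                K[i][w] = K[i - 1][w];   /* item i does not fit */
            else                         /* include or exclude item i */
                K[i][w] = max(K[i - 1][w],
                              val[i - 1] + K[i - 1][w - wt[i - 1]]);
        }
    return K[n][W];
}

int main(void)
{
    int val[] = {60, 100, 120};
    int wt[]  = {10, 20, 30};
    printf("%d\n", knapsack(val, wt, 3, 50));   /* prints 220 */
    return 0;
}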

One similar problem which can be solved with the same approach is finding the minimum number of coins needed to make change for a particular amount.
I am skipping the full analysis and giving the code directly below.
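A sketch of that coin change code, with an assumed US-style coin set for the demonstration:

#include <stdio.h>
#include <limits.h>

/* dp[a] = fewest coins summing to a, or INT_MAX if a cannot be made. */
int min_coins(int coins[], int n, int amount)
{
    int dp[amount + 1];
    dp[0] = 0;
    for (int a = 1; a <= amount; a++) {
        dp[a] = INT_MAX;
        for (int i = 0; i < n; i++)
            if (coins[i] <= a && dp[a - coins[i]] != INT_MAX
                && dp[a - coins[i]] + 1 < dp[a])
                dp[a] = dp[a - coins[i]] + 1;
    }
    return dp[amount];
}

int main(void)
{
    int coins[] = {1, 5, 10, 25};
    printf("%d\n", min_coins(coins, 4, 63));  /* 6: 25+25+10+1+1+1 */
    return 0;
}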

Complexity analysis

The complexity of the dynamic programming implementation of the knapsack problem is O(N*W), and the space complexity is again O(N*W). It is a rule of thumb that we trade space for time in dynamic programming.
