# Largest sum contiguous subarray

Given an array of signed integers, find the largest sum contiguous subarray, i.e., the subarray with the maximum sum among all subarrays of the given array. A subarray is a range of contiguous indices of the array, with size ranging from 0 to N, where N is the size of the original array. For example, for the array [ -1, 3, -5, 4, 6, -1, 2, -7, 13, -3 ], the output is the subarray [ 4, 6, -1, 2, -7, 13 ] with sum = 17. The largest sum contiguous subarray problem is solved using Kadane’s algorithm.

The maximum sum subarray can be found using a dynamic programming approach. At each element of the input array, we decide whether to include the current element in the best subarray ending here or not. This decision is based on the following considerations:

1. If the element at index i increases the sum of the subarray ending at index i-1, it should be included in the subarray.
2. If the element at index i, when added to the sum of the subarray ending at i-1, makes the sum negative, it is not included, because carrying it forward would make every subsequent sum smaller than starting afresh. To summarize:

To calculate the sum up to index i, we can reuse the sum already calculated up to index i-1.
If adding the element at i makes the sum negative, we drop the running subarray entirely, since a negative prefix can only decrease the sum of any subarray that extends it. Hence we start afresh from element i+1.

Second, even if the sum is still positive after taking the ith element, we check whether it is greater than the maximum sum seen up to index i-1.

## Largest sum contiguous subarray : Inclusion algorithm

• If adding the ith element makes the sum of the current subarray negative, reset the current sum to zero and start over from index i+1.
• If the current sum is greater than the maximum sum seen so far, the ith element is added to the subarray and the rightmost index of the subarray becomes i.
• If the current sum is less than the maximum sum seen so far, the rightmost index of the subarray does not change.

The algorithm above does not work when all numbers in the array are negative. To handle that case, scan all elements of the array before applying the algorithm and check whether there is at least one non-negative number. During this scan, also keep track of the largest number seen. If all elements are negative, return that largest number.

## Largest sum contiguous subarray implementation

```
#include <stdio.h>

void largestSumContiguousSubarray(int a[], int size){
    int startIndex = 0, endIndex = 0;
    int currStartIndex = 0;

    int maxSum = 0;
    int currentSum = 0;

    for(int index = 0; index < size; index++){
        currentSum = currentSum + a[index];

        /* Case 1: the ith element extends the best subarray.
           Update the end index and also the start index. */
        if(currentSum > maxSum){
            maxSum = currentSum;
            endIndex = index;
            startIndex = currStartIndex;
        }
        /* Case 2: the ith element cannot be included and we must
           start afresh from index i+1. The max sum and its start
           and end indices remain unchanged. */
        if(currentSum < 0){
            currStartIndex = index + 1;
            currentSum = 0;
        }
    }
    printf("\nMaximum Sum : %d", maxSum);
    printf("\nBetween (%d, %d)", startIndex, endIndex);
}

//Driver program
int main() {
    int intArr[] = {-1, 3, -5, 4, 6, -1, 2, -7, 13, -3};
    int size = sizeof(intArr)/sizeof(intArr[0]);
    largestSumContiguousSubarray(intArr, size);

    return 0;
}
```

Let’s trace the execution of the code and see whether the algorithm works correctly.
main() calls largestSumContiguousSubarray(a, size) with a = [-1, 3, -5, 4, 6, -1, 2, -7, 13, -3] and size = 10.
We scan from i = 0 to i = 9 using a for loop and keep track of four values. maxSum stores the maximum subarray sum seen up to index i. currentSum stores the sum of the currently running subarray; the two may differ. startIndex tracks the start index of the subarray with maxSum, and endIndex tracks its last index. currStartIndex indicates the start index of the current subarray being considered. To start with, startIndex = endIndex = currStartIndex = 0, and currentSum and maxSum are 0.

For i = 0: currentSum = 0 + (-1) = -1. Since the sum must be greater than 0 for an index to be part of the solution, index 0 is discarded. The first condition (maxSum < currentSum) is false, so nothing in that if block executes. The second condition (currentSum < 0) is true, so we move currStartIndex to i + 1 = 1 and reset currentSum to 0.
After the first iteration, maxSum = 0, currentSum = 0, currStartIndex = 1; startIndex and endIndex are still to be found.

Now, for i = 1: currentSum = 0 + 3 = 3. The first condition (maxSum < currentSum) is true, so maxSum is updated to 3, endIndex = 1 and startIndex = 1. Our probable subarray with maxSum is [3], with maxSum = 3. The second condition is false.

For i = 2: currentSum = 3 + (-5) = -2. The first condition (maxSum < currentSum) is false, so nothing in that if block executes. The second condition (currentSum < 0) is true, so we move currStartIndex to i + 1 = 3 and reset currentSum to 0.
Now maxSum = 3, currentSum = 0, currStartIndex = 3, startIndex = endIndex = 1.

For i = 3: currentSum = 0 + 4 = 4.
The first condition (maxSum < currentSum) (maxSum so far is 3) is true, so maxSum is updated to 4, startIndex = 3 and endIndex = 3. Our probable subarray with maxSum is [4], with maxSum = 4.
The second condition is false.

For i = 4: currentSum = 4 + 6 = 10.
The first condition (maxSum < currentSum) (maxSum so far is 4) is true, so maxSum is updated to 10, startIndex = 3 and endIndex = 4. Our probable subarray with maxSum is [4,6], with maxSum = 10. The second condition is false.

For i = 5: currentSum = 10 + (-1) = 9.
The first condition (maxSum < currentSum) (maxSum = 10) is false, so nothing in that if block executes. The second condition (currentSum < 0) is also false, so nothing changes.
Now maxSum = 10, currentSum = 9, startIndex = 3, endIndex = 4.

For i = 6: currentSum = 9 + 2 = 11.
The first condition (maxSum < currentSum) (maxSum so far is 10) is true, so maxSum is updated to 11, startIndex = 3 and endIndex = 6. Our probable subarray with maxSum is [4,6,-1,2], with maxSum = 11.
The second condition is false.

For i = 7: currentSum = 11 + (-7) = 4.
The first condition (maxSum < currentSum) is false, so nothing executes in that block; currentSum is also not less than 0, so startIndex = 3 and endIndex = 6 remain. The probable subarray with maxSum is still [4,6,-1,2], with maxSum = 11.

For i = 8: currentSum = 4 + 13 = 17.
The first condition (maxSum < currentSum) (maxSum so far is 11) is true, so maxSum is updated to 17, startIndex = 3 and endIndex = 8. The probable subarray with maxSum is [4,6,-1,2,-7,13], with maxSum = 17.
The second condition is false.

For i = 9: currentSum = 17 + (-3) = 14.
The first condition (maxSum < currentSum) is false, so nothing executes in that block; currentSum is also not less than 0, so startIndex = 3 and endIndex = 8 remain. The final subarray with maxSum is [4,6,-1,2,-7,13], with maxSum = 17.

The complexity of finding the largest sum contiguous subarray using Kadane’s algorithm is O(N) in time and O(1) in space.

# Minimum edit distance between two strings

The minimum edit distance between two strings is the minimum number of operations needed to transform one string into the other. The operations allowed are: insertion of a character, deletion of a character, and substitution of a character. For example,

String S1  = EXPONENTIAL
String S2 = POLYNOMIAL
From the above example, we see that to find the minimum edit distance we have to find the best possible alignment of the two strings. However, there are so many possible alignments that it would be very costly to consider each one and look for the best. Can we break the problem into smaller, easier-to-solve subproblems?

The problem at hand is to find the minimum edit distance between strings X[1…n] and Y[1…m]. Consider a prefix of each string, X[1…i] and Y[1…j]; let’s find the edit distance for these prefixes and call it Edit(i,j).
At the end, we need to calculate Edit(n,m).

If we align two strings, we start aligning the rightmost parts first. There are three possibilities for how to treat the rightmost characters.

Let’s consider each case one by one:
Case 1:
If the last character of X does not match the last character of Y, suppose we delete a character from X. The cost of this operation is 1. Now there are i-1 characters in X and j characters in Y to consider, which is nothing but Edit(i-1,j).

Case 2:
If the last character of X does not match the last character of Y, and a new character is added to X, the cost of the insert operation is 1. There are i characters in X and j-1 characters in Y to consider, which is nothing but Edit(i,j-1).

Case 3:
In this case, we neither insert nor delete a character. There are two possibilities: either the aligned characters match or they do not.
If they match, we find the edit distance for the i-1 and j-1 length prefixes; no cost is incurred.
If they don’t match, we substitute one for the other at a cost of 1, and the problem again reduces to the i-1 and j-1 prefixes.

Finally, we are able to express the problem in terms of subproblems which can be solved recursively.

```
Edit(i,j) = min { 1 + Edit(i,j-1),
                  1 + Edit(i-1,j),
                  Edit(i-1,j-1)     if X[i] == Y[j],
                  1 + Edit(i-1,j-1) if X[i] != Y[j] }
```

What is the base case for the recursion? If both strings have length zero, the cost is 0.
If one string has length 0, the cost is the length of the other string.

### Edit distance between two strings : Recursive implementation

```
#include<stdio.h>
#include<string.h>

int min(int a, int b) {
    return a > b ? b : a;
}

int editDistance(char *s1, char *s2, int length1, int length2){
    if(length1 == 0) return length2;
    if(length2 == 0) return length1;

    // Compare the last characters of the two prefixes (0-based indexing)
    int substitutionCost = s1[length1-1] == s2[length2-1] ? 0 : 1;

    return min(min( 1 + editDistance(s1, s2, length1-1, length2),
                    1 + editDistance(s1, s2, length1, length2-1)),
               substitutionCost + editDistance(s1, s2, length1-1, length2-1));
}
//Driver program
int main(){
    char *s = "EXPONENTIAL";
    char *d = "POLYNOMIAL";
    printf("Minimum distance between two strings is : %d",
           editDistance(s, d, strlen(s), strlen(d)));
    return 0;
}
```

It is evident that we solve the same subproblem again and again. To avoid that, we can use dynamic programming. There are two necessary conditions for applying dynamic programming to a problem: it should have optimal substructure, and its subproblems should overlap. Both conditions are met here.

To implement the above formula with dynamic programming, a two-dimensional table is required, where Table(i,j) stores Edit(i,j); every cell can be calculated bottom-up. At the end, Table(n,m) gives the final edit distance. It does not matter whether we fill the table row-wise or column-wise: when we reach cell (i,j), all the required cells are already filled in.

## Minimum edit distance using dynamic programming

```
#include<stdio.h>
#include<string.h>

int editDistance(char *s1, char *s2){
    int n = strlen(s1);
    int m = strlen(s2);

    int minimumDistance = 0;
    int currentMinimum  = 0;
    int Table[n+1][m+1];

    memset(Table, 0, sizeof(Table));

    for(int i=0; i<=n; i++)
        Table[i][0] = i;

    for(int i=1; i<=m; i++)
        Table[0][i] = i;

    for(int i=1; i<=n; i++){
        for(int j=1; j<=m; j++){
            //Case 3, possibility 1: X[i] == Y[j]
            if(s1[i-1] == s2[j-1]){
                currentMinimum = Table[i-1][j-1];
            }
            //Case 3, possibility 2: X[i] != Y[j]
            else{
                currentMinimum = Table[i-1][j-1] + 1;
            }
            //Case 1: deletion of a character from s1
            if(Table[i][j-1] > Table[i-1][j]){
                minimumDistance = Table[i-1][j] + 1;
            }
            //Case 2: insertion of a character into s1
            else {
                minimumDistance = Table[i][j-1] + 1;
            }
            if(currentMinimum < minimumDistance){
                minimumDistance = currentMinimum;
            }
            Table[i][j] = minimumDistance;
        }
    }
    return Table[n][m];
}
//Driver program
int main(){
    char *s = "EXPONENTIAL";
    char *d = "POLYNOMIAL";
    printf("Minimum distance between two strings is : %d",
           editDistance(s,d));
    return 0;
}
```

The complexity of finding the edit distance between two strings is O(N*M) in time, with O(N*M) extra space.

## Applications of edit distance algorithm

1. In spell-check programs, to find the closest word.
2. In computational biology, to align two gene sequences.
3. In speech recognition, machine translation, and information extraction.

# Coin change problem using dynamic programming

Given a number S and coins of values V = {V1, V2, V3, V4}, find the number of ways change can be made for S using these coins. We have an infinite supply of each coin. For example:
S = 4, V = {1,2,3}: there are four solutions: {1,1,1,1}, {1,1,2}, {2,2}, {1,3}.

Mathematically, we have to find the number of solutions to the following expression:
S = Σ (k = 1 to m) X(k) * V(k)

Here X(k) cannot be negative. We have solved a similar problem in the subset sum problem. Refer: Subset sum problem.

There, we had the limitation that every element could be considered only once (there was no infinite supply) when formulating the solution, so we reduced the size of the input every time we considered an element. In our case we don’t need to decrease the size of the input, since we have an infinite supply of coins. We decrease the coin count only when there is no possibility of that coin being included further in the solution.

The other difference is that the subset sum problem was a decision problem, where the answer is True or False; here we need a count. The rest remains the same.
So, with each coin there are two possibilities:

1. Either the coin is included in solution.
2. Or it is not included in solution.

If the coin is included in the solution, the problem reduces to making change for N-V(m) using the same m coins; note that we can use the same coin again in the reduced problem.

If the coin is not included in the solution, it is excluded from all further consideration, so we just decrease the coin set by 1 and keep the required sum as N.

Put together, this is a simple recursive formulation:

`C(N,m) = C(N,m-1) + C(N- V(m), m);`

What will be base condition then?
If, with any set of coins, we reach a point where the change required is 0, we have found one solution, i.e.

`C(N,m) = 1 if N ==0`

What if we have considered all coins and N is still greater than 0? Then the combination considered does not provide a solution. Hence,

`C(N,m) =0 if N>0 and m<0`

What if the sum required at some point is less than zero? Then we have included extra coins and that combination is not a solution. Hence,

`C(N,m) = 0 if N<0`

## Coin change problem implementation

```
#include <stdio.h>

int coins(int values[], int N, int m){
    if(N == 0) return 1;
    if(N < 0) return 0;
    if(N > 0 && m < 0) return 0;

    return coins(values, N, m-1) + coins(values, N-values[m], m);
}

//Driver program
int main(){
    int N = 10;
    int values[] = {2,3,5,6};
    printf("\n%d", coins(values, N, 3));
    return 0;
}
```

We can see that we calculate the same subproblems again and again; this can be avoided with simple memoization.

Let’s say Coins(i, j) represents the number of ways in which change for i can be made using the first j coins. If the jth coin is included, the number of ways is Coins(i - v[j], j): the supply is infinite, so the same coin remains available. If the jth coin is not included, the number of ways is Coins(i, j-1).
Adding both of them gives us

`Coins(i,j) = Coins(i-v[j], j) + Coins(i, j-1)`
```
#include <stdio.h>

int coinsChangeProblem(int values[], int N, int m){
    int table[N+1][m+1];
    int include, exclude;

    for(int i=0; i<=m; i++){
        table[0][i] = 1;
    }
    for(int i=1; i<=N; i++){
        for(int j=0; j<=m; j++){
            include = (i-values[j] >= 0) ? table[i-values[j]][j] : 0;
            exclude = (j >= 1) ? table[i][j-1] : 0;
            table[i][j] = include + exclude;
        }
    }
    return table[N][m];
}
//Driver program
int main(){
    int N = 10;
    int values[] = {2,3,5,6};
    printf("\n%d", coinsChangeProblem(values, N, 3));
    return 0;
}
```

The complexity of the recursive solution to the coin change problem is exponential, while that of the dynamic programming approach is O(N*m), where m is the number of coin denominations.

# Longest arithmetic progression

An arithmetic progression (AP) is a sequence of numbers in which the difference between any two consecutive numbers is constant. For example, 1, 4, 7, 10, 13 form an arithmetic progression, with each number differing from the previous by 3.
Coming to our problem for today: given an array of integers, find the length of the longest arithmetic progression.

The solutions below assume that the given array is already sorted. If the input array is not sorted, sort it before solving for the longest AP; this step costs an additional O(n log n).

A brute force solution uses hashing: consider each possible pair of elements (u,v) in the array and add that pair to the hash table entry corresponding to the difference between u and v. There can be n*(n-1)/2 such pairs, so the complexity becomes O(n²), with extra space for the hash proportional to the maximum difference between any two elements.

If we are short on space, another brute force method is to take a pair and check the remaining elements for an AP. As mentioned above, there are O(n²) pairs, and checking the other elements for an AP requires an additional O(n) operations, so the complexity of this method is O(n³).

Can we do better than this? Remember, any two numbers always form an AP. How do we check whether three numbers form an AP?

```
A[i], A[j] and A[k] form an AP if 2*A[j] = A[i] + A[k], where i < j < k.
```

In a given (sorted) array, for each index j check whether there are indices i and k satisfying the above condition; if so, A[i], A[j] and A[k] form an arithmetic progression.

How can we apply this insight to find the length of the longest arithmetic progression in the array? As mentioned above, two numbers always form an arithmetic progression, so any element will always form an AP of length 2 with the last element of the array.

We formulate a table where Table[i][j] is the length of the longest AP whose first and second elements are A[i] and A[j]. When j is the last index (N-1), the value is set to 2, because every element forms an AP of length 2 with the last element. Move j down from N-2 and fill Table[i][j] bottom-up: search for i < j and k > j such that A[i], A[j] and A[k] form an AP. Then,

`Table[i][j] = Table[j][k] + 1`

Since we fill the table bottom-up and k > j, Table[j][k] has already been calculated when we calculate Table[i][j].

## Longest arithmetic progression implementation

```
#include<stdlib.h>
#include<stdio.h>

#define max(a,b) (((a) > (b)) ? (a) : (b))

int longestArithmeticProgression(int a[], int n){
    int i, j, k;
    int Table[n][n];
    int longestAP = 2;

    // Every element forms an AP of length 2 with the last element
    for(i = 0; i < n; i++)
        Table[i][n-1] = 2;

    for(j = n-2; j >= 1; j--){
        i = j-1;
        k = j+1;

        while(i >= 0 && k < n){
            if(2*a[j] > a[i] + a[k]){
                // Sum too small; move k right to increase it
                k++;
            }
            else if(2*a[j] < a[i] + a[k]){
                /* Table[i][j] needs to be filled before we move up */
                Table[i][j] = 2;
                i--;
            }
            else{
                Table[i][j] = Table[j][k] + 1;
                longestAP = max(longestAP, Table[i][j]);
                i--;
                k++;
            }
        }
        while(i >= 0){
            Table[i][j] = 2;
            i--;
        }
    }
    return longestAP;
}

int main(){
    int array[] = {1,7,10,13,16,19};
    int n = sizeof(array)/sizeof(array[0]);
    printf("Length of longest arithmetic progression is : %d",
           longestArithmeticProgression(array,n));
    return 0;
}
```

The complexity of the above code is O(n²) in time, with O(n²) extra space.


# Counting paths on grid

Given a maze as shown in the figure below, find the count of all paths from the leftmost top cell to the rightmost bottom cell. You can move right, down and diagonally, but not left. This problem is called “counting paths on grid”. We will solve it using both recursion and dynamic programming.

To understand basic principles of dynamic programming, please refer: Dynamic Programming basics

The best thing about maze/grid problems is that they reduce to a smaller problem as soon as we make one move from the current position. In this problem, after making one move, the problem reduces to how many paths are possible from the new cell to the destination cell; summing these counts over all possible first moves gives the solution to the original problem.

In the counting-paths problem we can move in three directions. Moving in one direction (say right) from a cell does not mean that the paths through the other directions (down and diagonal) should not be counted. Hence, for each cell, count the paths possible if the move is made right, the paths possible if the move is made down, and the paths possible if the move is made diagonally, and add them up.

## Counting paths on grid : Recursive approach

When the solution to a problem depends on solutions to smaller problems, that usually hints at recursion. As the counting-paths problem reduces to a smaller problem with each move, recursion is a natural choice. For recursion to succeed, we must find the base case, i.e., the terminating condition. Here the base case is reaching the destination cell (the rightmost bottom cell). With i and j the current row and column of the grid, the base case is

`if(i == m && j == n) return 1`

The recursive formulation for counting paths on the grid is

```PossiblePaths(i,j,m,n) = PossiblePaths(i+1,j, m,n) // Move down
+ PossiblePaths(i, j+1, m,n) // Move right
+ PossiblePaths(i+1, j+1,m,n); // Move diagonally
```

### Counting paths on grid : Recursive implementation

```
#include <stdio.h>

int PossiblePaths(int i, int j, int m, int n){
    if(i > m || j > n) return 0;

    if(i == m && j == n) return 1;

    return PossiblePaths(i+1, j, m, n)
         + PossiblePaths(i, j+1, m, n)
         + PossiblePaths(i+1, j+1, m, n);
}

int main(void) {
    int m = 4;
    int n = 4;
    printf("\n Number of paths in maze : %d", PossiblePaths(0,0,m,n));
    return 0;
}
```

## Counting paths on grid : dynamic programming

Let’s see what happens during execution of the recursive implementation for a 3×3 maze, i.e., m and n equal to 3, starting with i and j equal to 0.

From the execution tree of the recursive calls, it is evident that some subproblems are calculated multiple times, and the number of such subproblems grows as the maze gets bigger. We know there are two basic conditions a problem must satisfy before dynamic programming can be applied:

1. There should be optimal substructure, which reduces the original problem to smaller subproblems.
2. There should be overlapping subproblems, which calls for tabulating the results of subproblems for reuse in solving bigger ones.

The counting-paths problem meets both conditions.
To store results of subproblems, create a two-dimensional table with the same dimensions as the maze. Table[i][j] stores the number of paths possible to reach cell (i,j). The answer will be Table[m][n].

Cell (i,j) can be reached either from Table(i-1,j) (moving down), from Table(i,j-1) (moving right), or from Table(i-1,j-1) (moving diagonally).

Table[i][j] is calculated as:

```Table(i,j) = Table(i-1,j) + Table(i,j-1)+ Table(i-1,j-1)
Table[i][0] = Table[0][j] = 1
```

### Counting paths on grid : dynamic programming implementation

```
#include<stdlib.h>
#include<stdio.h>

int PossiblePaths(int m, int n){
    int Table[m+1][n+1];
    int i, j;

    for(i=0; i<=m; i++){
        Table[i][0] = 1;
    }
    for(i=0; i<=n; i++){
        Table[0][i] = 1;
    }
    for(i=1; i<=m; i++){
        for(j=1; j<=n; j++){
            Table[i][j] = Table[i-1][j]
                        + Table[i][j-1]
                        + Table[i-1][j-1];
        }
    }
    return Table[m][n];
}

int main(){
    printf("%d", PossiblePaths(4,4));
    return 0;
}
```

The problem with the above implementation is that it uses m*n space. This can be optimized by storing only one row at a time instead of the entire table: from the equation for Table[i][j], we can see that it only depends on the previous row. A space-optimized version (thanks to Jakube for suggesting it):

```
#include<stdlib.h>
#include<stdio.h>

int PossiblePaths(int m, int n){
    int Table[n+1];

    for(int i=0; i<=n; i++){
        Table[i] = 1;
    }
    for(int i=1; i<=m; i++){
        // diagonalSum holds the previous row's value of the cell to the left
        int diagonalSum = 1;
        for(int j=1; j<=n; j++){
            int temp = Table[j];
            Table[j] = Table[j] + Table[j-1] + diagonalSum;
            diagonalSum = temp;
        }
    }
    return Table[n];
}

int main(){
    printf("%d", PossiblePaths(4,4));
    return 0;
}
```

Counting all paths in a maze with the full-table dynamic programming approach takes O(n²) extra memory (O(n) with the row-only optimization) but reduces the exponential time complexity to O(n²).


# Balanced partition problem

The balanced partition problem is the problem of dividing a given set of items into two parts of equal (or nearly equal) value. The value can be anything: plain numbers, or some abstract notion attached to the items. Our problem for today: divide an array of integers into two balanced partitions. By a balanced partitioning of an array, we mean dividing the array into two subsets (subsets differ from subarrays: a subset may contain non-contiguous elements of the array, whereas a subarray contains contiguous elements) such that the difference between the sums of the two subsets is minimal; the best case is when the two sums are equal. This is an NP-hard problem if there is no limit on the total sum of the array, but it can be solved in O(n*N) time if the total sum is bounded by N.

For example, in the following array the difference between the two subsets is 1:
int c[] = {1,7,4,11}; the most balanced partition of c is {1,11} and {7,4}.
Sometimes the problem is asked in a different way: for example, there are 22 players, each with a value he brings to the team. Divide them into two teams of eleven each so that the difference between the overall values of the teams is minimal.

## Balanced partition problem : Algorithm analysis

The brute force method to divide the array into two balanced partitions is to list all the subsets of the given array and select the pair whose difference is minimal. This has exponential complexity, since an array of size n has 2^n subsets, and cannot be used for even moderately sized arrays.

Before tackling the generic problem, let me tweak it a bit: what if we need to find out whether the array can be split into two subsets whose sums differ by zero? This is a special case of the problem above.

How should we solve this specific case? If the difference between the two sets is zero, the sum of each set must be exactly half the sum of all elements in the array. Why? Because if either subset summed to more or less than half, the difference could not be zero.

Now we have a specific version of the problem: is there a subset of the integers that adds up to half the sum of all elements in the array? This is the subset sum problem, which we have already solved here: Subset sum problem.

There is a simpler way to check whether some subset of the array adds up to half the total. Take a table T of size N+1, where N is the total sum of all elements of the array. T[x] is true only if some subset of the array adds up to x. Once all entries are filled, just check whether T[N/2] is true.
How do we build this table? Start with T[0], which is true, since the empty set always sums to zero. Then T[C[i]] is set to true, because we can obtain the sum C[i] by taking the element C[i] alone (C is the given array).

Now, when T[j] is true, we have some subset adding up to j. In that case, while processing C[i], we also set T[j + C[i]] to true, because adding C[i] to that subset gives a new subset with sum j + C[i].

The code below can be optimized further by not computing the entries T[N-x] for sums x with T[x] true, since the complementary subset adds up to N-x anyway.

```
#include <stdbool.h>

bool T[10240];
int partition(int C[], int size){
    //compute the total sum
    int N = 0;
    for(int i=0; i<size; i++) N += C[i];

    //initialize the table
    T[0] = true;
    for(int i=1; i<=N; i++) T[i] = false;

    //process the numbers one by one
    for(int i=0; i<size; i++){
        for(int j=N-C[i]; j>=0; j--){
            if(T[j]) T[j+C[i]] = true;
        }
    }
    return T[N/2];
}
```

The problem becomes a bit trickier when we want not just two equal-sum subsets but two subsets with minimal difference. The information computed above, namely which sums can be generated by subsets of the array, is exactly what helps.

The sum of all the integers is N. Halve it and find an achievable subset sum as close as possible to N/2; the complementary subset is then as close as possible to N/2 from the other side, which gives the minimum possible difference between the two subsets. 🙂 The expression to minimize is |N - 2*s| over all achievable subset sums s.

## Balanced partition problem : Implementation

```
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <limits.h>

int balancePartition(int set[], int n)
{
    /* subset[i][j] will be true if there is a subset
       of set[0..j-1] with sum equal to i */
    int i, j;
    int sum = 0;

    for(i = 0; i < n; i++){
        sum += set[i];
    }

    bool subset[sum+1][n+1];
    // If sum is 0, then answer is true
    for (i = 0; i <= n; i++)
        subset[0][i] = true;

    // If sum is not 0 and set is empty, then answer is false
    for (i = 1; i <= sum; i++)
        subset[i][0] = false;

    // Fill the subset table in bottom-up manner
    for (i = 1; i <= sum; i++)
    {
        for ( j = 1; j <= n; j++)
        {
            subset[i][j] = subset[i][j-1];
            if (i >= set[j-1]){
                subset[i][j] = subset[i][j] || subset[i-set[j-1]][j-1];
            }
        }
    }

    int min = INT_MAX;

    for(i=1; i<=sum; i++){
        for(j=1; j<=n; j++){
            /* If there is a subset with sum i, check whether the
               difference between the overall sum and twice this sum
               is the least seen so far; if yes, update min */
            if(subset[i][j] == true){
                if(abs(sum - 2*i) < min){
                    min = abs(sum - 2*i);
                }
            }
        }
    }

    printf("\n Difference between two sub sets will be : %d\n", min);
    return min;
}
int main(){
    int a[] = {1,7,4,11};
    int n = sizeof(a)/sizeof(a[0]);
    balancePartition(a, n);
    return 0;
}
```

The complexity of splitting an array into two balanced partitions is O(n*N) in time, with O(n*N) space, where N is the total sum of the array.

# Minimum number of jumps to reach end of array

Given an array of integers where a[i] is the maximum number of positions you can jump forward from index i, find the minimum number of jumps needed to reach the end of the array. For example, in the following array, the minimum number of jumps is 2.

Starting at the 2, we can jump 0, 1 or 2 indices at a time. If we jump 2 indices, we would require two more jumps (over the two 1s) to reach the 4.
However, if we jump only one index, the next jump takes us to the end of the array.

## Minimum number of jumps to reach end

In finding the minimum number of jumps to reach the end of the array, the quantity to minimize is the number of jumps. How can we do that?

A greedy way to solve this is to jump as profitably as possible from each index: among the positions reachable from the current index, select the one that allows the maximum jump ahead. This is just a greedy algorithm at play.
Mathematically, for index i, scan through all indices e from i+1 to i + a[i] and calculate the value v = e + a[e]; take the e with maximum v. Let’s work out an example:

In the above array, for i = 0: e = 1 gives v = 1 (e) + 3 (a[e]) = 4;
e = 2 gives v = 2 + 1 = 3.
Select the e with maximum v, which is e = 1. Once e is selected, i becomes that e: i = 1, a[i] = 3.

Since i + a[i] reaches the end, nothing else needs checking and we stop. The minimum number of jumps required to reach the end of the array is 2.

```
#include<stdio.h>
#define INFINITY 9999

int minimumJumps(int a[], int size){
    if(size <= 1) return 0;

    int count = 0;
    for(int i = 0; i < size-1; ){
        count++;
        // If the end is reachable in one jump from i, we are done
        if(i + a[i] >= size-1) break;

        // Otherwise pick the reachable index with the farthest reach
        int max = 0, next = i;
        for(int j = 1; j <= a[i] && i+j < size; j++){
            if(max < j + a[i+j]){
                max = j + a[i+j];
                next = i + j;
            }
        }
        if(next == i) return INFINITY; // stuck, end not reachable
        i = next;
    }
    return count;
}
//driver program
int main(){
    int a[] = {1,1,2,3,1,4};
    int size = sizeof(a)/sizeof(a[0]);
    printf("Minimum Jump : %d", minimumJumps(a, size));
    return 0;
}
```

What will be the complexity of this method to find the minimum number of jumps? Each jump may scan up to O(N) candidate indices, so in the worst case it is O(N²).

## Minimum number of jumps to reach end : Dynamic programming

Can we solve this using dynamic programming? What’s needed is to find the minimum jumps to reach the last index. We can reduce this problem like,

`If we know the minimum number of jumps to reach each of the indices 0 to N-1, and the last index can be reached from some of them, then the jumps for the last index would be one more than the minimum among those.`

Declare a jumps array which will store the minimum number of jumps needed to reach each index of the array: jumps[i] stores the minimum jumps to reach index i. jumps[0] = 0, as nothing is required to reach index 0; jumps[N-1] is what we are looking for.

If we cannot jump from index 0 to any other index (a[0] == 0), we just return infinity as the result. If that is not the case, proceed further and initialize the number of jumps for each index except 0 as INFINITY.

Start filling in the minimum jumps for each index.

At index i, for each j from 1 to a[i], check if jumps[i] + 1 is less than jumps[i+j]. If jumps[i+j] holds a value greater than jumps[i] + 1, change jumps[i+j] to jumps[i] + 1. Why? Because index i+j can now be reached from i with just one more jump. When i+j goes past the last index, the end of the array is reached, and jumps[i] + 1 is the solution.

## Minimum number of jumps to reach end : implementation

```#include<stdio.h>

#define INFINITY 9999

int minimumNumberOfJumps(int a[], int size){
    int jumps[size];
    jumps[0] = 0;
    for(int i=1; i<size; i++){
        jumps[i] = INFINITY;
    }
    for(int i=0; i<size; i++){
        /* skip indices which are not reachable at all */
        if(jumps[i] == INFINITY) continue;
        for(int j=1; j<=a[i]; j++){
            if(i+j < size){
                if(jumps[i+j] > jumps[i]+1){
                    jumps[i+j] = jumps[i]+1;
                }
            }
            else{
                return jumps[i]+1;
            }
        }
    }
    return jumps[size-1];
}
//driver program
int main(){
    int a[] = {1,5,5,3,1,4};
    int size = sizeof(a)/sizeof(a[0]);
    printf("Minimum Jump : %d", minimumNumberOfJumps(a, size));
    return 0;
}
```

Let’s work out an example: [1,1,2,10,1,1,1,1,1,1,4]

The jumps array will be of size 11, with jumps[0] = 0. Jumps to all other indices are initialized to INFINITY.

For i = 0, j = 1:
jumps[i+j] = jumps[0+1] = jumps[1], which is INFINITY as of now. Hence jumps[1] is filled with jumps[i] + 1, which is jumps[0] + 1 = 1.

For i = 1, j = 1:
jumps[i+j] = jumps[1+1] = jumps[2], which is INFINITY as of now. Hence jumps[2] is filled with jumps[1] + 1 = 2.

For i = 2, j runs over 1, 2.
j = 1: jumps[i+j] = jumps[2+1] = jumps[3], which is INFINITY as of now. Hence jumps[3] is filled with jumps[2] + 1 = 3.

j = 2: jumps[i+j] = jumps[2+2] = jumps[4], which is INFINITY as of now. Hence jumps[4] is filled with jumps[2] + 1 = 3.

Now i = 3 and a[3] = 10, so j runs over 1 to 10. As soon as i + j goes past the last index, the solution is jumps[3] + 1 = 4 and the program exits.

The complexity of this algorithm to find the minimum jumps to reach the end of an array using dynamic programming is O(min(K, N) * N), along with space complexity of O(N), where K is the maximum jump length.

# Subset sum problem

Given a set or an array of integers, find if there is a subset with a given sum K. This is known as the subset sum problem. For example, if array A = [2,3,5,6,7,8,9] and K = 15, subsets which have K as their sum are [3,5,7], [7,8], [6,9] and [2,5,8]. The answer to the problem will be True.

Brute force solution is to generate all subsets of the array and check each one of them to see if its members add up to the given sum. There can be 2^n subsets of a set with n elements, and hence the complexity of this solution is exponential for obvious reasons.

## Understanding subset sum problem

First thing we must understand is that in order to apply dynamic programming to any problem, it should satisfy two basic conditions. First, it can be subdivided into smaller subproblems, and solutions to those subproblems lead to the solution of the original problem. Second, the subproblems should be overlapping, so that optimization can be done using memoization.

Here, the ask is to find a subset of the set whose sum is exactly S. This condition is similar to the one in the knapsack problem: there we had a limited capacity and could take anything which did not exceed the limit. However, there is a difference: the knapsack problem allows a total weight less than the capacity if that gives greater value than all other combinations, whereas in the subset sum problem we discard any subset whose sum is less or more than S.

What was the strategy to solve the knapsack problem? Yes: take each available item and check if it fits the constraint in the current context. If it does, add the item to the solution and reduce the problem to a subproblem with N-1 items and a reduced knapsack capacity (C - w), where w is the weight of the included item. If the item does not satisfy the constraint, ignore it and reduce the problem to N-1 items with capacity C. The same approach can be used to solve the subset sum problem too.

To apply DP to the problem, let’s first come up with a recursive solution. Consider the last element of the set: what all possible scenarios can happen? Either the element is added to the subset or it is not. Right?

What if the element is included in the subset? Then the problem reduces to n-1 elements, with the sum to be found as S minus the value of that element.

What if the element is not included? That is the case when including it would make the subset sum overshoot S, or when a subset without it already adds up to S. Either way, the problem reduces to n-1 elements, with the required sum still being S.

## Recurrence relation for subset sum problem

```isSubsetSum(A, n, S) = isSubsetSum(A, n-1, S - A[n-1])
                          if item A[n-1] is included in the subset
                        = isSubsetSum(A, n-1, S)
                          if item A[n-1] is not included in the subset```

What would be the base cases for the recursive function? At any given point of time, if the required sum is zero, a subset is found; return true. Else, if all elements of the array have been considered and the required sum is still non-zero, there is no subset with the given sum; hence, return false.

```If S == 0  return true
If n == 0 && S != 0 return false
```

With some book keeping we can also print all subsets with given sum.

## Subset sum problem implementation

```#include<stdlib.h>
#include<stdio.h>

#define true 1
#define false 0
int isSubsetSum(int arr[], int n, int sum){
    if(sum == 0){
        return true;
    }
    if(n == 0) return false;

    /* last element bigger than the sum cannot be included */
    if(arr[n-1] > sum)
        return isSubsetSum(arr, n-1, sum);

    /* either include arr[n-1] or exclude it */
    return isSubsetSum(arr, n-1, sum - arr[n-1])
        || isSubsetSum(arr, n-1, sum);
}

/* Driver program */
int main(){
    int set[] = {1,3,5,4,6};
    int n = sizeof(set)/sizeof(set[0]);

    printf("\n Is there subset with given sum  : %s",
            isSubsetSum(set, n, 10) ? "Yes" : "No");
    return 0;
}
```

The complexity of solving the subset sum problem using recursion is O(2^n): in the worst case, all subsets of the set are considered.

## Subset sum problem using dynamic programming

Look at the computation tree of the recursive function calls; it is evident that subproblems are recalculated. Can this information be stored somehow, so that it is not recalculated? The answer is yes.

Let’s create a two dimensional table of size (S+1) * (N+1) called Subset. Subset[i][j] is true if there is a subset of A[0..j-1] with sum i; otherwise Subset[i][j] is false. The final goal is to find the value of Subset[S][N], which will be true if there is a subset of the array with sum equal to S. To calculate Subset[S][N], we have to fill the table.

Subset[i][j] is true iff one of the following two conditions is true:

```1. Subset[i][j-1] is true. It means that there is a subset with sum i in A[0..j-2]; obviously, the same subset also exists in A[0..j-1].

2. Subset[i-A[j-1]][j-1] is true. It means that there is a subset with sum (i - A[j-1]) in A[0..j-2]. If we now select A[j-1], a subset with sum i is obtained. Therefore Subset[i][j] is true.

Also, set Subset[0][j] = true for 0 <= j <= N: it is possible to obtain sum 0 for any j with the empty subset.
```

Implementation of dynamic programming algorithm for subset sum problem

```#include<stdlib.h>
#include<stdio.h>

#define true 1
#define false 0

int isSubsetSum(int arr[], int n, int sum)
{
    /* subset[i][j] is true if there is a subset of arr[0..j-1]
       with sum equal to i */
    int subset[sum+1][n+1];
    int i, j;

    /* sum 0 is achieved by the empty set,
       hence the row with sum = 0 is all true */
    for (i = 0; i <= n; i++)
        subset[0][i] = true;

    /* if sum is not 0 and the set is empty, there is no subset */
    for (i = 1; i <= sum; i++)
        subset[i][0] = false;

    for (i = 1; i <= sum; i++){
        for (j = 1; j <= n; j++){
            /* sum i may already be possible without arr[j-1] */
            subset[i][j] = subset[i][j-1];

            /* otherwise, check if arr[j-1] can be added */
            if (i >= arr[j-1])
                subset[i][j] = subset[i][j] || subset[i-arr[j-1]][j-1];
        }
    }
    return subset[sum][n];
}

/* Driver program */
int main(){
    int set[] = {1,3,5,4,6};
    int n = sizeof(set)/sizeof(set[0]);

    printf("Is there a subset with given sum : %s",
            isSubsetSum(set, n, 10) ? "Yes" : "No");
    return 0;
}
```

Dynamic programming algorithm has time complexity O(N*S) and space complexity O(N*S) where N is size of set and S is sum required.

Please share if you find something wrong or missing. Also, if you want to contribute to website, please refer Publishing and contact us. We would love to publish your article and at the same time, will pay you too.

# Longest common subsequence problem

Given two strings X and Y, find the longest common subsequence of the two strings. A subsequence of a string is a set of characters of the string taken in left to right order, not necessarily contiguous. For example: the string ABCDEG has ABC, ADG, EG and BCDEG as subsequences, whereas BDA is not a subsequence of it. The longest common subsequence of two strings is the maximum number of common characters appearing in both strings in the same order of appearance. For example:
X = “ABCDSEFGD”
Y = “ACFEFXVGAB”
The longest common subsequence is ACEFG.

Brute force method to solve this is to find all subsequences of the two strings (exponential in number) and then find the longest common one. As evident, the complexity of this method is exorbitant.

## Longest common subsequence using dynamic programming

To come up with a dynamic programming solution, first we have to figure out the recursive nature of the problem, so that it can be divided into smaller subproblems and solved. The second condition for applying dynamic programming is to have overlapping subproblems, so that the recursive solution can be optimized by storing their results.

Given two strings A and B, we start from the first characters of each. If the first characters of A and B match, then this character is definitely part of the longest common subsequence of the two strings. We add it to our LCS and find the longest common subsequence of the remaining characters of A and B; the problem is reduced by one character in each string.
What if the first characters differ? Then there are three possibilities: either the longest subsequence starts with the first character of string A, or it starts with the first character of B, or it contains neither of them. We have to find the longest subsequence for these cases and take the maximum. Actually, there are only two cases, as the third one is implicit in the other two.

Let LCS[i,j] be the length of the LCS of A[1..i] and B[1..j]. How can we solve for LCS[i,j] in terms of the LCS values of smaller problems?

Case 1 : A[i] == B[j]. The problem reduces to finding the longest common subsequence of A[1..i-1] and B[1..j-1], and adding one to it.

Case 2 : A[i] != B[j]. Exclude character A[i] or B[j] and find the LCS of the remaining strings. First, exclude A[i]: the problem reduces to finding the LCS of A[1..i-1] and B[1..j]. Then exclude B[j]: the problem reduces to A[1..i] and B[1..j-1]. Take the maximum of the two cases. So the recurrence comes up as

```
LCS(i, j)  =  1 + LCS(i-1, j-1)                   if A[i] == B[j]
           =  max( LCS(i-1, j), LCS(i, j-1) )     if A[i] != B[j]
```
```#include <stdio.h>

int max(int a, int b){
    return a > b ? a : b;
}

int longestCommonSubsequence(char *A, char *B){
    if (*A == '\0' || *B == '\0') return 0;

    if (*A == *B) {
        return 1 + longestCommonSubsequence(A+1, B+1);
    }
    return max(longestCommonSubsequence(A+1, B),
               longestCommonSubsequence(A, B+1));
}

int main(void) {
char *a = "ABCDSEFGD";
char *b = "ACFEFXVGAB";

printf("\n Longest common subsequence : %d",
longestCommonSubsequence(a,b));

return 0;
}
```

Python implementation by Prince Mishra

```# start looking from the end; if the current characters
# match, lcs = 1 + lcs of the remaining prefixes

def lcs_naive(A, B):
    # base case: if either string is empty,
    # there is no further checking
    if not A or not B:
        return 0

    # propagation
    if A[-1] == B[-1]:
        return 1 + lcs_naive(A[:-1], B[:-1])

    return max(lcs_naive(A[:-1], B), lcs_naive(A, B[:-1]))

A = '1234'
B = '5162'
print(lcs_naive(A, B))

```

The complexity of the recursive method to find the longest common subsequence is O(2^n).

Notice that there are subproblems which are solved multiple times. How do we know that? Each function call is made with a suffix of A and a suffix of B; there are only (m+1) * (n+1) such combinations, so if the running time is exponential, the function must be called with the same suffixes multiple times. To avoid solving those subproblems again and again, we can store their values.

This gives us a perfect case for the application of dynamic programming. We create a two dimensional table of size (M+1) X (N+1), where M and N are the lengths of the strings. Table[i][j] stores the length of the longest common subsequence of the prefixes A[0..i-1] and B[0..j-1].

```Table[i][j]  =  1 + Table[i-1][j-1]                   if A[i-1] == B[j-1]
             =  max( Table[i-1][j], Table[i][j-1] )    if A[i-1] != B[j-1]```
```#include <stdio.h>
#include <string.h>

int max(int a, int b){
    return a > b ? a : b;
}

int longestCommonSubsequence(char * A, char * B){
    int lenA = strlen(A);
    int lenB = strlen(B);

    int Table[lenA+1][lenB+1];

    for (int i = 0; i <= lenA; i++){
        Table[i][0] = 0;
    }
    for (int j = 0; j <= lenB; j++){
        Table[0][j] = 0;
    }

    for (int i = 1; i <= lenA; i++){
        for (int j = 1; j <= lenB; j++){
            /* table indices are one ahead of string indices */
            if (A[i-1] == B[j-1])
                Table[i][j] = 1 + Table[i-1][j-1];
            else
                Table[i][j] = max(Table[i-1][j], Table[i][j-1]);
        }
    }

    return Table[lenA][lenB];
}

int main(void) {
char *a = "ABCDSEFGD";
char *b = "ACFEFXVGAB";

printf("\n Longest common subsequence : %d",
longestCommonSubsequence(a,b));

return 0;
}
```

How do we find the actual sequence of characters? To find the sequence, walk backwards through the matrix, starting from Table[lenA][lenB]. Until i or j becomes zero, do the following steps:

```If either Table[i-1][j] or Table[i][j-1] equals Table[i][j], move to that cell.
If both Table[i-1][j] and Table[i][j-1] are less than Table[i][j], move to Table[i-1][j-1] and output the associated character.
Note that this outputs the characters of the LCS in reverse order.
```

There is another approach where the table is filled in from the end of the strings: Table[i][j] holds the LCS length of the suffixes A[i..] and B[j..], and we move from the last characters towards the first, so Table[0][0] gives us the answer.
The advantage of this method is that we do not need to initialize the matrix separately.

Finding the longest common subsequence using dynamic programming has time complexity O(m*n), with additional space complexity of O(m*n), where m and n are the lengths of the two strings.

There is one optimization which can be done in the above implementation to reduce the space complexity. Notice that to calculate Table[i][j], you need only three values: Table[i-1][j-1], Table[i-1][j] and Table[i][j-1]. Once row i is calculated, row i-1 has no further use. Can we use this insight to save space?

```#include <stdio.h>
#include <string.h>

int max(int a, int b){
return a>b ? a:b;
}
int longestCommonSubsequence(char * A, char * B){
int lenA = strlen(A);
int lenB = strlen(B);

int Table[lenA+1][lenB+1];

for (int i=lenA; i >= 0; i--){
for (int j=lenB; j >= 0; j--){
if (A[i] == '\0' || B[j] == '\0')
Table[i][j] = 0;
else if (A[i] == B[j])
Table[i][j] = 1 + Table[i+1][j+1];
else
Table[i][j] = max(Table[i+1][j], Table[i][j+1]);
}
}
return Table[0][0];
}

int main(void) {
char *a = "ABCDSEFGD";
char *b = "ACFEFXVGAB";

printf("\n Longest common subsequence : %d",
longestCommonSubsequence(a,b));

return 0;
}

```

With the two-row optimization described above, the time complexity stays O(m*n) while the space complexity drops to O(min(m,n)), provided the shorter string indexes the columns.

## Applications of longest common subsequence problems

Molecular biology. DNA sequences (genes) can be represented as sequences of the four letters A, C, G and T, corresponding to the four sub-molecules forming DNA. When biologists find a new sequence, they typically want to know what other sequences it is most similar to. One way of computing how similar two sequences are is to find the length of their longest common subsequence.

File comparison. The Unix program “diff” is used to compare two different versions of the same file, to determine what changes have been made to the file.

Screen redisplay. Many text editors display part of a file on the screen, updating the screen image as the file is changed to save network bandwidth. It is possible to view the computation of the minimum length sequence of characters needed to update the terminal as being a sort of common subsequence problem (the common subsequence tells you the parts of the display that are already correct and don’t need to be changed).


# Matrix chain multiplication

What is matrix chain multiplication?
Matrix chain multiplication is an optimization problem. Given a sequence of matrices, the goal is to find the most efficient way to multiply these matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence of the matrix multiplications involved, so that number of operations done are minimum.

Formally, given N matrices with their dimensions, find an optimal order of matrix multiplications so that the minimum number of scalar operations is required to get the result. For example, let there be three matrices A1, A2 and A3, where A1 is of dimension 10 × 100, A2 of dimension 100 × 5, and A3 of dimension 5 × 50. Then,

Multiplication Cost[((A1 A2) A3)] = (10 . 100 . 5) + (10 . 5 . 50) = 7,500 scalar multiplications.
Multiplication Cost[(A1 (A2 A3))] = (100 . 5 . 50) + (10 . 100 . 50) = 75,000 scalar multiplications.

Matrix chain multiplication is a typical problem used to explain dynamic programming. The approach learnt here can be easily applied to many other similar problems, like the longest palindromic substring problem, the boolean parenthesization problem, etc.

## Matrix multiplication basics

There are two basic properties of a matrix: the number of rows and the number of columns; these are its dimensions. In our problem, the dimensions of the matrices are given in an array P, where P[i-1] and P[i] denote the number of rows and columns respectively of the ith matrix.

```1. Matrix multiplication is associative i.e. A*(B*C) = (A*B)*C
2. It is not commutative i.e. A*B is not necessarily equal to B*A
3. To multiply two matrices, they should be compatible, i.e. the number of columns in the first matrix should be equal to the number of rows of the second matrix.
No. of columns of first matrix = No. of rows of second matrix
```

## Matrix chain multiplication using dynamic programming

Look at matrix chain multiplication problem as : find an optimal way to put parenthesis around matrices, so that total number of scalar multiplication operations to calculate result are minimum.

Brute force method is to find out all possible combinations of parentheses arrangements and check for the one with the minimum number of operations. Implemented recursively: once a split is fixed, the problem reduces to parenthesizing the two smaller chains. However, the run time complexity of this implementation is exponential and hence it cannot be used for large inputs; the exhaustive search over all parenthesizations is O(4^n / n^(3/2)).

### Defining Optimal substructure

Let’s try to decompose the problem into smaller subproblems. Finding the optimal multiplication order for the chain A[i..j] means finding it for the matrices Ai, Ai+1, ..., Aj; the original problem is to find the optimal solution for A[1..n].
In any multiplication sequence, at the very end, we multiply two matrices to get the result of A[i..j]; these two matrices are A[i..k] and A[k+1..j] for some k with i <= k < j.

What should be the value of k then? We check all values of k from i to j-1; k is some intermediate split point between matrices i and j. The idea is to find the k for which Cost(i, j) becomes minimum.

If we find a k such that A[i..j] has an optimal solution, what is the guarantee that A[i..k] and A[k+1..j] are also optimal? Let’s prove it by contradiction.
If A[i..k] were not optimal, we could replace it by a better parenthesization and get a cheaper final solution, leading to a contradiction.
Similarly, if A[k+1..j] were not optimal, we could replace it by a better parenthesization and get a cheaper final solution, also leading to a contradiction. Hence the optimal substructure property holds.

### Defining recursive nature

We don’t know the value of k in advance, but we can try each candidate k, recursively calculate the cost of A[i..j] for it, and return the minimum.

```Cost(i, j) = 0    if i == j
Cost(i, j) = min over i<=k<j { Cost(i, k) + Cost(k+1, j) + P[i-1] * P[k] * P[j] }    if i < j```

Let’s see recursive implementation of matrix chain multiplication.

```#include<stdio.h>
#include<stdlib.h>

#define INT_MAX  99999999

int MatrixChainMultiplication(int p[], int i, int j) {
    printf("\n (%d, %d)", i, j);  /* trace the recursion tree */
    if (i == j)
        return 0;  /* base case: a single matrix needs no multiplication */

    int min = INT_MAX;
    int count;

    for (int k = i; k < j; k++) {
        count = MatrixChainMultiplication(p, i, k) +
                MatrixChainMultiplication(p, k+1, j) +
                p[i-1]*p[k]*p[j];

        if (count < min)
            min = count;
    }
    return min;
}

// Driver program to test above function
int main()
{
int dimensions[] = {1, 2, 3, 4};
int n = sizeof(dimensions)/sizeof(dimensions[0]);

printf("Minimum number of multiplications is %d ",
MatrixChainMultiplication(dimensions, 1, n-1));
return 0;
}
```

From the recursion tree, it is evident that there are many subproblems which are being calculated multiple times.

That is where dynamic programming has a chance. To apply dynamic programming, two requirements need to be met. First, the problem should be reducible to smaller subproblems. Second, the solutions of those subproblems should overlap.

For A[i..j], let M[i,j] denote the minimum number of multiplications needed to compute
A[i..j]. The optimum cost can be described by the following recursive definition.

`M[i,j] = M[i,k] + M[k+1,j] + P[i-1] * P[k] * P[j]`

Since we don’t know k, this equation reduces to,

`M[i,j] = min over i<=k<j { M[i,k] + M[k+1,j] + P[i-1] * P[k] * P[j] }`

As we saw, the cost of multiplying a single matrix is 0; nothing is to be done. Hence M[i,i] = 0.
We check all chain lengths L from 2 to N, starting from matrix 1 to N. M[1, N] gives us the final cost. For each starting index i, the ending index j is calculated as i + L - 1.

If L = 2, the chain considered is of length 2, hence for i = 1, j = i + L - 1 = 1 + 2 - 1 = 2. i starts at 1 and the loop runs to i = N - L + 1, considering two matrices at a time. Similarly, when L = 3, the chain is of size 3; i again varies from 1, and j varies according to the expression i + L - 1.

For each length L from 2 to N, find the minimum Cost(i, j) for each i, with j = i + L - 1. To find the minimum Cost(i, j), find a k such that the expression

`Cost (Ai, Aj) = Cost(Ai,Ak) + Cost(Ak+1,Aj )+(P[i-1] * P[k] * P[j])`

becomes minimum. Since i <= k < j, Cost(i, k) and Cost(k+1, j) have already been calculated.

This is bottom-up filling of the table, where M[1, 2] is filled first, going up until M[1, N] is filled.

## Matrix chain multiplication implementation in C

```#include<stdlib.h>
#include<stdio.h>

#define MAX_INT 10000000

int matrixMultiplication(int p[], int N){
    int L, i, j, temp;

    int M[N][N];

    for(i = 0; i < N; i++){
        for(j = 0; j < N; j++){
            M[i][j] = 0;
        }
    }

    for(L = 2; L < N; L++){
        /* For every position i, check the chain of length L starting there */
        for(i = 1; i < N-L+1; i++){
            j = i+L-1;
            M[i][j] = MAX_INT;
            /* For matrices i to j, check every split k */
            for(int k = i; k < j; k++){
                temp = M[i][k] + M[k+1][j] + p[i-1] * p[k] * p[j];
                /* Keep the minimum cost */
                if(temp < M[i][j]){
                    M[i][j] = temp;
                }
            }
        }
    }
    return M[1][N-1];
}

/* Driver program to run above code */
int main(){

int p [] ={10, 20, 30, 40, 30};
int n = sizeof(p)/sizeof(p[0]);

printf("%d\n", matrixMultiplication(p,n));

return 0;
}
```

Time complexity of matrix chain multiplication using dynamic programming is O(n3). Also, space complexity is O(n2).

Can we recover the actual sequence of matrix multiplications? Yes, with a bit of extra bookkeeping: create another store S[N][N] which records the best k for each A[i..j]. To find the sequence, go through S recursively, backwards, starting from S[1, N-1]. I leave the rest as an exercise; if you code it, please share it in the comments.

