Sunday, September 23, 2012
Maximum Sum Subsequence in an unsorted array of size n
We are given an array A having n integers. What we want to find is a contiguous subsequence (A[i], A[i+1], …, A[j]) such that the sum of the elements in that subsequence is maximized. (Note that, in general, there may be more than one such optimal subsequence.)
Note that if all elements in A are non-negative, we could directly return the entire array as the required optimal subsequence.
APPROACH 1
A simple brute-force method could be to compute the sums of all possible contiguous subsequences in A. How many subsequences are there? There are n(n+1)/2 = O(n^2) possible subsequences. Computing the sum of a subsequence takes O(n) in the worst case. Hence, the running time of the algorithm would be O(n^3).
In C++, we could write the following function to do what was explained above:
// start and end are the starting and ending indices of an optimal subsequence.
void f(int* A, int n, int& start, int& end)
{
    int sum, max = A[0];
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++)
        {
            sum = 0;
            for (int k = i; k <= j; k++)
                sum += A[k];
            if (sum >= max)
            {
                start = i;
                end = j;
                max = sum;
            }
        }
}
————
APPROACH 2:
We can improve the running time to O(n^2) by being a bit more clever in computing the sums of different subsequences. We observe that the sum of the subsequence A[i ... j+1] = A[j+1] + sum of the subsequence A[i ... j].
In C++, we could write as follows:
void f(int* A, int n, int& start, int& end)
{
    int sum, max = A[0];
    for (int i = 0; i < n; i++)
    {
        sum = 0;
        for (int j = i; j < n; j++)
        {
            sum += A[j];
            if (sum >= max)
            {
                start = i;
                end = j;
                max = sum;
            }
        }
    }
}
———–
APPROACH 3:
Using dynamic programming, we can solve the problem in linear time.
We consider a linear number of subproblems, each of which can be solved using previously solved subproblems in constant time, thus giving a running time of O(n).
Let S[i] denote the sum of a maximum sum contiguous subsequence ending exactly at index i.
Thus, we have that:
S[i] = max { S[i-1] + A[i], A[i] } (for all 1 <= i <= n-1)
Also, S[0] = A[0].
——–
Using the above recurrence relation, we can compute the sum of the optimal subsequence for array A, which would just be the maximum over S[i] for 0 <= i <= n-1.
Since we are required to output the starting and ending indices of an optimal subsequence, we would use another array T, where T[i] would store the starting index of a maximum sum contiguous subsequence ending at index i.
In pseudocode form, the algorithm would look thus:
Create arrays S and T, each of size n.
S[0] = A[0];
T[0] = 0;
max = S[0];
max_start = 0, max_end = 0;
For i going from 1 to n-1:
    // We know that S[i] = max { S[i-1] + A[i], A[i] }.
    If (S[i-1] > 0)
        S[i] = S[i-1] + A[i];
        T[i] = T[i-1];
    Else
        S[i] = A[i];
        T[i] = i;
    If (S[i] > max)
        max_start = T[i];
        max_end = i;
        max = S[i];
EndFor.
Output max_start and max_end.
———–
The above algorithm takes O(n) time and O(n) space.
——–
We can however improve upon the space requirements, reducing it to O(1). The key idea is this: for computing S[i] and T[i], all we need are the values of S[i-1] and T[i-1] apart from the given array A. Hence, there is no need to store all values of the S and T arrays.
We could proceed as follows:
max = A[0];
max_start = 0;
max_end = 0;
S = A[0];
T = 0;
For i going from 1 to n-1:
    // S = max { S + A[i], A[i] }
    If (S > 0)
        S = S + A[i];
    Else
        S = A[i];
        T = i;
    If (S > max)
        max_start = T;
        max_end = i;
        max = S;
EndFor.
Output max_start and max_end.
———–
The above algorithm takes O(n) time and O(1) space.
Labels: Data, Adobe Question, Amazon Interview, DirectI Interview, Dynamic Programming, Google Interview, Microsoft Interview, PocketGems
Tuesday, February 14, 2012
How fast can robots complete the given sequence?
Blue and Orange are friendly robots. An evil computer mastermind has
locked them up in separate hallways to test them, and then possibly give
them cake.
Each hallway contains 100 buttons labeled with the positive integers {1, 2, ..., 100}. Button k is always k meters from the start of the hallway, and the robots both begin at button 1. Over the period of one second, a robot can walk one meter in either direction, or it can press the button at its position once, or it can stay at its position and not press the button. To complete the test, the robots need to push a certain sequence of buttons in a certain order. Both robots know the full sequence in advance. How fast can they complete it?
For example, let's consider the following button sequence:
O 2, B 1, B 2, O 4
Here, O 2 means button 2 in Orange's hallway, B 1 means button 1 in Blue's hallway, and so on. The robots can push this sequence of buttons in 6 seconds using the strategy shown below:
Time | Orange           | Blue
-----+------------------+-----------------
  1  | Move to button 2 | Stay at button 1
  2  | Push button 2    | Stay at button 1
  3  | Move to button 3 | Push button 1
  4  | Move to button 4 | Move to button 2
  5  | Stay at button 4 | Push button 2
  6  | Push button 4    | Stay at button 2

Note that Blue has to wait until Orange has completely finished pushing O 2 before it can start pushing B 1.
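The optimal completion time can be computed with a greedy simulation: each robot walks toward its next required button while it waits, so the k-th press happens at the later of (previous press time + 1) and (the pressing robot's own previous press time + walking distance + 1). A C++ sketch of this idea (our own code, not part of the problem statement):

```cpp
#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

// Each instruction is (robot, button), with robot 'O' (Orange) or 'B' (Blue).
int completionTime(const std::vector<std::pair<char, int>>& seq)
{
    int pos[2] = { 1, 1 };      // current button of Orange (0) and Blue (1)
    int free_at[2] = { 0, 0 };  // time at which each robot made its last press
    int last_press = 0;         // time of the most recent press overall
    for (const auto& step : seq)
    {
        int r = (step.first == 'O') ? 0 : 1;
        int walk = std::abs(step.second - pos[r]);
        // A robot may press once it has walked over, but presses happen one
        // per second and must follow the required global order.
        int press = std::max(last_press + 1, free_at[r] + walk + 1);
        free_at[r] = press;
        last_press = press;
        pos[r] = step.second;
    }
    return last_press;
}
```

For the example sequence O 2, B 1, B 2, O 4 this returns 6, matching the table above. Walking early can never hurt, which is why the greedy simulation is optimal.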
Labels: Data, Dynamic Programming, Google CodeJam
Tuesday, November 15, 2011
Given a Table of Cities, Find the Minimum Number of Hops Required to Fly From One City to Another
There is a very primitive database and it has a table, say "Travel". The content of the table is as follows:
Source | Dest
--------------
Sea | LA
LA | FL
LA | MA
FL | Sea
Sea | FL
--------------
The ask is to find out all routes between Sea and FL with the minimum number of hops.
The result would be:
1. Sea -> FL
2. Sea -> LA - > FL
You have to write a middle-tier function to achieve the above result. You can assume there is a DB API that returns the destination cities if you provide the source city.
Labels: Data, Amazon Interview, DirectI Interview, Dynamic Programming, Facebook Interview, Google Interview, Microsoft Interview, Recursion
Thursday, October 20, 2011
Provide an efficient algorithm to solve the question "Can I buy N items?"
A shop sells an item in packets of 6, 9, and 17. A customer can buy any number of packets, but a packet cannot be broken up. Provide an efficient algorithm to solve the question "Can I buy N items?". For example, is it possible to buy 29 items? What about 19? Your algorithm simply needs to return true or false.
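A linear DP over amounts answers this: N items can be bought iff some packet size p leaves a buyable remainder N - p, with 0 items trivially buyable. A C++ sketch:

```cpp
#include <vector>

// Can exactly n items be bought using packets of 6, 9 and 17?
bool canBuy(int n)
{
    const int packs[] = { 6, 9, 17 };
    std::vector<bool> ok(n + 1, false);
    ok[0] = true;                         // buying nothing is always possible
    for (int i = 1; i <= n; ++i)
        for (int p : packs)
            if (i >= p && ok[i - p])      // one packet of size p completes i
            {
                ok[i] = true;
                break;
            }
    return ok[n];
}
// canBuy(29) -> true (6 + 6 + 17); canBuy(19) -> false
```

29 works as 6 + 6 + 17, but 19 does not: without a 17-packet, 6s and 9s only produce multiples of 3, and 19 - 17 = 2 is smaller than any packet. The DP runs in O(N) time and space.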
Labels: Data, Coin Change, DirectI Interview, Dynamic Programming, Facebook Interview, Google Interview, Knapsack
Saturday, August 6, 2011
Coin Denomination Problem
Let us say we have an amount we have to pay someone: say $97. The given currency has notes worth $1, $5, $10 and $50. Now, what is the minimum number of currency notes with which the amount can be made? In this case, it's eight: two $1 notes, a $5 note, four $10 notes and a $50 note - eight in all.
If a computer had to solve the same problem, how would it do it? Let us see below. ‘Denomination’ refers to the list of available currency notes: $1, $5, $10 and $50 in this case - each note is a denomination.
In General
Problem: We are given a set of denominations d1, d2, d3, ..., dn in increasing order. (Without loss of generality) we assume also that d1 = 1 (that is, there is always a $1 note, and that is the first denomination) so that it is always possible to produce all amounts from the given denominations. We are also given an amount of money amt. We must use a minimum number of coins (or currency notes) of the given denominations to produce amt.
Great Info http://www.seeingwithc.org/topic1html.html
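The standard DP computes, for every amount a from 0 up to amt, the fewest notes making a: best[a] = 1 + min over denominations c <= a of best[a - c]. A C++ sketch:

```cpp
#include <climits>
#include <vector>

// Minimum number of notes of the given denominations summing to amt.
// Returns INT_MAX if amt cannot be made (impossible when d contains 1).
int minNotes(const std::vector<int>& d, int amt)
{
    std::vector<int> best(amt + 1, INT_MAX);
    best[0] = 0;                               // zero amount needs zero notes
    for (int a = 1; a <= amt; ++a)
        for (int c : d)
            if (c <= a && best[a - c] != INT_MAX && best[a - c] + 1 < best[a])
                best[a] = best[a - c] + 1;     // use one note of value c
    return best[amt];
}
// minNotes({1, 5, 10, 50}, 97) -> 8
```

For $97 with denominations {1, 5, 10, 50} this returns 8, matching the example above. The running time is O(n * amt).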
Two-Person Traversal of a Sequence of Cities. You are given an ordered sequence of n cities, and the distances between every pair of cities. You must partition the cities into two subsequences (not necessarily contiguous) such that person A visits all cities in the first subsequence (in order), person B visits all cities in the second subsequence (in order), and such that the sum of the total distances travelled by A and B is minimized. Assume that person A and person B start initially at the first city in their respective subsequences.
Recursion Relation: We recurse on C(i, j), the minimum distance traveled if person A ends at city i and person B ends at city j. Assume WLOG that i < j (with i = 0 meaning person A has not yet visited any city). The relation is:

C(i, j) = sum of d(k, k+1) for k = 1 to j-1,          if i = 0
C(i, j) = min over 0 <= k < i of C(k, i) + d(k, j),   if i = j-1
C(i, j) = C(i, j-1) + d(j-1, j),                      if 0 < i < j-1

where d(i, j) is the distance between cities i and j, and d(0, j) = 0 (a person's first city costs nothing to reach).
Running Time: There are n^2 entries in C(i, j) and filling in each entry takes O(n), for a total of O(n^3).
Reference http://people.csail.mit.edu/bdean/6.046/dp/
http://courses.csail.mit.edu/6.006/fall10/handouts/dpproblems-sol.pdf
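A direct C++ translation of the recurrence above might look as follows (our sketch: cities are 1-based, i = 0 means person A has no cities yet, and d(0, j) = 0 since a person's first city is free). The answer is the minimum of C(i, n) over all i:

```cpp
#include <algorithm>
#include <vector>

// dist is 1-based: dist[a][b] = distance between cities a and b (a, b >= 1).
// Returns the minimum total distance for two people covering cities 1..n in order.
double twoPersonTraversal(int n, const std::vector<std::vector<double>>& dist)
{
    const double INF = 1e18;
    // d(k, j) with k == 0 meaning "this walker starts fresh at j" (no cost).
    auto d = [&](int k, int j) { return k == 0 ? 0.0 : dist[k][j]; };
    // C[i][j] defined for 0 <= i < j <= n.
    std::vector<std::vector<double>> C(n + 1, std::vector<double>(n + 1, INF));
    for (int j = 1; j <= n; ++j)
    {
        // i = 0: one person has walked the whole prefix 1..j alone.
        double s = 0.0;
        for (int k = 1; k < j; ++k) s += dist[k][k + 1];
        C[0][j] = s;
        for (int i = 1; i < j; ++i)
        {
            if (i == j - 1)
            {
                // City j is taken by the person who previously ended at some k < i.
                for (int k = 0; k < i; ++k)
                    C[i][j] = std::min(C[i][j], C[k][i] + d(k, j));
            }
            else
                // Cities i+1..j-1 all belong to the person ending at j,
                // so j's predecessor on that route is j-1.
                C[i][j] = C[i][j - 1] + dist[j - 1][j];
        }
    }
    double best = INF;
    for (int i = 0; i < n; ++i) best = std::min(best, C[i][n]);
    return best;
}
```

For three cities with d(1,2) = d(2,3) = 1 and d(1,3) = 2, the best split is {1, 2} for one person and {3} for the other (or {1} and {2, 3}), for a total distance of 1.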
Labels: Data, Dynamic Programming, Google Interview
Wednesday, August 3, 2011
Design An Algorithm to Exhibit Did You Mean Feature of Google Search Engine
When people search at Google, they often type two words without a space. For example, instead of typing "binary tree", someone can type "binarytree". Given such an input, what is a good way of finding whether it is a combination of two valid words, i.e. whether you can directly add a space between two such words? If you are using a data structure such as a trie into which the dictionary words have been inserted, you would have maintained a boolean variable marking the end of each word.
An example: let's say you have a phrase without any spaces, e.g. "thisisawesome". Given a dictionary, how would you add spaces to this string?
Trie trie = new Trie();
trie.insert("i");
trie.insert("this");
trie.insert("saw");
trie.insert("is");
trie.insert("a");
trie.insert("some");
trie.insert("awesome");

String inputString = "thisisawesome";
String outputString = "";
int left = 0;
int right = 1;
while (right <= inputString.length())
{
    String str = inputString.substring(left, right);
    boolean found = trie.containsKey(str);
    if (found)
    {
        outputString += str;
        outputString += " ";
        left += str.length();
        right = left + 1;
    }
    else
    {
        right++;
    }
}
Another good working code you can refer to: Avinash's solution.
Time Complexity: O(K), where K is the length of the input string.
Labels: Data, Dynamic Programming, Google Interview, Large Scalable System, Machine Learning, Trie
Tuesday, August 2, 2011
Design an algorithm such that maximum number of jobs can be performed in that given time interval.
You are given N jobs with their start and end times mentioned. These jobs may have overlapping timings. You have to suggest an algorithm such that the maximum number of jobs can be performed in the given time interval.
Given a set of n jobs j1, j2, · · · , jn, find a set of compatible jobs with the
maximum weight.
• Each job ji starts at sji and finishes at fji .
• Each job ji has weight (value) vji .
• Two jobs are compatible if their intervals do not overlap.
In other words, we are trying to find a set of jobs ji1 , ji2 , · · · , jik such that the following is maximized:
sum(vjix) for each x=1 to k
The dynamic programming paradigm gives us two options when examining
each job. The optimal solution either contains the job, or it doesn’t. As
an example, if the optimal solution contains j1, then we pick j1 and find the
optimal solution for jobs compatible with j1. If the optimal solution does
not contain j1, then we find the optimal solution for j2, j3, · · · , jn. (This assumes that the jobs are arranged in order of their finish time, such that fj1 ≤ fj2 ≤ · · · ≤ fjn.)
Once we select a job to be a part of the optimal solution, we exclude a
host of other jobs that are incompatible with the selection. To assist in our
definition of an optimal solution, we define jip to be the largest index job,
ip < i, that is compatible with ji. In other words, jip is the job for which
sji − fjip is minimized.
We can then let OPT(i) be the set containing the optimal solution for
jobs j1, j2, · · · , ji. It is defined as follows for each ji:
• Case 1: OPT selects ji
– We cannot select jobs jip+1, · · · , ji−1, so we exclude them from
consideration.
– We recurse and find the optimal solution for jobs j1, j2, · · · , jip by
setting OPT(i) = vji + OPT(ip).
• Case 2: OPT does not select ji
– We recurse and find the optimal solution for jobs j1, j2, · · · , ji−1
by setting OPT(i) = OPT(i − 1).
Since we don’t know which case will return the optimal solution, we must
try both. Once we have calculated both possibilities, we pick the case which
returns the larger set. Now we can create a more compact definition, together
with a simple implementation:
OPT(i) = max{vji + OPT(ip), OPT(i − 1)} | i ≥ 1
= 0 | otherwise
The recursive algorithm looks like
COMPUTE_OPT(i):
if i == 0:
return 0
return max {
COMPUTE_OPT(i - 1),
v[j(i)] + COMPUTE_OPT(p(i))
}
The problem with the algorithm as it stands is that a lot of computation is
repeated. This is visualized by looking at the recursive tree structure:
-COMPUTE_OPT(5)
-COMPUTE_OPT(3)
-COMPUTE_OPT(1)
-COMPUTE_OPT(2)
-COMPUTE_OPT(0)
-COMPUTE_OPT(1)
-COMPUTE_OPT(4)
-COMPUTE_OPT(2)
-COMPUTE_OPT(0)
-COMPUTE_OPT(1)
-COMPUTE_OPT(3)
-COMPUTE_OPT(1)
-COMPUTE_OPT(2)
-COMPUTE_OPT(0)
-COMPUTE_OPT(1)
It is easy to see that for even a small set of jobs, the amount of repeated
computation is huge. To avoid this, we introduce memoization, which stores
the values for OPT(i) in a global array as they are calculated. If the value is
needed later, it is simply retrieved from the array, pruning the recursive tree
such that each descent only happens once. The following revised algorithm
implements memoization:
M_COMPUTE_OPT(i):
if M[i] != null:
return M[i]
if i == 0:
M[0] = 0
else:
M[i] = max {
M_COMPUTE_OPT(i - 1),
v[j(i)] + M_COMPUTE_OPT(p(i))
}
return M[i]
This memoized version saves valuable computational resources, but each
recursive call adds overhead. The algorithm runs in O(n) time, but its recursive
nature makes it bulky.
We can eliminate the overhead of recursion by constructing an iterative (nonrecursive)
algorithm that builds a bottom-up list of the values.
M[0] = 0
for i = 1 to n:
M[i] = max {
M(i - 1),
v[j(i)] + M(p(i))
}
This is obviously linear, and without the added computational weight of
a recursive algorithm. To reconstruct the solution, we simply run backwards
through the array (take M[n] and compare to M[n-1], etc.).
PRINT_SOLUTION(i):
if i == 0:
return
if v[j(i)] + M[p(i)] == M[i]:
print j(i)
PRINT_SOLUTION(p(i))
else:
PRINT_SOLUTION(i - 1)
Time Complexity O(N)
Space Complexity O(N)
Resources
http://en.wikipedia.org/wiki/Interval_scheduling
http://pages.cs.wisc.edu/~shuchi/courses/787-F09/scribe-notes/lec3.pdf
http://www.cs.cornell.edu/courses/cs482/2007sp/dynamic.pdf
http://www.cs.uiuc.edu/class/sp08/cs473/Lectures/lec10.pdf
http://www.cs.sfu.ca/~ssa121/personal/spring08/705/dp.pdf
http://www.cs.uiowa.edu/~hzhang/c31/6-dynamic.pdf
classes.soe.ucsc.edu/.../06dynamic-programming-weighted-interv-sched.ppt
http://www.cs.princeton.edu/courses/archive/spr05/cos423/lectures/06dynamic-programming.pdf
http://www.cse.psu.edu/~asmith/courses/cse565/F08/www/lec-notes/CSE565-F08-Lec-17.pdf
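The iterative algorithm above, in compilable C++ (our sketch; p(i) is computed by a backward scan here, though a binary search over the sorted finish times would bring the whole thing to O(n log n)):

```cpp
#include <algorithm>
#include <vector>

struct Job { int start, finish, weight; };

// Iterative weighted interval scheduling: max total weight of compatible jobs.
int maxWeight(std::vector<Job> jobs)
{
    int n = jobs.size();
    std::sort(jobs.begin(), jobs.end(),
              [](const Job& a, const Job& b) { return a.finish < b.finish; });
    // p[i] = length of the longest prefix of jobs (sorted by finish time)
    // that finish no later than jobs[i] starts, i.e. are compatible with it.
    std::vector<int> p(n), M(n + 1, 0);
    for (int i = 0; i < n; ++i)
    {
        p[i] = 0;
        for (int k = i - 1; k >= 0; --k)
            if (jobs[k].finish <= jobs[i].start)
            {
                p[i] = k + 1;
                break;
            }
    }
    // M[i] = best total weight using the first i jobs.
    for (int i = 1; i <= n; ++i)
        M[i] = std::max(M[i - 1], jobs[i - 1].weight + M[p[i - 1]]);
    return M[n];
}
```

With jobs (1-3, weight 2), (2-5, weight 4), (4-6, weight 4), the best choice is the first and third jobs for a total of 6. Setting every weight to 1 solves the unweighted version asked at the top of this post (the maximum number of jobs).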
Labels: Data, Dynamic Programming, Facebook Interview, Google Interview
Tuesday, July 26, 2011
Painter Partition Problem
You have to paint N boards of length {A0, A1, A2 … AN-1}. There are K painters available and you are also given how much time a painter takes to paint 1 unit of board. You have to get this job done as soon as possible under the constraints that any painter will only paint continuous sections of board, say board {2, 3, 4} or only board {1} or nothing but not board {2, 4, 5}.
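Since every painter paints at the same rate (say T time per unit), minimizing the completion time is the same as minimizing the largest total board length assigned to any single painter, with each painter getting a contiguous run of boards. Besides the classical O(k*n^2) DP over prefixes, a simple alternative is binary search on that largest share. A C++ sketch of the binary-search version (our own code):

```cpp
#include <algorithm>
#include <vector>

// Painters needed if no painter may paint more than 'cap' total length.
static int paintersFor(const std::vector<int>& boards, long long cap)
{
    int count = 1;
    long long cur = 0;
    for (int b : boards)
    {
        if (cur + b > cap)      // current painter is full; start the next one
        {
            count++;
            cur = 0;
        }
        cur += b;
    }
    return count;
}

// Smallest possible "largest share" for k painters (multiply by T for time).
long long minPaintTime(const std::vector<int>& boards, int k)
{
    long long lo = 0, hi = 0;
    for (int b : boards)
    {
        hi += b;                                // one painter does everything
        lo = std::max<long long>(lo, b);        // someone must paint the biggest board
    }
    while (lo < hi)
    {
        long long mid = (lo + hi) / 2;
        if (paintersFor(boards, mid) <= k) hi = mid;   // cap is feasible
        else lo = mid + 1;                             // cap is too small
    }
    return lo;
}
```

For boards {10, 20, 30, 40} and 2 painters the answer is 60: one painter takes {10, 20, 30}, the other takes {40}... the larger share is 60 units. The feasibility check is O(n) and the search is O(log(sum)), so the whole thing is O(n log(sum)).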
Labels: Data, Dynamic Programming, Google Interview
Find The Number of Unique Path in Maze Where Robot is Sitting at Top-Left Corner & Can Move According to Given Constraints
A robot is located at the top-left corner of an m x n grid (marked ‘Start’ in the diagram below). The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked ‘End’ in the diagram below). How many possible unique paths are there?
I posted the solution with the post because I was aware of this problem earlier :)
Basic Solution, Approach, Algorithms Using Backtracking
Start from the top-left corner (say we are at position r=1, c=1 initially) and keep moving until we reach r=m and c=n, i.e. the bottom-right corner. Writing code for this is simple.
int backtrack(int r, int c, int m, int n) {
if (r == m && c == n)
return 1;
if (r > m || c > n)
return 0;
return backtrack(r+1, c, m, n) + backtrack(r, c+1, m, n);
}
But as with typical backtracking solutions, this involves repeated calculations and so is very inefficient: it recalculates the same solution over and over again. So we use a dynamic programming (DP) technique called memoization, also called the top-down approach.
#include <cstring>

const int M_MAX = 10;
const int N_MAX = 10;

int backtrack(int r, int c, int m, int n, int mat[][N_MAX+2]) {
    if (r == m && c == n)
        return 1;
    if (r > m || c > n)
        return 0;
    if (mat[r+1][c] == -1)
        mat[r+1][c] = backtrack(r+1, c, m, n, mat);
    if (mat[r][c+1] == -1)
        mat[r][c+1] = backtrack(r, c+1, m, n, mat);
    return mat[r+1][c] + mat[r][c+1];
}

int bt(int m, int n) {
    // The table must be (M_MAX+2) x (N_MAX+2) to match the backtrack
    // signature and to allow reads at indices m+1 and n+1.
    int mat[M_MAX+2][N_MAX+2];
    memset(mat, -1, sizeof(mat));
    return backtrack(1, 1, m, n, mat);
}
Time Complexity O(M*N) (each cell is computed at most once)
Space Complexity O(M*N)
Bottom-Up Dynamic Programming More Efficient
As we know The total unique paths at above matrix (r,c) is equal to the sum of total unique paths from matrix to the right (r,c+1) and the matrix below (r+1,c).
(For clarity, we will solve this part assuming an X*Y Matrix)
Each path has (X-1)+(Y-1) steps. Imagine the following paths:
X X Y Y X (move right -> right -> down -> down -> right)
X Y X Y X (move right -> down -> right -> down -> right)
...& so on
Each path can be fully represented by the moves at which we move
right. That is, if I were to ask you which path you took, you could
simply say “I moved right on step 3 and 4.”
Since you must always move right X-1 times, and you have X-1 + Y-1 total steps, you have to pick which X-1 of the X-1+Y-1 steps are the moves to the right. Thus, there are C(X-1+Y-1, X-1) paths (i.e., X-1+Y-1 choose X-1):
(X-1 + Y-1)! / ((X-1)! * (Y-1)!)
const int M_MAX = 100;
const int N_MAX = 100;

int dp(int m, int n) {
    // (M_MAX+2) x (N_MAX+2) so that mat[m][n+1] and mat[r+1][c] stay in bounds.
    int mat[M_MAX+2][N_MAX+2] = {0};
    mat[m][n+1] = 1;
    for (int r = m; r >= 1; r--)
        for (int c = n; c >= 1; c--)
            mat[r][c] = mat[r+1][c] + mat[r][c+1];
    return mat[1][1];
}
Let's assume you have an M×N grid. It doesn't matter how you traverse the grid, you always take a total of M+N-2 steps. To be more exact, you always have to choose N-1 steps to the right (R) and M-1 steps to the bottom (B). Therefore, the problem can be transformed into the question of how many ways you can arrange N-1 R's and M-1 B's in these M+N-2 steps. The answer is C(M+N-2, M-1), so the general solution for an m x n grid is C(m+n-2, m-1).
Time Complexity O(M*N)
Space Complexity O(M*N)
Follow Up: Above we have only calculated the number of paths. Can we print those paths as well? If yes, what will the time complexity be?
Avid TV Watcher
Here is a TV-avid person. He wants to spend the maximum time watching TV. There are N channels with different programs of different lengths at different times. Write a program so that the person can spend the maximum time watching TV. Precondition: if the person watches a program, he watches it completely.
Ex:
Channel1:
prog1 – 8:00- 8:30
prog2: 9:00 – 10:00
prog3: 10:15 – 12:00
channel2:
prg1 – 8:15 – 10:00
prg2: 10:30 – 12:00
So in this case max time will be if he watches:
ch2/prg1 + ch1/prg3
Labels: Data, Amazon Interview, Dynamic Programming
Design an algorithm to find the maximum number of apples you can collect ?
As this is the season of companies visiting colleges, my mailbox is full of problems :D Here is another one.
A table composed of N*M cells, each having a certain quantity of apples, is given. You start from the upper-left corner. At each step you can go down or right one cell. Design an algorithm to find the maximum number of apples you can collect, if you are moving from the upper-left corner to the bottom-right corner.
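Since the only moves are right and down, the best haul ending at cell (i, j) is that cell's apples plus the better of the best hauls ending just above or just to the left: S[i][j] = apples[i][j] + max(S[i-1][j], S[i][j-1]). A C++ sketch:

```cpp
#include <algorithm>
#include <vector>

// Max apples collected moving only right/down from (0,0) to (n-1, m-1).
int maxApples(const std::vector<std::vector<int>>& apples)
{
    int n = apples.size(), m = apples[0].size();
    std::vector<std::vector<int>> S(n, std::vector<int>(m, 0));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
        {
            int best = 0;                          // edges have one predecessor
            if (i > 0) best = S[i - 1][j];
            if (j > 0) best = std::max(best, S[i][j - 1]);
            S[i][j] = apples[i][j] + best;
        }
    return S[n - 1][m - 1];
}
// maxApples({{1, 2}, {3, 4}}) -> 8 (path 1 -> 3 -> 4)
```

The table has N*M cells and each is filled in O(1), so this runs in O(N*M) time; keeping only the previous row reduces the space to O(M).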
Labels: Data, Dynamic Programming, Facebook Interview, Google Interview
Yuckdonald's is considering opening a series of restaurants along Quaint Valley Highway (QVH).Give an efficient algorithm to compute the maximum expected total profit subject to the given constraints.
A reader sent me this interesting problem some time back, and he also confirmed that Google asked it recently :)
Yuckdonald's is considering opening a series of restaurants along Quaint Valley Highway (QVH).The n possible locations are along a straight line, and the distances of these locations from the start of QVH are, in miles and in increasing order,m1;m2; : : : ;mn.
The constraints are as follows:
1)At each location,Yuckdonald's may open at most one restaurant. The expected profi t from opening a restaurant at location i is pi, where
pi > 0 and i = 1; 2; : : : ; n.
2)Any two restaurants should be at least k miles apart, where k is a
positive integer.
Give an efficient algorithm to compute the maximum expected total
profit subject to the given constraints.
More Info About The Problem can be found here
http://www.cs.grinnell.edu/~coahranm/csc301/f2007/hw/a8.pdf
http://www.cs.berkeley.edu/~vazirani/algorithms/chap6.pdf
http://users.eecs.northwestern.edu/~dda902/336/hw6-sol.pdf
https://groups.google.com/forum/#!topic/algogeeks/JUpjPTR494Y
http://www.solvemyproblem.net/Webed/forum/thread.asp?tid=1213
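A sketch of the usual DP for this problem (the function name `maxProfit` and the test values are mine, for illustration): let best[j] be the maximum profit using only the first j locations; at location j you either skip it, or open there and add the best over all locations at least k miles earlier.

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// best[j] = max expected profit using only locations 1..j:
//   best[j] = max(best[j-1],              // skip location j
//                 p[j] + best[prev(j)])   // open at j; prev(j) = number of
//                                         // locations at least k miles before m[j]
// Since m[] is increasing, prev(j) never decreases as j grows, so a single
// advancing pointer (cnt) gives O(n) total; a binary search per j would
// give O(n log n) instead.
int maxProfit(const vector<int>& m, const vector<int>& p, int k)
{
    int n = m.size();
    vector<int> best(n + 1, 0);
    int cnt = 0;   // count of locations i (1-based) with m[i] <= m[j] - k
    for (int j = 1; j <= n; j++) {
        while (cnt < j - 1 && m[cnt] <= m[j-1] - k) cnt++;
        best[j] = max(best[j-1], p[j-1] + best[cnt]);
    }
    return best[n];
}
```

For instance, with distances {0, 5, 10, 12}, profits {5, 10, 7, 20}, and k = 6, the optimum is 30: open at locations 2 and 4, which are 7 miles apart.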
Labels: Data, Dynamic Programming, Google Interview, ICPC, Summer Fall 2007
Write an algorithm that efficiently loots the maximum amount of money from a row of houses
There is a row of houses in which each house contains some amount of money.
Write an algorithm that loots the maximum amount of money from these houses.
The only restriction is that you cannot loot two houses that are directly
next to each other.
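A compact sketch of the classic DP for this (the function name `maxLoot` is mine): for each house, either skip it and keep the best so far, or rob it and add the best total from two houses back. Only the last two values are needed, so it runs in O(n) time and O(1) space.

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// loot(i) = max money from the first i houses:
//   loot(i) = max(loot(i-1),              // skip house i
//                 money[i] + loot(i-2))   // rob house i, skip its neighbour
int maxLoot(const vector<int>& money)
{
    int prev2 = 0, prev1 = 0;   // loot(i-2) and loot(i-1)
    for (int x : money) {
        int cur = max(prev1, prev2 + x);
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}
```

For example, for houses holding {6, 1, 2, 7}, robbing the first and last houses yields the maximum of 13.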
Labels: Data, Dynamic Programming, Facebook Interview
Monday, July 25, 2011
Given a positive integer n, find the minimum number of steps that takes n to 1. E.g.: 1.) For n = 1, output: 0; 2.) For n = 4, output: 2 (4/2 = 2, 2/2 = 1); 3.) For n = 7, output: 3 (7-1 = 6, 6/3 = 2, 2/2 = 1)
On a positive integer, you can perform any one of the following 3 steps: 1.) Subtract 1 from it (n = n - 1); 2.) If it is divisible by 2, divide it by 2 (if n % 2 == 0, then n = n / 2); 3.) If it is divisible by 3, divide it by 3 (if n % 3 == 0, then n = n / 3). Using these operations, solve the above question efficiently.
Approach / Idea: One can think of greedily choosing the step that makes n as low as possible and continuing the same till it reaches 1. If you observe carefully, the greedy strategy doesn't work here. E.g.: given n = 10, greedy --> 10/2 = 5, 5-1 = 4, 4/2 = 2, 2/2 = 1 (4 steps). But the optimal way is --> 10-1 = 9, 9/3 = 3, 3/3 = 1 (3 steps). So we need to try out all possible steps for each value of n we encounter and choose the minimum of these possibilities.
It all starts with recursion :). F(n) = 1 + min{ F(n-1), F(n/2), F(n/3) } if n > 1, else 0 (i.e., F(1) = 0). Now that we have our recurrence equation, we can right away start coding the recursion. Wait... does it have overlapping subproblems? Yes. Does the optimal solution to a given input depend on the optimal solutions of its subproblems? Yes... Bingo! It's DP :) So we either store the solutions to the subproblems as we solve them and reuse them later, as in memoization, or we start from the bottom and move up to the given n, as in bottom-up DP. As this is the very first problem we are looking at here, let's see both the codes.
Memoization
[code]
// memo[] must be sized at least n+1 and initialized to -1
// (-1 means "not solved yet"), e.g.:
//   int memo[MAXN]; memset(memo, -1, sizeof(memo));
int getMinSteps ( int n )
{
    if ( n == 1 ) return 0;                 // base case
    if ( memo[n] != -1 ) return memo[n];    // we have solved it already :)
    int r = 1 + getMinSteps( n - 1 );       // '-1' step; 'r' will hold the optimal answer
    if ( n % 2 == 0 ) r = min( r, 1 + getMinSteps( n / 2 ) );   // '/2' step
    if ( n % 3 == 0 ) r = min( r, 1 + getMinSteps( n / 3 ) );   // '/3' step
    memo[n] = r;    // save the result; if you forget this step, it is the same as plain recursion
    return r;
}
[/code]
Bottom-Up DP
[code]
int getMinSteps ( int n )
{
    int dp[n+1], i;   // dp[i] = min steps to reduce i to 1
    dp[1] = 0;        // trivial case
    for ( i = 2; i <= n; i++ )
    {
        dp[i] = 1 + dp[i-1];                                   // '-1' step
        if ( i % 2 == 0 ) dp[i] = min( dp[i], 1 + dp[i/2] );   // '/2' step
        if ( i % 3 == 0 ) dp[i] = min( dp[i], 1 + dp[i/3] );   // '/3' step
    }
    return dp[n];
}
[/code]
Labels: Data, Dynamic Programming, Facebook Interview, Google Interview
Wednesday, July 20, 2011
Find the number of ways of writing a number as the sum of positive integers?
The above is also called the partition function of n, denoted P(n).
For example, 5 can be written as:
5
4+1
3+2
3+1+1
2+2+1
2+1+1+1
1+1+1+1+1
The partitions of 4 are listed below:
1. 4
2. 3 + 1
3. 2 + 2
4. 2 + 1 + 1
5. 1 + 1 + 1 + 1
The partitions of 8 are listed below:
1. 8
2. 7 + 1
3. 6 + 2
4. 6 + 1 + 1
5. 5 + 3
6. 5 + 2 + 1
7. 5 + 1 + 1 + 1
8. 4 + 4
9. 4 + 3 + 1
10. 4 + 2 + 2
11. 4 + 2 + 1 + 1
12. 4 + 1 + 1 + 1 + 1
13. 3 + 3 + 2
14. 3 + 3 + 1 + 1
15. 3 + 2 + 2 + 1
16. 3 + 2 + 1 + 1 + 1
17. 3 + 1 + 1 + 1 + 1 + 1
18. 2 + 2 + 2 + 2
19. 2 + 2 + 2 + 1 + 1
20. 2 + 2 + 1 + 1 + 1 + 1
21. 2 + 1 + 1 + 1 + 1 + 1 + 1
22. 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
Approach
One way of getting a handle on the partition function involves an intermediate function p(k, n), which represents the number of partitions of n using only natural numbers at least as large as k. For any given value of k, partitions counted by p(k, n) fit into exactly one of the following categories:
1. smallest addend is k
2. smallest addend is strictly greater than k.
The number of partitions meeting the first condition is p(k, n − k). To see this, imagine a list of all the partitions of the number n − k into numbers of size at least k, then imagine appending "+ k" to each partition in the list. Now what is it a list of? As a side note, one can use this to define a sort of recursion relation for the partition function in terms of the intermediate function, namely
P(n) = 1 + sum of p(k, n − k) for k = 1 to ⌊n/2⌋,
where ⌊n⌋ is the floor function.
The number of partitions meeting the second condition is p(k + 1, n) since a
partition into parts of at least k that contains no parts of exactly k must have all parts at least k + 1.
Since the two conditions are mutually exclusive, the number of partitions meeting either condition is p(k + 1, n) + p(k, n − k). The recursively defined function is thus:
* p(k, n) = 0 if k > n
* p(k, n) = 1 if k = n
* p(k, n) = p(k+1, n) + p(k, n − k) otherwise.
Got The Hint of DP using matrix of P[n,n] :)
Euler invented a generating function which gives rise to a recurrence equation in P(n).
For example,5 can be written as.
5
4+1
3+2
3+1+1
2+2+1
2+1+1+1
1+1+1+1+1
The partitions of 4 are listed below:
1. 4
2. 3 + 1
3. 2 + 2
4. 2 + 1 + 1
5. 1 + 1 + 1 + 1
The partitions of 8 are listed below:
1. 8
2. 7 + 1
3. 6 + 2
4. 6 + 1 + 1
5. 5 + 3
6. 5 + 2 + 1
7. 5 + 1 + 1 + 1
8. 4 + 4
9. 4 + 3 + 1
10. 4 + 2 + 2
11. 4 + 2 + 1 + 1
12. 4 + 1 + 1 + 1 + 1
13. 3 + 3 + 2
14. 3 + 3 + 1 + 1
15. 3 + 2 + 2 + 1
16. 3 + 2 + 1 + 1 + 1
17. 3 + 1 + 1 + 1 + 1 + 1
18. 2 + 2 + 2 + 2
19. 2 + 2 + 2 + 1 + 1
20. 2 + 2 + 1 + 1 + 1 + 1
21. 2 + 1 + 1 + 1 + 1 + 1 + 1
22. 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
Approach
One way of getting a handle on the partition function involves an intermediate function p(k, n), which represents the number of partitions of n using only natural numbers at least as large as k. For any given value of k, partitions counted by p(k, n) fit into exactly one of the following categories:
1. smallest addend is k
2. smallest addend is strictly greater than k.
The number of partitions meeting the first condition is p(k, n − k). To see this, imagine a list of all the partitions of the number n − k into numbers of size at least k, then imagine appending "+ k" to each partition in the list. Now what is it a list of? As a side note, one can use this to define a sort of recursion relation for the partition function in term of the intermediate function, namely
1+ sum(P(k,n-k)=P(n)
for i=1 to floor n/2
where |_n_| is the floor function.
The number of partitions meeting the second condition is p(k + 1, n) since a
partition into parts of at least k that contains no parts of exactly k must have all parts at least k + 1.
Since the two conditions are mutually exclusive, the number of partitions meeting either condition is p(k + 1, n) + p(k, n − k). The recursively defined function is thus:
* p(k, n) = 0 if k > n
* p(k, n) = 1 if k = n
* p(k, n) = p(k+1, n) + p(k, n − k) otherwise.
Got The Hint of DP using matrix of P[n,n] :)
Euler invented a generating function which gives rise to a recurrence equation in P(n),
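The p(k, n) recurrence translates directly into a bottom-up DP over an (n+1) x (n+1) table, as hinted above. A sketch (the function name `countPartitions` is mine); rows are filled for k = n down to 1, so p(k+1, m) is already available, and within a row m increases, so p(k, m − k) is too:

```cpp
#include <vector>
using namespace std;

// Bottom-up DP over the recurrence:
//   p(k, m) = 0                        if k > m
//   p(k, m) = 1                        if k == m
//   p(k, m) = p(k+1, m) + p(k, m-k)   otherwise
// The answer is P(n) = p(1, n). Row n+1 stays all zero as a sentinel.
long long countPartitions(int n)
{
    vector<vector<long long>> p(n + 2, vector<long long>(n + 1, 0));
    for (int k = n; k >= 1; k--)
        for (int m = 0; m <= n; m++) {
            if (k > m)       p[k][m] = 0;   // no partition of m into parts >= k
            else if (k == m) p[k][m] = 1;   // the single-part partition {m}
            else             p[k][m] = p[k+1][m] + p[k][m-k];
        }
    return p[1][n];
}
```

This reproduces the counts listed above: P(4) = 5, P(5) = 7, and P(8) = 22. Both time and space are O(n^2); space can be cut to O(n) by keeping only the current and previous rows.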
Labels: Data, BackTracking, Dynamic Programming, Google Interview