There are two methods to do this:
1st Approach
1) Convert both trees into doubly linked lists (an in-place inorder flattening).
2) Merge the two sorted doubly linked lists.
3) Create a tree from the merged list by taking the median of the list as the root; the nodes to its left form the left subtree and the nodes to its right form the right subtree.
This has time complexity O(n1+n2) and space complexity O(1).
BinaryTree* sortedListToBST(ListNode *& list, int start, int end) {
if (start > end) return NULL;
// same as (start+end)/2, avoids overflow
int mid = start + (end - start) / 2;
BinaryTree *leftChild = sortedListToBST(list, start, mid-1);
BinaryTree *parent = new BinaryTree(list->data);
parent->left = leftChild;
list = list->next;
parent->right = sortedListToBST(list, mid+1, end);
return parent;
}
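The routine above covers step 3. For completeness, here is a hedged sketch of steps 1 and 2: flattening each BST into a sorted doubly linked list in place (reusing left/right as prev/next) and merging the two sorted lists. The Node type and the function names treeToDLL and mergeDLL are my own illustration, not from the original post.
// Hypothetical node type: a BST node reused as a DLL node (left = prev, right = next).
struct Node { int data; Node *left, *right; };
// Flatten a BST into a sorted doubly linked list via inorder traversal.
// 'head' and 'prev' are passed by reference and start out as NULL.
void treeToDLL(Node *root, Node *&head, Node *&prev) {
    if (!root) return;
    treeToDLL(root->left, head, prev);
    if (!prev) head = root;                       // leftmost node becomes the head
    else { prev->right = root; root->left = prev; }
    prev = root;
    treeToDLL(root->right, head, prev);
}
// Merge two sorted doubly linked lists into one sorted list.
Node* mergeDLL(Node *a, Node *b) {
    Node dummy = {0, nullptr, nullptr};
    Node *tail = &dummy;
    while (a && b) {
        Node *&smaller = (a->data <= b->data) ? a : b;  // reference to whichever head is smaller
        tail->right = smaller; smaller->left = tail;
        tail = smaller;
        smaller = smaller->right;                       // advance that list
    }
    Node *rest = a ? a : b;
    tail->right = rest;
    if (rest) rest->left = tail;
    Node *head = dummy.right;
    if (head) head->left = nullptr;
    return head;
}
With minor renaming (right playing the role of next), the merged list can then be handed to the sortedListToBST routine above for step 3.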
1) Find the inorder traversal of both trees (two sorted arrays).
2) Merge them into one sorted array.
3) Take the median of this sorted array (O(1)) as the root.
4) Call this procedure recursively for the left and right halves.
struct tree
{
    int data;
    tree *left, *right;
};

/* builds a balanced BST from the sorted array segment array[left..right] */
tree* insert(int *array, int left, int right)
{
    if (left > right)
        return NULL;
    int mid = left + (right - left) / 2;   /* same as (left+right)/2, avoids overflow */
    tree *root = (tree*) malloc(sizeof(tree));
    root->data = array[mid];
    root->left = insert(array, left, mid - 1);
    root->right = insert(array, mid + 1, right);
    return root;
}
Time Complexity O(N)
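For steps 1 and 2 of this approach, a sketch along these lines might be used (the helper names are my own; it reuses the tree struct above), after which the merged array is passed to insert(array, 0, size-1):
#include <vector>
#include <algorithm>
#include <iterator>
// Collect the inorder traversal of a BST into 'out' (produces a sorted sequence).
void inorder(tree *root, std::vector<int> &out) {
    if (!root) return;
    inorder(root->left, out);
    out.push_back(root->data);
    inorder(root->right, out);
}
// Merge the two sorted inorder sequences of t1 and t2.
std::vector<int> mergedInorder(tree *t1, tree *t2) {
    std::vector<int> a, b, merged;
    inorder(t1, a);
    inorder(t2, b);
    std::merge(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(merged));
    return merged;   // then: insert(merged.data(), 0, (int)merged.size() - 1)
}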
Saturday, June 25, 2011
Imagine you have a special keyboard with the following keys: 1. A, 2. Ctrl+A, 3. Ctrl+C, 4. Ctrl+V, where Ctrl+A, Ctrl+C and Ctrl+V act as single function keys for the "Select All", "Copy" and "Paste" operations respectively.
If you can only press the keyboard N times (with the above four keys), write a program to produce the maximum number of A's. If possible, also print out the sequence of keys.
That is to say, the input parameter is N (the number of keys you can press), and the output is M (the number of A's you can produce).
A Most Effective Solution Is Given at
www.ihas1337code.com/2011/01/ctrla-ctrlc-ctrlv.html
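The linked post explains one solution; as a hedged illustration, here is a small DP sketch of the recurrence commonly used for this problem (my own code, not taken from the link): for small N it is best to press only A; otherwise, for some breakpoint b, play optimally for the first b presses, then spend one Ctrl+A, one Ctrl+C and the remaining N-b-2 presses on Ctrl+V, multiplying the b-press result by N-b-1.
#include <iostream>
#include <vector>
#include <algorithm>
// best[i] = maximum number of A's producible with i key presses (assumed recurrence).
long long maxAs(int n) {
    if (n <= 6) return n;                       // pressing only 'A' is optimal for small n
    std::vector<long long> best(n + 1);
    for (int i = 0; i <= 6; ++i) best[i] = i;
    for (int i = 7; i <= n; ++i) {
        best[i] = best[i - 1] + 1;              // option 1: just press A
        for (int b = i - 3; b >= 1; --b)        // option 2: Ctrl+A, Ctrl+C, then pastes
            best[i] = std::max(best[i], best[b] * (long long)(i - b - 1));
    }
    return best[n];
}
int main() {
    for (int n = 1; n <= 10; ++n)
        std::cout << "N = " << n << " -> M = " << maxAs(n) << "\n";
    return 0;
}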
Labels:Data
Google Interview
Write the code/algorithm to find the k-th Smallest Element in the Union of Two Sorted Arrays .
Given two sorted arrays A, B of size m and n respectively. Find the k-th smallest element in the union of A and B. You can assume that there are no duplicate elements.
I would have to admit that this problem is pretty tricky to solve. Like most difficult problems, it requires some pretty clever observations to solve in a neat way.
The trivial way, O(m+n):
Merge both arrays, and the k-th smallest element can then be accessed directly. Merging requires extra space of O(m+n). The linear run time is pretty good, but could we improve it even further?
A better way, O(k):
There is an improvement from the above method, thanks to readers who suggested this. Using two pointers, you can traverse both arrays without actually merging them, thus without the extra space. Both pointers are initialized to point to head of A and B respectively, and the pointer that has the smaller of the two is incremented one step. The k-th smallest is obtained by traversing a total of k steps. This algorithm is very similar to finding intersection of two sorted arrays.
static int findKthSmallest(int[] A, int[] B, int k) // two-pointer walk, O(k)
{
    int a_offset = 0, b_offset = 0;
    if (A.length + B.length < k) return -1;
    while (true) {
        // consume from A while its current element is the smaller one
        while (a_offset < A.length &&
               (b_offset == B.length || A[a_offset] <= B[b_offset])) {
            a_offset++;
            if (a_offset + b_offset == k) return A[a_offset - 1];
        }
        // consume from B while its current element is the smaller one
        while (b_offset < B.length &&
               (a_offset == A.length || B[b_offset] <= A[a_offset])) {
            b_offset++;
            if (a_offset + b_offset == k) return B[b_offset - 1];
        }
    }
}
The best solution, but non-trivial, O(lg m + lg n):
Although the above solution is an improvement both in run time and space complexity, it only works well for small values of k, and thus is still in linear run time. Could we improve the run time further?
The above logarithmic complexity gives us one important hint. Binary search is a great example of achieving logarithmic complexity by halving its search space in each iteration. Therefore, to achieve a complexity of O(lg m + lg n), we must halve the search spaces of A and B in each iteration.
We approach this tricky problem by comparing middle elements of A and B, which we identify as Ai and Bj. If Ai lies between Bj-1 and Bj, then Ai is the (i+j+1)-th smallest element. Why? Because exactly i elements of A and j elements of B are smaller than Ai. Therefore, if we choose i and j such that i+j = k-1, we are able to find the k-th smallest element. This is an important invariant that we must maintain for the correctness of this algorithm.
The idea is this: since the two arrays may not be of the same length, divide the k-1 smallest elements proportionally between them.
Let i point into array A with i = m/(m+n) * (k-1) (we have to split the k-1 elements between the two arrays), and let j = (k-1) - i.
Then try to fit A[i] between B[j-1] and B[j]; if the three are not in ascending order, try to fit B[j] between A[i-1] and A[i].
If either of these holds, we have found the k-th smallest element. Otherwise, check which of A[i] and B[j] is smaller: if A[i] is smaller, we can discard A[0..i] for the next iteration, so k becomes k-i-1 and m becomes m-i-1, i.e. we now have only (m-i-1)+n elements out of which we have to find the (k-i-1)-th smallest. The iteration goes on until we find our k-th smallest element.
Consider 2 arrays
A={5,7,9,20}; length of A: m=4
B={10,12,21,27,35,50}; length of B: n=6
let K be 4
i=4/10*3=1; A[1]=7;
j=3-1=2; B[2]=21;
B[1]=12 A[1]=7 B[2]=21 [not in asc order]
A[0]=5 B[2]=21 A[1]=7 [not in asc order]
so now,
k=k-i-1 =4-1-1=2
m=m-i-1=4-1-1=2
n=6
A={9,20}; length of A: m=2
B={10,12,21,27,35,50}; length of B: n=6
i=2/8*1=0; A[0]=9;
j=1-0=1; B[1]=12;
(actually A[-1] is just for understanding)
B[0]=10 A[0]=9 B[1]=12 [not in asc order]
A[-1]=-INF B[1]=12 A[0]=9 [not in asc order]
now,
k=k-i-1=2-0-1=1;
m=m-i-1=2-0-1=1;
n=6;
A={20}; length of A: m=1
B={10,12,21,27,35,50}; length of B: n=6
i=1/7*0=0; A[0]=20;
j=0-0=0; B[0]=10;
(actually A[-1] and B[-1] are just for understanding)
B[-1]=-INF A[0]=20 B[0]=10 [not in asc order]
A[-1]=-INF B[0]=10 A[0]=20 [in asc order]
We got the k-th (4th) smallest element, which is 10.
int findKthSmallest(int A[], int m, int B[], int n, int k) {
    assert(m >= 0); assert(n >= 0); assert(k > 0); assert(k <= m+n);
    int i = (int)((double)m / (m+n) * (k-1));
    int j = (k-1) - i;
    assert(i >= 0); assert(j >= 0); assert(i <= m); assert(j <= n);
    // invariant: i + j = k-1
    // Note: A[-1] = -INF and A[m] = +INF to maintain invariant
    int Ai_1 = ((i == 0) ? INT_MIN : A[i-1]);
    int Bj_1 = ((j == 0) ? INT_MIN : B[j-1]);
    int Ai   = ((i == m) ? INT_MAX : A[i]);
    int Bj   = ((j == n) ? INT_MAX : B[j]);

    if (Bj_1 < Ai && Ai < Bj)
        return Ai;
    else if (Ai_1 < Bj && Bj < Ai)
        return Bj;

    assert((Ai > Bj && Ai_1 > Bj) ||
           (Ai < Bj && Ai < Bj_1));

    // if none of the cases above, then it is either:
    if (Ai < Bj)
        // exclude Ai and below portion
        // exclude Bj and above portion
        return findKthSmallest(A+i+1, m-i-1, B, j, k-i-1);
    else /* Bj < Ai */
        // exclude Ai and above portion
        // exclude Bj and below portion
        return findKthSmallest(A, i, B+j+1, n-j-1, k-j-1);
}
Time Complexity O(log m + log n), Space Complexity O(1)
Run Here http://ideone.com/SkaAI
Source http://www.ihas1337code.com/2011/01/find-k-th-smallest-element-in-union-of.html
Another Algorithm & Solution Given By My Friend Dhumanshu
Algorithm: you have two arrays a and b and a value k. The logic is to compare the k/2-th element of the first array with the k/2-th element of the second array, because the total number of elements under consideration is then k/2 + k/2 = k (the rank we have to find). You have to take care of the case when k is odd: in that case compare the k/2-th element of the first array with the (k/2 + 1)-th element of the second.
Now if a[k/2] > b[k/2] but a[k/2] < b[k/2 + 1], this means that if we sorted the first k/2 elements of both arrays together (k elements in total), a[k/2] would be the last one, i.e. our required answer. If that fails, check b against a in the same manner.
If that also fails, it means we have to shift the set of elements in consideration (earlier we took k/2 of each): if a[k/2] > b[k/2] and a[k/2] is also greater than b[k/2 + 1], we have to look at the left side of array a and the right side of array b,
so call recursively with array a between (0, k/2 - 1) and array b between (k/2 + 1, b.length).
If the above fails, check b against a vice versa.
This is the algorithm behind it, but you have to take care of special cases: if all of one array's elements fall out of the set, you are left with one array, so do a simple lookup on that leftover array (the ssearch routine below) to find the k-th element.
Working Code
#include <stdio.h>

/* plain lookup: the k-th element of the single remaining array a[l..h] */
int ssearch(int *a, int l, int h, int k)
{
    if (l + k - 1 > h)
        return -1;
    else
        return a[l + k - 1];
}

int kthlargest(int *a, int *b, int la, int ra, int lb, int rb, int k)
{
    // get optimum mida and midb: the two prefixes a[la..mida] and b[lb..midb]
    // together cover exactly k elements
    int mida = la + k/2 - 1, midb = lb + k/2 - 1 + k%2;
    if (midb > rb)                  // b is too short, shift the surplus to a
    {
        mida += midb - rb;
        midb = rb;
    }
    else if (mida > ra)             // a is too short, shift the surplus to b
    {
        midb += mida - ra;
        mida = ra;
    }
    // check extremes in case one array expires
    if (mida > ra || mida < la)
        return ssearch(b, lb, rb, k - (ra - la + 1));
    if (midb > rb || midb < lb)
        return ssearch(a, la, ra, k - (rb - lb + 1));
    if (mida == ra && midb == rb)   // both arrays fully covered: k equals the total count
        return a[mida] >= b[midb] ? a[mida] : b[midb];
    // either way
    if (b[midb] >= a[mida])
    {
        if (mida == ra || a[mida+1] >= b[midb])
            return b[midb];
        else
            return kthlargest(a, b, mida+1, ra, lb, midb-1, k-mida-1+la);
    }
    else
    {
        if (midb == rb || a[mida] <= b[midb+1])
            return a[mida];
        else
            return kthlargest(a, b, la, mida-1, midb+1, rb, k-midb-1+lb);
    }
}

int main()
{
    int a[] = {4, 8, 12, 18, 25, 33, 56};
    int b[] = {1, 2, 3, 6, 17, 18, 25, 26, 32, 89};
    int total = sizeof(a)/sizeof(int) + sizeof(b)/sizeof(int);
    int k, i;
    for (i = 0; i < total; i++)
    {
        k = i + 1;
        if (k == 1)     // k = 1 is handled directly: the smaller of the two heads
            printf("k th smallest element is %d\n", a[0] > b[0] ? b[0] : a[0]);
        else
            printf("k th smallest element is %d\n",
                   kthlargest(a, b, 0, sizeof(a)/sizeof(int)-1,
                                    0, sizeof(b)/sizeof(int)-1, k));
    }
    return 0;
}
Labels:Data
Google Interview
Friday, June 24, 2011
You are given 2 number streams. You need to find whether they will create the same BST or not.
Example:
Array1:10 5 20 15 30
Array2:10 20 15 30 5
Result: True
Array1:10 5 20 15 30
Array2:10 15 30 20 5
Result: False (see corresponding trees below)
1st Approach (Basic Solution)
Algorithm.
1. Create a BST from each array: O(n log n).
2. Check whether the two BSTs are identical by comparing the nodes at corresponding positions.
Time Complexity O(n log n)
Space Complexity O(n) for the two BSTs
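A minimal sketch of this 1st approach (my own code, using plain unbalanced BST insertion): build a BST from each stream in the given order, then compare the two trees node by node.
struct node { int data; node *left, *right; };
// Insert 'key' into a BST with the usual unbalanced insertion.
node* bstInsert(node *root, int key) {
    if (!root) return new node{key, nullptr, nullptr};
    if (key < root->data) root->left = bstInsert(root->left, key);
    else root->right = bstInsert(root->right, key);
    return root;
}
// Two BSTs are the same iff they match node by node.
bool sameTree(node *a, node *b) {
    if (!a || !b) return a == b;
    return a->data == b->data && sameTree(a->left, b->left) && sameTree(a->right, b->right);
}
// Usage: build one BST per stream with bstInsert in the given order, then call sameTree on the roots.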
2nd Approach
Labels:Data
Amazon Interview
,
Google Interview
Given a row of notes (with specified values), two players play a game. At each turn, a player can pick a note from either of the two ends. How should the first player maximize his score? Both players play optimally.
Problem Statement
Kingdom of Maplewood is a beautiful country comprising a lot of small islands of different areas. All the islands are in a straight row. King Rosewood is getting old and has decided to divide the islands among his two sons, Eric and Finn. Luckily, the total number of islands is even. He has also decided a few rules for the division of islands:
i) Eric and Finn will be given alternate turns to choose the islands
ii) They can only choose one island at a time from either the beginning or the end of the row of islands.
iii) Once an island is chosen by someone, it cannot be chosen by the other person.
Suppose you are Eric and you are given the first choice. Find out the maximum area you are sure you can pick.
Detailed Analysis
So basically: there are n coins in a line (assume n is even). Two players take turns taking a coin from one of the ends of the line until there are no coins left. The player with the larger amount of money wins.
1. Would you rather go first or second? Does it matter?
2. Assume that you go first, describe an algorithm to compute the maximum
amount of money you can win.
1st Approach (whether n is even or odd)
If we sort all the coins (think of the values as an array of denominations), then it doesn't matter whether n is odd or even: if I pick first, I will definitely win, because I will always take the current first or last position depending on the sorted order.
Although this algorithm makes sure that we win and collect a large sum, it still does not give us the optimal total; if you don't believe it, see below.
Time Complexity O(n log n), so not efficient. Can we do better?
2nd Approach (Greedy Approach )
1. Count the sum of all coins that are odd-numbered. (Call this A)
2. Count the sum of all coins that are even-numbered. (Call this B)
3. If A > B, take the left-most coin first. Choose all odd-numbered coins in
subsequent moves.
4. If A < B, take the right-most coin first. Choose all even-numbered coins in subsequent moves.
5. If A == B, you are guaranteed at least a tie if you stick to taking only even-numbered (or only odd-numbered) coins.
Let's run this on 3 2 2 3 1 2: the odd-numbered coins sum to A = 3+2+1 = 6 and the even-numbered coins sum to B = 2+3+2 = 7. B > A, so we take the right-most coin first, and the maximum money we can guarantee this way is 7 units.
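A short sketch of this greedy guarantee (my own code): sum the coins at odd and even positions and keep whichever alternating set is larger.
#include <vector>
#include <algorithm>
// Guaranteed winnings of the greedy strategy: the larger of the two alternating sums.
// (Taking the left-most coin first keeps all odd-numbered positions reachable, and vice versa.)
int greedyGuarantee(const std::vector<int> &coins) {
    int oddSum = 0, evenSum = 0;
    for (std::size_t i = 0; i < coins.size(); ++i) {
        if (i % 2 == 0) oddSum += coins[i];   // 1st, 3rd, 5th, ... coin
        else evenSum += coins[i];             // 2nd, 4th, 6th, ... coin
    }
    return std::max(oddSum, evenSum);
}
// greedyGuarantee({3, 2, 2, 3, 1, 2}) returns 7, matching the run above.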
Although we reduced the time from O(n log n) to O(n), we are still not guaranteed the optimal solution, because there are test cases for which the above algorithm fails: we still win, but we don't get the maximum possible money.
Take a counter test case: 3 2 2 3 1 2.
A picks the left 3. B has two choices, the left and the right 2; no matter which he picks, A will pick a 2 next. Then 2 3 1 is left in the array, so B will pick the 2 (his best option), then A will pick the 3, giving A a total of 3 + 2 + 3 = 8, which is more money than we got with the greedy solution.
So what do we do? We need an algorithm that is efficient in time and space and can run on big data sets as well... it sounds like DP. But how do we come up with the recursive solution, and where do the overlapping subproblems occur?
Let me explain. Say we have an array denoted by A1...Ai...Aj...An. Since we always pick from one of the ends, after some moves we end up somewhere in the middle; say the remaining range is Ai...Aj, because everything before Ai and everything after Aj has already been taken.
So the remaining coins are { Ai ... Aj } and it is your turn. Let P(i, j) denote the maximum amount of money you can get from this range. The question is: should you choose Ai or Aj?
We have two options each time. Say we choose Ai first; then A(i+1)...Aj remains for the opponent, and since he is as smart as you are, he will also play to maximize his money, so the maximum amount he can get is P(i+1, j).
Let Sum(i..j) denote the total value of the coins Ai...Aj (assume it has already been computed).
So when you choose Ai, the maximum amount you can get is, say, p1:
p1 = Sum(i..j) - P(i+1, j)
And when you choose Aj, the opponent's maximum is P(i, j-1), so the maximum amount you can get is, say, p2:
p2 = Sum(i..j) - P(i, j-1)
The optimal solution follows: as we said earlier, the maximum amount of money we can get when the remaining range is Ai...Aj is denoted P(i, j):
P(i, j) = max { P1, P2 }
= max { Sum{Ai ... Aj} - P(i+1, j),
Sum{Ai ... Aj} - P(i, j-1)}
or we write
P(i, j) = Sum{Ai ... Aj} - min { P(i+1, j), P(i, j-1) }
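As a hedged sketch of this sum-based recurrence (my own code, not from the original post), the table can be filled bottom-up over increasing range lengths, with a prefix-sum array making Sum{Ai ... Aj} an O(1) lookup:
#include <vector>
#include <algorithm>
// P(i, j) = Sum(i..j) - min(P(i+1, j), P(i, j-1)), computed bottom-up over range lengths.
int maxMoneySum(const std::vector<int> &A) {
    int n = (int)A.size();
    std::vector<int> prefix(n + 1, 0);
    for (int i = 0; i < n; ++i) prefix[i + 1] = prefix[i] + A[i];
    auto sum = [&](int i, int j) { return prefix[j + 1] - prefix[i]; };  // sum of A[i..j]

    std::vector<std::vector<int>> P(n, std::vector<int>(n, 0));
    for (int i = 0; i < n; ++i) P[i][i] = A[i];           // one coin left: take it
    for (int len = 2; len <= n; ++len)
        for (int i = 0; i + len - 1 < n; ++i) {
            int j = i + len - 1;
            P[i][j] = sum(i, j) - std::min(P[i + 1][j], P[i][j - 1]);
        }
    return P[0][n - 1];
}
// maxMoneySum({3, 2, 2, 3, 1, 2}) returns 8, matching the counter example above.
The "Most Efficient Solution" below avoids storing the sums entirely.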
Most Efficient Solution
There is another solution which does not rely on computing and storing results of Sum{Ai … Aj}, therefore is more efficient in terms of time and space. Let us rewind back to the case where you take Ai, and the remaining coins become { Ai+1 … Aj }.
You took Ai from the coins { Ai … Aj }. The opponent will choose either Ai+1 or Aj. Which one would he choose?
Let us look one extra step ahead this time by considering the two coins the opponent will possibly take, Ai+1 and Aj. If the opponent takes Ai+1, the remaining coins are { Ai+2 … Aj }, which our maximum is denoted by P(i+2, j). On the other hand, if the opponent takes Aj, our maximum is P(i+1, j-1). Since the opponent is as smart as you, he would have chosen the choice that yields the minimum amount to you.
Therefore, the maximum amount you can get when you choose Ai is:
P1 = Ai + min { P(i+2, j), P(i+1, j-1) }
Similarly, the maximum amount you can get when you choose Aj is:
P2 = Aj + min { P(i+1, j-1), P(i, j-2) }
Therefore,
P(i, j) = max { P1, P2 }
= max { Ai + min { P(i+2, j), P(i+1, j-1) },
Aj + min { P(i+1, j-1), P(i, j-2) } }
Although the above recurrence relation could be implemented in a few lines of code, its complexity is exponential. The reason is that each recursive call branches into a total of four separate recursive calls, and it could be n levels deep from the very first call. Memoization provides an efficient way to avoid re-computation by storing intermediate results in a table. Below is the code, which runs in O(n^2) time and takes O(n^2) space.
Working Code:
#include <stdio.h>
#define MAX(A,B) ((A>=B)? A: B)
#define MIN(A,B) ((A<=B)? A: B)

int coin_input[6] = {3, 2, 2, 3, 1, 2};  /* the row of coins from the example above */
int P2[8][8];                            /* memo table: P2[i][j] = best total for range i..j */
int counter = 0;                         /* number of ranges actually computed */

int maxMoney2(int A[], int P2[][8], int i, int j)
{
    if ((i > j) || (i > 7) || (j > 7))
        return 0;
    if (P2[i][j] == 0) {
        counter++;
        P2[i][j] = MAX(A[i] + MIN(maxMoney2(A, P2, i+2, j), maxMoney2(A, P2, i+1, j-1)),
                       A[j] + MIN(maxMoney2(A, P2, i+1, j-1), maxMoney2(A, P2, i, j-2)));
    }
    return P2[i][j];
}
int main()
{
int value;
value = maxMoney2(coin_input, P2, 0, 5);
printf("The max money is %d, total calculation: %d\r\n", value, counter);
}
Time Complexity O(N^2)
Space Complexity O(N^2)
Run Here https://ideone.com/l3E3x
Labels:Data
DirectI Interview
,
Facebook Interview
,
Google Interview
Partition of Array Problem-NP Complete
The partition problem is an NP-complete problem. The problem is to decide whether a given multiset of integers can be partitioned into two "halves" that have the same sum. More precisely, given a multiset S of integers, is there a way to partition S into two subsets S1 and S2 such that the sum of the numbers in S1 equals the sum of the numbers in S2? The subsets S1 and S2 must form a partition in the sense that they are disjoint and they cover S. The optimization version asks for the "best" partition, and can be stated as: find a partition into two subsets S1, S2 such that max(sum(S1), sum(S2)) is minimized (sometimes with the additional constraint that the sizes of the two sets in the partition must be equal, or differ by at most 1).
In Progress
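While this entry is in progress, here is a minimal sketch of the standard pseudo-polynomial dynamic program for the decision version (my own code, assuming nonnegative integers for simplicity): the multiset splits into two equal-sum halves iff some subset reaches half of the total.
#include <vector>
#include <numeric>
// Decision version: can 'nums' be split into two subsets with equal sums?
// Runs in O(n * total) time, which is pseudo-polynomial (consistent with NP-completeness).
bool canPartition(const std::vector<int> &nums) {
    int total = std::accumulate(nums.begin(), nums.end(), 0);
    if (total % 2 != 0) return false;
    int target = total / 2;
    std::vector<bool> reachable(target + 1, false);
    reachable[0] = true;                       // the empty subset sums to 0
    for (int x : nums)
        for (int s = target; s >= x; --s)      // iterate downward so each number is used once
            if (reachable[s - x]) reachable[s] = true;
    return reachable[target];
}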
Labels:Data
Facebook Interview
,
Google Interview
,
Yahoo Interview
Sum Of SubSet problem- NP Complete Problem
the subset sum problem is an important problem in complexity theory and cryptography. The problem is this: given a set of integers, is there a non-empty subset whose sum is exactly zero? For example, given the set { −7, −3, −2, 5, 8}, the answer is yes because the subset { −3, −2, 5} sums to zero. The problem is NP-complete.
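As a concrete illustration of the stated decision problem (my own code, in the spirit of the backtracking references below): try including or excluding each element and check whether some non-empty subset sums to zero.
#include <vector>
// Is there a non-empty subset of nums[idx..] that, together with 'current', sums to zero?
// 'taken' records whether at least one element has been included so far.
bool subsetSumZero(const std::vector<int> &nums, int idx, long long current, bool taken) {
    if (taken && current == 0) return true;
    if (idx == (int)nums.size()) return false;
    // branch 1: include nums[idx]; branch 2: exclude it (classic backtracking tree)
    return subsetSumZero(nums, idx + 1, current + nums[idx], true) ||
           subsetSumZero(nums, idx + 1, current, taken);
}
// subsetSumZero({-7, -3, -2, 5, 8}, 0, 0, false) returns true because {-3, -2, 5} sums to zero.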
A Simple Example
int C(int n,int k)
{
if (k==0 || k==n)
return 1;
return C(n-1,k-1) + C(n-1,k);
}
www.cs.binghamton.edu/~dima/cs333/backtracking.ppt
www.cs.umsl.edu/~sanjiv/classes/cs5130/lectures/bt.pdf
www.mgt.ncu.edu.tw/~ylchen/algorithm/chapter5.doc
max.cs.kzoo.edu/cs215/lectures/w5-graph-coloring.pdf
www.slidefinder.net/b/backtracking_sum_subsets_knapsack/7064040
www.cs.utep.edu/ofuentes/cs2402/backtrackingStack.doc
Labels:Data
BackTracking
,
Facebook Interview
,
Google Interview
Bin Packing Problem - One of My Favourite Problems (NP-Hard)
n objects are to be placed in bins of capacity L each. Determine the minimum number of bins needed to accommodate all n objects.
Given: n objects to be placed in bins of capacity L each.
Object i requires li units of bin capacity.
Objective: determine the minimum number of bins needed to
accommodate all n objects.
e.g. Let L = 10
l1=5  l4=7
l2=6  l5=5
l3=3  l6=4
Data Structure Used:Array
Theorem
Bin packing problem is NP complete when formulated as a decision problem.
As an optimization problem bin packing is NP-hard
Approximation Algorithm for Bin Packing:
1. First Fit (FF)
- Label bins as 1, 2, 3, . . .
- Objects are considered for packing in the order 1, 2, 3, . . .
- Pack object i in bin j where j is the least index such that
bin j can contain object i.
2. Best Fit (BF)
Same as FF, except that when object i is to be packed, find out
that bin which after accommodating object i will have the least
amount of space left.
3. First Fit Decreasing (FFD)
reorder objects so that
li >= li+1 for 1 <= i < n, then use FF.
4. Best Fit Decreasing (BFD)
Reorder objects as above and then use BF.
Theorem: the packing generated by either FF or BF uses no more than 17/10 OPT + 2 bins; that generated by either FFD or BFD uses no more than 11/9 OPT + 4 bins.
First Fit is a very straightforward greedy approximation algorithm. The algorithm processes the items in arbitrary order. For each item, it attempts to place the item in the first bin that can accommodate it. If no bin is found, it opens a new bin and puts the item within the new bin. It is rather simple to show this algorithm achieves an approximation factor of 2, thanks to the observation that at any given time it is impossible for two bins to be at most half full. The reason is that if at some point a bin were at most half full, meaning it has at least a space of V/2, the algorithm would not open a new bin for any item whose size is at most V/2. Only after the bin fills to more than V/2, or if an item with a size larger than V/2 arrives, may the algorithm open a new bin. Thus if we have B bins, at least B - 1 bins are more than half full, so sum(a_i, i=1..n) > (B-1)/2 * V. Because sum(a_i)/V is a lower bound on the optimum value OPT, we get B - 1 < 2*OPT and therefore B <= 2*OPT. [1]
float[] used = new float[n + 1]; // used[j] is the amount of space in bin j already used up
int i, j;
// initialize all used entries to 0.0
// sort S into descending (nonincreasing) order, giving the sequence S1 >= S2 >= ... >= Sn
for (i = 1; i <= n; i++) {
    // Look for a bin in which s[i] fits (sizes scaled so the bin capacity is 1.0).
    for (j = 1; j <= n; j++) {
        if (used[j] + s[i] <= 1.0) {
            bin[i] = j;
            used[j] += s[i];
            break; // exit for(j)
        }
    }
    // continue for(i)
}
Time Complexity O(n log n) for the sort (the simple placement loop shown above is O(n^2) in the worst case)
Space Complexity O(1)
Auxiliary Space O(n)
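For reference, here is a small self-contained first-fit-decreasing sketch in C++ along the lines of the pseudocode above; the function name, the growable bin list, and the explicit capacity parameter are my own choices, not part of the original write-up.
#include <vector>
#include <algorithm>
#include <functional>
// First Fit Decreasing: sort sizes in nonincreasing order, then place each item
// in the first bin that still has room, opening a new bin when none fits.
int firstFitDecreasing(std::vector<double> items, double capacity) {
    std::sort(items.begin(), items.end(), std::greater<double>());
    std::vector<double> used;                  // used[j] = space already taken in bin j
    for (double item : items) {
        bool placed = false;
        for (double &u : used)
            if (u + item <= capacity) { u += item; placed = true; break; }
        if (!placed) used.push_back(item);     // open a new bin
    }
    return (int)used.size();
}
// firstFitDecreasing({5, 6, 3, 7, 5, 4}, 10) returns 3 for the example above (L = 10).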
Run Here
Source:
A guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson
http://en.wikipedia.org/wiki/Bin_packing_problem
Labels:Data
Facebook Interview
,
Google Interview
WAP to Find the Next Higher Number of a Given Number, Containing the Same Number of Set Bits
# Problem
* Given a number m find the next higher number r , that has same number of 1-bits.
* Ex : 3 (0000011) => 5(0000101)
* 6(0000110) => 9(0001001)
* 11(0001011) => 13(0001101)
* 23(0010111) => 27(0011011)
* 24(0011000) => 33(0100001)
* 44(0101100) => 49(0110001)
* 46(0101110) => 51(00110011)
1st Thinking(Algorithm)
1. Count the number of set bits in the given number n.
2. Starting from n+1, loop upward and count the set bits of each number until we find one with the same number of set bits, then stop.
The problem with this algorithm is that it can take too much time for big numbers:
O(K), where K is the number of candidates we have to examine until we find a number greater than n with the same number of set bits.
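A direct implementation of this brute-force idea might look like the sketch below (my own code); it simply walks upward from n+1, recounting set bits each time.
#include <cstdint>
// Count set bits (Kernighan's trick: each iteration clears the lowest set bit).
int popcount(uint32_t x) {
    int c = 0;
    while (x) { x &= x - 1; ++c; }
    return c;
}
// Brute force: scan upward until a number with the same popcount appears.
uint32_t nextSamePopcountBrute(uint32_t n) {
    int target = popcount(n);
    uint32_t m = n + 1;
    while (popcount(m) != target) ++m;
    return m;
}
// nextSamePopcountBrute(6) == 9 and nextSamePopcountBrute(23) == 27, as in the table below.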
2nd Solution (Most Optimized)
# Observations I
* Look at the input and the outputs again and see if you can make some algorithm out of it
* 3 (0000011) => 5(0000101)
* 6(0000110) => 9(0001001)
* 11(0001011) => 13(0001101)
* 23(0010111) => 27(0011011)
* 24(0011000) => 33(0100001)
* 44(0101100) => 49(0110001)
* 46(0101110) => 51(00110011)
# Observations II
* Hint : Now concentrate on the highlighted parts of input
* 3 (0000 011 ) => 5(0000 101 )
* 6(000 0110 ) => 9(000 1001 )
* 11(0001 011 ) => 13(0001 101 )
* 23(001 0111 ) => 27(001 1011 )
* 24(0 011000 ) => 33(0 100001 )
* 44(01 01100 ) => 49(01 10001 )
* 46(01 01110 ) => 51(01 10011 )
# Observations III
* As you can see,
o the non-highlighted part is same in i/p and o/p as well
o And the highlighted part is consecutive 1’s from the least-significant side (right hand side)
* 3 (0000 011 ) => 5(0000 101 )
* 6(000 0110 ) => 9(000 1001 )
* 11(0001 011 ) => 13(0001 101 )
* 23(001 0111 ) => 27(001 1011 )
* 24(0 011000 ) => 33(0 100001 )
* 44(01 01100 ) => 49(01 10001 )
* 46(01 01110 ) => 51(01 10011 )
# Observations IV
* As you can see, the non-highlighted part is same in i/p and o/p as well
* 3 (0000 011 ) => 5(0000 101 )
* 6(000 0110 ) => 9(000 1001 )
* 11(0001 011 ) => 13(0001 101 )
* 23(001 0111 ) => 27(001 1011 )
* 24(0 011000 ) => 33(0 100001 )
* 44(01 01100 ) => 49(01 10001 )
* 46(01 01110 ) => 51(01 10011 )
# Observations V
* Now lets just look at what changed
* 011 => 101
* 0110 => 1001
* 011 => 101
* 0111 => 1011
* 011000 => 100001
* 01100 => 10001
* 01110 => 10011
* Do you see a pattern?
# Observations VI
* Yes, as you have rightly observed, left hand side is :
o A 0 followed by
o One or more 1’s (say x) followed by
o Zero or more 0’s (say y)
* Is changed to
o A 1 followed by
o (y+1) zeroes followed by
o (x-1) 1’s
* 0 11 => 1 0 1
* 0 11 000 => 1 0 000 1
Algorithm
# Now let’s frame the algorithm
* Given a bit-pattern, start from right, find successive zeroes (xxxx01111 0000 )
* Followed by zeroes find successive 1’s (xxxx0 1111 0000 )
* Stop on hitting a zero (xxxx 0 1111 0000 )
* Interchange that zero with a 1 from successive 1’s (xxxx 1 0 111 0000 )
* Now move the remaining 1’s to extreme right, filling the gap with zeroes (xxxx 1 0 0000 111 )
# Doing it programmatically in C
* unsigned snoob(unsigned x) {
o unsigned smallest, ripple, ones;
o // x = xxx0 1111 0000
o smallest = x & -x; // 0000 0001 0000
o ripple = x + smallest; // xxx1 0000 0000
o ones = x ^ ripple; // 0001 1111 0000
o ones = (ones >> 2)/smallest; // 0000 0000 0111
o return ripple | ones; // xxx1 0000 0111
* }
Working Code:
#include <iostream>
using namespace std;
typedef unsigned int uint_t;
// this function returns next higher number with same number of set bits as x.
uint_t snoob(uint_t x)
{
uint_t rightOne;
uint_t nextHigherOneBit;
uint_t rightOnesPattern;
uint_t next = 0;
if(x)
{
// right most set bit
rightOne = x & -(signed)x;
// reset the pattern and set next higher bit
// left part of x will be here
nextHigherOneBit = x + rightOne;
// nextHigherOneBit is now part [D] of the above explanation.
// isolate the pattern
rightOnesPattern = x ^ nextHigherOneBit;
// right adjust pattern
rightOnesPattern = (rightOnesPattern)/rightOne;
// correction factor
rightOnesPattern >>= 2;
// rightOnesPattern is now part [A] of the above explanation.
// integrate new pattern (Add [D] and [A])
next = nextHigherOneBit | rightOnesPattern;
}
return next;
}
int main()
{
int x = 156;
cout<<"Next higher number with same number of set bits is "<<snoob(x)<<endl;
getchar();
return 0;
}
Time Complexity O(1)
Space Complexity O(1)
Run Here https://ideone.com/8R2N2
Source http://www.gowrikumar.com
Labels:Data
Facebook Interview
,
Google Interview
Implement a data structure SetOfStacks that mimics below problem.
Imagine a (literal) stack of plates. If the stack gets too high, it might topple. Therefore,
in real life, we would likely start a new stack when the previous stack exceeds some threshold. Implement a data structure SetOfStacks that mimics this. SetOfStacks
should be composed of several stacks, and should create a new stack once the previous one exceeds capacity. SetOfStacks.push() and SetOfStacks.pop() should behave identically to a single stack (that is, pop() should return the same values as it would if there were just a single stack).
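A minimal C++ sketch of such a SetOfStacks (my own illustration; the capacity threshold is a constructor parameter):
#include <vector>
#include <stack>
#include <stdexcept>
class SetOfStacks {
    std::vector<std::stack<int>> stacks;   // the set of sub-stacks
    std::size_t capacity;                  // threshold before a new stack is started
public:
    explicit SetOfStacks(std::size_t cap) : capacity(cap) {}
    void push(int value) {
        if (stacks.empty() || stacks.back().size() == capacity)
            stacks.push_back(std::stack<int>());       // previous stack is full: start a new one
        stacks.back().push(value);
    }
    int pop() {
        if (stacks.empty()) throw std::out_of_range("pop on empty SetOfStacks");
        int value = stacks.back().top();
        stacks.back().pop();
        if (stacks.back().empty()) stacks.pop_back();  // discard the exhausted sub-stack
        return value;                                  // behaves like a single stack's pop()
    }
};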
Labels:Data
FlipKart Interview