
Friday, August 19, 2011

A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 * 99. Find the largest palindrome made from the product of two 3-digit numbers.

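A brute-force sketch in C (my own illustration, not part of the original post): scan every pair of 3-digit factors and keep the largest product that reads the same both ways.

#include <stdio.h>

/* Return 1 if n reads the same forwards and backwards. */
static int isPalindrome(int n)
{
    int rev = 0, orig = n;
    while (n > 0) {
        rev = rev * 10 + n % 10;
        n /= 10;
    }
    return rev == orig;
}

int main(void)
{
    int best = 0;
    for (int i = 100; i <= 999; i++)
        for (int j = i; j <= 999; j++)   /* j >= i avoids checking each pair twice */
            if (isPalindrome(i * j) && i * j > best)
                best = i * j;
    printf("%d\n", best);                /* prints 906609 = 913 * 993 */
    return 0;
}

The full scan over all pairs of 3-digit factors is only about 400,000 products, so this runs instantly.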

Saturday, August 6, 2011

Coin Denomination Problem

Let us say we have an amount we have to pay someone: say $97. The given currency has notes worth $1, $5, $10 and $50. Now, what is the minimum number of currency notes with which the amount can be made? In this case, it's eight: two $1 notes, a $5 note, four $10 notes and a $50 note, eight in all.

If a computer had to solve the same problem, how would it do it? Let us see below. ‘Denomination’ refers to the list of available currency notes: $1, $5, $10 and $50 in this case - each note is a denomination.

In General

Problem: We are given a set of denominations d1 < d2 < ... < dn in increasing order. Without loss of generality, we also assume that d1 = 1 (that is, there is always a $1 note, and it is the first denomination), so that it is always possible to produce every amount from the given denominations. We are also given an amount of money amt. We must use a minimum number of coins (or currency notes) of the given denominations to produce amt. A dynamic-programming sketch appears after the link below.

Great Info http://www.seeingwithc.org/topic1html.html
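A note on method: the greedy strategy (always take the largest note that fits) happens to work for the $1/$5/$10/$50 set above, but it can fail for arbitrary denominations (for {1, 3, 4} and amount 6, greedy gives 4+1+1, three notes, where 3+3, two notes, suffices), so the general solution is dynamic programming. Below is a minimal DP sketch in C; it is my own illustration, and the name minNotes is hypothetical, not from the linked page.

#include <stdio.h>
#include <limits.h>

/* minCoins[a] = fewest notes that produce amount a.
 * Assumes d[0] = 1 so every amount is reachable. */
int minNotes(const int d[], int n, int amt)
{
    int minCoins[amt + 1];
    minCoins[0] = 0;
    for (int a = 1; a <= amt; a++) {
        minCoins[a] = INT_MAX;
        for (int i = 0; i < n; i++)
            if (d[i] <= a && minCoins[a - d[i]] != INT_MAX
                && minCoins[a - d[i]] + 1 < minCoins[a])
                minCoins[a] = minCoins[a - d[i]] + 1;
    }
    return minCoins[amt];
}

int main(void)
{
    int d[] = {1, 5, 10, 50};
    printf("%d\n", minNotes(d, 4, 97));   /* prints 8, matching the example above */
    return 0;
}

The table costs O(amt * n) time and O(amt) space; each entry extends some smaller, already-solved amount by exactly one note.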

Saturday, July 16, 2011

How to Count the Number of Set Bits (the Number of 1s) Efficiently?

This question is asked in almost all core software companies, and most of us know the logarithmic solution to this problem, but there also exists a constant-time solution using bit manipulation. See the 2nd method below for solving the same problem in O(1).

1st: Brian Kernighan's Algorithm
Subtracting 1 from a number toggles all the bits (from right to left) up to and including the rightmost set bit. So if we AND a number with itself minus one, n & (n-1), we unset the rightmost set bit. If we do n = n & (n-1) in a loop and count the number of times the loop executes, we get the set-bit count.
The beauty of this solution is that the number of times it loops is equal to the number of set bits in the given integer.

1. Initialize count := 0
2. If integer n is not zero:
   (a) Do bitwise & with (n-1) and assign the value back to n: n := n & (n-1)
   (b) Increment count by 1
   (c) Go to step 2
3. Else return count

#include <stdio.h>

/* Function to get the number of set bits in the binary
   representation of the passed integer. */
int countSetBits(int n)
{
    unsigned int count = 0;
    while (n)
    {
        n &= (n - 1);   /* clear the rightmost set bit */
        count++;
    }
    return count;
}

/* Program to test function countSetBits */
int main()
{
    int i = 16;
    printf("%d", countSetBits(i));
    getchar();
    return 0;
}

Time Complexity O(log n) (the loop runs once per set bit, so at most log2(n)+1 iterations)
Space Complexity O(1)
Run Here https://ideone.com/2y2KJ

2nd Method: the most efficient one

Assuming that the integer is 32 bits, this is pretty good:

x = (x & 0x55555555) + ((x >> 1) & 0x55555555);
x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
x = (x & 0x0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F);
x = (x & 0x00FF00FF) + ((x >> 8) & 0x00FF00FF);
x = (x & 0x0000FFFF) + ((x >> 16) & 0x0000FFFF);

where
0x55555555 = 01010101 01010101 01010101 01010101
0x33333333 = 00110011 00110011 00110011 00110011
0x0F0F0F0F = 00001111 00001111 00001111 00001111
0x00FF00FF = 00000000 11111111 00000000 11111111
0x0000FFFF = 00000000 00000000 11111111 11111111


Notice that the hex constants are respectively alternate bits,
alternate pairs of bits, alternate groups of four bits, alternate
bytes, and the low-order half of the int.

The first statement determines the number of one-bits in each pair of
bits. The second statement adds adjacent pairs of bits to get the
number of bits in each group of four bits. Then these are added to get
the number of bits in each byte, short int, and finally in the whole
int.

But how does it work at the bit level?

Suppose that the first four bits of x from the left are abcd. Let's separate the bits into pairs with a comma: ab,cd. The first four bits of the hex constant 0x55... are 0101, or separated into pairs: 01,01. The logical product of x with this constant is 0b,0d. The first four bits of x>>1 are 0abc, or separated into pairs: 0a,bc. The logical product of this with the constant is 0a,0c. The sum 0b,0d + 0a,0c is a+b,c+d, where a+b = 00, 01, or 10, and c+d = 00, 01, or 10. Thus we have replaced each pair of bits in x with the sum of the two bits originally in the pair.

The next statement uses the constant 0x333.... The first four bits of this are 0011, split into pairs as 00,11. The logical product of the first four bits of x with this constant gives 00,c+d. Furthermore (a+b,c+d)>>2 = 00,a+b. Then 00,c+d + 00,a+b gives the four-bit quantity a+b+c+d, i.e., the number of one bits set in the first four bits of the original x.

The remaining statements continue in the same way, doubling the number of bits included in each sum at every step.

#include <iostream>
using namespace std;

int main()
{
    int x = 0x00000016;   /* binary 10110, which has 3 set bits */

    x = (x & 0x55555555) + ((x >> 1) & 0x55555555);
    cout << x << endl;
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
    cout << x << endl;
    x = (x & 0x0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F);
    cout << x << endl;
    x = (x & 0x00FF00FF) + ((x >> 8) & 0x00FF00FF);
    cout << x << endl;
    x = (x & 0x0000FFFF) + ((x >> 16) & 0x0000FFFF);
    cout << x << endl;    /* final value: 3 */

    return 0;
}

Time Complexity O(1)
Space Complexity O(1)
Run Here https://ideone.com/uDKGK
More Info http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive

Saturday, April 30, 2011

How to find if a node/pointer is corrupted in a linked list

How would you find out if one of the pointers in a linked list is corrupted or not?
This is a really good interview question. The reason is that linked lists are used in a wide variety of scenarios and being able to detect and correct pointer corruptions might be a very valuable tool. For example, data blocks associated with files in a file system are usually stored as linked lists. Each data block points to the next data block. A single corrupt pointer can cause the entire file to be lost!

1. Discover & Fix Bugs Early
Discover and fix bugs when they corrupt the linked list, not when the effect becomes visible in some other part of the program. Perform frequent consistency checks (to see if the linked list is indeed holding the data that you inserted into it).

2. Set Pointer to NULL After Freeing Object
It is good programming practice to set the pointer value to NULL immediately after freeing the memory pointed at by the pointer. This helps in debugging, because it tells you that the object was freed somewhere beforehand. Keep track of how many objects are pointing to an object using reference counts if required. A tiny sketch of the idiom follows.
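A small C sketch of the idea (my own, with a hypothetical helper name): wrap free() so the pointer is always nulled afterwards, which turns a later use-after-free into an immediately visible NULL dereference.

#include <stdio.h>
#include <stdlib.h>

/* Free *pp and set it to NULL so any later dereference fails fast. */
#define FREE_AND_NULL(pp) do { free(*(pp)); *(pp) = NULL; } while (0)

int main(void)
{
    int *p = malloc(sizeof *p);
    FREE_AND_NULL(&p);
    if (p == NULL)
        puts("pointer cleared after free");
    return 0;
}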

3. Use Debugger Tools Like DDD, Purify
Use a good debugger to see how the data structures are getting corrupted and trace down the problem. Debuggers like DDD on Linux and memory profilers like Purify and Electric Fence are good starting points. These tools should help you track down heap corruption issues easily.

4. Avoid Global Variables
Avoid global variables when traversing and manipulating linked lists. Imagine what would happen if a function that is only supposed to traverse a linked list using a global head pointer accidentally sets the head pointer to NULL!

5. Check addNode() & deleteNode() for All Scenarios
It's a good idea to check the addNode() and deleteNode() routines and test them for all types of scenarios, including:

* inserting/deleting nodes at the front/middle/end of the linked list;
* working with an empty linked list;
* running out of memory when using malloc() to allocate new nodes;
* writing through NULL pointers;
* writing more data into the node fields than they can hold (which can corrupt the, probably adjacent, "prev" and "next" pointer fields);
* making sure bug fixes and enhancements to the linked list code are reviewed and well tested (a lot of bugs come from quick-and-dirty bug fixing);
* logging and handling all possible errors (this will help you a lot while debugging);
* adding multiple levels of logging so that you can dig through the logs.

The list is endless…

6. Keep Track of the Number of Nodes After Every Node
Each node can have an extra field associated with it, indicating the number of nodes after it in the linked list. This extra field needs to be kept up to date when we insert or delete nodes (it becomes slightly more complicated when insertion or deletion happens not at the end but anywhere in the linked list). Then, if for any node p->field > 0 and p->next == NULL, it surely indicates a pointer corruption.
You could also keep a count of the total number of nodes in the list and use it to check whether the list indeed has that many nodes. A sketch of the per-node check follows.
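A minimal sketch of this check in C (the struct layout and the nodesAfter field are my own illustration): walk the list and flag any node whose counter disagrees with its next pointer.

#include <stddef.h>

struct node {
    int data;
    int nodesAfter;          /* number of nodes after this one */
    struct node *next;
};

/* Returns 1 if a corruption is detected, 0 otherwise. */
int detectCorruption(const struct node *head)
{
    for (const struct node *p = head; p != NULL; p = p->next) {
        if (p->nodesAfter > 0 && p->next == NULL)
            return 1;        /* counter says more nodes follow, pointer disagrees */
        if (p->next != NULL && p->nodesAfter != p->next->nodesAfter + 1)
            return 1;        /* adjacent counters are inconsistent */
    }
    return 0;
}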

The problem with detecting such pointer corruptions in C is that it's only the programmer who knows the pointer is corrupted; the program has no way of knowing that something is wrong. So the best way to fix these errors is to check your logic and test your code to the maximum possible extent. I am not aware of ways in C to recover the lost nodes of a corrupted linked list. C does not track pointers, so there is no good way to know whether an arbitrary pointer has been corrupted or not. The platform may have a library service that checks whether a pointer points to valid memory (for instance, on Win32 there are the IsBadReadPtr and IsBadWritePtr APIs). If you detect a cycle in the linked list, it's definitely bad. If it's a doubly linked list, you can verify pNode->Next->Prev == pNode, as sketched below.
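A short C sketch of that last check (the struct layout is my own illustration): every forward link must be mirrored by the neighbour's back link, and vice versa.

#include <stddef.h>

struct dnode {
    int data;
    struct dnode *prev;
    struct dnode *next;
};

/* Returns 1 if every prev/next pair in the list is consistent. */
int isConsistent(const struct dnode *head)
{
    for (const struct dnode *p = head; p != NULL; p = p->next) {
        if (p->next != NULL && p->next->prev != p)
            return 0;   /* forward link not mirrored by a back link */
        if (p->prev != NULL && p->prev->next != p)
            return 0;   /* back link not mirrored by a forward link */
    }
    return 1;
}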

I have a hunch that interviewers who ask this question are probably hinting at something called Smart Pointers in C++. Smart pointers are particularly useful in the face of exceptions as they ensure proper destruction of dynamically allocated objects. They can also be used to keep track of dynamically allocated objects shared by multiple owners. This topic is out of scope here, but you can find lots of material on the Internet for Smart Pointers.

Thursday, March 10, 2011

WAP for External Sorting

/* external sort */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/****************************
* implementation dependent *
****************************/

/* template for workfiles (8.3 format) */
#define FNAME "_sort%03d.dat"
#define LNAME 13

/* comparison operators */
#define compLT(x,y) (x < y)
#define compGT(x,y) (x > y)

/* define the record to be sorted here */
#define LRECL 100
typedef int keyType;
typedef struct recTypeTag {
keyType key; /* sort key for record */
#if LRECL
char data[LRECL-sizeof(keyType)]; /* other fields */
#endif
} recType;

/******************************
* implementation independent *
******************************/

typedef enum {false, true} bool;

typedef struct tmpFileTag {
FILE *fp; /* file pointer */
char name[LNAME]; /* filename */
recType rec; /* last record read */
int dummy; /* number of dummy runs */
bool eof; /* end-of-file flag */
bool eor; /* end-of-run flag */
bool valid; /* true if rec is valid */
int fib; /* ideal fibonacci number */
} tmpFileType;

static tmpFileType **file; /* array of file info for tmp files */
static int nTmpFiles; /* number of tmp files */
static char *ifName; /* input filename */
static char *ofName; /* output filename */

static int level; /* level of runs */
static int nNodes; /* number of nodes for selection tree */

void deleteTmpFiles(void) {
int i;

/* delete merge files and free resources */
if (file) {
for (i = 0; i < nTmpFiles; i++) {
if (file[i]) {
if (file[i]->fp) fclose(file[i]->fp);
if (*file[i]->name) remove(file[i]->name);
free (file[i]);
}
}
free (file);
}
}

void termTmpFiles(int rc) {

/* cleanup files */
remove(ofName);
if (rc == 0) {
int fileT;

/* file[T] contains results */
fileT = nTmpFiles - 1;
fclose(file[fileT]->fp); file[fileT]->fp = NULL;
if (rename(file[fileT]->name, ofName)) {
perror("io1");
deleteTmpFiles();
exit(1);
}
*file[fileT]->name = 0;
}
deleteTmpFiles();
}

void cleanExit(int rc) {

/* cleanup tmp files and exit */
termTmpFiles(rc);
exit(rc);
}

void *safeMalloc(size_t size) {
void *p;

/* safely allocate memory and initialize to zero */
if ((p = calloc(1, size)) == NULL) {
printf("error: malloc failed, size = %d\n", size);
cleanExit(1);
}
return p;
}

void initTmpFiles(void) {
int i;
tmpFileType *fileInfo;

/* initialize merge files */
if (nTmpFiles < 3) nTmpFiles = 3;
file = safeMalloc(nTmpFiles * sizeof(tmpFileType*));
fileInfo = safeMalloc(nTmpFiles * sizeof(tmpFileType));
for (i = 0; i < nTmpFiles; i++) {
file[i] = fileInfo + i;
sprintf(file[i]->name, FNAME, i);
if ((file[i]->fp = fopen(file[i]->name, "w+b")) == NULL) {
perror("io2");
cleanExit(1);
}
}
}

recType *readRec(void) {

typedef struct iNodeTag { /* internal node */
struct iNodeTag *parent;/* parent of internal node */
struct eNodeTag *loser; /* external loser */
} iNodeType;

typedef struct eNodeTag { /* external node */
struct iNodeTag *parent;/* parent of external node */
recType rec; /* input record */
int run; /* run number */
bool valid; /* input record is valid */
} eNodeType;

typedef struct nodeTag {
iNodeType i; /* internal node */
eNodeType e; /* external node */
} nodeType;

static nodeType *node; /* array of selection tree nodes */
static eNodeType *win; /* new winner */
static FILE *ifp; /* input file */
static bool eof; /* true if end-of-file, input */
static int maxRun; /* maximum run number */
static int curRun; /* current run number */
iNodeType *p; /* pointer to internal nodes */
static bool lastKeyValid; /* true if lastKey is valid */
static keyType lastKey; /* last key written */

/* read next record using replacement selection */

/* check for first call */
if (node == NULL) {
int i;

if (nNodes < 2) nNodes = 2;
node = safeMalloc(nNodes * sizeof(nodeType));
for (i = 0; i < nNodes; i++) {
node[i].i.loser = &node[i].e;
node[i].i.parent = &node[i/2].i;
node[i].e.parent = &node[(nNodes + i)/2].i;
node[i].e.run = 0;
node[i].e.valid = false;
}
win = &node[0].e;
lastKeyValid = false;

if ((ifp = fopen(ifName, "rb")) == NULL) {
printf("error: file %s, unable to open\n", ifName);
cleanExit(1);
}
}

while (1) {

/* replace previous winner with new record */
if (!eof) {
if (fread(&win->rec, sizeof(recType), 1, ifp) == 1) {
if ((!lastKeyValid || compLT(win->rec.key, lastKey))
&& (++win->run > maxRun))
maxRun = win->run;
win->valid = true;
} else if (feof(ifp)) {
fclose(ifp);
eof = true;
win->valid = false;
win->run = maxRun + 1;
} else {
perror("io4");
cleanExit(1);
}
} else {
win->valid = false;
win->run = maxRun + 1;
}

/* adjust loser and winner pointers */
p = win->parent;
do {
bool swap;
swap = false;
if (p->loser->run < win->run) {
swap = true;
} else if (p->loser->run == win->run) {
if (p->loser->valid && win->valid) {
if (compLT(p->loser->rec.key, win->rec.key))
swap = true;
} else {
swap = true;
}
}
if (swap) {
/* p should be winner */
eNodeType *t;

t = p->loser;
p->loser = win;
win = t;
}
p = p->parent;
} while (p != &node[0].i);

/* end of run? */
if (win->run != curRun) {
/* win->run = curRun + 1 */
if (win->run > maxRun) {
/* end of output */
free(node);
return NULL;
}
curRun = win->run;
}

/* output top of tree */
if (win->run) {
lastKey = win->rec.key;
lastKeyValid = true;
return &win->rec;
}
}
}

void makeRuns(void) {
recType *win; /* winner */
int fileT; /* last file */
int fileP; /* next to last file */
int j; /* selects file[j] */


/* Make initial runs using replacement selection.
* Runs are written using a Fibonacci distribution.
*/

/* initialize file structures */
fileT = nTmpFiles - 1;
fileP = fileT - 1;
for (j = 0; j < fileT; j++) {
file[j]->fib = 1;
file[j]->dummy = 1;
}
file[fileT]->fib = 0;
file[fileT]->dummy = 0;

level = 1;
j = 0;


win = readRec();
while (win) {
bool anyrun;

anyrun = false;
for (j = 0; win && j <= fileP; j++) {
bool run;

run = false;
if (file[j]->valid) {
if (!compLT(win->key, file[j]->rec.key)) {
/* append to an existing run */
run = true;
} else if (file[j]->dummy) {
/* start a new run */
file[j]->dummy--;
run = true;
}
} else {
/* first run in file */
file[j]->dummy--;
run = true;
}

if (run) {
anyrun = true;

/* flush run */
while(1) {
if (fwrite(win, sizeof(recType), 1, file[j]->fp) != 1) {
perror("io3");
cleanExit(1);
}
file[j]->rec.key = win->key;
file[j]->valid = true;
if ((win = readRec()) == NULL) break;
if (compLT(win->key, file[j]->rec.key)) break;
}
}
}

/* if no room for runs, up a level */
if (!anyrun) {
int t;
level++;
t = file[0]->fib;
for (j = 0; j <= fileP; j++) {
file[j]->dummy = t + file[j+1]->fib - file[j]->fib;
file[j]->fib = t + file[j+1]->fib;
}
}
}
}

void rewindFile(int j) {
/* rewind file[j] and read in first record */
file[j]->eor = false;
file[j]->eof = false;
rewind(file[j]->fp);
if (fread(&file[j]->rec, sizeof(recType), 1, file[j]->fp) != 1) {
if (feof(file[j]->fp)) {
file[j]->eor = true;
file[j]->eof = true;
} else {
perror("io5");
cleanExit(1);
}
}
}

void mergeSort(void) {
int fileT;
int fileP;
int j;
tmpFileType *tfile;

/* polyphase merge sort */

fileT = nTmpFiles - 1;
fileP = fileT - 1;

/* prime the files */
for (j = 0; j < fileT; j++) {
rewindFile(j);
}

/* each pass through loop merges one run */
while (level) {
while(1) {
bool allDummies;
bool anyRuns;

/* scan for runs */
allDummies = true;
anyRuns = false;
for (j = 0; j <= fileP; j++) {
if (!file[j]->dummy) {
allDummies = false;
if (!file[j]->eof) anyRuns = true;
}
}

if (anyRuns) {
int k;
keyType lastKey;

/* merge 1 run file[0]..file[P] --> file[T] */

while(1) {
/* each pass thru loop writes 1 record to file[fileT] */

/* find smallest key */
k = -1;
for (j = 0; j <= fileP; j++) {
if (file[j]->eor) continue;
if (file[j]->dummy) continue;
if (k < 0 ||
(k != j && compGT(file[k]->rec.key, file[j]->rec.key)))
k = j;
}
if (k < 0) break;

/* write record[k] to file[fileT] */
if (fwrite(&file[k]->rec, sizeof(recType), 1,
file[fileT]->fp) != 1) {
perror("io6");
cleanExit(1);
}

/* replace record[k] */
lastKey = file[k]->rec.key;
if (fread(&file[k]->rec, sizeof(recType), 1,
file[k]->fp) == 1) {
/* check for end of run on file[s] */
if (compLT(file[k]->rec.key, lastKey))
file[k]->eor = true;
} else if (feof(file[k]->fp)) {
file[k]->eof = true;
file[k]->eor = true;
} else {
perror("io7");
cleanExit(1);
}
}

/* fixup dummies */
for (j = 0; j <= fileP; j++) {
if (file[j]->dummy) file[j]->dummy--;
if (!file[j]->eof) file[j]->eor = false;
}

} else if (allDummies) {
for (j = 0; j <= fileP; j++)
file[j]->dummy--;
file[fileT]->dummy++;
}

/* end of run */
if (file[fileP]->eof && !file[fileP]->dummy) {
/* completed a fibonacci level */
level--;
if (!level) {
/* we're done, file[fileT] contains data */
return;
}

/* fileP is exhausted, reopen as new */
fclose(file[fileP]->fp);
if ((file[fileP]->fp = fopen(file[fileP]->name, "w+b"))
== NULL) {
perror("io8");
cleanExit(1);
}
file[fileP]->eof = false;
file[fileP]->eor = false;

rewindFile(fileT);

/* f[0],f[1]...,f[fileT] <-- f[fileT],f[0]...,f[T-1] */
tfile = file[fileT];
memmove(file + 1, file, fileT * sizeof(tmpFileType*));
file[0] = tfile;

/* start new runs */
for (j = 0; j <= fileP; j++)
if (!file[j]->eof) file[j]->eor = false;
}
}

}
}


void extSort(void) {
initTmpFiles();
makeRuns();
mergeSort();
termTmpFiles(0);
}

int main(int argc, char *argv[]) {

/* command-line:
*
* ext ifName ofName nTmpFiles nNodes
*
* ext in.dat out.dat 5 2000
* reads in.dat, sorts using 5 files and 2000 nodes, output to out.dat
*/
if (argc != 5) {
printf("%s ifName ofName nTmpFiles nNodes\n", argv[0]);
cleanExit(1);
}

ifName = argv[1];
ofName = argv[2];
nTmpFiles = atoi(argv[3]);
nNodes = atoi(argv[4]);

printf("extSort: nFiles=%d, nNodes=%d, lrecl=%d\n",
nTmpFiles, nNodes, sizeof(recType));

extSort();

return 0;
}

Tuesday, February 22, 2011

WAP for Quick Sort Algorithm

Data Structure used: Array

Algorithm: Divide & Conquer
1. Partition phase: O(n) time.
2. Divide phase: split the array into two parts and recursively call the partition function until we run out of array; O(n log n) overall.

The recurrence looks like T(n) = 2*T(n/2) + O(n).

Working Code:
public class QuickSort
{
    public static int SIZE = 1000000;

    public int[] sort(int[] input) {
        quickSort(input, 0, input.length - 1);
        return input;
    }

    public static void main(String args[]) {
        int[] a = new int[SIZE];
        for (int i = 0; i < SIZE; i++)
            a[i] = (int) (Math.random() * SIZE);
        long start = System.currentTimeMillis();
        new QuickSort().sort(a);
        long end = System.currentTimeMillis();
        System.out.println("Time taken to sort a million elements : "
                + (end - start) + " milliseconds");
    }

    private void quickSort(int[] arr, int left, int right) {
        int index = partition(arr, left, right);
        if (left < index - 1)
            quickSort(arr, left, index - 1);
        if (index < right)
            quickSort(arr, index, right);
    }

    private int partition(int[] arr, int left, int right) {
        int i = left, j = right;
        int tmp;
        int pivot = arr[(left + right) / 2];

        while (i <= j) {
            while (arr[i] < pivot)
                i++;
            while (arr[j] > pivot)
                j--;
            if (i <= j) {
                tmp = arr[i];
                arr[i] = arr[j];
                arr[j] = tmp;
                i++;
                j--;
            }
        }
        return i;
    }
}
//Time taken to sort a million elements : 156 milliseconds

Time Complexity O(n log n) on average (O(n^2) in the worst case)
Space Complexity O(log n)
Run Here http://ideone.com/clone/ClBCZ

Complexity Analysis (More Focus on Space)

Very few of us know, or can work out, the space complexity of quicksort when asked during an interview, isn't it?

Quicksort has a space complexity of O(log n), even in the worst case, when it is carefully implemented such that:

* in-place partitioning is used, which requires O(1) space;
* after partitioning, the partition with the fewest elements is (recursively) sorted first, requiring at most O(log n) space; then the other partition is sorted using tail recursion or iteration (see the sketch below).

The version of quicksort with in-place partitioning uses only constant additional space before making any recursive call. However, if it has made O(log n) nested recursive calls, it needs to store a constant amount of information from each of them. Since the best case makes at most O(log n) nested recursive calls, it uses O(log n) space. The worst case makes O(n) nested recursive calls, and so needs O(n) space.
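A minimal C sketch of that strategy (my own illustration, using the same Hoare-style partition as the Java code above): recurse only into the smaller partition and loop on the larger one, so the stack depth stays O(log n) even on adversarial input.

/* Recurse on the smaller side, iterate on the larger:
 * stack depth is bounded by log2(n). */
void quickSortSmallFirst(int arr[], int left, int right)
{
    while (left < right) {
        int i = left, j = right;
        int pivot = arr[(left + right) / 2];

        /* in-place partition, O(1) extra space */
        while (i <= j) {
            while (arr[i] < pivot) i++;
            while (arr[j] > pivot) j--;
            if (i <= j) {
                int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
                i++; j--;
            }
        }

        if (j - left < right - i) {
            quickSortSmallFirst(arr, left, j);   /* smaller side: recurse */
            left = i;                            /* larger side: iterate */
        } else {
            quickSortSmallFirst(arr, i, right);  /* smaller side: recurse */
            right = j;                           /* larger side: iterate */
        }
    }
}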

However, if we consider sorting arbitrarily large lists, we have to keep in mind that variables like left and right can no longer be considered to occupy constant space; it takes O(log n) bits to index into a list of n items. Because we have such variables in every stack frame, in reality quicksort requires O(log^2 n) bits of space in the best and average case and O(n log n) bits in the worst case. This isn't too terrible, though, since if the list contains mostly distinct elements, the list itself will also occupy O(n log n) bits of space.