Time complexity, Java

Please see the code that I wrote based on a school example.
import java.util.Arrays;

public class Test {
    public static void main(String[] args) {
        int number = 0;
        int[] array = new int[number + 1];
        array[number] = 0;
        methodName(number, array);
    }

    public static void methodName(int n, int[] b) {
        if (n == 0) {
            // Arrays.toString prints the contents; "+ b" alone would print only the array reference
            System.out.println("b is: " + Arrays.toString(b));
            return;
        } else {
            b[n - 1] = 0;
            methodName(n - 1, b);
            b[n - 1] = 1;
            methodName(n - 1, b);
        }
    }
}
I am trying to calculate the best- and worst-case time complexity of this code.
As far as I understand, the best case would be O(1), and I'm having difficulty determining the worst case.
There are four basic operations in the else branch.
I know the running time grows quickly with n, and I have a feeling it is close to O(n!).
Thank you for your time.

If methodName is never called from anywhere other than main, then the program is always O(1): main only ever calls it with n == 0, so the method returns immediately.
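For general n (a note going beyond the answer above, but it follows directly from the code): each call with n > 0 does a constant amount of work and makes two recursive calls with n - 1, so the running time satisfies T(n) = 2T(n-1) + O(1), which solves to O(2^n). That matches what the method actually does: it enumerates all 2^n bit strings of length n in the array b. So the worst case is exponential, not factorial.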

Related

Big O of union method of dynamic connectivity problem

I faced a paradox analyzing this function: why is its time complexity N^2 and not N?
public void union(int a, int b) {
    int aid = ids[a];
    int bid = ids[b];
    for (int i = 0; i < ids.length; i++) {
        if (ids[i] == aid) {
            ids[i] = bid;
        }
    }
}
It's an implementation of the eager approach to the dynamic connectivity problem; the complete code is:
// Union method has N^2 time complexity!!
class EagerApproach extends UnionFind {
    protected int[] ids;

    EagerApproach(int[] input) {
        super(input);
        ids = new int[input.length];
        System.arraycopy(input, 0, ids, 0, input.length);
    }

    public boolean connected(int a, int b) {
        return ids[a] == ids[b];
    }

    public void union(int a, int b) {
        int aid = ids[a];
        int bid = ids[b];
        for (int i = 0; i < ids.length; i++) {
            if (ids[i] == aid) {
                ids[i] = bid;
            }
        }
    }

    public int[] getIds() {
        return ids;
    }
}
Provided your array access ids[x] is in constant time O(1), the time complexity of the union method is linear in the length of the array ids. So
O(ids.length)
or O(n) if we define n as ids.length.
Be careful with the definition of n and ids though. If, in your specific application, n was defined as ids.length = n * n, then this is obviously O(n^2) with n being sqrt(ids.length).
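A likely source of the "paradox" (an assumption about the course context, not stated in the question): the N^2 usually quoted for the eager approach refers to processing a whole sequence of union commands, not a single call. One union is O(N), but connecting N objects takes N - 1 unions, for O(N^2) total. A minimal sketch, assuming the EagerApproach class above (its UnionFind base class is not shown in the question):

// Hypothetical driver: n - 1 unions, each scanning the whole ids array,
// for roughly n * (n - 1) array accesses in total, i.e. O(n^2).
EagerApproach uf = new EagerApproach(new int[]{0, 1, 2, 3, 4});
for (int i = 0; i < 4; i++) {
    uf.union(i, i + 1); // each call is O(n); the loop makes the sequence O(n^2)
}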

Need help on Rod Cutting with Memoization

I have implemented rod cutting using the memoization technique in Java, and here is the code I have come up with so far:
public class RodCutMemo {
    public static int[] memo;

    public static void main(String args[]) {
        int[] prices = {0, 2, 3, 5, 8, 6, 4, 9, 10, 12, 15, 16, 17, 18, 20, 22, 31, 50};
        int n = 5;
        memo = new int[n + 1];
        for (int i = 1; i <= n; i++) {
            memo[i] = -9999;
        }
        System.out.println(maxProfitRodCutMemo(prices, n));
    }

    public static int maxProfitRodCutMemo(int[] prices, int n) {
        if (memo[n] >= 0) {
            return memo[n];
        }
        //else if (n == 0) {
        //    return 0;
        //}
        else {
            int q = -9999;
            for (int i = 1; i <= n; i++) {
                q = Math.max(q, prices[i] + maxProfitRodCutMemo(prices, n - i));
            }
            return q;
        }
    }
}
I have two questions here...
Q1) I have commented out one of the base conditions, if (n == 0). Is that required in the code? Am I missing some corner case without it?
Yes, keep it!
Consider i = n in the for loop in maxProfitRodCutMemo: it leads to a call of maxProfitRodCutMemo(prices, 0). In your current code that call only works by accident: Java zero-initializes arrays and your fill loop starts at i = 1, so memo[0] stays 0 and the memo[n] >= 0 check returns 0 for it. That is fragile (for instance, if memo[0] were also initialized to -9999, the call would return -9999), so the explicit n == 0 base case is the safer choice.
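A related issue worth noting (not raised in the answer above, but visible in the question's code): the computed value is never stored back into memo[n], so the cache is never filled for n > 0 and the memoization has no effect. A minimal corrected sketch, assuming the same class and array names as the question:

public static int maxProfitRodCutMemo(int[] prices, int n) {
    if (n == 0) {
        return 0;                    // explicit base case
    }
    if (memo[n] >= 0) {
        return memo[n];              // cache hit
    }
    int q = -9999;
    for (int i = 1; i <= n; i++) {
        q = Math.max(q, prices[i] + maxProfitRodCutMemo(prices, n - i));
    }
    memo[n] = q;                     // store the result so the memo actually works
    return q;
}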

Solving knapsack dynamic programming debugging

I tried to solve the classical knapsack problem myself, but I am getting a wrong answer, 108. Could you help me figure out what I have done wrong? I am using recursion here.
The weight limit is 10.
The expected answer is weights 5 + 3 + 2, i.e. values 25 + 15 + 14 = 54.
public class KnapSack {
    public static int[] weight = {6, 5, 4, 3, 2};
    public static int[] value = {12, 25, 24, 15, 14};

    public static void main(String[] args) {
        System.out.println(c(0, 0, 10));
    }

    public static int c(int currentElement, int currentValue, int currentReamainder) {
        int p = 0;
        if (currentReamainder <= 0) return currentValue;
        for (int i = currentElement; i < weight.length; i++) {
            if (currentReamainder < weight[i]) return currentValue;
            p = Math.max(value[i] + c(i + 1, currentValue + value[i], currentReamainder - weight[i]),
                         c(i + 1, currentValue, currentReamainder));
        }
        return p;
    }
}
Update:
What should I do to print the weights of the optimum solution?
Your error is in this line:
p = Math.max(value[i] + c(i + 1, currentValue + value[i], currentReamainder - weight[i]), c(i + 1, currentValue, currentReamainder));
It should be:
int val = Math.max(value[i] + c(i + 1, currentValue + value[i], currentReamainder - weight[i]), c(i + 1, currentValue, currentReamainder));
p = Math.max(val, p);
Otherwise each loop iteration overwrites p instead of keeping the best result seen so far. The other bug is that you both accumulate currentValue and return p: in the base case the function returns the accumulated currentValue, and each level on the way back up adds value[i] again, so those values are counted twice.
So your function should be (notice I have removed the currentValue parameter, which is not necessary):
public static int c(int currentElement, int currentReamainder) {
    int p = 0;
    if (currentReamainder <= 0) return 0;
    for (int i = currentElement; i < weight.length; i++) {
        if (currentReamainder < weight[i]) break; // NOTE: only valid if the weight array is sorted in ascending order
        int val = Math.max(value[i] + c(i + 1, currentReamainder - weight[i]),
                           c(i + 1, currentReamainder));
        p = Math.max(val, p);
    }
    return p;
}
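The update about printing the weights of the optimum solution is not addressed above. One common approach (a sketch; chosenWeights is a hypothetical helper name, and it assumes c returns correct values — per the note in the code, either sort the weights ascending or replace the break with continue) is to scan the items again and take an item exactly when taking it is consistent with the optimal value:

import java.util.ArrayList;
import java.util.List;

// Reconstructs one optimal item set: take item i exactly when doing so
// still achieves the optimal value for the remaining capacity.
static List<Integer> chosenWeights(int capacity) {
    List<Integer> chosen = new ArrayList<>();
    int rem = capacity;
    for (int i = 0; i < weight.length && rem > 0; i++) {
        if (rem >= weight[i]
                && c(i, rem) == value[i] + c(i + 1, rem - weight[i])) {
            chosen.add(weight[i]); // item i is part of an optimal solution
            rem -= weight[i];
        }
    }
    return chosen; // for the question's input this yields [5, 3, 2]
}

Note that this re-runs c, which is exponential without memoization; that is fine for inputs this small, but for larger inputs you would memoize c first.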

A more effective algorithm

This is my first time posting a question, so do pardon me if I do anything wrong.
My question is: how can I make this code faster? I am currently using two stacks so that it returns the minimum value within the index range the user asks for.
Example: given (2,3,4,5,1), if the user selects (1,4), they are looking at (2,3,4,5), so the output is 2.
Thanks.
import java.util.*;

interface StackADT {
    // check whether stack is empty
    public boolean empty();

    // retrieve topmost item on stack
    public int peek() throws EmptyStackException;

    // remove and return topmost item on stack
    public int pop() throws EmptyStackException;

    // insert item onto stack
    public void push(int item);
}

class StackArr implements StackADT {
    private int[] arr;
    private int top;
    private int maxSize;
    private final int INITSIZE = 1000;

    public StackArr() {
        arr = new int[INITSIZE];
        top = -1; // empty stack - top does not point at a valid array element
        maxSize = INITSIZE;
    }

    public boolean empty() {
        return (top < 0);
    }

    public int peek() throws EmptyStackException {
        if (!empty()) return arr[top];
        else throw new EmptyStackException();
    }

    public int pop() throws EmptyStackException {
        int obj = peek();
        top--;
        return obj;
    }

    public void push(int obj) {
        if (top >= maxSize - 1) enlargeArr();
        top++;
        arr[top] = obj;
    }

    // grow the backing array (minimal doubling implementation; the original post omits this method)
    private void enlargeArr() {
        maxSize *= 2;
        arr = Arrays.copyOf(arr, maxSize);
    }
}
class RMQ {
    // declare stack object
    Stack<Integer> stack1;

    public RMQ() {
        stack1 = new Stack<Integer>();
    }

    public void insertInt(int num) {
        stack1.push(num);
    }

    public int findIndex(int c, int d) {
        Stack<Integer> tempStack = new Stack<Integer>();
        Stack<Integer> popStack = new Stack<Integer>();
        tempStack = (Stack) stack1.clone();
        // pop down to the upper end of the queried range
        while (d != tempStack.size()) {
            tempStack.pop();
        }
        int minValue = tempStack.pop();
        popStack.push(minValue);
        // scan down to the lower end, keeping the running minimum
        while (c <= tempStack.size()) {
            int tempValue = tempStack.pop();
            if (tempValue >= minValue) {
                continue;
            } else {
                popStack.push(tempValue);
                minValue = tempValue;
            }
        }
        return popStack.pop();
    }
}
public class Pseudo {
    public static void main(String[] args) {
        // declare variables
        int inputNum;
        int numOfOperations;

        // create object
        RMQ rmq = new RMQ();
        Scanner sc = new Scanner(System.in);

        // read input
        inputNum = sc.nextInt();

        // add integers into stack
        for (int i = 0; i < inputNum; i++) {
            rmq.insertInt(sc.nextInt());
        }

        // read input for number of queries
        numOfOperations = sc.nextInt();

        // output queries
        for (int k = 0; k < numOfOperations; k++) {
            int output = rmq.findIndex(sc.nextInt(), sc.nextInt());
            System.out.println(output);
        }
    }
}
Why are you using a stack? Simply use an array:
int[] myArray = new int[inputNum];
// fill the array...

// get the minimum between "from" and "to" (inclusive)
int minimum = Integer.MAX_VALUE;
for (int i = from; i <= to; ++i) {
    minimum = Math.min(minimum, myArray[i]);
}
And that's it!
The way I understand your question, you want to do some preprocessing on a fixed array that then makes your find-min operation over a range of elements very fast.
This answer describes an approach that does O(n log n) preprocessing work, followed by O(1) work per query.
Preprocessing, O(n log n)
The idea is to prepare a 2D array SMALL[a,k], where SMALL[a,k] is the minimum of the 2^k elements starting at a.
You can compute this array recursively: start at k == 0 and build each higher level by combining two entries from the level below.
SMALL[a,k] = min(SMALL[a,k-1], SMALL[a+2^(k-1),k-1])
Lookup, O(1) per query
You can then instantly find the min of any range by combining two precomputed answers.
Suppose you want the min of elements 100 to 133. You already know the min of the 32 elements from 100 to 131 (in SMALL[100,5]) and the min of the 32 elements from 102 to 133 (in SMALL[102,5]), so the answer is the smaller of those two.
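A minimal sketch of that scheme in Java (buildSparseTable and rangeMin are illustrative names, not from the answer above):

// small[k][a] holds the minimum of the 2^k elements starting at index a.
static int[][] buildSparseTable(int[] arr) {
    int n = arr.length;
    int levels = 32 - Integer.numberOfLeadingZeros(n); // floor(log2(n)) + 1
    int[][] small = new int[levels][];
    small[0] = arr.clone(); // level 0: single elements
    for (int k = 1; k < levels; k++) {
        small[k] = new int[n];
        for (int a = 0; a + (1 << k) <= n; a++) {
            small[k][a] = Math.min(small[k - 1][a],
                                   small[k - 1][a + (1 << (k - 1))]);
        }
    }
    return small;
}

// Minimum of arr[lo..hi] (inclusive) in O(1): cover the range with two
// (possibly overlapping) blocks of size 2^k.
static int rangeMin(int[][] small, int lo, int hi) {
    int k = 31 - Integer.numberOfLeadingZeros(hi - lo + 1); // floor(log2(length))
    return Math.min(small[k][lo], small[k][hi - (1 << k) + 1]);
}

For the example at the top of this section, buildSparseTable over {2, 3, 4, 5, 1} followed by rangeMin(small, 0, 3) returns 2.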
This is the Range Minimum Query problem.
There are several algorithms and data structures that solve it efficiently.

O(1) lookup in non-contiguous memory?

Is there any known data structure that provides O(1) random access, without using a contiguous block of memory of size O(N) or greater? This was inspired by this answer and is being asked for curiosity's sake rather than for any specific practical use case, though it might hypothetically be useful in cases of a severely fragmented heap.
Yes, here's an example in C++:
#include <vector>

template<class T>
struct Deque {
    struct Block {
        enum {
            B = 4 * 1024 / sizeof(T), // use any strategy you want;
                                      // this gives you ~4KiB blocks
            length = B
        };
        T data[length];
    };
    std::vector<Block*> blocks;

    T& operator[](int n) {
        return blocks[n / Block::length]->data[n % Block::length]; // O(1)
    }
    // many things left out for clarity and brevity
};
The main difference from std::deque is that this has O(n) push_front instead of O(1); in fact, there's a bit of a problem implementing std::deque so that it has all of:
O(1) push_front
O(1) push_back
O(1) op[]
Perhaps I misinterpreted "without using a contiguous block of memory of size O(N) or greater", which seems awkwardly worded. Could you clarify what you want? I've interpreted it as "no single allocation that contains one item for every item in the represented sequence", such as would be helpful to avoid large allocations. (Even though I do have a single allocation of size N/B for the vector.)
If my answer doesn't fit your definition, then nothing will, unless you artificially limit the container's max size. (I can limit you to LONG_MAX items, store the above blocks in a tree instead, and call that O(1) lookup, for example.)
You can use a trie where the length of the key is bounded. Since lookup in a trie with a key of length m is O(m), bounding the key length bounds m, and lookup becomes O(1).
So think of a trie whose keys are strings over the alphabet { 0, 1 } (i.e., keys are the binary representations of integers). If we bound the length of the keys to, say, 32 letters, we have a structure we can think of as indexed by 32-bit integers and randomly accessible in O(1) time.
Here is an implementation in C#:
class TrieArray<T> {
    TrieArrayNode<T> _root;

    public TrieArray(int length) {
        this.Length = length;
        _root = new TrieArrayNode<T>();
        for (int i = 0; i < length; i++) {
            Insert(i);
        }
    }

    TrieArrayNode<T> Insert(int n) {
        return Insert(IntToBinaryString(n));
    }

    TrieArrayNode<T> Insert(string s) {
        TrieArrayNode<T> node = _root;
        foreach (char c in s.ToCharArray()) {
            node = Insert(c, node);
        }
        return _root;
    }

    TrieArrayNode<T> Insert(char c, TrieArrayNode<T> node) {
        if (node.Contains(c)) {
            return node.GetChild(c);
        }
        else {
            TrieArrayNode<T> child = new TrieArray<T>.TrieArrayNode<T>();
            node.Nodes[GetIndex(c)] = child;
            return child;
        }
    }

    internal static int GetIndex(char c) {
        return (int)(c - '0');
    }

    static string IntToBinaryString(int n) {
        return Convert.ToString(n, 2);
    }

    public int Length { get; set; }

    TrieArrayNode<T> Find(int n) {
        return Find(IntToBinaryString(n));
    }

    TrieArrayNode<T> Find(string s) {
        TrieArrayNode<T> node = _root;
        foreach (char c in s.ToCharArray()) {
            node = Find(c, node);
        }
        return node;
    }

    TrieArrayNode<T> Find(char c, TrieArrayNode<T> node) {
        if (node.Contains(c)) {
            return node.GetChild(c);
        }
        else {
            throw new InvalidOperationException();
        }
    }

    public T this[int index] {
        get {
            CheckIndex(index);
            return Find(index).Value;
        }
        set {
            CheckIndex(index);
            Find(index).Value = value;
        }
    }

    void CheckIndex(int index) {
        if (index < 0 || index >= this.Length) {
            throw new ArgumentOutOfRangeException("index");
        }
    }

    class TrieArrayNode<TNested> {
        public TrieArrayNode<TNested>[] Nodes { get; set; }
        public T Value { get; set; }

        public TrieArrayNode() {
            Nodes = new TrieArrayNode<TNested>[2];
        }

        public bool Contains(char c) {
            return Nodes[TrieArray<TNested>.GetIndex(c)] != null;
        }

        public TrieArrayNode<TNested> GetChild(char c) {
            return Nodes[TrieArray<TNested>.GetIndex(c)];
        }
    }
}
Here is sample usage:
class Program {
    static void Main(string[] args) {
        int length = 10;
        TrieArray<int> array = new TrieArray<int>(length);
        for (int i = 0; i < length; i++) {
            array[i] = i * i;
        }
        for (int i = 0; i < length; i++) {
            Console.WriteLine(array[i]);
        }
    }
}
Well, since I've spent time thinking about it, here goes. It could be argued that all hashtables are either a contiguous block of size >= N or have a bucket list proportional to N, and Roger's top-level array of Blocks is O(N) with a coefficient less than 1; I proposed a fix for that in the comments on his answer:
#include <algorithm>
#include <cstddef>
#include <vector>
using namespace std;

int magnitude( size_t x ) { // many platforms have an instruction for this
    int m = 0;
    while ( x >>= 1 ) ++ m; // returns 0 for input 0 or 1
    return m;
}

template< class T >
struct half_power_deque {
    vector< vector< T > > blocks; // max log(N) blocks of increasing size
    int half_first_block_mag; // blocks one, two have same size >= 2

    T &operator[]( size_t index ) {
        int index_magnitude = magnitude( index );
        size_t block_index = max( 0, index_magnitude - half_first_block_mag );
        vector< T > &block = blocks[ block_index ];
        size_t elem_index = index;
        if ( block_index != 0 ) elem_index &= ( 1 << index_magnitude ) - 1;
        return block[ elem_index ];
    }
};

template< class T >
struct power_deque {
    half_power_deque< T > forward, backward;
    ptrdiff_t begin_offset; // == -backward.size(), or indexes into forward

    T &operator[]( size_t index ) {
        ptrdiff_t real_offset = index + begin_offset;
        if ( real_offset < 0 ) return backward[ - real_offset - 1 ];
        return forward[ real_offset ];
    }
};
half_power_deque implements erasing all but the last block, altering half_first_block_mag appropriately. This allows O(max over time N) memory use, amortized O(1) insertions on both ends, never invalidating references, and O(1) lookup.
How about a map/dictionary? Last I checked, that's O(1) expected lookup.
