How much data can I allocate on the heap in CodeChef problems? [closed]

What is the maximum amount of data I can allocate in a solution on CodeChef?

CodeChef uses SPOJ servers. The newer problems run on the Cube cluster, and as listed at http://www.spoj.com/clusters/ its memory limit is 1536 MB. That means a large portion of the total memory is available on the heap, so you need not worry about it for any reasonable solution.
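As an illustration (not an official CodeChef guideline), large buffers in a submission should live on the heap or in global storage rather than on the stack, since stack space is far smaller than the per-process memory limit. A minimal C++ sketch, with the buffer size chosen arbitrarily for the example:

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Roughly 400 MB of ints on the heap: fine under a 1536 MB limit,
    // but far beyond a typical 8 MB stack, so never declare a buffer
    // like this as a local (stack) array.
    std::vector<int> buf(100000000, 0);   // 100,000,000 * 4 bytes

    buf[99999999] = 42;                   // touch the far end
    std::printf("%d\n", buf[99999999]);
    return 0;
}
```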

Related

How to align long texts? [closed]

I want to align a pair of long texts with ~20M characters each.
In the past I've used the Smith-Waterman algorithm, but (from my limited understanding) it requires a two-dimensional array sized by the lengths of the texts (a 20M-by-20M array), which is not practical.
So I'm looking for an algorithm that aligns a pair of long texts while keeping memory usage and execution time practical.
UPDATE
I've also tried Myers and Miller's algorithm using this implementation: https://www.codeproject.com/Articles/42279/Investigating-Myers-diff-algorithm-Part-of
But I still got an out-of-memory exception on "not so large" texts (~1 MB).
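For reference, the usual way around the quadratic memory is a linear-space formulation in the spirit of Hirschberg / Myers-Miller: the dynamic-programming matrix is filled row by row and only two rows are kept at a time. The sketch below (plain C++, with illustrative match/mismatch/gap costs chosen for the example) computes only the alignment score; recovering the alignment itself needs the divide-and-conquer step on top of this.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Global alignment score in O(|b|) extra memory (pass the shorter text as b).
// Costs are illustrative: +1 match, -1 mismatch, -1 per gap character.
long long alignmentScore(const std::string& a, const std::string& b) {
    const long long GAP = -1, MATCH = 1, MISMATCH = -1;
    std::vector<long long> prev(b.size() + 1), cur(b.size() + 1);

    for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = GAP * (long long)j;

    for (std::size_t i = 1; i <= a.size(); ++i) {
        cur[0] = GAP * (long long)i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            long long diag = prev[j - 1] + (a[i - 1] == b[j - 1] ? MATCH : MISMATCH);
            cur[j] = std::max({diag, prev[j] + GAP, cur[j - 1] + GAP});
        }
        std::swap(prev, cur);   // keep only two rows at any time
    }
    return prev[b.size()];
}

int main() {
    std::cout << alignmentScore("GATTACA", "GCATGCU") << "\n";
}
```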

Concurrent algorithm for strongly connected components (SCCs) [closed]

Is anybody aware of a concurrent version of Tarjan's SCC algorithm, Kosaraju's algorithm, or any other fast O(|V| + |E|) algorithm for finding SCCs? Neither of those algorithms seems very hard to multithread, but I'd be happy if somebody else has already done the job.
What I'm trying to handle here is an 8 GB directed graph, which I keep in RAM on a big AWS instance, and I'd like to make good use of all 16 cores.
This is possibly the best paper I have found so far; I'll have a go at implementing it. http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/d8e3597a4172437b8525709f006e42b0?OpenDocument
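Not the paper's own algorithm, but for context: the approach most concurrent SCC work builds on (e.g. Fleischer-Hendrickson-Pinar) is forward-backward decomposition. Pick a pivot, intersect its forward- and backward-reachable sets to get one SCC, then recurse on the three remaining disjoint vertex sets, which are independent and can be handed to separate threads. A sequential C++ sketch of that idea:

```cpp
#include <iostream>
#include <queue>
#include <vector>

using Graph = std::vector<std::vector<int>>;   // adjacency lists

// BFS from `src`, restricted to vertices with alive[v] == true.
std::vector<char> reach(const Graph& g, int src, const std::vector<char>& alive) {
    std::vector<char> seen(g.size(), 0);
    std::queue<int> q;
    seen[src] = 1; q.push(src);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int w : g[v])
            if (alive[w] && !seen[w]) { seen[w] = 1; q.push(w); }
    }
    return seen;
}

// Forward-backward SCC decomposition. The three recursive calls work on
// disjoint vertex sets, so they are the natural units for a thread pool.
void fwbw(const Graph& g, const Graph& gr, std::vector<char> alive,
          std::vector<std::vector<int>>& sccs) {
    int pivot = -1;
    for (std::size_t v = 0; v < alive.size(); ++v)
        if (alive[v]) { pivot = (int)v; break; }
    if (pivot < 0) return;

    std::vector<char> fwd = reach(g, pivot, alive);
    std::vector<char> bwd = reach(gr, pivot, alive);

    std::vector<int> scc;
    std::vector<char> inF(alive.size(), 0), inB(alive.size(), 0), rest(alive.size(), 0);
    for (std::size_t v = 0; v < alive.size(); ++v) {
        if (!alive[v]) continue;
        if (fwd[v] && bwd[v]) scc.push_back((int)v);   // the pivot's SCC
        else if (fwd[v]) inF[v] = 1;                   // forward set minus the SCC
        else if (bwd[v]) inB[v] = 1;                   // backward set minus the SCC
        else rest[v] = 1;                              // everything else
    }
    sccs.push_back(scc);

    fwbw(g, gr, inF, sccs);    // these three calls are independent of each other
    fwbw(g, gr, inB, sccs);
    fwbw(g, gr, rest, sccs);
}

int main() {
    // Tiny example: two SCCs, {0,1,2} and {3,4}.
    Graph g = {{1}, {2}, {0, 3}, {4}, {3}};
    Graph gr(g.size());
    for (std::size_t v = 0; v < g.size(); ++v)
        for (int w : g[v]) gr[w].push_back((int)v);    // reverse graph

    std::vector<std::vector<int>> sccs;
    fwbw(g, gr, std::vector<char>(g.size(), 1), sccs);
    for (const auto& c : sccs) {
        for (int v : c) std::cout << v << ' ';
        std::cout << '\n';
    }
}
```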

Linear Probing in Hashing [closed]

A hash table with 10 buckets and one slot per bucket is depicted. The symbols S1 to S7 are initially entered using a hashing function with linear probing. What is the maximum number of comparisons needed when searching for an item that is not present?
I am unable to solve this question. Please explain, in simple language for a learner, how it can be computed.
Consider what happens when all symbols hash to the same number (say zero for simplicity). How many comparisons are required to insert S1, then S2, etc?
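To make the hint concrete, here is a small simulation of that worst case (every symbol hashing to bucket 0; this scenario comes from the hint, not from the question's missing table). It inserts S1..S7 with linear probing and then counts how many occupied slots an unsuccessful search compares against before the first empty slot stops it. Whether that final empty-slot check is itself counted as a comparison depends on the textbook's convention.

```cpp
#include <array>
#include <iostream>

int main() {
    const int BUCKETS = 10;
    std::array<bool, BUCKETS> occupied{};            // all slots start empty

    // Worst case from the hint: S1..S7 all hash to bucket 0.
    for (int s = 0; s < 7; ++s) {
        int i = 0;                                   // home bucket
        while (occupied[i]) i = (i + 1) % BUCKETS;   // linear probing
        occupied[i] = true;
    }

    // Unsuccessful search for a key that also hashes to bucket 0:
    // compare against occupied slots until an empty slot is reached.
    int comparisons = 0, i = 0;
    while (occupied[i]) {
        ++comparisons;                               // one comparison per occupied slot
        i = (i + 1) % BUCKETS;
    }
    std::cout << "comparisons before the search stops: " << comparisons << "\n";
    return 0;
}
```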

In Windows what is a "runtime image"? [closed]

I am reading the book "MySQL 5.0 Certification Study Guide".
On page 362 it states:
• mysql-debug contains support for debugging. Normally, you don't choose this server for production use because it has a larger runtime image and uses more memory.
What is an "image"? I have searched extensively to try to find the answer.
The "image" is the executable code as loaded in memory; the runtime image size is how much memory that code occupies.
In general, "X uses more memory than Y" could refer to both the runtime image size and the amount of space allocated for non-executable data. This quotation is clarifying that both are worse in the debug version.
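If you want to see this figure for yourself on Windows (a minimal sketch of my own, not something from the book), you can query a loaded module's SizeOfImage via the PSAPI function GetModuleInformation:

```cpp
#include <windows.h>
#include <psapi.h>    // link with Psapi.lib (or use K32GetModuleInformation)
#include <cstdio>

int main() {
    MODULEINFO info = {};
    // Query the current process's own executable image.
    if (GetModuleInformation(GetCurrentProcess(), GetModuleHandle(NULL),
                             &info, sizeof(info))) {
        std::printf("Image base: %p\n", info.lpBaseOfDll);
        std::printf("Image size: %lu bytes\n", (unsigned long)info.SizeOfImage);
    }
    return 0;
}
```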

What is the reason for the memory gap in hard drives? [closed]

I want to know the exact reason behind the capacity difference in our hard drives or pen drives.
For example, we buy a pen drive sold as 4 GB, but the actual usable space is about 3.7 GB. What happens to the rest of the space? Are the manufacturers stealing it from us, or is there a technical reason behind this?
Thanks,
Nitesh Kumar
They use decimal prefixes; you're using binary prefixes. This gives a discrepancy of approximately 2.4% per prefix magnitude.
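To check the arithmetic (using the 4 GB and 3.7 GB figures from the question itself): the manufacturer's "4 GB" means 4 × 10^9 bytes, while the operating system divides by 2^30 bytes per unit, so the same drive is reported as roughly 3.73. A tiny sketch:

```cpp
#include <cstdio>

int main() {
    double marketed_bytes = 4e9;              // "4 GB" = 4 * 10^9 bytes
    double gib = 1024.0 * 1024.0 * 1024.0;    // 2^30 bytes, what the OS divides by

    // Prints about 3.73, matching the ~3.7 GB the question observes.
    std::printf("reported size: %.2f\n", marketed_bytes / gib);

    // Discrepancy per prefix step: 1 - 1000/1024, i.e. a bit over 2%.
    std::printf("loss per prefix step: %.1f%%\n", (1.0 - 1000.0 / 1024.0) * 100.0);
    return 0;
}
```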
