I have a complex algorithm that uses very deep recursion. Because it overflows the stack on some specific data, I have tried to rewrite it without recursion (using an external stack on the heap). So I have two versions of the same algorithm. Then I ran some tests and found that the recursive implementation is much faster than the other one.
Can someone explain this to me, please? It is part of my final university project to discuss these results (why one implementation is much faster than the other). I think it is because of the different caching behaviour of the stack and the heap, but I am not sure.
Thanks a lot!
EDIT
OK, here is the code. The algorithm is written in C++ and solves the tree isomorphism problem. Both implementations are the same except for one method, which compares two nodes. The comparison is defined recursively: one node is less than another if one of its children is less than the corresponding child of the other node.
Recursive version
char compareTo( const IMisraNode * nodeA, const IMisraNode * nodeB ) const {
    // compare the children both nodes have
    int min = std::min( nodeA->getDegree( ), nodeB->getDegree( ) );
    for ( int i = 0; i < min; ++i ) {
        char res = compareTo( nodeA->getChild( i ), nodeB->getChild( i ) );
        if ( res < 0 ) return -1;
        if ( res > 0 ) return 1;
    }
    if ( nodeA->getDegree( ) == nodeB->getDegree( ) ) return 0; // same number of children
    else if ( nodeA->getDegree( ) == min ) return -1;
    else return 1;
}
Nonrecursive implementation
struct Comparison {
    const IMisraNode * nodeA;
    const IMisraNode * nodeB;
    int i;
    int min; // minimum of the two children counts
    Comparison( const IMisraNode * nodeA, const IMisraNode * nodeB ) :
        nodeA( nodeA ), nodeB( nodeB ),
        i( 0 ), min( std::min( nodeA->getDegree( ), nodeB->getDegree( ) ) ) { }
};
char compareTo( const IMisraNode * nodeA, const IMisraNode * nodeB ) const {
    Comparison * cmp = new Comparison( nodeA, nodeB );
    // stack on the heap
    std::stack<Comparison *> stack;
    stack.push( cmp );
    char result = 0; // equality is assumed
    while ( !result && !stack.empty( ) ) { // while they are equal so far and there are nodes left
        cmp = stack.top( );
        if ( cmp->i < cmp->min ) {
            // compare the corresponding children
            stack.push( new Comparison( cmp->nodeA->getChild( cmp->i ), cmp->nodeB->getChild( cmp->i ) ) );
            ++cmp->i; // next child
            continue; // continue comparing on the next level
        }
        if ( cmp->nodeA->getDegree( ) != cmp->nodeB->getDegree( ) ) { // children counts differ
            if ( cmp->nodeA->getDegree( ) == cmp->min ) result = -1; // node A has fewer children
            else result = 1;
        }
        delete cmp;
        stack.pop( );
    }
    while ( !stack.empty( ) ) { // clean up the stack
        delete stack.top( );
        stack.pop( );
    }
    return result;
}
Your non-recursive code does dynamic memory allocation (explicitly with new, and implicitly by your use of std::stack), while the recursive one does not. Dynamic memory allocation is an extremely expensive operation.
To speed things up, try storing values, not pointers:

std::stack<Comparison> astack;

then code like:

astack.push( Comparison( cmp->nodeA->getChild( cmp->i ), cmp->nodeB->getChild( cmp->i ) ) );
Comparison & cp = astack.top( ); // take a reference, not a copy, so that changes to cp.i persist
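Putting these pieces together, a self-contained sketch of the value-based variant might look like this (the Node struct here is a hypothetical stand-in for the question's IMisraNode interface):

```cpp
#include <algorithm>
#include <cassert>
#include <stack>
#include <vector>

// Hypothetical stand-in for the question's IMisraNode interface.
struct Node {
    std::vector<const Node *> children;
    int getDegree() const { return (int) children.size(); }
    const Node * getChild( int i ) const { return children[i]; }
};

struct Comparison {
    const Node * nodeA;
    const Node * nodeB;
    int i;
    int min; // minimum of the two children counts
    Comparison( const Node * a, const Node * b )
        : nodeA( a ), nodeB( b ), i( 0 ),
          min( std::min( a->getDegree(), b->getDegree() ) ) { }
};

char compareTo( const Node * nodeA, const Node * nodeB ) {
    std::stack<Comparison> st;            // values, so no new/delete per level
    st.push( Comparison( nodeA, nodeB ) );
    char result = 0;
    while ( !result && !st.empty() ) {
        Comparison & cmp = st.top();      // reference, so ++i below persists
        if ( cmp.i < cmp.min ) {
            int i = cmp.i++;
            const Node * a = cmp.nodeA->getChild( i ); // children of the current pair
            const Node * b = cmp.nodeB->getChild( i );
            st.push( Comparison( a, b ) );             // descend one level
            continue;
        }
        if ( cmp.nodeA->getDegree() != cmp.nodeB->getDegree() )
            result = ( cmp.nodeA->getDegree() == cmp.min ) ? -1 : 1;
        st.pop();
    }
    return result;
}
```

All Comparison records still live on the heap inside the deque that backs std::stack, but the allocations are amortized over many pushes instead of one new/delete per tree level.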
This doesn't answer your speed-comparison question, but rather suggests ways to increase stack size for your recursive solution.
You can increase the stack size (default: 1MB) under VC++: search "stacksize" in Visual Studio help.
And you can do the same under gcc. There's an SO discussion on that very question.
Related
I was trying to solve problem D from this competition (it's not really important to read the task) and I noticed that these two pieces of code, which do the same thing, differ in execution time:
map< string, vector<string> > G;

// Version 1
bool dfs(string s, string t) {
    if( s == t ) return true;
    for(int i = 0; i < int(G[s].size()); i++) {
        if( dfs( G[s][i], t ) ) return true;
    }
    return false;
}

// Version 2
bool dfs(string s, string t) {
    if( s == t ) return true;
    for(auto r: G[s]) {
        if( dfs( r, t ) ) return true;
    }
    return false;
}
In particular, Version 1 gets TLE in the evaluation, while Version 2 passes without any problem. According to this question it's strange that Version 1 is slower, and testing on my PC with the largest testcase I get the same execution time... Can you help me?
In version 1 the loop condition evaluates int(G[s].size()) on every iteration, and each evaluation of G[s] is a map lookup that compares strings; the range-for in version 2 evaluates G[s] only once. Try caching G[s] in a reference (or at least its size in a variable) before the loop and using that in the comparison. This will be faster than the version 1 you currently have.
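A sketch of version 1 with that lookup hoisted out of the loop (the strings are also taken by const reference, to avoid copying them on every recursive call):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

std::map<std::string, std::vector<std::string> > G;

// Version 1 with G[s] hoisted: one map lookup per call instead of
// one per loop iteration, and no string copies on the way down.
bool dfs( const std::string & s, const std::string & t ) {
    if ( s == t ) return true;
    const std::vector<std::string> & adj = G[s]; // single lookup
    for ( std::size_t i = 0; i < adj.size(); i++ ) {
        if ( dfs( adj[i], t ) ) return true;
    }
    return false;
}
```

Like the original, this assumes the graph is acyclic; a cycle would still recurse forever in both versions.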
I'm trying to write a very simple genetic algorithm in C (for a school research project). I am somewhat stuck on calculating the fitness percentage.
I'm trying to match a random string from user input with a dictionary word (imagine a Scrabble-like game or anything else).
For instance, when the user input is "hello" and the dictionary word is "hello", both strings match and a fitness of 100% would be correct. With "hellp" and "hello" the fitness should be almost 100%, and with "uryyb" the fitness should be (far) below 100%.
Does anybody know how to write such a fitness function, or know a (general) reference for this sort of fitness function?
Here I allocate memory for an array of dictionary words:

int row;
// first allocate amount_words pointers
woorden = (char **) malloc( amount_words * sizeof( char * ) );
for( row = 0; row < amount_words; row++ )
    woorden[row] = (char *) malloc( len + 1 );
return;
These could also be freed:

int row;
for( row = 0; row < amount_words; row++ )
    free( woorden[row] );
free( woorden );
return;
I crudely validate the characters:

char is_valid_str( char *str )
{
    int i;
    for( i = 0; i < zoek_str_len; i++ )
        if( str[i] < 'a' || str[i] > 'z' )
            return FALSE;
    return TRUE;
}
I calculate the number of words of a certain length:

int amount_len_words( int len )
{
    FILE *f;
    int amount_words = 0;
    char woord[40];
    f = fopen( "words.txt", "r" );
    /* checking the fscanf result is more reliable than feof() */
    while( fscanf( f, "%39s", woord ) == 1 ) {
        if( strlen( woord ) == len && is_valid_str( woord ) )
            amount_words++;
    }
    fclose( f );
    return amount_words;
}
I read an array of words of a certain length:

FILE *f;
int i = 0;
int lenwords;
char woord[40];
lenwords = amount_len_words( len );
alloc_woorden( lenwords, len );
f = fopen( "words.txt", "r" );
while( fscanf( f, "%39s", woord ) == 1 ) {
    if( strlen( woord ) == len && is_valid_str( woord ) ) {
        strncpy( woorden[i], woord, len );
        woorden[i][len] = '\0'; /* strncpy does not terminate when the source fills the buffer */
        i++;
    }
}
fclose( f );
for( i = 0; i < lenwords; i++ ) {
    printf( "%s\n", woorden[i] );
}
Here is the main routine:

int main( int argc, char *argv[] )
{
    char zoek_str[40];
    if( argc <= 1 ) {
        printf( "gebruik: %s zoek_string\n", argv[0] );
        return 0;
    }
    if( strlen( argv[1] ) > 39 ) {
        printf( "Zoek string maximaal 39 lowercase karakters.\n" );
        return 0;
    }
    strcpy( zoek_str, argv[1] );
    zoek_str_len = strlen( zoek_str );
    if( !is_valid_str( zoek_str ) ) {
        printf( "Ongeldige zoek string. Neemt alleen lowercase karakters!\n" );
        return 0;
    }
    printf( "%s\n", zoek_str );
    init_words( zoek_str_len );
    return 0;
}
These two are the functions I'm currently puzzling about:
double calculate_fitness( char *zoek )
{
}
And
void mutate( char *arg )
{
}
After that I would iterate generation by generation.
Note that I only search fixed-length strings, e.g. strlen(argv[1]).
Example output of all of this could be:
generation string word percentage
1 hfllr hello 89.4%
2 hellq hello 90.3%
3 hellp hello 95.3%
4 hello hello 100%
or something like that.
By comparing the two strings letter by letter, a metric could be correct/max_length, where 'correct' is the number of letters that match and 'max_length' is the length of the longest string.
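As a minimal C-style sketch of that metric (assuming equal-length strings, as in the question; the target word is passed in explicitly here, whereas the asker's calculate_fitness( char * ) presumably reads a global):

```cpp
#include <cassert>
#include <cstring>

/* Simple fitness: the fraction of positions where the letters match.
   Both strings are assumed to have the same fixed length. */
double calculate_fitness( const char * guess, const char * target )
{
    size_t len = strlen( target );
    if ( len == 0 ) return 1.0;
    size_t correct = 0;
    for ( size_t i = 0; i < len; i++ )
        if ( guess[i] == target[i] )
            correct++;
    return (double) correct / (double) len;
}
```

With this, "hello" vs "hello" scores 1.0, "hellp" vs "hello" scores 0.8, and "uryyb" vs "hello" scores 0.0.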
For something more involved you could look up the concept of Edit distance.
see Edit distance
also see Levenshtein distance
Basically what you are trying to measure is the minimum number of operations required to transform one string into the other.
First of all, you need a metric for the "distance between strings". A commonly used one is the Levenshtein distance, which measures the distance between two strings as the minimum number of single-character edits (i.e. insertions, deletions or substitutions) required to change one string into the other.
Googling this you can find multiple code examples on how to compute such distance. Once you have the distance, your fitness should be inversely proportional to it.
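A sketch of that approach: a textbook dynamic-programming Levenshtein distance, with the fitness scaled into [0, 1] (the function names here are illustrative, not from the question's code):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Classic two-row dynamic-programming Levenshtein distance, O(n*m) time.
int levenshtein( const std::string & a, const std::string & b ) {
    std::vector<int> prev( b.size() + 1 ), cur( b.size() + 1 );
    for ( size_t j = 0; j <= b.size(); j++ ) prev[j] = (int) j;
    for ( size_t i = 1; i <= a.size(); i++ ) {
        cur[0] = (int) i;
        for ( size_t j = 1; j <= b.size(); j++ ) {
            int cost = ( a[i - 1] == b[j - 1] ) ? 0 : 1;
            cur[j] = std::min( { prev[j] + 1,        // deletion
                                 cur[j - 1] + 1,     // insertion
                                 prev[j - 1] + cost  // substitution
                               } );
        }
        std::swap( prev, cur );
    }
    return prev[b.size()];
}

// Fitness inversely related to the distance, scaled to [0, 1].
double fitness( const std::string & guess, const std::string & target ) {
    size_t maxlen = std::max( guess.size(), target.size() );
    if ( maxlen == 0 ) return 1.0;
    return 1.0 - (double) levenshtein( guess, target ) / (double) maxlen;
}
```

For fixed-length strings this degenerates to something close to the letter-by-letter metric, but it also handles insertions and deletions gracefully.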
I would like to create a stop-loss order that will be placed above the high of the previous order's initiation bar in case of a Sell order, OR below the low of the previous order's initiation bar in case of a Buy order.
Here is a picture to illustrate the issue (the example depicts a sell order case):
Any idea how to do that? The code below works fine if I use a fixed stoploss. If I replace the stoploss with variables based on the High or Low, no orders are fired.
Here is my code:
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
extern int StartHour = 14;
extern int TakeProfit = 70;
extern int StopLoss = 40;
extern double Lots = 0.01;
extern int MA_period = 20;
extern int MA_period_1 = 45;
extern int RSI_period14 = 14;
extern int RSI_period12 = 12;
void OnTick() {
    static bool IsFirstTick = true;
    static int  ticket = 0;
    double R_MA      = iMA(  Symbol(), Period(), MA_period,     0, 0, 0, 1 );
    double R_MA_Fast = iMA(  Symbol(), Period(), MA_period_1,   0, 0, 0, 1 );
    double R_RSI14   = iRSI( Symbol(), Period(), RSI_period14,  0, 0 );
    double R_RSI12   = iRSI( Symbol(), Period(), RSI_period12,  0, 0 );
    double HH = High[1];
    double LL = Low[ 1];
    if ( Hour() == StartHour ) {
        if ( IsFirstTick == true ) {
            IsFirstTick = false;
            bool res1 = OrderSelect( ticket, SELECT_BY_TICKET );
            if ( res1 == true ) {
                if ( OrderCloseTime() == 0 ) {
                    bool res2 = OrderClose( ticket, Lots, OrderClosePrice(), 10 );
                    if ( res2 == false ) {
                        Alert( "Error closing order # ", ticket );
                    }
                }
            }
            if ( High[1]   <  R_MA
              && R_RSI12   >  R_RSI14
              && R_MA_Fast >= R_MA
                 ) {
                ticket = OrderSend( Symbol(),
                                    OP_BUY,
                                    Lots,
                                    Ask,
                                    10,
                                    Bid - LL * Point * 10,
                                    Bid + TakeProfit * Point * 10,
                                    "Set by SimpleSystem"
                                    );
            }
            if ( ticket < 0 ) {
                Alert( "Error Sending Order!" );
            }
            else {
                if ( High[1]   >  R_MA
                  && R_RSI12   >  R_RSI14
                  && R_MA_Fast <= R_MA
                     ) {
                    ticket = OrderSend( Symbol(),
                                        OP_SELL,
                                        Lots,
                                        Bid,
                                        10,
                                        Ask + HH * Point * 10,
                                        Ask - TakeProfit * Point * 10,
                                        "Set by SimpleSystem"
                                        );
                }
                if ( ticket < 0 ) {
                    Alert( "Error Sending Order!" );
                }
            }
        }
    }
    else {
        IsFirstTick = true;
    }
}
Major issue
Once you have assigned ( on each incoming market quote )
double HH = High[1],
       LL = Low[ 1];
your instruction to OP_SELL shall be repaired:
ticket = OrderSend( Symbol(),
OP_SELL,
Lots,
Bid,
10,
// ----------------------v--------------------------------------
// Ask + HH * 10 * Point,
// intention was High[1] + 10 [PT]s ( if Broker allows ), right?
NormalizeDouble( HH + 10 * Point,
Digits // ALWAYS NORMALIZE FOR .XTO-s
),
// vvv----------------------------------------------------------
// Ask - TakeProfit * Point * 10, // SAFER TO BASE ON BreakEvenPT
NormalizeDouble( Ask
- TakeProfit * Point * 10,
Digits // ALWAYS NORMALIZE FOR .XTO-s
),
"Set by SimpleSystem"
);
Symmetrically review and modify the OP_BUY case.
For Broker T&C collisions ( these need not get reflected in backtest ) review:
MarketInfo( _Symbol, MODE_STOPLEVEL )
MarketInfo( _Symbol, MODE_FREEZELEVEL )
or inspect in the MT4.Terminal in the MarketWatch aMouseRightClick Symbols -> Properties for STOPLEVEL distance.
Minor issue
Review also your code for OrderClose() -- this will fail due to passing a wrong Price:
// ---------------------------------------------vvvvv----------------------------
   bool res2 = OrderClose( ticket, Lots, OrderClosePrice(), 10 );
It is somewhat similar to what we do in hashing: after adding the elements to the hash table, I simply scan the table in order, and whenever an element is found I print it and remove it. I used it to solve the following very easy problem on CodeChef. Here is the basic algorithm that I used, but I want to know what it is called.
void func( int nos ) {
    static int arr[1000000];    // static: a 4 MB local array could itself overflow the stack
    while( nos-- ) {
        int k;
        cin >> k;
        arr[k]++;
    }
    for( int i = 0; i < 1000000; ) {
        if( arr[i] == 0 ) {
            i++;
            continue;
        }
        cout << i << endl;
        arr[i]--;
    }
}
Thanks !
This is known as counting sort.
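For reference, a minimal standalone sketch of counting sort, with the count array sized to the known value range instead of fixed at one million:

```cpp
#include <cassert>
#include <vector>

// Counting sort for values in [0, maxValue]: O(n + maxValue) time.
std::vector<int> counting_sort( const std::vector<int> & input, int maxValue ) {
    std::vector<int> count( maxValue + 1, 0 );
    for ( int v : input )
        count[v]++;                         // tally each value
    std::vector<int> sorted;
    sorted.reserve( input.size() );
    for ( int v = 0; v <= maxValue; v++ )   // emit values in increasing order
        for ( int c = 0; c < count[v]; c++ )
            sorted.push_back( v );
    return sorted;
}
```

It is only practical when the value range is not much larger than the number of elements, which is exactly the situation in the question's code.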
MongoDB supports many useful array operations such as $push and $pop, but I can't seem to find any information about their algorithmic complexity, nor about how they are implemented, to work out their runtime complexity myself. Any help would be greatly appreciated.
I think when it comes to Mongo updates, there are only three relevant cases:
1) an in-place atomic update. For example just increment an integer. This is very fast.
2) an in-place replace. The whole document has to be rewritten, but it still fits into the current space (it shrank or there is enough padding).
3) a document migration. You have to write the document to a new location.
In addition to that there is the cost of updating affected indexes (all, if the whole thing had to be moved).
What you actually do inside the document (push to an array, add a field) should not have any significant impact on the total cost of the operation, which seems to depend mostly linearly on the size of the document (network and disk transfer costs).
Here's where they are implemented. You can figure out the complexity from there.
This is the $pop operator, for example (this seems like O(N) to me):
case POP: {
    uassert( 10135 , "$pop can only be applied to an array" , in.type() == Array );
    BSONObjBuilder bb( builder.subarrayStart( shortFieldName ) );
    int n = 0;
    BSONObjIterator i( in.embeddedObject() );
    if ( elt.isNumber() && elt.number() < 0 ) {
        // pop from front
        if ( i.more() ) {
            i.next();
            n++;
        }
        while( i.more() ) {
            bb.appendAs( i.next() , bb.numStr( n - 1 ) );
            n++;
        }
    }
    else {
        // pop from back
        while( i.more() ) {
            n++;
            BSONElement arrI = i.next();
            if ( i.more() ) {
                bb.append( arrI );
            }
        }
    }
    ms.pushStartSize = n;
    verify( ms.pushStartSize == in.embeddedObject().nFields() );
    bb.done();
    break;
}