I am using MQL4.
Currently, I am using Expert Advisors in the MT4 Strategy Tester, and I set a period of time via the built-in pull-down calendar GUI elements.
What I want to do is to set the period of time directly in the MQL4 source code.
If this is possible, I can, for example, compare the result
'from 2011/01-to 2011/12'
to
'from 2012/01-to 2012/12'
and so on.
There is an easy solution to this requirement, one that even adds value for fully automated, large-scale hyper-parameter optimisations inside the MT4 Strategy Tester. The proposed pair of parameters ( aStartFromDATE and aRunTillDATE ) can serve as an iterable tuple, harnessed into robustness cross-validations of a trading strategy's release candidates over a sweeping/sliding calendar window of time.
extern datetime aStartFromDATE = D'2010.01.01 00:00';
extern datetime aRunTillDATE   = D'2345.01.01 00:00';

void OnTick(){
     if (  TimeCurrent() < aStartFromDATE   // bare Time is not a scalar in MQL4;
        || TimeCurrent() > aRunTillDATE     // TimeCurrent() returns the last known server time
           ){
           IgnoreTicksOutsideTheGivenPeriodOfTime();
           return;
     }
     // SURE TO BE INSIDE THE GIVEN ( MT4.STRATEGY/TESTER ITERABLE ) PERIOD OF TIME
     ...
     ..
     .
}

void IgnoreTicksOutsideTheGivenPeriodOfTime(){
     // Ignore the tick, but one may still execute some utility service during a void run
}
Be careful about the different scopes of syntax support: one should also be cautious with use cases that run into Strategy Tester restrictions on some of the powerful new syntax constructs.
PrintFormat() is one such unsupported piece: the PrintFormat() function does not work during optimization in the Strategy Tester.
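If some logging is still wanted, a common pattern is to guard it so it is simply skipped while the optimiser runs (a minimal MQL4 sketch using the standard IsOptimization() call):

// a minimal sketch: skip logging entirely during optimisation runs
if ( !IsOptimization() )
     Print( "EA window: ",
            TimeToStr( aStartFromDATE, TIME_DATE|TIME_MINUTES ),
            " -> ",
            TimeToStr( aRunTillDATE,   TIME_DATE|TIME_MINUTES )
            );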
In order to make my question easy to understand I want to use the following example:
The following code is called a nonblock do-loop in the Fortran language:
DO 20 I=1, N ! line 1
DO 20 J=1, N ! line 2
! more code   ! line 3
20 CONTINUE ! line 4
Note that the label 20 at line 4 ends both the inner do-loop and the outer do-loop.
I want my flex program to parse this feature correctly: when flex reads the label 20, it should return the ENDDO terminal twice.
Firstly, since I also use bison, bison calls yylex() each time it needs one terminal. If I could ask bison to get terminals from yylex() in some cases, and from another function in other cases, maybe I could solve this problem; however, I have no idea how to do that.
Of course there are some workarounds; for example, I could use flex's start conditions, but I don't think that is a good solution. So I ask whether there is any way to solve my problem without a workaround.
It is easy enough to modify the lexical scanner produced by (f)lex to implement a token queue, but that is not necessarily the optimal solution. (See below for a better solution.) (Also, it is really not clear to me that for your particular problem, fabricating the extra token in the lexer is truly appropriate.)
The general approach is to insert code at the top of the yylex function, which you can do by placing the code immediately after the %% line and before the first rule. (The code must be indented so that it is not interpreted as a rule.) For non-reentrant scanners, this will typically involve the use of a local static variable to hold the queue. For a simple but dumb example, using the C API but compiling with C++ so as to have access to the C++ standard library:
%%
    /* This code will be executed each time `yylex` is called, before
     * any generated code. It may include declarations, even if compiled
     * with C89.
     */
    static std::deque<int> tokenq;
    if (!tokenq.empty()) {
      int token = tokenq.front();
      tokenq.pop_front();
      return token;
    }

[[:digit:]]+   { /* match a number and return that many HELLO tokens */
                 int n = atoi(yytext);
                 for (int i = 0; i < n; ++i)
                   tokenq.push_back(HELLO);
               }
The above code makes no attempt to provide a semantic value for the queued tokens; you could achieve that using something like a std::queue<std::pair<int, YYSTYPE>> for the token queue, but the fact that YYSTYPE is typically a union will make for some complications. Also, if that were the only reason to use the token queue, it is obvious that it could be replaced with a simple counter, which would be much more efficient. See, for example, this answer which does something vaguely similar to your question (and take note of the suggestions in Note 1 of that answer).
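For the toy example above, the counter variant just mentioned might look like this (a sketch, reusing the HELLO token):

%%
    /* executed at the top of yylex on every call */
    static int pending_hellos = 0;
    if (pending_hellos > 0) {
      --pending_hellos;
      return HELLO;
    }

[[:digit:]]+   { /* emit the first HELLO now; the rest are owed */
                 pending_hellos = atoi(yytext);
                 if (pending_hellos > 0) {
                   --pending_hellos;
                   return HELLO;
                 }
               }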
Better alternative: Use a push parser
Although the token queue solution is attractive and simple, it is rarely the best solution. In most cases, code will be clearer and easier to write if you request bison to produce a "push parser". With a push parser, the parser is called by the lexer every time a token is available. This makes it trivial to return multiple tokens from a lexer action; you just call the parser for each token. Similarly, if a rule doesn't produce any tokens, it simply fails to call the parser. In this model, the only lexer action which actually returns is the <<EOF>> rule, and it only does so after calling the parser with the END token to indicate that parsing is complete.
Unfortunately, the interface for push parsers is not only subject to change, as that manual link indicates; it is also very badly documented. So here is a simple but complete example which shows how it is done.
The push parser keeps its state in a yypstate structure, which needs to be passed to the parser on each call. Since the lexer is called only once for each input file, it is reasonable for the lexer to own that structure, which can be done as above with a local static variable [Note 1]: the parser state is initialized when yylex is called, and the EOF rule deletes the parser state in order to reclaim whatever memory it is using.
It is usually most convenient to build a reentrant push parser, which means that the parser does not rely on the global yylval variable [Note 2]. Instead, a pointer to the semantic value must be provided as an additional argument to yypush_parse. If your parser doesn't refer to the semantic value for the particular token type, you can provide NULL for this argument. Or, as in the code below, you can use a local semantic value variable in the lexer. It is not necessary that every call to the push parser provide the same pointer. In all, the changes to the scanner definition are minimal:
%%
    /* Initialize a parser state object */
    yypstate* pstate = yypstate_new();
    /* A semantic value which can be sent to the parser on each call */
    YYSTYPE yylval;

    /* Some example scanner actions */

"keyword"      { /* Simple keyword which just sends a value-less token */
                 yypush_parse(pstate, TK_KEYWORD, NULL); /* See Note 3 */
               }

[[:digit:]]+   { /* Token with a semantic value */
                 yylval.num = atoi(yytext);
                 yypush_parse(pstate, TK_NUMBER, &yylval);
               }

"dice-roll"    { /* sends three random numbers */
                 for (int i = 0; i < 3; ++i) {  /* was i < 2: off-by-one for three rolls */
                   yylval.num = rand() % 6;
                   yypush_parse(pstate, TK_NUMBER, &yylval);
                 }
               }                                /* this closing brace was missing */

<<EOF>>        { /* Obligatory EOF rule */
                 /* Send the parser the end token (0) */
                 int status = yypush_parse(pstate, 0, NULL);
                 /* Free the pstate */
                 yypstate_delete(pstate);
                 /* return the parser status; 0 is success */
                 return status;
               }
In the parser, not much needs to be changed at all, other than adding the necessary declarations: [Note 4]
%define api.pure full
%define api.push-pull push
Notes
1. If you were building a reentrant lexer as well, you would use the extra data section of the lexer state object instead of static variables.
2. If you are using location objects in your parser to track source code locations, this also applies to yylloc.
3. The example code does not do a good job of detecting errors, since it doesn't check the return codes from the calls to yypush_parse. One solution I commonly use is some variant of the macro SEND:
#define SEND(token) do {                                  \
    int status = yypush_parse(pstate, token, &yylval);    \
    if (status != YYPUSH_MORE) {                          \
      yypstate_delete(pstate);                            \
      return status;                                      \
    }                                                     \
  } while (0)
It's also possible to use a goto to avoid the multiple instances of the yypstate_delete and return. YMMV.
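With the macro in place, the scanner actions above shrink to calls like these (illustrative):

"keyword"      { SEND(TK_KEYWORD); }
[[:digit:]]+   { yylval.num = atoi(yytext); SEND(TK_NUMBER); }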
4. You may have to modify the prototype of yyerror. If you are using locations and/or providing extra parameters to the push parser, the location object and/or the extra parameters will also be present in the yyerror call. (The error string is always the last parameter.) For whatever reason, the parser state object is not provided to yyerror. That means the yyerror function no longer has access to variables such as yychar, which are now members of the yypstate structure rather than global variables; if you use these variables in your error reporting (which is not really recommended practice), you will have to find an alternative solution.
Thanks to one of my friends, who provided a way to achieve this:
If I can ask bison to get terminals from yylex() in some cases, and from another function in other cases
In the flex-generated flex.cpp code, there is a macro:
/* Default declaration of generated scanner - a define so the user can
* easily add parameters.
*/
#ifndef YY_DECL
#define YY_DECL_IS_OURS 1
extern int yylex (void);
#define YY_DECL int yylex (void)
#endif /* !YY_DECL */
so I can "rename" flex's yylex() function to another function, such as pure_yylex().
My problem is then solved by:
- pushing all terminals I want to give bison into a global vector<int>;
- implementing a yylex() function myself: when bison calls yylex(), this function first tries to get terminals from that global vector<int>;
- if the vector<int> is empty, my yylex() calls pure_yylex(), and flex starts to work.
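A minimal sketch of that wrapper (the names pending and pure_yylex are illustrative; the YY_DECL override lives in the .l file's definitions section):

/* in the .l file, before %%:  #define YY_DECL int pure_yylex(void) */

#include <vector>

std::vector<int> pending;          // terminals queued for bison, oldest first
static std::size_t next_tok = 0;   // index of the next queued terminal

int pure_yylex(void);              // the renamed flex-generated scanner

int yylex(void) {
    if (next_tok < pending.size())
        return pending[next_tok++];   // drain queued terminals first
    pending.clear();                  // queue exhausted: reset and
    next_tok = 0;                     // fall through to the real scanner
    return pure_yylex();
}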
How do I disable autotrading globally in MetaTrader 4/5 from within MQL4/5 code without using DLLs?
Here you go (the author is Tiago Praxedes):
#define MT_WMCMD_EXPERTS 32851
#define WM_COMMAND 0x0111
#define GA_ROOT 2

#include <WinAPI\winapi.mqh>

void SetAlgoTradingStatus(bool Enable)
{
   bool Status = (bool) TerminalInfoInteger(TERMINAL_TRADE_ALLOWED);
   if(Enable != Status)
   {
      HANDLE hChart = (HANDLE) ChartGetInteger(ChartID(), CHART_WINDOW_HANDLE);
      PostMessageW(GetAncestor(hChart, GA_ROOT), WM_COMMAND, MT_WMCMD_EXPERTS, 0);
   }
}

void OnTick()
{
   SetAlgoTradingStatus(false);
}
Lifted it from here: Source
Yes, an MQL4/5 Expert Advisor may locally forbid itself to trade, this way:
if ( IsTradeAllowed() )
{  Comment( __FILE__, " [EA] Trading is allowed, will disable self." );
   ...
}
else
{  Comment( __FILE__, " [EA] Trading is not allowed, will disable self." );
   ...
}
// ------------------------------------------ GRACEFULLY RELEASE ALL RESOURCES BEFORE FIN'd
// FINALLY: EXPERT-ADVISOR SIG_TERM -> self ( MT4 )
ExpertRemove(); /* MT4: The Expert Advisor is not stopped immediately
                   when you call ExpertRemove();
                   just a flag to stop the EA operation is set.
                   That is:
                   - no further event will be processed,
                   - OnDeinit() will be called, and
                   - the Expert Advisor will be unloaded and removed
                     from the chart.
                 */
If the idea is that you have several EAs on different pairs and you want to disable them all at the same time, you could place a specific trade, one that I would call an information trade, not meant to be used for actual trading.
Choose a price very far away from the current price.
For this example, we can use 9,999.000 as the price, and we place the trade.
Loop through the trades and look for a price of 9999.
If you find it, use that to disable trading.
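A minimal MQL4 sketch of that detection loop (the 9,999.000 marker price is just the convention chosen above; the function name is illustrative):

bool DisabledByInformationTrade()
{
   for ( int i = OrdersTotal() - 1; i >= 0; i-- )
   {
      if ( !OrderSelect( i, SELECT_BY_POS, MODE_TRADES ) )
         continue;
      if ( OrderOpenPrice() == 9999.000 )   // the agreed "impossible" marker price
         return true;                       // marker found: all EAs stand down
   }
   return false;
}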
If you want trading to start again, you can have a button, or manually delete the trade placed at 9999.
Now the block is not there.
All the EAs you may have running can see this.
An offsite computer will be able to see it as well.
This was used as a way to hand off trading control to a team of traders.
The new owner would send the trade that bumps everyone off.
Remove that trade, but the other EAs stay asleep until a human turns them back on or a "Wake Up" trade is sent.
You can also send values in the order's Comment, and in the TP and SL values as prices.
There are many ways to use this method.
For this to work, though, you would have to have your own code and make the changes. If you buy an EA and want to turn it on and off without the source code, then this method won't work.
We have people who run code for simulations, testing, etc. on some supercomputers that we have. What would be nice is if, as part of the build process, we could check not only that the code compiles but also that the output matches some pattern which indicates we are getting meaningful results.
I.e., the researcher may know that the value of x must be within some bounds. If not, then a logical error has been made in the code (assuming it compiles and there is no compile-time error).
Are there any pre-written packages for this kind of thing? The code is written in FORTRAN, C, C++, etc.
Any specific or general advice would be appreciated.
I expect most unit testing frameworks could do this; supply a toy test data set and see that the answer is sane in various different ways.
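For instance, a bare-bones assert-style smoke test might look like this (a sketch; simulate is a hypothetical entry point into the code under test):

#include <cassert>
#include <cmath>

double simulate(double input);   // hypothetical: the computation under test

int main() {
    // toy data set with bounds the researcher knows must hold
    double x = simulate(1.0);
    assert(!std::isnan(x));
    assert(x >= 0.0 && x <= 100.0);   // illustrative bounds
    return 0;
}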
A good way to ensure that the resulting value of any computation (whether final or intermediate) meets certain constraints is to use an object-oriented programming language like C++ and define data types that internally enforce the conditions you are checking for. You can then use those data types as the return values of your computations to ensure that said conditions are met.
Let's look at a simple example. Assume that you have a member function inside an Airplane class, as part of a flight control system, that estimates the mass of the airplane instance as a function of the number of passengers and the amount of fuel the plane has at that moment. One way to declare the Airplane class and an airplaneMass() member function is the following:
class Airplane {
public:
...
int airplaneMass() const; // note the plain int return type
...
private:
...
};
However, a better way to implement the above would be to define a type AirplaneMass that can be used as the function's return type instead of int. AirplaneMass can internally ensure (in its constructor and any overloaded operators) that the value it encapsulates meets certain constraints. An example implementation of the AirplaneMass datatype could be the following:
class AirplaneMass {
public:
// AirplaneMass constructor
AirplaneMass(int m) {
if (m < MIN || m > MAX) {
// throw exception or log constraint violation
}
// if the value of m meets the constraints,
// assign it to the internal value.
mass_ = m;
}
...
/* range checking should also be done in the implementation
of overloaded operators. For instance, you may want to
make sure that the resultant of the ++ operation for
any instance of AirplaneMass also lies within the
specified constraints. */
private:
int mass_;
};
Thereafter, you can redeclare class Airplane and its airplaneMass() member function as follows:
class Airplane {
public:
...
AirplaneMass airplaneMass() const;
// note the more specific AirplaneMass return type
...
private:
...
};
The above will ensure that the value returned by airplaneMass() is between MIN and MAX. Otherwise, an exception will be thrown, or the error condition will be logged.
I had to do something like that for conversions this month. I don't know whether it will help you, but it seemed quite a simple solution to me.
First, I defined a tolerance level. (Java-ish example code...)
private static final double TOLERANCE = 0.000000000001D;
Then I defined a new "areEqual" method which checks if the difference between both values is lower than the tolerance level or not.
private static boolean areEqual(double a, double b) {
    return (Math.abs(a - b) < TOLERANCE);   // Math.abs, or a static import of abs
}
If I get a false somewhere, it means the check has probably failed. I can adjust the tolerance to see if it's just a precision problem or really a bad result. Works quite well in my situation.
How much information hiding is necessary? I have boilerplate code before I delete a record; it looks like this:
public override void OrderProcessing_Delete(Dictionary<string, object> pkColumns)
{
var c = Connect();
using (var cmd = new NpgsqlCommand("SELECT COUNT(*) FROM orders WHERE order_id = :_order_id", c)
{ Parameters = { {"_order_id", pkColumns["order_id"]} } } )
{
var count = (long)cmd.ExecuteScalar();
// deletion's boilerplate code...
if (count == 0) throw new RecordNotFoundException();
else if (count > 1) throw new DatabaseStructureChangedException();
// ...boiler plate code
}
// deleting of table(s) goes here...
}
NOTE: boilerplate code is code-generated, including the "using (var cmd = new NpgsqlCommand( ... )"
But I'm seriously thinking of refactoring the boilerplate code; I want more succinct code. This is how I envision the refactored code (made nicer with an extension method, though that's not the sole reason ;)):
using (var cmd = new NpgsqlCommand("SELECT COUNT(*) FROM orders WHERE order_id = :_order_id", c)
{ Parameters = { {"_order_id", pkColumns["order_id"]} } } )
{
cmd.VerifyDeletion(); // [EDIT: was ExecuteWithVerification before]
}
I want the ExecuteScalar call and the boilerplate code to go inside the extension method, as sketched below.
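Something like this is what I have in mind (a sketch; the extension-class name is arbitrary):

using Npgsql;

public static class NpgsqlCommandExtensions
{
    // runs the COUNT(*) query and applies the boilerplate existence checks
    public static void VerifyDeletion(this NpgsqlCommand cmd)
    {
        var count = (long) cmd.ExecuteScalar();
        if (count == 0) throw new RecordNotFoundException();
        if (count > 1) throw new DatabaseStructureChangedException();
    }
}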
For my code above, does it warrant refactoring / information hiding? Does my refactored operation look too opaque?
I would say that your refactor is extremely good, if your new single line of code replaces a handful of lines of code in many places in your program. Especially since the functionality is going to be the same in all of those places.
The programmer coming after you and looking at your code will simply look at the definition of the extension method to find out what it does, and now he knows that this code is defined in one place, so there is no possibility of it differing from place to place.
Try it if you must, but my feeling is it's not about succinctness, but whether you want to enforce the behavior every time or only most of the time. And, by extension, whether a change to the verify condition should apply across the board.
Basically, reducing a small chunk of boiler-plate code doesn't necessarily make things more succinct; it's just one more bit of abstractness the developer has to wade through and understand.
As a developer, I'd have no idea what "ExecuteWithVerify" means. What exactly are we verifying? I'd have to look it up and remember it. But with the boiler-plate code, I can look at the code and understand exactly what's going on.
And by NOT reducing it to a separate method I can also tune the boiler-plate code for cases where exceptions need to be thrown for differing conditions.
It's not information-hiding when you extract or refactor your code. It's only information-hiding when you start restricting access to your extension definition after refactoring.
"new" operator within a Class (except for the Constructor) should be Avoided at all costs. This is what you need to refactor here.
Sorry if this is basic, but I am trying to pick up .NET 3.5.
Question: is there anything great about Func<> and its 5 overloads? From the looks of it, I can still create a similar delegate on my own, say MyFunc<>, with the exact same 5 overloads and even more.
E.g.: public delegate TResult MyFunc<TResult>() and a combo of various overloads...
The thought came up as I was trying to understand Func<> delegates and hit upon the following scenario:
Func<int,int> myDelegate = (y) => IsComposite(10);
This implies a delegate with one parameter of type int and a return type of type int. There are five variations (if you look at the overloads through IntelliSense). So I am guessing that we can also have a delegate with no return type?
So am I justified in saying that Func<> is nothing great, just an example in the .NET Framework that we can use, and that if needed we can create custom "Func<>" delegates to suit our own needs?
The greatness lies in establishing shared language for better communication.
Instead of defining your own delegate types for the same thing (delegate explosion), use the ones provided by the framework. Anyone reading your code instantly grasps what you are trying to accomplish; it minimizes the time spent on 'what is this piece of code actually doing?'
So as soon as I see a
Action = some method that just does something and returns no output
Comparison = some method that compares two objects of the same type and returns an int to indicate order
Converter = transforms Obj A into equivalent Obj B
EventHandler = response/handler to an event raised by some object given some input in the form of an event argument
Func = some method that takes some parameters, computes something and returns a result
Predicate = evaluate input object against some criteria and return pass/fail status as bool
I don't have to dig deeper than that unless it is my immediate area of concern. So if you feel the delegate you need fits one of these needs, use them before rolling your own.
Disclaimer: Personally I like this move by the language designers.
Counter-argument: sometimes defining your own delegate may help communicate intent better, e.g. System.Threading.ThreadStart over System.Action. So it's a judgment call in the end.
The Func family of delegates (and their return-type-less cousins, Action) are not any greater than anything else you'd find in the .NET Framework. They're just there for re-use so you don't have to redefine them. They have type parameters to keep things generic. E.g., a Func<T, bool> is essentially the same as a System.Predicate<T> delegate. They were originally designed for LINQ.
You should be able to just use the built-in Func delegate for any value-returning method that accepts up to 4 arguments instead of defining your own delegate for such a purpose unless you want the name to reflect your intention, which is cool.
Cases where you would absolutely need to define your delegate types include methods that accept more than 4 arguments, methods with out, ref, or params parameters, or recursive method signatures (e.g., delegate Foo Foo(Foo f)).
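For instance, an out parameter cannot be expressed with Func, so a custom delegate type is required (a hypothetical illustration):

// Func cannot represent this signature because of the out parameter:
delegate bool TryParser(string text, out int value);

// the delegate binds naturally to existing methods with that shape:
TryParser parse = int.TryParse;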
In addition to Marxidad's correct answer:
It's worth being aware of Func's related family, the Action delegates. Again, these are types overloaded by the number of type parameters, but declared to return void.
If you want to use Func/Action in a .NET 2.0 project but with a simple route to upgrading later on, you can cut and paste the declarations from my version comparison page. If you declare them in the System namespace then you'll be able to upgrade just by removing the declarations later - but then you won't be able to (easily) build the same code in .NET 3.5 without removing the declarations.
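The declarations themselves are short. For example, a few of the .NET 3.5 shapes could be written like this (a sketch; .NET 2.0 already ships a single-argument Action<T>, so only the missing forms would be declared):

namespace System
{
    public delegate TResult Func<TResult>();
    public delegate TResult Func<T, TResult>(T arg);
    public delegate void Action();
    public delegate void Action<T1, T2>(T1 arg1, T2 arg2);
}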
Decoupling dependencies and unholy tie-ups is the one singular thing that makes it great. Everything else one can debate and claim to be doable in some home-grown way.
I've been refactoring a slightly more complex system with an old and heavy lib and got blocked on not being able to break a compile-time dependency, because of a named delegate lurking on "the other side". All the assembly loading and reflection didn't help: the compiler would refuse to just cast a delegate() {...} to object, and whatever you do to pacify it would fail on the other side.
Delegate type comparison, which is structural at compile time, turns nominal after that (loading, invoking). That may seem OK while you are thinking in terms of "my darling lib is going to be used forever and by everyone", but it doesn't scale to even slightly more complex systems. Func<> templates bring a degree of structural equivalence back into the world of nominal typing. That's the aspect you can't achieve by rolling your own.
Example - converting:
class Session {
    public delegate string CleanBody();  // tying you up and you don't see it :-)
    public static void Execute(string name, string q, CleanBody body) ...
to:
public static void Execute(string name, string q, Func<string> body)
Allows completely independent code to do reflection invocation like:
Type type = Type.GetType("Bla.Session, FooSessionDll", true);
MethodInfo methodInfo = type.GetMethod("Execute");
Func<string> d = delegate() { ..... };   // see Ma - no tie-ups :-)
object[] args = { "foo", "bar", d };     // renamed: 'params' is a C# keyword
methodInfo.Invoke(null, args);           // Execute is static, so the target is null
Existing code doesn't notice the difference, new code doesn't get dependence - peace on Earth :-)
One thing I like about delegates is that they let me declare methods within methods, like so. This is handy when you want to reuse a piece of code but you only need it within that method. Since the purpose here is to limit the scope as much as possible, Func<> comes in handy.
For example:
string FormatName(string pFirstName, string pLastName) {
Func<string, string> MakeFirstUpper = (pText) => {
return pText.Substring(0,1).ToUpper() + pText.Substring(1);
};
return MakeFirstUpper(pFirstName) + " " + MakeFirstUpper(pLastName);
}
It's even easier and more handy when you can use inference, which you can if you create a helper function like so:
Func<T, TReturn> Lambda<T, TReturn>(Func<T, TReturn> pFunc) {
return pFunc;
}
Now I can rewrite my function without the Func<>:
string FormatName(string pFirstName, string pLastName) {
var MakeFirstUpper = Lambda((string pText) => {
return pText.Substring(0,1).ToUpper() + pText.Substring(1);
});
return MakeFirstUpper(pFirstName) + " " + MakeFirstUpper(pLastName);
}
Here's the code to test the method:
Console.WriteLine(FormatName("luis", "perez"));
Though it is an old thread, I have to add that Func<> and Action<> also help us use covariance and contravariance.
http://msdn.microsoft.com/en-us/library/dd465122.aspx
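A short sketch of what that enables (under the C# 4.0 variance rules):

using System;

class VarianceDemo
{
    static void Main()
    {
        // Covariance: Func<string> converts to Func<object>,
        // because every string result is also an object.
        Func<string> makeString = () => "hello";
        Func<object> makeObject = makeString;

        // Contravariance: Action<object> converts to Action<string>,
        // because a handler for any object can handle a string.
        Action<object> printAny = o => Console.WriteLine(o);
        Action<string> printString = printAny;

        Console.WriteLine(makeObject());
        printString("world");
    }
}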