How to draw a chart from a CSV file in MQL4?

I'm new to MQL and MetaTrader 4, but I want to read a .csv file and draw the values I've got onto the chart of the Expert Advisor I'm working on.
Every .csv file has the following form:
;EURUSD;1
DATE;TIME;HIGH;LOW;CLOSE;OPEN;VOLUME
2014.06.11;19:11:00;1.35272;1.35271;1.35271;1.35272;4
2014.06.11;19:14:00;1.35287;1.35282;1.35284;1.35283;30
Here, the EURUSD part is the _Symbol (generated by another program), the 1 is the period, and all the remaining rows are the data to draw.
Is there any way to do this inside an Expert Advisor, or do I need to use a Custom Indicator?
If the latter, how can I do it in the simplest way?
P.S.: I read the data into a struct:
struct entry
{
    string date;
    string time;
    double high;
    double low;
    double close;
    double open;
    int    volume;
};
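For reference, reading such a file into an array of these structs can look like the following minimal sketch (assumptions: the file sits in the terminal's MQL4\Files folder and the two header rows shown above are skipped blindly):
// Minimal sketch: fill rows[] from the ';'-separated CSV described above.
// Assumption: header rows ";EURUSD;1" (3 cells) + column names (7 cells).
int ReadCsv( const string fileName, entry &rows[] )
{
    int h = FileOpen( fileName, FILE_CSV|FILE_READ, ';' );
    if ( h == INVALID_HANDLE ) return( 0 );
    for ( int i = 0; i < 10; i++ ) FileReadString( h );   // skip both header rows
    int n = 0;
    while ( !FileIsEnding( h ) )
    {
        ArrayResize( rows, n + 1 );
        rows[n].date   = FileReadString( h );
        rows[n].time   = FileReadString( h );
        rows[n].high   = FileReadNumber( h );
        rows[n].low    = FileReadNumber( h );
        rows[n].close  = FileReadNumber( h );
        rows[n].open   = FileReadNumber( h );
        rows[n].volume = (int)FileReadNumber( h );
        n++;
    }
    FileClose( h );
    return( n );
}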

There are three principally different approaches available in MT4:
First, one may reshuffle the data-cells into a compatible format T,O,H,L,C,V and import the records using the F2 History Center [Import] facility of the MetaTrader Terminal. One may create one's own Symbol-name so as to avoid name-colliding cases in the History Center database.
This way, one lets MT4 create system-level illustrations of the TOHLCV-data, using the platform's underlying graphical engine.
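For example, the two sample records above, reshuffled into a T,O,H,L,C,V ordering for the import (illustrative, comma-separated):
2014.06.11,19:11:00,1.35272,1.35272,1.35271,1.35271,4
2014.06.11,19:14:00,1.35283,1.35287,1.35282,1.35284,30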
Second,
one may ignore the underlying graphical engine and work on a user-controlled GUI-overlay, i.e. implement an algorithm to read a CSV file and create a set of MQL4 GUI-objects algorithmically, based on the data contained in the said CSV file. An experience-based decision between an { ExpertAdvisor | CustomIndicator } would rather yield a Script for this purpose, due to its one-shot processing.
One shall realise that the MT4 code-execution ecosystem does a specific context-binding between an MQL4-code (which is being run) and an MT4.Graph, which does not allow a code launched on a GBPJPY MT4.Graph to directly process objects related to an FTSE.100 MT4.Graph. Yes, if asked to, one may implement a few add-ons and develop a sophisticated distributed-processing model to make this work "across" the said context-binding borders.
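A minimal sketch of such algorithmic object creation, assuming rows[] was filled by a reader like the one shown in the question (object naming and styling here are illustrative):
// Draw one vertical high-low segment per CSV record on the current chart.
void DrawBars( const entry &rows[], const int count )
{
    for ( int i = 0; i < count; i++ )
    {
        datetime t    = StrToTime( rows[i].date + " " + rows[i].time );
        string   name = "csv_bar_" + IntegerToString( i );
        ObjectCreate( 0, name, OBJ_TREND, 0, t, rows[i].low, t, rows[i].high );
        ObjectSetInteger( 0, name, OBJPROP_RAY,   false );
        ObjectSetInteger( 0, name, OBJPROP_COLOR, clrDodgerBlue );
    }
}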
Third,
and for some cases the most interesting way, is a file-based approach, where one may
- pre-process the CSV data in a similar way as in the second option, but not inside a live MT4 process, rather "beforehand", and
- generate one's own Profile files, keeping the MT4 convention of placement & content of
~/profiles/<aProfileNAME>/chart01.chr
~/profiles/<aProfileNAME>/order.wnd
~/profiles/lastprofile.ini, referring <aProfileNAME> on its first row
This way, once the MT4 session starts, the pre-fabricated files are auto-loaded, like a pilot-tape, and displayed as one wishes. Q.E.D.
A .chr file syntax sample:
<chart>
id=130394787628125000
comment=msLIB.TERMINAL: _______________2013.04.15 08:00:00 |cpuClockTIXs = 448765484 |
symbol=EURCHF
period=60
leftpos=6188
digits=4
scale=4
graph=1
fore=0
grid=0
volume=1
scroll=0
shift=1
ohlc=1
...
<window>
height=100
fixed_height=0
<indicator>
name=main
<object>
type=10
object_name=Fibo 16762
...
<object>
type=16
object_name=msLIB.RectangleOnEVENT
period_flags=0
create_time=1348596865
color=25600
style=0
weight=1
background=0
filling=0
selectable=1
hidden=0
zorder=0
time_0=1348592400
value_0=1.213992
time_1=1348624800
value_1=1.209486
ray=0
</object>
...
<object>
type=17
object_name=msLIB.TriangleMarker
period_flags=0
create_time=1348064992
color=17919
style=2
weight=1
background=0
filling=0
selectable=1
hidden=0
zorder=0
time_0=1348052400
value_0=1.213026
time_1=1348070400
value_1=1.213026
time_2=1348070400
value_2=1.210476
</object>

Related

Apache Arrow C++: What's the best fast alternative to parquet::StreamWriter?

I'm writing a program to convert a tabular ("rownar") custom binary file format to Parquet using Arrow C++.
The core of the program works as follows:
ColInfo colinfo = ...;              // fixed schema per file, not known at compile time
parquet::StreamWriter w = ...;
void on_row(RowData row) {
    for (const auto& col : colinfo) {
        w << convertCell(row, col); // stream one cell at a time
    }
    w << parquet::EndRow;           // required to finish each row
}
Here, on_row is called by the input file parser for each row parsed.
This works fine but is pretty slow, with StreamWriter::operator<< being the bottleneck.
Question: What's an alternative to StreamWriter that's similarly easy to use but faster?
Constraints:
Can't change the callback-based interface shown above.
Input data doesn't fit into memory.
I've looked into the reader_writer{,2}.cc examples in the Arrow repository that use the WriteBatch interface. Is that the recommended way to quickly create Parquet files? If so, what's the recommended way to size row groups? Or is there an interface that abstracts away row groups, like with StreamWriter? And what's the recommended size of num_values for WriteBatch?
Secondary question: What are some good opportunities to concurrently create the Parquet file? Can batches, chunks, columns, or row groups be written concurrently?
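For context, a minimal sketch of the WriteBatch path those examples use (writer setup elided as in the examples; the single int64 column and the pre-buffered vector are assumptions):
// One row group, written column by column; 'file_writer' is a
// parquet::ParquetFileWriter opened as in reader_writer.cc.
parquet::RowGroupWriter* rg = file_writer->AppendRowGroup();
auto* col_writer = static_cast<parquet::Int64Writer*>(rg->NextColumn());
std::vector<int64_t> values = ...;  // cells buffered for this column
col_writer->WriteBatch(values.size(), /*def_levels=*/nullptr,
                       /*rep_levels=*/nullptr, values.data());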

How to load a value from a dynamically specified parameter in NiFi

I have several processes with almost the same flow, like "get some parameters, extract data from a database according to them, and upload it to a target". The parameters vary slightly across processes, as do the targets, but only a bit; most of the process is the same. I would like to extract those differences into a parameter context and load them dynamically. My idea is to have the parameters defined in the following way and then use them.
So the core of the question is:
How to dynamically choose which parameter group to load and use?
Having several parameter contexts with same-named/different-valued parameters and dynamically switching between them would probably be best, but as far as I know that is not possible.
Also, duplicating flows is off the table: any error correction would be spread out over several places and maintenance would be a nightmare.
Moreover, I know I can do it like "in GenerateFlowFile for process A set value1=#{A_value1}, and in GenerateFlowFile for process B set value1=#{B_value1}". But this is tedious, error-prone, and scales badly, not to speak of situations with dozens of parameters and several processes. It is also a kind of hardcoding, not configuring...
I was hoping for something like defining group=A and then using it like value1=#{ ${ group:append('_value1') } }, but this does not work: it is evaluated as a parameter literally named ${ group:append('_value1') }.
TL;DR: Use evaluateELString().
The actual solution is to set group=A in the GenerateFlowFile processor and then, in the next UpdateAttribute processor, set the following:
value1=${ group:prepend('hash{ '):append('_value1 }'):replace('hash', '#'):evaluateELString() }
The magic being done here is: "take the value of group, wrap it in #{ and _value1 } to make a valid NiFi Expression Language statement, and then evaluate it." (Notice: the word hash and the replace function are there because I didn't manage to escape the # character right before the {.)
If you would like to have your value1 at the beginning of the statement, you can use the following code instead. The result is the same and it is easier to use (the often-changed value1 sits at the beginning of the statement), but it is less readable "what is really going on?"-wise.
value1=${ literal('value1'):prepend('_'):prepend(${ group }):prepend('hash{ '):append(' }'):replace('hash', '#'):evaluateELString() }
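To trace the first variant for group=A: prepend/append build the string "hash{ A_value1 }", replace turns it into "#{ A_value1 }", and evaluateELString() then evaluates that string as Expression Language, returning the value of the parameter A_value1.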

How to use Kettle to handle the linefeed in a field

I want to use Kettle to handle a data set.
The format of the data set is as follows:
product/productId: B00006HAXW
review/userId: A1RSDE90N6RSZF
review/profileName: Joseph M. Kotow
review/helpfulness: 9/9
review/score: 5.0
review/time: 1042502400
review/summary: Pittsburgh - Home of the OLDIES
review/text: I have all of the doo wop DVD's and this one is as good or better than the
1st ones. Remember once these performers are gone, we'll never get to see them again.
Rhino did an excellent job and if you like or love doo wop and Rock n Roll you'll LOVE
this DVD !!
I read every line of the data first and then transform every eight rows into a record.
However, the data sometimes looks like this:
review/profileName: nancy "crzyfnyfrog
I love my purple pigtails."
This field contains a linefeed and I don't know how to handle it.
For now I use script code to achieve what I want, but I would still like to know how to solve it with standard components.
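For reference, the script workaround amounts to logic like the following sketch (illustrative Java; the two key prefixes are taken from the sample above):
import java.util.ArrayList;
import java.util.List;

// Merge continuation lines into the preceding field before
// grouping every eight lines into one record.
static List<String> mergeContinuations(List<String> lines) {
    List<String> out = new ArrayList<>();
    for (String line : lines) {
        if (line.startsWith("product/") || line.startsWith("review/") || out.isEmpty())
            out.add(line);                                  // a real field line
        else                                                // embedded linefeed:
            out.set(out.size() - 1,
                    out.get(out.size() - 1) + " " + line);  // glue to previous field
    }
    return out;
}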

Summing XML Data in MSWord Mail Merge

I have a report card written in Word that uses an XML file for its input. In the XML file, if a student remains in the same section all three trimesters there will be one node for that class; if they change sections at the trimester they'll have one node for each section. The nodes look something like this (greatly simplified):
<ReportCardSectionFB Abs1="2" Abs2="11" CourseID="ELMATH1" CourseTitle="Math" PeriodStart="3" TeacherName="Jones, Jennifer" TermCode="Year" SectionID="ELMATH1-4" />
<ReportCardSectionFB Abs1="1.50" Abs2="6" CourseID="ELMATH1" CourseTitle="Math" PeriodStart="3" TeacherName="Smith, Tina" TermCode="Year" SectionID="ELMATH1-3" />
There is no indicator within the XML as to which trimester the node belongs to.
In the Word document, we're pulling the absence data with the following mail merge command:
{MERGEFIELD "ReportCardSectionFB[#PeriodStart='3']/ #Abs1" \# 0.# \* MERGEFORMAT }
That's not working in this situation: it only gets the absence data from the first node it comes across, i.e.: 2.0. Is there a way to get the sum of #Abs1 for all period 3 classes, i.e.: 3.5? If not, is there a way to only get the last #Abs1 for period 3, i.e.: 1.5?
I recommend using this 3rd-party product, which can use XML as input and is capable of merging it with an MS Word template. It is also much more powerful than Word's built-in mail merge. You can see some examples here.
You could also try summing the absences in Synergy - there's a new checkbox under AttDef1, 2, etc. that adds up all the absences for the date range - "Include all day data for the entire date range regardless of section enrollment or section timeframe". That way the absences should be the same for each section, if that works for your district.
You can also try the SET field in Word to store the MERGEFIELDs in bookmarks and then use Word's formula fields to add those bookmarks up.
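A hypothetical sketch of that SET approach (field codes, typed with Ctrl+F9; the second MERGEFIELD is left elided because, as noted above, the XML offers no indicator to address the second period-3 node directly):
{ SET Abs1First { MERGEFIELD "ReportCardSectionFB[#PeriodStart='3']/ #Abs1" } }
{ SET Abs1Second { MERGEFIELD "..." } }
{ = Abs1First + Abs1Second \# 0.# }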

Correlating multiple dynamic values

How can I get the value of important id and ValueType?
I have tried using web_reg_save_param_regexp (but unfortunately I don't fully understand how the function works).
I have also tried using web_reg_save_param (with the help of offset and length).
Unfortunately, once again, I cannot get the accurate value: some values change in length, especially when the total amount values change dynamically per run.
<important id=\"insertsomevalueshere\" record=\"1\" nucTotal=\"NUC609.40\"><total amount=\"68.75\" currency=\"USD\"/><total amount=\"609.40\" currency=\"USD\"/><out avgsomecost=\"540.65\" ValueType=\"insertsomevalueshere\" containsawesomeness=\"1\" Score=\"-97961\" somedatatype=\"1\" typeofData=\"VAL\" web=\"1\">
Put these lines of code before the line of code which does your web request:
web_reg_save_param_regexp("ParamName=importantid", "RegExp=<important id=\\\"(.*?)\\\"", LAST);
web_reg_save_param_regexp("ParamName=ValueType", "RegExp= ValueType=\\\"(.*?)\\\"", LAST);
You will then have two stored parameters, 'importantid' and 'ValueType'.
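The captured values can then be referenced in later steps with the standard { } parameter syntax, e.g. (step name and URL invented for illustration):
web_url("next_step", "URL=http://example.com/page?id={importantid}&type={ValueType}", LAST);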
Dynamic number of elements to correlate? Your path for resubmission is through web_custom_request(). You will need to build the string you need dynamically with the name:value pairs for all of the data which needs to be included.
This path will place a premium on your string manipulation skills in the language of the tool. The default path is through C, but you have other language options if your skills are more refined in another language.
