How to save a .csv file to the C drive (MQL4)

I am trying to change the path where the .csv file is saved to the C drive (the root directory).
Here is the current code:
y="\"";
yy="\\";
Patch = "Csv_Files"+x2+Symbol()+x2;
I want the created file to be saved in the C drive (the root directory) instead of in the C:\Users\username\AppData\Roaming\MetaQuotes\Terminal\*********************\MQL4 directory.
How can I do that?
Thanks!

Short version
In pure MQL4, you cannot. MQL4 sandboxes all file I/O a priori: files may be created only in subdirectories of Terminal/MQL4/Files, or of Tester/Files when the code runs inside the Strategy Tester.
Still, how can it be done?
For security reasons, working with files is strictly controlled in the MQL4 language. Files handled by MQL4 file operations cannot be outside the file sandbox.
If FILE_COMMON is specified among the flags, the file is opened in a folder shared by all MetaTrader 4 client terminals (another bit of magic).
Nevertheless, you can set up distributed processing and orchestrate the file I/O so that it is performed externally by an unrestricted process: with peer-to-peer messaging or another DLL-based heterogeneous distributed-programming solution you can escape the sandbox and write files wherever you need, even on the completely opposite side of the Earth (remote logs et al.).
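As a minimal sketch of that idea, assuming a Cygwin-style shell and netcat are available (the port, path, and file name below are illustrative, not part of the original answer): an unrestricted helper process outside the terminal appends whatever arrives on a local socket to a file in the root of C:, while the MQL4 side pushes CSV lines to that socket through a socket DLL.
# run outside MetaTrader; the -l/-p flag syntax varies between netcat variants
while true; do
  nc -l -p 9090 >> /cygdrive/c/mt4_export.csv   # append each connection's data to the file
done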

Related

Is it possible to set access priority to files in Windows?

I am working on a system that has two parts. The parts are as follows:
1. A producer application that generates and appends files in space A; I cannot modify it (I must not make noise for this part).
2. A transformer application that copies the files from space A (near part 1) to space B.
If part 2 starts to copy a file from A to B and, during the copying, part 1 wants to append to the same file, part 1 will be stopped, because the file is under the control of part 2.
I want that, if part 1 wants to append to the file, control of the file is granted to part 1, and it is not important what happens to part 2.
Is it possible to do that in Windows?
If you are working within one computer system (not via a network), then in general the correct way would be to monitor the writes made by part 1 as they happen and copy the written data on the fly. This is achieved using a file system filter driver, which intercepts write operations and captures the data being written.

Why isn't the copy operation implemented in the kernel?

It's my understanding that most file I/O operations are implemented in the kernel, such as CRUD, move, or remove. However, file copy is not implemented as a kernel-level API.
To detect a file copy in the kernel, one would need to use a heuristic approach (there is discussion on this approach elsewhere), e.g. detecting file reads, file creates, and file writes from the same user with the same file name but different paths.
Why is copy a userland operation?
First, because caring about whether or not two different files have the same content, where one file's content is copied directly from the other, is a user-space concern that has no logical reason to exist inside a kernel.
At best.
Bytes are bytes.
Second, how would the kernel distinguish copying a file from any other transfer between two file descriptors? See the man page for sendfile(). Why should the kernel track whether the calling user invoked sendfile() to send the contents of a file to a TCP socket going who-knows-where, or to another file?
Third, even if the kernel tracked copying a file, what on God's good Earth would it do with such data?
If you care about such file copy events, set up auditing.
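On Linux, for example, such auditing can be set up with the audit subsystem; the watched path and key names below are illustrative:
# log reads and writes under a directory of interest (requires root and auditd)
auditctl -w /srv/data -p r -k copy_src    # record who read which files
auditctl -w /srv/data -p w -k copy_dst    # record who wrote which files
ausearch -k copy_src                      # inspect the recorded events later
Correlating read and write events on files with the same name but different paths is then the userland heuristic described above.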

Compare-and-Swap over POSIX-compliant filesystem objects

There are several operations which POSIX-compliant operating systems can perform atomically on filesystem objects (files and folders). Here is a list of such presumably atomic operations:
rename or move file or folder
create hardlink
create symlink
create folder
create and open an empty file
Is it possible to build a Compare-and-Swap algorithm for manipulating a file based on these operations?
Let's suppose we have several processes performing concurrent reads and writes on a single file. The file is characterized by its revision. Say the revision is part of the file name, and there is a symlink to the file which the processes can use to read it. The processes cannot (for some reason) synchronize with mutexes, semaphores, and so on, but they are able to create auxiliary files and folders. Are they able to perform revision-based Compare-and-Swap modifications of the file (create a new file, then create and rename a symlink), in the sense that if several processes try to modify it simultaneously, one will succeed and the rest will fail with some error code?
The algorithm has to be resistant to sudden termination of any process at any step of the algorithm.
Oh boy.
Let's assume that each process has access to a unique identifier, to avoid symmetry-breaking problems. Here's a wait-free implementation of a one-shot consensus object:
Create a directory with a unique name.
Create a file in that directory whose name is the creating process's input.
Rename the directory to the name of the consensus object. This will fail unless this is the first such rename.
List the directory we tried to rename our own directory to. The name of the file inside is the consensus decision.
Now it's possible to simulate an arbitrary object in a wait-free manner, using standard results in distributed computing. Have fun garbage collecting =P
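As a minimal sketch, here is that recipe in bash (the object and value names are illustrative; mv -T maps to rename(2) and is a GNU coreutils option, and the rename fails once the target directory exists and is non-empty):
# propose <object> <value>: prints the value every caller agrees on
propose() {
  local obj=$1 value=$2 tmp
  tmp=$(mktemp -d proposal.XXXXXX)   # 1. create a directory with a unique name
  : > "$tmp/$value"                  # 2. create a file named after our input
  mv -T "$tmp" "$obj" 2>/dev/null \
    || rm -rf "$tmp"                 # 3. rename; only the first proposer succeeds
  ls "$obj"                          # 4. the file inside is the consensus decision
}
Running propose winner alice and propose winner bob concurrently prints the same name in both processes: whichever rename landed first decided the value.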
If you add fcntl(2) to your list of atomic operations, you can easily build a general mutex primitive. I use the flock(1) command-line tool to do this in shell scripts regularly (flock(1) is part of the util-linux-ng package).
flock(2) is not specified by POSIX, but fcntl(2) is. I think flock(1) may use fcntl(2) in some cases (e.g. on NFS).
So the algorithm is something like:
1. Do a non-blocking fcntl() on some file that is unique to the resource you want to manipulate. This may be the data file itself, or some empty file that every process agrees to use as the mutex object.
2a. If the fcntl() succeeds, swap the data in the file.
2b. If the fcntl() fails, don't touch the data file.
3. Release the fcntl() lock on the file.
You could of course do a blocking fcntl(2), but there won't be any way to know in what order each process blocks and gets woken up, so whether this is appropriate depends on the application.
Note that fcntl(2) is advisory, so it won't prevent unwanted manipulation of the data file.
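In shell-script form, using the flock(1) tool mentioned above (the file names are illustrative, and the uppercasing transform merely stands in for whatever "swap the data" means in your application):
(
  flock -n 9 || exit 1                      # step 1: non-blocking lock on fd 9; step 2b: give up if held
  tmp=$(mktemp) &&
    tr 'a-z' 'A-Z' < data.txt > "$tmp" &&   # step 2a: rewrite the data...
    mv "$tmp" data.txt                      # ...and swap it in atomically
) 9> data.txt.lock                          # step 3: the lock is released when fd 9 closes
Note that the lock file is separate from the data file, so replacing the data with mv does not disturb the inode being locked.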

Move or copy and truncate a file that is in use

I want to be able to (programmatically) move (or copy and truncate) a file that is constantly in use and being written to. The goal is that the file being written to never grows too big.
Is this possible? Either Windows or Linux is fine.
To be specific what I'm trying to do is log video with FFMPEG and create hour long videos.
It is possible in both Windows and Linux, but it would take cooperation between the applications involved. If the application that is writing new data to the file is not aware of what the other application is doing, it probably would not work (well ... there is some possibility ... back to that in a moment).
In general, to get this to work, you would have to open the file shared. For example, if using the Windows API CreateFile, both applications would likely need to specify FILE_SHARE_READ and FILE_SHARE_WRITE. This would allow both (multiple) applications to read and write the file "concurrently".
Beyond sharing the file, though, it would also be necessary to coordinate the operations between the applications. You would need some kind of locking mechanism (either locking some part of the file or a shared mutex/semaphore). Note that if you use file locking, you could lock some known offset in the file to act as a "semaphore" (it can even be a byte value beyond the physical end of the file). If one application were appending to the file at the exact moment the other application was truncating it, the result would be unpredictable.
Back to the comment about both applications needing to be aware of each other: it is possible that if both applications opened the file exclusively and kept retrying the operations until they succeeded, then performed the operation and closed the file, they could essentially work without "knowledge" of each other. However, that would probably not work very well and would not be very efficient.
Having said all that, you might want to consider alternatives for efficiency reasons. For example, if it were possible to have the writing application write to new files periodically, it might be more efficient than having to "move" the data constantly out of one file to another. Also, if you needed to maintain some portion of the file (e.g., move out the first 100 MB to another file and then move the second 100 MB to the beginning) that could be a fairly expensive operation as well.
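Since the poster is already capturing with FFmpeg, the "write to new files periodically" suggestion can be had directly from FFmpeg's segment muxer, which rolls over to a fresh output file on a fixed interval; the input URL and output pattern below are illustrative:
# start a new hour-long file every 3600 seconds, without re-encoding
ffmpeg -i rtsp://camera/stream -c copy \
       -f segment -segment_time 3600 \
       -reset_timestamps 1 -strftime 1 \
       capture_%Y%m%d_%H%M%S.mp4
This sidesteps the shared-file problem entirely: no file is ever both in use and being moved.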
logrotate would be a good option in Linux; it comes stock on just about any distro. I'm sure there's a similar Windows service out there somewhere.

Command-line wisdom for a two-panel file manager user

I want to upgrade my file-management productivity by replacing a two-panel file manager with the command line (bash or Cygwin). Can the command line give the same speed? Please advise on a guru's way of doing, e.g., a copy of some file in directory A to directory B. Is it heavy use of pushd/popd? Or the creation of links to the most often used directories? What are the best practices and day-to-day routine of a command-line master for managing files?
Can the command line give the same speed?
My experience is that command-line copying is significantly faster (especially in the Windows environment). Of course the basic laws of physics still apply: a file that is 1000 times bigger than a file that copies in 1 second will still take 1000 seconds to copy.
(How to) copy some file in directory A to directory B.
Because I often have 5-10 projects that use similar directory structures, I set up variables for each subdir using a naming convention:
project=NewMatch
NM_scripts=${project}/scripts
NM_data=${project}/data
NM_logs=${project}/logs
NM_cfg=${project}/cfg
proj2=AlternateMatch
altM_scripts=${proj2}/scripts
altM_data=${proj2}/data
altM_logs=${proj2}/logs
altM_cfg=${proj2}/cfg
You can make this sort of thing as spartan or baroque as needed to match your theory of living/programming.
Then you can easily copy the cfg files from one project to another:
cp -p $NM_cfg/*.cfg ${altM_cfg}
Is it heavy use of pushd/popd?
Some people seem to really like that. You can try it and see what you think.
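For reference, a quick illustration of the pushd/popd pattern (the directory and file names are made up):
pushd ~/projects/A             # go to A; the old directory is pushed onto a stack
cp -p report.csv ~/projects/B  # work from A without losing your place
popd                           # pop the stack: jump straight back
dirs -v                        # show the directory stack at any time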
Or the creation of links to the most often used directories?
Links to dirs are, in my experience, used more in software development, where source code expects a certain set of directory names and your installation has different names; making links to supply the expected dir paths is helpful. For production data, a link is just one more thing that can get messed up or blow up. That's not always true; maybe you'll have a really good reason to have links, but I wouldn't start out that way just because it is possible to do.
What are the best practices and the day-to-day routine of a command-line master for managing files?
(Per the above, use a standardized directory structure for all projects.
Have scripts save any small files to a directory your department keeps under /tmp,
e.g. /tmp/MyDeptsTmpFile, named to fit your local conventions.)
It depends. If you're talking about data and logfiles, dated file names can save you a lot of time. I recommend date formats like YYYYMMDD(_HHMMSS) if you need the extra resolution.
Dated logfiles are very handy: when a current process seems to be taking a long time, you can look at the logfile from a week, a month, or six months ago (up to however much space you can afford) and quantify exactly how long this process took. Logfiles should also capture all STDERR messages, so you never have to re-run a bombed program just to see what the error message was.
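A typical pattern for such dated logs (the script and directory names are illustrative):
mkdir -p ~/logs
log=~/logs/nightly_$(date +%Y%m%d_%H%M%S).log   # YYYYMMDD_HHMMSS resolution
./nightly_job.sh > "$log" 2>&1                  # 2>&1 captures STDERR in the same log
/bin/ls -ltr ~/logs                             # most recent runs at the bottom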
This is Linux/Unix you're using, right? Read the man page for the cp command installed on your machine. I recommend using an alias like alias CP='/bin/cp -pi', so you always copy a file with the same permissions and the original file's time stamp. It is then easy to use /bin/ls -ltr to see a list of files sorted by time, with the most recent files at the bottom (no need to scroll back to the top when you sort by time, reversed). Also, the -i option will warn you when you are about to overwrite a file, and this has saved me more than a couple of times.
I hope this helps.
P.S. As you appear to be a new user: if you get an answer that helps you, please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
