I'm dealing with multiple processes that read each other's drawables and thus need synchronization. XLockDisplay is supposed to "lock out all other threads" from using the display, but does that apply across multiple processes?
Also, do all processes need to call XInitThreads or just the one(s) calling XLockDisplay?
The XLockDisplay function (and the LockDisplay macro) must be used within a single X client, i.e. within one process; it makes no sense between X clients (that is, between two processes). It is a way to protect a single X connection against multiple threads of the same process accessing it at the same time (see, e.g., the GLX 1.4 specification, section 2.7).
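Each process initializes its own copy of Xlib, so this setup is done per process. A minimal sketch of what that looks like in a plain Xlib program (XInitThreads must be the first Xlib call the process makes):

    /* Minimal sketch: Xlib locking protects one connection against
       concurrent use by threads of the SAME process. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        /* Must be the first Xlib call this process makes. */
        if (!XInitThreads()) {
            fprintf(stderr, "Xlib threading support unavailable\n");
            return 1;
        }

        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        /* Any thread touching the connection brackets its Xlib calls: */
        XLockDisplay(dpy);
        /* ... Xlib calls on dpy ... */
        XUnlockDisplay(dpy);

        XCloseDisplay(dpy);
        return 0;
    }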
In order to read the whole content (buffer) of another window, you could take a look at any app that takes a screenshot of your desktop or of a single window (see the 'scrot' source code, for example).
If you want to exchange data between X clients, use window properties and atoms (see the Xlib ICCCM).
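A rough sketch of the publishing side, assuming a made-up atom name _MY_SHARED_DATA (the other client would read it back with XGetWindowProperty and can be notified of changes via PropertyNotify events):

    /* Hedged sketch: publish a string on a window property so another
       X client can read it. "_MY_SHARED_DATA" is a made-up atom name. */
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>
    #include <string.h>

    void publish(Display *dpy, Window win, const char *msg)
    {
        Atom prop = XInternAtom(dpy, "_MY_SHARED_DATA", False);
        XChangeProperty(dpy, win, prop, XA_STRING, 8, PropModeReplace,
                        (const unsigned char *)msg, (int)strlen(msg));
        XFlush(dpy);
    }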
Is it possible at all to read a value (presumably a variable, since it changes every few seconds and is shown on screen) from a process on Windows? The process is a custom, fairly old (~10 years) Windows GUI application that shows values (a part counter) from a manufacturing machine connected to it via some proprietary protocol (it even uses a dedicated PCI communications card).
I got the idea when reading about people modifying settings in a game (change high-score, change difficulty level, etc).
On Windows, there is an official API ReadProcessMemory for reading data from a process's memory:
ReadProcessMemory copies the data in the specified address range from the address space of the specified process into the specified buffer of the current process. Any process that has a handle with PROCESS_VM_READ access can call the function.
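A hedged sketch of the call I have in mind (the PID and the address are placeholders; finding them is the hard part):

    /* Sketch: read sizeof(int) bytes from another process's memory.
       pid and addr are placeholders I would still have to discover. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD pid = 1234;                       /* placeholder PID */
        LPCVOID addr = (LPCVOID)0x00ABCDEF;     /* placeholder address */

        HANDLE h = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
                               FALSE, pid);
        if (!h)
            return 1;

        int value = 0;
        SIZE_T got = 0;
        if (ReadProcessMemory(h, addr, &value, sizeof(value), &got) && got == sizeof(value))
            printf("value = %d\n", value);

        CloseHandle(h);
        return 0;
    }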
While I am hopeful that it works once the address/offset of the value in question is known, I am not so sure whether the application will allocate memory differently the next time it is started.
This is how I would approach it:
- continuously, e.g. every second:
  - take a screenshot of the application
  - take a process dump of the application (procdump from Sysinternals)
- analyse the process dump and try to find the location/offset of the value in question
- compare process dumps from different startups of the application to see whether the value is at the same offset
Is this feasible, or is it completely obvious that memory allocation is very dynamic (between restarts and even during runtime) and that an offset-based approach is doomed?
My process creates a log file and appends new lines to the end of it by opening the file in append mode, e.g.:

    fopen("log.txt", "a");
The order of the writes is not critical, but I need to ensure that fopen always succeeds. My question is: can the call above be executed from multiple processes at the same time on Windows, Linux and macOS without any race condition?
If not, what is the most common and easy way to ensure I can write to the log file? There is file locking, but a separate lock file (e.g. log.txt.lock) would also be possible. Could anyone share some insights or resources that go into more detail?
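To make the file-locking option concrete, here is a rough POSIX-only sketch of what I have in mind (Linux/macOS; I assume LockFileEx would play the same role on Windows):

    /* Sketch: hold an exclusive advisory lock while appending one line.
       POSIX only; error handling kept minimal. */
    #include <stdio.h>
    #include <sys/file.h>   /* flock */

    static int append_line(const char *path, const char *line)
    {
        FILE *f = fopen(path, "a");
        if (!f)
            return -1;

        flock(fileno(f), LOCK_EX);   /* block until we own the lock */
        fputs(line, f);
        fflush(f);                   /* flush while still holding the lock */
        flock(fileno(f), LOCK_UN);

        fclose(f);
        return 0;
    }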
If you do not use any synchronization between processes, you will very likely hit moments when several processes try to write to the file at once, and the best you can hope for is a mess of interleaved strings.
To synchronize work across several processes started with the multiprocessing module, use a Lock. It prevents several processes from doing the same piece of work at the same time.
It will look something like this:
    import multiprocessing

    def do_some_work(lock):
        with lock:                        # only one process writes at a time
            with open("log.txt", "a") as f:
                f.write("a line from a worker\n")

    if __name__ == "__main__":
        # create the lock in the main process and pass it to the children
        lock = multiprocessing.Lock()
        procs = [multiprocessing.Process(target=do_some_work, args=(lock,)) for _ in range(4)]
        for p in procs: p.start()
        for p in procs: p.join()
If you need a more detailed example, feel free to ask.
You can also check the example in the official docs.
Say I want to write a tail-like application for Windows to monitor a bunch of files. Such an application should report when any of the monitored files is updated by another application.
It can be assumed that the files being monitored are constantly being appended to by other processes, but not modified in any other way. Before implementing some polling solution (that is, iterate through the files to be monitored, seek to the end of each one, record that position, compare it to the previous end, etc.; roughly the sketch shown after this question), I would appreciate it if someone more experienced with overlapped IO could tell me whether I can make use of it.
For instance, is it possible to write the monitoring application in such a way that it opens all the files that need to be monitored, seeks to the end of each of them, and tries to read one byte with ReadFileEx(), registering a callback?
Is there a way to make this work so that when another process writes to one of the files, the proper callback is invoked? Or will the monitoring application necessarily always get an EOF for such a call?
Is this approach a sensible one? Or is it a bad idea?
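For reference, the polling fallback I mention would look roughly like this (poll_once is just a made-up helper name):

    /* Sketch of the polling approach: remember the last known size of each
       file and report growth. */
    #include <windows.h>
    #include <stdio.h>

    static void poll_once(const wchar_t *path, LARGE_INTEGER *last)
    {
        HANDLE h = CreateFileW(path, GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return;

        LARGE_INTEGER size;
        if (GetFileSizeEx(h, &size) && size.QuadPart > last->QuadPart) {
            wprintf(L"%ls grew by %lld bytes\n", path, size.QuadPart - last->QuadPart);
            *last = size;   /* next time, read from the old end to the new one */
        }
        CloseHandle(h);
    }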
I have two processes. I want to copy a few pages of one process to another process, such that the values of the variables in the first process become equal to the values of the variables of the second process whose pages are copied.
I am not looking for fork. I just want to copy a particular page from one process to another and have the first process point to the same memory area as the other process.
Any help would be great.
It sounds like you want mmap. Unfortunately, mmap can be used in many different ways, and since I don't know exactly what you want to do, I can't tell you which settings you need; it depends on exactly how you want to use it.
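If what you are after is two processes seeing the same physical pages, the usual pattern is a named shared mapping rather than copying an existing page. A hedged sketch (the name "/my_shared_page" is made up, error handling omitted):

    /* Sketch: both processes run this and end up with the SAME page of
       memory; writes by one are visible to the other.
       "/my_shared_page" is a made-up name; link with -lrt on older glibc. */
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    int *map_shared_counter(void)
    {
        int fd = shm_open("/my_shared_page", O_CREAT | O_RDWR, 0600);
        if (fd < 0)
            return NULL;
        ftruncate(fd, sizeof(int));          /* size the object to one int */

        void *p = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        close(fd);                           /* the mapping stays valid */
        return p == MAP_FAILED ? NULL : (int *)p;
    }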
I want to increase the throughput of a script that does network I/O (a scraper). Instead of making it multithreaded in Ruby (I use the default 1.9.1 interpreter), I want to launch multiple processes. Is there a system for doing this where I can track when one process finishes so I can relaunch it, keeping X of them running at any time? Also, some will run with different command-line arguments. I was thinking of writing a bash script, but that sounds like a potentially bad idea if there is already a method for doing something like this on Linux.
I would recommend not forking; instead, use EventMachine (and the excellent em-http-request if you're doing HTTP). Managing multiple processes can be a bit of a handful, even more so than handling multiple threads, but going down the evented path is, in comparison, much simpler. Since you mostly want to do network IO, which consists mostly of waiting, an evented approach should scale as well as, or better than, forking or threading. And most importantly: it requires much less code and is more readable.
Even if you decide on running separate processes for each task, EventMachine can help you write the code that manages the subprocesses using, for example, EventMachine.popen.
And finally, if you want to do it without EventMachine, read the docs for IO.popen, Open3.popen and Open4.popen. All do more or less the same thing but give you access to the stdin, stdout, stderr (Open3, Open4), and pid (Open4) of the subprocess.
You can try fork http://ruby-doc.org/core/classes/Process.html#M003148
It returns the PID of the child, so you can check whether that process has finished and needs to be run again.
If you want to manage IO concurrency, I suggest you use EventMachine.
You can either
implement (or find an equivalent gem) a ThreadPool (ProcessPool, in your case), or
prepare an array of all the tasks to be processed, say 1000 of them, split it into, say, 10 chunks of 100 tasks (10 being the number of parallel processes you want to launch), and launch 10 processes, each of which immediately receives its 100 tasks to work through. That way you don't need to launch 1000 processes and police that no more than 10 of them run at the same time.