We have a QProcess that runs a bash script. The script finishes properly and produces the expected output, but the finished() signal then takes a very long time (minutes) to be emitted. The script generates an encrypted tarball from a list of files passed as an argument. The final bundle is sitting on disk, intact, but QProcess takes a very long time to report completion. This prevents our UI from moving on to the next task, because we need to confirm programmatically that the script has run to completion, rather than by inspection. We're not doing anything other than:
connect(&myProcess, SIGNAL(finished(int, QProcess::ExitStatus)), this, SLOT(tidyUp()));
myProcess.start();
We can monitor the size of the file with Qt, and we have an estimate of its final size based on the file list we feed the script, but the script hangs around for a very long time after the file has reached its estimated size. We've inserted sync statements, but they don't seem to have any effect. When the script is run on the command line, the file grows and the script exits as soon as the file reaches its final size.
Why is QProcess not emitting its finished() signal immediately after the script completes?
We would very much like to attach a progress bar indicating the percentage of the file size produced so far, or give some other indication of progress, but we're stumped by this behavior. We've tried both a worker object moved to a QThread and running the QProcess directly in a busy loop that calls processEvents(), to no avail.
Turns out this was a problem with the commands in my pipe. GPG produces the file fairly quickly (with quite variable timing), but then often spends a lot of time working after the file itself has reached its final size. I'm not sure what it's doing, or why it only happens on some runs of the same content, but eventually it finishes, the script completes, and I get my finished() signal delivered. I may have to put a more elaborate progress bar in place that switches to a busy indicator if the file size hasn't changed for a while (something like the sketch below), but it appears that Qt is working as expected here after all.
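For anyone else who hits this, a minimal sketch of what such a progress bar could look like: poll the output file's size with a QTimer and flip the bar into indeterminate ("busy") mode once the size stops growing. bundlePath and estimatedBytes are placeholders for the script's output path and the size estimate.

// Sketch only: poll the growing bundle and drive a QProgressBar.
#include <QFileInfo>
#include <QObject>
#include <QProgressBar>
#include <QTimer>

class BundleProgress : public QObject
{
    Q_OBJECT
public:
    BundleProgress(const QString &bundlePath, qint64 estimatedBytes,
                   QProgressBar *bar, QObject *parent = 0)
        : QObject(parent), m_path(bundlePath), m_estimated(estimatedBytes),
          m_bar(bar), m_lastSize(-1), m_stalledTicks(0)
    {
        m_bar->setRange(0, 100);
        connect(&m_timer, SIGNAL(timeout()), this, SLOT(poll()));
        m_timer.start(500);                          // poll twice a second
    }

public slots:
    void stop()                                      // connect this to finished()
    {
        m_timer.stop();
        m_bar->setRange(0, 100);
        m_bar->setValue(100);
    }

private slots:
    void poll()
    {
        const qint64 size = QFileInfo(m_path).size();
        if (size == m_lastSize && size > 0) {
            if (++m_stalledTicks > 6)                // ~3 s without growth:
                m_bar->setRange(0, 0);               // indeterminate "busy" mode
        } else {
            m_stalledTicks = 0;
            m_bar->setRange(0, 100);
            if (m_estimated > 0)
                m_bar->setValue(int(qMin<qint64>(100, size * 100 / m_estimated)));
        }
        m_lastSize = size;
    }

private:
    QString m_path;
    qint64  m_estimated;
    QProgressBar *m_bar;
    QTimer  m_timer;
    qint64  m_lastSize;
    int     m_stalledTicks;
};

The stop() slot would then be connected to the process's finished() signal alongside tidyUp().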
Related
I have a program which uses the Linux inotify syscall to monitor files generated in a folder.
The program monitors file sizes, so it uses the IN_MODIFY flag for the file. Since the file could be written at a fast rate, and we don't want the inotify queue to overflow, the mask also includes IN_ONESHOT, which makes inotify delete the watch when it sends an event for the file modification. The program then adds a watch again, and the process repeats, as described in the following loop.
eventLoop:
1. program adds a watch on a file, gets a watch descriptor (say a0)
2. file gets modified, program gets an event on watch descriptor a0
3. (since IN_ONESHOT was used, the watch is auto-deleted by inotify now)
4. program handles the modify (0x2) event and does its logic
5. program adds a watch again on the same file, gets a new watch descriptor (say a1) (this step is the same as step 1)
While testing, it is observed that after some iterations of the above loop, the watch descriptor returned in step 5 is the same as the one returned in step 1, and once this happens, no more events are received for file changes; the loop essentially halts.
To find out more about the behavior, I ran the same program under strace. The same problem occurs, but after a much longer time: where it previously occurred within 2 minutes, under strace it takes 3-4 hours. But it does occur.
Is this a known issue? Am I doing something wrong in my code? (The code is in Go, running in a container on a Kubernetes cluster.)
Just the fact that the problem takes longer to appear under strace makes me wonder whether the problem lies inside inotify. (I am figuring out how to trace the kernel, but I may not succeed at that.)
Update
I modified the program to retry if the watch descriptor returned by inotify is the same as the old one. Adding this retry in a loop makes the program continue with the expected behavior; the loop is sketched below.
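A minimal sketch of the watch/re-add loop with that retry. The real program is in Go; this is the same logic written in C++ directly against the inotify calls, with a placeholder path and handler.

#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    const char *path = "/watched/output.dat";         // placeholder
    int fd = inotify_init1(0);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    int wd = inotify_add_watch(fd, path, IN_MODIFY | IN_ONESHOT);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));

    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);       // blocks until events arrive
        if (len <= 0) break;

        bool sawModify = false;
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            if (ev->mask & IN_MODIFY) {
                sawModify = true;
                // ... handle the modification (check the file size, etc.) ...
            }
            p += sizeof(struct inotify_event) + ev->len;
        }
        if (!sawModify)
            continue;           // e.g. only the IN_IGNORED for the removed watch

        // IN_ONESHOT has removed the watch, so add it again.
        // Workaround from the update: retry while the kernel hands back the
        // same descriptor as before (note this can spin if it never changes).
        int newWd;
        do {
            newWd = inotify_add_watch(fd, path, IN_MODIFY | IN_ONESHOT);
        } while (newWd >= 0 && newWd == wd);
        if (newWd < 0) { perror("inotify_add_watch"); break; }
        wd = newWd;
    }
    close(fd);
    return 0;
}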
TL;DR: I need something way faster than FSO.write OR another way to share a variable in memory between different script instances.
Hello, I am running CCPulse (on Windows 7), which is a call center monitoring tool. Agents are represented as "objects" and can have various statistics (like calls taken, total talk duration, etc.). CCPulse allows you to apply thresholds and actions to any statistic. These are basically VBScripts and, as far as I can tell, there are no restrictions.
This allows me to take the "Threshold StatValue" and do things with it, e.g. write it to a file. The issue is that if I apply a threshold to a statistic for all agents, the script executes for each agent object separately (in sequence, not in parallel). However, I want to export all the agent stats to a single CSV file.
I already got it working by creating the file if it doesn't exist, then opening it and reading it into a string with ReadAll. If an agent has not been written to the file yet, his stat values are appended to the string as a new line; if he already exists in the file, I search and replace his line using a regex pattern. I then write the entire multiline string back to the file:
Set objFile = objFSO.OpenTextFile(inFile, 2)  ' 2 = ForWriting (overwrites the file)
objFile.Write strMemoryBuffer
objFile.Close
Set objFile = Nothing
strMemoryBuffer contains the file's original content, with either a new line added or an existing line modified. This string (and subsequently the export file) is around 30 KB in size after all agents have been exported. It looks like this (simplified):
LoginID;Calls;TotalTalkTime
2243;08;9403
2132;12;8439
As I said, since the script runs separately for each agent, only one line is ever added or modified per pass (CCPulse executes the script one object at a time, until all are finished).
The write process is very slow, however: according to Timer() it needs between 0.10 and 0.15 seconds! That is way too slow, as I need to run the script on almost 500 agents (ideally at intervals of no more than 30 seconds), but the writing alone would take over a minute. (CCPulse would build up a backlog of threshold operations that could never be finished. I can decrease the recalculation frequency, but that is detrimental in other ways.)
If I comment out just the write block above, execution time drops dramatically to ~0.02 seconds. So reading the file and manipulating the string take almost no time at all; only the write is slow.
I am writing the file locally to a hard drive (no SSD, though). I cannot use a RAM disk.
I also tried writing to the volatile environment, but somehow this is even slower (it works, but for some reason the explorer process goes crazy with up to 50% CPU usage and CCPulse locks up, although the export file is still being updated).
The ideal solution would be to have the string manipulated repeatedly in memory only, and then written to the file just once every 30 seconds or so, but I don't know how I can make the strMemoryBuffer variable available to the "next" agent. Any ideas?
I have a shell script which runs very large simulation binaries. This becomes problematic when I want to print the values of some variables from the script while it runs. For instance, when I run 10 large simulations, I want to be able to print which iteration I am on without having to wait a minute or two for the current simulation to terminate.
Currently, I am using the trap command. However, the script does not react immediately to signals; it only executes the bound function once the current iteration terminates. I will post the code if anyone needs it.
You should start threads for each large thing you're going to run. Have those threads dump their results somewhere; then your main method is free to interrogate the results on the fly. A rough sketch of this is below.
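A rough illustration of that idea in C++ (the original is a shell script, and ./simulate and the config file names are placeholders). The underlying issue is that bash runs a trap handler only after the current foreground command has finished, which is why the trap seems unresponsive; with the simulations driven from a worker thread, the main thread stays free to report the current iteration the moment a signal arrives.

#include <atomic>
#include <chrono>
#include <csignal>
#include <cstdlib>
#include <iostream>
#include <string>
#include <thread>

std::atomic<int>  g_iteration(0);
std::atomic<bool> g_report(false);

void onUsr1(int) { g_report = true; }      // signal handler just sets an atomic flag

int main()
{
    std::signal(SIGUSR1, onUsr1);

    std::thread worker([] {
        for (int i = 1; i <= 10; ++i) {
            g_iteration = i;
            // placeholder command: one large simulation per iteration
            std::system(("./simulate config_" + std::to_string(i) + ".in").c_str());
        }
        g_iteration = -1;                  // signal completion
    });

    // The main thread is never blocked by the running binary, so it can
    // answer "which iteration am I on?" immediately instead of waiting
    // for the current simulation to terminate.
    while (g_iteration != -1) {
        if (g_report.exchange(false))
            std::cout << "currently on iteration " << g_iteration << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
    worker.join();
    return 0;
}

Sending SIGUSR1 to the process (kill -USR1 <pid>) then prints the current iteration immediately.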
I'm currently using this example as a guide to redirect standard error of a child process launched by CreateProcess.
However, unlike the example, I'm currently waiting until the process finishes (checking GetExitCodeProcess), closing the pipe and then reading the error output if a non-zero return code comes back.
However, I've since read that if the pipe fills up, the child process will block until the pipe is cleared. The reason I'm not currently reading from the pipe during execution is that the ReadFile call blocks (the standard error is only output at the end), so I can't pump the message queue to keep the GUI from "ghosting" and being marked not responding.
I can't find any reference to how big the pipe is by default (although I can set a size myself). Is this something I need to worry about, given that I'm buffering the output into a string variable for later use anyway? (i.e. it would need to fit into the available memory of the process, so it has a hard limit there; it's not going to a file like most of the examples use)
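For reference, this is roughly the kind of loop I would need if I did read during execution: only read what PeekNamedPipe reports as available, so ReadFile can never block, and pump the message queue in between. hChildStdErrRead and pi stand for the read end of the redirected stderr pipe and the PROCESS_INFORMATION returned by CreateProcess.

#include <windows.h>
#include <string>

// Sketch: collect the child's stderr while keeping the GUI responsive.
std::string DrainChildStdErr(HANDLE hChildStdErrRead, const PROCESS_INFORMATION &pi)
{
    std::string output;
    for (;;) {
        // Keep the message queue pumped so the window is not marked "not responding".
        MSG msg;
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        // Only read what is already buffered, so ReadFile cannot block.
        DWORD avail = 0;
        if (PeekNamedPipe(hChildStdErrRead, NULL, 0, NULL, &avail, NULL) && avail > 0) {
            char buf[4096];
            DWORD bytesRead = 0;
            DWORD toRead = avail < sizeof(buf) ? avail : (DWORD)sizeof(buf);
            if (ReadFile(hChildStdErrRead, buf, toRead, &bytesRead, NULL) && bytesRead > 0)
                output.append(buf, bytesRead);
            continue;                                // there may be more waiting
        }

        // Nothing pending: see whether the child has exited yet.
        if (WaitForSingleObject(pi.hProcess, 50) == WAIT_OBJECT_0) {
            // Final drain in case data arrived between the peek and the wait.
            while (PeekNamedPipe(hChildStdErrRead, NULL, 0, NULL, &avail, NULL) && avail > 0) {
                char buf[4096];
                DWORD bytesRead = 0;
                DWORD toRead = avail < sizeof(buf) ? avail : (DWORD)sizeof(buf);
                if (!ReadFile(hChildStdErrRead, buf, toRead, &bytesRead, NULL) || bytesRead == 0)
                    break;
                output.append(buf, bytesRead);
            }
            break;
        }
    }
    return output;
}

GetExitCodeProcess can still be called afterwards as before. A tidier variant would use MsgWaitForMultipleObjects to wake on either a window message or process exit instead of the 50 ms poll; either way, once the pipe is drained continuously, its default buffer size should no longer matter.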
Hi, I am trying to use screen as part of a cron job.
Currently I have the following command:
screen -fa -d -m -S mapper /home/user/cron
Is there any way I can make this command do nothing if the mapper screen already exists? The mapper is run from a half-hourly cron job, but sometimes the mapping takes more than half an hour to complete, so the runs end up overlapping, slowing each other down and sometimes even causing the next one to be slow, and I end up with lots of mapper screens running.
Thanks for your time,
ls /var/run/screen/S-"$USER"/*.mapper >/dev/null 2>&1 || screen -S mapper ...
This checks whether any screen sessions named mapper exist for the current user, and launches a new one only if none do.
Why would you want a job run by cron, which (by definition) does not have a terminal attached to it, to do anything with screen? According to Wikipedia, 'GNU Screen is a software application which can be used to multiplex several virtual consoles, allowing a user to access multiple separate terminal sessions'.
However, assuming there is some reason for doing it, then you probably need to create a lock file which the process checks before proceeding. At this point, you need to run a shell script from the cron entry (which is usually a good technique anyway), and the shell script can check whether the previous run of the task has completed, exiting if not. If the previous incarnation is complete, then the current incarnation creates a lock file containing its PID and runs the job. When it completes, it removes the lock file. You should review the shell trap command, and make sure that the lock file is removed if the shell script exits as a result of a trappable signal (you can't trap KILL and some process-control signals).
Judging from another answer, the screen program already creates lock files; you may not have to do anything special to create them - but will need to detect whether they exist. Also check the GNU manual for screen.