I have written a Go program that runs many tasks concurrently (in goroutines) and shows the status of these tasks on the console like this:
[Ongoing] Task1
[Ongoing] Task2
[Ongoing] Task3
Any of these tasks can finish first or last. I want to update the status of each task in place. If Task2 finishes first, it should show something like this:
[Ongoing] Task1
[Done] Task2
[Ongoing] Task3
I tried the uilive library, but it always updates only the last line, like this (I think it's not meant for updating multiple lines):
[Ongoing] Task1
[Ongoing] Task2
[Done] Task2
How do I achieve this?
I was looking to implement similar functionality and came across this question. Not sure if you're still looking for a solution, but I found one :)
It turns out uilive can very well be used to update multiple lines, by making use of the Newline() function on a *uilive.Writer. Editing the example on their repository to write the download progress on multiple lines, we get:
package main

import (
    "fmt"
    "time"
    "github.com/gosuri/uilive"
)

func main() {
    writer := uilive.New()      // writer for the first line
    writer2 := writer.Newline() // writer for the second line
    // start listening for updates and render
    writer.Start()
    for i := 0; i <= 100; i++ {
        fmt.Fprintf(writer, "Downloading File 1.. %d %%\n", i)
        fmt.Fprintf(writer2, "Downloading File 2.. %d %%\n", i)
        time.Sleep(time.Millisecond * 5)
    }
    fmt.Fprintln(writer, "Finished downloading both files :)")
    writer.Stop() // flush and stop rendering
}
For implementing something to track the progress of multiple concurrently running tasks, a good approach, as @blami's answer also states, is to have a separate goroutine that handles all terminal output.
If you're interested, I've implemented this exact functionality in mllint, my tool for linting machine learning projects. It runs all its underlying linters (i.e. tasks) in parallel and prints their progress (running / done) in the manner that you describe in your question. See this file for the implementation of that. Specifically, the printTasks function reads from the list of tasks in the system and prints each task's display name and status on a new line.
Note that especially with multi-line printing, it becomes important to control when the print buffer is flushed, as you do not want your writer to automatically flush halfway through writing an update to the buffer. Therefore, I set the writer's RefreshInterval to something large (e.g. time.Hour) and call Flush() on the writer after printing all lines, i.e. at the end of printTasks.
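For illustration, here is a minimal sketch of that pattern, assuming the gosuri/uilive API described above (RefreshInterval, Flush); the Task struct and the timings are made up:

package main

import (
    "fmt"
    "sync"
    "time"

    "github.com/gosuri/uilive"
)

type Task struct {
    Name   string
    Status string // "Ongoing" or "Done"
}

func main() {
    tasks := []*Task{{"Task1", "Ongoing"}, {"Task2", "Ongoing"}, {"Task3", "Ongoing"}}
    var mu sync.Mutex

    writer := uilive.New()
    writer.RefreshInterval = time.Hour // effectively disable automatic flushing
    writer.Start()
    defer writer.Stop()

    // Simulate tasks finishing at different times.
    for i, t := range tasks {
        go func(i int, t *Task) {
            time.Sleep(time.Duration((i+1)*300) * time.Millisecond)
            mu.Lock()
            t.Status = "Done"
            mu.Unlock()
        }(i, t)
    }

    // Render all lines, then flush once, so no partial update is ever shown.
    for done := 0; done < len(tasks); {
        mu.Lock()
        done = 0
        for _, t := range tasks {
            fmt.Fprintf(writer, "[%s] %s\n", t.Status, t.Name)
            if t.Status == "Done" {
                done++
            }
        }
        mu.Unlock()
        writer.Flush()
        time.Sleep(50 * time.Millisecond)
    }
}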
Stdout is a stream, so you can't address an already-printed line or character and change it later. Such updates in CLI libraries are made by moving the cursor back (by printing escape sequences, i.e. non-printable character sequences that affect the user's terminal) and overwriting the text. For that, a reference point is needed so the library (or you) knows where the cursor is, how many lines were printed, etc.
One possible approach is to create a separate goroutine that handles all terminal output, and have the goroutines that do the actual work only communicate updates to it (e.g. over channels). Centralizing the terminal "state" in that routine should make it easier to update using the technique described above.
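A minimal sketch of that approach, using raw ANSI escape sequences rather than a library (it works on terminals that understand them; all names are illustrative). "\033[2K" erases the current line and "\033[<n>A" moves the cursor up n lines:

package main

import (
    "fmt"
    "time"
)

type update struct {
    index  int    // which task's line to change
    status string // new status text
}

func main() {
    names := []string{"Task1", "Task2", "Task3"}
    status := []string{"Ongoing", "Ongoing", "Ongoing"}
    updates := make(chan update)

    // Workers only send updates; they never print anything themselves.
    for i := range names {
        go func(i int) {
            time.Sleep(time.Duration((3-i)*400) * time.Millisecond)
            updates <- update{i, "Done"}
        }(i)
    }

    // The main goroutine owns the terminal: it knows how many lines it
    // printed and redraws all of them on every update.
    render := func() {
        for i, n := range names {
            fmt.Printf("\033[2K[%s] %s\n", status[i], n)
        }
    }
    render()
    for done := 0; done < len(names); done++ {
        u := <-updates
        status[u.index] = u.status
        fmt.Printf("\033[%dA", len(names)) // move the cursor back up
        render()
    }
}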
While not a drop-in solution for your situation, I recommend looking at mpb, a library that renders multiple asynchronously updating progress bars. Maybe you can design your solution in a similar way, or use it as a base, as it already handles differences between OSes, etc.
Related
I have a gRPC server (Go) that I want to start and stop via a command-line tool; after stopping, the server should perform some housekeeping tasks and exit the process.
I can do this by keeping a loop waiting for user input, e.g.:
func main() {
    for {
        var input string
        fmt.Scanln(&input)
        switch input {
        case "start":
            go start() // start the gRPC server
        case "stop":
            stop()             // stop the server
            housekeepingTask() // clean up
            return
        }
    }
}
There can be different approaches. Is there a better idea or approach that can be used?
I am looking for something similar to how Kafka or any database handles start and stop.
Any pointer to an existing solution or approach would be helpful.
I got one of the correct approaches, which was also suggested by @Mehran in a comment. But let me take a moment and answer it in detail.
This can be solved with inter-process communication. We can have a bash script that sends user signals to the program (if it is already running), which can then act on them. (We could even have an stdin file that the process reads and acts upon.)
Some useful links:
- https://blog.mbassem.com/2016/05/15/handling-user-defined-signals-in-go/
- Send command to a background process
- How to signal an application without killing it in Linux?
I will try to add a working Go program.
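A minimal sketch of this signal-based approach (startServer and housekeeping are hypothetical stand-ins for the real gRPC code; a control script would stop the process with kill -TERM <pid>):

package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
)

func startServer() {
    fmt.Println("server running...")
    // grpcServer.Serve(listener) would block here
}

func housekeeping() {
    fmt.Println("running housekeeping tasks...")
}

func main() {
    go startServer()

    // Block until a stop signal arrives.
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
    sig := <-sigs

    fmt.Printf("received %v, shutting down\n", sig)
    housekeeping()
    os.Exit(0)
}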
I'm trying to capture the output written by each task as it executes. The code below works as expected when running Gradle with --max-workers 1, but when multiple tasks run in parallel, it picks up output written by other tasks running simultaneously.
The API documentation states the following about the "getLogging" method on Task. From what it says, I judge that it should support capturing output from a single task regardless of any other tasks running at the same time.
getLogging()
Returns the LoggingManager which can be used to control the logging level and standard output/error capture for this task.
https://docs.gradle.org/current/javadoc/org/gradle/api/Task.html
graph.allTasks.forEach { Task task ->
    // Buffer that will collect everything this task writes to stdout/stderr
    task.ext.capturedOutput = []
    def listener = { task.capturedOutput << it } as StandardOutputListener
    task.logging.addStandardErrorListener(listener)
    task.logging.addStandardOutputListener(listener)
    // Detach the listeners once the task has finished
    task.doLast {
        task.logging.removeStandardOutputListener(listener)
        task.logging.removeStandardErrorListener(listener)
    }
}
Have I messed up something in the code above or should I report this as a bug?
It looks like every LoggingManager instance shares an OutputLevelRenderer, which is what your listeners eventually get added to. This did make me wonder why you weren't getting duplicate messages, since you're attaching the same listeners to the same renderer over and over again. But it seems the magic is in BroadcastDispatch, which keeps the listeners in a map, keyed by the listener object itself. So you can't have duplicate listeners.
Mind you, for that to hold, the hash code of each listener must be the same, which seems surprising. Anyway, perhaps this is working as intended, perhaps it isn't. It's certainly worth an issue to get some clarity on whether Gradle should support listeners per task. Alternatively, raise it on the dev mailing list.
I need to perform data analysis on files in a directory as they come in.
I'd like to know which is better:
1. implementing an event listener on the directory and starting the analysis process when it fires, then having the program sleep forever: while(true), sleep(1e10), end
2. having a loop that polls for changes and reacts.
I personally prefer the listener approach (option 1), as it can start the analysis twice for two new files arriving at nearly the same time, since they raise two events, while the polling solution might handle only the first one and discover the second file only later.
Additional idea for option 1: hiding the MATLAB GUI by calling frames = java.awt.Frame.getFrames and setting frames(index).setVisible(0) on the index matching the com.mathworks.mde.desk.MLMainFrame frame. (This idea is taken from Yair Altman.)
Are there other ways to realize such things?
In this case (if you are using Windows), the best way is to use the power of .NET.
fileObj = System.IO.FileSystemWatcher('c:\work\temp');
fileObj.Filter = '*.txt';
fileObj.EnableRaisingEvents = true;
addlistener(fileObj, 'Changed', @eventhandlerChanged);
There are different event types, you can use the same callback for them, or different ones:
addlistener(fileObj, 'Changed', @eventhandlerChanged);
addlistener(fileObj, 'Deleted', @eventhandlerChanged);
addlistener(fileObj, 'Created', @eventhandlerChanged);
addlistener(fileObj, 'Renamed', @eventhandlerChanged);
Where eventhandlerChanged is your callback function.
function eventhandlerChanged(source, arg)
    disp('TXT file changed')
end
There is no need to use sleep or polling. If your program is UI-based, then there is nothing else to do: when the user closes the figure, the program ends. The event callbacks are executed exactly like button clicks. If your program is script-like, you can use an infinite loop.
More info here: http://www.mathworks.com/help/matlab/matlab_external/working-with-net-events-in-matlab.html
I want to run a number of NSTasks that is unknown at compile time, with an unknown number of them (again, unknown at compile time, max. 8) running simultaneously. So basically I loop through a list of files, generate an NSTask for each, and run them until the maximum number of simultaneous tasks is reached; whenever one finishes, another NSTask starts, until all of them are done.
My approach would be to create a class that generates an NSTask and subclass it to change parameters here and there when there's different input (changes made from the interface). The superclass would then run the NSTask and have a synthesized (@synthesize) property returning its progress. Those objects would be generated in the repeat loop described above, and the progress would be displayed.
Is this a good way to go? If so, can someone give me a quick example of what the repeat loop would look like? I don't know how I would reference all the objects once they're running.
while (!done) {
    if (maxValue >= currentValue) {
        // Run an object with the next file.
        // Increment currentValue.
    }
    // Display progress; set done to YES when everything has finished,
    // and decrement currentValue whenever a task completes.
}
Thanks in advance.
There's no loop exactly.
Create an array for tasks not yet started, another with tasks that are running, and another with tasks that have finished. Have a method that pulls one task from the pending-tasks array, starts (launches) it, and adds it to the running-tasks array. After creating the arrays and filling out the pending-tasks array, call that method eight times.
When a task finishes, remove it from the running-tasks array and add it to the finished-tasks array, then check whether there are any tasks yet to run. If there's at least one, call the run-another-one method again. Otherwise, check whether any are still running: if not, all tasks have finished, and you can assemble the results now (if you haven't been displaying them live).
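The same bookkeeping, sketched in Go for illustration (exec.Cmd standing in for NSTask; the echo commands are placeholders):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Pending commands; in the real program these would come from the file list.
    var pending [][]string
    for i := 0; i < 20; i++ {
        pending = append(pending, []string{"echo", fmt.Sprintf("file-%d", i)})
    }

    const maxWorkers = 8
    done := make(chan error)
    running := 0

    // startNext pulls one pending command, launches it, and counts it as running.
    startNext := func() {
        args := pending[0]
        pending = pending[1:]
        running++
        cmd := exec.Command(args[0], args[1:]...)
        go func() { done <- cmd.Run() }()
    }

    // Launch the first eight.
    for running < maxWorkers && len(pending) > 0 {
        startNext()
    }

    // Each completion frees a slot for the next pending command.
    finished := 0
    for running > 0 {
        <-done
        running--
        finished++
        if len(pending) > 0 {
            startNext()
        }
    }
    fmt.Printf("all %d tasks finished\n", finished)
}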
Ryan Tomayko touched off quite a firestorm with this post about using Unix process control commands.
We should be doing more of this. A lot more of this. I'm talking about fork(2), execve(2), pipe(2), socketpair(2), select(2), kill(2), sigaction(2), and so on and so forth. These are our friends. They want so badly just to help us.
I have a bit of code (a delayed_job clone for DataMapper) that I think would fit right in with this, but I'm not clear on how to take advantage of the listed commands. Any ideas on how to improve this code?
def start
  say "*** Starting job worker #{@name}"
  t = Thread.new do
    loop do
      delay = Update.work_off(self) # work off queued jobs, returns sleep delay
      break if $exit
      sleep delay
      break if $exit
    end
    clear_locks
  end
  # Shut down gracefully on TERM or INT
  trap('TERM') { terminate_with t }
  trap('INT')  { terminate_with t }
  # USR1 wakes the worker up to poll immediately
  trap('USR1') do
    say "Wakeup Signal Caught"
    t.run
  end
end
Ahh yes... the dangers of "We should do more of this" without explaining what each of those calls does and in what circumstances you'd use them. For something like delayed_job, you may even be using fork without knowing it. That said, it really doesn't matter. Ryan was talking about using fork for preforking servers. delayed_job would use fork for turning a process into a daemon. Same system call, different purposes. Running delayed_job in the foreground (without fork) vs. in the background (with fork) will result in a negligible performance difference.
However, if you write a server that accepts concurrent connections, now Ryan's advice is right on the money.
- fork: creates a copy of the original process
- execve: stops executing the current file and begins executing a new file in the same process (very useful in rake tasks)
- pipe: creates a pipe (two file descriptors, one for read, one for write)
- socketpair: like a pipe, but for sockets
- select: lets you wait for one or more of multiple file descriptors to be ready, with a timeout
- kill: used to send a signal to a process
- sigaction: lets you change what happens when a process receives a signal
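As a small illustration of two of these in Go: pipe(2) via os.Pipe, and sigaction(2)/kill(2) via os/signal and syscall (Unix only; fork and execve have no direct safe equivalent in Go, where os/exec wraps them):

package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    // pipe(2): two connected file descriptors, one to read, one to write.
    r, w, err := os.Pipe()
    if err != nil {
        panic(err)
    }
    go func() {
        fmt.Fprintln(w, "hello over a pipe")
        w.Close()
    }()
    buf := make([]byte, 64)
    n, _ := r.Read(buf)
    fmt.Print(string(buf[:n]))

    // sigaction(2): change what happens when this process receives SIGUSR1.
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGUSR1)
    // kill(2): send ourselves that signal.
    syscall.Kill(os.Getpid(), syscall.SIGUSR1)
    fmt.Println("received:", <-sigs)
}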
5 months later, you can view my solution at http://github.com/antarestrader/Updater. Look at lib/updater/fork_worker.rb