How to force garbage collection from Shell in Go

You can force garbage collection in Java using jcmd <pid> GC.run, as shown at this StackOverflow link: How do you Force Garbage Collection from the Shell? . I understand that forcing garbage collection is frowned upon, but I was wondering if there is a similar command for Go. Like that question, I would like to know whether garbage collection can be triggered from the command line instead of calling runtime.GC().

It is not possible to trigger the GC from the command line; remember that your program is a standalone compiled binary. If you need to "force" the GC to run at certain times, you could use one of two approaches:
In your app, watch for the existence of a file (e.g. with inotify), and run the GC when the file appears
In your app, wait for a signal from the operating system (on Linux), such as SIGUSR1, and run the GC when it arrives. You then send the signal from the console using:
kill -10 pid
Where pid is the identifier of the running program as it appears in ps aux. (On most Linux platforms signal 10 is SIGUSR1, so kill -USR1 pid is equivalent and easier to read.)
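A minimal sketch of the signal-based approach, using a background shell with a trap as a stand-in for your Go program (in the real app you would register the handler with signal.Notify from package os/signal and call runtime.GC() when the signal arrives; the /tmp/gc.log path is just for the demo):

```shell
# Stand-in for the Go program: a background "app" that waits for SIGUSR1.
rm -f /tmp/gc.log
sh -c 'trap "echo GC-requested > /tmp/gc.log; exit 0" USR1
       while :; do sleep 1; done' &
pid=$!

sleep 1              # give the "app" time to install its handler
kill -USR1 "$pid"    # same as kill -10 on most Linux platforms, but portable
wait "$pid" 2>/dev/null
cat /tmp/gc.log      # -> GC-requested
```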

Related

How do I know who calls the process on macOS?

An unidentified process on macOS periodically creates an empty temporary file in my Downloads folder but doesn't remove it later
I managed to figure out that the immediate culprit is mktemp, but I want to understand which process calls it.
I figure that I can use $ lldb > process attach --name mktemp --waitfor to attach to mktemp when it's launched, but I can't figure out how to find out who called it in the first place.
Is there any solution, with or without lldb, to find this out?
That's actually a little trickier than you might think.
You can find the parent process of a given process on macOS easily by running ps -j <PID> through Terminal. The parent pid is the third column in the output. Or Activity Monitor has a hierarchical display that shows these relationships graphically.
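As an aside, you can avoid counting columns: ps -o ppid= prints just the parent pid, and works the same way on macOS and Linux. A small sketch:

```shell
# Print the parent pid of a process without parsing columns by hand.
pid=$$                                     # substitute the pid you care about
ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')  # ppid= suppresses the header line
echo "parent of $pid is $ppid"
```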
lldb prints the pid of the process it is attaching to when it attaches, or you can find it in the output of the lldb command target list. So that's easy to find...
However, for technical reasons, when the debugger attaches to a process that process gets "reparented" by the kernel to the debugger. So if you ask a process running under lldb who its parent process is, the answer will always be "debugserver" - which is lldb's debugger stub. And there isn't an easy way to see what the original parent was.
I got this to work, though this is a bit of a hack. You want to suspend the process you are debugging so it doesn't exit on you, then detach from it so it gets reparented back to the real parent. So:
In lldb, run:
(lldb) expr (void) task_suspend((void *)mach_task_self())
Since that suspended the task before returning, the command won't actually complete. So
Use ^C in the lldb console to interrupt the expression evaluation. The target process is already suspended so this will just cancel the task_suspend return code.
Now detach:
(lldb) detach
When you do that the mktemp process will be reparented by the system back to its original parent.
Now you can run ps -j in Terminal and find the process you are looking for, and it will be the original parent.
If you need to get the process running again, attach to it with lldb again and call task_resume with the same arguments as you called task_suspend above.

Julia invoke script on existing REPL from command line

I want to run a Julia script from the Windows command line, but it seems that every time I run > julia code.jl, a new instance of Julia is created, and the startup time (package loading, compilation?) is quite long.
Is there a way for me to skip this initiation time by running the script on the current REPL/Julia instance? (which usually saves me 50% of running time).
I am using Julia 1.0.
Thank you,
You can use include:
julia> include("code.jl")
There are several possible solutions. All of them involve different ways of sending commands to a running Julia session. The first few that come to my mind are:
use sockets as explained in https://docs.julialang.org/en/v1/manual/networking-and-streams/#A-simple-TCP-example-1
set up an HTTP server, e.g. using https://github.com/JuliaWeb/HTTP.jl
use named pipes, as explained in Named pipe does not wait until completion in bash
communicate e.g. through the file system (e.g. make Julia scan some folder for .jl files and if it finds them there they get executed and moved to another folder or deleted) - this is probably simplest to implement correctly
In all the solutions you can send the command to Julia by executing some shell command.
No matter which approach you prefer, the key challenge is handling errors properly (i.e. the situation where you send some command to the Julia session and it crashes, or where you send requests faster than Julia can handle them). This is especially important if you want the Julia server to be detached from the terminal.
As a side note: when using the Distributed module from stdlib in Julia for multiprocessing you actually do a very similar thing (but the communication is Julia to Julia), so you can also have a look at how this module is implemented to get a feeling for how it can be done.
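The named-pipe option can be sketched in shell, with a background loop standing in for the long-running Julia session (in Julia you would read lines from the fifo and include/eval them; the /tmp paths below are made up for the demo):

```shell
rm -f /tmp/julia_cmds /tmp/julia_out
mkfifo /tmp/julia_cmds

# The "server" -- your persistent session -- reads commands from the fifo.
sh -c 'while read -r cmd < /tmp/julia_cmds; do
           echo "running: $cmd" >> /tmp/julia_out
           [ "$cmd" = quit ] && exit 0
       done' &

# The "client" -- run from any shell, with no Julia startup cost.
echo 'include("code.jl")' > /tmp/julia_cmds
echo quit > /tmp/julia_cmds
wait
cat /tmp/julia_out
```

Each write to the fifo blocks until the server reopens it for reading, which is what makes the simple reopen-per-command loop work.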

How to Get Child Process IDs Without the jobs Command

Okay, so I'm working with a shell script that needs to be as portable as possible, and I want to make sure I can cleanly tidy up any child-processes using a trap command.
Now, on more recent platforms the jobs -p command can be used to get a list of child process ids, suitable for throwing straight into a kill command to tidy things up without any fuss.
However, some environments don't have this. To work around this I'm using a variable into which I throw process IDs, but it's messy, and a typo could result in some or all child processes not being killed when they should be.
So, in the absence of the jobs command, what alternatives are there? Or put another way, what is the most compatible method to kill all child-processes of a script?
To give you an idea of the potential limitations: the most basic system I need to work with has no pgrep, and only a basic version of ps supporting just the -w flag. It does have access to the special files under /proc/$$/, but I'm not sure what to do with those (do any of them even list child processes?). This has been a big part of the difficulty, as many similar questions list solutions using tools I don't have access to. I just love compatibility issues =)
You can get the pid of the most recent background child with $!:
some_command &
child_pid=$!
Note that $! only ever holds the last background pid, so record it immediately after each &. (On newer Linux kernels, /proc/<pid>/task/<pid>/children lists child pids directly, but it is an optional, relatively recent feature, so don't rely on it for portability.)
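Putting it together, a sketch of the record-as-you-go pattern: append $! to a variable immediately after every &, and kill the recorded pids from your trap:

```shell
# Portable child tracking without `jobs -p`.
pids=""

cleanup() {
    # $pids is space-separated, so it is left unquoted on purpose
    if [ -n "$pids" ]; then
        kill $pids 2>/dev/null || :
    fi
}
trap cleanup EXIT INT TERM

sleep 100 & pids="$pids $!"
sleep 100 & pids="$pids $!"

# ... real work here; the trap tidies up the children on exit ...
```

Recording $! on the same line as the & makes the typo risk the question mentions much smaller, since there is no gap in which another background job could overwrite $!.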

Custom shell script to kill uWSGI by pid

I'm trying to write a script which will grab the text from a file (/tmp/pidfile.txt), which is just a pid number, store it in a variable (say pidvar), and execute the following:
kill -2 pidvar
Seems simple enough, I just don't know how to grab the pid from the .txt file. I have python installed if this helps. I'm trying to make it easier to kill uWSGI; any suggestions on an alternative would be welcome.
Thanks in advance for any help.
The literal answer to your question (using a bash extension to be slightly more efficient) would be
kill -2 "$(</tmp/pidfile.txt)"
...or, to be compatible with POSIX sh but slightly less efficient...
kill -2 "$(cat /tmp/pidfile.txt)"
...but don't do it either of those ways.
pidfiles are prone to race conditions, whereas process-tree-based supervision systems can guarantee that they only ever deliver a signal to the correct process.
runit, daemontools, Upstart, systemd, and many other alternatives are available which will guarantee that there's no risk of sending a signal to the wrong process based on stale data. CentOS is probably the last major operating system that doesn't ship with one of these (though future versions will almost certainly use systemd), but they're available as third-party packages -- and if you want your system to be reliable (detecting unexpected failures and restarting services as soon as they go down, for instance, without having to do it with your own code), you should be using one of them.
For instance, with systemd:
systemctl kill -s SIGINT uwsgi.service
...or, with runit:
sv interrupt uwsgi
...whereas with upstart, you can configure a completely arbitrary restart command to be triggered on initctl reload uwsgi.
For general best-practices documentation on using shell scripts for process management, see the appropriate page on the wooledge.org wiki, maintained by irc.freenode.org's #bash channel.
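That said, if you are stuck with a pidfile for the moment, at least validate what you read before signalling, so an empty or stale file never turns into kill -2 "" or a signal to an unrelated process that recycled the pid (the helper name and the demo pid below are made up):

```shell
kill_from_pidfile() {
    pid=$(cat "$1" 2>/dev/null)
    case "$pid" in
        ''|*[!0-9]*) echo "no valid pid in $1"; return 1 ;;
    esac
    # kill -0 only checks that the process exists; it sends no signal
    if kill -0 "$pid" 2>/dev/null; then
        kill -2 "$pid"
    else
        echo "stale pidfile: no process $pid"; return 1
    fi
}

echo 99999999 > /tmp/pidfile.txt         # demo: a pid larger than any real one
kill_from_pidfile /tmp/pidfile.txt || :  # -> stale pidfile: no process 99999999
```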
It is generally easier to ask uwsgi to kill itself. You can do this by using the "master-fifo" option in your config, and then send a "q" to the fifo. This is described here: http://uwsgi-docs.readthedocs.org/en/latest/MasterFIFO.html.
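A sketch, assuming the instance was started with --master-fifo /tmp/uwsgi-fifo (or the equivalent master-fifo entry in its config file); writing single characters to the fifo drives the master, and q requests a graceful shutdown:

```shell
# Ask the uWSGI master to shut down gracefully via its master FIFO.
echo q > /tmp/uwsgi-fifo
```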

How to handle abnormal program termination in Perl on Windows

I have a Perl program on Windows that needs to execute cleanup actions on exit. I wrote a signal handler using sigtrap, but it doesn't always work. I can intercept Ctrl-C, but if the machine is rebooted or the program is killed some other way, neither the signal handler nor the END block are run. I've read that Windows doesn't really have signals, and signal handling on windows is sort of a hack in Perl. My question is, how can I handle abnormal termination the Windows way? I want to run my cleanup code regardless of how or why the program terminates (excluding events that can't be caught). I've read that Windows uses events instead of signals, but I can't find information on how to deal with Windows events in Perl.
Unfortunately, I don't have the authority to install modules from CPAN, so I'll have to use vanilla ActiveState Perl. And to make things even more interesting, most of the machines I'm using only have Perl 5.6.1.
Edit: I would appreciate any answers, even if they require CPAN modules or newer versions of Perl. I want to learn about Windows event handling in Perl, and any information would be welcome.
In all operating systems, you can always abruptly terminate any program. Think of the kill -9 command in Unix/Linux. Run it on any program, and the program stops instantly. No way to trap it. No way for the program to request a few more operating system cycles for a clean up.
I'm not up on the difference between Unix and Windows signals, but you can imagine why each OS must allow what we call in Unix SIGKILL - a sure and immediate way to kill any program.
Imagine you have a buggy program that intercepts a request to terminate (a SIGTERM in Unix), and it enters a cleanup phase. Instead of cleaning up, the program instead gets stuck in a loop that requests more and more memory. If you couldn't pull the SIGKILL emergency cord, you'd be stuck.
The ultimate SIGKILL, of course, is the plug in the wall. Pull it, and the program (along with everything else) comes to a screeching halt. There's no way your program can say "Hmm... the power is out and the machine has stopped running... Better start up the old cleanup routine!"
So, there's no way you can trap every program termination signal, and your program will have to account for that. What you can do is check whether your program needs to do a cleanup before running. On Windows, you can put an entry in the registry when your program starts up, and remove it when it shuts down cleanly. In Unix, you can put a file or directory whose name starts with a period in the $ENV{HOME} directory.
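The "did the last run exit cleanly?" pattern above can be sketched in shell; the same idea ports to Perl on Windows with a registry entry (or a file under %APPDATA%) as the marker. The marker path here is invented for the example:

```shell
marker="$HOME/.myapp_running"     # hypothetical marker path

# If the marker survived from a previous run, that run never reached
# its clean-exit path, so recover before doing new work.
if [ -e "$marker" ]; then
    echo "previous run did not exit cleanly; running recovery"
    # ... undo/replay the incomplete work here ...
fi

touch "$marker"
# ... normal program work ...
rm -f "$marker"                   # only reached on a clean exit
```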
Back in the 1980s, I wrote accounting software for a very proprietary OS. When the user pressed the ESCAPE button, we were supposed to return immediately to the main menu. If the user was entering an order and took stuff out of inventory, the transaction would be incomplete, and inventory would show the items as sold even though the order was incomplete. The solution was to check for these incomplete orders the next time someone entered an order, and back out the changes in inventory before entering the new order. Your program may have to do something similar.
