GNU parallel -Dall option

As in the title: what is the "-Dall" option, and what does it do exactly?
parallel -Dall .....
I can see that it provides more context, but when I tried to find some documentation on it I wasn't able to.

-D controls debugging. -Dall = all debugging.
There is no documentation because the output changes between versions. In other words: you should never rely on the output from -Dall.
Instead of trying to understand that output, your time is better spent reading
https://zenodo.org/record/1146014 and
https://www.gnu.org/software/parallel/parallel_design.html
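If you just want to see it in action, any trivial job will do; for example (assuming a current GNU parallel install):
parallel -Dall echo ::: a b c
Treat whatever it prints as an implementation detail, since it can change from release to release.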

Query GPU memory usage and/or user by PID

I have a list of PIDs of processes running on different GPUs. I want to get the used GPU memory of each process based on its PID. nvidia-smi yields the information I want; however, I don't know how to grep it, as the output is sophisticated. I have already looked for how to do it, but I have not found any straightforward answers.
While the default output of nvidia-smi is "sophisticated", or rather formatted for interfacing with humans rather than scripts, the command provides lots of options for use in scripts. The ones most fitting for your use case seem to be --query-compute-apps=pid,used_memory, specifying the information that you need, and --format=csv,noheader,nounits, specifying the minimal, machine-readable output formatting.
So the resulting command is
nvidia-smi --query-compute-apps=pid,used_memory --format=csv,noheader,nounits
I recommend taking a look at man nvidia-smi for further information and options.
nvidia-smi --query-compute-apps=pid,used_memory,gpu_bus_id --format=csv
gpu_bus_id will help you if you have multiple GPUs.
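If you need the memory figure for one specific PID from that list, a simple filter on the CSV output works; a sketch (12345 stands in for your actual PID):
nvidia-smi --query-compute-apps=pid,used_memory --format=csv,noheader,nounits | awk -F', ' -v pid=12345 '$1 == pid {print $2}'
This prints the used memory in MiB for that process, since nounits strips the unit suffix.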

How Can I Get System.cmd to End Normally When Expecting Input on STDIN?

I've spotted something that I find very puzzling about the behavior of System.cmd. I just wanted to ask if anyone might have thoughts on what I may be doing wrong or what may be going on.
I've been trying to wrap an Elixir script around the ack programmer's grep. I tried this:
{_message, errlevel} = System.cmd("ack",[])
And I get back the help text that ack displays on an empty command line; I won't bother to reproduce it here because it's not necessarily germane to the question.
Then I try this:
{_message, errlevel} = System.cmd("ack",[""])
And it looks like iex hangs. Now I realize in the first case the output may be going to stderr rather than stdout. But there's another reason I'm asking about this; I found something even more interesting to me. Because I'm not 100% committed to using ack I thought I'd try ripgrep on the thought that it might interact with stdout better.
So if I do this:
{_message, errlevel} = System.cmd("rg",[])
Same as ack with no arguments--shows me the help text. Again I'm guessing it's probably out to stderr. I could check to confirm my assumption but what's even more interesting to me is that when I do this:
{_message, errlevel} = System.cmd("rg",[""])
It hangs again!
I had always figured the issue is with how ack interacts with stdout but now I'm not so sure since I see the same behavior with ripgrep. This is Elixir 1.13.2 on MacOSX 13.1. I've seen this same behavior with older versions of MacOSX.
Any idea how I can get the ack and/or ripgrep process to terminate so I get a response back? I've seen this https://hexdocs.pm/elixir/main/Port.html#module-zombie-operating-system-processes and I can try it but I was hoping for something slightly less hacky, I guess. Any suggestions? Also if I use the :stderr_to_stdout option set to true, it doesn't seem to make any difference.
I've seen this Q & A but I'm not totally clear on how using Task.start_link would help in this case. I mean would one do a Task.start_link on System.cmd?
You are executing a command that expects input on STDIN, but with System.cmd/3, there is no mechanism to provide the input.
Elixir has no way to know the behaviour of the command you are executing, so it waits for the process to terminate, which never happens. As José mentioned on the issue Roger Lipscombe raised, this is expected behaviour.
If you want to send the OS process input via STDIN, you need to use Ports. However, there are limitations there too, which I asked about here.
For ack specifically, it reads from STDIN if you don't provide a filename. So you can workaround the limitation by putting the data in a file, and providing the filename as an argument, rather than piping the data via OS streams.
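A minimal sketch of that file-based workaround (the search pattern, data, and temp-file name here are made up for illustration):
# Write the data ack should search to a temp file, then pass the
# filename as an argument so ack never waits on STDIN.
data = "some text to search\nanother line\n"
path = Path.join(System.tmp_dir!(), "ack_input.txt")
File.write!(path, data)
{output, _exit_status} = System.cmd("ack", ["text", path])
IO.puts(output)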
Looks like a bug. I've filed https://github.com/elixir-lang/elixir/issues/12321.

How to set less to clear the screen on exit only if the output fills more than a single page?

I'm trying to figure out a way to pipe the output of a command (ag, in this case) to less -F (i.e. --quit-if-one-screen), but if the output is less than one page, the screen just flashes the content before it disappears. I've read that I can use -X (--no-init) to disable clearing the screen upon exiting less, but in that case long output doesn't get cleared either, which kinda defeats the purpose of a pager.
Is there a way to make less -X work with -F? I.e., to clear the output upon exiting less, except if the output fits in a single page?
It's 2018 now and Less is available in version 530. One of the key changes is the behavior of less -F with content of less than one full screen.
The solution is easy: install less 530 from your package repository, or download it from the Free Software Foundation and compile it yourself. Then less -F will leave the content on screen if it doesn't fill up one full screen.
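For example (ag and the pattern are just placeholders; any command whose output you want paged will do):
ag pattern | less -F
With less 530 or newer, short output stays on the screen after less exits, while longer output is paged and cleared as usual.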
This very question has been answered in Unix.SE. The top-voted answer there has actually been expanded into a full-fledged command-line tool that can act as a replacement for less: https://github.com/stefanheule/smartless.
I've been using it myself with great results (plus the author is very responsive to bug reports and feature requests on GitHub), so I highly recommend it to anyone facing this issue.

Process Management w/ bash/terminal

Quick bash/terminal question -
I work a lot on the command line, but have never really had a good idea of how to manage running processes with it - I am aware of 'ps', but it always gives me an exceedingly long and esoteric list of junk, including like 30 google chrome workers, and I always end up going back to activity monitor to get a clean look at what's actually going on.
Can anyone offer a bit of advice on how to manage running processes from the command line? Is there a way to get a clean list of what you've got running? I often use 'killall' on process names that I know as a quick way to get rid of something that's freezing up - can I get those names to display via terminal rather than the strange long names and numbers that ps displays by default? And can I search for a specific process or quick regex of a process, like '*ome'?
If anyone has the answers to these three questions, that would be amazingly helpful to many people, I'm sure : )
Thanks!!
Yes, grep is good.
I don't know exactly what you want to achieve, but do you know the top command? It gives you a dynamic view of what's going on.
On Linux you have plenty of commands that should help you get what you want in a script, and piping commands together is one of the basics taught when studying IT.
You can also take a look at the man page for jobs, and I would advise you to read some articles about process management basics. :)
Good luck.
ps -o command
will give you a list of just the process names (more exactly, the command that invoked the process). Use grep to search, like this:
ps -o command | grep ".*ome"
There may be scripts out there for this,
but, for example, if you're seeing a lot of chrome entries that you're not interested in, something as simple as the following would help:
ps aux | grep -v chrome
Other variations could help by showing each image only once, so you get one chrome, one vim, etc. (search for how to show unique rows with perl, python, or sed, for example).
You could also use ps to specify one username, so that you filter out system processes, or narrow things down if more than one user is logged in to the machine, etc. (see the sketch below).
ps is quite versatile with its command-line arguments; a little digging turns up a lot of nice tweaks and flags to combine with other tools such as perl and sed.
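As a sketch of those last two ideas combined, this lists each distinct command name once, with a count, for your own processes only (exact flags differ slightly between the BSD ps on macOS and GNU ps on Linux):
ps -u "$USER" -o comm= | sort | uniq -c | sort -rn
The 30 Chrome workers then collapse into a single line with a count next to it.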

How to bundle bash completion with a program and have it work in the current shell?

I sweated over the question above. The answer I'm going to supply took me a while to piece together, but it still seems hopelessly primitive and hacky compared to what one could do were completion to be redesigned to be less staticky. I'm almost afraid to ask if there's some good reason that completion logic seems to be completely divorced from the program it's completing for.
I wrote a command line library (can be seen in scala trunk) which lets you flip a switch to have a "--bash" option. If you run
./program --bash
It calculates the completion file, writes it out to a tempfile, and echoes
. /path/to/temp/file
to the console. The result is that you can use backticks like so:
`./program --bash`
and you will have completion for "program" in the current shell since it will source the tempfile.
For a concrete example: check out scala trunk and run test/partest.
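If you want to roll the same trick yourself without the Scala library, a stripped-down sketch in plain shell looks something like this (the program name "program" and the completion words are all made up):
# program: when called with --bash, emit a "source this" line for the caller
if [ "$1" = "--bash" ]; then
  tmp=$(mktemp)
  cat > "$tmp" <<'EOF'
_program_complete() {
  COMPREPLY=( $(compgen -W "--run --test --clean" -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -F _program_complete program
EOF
  echo ". $tmp"
  exit 0
fi
Running `./program --bash` in backticks then executes the printed ". /path/to/tempfile" line in the current shell, which is what installs the completion function.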
