Files from which vmstat information comes [closed] - Super User

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 6 months ago.
I would very much like to know the names and locations of the files from which the vmstat command gets its information on Linux.

User-space tools that report statistics likely get them from the kernel through the procfs/sysfs file systems.
As most of these tools are open source, it is easy to read their code. For example, one way to find the package a tool comes from is to display its version:
$ vmstat --version
vmstat from procps-ng 3.3.16
From the preceding, you know that you can grab the source code from the procps-ng package.
And as Stark said in the comments, you can trace the execution of vmstat with the strace tool to see which files it opens to get its information:
$ strace vmstat
[...]
openat(AT_FDCWD, "/proc/meminfo", O_RDONLY) = 3
[...]
openat(AT_FDCWD, "/proc/stat", O_RDONLY) = 4
[...]
openat(AT_FDCWD, "/proc/vmstat", O_RDONLY) = 5
[...]
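Those /proc entries are plain text, so you can read the same counters yourself without vmstat (the paths are the ones shown in the strace output above):

```shell
# The same files vmstat opens, readable directly.
head -n 3 /proc/meminfo                    # MemTotal, MemFree, MemAvailable
grep -E '^(pgpgin|pgpgout)' /proc/vmstat   # block I/O counters behind vmstat's bi/bo columns
grep '^cpu ' /proc/stat                    # aggregate CPU times, in clock ticks
```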

Related

How to get memory utilization as a percentage? [closed]

Closed 7 days ago.
How can I get the memory used, as a percentage, on macOS using the terminal? I found this command on Stack Overflow:
$ top -l 1 | grep -E "^Phys"
PhysMem: 16G used (2991M wired), 21M unused
Can anyone help me understand how to derive the percentage of memory used from the above output? Other commands would also help.
The expected output is:
physical memory - 16 GB, used memory - 14 GB
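As a sketch (assuming top always prints the used and unused figures with a G/M/K suffix, as in the line above), you can let awk do the arithmetic:

```shell
# Parse top's PhysMem line and print used memory as a percentage.
# Assumes the "<used> used ... <unused> unused" layout shown above.
top -l 1 | awk '
  /^PhysMem/ {
    used   = mb($2)        # e.g. "16G"  -> 16384
    unused = mb($(NF-1))   # e.g. "21M"  -> 21
    printf "used: %.1f%%\n", 100 * used / (used + unused)
  }
  # Convert "16G" / "2991M" / "512K" to megabytes.
  function mb(s,  n) {
    n = s + 0
    if (s ~ /G/) return n * 1024
    if (s ~ /K/) return n / 1024
    return n
  }'
```

Note that for the sample line above this prints roughly 99.9%, because macOS counts cached memory as "used"; top's unused figure is tiny even when plenty of memory is reclaimable.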

Where does the file ''$'\033\033\033' come from in Linux? [closed]

Closed 11 days ago.
In a directory on a Linux server, I discovered a file with this strange name.
From the command history, I can tell it was probably created by this command:
sudo docker logs <container_id> -n 200000 | less
I suspect I entered some combination of keys in less (probably starting with s, to save a file).
Does anyone know what exactly happened?
P.S. If you want to remove such a file, see How to escape the escape character in bash?
I have discovered that such a file is created when you type s in a piped less and are then asked to enter the log file name. If you press Escape three times and then Enter, you get such a file.
The s command is actually helpful for saving the contents of a piped less.
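A sketch of the cleanup: the name consists of three literal ESC (0x1b) bytes, which bash's ANSI-C quoting can reproduce exactly:

```shell
ls -b                      # non-printing names show up as \033\033\033
rm -i -- $'\033\033\033'   # $'...' expands to the three ESC bytes (bash/zsh)
```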

netcat in a batch file to run it as a separate process [closed]

Closed 1 year ago.
I want to run netcat as a separate process from inside a batch file. For that purpose I am using the following:
start "" netcat.exe -l -p 1234 > output.txt
However, with this approach output.txt stays at zero bytes. I guess the redirection is being applied to start rather than to netcat.
If I run it the following way it works fine, but I want it to run in parallel so that I can proceed with the rest of the batch file:
netcat.exe -l -p 1234 > output.txt
How can I achieve this?
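One common fix (a sketch, not tested here) is to launch a child cmd.exe so that the redirection is parsed inside the new process rather than on the line containing start:

```batch
:: The ">" now belongs to the child cmd.exe, i.e. to netcat's output.
start "" cmd /c "netcat.exe -l -p 1234 > output.txt"
rem The batch file continues immediately while netcat listens.
```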

Parallel curl (or wget) downloads [closed]

Closed 2 years ago.
I have a file with 2192 urls, one on each line. I am trying to download them in parallel like this:
cat urls.txt | tr -d '\r' | xargs -P 8 -n 1 curl -s -LJO -n -c ~/.urs_cookies -b ~/.urs_cookies
However, after counting the files once they are downloaded (ls -1 | wc -l), I only have 1400 files. I know that the URLs are all properly formatted (they were autogenerated by the website I am downloading the data from).
I can rerun the above command and get a few more files each time, but this is not sufficient. Downloading the files one at a time would be an option, but the server takes about 30 seconds to respond to each request, while each file takes only about 2 seconds to download. I have at least 5 files with 2192 URLs each, so I would very much like to download in parallel.
Can anyone help me figure out why parallel downloads would stop early?
If you're okay with a (slightly) different tool, may I recommend GNU Wget2? It is the spiritual successor to GNU Wget, and it is already available in the Debian and openSUSE repositories and on the AUR.
Wget2 provides multi-threaded downloads out of the box, with a nice progress bar to view the current status. It also supports HTTP/2 and many other newer features that were nearly impossible to add to Wget.
See my answer here for more details: https://stackoverflow.com/a/49386440/952658
With Wget2, you can simply run wget2 -i urls.txt and it will start downloading your files in parallel.
EDIT: As mentioned in the other answer, a disclaimer: I maintain both Wget and Wget2, so I'm clearly biased towards this tool.
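Alternatively, if you'd rather stay with curl: the silent failures are often transient server errors, and a sketch like the following (the question's pipeline plus curl's standard retry flags) may recover the missing files:

```shell
# Same pipeline as in the question, with curl told to retry transient
# failures (timeouts, connection resets) instead of silently giving up.
tr -d '\r' < urls.txt | xargs -P 8 -n 1 \
    curl -sS -LJO -n -c ~/.urs_cookies -b ~/.urs_cookies \
         --fail --retry 5 --retry-delay 2
```

--retry only covers errors curl considers transient; recent curl versions also have --retry-all-errors if the server returns hard errors under load.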

how to extract bz2 files in command line of fedora system? [closed]

Closed 8 years ago.
I find that tar, unzip, bunzip2, 7z, unrar, and gunzip all fail on my bz2 file.
Is there a simple way to do this?
Or where can I download an RPM to install?
bunzip2 should work flawlessly, as should the -j option to tar. If they don't, you're doing something wrong, and should post the command line plus the error message.
If in doubt, run file on the archive to make sure it actually is compressed in bz2 format (and not just named like that).
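For reference, the usual invocations (assuming the file really is bzip2 data):

```shell
file archive.bz2            # sanity check: should report "bzip2 compressed data"
bunzip2 -k archive.bz2      # plain .bz2: decompress; -k keeps the original
tar -xjf archive.tar.bz2    # .tar.bz2 tarball: decompress and extract in one step
```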
