Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 7 months ago.
I apologize for the somewhat simple question (I am new to UNIX programming). I am using an HPC system and would like to parallelize a task in a for loop (in this case, a simple unzip to be applied to several files).
How could I do this? I thought that by requesting multiple cores the parallelization would start automatically, but actually the program is operating sequentially.
Thank you very much!
for i in *.zip; do unzip "$i" -d "${i%%.zip}"; done
In bash it would look something like:
for item in bunch-of-items; do
    (
        the loop body
        is here
    ) &
done
The parentheses group the commands into a subshell, and the trailing & puts that whole loop body in the background.
If the rest of your program needs all the background jobs to complete, use the wait command.
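Applied to the unzip loop from the question, the pattern might look like the sketch below. The function name parallel_unzip is just an example, and it assumes the unzip utility is installed:

```shell
# Hypothetical sketch: run every unzip from the question's loop in the
# background, then wait for all of them before continuing.
parallel_unzip() {
    local i
    for i in *.zip; do
        [ -e "$i" ] || continue            # skip when no *.zip matched
        (
            unzip -q "$i" -d "${i%%.zip}"  # loop body in a subshell...
        ) &                                # ...sent to the background
    done
    wait                                   # block until every job is done
}
```

Note that this launches one background job per file all at once; with hundreds of archives you may want to cap concurrency instead, for example with xargs -P or GNU parallel.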
Closed. This question needs debugging details. It is not currently accepting answers.
Closed 6 days ago.
Why would this bash operation fail to update the file in some cases?
{
    flock -x 3
    STR="${SLURM_ARRAY_TASK_ID}","${THIS_FILE}"
    printf "%s\n" "${STR}" >&3
} 3>>"${WRITTEN_FILE_LIST}"
This command is executed in a script that gets launched concurrently multiple times, and no other operations on this file ever occur.
In the rare cases when it failed:
- None of the referenced variables were empty (SLURM_ARRAY_TASK_ID is an integer; THIS_FILE, STR, and WRITTEN_FILE_LIST are short strings).
- WRITTEN_FILE_LIST is a valid file path to a CSV.
- Most of the other processes were able to update the file.
- The process reached this block without error.
I only know it failed because the entries were missing from the file.
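For what it's worth, here is a sketch (not a diagnosis) of how the block can be hardened so that a silent failure becomes a visible one; the path and values below are stand-ins, not the real ones:

```shell
WRITTEN_FILE_LIST="$(mktemp)"   # stand-in for the real CSV path
SLURM_ARRAY_TASK_ID=7           # stand-in values for illustration
THIS_FILE="result.dat"

{
    # fail loudly if the lock cannot be taken
    flock -x 3 || { echo "flock on fd 3 failed" >&2; exit 1; }
    # fail loudly if the append itself fails (disk full, quota, ...)
    printf '%s,%s\n' "${SLURM_ARRAY_TASK_ID}" "${THIS_FILE}" >&3 \
        || { echo "write to ${WRITTEN_FILE_LIST} failed" >&2; exit 1; }
} 3>>"${WRITTEN_FILE_LIST}"
```

On an HPC system it is also worth checking whether WRITTEN_FILE_LIST lives on NFS: depending on the server and mount options, flock may not actually provide mutual exclusion across nodes, which is a classic cause of occasionally lost records.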
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I've written a few dozen Bash scripts for my Mac over the years, although probably 80% of Bash code I have is in .bash_profile. Lately I've been doing things I used to do with Bash by using Python instead.
So, given languages like Python or Ruby (or even PHP), with the exception of login scripts such as .bash_profile (which may not be an exception), are there any tasks that Bash can do that generic scripting languages cannot?
Bash is old-school UNIX: pulling little utilities together to achieve a greater goal, mostly by using pipes to plumb the output of one command into the next.
There is definitely a lot to be said for having the skills involved in this seat-of-the-pants style of programming. Too many people head off and write a self-contained program to achieve something that could be done with a few lines at the command line.
So, in answer to your question: yes. Knowing bash teaches you to understand the multitude of shell scripts out there, and it can do most things on a UNIX box in close to the most efficient way. Bash is here to stay.
Well, first off, bash is itself a shell, so it comes with built-in features like job control (suspend and resume, etc.), file-handle and terminal redirection (2>&1 and friends), and terminal control (such as being able to display the current path in the title bar). Other languages that don't have a built-in shell with access to termcap lack those abilities. Pipe redirection is also hard to get right elsewhere: Python's subprocess.Popen has a number of limitations due to threads and potential deadlocks, for example, while bash has direct access to tee and the like.
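A small self-contained illustration of those redirection forms; the function noisy is a made-up stand-in for any command that writes to both streams:

```shell
# A made-up command that writes to stdout and stderr.
noisy() { echo "to stdout"; echo "to stderr" >&2; }
log="$(mktemp)"

# Send stderr to wherever stdout currently points, then pipe both into tee:
noisy 2>&1 | tee "$log" >/dev/null

# Or capture both streams into a single variable:
both="$(noisy 2>&1)"
```

Doing the same thing with subprocess.Popen takes noticeably more ceremony (stderr=subprocess.STDOUT, explicit pipes, and care to avoid deadlocks on large outputs).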
No. Bash is written in C, and the programs it runs are written in other languages (themselves usually written in C or implemented on top of it). Thus, everything that Bash does can be, and already is, done by other programming languages.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
Let's say, as an example, that I wanted to write a script that not only kept count of how many times it has been called, but also averaged the times at which it has been called since the first call, and reported the number of elapsed days as well, all without relying on environment variables or secondary files. That means it would have to be self-modifying. Now, when a script is loaded and executed, the saved version on disk can be changed without affecting the copy in memory, so that works, or should: just change the copy on file.
But making it happen can be a bit tricky. So what is your best solution?
Sounds a bit weird, but a bash script is just text, so you can always edit it IF you have permission:
Take this example:
#!/bin/bash
VAR=1
let VAR=VAR+1
echo "Set to $VAR"
# rewrite this script's own VAR= line on disk
perl -pi -e "s/^VAR=\d+/VAR=$VAR/" "$0"
Trying it out:
$ /tmp/foo.sh
Set to 9
$ /tmp/foo.sh
Set to 10
$ /tmp/foo.sh
Set to 11
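Extending the same trick toward what the question actually asked for, here is a sketch of a script that keeps its own call count and first-call timestamp (the averaging part is left out). It assumes GNU sed's -i option, and writes the script to a temporary file purely for illustration:

```shell
sc="$(mktemp)"                  # the self-modifying script lives here
cat > "$sc" <<'EOF'
#!/bin/bash
COUNT=0
FIRST=0
COUNT=$((COUNT + 1))
[ "$FIRST" -eq 0 ] && FIRST=$(date +%s)
days=$(( ($(date +%s) - FIRST) / 86400 ))
echo "call #$COUNT, $days day(s) since first call"
# persist the new state by rewriting this file's own assignment lines
sed -i -e "s/^COUNT=[0-9]*$/COUNT=$COUNT/" \
       -e "s/^FIRST=[0-9]*$/FIRST=$FIRST/" "$0"
EOF
chmod +x "$sc"
```

Each run prints the updated count; rewriting the file on disk is safe for the same reason as above, since bash keeps working from the copy it already loaded.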
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have a program running on a little Raspberry Pi. Is there any way for me to write a bash script that is always checking if the program is running or not? If the program crashes, the script does something (e-mails me, for example).
The simplest implementation would look like this:
#!/bin/bash
while true; do
    program "$@"
done
Basically, the script runs program, and then immediately re-runs it if it exits. It's simple, efficient, and robust.
If you want the script to watch something already running, and restart it, then you have a more difficult task. You have to grep the output of ps, or some such, and that means patterns, text manipulation, and possibly some magic. It also means regular polling, which is inefficient, and there will be a noticeable gap between one process exiting and the new one starting.
Alternatively, you could have a "program.pid" file somewhere, which makes life easier, but you still need to check that the process with the given PID is the program it ought to be, and it is still all about polling.
Aside: You might like to consider setting your program up as a system service. I'm not sure what Raspbian uses, but both Upstart and systemd can handle services that must be restarted when they die.
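To make that aside concrete, a minimal systemd unit might look like the sketch below; the unit name, binary path, and timing are hypothetical:

```ini
# /etc/systemd/system/myprogram.service
[Unit]
Description=My program, restarted automatically on failure

[Service]
ExecStart=/usr/local/bin/myprogram
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target
```

After systemctl enable --now myprogram, systemd restarts the process whenever it dies with a failure; for the e-mail part, an OnFailure= unit could trigger a notification script, though that is beyond this sketch.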
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
In unix, I use the who command to list all the users currently logged in to the system.
I wish to write a bash shell script which displays the same output as the who command.
I have tried the following:
1. vi log.sh (now there is a file log.sh)
2. type who into it
3. save and quit
4. give it execute permission: chmod +x log.sh
5. execute it: sh -vx log.sh
This gives the same output as running who.
However, is there another way to write such a shell script?
It is hard to answer, as I suspect this is homework (and I don't want to give you the full answer). Moreover, I don't know how proficient you might be in various programming areas. So I will only try to make an answer that is in accordance with the "How do I ask and answer Homework" Community Wiki.
Is there another way?
Yes, there is. Obviously, who has to work somehow. At the very worst, you might search the source code to see how it does it.
Fortunately, this question does not require such an extreme solution. As has been said in a comment, who reads from /var/tmp/utmp or, as on my Debian system, /var/run/utmp.
cat /var/run/utmp
You will see this is a binary "file". You have to decode it somehow; that's where man utmp might come in handy. It describes the C structure corresponding to one record in the utmp file.
With that knowledge, you will be able to process the file with your favorite language. Please note bash (or any shell) is probably not the best language to deal with binary data structures.
As I said at first, you didn't give enough background for me (us?) to give precise advice. Anyway, if digging into the kernel data structures is, well, way above what can be expected from you, maybe some "simple" solution based on grep/awk/bash/whatever might be sufficient to filter the output of:
ps -edf
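For instance, a rough, hypothetical approximation of who's output using ps: keep the user and tty of every process attached to a terminal, de-duplicated. This is only an approximation, not a faithful reimplementation:

```shell
# Print user/tty pairs for processes that have a controlling terminal;
# "?" marks processes with no terminal, so those are filtered out.
ps -e -o user=,tty= | awk '$2 != "?" && !seen[$0]++ { print }'
```

Unlike the utmp approach below, this misses sessions with no live process and may list extra terminals, but it needs no binary decoding at all.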
Taking this as a challenge, I came up with this solution:
#!/bin/bash
shopt -s extglob
while IFS= read -r record; do
    # ut_name at offset 44, size 32
    ut_name="${record:44:32}"
    # ut_line at offset 8, size 32
    ut_line="${record:8:32}"
    echo "${ut_name%%*(.)}" "${ut_line%%*(.)}"
done < <(hexdump -v -e '384/1 "%_p"' -e '"\n"' /var/run/utmp)
#                       ^^^
# according to utmp.h, sizeof(struct utmp) is 384 bytes,
# so hexdump outputs one line per record here.
As bash is not good at handling binary data (especially data containing \x00 bytes), I had to rely on hexdump with a custom format to "decode" the utmp records.
Of course, this is far from perfect, and producing output truly identical to that of who would require a decent amount of effort. But this might be a good starting point...