Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I have a file (for example system.log). I need to scan this file to find a specific string that must appear several times during a period of 5 minutes.
I imagine that I can make a script with 4 parameters:
the location of the file
the string to search for
the number of times that the string appears
the period of time
If the script finds the string the required number of times within the specified period, it prints, for example, the message 'success'.
Here's the beginning of the script:
#!/bin/ksh
#set -x
#############
# VARIABLES #
#############
location="/moteurs/websphere/7.0/esb/managed01/logs/"
file="SystemOut.log"
pattern="WSV0605W: Thread \"SIBFAPInboundThreadPool"
string=$(grep -ic "${pattern}" "${location}/${file}")   # number of matching lines, case-insensitive
Now that I've defined my variables, I don't know how to make a function that scans SystemOut.log every 5 minutes.
Do you know how I can write this shell script?
Yes. Use your favorite editor, write the shell script, execute it. You are done.
Just a half answer, but maybe it gets you somewhere:
To run something repeatedly every five minutes, you can make a loop with a sleep:
while true
do
echo "I'm doing something!" # replace this by the actual code
sleep $((5 * 60))
done
This will run the code (here just an echo) every five minutes, plus the time the code itself takes, so do not rely on the timing being exact. To keep it aligned with the clock, you can use:
while sleep $(( 5*60 - $(date +%s) % (5*60) ))
do
echo "I'm doing something!" # replace this by the actual code
done
This will always wait for a full 5-minute point on the clock before executing the code.
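Putting the asker's grep snippet and this loop together, here is a rough sketch (the scan_log name and parameter order are my own invention, not from the question) that reports success once enough new matches appear within one period:

```shell
# scan_log FILE PATTERN COUNT PERIOD: print "success" if at least COUNT
# new matching lines appear in FILE during one PERIOD (seconds).
scan_log() {
    before=$(grep -ic -- "$2" "$1")   # matches already present
    sleep "$4"
    after=$(grep -ic -- "$2" "$1")    # matches after the period
    [ $((after - before)) -ge "$3" ] && echo success
}

# Quick demo: three matching lines arrive during a 2-second window.
log=$(mktemp)
( sleep 1; printf 'WSV0605W a\nWSV0605W b\nWSV0605W c\n' >>"$log" ) &
scan_log "$log" 'WSV0605W' 2 2     # prints "success"
rm -f "$log"
```

Wrapping `scan_log` in the five-minute loop from the answer above would give the continuous monitor the question asks for.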
Closed. This question needs debugging details. It is not currently accepting answers.
Closed 6 days ago.
Why would this bash operation fail to update the file in some cases?
{
flock -x 3
STR="${SLURM_ARRAY_TASK_ID}","${THIS_FILE}"
printf "%s\n" "${STR}" >&3
} 3>>"${WRITTEN_FILE_LIST}"
This command is executed in a script that gets launched concurrently multiple times, and no other operations on this file ever occur.
In the rare cases when it failed:
None of the referenced variables were empty (SLURM_ARRAY_TASK_ID is an integer, THIS_FILE is a short string, STR is a short string, and WRITTEN_FILE_LIST is a short string).
WRITTEN_FILE_LIST is a valid file path to a CSV.
Most of the other processes were able to update the file.
The process reached this block without error.
I only know it failed because the entries were missing from the file.
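For what it's worth, the pattern itself holds up on a local filesystem; below is a minimal stress sketch (file and variable names made up here) that runs many concurrent locked appends without losing a line. One common suspect worth checking in a SLURM setup is the output file living on a network filesystem such as NFS or Lustre, where flock is not guaranteed to exclude writers running on other nodes.

```shell
# 50 concurrent writers appending one line each under flock, mimicking
# the question's block. On a local filesystem every line survives.
out=$(mktemp)
i=1
while [ "$i" -le 50 ]; do
    (
        flock -x 3
        printf "%s,%s\n" "$i" "file-$i" >&3
    ) 3>>"$out" &
    i=$((i + 1))
done
wait
wc -l <"$out"        # 50: no lost entries locally
rm -f "$out"
```

If this sanity check passes locally but entries still vanish on the cluster, the filesystem's lock semantics are the next thing to rule out.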
Closed. This question needs debugging details. It is not currently accepting answers.
Closed 7 months ago.
I apologize for the somewhat simple question (I am new to UNIX programming). I am using an HPC system and would like to parallelize a task in a for loop (in this case, a simple unzip to be applied to several files).
How could I do this? I thought that by requesting multiple cores the parallelization would start automatically, but actually the program is operating sequentially.
Thank you very much!
for i in *.zip; do unzip "$i" -d "${i%%.zip}"; done
In bash it would look something like:
for item in bunch-of-items; do
    (
        the loop body
        is here
    ) &
done
Where the parentheses group the commands, and the whole loop body is put in the background.
If the rest of your program needs all the background jobs to complete, use the wait command.
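Applied to the question's loop, each unzip would go inside the background subshell. A self-contained stand-in demo (sleep in place of unzip, since no zip files are assumed here) shows the jobs really do run concurrently:

```shell
# The question's loop, parallelized:
#   for i in *.zip; do ( unzip "$i" -d "${i%%.zip}" ) & done; wait
#
# Stand-in demo: four 1-second sleeps in background subshells finish
# in about one second instead of four.
start=$(date +%s)
for i in 1 2 3 4; do
    ( sleep 1 ) &
done
wait
echo "elapsed: $(( $(date +%s) - start ))s"
```

On a shared HPC node you may also want to cap the number of simultaneous jobs (for example with `wait -n` in bash, or GNU parallel) rather than backgrounding everything at once.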
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
Let's say, as an example, that I wanted to write a script that not only kept count of how many times it has been called, but also averaged the time between calls since the first one, and reported the number of elapsed days as well, all without relying on environment variables or secondary files. That would mean it has to be self-modifying. Now, when a script is loaded and executed, the saved version on disk can be changed without affecting the copy in memory, so that works, or should: just change the copy on disk.
But making it happen can be a bit tricky. So what is your best solution?
Sounds a bit weird, but a bash script is just text, so you can always edit it IF you have permission:
Take this example:
#!/bin/bash
VAR=1
let VAR=VAR+1
echo "Set to $VAR"
perl -pi -e 's/^VAR=\d+/VAR='"$VAR"'/' "$0"
Trying it out:
$ /tmp/foo.sh
Set to 9
$ /tmp/foo.sh
Set to 10
$ /tmp/foo.sh
Set to 11
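Extending the same trick toward what the question actually asked, here is a sketch (COUNT and FIRST are made-up state variables, not from the question) that persists a call counter and the first-call timestamp by rewriting its own saved copy, and reports elapsed days:

```shell
# Write the self-modifying script to a temp file, then run it three
# times. COUNT and FIRST are persisted by perl rewriting the script's
# own text, exactly as in the answer above.
script=$(mktemp)
cat >"$script" <<'EOF'
#!/bin/bash
COUNT=0
FIRST=0
COUNT=$((COUNT + 1))
[ "$FIRST" -eq 0 ] && FIRST=$(date +%s)
days=$(( ($(date +%s) - FIRST) / 86400 ))
echo "call #$COUNT, $days day(s) since first call"
# The anchored \d+ patterns match only the saved-state lines, not the
# arithmetic lines below them.
perl -pi -e "s/^COUNT=\d+$/COUNT=$COUNT/; s/^FIRST=\d+$/FIRST=$FIRST/" "$0"
EOF
chmod +x "$script"
"$script"; "$script"; "$script"    # call #1 ... call #2 ... call #3
rm -f "$script"
```

The average-time-between-calls part follows the same shape: persist the first timestamp and divide the elapsed seconds by COUNT.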
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
In Unix, I use the who command to list all the users currently logged in to the system.
I wish to write a bash shell script which displays the same output as the who command.
I have tried the following:
vi log.sh (now there is a file log.sh)
type who into it
save and quit
give it execute permission: chmod +x log.sh
execute it: sh -vx log.sh
This will give the same output as using who.
However, is there another way to write such a shell script?
It is hard to answer, as I suspect this is homework (and I don't want to give you the full answer). Moreover, I don't know how proficient you might be in various programming areas. So I will only try to make an answer that is in accordance with the How do I ask and answer Homework Community Wiki.
Is there another way?....
Yes, there is. Obviously, who has to work somehow. At the very worst, you might search the source code to find out how it works.
Fortunately, this question does not require such an extreme solution. As was said in a comment, who reads from /var/tmp/utmp or /var/run/utmp on my Debian system.
cat /var/run/utmp
You will see this is a binary "file". You have to somehow decode it. That's where man utmp might come in handy. It exposes the C structure corresponding to one record in the utmp file.
With that knowledge, you will be able to process the file with your favorite language. Please note bash (or any shell) is probably not the best language to deal with binary data structures.
As I said at first, you didn't give enough background for me (us?) to give precise advice. Anyway, if digging into the kernel data structures is ... well ... way above what can be expected from you, maybe some "simple" solution based on grep/awk/bash/whatever might be sufficient to filter the output of:
ps -edf
Taking this as a challenge, I come with this solution:
#!/bin/bash
shopt -s extglob
while IFS= read -r record; do   # IFS=/-r keep leading whitespace and backslashes intact
# ut_name at offset 44 size 32
ut_name="${record:44:32}"
# ut_line at offset 8 size 32
ut_line="${record:8:32}"
echo "${ut_name%%*(.)}" "${ut_line%%*(.)}"
done < <(hexdump -v -e '384/1 "%_p"' -e '"\n"' /var/run/utmp)
# ^^^
# according to utmp.h, sizeof(struct utmp) is 384 bytes
# so hexdump outputs here one line for each record.
As bash is not good at handling binary data (especially data containing \x00), I had to rely on hexdump with a custom format to "decode" the utmp records.
Of course, this is far from perfect, and producing output really identical to who's might require a decent amount of effort. But it might be a good starting point...
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Imagine that you are preparing for an in-depth technical interview and you are asked to rate your expertise in shell scripting (hypothetically on a scale of one to ten). Then look at the following shell command line example and answer the questions: What does this do? and Why?
unset foo; echo bar | read foo; echo "$foo"
What level of expertise would you map to correctly answer this question for the general case (not merely for one or another, specific, version of the shell)?
Now imagine that you're given the following example:
cat "$SOMELIST_OF_HOSTS" | while read host; do ssh $host "$some_cmd"; done
... and the interviewer explains that this command "doesn't work" and that it seems to only execute the ssh command on a few of the hosts listed in the (large) file (something like one in every few hundred hostnames, seemingly scattered throughout the list). Naturally he or she asks: Why is it doing that? and How might you fix it?
Then rate the level of expertise to which you would map someone who can answer those questions correctly.
The first one is novice-to-intermediate (see below) level. unset, echo, read and basic variable use should be encountered within the first thousand lines or so of typical shell code.
The second is intermediate level IMO; I'd been using Bash for some years before I found out about innocuous commands like ssh gobbling standard input. It's a good test for the ssh command specifically, but since it's a bit of an anomaly it might be better to test with simply cat to see if the candidate understands the basis of the problem.
But as I think @IgnacioVazquez-Abrams is pointing out, you can't rate much based on just two narrow questions. As others have pointed out, why not just give them an actual issue to work on? You'll get an infinitely better idea of their ability to actually get work done.
Edit: As @IgnacioVazquez-Abrams also pointed out, these essentially test the same thing. So I'd rate both intermediate.
Note that the first example is dependent on the shell. The pipe is an IPC (inter-process communications) operator and the shell can implement that by creating a subshell on either side of the pipe. (Technically I suppose some shell could even evaluate both sides in separate sub-processes).
The read command is a built-in (it must, inherently, be so). So, in shells such as bash and the classic Bourne shell derivatives, the subprocess (subshell) is on the right of the pipe (reading from the current shell), and that process ends after its read (at the semicolon in this example). Korn shell (at least as far back as '93) and zsh put their subshell on the other side of the pipe, reading data from it into the current process.
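The difference is easy to demonstrate from bash itself. The sketch below also shows bash's lastpipe option (bash 4.2+, effective when job control is off, as it is in scripts), which moves the last pipeline stage into the current shell and makes the pipe behave like ksh/zsh:

```shell
bash <<'EOF'
unset foo
echo bar | read foo               # read runs in a subshell in bash...
echo "plain pipe:  foo='$foo'"    # ...so foo is empty here

shopt -s lastpipe                 # bash 4.2+, needs job control off
unset foo
echo bar | read foo               # now read runs in the current shell
echo "lastpipe on: foo='$foo'"    # foo='bar', as in ksh/zsh
EOF
```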
That's the point of the interview question.
The point of my question here is to look for some consensus or metric for how highly to rate this level of question. It's not a matter of trivia because it does affect real world scripts and portability for shell scripting and it relies upon fundamental understanding of the underlying UNIX and shell semantics (IPC, pipes, and subprocesses/subshells).
The second example is similar but more subtle. I will point out that the following change "works" (the ssh will execute on each of the hosts in the file):
cat $SOME_FILE | while read host; do ssh "$host" "$some_cmd" < /dev/null; done
Here the issue is that the ssh command buffers up input even if the command on the remote never reads from its stdin. Because the shell/subshell (reading from the pipe) and the ssh are sharing the same input stream, the ssh is "stealing" most of the input from the pipeline, leaving only the occasional line for the read command.
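No ssh is needed to reproduce the effect; any inner command that drains stdin will do. A local stand-in with cat playing the role of ssh:

```shell
# Broken: cat (standing in for ssh) shares the loop's stdin and
# swallows the rest of the host list, so only the first line is seen.
printf 'host1\nhost2\nhost3\n' |
while read host; do
    cat >/dev/null            # gobbles the remaining input, like ssh
    echo "processed $host"
done                          # prints only "processed host1"

# Fixed: redirect the inner command's stdin away from the pipe.
printf 'host1\nhost2\nhost3\n' |
while read host; do
    cat >/dev/null </dev/null
    echo "processed $host"
done                          # prints all three hosts
```

(With ssh specifically, the -n option does the same job as the </dev/null redirection.)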
This is not an artificial question. I actually encountered it in my work and had to figure it out. I know from experience that understanding this second example is at least a notch or two above the first. I also know, from years of experience, that fewer than 10% of the candidates (for sysadmin and programming positions) that I've interviewed could get the first question right away.
I've never used the second question in a live interview and I've been discouraged from doing so.