When using a shell script file, I want to be able to pre-write the input for the configuration.
How can I give the answers automatically?
There are multiple questions requiring short answers, for example 'Please select an option: 1, 2 or 3', to which I want to provide the answer 2. I considered using a heredoc, but would this be the most appropriate option?
My example script:
./install.sh <<HERE
1
1
2
1
1 2 3
1 2
2
HERE
Thanks for any help given.
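A here-document is a reasonable fit, assuming install.sh reads its answers from standard input rather than directly from /dev/tty. A minimal sketch with a stand-in script (fake-install.sh and its prompts are invented for illustration):

```shell
#!/bin/bash
# Create a stand-in for install.sh that asks two questions on stdin.
cat > fake-install.sh <<'SCRIPT'
#!/bin/bash
read -p 'Please select an option: 1, 2 or 3: ' opt
read -p 'Continue? 1 or 2: ' cont
echo "option=$opt cont=$cont"
SCRIPT
chmod +x fake-install.sh

# Feed the answers with a heredoc, one line per prompt:
./fake-install.sh <<HERE
2
1
HERE

# printf is an equivalent alternative for short answer lists:
printf '2\n1\n' | ./fake-install.sh
```

If the installer reads directly from the terminal (/dev/tty) instead of stdin, a heredoc will not work and you would need a tool such as expect.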
I have an infinite loop which uses the AWS CLI to get the microservice names and their parameters (desired tasks, number of running tasks, etc.) for an environment.
There are hundreds of microservices running in an environment. I need to compare the value of the ECS "running tasks" metric for a particular microservice in the current loop iteration with its value from the previous iteration.
Say a microservice X has a running-task count of 5. As it is an infinite loop, after some time the loop reaches microservice X again. Now, let's assume the running-task count is 4. I want to compare the value from the current iteration, which is 4, with the value from the previous run, which is 5.
If you are asking a generic question of how to keep a previous value around so it can be compared to the current value, just store it in a variable. You can use the following as a starting point:
#!/bin/bash
previousValue=0
while read v; do
    echo "Previous value=${previousValue}; Current value=${v}"
    previousValue=${v}
done
exit 0
If the above script is called testval.sh and you have an input file called test.in with the following values:
2
1
4
6
3
0
5
Then running
./testval.sh <test.in
will generate the following output:
Previous value=0; Current value=2
Previous value=2; Current value=1
Previous value=1; Current value=4
Previous value=4; Current value=6
Previous value=6; Current value=3
Previous value=3; Current value=0
Previous value=0; Current value=5
If the skeleton script works for you, feel free to modify it however you need for your comparisons.
Hope this helps.
I don't know exactly how your input looks, but something like this might be useful for you:
The script
#!/bin/bash
declare -A app_stats
while read app tasks; do
    if [[ -n ${app_stats[$app]} && ${app_stats[$app]} -ne $tasks ]]; then
        echo "Number of tasks for $app has changed from ${app_stats[$app]} to $tasks"
    fi
    app_stats[$app]=$tasks
done < input.txt
The input
App1 2
App2 5
App3 6
App1 6
The output
Number of tasks for App1 has changed from 2 to 6
Regards!
I'm running several independent programs on a single machine in parallel.
The processes (say 100) are all relatively short (<5 minutes) and their output is limited to a few hundred lines (~kilobytes).
Usually the output in a terminal becomes mangled because the processes all write directly to the same buffer. I would like these outputs to be kept separate so that it's easier to debug individual processes. I could write the outputs to temporary files, but I would like to limit disk I/O and would prefer another method if possible; it would also require cleanup and probably wouldn't really improve code readability.
Is there any shell-native method that keeps buffers separated per PID and then flushes them to stdout/stderr when the process terminates? Do you see any other way to do this?
Update
I ended up using the tail -n 1000000 trick from the comment by @Gem. Since the commands I'm using are long (covering multiple lines) and I was already using subshells ( ... ) &, the change from ( ... ) & to ( ... ) 2>&1 | tail -n 1000000 & was quite minimal.
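For anyone reading along, the trick works because tail must read all of its input before it knows which lines are the last N, so each job's output is held by its own tail and written out in one chunk when the job's pipe closes. A minimal sketch (the job names are invented):

```shell
#!/bin/bash
# Each parallel job writes through its own tail, which buffers the output
# until the subshell exits and then emits it in one block.
run_job() {
    echo "job $1: step 1"
    echo "job $1: step 2"
}

( run_job A ) 2>&1 | tail -n 1000000 &
( run_job B ) 2>&1 | tail -n 1000000 &
wait   # block until both pipelines finish (demo only)
```

Note that blocks larger than the pipe/stdio buffer may still interleave between jobs; for a few hundred lines per job this is usually fine.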
You can do that with GNU Parallel. Use -k to keep the output in order and ::: to separate the arguments you want passed to your program.
Here we run 4 instances of echo in parallel:
parallel -k echo {} ::: {0..4}
0
1
2
3
4
Now add in --tag to tag your output lines with the filenames or parameters you are using:
parallel --tag -k 'echo "Line 1, param {}"; echo "Line 2, param {}"' ::: {1..4}
1 Line 1, param 1
1 Line 2, param 1
2 Line 1, param 2
2 Line 2, param 2
3 Line 1, param 3
3 Line 2, param 3
4 Line 1, param 4
4 Line 2, param 4
You should notice that each line is tagged on the left side with the parameters and that the two lines from each job are kept together.
You can now specify how your output is organised.
Use --group to group output by job
Use --line-buffer to buffer a line at a time
Use --ungroup if you want output all mixed up, but as soon as available
Sounds like you just want syslog, or rather logger, its shell interface. Example:
echo "Something happened!" | logger -i -p local0.notice
If you insist on getting output to stderr too, use --stderr. rsyslog will handle buffering, atomic writes, etc., and is presumably fairly good at optimizing disk I/O. You can also easily configure rsyslog to route the log facility (i.e. local0, or whatever you choose to use) wherever you want, such as to a tmpfs or a dedicated disk, or even over TCP. See /etc/rsyslog.conf.
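As a sketch of that routing (the file name and log path below are examples, not anything standard):

```
# /etc/rsyslog.d/50-local0.conf  (example path)
# Write everything logged to the local0 facility to its own file;
# the leading "-" makes writes asynchronous (no sync per message).
local0.*    -/var/log/parallel-jobs.log
```

To keep these messages out of the default catch-all log, add local0.none to the existing *.* selector line in /etc/rsyslog.conf.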
I want to save a particular part of the result to a text/Excel file through bash.
The command below works fine, but I need only the final result (passed/failed), not each step of the execution.
I used the following to execute:
$ bash eg.sh Behatscripts.txt > output.xls or
$ bash eg.sh Behatscripts.txt > output.txt
Below is the console output in my case; this whole thing is written to the .txt/.xls file, but I need only the last part, which is:
1 scenario (1 passed)
3 steps (3 passed)
Executing the Script : eg.feature
----------------------------------------
#javascript
Feature: home page Validation
In order to check the home page of our site
As a website/normal user
I should be able to find some of the links/texts on the home page
Scenario: Validate the links in the header and footer # features\cap_english_home.feature:8
Given I am on the homepage # FeatureContext::iAmOnHomepage()
When I visit "/en" # FeatureContext::assertVisit()
Then I should see the following <links> # FeatureContext::iShouldSeeTheFollowingLinks()
| links |
| Dutch |
1 scenario (1 passed)
3 steps (3 passed)
0m14.744s
Please give me some suggestions for saving only the last part of the console output. Thanks in advance.
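One approach, assuming the summary lines always look like "N scenarios (...)" and "N steps (...)", is to filter the output with grep instead of redirecting all of it. The run_behat function below is a stand-in for bash eg.sh Behatscripts.txt so the sketch is self-contained:

```shell
#!/bin/bash
# Stand-in for "bash eg.sh Behatscripts.txt"; replace with the real command.
run_behat() {
    printf 'Feature: home page Validation\n'
    printf '1 scenario (1 passed)\n'
    printf '3 steps (3 passed)\n'
    printf '0m14.744s\n'
}

# Keep only the summary lines, assuming they always match this pattern:
run_behat | grep -E '^[0-9]+ (scenario|step)s? \(' > output.txt
cat output.txt
```

If the summary is always the same number of lines at the end, piping through tail -n 3 instead of grep would also work.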
I'm new to coding and am trying to iterate through folders to find a specific file (named the same thing in each folder). I have 3 folders (CONTROL, GROUP1, and GROUP2). Each folder has 2 subfolders named from the set (2 3 4 5 6 7). Each subfolder has a file in it named after the subfolder, such as 2_diff.nii or 3_diff.nii. I'd like the code to go into each folder, find the subfolders, and then perform an analysis of the file in there. The problem is that my code seems to look for every subfolder in each main folder, but each main folder only has 2 of the subfolders out of (2 3 4 5 6 7). Any tips would be greatly appreciated. Thank you!
Folders=(CONTROL GROUP1 GROUP2)
SubFolders=(2 3 4 5 6 7)
data_source=/Users/sheena/Desktop/test/
for j in "${Folders[@]}"; do
    cd "${data_source}/${j}/"
    for i in "${SubFolders[@]}"; do
        fslroi ${i}_diff.nii ${i}_nodif 0 1  # I want to analyze the file <subfolder>_diff.nii and name the output <subfolder>_nodif.nii
    done
done
The way I understand your question is that in each of the directories CONTROL, GROUP1, and GROUP2 there are 2 files of the form x_diff.nii, where x is a digit between 2 and 7; at least that's how I read your code.
You don't know in advance which two digits are present.
The easiest way, as I see it, is to run through all possible SubFolders as you do, but use the continue statement to skip to the next one early if the file doesn't exist:
Folders=(CONTROL GROUP1 GROUP2)
SubFolders=(2 3 4 5 6 7)
data_source=/Users/sheena/Desktop/test/
for j in "${Folders[@]}"; do
    cd "${data_source}/${j}/"
    for i in "${SubFolders[@]}"; do
        if [[ ! -e ${i}_diff.nii ]]; then
            continue
        fi
        fslroi ${i}_diff.nii ${i}_nodif 0 1
    done
done
You could replace the if clause above with a single line:
for i in "${SubFolders[@]}"; do
    [[ -e ${i}_diff.nii ]] || continue
    fslroi ${i}_diff.nii ${i}_nodif 0 1
done
But I find the more expressive if - fi block easier to read and understand, and that's important too.
I have a fortran program (which I cannot modify) that requires several inputs from the user (in the command line) when it is run. The program takes quite a while to run, and I would like to retain use of the terminal by running it in the background; however, this is not possible due to its interactive nature.
Is there a way, using a bash script or some other method, that I can pass arguments to the program without directly interacting with it via the command line?
I'm not sure if this is possible; I tried searching for it but came up empty, though I'm not exactly sure what to search for.
Thank you!
PS: I am working on a Unix system where I cannot install anything that is not already present.
You can pipe it in:
$ cat delme.f90
program delme
read(*, *) i, j, k
write(*, *) i, j, k
end program delme
$ echo "1 2 3" | ./delme
1 2 3
$ echo "45 46 47" > delme.input
$ ./delme < delme.input
45 46 47
$ ./delme << EOF
> 3 2 1
> EOF
3 2 1
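To get the terminal back while the program runs, combine the file redirection above with & (and nohup if it should survive logging out). A sketch using a shell stand-in for the Fortran program:

```shell
#!/bin/bash
# Stand-in for the compiled Fortran program: reads three values, echoes them.
cat > delme.sh <<'EOF'
#!/bin/bash
read i j k
echo "$i $j $k"
EOF
chmod +x delme.sh

echo "45 46 47" > delme.input

# Run in the background with input from a file and output to a log:
nohup ./delme.sh < delme.input > delme.out 2>&1 &
wait    # demo only; interactively you would just keep using the shell
cat delme.out
```

You can then check progress at any time with tail -f delme.out while continuing to use the terminal.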