I have an AWS RDS process that generates 4 different outputs: Creating, Modifying, Backing-up and Available. The output of the process changes every 4 to 5 minutes, and finally, when the process completes, it generates the last output, available, which I am storing in a variable "dbState".
What I am trying to do is run a spinner until the variable has the value available.
For this, I will have to run two loops: the 1st one keeps checking the value of the variable; the 2nd one keeps running the spinner until the variable value becomes available.
while :; do
dbState=(`aws rds describe-db-instances --db-instance-identifier $Instance_Identifier --query DBInstances[*].DBInstanceStatus --output text`)
sp='/-\|'
printf ' '
sleep 0.1
while [ "$dbState" != "available" ]; do
printf '\b%.1s' "Please wait.....$sp"
sp=${sp#?}${sp%???}
sleep 0.1
done
sleep 120
done
But for some reason it gets stuck in the 2nd loop and the spinner keeps running even after the value of the variable becomes available.
Please help me here, I cannot think of any logic to achieve this.
All I want is to show a spinner until the variable value becomes available.
First, let's focus on the inner loop:
while [ "dbState" != "available" ]; do
printf '\b%.1s' "Please wait.....$sp"
sp=${sp#?}${sp%???}
sleep 0.1
done
Notice how dbState is never updated inside this loop? So there is never an exit condition from the loop. You would have to check the RDS instance state inside each iteration of the loop, so you probably only need the outer loop, and to convert the inner loop into an if statement.
Further, you have a typo in your condition. You are comparing the literal string "dbState" to the string "available". I believe you want to compare the value of the dbState variable, which would be: "$dbState" != "available".
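A rough sketch of that restructuring, reusing the asker's $Instance_Identifier and spinner string (the 10-second poll interval is arbitrary, and the spinner only advances once per poll in this simplified form):
sp='/-\|'
printf 'Please wait..... '
while :; do
    dbState=$(aws rds describe-db-instances \
        --db-instance-identifier "$Instance_Identifier" \
        --query 'DBInstances[*].DBInstanceStatus' --output text)
    if [ "$dbState" = "available" ]; then
        printf '\bavailable\n'
        break
    fi
    printf '\b%.1s' "$sp"   # advance the spinner one character
    sp=${sp#?}${sp%???}     # rotate /-\| by one position
    sleep 10                # poll interval
done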
Note that the AWS CLI Tool already has a method for waiting until an RDS instance state is "available":
aws rds wait db-instance-available --db-instance-identifier $Instance_Identifier
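If you still want a spinner while waiting, one option is to run that blocking command in the background and spin until it exits. A minimal sketch, assuming the same $Instance_Identifier:
aws rds wait db-instance-available --db-instance-identifier "$Instance_Identifier" &
wait_pid=$!
sp='/-\|'
printf 'Please wait..... '
while kill -0 "$wait_pid" 2>/dev/null; do   # loop while the background wait is still running
    printf '\b%.1s' "$sp"
    sp=${sp#?}${sp%???}
    sleep 0.1
done
wait "$wait_pid"   # collect the exit status of the aws rds wait command
printf '\bavailable\n'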
I am trying to create my first bash script. The goal of this script is to check at what rate my public IP changes. It is a fairly straightforward script: first it checks if the new address is different from the old one; if so, it should update the old one to the new one and print out the date along with the new IP address.
At this point I have created a simple script in order to accomplish this. But I have two main problems.
First, the script keeps on printing out the IP even though it hasn't changed and I have updated PREV_IP with CUR_IP.
My second problem is that I want the output to go to a file instead of to the terminal.
The interval is currently set to 1 second for test purposes. This will change to a higher interval in the final product.
#!/bin/bash
while true
PREV_IP=00
do
CUR_IP=$(curl https://ipinfo.io/ip)
if [ $PREV_IP != "$CUR_IP" ]; then
PREV_IP=$CUR_IP
"$(date)"
echo "$CUR_IP"
sleep 1
fi
done
I also get a really weird output. I have edited my public IP to xx.xxx.xxx.xxx:
Sat 20 Mar 09:45:29 CET 2021
xx.xxx.xxx.xxx
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:--
while true
PREV_IP=00
do
is the reason you are seeing the IP on each loop. It's the same as while true; PREV_IP=00; do. The exit status of true; PREV_IP=00 is the exit status of the last command, and the exit status of an assignment is 0 (success), so the loop will always execute. But PREV_IP will be reset to 00 on each loop... This is a typo and you meant to set prev_ip once, before the loop starts.
"$(date)"
will try to execute the output of the date command as the next command. So it will print:
$ "$(date)"
bash: sob, 20 mar 2021, 10:57:02 CET: command not found
And finally, to silence curl, read man curl first and then find out about -s. I use -sS so errors are also visible.
Do not use uppercase variables in your scripts. Prefer lowercase variables. Check your scripts with http://shellcheck.net . Quote variable expansions.
I would sleep each loop. Your script could look like this:
#!/bin/bash
prev=""
while true; do
cur=$(curl -sS https://ipinfo.io/ip)
if [ "$prev" != "$cur" ]; then
prev="$cur"
echo "$(date) $cur"
fi
sleep 1
done
that I want the output to go to a file instead of to the terminal.
Then research how redirection works in shell and how to use it. The simplest would be to redirect echo output.
echo "$(date) $cur" >> "a_file.txt"
The interval is currently set to 1 second for test purposes. This will change to a higher interval in the final product.
You are still limited by the time it takes to connect to https://ipinfo.io/ip. And from the ipinfo.io documentation:
Free usage of our API is limited to 50,000 API requests per month.
And finally, I once wrote a script, get_ip_external, where I tried to use as many public services as I could find for getting the external IP address. You may query multiple public services for the IPv4 address and pick one at random or round-robin so that rate limiting doesn't kick in as fast.
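A minimal sketch of that round-robin idea (the extra service URLs are just examples of plain-text "what is my IP" endpoints; substitute whichever services you trust):
# Rotate through several services so no single one is queried on every iteration.
services=(https://ipinfo.io/ip https://ifconfig.me https://icanhazip.com)
i=0
get_ip() {
    local url=${services[i]}
    i=$(( (i + 1) % ${#services[@]} ))   # advance to the next service, wrapping around
    curl -sS "$url"
}
The loop would then call cur=$(get_ip) instead of calling curl directly.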
While I was working with Terraform I had a question: I am able to destroy some specific resources using terraform destroy --target [] --target [] or terraform state rm; this is okay if we have 50 servers, but what if I have 1000 servers and would like to terminate, say, the odd-numbered or even-numbered instances using their index in the list? Or could we write a script to gather all the corrupted instances and execute that script to terminate all those instances, and make that script reusable?
Is there any way to do this? I have searched all over the internet but couldn't find any solution; maybe this question is dumb but I was just curious!
Is there any documentation that explains whether this would be possible through Terraform?
You could expose the count as an output:
output "server_count" {
value = var.server_count
}
and write a script (shell/Python/etc) that takes that count as an argument and uses it to taint every odd resource:
#!/bin/bash
# usage: taint_odd_servers.sh <num servers>
SERVER_COUNT=$1
i=0
while [ $i -lt $SERVER_COUNT ]
do
REMAINDER=$(( $i % 2 ))
if [ $REMAINDER -ne 0 ]
then
terraform taint "your_server_resource[${i}]"
fi
i=$(($i+1))
done
You could then call that script like:
taint_odd_servers.sh $(terraform output server_count)
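Since the question also asks about even-numbered instances, the same script can take the desired parity as a second argument; a sketch, with your_server_resource still a placeholder resource name:
#!/bin/bash
# usage: taint_servers.sh <num servers> <parity: 1 = odd indices, 0 = even indices>
SERVER_COUNT=$1
PARITY=${2:-1}
i=0
while [ "$i" -lt "$SERVER_COUNT" ]
do
    if [ $(( i % 2 )) -eq "$PARITY" ]
    then
        terraform taint "your_server_resource[${i}]"
    fi
    i=$(( i + 1 ))
done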
I have an infinite loop which uses the AWS CLI to get the microservice names and their parameters, like desired tasks, number of running tasks, etc., for an environment.
There are hundreds of microservices running in an environment. I have a requirement to compare the value of the AWS ECS running-tasks metric for a particular microservice in the current loop with that of the previous loop.
Say a microservice X has a running-tasks value of 5. As it is an infinite loop, after some time the loop comes around to microservice X again. Now, let's assume the value of running tasks is 4. I want to compare the running tasks for the current loop, which is 4, with the value of the running tasks from the previous run, which is 5.
If you are asking a generic question of how to keep a previous value around so it can be compared to the current value, just store it in a variable. You can use the following as a starting point:
#!/bin/bash
previousValue=0
while read v; do
echo "Previous value=${previousValue}; Current value=${v}"
previousValue=${v}
done
exit 0
If the above script is called testval.sh and you have an input file called test.in with the following values:
2
1
4
6
3
0
5
Then running
./testval.sh <test.in
will generate the following output:
Previous value=0; Current value=2
Previous value=2; Current value=1
Previous value=1; Current value=4
Previous value=4; Current value=6
Previous value=6; Current value=3
Previous value=3; Current value=0
Previous value=0; Current value=5
If the skeleton script works for you, feel free to modify it for however you need to do comparisons.
Hope this helps.
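To tie that skeleton back to the ECS scenario, a hedged sketch could look like the following; the cluster and service names are placeholders, the polling interval is arbitrary, and the running-task count is read via aws ecs describe-services:
#!/bin/bash
cluster="my-cluster"       # placeholder
service="microservice-x"   # placeholder
previousValue=""
while :; do
    current=$(aws ecs describe-services --cluster "$cluster" --services "$service" \
        --query 'services[0].runningCount' --output text)
    if [ -n "$previousValue" ] && [ "$current" != "$previousValue" ]; then
        echo "$(date) running tasks for $service changed from $previousValue to $current"
    fi
    previousValue=$current
    sleep 60
done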
I don't know how your input looks exactly, but something like this might be useful for you:
The script
#!/bin/bash
declare -A app_stats
while read app tasks
do
if [[ ${app_stats[$app]} -ne $tasks && ! -z ${app_stats[$app]} ]]
then
echo "Number of tasks for $app has changed from ${app_stats[$app]} to $tasks"
app_stats[$app]=$tasks
else
app_stats[$app]=$tasks
fi
done <<< "$( cat input.txt)"
The input
App1 2
App2 5
App3 6
App1 6
The output
Number of tasks for App1 has changed from 2 to 6
Regards!
Let's say we had a loop that we want to have run as quickly as possible. Let's say something was being done to a list of hosts inside that loop; just for the sake of argument, let's say it was a redis query. Let's say that the list of hosts may change occasionally due to hosts being added/removed from a pool (not load balanced); however, the list is predictable (e.g., they all start with “foo” and end with 2 digits). So we want to run this occasionally, say once every 15 minutes:
listOfHosts=$(dig +noall +ans foo{00..99}.domain | while read -r n rest; do printf '%s\n' ${n%.}; done)
to get the list of hosts. Let's say our loop looked something like this:
while :; do
for i in $listOfHosts; do
redis-cli -h $i llen something
done
(( ( $(date +%s) % 60 * 15) == 0 )) && callFunctionThatSetslistOfHosts
done
(now obviously there are some things missing, like testing to see if we've already run callFunctionThatSetslistOfHosts in the current minute and only running it once, and doing something with the redis output, and maybe the list of hosts should be an array, but basically this is it.)
How can we run callFunctionThatSetslistOfHosts asynchronously so that it doesn't slow down the loop? I.e., have it running in the background, setting listOfHosts occasionally (e.g. once every 15 minutes), so that the next time the inner loop is run it gets a potentially different set of hosts to run the redis query on?
My major problem seems to be that in order to set listOfHosts in a loop, that loop has to be a subshell, and listOfHosts is local to that subshell, and setting it doesn't affect the global listOfHosts.
I may resort to pipes, but will have to poll the reader before generating a new list — not that that's terribly bad if I poll slowly, but I thought I'd present this as a problem.
Thanks.
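For what it's worth, one hedged sketch of a file-based variant of that idea: have a background job rewrite a file every 15 minutes and let the main loop re-read it, so the subshell scoping problem goes away. The dig pipeline is the one from the question; the other names are made up:
#!/bin/bash
hostfile=$(mktemp)

refreshHosts() {
    # Build the new list in a scratch file and mv it into place,
    # so readers never see a half-written list.
    dig +noall +ans foo{00..99}.domain \
        | while read -r n rest; do printf '%s\n' "${n%.}"; done > "$hostfile.new"
    mv "$hostfile.new" "$hostfile"
}

refreshHosts                                      # build the initial list
( while :; do sleep 900; refreshHosts; done ) &   # refresh every 15 minutes in the background

while :; do
    while read -r host; do
        redis-cli -h "$host" llen something
    done < "$hostfile"
done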
I wonder if I could achieve something like the following logic:
Given a set of jobs to be done, fold_num, and a limit on the number of worker processes, say work_num, I hope to run work_num processes in parallel until all fold_num jobs are done. Finally, there is some other processing on the results of all these jobs. We can assume fold_num is always a multiple of work_num.
I haven't got the following snippet working so far, with tips from How to wait in bash for several subprocesses to finish and return exit code !=0 when any subprocess ends with code !=0?
#!/bin/bash
worker_num=5
fold_num=10
pids=""
result=0
for fold in $(seq 0 $(( $fold_num-1 ))); do
pids_idx=$(( $fold % ${worker_num} ))
echo "pids_idx=${pids_idx}, pids[${pids_idx}]=${pids[${pids_idx}]}"
wait ${pids[$pids_idx]} || let "result=1"
if [ "$result" == "1" ]; then
echo "some job is abnormal, aborting"
exit
fi
cmd="echo fold$fold" # use echo as an example, real command can be time-consuming to run
$cmd &
pids[${pids_idx}]="$!"
echo "pids=${pids[*]}"
done
# when the for-loop completes, do something else...
The output looks like:
pids_idx=0, pids[0]=
pids=5846
pids_idx=1, pids[1]=
fold0
pids=5846 5847
fold1
pids_idx=2, pids[2]=
pids=5846 5847 5848
fold2
pids_idx=3, pids[3]=
pids=5846 5847 5848 5849
fold3
pids_idx=4, pids[4]=
pids=5846 5847 5848 5849 5850
pids_idx=0, pids[0]=5846
fold4
./test_wait.sh: line 12: wait: pid 5846 is not a child of this shell
some job is abnormal, aborting
Question:
1. It seems the pids array has recorded the correct process IDs, but they fail to be waited for. Any ideas how to fix this?
2. Do we need to use wait after the for-loop? If so, what should be done after the for-loop?
Alright, I guess I got a working solution with tips from folks on 'parallel'.
export worker_names=("foo" "bar")
export worker_num=${#worker_names[@]}
fold_num=10 # same job count as in the snippet above
function some_computation {
    fold=$1
    cmd="..." # involves worker_names and fold
    echo $cmd; $cmd
}
export -f some_computation # important, to make this function visible to subprocesses
for fold in $(seq 0 $(( $fold_num-1 ))); do
    sem -j $worker_num some_computation $fold
done
sem --wait # wait for all jobs to complete
# do something below
A couple of things here:
I haven't got parallel itself working because of the post-computation processing I need to do after those parallel jobs; the parallel version I tried failed to wait for job completion. So I used GNU sem, which stands for semaphore.
Exporting variables is crucial for the computation function to have access to them in this situation. Otherwise those global variables are invisible.
Exporting the computation function is also necessary, for the same reason. Notice the -f option.
sem --wait perfectly fulfills the need to wait for all the parallel jobs.
HTH.