shell script for Net-snmp (get/walk) is not efficient - bash

#!/bin/bash
for i in $(seq 1 3000); do
    index=$(snmpget -v 2c -c public -Oqv localhost 1.3.6.1.4.1.21067.4.1.1.1.$i)
done
for i in $(seq 1 3000); do
    upload=$(snmpget -v 2c -c public -Oqv localhost 1.3.6.1.4.1.21067.4.1.1.10.$i)
done
for i in $(seq 1 3000); do
    download=$(snmpget -v 2c -c public -Oqv localhost 1.3.6.1.4.1.21067.4.1.1.11.$i)
done
(ubuntu-12.04)
Above is my shell script. Each execution of the snmpget command returns an integer and stores the value in one of the three variables above.
The problem is that the data table holds 9000 values, so this script takes far too much time and creates a bottleneck.
Can anyone suggest a simple snmpwalk-based (or other) script that stores all of this data into a single array[9000], or in three passes into three different arrays indexed 1 to 3000, so I can reduce the runtime as much as possible?
For example, snmpwalk -v 2c -c public -Oqv localhost 1.3.6.1.4.1.21067 gives all the values, but I don't know how to store them in an array with distinct indexes.
..................................................................
See what I have tried below; it is giving me errors:
cat script.sh
#!/bin/sh
OUTPUT1=$(snmpbulkwalk -Oqv -c public -v 2c localhost 1.3.6.1.2.1.2.2.1.1 2> /dev/null)
i=1
for LINE in ${OUTPUT1}; do
    OUTPUT1[$i]=$LINE
    i=`expr $i + 1`
done
sh script.sh
j4.sh: 6: j4.sh: OUTPUT1[1]=1: not found
j4.sh: 6: j4.sh: OUTPUT1[2]=2: not found

try something like this:
OID="1.3.6.1.4.1.21067.4.1.1"
declare -a index=($(snmpwalk -v 2c -c public -Oqv localhost ${OID}.1))
declare -a upload=($(snmpwalk -v 2c -c public -Oqv localhost ${OID}.10))
declare -a download=($(snmpwalk -v 2c -c public -Oqv localhost ${OID}.11))
echo "retrieved ${#index[@]} elements"
echo "#${index[1]}: up=${upload[1]} down=${download[1]}"
Note that, in general, I would suggest using some higher-level language (like Python) rather than bash to work with SNMP more efficiently.

If it is a table that you are retrieving, I would suggest using snmptable rather than walk or get.
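Either way, the array-loading step can be tested without an agent: snmpwalk's `-Oqv` output is one value per line, so `readarray` (bash 4+) captures it into an array directly. A sketch where a printf function stands in for the real snmpwalk call:

```shell
#!/bin/bash
# walk() is a stand-in for: snmpwalk -v 2c -c public -Oqv localhost ${OID}.10
walk() { printf '100\n200\n300\n'; }

# readarray reads one line per element; -t strips each trailing newline
readarray -t upload < <(walk)
echo "retrieved ${#upload[@]} elements, first=${upload[0]}"   # → retrieved 3 elements, first=100
```

Unlike `arr=($(walk))`, readarray splits only on newlines, so values containing spaces stay intact.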

Related

How to implement a counter using dictionary in Bash

#!/bin/bash
clear
Counter() {
    declare -A dict
    while read line; do
        if [[ -n "${dict[$line]}" ]]; then
            ((${dict[$line]}+1))
        else
            dict["$line"]=1
        fi
    done < /home/$USER/.bash_history
    echo ${!dict[@]} ${dict[@]}
}
Counter
I'm trying to write a script that counts the most-used commands in your bash history, using a dictionary to store commands as keys and the number of times you used each command as the value, but my code fails. Can you help me write code that works?
Python code for reference:
def Counter(file):
    dict = {}
    for line in file.read().splitlines():
        if line not in dict:
            dict[line] = 1
        else:
            dict[line] += 1
    for k, v in sorted(dict.items(), key=lambda x: x[1]):
        print(f"{k} was used {v} times")

with open("/home/igor/.bash_history") as bash:
    Counter(bash)
Output:
echo $SHELL was used 11 times
sudo apt-get update was used 14 times
ls -l was used 14 times
ldd /opt/pt/bin/PacketTracer7 was used 15 times
zsh was used 17 times
ls was used 26 times
There's no need to initialize the value to 1 for the first occurrence. Bash can do that for you.
The problem is you can't use an empty string as a key, so prepend something and remove it when showing the value.
#! /bin/bash
Counter() {
    declare -A dict
    while read line; do
        c=${dict[x$line]}
        dict[x$line]=$((c+1))
    done < /home/$USER/.bash_history
    for k in "${!dict[@]}"; do
        echo "${dict[$k]}"$'\t'"${k#x}"
    done
}
Counter | sort -n
To count occurrences of lines in shell, you would typically do sort | uniq -c | sort:
sort ~/.bash_history | uniq -c | sort -rn | head
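The same counting can also be done in one awk pass with an associative array; a sketch using a throwaway stand-in file instead of ~/.bash_history:

```shell
#!/bin/bash
# Build a small stand-in history file so the example is self-contained
printf 'ls\nls\npwd\nls\n' > /tmp/hist.demo

# count[$0]++ tallies each distinct line; END prints "count line" pairs
awk '{count[$0]++} END {for (line in count) print count[line], line}' /tmp/hist.demo | sort -rn
# → 3 ls
#   1 pwd

rm -f /tmp/hist.demo
```

This avoids the pre-sort that `uniq -c` requires, at the cost of holding all distinct lines in memory.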

display grid of data in bash

I would like to get an opinion on how best to do this in bash, thank you.
For x number of servers, each has its own list of replication agreements and their status. It's easy to run a few commands and get this data, e.g.:
Get servers; output (setting/variable in/from a local config file):
. ./ldap-config ; echo "$MASTER $REPLICAS"
dc1-server1 dc1-server2 dc2-server1 dc2-server2 dc3...
For dc1-server1, get agreements; output:
ipa-replica-manage -p $(cat ~/.dspw) list -v $SERVER.$DOMAIN | grep ': replica' | sed 's/: replica//'
dc2-server1
dc3-server1
dc4-server1
For dc1-server1, get agreement status codes; output:
ipa-replica-manage -p $(cat ~/.dspw) list -v $SERVER.$DOMAIN | grep 'status: Error (' | sed -e 's/.*status: Error (//' -e 's/).*//'
0
0
18
So the output would be several columns based on the 'get servers' list, with each 'replica: status' pair listed under its server.
I'm looking to achieve something like:
dc2-server1: 0 dc2-server2: 0 dc1-server1: 0 ...
dc3-server1: 0 dc3-server2: 18 dc3-server1: 13 ...
dc4-server1: 18 dc4-server2: 0 dc4-server1: 0 ...
Generally eval is considered evil. Nevertheless, I'm going to use it.
paste is handy for printing files side-by-side.
Bash process substitutions can be used where you'd use a filename.
So, I'm going to dynamically build up a paste command and then eval it.
I'm going to use get.sh as a placeholder for your mystery commands.
cmd="paste"
while read -ra servers; do
    for server in "${servers[@]}"; do
        cmd+=" <(./get.sh \"$server\" agreements | sed 's/\$/:/')"
        cmd+=" <(./get.sh \"$server\" status)"
    done
done < <(./get.sh servers)
eval "$cmd" | column -t
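A self-contained toy version of that pattern, with two hard-coded column sources standing in for the get.sh calls:

```shell
#!/bin/bash
# paste merges line N of each input side by side; the process substitutions
# <(...) act as the per-server "files" that the dynamically built command uses.
paste <(printf 'dc2-server1:\ndc3-server1:\n') <(printf '0\n18\n') | column -t
```

Each extra `<(...)` argument adds another column, which is why the answer appends two per server before eval'ing the final command line.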

How to make a bash script that will use cdhit on each file in the directory separately?

I have a directory with >500 multifasta files. I want to use the same program (cd-hit-est) to cluster sequences in each of the files and then save the output in another directory. I want the output file name to be the same as the original file's name.
for file in /dir/*.fasta; do
    echo "$file"
    cd-hit-est -i $file -o /anotherdir/${file} -c 0.98 -n 9 -d 0 -M 120000 -T 32
done
I get partial output and then an error:
...
^M# comparing sequences from 33876 to 33910
.................---------- new table with 34 representatives
^M# comparing sequences from 33910 to 33943
.................---------- new table with 33 representatives
^M# comparing sequences from 33943 to 33975
................---------- new table with 32 representatives
^M# comparing sequences from 33975 to 34006
................---------- new table with 31 representatives
^M# comparing sequences from 34006 to 34036
...............---------- new table with 30 representatives
^M# comparing sequences from 34036 to 34066
...............---------- new table with 30 representatives
^M# comparing sequences from 34066 to 35059
.....................
Fatal Error:
file opening failed
Program halted !!
---------- new table with 993 representatives
35059 finished 34719 clusters
No output file was produced. Could anyone help me understand where I made a mistake?
doit() {
    file="$1"
    echo "$file"
    cd-hit-est -i "$file" -o /anotherdir/$(basename "$file") -c 0.98 -n 9 -d 0 -M 120000 -T 32
}
env_parallel doit ::: /dir/*.fasta
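If env_parallel (GNU parallel) isn't installed, `xargs -P` gives similar per-file parallelism; this is a swapped-in alternative, sketched with `echo` in place of the real cd-hit-est call:

```shell
#!/bin/bash
# Run up to 4 jobs at once; {} is each .fasta path, echo stands in for cd-hit-est.
printf '%s\n' /dir/a.fasta /dir/b.fasta |
    xargs -P 4 -I {} bash -c 'echo "cd-hit-est -i $1 -o /anotherdir/$(basename "$1")"' _ {}
```

Note `-P` can interleave output from concurrent jobs, so output order is not guaranteed.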
OK, it seems that I have an answer now; in any case, if somebody is looking for a similar answer:
for file in /dir/*.fasta; do
    echo "$file"
    cd-hit-est -i "$file" -o /anotherdir/$(basename "$file") -c 0.98 -n 9 -d 0 -M 120000 -T 32
done
Writing the output file name this way did the trick.
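That fix works because basename strips the directory part, so the -o path stays inside /anotherdir rather than pointing at a nonexistent /anotherdir/dir/... subdirectory (the likely cause of "file opening failed"). A quick check:

```shell
#!/bin/bash
# ${file} keeps the /dir prefix; basename drops it
file=/dir/sample.fasta
echo "/anotherdir/${file}"               # → /anotherdir//dir/sample.fasta (broken path)
echo "/anotherdir/$(basename "$file")"   # → /anotherdir/sample.fasta
```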

bash scripting - how to loop through results called within a function

I have a function that gives me a list of IPs, and for each IP in my list I want to run a query. The problem I'm having is that it's only looping through one of the results and not the rest.
getPartition ()
{
    _knife=$(which knife)
    _grep=$(which grep)
    _awk=$(which awk)
    cd ~/home/foo/.chef
    local result=$(${_knife} search "chef_environment:dev AND role:myapp AND ec2_region:us-east-1" | ${_grep} IP | ${_awk} '{ print $2 }')
    read -a servers <<< $result
    echo "Checking ${#servers[@]} servers"
    for i in ${servers[@]}; do
        local host='10.1.2.123'
        local db='mystate'
        _mongo=$(which mongo)
        echo -n "$i"
        local exp="db.foobarcluster_servers.find(
{\"node_host\":\"${i}\",\"node_type\":\"PROCESS\",\"region\":\"us-east-1\",\"status\":\"ACTIVE\"},{\"partition_range_start\":1,\"partition_range_end\":1, _id:0}).pretty();"
        ${_mongo} ${host}/${db} --eval "$exp" | grep -o -e "{[^}]*}"
    done
}
So, I tried using for, but it's only running the query for one of the five hosts listed.
I can see in my output for result that the list of IPs look like this:
+ local 'result=10.8.3.34
10.8.2.161
10.8.3.514
10.8.4.130
10.8.2.173'
So, I'm only getting results for one of the IPs when it should be five of them, because I have 5 IPs:
+ read -a servers
+ echo 'Checking 1 servers'
Checking 1 servers
+ for i in ${servers[@]}
+ local host=10.1.2.130
+ local db=mystate
++ which mongo
+ _mongo=/usr/local/bin/mongo
+ echo -n 10.8.3.34
10.8.3.34+ local 'exp=db.foobarcluster_servers.find(
{"node_host":"10.8.3.34","node_type":"PROCESS","region":"us-east-1","status":"ACTIVE"},{"partition_range_start":1,"partition_range_end":1, _id:0}).pretty();'
+ /usr/local/bin/mongo 10.8.3.34/mystate --eval 'db.foobarcluster_servers.find(
{"node_host":"10.8.3.34","node_type":"PROCESS","region":"us-east-1","status":"ACTIVE"},{"partition_range_start":1,"partition_range_end":1, _id:0}).pretty();'
+ grep -o -e '{[^}]*}'
{ "partition_range_start" : 31, "partition_range_end" : 31 }
+ set +x
Results:
{ "partition_range_start" : 31, "partition_range_end" : 31 }
I'm expecting:
{ "partition_range_start" : 31, "partition_range_end" : 31 }
{ "partition_range_start" : 32, "partition_range_end" : 32 }
{ "partition_range_start" : 33, "partition_range_end" : 33 }
{ "partition_range_start" : 34, "partition_range_end" : 34 }
{ "partition_range_start" : 35, "partition_range_end" : 35 }
How do I effectively loop through my IPs? Did I set up result properly as a variable to hold that list of IPs?
Good idea using set -x - another good debugging tactic (that also makes reading set -x easier) would be to comment out parts that aren't relevant to the issue (e.g. make the for loop simply print its iterations, hard-code the value of result, etc.) to try to narrow down the issue.
If I try to replicate what you're doing myself:
demo() {
    local result='10.8.3.34
10.8.2.161
10.8.3.514
10.8.4.130
10.8.2.173'
    read -a servers <<< $result
    echo "Checking ${#servers[@]} servers"
    for i in ${servers[@]}; do
        echo "$i"
    done
}
Which outputs (with set -x):
$ demo
+ demo
+ local 'result=10.8.3.34
10.8.2.161
10.8.3.514
10.8.4.130
10.8.2.173'
+ read -a servers
+ echo 'Checking 5 servers'
Checking 5 servers
+ for i in '${servers[@]}'
+ echo 10.8.3.34
10.8.3.34
+ for i in '${servers[@]}'
+ echo 10.8.2.161
10.8.2.161
+ for i in '${servers[@]}'
+ echo 10.8.3.514
10.8.3.514
+ for i in '${servers[@]}'
+ echo 10.8.4.130
10.8.4.130
+ for i in '${servers[@]}'
+ echo 10.8.2.173
10.8.2.173
In other words, the code you shared appears to be working as expected. Perhaps there's a typo you corrected while transcribing?
A key thing to note (per help read) is that read "Reads a single line from the standard input ... the line is split into fields as with word splitting". In other words, a multi-line input does not all get read by a call to read, only the first line does. We can test this by tweaking the demo function above to use:
read -a servers <<< "$result"
Which causes the output you describe:
$ demo
+ demo
+ local 'result=10.8.3.34
10.8.2.161
10.8.3.514
10.8.4.130
10.8.2.173'
+ read -a servers
+ echo 'Checking 1 servers'
Checking 1 servers
+ for i in '${servers[@]}'
+ echo 10.8.3.34
10.8.3.34
So that's likely the source of your issue - by quoting $result (which generally is a good idea) read respects the newlines separating the elements, and stops reading after it sees the first one.
Instead use the readarray command, which has more sane behavior for tasks like this. It will "read lines from a file into an array variable", rather than stopping after the first line.
You can then skip the indirection of writing to result, as well, and just pipe directly into readarray:
readarray -t servers < <(
    ${_knife} search "chef_environment:dev AND role:myapp AND ec2_region:us-east-1" |
        ${_grep} IP | ${_awk} '{ print $2 }')
It's because read reads only until the first input line delimiter, "\n".
Adding the option -d '' makes it read until the end of input:
result=serv1$'\n'serv2$'\n'serv3
read -a servers <<< "$result"
printf "<%s>\n" "${servers[@]}"
read -d '' -a servers <<< "$result"
printf "<%s>\n" "${servers[@]}"
There's also the readarray builtin, which can be used to read lines into an array:
readarray -t servers <<< "$result"
printf "<%s>\n" "${servers[@]}"
-t removes the trailing newline from each element of the array.
The problem is that…
read -a servers <<< $result
is only reading the first line into the array
Change that line to…
servers=( $result )
This converts every whitespace-delimited value in $result into an array element. Effectively servers=( <ip> <ip> <ip> <ip> <ip> )
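A quick check of that assignment:

```shell
#!/bin/bash
# Unquoted expansion word-splits on all whitespace, newlines included,
# so every line of $result becomes its own array element.
result=$'10.8.3.34\n10.8.2.161\n10.8.2.173'
servers=( $result )
echo "${#servers[@]}"   # → 3
```

The caveat is that `servers=( $result )` also glob-expands each word, so prefer readarray if values could contain `*` or `?`.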

Make back quotes to be evaluated when using cat > file.txt in bash

I have seen something like the following.
cat >/etc/swift/swift.conf <<EOF
[swift-hash]
# random unique strings that can never change (DO NOT LOSE)
swift_hash_path_prefix = `od -t x8 -N 8 -A n </dev/random`
swift_hash_path_suffix = `od -t x8 -N 8 -A n </dev/random`
EOF
It seems that it is intended to fetch and write a random value while using cat. I tried Ctrl+D (I'm running under a PuTTY client) at the point where 'EOF' was written, but 'od' wasn't executed; it just wrote the raw input od -t x8 -N 8 -A n </dev/random. How can I get those backquoted commands to run?
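For reference, backquotes (and $(...)) inside a heredoc are expanded as long as the delimiter after << is unquoted; writing <<'EOF' is what keeps them literal. A minimal check:

```shell
#!/bin/bash
# Unquoted delimiter: the backquoted command IS run before writing the file.
tmp=$(mktemp)
cat > "$tmp" <<EOF
value = `echo expanded`
EOF
cat "$tmp"    # → value = expanded
rm -f "$tmp"
```

So if the literal text `od -t x8 ...` ended up in the file, the heredoc delimiter was most likely quoted (or the block was typed into a context that doesn't expand it).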
