Per-page counts of a Kyocera printer via SNMP

I am trying to create a project that inserts my printer's page counts into a database or a text file.
To create a .txt file with the overall totals, split by colour and black, for both prints and copies, I use the following OIDs with the following command lines on Ubuntu:
Total counter:
snmpwalk -c public -v 1 192.168.0.123 1.3.6.1.2.1.43.10.2.1.4.1.1 > /var/www/html/wordpress/total.txt
Total copies:
snmpwalk -c public -v 1 192.168.0.123 1.3.6.1.4.1.1347.42.3.1.1.1.1.2 > /var/www/html/wordpress/totalcopias.txt
Total prints:
snmpwalk -c public -v 1 192.168.0.123 1.3.6.1.4.1.1347.42.3.1.1.1.1.1 > /var/www/html/wordpress/totalimpressões.txt
Total black prints:
snmpwalk -c public -v 1 192.168.0.123 1.3.6.1.4.1.1347.42.3.1.2.1.1.1 > /var/www/html/wordpress/totalimpressõespreto.txt
Total color prints:
snmpwalk -c public -v 1 192.168.0.123 1.3.6.1.4.1.1347.42.3.1.2.1.1.1.3 > /var/www/html/wordpress/totalimpressõescores.txt
Total black copies:
snmpwalk -c public -v 1 192.168.0.123 1.3.6.1.4.1.1347.42.3.1.2.1.1.2.1 > /var/www/html/wordpress/totalcopiaspreto.txt
Total color copies:
snmpwalk -c public -v 1 192.168.0.123 1.3.6.1.4.1.1347.42.3.1.2.1.1.2.3 > /var/www/html/wordpress/totalcopiascores.txt
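If the goal is to feed a database rather than keep raw snmpwalk dumps, net-snmp's `-Oqv` option prints only the value, which is easier to store. A minimal sketch of building a timestamped CSV row (the CSV layout is my choice; the snmpget call is shown as a comment and replaced by a literal value so the snippet runs without the printer):

```shell
# In the real cron job this would be:
#   total=$(snmpget -Oqv -c public -v 1 192.168.0.123 1.3.6.1.2.1.43.10.2.1.4.1.1)
total=12345
row="$(date +%F),$total"
echo "$row" >> /tmp/total.csv   # append one dated sample per run
echo "$row"
```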
However, I have created users on my printer and I track monthly prints and copies per user.
So far, I do not know whether it is even possible to access this per-user information via an OID; and if it is possible, I have not yet found the OID to dump it to a text file as in the examples above.

Related

How to make a bash script that will use cdhit on each file in the directory separately?

I have a directory with >500 multifasta files. I want to use the same program (cd-hit-est) to cluster the sequences in each of the files and then save the output in another directory, keeping each output file's name the same as the original file's.
for file in /dir/*.fasta;
do
echo "$file";
cd-hit-est -i $file -o /anotherdir/${file} -c 0.98 -n 9 -d 0 -M 120000 -T 32;
done
I get partial output and then an error:
...
^M# comparing sequences from 33876 to 33910
.................---------- new table with 34 representatives
^M# comparing sequences from 33910 to 33943
.................---------- new table with 33 representatives
^M# comparing sequences from 33943 to 33975
................---------- new table with 32 representatives
^M# comparing sequences from 33975 to 34006
................---------- new table with 31 representatives
^M# comparing sequences from 34006 to 34036
...............---------- new table with 30 representatives
^M# comparing sequences from 34036 to 34066
...............---------- new table with 30 representatives
^M# comparing sequences from 34066 to 35059
.....................
Fatal Error:
file opening failed
Program halted !!
---------- new table with 993 representatives
35059 finished 34719 clusters
No output file was produced. Could anyone help me understand where I am making a mistake?
doit() {
file="$1"
echo "$file";
cd-hit-est -i "$file" -o /anotherdir/$(basename "$file") -c 0.98 -n 9 -d 0 -M 120000 -T 32;
}
env_parallel doit ::: /dir/*.fasta
OK, it seems that I have an answer now; posting it in case somebody is looking for a solution to a similar problem.
for file in /dir/*.fasta;
do
echo "$file";
cd-hit-est -i "$file" -o /anotherdir/$(basename "$file") -c 0.98 -n 9 -d 0 -M 120000 -T 32;
done
Naming the output file differently did the trick.
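For anyone wondering why the original loop failed: `${file}` still carries the `/dir/` prefix, so the output path points into a directory that does not exist under `/anotherdir`, and cd-hit-est aborts with "file opening failed". A minimal illustration with a hypothetical file name:

```shell
file=/dir/sample.fasta
bad="/anotherdir/${file}"                # full input path embedded in the output path
good="/anotherdir/$(basename "$file")"   # only the file name is kept
echo "$bad"    # /anotherdir//dir/sample.fasta  -- nested, nonexistent directory
echo "$good"   # /anotherdir/sample.fasta
```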

Using AWK and Fping to capture out put to variable in Bash

I am trying to save the output of fping into a variable in Bash. This should be easy, but I just can't get it to work. I have tried various methods, including using AWK and CUT on the captured variable, but I keep ending up with empty variables.
My thought process is as follows.
typing fping 8.8.8.8 -c 2 gives me the output
8.8.8.8 : [0], 84 bytes, 15.1 ms (15.1 avg, 0% loss)
8.8.8.8 : [1], 84 bytes, 15.0 ms (15.0 avg, 0% loss)
8.8.8.8 : xmt/rcv/%loss = 2/2/0%, min/avg/max = 15.0/15.0/15.1
typing fping -c 2 8.8.8.8 | awk '/min/' only returns the last line, which is what I want:
8.8.8.8 : xmt/rcv/%loss = 2/2/0%, min/avg/max = 15.0/15.0/15.1
so typing output=$(fping -c 1 8.8.8.8 | awk '/min/')
and I expected that to save the last line into a variable that I can then process further. But instead I get a BLANK variable, even though the line is still shown, as below:
$ output=$(fping -c 1 8.8.8.8 | awk '/min/')
8.8.8.8 : xmt/rcv/%loss = 1/1/0%, min/avg/max = 15.1/15.1/15.1
I was also looking at first using AWK to extract just the 5th and 6th column values to make post-processing easier,
something like
output=$(fping -c 1 8.8.8.8 | awk '/min/ {loss= $5, time=$6}')
This syntax may be wrong at the moment, but the aim is to end up with a variable like the one below, with all the values ready to extract:
"2/2/0% 15.0/15.0/15.1"
What am I doing wrong? How can I save that last line of the output into a variable? I am OK with splitting it up, but why does AWK not extract the right bit and save it?
Thank you
Here's the complete, unabbreviated output of your attempt:
user@host$ output=$(fping -c 1 8.8.8.8 | awk '/min/')
8.8.8.8 : xmt/rcv/%loss = 1/1/0%, min/avg/max = 0.66/0.66/0.66
user@host$
The fact that you're getting output on screen is crucial: it means the data is not being captured by the command substitution. That typically indicates the data is written to stderr instead of stdout. Here's what you get when you redirect stderr to stdout:
user@host$ output=$(fping -c 1 8.8.8.8 2>&1 | awk '/min/')
(no output)
and indeed, the variable now has a value:
user@host$ printf '%s\n' "$output"
8.8.8.8 : xmt/rcv/%loss = 1/1/0%, min/avg/max = 0.77/0.77/0.77
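Once the line is captured, the two value groups the question asks about are awk fields 5 and 8 (assuming fping's "xmt/rcv/%loss = ... min/avg/max = ..." summary layout shown above); the trailing comma on field 5 needs stripping. A sketch against a sample line:

```shell
line='8.8.8.8 : xmt/rcv/%loss = 1/1/0%, min/avg/max = 0.66/0.66/0.66'
loss=$(awk '{sub(/,$/, "", $5); print $5}' <<< "$line")   # drop the trailing comma
rtt=$(awk '{print $8}' <<< "$line")
echo "$loss"   # 1/1/0%
echo "$rtt"    # 0.66/0.66/0.66
```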

Simulating User Interaction In Gromacs in Bash

I am currently doing parallel cascade simulations in GROMACS 4.6.5 and I am inputting the commands using a bash script:
#!/bin/bash
pdb2gmx -f step_04_01.pdb -o step_04_01.gro -water none -ff amber99sb -ignh
grompp -f minim.mdp -c step_04_01.gro -p topol.top -o em.tpr
mdrun -v -deffnm em
grompp -f nvt.mdp -c em.gro -p topol.top -o nvt.tpr
mdrun -v -deffnm nvt
grompp -f md.mdp -c nvt.gro -t nvt.cpt -p topol.top -o step_04_01.tpr
mdrun -v -deffnm step_04_01
trjconv -s step_04_01.tpr -f step_04_01.xtc -pbc mol -o step_04_01_pbc.xtc
g_rms -s itasser_2znh.tpr -f step_04_01_pbc.xtc -o step_04_01_rmsd.xvg
Commands such as trjconv and g_rms require user interaction to select options. For instance when running trjconv you are given:
Select group for output
Group 0 ( System) has 6241 elements
Group 1 ( Protein) has 6241 elements
Group 2 ( Protein-H) has 3126 elements
Group 3 ( C-alpha) has 394 elements
Group 4 ( Backbone) has 1182 elements
Group 5 ( MainChain) has 1577 elements
Group 6 ( MainChain+Cb) has 1949 elements
Group 7 ( MainChain+H) has 1956 elements
Group 8 ( SideChain) has 4285 elements
Group 9 ( SideChain-H) has 1549 elements
Select a group:
And the user is expected to enter e.g. 0 into the terminal to select Group 0. I have tried using expect and send, e.g.:
trjconv -s step_04_01.tpr -f step_04_01.xtc -pbc mol -o step_04_01_pbc.xtc
expect "Select group: "
send "0"
However, this does not work. I have also tried using -flag as described at http://www.gromacs.org/Documentation/How-tos/Using_Commands_in_Scripts#Within_Script, but it says that it is not a recognised input.
Is my expect/send formatted correctly? Is there another way around this in GROMACS?
I don't know GROMACS, but I think it is just asking you to use the bash here-document syntax:
yourcomand ... <<EOF
1st answer to a question
2nd answer to a question
EOF
so you might have
trjconv -s step_04_01.tpr -f step_04_01.xtc -pbc mol -o step_04_01_pbc.xtc <<EOF
0
EOF
You can use
echo 0 | trjconv -s step_04_01.tpr -f step_04_01.xtc -pbc mol -o step_04_01_pbc.xtc
And if you need to provide multiple inputs, just use
echo 4 4 | g_rms -s itasser_2znh.tpr -f step_04_01_pbc.xtc -o step_04_01_rmsd.xvg
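The here-document, echo, and printf forms all deliver the same bytes on stdin; `printf '4\n4\n'` puts each answer on its own line, which is the safest form if the tool reads answers line by line. A sketch with `cat` standing in for the interactive GROMACS tool, so the input is visible:

```shell
# Two answers via a here-document, captured for comparison.
heredoc_input=$(cat <<EOF
4
4
EOF
)
# The same two answers via printf, one per line.
printf_input=$(printf '4\n4\n')
echo "$heredoc_input"
echo "$printf_input"
```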

bash awk file compare

I have a config
[LogicalUnit1] UnitInquiry "NFSN00Y5IP51ZL" LUN0 /mnt/extent0 64MB
[LogicalUnit2] UnitInquiry "NFSN00N49CQL28" LUN0 /mnt/extent1 64MB
[LogicalUnit3] UnitInquiry "NFSNBRGQOCXK" LUN0 /mnt/extent4 10MB
[LogicalUnit4] UnitInquiry "NFSNE7IXADFJ" LUN0 /mnt/extent5 25MB
which is read via a bash script; using awk, I parse the file and get the variables:
awk '/UnitInquiry/ {print $1, $3, $5, $6}' $ctld_config | while read a b c d ; do
if [ -f $a ]
then
ctladm create -b block -o file=$c -S $b -d $a
ctladm devlist -v > $lun_config
else
truncate -s $d $c ; ctladm create -b block -o file=$c -S $b -d $a
fi
done
This initializes the LUNs properly on boot-up. However, if I add a LUN, the script recreates them all. How can I compare what is running with what is configured, and only initialize the LUNs that are not already live? There is a command to list the devices:
ctladm devlist -v
LUN Backend Size (Blocks) BS Serial Number Device ID
0 block 131072 512 "NFSN00Y5IP51ZL" [LogicalUnit1]
lun_type=0
num_threads=14
file=/mnt/extent0
1 block 131072 512 "NFSN00N49CQL28" [LogicalUnit2]
lun_type=0
num_threads=14
file=/mnt/extent1
2 block 20480 512 "NFSNBRGQOCXK" [LogicalUnit3]
lun_type=0
num_threads=14
file=/mnt/extent4
3 block 51200 512 "NFSNE7IXADFJ" [LogicalUnit4]
lun_type=0
num_threads=14
file=/mnt/extent5
Why not add the following after the then:
ctladm devlist -v | grep -qF "$a" && continue
This will:
run the command that shows the currently active devices,
check whether the LogicalUnit name you want to register is already listed, and if so,
skip the rest of the loop.
If $a (logical unit name) is not unique enough, you can also grep for another, more unique identifier, e.g. the serial number.
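To see the skip logic in isolation: `grep -qF` matches the unit name literally (with plain `grep -q`, the square brackets in `[LogicalUnit1]` would be treated as a regex character class) and succeeds silently when the name is already in the device list. Sample data stands in for `ctladm devlist -v` output:

```shell
devlist='0 block 131072 512 "NFSN00Y5IP51ZL" [LogicalUnit1]
1 block 131072 512 "NFSN00N49CQL28" [LogicalUnit2]'
result=""
for a in '[LogicalUnit1]' '[LogicalUnit3]'; do
    if grep -qF "$a" <<< "$devlist"; then
        result+="skip $a "      # already live: the real loop would `continue` here
    else
        result+="create $a "    # not listed: fall through to `ctladm create`
    fi
done
echo "$result"
```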

shell script for Net-snmp (get/walk) is not efficient

#!/bin/bash
for i in `seq 1 3000`
do
index=`snmpget -v 2c -c public -Oqv localhost 1.3.6.1.4.1.21067.4.1.1.1.$i`
done
for i in `seq 1 3000`
do
upload=`snmpget -v 2c -c public -Oqv localhost 1.3.6.1.4.1.21067.4.1.1.10.$i`
done
for i in `seq 1 3000`
do
download=`snmpget -v 2c -c public -Oqv localhost 1.3.6.1.4.1.21067.4.1.1.11.$i`
done
(ubuntu-12.04)
Above is my shell script. Every execution of the snmpget command returns an integer and stores the value in one of the three variables above.
The problem is that the data table holds 9000 values, so this script takes far too long and becomes a bottleneck.
Can anyone suggest a simpler script using snmpwalk (or anything else) that stores all this data into a single array[9000], or in three passes into three different arrays indexed 1 to 3000, so I can reduce the time as much as possible?
For example, snmpwalk -v 2c -c public -Oqv localhost 1.3.6.1.4.1.21067 gives all the values, but I don't know how to store them in an array under different indexes.
See below what I have tried; it is giving me errors:
cat script.sh
#!/bin/sh
OUTPUT1=$(snmpbulkwalk -Oqv -c public -v 2c localhost 1.3.6.1.2.1.2.2.1.1 2> /dev/null)
i=1
for LINE in ${OUTPUT1} ;
do
OUTPUT1[$i]=$LINE;
i=`expr $i + 1`
done
sh script.sh
j4.sh: 6: j4.sh: OUTPUT1[1]=1: not found
j4.sh: 6: j4.sh: OUTPUT1[2]=2: not found
Try something like this (note that arrays require bash, not plain sh — running your script with sh is also why you got the "OUTPUT1[1]=1: not found" errors above):
OID="1.3.6.1.4.1.21067.4.1.1"
declare -a index=($(snmpwalk -v 2c -c public -Oqv localhost ${OID}.1))
declare -a upload=($(snmpwalk -v 2c -c public -Oqv localhost ${OID}.10))
declare -a download=($(snmpwalk -v 2c -c public -Oqv localhost ${OID}.11))
echo "retrieved ${#index[@]} elements"
echo "#${index[1]}: up=${upload[1]} down=${download[1]}"
Note that, in general, I would suggest using some higher-level language (like Python) rather than bash to work with SNMP more efficiently...
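A slightly safer variant of the array fill (bash only): `mapfile` splits on newlines rather than on all whitespace, so values containing spaces stay intact, and bash arrays are 0-indexed. Here `printf` stands in for the snmpwalk calls so the snippet runs anywhere:

```shell
# Real usage would be e.g.:
#   mapfile -t index < <(snmpwalk -v 2c -c public -Oqv localhost ${OID}.1)
mapfile -t index  < <(printf '%s\n' 1 2 3)
mapfile -t upload < <(printf '%s\n' 100 200 300)
echo "retrieved ${#index[@]} elements"
echo "#${index[0]}: up=${upload[0]}"
```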
If it is a table that you are retrieving, I would suggest using snmptable rather than walk or get.
