Shell doesn't respond to first input character after executing my program which executes a shell command - bash

I have a C program that does remote command execution using the rsh command. It pulls IP information and executes a command on the remote host. After running my program, I am returned to the shell prompt. The C program exited successfully (strace and ps -ef confirmed this).
When I type any character in the shell now, the shell doesn't respond to it. It works fine from the second keypress onward.
I am using the bash shell.
The C program looks something like this:
char *cmdline = "/sbin/ip link show bond0 2>/dev/null && inf=bond0 || inf=eth0;"
"/sbin/ip -6 addr show $inf scope link | /bin/awk '/inet6/ { print \$2 }'";
char *getpeerip = "/usr/bin/rsh $(/sbin/ifconfig eth1 | /bin/awk '/inet/ { print \$2 }'"
"| /bin/awk -F: '{ print \$2 }'"
"| /bin/awk -F. '{ printf \"%%s.%%s.%%s.%d\", \$1, \$2, \$3 }') "
"\"/sbin/ip link show bond0 2>/dev/null && inf=bond0 || inf=eth0;"
"/sbin/ip -6 addr show $inf scope link | /bin/awk '/inet6/ { print \\$2 }'\"";
char lnkbuf[300];
cpid = p_getset->local_addr_ids[i].entity_instance;
if (cpid == getThisCPNum()) {
    cmd_file = popen(cmdline, "r");
} else {
    sprintf(lnkbuf, getpeerip, 1+getPeerCPSlot());
    cmd_file = popen(lnkbuf, "r");
}
if (cmd_file != NULL) {
    fgets(lnkbuf, 100, cmd_file);
    pclose(cmd_file);   /* a stream opened with popen() should be closed with pclose(), not fclose() */
}

The output of "ip link show bond0" coming back over rsh contained escape characters, and that was what broke the console input.
I changed the command like this, and it worked:
char *cmdline = "/sbin/ip link show bond0 >/dev/null 2>/dev/null && inf=bond0 || inf=eth0;"
I did the same thing for the getpeerip variable.
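For reference, the remote command embedded in getpeerip runs the same bond0 check, so the corresponding change there is to silence its stdout as well. Sketched here as the plain shell command that rsh ends up running on the peer, not the escaped C string literal:

/sbin/ip link show bond0 >/dev/null 2>/dev/null && inf=bond0 || inf=eth0
/sbin/ip -6 addr show $inf scope link | /bin/awk '/inet6/ { print $2 }'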

Related

Bash script does nothing when I run it, seems to keep waiting

I've written my first script, one in which I want to know if 2 files have the same values in a specific column.
Both files are WEKA machine-learning prediction outputs for different algorithms, hence they have to be in the same format, but the prediction column would be different.
Here's the code I've written based on the tutorial presented in https://linuxconfig.org/bash-scripting-tutorial-for-beginners:
#!/bin/bash
lineasdel1=$(wc -l $1 | awk '{print $1}')
lineasdel2=$(wc -l $2 | awk '{print $1}')
if [ "$lineasdel1" != "$lineasdel2" ]; then
echo "Files $1 and $2 have different number of lines, unable to perform"
exit 1
fi
function quitalineasraras {
awk '$1!="==="&&NF>0'
}
function acomodo {
awk '{gsub(/^ +| +$/, ""); gsub(/ +0/, " W 0"); gsub(/ +1$/, " W 1"); gsub(/ +/, "\t") gsub(/\+\tW/, "+"); print}'
}
function procesodel1 {
quitalineasraras "$1" | acomodo
}
function procesodel2 {
quitalineasraras "$2" | acomodo
}
el1procesado=$(procesodel1)
el2procesado=$(procesodel2)
function pegar {
paste <(echo "$el1procesado") <(echo "$el2procesado")
}
function contarintersec {
awk 'BEGIN {FS="\t"} $3==$8 {n++} END {print n}'
}
unido=$(pegar)
interseccion=$(contarintersec $unido)
echo "Estos 2 archivos tienen $interseccion coincidencias."
I ran the code of each function individually in the terminal and verified that it works (I'm using Linux Mint 19.2). The script's permissions have also been changed to make it executable. The paste command is also supposed to work with that variable syntax.
But when I run it via:
./script.sh file1 file2
if both files have the same number of lines and I press enter, no output is produced; instead, the terminal opens an empty line with the cursor waiting for something. In order to type another command, I have to press CTRL+C.
If the two files have a different number of lines, the error message prints successfully, so I think the problem has something to do with the functions, with awk needing different syntax for some tasks, or with capturing the output of the functions into variables.
I know that I'm missing something, but can't come up with what could be.
Any help will be appreciated.
function quitalineasraras {
    awk '$1!="==="&&NF>0'
}
function procesodel1 {
    quitalineasraras "$1" | acomodo
}
el1procesado=$(procesodel1)
The positional parameters like $1 are set separately for each function. The "$1" inside procesodel1 expands to the empty string, so quitalineasraras is passed one empty argument "".
The awk inside quitalineasraras is given only the script, without a filename, so it reads from standard input, i.e. it waits for input on stdin.
That awk, running without any file arguments, is what makes your script seem to wait.
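A minimal sketch of one way to fix it is to hand the filename through explicitly, both from the script into the function and from the function into awk:

function quitalineasraras {
    # use the function's first argument as the file for awk instead of reading stdin
    awk '$1!="==="&&NF>0' "$1"
}
function procesodel1 {
    quitalineasraras "$1" | acomodo
}
# forward the script's first argument into the function
el1procesado=$(procesodel1 "$1")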

How to execute AWK in TCL list with variables?

I need to run a few terminal commands from a Tcl script. This Tcl script works fine:
set cmd_list [list {mkdir testdir} {cd testdir} ]
set dataportInterface "eth0"
lappend cmd_list [list ifconfig -a | grep -m 1 $dataportInterface]
However, as soon as I add the awk command, it starts failing.
Error is "Failed : extra characters after close-brace".
lappend cmd_list [list ifconfig -a | grep -m 1 $dataportInterface | awk 'BEGIN {FS="[:]"} { print $1 } END {}' | xargs -t ethtool -S]
I have tried several possibilities, but none of them work. For example, if I try
lappend cmd_list { }
then the value of $dataportInterface (eth0) is not substituted.
I tried putting a \ in front of all the special characters (: " ' [ {), and that threw the error "$1 not recognized". I also put the awk command in {{ }} brackets, and that didn't work either (error: doesn't recognize {{awk).
What is the correct way to go about this, and why?
Single quotes have no significance to Tcl - try changing '...' to {...}
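For example, something like this (a sketch only: the braces keep the awk program literal, while $dataportInterface, sitting outside the braces, is still substituted):

lappend cmd_list [list ifconfig -a | grep -m 1 $dataportInterface | awk {BEGIN {FS="[:]"} { print $1 } END {}} | xargs -t ethtool -S]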

Mounted volumes & bash in OSX

I'm working on a disk space monitor script in OSX and am struggling to first generate a list of volumes. I need this list to be generated dynamically as it changes over time; having this work properly would also make the script portable.
I'm using the following script snippet:
#!/bin/bash
PATH=/bin:/usr/bin:/sbin:/usr/sbin export PATH
FS=$(df -l | grep -v Mounted| awk ' { print $6 } ')
while IFS= read -r line
do
echo $line
done < "$FS"
Which generates:
test.sh: line 9: /
/Volumes/One-TB
/Volumes/pfile-archive-offsite-three-CLONE
/Volumes/ERDF-Files-Offsite-Backup
/Volumes/ESXF-Files-Offsite-Backup
/Volumes/ACON-Files-Offsite-Backup
/Volumes/LRDF-Files-Offsite-Backup
/Volumes/EPLK-Files-Offsite-Backup: No such file or directory
I need the script to generate output like this:
/
/Volumes/One-TB
/Volumes/pfile-archive-offsite-three-CLONE
/Volumes/ERDF-Files-Offsite-Backup
/Volumes/ESXF-Files-Offsite-Backup
/Volumes/ACON-Files-Offsite-Backup
/Volumes/LRDF-Files-Offsite-Backup
/Volumes/EPLK-Files-Offsite-Backup
Ideas, suggestions? Alternate or better methods of generating a list of mounted volumes are also welcome.
Thanks!
Dan
< is for reading from a file. You are not reading from a file but from a bash variable, so try using <<< (a here-string) instead of < on the last line.
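That is, the read loop would become something like this (with $line also quoted to be safe with spaces):

while IFS= read -r line
do
    echo "$line"
done <<< "$FS"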
Alternatively, you don't need to store the results in a variable, then read from the variable; you can directly read from the output of the pipeline, like this (I have created a function for neatness):
get_data() {
df -l | grep -v Mounted| awk ' { print $6 } '
}
get_data | while IFS= read -r line
do
echo $line
done
Finally, the loop doesn't do anything useful, so you can just get rid of it:
df -l | grep -v Mounted| awk ' { print $6 } '

Persistent AWK Program

I have been tasked with writing a BASH script to filter log4j files and pipe them over netcat to another host. One of the requirements is that the script must keep track of what it has already sent to the server and not send it again due to licensing constraints on the receiving server (the product on the server is licensed on a data-per-day model).
To achieve the filtering I'm using AWK wrapped in a BASH script. The BASH component works fine; it's the AWK program that's giving me grief when I try to make it remember what has already been sent to the server. I am doing this by grabbing the time stamp of a line each time it matches my pattern. At the end of the program the last time stamp is written to a hidden file in the current working directory. On successive runs of the program AWK reads this file into a variable. Now, each time a line matches the pattern, its time stamp is also compared to the one in the variable: if it is newer it is printed, otherwise it is not.
Desired Output:
INFO 2012-11-07 09:57:12,479 [[artifactid].connector.http.mule.default.receiver.02] org.mule.api.processor.LoggerMessageProcessor: MsgID=5017f1ff-1dfa-48c7-a03c-ed3c29050d12 InteractionStatus=Accept InteractionDateTime=2012-08-07T16:57:33.379+12:00 Retailer=CTCT RequestType=RemoteReconnect
Hidden File:
2012-10-11 12:08:19,918
So that's the theory; now my issue.
The script works fine for contrived/trivial examples such as:
INFO 2012-11-07 09:57:12,479 [[artifactid].connector.http.mule.default.receiver.02] org.mule.api.processor.LoggerMessageProcessor: MsgID=5017f1ff-1dfa-48c7-a03c-ed3c29050d12 InteractionStatus=Accept InteractionDateTime=2012-08-07T16:57:33.379+12:00 Retailer=CTCT RequestType=RemoteReconnect
However, if I run it over a full-blown log file with stack traces etc. in it, then the indentation levels appear to wreak havoc on my program. The first run of the program produces the desired results: matching lines are printed and the latest time stamp is written to the hidden file. Running it again is when the problem crops up. The output of the program contains the indented lines from stack traces etc. (see the block below) and I can't figure out why. This then corrupts the hidden file, as the last matching line doesn't contain a time stamp, so some garbage is written to it, making any further runs pointless.
Undesired output:
at package.reverse.domain.SomeClass.someMethod(SomeClass.java:233)
at package.reverse.domain.processor.SomeClass.process(SomeClass.java:129)
at package.reverse.domain.processor.someClass.someMethod(SomeClassjava:233)
at package.reverse.domain.processor.SomeClass.process(SomeClass.java:129)
Hidden file after:
package.reverse.domain.process(SomeClass.java:129)
My awk program:
FNR == 1 {
    CMD = "basename " FILENAME
    CMD | getline FILE;
    FILE = "." FILE ".last";
    if (system("[ -f "FILE" ]") == 0) {
        getline FIRSTLINE < FILE;
        close(FILE);
        print FIRSTLINE;
    }
    else {
        FIRSTLINE = "1970-01-01 00:00:00,000";
    }
}
$0 ~ EXPRESSION {
    if (($2 " " $3) > FIRSTLINE) {
        print $0;
        LASTLINE=$2 " " $3;
    }
}
END {
    if (LASTLINE != "") {
        print LASTLINE > FILE;
    }
}
Any assistance with finding out why this is happening would be greatly appreciated.
UPDATE:
BASH Script:
#!/bin/bash
while getopts i:r:e:h:p: option
do
case "${option}"
in
i) INPUT=${OPTARG};;
r) RULES=${OPTARG};;
e) PATFILE=${OPTARG};;
h) HOST=${OPTARG};;
p) PORT=${OPTARG};;
?) printf "Usage: %s: -i <\"file1.log file2.log\"> -r <\"rules1.awk rules2.awk\"> -e <\"patterns.pat\"> -h <host> -p <port>\n" $0;
exit 1;
esac
done
#prepare expression with sed
EXPRESSION=`cat $PATFILE | sed ':a;N;$!ba;s/\n/|/g'`;
EXPRESSION="^(INFO|DEBUG|WARNING|ERROR|FATAL)[[:space:]]{2}[[:digit:]]{4}\\\\-[[:digit:]]{1,2}\\\\-[[:digit:]]{1,2}[[:space:]][[:digit:]]{1,2}:[[:digit:]]{2}:[[:digit:]]{2},[[:digit:]]{3}.*"$EXPRESSION".*";
#Make sure the temp file is empty
echo "" > .temp;
#input through awk.
for file in $INPUT
do
    awk -v EXPRESSION="$EXPRESSION" -f $RULES $file >> .temp;
done
#send contents of file to splunk indexer over udp
cat .temp;
#cat .temp | netcat -t $HOST $PORT;
#cleanup temporary files
if [ -f .temp ]
then
    rm .temp;
fi
Patterns File (The stuff I want to match):
Warning
Exception
Awk script as above.
Example.log
info 2012-09-04 16:00:11,638 [[adr-com-adaptor-stub].connector.http.mule.default.receiver.02] nz.co.amsco.interop.multidriveinterop: session not initialised
error 2012-09-04 16:00:11,639 [[adr-com-adaptor-stub].connector.http.mule.default.receiver.02] nz.co.amsco.adrcomadaptor.processor.comadaptorprocessor: nz.co.amsco.interop.exceptions.systemdownexception
nz.co.amsco.interop.exceptions.systemdownexception
at nz.co.amsco.adrcomadaptor.processor.comadaptorprocessor.getdeviceconfig(comadaptorprocessor.java:233)
at nz.co.amsco.adrcomadaptor.processor.comadaptorprocessor.process(comadaptorprocessor.java:129)
at org.mule.processor.chain.defaultmessageprocessorchain.doprocess(defaultmessageprocessorchain.java:99)
at org.mule.processor.chain.abstractmessageprocessorchain.process(abstractmessageprocessorchain.java:66)
at org.mule.processor.abstractinterceptingmessageprocessorbase.processnext(abstractinterceptingmessageprocessorbase.java:105)
at org.mule.processor.asyncinterceptingmessageprocessor.process(asyncinterceptingmessageprocessor.java:90)
at org.mule.processor.chain.defaultmessageprocessorchain.doprocess(defaultmessageprocessorchain.java:99)
at org.mule.processor.chain.abstractmessageprocessorchain.process(abstractmessageprocessorchain.java:66)
at org.mule.processor.AbstractInterceptingMessageProcessorBase.processNext(AbstractInterceptingMessageProcessorBase.java:105)
at org.mule.interceptor.AbstractEnvelopeInterceptor.process(AbstractEnvelopeInterceptor.java:55)
at org.mule.processor.AbstractInterceptingMessageProcessorBase.processNext(AbstractInterceptingMessageProcessorBase.java:105)
Usage:
./filter.sh -i "Example.log" -r "rules.awk" -e "patterns.pat" -h host -p port
Note that host and port are both unused in this version as the output is just thrown onto stdout.
So if I run this I get the following output:
info 2012-09-04 16:00:11,638 [[adr-com-adaptor-stub].connector.http.mule.default.receiver.02] nz.co.amsco.interop.multidriveinterop: session not initialised
error 2012-09-04 16:00:11,639 [[adr-com-adaptor-stub].connector.http.mule.default.receiver.02] nz.co.amsco.adrcomadaptor.processor.comadaptorprocessor: nz.co.amsco.interop.exceptions.systemdownexception
at nz.co.amsco.adrcomadaptor.processor.comadaptorprocessor.getdeviceconfig(comadaptorprocessor.java:233)
at nz.co.amsco.adrcomadaptor.processor.comadaptorprocessor.process(comadaptorprocessor.java:129)
If I run it again on the same, unchanged file I should get no output; however, I am seeing:
nz.co.amsco.adrcomadaptor.processor.comadaptorprocessor.process(comadaptorprocessor.java:129)
I have been unable to determine why this is happening.
You didn't provide any sample input that could reproduce your problem so let's start by just cleaning up your script and go from there. Change it to this:
BEGIN{
    expression = "^(INFO|DEBUG|WARNING|ERROR|FATAL)[[:space:]]{2}[[:digit:]]{4}-[[:digit:]]{1,2}-[[:digit:]]{1,2}[[:space:]][[:digit:]]{1,2}:[[:digit:]]{2}:[[:digit:]]{2},[[:digit:]]{3}.*Exception|Warning"
    # Do you really want "(Exception|Warning)" in brackets instead?
    # As written "Warning" on its own will match the whole expression.
}
FNR == 1 {
    tstampFile = "/" FILENAME ".last"
    sub(/.*\//,".",tstampFile)
    if ( (getline prevTstamp < tstampFile) > 0 ) {
        close(tstampFile)
        print prevTstamp
    }
    else {
        prevTstamp = "1970-01-01 00:00:00,000"
    }
    nextTstamp = ""
}
$0 ~ expression {
    currTstamp = $2 " " $3
    if (currTstamp > prevTstamp) {
        print
        nextTstamp = currTstamp
    }
}
END {
    if (nextTstamp != "") {
        print nextTstamp > tstampFile
    }
}
Now, do you still have a problem? If so, show us how you run the script, i.e. the bash command you are executing, and post some small sample input that reproduces your problem.

Converting the hash tag timestamps in a history file to a desired string

When I store the output of the history command via ssh in a file, I get something like this:
ssh -i private_key user@ip 'export HISTFILE=~/.bash_history; export HISTTIMEFORMAT="%D-%T "; set -o history; history' > myfile.txt
OUTPUT
#1337431451
command
As far as I've learnt, this hash string represents a timestamp. How do I change this to a string of my desired format?
P.S. Using history over ssh does not output timestamps. I've tried almost everything, so I guess the next best thing is to convert these # timestamps to a readable date-time format myself. How do I go about it?
You can combine rows with the paste command:
paste -sd '#\n' .bash_history
and convert the date with strftime in awk:
echo 1461136015 | awk '{print strftime("%d/%m/%y %T",$1)}'
As a result, a bash history with timestamps can be parsed by the following command:
paste -sd '#\n' .bash_history | awk -F"#" '{d=$2 ; $2="";print NR" "strftime("%d/%m/%y %T",d)" "$0}'
which converts:
#1461137765
echo lala
#1461137767
echo bebe
to
1 20/04/16 10:36:05 echo lala
2 20/04/16 10:36:07 echo bebe
You can also create a script like /usr/local/bin/fhistory with this content:
#!/bin/bash
paste -sd '#\n' $1 | awk -F"#" '{d=$2 ; $2="";print NR" "strftime("%d/%m/%y %T",d)" "$0}'
and quickly parse a bash history file with:
fhistory .bash_history
Interesting question: I have tried it but found no simple and clean solution to access the history in a non-interactive shell. However, the format of the history file is simple, and you can write a script to parse it. The following Python script might be interesting. Invoke it with ssh -i private_key user@ip 'path/to/script.py .bash_history':
#! /usr/bin/env python3
import re
import sys
import time

if __name__ == '__main__':
    pattern = re.compile(br'^#(\d+)$')
    out = sys.stdout.buffer
    for pathname in sys.argv[1:]:
        with open(pathname, 'rb') as f:
            for line in f:
                timestamp = 0
                while line.startswith(b'#'):
                    match = pattern.match(line)
                    if match: timestamp, = map(int, match.groups())
                    line = next(f)
                out.write(time.strftime('%F %T ', time.localtime(timestamp)).encode('ascii'))
                out.write(line)
Using just Awk and in a slightly more accurate way:
awk -F\# '/^#1[0-9]{9}$/ { if(cmd) printf "%5d %s %s\n",n,ts,cmd;
ts=strftime("%F %T",$2); cmd=""; n++ }
!/^#1[0-9]{9}$/ { if(cmd)cmd=cmd " " $0; else cmd=$0 }' .bash_history
This parses only lines starting with something that looks like a timestamp (/^#1[0-9]{9}$/), collects all subsequent lines up until the next timestamp, joins multi-line commands with a single space, and prints the commands in a format similar to history, including numbering.
Note that the numbering does not (necessarily) match history's if there are multi-line commands.
Without the numbering and breaking up multi-line commands with a newline:
awk -F\# '/^#1[0-9]{9}$/ { if(cmd) printf "%s %s\n",ts,cmd;
ts=strftime("%F %T",$2); cmd="" }
!/^#1[0-9]{9}$/ { if(cmd)cmd=cmd "\n" $0; else cmd=$0 }' .bash_history
Finally, a quick and dirty solution using GNU Awk (gawk) to also sort the list:
gawk -F\# -v histtimeformat="$HISTTIMEFORMAT" '
/^#1[0-9]{9}$/ { i=$2 FS NR; cmd[i]="" }
!/^#1[0-9]{9}$/ { if(cmd[i]) cmd[i]=cmd[i] "\n" $0; else cmd[i]=$0 }
END { PROCINFO["sorted_in"] = "#ind_str_asc"
for (i in cmd) { split(i,arr)
print strftime(histtimeformat,arr[1]) cmd[i]
}
}'
