I compiled the code with the -fPIC and -pie options, and with some checks I can see that the generated binary has no TEXTREL and is position independent.
But once I load the image I can see that it is terminating. I tried to get the exit status with the steps below in a script, and I get the PID and an exit status of 0:
./testASLR-image -n 2048 -m 100 -M 400 -c /config | tee -a /dev/console &
new_pid=$!
echo "pid new in while is $new_pid and ! value $!" | tee -a /dev/console $BOOTLOG_FILE
wait $!
echo "exit status is $?" | tee -a /dev/console $BOOTLOG_FILE
It seems to return 0. Also, it is not entering main(): no print from main appears on the console.
I added a signal handler and enabled core generation, but it is not generating a coredump.
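(One quick thing to rule out, assuming a typical Linux target: if the core-size limit is 0, no dump is written no matter what handlers are installed.)

ulimit -c unlimited                 # allow core files in this shell before launching
cat /proc/sys/kernel/core_pattern   # where the kernel will write them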
My module uses many kernel modules and libraries. Is there any dependency on a shared library that can cause such a loading failure? Is there any way to find out whether my binary uses any such library?
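A sketch of the usual checks, assuming binutils and the glibc tools are available on the target: ldd lists every shared library the dynamic loader needs, and any "not found" entry is exactly the kind of dependency problem that aborts loading before main() runs.

# Any "not found" line here means the loader gives up before main().
ldd ./testASLR-image

# Independent confirmation of the PIE/TEXTREL properties:
readelf -h ./testASLR-image | grep 'Type:'          # "DYN" for a PIE binary
readelf -d ./testASLR-image | grep -E 'TEXTREL|NEEDED'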
I have a script that ssh's to some servers. Sometimes an unexpected problem causes ssh to hang indefinitely. I want to avoid this by killing ssh if it runs too long.
I'm also using a wrapper function for output redirection. I need to force a tty with ssh's -t flag to make a process on the server happy.
function _redirect {
    if [ "$DEBUG" -eq 0 ]; then
        $* 1> /dev/null 2>&1
    else
        $*
    fi
    return $?
    exit
}
SSH_CMD="ssh -t -o BatchMode=yes -l robot"
SERVER="192.168.1.2"
ssh_script=$(cat <<EOF
sudo flock -w 60 -n /path/to/lock -c /path/to/some_golang_binary
EOF
)
_redirect timeout 1m $SSH_CMD $SERVER "($ssh_script)"
The result is a timeout with this message printed:
tcsetattr: Interrupted system call
The expected result is either the output of the remote shell command, or a timeout and proper exit code.
When I type

timeout 1m ssh -t -o BatchMode=yes -l robot 192.168.1.2 \
"(sudo flock -w 60 -n /path/to/lock -c /path/to/some_golang_binary)" \
1> /dev/null
I get the expected result.
I suspect these two things:
1) The interaction between GNU timeout and ssh is causing the tcsetattr system call to take a very long time (or hang); then timeout sends a SIGTERM to interrupt it, and it prints that message. There is no other output because this call is one of the first things done. I wonder if timeout launches ssh in a child process that cannot have a terminal, then uses its main process to count time and kill its child.
I looked here for the reasons this call can fail.
2) _redirect needs a different one of $@, $*, "$@", "$*", etc. Some bad escaping/param munging breaks the arguments to timeout, which causes this tcsetattr error. Trying various combinations of this has not yet solved the problem.
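For reference, the conventional way to forward arguments untouched is "$@"; a sketch of the wrapper with that change (it did not turn out to be the fix here, but it removes word splitting from the experiment):

function _redirect {
    if [ "$DEBUG" -eq 0 ]; then
        "$@" 1> /dev/null 2>&1
    else
        "$@"
    fi
}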
What fixed this was the --foreground flag to timeout.
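The working call then becomes (same wrapper and variables as above):

_redirect timeout --foreground 1m $SSH_CMD $SERVER "($ssh_script)"

Per the GNU coreutils documentation, --foreground lets the command run in timeout's own foreground process group so it can read from and signal the TTY; without it, ssh -t cannot configure the terminal, which is consistent with the tcsetattr failure.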
I'd like to open an Xcode workspace from the terminal, wait some time, then close that workspace (Xcode does some hidden magic on projects that makes this necessary in an automated build process).
So something like:
pid=`open proj.xcworkspace`
sleep 30
kill $pid
Because multiple Xcode projects may be running at the same time, I can't simply kill Xcode; I need to kill just the process I started.
How can I get the PID of an application I open in terminal?
You can get the current shell's process ID with $$.
Or you can search for process IDs by name with ps -C PROGRAM_NAME -o pid=.
suleiman@antec:~$ ps -C icedove -o pid=
887
suleiman@antec:~$ ps -C vlc -o pid=
29405
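As an aside, if you launch the program yourself, $! gives you the PID of the most recent background job directly, with no ps lookup (a minimal sketch; note also that on macOS, where ps has no -C option, pgrep -x NAME is the closer equivalent):

sleep 100 &     # stand-in for the real program
pid=$!          # PID of the job we just started
echo "started $pid"
kill "$pid"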
Addition...
Here is a working example of what I mean ...
#!/bin/bash
i=0
while [ "$i" -le 10 ]
do
    ./example.sh &
    # Look up the PID by name; keep only the last match in case
    # several instances are running.
    Pid[$i]=$(ps -C "example.sh" -o "pid=")
    Pid[$i]=$(echo "${Pid[$i]}" | tail -n 1)
    #echo "${Pid[$i]}"
    i=$((i + 1))
done

for PID in "${Pid[@]}"
do
    kill "$PID"
done
exit 0
Specifically, I'm writing a script to make it easier to compile and run my C++ code. It's easy for it to tell if the compilation succeeded or failed, but I also want to add a state where it "compiled with warnings".
out="" # to avoid an "ambiguous redirect"
g++ -Wall -Wextra $1 2> out
if [ $? == 0 ]
then
    # this is supposed to test the length of the output string
    # unless there are errors, $out should be length 0
    if [ ${#out} == 0 ]
    then
        # print "Successful"
    else
        # print "Completed with Warnings"
    fi
else
    # print "Failed"
fi
As it is, the failure-case check works fine, but $out is always an empty string. Although stderr is no longer displayed on the screen, $out is never actually set. If possible, I would also like stderr to still go to the screen.
I hope what I've said makes sense. Cheers.
g++ -Wall -Wextra $1 2> out
This redirects stderr to a file named out, not a variable named $out.
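If a temporary file is acceptable, the simplest fix along those lines is to read the file back into a variable afterwards (a sketch; out is the file name from the question):

g++ -Wall -Wextra "$1" 2> out
status=$?               # the compiler's exit code, for the "Failed" branch
warnings=$(cat out)     # now a real variable; ${#warnings} works as intended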
If you want to run gcc and see stdout and stderr on screen as well as save stderr's output, you could use a named pipe (FIFO). It's a bit roundabout, but it'd get the job done.
mkfifo stderr.fifo
gcc -Wall -o /dev/null /tmp/warn.c 2> stderr.fifo &
tee stderr.log < stderr.fifo >&2
rm -f stderr.fifo
wait
After running these commands, the warnings will be available in stderr.log. Taking advantage of the fact that wait will return gcc's exit code, you could then do something like:
if wait; then
    if [[ -s stderr.log ]]; then
        # print "Completed with Warnings"
    else
        # print "Successful"
    fi
else
    # print "Failed"
fi
Annotated:
# Create a named pipe. If one process writes to the pipe, another process can
# read from it to see what was written.
mkfifo stderr.fifo
# Run gcc and redirect its stderr to the pipe. Do it in the background so we can
# read from the pipe in the foreground.
gcc -Wall -o /dev/null /tmp/warn.c 2> stderr.fifo &
# Read from the pipe and write its contents both to the screen (stdout) and to
# the named file (stderr.log).
tee stderr.log < stderr.fifo >&2
# Clean up.
rm -f stderr.fifo
# Wait for gcc to finish and retrieve its exit code. `$?` will be gcc's exit code.
wait
To capture in a variable and display on the screen, use tee:
out=$( g++ -Wall -Wextra "$1" 2>&1 >/dev/null | tee /dev/stderr )
This throws away the standard output of g++ and redirects its standard error to standard output. That output is piped to tee, which writes it both to the named file (/dev/stderr, so the messages go back to the original standard error) and to its own standard output, which is captured in the variable out.
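Tying this back to the success/warnings/failure logic, one way to use that capture might be (bash; pipefail is needed so a g++ failure survives the pipe through tee):

set -o pipefail
out=$( g++ -Wall -Wextra "$1" 2>&1 >/dev/null | tee /dev/stderr )
if [ $? -ne 0 ]; then
    echo "Failed"
elif [ -n "$out" ]; then
    echo "Completed with Warnings"
else
    echo "Successful"
fi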
Is there some similar option in dash shell corresponding to pipefail in bash?
Or is there any other way of getting a non-zero status if one of the commands in the pipe fails (but without exiting on it, which set -e would do)?
To make it clearer, here is an example of what I want to achieve:
In a sample debugging makefile, my rule looks like this:
set -o pipefail; gcc -Wall $$f.c -o $$f 2>&1 | tee err; if [ $$? -ne 0 ]; then vim -o $$f.c err; else ./$$f; fi;
Basically, on error it opens the error file and the source file, and when there is no error it runs the program. Saves me some typing. The snippet above works well in bash, but my newer Ubuntu system uses dash, which doesn't seem to support the pipefail option.
I basically want a FAILURE status if the first part of the below group of commands fails:
gcc -Wall $$f.c -o $$f 2>&1 | tee err
so that I can use that for the if statement.
Are there any alternate ways of achieving it?
Thanks!
I ran into this same issue, and the bash options set -o pipefail and ${PIPESTATUS[0]} both failed in the dash shell (/bin/sh) on the Docker image I'm using. I'd rather not modify the image or install another package, but the good news is that using a named pipe worked perfectly for me =)
mkfifo named_pipe
tee err < named_pipe &
gcc -Wall $$f.c -o $$f > named_pipe 2>&1
echo $?
See this answer for where I found the info: https://stackoverflow.com/a/1221844/431296
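A slightly fuller sketch of the same idea with cleanup added (plain shell, so $f rather than the makefile's $$f; the wait makes sure the background tee has finished writing err before anything reads it):

mkfifo named_pipe
tee err < named_pipe &               # reader runs in the background
gcc -Wall "$f.c" -o "$f" > named_pipe 2>&1
status=$?                            # gcc's own status, not tee's
wait                                 # let tee drain the pipe into err
rm -f named_pipe
if [ "$status" -ne 0 ]; then vim -o "$f.c" err; else ./"$f"; fi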
The Q.'s sample problem requires:
I basically want a FAILURE status if the first part of the ... group of commands fail:
Install moreutils, and try the mispipe util, which returns the exit status of the first command in a pipe:
sudo apt install moreutils
Then:
if mispipe "gcc -Wall $$f.c -o $$f 2>&1" "tee err" ; then \
    ./$$f; \
else \
    vim -o $$f.c err; \
fi
While 'mispipe' does the job here, it is not an exact duplicate of the bash shell's pipefail; from man mispipe:
Note that some shells, notably bash, do offer a pipefail option, however, that option does not behave the same since it makes a failure of any command in the pipeline be returned, not just the exit status of the first.
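A quick way to see the difference (false always fails, true always succeeds):

mispipe false true; echo $?   # 1: mispipe reports the first command's status
false | true; echo $?         # 0: a plain pipe reports the last command's status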
I have a custom script in Xcode which returns an error, but suppose I don't care. Xcode doesn't care about /dev/null and won't compile:
sdef "$INPUT_FILE_PATH" | sdp -fh -o "$DERIVED_FILES_DIR"
--basename "$INPUT_FILE_BASE"
--bundleid `defaults read "$INPUT_FILE_PATH/Contents/Info" CFBundleIdentifier`
It's basically for generating a .h file based on Apple Script Definitions, and it went all fine up until a recent OS X update.
In the terminal, all I have to do is end this command with
2>/dev/null
and no error is returned. Whatever I try with 2>, or just >, or even &>, doesn't work in Xcode; it always returns an error:
/bin/sh -c "sdef \"$INPUT_FILE_PATH\" | sdp -fh -o \"$DERIVED_FILES_DIR\"
--basename \"$INPUT_FILE_BASE\" --bundleid `defaults read
\"$INPUT_FILE_PATH/Contents/Info\" CFBundleIdentifier` 2> /dev/null"
Command /bin/sh failed with exit code 1
Appending 2>/dev/null does not prevent the error status being returned by the sdef command, it just hides the error message.
Replace it with
|| echo "Failed"
If sdef fails, the second part of the command is executed instead, and since the echo succeeds, the build phase no longer reports a bad status.
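Applied to the script above, that might look like this (same variables as before):

sdef "$INPUT_FILE_PATH" | sdp -fh -o "$DERIVED_FILES_DIR" \
    --basename "$INPUT_FILE_BASE" \
    --bundleid `defaults read "$INPUT_FILE_PATH/Contents/Info" CFBundleIdentifier` \
    || echo "Failed"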