How to read escape characters in shell script? - shell

How can I detect Ctrl+C or Ctrl+Z in a shell script?
Thanks in advance.
Added:
My requirement is this: I delete a file at the end of the script. If the script is stopped early (by Ctrl+C or Ctrl+Z), I still need that file to be deleted.

Every time a user presses Ctrl+C (or another special key combination), a signal is sent to your script.
You need to catch that signal in your script using the trap command.
It takes a while to explain fully, but this page contains a good explanation of managing signals: http://linuxcommand.org/wss0160.php

#!/bin/sh
trap 'echo Hi there' INT USR1 TERM    # run the echo whenever one of these signals arrives
while true; do sleep 1; done          # keep the script alive so you can test the trap
Read man kill for the list of signals you can put there; the description field in the SIGNALS section of that man page notes which signals can be caught (trapped) by your shell script.
Note: Ctrl + c is the INT (interrupt) signal
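Applied to your requirement, a minimal sketch (mydata.tmp is just a placeholder for whatever file your script creates):
#!/bin/sh
datafile=/tmp/mydata.tmp            # placeholder for the file your script creates

cleanup() {
    rm -f "$datafile"               # delete the file however we got here
    exit 1
}
trap cleanup INT TERM               # Ctrl+C sends INT; a plain kill sends TERM

: > "$datafile"                     # create the file
# ... do the real work ...
rm -f "$datafile"                   # normal cleanup at the end
Note that Ctrl+Z sends TSTP, which suspends the job rather than terminating it; you can trap TSTP as well, but usually only the terminating signals need handling.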

Related

Send signal with data from bash script

I want to send a signal to a process along with some data (an int) from a bash script. I know how to do that from a C program, but I haven't found a way to do it from a script, if one exists.
Note: a pipe isn't an option.

BASH: error in C program that runs in subshell breaks the main shell

I have a bash script that runs a list of small programs, mostly written in C and Python. Since the programs themselves are NOT bug free and might crash or run into an infinite loop, I run each of them in a subshell so that a failure won't break the main shell. Here is what it looks like:
#!/usr/bin/env bash
set -e
for py_p in "${py_program_list[@]}"; do
    (python "$py_p") || echo "terminated!"
done
for c_p in "${c_program_list[@]}"; do
    ("$c_p") || echo "terminated!"
done
The problem is that in the Python loop the bash script isn't affected by any error in a Python program, which is what I expected. However, the bash script exits immediately if any C program exits with an error.
UPDATE:
I am using BASH 3.2 in OSX 10.9.5
UPDATE 2:
Updated the question to make it clearer, sorry for the confusion. The problem is with the C programs: the Python part confirms that an error in a subshell won't affect the main shell, but the C programs break that rule.
The Python scripts are fine: whether I press Ctrl+C or they crash, they don't stop the main shell from running, which is what I expect. But the C programs behave differently; pressing Ctrl+C while a C program is running exits the bash script.
Python handles the interrupt signal itself (outputting Traceback …KeyboardInterrupt) and then terminates normally, returning the exit status 1 to bash.
Your C programs evidently don't handle the signal, so the default action is taken, to terminate the process; bash is informed that the program was terminated by signal SIGINT.
Now bash behaves differently depending on the kind of the child program's termination (normal or signaled): In the first case, it continues execution with || echo "terminated!", in the second case, it terminates itself, as you observed.
You can change that behavior by trapping the signal in your script, e. g. by inserting
trap "echo interrupted" INT
somewhere before the for c_p loop.
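For example, a sketch based on the loop from the question (c_program_list is assumed to be defined earlier in the script):
trap "echo interrupted" INT     # keep bash running when a child is killed by SIGINT

for c_p in "${c_program_list[@]}"; do
    ("$c_p") || echo "terminated!"
done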
Everything depends on the Python programs' exit status. Maybe they return the same value regardless of whether their execution was successful or not. So... basically, you cannot rely on their exit status.

Ctrl C does not kill foreground process in Unix

I have the following code, written in a script named test.csh, that starts a GUI-based application in the foreground on Solaris Unix. When I run the script and want to kill the GUI process using Ctrl+C, the process is not terminated. If I open the GUI application directly from the terminal, I am able to kill it with Ctrl+C. Can someone help me understand why I am not able to kill the process when it is invoked from a script?
#! /usr/bin/csh
# some script to set env variables
# GUI Process
cast
Then I execute the script using the following command. I am not able to terminate the vcast process with Ctrl+C.
source test.csh
If it is being launched into its own thread then the hangup request may not get to the application. You could add a signal handler to cascade the hangup request or look at the process table to see what the process id is for the app and then kill it. This could also be scripted very easily.
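For instance, roughly (assuming the GUI process shows up as cast in the process table):
ps -ef | grep '[c]ast'      # find the PID of the GUI process
kill <pid>                  # replace <pid> with the number from the previous output
pkill cast                  # or, where available, kill it by name in one step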
It is better to execute the script directly instead of sourcing it:
1) first add #!/bin/csh at the beginning of your script,
2) set it as executable:
$ chmod u+x test.csh
3) execute it directly:
$ ./test.csh
You should then be able to kill it. Anyway, consider that the problem may be related to some executable code that you are running within your script. Try to debug the script by copy-pasting it line by line into a terminal until you reach the point where it hangs.
Another possible annoying issue is an infinite while loop. Check for this kind of error too; maybe you have a while loop that never reaches its break condition.
Regards

how to implement event handling in shell script?

In my shell script, I delete a file at the end of the script, and I need it to be deleted even if the script is stopped by Ctrl+C or Ctrl+Z. Is there any way to detect that and delete the file?
Thanks in advance
Like @pgl said, trap is what you want. The syntax is:
trap <actionhere> <event> [event...]
The action is one and only one argument, but it can run several commands. The event is either EXIT (which fires when the script exits, whether you call exit explicitly or it simply reaches the end) or a signal given by its "short" name, i.e. without the SIG prefix (for instance, INT for SIGINT).
Example:
trap "rm -f myfile" INT exit
You can change the trap at any point in the script, and of course you can use variable interpolation in the action.
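Putting this together for the original question, a minimal sketch in bash (myfile is just a placeholder name; 130 is the conventional exit code for death by SIGINT):
#!/bin/bash
tmpfile=myfile                      # placeholder for the file your script creates

trap 'rm -f "$tmpfile"' EXIT        # runs whenever the script exits, for any reason
trap 'exit 130' INT TERM            # turn Ctrl+C / kill into a normal exit so the
                                    # EXIT trap above still fires

touch "$tmpfile"
# ... do the real work ...
With this pattern there is no need for a separate rm at the end of the script; the EXIT trap takes care of it.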
You can catch Ctrl+C with the trap builtin. Try this to get started:
help trap

What does $$ mean in the shell?

I once read that one way to obtain a unique filename in a shell for temp files was to use a double dollar sign ($$). This does produce a number that varies from time to time... but if you call it repeatedly, it returns the same number. (The solution is to just use the time.)
I am curious to know what $$ actually is, and why it would be suggested as a way to generate unique filenames.
$$ is the process ID (PID) in bash. Using $$ is a bad idea, because it will usually create a race condition, and allow your shell-script to be subverted by an attacker. See, for example, all these people who created insecure temporary files and had to issue security advisories.
Instead, use mktemp. The Linux man page for mktemp is excellent. Here's some example code from it:
tempfoo=`basename $0`
TMPFILE=`mktemp -t ${tempfoo}.XXXXXX` || exit 1
echo "program output" >> "$TMPFILE"
In Bash $$ is the process ID, as noted in the comments it is not safe to use as a temp filename for a variety of reasons.
For temporary file names, use the mktemp command.
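For instance, a minimal sketch (mktemp with no arguments creates a uniquely named file under /tmp, or $TMPDIR if it is set):
TMPFILE=$(mktemp) || exit 1         # e.g. /tmp/tmp.k3q9Zx2LQd, created exclusively
echo "program output" >> "$TMPFILE"
rm -f "$TMPFILE"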
$$ is the id of the current process.
Every process in a UNIX-like operating system has a (temporarily) unique identifier, the PID. No two processes running at the same time can have the same PID, and $$ refers to the PID of the bash instance running the script.
This is very much not a unique identifier in the sense that it will never be reused (indeed, PIDs are reused constantly). What it does give you is a number such that, if another person runs your script, they will get a different identifier whilst yours is still running. Once yours dies, the PID may be recycled and someone else might run your script, get the same PID, and so get the same filename.
As such, it is only really sane to say "$$ gives a filename such that if someone else runs the same script whilst my instance is still running, they will get a different name".
$$ is your PID. It doesn't really generate a unique filename, unless you are careful and no one else does it exactly the same way.
Typically you'd create something like /tmp/myprogramname$$
There're so many ways to break this, and if you're writing to locations other folks can write to it's not too difficult on many OSes to predict what PID you're going to have and screw around -- imagine you're running as root and I create /tmp/yourprogname13395 as a symlink pointing to /etc/passwd -- and you write into it.
This is a bad thing to be doing in a shell script. If you're going to use a temporary file for something, you ought to be using a better language which will at least let you add the "exclusive" flag for opening (creating) the file. Then you can be sure you're not clobbering something else.
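If you do stay in the shell, bash's noclobber option (set -C) gives a rough equivalent of that exclusive flag, since > then refuses to touch an existing file. This is a sketch, not a real fix; mktemp remains the better tool:
tmp=/tmp/myprogramname$$
if ( set -C; : > "$tmp" ) 2>/dev/null; then     # noclobber: > fails if $tmp already exists
    echo "program output" >> "$tmp"
    rm -f "$tmp"
else
    echo "could not create $tmp exclusively" >&2
    exit 1
fi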
$$ is the pid (process id) of the shell interpreter running your script. It's different for each process running on the system at the moment, but over time the pid wraps around, and after you exit another process will eventually get the same pid. As long as you're running, the pid is unique to you.
From the definition above it should be obvious that no matter how many times you use $$ in a script, it will return the same number.
You can use, e.g., /tmp/myscript.scratch.$$ as your temp file for things that need not be extremely reliable or secure. It's good practice to delete such temp files at the end of your script, using, for example, the trap command:
trap "echo 'Cleanup in progress'; rm -r $TMP_DIR" EXIT
$$ is the pid of the current shell process. It isn't a good way to generate unique filenames.
It's the process ID of the bash process. No concurrent processes will ever have the same PID.
The $$ is the process id of the shell in which your script is running. For more details, see the man page for sh or bash. The man pages can be found either by running "man sh" on the command line, or by searching the web for "shell manpage".
Let me second emk's answer -- don't use $$ by itself as a "unique" anything. For files, use mktemp. For other IDs within the same bash script, use "$$$(date +%s%N)" for a reasonably good chance of uniqueness.
-k
In Fish shell (3.1.2):
The $ symbol can also be used multiple times, as a kind of "dereference" operator (like the * in C or C++):
set bar bazz
set foo bar
echo $foo # bar
echo $$foo # same as echo $bar → bazz
Also, you can grab the login user's UID via this command, e.g. echo $(</proc/$$/loginuid). After that, you need to use the getent command to turn the UID into a username.
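Roughly, and assuming /proc/$$/loginuid is the file meant here (it holds the audit login UID and may read 4294967295 outside a real login session):
loginuid=$(cat /proc/$$/loginuid)            # numeric UID of the login user
getent passwd "$loginuid" | cut -d: -f1      # map the UID back to a username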
