I am trying to log in to a remote server (Box1) and read a file there. That file contains the details of another server (Box2); based on those details I have to come back to the local server and ssh to the other server (Box2) for some data crunching, and so on.
ssh box1.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node1= `cat /home/rakesh/tomar.log`
fi
EOF
ssh box2.com << EOF
if [[ ! -f /home/rakesh/tomar.log ]]
then
echo "LOG file not found"
else
echo " LOG file present"
export server_node2= `cat /home/rakesh/tomar.log`
fi
EOF
But I am not getting the values of "server_node1" and "server_node2" on the local machine.
Any help would be appreciated.
Just like bash -c 'export foo=bar' cannot declare a variable in the calling shell where you typed this, an ssh command cannot declare a variable in the calling shell. You will have to refactor so that the calling shell receives the information and knows what to do with it.
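A quick way to see that for yourself, in a shell where foo is not already set:
bash -c 'export foo=bar'    # foo is created only inside the short-lived child shell
echo "foo is: $foo"         # prints "foo is: " -- the parent never sees the variable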
I agree with the comment that storing a log file in a variable is probably not a sane, or at least elegant, thing to do, but the easy way to do what you are attempting is to put the ssh inside the assignment.
server_node1=$(ssh box1.com cat tomar.log)
server_node2=$(ssh box2.com cat tomar.log)
A few notes and amplifications:
The remote shell will run in your home directory, so I took it out (on the assumption that /home/rt9419 is your home directory, obviously).
In case of an error in the cat command, the exit code of ssh will be the error code from cat, and the error message on standard error will be visible on your standard error, so the echo seemed quite superfluous. (If you want a custom message, variable=$(ssh whatever) || echo "Custom message" >&2 would do that. Note the redirection to standard error; it doesn't seem to matter here, but it's good form.)
If you really wanted to, you could run an arbitrarily complex command in the ssh; as outlined above, it didn't seem necessary here, but you could do assignment=$(ssh remote 'if [[ things ]]; then for variable in $(complex commands to drive a loop); do : etc etc; done; fi; more </dev/null; exit "$variable"') or whatever.
As further comments on your original attempt,
The backticks in the here document in your attempt would be evaluated by your local shell before the ssh command even ran. There are separate questions about how to fix that; see e.g. How to have both local and remote variables inside an SSH command. In short, unless you absolutely require the local shell to be able to modify the commands you send, put them in single quotes, like I did in the silly complex ssh example above.
The function of export is to make variables visible to child processes. There is no way to affect the environment of a parent process (short of having it cooperate and/or coordinate the change, as in the code above). As an example to illustrate the difference, if you set PERL5LIB to a directory with Perl libraries, but fail to export it, the Perl process you start will not see the variable; it is only visible to the current shell. When you export it, any Perl process you start as a child of this shell will also see this variable and the value you assigned. In other words, you export variables which are not private to the current shell (and don't export private ones; aside from making sure they are private, this saves the amount of memory which needs to be copied between processes), but that still only makes them visible to children, by the design of the U*x process architecture.
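To make that distinction concrete, here is a minimal sketch using a child bash instead of Perl (the directory /opt/perl/lib is just a placeholder):
PERL5LIB=/opt/perl/lib                      # assigned, but not exported: private to this shell
bash -c 'echo "child sees: $PERL5LIB"'      # the child prints an empty value
export PERL5LIB                             # now marked for export
bash -c 'echo "child sees: $PERL5LIB"'      # the child prints /opt/perl/lib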
You should get the file back from box1 and box2 with scp:
scp box1.com:/home/rt9419/tomar.log ~/tomar1.log
#then you can cat!
export server_node1=`cat ~/tomar1.log`
The same with box2:
scp box2.com:/home/rt9419/tomar.log ~/tomar2.log
#then you can cat!
export server_node2=`cat ~/tomar2.log`
There are several possibilities. In your case, you could on the remote system create a file (in bash syntax), containing the assignments of these variables, for example
echo "export server_node2='$(</home/rt9419/tomar.log)'" >>export_settings
(which makes me wonder why you want the whole content of your logfile to be stored in a variable, but this is another question), then transfer this file to your host (for example with scp) and source it from within your bash script.
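A rough sketch of the remaining steps, assuming the echo above has already been run on the remote system and wrote export_settings into your remote home directory (the /tmp path is an arbitrary choice):
scp box2.com:export_settings /tmp/export_settings   # fetch the generated file
source /tmp/export_settings                         # defines server_node2 in this shell
echo "$server_node2"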
Related
I have a bash script that needs to connect to another server for parts of its execution. I have tried many of the standard instructions and syntaxes for executing ssh commands, but with little progress.
On the remote server, I need to source a shell script that contains several env parameters for some software. One of these parameters is then used in a filepath to point to an executable, which has an option '-lprojects' that can list the projects for the software on that server.
I have verified multiple times that running the commands on the server itself works. My issue is when I try to run the same commands over SSH. If I use the approach where I use the env variable for the filepath, it shows that the variable is null in the filepath, giving a file/directory not found error. If I hard-code the filepath to point to the executable, it gives me an error saying that the shell script is not sourced (which I assume it needs for other functions and APIs for the executable to reveal its -lprojects option).
Here is roughly what the code looks like:
ssh remote.server 'source /filepath/remotescript.sh'
filelist=$(ssh remote.server $REMOTEVARIABLE'/bin/executable -lprojects')
echo ${filelist[@]}
for file in $filelist
do
echo $file
ssh SERVER2 awk 'something' /filepath/"$file"/somefile.txt | sed 'something' >> filepath/values.csv;
done
As you can see, I then also need to loop through the contents of the -lprojects output in the remote.server, do some awk and sed on the files to extract the wanted text (this works), but then I need to write that back to the client (local server) values.csv file. This is more generic, as there will be several servers I have to do this for, but all of them have to write to the same .csv file. For simplicity, you can just regard this as a one remote server case, since it is vital I get it working for at least one now in the beginning.
Note that I also tried something like:
ssh remote.server << EOF
'source /filepath/remotescript.sh'
filelist=$(ssh remote.server $REMOTEVARIABLE'/bin/executable -lprojects')
EOF
But with similar results. I also tried placing the single quotes in the filelist assignment both before and after the remote variable, etc.
How do I go about properly doing this?
To access the environment variable, you must source the script that defines the environment within the same SSH call as the one where you are using it, otherwise, you're running your commands in two different shells which are unrelated:
filelist=$(ssh remote.server 'source /filepath/remotescript.sh; $REMOTEVARIABLE/bin/executable -lprojects')
Assuming executable outputs one file name per line, you can use readarray to achieve that effect:
readarray -t filelist < <(ssh remote.server '
source /filepath/remotescript.sh
$REMOTEVARIABLE/bin/executable -lprojects
'
)
echo "${filelist[@]}"
for file in "${filelist[@]}"
do
echo "$file"
ssh SERVER2 awk 'something' /filepath/"$file"/somefile.txt | sed 'something' >> filepath/values.csv;
done
I have a script doing something like this:
var1=""
ssh xxx@yyy<<'EOF'
[...]
var2=`result of bash command`
echo $var2 #print what I need
var1=$var2 #is there a way to pass var2 into global var1 variable ?
EOF
echo $var1 # the need is to display the value of var2 created in EOF block
Is there a way to do this?
In general, an executed command has three paths of delivering information:
By stating an exit code.
By making output.
By creating files.
It is not possible to change an (environment) variable of the parent process. This is true for all child processes, and your ssh process is no exception.
I would not rely on ssh to pass the exit code of the remote process, though (because even if it works in current implementations, this is brittle; ssh could also want to state its own success or failure with its exit code, not the remote process's).
Using files also seems inappropriate because the remote process will probably have a different file system (but if the remote and the local machine share an NFS for instance, this could be an option).
So I suggest using the output of the remote process for delivering information. You could achieve this like this:
var1=$(ssh xxx@yyy<<'EOF'
[...]
var2=$(result of bash command)
echo "$var2" 1>&2 # to stderr, so it's not part of the captured output
# and instead shown on the terminal
echo "$var2" # to stdout, so it's part of the captured output
EOF
)
echo "$var1"
I have a shell script that usually runs for nearly 10 minutes per run. I need to know: if another request to run the script comes in while an instance of the script is already running, does the new request have to wait for the existing instance to complete, or will a new instance be started?
I need a new instance to be started whenever a request arrives for the same script.
How do I do that?
The shell script is a polling script which looks for a file in a directory and executes that file. The execution of the file takes nearly 10 minutes or more. But if a new file arrives during execution, it also has to be executed simultaneously.
The shell script is below; how do I modify it to execute multiple requests?
#!/bin/bash
while [ 1 ]; do
newfiles=`find /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -newer /afs/rch/usr$
touch /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/.my_marker
if [ -n "$newfiles" ]; then
echo "found files $newfiles"
name2=`ls /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -Art |tail -n 2 |head $
echo " $name2 "
mkdir -p -m 0755 /afs/rch/usr8/fsptools/WWW/dumpspace/$name2
name1="/afs/rch/usr8/fsptools/WWW/dumpspace/fipsdumputils/fipsdumputil -e -$
$name1
touch /afs/rch/usr8/fsptools/WWW/dumpspace/tempfiles/$name2
fi
sleep 5
done
When writing scripts like the one you describe, I take one of two approaches.
First, you can use a pid file to indicate that a second copy should not run. For example:
#!/bin/sh
pidfile=/var/run/${0##*/}.pid
# remove pid if we exit normally or are terminated
trap "rm -f $pidfile" 0 1 3 15
# Write the pid as a symlink
if ! ln -s "pid=$$" "$pidfile"; then
echo "Already running. Exiting." >&2
exit 0
fi
# Do your stuff
I like using symlinks to store pid because writing a symlink is an atomic operation; two processes can't conflict with each other. You don't even need to check for the existence of the pid symlink, because a failure of ln clearly indicates that a pid cannot be set. That's either a permission or path problem, or it's due to the symlink already being there.
Second option is to make it possible .. nay, preferable .. not to block additional instances, and instead configure whatever it is that this script does to permit multiple servers to run at the same time on different queue entries. "Single-queue-single-server" is never as good as "single-queue-multi-server". Since you haven't included code in your question, I have no way to know whether this approach would be useful for you, but here's some explanatory meta bash:
#!/usr/bin/env bash
workdir=/var/tmp # Set a better $workdir than this.
a=( $(get_list_of_queue_ids) ) # A command? A function? Up to you.
for qid in "${a[#]}"; do
# Set a "lock" for this item .. or don't, and move on.
if ! ln -s "pid=$$" $workdir/$qid.working; then
continue
fi
# Do your stuff with just this $qid.
...
# And finally, clean up after ourselves
remove_qid_from_queue $qid
rm $workdir/$qid.working
done
The effect of this is to transfer the idea of "one at a time" from the handler to the data. If you have a multi-CPU system, you probably have enough capacity to handle multiple queue entries at the same time.
ghoti's answer shows some helpful techniques, if modifying the script is an option.
Generally speaking, for an existing script:
Unless you know with certainty that:
the script has no side effects other than to output to the terminal or to write to files with shell-instance-specific names (such as incorporating $$, the current shell's PID, into filenames; see the sketch below) or some other instance-specific location,
OR that the script was explicitly designed for parallel execution,
I would assume that you cannot safely run multiple copies of the script simultaneously.
It is not reasonable to expect the average shell script to be designed for concurrent use.
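As a sketch of what instance-specific filenames look like in practice (the myscript name and /tmp paths here are made up, not taken from the question's script):
#!/bin/bash
# each running instance works on its own file, keyed on the current shell's PID
workfile=/tmp/myscript.$$.out     # or, more robustly: workfile=$(mktemp /tmp/myscript.XXXXXX)
echo "intermediate results" > "$workfile"
# ... do the real work against "$workfile" ...
rm -f "$workfile"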
From the viewpoint of the operating system, several processes may of course execute the same program in parallel. No need to worry about this.
However, it is conceivable, that a (careless) programmer wrote the program in such a way that it produces incorrect results, when two copies are executed in parallel.
Question,
I want to have a bash script that will have a global variable that can be incremented from other bash scripts.
Example:
I have a script like the following:
#! /bin/bash
export Counter=0
for SCRIPT in /Users/<user>/Desktop/*sh
do
$SCRIPT
done
echo $Counter
That script will call all the other bash scripts in a folder and those scripts will have something like the following:
if [ "$Output" = "$Check" ]
then
echo "OK"
((Counter++))
I want it to then increment the $Counter variable if the output does equal "OK", and then pass that value back to the initial bash script so I can keep that counter number and have a total at the end.
Any idea on how to go about doing that?
Environment variables propagate in one direction only -- from parent to child. Thus, a child process cannot change the value of an environment variable set in their parent.
What you can do is use the filesystem:
export counter_file=$(mktemp "$HOME/.counter.XXXXXX")
for script in ~user/Desktop/*sh; do "$script"; done
...and, in the individual script:
counter_curr=$(< "$counter_file" )
(( ++counter_curr ))
printf '%s\n' "$counter_curr" >"$counter_file"
This isn't currently concurrency-safe, but your parent script as currently written will never call more than one child at a time.
An even easier approach, assuming that the value you're tracking remains relatively small, is to use the file's size as a proxy for the counter's value. To do this, incrementing the counter is as simple as this:
printf '\n' >>"$counter_file"
...and checking its value in O(1) time -- without needing to open the file and read its content -- is as simple as checking the file's size; with BSD/macOS stat (GNU stat would use stat -c %s instead):
counter=$(stat -f %z "$counter_file")
Note that locking may be required for this to be concurrency-safe if using a filesystem such as NFS which does not correctly implement O_APPEND; see Norman Gray's answer (to which this owes inspiration) for a working implementation.
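If locking does turn out to be needed, a flock-guarded increment might look roughly like this (the .lock suffix is an arbitrary choice; flock itself is Linux-specific, as noted further down):
(
  flock -x 9                        # block until we hold an exclusive lock on fd 9
  printf '\n' >>"$counter_file"     # the append-one-line increment from above
) 9>"$counter_file.lock"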
You could source the other scripts, which means they're not running in a sub-process but "inline" in the calling script like this:
#! /bin/bash
export counter=0
for script in /Users/<user>/Desktop/*sh
do
source "$script"
done
echo $counter
But as pointed out in the comments, I'd only advise using this approach if you control the called scripts yourself. If they, for example, exit or have variables clashing with each other, bad things could happen.
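For example, one of the called scripts could then look roughly like this ($Output and $Check are the variables from the question; the file is meant to be sourced by the loop above, not executed on its own):
# one of the scripts in /Users/<user>/Desktop, sourced by the calling script
if [ "$Output" = "$Check" ]
then
    echo "OK"
    ((counter++))    # runs in the caller's shell, so the caller's counter is updated
fi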
As described, you can't do this, since there isn't anything which corresponds to a ‘global variable’ for shell scripts.
As the comment suggests, you'll have to use the filesystem to communicate between scripts.
One simple/crude way of doing what you describe would be to simply have each cooperating script append a line to a file, and the ‘global count’ is the size of this file:
#! /bin/sh -
echo ping >>/tmp/scriptcountfile
then wc -l /tmp/scriptcountfile is the number of times that's happened. Of course, there's a potential race condition there, so something like the following would sequence those accesses:
#! /bin/sh -
(
flock -n 9
echo 'do stuff...'
echo ping >>/tmp/stampfile
) 9>/tmp/lockfile
(the flock command is available on Linux, but isn't portable).
Of course, then you can start to do fancier things by having scripts send stuff through pipes and sockets, but that's going somewhat over the top.
Recently I wrote a script which sets an environment variable, take a look:
#!/bin/bash
echo "Pass a path:"
read path
echo $path
defaultPath=/home/$(whoami)/Desktop
if [ -n "$path" ]; then
export my_var=$path
else
echo "Path is empty! Exporting default path ..."
export my_var=$defaultPath
fi
echo "Exported path: $my_var"
It works just great, but the problem is that my_var is available only locally, I mean in the console window where I ran the script.
How to write a script which allow me to export global environment variable which can be seen everywhere?
Just run your shell script preceded by "." (dot space).
This causes the script to run the instructions in the original shell. Thus the variables still exist after the script finishes.
Ex:
$ cat setmyvar.sh
export myvar=exists
$ . ./setmyvar.sh
$ echo $myvar
exists
Each and every shell has its own environment. There's no Universal environment that will magically appear in all console windows. An environment variable created in one shell cannot be accessed in another shell.
It's even more restrictive. If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell.
If all of your shells need access to the same set of variables, you can create a startup file that will set them for you. This is done in BASH via the $HOME/.bash_profile file (or through $HOME/.profile if $HOME/.bash_profile doesn't exist) or through $HOME/.bashrc. Other shells have their own set of startup files. One is used for logins, and one is used for shells spawned without logins (and, as with bash, a third for non-interactive shells). See the manpage to learn exactly which startup scripts are used and in what order they're executed.
You can try using shared memory, but I believe that only works while processes are running, so even if you figured out a way to set a piece of shared memory, it would go away as soon as that command is finished. (I've rarely used shared memory except for named pipes). Otherwise, there's really no way to set an environment variable in one shell and have another shell automatically pick it up. You can try using named pipes or writing that environment variable to a file for other shells to pick it up.
Imagine the problems that could happen if someone could change the environment of one shell without my knowledge.
Actually I found a way to achieve this (which in my case was to use a bash script to set a number of security credentials).
I just call bash from inside the script, and the spawned shell now has the exported values:
export API_USERNAME=abc
export API_PASSWORD=bbbb
bash
Now calling the file ~/.app-x-setup.sh will give me an interactive shell with those environment values set up.
The following was extracted from the 2nd paragraph of David W.'s answer: "If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell."
In case you need to let the parent shell access your new environment variables, just issue the following command in the parent shell:
source <your_subshell_script>
or using the shortcut
. <your_subshell_script>
You have to add the variable to your .profile, located at /home/$USER/.profile
You can do that with this command:
echo 'TEST="hi"' >> $HOME/.profile
Or by editing the file with emacs, for example.
If you want to set this variable for all users, you have to edit /etc/profile (as root).
There is no global environment, really, in UNIX.
Each process has an environment, originally inherited from the parent, but it is local to the process after the initial creation.
You can only modify your own, unless you go digging around in the process using a debugger.
Write it to a temporary file, let's say ~/.myglobalvar, and read it from anywhere:
echo "$myglobal" > ~/.myglobalvar
Environment variables are always "local" to process execution; the export command allows setting environment variables for subprocesses. You can look at .bashrc to set environment variables at the start of a bash shell. What you are trying to do is not possible, as a process cannot modify (or access?) the environment variables of another process.
You can update the ~/.bashrc or ~/.bash_profile file which is used to initialize the environment.
Take a look at the loading behavior of your shell (explained in the manpage, usually referring to .XXXshrc or .profile). Some configuration files are loaded at login time of an interactive shell, some are loaded each time you run a shell. Placing your variable in the latter might result in the behavior you want, e.g. always having the variable set using that distinct shell (for example bash).
If you need to dynamically set and reference environment variables in shell scripts, there is a workaround. Judge for yourself whether it is worth doing, but here it is.
The strategy involves having a 'set' script which dynamically writes a 'load' script, which has code to set and export an environment variable. The 'load' script is then executed periodically by other scripts which need to reference the variable. BTW, the same strategy could be done by writing and reading a file instead of a variable.
Here's a quick example...
Set_Load_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
echo "#!/bin/bash" > $PROCESSING_SIGNAL_SCRIPT
echo "export PROCESSING_SIGNAL=$1" >> $PROCESSING_SIGNAL_SCRIPT
chmod ug+rwx $PROCESSING_SIGNAL_SCRIPT
Load_PROCESSING_SIGNAL.sh (this gets dynamically created when the above is run)
#!/bin/bash
export PROCESSING_SIGNAL=1
You can test this with
Test_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
N=1
LIM=100
while [ $N -le $LIM ]
do
# DO WHATEVER LOOP PROCESSING IS NEEDED
echo "N = $N"
sleep 5
N=$(( $N + 1 ))
# CHECK PROCESSING_SIGNAL
source $PROCESSING_SIGNAL_SCRIPT
if [[ $PROCESSING_SIGNAL -eq 0 ]]; then
# Write log info indicating that the signal to stop processing was detected
# Write out all relevant info
# Send an alert email of this too
# Then exit
echo "Detected PROCESSING_SIGNAL for all stop. Exiting..."
exit 1
fi
done
A lazy script, kept in ~/.bin/SOURCED/, to save and load data as flat files for the system:
[ ! -d ~/.megadata ] && mkdir ~/.megadata
function save_data {
[ -z "$1" -o -z "$2" ] && echo 'save_data [:id:] [:data:]' && return
local overwrite=${3-false}
[ "$overwrite" = 'true' ] && echo "$2" > ~/.megadata/$1 && return
[ ! -f ~/.megadata/$1 ] && echo "$2" > ~/.megadata/$1 || echo ID TAKEN set third param to true to overwrite
}
save_data computer engine
cat ~/.megadata/computer
save_data computer engine
save_data computer megaengine true
function get_data {
[ -z "$1" -o -f $1 ] && echo 'get_data [:id:]' && return
[ -f ~/.megadata/$1 ] && cat ~/.megadata/$1 || echo ID NOT FOUND
:
}
get_data computer
get_data computer
Maybe a little off topic, but for when you really need to set a variable temporarily to execute some script and ended up here looking for answers:
If you need to run a script with certain environment variables that you don't need to keep after execution you could do something like this:
#!/usr/bin/env sh
export XDEBUG_SESSION=$(hostname);echo "running with xdebug: $XDEBUG_SESSION";$@
In my example I just use XDEBUG_SESSION with a hostname, but you can use multiple variables. Keep them separated with a semicolon. Execute it as follows (assuming you called the script debug.sh and placed it in the same directory as your PHP script):
$ ./debug.sh php yourscript.php
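With more than one variable, the same pattern could look like this (APP_ENV is just a made-up second variable for illustration):
#!/usr/bin/env sh
export XDEBUG_SESSION=$(hostname); export APP_ENV=debug; echo "running with xdebug: $XDEBUG_SESSION"; "$@"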