My bash script:
#!/bin/bash
cd /tmp
Before running my script:
pwd: /
After running my script:
pwd: /
After running my script through sourcing it:
pwd: /tmp
How can I stay at the path set by the script without sourcing it?
You can't. Changes to the current directory only affect the current process.
Let me elaborate a little bit on this:
When you run the script, bash creates a new process for it, and changes to the current directory only affect that process.
When you source the script, the script is executed directly by the shell you are running, without creating extra processes, so changes to the current directory are visible to your main shell process.
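You can see the difference with the script from the question (the name script.sh is just a placeholder):
$ ./script.sh; pwd      # runs in a child process, so the parent's cwd is unchanged
/
$ source ./script.sh; pwd   # runs in the current shell, so the cd persists
/tmp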
So, as Ignacio pointed out, this cannot be done.
Ignacio is correct. However, as a heinous hack (totally ill-advised, and this really should get me at least 10 downvotes) you can exec a new shell when you're done:
#!/bin/bash
...
cd /
exec bash
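For example (the script name is hypothetical):
$ ./myscript.sh
$ pwd
/
Note that you are now sitting in a nested bash process; when you exit it, you drop back into the original shell, still in its original directory.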
Here's a silly idea. Use PROMPT_COMMAND. For example:
$ export PROMPT_COMMAND='test -f $CDFILE && cd $(cat $CDFILE) && rm $CDFILE'
$ export CDFILE=/tmp/cd.$$
Then, make the last line of your script be 'pwd > $CDFILE'
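Put together, a minimal sketch (the script name myscript.sh is hypothetical; the cd /tmp comes from the original question):
#!/bin/bash
# hypothetical myscript.sh: change directory, then record the new cwd for the parent shell
cd /tmp
pwd > "$CDFILE"
After the script exits, the next time your prompt is drawn PROMPT_COMMAND notices the file, cds into the recorded directory, and removes the file.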
If you really need this behavior, you can make your script return the directory, then use it somehow. Something like:
#!/bin/bash
cd /tmp
pwd
and then you can
cd $(./script.sh)
Ugly, but does the trick in this simple case.
You can define a function to run in the current shell to support this. E.g.
md() { mkdir -p "$1" && cd "$1"; }
I have the above in my ~/.bashrc
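For example (the path is only an illustration):
$ md /tmp/new/dir    # creates the directory tree, including missing parents, and leaves you inside it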
I have the following shell file that contains this:
sh
nightlyTag() {
echo $1-alpha.$(date +%Y%m%d).$(($(date +%s%N)/1000000))
}
yarnPubCanaryVersion() {
if [ -z "$1" ]
then
echo "No version argument supplied, maybe you meant v1.0.0?"
return 1
fi
version=`nightlyTag $1`
yarn version --new-version $version --no-git-tag-version
npm publish --tag canary
git reset --hard HEAD
}
I make the file executable with chmod +x canary.sh and run it with ./canary.sh. My terminal then changes to sh-3.2$, and when I try to run the functions in that terminal, like nightlyTag, I get
sh: nightlyTag: command not found
Same for yarnPubCanaryVersion.
I was looking at this SO question
You won't be able to run functions from the terminal after you run the script.
You need to source the script to do this:
source ./canary.sh
Or add the contents of the file to the .bashrc file or its equivalent, and then source it.
The source command is used to load any function file into the current shell.
Now, once you call those functions, you will get the expected output.
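For example, using the version string from your own error message as a sample argument:
$ source ./canary.sh
$ nightlyTag v1.0.0
This prints a tag of the form v1.0.0-alpha.<YYYYMMDD>.<milliseconds since the epoch>, and yarnPubCanaryVersion v1.0.0 would then bump the version, publish with the canary tag, and reset the working tree.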
At the top of your sh file you need to include a shebang line:
#! /path/to/bash
that is, the path to the bash you are using.
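For example, if bash is installed at the usual Linux location:
#!/bin/bash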
I'm creating a script to pass to a few different people and ran into an env problem. The script wouldn't run unless I supplied it with $PATH, $HOME, and $GOPATH at the beginning of the file, like so:
HOME=/home/Hustlin
PATH=/home/Hustlin/bin:/home/Hustlin/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/bin:/home/Hustlin/go/bin
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
This is not ideal when passing the script around, since each person has to set these variables themselves. The file would rarely be run by the user directly; it would most often be run via crontab.
I would love to hear a better way of coding this so I'm not asking everyone I send the script to update these variables.
Thank you all in advance!!!
EDIT
The script is being run via crontab with no special permissions.
1,16,31,46 * * * * /home/Hustlin/directory1/super_cool_script.sh
Here is the script I am running:
#!/bin/bash
# TODO Manually put your $PATH and $HOME here.
PATH=/home/Hustlin/bin:/home/Hustlin/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/bin:/home/Hustlin/go/bin
HOME=/home/Hustlin
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
# Field1
field1="foo"
# Welcome message.
echo Starting the update process...
# Deposit directory.
mkdir -p $HOME/directory1/sub1/data/body
mkdir -p $HOME/directory1/sub2/system
# Run command
program1 command1
# Run longer command.
program1 command2 field1
sleep 3
program1 command3 -o $HOME/directory1/sub1/data $field1
sleep 1
# Unzip and discard unnecessary files.
unzip $HOME/directory1/sub1/data/$field1 -d $HOME/directory1/sub1/data
rm $HOME/directory1/sub1/data/bar.yaml $HOME/directory1/sub1/data/char.txt
rm $HOME/directory1/sub1/data/$field1.zip
# Rename
mv $HOME/directory1/sub1/data/body.json $HOME/directory1/sub1/data/body/$(date -d '1 hour ago' +%d-%m-%Y_%H).json
echo Process complete.
I changed most of the program and command names for privacy. What I did post still represents what is being done and how the files are being moved.
The issue is crontab, not the script.
When you run the script in your terminal, you are logged into a session with all environment variables set, so the script can use them.
But when you run it from crontab, it runs in an "empty" session, so it does not have any environment variables set; it doesn't even know about your user.
Run the script from crontab like this:
su --login Hustlin /home/Hustlin/directory1/super_cool_script.sh
Check this documentation.
http://man7.org/linux/man-pages/man1/su.1.html
bash -l -c /path/to/script will make bash execute all .bashrc and .profile files first, so it will have HOME and PATH variables set.
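For example, the crontab entry from the question could become (a sketch that keeps the original schedule and path):
1,16,31,46 * * * * bash -l -c /home/Hustlin/directory1/super_cool_script.sh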
I'm on a Linux machine using screen, and I'm attempting to write a (fairly portable) function which runs a bash function in a new, detached screen session which automatically closes upon completion. I've had some success, but I noticed the following behavior:
If I include the definition of mail_submit() in my ~/.bashrc file, I can run
mail_submit foo
in the terminal, and I can also access the function in a new screen session:
screen -S test
mail_submit foo
However, the following command does not work:
screen -d -m -S test sh -c 'mail_submit foo'
presumably because sh -c starts a fresh shell that has no knowledge of my ~/.bashrc profile. So, I can use the following fix:
screen -d -m -S test sh -c 'source ~/.bashrc; mail_submit foo'
which does work.
But if I want to wrap this functionality up into a bash alias (which is my ultimate goal here), this will cause a weird self-referential situation.
Question: What is an easy way to either have sh -c know the location of my ~/.bashrc profile, or use a variant of sourcing the file and creating an alias?
EDIT: I could save the shell script in my home directory, and create an alias which runs
screen -d -m -S test bash -c '~/mail_submit.sh $1'
but I'd still be curious to hear other possible fixes.
A default ~/.bashrc contains a little piece of code like this near the top: [[ "$-" != *i* ]] && return. This line prevents the ~/.bashrc from being sourced when the bash shell is not running in interactive mode.
You could:
Remove this line
Create a new file which will only contain the alias you need and source that (see the sketch after this list)
Create a little bash script instead of an alias and run that
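A minimal sketch of the second option, assuming you copy mail_submit into a hypothetical ~/.bash_functions file and source that instead of the full ~/.bashrc:
screen -d -m -S test sh -c '. ~/.bash_functions; mail_submit foo'
This keeps your interactive ~/.bashrc untouched, and the non-interactive shell only loads what it needs.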
Do you mean screen -d -m -S test bash -c 'mail_submit foo'?
It looks like you're trying to run the command with the shell (sh), and not the Bourne Again Shell (bash), which is the interpreter that actually reads the ~/.bashrc profile.
Edit: The .bashrc file is not being sourced by default because screen does not create the bash process as a login shell, which is when the .bashrc file is read. Creating a .screenrc file with the line defshell -bash will create the bash process as a login shell instead, which will then call the .bashrc file.
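For example, a minimal ~/.screenrc sketch:
# the leading '-' makes screen start the shell as a login shell
defshell -bash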
I've created a bash script to take a backup of a folder to a remote location via cygwin cron; however, I'm experiencing an issue. At the end, the script executes a command like this one:
/usr/bin/tar -zcvf //192.168.1.108/Backup/Folder/Folder.Backup.2015-12-03.1219.tar.gz /cygdrive/d/Folder
When the command the script produces is executed in the context of a cygwin bash shell, it works correctly, but when I run it via a cron job it fails because it doesn't recognize the remote location path. If I change the path to a local /cygdrive location or to ~/, it works correctly even via cron, so I'm thinking that the network shares are not being seen correctly by cygwin in its cron environment.
Any ideas how I could solve this issue?
Here's my bash script
#!/usr/bin/bash
#the path needs to be set to execute gzip command or tar command breaks
export PATH=$PATH:/usr/bin:/bin:/usr/local/bin:/usr/local/sbin:/sbin
if [ $# -ne 3 ]
then
echo "USAGE: backup-clients <path> <name_prefix> <source>";
exit 1;
fi
DATE=`date "+%Y-%m-%d.%H%M"`;
FILEPATH="$1/$2.Backup.$DATE.tar.gz";
COMMAND="/usr/bin/tar -zcvf $FILEPATH $3";
echo "COMMAND="$COMMAND;
eval $COMMAND;
Which I run with the command
/usr/bin/bash /cygdrive/d/mybackupscript.bash "//192.168.1.108/Backup/Folder" "Folder" "/cygdrive/d/Folder"
I really appreciate any help you can provide.
I was trying the below program. This is a simple script to cd into a folder:
#! /bin/bash
cd /root/
But the below command doesn't get into the folder.
EDITED
#!/bin/bash
alias ex="cd /fs/fm"
alias ex1="source setenv"
alias ex2="cd /fs/fm/tests"
alias ex3="runtest"
To get into /root/ you should make sure that you have permissions. It's accessible if you're running as root itself but if you're running as a normal user you should consider becoming root first. One way is to use sudo:
sudo bash script.sh
And again, make sure your script is in UNIX format. Certainly you can't change to /root/\r.
sed -i 's|\r||' script.sh
dos2unix script.sh
This will never work. The script you're running is a separate process; when it finishes you get back to the original environment (cwd, environment variables, etc.).
Create an alias:
alias r="cd /root"
or execute the script within your shell:
. myscript
Note: . is a synonym for source.