I am trying to execute a shell script to automate the process rather than manually running the Python script, but I am getting the error "Folder not found!".
cd /home/gaurav/AndroPyTool
export ANDROID_HOME=$HOME/android-sdk-linux/
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/platform-tools
source ~/.bashrc
source droidbox_env/bin/activate
alias mycd='cd /home/gaurav/AndroPyTool/test'
mycd
pwd
python androPyTool.py -all -csv EXPORTCSV.csv -s mycd
>>>> AndroPyTool -- STEP 1: Filtering apks
Folder not found!
This is the error I am getting because the script is not able to find the path that I have provided above.
The part after "-s" in the command is the folder path where the files are stored.
The issue here is that you are not passing the path to the Python program. The Python program is not aware of bash aliases, and bash only expands aliases when it is interpreting a token as a command.
When bash reads python androPyTool.py -all -csv EXPORTCSV.csv -s mycd, it interprets python as the command, and all the other space-separated tokens are arguments that will be passed to python. Python then invokes androPyTool.py and passes the subsequent arguments to that script. So the program is literally receiving mycd as the argument for -s.
Moreover, even if mycd were expanded, it wouldn't be the correct argument for -s. androPyTool.py is expecting just /path/to/apks, not cd /path/to/apks/.
I don't really think that using the alias in this script makes much sense. It actually makes the script harder to read and understand. If you want to wrap a command, I recommend defining a function, and occasionally you can use variable expansion (but that mixes code and data which can lead to issues). EDIT: As has been pointed out in the comments, aliases are disabled for scripts.
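For example, a minimal sketch of the variable approach, reusing the paths from your own script (adjust APK_DIR if your apks live somewhere else):
cd /home/gaurav/AndroPyTool
source droidbox_env/bin/activate
APK_DIR=/home/gaurav/AndroPyTool/test    # the folder holding the apks
python androPyTool.py -all -csv EXPORTCSV.csv -s "$APK_DIR"
That way the shell expands the path as data, rather than relying on alias expansion.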
Finally, there are some other suspicious issues with your script. Mainly, why are you sourcing .bashrc? If this script is run by you in your user's environment, .bashrc will already have been sourced and there is no need to re-source it. On the other hand, if this is not intended to be run in your environment, and there is something in the .bashrc file that you need in your script, I recommend pulling just that out and nothing else.
But the most immediate issue that I can see is that sourcing .bashrc after you modify PATH runs the risk of overwriting the changes to PATH you just made. Depending on the contents of the .bashrc file, sourcing it may not be idempotent, meaning that running it more than once could have side effects. Finally, anything could get thrown into that .bashrc file down the road, since that's what it's for. So now your script may depend on something that is likely to change. This opens up the possibility that bugs will creep into your script unexpectedly.
Related
I have made a few bash scripts that I have saved to individual folders. I have Windows 10. The scripts have functions that execute commands in bash. I am now able to execute these .sh scripts from any directory, since I have added the folders they are saved in to the PATH variable. But I want to make it even easier for myself, and be able to only type the function name in the bash console to execute the command.
This is an example of one of the scripts. It is saved as file_lister.sh. I am able to run this by typing "file_lister.sh" but I want to run it by only typing the function name in the script, which is "list_files". How do I do this? Thanks in advance.
#!/bin/bash
function list_files(){
    cp C:/Users/jmell/OneDrive/projects/file_lister/file_lister.py file_lister.py
    python file_lister.py
    cwd=$(pwd)
    if [ "$cwd" != "/c/Users/jmell/OneDrive/projects/file_lister" ]
    then
        rm file_lister.py
    fi
}
list_files
Unless you source all of your scripts (e.g. in your .bashrc file), the functions won't be defined. However, your function is doing a lot of extra work that it really shouldn't be. The example script can be reduced to
#!/bin/bash
python C:/Users/jmell/OneDrive/projects/file_lister/file_lister.py
Better yet, keep in mind that the shebang line is read when the file is executed and specifies the interpreter to use. Instead of creating a wrapper script, add the following line to file_lister.py:
#!/path/to/python
At that point, I'd also recommend renaming file_lister.py to just file_lister.
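As a rough sketch of that approach, assuming the shebang points at your Python interpreter and the file lives in one of the directories you already added to your PATH:
chmod +x file_lister    # make it executable; the shebang supplies the interpreter
file_lister             # now runs by name from any directory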
Is there a way to add bash functions to the $PATH, or to the bash shell, without requiring an end-user to source them manually?
In other words, if we have a software library that exports only bash functions, we normally require the end-user to source the bash scripts with
. "$HOME/.the_scripts/"*.sh
and then use them. But is there a way to somehow get the bash functions into the shell without requiring the user to add a line of code to ~/.bashrc or ~/.bash_profile, etc.?
What am I trying to do? I am trying to obviate the need for users to add a call to source a bash script for a library they just installed.
One suggestion I got was to write a container script to a folder that is already in the $PATH.
Say I have a script like so:
#!/usr/bin/env bash
my_func(){
echo "this is my func, $1, $2, $3"
export foo="my_func"
}
my_func a b c
I could write that script to a folder in $PATH and then execute the script, which will then call the bash function(s).
Not sure how great/universal a solution this is, but it would work for some use cases, I suppose. It will not work if you want to export environment variables to the current shell, though, because as far as I know the script, and therefore the bash function(s) it calls, runs in a child process when executed from the command line or from another script, so the exports never reach the calling shell.
If you read about the Bash Startup Files, you'll notice that /etc/profile is one of the files that is processed. If you read that file, you'll see that it sources all *.sh files in /etc/profile.d.
If you can have your script libraries installed in /etc/profile.d, the functions will be available for all interactive login shell sessions.
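For example, a sketch using the library location from the question (requires root, and only new login shells pick it up):
sudo cp "$HOME/.the_scripts/"*.sh /etc/profile.d/
# the next interactive login shell sources these automatically via /etc/profile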
I would like to write a script that has several commands of the kind
export PATH=$PREFIX/bin
Where
$PREFIX = /home/usr
or something else. Instead of typing it into the shell (/bin/bash), I would run the script to execute the commands.
I tried it with sh and then with a .py script containing the line
commands.getstatusoutput('export PATH=$PREFIX/bin')
but these result in the error "bad variable name".
Would be thankful for some ideas!
If you need to adjust PATH (or any other environment variable) via a script after your .profile and equivalents have been run, you need to 'dot' or 'source' the file containing the script:
. file_setting_path
source file_setting_path
The . notation applies to all Bourne shell derivatives, and is standardized by POSIX. The source notation is used in C shell and has infected Bash completely unnecessarily.
Note that the file (file_setting_path) can be specified as a pathname, or if it lives in a directory listed on $PATH, it will be found. It only needs to be readable; it does not have to be executable.
The way the dot command works is that it reads the named file as part of the current shell environment, rather than executing it in a sub-shell as a normal script would be. Normally, the sub-shell sets its environment happily, but that doesn't affect the calling script.
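A small illustration of the difference, using a throwaway file (the name is made up, and the directory is the one from your example):
$ cat add_path.sh
export PATH=$PATH:/home/usr/bin
$ bash add_path.sh     # runs in a child process; the calling shell's PATH is unchanged
$ . ./add_path.sh      # runs in the current shell; PATH is updated here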
The bad variable name is probably just a complaint that $PREFIX is undefined.
Usually a setting of PATH would look something like
export PATH=$PATH:/new/path/to/programs
so that you retain the old PATH but add something onto the end.
You are best off putting such things in your .bashrc so that they get run every time you log in.
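For instance, with your $PREFIX = /home/usr example, appending something like this to ~/.bashrc (a sketch; adjust the directory) makes it permanent:
echo 'export PATH=$PATH:/home/usr/bin' >> ~/.bashrc
. ~/.bashrc     # or simply open a new shell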
I have a shell script on a mac (OSX 10.9) named msii810161816_TMP_CMD with the following content.
matlab
When I execute it, I get
./msii810161816_TMP_CMD: line 1: matlab: command not found
However, when I type matlab into the shell directly it starts as normal. How can it be that the same command works inside the shell but not inside a shell script? I copy-pasted the command directly from the script into the shell and it worked ...
PS: When I replace the content of the script with
echo matlab
I get the desired result, so I can definitely execute the shell script (I use ./msii810161816_TMP_CMD)
Thanks guys!
By default, aliases are not expanded in non-interactive shells, which is what shell scripts are. Aliases are intended to be used by a person at the keyboard as a typing aid.
If your goal is to not have to type the full path to matlab, instead of creating an alias you should modify your $PATH. Add /Applications/MATLAB_R2014a.app/bin to your $PATH environment variable and then both you and your shell scripts will be able to simply say
matlab
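For example, a line like this in ~/.bash_profile would do it (the MATLAB path is the one above; adjust it for your installed version):
export PATH="$PATH:/Applications/MATLAB_R2014a.app/bin"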
This is because, as commenters have stated, the PATH variable inside of the shell executing the script does not include the directory containing the matlab executable.
When a command name is used, like "matlab", your shell looks at every directory in the PATH in order, searching for one containing an executable file with the name "matlab".
Without going into too much detail, the PATH is determined by the shell being invoked.
When you execute bash, it combines a global setting for basic directories that must be in the PATH with any settings in your ~/.bashrc which alter the PATH.
Most likely, you are not running your script in a shell where the PATH includes matlab's directory.
To verify this, you can take the following steps:
Run which matlab. This will show you the path to the matlab executable.
Run echo "$PATH". This will show you your current PATH settings. Note that the directory from which matlab is included in the colon-separated list.
Add a line to the beginning of your script that does echo "$PATH". Note that the directory from which matlab is not included.
To resolve this, ensure that your script is executed in a shell that has the needed directory in the PATH.
You can do this a few ways, but the two most highly recommended ones would be
Add a shebang line to the start of your script. Assuming that you want to run it with bash, do #!/bin/bash or whatever the path to your bash interpreter is.
The shebang line is not actually fully standardized by POSIX, so BSD-derived systems like OSX will happily handle multiple arguments to the shebanged executable, while Linux systems pass at most one argument.
In spite of this, the shebang is an easy and simple way to document what should be used to execute the script, so it's a good solution.
Explicitly invoke your script with a shell as its interpreter, as in bash myscript.sh or tcsh myscript.sh or even sh myscript.sh
This is not incompatible with using a shebang line, and using both is a common practice.
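A sketch combining both, using your script as the example (assuming bash lives at /bin/bash on your machine):
#!/bin/bash
matlab
You can then run it as ./msii810161816_TMP_CMD, letting the shebang pick the interpreter, or explicitly as bash ./msii810161816_TMP_CMD.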
I believe that the default shell on OSX is always bash, so you should start by trying with that.
If these instructions don't help, then you'll have to dig deeper to find out why or how the PATH is being altered between the calling context and the script's internal context.
Ultimately, this is almost certainly the source of your issue.
First of all, I'd like to be corrected on my vocabulary. I assume Terminal provides an environment for bash, which is a type/version of shell. Is this correct?
I'm trying to utilize bash and shell more in my development processes to speed up deployment. However, I'm only beginning to understand the basics outside the commands I've learned from growing up.
I've started making functions in Terminal to help automate some of my more repetitious tasks. This is all fine and dandy until I exit Terminal.
I assume that shell runs in an instance, so that instance is lost when I exit terminal. I notice that shell leaves a .bash_history, also accessible using 'history', where I can see my old functions from old sessions. However, of course, they no longer appear to execute.
I recognize that I could create a shell script, but this introduces compiling issues as well as having to pay more attention to where the scripts are stored relatively.
Question: When I create bash functions using command(){}, they do not persist after closing the terminal. Can I make them persist, so that in new terminal sessions I have access to my old functions without resorting to shell scripts?
Edit: I also wanted to mention that I tried extensively to find an answer to this using traditional means, but "save" and "exit" in the search terms often direct to other aspects of the shell.
Your first statement is correct. A terminal instance runs a type of shell (bash, sh, csh).
You can add them to your ~/.bashrc file or add the saved script path (no compiling needed) to your PATH variable.
You could also just copy scripts to /usr/local/bin for quick access anytime. You would have no need to keep track of where they are relatively. This is quite handy and makes your scripts available to other users (or not if permissions are set correctly)
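For example (the script name here is made up; adjust the permissions to control who can run it):
sudo cp my_tasks.sh /usr/local/bin/my_tasks
sudo chmod 755 /usr/local/bin/my_tasks    # readable and executable by everyone
my_tasks                                  # now runs from any directory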
See the Using History Interactively section of the Bash Reference Manual for ways you can execute commands from your history.
For example, typing !?foo and pressing Enter will execute the most recent command containing "foo". I like to have shopt -s histverify histreedit in my ~/.bashrc so I can edit and confirm the command if necessary, rather than executing it immediately.
Also see the Commands For Manipulating The History section for keystrokes you can use to search for history entries to recall and execute.
For example, pressing Ctrl-r and typing foo will recall the most recent command containing "foo". You can press Ctrl-r additional times to continue searching in reverse for additional matching commands. Press Enter when you're ready to execute the one currently shown or Ctrl-g to abort the search.
If you add stty -ixon to your ~/.bashrc, then you can use Ctrl-s to search forward through history after you've begun searching backward.
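Put together, the ~/.bashrc additions mentioned above look like this:
shopt -s histverify histreedit    # let me edit/confirm a history expansion before it runs
stty -ixon                        # free Ctrl-s so it can search history forward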
Of course, you can save your functions by editing ~/.bashrc and adding them to it. I prefer to keep my functions in a file I created called ~/bin/functions and then add a line to ~/.bashrc to source that file. The line looks like . ~/bin/functions.
I save larger scripts in /usr/local/bin or ~/bin. The former should already be in your path and you can add the latter to your path by editing ~/.bashrc.
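In ~/.bashrc that amounts to two lines (paths as described above):
. ~/bin/functions                  # load personal functions into every interactive shell
export PATH="$PATH:$HOME/bin"      # make scripts saved in ~/bin runnable by name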
After you type in the functions on the command-line you could recall them using command-line editing (as @Dennis Williamson mentioned), but there is an easier method: declare -f. This command lists all current functions, and you can redirect them to a file:
/home/user1> function myfunc {
> echo "Hollow world!"
> }
/home/user1> declare -f > myfuncs
/home/user1> more myfuncs
myfunc ()
{
echo "Hollow world"
}
Note how Bash changes the function syntax from Korn shell to Bourne shell! Fortunately there is no difference between the two in Bash (unlike ksh93).
When you need to load the function it is a simple matter:
/home/user1> source myfuncs
/home/user1> myfunc
Hollow world!
You don't need execute permissions by the way, only read.
You could (as others have said) add this to one of your start-up files, like .bashrc.
You can create a simple library which would contain all your functions. This would basically solve your problem of storing functions.
yeshwantnaik$ cat my_functions.lib
function do_something(){
    #your code goes here
}
Save it wherever you want. Preferably to your $HOME directory.
Load the library. Don't miss the dot at the beginning.
yeshwantnaik$ . $HOME/my_functions.lib
Now you can run your function directly.
yeshwantnaik$ do_something
Let Linux do stuff for you
You can skip the step of manually loading the library by letting Linux do it for you automatically when you log in.
Run below commands
echo ". $HOME/my_functions.lib" >> ~/.bashrc
echo ". $HOME/my_functions.lib" >> ~/.bash_profile
source ~/.bashrc
source ~/.bash_profile
That's it. You can execute your function directly from the command line without loading anything first.