Can we run a bash script from a repo? - bash

I am at beginner level. I am not sure if this is feasible or not. I have a bash script on bitbucket repo which does some kind of setup. To run that bash script I have to download that locally and run the .sh file. Is there any way I can run the script through bitbucket repo without downloading?

You'll always need to download the file (i.e: retrieve it from the server), but you can build a pipeline that retrieves and executes it in one step. The simplest would be:
curl ${url} | bash
You'll need to locate the URL that presents the raw file (rather than the HTML web page). For Bitbucket this will look something like the following. You can substitute ${commit_id} for a branch or tag name instead.
https://bitbucket.org/${user}/${repo}/raw/${commit_id}/${file}
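For example, a complete (hypothetical) invocation, with made-up user, repo, branch, and file names, might look like this:
curl -fsSL "https://bitbucket.org/alice/setup-scripts/raw/main/setup.sh" | bash
The -fsSL flags make curl fail on HTTP errors, stay quiet except for real errors, and follow redirects, so an error page doesn't end up being piped into bash.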
Beware, however, that this often raises eyebrows from a security point of view, especially if retrieving the file via HTTP (rather than HTTPS), as you're basically running unknown code on your computer. Using sudo in this pipeline is even more concerning.
The user needs to be prepared to trust whatever is stored in the repository, so make sure that you only allow trusted users to push (or merge), and make sure that you review changes to the file in question carefully.
You should also be aware that when running a script like this (equally for bash ${file} or bash < ${file}), the shebang will not be respected - it will just be seen as a comment and ignored.
If, for example, your script begins as below (-e to exit on error, and -u to treat undefined variables as an error), then these flags will not be set.
#!/bin/bash -eu
# ... body of script ...
When "executing" the file directly (i.e: chmod +x ./my_script.sh, ./my_script.sh), the kernel process the shebang and invokes /bin/bash -eu... but when executing the script via one of the above methods, the bash invocation is in the pipeline.
Instead, it is preferable to set these flags in the body of your script, so that the method of execution doesn't matter:
#!/bin/bash
set -eu
# ... body of script ...
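You can see the difference with a small throwaway test (this snippet is just an illustration, not part of the original script):
printf '#!/bin/bash -eu\necho "$UNDEFINED"\necho reached\n' > /tmp/demo.sh
chmod +x /tmp/demo.sh
/tmp/demo.sh          # fails with "UNDEFINED: unbound variable"; "reached" never prints
bash < /tmp/demo.sh   # shebang is just a comment here: prints a blank line, then "reached"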

Related

Script piped into bash fails to expand globs during rm command

I am writing a script with the intention of being able to download and run it from anywhere, like:
bash <(curl -s https://raw.githubusercontent.com/path/to/script.sh)
The command above allows me to download the script, run interactive commands (e.g. read), and - for the most part - it Just Works. I have run into an issue during the cleanup portion of my script, however, and haven't been able to discern a fix.
During cleanup I need to remove several .bkp files created by the script's execution. To do so I run rm -f **/*.bkp inside the script. When a local copy of the script is run, this works great! When run via bash/curl, however, it removes nothing. I believe this has something to do with a failure to expand the glob as a result of the way I've connected the I/O of bash and curl, but I have been unable to find a way to get everything to play nicely.
How can I meet all of the following requirements?
Download and run a script from a remote resource
Ensure that the user's keyboard input is connected for use in e.g. read calls within the script
Correctly expand the glob passed to rm
Bonus points: colorize output with e.g. echo -e "\x1b[31mSome error text here\x1b[0m" (also not working, suspected to be related to the same bash/curl I/O issues)
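One possible lead, assuming the glob itself is the culprit (an assumption, not part of the original question): in bash, ** only recurses into subdirectories when the globstar shell option is enabled, and it is off by default, in which case ** behaves like a single *. A script that relies on it has to turn it on itself:
#!/bin/bash
# enable recursive ** matching before the cleanup step (globstar is off by default)
shopt -s globstar
rm -f -- **/*.bkp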

calling another scripts to run in current script

I'm writing a shell script. What it does is create a file from input received from the user. Now, I want to add a feature called "view a file" to my current script. It's unreasonable to retype it all again since I already have a script that does this.
I know it sounds crazy when it's possible to do this with normal shell commands. I'm actually writing a script that helps me create pages generated with the touch command (these pages have an attached date, author name, subject, and title).
The question is: how do I call another script, or include another script, from the current one?
There are a couple of ways to do this. My preferred way is by using source.
You can (a short sketch of each follows this list):
Call your other script with the source command (alias is .) like this: source /path/to/script.
Make the other script executable, add the #!/bin/bash line at the top, and add the directory containing the script to the $PATH environment variable. Then you can call it as a normal command.
Use the bash command to execute it: /bin/bash /path/to/script
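For concreteness, a hedged sketch of all three approaches (script names and paths here are made up):
# 1. source it: runs in the current shell, so its variables and functions persist
source /path/to/view_file.sh        # or: . /path/to/view_file.sh

# 2. make it executable and put its directory on PATH, then call it like any command
chmod +x /path/to/view_file.sh
export PATH="$PATH:/path/to"
view_file.sh

# 3. run it with an explicit interpreter in a child process
/bin/bash /path/to/view_file.sh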

How Secure is using execFile for Bash Scripts?

I have a node.js app which is using the child_process.execFile command to run a command-line utility.
I'm worried that it would be possible for a user to run commands locally (a rm / -rf horror scenario comes to mind).
How secure is using execFile for Bash scripts? Any tips to ensure that flags I pass to execFile are escaped by the unix box hosting the server?
Edit
To be more precise, I'm wondering whether the arguments being sent to the file could be interpreted as a command and executed.
The other concern is inside the bash script itself, which is technically outside the scope of this question.
Using child_process.execFile by itself is perfectly safe as long as the user doesn't get to specify the command name.
It does not run the command in a shell (like child_process.exec does), so there is no need to escape anything.
child_process.execFile will execute commands with the user id of the node process, so it can do anything that user could do, which includes removing all the server files.
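A rough shell analogy of the difference (this is not node code, just an illustration of why the direct path needs no escaping; the input string is made up):
user_input='; rm -rf /tmp/scratch'
# through a shell (what child_process.exec does): the ; is parsed as a command separator
sh -c "ls $user_input"       # runs ls, then runs rm
# passed directly as an argument (what execFile does): the string stays literal
ls -- "$user_input"          # one strange filename that doesn't exist; nothing else runs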
It's not a good idea to let the user pass in the command, as your question seems to imply.
You could consider running the script in a sandbox by using chroot, limiting the commands available and what resides on the accessible file system, but this could get complex in a hurry.
The command you pass will get executed directly via some flavor of exec, so unless what you're trying to execute is a script, it does not need to be escaped in any way.

Ruby, Unicorn, and environment variables

While playing with Heroku, I found their approach of using environment variables for server-local configuration brilliant. Now, while setting up an application server of my own, I find myself wondering how hard that would be to replicate.
I'm deploying a sinatra application, riding Unicorn and Nginx. I know nginx doesn't like to play with the environment, so that one's out. I can probably put the vars somewhere in the unicorn config file, but since that's under version control with the rest of the app, it sort of defeats the purpose of having the configuration sit in the server environment. There is no reason not to keep my app-specific configuration files together with the rest of the app, as far as I'm concerned.
The third, and last (to my knowledge) option, is setting them in the spawning shell. That's where I got lost. I know that login and non-login shells use different rc files, and I'm not sure whether calling something with sudo -u http stuff is or not spawning a login shell. I did some homework, and asked google and man, but I'm still not entirely sure on how to approach it. Maybe I'm just being dumb... either way, I'd really appreciate it if someone could shed some light on the whole shell environment deal.
I think your third possibility is on the right track. What you're missing is the idea of a wrapper script, whose only function is to set the environment and then call the main program with whatever options are required.
To make a wrapper script that can function as a control script (if prodEnv use DB=ProdDB, etc.), there is one more piece that simplifies this problem. Bash and ksh both support a feature called sourcing files. This is an operation the shell provides to open a file and execute what is in it, just as if it were in-lined in the main script, like #include in C and other languages.
ksh and bash will automatically source /etc/profile, /var/etc/profile.local (sometimes), and $HOME/.profile. There are other filenames that will also get picked up, but in this case you'll need to make your own env file and then explicitly load it.
As we're talking about wrapper-scripts, and you want to manage how your environment gets set up, you'll want to do the sourcing inside the wrapper script.
How do you source an environment file?
envFile=/path/to/my/envFile
. $envFile
where envFile will be filled with statements like
dbServer=DevDBServer
webServer=QAWebServer
....
You may discover that you need to export these variables for them to be visible to child processes:
export dbServer webServer
An alternate assignment-and-export form is also supported:
export dbServer=DevDBServer
export webServer=QAWebServer
Depending on how non-identical your different environments are, you can have your wrapper script figure out which environment file to load.
case $( /bin/hostname ) in
    prodServerName )
        envFile=/path/2/prod/envFile ;;
    QAServerName )
        envFile=/path/2/qa/envFile ;;
    devServerName )
        envFile=/path/2/dev/envFile ;;
esac
. ${envFile}
#NOW call your program
myProgram -v -f inFile -o outFile ......
As you develop more and more scripts in your data processing environment, you can always source your envFile at the top. When you eventually change the physical location of a server (or its name), you have only one place where you need to make the change.
IHTH
There are also a couple of gems dealing with this: figaro, which works both with and without Heroku and keeps its variables in a YAML file (under config/ and git-ignored), and dotenv, which reads variables from a .env file. There is also an article covering all of these options.
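A dotenv-style file is just KEY=value lines; a minimal hypothetical example:
# .env (git-ignored; names and values are placeholders)
DATABASE_URL=postgres://localhost/myapp_development
SESSION_SECRET=change_me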
To spawn a login shell (one that reads the user's profile files) you need to invoke sudo like this:
sudo -i -u <user> <command>
Also, you may use -E to preserve the environment. This will allow some variables to be passed from your current environment to the command invoked with sudo.
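For example (the user name and script path here are hypothetical):
# run the app's start script as the deploy user with a login shell
sudo -i -u deploy /srv/app/bin/start_unicorn.sh
# or keep selected variables from the calling environment instead
sudo -E -u deploy /srv/app/bin/start_unicorn.sh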
I solved a similar problem by explicitly telling Unicorn to read a variables file as part of startup in its init.d script. First I created a file in a directory above the application root called variables. In this script I call export on all my environment variables, e.g. export VAR=value. Then I defined a variable GET_VARS=source /path/to/variables in the /etc/init.d/unicorn file. Finally, I modified the start option to read su - $USER -c "$GET_VARS && $CMD" where $CMD is the startup command and $USER is the app user. Thus, the variables defined in the file are exported into the shell of Unicorn's app user on startup. Note that I used an init.d script almost identical to the one from this article.
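A hedged sketch of the two pieces described above (paths, user, and variable names are placeholders):
# /srv/app/variables -- one export per line
export RACK_ENV=production
export DATABASE_URL=postgres://db.internal/app

# in /etc/init.d/unicorn
GET_VARS="source /srv/app/variables"
# ...and in the start) branch:
su - "$USER" -c "$GET_VARS && $CMD"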

How to test things in crontab

This keeps happening to me all the time:
1) I write a script (ruby, shell, etc).
2) run it, it works.
3) put it in crontab so it runs in a few minutes so I know it runs from there.
4) It doesn't: no error trace, so it's back to step 2 or 3 a thousand times.
When my ruby script fails in crontab, I can't really tell why it fails, because when I pipe output like this:
ruby script.rb >& /path/to/output
I sorta get the output of the script, but I don't get any of the errors from it, and I don't get the errors coming from bash (like if ruby is not found or the file isn't there).
I have no idea what environmental variables are set and whether or not it's a problem. Turns out that to run a ruby script from crontab you have to export a ton of environment variables.
Is there a way for me to just have crontab run a script as if I ran it myself from my terminal?
When debugging, I have to reset the timer and go back to waiting. Very time consuming.
How to test things in crontab better or avoid these problems?
"Is there a way for me to just have crontab run a script as if I ran it myself from my terminal?"
Yes:
bash -li -c /path/to/script
From the man page:
[vindaloo:pgl]:~/p/test $ man bash | grep -A2 -m1 -- -i
-i If the -i option is present, the shell is interactive.
-l Make bash act as if it had been invoked as a login shell (see
INVOCATION below).
G'day,
One of the basic problems with cron is that it gives you a minimal environment. In fact, you only get four env. vars set, and they are:
SHELL - set to /bin/sh
LOGNAME - set to your userid as found in /etc/passwd
HOME - set to your home dir. as found in /etc/passwd
PATH - set to "/usr/bin:/bin"
That's it.
However, what you can do is take a snapshot of the environment you want and save that to a file.
Now make your cronjob source a trivial shell script that sources this env. file and then executes your Ruby script.
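A minimal sketch of such a wrapper (file names and paths are placeholders):
#!/bin/bash
# cron_wrapper.sh -- run the ruby script with the environment captured in cron.env
. /home/me/cron.env              # a hand-maintained file of export VAR=value lines
exec /usr/bin/ruby /home/me/script.rb "$@"
with a crontab entry along the lines of:
*/10 * * * * /home/me/cron_wrapper.sh >> /home/me/cron.log 2>&1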
BTW Having a wrapper source a common env. file is an excellent way to enforce a consistent environment for multiple cronjobs. This also enforces the DRY principle, because it gives you just one point to update things as required, instead of having to search through a bunch of scripts for a specific string if, say, a logging location is changed or a different utility is now being used, e.g. gnutar instead of vanilla tar.
Actually, this technique is used very successfully with The Build Monkey which is used to implement Continuous Integration for a major software project that is common to several major world airlines. 3,500kSLOC being checked out and built several times a day and over 8,000 regression tests run once a day.
HTH
'Avahappy,
Run a 'set' command from inside of the ruby script, fire it from crontab, and you'll see exactly what's set and what's not.
To find out the environment in which cron runs jobs, add this cron job:
{ echo "\nenv\n" && env | sort ; echo "\nset\n" && set; } | /usr/bin/mailx -s 'my env' you@example.com
Or send the output to a file instead of email.
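For example, a hypothetical crontab entry that dumps the same information to a file:
* * * * * { echo env; env | sort; echo; echo set; set; } > /tmp/cron_env.txt 2>&1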
You could write a wrapper script, called for example rbcron, which looks something like:
#!/bin/bash
RUBY=ruby
export VAR1=foo
export VAR2=bar
export VAR3=baz
$RUBY "$*" 2>&1
This will redirect standard error from ruby to standard output. Then you run rbcron in your cron job, and the standard output contains ruby's output and errors, as well as the "bash" errors coming from rbcron itself. In your cron entry, redirect with > /path/to/output 2>&1 so that both output and error messages go to /path/to/output.
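So a hypothetical crontab entry using the wrapper would look like:
*/5 * * * * /home/me/bin/rbcron /home/me/script.rb > /path/to/output 2>&1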
If you really want to run it as yourself, you may want to invoke ruby from a shell script that sources your .profile/.bashrc etc. That way it'll pull in your environment.
However, the downside is that it's not isolated from your environment, and if you change that, you may find your cron jobs suddenly stop working.
