Executing a remote script via curl doesn't work - bash

I'm trying to run one script (which installs and sets up a Postgres DB) from another script (test.sh). The problem is that the script called inside my script is remote (hosted on GitHub), and it doesn't work.
I tried calling the script three different ways:
locally - works
remotely using bash <(curl... - raises an error
remotely using curl ... | bash -s - downloads the script and then hangs
test.sh
#!/bin/bash
# this works correctly (it's the same but local)
sudo bash postgres/setup.sh
# this raises "bash: /dev/fd/63: No such file or directory"
sudo bash <(curl -s https://raw.githubusercontent.com/milano-slesarik/sh-ubuntu-instant/master/postgres/setup.sh)
# this hangs
curl https://raw.githubusercontent.com/milano-slesarik/sh-ubuntu-instant/master/postgres/setup.sh | sudo bash -s
The last one probably downloads the script but then hangs...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1532  100  1532    0     0   8281      0 --:--:-- --:--:-- --:--:--  8281
Do you know how to make it work?
I want my scripts to be online so I don't have to clone/download them all the time.

The reason it hangs when you run curl -s $script | bash -s seems to be how the script behaves when invoked that way: because the script itself arrives on bash's stdin, the read builtin inside it has no terminal input to consume, so the prompt loops forever. You can debug this by adding the -x flag to the bash command; you will then see the following message repeating in an endless loop, due to the while statement contained in the script:
++ read -p 'Do you wan'\''t to install postgres? [y/n]: ' yn
++ case $yn in
Also, as long as you are running the script this way, to avoid this behaviour I would either remove the lines in which the script expects user input, or make the prompt read from the terminal explicitly, as in the sketch below. Otherwise, the easiest approach is simply to download the script via curl and then run it locally from your terminal.
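If you control the script, a minimal sketch of the second option: point read at /dev/tty so it still takes keyboard input even when the script itself arrives on bash's stdin through a pipe:
# read from the controlling terminal instead of stdin, so the prompt
# still works when the script is piped into bash
read -p "Do you want to install postgres? [y/n]: " yn < /dev/tty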

This may achieve what you want. The earlier bash: /dev/fd/63: No such file or directory error occurs because the file descriptor created by process substitution in your shell isn't available to the process that sudo spawns; wrapping everything in sudo bash -c makes the substitution happen inside the elevated shell instead:
sudo bash -c "bash <(curl -s https://raw.githubusercontent.com/milano-slesarik/sh-ubuntu-instant/master/postgres/setup.sh)"

Related

Git Bash confuses the slash '/' when running a bash script on Windows

How can I run a bash script containing a slash '/' in Git Bash on Windows?
Here is my script file; it runs fine on Ubuntu/Linux.
#!/bin/sh
KONG_ADMIN_HOST=localhost
KONG_ADMIN_PORT=8001
registerServiceRoute() {
printf "\nRegister %s service route\n" $1
curl -XPOST ${KONG_ADMIN_HOST}:${KONG_ADMIN_PORT}/services/$1/routes \
--data protocols=grpc \
--data name=$1-grpc \
--data paths=$2  # <-- the '/' in this argument gets mangled by Git Bash on Windows
printf "\n"
}
SERVICE_NAME=auth
SERVICE_PATH=/com.bht.saigonparking.api.grpc.auth.AuthService/  # <-- path with '/' here
registerServiceRoute ${SERVICE_NAME} ${SERVICE_PATH}
As soon as I run the script in Git Bash on Windows, the console reports that Paths should start with /, which means Git Bash is mangling the / in the path, even though the path is already correct.
Here is the Git Bash Windows console output:
Register auth service route
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   248  100   143  100   105    674    495 --:--:-- --:--:-- --:--:--  1169
{"message":"schema violation (paths.1: should start with: \/)","name":"schema violation","fields":{"paths":["should start with: \/"]},"code":2}
How can I execute a script whose command contains '/' in Git Bash on Windows?
I'm looking forward to hearing from you all! Thanks a lot!
Add the following before your curl command:
export MSYS_NO_PATHCONV=1
Caveat emptor: this solution is based on a similar issue that occurs with docker commands on Windows (ref: The DevOps 2.1 Toolkit: Docker Swarm). Git Bash (MSYS) rewrites command-line arguments that look like POSIX paths into Windows paths; MSYS_NO_PATHCONV=1 disables that conversion.
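If you'd rather not change the whole environment, a sketch of the one-off form using the values from the question (the env-var prefix limits the setting to this single command):
# disable MSYS path conversion for this invocation only (Git Bash / MSYS2)
MSYS_NO_PATHCONV=1 curl -XPOST localhost:8001/services/auth/routes \
    --data protocols=grpc \
    --data name=auth-grpc \
    --data paths=/com.bht.saigonparking.api.grpc.auth.AuthService/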

TeamCity with Subversion post-commit script on Windows

We would like TeamCity to build our solutions on every commit to Subversion.
Following the documentation, we are to create a .sh script:
SERVER=https://buildserver-url
USER=buildserver-user
PASS="<password>"
LOCATOR=$1
# The following is one-line:
(sleep 10; curl --user $USER:$PASS -X POST "$SERVER/app/rest/vcs-root-instances/commitHookNotification?locator=$LOCATOR" -o /dev/null) >/dev/null 2>&1 <&1 &
exit 0
Subversion is running in a Windows environment, so the .sh file will fail.
We are trying to convert this into a .bat file, which we currently have as:
set SERVER=https://buildserver-url
set USER=buildserver
set PASS=password
LOCATOR=%1%
(timeout 10; curl --user %USER%:%PASS% -X POST "%SERVER%/app/rest/vcs-root-instances/commitHookNotification?locator=%LOCATOR%" -o /dev/null) >/dev/null 2>%1% <%1% &
exit 0
However, this still fails when we try to execute it, with
"The system cannot find the path specified"
It seems that perhaps we haven't converted this correctly?
Are the programs you're referencing (such as curl and timeout.exe) in locations that are present in the $PATH/%PATH% variable? How about any other files you're referencing - are you specifying full paths?
Side note: Did you install curl and timeout.exe on the Windows server?
Also, /dev/null does not exist on Windows; you need to redirect to NUL. You can't just change the file extension and some of your syntax and expect a bash script to work on Windows.
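For illustration, a hedged sketch of a closer batch conversion (untested; note set on every assignment, %1 rather than %1%, timeout /t with /nobreak, and NUL instead of /dev/null; unlike the .sh original, this version blocks for the 10 seconds rather than detaching, which would need start /b or a scheduled task):
rem hypothetical conversion sketch - untested
set SERVER=https://buildserver-url
set USER=buildserver
set PASS=password
set LOCATOR=%1
rem wait, then notify TeamCity, discarding curl's output
timeout /t 10 /nobreak >NUL
curl --user %USER%:%PASS% -X POST "%SERVER%/app/rest/vcs-root-instances/commitHookNotification?locator=%LOCATOR%" -o NUL
exit /b 0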
Were I in your shoes, I'd skip batch altogether and write the script in something modern and sane like PowerShell.

Running commands in bash function as a given user

I'm working with the edx open source code base and I'm putting together a dotfile to make it easier to perform various tasks on the server.
I'm having trouble with the following bash function
edx-compile_assets() {
sudo -H -u edxapp bash
source /edx/app/edxapp/edxapp_env
cd /edx/app/edxapp/edx-platform
paver update_assets cms --settings=aws
paver update_assets lms --settings=aws
}
When sudo -H -u edxapp bash runs, the function halts and nothing happens; when I exit that shell, the remaining commands execute not as the edxapp user but as my normal user, which makes them fail.
So basically, sudo -H -u edxapp bash starts a separate interactive shell, and the remainder of the function only executes once that shell exits.
I guess what I'm looking for is a simple way to run commands as the edxapp user, the one activated by sudo -H -u edxapp bash.
Any help is appreciated, thanks :)
IIRC, bash takes a -c option, allowing you to invoke bash and immediately execute a command/script. So I'd put the extra statements (source, cd, paver ...) into a script and add -c myRemainingScript.sh at the end of your sudo line; see the sketch below.
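As a sketch, the same idea without a separate script file: pass the whole body as one -c string (the commands are taken verbatim from the question; wrapping them in a single-quoted string is my arrangement):
edx-compile_assets() {
    # everything inside the -c string runs in one shell, as edxapp
    sudo -H -u edxapp bash -c '
        source /edx/app/edxapp/edxapp_env
        cd /edx/app/edxapp/edx-platform
        paver update_assets cms --settings=aws
        paver update_assets lms --settings=aws
    '
}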

Resuming curl download that goes through some bash script

I am trying to download a huge file via curl. As far as I can see, there is some bash script hooked in between to deliver the correct file (in this case a virtual machine that runs IE10):
curl -s https://raw.githubusercontent.com/xdissent/ievms/master/ievms.sh | IEVMS_VERSIONS=10 bash
Due to a wobbly internet connection the download fails constantly, so I need a way to resume it at its current position. I've tried resuming the download like so:
curl -s -C - https://raw.githubusercontent.com/xdissent/ievms/master/ievms.sh | IEVMS_VERSIONS=10 bash
However, all I get is some MD5 check failed error... am I missing something?
The curl command you're running there doesn't download the VM images. It downloads a small bash script called ievms.sh and then pipes that script to bash, which executes it; -C - on that command therefore only resumes the (tiny) script download, not the VM image that is actually failing.
Looking at the script, it looks like the file it downloads for IE10 is here:
http://virtualization.modern.ie/vhd/IEKitV1_Final/VirtualBox/OSX/IE10_Win8.zip
I think if you download that file (with your browser or curl) and put it in ~/.ievms, and then run the command again, it should see that the file has already been downloaded and finish the installation.
If the partially-downloaded file is already there, then you could resume that download with this command:
curl -L "http://virtualization.modern.ie/vhd/IEKitV1_Final/VirtualBox/OSX/IE10_Win8.zip" \
-C - -o ~/.ievms/IE10_Win8.zip
(Then run the original IEVMs curl command to finish installation.)
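Putting the two steps together, a sketch of the full recovery sequence (paths and URLs as given above; this assumes ievms checks ~/.ievms before re-downloading):
mkdir -p ~/.ievms                      # ievms keeps its downloads here
curl -L -C - -o ~/.ievms/IE10_Win8.zip \
    "http://virtualization.modern.ie/vhd/IEKitV1_Final/VirtualBox/OSX/IE10_Win8.zip"
# re-run the installer; it should now skip the completed download
curl -s https://raw.githubusercontent.com/xdissent/ievms/master/ievms.sh | IEVMS_VERSIONS=10 bash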

bash command doesn't seem to work, but its echo does?

Well, I'm new to Linux so this may be a very newbie kind of thing; here it goes:
I have a script in which I'm trying to send some different jobs to remote computers (in fact Amazon's EC2 instances); these jobs are in fact the same function run with different parameters.
Eventually, the script reaches this line:
nohup ssh -fqi key.pem ubuntu@${Instance_Id[idx]} $tmp
if I do:
echo nohup ssh -fqi key.pem ubuntu@${Instance_Id[idx]} $tmp
I get:
nohup ssh -fqi key.pem ubuntu@ec2-72-44-41-228.compute-1.amazonaws.com '(nohup ./Script.sh 11 1&)'
Now the weird thing: if I run the code without the echo in the script, it doesn't work! nohup.out (on my laptop; no nohup.out is created on the remote instance) says bash: (nohup ./Script.sh 10 1&): No such file or directory
The file does exist locally and remotely and is chmod +x.
If I simply run the very same script with an echo in front of the problematic line, then copy its output and paste it into the terminal, it works!
Any clues welcome, thanks!
Try removing the single quotes from $tmp. It looks like bash is treating (nohup ./Script.sh 10 1&) as the command with no parameters, but technically nohup is the command with the parameters ./Script.sh 10 1.
The problem is the single quotes around the nohup command in your $tmp variable. They aren't interpreted by your local shell (they're part of the variable's value), so ssh passes them verbatim. Remotely, the shell therefore treats (nohup ./Script.sh 10 1&) as a single command name, looks for a file called exactly that, and of course there isn't one. Make sure you remove the single quotes in $tmp.
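A minimal sketch of that fix, reusing the names from the question (the exact value of $tmp is my reconstruction from the echoed command):
# no literal single quotes inside the value; quoting "$tmp" keeps the
# whole command as one argument for the remote shell to parse
tmp="(nohup ./Script.sh 11 1 &)"
nohup ssh -fqi key.pem "ubuntu@${Instance_Id[idx]}" "$tmp"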
