What regex or command can I use to trim this output? - bash

I am trying to capture a specific piece of output in a shell variable for my remote server configuration, which runs commands one after another.
In an Ubuntu environment with the pm2 node package installed, there is a command that outputs another command I need to run.
Command 1:
PM2=$(pm2 startup systemd)
which outputs this string when I run echo $PM2:
[PM2] Init System found: systemd [PM2] You have to run this command as root. Execute the following command: sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username
I need to capture exactly this part of the output in a variable:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username
So that my cloud-init config file can run it as the next command.
Command 2:
$PM2
How can I get $PM2 to contain only
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username

This may help:
pm2response=$(pm2 startup systemd) # Use lower case for user-defined variables
pm2cmd=${pm2response#*Execute the following command: } # Shell parameter expansion strips everything up to the marker
But this assumes that your string has the phrase Execute the following command: in it, though I guess I am right in assuming so. Good luck!
Note: see the bash manual for more on shell parameter expansion.
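A minimal end-to-end sketch of this approach, assuming the marker phrase Execute the following command: appears exactly once in pm2's output:
#!/usr/bin/env bash
# Capture pm2's full message, which embeds the command we need.
pm2response=$(pm2 startup systemd)
# Strip everything up to and including the marker phrase,
# leaving only the "sudo env PATH=..." command.
pm2cmd=${pm2response#*Execute the following command: }
# Run the extracted command.
eval "$pm2cmd"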

Related

execute aws command in script with sudo

I am running a bash script with sudo and have tried the below, but I am getting the error below when using `aws cp`. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't -E preserve the original location? Is there an option that can be used with `aws cp` to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside of this script is `aws cp`
Error
The config profile (name) could not be found
I have also tried `export`ing the named profile and `source`ing the path to the `config`.
You can run the command as the original user, like this:
sudo -u $SUDO_USER aws cp ...
You could also run the script using source instead of bash -- source runs the script in the same shell as your open terminal window, which keeps the same environment (such as the user) - though honestly, @Philippe's answer is the better, more correct one.
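A small sketch of the accepted approach inside the script (the S3 bucket and paths are placeholders):
#!/usr/bin/env bash
# Run only the aws call as the invoking (non-root) user,
# so the AWS CLI finds ~/.aws/config in that user's home.
sudo -u "$SUDO_USER" aws s3 cp s3://my-bucket/file.txt /tmp/file.txt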

AWS EC2 User Data: Commands not recognized when using sudo

I'm trying to create an EC2 User-data script to run other scripts on boot up. However, the scripts that I run fail to recognize some commands and variables that I'd already declared. I'm running the commands as the "ubuntu" user but it still isn't working.
My user-data script looks something like this:
export user="ubuntu"
sudo su $user -c ". ./run_script"
Within the script, I have these lines:
THIS_PATH="/some/path"
echo "export SOME_PATH=$THIS_PATH" >> ~/.bashrc
source ~/.bashrc
However, the script can't run $SOME_PATH/application, and echo $SOME_PATH returns a blank line. I'm confused, because $SOME_PATH/application works when I log into the EC2 instance over SSH, and my debug logging with whoami returns "ubuntu".
Am I missing something here?
Your user-data script is executed as root, and the su command leaves $HOME and other environment variables intact (note that the sudo is redundant). "su -" does not help either.
So do not use ~ or $HOME; use the full path /home/ubuntu/.bashrc instead.
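A minimal sketch of that fix inside the script (path and variable names taken from the question):
THIS_PATH="/some/path"
# user data runs as root, so ~ would expand to /root; write to ubuntu's file explicitly
echo "export SOME_PATH=$THIS_PATH" >> /home/ubuntu/.bashrc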
I found out the problem. It seems that source ~/.bashrc isn't enough to refresh the shell -- the environment variables worked after I referenced them in another bash script.

Command not found with sudo, but works without sudo

I've installed a binary dep in my GOPATH at /home/me/go/bin to be used.
Running dep successfully executes the binary; however, running sudo dep results in sudo: dep: command not found:
$ dep
Dep is a tool for managing dependencies for Go projects
Usage: "dep [command]"
...
Use "dep help [command]" for more information about a command.
$ sudo dep
sudo: dep: command not found
The paths are not the issue here:
$ echo $PATH
/usr/share/Modules/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/var/lib/snapd/snap/bin:/home/me/.local/bin:/home/me/bin:/home/me/.local/bin:/home/me/bin:/home/me/go/bin:/home/me/.local/bin:/home/me/bin:/home/me/go/bin
$ sudo echo $PATH
/usr/share/Modules/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/var/lib/snapd/snap/bin:/home/me/.local/bin:/home/me/bin:/home/me/.local/bin:/home/me/bin:/home/me/go/bin:/home/me/.local/bin:/home/me/bin:/home/me/go/bin
The paths are identical for my user and for the superuser, both referencing the key directory /home/me/go/bin.
Why does running dep without sudo succeed but with sudo results in command not found?
By default, sudo does NOT pass the user's original PATH into the superuser process; instead, the process gets a default PATH defined by the system. That's easy to see if you run "sudo env" to inspect the entire environment of the sudo'ed process:
$ sudo env | grep PATH
PATH=/sbin:/bin:/usr/sbin:/usr/bin
The command you tried, "sudo echo $PATH", doesn't check anything, because the shell first expands $PATH to whatever value the variable has - and only then calls the command (sudo) - so it just prints your outer environment's value :-)
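To actually inspect the PATH that sudo's child process sees, keep the expansion away from the outer shell, e.g. with single quotes (the output matches the default PATH shown above):
$ sudo sh -c 'echo $PATH'
/sbin:/bin:/usr/sbin:/usr/bin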
To get your PATH to pass inside sudo, you can do something like this:
$ sudo PATH=$PATH sh -c env | grep PATH
PATH=/usr/share/Modules/bin:/usr/lib64/ccache:/home/nyh/gaps:/home/nyh/bin:/usr/local/bin:/usr/bin:/usr/X11R6/bin:/bin:/usr/sbin:/sbin:/usr/games:/usr/local/android-sdk-linux/tools:/usr/local/android-sdk-linux/platform-tools:/home/nyh/google-cloud-sdk/bin
Basically, the command I passed for sudo to run starts by setting PATH to $PATH (remember that $PATH is expanded by the outer shell, before sudo ever runs, so it is the real path I want!) and then runs a shell (which will use this new PATH) to run env. As you can see, env did get the right path. You can replace env with whatever program you want to run.
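Applied to the question, and assuming dep lives under /home/me/go/bin, either of these sketches should work:
# pass the caller's PATH through explicitly
sudo env "PATH=$PATH" dep
# or resolve the binary's absolute path before sudo runs
sudo "$(which dep)"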

Running Docker commands included in a shell script alongside other Linux commands and switching users

Using the Linux terminal, I run bash scripts (.sh files) containing sequences of commands I want to execute.
The issue is that I am unable to run a Docker command from within my shell script. I can run this Docker command when it's typed directly at the terminal with root privileges but not when I include it in the shell script file.
My script, executed as a general user from the command line, looks like this:
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges. My system is set to only run Docker as root
su
# Copy a folder from Docker container to host OS
docker cp <container-name>:/home/user/data /home/user/docker_backup
# More general user commands
cd ..
My code only runs until the su line above. After I enter the root password, nothing happens. If I type exit, I get permission errors, meaning the docker cp command failed.
This is my desired solution:
After thorough research, as I wanted to run my script as a general user and only run certain commands as root when necessary, I came up with a solution that works.
My script now looks like this (run with $ sh script_name.sh):
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges. My system is set to only run Docker as root
su - root -c "docker cp <container-name>:/home/user/data /home/user/docker_backup"
# More general user commands
cd ..
Run the shell script as a general user. For commands that require root privileges, I use su - root -c "<command>". The terminal prompts for the root password and executes the command in quotes as root, then the shell proceeds as the general user.
Actually posting this as an answer:
You switch the current user to root during the script, but the script itself was started by your own user. su opens an interactive root shell and the script waits for it to exit, so the docker cp command only runs after you leave that shell - and it runs as your own user again.
That is why docker cp fails with permission errors: your user has insufficient privileges to talk to the Docker daemon.
A solution is either to put sudo before docker cp, to start the script as root, or to add your user to the "docker" group, which authorizes your user to run docker commands. See the sketch below.
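A sketch of the sudo variant applied to the original script (<container-name> stays a placeholder):
#!/usr/bin/env bash
cd /home/user/docker_backup
rm -rf data
# run only the docker command with elevated privileges
sudo docker cp <container-name>:/home/user/data /home/user/docker_backup
cd ..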
I had a similar issue where the docker commands ran fine in the terminal, but the same commands did not run when I put them into a bash script. There were basically two reasons.
The docker commands need to be run with elevated privileges, that is, with sudo (e.g. sudo docker ps works but docker ps won't). Alternatively, you can add the current user to the docker group so that sudo is not needed with each docker command (see the sketch after this list); the Docker post-installation docs describe this.
Run the script in the correct way:
The script should start with #!/bin/bash; this shebang tells the system which interpreter to use.
Save the file without the .sh extension.
Give the script execute permission with chmod 777 script_name.
Run the script with bash script_name.
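A one-time setup sketch for the docker-group alternative mentioned above (you must log out and back in for the new group membership to take effect):
# add the current user to the docker group so docker works without sudo
sudo usermod -aG docker "$USER"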

Execute gcloud commands in a bash script

The gcloud init command doesn't offer a login prompt during bash script execution.
But it offered the login prompt after I manually typed the exit command once the script had ended.
vagrant#vagrant-ubuntu-trusty-64:~$ exit
logout
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
Your active configuration is: [default]
Pick configuration to use:
[1] Re-initialize this configuration [default] with new settings
[2] Create a new configuration
Please enter your numeric choice: 1
Your current configuration has been set to: [default]
To continue, you must log in. Would you like to log in (Y/n)?
My bash script:
#!/usr/bin/env bash
OS=`cat /proc/version`
function setupGCE() {
curl https://sdk.cloud.google.com | bash
`exec -l $SHELL`
`gcloud init --console-only`
`chown -R $USER:$USER ~/`
}
if [[ $OS == *"Ubuntu"* || $OS == *"Debian"* ]]
then
sudo apt-get -y install build-essential python-pip python-dev curl
sudo pip install apache-libcloud
setupGCE
fi
How can I get the login prompt during the bash script execution?
There are a number of issues with the posted snippet.
The correct snippet is (probably):
function setupGCE() {
curl https://sdk.cloud.google.com | bash
gcloud init --console-only
chown -R $USER:$USER ~/
}
The first error with the original, which you discovered yourself (the what of it at least, if not the why), is that exec -l $SHELL is blocking progress. It does that because you've launched an interactive shell that is now waiting on you for input, and the function is waiting for that process to exit before continuing.
Additionally, exec replaces the current process with the spawned process. You got lucky here, actually. Had you not wrapped the call to exec in backticks, your function would have exited the shell script entirely when you exited the $SHELL it launched. As it is, however, exec just replaced the sub-shell that the backticks added, and so you were left with a child process that could safely exit and return you to the parent/main script.
The second issue is that backticks run the command they surround and then replace themselves with the output. This is why
echo "bar `echo foo` baz"
outputs bar foo baz, etc. (Run set -x before running that to see what commands are actually being run.) So when you write
`gcloud init --console-only`
what you are saying is "run gcloud init --console-only, then take its output and replace the command with that", which will then attempt to run the output as a command itself (which is likely not what you wanted). Similarly on the other lines.
This happens not to have been problematic here, though, as chown and likely gcloud init don't print anything to stdout, so the resulting command line is empty.
Somehow the exec -l $SHELL caused all the mess. I changed it to source ~/.bashrc and now it works.
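Putting the accepted fix and that follow-up together, the function presumably ends up like this sketch (not verified against the Google Cloud SDK installer):
function setupGCE() {
curl https://sdk.cloud.google.com | bash
# pick up the PATH changes the installer appended to ~/.bashrc
source ~/.bashrc
gcloud init --console-only
chown -R $USER:$USER ~/
}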
