AWS EC2 PHP exec sudo not working - amazon-ec2

We are trying to automate website deployment on an AWS EC2 Ubuntu instance using PHP.
To deploy a website on the instance we have to follow these steps:
Make a folder in the www folder
Make a conf file in sites-available
Execute a2ensite
Restart Apache
Create a database
Create a user and grant all access
I am trying to do all of this with PHP, but I am stuck at the first step with the following code:
echo shell_exec("sudo mkdir /var/www/{$_POST['environment']} 2>&1");
echo shell_exec("sudo mkdir /var/www/{$_POST['environment']}/{$_POST['site']}/public_html 2>&1");
echo shell_exec("sudo mkdir /var/www/{$_POST['environment']}/{$_POST['site']}/ssls 2>&1");
echo shell_exec("sudo mkdir /var/www/{$_POST['environment']}/{$_POST['site']}/conf 2>&1");
echo shell_exec("sudo mkdir /var/www/{$_POST['environment']}/{$_POST['site']}/sqls 2>&1");
but I get the following error:
sudo: no tty present and no askpass program specified
I have thought about some alternatives:
Set a password for sudo (didn't find anything useful on Google)
Connect to EC2 using a PHP library (a library is available, but I don't know how to execute commands with it)
Any suggestions?

That error means sudo needs to prompt for a password and has no terminal to do it on. You can allow the user running the PHP commands to use sudo without a password. See this related answer, or search for how to allow sudo without a password and you will find plenty of guides.
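A minimal sketch of such a sudoers rule, assuming the PHP script runs as Apache's default www-data user on Ubuntu (the user name, command list, and paths are assumptions; adjust them to your system and keep the NOPASSWD list limited to the exact commands the script needs):
# create the rule with: sudo visudo -f /etc/sudoers.d/deploy
# let the web server user run only the deployment commands without a password
www-data ALL=(root) NOPASSWD: /bin/mkdir, /usr/sbin/a2ensite, /usr/sbin/service apache2 restart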

Related

Azure Bash - Permission denied when running . <(sudo wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup)

I'm trying to set up my environment to learn Azure from the Microsoft learning page https://learn.microsoft.com/en-us/learn/modules/microservices-data-aspnet-core/environment-setup,
but when I run . <(sudo wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup) to pull the repo and run the services, I get the error below:
~/clouddrive/aspnet-learn/modules/microservices-data-aspnet-core/setup ~/clouddrive/aspnet-learn
~/clouddrive/aspnet-learn
bash: /home/username/clouddrive/aspnet-learn/src/deploy/k8s/quickstart.sh: Permission denied
bash: /home/username/clouddrive/aspnet-learn/src/deploy/k8s/create-acr.sh: Permission denied
cat: /home/username/clouddrive/aspnet-learn/deployment-urls.txt: No such file or directory
This used to work, but at some point it broke and I'm not sure what caused it or how to fix it.
I've tried deleting the storage account and the resources, but that doesn't seem to work. Also, when I delete the storage account, create a new one and try again, the old data still seems to be there and I'm told to run a remove, so this data isn't really being deleted when I delete the storage account:
Before running this script, please remove or rename the existing /home/username/clouddrive/aspnet-learn/ directory as follows:
Remove: rm -r /home/username/clouddrive/aspnet-learn/
Any idea what is wrong here, or how I can actually reset this so it behaves like fresh storage?
Note: I saw some solutions that say to start with sudo for elevated permissions, but I didn't manage to get that to work.
I reproduced this by following the given document and was able to deploy a modified version of the eShopOnContainers reference app.
I then executed the same command again,
. <(wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup)
and got the same error you did.
If you run the deploy script without cleaning up the already created resources/app, you will get the above error.
If you want to re-run the setup script, run the commands below first to clean up the existing resources:
cd ~ && \
rm -rf ~/clouddrive/aspnet-learn && \
az group delete --name eshop-learn-rg --yes
OR
Remove: rm -r /home/username/clouddrive/aspnet-learn/
Rename: mv /home/username/clouddrive/aspnet-learn/ ~/clouddrive/new-name-here/
These commands remove or rename the existing /home/username/clouddrive/aspnet-learn/ directory.
Now you can run the setup script again.
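For example, a full clean-and-retry that combines the commands above (eshop-learn-rg is the resource group name used by this module's setup script; adjust it if yours differs):
cd ~ && \
rm -rf ~/clouddrive/aspnet-learn && \
az group delete --name eshop-learn-rg --yes && \
. <(wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup)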

Prevent .bash_profile from executing when connecting via SSH

I have several servers running Ubuntu 18.04.3 LTS. Although it's considered bad practice to auto login, I understand the risks.
I've done the following to auto-login the user:
sudo mkdir /etc/systemd/system/getty@tty1.service.d
sudo nano /etc/systemd/system/getty@tty1.service.d/override.conf
Then I add the following to the file:
[Service]
ExecStart=
ExecStart=-/sbin/agetty --noissue --autologin my_user %I $TERM
Type=idle
Then, I edit the following file for the user to be able to automatically start a program:
sudo nano /home/my_user/.bash_profile
# Add this to the file:
cd /home/my_user/my_program
sudo ./program
This works great on the console when the server starts; however, when I SSH into the server, the same program is started, and I don't want that.
The simplest solution would be to SSH in as a different user, but is there a way to prevent the program from running when I SSH in as the same user?
The easy approach is to check the environment for variables ssh sets; there are several.
# only run my_program on login if not connecting via ssh
if [ -z "$SSH_CLIENT" ]; then
cd /home/my_user/my_program && sudo ./program
fi
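Putting it together, the user's ~/.bash_profile could look like the sketch below; SSH_TTY is another variable ssh sets and is included here only as an extra guard, and the program path is the one from the question:
# ~/.bash_profile
# start the program only on console logins, never over ssh
if [ -z "$SSH_CLIENT" ] && [ -z "$SSH_TTY" ]; then
    cd /home/my_user/my_program && sudo ./program
fi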

AWS bootstrap AMI with credentials for downloading from a private S3 bucket

I'm trying to bootstrap an AMI to download a file from a private S3 bucket.
I've set the credentials correctly, but when I try to copy a file from my bucket to my EC2 instance, I can see in the system log that it was unable to detect my credentials.
What's weird is that when I log in to my instance over SSH and run the same command, the file is copied successfully.
Here is the bash script for the UserData:
#! /bin/bash
yum update -y
sudo yum remove java-1.7.0-openjdk -y
sudo yum install java-1.8.0 -y
mkdir -p home/ec2-user/.aws
cat > home/ec2-user/.aws/config << EOF
[default]\n
aws_access_key_id=my_access_key_id\n
aws_secret_access_key=my_secret_key\n
region=eu-west-1
EOF
aws s3 cp s3://my_bucket_name/filename home/ec2-user/filename
Thanks,
It looks like your script will only work when the current directory is /, because
cat > home/ec2-user/...
is a relative path. It only works when you log in and happen to run it with the working directory set to /.
I haven't tested it, but I suspect cloud-init does not run the UserData script from /. You can prove that by updating the script to use a leading /, or I can confirm it later.
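For reference, a sketch of the user data with absolute paths (the unquoted heredoc also writes the literal \n markers into the file, so they are dropped here, and AWS_CONFIG_FILE is set explicitly because user data runs as root, whose home directory is not /home/ec2-user; the key values are the placeholders from the question):
#!/bin/bash
yum update -y
yum remove java-1.7.0-openjdk -y
yum install java-1.8.0 -y
# absolute paths so the script does not depend on the working directory
mkdir -p /home/ec2-user/.aws
cat > /home/ec2-user/.aws/config << EOF
[default]
aws_access_key_id=my_access_key_id
aws_secret_access_key=my_secret_key
region=eu-west-1
EOF
# user data runs as root, so point the CLI at this config file explicitly
export AWS_CONFIG_FILE=/home/ec2-user/.aws/config
aws s3 cp s3://my_bucket_name/filename /home/ec2-user/filename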

WGET seems not to work with user data on AWS EC2 launch

I launch a CentOS AMI I created and add user data as a file that looks like this:
#!/bin/bash
mkdir /home/centos/testing
cd testing
wget https://validlink
So, on launch, the user data creates a folder called testing and downloads from this valid URL, which I will not include here since it links to my data; it is, however, valid and accessible.
When I launch the instance, the folder testing is created successfully, but there is no file inside it.
When I SSH into the instance and run the wget command with sudo, the file is downloaded into the testing folder successfully.
Why does the file not get downloaded through the user data when the EC2 instance launches?
You have no way of knowing the current working directory when you execute the cd command, so specify the full path:
cd /home/centos/testing
Try this:
#!/bin/bash
mkdir /home/centos/testing
cd /home/centos/testing
wget https://validlink
Run it using the root user.
Try this instead:
#!/bin/bash
sudo su
yum -y install wget
mkdir /home/centos/testing
cd /home/centos/testing
wget https://validlink
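If the file is still missing, the user-data output is normally captured by cloud-init (on most CentOS and Amazon Linux images) in /var/log/cloud-init-output.log, which should show the wget error:
sudo tail -n 50 /var/log/cloud-init-output.log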

Providing password using a variable to become a sudo user in Jenkins

I have a Jenkins job which has its own set of build servers. The process I follow is to build the application on the Jenkins build server and then use "Send files or execute commands over SSH" to copy the build and deploy it with a shell script.
As part of the deployment I have quite a few steps to run, like mkdir, tar -xzvf, etc. I want to execute these steps as a specific user "K", but when I run the sudo su - K command, the Jenkins job fails because I am unable to feed the password to it.
#!/bin/bash
sudo su - K << \EOF
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR 3
tar -xzvf MY_APP.war
EOF
To handle that, I made the build parameterized with a PASSWORD parameter, so that I can use the same PASSWORD in the shell script.
I have tried using Expect, but it looks like commands like cd and tar -xzvf do not work inside it, and even if they did, they would not be executed as user K since the terminal may expire (please correct me if I'm wrong).
export $PASSWORD
/usr/bin/expect << EOD
spawn sudo su - k
expect "password for K"
send -- "$PASSWORD"
cd /DIR1/DIR2;
cp ~/MY_APP.war .
mkdir DIR 3
tar -xzvf MY_APP.war
EOD
Note: I do not have root access to the servers and hence cannot tweak the host key files. Is there a workaround for this problem?
Even if you get it working, having passwords in scripts or on the command line is not ideal from a security standpoint. Two things I would suggest (see the sketch after this list):
1) Use a public SSH key owned by the user on your initiating system as an authorized key on the remote system, so you can log in as the intended user on the remote system without a password. You should have everything you need to do that (no root access required, only access to the users you already use on each system).
2) Set up the "sudoers" file on the remote system so that the user you log in as is allowed to perform the commands you need as the required user. You would need the system administrator's help for that.
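A sketch of both steps, using the kilroy@somehost names from the example below and the deploy.sh script from the end of this thread; the /home/kilroy path and the user K rule are assumptions to adapt:
# 1) on the Jenkins build server: install your public key on the remote host
ssh-keygen -t ed25519            # skip if a key pair already exists
ssh-copy-id kilroy@somehost      # passwordless login as kilroy from now on
# 2) on somehost, the administrator adds a sudoers rule (via visudo) such as:
#    kilroy ALL=(K) NOPASSWD: /home/kilroy/deploy.sh
# after which the job can run the deployment as K without any password:
ssh kilroy@somehost "sudo -u K /home/kilroy/deploy.sh"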
Like so:
SUDO_PASSWORD=TheSudoPassword
...
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S some_root_command"
Later
How can I use this in the first snippet?
Write a file:
deploy.sh
#!/bin/sh
cd /DIR1/DIR2
cp ~/MY_APP.war .
mkdir DIR 3
tar -xzvf MY_APP.war
Then:
chmod +x deploy.sh
scp deploy.sh kilroy@somehost:~
ssh kilroy@somehost "echo $SUDO_PASSWORD | sudo -S ./deploy.sh"
