I have a shell script that uses the AWS CLI, and it will be executed with sudo (e.g. sudo ./test.sh).
But I got the message: Unable to locate credentials. You can configure credentials by running "aws configure".
Actually, I configured credentials with both sudo aws configure and aws configure.
What did I do wrong?
Please help.
Thanks!
You might have to run sudo with -E to preserve the environment variables that the AWS CLI reads (such as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_PROFILE).
sudo -E ./test.sh
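For instance, if the credentials are exported as shell variables rather than stored in ~/.aws/credentials, -E keeps them visible to the script, provided your sudoers policy allows the environment to be preserved (the key values below are placeholders):
export AWS_ACCESS_KEY_ID=your_key_id          # placeholder value
export AWS_SECRET_ACCESS_KEY=your_secret_key  # placeholder value
sudo -E ./test.sh                             # -E passes the exported variables through to the script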
The AWS CLI stores your credentials in $HOME/.aws/credentials. Normally, sudo does not change the value of the $HOME environment variable, so the AWS credentials file is read from and written to the same location. You can check this by running aws configure as a normal user and typing in a key, then running sudo aws configure: the default value shown will be the key you just entered.
So at this point, you should be able to run sudo aws <facility> <some-command> and it will work fine - AWS CLI will use your current user's AWS credentials. I just tested it to make sure.
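One quick way to verify this yourself (sts get-caller-identity is a standard AWS CLI command that simply reports which identity the CLI resolved):
aws sts get-caller-identity        # identity picked up as your normal user
sudo aws sts get-caller-identity   # should report the same identity if $HOME is unchanged under sudo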
I suspect the problem is one of two things: either you invoke your script in a way that forces a full session initialization, such as bash -l, in which case the AWS CLI will try to use the root user's credentials; or you run your script from a user other than the one for which you set up the AWS credentials and expect that, because both use sudo, they will pick up the same credentials (which, as shown above, is not the case).
You should either:
1. Configure the AWS credentials for the root user: run sudo -i and then aws configure from within the fully initialized root session, and make sure that all your scripts use a full root session (use #!/bin/bash -l as the shebang). See the sketch at the end of this answer.
2. If your issue is the second one and you don't want the more involved solution in (1), configure the AWS credentials separately for each of the users.
You can do the following:
sudo cp -r /home/<username>/.aws /root/
Now you can use the same user credentials for root.
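And for reference, option (1) above looks roughly like this (a sketch; run it once interactively, then put the #!/bin/bash -l shebang at the top of your scripts):
sudo -i          # open a fully initialized root session
aws configure    # the keys are now written to /root/.aws/credentials
exit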
I have a VM in Google Compute Engine that I want to start and stop daily - I have already written Cloud Functions for this. When the VM starts, I want it to run a startup script. In the bash startup script, I first need to pull data from git (a Cloud Source Repository), and this is where it fails:
Error: Permission denied (publickey)
The startup script looks like this:
#!/bin/bash
cd /home/my_home_directory/git_repo
git pull;
cd some_directory_in_repo
python3 some_script.py;
shutdown -h now;
The VM has its own service account which, as far as I know, is what runs the startup script. What I basically want is to run the script as a "user" - the service account - that does not have a home directory on the VM (the service account does have the necessary permissions for accessing the repository). I also set up an SSH key for the service account and registered the public key on my user profile, and this works when I execute the script under my own user.
Is there a solution for this, other than running the script under my user (which works, as I said)?
Note: If I execute the startup script like the one below, it also works.
#!/bin/bash
cd /home/my_home_directory/git_repo
sudo -u my_username bash -c \
'git pull;
cd some_directory_in_repo
python3 some_script.py;
shutdown -h now;'
Thanks
I am trying to run a bash script that uploads a file to an S3 bucket,
like: aws s3 cp xyx.txt s3://<tos3bucket>
Is it possible to run the aws command without first configuring it with $ aws configure, as detailed below?
Either via an external file, or via command-line options such as -u 'key' -p 'value'?
My aim is to run the AWS CLI without configuring it.
I have tried the following:
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
But I got:
upload failed: ... Unable to locate credentials
If I configure aws it works.
Yes, you can run the aws CLI without running aws configure or exporting the keys, by passing them as environment variables on the same command line. Keep in mind that you should add a space before the command so it is not written to your shell history (in bash this requires HISTCONTROL to include ignorespace, which many distributions set by default).
<space> AWS_DEFAULT_REGION=us-west-2 AWS_ACCESS_KEY_ID="your_key" AWS_SECRET_ACCESS_KEY="your_secret" aws ec2 describe-instances
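If you prefer the "external file" approach from the question, the AWS CLI also honours the AWS_SHARED_CREDENTIALS_FILE environment variable, so you can point it at a credentials file you manage yourself instead of ~/.aws/credentials (the path below is just an example; the file uses the same [default] ini format):
<space> AWS_SHARED_CREDENTIALS_FILE=/path/to/my-credentials aws s3 cp xyx.txt s3://<tos3bucket>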
I want to deploy my Spring Boot application to EC2.
I am using the scp command to copy my jar to the var folder on the EC2 instance, but I get permission denied.
I checked and found that only the root user has access to this folder.
Also, when I am logged in as ec2-user, I don't have rights to create a new folder.
Can you please suggest how, as ec2-user, code can be deployed to the EC2 instance?
Since you are logging in as ec2-user, I am assuming that you are using one of the AWS-managed Linux AMIs. In these AMIs, ec2-user can run commands as root; simply prefix the command with sudo.
Eg: sudo mkdir test
You can also switch to root user by using sudo -su root
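For the scp case specifically, a common pattern (just a sketch; the key file, jar name, host and target directory are placeholders) is to upload to a directory that ec2-user can write to and then move the file into place with sudo:
scp -i my-key.pem myapp.jar ec2-user@<ec2-host>:/home/ec2-user/                                                    # upload to ec2-user's home directory
ssh -i my-key.pem ec2-user@<ec2-host> 'sudo mkdir -p /var/myapp && sudo mv /home/ec2-user/myapp.jar /var/myapp/'   # move it into place as root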
Please find below a link to the AWS docs on user management:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html
I'm attempting to use a script which automatically creates snapshots of all EBS volumes on an AWS instance. This script is running on several other instances with no issue.
The current instance already has an AWS profile configured which is used for other purposes. My understanding is I should be able to specify the profile my script uses, but I can't get this to work.
I've added a new set of credentials to the /home/ubuntu/.aws file by adding the following under the default credentials which are already there:
[snapshot_creator]
aws_access_key_id=s;aldkas;dlkas;ldk
aws_secret_access_key=sdoij34895u98jret
In the script I have tried adding AWS_PROFILE=snapshot_creator, but when I run it I get the error Unable to locate credentials. You can configure credentials by running "aws configure".
So, I delete my changes to /home/ubuntu/.aws and instead run aws configure --profile snapshot_creator. However after entering all information I get the error [Errno 17] File exists: '/home/ubuntu/.aws'.
So I add my changes to the .aws file again and this time in the script for every single command starting with aws ec2 I add the parameter --profile snapshot_creator, but this time when I run the script I get The config profile (snapshot_creator) could not be found.
How can I tell the script to use this profile? I don't want to change the environment variables for the instance because of the aforementioned other use of AWS CLI for other purposes.
Credentials should be stored in the file "/home/ubuntu/.aws/credentials"
File exists: '/home/ubuntu/.aws'
I guess this error occurs because aws configure couldn't create the .aws directory: a regular file with that name already exists. Can you delete the ".aws" file and re-run the configure command? It should then create the credentials file under "/home/ubuntu/.aws/".
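A minimal sketch of the fix and of selecting the profile afterwards (the profile name comes from the question; back up the old .aws file first if it holds credentials you still need, and re-enter those under the default profile):
mv /home/ubuntu/.aws /home/ubuntu/aws.bak                # move the plain file out of the way
aws configure                                            # recreate the default profile in /home/ubuntu/.aws/credentials
aws configure --profile snapshot_creator                 # add the snapshot profile
aws ec2 describe-snapshots --profile snapshot_creator    # each command can then select that profile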
So I installed a LAMP stack on a Google Cloud instance with Debian Wheezy 7. Everything is working fine but I am not able to get FTP working. I am following this tutorial by DigitalOcean.
I am stuck at the last step, where I need to make vsftpd allow the user to write outside the chroot directory.
The error I get is:
hetunandu_gmail_com@lamp:~$ mkdir /root/hetunandu/files
mkdir: cannot create directory '/root/hetunandu/files': Permission denied
Then when I use sudo with it I get this error:
hetunandu_gmail_com@lamp:~$ sudo mkdir /root/hetunandu/files
mkdir: cannot create directory '/root/hetunandu/files': No such file or directory
Where do I go from here?
Also, I don't know how to set up my username and password for FTP.
I followed the tutorial and could not replicate your issue. I initially got "Permission denied" but you can circumvent this by running:
$ sudo su
and then
$ mkdir -p /root/$USER/files
Why not use /home/$USER? I'm not sure why you want to create the folders under /root.
As for your second question, regarding the username and password, I am not sure I understand. From the Developers Console > Compute Engine > VM Instances, click SSH and that should log you in with root privileges. Then you can create all the users you want:
$ sudo adduser test_user
Please don't use FTP, as it's an insecure clear-text protocol which will let others see your password and easily get access to your instance, read/modify/delete your files, etc.
Instead, you should use secure protocols such as SCP or SFTP with public key authentication.
Here are some options to transfer files to/from your GCE VM instance:
sftp CLI tool, as described in this answer
gcloud compute copy-files, as described in this answer
WinSCP with SFTP
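For example, the gcloud option can be run from your workstation like this (instance name, zone and paths are placeholders; gcloud compute copy-files was the command name at the time of writing):
gcloud compute copy-files ./local-file.txt my-instance:~/ --zone us-central1-a   # upload a local file to the instance's home directory over SSH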