Amazon EC2 SES with sendmail path configuration - amazon-ec2

I have set up my EC2 instance with SES as documented at http://docs.aws.amazon.com/ses/latest/DeveloperGuide/scripts-mtas-sendmail.html
Everything works except the last step:
sudo /usr/bin/sendmail -f from@example.com to@example.com
throws an error saying /usr/bin/sendmail: No such file or directory
Using the "sbin" path instead, I am able to send mail successfully:
sudo /usr/sbin/sendmail -f from@example.com to@example.com
How can I configure PHP to use this path instead? Setting my php.ini with sendmail_path = /usr/sbin/sendmail; didn't work.
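A sketch of the setting that usually works (an assumption, not confirmed by the question): PHP's documented default value passes the -t -i flags after the binary path, and the change only takes effect once the web server is restarted:
sendmail_path = /usr/sbin/sendmail -t -i
After editing php.ini, restart the web server (for example sudo service httpd restart) so PHP picks up the change.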

Related

Change Document Root for Laravel Instance

I have a dedicated server running CentOS. I have installed WHM/cPanel. On my server, I have a domain (example.com) and a user (example).
The website domain example.com points to /home/example/public_html/. However, my project is Laravel, so the index file is in /public. I need to change the document root from /home/example/public_html/ to /home/example/public_html/public.
I ran the following commands:
nano /var/cpanel/userdata/example/example.com
nano /var/cpanel/userdata/example/example.com_SSL
rm -vf /var/cpanel/userdata/example/example.com.cache
rm -vf /var/cpanel/userdata/example/example.com_SSL.cache
/scripts/updateuserdatacache
/scripts/rebuildhttpdconf
service httpd restart
The problem:
When I run these commands, nothing changes and I still get the DO NOT HAVE PERMISSION page from Laravel's index.php (served from the root, not /public).
When I run these commands for an empty project I see the expected result, but when I copy the Laravel project in I again see the permission denied page.
What is going on?
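For reference, a sketch of the change that is usually involved (assuming the standard cPanel userdata format; the permission page after copying files is often an ownership problem rather than a document-root problem): the documentroot key in both userdata files should read
documentroot: /home/example/public_html/public
and the copied Laravel files should be owned by the site user, not root, e.g.:
sudo chown -R example:example /home/example/public_html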

My AWS CLI didn't work with sudo

I have a shell script that uses the AWS CLI, and the script is executed with sudo (e.g. sudo ./test.sh).
But I get the message: Unable to locate credentials. You can configure credentials by running "aws configure".
I did configure credentials both with sudo aws configure and aws configure.
What did I do wrong?
Please help.
Thanks!
You might have to run sudo with -E to preserve your environment variables (including HOME and any AWS_* settings), which the AWS CLI uses to locate credentials.
sudo -E ./test.sh
The AWS CLI stores your credentials in $HOME/.aws/credentials. Normally when you use sudo, it doesn't change the value of the $HOME environment variable, so the AWS credentials file will be read from the same location. You can check this by running aws configure as a normal user and typing in a key, then running sudo aws configure: the default value shown will be the key you just put in.
So at this point you should be able to run sudo aws <facility> <some-command> and it will work fine: the AWS CLI will use your current user's credentials. I just tested it to make sure.
I suspect the problem is one of two things: either you invoke your script in a way that forces initialization of a login session, such as bash -l, in which case the AWS CLI will try to use the root user's credentials; or you run your script from a user other than the one where you set up the AWS credentials and expect that, because both use sudo, they will get the same credentials (which, as demonstrated above, is not the case).
You should do one of the following:
1. Configure the AWS credentials for the root user: run sudo -i and then aws configure from within a fully initialized root session, then make sure that all your scripts use a full root session (use #!/bin/bash -l as the shebang).
2. If your issue is the second one and you don't want the more involved solution in (1), configure the AWS credentials separately for each user.
Alternatively, you can copy your existing credentials into root's home directory (note that root's home is normally /root, not /home/root):
sudo cp -r /home/<username>/.aws /root/
Now root can use the same credentials as your user.
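A quick way to check which identity each context actually resolves (sts get-caller-identity is a standard AWS CLI command; it fails with the same "Unable to locate credentials" message when no credentials are found):
aws sts get-caller-identity
sudo aws sts get-caller-identity
sudo ls -la /root/.aws
If the second call fails or returns a different account, root is not seeing the credentials file you configured.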

FTP on LAMP stack - Google Cloud Platform

So I installed a LAMP stack on a Google Cloud instance with Debian Wheezy 7. Everything is working fine but I am not able to get FTP working. I am following this tutorial by DigitalOcean.
I am stuck at the last step, where I need to make vsftpd allow the user to write outside the chroot directory.
The error I get is:
hetunandu_gmail_com@lamp:~$ mkdir /root/hetunandu/files
mkdir: cannot create directory `/root/hetunandu/files': Permission denied
Then when I use sudo with it I get this error:
hetunandu_gmail_com@lamp:~$ sudo mkdir /root/hetunandu/files
mkdir: cannot create directory `/root/hetunandu/files': No such file or directory
Where do I go from here?
Also, I don't know how to set up my username and password for FTP.
I followed the tutorial and could not replicate your issue. I initially got "Permission denied" too, but you can get around it by running:
$ sudo su
and then:
$ mkdir -p /root/$USER/files
Why not use /home/$USER, though? I'm not sure why you want to create the folders under /root.
As for your second question, regarding the username and password, I am not sure I understand. From the Developers Console > Compute Engine > VM Instances, click SSH and that should log you in with root privileges. Then you can create all the users you want:
$ sudo adduser test_user
Please don't use FTP: it's an insecure clear-text protocol that lets others see your password and easily gain access to your instance, read/modify/delete your files, etc.
Instead, you should use a secure protocol such as SCP or SFTP with public key authentication.
Here are some options to transfer files to/from your GCE VM instance:
sftp CLI tool, as described in this answer
gcloud compute copy-files, as described in this answer
WinSCP with SFTP
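For example, to copy a local file up with gcloud (the instance name and zone here are placeholders for your own):
gcloud compute copy-files ~/test.txt my-instance:~/ --zone us-central1-a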

Failed to upload a file to ec2 instance

I'm a newbie, and all I want is to set up an EC2 instance for Fever RSS.
Here is my setup: OS X 10.9.2, AWS with an Ubuntu 12.04 LTS AMI. I set up LAMP on EC2 following this guide: http://www.robotmedia.net/2011/04/how-to-create-an-amazon-ec2-instance-with-apache-php-and-mysql-lamp/
Now I can SSH to my server's public IP using Terminal. After connecting to the server, I typed:
scp -i /path/to/keypair.pem /path/to/test.txt ubuntu@theServerPublicIP:~/
and got the following error:
Warning: Identity file keypair.pem not accessible: No such file or directory.
I have tried to resolve the problem by:
1. Changing the permissions of the .pem file to 600 on my OS X machine:
chmod 600 keypair.pem
then running ssh and scp again: same error. Then I changed its permissions to 400:
chmod 400 keypair.pem
and redid ssh and scp: same error.
2. Rewriting the file paths as ~/path/to/file for both keypair.pem and test.txt, then redoing ssh and scp: same error.
3. Rewriting the file paths as /Users/myUserName/path/to/file for both files and redoing ssh and scp: same error.
4. cd-ing into the folder containing keypair.pem and test.txt (I put them in the same folder) and trying both of the above paths: same error for each.
5. Changing the destination path on the server. I tried "~", "~/", "/", and "/var/www/"; for all of them I still got the same error.
I also tried ForkLift, because I saw the developer of Fever using it in the demo video. I tried all the options for the connection: sftp... but couldn't connect to the server.
Please help me get test.txt uploaded; then I will be able to upload the Fever folder.
Thanks!
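One thing worth checking, as an assumption based on the warning text: scp with -i must be run from the local OS X terminal, not from inside the SSH session on the server, and the keypair.pem path must exist on the machine running scp. A sketch, with both local paths as placeholders for wherever the files actually live:
scp -i ~/Downloads/keypair.pem ~/Desktop/test.txt ubuntu@theServerPublicIP:~/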
If you have to do this frequently, I advise you to create an alias.
For example: I was running a web server on an EC2 instance and I had HTML content in a local dir awsplaywww:
$ alias syncaws="rsync -avrz --delete /home/sanket/workspace/awsplaywww/ -e ssh sanket@awsplay1.ddns.net:/var/www/html/"
Now every time I update an HTML file or something and need to send it back to the server, I just open a terminal, type syncaws, and the job is done!
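Note that an alias defined this way only lasts for the current shell session; to keep it across logins, append it to your shell startup file, for example:
echo 'alias syncaws="rsync -avrz --delete /home/sanket/workspace/awsplaywww/ -e ssh sanket@awsplay1.ddns.net:/var/www/html/"' >> ~/.bashrc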

How to launch the web server on Amazon Web Services?

I just started using EC2: launched an instance with the Amazon Linux AMI, installed my web app on it... and I thought I was ready to go.
I go to the public DNS they gave me for my instance and nothing happens; I get the Google Chrome "Oops" page...
After re-reading the docs I saw notes saying that I need to launch the web server, by doing this:
sudo chkconfig httpd on
sudo service httpd start
But I can't seem to locate these files.
Any idea what I am doing wrong, or where those files are located? The closest I got was /etc, but even when I try the commands from there, I get the same errors.
Thanks for your help.
UPDATE
Here is an image of the security group I have for my instance. Is this correct?
Is it possible you don't have Apache/Tomcat installed? Check with:
rpm -q httpd
rpm -q tomcat5
If they are not installed, run:
sudo yum install httpd tomcat5
Which type of web app (PHP, Java, Ruby, etc.) are you trying to deploy? If it is PHP, where are you putting it in the Apache directory tree? The default document root in Apache is /var/www/html; if you put your web app inside /var/www/html, it should be available at the direct URL http://x.x.x.x/{nameofwebapp}
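A quick way to confirm Apache is serving files (the IP is a placeholder for your instance's public address):
echo "It works" | sudo tee /var/www/html/index.html
curl http://x.x.x.x/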
