Here is a list of my stack:
Nginx, uWSGI, Tomcat, Solr, virtualenv, Supervisor
I'm trying to set up my server correctly, and I'm wondering which of these should run as root and which should have their own user accounts. If they shouldn't run as root, should each one have its own account, or should programs like Nginx and uWSGI be grouped under one account called "web", for example?
Any feedback on this would be much appreciated!
I would not run any of those services as root. I personally use each of them under its own user account.
Use root only to perform installations or other maintenance tasks, not to run services or user programs.
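As a minimal sketch (assuming a Debian/Ubuntu-style system and run as root; the account names simply mirror the services in the question):

```shell
# Create an unprivileged system account per service: no login shell and
# no home directory, so a compromised service can't be used to log in.
useradd --system --shell /usr/sbin/nologin --no-create-home uwsgi
useradd --system --shell /usr/sbin/nologin --no-create-home solr

# Verify the accounts exist and cannot log in interactively.
getent passwd uwsgi solr
```

Note that distribution packages for Nginx and Tomcat usually create such accounts for you (e.g. www-data, tomcat), and Supervisor can be told which account to run each program under via the `user=` directive in its program config.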
I want to deploy my Laravel app on a VPS hosting plan.
I have WHM, but I have no experience deploying my app or configuring the server.
I don't have a domain, so I want to test my app using an IP address (like on DigitalOcean).
Any help would be appreciated.
Edit:
I've completed these steps in my WHM:
Have SSH access to the VPS
Have a sudo user and set up some kind of firewall (for example ufw)
Install required software (nginx, MySQL, PHP, Composer, npm) and additional PHP modules if necessary.
I've created a cPanel account and completed these steps:
Create a database
Checkout your application using VCS like Git
Configure your .env file.
Install your composer packages, run npm, or anything you would like to do
The cPanel account provides a URL that looks like http://xxx.xxx.x.xx/~cpanel-account-name/.
I can access the website, but all the images are broken and even the Laravel routes are not found (404). I know the issue is the ~cpanel-account-name/ suffix at the end of the URL.
But how can I fix it?
Since this is quite a broad topic that consists of multiple questions, perhaps you could elaborate on the steps you have already taken, or on the step you are stuck at or need help with?
In short, you need to do the following:
Have SSH access to the VPS
Have a sudo user and set up some kind of firewall (for example ufw)
Install required software (nginx, MySQL, PHP, Composer, npm) and additional PHP modules if necessary.
Create a database
Checkout your application using VCS like Git
Configure your .env file.
Install your composer packages, run npm or anything you would like to do
Set up nginx
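For the nginx step, a minimal server block for a Laravel app might look like the following. This is only a sketch: the server name, root path, and PHP-FPM socket path are assumptions you will need to adapt to your setup.

```nginx
server {
    listen 80;
    server_name example.com;          # or your VPS IP address

    # Laravel must be served from its public/ directory.
    root /var/www/myapp/public;
    index index.php;

    location / {
        # Route anything that isn't a real file to Laravel's front controller.
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php/php-fpm.sock;  # adjust to your PHP version
    }
}
```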
If this seems daunting, I would advise tackling it one step at a time and researching each step along the way. This might be challenging and time-consuming, but it will be very rewarding!
Alternatively, a paid solution like Laravel Forge can help you take care of server management.
I am developing a project in Spring, and it includes a cron job. I am going to deploy it on 4 AWS servers, but I want the cron job to run on only a single server (let's call it the admin server).
So my question is: how can I uniquely identify the admin server? I was thinking of using the IP address as identification, but as far as I know, IPs are not static on AWS servers. Is there any other way to identify it, so that I can add a check to my cron job code and have it run only on the admin server?
You can always start the admin instance with some user data as metadata, or add a tag to the instance.
The metadata solution might be easier to integrate as you can just issue an unauthenticated HTTP request from within the instance to read the value. If that is a security concern for you, then you can go with the tag and use the API to retrieve the tag value.
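As a sketch, the gating logic could look like this in a shell wrapper around the cron job. The metadata URL is the real EC2 endpoint, but the "role=admin" marker string and the helper function are assumptions for illustration:

```shell
# Decide whether this instance is the admin node based on its user data.
is_admin_node() {
    [ "$1" = "role=admin" ]
}

# On a real instance you would fetch the value from the metadata service:
#   USER_DATA=$(curl -s http://169.254.169.254/latest/user-data)
USER_DATA="role=admin"   # hardcoded here for illustration

if is_admin_node "$USER_DATA"; then
    echo "admin node: running the cron job"
else
    echo "not the admin node: skipping"
fi
```

Launch the admin instance with role=admin as its user data (or an equivalent tag) and the other three without it, and only the admin server will execute the job.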
I want to create a user on Heroku and grant that user permissions on a specific folder.
I've logged into the Heroku bash, but I'm not able to create a user: it gives me a permission denied error. sudo doesn't work either, and I can't install anything.
The organisation admin user is also unable to create a user.
Heroku will not allow you to do that.
Running heroku run bash is not the same as connecting to an SSH server.
When you build a new version of your application, Heroku creates a new container (much like Docker; it's based on LXC). Every instance of your application runs that container.
When you run a bash instance, a new instance of that container is created. You are not running on the same server that serves your app's requests.
That means the only time disk changes can be made is at build time. So even if you could create users in a bash instance, they wouldn't be persisted across instances.
Heroku will not let you create new Linux users at build time anyway.
The only way to access your app's code in a bash session is to run a one-off dyno. If you need to script that, you can use the Platform API to boot a new dyno.
As for granting access, you can use the access:add command (also available as an API endpoint).
All users will be able to access all of your code though. You cannot restrict per folder.
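Assuming the standard Heroku CLI, the one-off dyno and access commands look like this (my-app and the email address are placeholders, and both commands require an authenticated heroku session):

```shell
# Start a one-off dyno with an interactive shell in a fresh copy of
# the app's container (any changes here are discarded on exit).
heroku run bash --app my-app

# Grant another user access to the whole app (not per-folder).
heroku access:add collaborator@example.com --app my-app
```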
I'm trying to use codedeploy with autoscaling in order to automate the deployment of my application.
I have everything ready. While developing all the parts (hook scripts, roles, etc.) I installed the CodeDeploy agent manually. Now I want to make it production-ready, which means the CodeDeploy agent will be installed at sysprep (by providing the PowerShell commands via user data in the launch configuration).
The problem is that it's not working. The script either runs and fails for some reason (are there any logs to confirm?) or doesn't run at all. My AMI is based on a standard AWS Windows AMI, and the EC2ConfigService is present.
Do you have any idea what the problem could be, or how I can find out (logs)?
You could take a look at C:\Program Files\Amazon\Ec2ConfigService\Logs\Ec2ConfigLog.txt
On Linux AMIs you can also find the user data script execution logs in the EC2 console: right-click your instance -> Instance Settings -> Get System Log.
I'm trying to use Capistrano 2.5.19 to deploy my Sinatra application. So far, I managed to successfully run deploy:setup, but when I try to perform the actual deployment or the check (deploy:check), Capistrano tells me that I don't have permission. I'm using sudo, since I log in with my own user; the user used for deployment is called passenger and is a member of the group www-data. Therefore, I set :runner and :admin_runner to passenger. It seems, however, that Capistrano is not using sudo during the deployment, while it was definitely doing so during deploy:setup. Why is that? I thought the user specified by the :runner parameter was used for deployment.
Unfortunately, I cannot directly answer your question, but I would like to offer a different solution: take the time to properly set up SSH (RSA) keys for deployment. This lets you avoid setting and switching users, and avoids embedding authentication information in your cap scripts.
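A minimal sketch of that setup (the passenger user comes from the question; the key path and server name are arbitrary choices for illustration):

```shell
# Generate a key pair for deployments (no passphrase here for brevity;
# prefer a passphrase plus ssh-agent in practice).
ssh-keygen -t ed25519 -N "" -f /tmp/deploy_key

# Install the public key on the server for the passenger user, e.g.:
#   ssh-copy-id -i /tmp/deploy_key.pub passenger@your-server
# Capistrano can then log in directly as passenger, with no sudo and
# no credentials embedded in the cap scripts.
```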