I want to create a user on Heroku and give that user specific permissions on a certain folder.
I've logged into the Heroku bash shell, but I'm not able to create a user: I get a permission denied error. sudo doesn't work either, and I can't install anything.
The organisation admin user is also unable to create a user.
Heroku will not allow you to do that.
Running heroku run bash is not the same as connecting to an SSH server.
When you build a new version of your application, Heroku creates a new container (much like Docker, though it uses LXC). Every instance of your application runs from that container.
When you run a bash instance, a new instance of that container is created. You are not running on the same server that your app serves requests from.
That means the only time disk changes can be made is at build time. So even if you could create users in a bash instance, they wouldn't persist across instances.
Heroku will not let you create new Linux users at build time anyway.
The only way to access your app's code in a bash session is to run a one-off dyno. If you need to script that, you can use the Platform API to boot a new dyno.
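A rough sketch of both options (the app name `myapp` and the `$HEROKU_API_KEY` token are placeholders, not values from your setup):

```bash
# Start an interactive one-off dyno (it gets its own fresh, throwaway filesystem):
heroku run bash -a myapp

# Or boot a one-off dyno from a script via the Platform API:
curl -X POST https://api.heroku.com/apps/myapp/dynos \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"command": "bash", "attach": false}'
```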
As for granting access, you can use the `heroku access:add` command (also available as an API endpoint).
All users will be able to access all of your code, though. You cannot restrict access per folder.
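For example, something along these lines (the email address and app name are placeholders):

```bash
# Grant another Heroku user access to the app:
heroku access:add colleague@example.com -a myapp

# List who currently has access:
heroku access -a myapp
```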
Heroku sent an email regarding scheduled maintenance for a hobby-dev hosted Postgres database I have. I later received confirmation that the scheduled maintenance had completed successfully and that my database credentials had been updated to reflect this.
After updating the environment variables in my app to reflect this change, I can no longer connect to the database. Scheduled maintenance has completed before with no issues; this is the first time I'm receiving this error.
Authentication failed against database server at `ec2-176-34-114-78.eu-west-1.compute.amazonaws.com`, the provided database credentials for `mydb` are not valid.
However, when I log into Heroku to view the database instance, the health checks are showing that it's available.
I've now tried both the new and the old database credentials, but neither can connect to the DB. It also appears that I am unable to directly contact support on the hobby-dev plan.
Do I have any other options to try to troubleshoot this? Is it possible to force a new database credential update on Heroku?
Yes, you can use `heroku pg:credentials:rotate` to generate new credentials. But you shouldn't have to do this.
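If you do decide to rotate anyway, it is a single command (the app name is a placeholder), and `DATABASE_URL` is updated for you afterwards:

```bash
# Rotate the credentials on the default database attachment:
heroku pg:credentials:rotate DATABASE -a myapp

# Confirm the config var was updated automatically:
heroku config:get DATABASE_URL -a myapp
```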
> After updating the environment variables in my app to reflect this change
As the email told you, your credentials would automatically have been updated. There was nothing for you to do. As long as you are connecting via the `DATABASE_URL` environment variable, which is always recommended with Heroku Postgres¹, you should be good to go.
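To double-check that you are connecting with the value Heroku currently has on file rather than a stale copy, something like this can help (assuming `psql` is installed locally; `myapp` is a placeholder):

```bash
# Show the credentials Heroku currently has on file:
heroku config:get DATABASE_URL -a myapp

# Try connecting with exactly that URL:
psql "$(heroku config:get DATABASE_URL -a myapp)" -c "select 1;"
```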
`heroku pg:credentials:rotate` behaves the same way, so running that command without understanding this isn't likely to help much.
¹Heroku may update these credentials at any time. Connecting via that environment variable is the best way to ensure you can always connect.
I have an AMI that is configured with my production code. I am using Nginx + Unicorn as my server setup.
The problem I'm facing is that whenever traffic goes up, I need to boot a new instance, log in to it, do a git pull and a bundle update, and precompile the assets, which is time consuming. I want to avoid this whole process.
Now I want a script/process that automates the whole deployment (git pull, bundle update and asset precompilation) as soon as I boot a new instance from this AMI.
Is there a best-practice way to get this done? Any help would be appreciated.
You can place your commands in /etc/rc.local (commands in this file are executed when the server boots).
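A minimal sketch of what that could look like, assuming the app lives in /var/www/application, is owned by a `deploy` user, and is served by Unicorn behind Nginx (all of these are assumptions; adjust to your setup):

```bash
#!/bin/sh
# /etc/rc.local -- executed once at boot (must be executable).
set -e

APP_DIR=/var/www/application   # assumed app location

cd "$APP_DIR"
sudo -u deploy -H git pull origin master
sudo -u deploy -H bundle install --deployment --without development test
sudo -u deploy -H bundle exec rake assets:precompile RAILS_ENV=production

# Restart the app servers so they pick up the new code:
service nginx restart
service unicorn restart

exit 0
```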
But the better way is to use Capistrano. You add `require "capistrano/bundler"` to your deploy.rb file, and the bundle step will be run automatically on each deploy. For more information you can read this article: https://semaphoreapp.com/blog/2013/11/26/capistrano-3-upgrade-guide.html
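Once that is wired up, a deploy from your workstation becomes a single command (the stage name `production` assumes a standard Capistrano 3 multistage setup):

```bash
bundle exec cap production deploy
```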
An alternative approach is to deploy your app to a separate EBS volume (you can still mount this inside /var/www/application or wherever it currently is).
After deploying, you create an EBS snapshot of this volume. When you create a new instance, you tell EC2 to create a new volume for it from that snapshot, so the instance starts with the latest gems and code already in place (I find bundle install can take several minutes). All your startup script needs to do is mount the volume (and if you have added it to the fstab when you make the AMI, you don't even need to do that). I much prefer scaling operations like this to have no external dependencies (e.g. what would you do if GitHub or rubygems.org had an outage just when you need to deploy?).
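Roughly, the boot-time part could look like this (the snapshot ID, device name and mount point are placeholders; it also assumes the AWS CLI is installed and the instance has credentials to call EC2):

```bash
# Look up where this instance is running, so the volume lands in the same AZ:
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Create a volume from the latest code snapshot (placeholder snapshot ID):
VOLUME_ID=$(aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
  --availability-zone "$AZ" --query 'VolumeId' --output text)

aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
aws ec2 attach-volume --volume-id "$VOLUME_ID" \
  --instance-id "$INSTANCE_ID" --device /dev/xvdf
sleep 10   # crude wait for the device node to appear

# Mount the pre-built code and gems where the app expects them:
mount /dev/xvdf /var/www/application
```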
You can even take this a step further by using Amazon's Auto Scaling service. In a nutshell, you create a launch configuration where you specify the AMI, instance type, volume snapshots, etc. Then you control the group size either manually (through the web console or the API), according to a fixed schedule, or based on CloudWatch metrics. Amazon will create or destroy instances as needed, using the information in your launch configuration.
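A sketch of the CLI calls involved (names, the AMI ID, sizes and subnets are all placeholders):

```bash
# Describe how new instances should be built:
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type m3.medium \
  --key-name my-key

# Let Amazon keep 2-10 of them running across two subnets:
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc \
  --min-size 2 --max-size 10 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
```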
I accessed the dyno via heroku run bash and made a file foo. However, when I check from my app, it still cannot find foo. So I dug deeper by trying to install nginx on a dyno and turning on autoindex, and I can confirm that the files seen via heroku run bash are different from the ones served by nginx. Why is that? How do I put files on the filesystem that my running process sees?
When you issue heroku run bash, a new dyno is created just for this one-off, and you are given access to it. Any file you create will "disappear" once you log off, since the Heroku filesystem is ephemeral.
That means the filesystem is restored to its native state whenever a new dyno is created or a dyno is rebooted. The "native" state is what's in your slug -- the "compiled" version of your app -- whatever is built by the buildpack after you "git push" to Heroku.
If you want a read-only file available to all your dynos, either put it in your slug (for example by including it in git, or by using a different buildpack), or put it somewhere all your dynos can access (like a shared database, a Redis/Memcache instance, or, most logically, S3).
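For the S3 route, something like this works (the bucket name is a placeholder, and it assumes the AWS CLI and credentials are available on the dyno):

```bash
# Upload the file once, from a one-off dyno, your laptop, or a worker:
aws s3 cp foo s3://my-shared-bucket/foo

# Any dyno can then fetch the current version at runtime:
aws s3 cp s3://my-shared-bucket/foo /tmp/foo
```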
Here is a list of my stack:
- Nginx
- uWSGI
- Tomcat
- Solr
- Virtualenv
- Supervisor
I'm trying to set up my server correctly and I am wondering which of these should run as root and which should have their own user accounts. If they shouldn't run as root, should each one have its own account, or should programs like Nginx and uWSGI be grouped under one account called "web", for example?
Any feedback on this would be much appreciated!
I would not run any of those services as root. I personally run each of them under its own user account.
Use root only to perform installations or other maintenance tasks, not to run services or user programs.
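A rough sketch of what that could look like (account names are arbitrary examples; each service has its own config directive for dropping privileges):

```bash
# Create unprivileged system accounts, one per service:
sudo useradd --system --no-create-home --shell /usr/sbin/nologin nginx
sudo useradd --system --no-create-home --shell /usr/sbin/nologin uwsgi
sudo useradd --system --no-create-home --shell /usr/sbin/nologin solr

# Then tell each service which user to run as, e.g.:
#   nginx.conf:          user nginx;
#   uwsgi.ini:           uid = uwsgi / gid = uwsgi
#   supervisord config:  user=uwsgi   (inside the relevant [program:...] section)
```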
I'm trying to use Capistrano 2.5.19 to deploy my Sinatra application. So far, I've managed to successfully run deploy:setup, but when I try to perform the actual deployment or the check (deploy:check), Capistrano tells me that I don't have permission. I'm using sudo, since I log in with my own user and the user used for deployment is called passenger and is a member of the group www-data. Therefore I set :runner and :admin_runner to passenger. It seems, however, that Capistrano is not using sudo during the deployment, while it was definitely doing so during the setup (deploy:setup). Why is that? I thought that the user specified by the runner parameter is used for deployment.
Unfortunately, I cannot directly answer your questions; however, I'd like to offer a different solution, which is to take the time to properly set up SSH/RSA keys to accomplish what you want. This means you don't have to worry about setting and changing users, and you don't have to embed authentication information inside your cap scripts.
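A minimal sketch of that setup (the `passenger` user comes from the question; the hostname is a placeholder):

```bash
# On your workstation: generate a key pair if you don't already have one
ssh-keygen -t rsa -b 4096 -C "deploys"

# Copy the public key to the deployment user on the server
ssh-copy-id passenger@your-server.example.com

# Verify you can log in without a password; Capistrano will use the same key
ssh passenger@your-server.example.com 'whoami'
```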