I'm trying to use Capistrano 2.5.19 to deploy my Sinatra application. So far I have managed to run deploy:setup successfully, but when I try to perform the actual deployment or the check (deploy:check), Capistrano tells me that I don't have permission. I rely on sudo, since I log in with my own user while the user used for deployment is called passenger and is a member of the group www-data; I therefore set :runner and :admin_runner to passenger. It seems, however, that Capistrano is not using sudo during the deployment, while it definitely did so during deploy:setup. Why is that? I thought the user specified by the runner parameter was the one used for deployment.
Unfortunately, I cannot directly answer your question. However, I would like to offer a different solution, which is to take the time to properly set up SSH/RSA keys to accomplish what you want. This lets you stop worrying about setting and switching users, and it also means you never have to embed authentication information inside your cap scripts.
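A minimal sketch of what that looks like in Capistrano 2, assuming you have generated a key pair with ssh-keygen and installed the public key in the passenger user's ~/.ssh/authorized_keys (ssh-copy-id does this for you); the hostname and key path below are placeholders:

    # config/deploy.rb (Capistrano 2): key-based login, no sudo needed
    set :user, "passenger"   # log in directly as the deployment user
    set :use_sudo, false     # unnecessary once the deploy dirs belong to passenger

    ssh_options[:keys] = [File.expand_path("~/.ssh/id_rsa")]
    ssh_options[:forward_agent] = true   # reuse your local SSH agent, e.g. for git

    role :app, "app.example.com"

With the key in place, cap deploy connects as passenger directly and never needs to escalate.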
I want to create a user in Heroku and give that user specific permissions on a certain folder.
I've logged into the Heroku bash, but I'm not able to create a user: it gives me a permission-denied error. sudo doesn't work either, and I can't install anything.
The organisation admin user is not able to create a user either.
Heroku will not allow you to do that.
Running heroku run bash is not the same as connecting to an SSH server.
When you build a new version of your application, Heroku creates a new container (much like Docker; it's LXC). Every instance of your application runs that container.
When you run a bash instance, a new instance of that container is created. You are not running on the same server that your app serves requests on.
That means the only moment when disk changes can be performed is at build time. So even if you could create users in a bash instance, they wouldn't be persisted across instances.
Heroku will not let you create new Linux users at build time anyway.
The only way to access your app's code in a bash session is to run a one-off dyno. If you need to script that, you can use the Platform API to boot a new dyno.
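For instance, a short sketch using the Ruby platform-api gem (the app name, token variable, and command are placeholders):

    require "platform-api"

    # Boot a one-off dyno, the scripted equivalent of `heroku run <command>`.
    heroku = PlatformAPI.connect_oauth(ENV.fetch("HEROKU_OAUTH_TOKEN"))
    heroku.dyno.create("my-app", "command" => "bash scripts/inspect.sh")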
As for granting access, you can use the heroku access:add command (also available as an API endpoint).
All users will be able to access all of your code, though. You cannot restrict access per folder.
I've asked this question on Capistrano's GitHub repository issue tracker (https://github.com/capistrano/capistrano/issues/1750) and was told to ask the same question here.
I'm trying to populate the deploy_to variable with a custom server property (named organisation) to deploy the same application multiple times to the same server.
    set :deploy_to, "/home/deploy/sites/#{server.properties.organisation}"
It seems impossible to load the server array using the fetch() method.
I've done a couple of different things for this case. If each installation is indeed identical, I'll deploy once and symlink the other installations. If each installation has different parameters, I'll create multiple targets (prod-1, prod-2, prod-3, et cetera) where each points at the same server; helper methods reduce the code duplication. Then I'll write a script which runs bundle exec cap prod-1 deploy && bundle exec cap prod-2 deploy && ....
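A rough sketch of that multiple-targets layout under Capistrano 3 (the hostname, roles, and organisation names are made up; the helper file must be required from your Capfile):

    # lib/capistrano/organisation.rb -- require this from the Capfile
    def organisation_stage(org)
      server "app.example.com", user: "deploy", roles: %w[app web]
      set :deploy_to, "/home/deploy/sites/#{org}"
    end

    # config/deploy/prod-1.rb
    organisation_stage("acme")

    # config/deploy/prod-2.rb
    organisation_stage("globex")

Each stage then deploys the same code into its own directory on the same host.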
I have a Jenkins server from which I want to deploy some code to some servers. To pick the right servers, I would like the Jenkins job to query Chef for nodes with a particular role.
However, I am not sure if that is a good idea or an anti-pattern, and I am not sure how to go about it in practice.
The Jenkins server is already listed as a non-admin client, so I am wondering if I can use the existing credentials for something, or if I should create a Jenkins admin and set up a knife.rb in Jenkins' home directory.
You would probably want to use one of the Chef scripting libraries, like chef-api (Ruby), PyChef (Python), or jclouds (Java), rather than knife itself. Using Jenkins for deploys is a bit wonky, as it isn't really meant for that, but you can make it work. Tools like Push Jobs, Fabric, and RunDeck are possibly better suited, and all have direct integration with Chef's node catalog like you describe.
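As a sketch, querying for nodes with a given role via the chef-api gem might look like this (the endpoint, client name, key path, and role are placeholders; search is read-only, so the existing non-admin client credentials should suffice):

    require "chef-api"

    connection = ChefAPI::Connection.new(
      endpoint: "https://chef.example.com/organizations/myorg",
      client:   "jenkins",
      key:      "/var/lib/jenkins/.chef/jenkins.pem"
    )

    # Collect the FQDNs of every node that carries the role.
    results = connection.search.query(:node, "role:app_server")
    hosts = results.rows.map { |row| row["automatic"]["fqdn"] }
    puts hosts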
I have an AMI which is configured with my production code. I am using Nginx + Unicorn as the server setup.
The problem I am facing is that whenever traffic goes up, I need to boot the instance, log in to it, and do a git pull, a bundle update, and an asset precompile, which is time-consuming. I want to avoid all of this.
I want a script/process that automates the whole deployment, running git pull, bundle update, and the precompile as soon as I boot a new instance from this AMI.
Is there a best way to get this done? Any help would be appreciated.
You can place your commands in /etc/rc.local (commands in this file are executed when the server boots).
But the best way is to use Capistrano. Add require "capistrano/bundler" to your deploy.rb file, and bundle install will be run automatically on each deploy. For more information you can read this article: https://semaphoreapp.com/blog/2013/11/26/capistrano-3-upgrade-guide.html
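For reference, a minimal sketch of what that looks like in Capistrano 3, where the requires conventionally live in the Capfile:

    # Capfile
    require "capistrano/setup"
    require "capistrano/deploy"
    require "capistrano/bundler"   # hooks `bundle install` into every deploy

Note that Capistrano is push-based: you trigger deploys from a workstation or CI box with cap production deploy rather than from the booting instance itself.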
An alternative approach is to deploy your app to a separate EBS volume (you can still mount this inside /var/www/application or wherever it currently is).
After deploying, you create an EBS snapshot of this volume. When you create a new instance, you tell EC2 to create a new volume for your instance from the snapshot, so the instance starts with the latest gems and code already installed (I find bundle install can take several minutes). All your startup script needs to do is mount the volume (and if you added it to the fstab when you made the AMI, you don't even need to do that). I much prefer scaling operations like this to have no external dependencies (e.g. what would you do if GitHub or rubygems.org had an outage just when you needed to deploy?).
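A rough sketch of the snapshot-and-restore step with the Ruby AWS SDK (the region, volume ID, and availability zone are placeholders):

    require "aws-sdk-ec2"   # the modern v3 SDK; older aws-sdk/fog code is similar

    ec2 = Aws::EC2::Client.new(region: "us-east-1")

    # After a deploy: snapshot the code volume and wait for it to complete.
    snapshot = ec2.create_snapshot(
      volume_id:   "vol-0123456789abcdef0",
      description: "app code and bundled gems, post-deploy"
    )
    ec2.wait_until(:snapshot_completed, snapshot_ids: [snapshot.snapshot_id])

    # When booting a new instance: create a fresh volume from that snapshot,
    # then attach and mount it from the startup script.
    ec2.create_volume(
      snapshot_id:       snapshot.snapshot_id,
      availability_zone: "us-east-1a"
    )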
You can even take this a step further by using Amazon's Auto Scaling service. In a nutshell, you create a launch configuration where you specify the AMI, instance type, volume snapshots, etc. Then you control the group size manually (through the web console or the API), on a fixed schedule, or based on CloudWatch metrics. Amazon will create or destroy instances as needed, using the information in your launch configuration.
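Something like the following, again with the Ruby AWS SDK (all names and IDs are made up; launch configurations have since been superseded by launch templates, but the idea is the same):

    require "aws-sdk-autoscaling"

    autoscaling = Aws::AutoScaling::Client.new(region: "us-east-1")

    # The launch configuration captures everything needed to boot a copy
    # of the app server: the baked AMI, the instance type, and so on.
    autoscaling.create_launch_configuration(
      launch_configuration_name: "app-lc-v42",
      image_id:                  "ami-0123456789abcdef0",
      instance_type:             "t3.small"
    )

    # The group keeps between 2 and 10 instances alive; scaling policies
    # or CloudWatch alarms then adjust the desired capacity.
    autoscaling.create_auto_scaling_group(
      auto_scaling_group_name:   "app-asg",
      launch_configuration_name: "app-lc-v42",
      min_size:                  2,
      max_size:                  10,
      availability_zones:        ["us-east-1a", "us-east-1b"]
    )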
Well, my head is spinning a bit here. I started with what I thought would be a simple task: take regular DB dumps on Heroku and push them to a personal S3 account for backup.
I am not sure of the best approach. Accessing S3 from Java is crystal clear; getting the DB dump from Heroku is clear as mud right now...
Disclaimer: I don't know Ruby, and I don't really want to learn Ruby if I don't have to. I really want to use Java (that is why I chose Play), and I want it hosted (that is why I chose Heroku :-)).
So, I could use the Heroku Scheduler, but I don't understand which scripts are being executed here. Is it all scripts in /bin? What kind of scripts are these, are they Ruby scripts? How do I add them as 'tasks' when they aren't rake tasks?
Can I use pgbackups via a URL somehow? It looks like the rake examples do pg_dump instead, write to a tmp file, and then move it around from there. I'm pretty unclear on how to access the Heroku database from a script; the examples I have seen so far are rake tasks, so any insight there would be helpful...
Or, coming at it from inside my Java app: what is the status of the Heroku Java API? Is there a way to get at the Heroku runtime from my Java code, or somehow use heroku.jar?
It would be great to get some overall guidance and best practices in this area. Thanks!
From the Google group I found this tidbit:
http://groups.google.com/group/heroku/browse_thread/thread/7fe984c3d2d01f21/9474f31138636332?lnk=gst&q=scheduler+#9474f31138636332
"Sorry for the delayed response. We updated the docs to mention running Procfile entries via heroku run:
http://devcenter.heroku.com/articles/oneoff-admin-ps
Anything that works via heroku run works via Heroku Scheduler. Just put the name of the process type as the 'task' in Scheduler. No special syntax required. And you can even pass it arguments."
From this and James Ward's last example above, I am considering this answered.
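To make that concrete: a scheduled task can be any executable in your slug, run exactly as heroku run would run it, so a Java main invoked from a Procfile entry works just as well as anything else. A hypothetical sketch of such a task, in Ruby purely for brevity (the bucket and env var names are made up, and it assumes pg_dump is available on the dyno):

    #!/usr/bin/env ruby
    # bin/backup -- run via `heroku run bin/backup` or as a Scheduler task.
    require "open3"
    require "aws-sdk-s3"

    dump_path = "/tmp/backup-#{Time.now.to_i}.dump"

    # DATABASE_URL is set by Heroku for the attached Postgres add-on.
    _out, err, status = Open3.capture3(
      "pg_dump", "--format=custom", "--file", dump_path, ENV.fetch("DATABASE_URL")
    )
    abort("pg_dump failed: #{err}") unless status.success?

    # Push the dump to a personal S3 bucket.
    s3 = Aws::S3::Client.new(region: ENV.fetch("AWS_REGION", "us-east-1"))
    File.open(dump_path, "rb") do |file|
      s3.put_object(
        bucket: ENV.fetch("BACKUP_BUCKET"),
        key:    File.basename(dump_path),
        body:   file
      )
    end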