I have a project to deploy with Capistrano 3.3.3. There are two different server machines: one is a web server (role :app), the other is a DB server (role :db). On the DB server I have an Apache Solr service, and the devs need to update its config files. They store these config files in the repository together with the rest of the project code. During deploy I need to upload these files to the Solr directory on the DB server. I have a legacy task that does this:
desc 'Solr config update'
task :update_solr_config do
  on roles(:app) do
    execute "scp -i /home/user/dbserver.pem #{current_path}/stack/data-config-menu-produccion.xml user@dbserver:/usr/share/tomcat7/solr/menu/conf/data-config.xml"
    execute "scp -i /home/user/dbserver.pem #{current_path}/stack/data-config-promociones-produccion.xml user@dbserver:/usr/share/tomcat7/solr/promociones/conf/data-config.xml"
    execute "scp -i /home/user/dbserver.pem #{current_path}/stack/data-config-vista-produccion.xml user@dbserver:/usr/share/tomcat7/solr/vista/conf/data-config.xml"
  end
end
But what if there are two DB servers at some point? How would I have to modify this task then?
I've read about Capistrano's upload, put, download, get and transfer methods, but I can't figure out which of them could do a server-to-server file transfer. I suppose the task should be run on the :db role, so that it iterates over each server in that role.
desc 'Solr config update'
task :update_solr_config do
  on roles(:db) do
    # ...some magic goes here...
  end
end
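For instance, I wonder whether upload! would already do the job if the task runs against the :db role and takes the files from the local checkout on the machine running cap. A rough, untested sketch of what I mean (it assumes the repository is checked out locally where the deploy is started):

desc 'Solr config update'
task :update_solr_config do
  # Local repo path => destination on each :db server
  configs = {
    'stack/data-config-menu-produccion.xml'        => '/usr/share/tomcat7/solr/menu/conf/data-config.xml',
    'stack/data-config-promociones-produccion.xml' => '/usr/share/tomcat7/solr/promociones/conf/data-config.xml',
    'stack/data-config-vista-produccion.xml'       => '/usr/share/tomcat7/solr/vista/conf/data-config.xml'
  }
  on roles(:db) do
    configs.each do |src, dest|
      upload! src, dest  # runs once for every server in the :db role
    end
  end
end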
Thanks for any help.
I have a job that SSHes into other servers and deploys some configuration with scp, but I cannot find any way to access the SSH key file defined in my TeamCity project configuration in order to run a shell command in my job, such as "ssh -i ~/.ssh/password", because TeamCity only works inside the job's working directory. So my question is: is there any way to access the SSH private key file that I uploaded in the project settings?
Just to be clear, I cannot use SSH-EXEC and SSH-UPLOAD, as I have a shell script that SSHes into many servers one by one, reading them from a file. It would not be practical to have a separate SSH Exec build step for each server in the TeamCity project, so I have to access the key file somehow without using the standard SSH-EXEC and SSH-UPLOAD steps in TeamCity.
What have I tried?
I only had one idea: somehow access the SSH key that is located outside the working directory by its path (I found this in the documentation):
<TeamCity Data Directory>/config/projects/<project>/pluginData/ssh_keys
The problem with this is that I cannot just cd into the given path, because the job is not allowed to leave the working directory in which TeamCity executes it. Therefore I could not reach the directory where the ssh_keys for my project are located.
UPD: I found the solution: use the SSH Agent build feature. That way the SSH key can be used directly from a command-line step in the job.
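For reference, once the SSH Agent build feature is enabled, the uploaded project key is loaded into an ssh-agent for the duration of the build, so a plain command-line step can loop over the servers without pointing at a key file. A minimal sketch (the host list file, user, and remote command are placeholders):

#!/bin/bash
# The SSH Agent build feature already holds the project key in ssh-agent,
# so no -i flag is needed here.
while read -r host; do
    ssh -o StrictHostKeyChecking=no "deploy@${host}" "/opt/deploy/apply-config.sh"
done < servers.txt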
I'm creating a Rundeck job which will be used to roll back an application. My .jar files are stored in a Nexus repository, and I would like to add an option to Rundeck where I can choose a .jar version from Nexus and then run the rollback job on it.
I have tried using this plugin: https://github.com/nongfenqi/nexus3-rundeck-plugin, but it doesn't seem to be working. When I am logged in to Nexus I can access the JSON file listing the artifacts from my browser, but when I am logged off the JSON file is empty, even though the Nexus server is running.
When I add the JSON URL as a remote URL option in Rundeck, as in the first picture below, I get no options to choose from when running the job, even if I am logged in to Nexus, as shown in the second picture. Is there a way to pass user credentials with options, or any other workaround for this?
I would recommend installing Apache HTTPD locally on your Rundeck server and using a CGI script for this.
Write a CGI script that queries your Nexus 3 service for the versions available for the jar and echoes the results in JSON format.
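A minimal sketch of such a script, assuming a recent Nexus 3 with the /service/rest/v1/search endpoint; the Nexus URL, credentials, and Maven coordinates are placeholders you would adjust, and pagination is ignored for brevity:

#!/usr/bin/env python3
# CGI script: print the available versions of an artifact as a JSON array,
# suitable for a Rundeck remote-URL option.
import base64
import json
import urllib.request

NEXUS = 'http://nexus.example.com:8081'
QUERY = ('/service/rest/v1/search?repository=maven-releases'
         '&maven.groupId=com.example&maven.artifactId=myapp')

req = urllib.request.Request(NEXUS + QUERY)
# Authenticate here, so Rundeck itself never needs Nexus credentials.
req.add_header('Authorization',
               'Basic ' + base64.b64encode(b'rundeck:secret').decode())

data = json.load(urllib.request.urlopen(req))
versions = sorted({item['version'] for item in data.get('items', [])},
                  reverse=True)

print('Content-Type: application/json')
print()
print(json.dumps(versions))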
Place the script in /var/www/cgi-bin/ with executable bit enabled. You can test it like so:
curl 'http://localhost/cgi-bin/script-name.py'
In your job you can configure your remote URL accordingly.
I find using a local CGI script to be much more reliable and flexible. You can also handle any authentication requirements there.
I have my app stored on GitHub. To deploy it to Amazon, I use the eb deploy command, which takes my git repository and sends it up. It then runs the container commands to load my data.
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
The problem is that I don't want the fixtures in my git repository. Git should not contain this data, since the repository is shared with other users. How can I get AWS to load the fixtures some other way?
You can use the old-school way: scp the file to the EC2 instance.
Go to the EC2 console to find the actual EC2 instance associated with your EB environment (I assume you only have one instance). Write down its public IP, then connect to the instance as you would with a normal EC2 instance.
For example:
scp -i [YOUR_AWS_KEY] [MY_FIXTURE_FILE] ec2-user@[INSTANCE_IP]:[PATH_ON_SERVER]
Note that the username has to be ec2-user.
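After copying the file you still have to load it on the instance yourself, for example (the app path below is where the EB Python platform typically puts the application; adjust it if yours differs, and the fixture placeholder matches the scp example above):

ssh -i [YOUR_AWS_KEY] ec2-user@[INSTANCE_IP]
# then, on the instance:
source /opt/python/run/venv/bin/activate
cd /opt/python/current/app
python manage.py loaddata [MY_FIXTURE_FILE]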
But I do not recommend deploying the project this way, because you have to execute the commands manually. It is, however, useful for pulling a fixture from a live server.
To avoid tracking fixtures in git, I use a simple workaround: create a local branch for EB deployment and track the fixtures there, along with other environment-specific credentials. Such EB branches should never be pushed to the remote repositories.
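Roughly, that workflow looks like this (the branch name and fixture path are arbitrary; eb deploy ships the latest commit of the branch you are on):

# one-time setup: a local-only branch that carries the fixtures
git checkout -b eb-deploy
git add fixtures/
git commit -m "Add fixtures and environment-specific config (never push this branch)"

# each deploy: merge the latest code into the EB branch and deploy from it
git checkout eb-deploy
git merge master
eb deploy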
I noticed someone created a bunch of scripts to run on GemFire clusters: multiple copies of the same script, where the only difference between them is the server name.
Here is a picture of the GitHub repo.
What the script looks like:
#!/bin/bash
source /sys_data/gemfire/scripts/gf-common.env
#----------------------------------------------------------
# Start the servers
#----------------------------------------------------------
(ssh -n <SERVER_HOST_NAME_HERE> ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start")
SERVER_HOST_NAME_HERE = the IP address or server name that the script was written for, removed for the purposes of this question.
I would like to create one script that takes the server name as a parameter. The problem is that I'm not exactly sure where the best place would be to store and retrieve the server IPs/hostnames so the script can reference them. Any ideas? The number of cache servers will vary depending on environment, application, and cluster.
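Something like this parameterized version of the start script is what I have in mind (the host name is passed in as the first argument):

#!/bin/bash
# start_cache.sh <host> -- start the GemFire cache server on the given host
source /sys_data/gemfire/scripts/gf-common.env

HOST="$1"
if [ -z "$HOST" ]; then
    echo "Usage: $0 <host>" >&2
    exit 1
fi

ssh -n "$HOST" ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start"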
Our development pipeline should work like this ideally:
Users commit a file to the GitHub repo.
This triggers a Jenkins job.
The Jenkins job copies the file to each cache server, shuts that server down using the stop_cache.sh script, then runs the start_cache.sh script. The number of cache servers can vary from cluster to cluster.
The GemFire cache servers are updated with the new file.
Went with the method suggested by @nos:
Right now you seem to have them hardcoded in each file. So extract them to a separate file (or files), loop through the entries in that file, and run for host in $(cat cache_hostnames.txt); do ./stop_cache.sh $host; done and something similar for the other kinds of services?
I placed the server names in a file and looped through it.
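The driver loop ended up looking roughly like this (the hostnames file and script locations are just how I laid them out):

#!/bin/bash
# Restart every cache server listed in cache_hostnames.txt
while read -r host; do
    ./stop_cache.sh "$host"
    ./start_cache.sh "$host"
done < cache_hostnames.txt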
This project might be of interest:
https://github.com/Pivotal-Data-Engineering/gemfire-manager
I am trying to write a Capistrano task that will back up databases on multiple servers. The bash script that backs up the databases is on my local machine. However, Capistrano outputs this error message:
`backup' is only run for servers matching {}, but no servers matched
I am new to Capistrano. Is there some kind of setting I can use so that I can just run local commands?
Without a little more information, it's difficult to say exactly what the problem might be. It sounds like you're trying to run a bash script that lives on your local computer against several remote servers. This is not something Capistrano can do: it will run commands on remote servers, but only if those commands are present on those servers.
If your bash script is something that needs to run from the database servers, you'll need to upload the script to those servers before running it with Capistrano. If, on the other hand, the script itself connects to those servers, there's no reason to involve Capistrano; running commands over an SSH connection is exactly what it's designed for.
If you could post your Capfile, including the tasks you are trying to run, we might be able to give you more specific assistance.
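As a rough sketch of the first option (the script name, remote path, and role are placeholders, and it assumes your stage file defines servers for the :db role, which the "no servers matched" error suggests is currently missing):

desc 'Back up databases'
task :backup do
  on roles(:db) do
    # Push the local script to each server, then run it there.
    upload! 'backup.sh', '/tmp/backup.sh'
    execute :chmod, '+x', '/tmp/backup.sh'
    execute '/tmp/backup.sh'
  end
end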