Where can I pull the server host name from if we can't store it in this script? - bash

I noticed someone creating a bunch of scripts to run on GemFire clusters: multiple copies of the same script, where the only difference between them is the server name.
What the script looks like:
#!/bin/bash
source /sys_data/gemfire/scripts/gf-common.env
#----------------------------------------------------------
# Start the servers
#----------------------------------------------------------
(ssh -n <SERVER_HOST_NAME_HERE> ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start")
SERVER_HOST_NAME_HERE = the IP address or server name that the script was designed for, removed for the purposes of this question.
I would like to create one script with a parameter for the server name. The problem is that I'm not sure where the best place would be to store/retrieve the server IP/host name(s) so the script can reference them. Any ideas? The number of cache servers will vary depending on environment, application, and cluster.
Our development pipeline should work like this ideally:
Users commit a file to GitHub repo
Triggers Jenkins job
Jenkins job copies file to each cache server, shuts down that server using the stop_cache.sh script, then runs the start_cache.sh script. The number of cache servers can vary from cluster to cluster.
GemFire cache servers are updated with new file.

Went with the method suggested by #nos
Right now you have them hardcoded in each file, it seems. So extract them into a separate file, loop through the entries in that file, and run for host in $(cat cache_hostnames.txt); do ./stop_cache.sh "$host"; done, and something similar for the other kinds of services?
Placed the server names in a file, and looped through the file.
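A minimal sketch of that approach, assuming the host list lives in cache_hostnames.txt (one host per line) and the parameterized script is saved as start_cache.sh (both names are placeholders for whatever the repo actually uses):
#!/bin/bash
source /sys_data/gemfire/scripts/gf-common.env
#----------------------------------------------------------
# Start one cache server; the host name is passed as $1
#----------------------------------------------------------
host="$1"
if [ -z "$host" ]; then
    echo "Usage: $0 <server-host-name>" >&2
    exit 1
fi
ssh -n "$host" ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start"
The Jenkins job (or a small wrapper script) can then drive every server in the cluster from the one file:
# start every cache server listed in the file, one per line
while read -r host; do
    ./start_cache.sh "$host"
done < cache_hostnames.txt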

This project might be of interest:
https://github.com/Pivotal-Data-Engineering/gemfire-manager

Related

How to access an SSH key file right in a TeamCity job without SSH Upload

I have a job that SSHes into other servers and deploys some configuration with scp, but I cannot find a way to access the SSH key file used in my TeamCity project configuration so I can execute a shell command in my job, "ssh -i ~/.ssh/password", because TeamCity runs only in the job directory. So I want to ask: is there any way to access the SSH private key file that I mentioned in the project settings?
Just to say, I cannot use SSH-EXEC and SSH-UPLOAD, because my shell script SSHes into many servers one by one, reading them from a file. One separate SSH-Exec build step per server in the TeamCity project would not be practical, so I have to access the key file somehow without the standard SSH-EXEC and SSH-UPLOAD steps.
What have I tried?
I only had one idea: somehow access the SSH key located outside the working directory by its path (I found this in the documentation):
<TeamCity Data Directory>/config/projects/<project>/pluginData/ssh_keys
The problem with this is that I cannot just cd into the given path; the job is not allowed to leave the working directory that TeamCity executes it in, so I could not reach the directory where the ssh_keys for my project are located.
UPD: Found a solution: use the SSH Agent build feature; that way the uploaded SSH key is available to plain command-line ssh right in the job.
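With that build feature loading the project key into the agent, the build step itself can stay an ordinary shell loop. A rough sketch, assuming a servers.txt with one host per line; the user name, file paths, and service name are made up for illustration:
#!/bin/bash
# TeamCity build step: the SSH Agent build feature has already loaded
# the project key into ssh-agent, so plain ssh/scp pick it up automatically
while read -r server; do
    scp ./config/app.conf "deploy@${server}:/etc/myapp/app.conf"   # hypothetical file and path
    ssh "deploy@${server}" "systemctl restart myapp"               # hypothetical service
done < servers.txt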

Docker: Oracle database 18.4.0 XE wants to configure a new database on startup

I'm trying to configure an Oracle Database container. My problem is that whenever I restart the container, the startup script wants to configure a new database and fails to do so, because there already is a database configured on the specified volume.
What can I do to let the container know that I'd like to use my existing database?
The start script is the stock one that I downloaded from the Oracle GitHub:
Link
UPDATE: So apparently, the problem arises when /etc/init.d/oracle-xe-18c start returns that no database has been configured, which triggers the startup script to try and configure one.
UPDATE 2: I tried creating the DB without passing any environment variables, and after restarting the container the database is up and running. It's an annoying workaround, but it's the one that seems to work. If you have other ideas, please let me know.
I think you should connect to the Linux container with:
docker exec -ti containerid bash
Once there you should check manually for the following:
whether $ORACLE_BASE/oradata/$ORACLE_SID exists (as the script checks) and whether $ORACLE_BASE/admin/$ORACLE_SID/adump does not.
Another thing that you should execute manually is
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured"
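Put together, the manual checks could look like this from inside the container (a sketch; the variables are whatever the image sets):
# the two directory tests the startup script performs
[ -d "$ORACLE_BASE/oradata/$ORACLE_SID" ] && echo "data files exist"
[ -d "$ORACLE_BASE/admin/$ORACLE_SID/adump" ] || echo "adump directory missing"
# does the init script believe a database is configured?
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured" && echo "reports not configured"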
UPDATE AFTER COMMENT:
I don't have the script, but you should run it with bash -x to see what the script is looking for, in order to debug what's going on.
What makes no sense is that you say $ORACLE_BASE/admin/$ORACLE_SID/adump does not exist; if the container deployed and you have a database running, the script should have created that directory the first time it ran.
I think I understand the source of the problem from start to finish.
The thing I overlooked in the documentation is that the Express Edition of Oracle Database does not support a SID/PDB other than the default. However, the configuration script (seemingly /etc/init.d/oracle-xe-18c, though I'm not certain) was only partially written with this fact in mind. That means that if I set the ORACLE_SID and/or ORACLE_PWD environment variables when installing, the database comes up and runs, but with two suspicious errors when it tries to copy two files:
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/spfileROPIDB.ora': No such file or directory
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/orapwROPIDB': No such file or directory
When stopping and restarting the Docker container, I get an error message, because the configuration script created folder/file names according to those variables, while the Docker image is built in a way that only supports the default names; the container therefore tries to reconfigure a new database, only to find that one already exists.
I hope it makes sense.
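For reference, the workaround from UPDATE 2 boils down to creating the container without overriding the defaults. A hedged sketch; the image tag and volume name depend on how you built the image locally:
# create the container WITHOUT ORACLE_SID / ORACLE_PWD, so the configuration
# script keeps the default XE names and can find the database again on restart
docker run -d --name oracle-xe \
    -p 1521:1521 \
    -v oracle-data:/opt/oracle/oradata \
    oracle/database:18.4.0-xe
# later restarts now reuse the already-configured database
docker stop oracle-xe
docker start oracle-xe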

How do I change the location of the httpd.conf for Apache on Windows?

I am working on setting up a load-balancing cluster on Windows Server 2012 and have a shared drive where I want the Apache configuration files to live, so that each member of the LB can load the exact same config files. How do I change where the config file is located independently of where the ServerRoot is?
Start the Apache process with the -d parameter and give your alternative ServerRoot as an argument, though I'd imagine it would be a much better idea for you to use some mechanism to sync the files locally to each server.
Also read http://httpd.apache.org/docs/2.4/mod/core.html#mutex, as it's advised if you're running from a networked file system.
If you just want to specify the main config file, start the process with the -f parameter and the path to the config file as an argument.
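For example (the drive letter and paths here are purely illustrative):
httpd.exe -d "S:/apache-shared"
httpd.exe -f "S:/apache-shared/conf/httpd.conf"
The first form changes the ServerRoot, the second points Apache at a specific main config file; on Windows you would typically bake these flags into the service definition or the shortcut that launches httpd.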

Jekyll private deployment?

I have created a Jekyll site. Regarding deployment, I don't want to host it on GitHub Pages. To host it on a private domain, I learned from the documentation to copy all the files from the _site folder. That all works nicely.
Question:
Each time I add a new blog post, I run jekyll build from the command line, then copy the newly created HTML to the hosted domain. Is there an easy way to update without compiling each time?
The reason why I am asking is that the site will be updated by a non-technical person.
Thanks for the help!!
If you don't want to use GitHub Pages, AFAIK there's no other way than to compile your site each time you make a change.
But of course you can script/automate as much as possible.
That's what I do with my own blog as well. I'm hosting it on my own webspace instead of GitHub Pages, so I need to do these steps for each update:
Compile on local machine
Upload via FTP
I can do this with a single click (okay, a single double-click).
Note: I'm on Windows, so the following solution is for Windows.
But if you're using Linux/MacOS/whatever, of course you can use the tools given there to build something similar.
I'm using a batch file (the Windows equivalent to a shell script) to compile my site and then call WinSCP, a free command-line FTP client.
WinSCP allows me to store session configurations, so I saved the connection to my server there once.
Because of this, I didn't want to commit WinSCP to my (public) repository, so my script expects WinSCP in the parent folder.
The batch file looks like this:
call jekyll build
echo If the build succeeded, press RETURN to upload!
pause
set uploadpath=%~dp0\_site
%~dp0\..\winscp.com /script=build-upload.txt /xmllog=build-upload.log
pause
The first parameter in the WinSCP call (/script=build-upload.txt) specifies the script file which contains the actual WinSCP commands.
This is in the script file:
option batch abort
option confirm off
open blog
synchronize remote -delete "%uploadpath%"
close
exit
Some explanations:
%~dp0 (in the batch file) is the folder where the current batch file is
The set uploadpath=... line (in the batch file) saves the complete path to the generated site into an environment variable
The open blog line (in the script file) opens a connection to the pre-saved session configuration (which I named blog)
The synchronize remote ... line (in the script file) uses the synchronize command to sync from the local folder (saved in %uploadpath%, the environment variable from step 2) to the server.
IMO this solution is suitable for non-technical persons as well.
If the person updating the blog in your case doesn't know how to use source control, you could even script committing & pushing, too.
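For example, the batch file from above could start with a few extra lines before the build (the commit message is just a placeholder):
rem stage, commit and push all changes before building
git add -A
git commit -m "New blog post"
git push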
There are a number of options available which are mentioned in the documentation: http://jekyllrb.com/docs/deployment-methods/
If you are using Git, I would recommend the Git Post-Receive Hook approach. It simply builds the site after the new code is received:
#!/bin/bash
# post-receive hook: rebuild the site whenever new code is pushed
GIT_REPO=$HOME/myrepo.git
TMP_GIT_CLONE=$HOME/tmp/myrepo
PUBLIC_WWW=/var/www/myrepo

# clone the bare repo to a temp dir, build from it into the web root, clean up
git clone $GIT_REPO $TMP_GIT_CLONE
jekyll build -s $TMP_GIT_CLONE -d $PUBLIC_WWW
rm -Rf $TMP_GIT_CLONE
exit
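For the hook to fire, this script needs to live on the server as hooks/post-receive inside the bare repository and be marked executable (chmod +x hooks/post-receive).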
Since you mentioned that it will be updated by a non-technical person, you might try something like rack-jekyll to automatically rebuild when new files are FTP'd.

capistrano is only run for servers matching

I am trying to write a capistrano task that will backup databases on multiple servers. The bash script that backs up the databases is on my local machine. However capistrano outputs this error message:
`backup' is only run for servers matching {}, but no servers matched
I am new to Capistrano. Is there some kind of setting I can set so that I can just run local commands?
Without a little more information, it's difficult to say exactly what the problem might be. It sounds like you're trying to run a bash script that lives on your local computer on several remote servers. That is not something Capistrano can do: it will run commands on remote servers, but only if the commands are present on those servers. If your bash script is something that needs to be run from the database servers, you'll need to upload the script to those servers before running it with Capistrano.
If, on the other hand, your script itself connects to those servers, there's no reason to involve Capistrano; running commands over an SSH connection is all Capistrano itself does anyway. If you could post your Capfile, including the tasks you are trying to run, we might be able to give you more specific assistance.
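In the latter case a plain shell loop is all you need. A minimal sketch, assuming the backup script is ./backup_databases.sh on your local machine; the host names are placeholders:
# feed the local script to each database server over ssh,
# without installing anything on the remote side
for host in db1.example.com db2.example.com; do
    ssh "$host" 'bash -s' < ./backup_databases.sh
done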
