I'm searching for a particular command for a shell script, and it's possible I don't correctly understand how Apache works.
I'm writing a build script that installs the project on a local development VM. Since we restore the production database, we need to change the URL in the database dump before importing it locally. I could hard-code the new URL in my script or pass it as an option, but I want the script to handle all cases and be as simple as possible to launch.
I know which vhost is used in my local environment, so I want to know whether it's possible to use a bash command to retrieve its ServerName.
The ServerName generally has one of these forms:
[name_of_project].local
or
[name_of_project].[under_project].local
But it's not an absolute rule.
I looked at the httpd command, but I don't think anything there fixes my problem. I searched the Internet but found nothing about a command line for this. So I don't know whether it's possible, or whether I will have to parse my conf file to recover the ServerName.
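If parsing is the only way, here is a minimal sketch of what I have in mind, assuming a Debian-style layout where the enabled vhosts live under /etc/apache2/sites-enabled (the path is an assumption):
#!/bin/bash
# Sketch: extract ServerName directives from the enabled vhost files.
# Assumes a Debian-style layout; adjust the path for other
# distributions (e.g. /etc/httpd/conf.d).
grep -hRiE '^[[:space:]]*ServerName' /etc/apache2/sites-enabled/ \
  | awk '{print $2}' \
  | sort -u
Alternatively, apachectl -S (httpd -S on some systems) dumps the parsed vhost configuration, which might be easier to grep than the raw files.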
Thanks!
I have a job that SSHes into other servers and deploys some configuration with scp, but I cannot find any way to access the SSH key file set up in my TeamCity project configuration so that I can execute a shell command in my job, such as "ssh -i ~/.ssh/password", because TeamCity runs only in the job directory. Therefore, I want to ask: is there any way to access the SSH private key file that I mentioned in the project settings?
Just to be clear, I cannot use SSH-EXEC and SSH-UPLOAD, because I have a shell script that SSHes into many servers one by one, reading them from a file. It would not be practical to have one separate SSH-Exec job step in the TeamCity project for each server, so I have to somehow access the key file without using the standard SSH-EXEC and SSH-UPLOAD steps in TeamCity.
What have I tried?
I only had one idea: somehow access the SSH key that is located outside the working directory by its path (I found this in the documentation):
<TeamCity Data Directory>/config/projects/<project>/pluginData/ssh_keys
The problem with this is that I cannot just cd into that path, as the job does not want to go outside the working directory where it is executed by TeamCity. Therefore I could not access the directory where the ssh_keys for my project are located.
UPD: Found a solution: use the SSH Agent build feature. That way the SSH key is available directly on the command line in the job.
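For reference, a minimal sketch of the kind of loop this enables, assuming the SSH Agent build feature has loaded the project key (the file name, user, and paths are made up for illustration):
#!/bin/bash
# Sketch: deploy a config file to every host listed in servers.txt.
# Assumes the TeamCity SSH Agent build feature has already loaded the
# project's private key into the agent, so no -i flag is needed.
while read -r host; do
  scp ./app.conf "deploy@${host}:/etc/myapp/app.conf"
  ssh -n "deploy@${host}" "sudo systemctl reload myapp"
done < servers.txt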
I have Icinga2 installed with Icinga Web 2 on Ubuntu 19.10, and I have Icinga Director installed for the configuration, which is really awesome.
I created some commands and attached them to the linux-agent host template.
That was a mistake, because I then added more servers that don't need these new commands.
So I created a new host template called my-linux-agent as a duplicate of linux-agent, and now I want to move all my custom commands to the new host template, but I can't find any way to do that.
versions:
Icinga2: 2.10.5
IcingaWeb2: 2.7.1
Thanks!
So I couldn't find a way to move a command to a different host template directly, but you can easily duplicate the command onto the other host template and remove the previous one using the following simple steps:
Open the relevant command.
Click Duplicate to duplicate the command.
Choose the relevant host template and click Save.
Open the previous command and delete it.
I'm trying to configure an Oracle Database container. My problem is that whenever I try to restart the container, the startup script wants to configure a new database and fails to do so, because there is already a database configured on the specified volume.
What can I do to let the container know that I'd like to use my existing database?
The start script is the stock one that I downloaded from the Oracle GitHub:
Link
UPDATE: So apparently, the problem arises when /etc/init.d/oracle-xe-18c start returns that no database has been configured, which triggers the startup script to try and configure one.
UPDATE 2: I tried creating the DB without passing any environment variables, and after restarting the container, the database is up and running. This is an annoying workaround, but it is the one that seems to work. If you have other ideas, please let me know.
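For context, a minimal sketch of the run command that works, i.e. with no ORACLE_SID/ORACLE_PWD overrides (the image tag and volume path are assumptions based on the stock Oracle build, not verified):
# Sketch: start the container and let the image use its default names.
# Image tag and host volume path are assumptions; adjust to your build.
docker run -d --name oracle-xe \
  -p 1521:1521 \
  -v /opt/oracle/oradata:/opt/oracle/oradata \
  oracle/database:18.4.0-xe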
I think that you should connect to the Linux container with:
docker exec -ti containerid bash
Once there, you should manually check for the following, just as the script does: whether $ORACLE_BASE/oradata/$ORACLE_SID exists, and whether $ORACLE_BASE/admin/$ORACLE_SID/adump does not.
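A minimal sketch of that check, assuming $ORACLE_BASE and $ORACLE_SID are set inside the container:
# Sketch: mirror the existence checks the startup script performs.
if [ -d "$ORACLE_BASE/oradata/$ORACLE_SID" ]; then
  echo "data files already exist for $ORACLE_SID"
fi
if [ ! -d "$ORACLE_BASE/admin/$ORACLE_SID/adump" ]; then
  echo "adump directory is missing"
fi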
Another thing that you should execute manually is:
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured"
UPDATE AFTER COMMENT:
I don't have the script, but you should run it with bash -x to see what the script is looking for, in order to debug what's going on.
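For example, something along these lines (the script path is a placeholder; use whatever entrypoint your image actually runs):
# Sketch: trace the startup script inside the container to see which
# check triggers the reconfiguration attempt.
docker exec -ti containerid bash -x /path/to/startup_script.sh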
What makes no sense is that you are saying that $ORACLE_BASE/admin/$ORACLE_SID/adump does not exist; but if the container deployed and you have a database running, the first time the script ran it should have created this.
I think I understand the source of the problem from start to finish.
The thing I overlooked in the documentation is that the Express Edition of Oracle Database does not support a SID/PDB other than the default. However, the configuration script (seemingly /etc/init.d/oracle-xe-18c, but I'm not sure) was only partially written with this fact in mind. This means that if I set the ORACLE_SID and/or ORACLE_PWD environment variables when installing, the database will be up and running, but with two suspicious errors when it tries to copy two files:
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/spfileROPIDB.ora': No such file or directory
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/orapwROPIDB': No such file or directory
When stopping and restarting the Docker container, I get an error message, because the configuration script created folder and file names according to those variables; however, the Docker image is built in a way that only supports the default names, so it tries to configure a new database and then sees that one already exists.
I hope it makes sense.
I noticed someone created a bunch of scripts to run on GemFire clusters, with multiple copies of the same script where the only difference between them is the server name.
Here is a picture of the GitHub repo.
What the script looks like:
#!/bin/bash
source /sys_data/gemfire/scripts/gf-common.env
#----------------------------------------------------------
# Start the servers
#----------------------------------------------------------
(ssh -n <SERVER_HOST_NAME_HERE> ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start")
SERVER_HOST_NAME_HERE = the IP address or server name that the script was designed for, removed for the purposes of this question.
I would like to create one script with a parameter for the server name. The problem is that I'm not exactly sure where the best place would be to store and retrieve the server IPs/hostnames so the script can reference them. Any ideas? The number of cache servers varies depending on environment, application, and cluster.
Our development pipeline should ideally work like this:
Users commit a file to the GitHub repo.
This triggers a Jenkins job.
The Jenkins job copies the file to each cache server, shuts down that server using the stop_cache.sh script, then runs the start_cache.sh script. The number of cache servers can vary from cluster to cluster.
The GemFire cache servers are updated with the new file.
Went with the method suggested by @nos:
Right now you have them hardcoded in each file, it seems. So extract them to a separate file (or files), loop through the entries in that file, and run for host in $(cat cache_hostnames.txt); do ./stop_cache.sh $host; done, and something similar for the other kinds of services?
Placed the server names in a file, and looped through the file.
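For illustration, a minimal sketch of the parameterized script, assuming the hostnames live one per line in cache_hostnames.txt next to the script (the file name is an assumption):
#!/bin/bash
# Sketch: start the cache server on every host in cache_hostnames.txt.
source /sys_data/gemfire/scripts/gf-common.env
while read -r host; do
  ssh -n "$host" ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start"
done < cache_hostnames.txt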
This project might be of interest:
https://github.com/Pivotal-Data-Engineering/gemfire-manager
It is probably a simple question, but I don't know how to solve my problem.
There is a proxy on the network I am using. I run PHP scripts on the command line (composer.phar, to name one). The script, which obviously uses curl, tries to download files from HTTP URLs. I cannot download anything because of the proxy.
So my question is this: where should I configure proxy settings so that a tool like composer.phar can use them?
Maybe the command line is not the problem, and I would have the same issue if I were using curl in an application.
Thank you
I had to add an HTTP_PROXY system environment variable to make it work.
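For example, a minimal sketch of the environment setup, with a placeholder proxy address (Composer reads these variables; curl itself only reads the lowercase form for plain HTTP):
# Sketch: point command-line tools at the proxy before running them.
# The proxy host and port are placeholders; use your network's values.
export HTTP_PROXY="http://proxy.example.com:8080"
export http_proxy="$HTTP_PROXY"
export HTTPS_PROXY="$HTTP_PROXY"
php composer.phar install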