Web command to pass arguments to ddev exec? - ddev

Nowadays most CMSes provide some kind of CLI interface, like ./typo3cms or, in the case of Craft, ./craft.
Instead of running ddev exec ./craft do/something, I'd like to add a web command craft that tunnels do/something through, so I can just write ddev craft do/something.
I understand this is nice to have :-)
But can I have it?

It's a nice-to-have that you can, in fact, have right now. Check out the documentation on custom commands: https://ddev.readthedocs.io/en/stable/users/extend/custom-commands/
Your integration would look something like this:
#!/bin/bash
## Description: Run craft inside the web container
## Usage: craft [flags] [args]
## Example: "ddev craft some command"
craft "$@"
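
As a usage sketch (the file location follows the custom-commands docs linked above; migrate/all is just an illustrative Craft argument): save the script as .ddev/commands/web/craft in your project, make it executable, and everything after ddev craft is forwarded to ./craft inside the web container:
chmod +x .ddev/commands/web/craft
ddev craft migrate/all   # runs ./craft migrate/all inside the web container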

Related

Jenkins: how would i bash script the initialAdminPassword set up for a dockerfile as opposed to pasting into the browser

Is it possible to script the setup process of Jenkins in bash? For example, I have a Jenkins container set up on my local machine, and I would like to complete the Jenkins setup entirely in bash so the whole thing can be scripted from a Dockerfile.
I need to be able to pass the initialAdminPassword without using a browser, just from the terminal.
Is it possible to complete the setup from the terminal?
Yes it is possible to skip the manual setup.
I don't know your particular setup, but let's assume you retrieve the password from the Jenkins instance:
cat /var/jenkins_home/secrets/initialAdminPassword
or
docker exec "myjenkinscontainer" bash -c 'cat $JENKINS_HOME/secrets/initialAdminPassword'
You could then connect to Jenkins as admin, using that password.
curl --silent -u "admin:$mypassword" http://localhost:8080/manage
If you've configured Jenkins security to not allow API calls, then you might first need to generate a crumb token to use in every request instead of the password. To issue a crumb token, you might do something like this:
curl -s "http://admin:$mypassword@localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)"
Then you might need to pass this crumb value instead of the password in further requests.
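For example, a hedged sketch of passing the crumb along (the quietDown endpoint is only an illustrative POST target):
CRUMB=$(curl -s -u "admin:$mypassword" "http://localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)")
# The result looks like Jenkins-Crumb:<token>, which can be sent directly as a header
curl -s -u "admin:$mypassword" -H "$CRUMB" -X POST "http://localhost:8080/quietDown"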
Depending on your situation / Jenkins configuration, I might provide more details.

How to create multiple torify instances for bash scripts?

What we have:
Multiple tor connections open at different ports.
What we want:
Create torify2, torify3, ... to handle multiple requests from different bash scripts simultaneously.
Like:
bash_1.sh
torify curl ifconfig.me
...
bash_2.sh
torify2 curl ifconfig.me
...
bash_3.sh
torify3 curl ifconfig.me
...
I am new to Stack Overflow. Feel free to comment so I can improve how I ask questions.
There are at least a couple of easy methods to do what you want since multiple Tor instances are already up and running.
Torify just calls torsocks, so read the man page for torsocks: there aren't any command-line options for specifying the host/port for Tor, but it does use a config file, which you can switch with the TORSOCKS_CONF_FILE environment variable.
The location of your config file may vary, but check /etc/tor/torsocks.conf for the default. Make a copy for each Tor instance, and change the TorPort in each file to a different Tor port.
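For reference, the relevant lines in each copied config might look like this (port 9052 is an assumption; use whichever SocksPort each extra Tor instance listens on):
# /tmp/torsocks-1.conf -- copy of /etc/tor/torsocks.conf with only the port changed
TorAddress 127.0.0.1
TorPort 9052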
Then, you can test that it works by running:
TORSOCKS_CONF_FILE=/tmp/torsocks-1.conf torsocks curl ifconfig.me
You can either run each instance like that, specifying a different config, or if you want to put that into a script, try:
torify1.sh
#!/bin/bash
TORSOCKS_CONF_FILE=/path/to/torsocks1.conf torsocks "$@"
Make one of the above scripts for each conf file and Tor SOCKS port you have running. The "$@" just passes all of the command-line arguments given to your script through to torsocks.
You'd just run your script like: torify1.sh curl -v --compressed http://ifconfig.me/
Hope that helps.

bash or something else: updating configuration files programmatically?

What's the best approach to update an /etc/rc.conf configuration file programmatically?
Specifically, on an arch linux machine, I want to be able to programmatically update
DAEMONS=(syslog-ng network sshd ntpd netfs crond)
to
DAEMONS=(syslog-ng network sshd ntpd netfs crond postgresql)
after postgresql is successfully installed via pacman.
I presume I can write a function that does something like:
line="DAEMONS=(syslog-ng network sshd ntpd netfs crond)"
sed -i "/${line}/ s/)/ postgresql)/" /etc/rc.conf
specifically to handle this postgresql scenario.
However, going one step further, is there a more generic way (using a library, if there's one you can recommend) to programmatically include my service (such as memcached, or a task server like zeromq, etc.) in the DAEMONS parameter of my /etc/rc.conf file?
I wouldn't know about a generic way (there seem to be very few tools that do any parsing and modification of shell code), but one way to update a simple array like this could be to actually read it, change it, then write back the whole line - something like this:
source /etc/rc.conf
DAEMONS+=(postgresql)
sed -i "s/^DAEMONS=.*/DAEMONS=(${DAEMONS[*]})/" /etc/rc.conf
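
Going one step further toward the generic case, a minimal sketch of a reusable helper (add_daemon is a hypothetical name, and it assumes DAEMONS stays a simple one-line array as above):
# Append a service to the DAEMONS array in /etc/rc.conf unless it is already listed
add_daemon() {
    local svc="$1"
    source /etc/rc.conf
    for d in "${DAEMONS[@]}"; do
        [ "$d" = "$svc" ] && return 0   # already present, nothing to do
    done
    DAEMONS+=("$svc")
    sed -i "s/^DAEMONS=.*/DAEMONS=(${DAEMONS[*]})/" /etc/rc.conf
}

add_daemon postgresql
add_daemon memcached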

ec2-describe-images -o self -o amazon returns nothing

I just want to install the ec2-api-tools, so I followed the instructions at this link:
https://help.ubuntu.com/community/EC2StartersGuide
But when I run ec2-describe-images -o self -o amazon on the command line, it returns nothing. There is no error; it just sits there as if waiting for input. What have I done wrong?
Thanks for the help.
I hope you have set the environment variables correctly (and set appropriate permissions on the key files; I remember it's chmod 400). E.g.
$ export EC2_PRIVATE_KEY=/PATH/TO/PK/pk-XXXXXXXXXXXXXXXXXXXX.pem
$ export EC2_CERT=/PATH/TO/CERT/cert-XXXXXXXXXXXXXXXXXXXX.pem
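And for the permission mentioned above (same placeholder paths as the exports):
$ chmod 400 /PATH/TO/PK/pk-XXXXXXXXXXXXXXXXXXXX.pem /PATH/TO/CERT/cert-XXXXXXXXXXXXXXXXXXXX.pem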
Apart from this, your command looks good. Also keep in mind that the owner amazon has lots of AMIs listed, so the result may not come back instantly; you might need to wait anywhere from a couple of seconds to tens of seconds.
Please read these:
Installing Amazon EC2 Command Line Tools
Latest CLI reference to Amazon API Command Line
It's not recognizing your credentials. Double check those settings.

How can I automate running commands remotely over SSH to multiple servers in parallel?

I've searched around a bit for similar questions, but they only cover running one command, or perhaps a few commands, with something like:
ssh user@host -t sudo su -
However, what if I essentially need to run a script on (let's say) 15 servers at once. Is this doable in bash? In a perfect world I need to avoid installing applications if at all possible to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point to me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know those as well (working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a Here document:
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given but I find gsh to be more convenient. You set up a /etc/ghosts file containing your servers in groups such as web, db, RHEL4, x86_64, or whatever (man ghosts) then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
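For the parallel part, a minimal sketch (the host names are placeholders, and it assumes key-based authentication so no password prompts appear):
for host in host1 host2 host3; do
    ssh "user@$host" 'sh' < path_to_script &   # run the same local script on each host
done
wait   # block until every remote run has finished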
You can try paramiko. It's a pure-python ssh client. You can program your ssh sessions. Nothing to install on remote machines.
See this great article on how to use it.
To give you the structure, without the actual code:
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
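A minimal sketch of that structure (setup.sh and the host name are placeholders):
scp ./setup.sh user@host:/tmp/setup.sh   # 1. copy the install/setup script to the target box
ssh user@host 'bash /tmp/setup.sh'       # 2. invoke it on the remote box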
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
(For my own use, I wrote a simpler small script very similar to GavinCattell's; it is documented here, in French).
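A hedged example of what a pssh run can look like (hosts.txt is a plain file with one host per line; check the pssh man page for your version's flags):
# -h: host file, -l: remote user, -i: print each host's output inline as it finishes
pssh -h hosts.txt -l deploy -i 'uptime'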
Have you looked at things like Puppet or Cfengine? They can do what you want and probably much more.
For those that stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: Running arbitrary commands on multiple hosts over ssh.
Once fabric is installed, you'd create a fabfile.py, and implement tasks that can be run on your remote hosts. For example, a task to Reload Apache might look like this:
from fabric.api import env, run

env.hosts = ['host1@example.com', 'host2@example.com']

def reload():
    """ Reload Apache """
    run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command would get run on all the hosts specified in env.hosts.
You can do it the same way you did before, just script it instead of doing it manually. The following code connects to a machine named 'loca' and runs two commands there. All you need to do is insert the commands you want to run.
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
    ssh $box 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
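A quick sketch of that one-time key setup with standard OpenSSH tooling (box1_name as in the loop above):
ssh-keygen -t rsa            # accept the defaults, optionally set a passphrase
ssh-copy-id user@box1_name   # repeat for each box so the loop runs without password prompts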
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or with zsh and the libwww plug-in.
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's command line; this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use Perl, like ssh tomcatuser@server perl -e 'do-everything-on-one-line;', or you could do this:
either scp the_package.tbz tomcatuser@server:the_place/. first, or fetch the package with rsync inside the heredoc below:
ssh tomcatuser@server /bin/sh <<\EOF
TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps   # define stuff like this (and THE_PLACE, APACHE_VHOST_DIR, APACHECTL)
tar xjf the_package.tbz                          # or: rsync rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war                    # you don't normally have to restart tomcat
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart                               # might need to log in as the apache user to move that file and restart
EOF
You want DSH or distributed shell, which is used in clusters a lot. Here is the link: dsh
You basically have node groups (a file with lists of nodes in them) and you specify which node group you wish to run commands on then you would use dsh, like you would ssh to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on the system, so a shared NFS directory can be useful for these sorts of things.
Create an ssh wrapper command, named after the hostname, for every machine you have accessed (i.e. every entry in known_hosts).
by Quierati
http://pastebin.com/pddEQWq2
# Use in .bashrc
# Requires "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config;
# if known_hosts is already hashed, delete it so it is re-populated unhashed
[ ! -d ~/bin ] && mkdir ~/bin
for host in `cut -d, -f1 ~/.ssh/known_hosts | cut -f1 -d " "`;
do
    [ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example invocation:
$ for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with a click of a button. It is a desktop GUI app that runs on Mac and Windows but there is also a command line java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
Which will run the script (which resides locally) on the remote server.
I just read a blog post about using setsid, which needs no further installation/configuration beyond a mainstream kernel. Tested/verified under Ubuntu 14.04.
The author gives a very clear explanation and sample code as well; here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found, without installing or configuring much software, is to use plain old tmux. Say you have 9 Linux servers. Pick a box as your main. Start a tmux session:
tmux
Then create 9 split tmux panes by doing this 8 times:
ctrl-b + %
Now SSH into each box in each pane. You'll need to know some tmux shortcuts. To navigate, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes in each pane, turn on pane synchronization, which lets you type the same thing into every box:
ctrl+b :setw synchronize-panes on
Now when you press any keys, they will show up in every pane. To turn it off, just change on to off. To cycle/resize panes, press ctrl+b <space-bar>.
This works a lot better for me, since I need to see each terminal's output: sometimes servers crash or hang for whatever reason when downloading or upgrading software. If there are any issues, you can just isolate and resolve them individually.
