How to call a ddev web command from another one - ddev

As answered in "ddev exec: command not found (.bash_aliases)", shell scripts in .ddev/commands/web are fantastic.
Is it also possible to call a command from another one?
Like
#!/bin/bash
# pull prod content to local
dump_remote_database
sync_down_files
import_database
Which would (theoretically) call three separate commands defined in .ddev/commands/web
Currently I get
/mnt/ddev_config/commands/web/sync_down: line 5: dump_remote_database: command not found
/mnt/ddev_config/commands/web/sync_down: line 6: sync_down_files: command not found
/mnt/ddev_config/commands/web/sync_down: line 7: import_database: command not found

From a host command you can call another ddev command, but not from a web (or other container) command, because those execute inside the container and don't even know that ddev exists.
So with a host command, for example ddev relaunch in .ddev/commands/host/relaunch, you could have this, where relaunch calls ddev launch:
#!/bin/bash
## Description: Launch a browser with drupal /user
## Usage: relaunch [path]
## Example: "ddev relaunch"
ddev launch /user
With a web container command though, you're executing inside the web container (which doesn't even know that ddev exists; it's its own little world). So in that case you might have to copy/paste some feature of another web command.
Let's say you don't like the built-in drush.example enough and you want just a "drushuli" command. Just as drush.example uses drush directly, you can use drush directly yourself (inside the web container) with drush uli. So I'll copy .ddev/commands/web/drush.example and come up with:
#!/bin/bash
## Description: Run drush uli inside the web container
## Usage: drushuli [flags] [args]
## Example: "ddev drushuli"
drush uli
It's a pretty silly example, but you get the picture. Use the tools that are available to you in the environment you're working with.
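That said, if you really want one web command to reuse others, one possible workaround (just a sketch, assuming the command scripts are mounted at /mnt/ddev_config/commands/web, as the error output above suggests) is to call the sibling scripts by their full path instead of by bare name:
#!/bin/bash
# pull prod content to local, invoking the sibling web commands by their
# mounted path instead of relying on them being on $PATH
CMDS=/mnt/ddev_config/commands/web
bash "$CMDS/dump_remote_database"
bash "$CMDS/sync_down_files"
bash "$CMDS/import_database"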

Related

Auto Start Script

So I am making a script that can run these commands whenever the server boots/reboots:
sudo bash
su - erp
cd frappe-bench/
bench start >/tmp/bench_log &
I found guides here and there about how to change the user in a script, and I came up with the following script:
#! /bin/sh
sudo -u erp bash
cd /home/erp/frappe-bench/
bench start >/tmp/bench_log &
And, I have created a service at /etc/systemd/system/ and set it to run automatically when the server boots up.
The problem is, whenever I run sudo systemctl start erpnextd.service and checked the status, it came up with this
May 24 17:10:05 appbsystem2 systemd[1]: Started ERPNext | Auto Restart.
May 24 17:10:05 appbsystem2 sudo[18814]: root : TTY=unknown ; PWD=/ ; USER=erp ; COMMAND=/bin/bash
May 24 17:10:05 appbsystem2 systemd[1]: erpnextd.service: Succeeded.
But it still doesn't start up ERPNext.
All I wanted to do is make a script that will start ERPNext automatically every time the server reboots.
Note: frappe-bench is installed only for the erp user.
Because you are using systemd, you already have all the features of your script available, and better, so you don't even need the script anymore:
[Unit]
Description=...
[Service]
# Run as user erp.
User=erp
# You probably also want to run as group erp, if it exists.
Group=erp
# Change to this directory before executing.
WorkingDirectory=/home/erp/frappe-bench
# Redirect standard output to the given log file.
StandardOutput=file:/tmp/bench_log
# Redirect standard error to the same log file.
StandardError=file:/tmp/bench_log
# Command line for starting the program. Make sure to use an absolute path!
ExecStart=/full/path/to/bench start
[Install]
WantedBy=multi-user.target
Using crontab (the script will start after every reboot/startup)
#crontab -e
@reboot sh /full/path/to/bench start >/tmp/bench_log
The answer provided by Thomas is very helpful.
However, I found another workaround by adding the path of my script file to the bottom of the /etc/rc.local file.
Both methods work; it's just a matter of preference ;)
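For reference, a minimal sketch of that rc.local approach (assuming the same bench path as in the unit file above; /etc/rc.local runs as root at boot, so the command is handed to the erp user via su):
# added to /etc/rc.local, before the final "exit 0"
su - erp -c 'cd /home/erp/frappe-bench && nohup /full/path/to/bench start >/tmp/bench_log 2>&1 &'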

copy bash command history (recursive search commands) into Docker container

I have a container which I am using interactively (docker run -it). In it, I have to run a pretty common set of commands, though not always in a set order, hence I cannot just run a script.
Thus, I would like a way to have my commands available in recursive search (Ctrl+R) inside the Docker container.
Any idea how I can do this?
Let's mount the history file into the container from the host, so its contents are preserved beyond the container's death.
# In some directory
touch bash_history
docker run -v "$(pwd)/bash_history":/root/.bash_history:Z -it fedora /bin/bash
I would recommend keeping a bash history separate from the one you use on the host, for safety reasons.
I found helpful info in these questions:
Docker and .bash_history
Docker: preserve command history
https://superuser.com/questions/1158739/prompt-command-to-reload-from-bash-history
They use docker volume mounts however, which means that the container commands affect the local (host PC) history, which I do not want.
It seems I will have to copy ~/.bash_history from local into container which will make the history work 'one-way'.
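For example, a simple one-way copy into an already running container can be done with docker cp (my_container here is a hypothetical container name):
# copy the host history into the container; changes made inside won't flow back
docker cp ~/.bash_history my_container:/root/.bash_history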
UPDATE: Working:
COPY your_command_script.sh some_folder/my_history
ENV HISTFILE=some_folder/my_history
ENV PROMPT_COMMAND="history -a; history -r"
Explanation:
copy command script into a file in container
tell the shell to look at a different file for history
reload the history file

Run a shell script on startup (not login) on Ubuntu 14.04

I have a build server. I'm using the Azure Build Agent script. It's a shell script that will run continuously while the server is up. Problem is that I cannot seem to get it to run on startup. I've tried /etc/init.d and /etc/rc.local and the agent is not being run. Nothing concerning the build agent in the boot logs.
For /etc/init.d I created the script agent.sh which contains:
#!/bin/bash
sh ~/agent/run.sh
Gave it the proper permissions with chmod 755 agent.sh and moved it to /etc/init.d.
and for /etc/rc.local, I just appended the following
sh ~/agent/run.sh &
before exit 0.
What am I doing wrong?
EDIT: added examples.
EDIT 2: Just noticed that the init.d README says that shell scripts need to start with #!/bin/sh and not #!/bin/bash. Also used absolute path, but no change.
FINAL EDIT: As @ewrammer suggested, I used cron and it worked: crontab -e and then @reboot /home/user/agent/run.sh.
It is hard to see what is wrong if you are not posting what you have done, but why not add it as a cron job with @reboot as the pattern? Then cron will run the script every time the computer starts.
Just in case, using a supervisor could be a good idea. In Ubuntu 14 you don't have systemd, but you can choose from others: https://en.wikipedia.org/wiki/Process_supervision.
If using immortal, after installing it, you just need to create a run.yml file in /etc/immortal with something like:
cmd: /path/to/command
log:
    file: /var/log/command.log
This will start your script/command on every boot, and also ensure your script/app is always up and running.

Reuse inherited image's CMD or ENTRYPOINT

How can I include my own shell script CMD on container start/restart/attach, without removing the CMD used by an inherited image?
I am using this, which does execute my script fine, but appears to overwrite the PHP CMD:
FROM php
COPY start.sh /usr/local/bin
CMD ["/usr/local/bin/start.sh"]
What should I do differently? I am avoiding the prospect of copy/pasting the ENTRYPOINT or CMD of the parent image, and maybe that's not a good approach.
As mentioned in the comments, there's no built-in solution to this. From the Dockerfile, you can't see the value of the current CMD or ENTRYPOINT. Having a run-parts solution is nice if you control the upstream base image and include this code there, allowing downstream components to make their changes. But in docker there's one inherent issue that will cause problems with this: containers should only run a single command, and that command needs to run in the foreground. So if the upstream image kicks off its command, it would stay running without giving your later steps a chance to run, and you're left with the complexity of ordering the commands so that a single foreground command does eventually run without exiting.
My personal preference is a much simpler and hardcoded option: add my own command or entrypoint, and make the last step of my command an exec of the upstream command. You will still need to manually identify the script name to call from the upstream Dockerfile. But now in your start.sh, you would have:
#!/bin/sh
# run various pieces of initialization code here
# ...
# kick off the upstream command:
exec /upstream-entrypoint.sh "$@"
By using an exec call, you transfer pid 1 to the upstream entrypoint so that signals get handled correctly. And the trailing "$@" passes through any command line arguments. You can use set to adjust the value of "$@" if there are some args you want to process and extract in your own start.sh script.
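For example, if start.sh wants to consume a flag of its own before forwarding the rest, it could do something like this (the --skip-migrations flag is purely hypothetical):
#!/bin/sh
# consume a (hypothetical) flag meant only for this wrapper script
if [ "$1" = "--skip-migrations" ]; then
    SKIP_MIGRATIONS=1
    shift
fi
# or append an extra argument for the upstream command:
# set -- "$@" --some-upstream-flag
# hand off to the upstream entrypoint with the remaining arguments
exec /upstream-entrypoint.sh "$@"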
If the base image is not yours, you unfortunately have to call the parent command manually.
If you own the parent image, you can try what the people at camptocamp suggest here.
They basically use a generic script as an entry point that calls run-parts on a directory. What that does is run all scripts in that directory in lexicographic order. So when you extend an image, you just have to put your new scripts in that same folder.
However, that means you'll have to maintain order by prefixing your scripts which could potentially get out of hand. (Imagine the parent image decides to add a new script later...).
Anyway, that could work.
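A minimal sketch of such a generic entrypoint (the directory name and the availability of run-parts inside the image are assumptions):
#!/bin/sh
# run every script in /docker-entrypoint.d in lexicographic order,
# then hand off to whatever CMD was passed to the container
run-parts --exit-on-error /docker-entrypoint.d
exec "$@"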
Update #1
There is a long discussion on this docker compose issue about provisioning after container run. One suggestion is to wrap your docker run or compose command in a shell script and then run docker exec for your other commands.
If you'd like to use that approach, you basically keep the parent CMD as the run command and you place yours as a docker exec after your docker run.
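A rough sketch of that wrapper (the image name, container name, and provision.sh are placeholders, and the sleep is a crude stand-in for a real readiness check):
#!/bin/sh
# start the container with the parent image's ENTRYPOINT/CMD untouched
docker run -d --name myapp my-derived-image
# give it a moment to come up, then run the extra provisioning steps inside it
sleep 5
docker exec myapp /usr/local/bin/provision.sh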
Using mysql image as an example
Do docker inspect mysql/mysql-server:5.7 and see that:
Config.Cmd="mysqld"
Config.Entrypoint="/entrypoint.sh"
which we put in bootstrap.sh (remember to chmod a+x):
#!/bin/bash
echo $HOSTNAME
echo "Start my initialization script..."
# docker inspect results used here
/entrypoint.sh mysqld
Dockerfile is now:
FROM mysql/mysql-server:5.7
# put our script inside the image
ADD bootstrap.sh /etc/bootstrap.sh
# set to run our script
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/etc/bootstrap.sh"]
Build and run our new image:
docker build --rm -t sidazhou/tmp-mysql:5.7 .
docker run -it --rm sidazhou/tmp-mysql:5.7
Outputs:
6f5be7c6d587
Start my initialization script...
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...
You'll see this has the same output as the original image:
docker run -it --rm mysql/mysql-server:5.7
[Entrypoint] MySQL Docker Image 5.7.28-1.1.13
[Entrypoint] No password option specified for new database.
...
...

How to set the command history in a Dockerfile

I'm running the docker container locally to troubleshoot its state. I don't always want to execute the RUN/ENTRYPOINT; I often want to get into the running container, do some things, and then run the RUN/ENTRYPOINT.
It would be super convenient to have the RUN/ENTRYPOINT available after I docker run bash by just pressing the up key. So I thought it would be nice if I could modify the history with history -s ... in the Dockerfile. That way, as soon as I docker run bash, I can just press up and have the RUN/ENTRYPOINT available.
When I put this in the Dockerfile, I got this error:
/bin/sh: 1: history: not found
Is there a way to set the bash history in a Dockerfile?
You get the error because RUN commands run in /bin/sh, which has no history command available.
To make this work, you need to run an interactive bash shell during the build, so it will store your history entry.
RUN bash -ic 'history -s foobar'
That should leave behind a history file with foobar as its most recent (and probably only) entry.
You will see an error during build about ioctl... that is normal, because interactive bash expects to find a terminal, and there won't be one. But it should still work fine.
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell
Note that this will be stored for the user you run the command as. If your image switches to a non-root user with the USER statement, you should put this after the USER line so it is stored in the user that your image runs as.
