I am trying to create an nginx-laravel-mysql stack of Docker containers using laradock, a free docker-compose setup for Laravel.
To make it work, I have to run php artisan key:generate either from my local environment or from within a running container (both bad practices).
I tried adding command: /bin/bash -c "php artisan key:generate" to my docker-compose.yml file. This causes an exit; when I run docker-compose ps, I see laradock_workspace_1 /bin/bash -c nohup php art ... Exit 1. Adding nohup gives the same result. In fact, any command I run here causes an exit.
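For reference, a minimal sketch of how that attempt sits in docker-compose.yml (the service name workspace is assumed from the ps output above):

workspace:
    # ...other laradock workspace settings unchanged...
    command: /bin/bash -c "php artisan key:generate"

A one-shot command like this finishes immediately, and a container exits when its main process exits, which is what the Exit status reflects.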
On to the Dockerfile. If I add RUN php artisan key:generate (or any variation of it), I get this:
ERROR: Service 'workspace' failed to build: The command '/bin/sh -c php artisan key:generate' returned a non-zero code: 1
If I run that same command as CMD or ENTRYPOINT, even with nohup, it runs and generates the key, but exits:
docker-compose ps says: laradock_workspace_1 /bin/sh -c nohup php artis... Exit 0
I can add restart: always to docker-compose.yml, but that begets a vicious cycle of generate key, exit, restart.
Any ideas how to execute this command (or any command) from Dockerfile or docker-compose.yml without exiting?
EDIT: to answer @dnephin's question: php artisan key:generate adds a hash to the .env file and adds a value to a PHP array. It takes just that command, no input. When I run docker-compose run workspace php artisan key:generate, I get Could not open input file: artisan.
Strangely, when I run docker-compose run workspace pwd, I see the correct path to my Laravel files (and I can see all of them if I run docker-compose exec workspace bash), but when I try to run docker-compose run workspace ls, I see nothing. It's like the files aren't there.
Related
I have a container that runs a database migration (source):
FROM golang:1.12-alpine3.10 AS downloader
ARG VERSION
RUN apk add --no-cache git gcc musl-dev
WORKDIR /go/src/github.com/golang-migrate/migrate
COPY . ./
ENV GO111MODULE=on
ENV DATABASES="postgres mysql redshift cassandra spanner cockroachdb clickhouse mongodb sqlserver firebird"
ENV SOURCES="file go_bindata github github_ee aws_s3 google_cloud_storage godoc_vfs gitlab"
RUN go build -a -o build/migrate.linux-386 -ldflags="-s -w -X main.Version=${VERSION}" -tags "$DATABASES $SOURCES" ./cmd/migrate
FROM alpine:3.10
RUN apk add --no-cache ca-certificates
COPY --from=downloader /go/src/github.com/golang-migrate/migrate/build/migrate.linux-386 /migrate
ENTRYPOINT ["/migrate"]
CMD ["--help"]
I want to integrate it into a docker-compose and make it dependent on the Postgres database service. However, since I have to wait until the database is fully initialised, I have to wrap the migrate command in a script and thus replace the entrypoint of the migration container. I'm using the wait-for script to poll the database; it is a pure shell (not bash) script and should thus work in an alpine container.
This is how the service is defined in the docker-compose:
services:
  database:
    # ...
  migration:
    depends_on:
      - database
    image: migrate/migrate:v4.7.0
    volumes:
      - ./scripts/migrations:/migrations
      - ./scripts/wait-for:/wait-for
    entrypoint: ["/bin/sh"]
    command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]
Running docker-compose up on this fails with
migration_1 | /bin/sh: can't open './wait-for database:5432': No such file or directory
Running the migrate container by itself with
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
does work flawlessly; the script is there and can be run with /bin/sh ./wait-for.
So why does it fail as part of the docker-compose?
If you read the error message carefully, you will see that the file that cannot be found is not ./wait-for, it is ./wait-for database:5432. This is consistent with your input file, where that whole thing is given as the first element of the command list:
command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test#database:5432/test?sslmode=disable", "-verbose", "up"]
It's unclear to me what you actually want instead, since the working alternatives presented do not seem to be fully analogous, but possibly it's
command: ["./wait-for", "database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test#database:5432/test?sslmode=disable", "-verbose", "up"]
Running the migrate container by itself does work flawlessly
When you run it like:
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
the entrypoint /bin/sh is executed on its own.
When you run it using docker-compose:
entrypoint (/bin/sh) + command (./wait-for database:5432 ...) is executed.
./wait-for database:5432 as a whole is taken as the name of the executable to run, and it can't be found; that's why you get the error No such file or directory.
Try to specify an absolute path to wait-for in command:, and split ./wait-for database:5432 into "./wait-for", "database:5432".
It's possible that splitting alone will be enough.
As an alternative, you can follow the CMD syntax docs and use the non-array command syntax: command: ./wait-for database:5432 ...
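For illustration, a minimal sketch of that string form under the same setup (using /migrate, which is where the Dockerfile above copies the binary):

entrypoint: ["/bin/sh"]
command: ./wait-for database:5432 -- /migrate -path /migrations -database postgres://test:test@database:5432/test?sslmode=disable -verbose up

Compose splits the string on whitespace itself, so ./wait-for and database:5432 end up as separate arguments, just as in the corrected array form.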
ENTRYPOINT ["/bin/sh"] is not enough, you also need the -c argument.
Example (testing a docker-compose.yml with docker-compose run --rm MYSERVICENAMEFROMTHEDOCKERCOMPOSEFILE bash here):
entrypoint: ["/bin/sh"]
Throws:
/bin/sh: 0: cannot open bash: No such file
ERROR: 2
And some wrong syntax examples like
entrypoint: ["/bin/sh -c"]
(wrong!)
or
entrypoint: ["/bin/sh, -c"]
(wrong!)
throw errors:
starting container process caused: exec: "/bin/sh, -c": stat /bin/sh, -c: no such file or directory: unknown
ERROR: 1
starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
ERROR: 1
In docker-compose or Dockerfile, for an entrypoint, you need the -c argument.
This is right:
entrypoint: "/bin/sh -c"
or:
entrypoint: ["/bin/sh", "-c"]
The -c makes clear that what follows is a command string to be executed on that command line, supplied as the next argument, rather than starting /bin/sh on its own. You can read that between the lines at What is the difference between CMD and ENTRYPOINT in a Dockerfile?.
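For illustration, a minimal working sketch in a docker-compose.yml (the service and image names are assumed, not from the original post):

services:
  myservice:
    image: alpine:3.10
    entrypoint: ["/bin/sh", "-c"]
    command: ["echo hello && sleep 10"]

With this form the whole command string is handed to /bin/sh -c as a single argument, so shell operators like && work inside it.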
I'm using Bitbucket Pipelines to deploy and run some artisan commands,
but there is a problem that gives me a headache: when an artisan command fails, Envoy shows the error/exception but does not continue to the next Envoy task. It keeps showing the exception until I kill the PHP process on the VPS server (using the kill/pkill command).
Here is my Envoy task:
@task('start_check_log', ['on' => 'web'])
    cd /home/deployer/mywork/laravel/
    nohup bash -c "php artisan serve --env=dusk.local 2>&1 &" && sleep 2
    curl -vk http://localhost:8000 &
    php artisan check_log
    sudo kill $(sudo lsof -t -i:8000)
    php artisan cache:clear
    php artisan config:clear
@endtask
php artisan check_log just checks the log file; I want to check whether an error occurred, but when an error comes up, Envoy gets stuck on it.
I've resolved this problem; it was just my mistake. I had to chain the command so Envoy continues the task: php artisan check_log && sleep 2, and now Envoy continues the process.
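For reference, the adjusted task with that one change applied (everything else unchanged):

@task('start_check_log', ['on' => 'web'])
    cd /home/deployer/mywork/laravel/
    nohup bash -c "php artisan serve --env=dusk.local 2>&1 &" && sleep 2
    curl -vk http://localhost:8000 &
    php artisan check_log && sleep 2
    sudo kill $(sudo lsof -t -i:8000)
    php artisan cache:clear
    php artisan config:clear
@endtask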
My PHP image entrypoint is something like below. The entrypoint runs as root, and that is necessary in my case, so any command I run on my container runs as root. For some particular commands I want to run as another user: e.g. when someone executes docker exec -it php composer install, composer should run as another user set in the entrypoint; when someone executes docker exec -it php drush status, drush should run as that user too. Probably an if or switch statement inside the entrypoint can help me. I was trying something like this https://unix.stackexchange.com/questions/476155/how-to-pass-multiple-parameters-to-su-user-c-command but passing parameters with a double dash (--) breaks some commands.
Dockerfile
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]
entrypoint.sh
#!/bin/sh
set -e

# first arg is `-f` or `--some-option`
if [ "${1#-}" != "$1" ]; then
    set -- php-fpm "$@"
fi

# hand off to the requested command (php-fpm by default, via CMD)
exec "$@"
I'm not sure that I understand your use-case, but I use su-exec to drop privileges down to a non-root user within my entrypoint script. Most commonly I have to use this because I need to change permissions on a bind-mounted volume (usually /var/run/docker.sock).
Essentially I will do root level operations in my entrypoint, then drop down to a non-root user when executing the container service.
This blog explains the concept using gosu, su-exec is a refactor of gosu in C that is 10kb vs 1.8MB: https://denibertovic.com/posts/handling-permissions-with-docker-volumes/
Do note the security issues, which AFAIK are not a factor when using this in containers.
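A minimal sketch of that pattern, assuming an image where su-exec is installed and a non-root user named app exists (both names are illustrative):

#!/bin/sh
set -e

# root-level setup, e.g. fixing ownership on a bind-mounted volume
chown -R app:app /var/www

# drop privileges: replace this shell with the requested command, run as app
exec su-exec app "$@"

Note that docker exec does not go through the entrypoint, so this only affects the main container process; per-command user switching on exec would need something like docker exec -u app instead.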
Ok I'm starting to lose my mind here. When I deploy my app to elastic beanstalk I get this error:
[2017-12-15 17:50:18] Tylercd100\LERN.CRITICAL: RuntimeException was thrown! The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths.
To be clear, I deploy my app source without dependencies installed and with APP_KEY not set; I'm leaving dependency installation to Elastic Beanstalk, which installs them during deployment.
In my AWS .config file I have defined the deployment commands as follows:
---
commands:
  00init:
    command: "sudo yum install gcc-c++"
  01init:
    command: "rm -f amazon-elasticache-cluster-client.so"
  02init:
    command: "wget https://s3.amazonaws.com/php-amazon-elasticache-cluster-client-7-1/amazon-elasticache-cluster-client.so"
  03init:
    command: "sudo mv amazon-elasticache-cluster-client.so /usr/lib64/php/7.1/modules/"
  04init:
    command: "echo \"extension=amazon-elasticache-cluster-client.so\" | sudo tee /etc/php-7.1.d/50-memcached.ini"
  05init:
    command: "sudo /etc/init.d/httpd restart"
container_commands:
  00permissions:
    command: "find * -type d -print0 | xargs -0 chmod 0755"
  01permissions:
    command: "find . -type f -print0 | xargs -0 chmod 0644"
  02permissions:
    command: "chmod -R 775 storage bootstrap/cache"
  03cache:
    command: "php artisan cache:clear"
  04key:
    command: "php artisan key:generate"
  05cache:
    command: "php artisan config:cache"
  06cache:
    command: "php artisan route:cache"
  07optimize:
    command: "php artisan optimize"
These commands run during deployment to AWS without any error.
When I go and check .env directly on the virtual machine, the APP_KEY is set as it should be, considering the commands above.
Yet I get the cipher error.
Assuming you set APP_KEY on the Elastic Beanstalk configuration page in the dashboard, there are two things that I would like to point out.
1- When php artisan config:cache is run in container_commands, it caches file paths as /var/app/ondeck/... This causes runtime errors when Laravel tries to access the cached files.
2- The cipher error occurs when Laravel cannot read the APP_KEY value from your .env file. If a line like APP_KEY=${APP_KEY} exists in your .env file, that is the main cause of the error. You assume that the APP_KEY value is going to be read from the environment configuration made in the dashboard. However, somehow the environment variables have not yet been set by Beanstalk at the time your commands or container_commands run. You can solve this issue by sourcing the environment variables yourself, by including the command below in your commands or files.
source /opt/elasticbeanstalk/support/envvars
e.g.

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/91_config_cache.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      source /opt/elasticbeanstalk/support/envvars
      echo "Running php artisan config:cache"
      cd /var/app/current
      php artisan config:cache
      echo "Finished php artisan config:cache"
I am installing the Laravel Installer as part of a Docker container using Composer. Laravel is installed globally, meaning it goes to ~/.composer/vendor and then adds an executable under ~/.composer/vendor/bin.
I am adding the directory ~/.composer/vendor/bin to the $PATH in a Dockerfile as follows:
ENV PATH="~/.composer/vendor/bin:${PATH}"
If I run the command docker exec -it php-fpm bash and, from inside the container, run echo $PATH, I get the following:
# echo $PATH
/opt/remi/php71/root/usr/bin:/opt/remi/php71/root/usr/sbin:~/.composer/vendor/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
If I run the command laravel inside the container, I get the following:
# laravel
Laravel Installer 1.3.3

Usage:
  command [options] [arguments]

Options:
  -h, --help            Display this help message
  -q, --quiet           Do not output any message
  -V, --version         Display this application version
      --ansi            Force ANSI output
      --no-ansi         Disable ANSI output
  -n, --no-interaction  Do not ask any interactive question
  -v|vv|vvv, --verbose  Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug

Available commands:
  help  Displays help for a command
  list  Lists commands
  new   Create a new Laravel application.
So everything seems to be working fine. But if I run the following command (outside the container) meaning from the host:
$ docker exec -it php-fpm laravel
I get the following error:
rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"laravel\\\": executable file not found in $PATH\"\n"
What am I missing here? Can the laravel command be run from the host?
The ~ is the problem here: tilde expansion is a shell convenience, exec doesn't process it as you'd hope, and the Dockerfile doesn't expand it for you either. Be explicit with your path, as follows:
ENV PATH=/root/.composer/vendor/bin:${PATH}
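A quick way to verify, assuming the image was rebuilt with the absolute path (the container name php-fpm comes from the question):

docker exec -it php-fpm sh -c 'echo $PATH'    # should now list /root/.composer/vendor/bin
docker exec -it php-fpm laravel -V            # resolves with no shell around to expand ~

docker exec runs the binary directly rather than through a shell, so nothing ever expands a literal ~ in a PATH entry; an absolute path sidesteps that entirely.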