How can I override ddev's php-fpm.conf or pool.d/www.conf?

There is no obvious way to override some of the php-fpm configuration in DDEV-Local's web container. Although it's easy to provide custom PHP configuration, it's not as obvious how one would configure the php-fpm process itself.
In my case I want to change the security.limit_extensions value in pool.d/www.conf.

There are two ways to do this. I'll create two separate answers to explain how.
The first technique is to create a custom Dockerfile (docs) which edits www.conf (or any other file). You can also use the Dockerfile ADD command to add a complete file that overwrites the existing one.
In the case of this specific problem, we'll create a .ddev/web-build/Dockerfile with these contents:
# You can copy this Dockerfile.example to Dockerfile to add configuration
# or packages or anything else to your webimage
ARG BASE_IMAGE
FROM $BASE_IMAGE
ENV PHP_VERSION=7.4
RUN echo "security.limit_extensions = .php .html" >> /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf
After you ddev start you'll have the new configuration.
The RUN echo approach shown here just appends to the file and is given for simplicity; instead, you could RUN a sed/awk/perl statement to change the file in place.
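For example, a sed-based RUN could change the existing value rather than appending a duplicate (a sketch; adjust the regex to match whatever is in your www.conf):
# hypothetical in-place edit of the stock (possibly commented-out) setting
RUN sed -i 's/^;*security\.limit_extensions.*/security.limit_extensions = .php .html/' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf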
Alternatively, you could put the version of www.conf that you want into the .ddev/web-build directory and
COPY www.conf /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf
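Either way, after ddev start you can verify that the change took effect (a quick sanity check; the path assumes PHP 7.4 as above):
ddev ssh
grep security.limit_extensions /etc/php/7.4/fpm/pool.d/www.conf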

The second way to approach this is to use a custom docker-compose.*.yaml file (docs).
Here you'll copy the desired www.conf (or any other file) into your project's .ddev directory and then mount it into the web container on top of the previously provided one. For this specific example, you can copy www.conf into the .ddev folder with cd .ddev && docker cp ddev-<projectname>-web:/etc/php/7.4/fpm/pool.d/www.conf . and then edit it as you need to (here, set security.limit_extensions = .php .html).
Then a custom .ddev/docker-compose.*.yaml file like this can mount it into the proper directory (mine is called docker-compose.wwwconf.yaml):
version: "3.6"
services:
web:
volumes:
- "./www.conf:/etc/php/7.4/fpm/pool.d/www.conf"

If you are using plain docker-compose, mount a zz-docker.conf containing your customized configuration; a sample is below:
php:
  build: ./php
  image: ctc/php:latest
  container_name: ctc-php
  expose:
    - 9000
  volumes:
    - ./html:/var/www/html
    - ./php/log:/var/log/php-fpm
    - ./php/php-fpm.d/zz-docker.conf:/usr/local/etc/php-fpm.d/zz-docker.conf
  networks:
    - koogua
  restart: always
zz-docker.conf looks like this:
[global]
daemonize = no
[www]
listen = 9000
pm.max_children = 50
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 30
pm.max_requests = 500
Note: mounting www.conf itself caused an error for me, which is why I override zz-docker.conf instead.
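If you want to confirm which pool settings php-fpm actually loaded, you can ask it to test and dump its configuration (a quick check; php is the service name from the compose file above):
docker-compose exec php php-fpm -tt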

Related

How to build an image from Dockerfile using Earthly target?

I would like to build an image from a Dockerfile using Earthly.
You might be wondering why I want that, since I can describe images right inside an Earthfile, but I have two reasons for using an external Dockerfile:
The ADD command (which I need to download a file by URL) is not supported by Earthly yet
I would like to use heredoc syntax for embedding a file's content into the container right from the Dockerfile. This requires # syntax=docker/dockerfile:1.4, which is again not available in an Earthfile
So, here is what I tried to do.
My approximate Dockerfile looks like:
# syntax=docker/dockerfile:1.4
FROM gcr.io/distroless/java17:nonroot
WORKDIR /opt/app
ADD --chown=nonroot https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.7/applicationinsights-agent-3.4.7.jar agent.jar
COPY <<EOF /opt/app/applicationinsights.json
{
"instrumentation": {}
}
EOF
And this is how I try to build it with Earthly:
base-image:
    FROM earthly/dind:alpine
    WORKDIR /build
    ENV DOCKER_BUILDKIT=1 # <---- required to support heredoc syntax
    COPY distroless-runtime-17.Dockerfile Dockerfile
    WITH DOCKER --allow-privileged
        RUN docker build . -t base-17-image
    END
While the WITH DOCKER RUN part executes successfully, I do not know how to use the result of the base-image target in other targets to package my app using the resulting base image. FROM base-17-image just fails as if the image does not exist (and that tag really does not exist: docker run base-17-image fails for the same reason).
It turned out to be very easy and natively supported:
The whole recipe is just two lines of code:
base-image:
    FROM DOCKERFILE -f distroless-runtime-17.Dockerfile .
and the result of the above step can be reused to package your application as: FROM +base-image
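A follow-up target can then build on it, for example (a sketch; the package target name, app.jar artifact, and image tag are hypothetical):
package:
    FROM +base-image
    COPY app.jar /opt/app/app.jar
    SAVE IMAGE my-app:latest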

Not able to see folder added in windows dockerfile

I am pulling a Windows servercore:ltsc2019 image as my base image, adding a folder to it, and creating my own image called "mygitlabpath/windows-2019". The contents of the Dockerfile are as follows:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
ADD folder-z c:/windows/system32/config/systemprofile/folder-z
SHELL ["powershell"]
RUN ls c:/windows/system32/config/systemprofile/folder-z
(at this step I see all the contents of folder-z)
Now I use the image I created and try to access c:/windows/system32/config/systemprofile/folder-z, but there is no such folder:
image: mygitlabpath/windows-2019
stages:
  - build
build:
  stage: build
  script:
    - ls c:/windows/system32/config/systemprofile/   # at this step I expect to see folder-z, but I don't
What is it that I'm missing? Any help is appreciated.
Thanks
You should use COPY instead of ADD. Works for me.
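In other words, a corrected Dockerfile would be (a sketch of the fix):
FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY folder-z c:/windows/system32/config/systemprofile/folder-z
ADD has extra semantics (URL downloads, archive extraction) that can behave surprisingly; COPY is the plain, recommended instruction for local files and directories.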

How to run multiple lambda functions when deploying as a Docker image?

What does the Dockerfile look like for AWS Lambda with a Docker image via aws-sam when declaring multiple functions/apps in template.yaml?
Here is the sample Dockerfile to run "a single app":
FROM public.ecr.aws/lambda/python:3.8
COPY app.py requirements.txt ./
RUN python3.8 -m pip install -r requirements.txt -t .
# Command can be overwritten by providing a different command in the template directly.
CMD ["app.lambda_handler"]
The Dockerfile itself looks the same. No changes needed there.
The presence of the CMD line in the Dockerfile looks like it needs to change, but that is misleading: the CMD value can be specified on a per-function basis in the template.yaml file.
The template.yaml file must be updated with information about the new function. You will need to add an ImageConfig property to each function, whose Command specifies the handler in the same way the CMD value otherwise would have.
You will also need to update each function's DockerTag value to be unique, though this may be a bug.
Here's the NodeJs "Hello World" example template.yaml's Resources section, updated to support multiple functions with a single Docker image:
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageConfig:
        Command: [ "app.lambdaHandler" ]
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
    Metadata:
      DockerTag: nodejs14.x-v1-1
      DockerContext: ./hello-world
      Dockerfile: Dockerfile
  HelloWorldFunction2:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageConfig:
        Command: [ "app.lambdaHandler2" ]
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello2
            Method: get
    Metadata:
      DockerTag: nodejs14.x-v1-2
      DockerContext: ./hello-world
      Dockerfile: Dockerfile
This assumes the app.js file has been modified to provide both exports.lambdaHandler and exports.lambdaHandler2. I assume the corresponding python file should be modified similarly.
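For instance, app.js might export both handlers like this (a minimal sketch; the response bodies are placeholders):
exports.lambdaHandler = async () => ({ statusCode: 200, body: 'hello' });
exports.lambdaHandler2 = async () => ({ statusCode: 200, body: 'hello2' });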
After updating template.yaml in this way, sam local start-api works as expected, routing /hello to lambdaHandler and /hello2 to lambdaHandler2.
This technically creates two separate Docker images (one for each distinct DockerTag value). However, the two images will be identical save for the tag, and based on the same Dockerfile, and the second image will therefore make use of Docker's cache of the first image.

Docker-compose - passing environment variables to Flask using script

This is my project structure on the host:
set_env_vars.sh
dev/
  docker-compose-dev.yml
  /services/
    web/
      .env-dev? <------
      project/
        config.py
        api/
          resources/
            auth.py
set_env_vars.sh
export SPOTIFY_CLIENT_ID=my_id
export SPOTIFY_CLIENT_SECRET=my_secret
export SPOTIFY_REDIRECT_URI=http://localhost
export SPOTIFY_CACHE_PATH=/project/api/auth/spotify/.cache
which I run like so:
$ source ./set_env_vars.sh
docker-compose-dev.yml
services:
  web:
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - SPOTIFY_CLIENT_ID=${SPOTIFY_CLIENT_ID}
      - SPOTIFY_CLIENT_SECRET=${SPOTIFY_CLIENT_SECRET}
      - SPOTIFY_REDIRECT_URI=${SPOTIFY_REDIRECT_URI}
      - SPOTIFY_CACHE_PATH=${SPOTIFY_CACHE_PATH}
config.py
import os

class DevelopmentConfig(BaseConfig):
    SPOTIFY_CLIENT_ID = os.environ.get('SPOTIFY_CLIENT_ID')
    SPOTIFY_CLIENT_SECRET = os.environ.get('SPOTIFY_CLIENT_SECRET')
    SPOTIFY_REDIRECT_URI = os.environ.get('SPOTIFY_REDIRECT_URI')
    SPOTIFY_CACHE_PATH = os.environ.get('SPOTIFY_CACHE_PATH')
auth.py
import spotipy

from project.config import DevelopmentConfig

sp = spotipy.Spotify(auth_manager=spotipy.oauth2.SpotifyOAuth(
    DevelopmentConfig.SPOTIFY_CLIENT_ID,
    DevelopmentConfig.SPOTIFY_CLIENT_SECRET,
    DevelopmentConfig.SPOTIFY_REDIRECT_URI,
    scope=DevelopmentConfig.SCOPE,
    cache_path=DevelopmentConfig.SPOTIFY_CACHE_PATH))
But I'm getting the following error:
spotipy.oauth2.SpotifyOauthError: No client_id. Pass it or set a SPOTIPY_CLIENT_ID environment variable.
What am I missing?
This certainly looks like you need to pass the variables with the SPOTIPY_ spelling (SPOTIPY, not SPOTIFY), as per the docs.
However I also noted that your code repeats the same variable names several times. Possibly this could lead to typos as you try to maintain the same variable names across several files.
A simpler way to approach this might be to have the variables contained in a .env-dev file:
SPOTIPY_CLIENT_ID=my_id
SPOTIPY_CLIENT_SECRET=my_secret
SPOTIPY_REDIRECT_URI=http://localhost
SPOTIPY_CACHE_PATH=/project/api/auth/spotify/.cache
Then load these in your docker-compose-dev.yml file:
services:
  web:
    env_file:
      - .env-dev
Then in your Python code you could do:
import os

import spotipy

from project.config import DevelopmentConfig

sp = spotipy.Spotify(auth_manager=spotipy.oauth2.SpotifyOAuth(
    os.environ.get('SPOTIPY_CLIENT_ID'),
    os.environ.get('SPOTIPY_CLIENT_SECRET'),
    os.environ.get('SPOTIPY_REDIRECT_URI'),
    scope=DevelopmentConfig.SCOPE,
    cache_path=os.environ.get('SPOTIPY_CACHE_PATH')))
This method has less repetition, although it bypasses your config.DevelopmentConfig object for these variables.
However, this method avoids loading the variables into the host's shell and instead sets them inside a specific service. It also separates secrets, so you can commit docker-compose.yml to source control.
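As a quick way to confirm the variables are being picked up, you can render the resolved configuration (note that env_file paths are resolved relative to the compose file):
docker-compose -f docker-compose-dev.yml config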

Docker-Compose: Composing with Dockerfiles that need relative imports

To begin, this is my project hierarchy:
myproj/
- commons1/
  - com1_file1.go
  - ...
- commons2/
  - com2_file1.go
  - ...
- module1/
  - mod1_file1.go
  - Dockerfile
  - ...
- module2/
  - mod2_file1.go
  - Dockerfile
  - ...
- docker-compose.yml
What I'd like is that when the module1 and module2 containers start up, they each have a copy of all the commonsN directories in their GOPATHs, so that each can access the common libraries exposed by each of the commonsN directories.
For example, I would like to see something like this in the container for module1:
/go/
- src/
  - commons1/
    - com1_file1.go
    - ...
  - commons2/
    - com2_file1.go
    - ...
  - module1/
    - mod1_file1.go
    - ...
The reason is that this is basically how my local GOPATH looks (with the addition of the other modules, of course), so that I can do something like this in my source files:
package main

import (
    "fmt"

    "myproj/commons1"
)

func main() {
    fmt.Println("Something from common library:", commons1.SomethingFromCommons)
}
From my naive understanding of Docker, it appears I'm not allowed to have my Dockerfiles do something along the lines of COPY ../commons1 /go/src/commons1, so I'm wondering how I would go about accomplishing this.
I would strongly prefer not to go the GitHub route, since the source code is all behind company proxies and whatnot, and I imagine configuring all of that would take far longer than simply copying some directories.
Edit
I have updated my docker-compose.yml file to look something like this per suggestion from barat:
version: '2'
services:
  module1:
    volumes:
      - ./commons1:/go/src/myproj/commons1
    build: module1/
Dockerfile for module1 looks like this:
FROM golang:1.8.0
RUN mkdir -p /go/src/app
WORKDIR /go/src/app
COPY . /go/src/app
RUN go get -d -v
RUN go install -v
ENTRYPOINT /go/bin/app
EXPOSE 8080
docker-compose build fails on the go get -d -v with error:
package myproj/commons1: unrecognized import path "myproj/commons1" (import path does not begin with hostname)
If myproj/commons1 had been copied into /go/src/, this shouldn't be an issue, right? So I'm guessing it hasn't been copied over?
You could build an image including commons1 and commons2 that your other images are based on.
FROM golang:1.8.0
RUN mkdir -p /go/src/myproj/commons1 && mkdir -p /go/src/myproj/commons2
COPY commons1/ /go/src/myproj/commons1/
COPY commons2/ /go/src/myproj/commons2/
The downside is this requires an external build step whenever you update one of the common projects:
docker build -t me/myproj:commons .
Then your compose apps can rely on the commons image instead of golang and build as normal without the volumes.
FROM me/myproj:commons
...
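A module's Dockerfile might then look something like this (a sketch reusing the go get/go install flow from the question; the myproj/module1 import path is an assumption):
FROM me/myproj:commons
# commons1 and commons2 are already in /go/src/myproj from the base image
COPY . /go/src/myproj/module1
WORKDIR /go/src/myproj/module1
RUN go get -d -v && go install -v
ENTRYPOINT ["/go/bin/module1"]
EXPOSE 8080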
So the problem was the go get -d -v command: it was complaining that myproj/commons1 wasn't present in $GOPATH/src. I suspect this is because Docker Compose doesn't mount volumes until the container runs, while go get runs during docker-compose build. I made a workaround in my docker-compose.yml, but it is far from elegant:
version: '2'
services:
  module1:
    volumes:
      - ./commons1:/go/src/myproj/commons1
    build: module1/
    ports:
      - "8080:8080"
    command: bash -c "go get -d -v && go install -v && /go/bin/app"
This is obviously far from ideal because my Go binary is rebuilt every time I do a docker-compose up regardless of whether or not I ran docker-compose build.
This is also problematic because I wanted to use dockerize for certain containers to wait until another container has started up completely, and that becomes quite messy now, I think.
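For reference, the dockerize-style wait I had in mind would sit in front of the command, roughly like this (a sketch; the db host and port are hypothetical):
command: dockerize -wait tcp://db:5432 -timeout 30s bash -c "go get -d -v && go install -v && /go/bin/app"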
