Docker: How to install memcached in container - caching

I am using the php:5.6-apache Docker image for local development. I am building a portal with CodeIgniter 3 and want to use memcached for caching, but it is not included by default in the php:5.6-apache image.
How can I install memcached in my container?
Right now I am trying to use a separate memcached container for this, but I haven't had any success yet. Any help will be appreciated.

With something like this:
docker-compose.yml
version: "3"
services:
php:
build: .
memcached:
image: memcached
You can point to memcached container from php container as this: memcached:11211
You will need to enable php-memcached mod in your current php Dockerfile.
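For example, a minimal sketch of that Dockerfile, assuming the official php:5.6-apache base image and the PECL memcached 2.2.0 release (the last series that supports PHP 5.x); depending on the PECL version you may need to pipe default answers into its build prompts:
FROM php:5.6-apache
# libmemcached headers and zlib are needed to build the extension
RUN apt-get update && apt-get install -y libmemcached-dev zlib1g-dev \
    && pecl install memcached-2.2.0 \
    && docker-php-ext-enable memcached
Point CodeIgniter's memcached cache config at hostname memcached and port 11211, matching the service name in docker-compose.yml.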

Related

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP, so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API lives in an external container, my app can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside the docker-compose.yml file for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. I was hoping someone familiar with Docker could explain, since I'm sure it's something very obvious I'm missing. Thanks!
By default, Docker Compose uses a bridge network to provide inter-container communication; see Docker's documentation on container networking for more detail.
What matters for you is that, by default, Docker Compose creates a hostname for each container that equals its service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both the mongo and server containers.
You can now access the backend container via:
docker exec -it server bash
Now you can reach the mongo container over Docker's internal network (on its default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
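Containers that live in separate Compose projects, as in your setup, do not share that default network. A minimal sketch of one way to connect them, assuming the network was created once with docker network create my-network (the service name below is illustrative): declare the network as external in each project's docker-compose.yml and attach the relevant services to it.
services:
  workspace:
    networks:
      - my-network
networks:
  my-network:
    external: true
With both projects attached to the same external network, docker exec -ti laradock-workspace-1 ping my-api should resolve the API container by its service name.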

Phoenix server doesn't render css and images with Docker

I'm new to Elixir and Phoenix, and since I have to work in a CI/CD environment, I'm trying to figure out how to use Phoenix with Docker.
I've tried various tutorials and videos out there; many of them don't work, and those that do all share the same result:
the Phoenix server doesn't seem to find some resources (the assets folder?).
But inside my Dockerfile I'm copying the entire app folder, and I can confirm that /assets is inside the container by attaching to it.
Dockerfile:
FROM elixir:alpine
RUN apk add --no-cache build-base git
WORKDIR /app
RUN mix local.hex --force && \
    mix local.rebar --force
COPY . .
RUN mix do deps.get, deps.compile
CMD ["mix", "phx.server"]
docker-compose.yml:
version: '3.6'
services:
  db:
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_HOST_AUTH_METHOD: trust
    image: 'postgres:11-alpine'
    restart: always
    volumes:
      - 'pgdata:/var/lib/postgresql/data'
  web:
    build: .
    depends_on:
      - db
    environment:
      MIX_ENV: dev
    env_file:
      - .env
    ports:
      - '4000:4000'
    volumes:
      - .:/app
volumes:
  pgdata:
Steps I'm taking to create the containers and run the server:
docker-compose build
docker-compose run web mix ecto.create
docker-compose up
The database is created successfully in the db container.
What could be happening here?
Sorry if it's straightforward; I haven't used Docker in a while, and I still don't fully understand the Phoenix boilerplate.
If you know of good resources about Docker and CI/CD pipelines with Phoenix, I'd appreciate those as well so I can study them.
You also need to build the assets: npm install --prefix assets. This has to be done after mix deps.get, but the mix deps.compile step isn't really needed, since you can start the server right after mix deps.get and it will compile the deps and your app automatically.
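A rough sketch of the Dockerfile with that step added (and the optional deps.compile dropped, per the note above), assuming Node.js and npm are installed into the image since elixir:alpine does not ship them; the package names below are the Alpine ones:
FROM elixir:alpine
# nodejs/npm are needed to install and build the Phoenix assets
RUN apk add --no-cache build-base git nodejs npm
WORKDIR /app
RUN mix local.hex --force && \
    mix local.rebar --force
COPY . .
RUN mix deps.get
RUN npm install --prefix assets
CMD ["mix", "phx.server"]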

Laravel Sail & Docker Adding Additional Sites (Multi Project)

Sail & Docker are now the recommended setup for Laravel 8.
I currently use Homestead, but I wanted to upgrade my system to the latest version 8, so I did the setup after installing Docker Desktop and Sail. Everything works at http://localhost, and Node.js, npm, MySQL and Redis are all ready to go.
What I want to learn is how multiple projects work in this Sail & Docker structure.
For example, in Homestead I was working with this config:
- map: homestead.test
  to: /home/vagrant/project1/public
- map: another.test
  to: /home/vagrant/project2/public
Thanks
You need to change the ports (MySQL, Redis, MailHog etc.) if you want to run multiple projects at the same time.
Application, MySQL, and Redis Ports
Add the desired ports to the .env file:
APP_PORT=81
FORWARD_DB_PORT=3307
FORWARD_REDIS_PORT=6380
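For reference, Sail's docker-compose.yml consumes these variables roughly like this (a sketch; the exact file differs between Sail versions):
services:
  laravel.test:
    ports:
      - '${APP_PORT:-80}:80'
  mysql:
    ports:
      - '${FORWARD_DB_PORT:-3306}:3306'
  redis:
    ports:
      - '${FORWARD_REDIS_PORT:-6379}:6379'
So APP_PORT=81 publishes the application on host port 81 while the container keeps listening on port 80 internally.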
MailHog Ports
Update MailHog ports in the docker-compose.yml file. Change these lines:
ports:
  - 1025:1025
  - 8025:8025
to this:
ports:
  - 1026:1025
  - 8026:8025
Once the containers have been started, you can access your application at http://localhost:81 and the MailHog web interface at http://localhost:8026.
Yes, you have to use non-conflicting ports for every Laravel Sail project.
If you want to use custom domains like you did in Homestead,
you can use an Nginx reverse proxy to serve multiple projects, just like Homestead does.
Here is my article: a step-by-step tutorial you can follow.
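As a rough sketch of that approach, assuming the nginxproxy/nginx-proxy image and a shared Docker network created beforehand with docker network create sail-proxy (the network name and domains here are illustrative), you run a single proxy container on port 80:
services:
  proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - '80:80'
    volumes:
      # lets the proxy discover containers and their VIRTUAL_HOST settings
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - sail-proxy
networks:
  sail-proxy:
    external: true
Each Sail application then gets a VIRTUAL_HOST environment variable (for example VIRTUAL_HOST=project1.test) and joins the same sail-proxy network, so the proxy routes each domain to the right project.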

How to deploy a Spring Boot RESTful Web Service Docker image to EC2?

What I'm trying to do is simple, deploy a Spring Boot RESTful Web Service to EC2 so it's accessible publicly.
For this I need to do the following:
Write Spring Boot web service, containerize and test locally - done
Here is my Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
When I docker run this locally on 8080 it works fine (returns static json).
Push to Docker Hub - done
Launch an Amazon Linux AMI on AWS and make it HTTP-accessible (port 80) - done
Install apache (httpd) and start - done
Here is where I need some help
I run the Docker image from Docker Hub like so:
docker run --rm -p 80:8080 kaspartr/demo
This doesn't work because, of course, port 80 is already taken by Apache. If I stop Apache and run the container, it is deployed, but I cannot access it online.
Can someone please explain how you deploy a Docker image behind Apache?
Do I need to change the Dockerfile or something else?
Thank you!!
Typically I run the application on a separate port and have Docker forward it.
Add to your application.properties:
server.port=9001
And add to docker-compose.yml:
version: '3'
services:
  your-service:
    build: .
    ports:
      - '9001:9001'
    environment:
      SERVICE_URL: http://service:9001/path
Since Apache has already taken port 80, you cannot have your container publish on port 80 as well.
My guess is that you are planning to use Apache as a reverse proxy to your container. Try publishing your container to another port, and proxying port 80 to the container using Apache.
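For example, a sketch using the image from the question, published on an arbitrary free host port (9001 here is only an example):
# run the container detached on a port Apache is not using;
# the app keeps listening on 8080 inside the container
docker run --rm -d -p 9001:8080 kaspartr/demo
Apache's mod_proxy can then forward port 80 to the container with ProxyPass / ProxyPassReverse directives pointing at http://127.0.0.1:9001/.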

How do I run a docker-compose container in interactive mode such that other containers see it?

The scenario: I set a breakpoint in code, which is mounted (as a volume) into a container, created by docker-compose. The code is an odoo module, so it's part of the odoo container.
There is also a webapp container, which has a link to odoo in order to use the API.
The odoo container doesn't expose the API port because the host doesn't need it; the webapp container can, of course, see it.
services:
  odoo:
    volumes:
      - localpath:/mnt/extra-addons-bundles
  webapp:
    links:
      - odoo
Of course, the purpose of the breakpoint is to allow me to control the application, so I need a TTY attached. I can do this with docker-compose run --rm odoo. However, when I do this it creates a new container, so the webapp never actually hits it. Plus, it doesn't tell me what the new container is called, so I have to figure that out manually.
I can use docker exec to run another odoo in the odoo container but then I have to run it on a new port, and thus change the webapp config to use this new instance.
Is there a way of achieving what I want, i.e. to run the odoo container in interactive mode, such that the webapp container can see it, and without reconfiguring the webapp container?
The answer is to use tty: true in the docker-compose file, and docker attach to actually get that process attached to your terminal.
Try this and see if it works:
services:
  odoo:
    volumes:
      - localpath:/mnt/extra-addons-bundles
    tty: true
    stdin_open: true
  webapp:
    links:
      - odoo
I have also added stdin_open in case you need it; if you don't, just remove it.
Edit-1
Also, if you need to attach to the running container, you need to use docker attach, since docker-compose doesn't have that functionality:
docker attach <containername>
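For example, a short sketch of that workflow (the project name below is illustrative; older Compose versions name containers <project>_<service>_<index>, newer ones use hyphens):
# find the name of the compose-created odoo container
docker-compose ps
# attach your terminal to its main process, so the breakpoint drops you into a prompt
docker attach myproject_odoo_1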
