Debug Odoo with Docker-compose in VS Code

I have a docker-compose file that runs Odoo 14 fronted by Nginx. I would like to debug my Odoo plugins with debugpy in VS Code using this docker-compose configuration. I managed to get debugpy installed in Docker using a "command" entry in the Odoo service's docker-compose configuration.
Unfortunately, I cannot manage to get Odoo to run "wrapped" by debugpy. I tried to override the entrypoint of the Odoo service to run debugpy and then the original entrypoint.sh, like so:
entrypoint: python3 -m debugpy --listen 0.0.0.0:8888 entrypoint.sh
but this did not work.
Any idea how I could have a docker-compose file that runs Odoo and Nginx, and debug in VS Code?
Help would be appreciated.
Thanks

Just in case someone needs it, the answer was actually quite simple. Instead of overriding the entrypoint file with mine, I just added the full command in my docker-compose file:
entrypoint: python3 -m debugpy --listen 0.0.0.0:8888 /usr/bin/odoo -c /etc/odoo.conf
and it was enough to launch debugpy and odoo.
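For reference, a minimal sketch of what the relevant service entry could look like (the service name, image tag, and port mappings here are illustrative assumptions, not taken from the original setup); VS Code can then attach with a standard "Python: Remote Attach" launch configuration pointed at the published debugpy port:

services:
  odoo:
    image: odoo:14
    # Replace the stock entrypoint so debugpy wraps the Odoo process:
    entrypoint: python3 -m debugpy --listen 0.0.0.0:8888 /usr/bin/odoo -c /etc/odoo.conf
    ports:
      - "8069:8069"  # Odoo web interface
      - "8888:8888"  # debugpy, the port VS Code attaches to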

Related

access logs on laravel sail

I'm trying out Laravel Sail for a local project and I can't for the life of me find an easy way to exec into the container and tail my logs. I'm sick of googling and finding nothing. Does anyone have a link to docs, or know the easiest way to accomplish normal dev work with Laravel Sail? I'm considering giving up on this technique and doing it the normal Docker way.
Sail is just a way to configure your Docker environment easily; you can still run every Docker command as normal, or even publish the Sail files and modify them for yourself (and then remove the package).
To enter a container, execute one of:
docker exec -it <container> bash
./vendor/bin/sail shell
./vendor/bin/sail root-shell
To tail the output of a container, you can run:
docker logs --follow <container>
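If it is the Laravel application log you want rather than the container output, a hedged sketch (assuming the default storage/logs/laravel.log location, and a Sail version that proxies unknown commands through to docker-compose):

# Container output, proxied through Sail to docker-compose:
./vendor/bin/sail logs -f
# The application log, from a shell inside the app container:
./vendor/bin/sail shell
tail -f storage/logs/laravel.log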

docker command not found when executed over bitbucket ssh pipeline

I'm using a Bitbucket pipeline to deploy my Laravel application. When I push to my repo it starts to build, and it works perfectly until the docker exec command, which sends an inline command to execute inside the php container. I get the error
bash: line 3: docker: command not found
which is very weird, because when I run the command directly on the same server in the same directory, it works perfectly. Docker is installed on the server, and as you can see inside execute.sh, docker-compose works with no issues; however, when running over the pipeline I get the error. Note the pwd, which makes sure the command is executed in the right directory.
bitbucket-pipelines.yml
image: php:7.3
pipelines:
  branches:
    testing:
      - step:
          name: Deploy to Testing
          deployment: Testing
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip openssh-client
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer require phpunit/phpunit
            - vendor/bin/phpunit laravel/tests/Unit
            - ssh do.server.net 'bash -s' < execute.sh
execute.sh looks like this:
cd /home/docker/docker-laravel
docker-compose build && docker-compose up -d
pwd
docker exec -ti php sh -c "php helpershell.php"
exit
And the output from the Bitbucket pipeline build looks like this:
Successfully built 1218483bd067
Successfully tagged docker-laravel_php:latest
Building nginx
Step 1/1 : FROM nginx:latest
---> 4733136e5c3c
Successfully built 4733136e5c3c
Successfully tagged docker-laravel_nginx:latest
Creating php ...
Creating mysql ...
Creating mysql ... done
Creating php ... done
Creating nginx ...
Creating nginx ... done
/home/docker/docker-laravel
bash: line 3: docker: command not found
I think part of the reason this is happening is that docker-compose and docker are two separate commands: just because one works does not mean they both work. Also, you might want to check the indentation of your bitbucket-pipelines.yml file, because YAML can be pretty finicky.
See here for sample structure: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
Are you defining docker as a service in the Bitbucket pipeline, according to the documentation, with a top-level definitions entry? Like so:
definitions:
  services:
    docker:
      memory: 512  # reduce memory for docker-in-docker from 1GB to 512MB
Alternatively, if docker is included and ready to use directly in the image the pipeline is running, you might try removing the services key from your step, as that could be conflicting with the docker installed on the image (and since you haven't instantiated the docker service via the top-level definitions entry posted above, the pipeline may end up in a state where it thinks docker isn't set up).
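For context, a sketch of how that top-level entry could sit alongside the keys already in the file (the memory value is just the example from above; everything else mirrors the pipeline in the question):

definitions:
  services:
    docker:
      memory: 512
image: php:7.3
pipelines:
  branches:
    testing:
      - step:
          services:
            - docker
          # ...rest of the step unchanged...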

ERR_EMPTY_RESPONSE from docker container

I've been trying to figure this out for the last few hours, but I'm stuck.
I have a very simple Dockerfile which looks like this:
FROM alpine:3.6
COPY gempbotgo /
COPY configs /configs
CMD ["/gempbotgo"]
EXPOSE 8025
gempbotgo is just a Go binary which runs a webserver and some other stuff.
The webserver is running on 8025 and should answer with a hello world.
My issue is with exposing ports. I ran my container like this (after building it)
docker run --rm -it -p 8025:8025 asd
Everything seems fine, but when I try to open 127.0.0.1:8025 in the browser or try a wget, I just get an empty response.
Chrome: ERR_EMPTY_RESPONSE
The port is used and not restricted by the firewall on my Windows 10 system.
Running the go binary without container just on my "Bash on Ubuntu on Windows" terminal and then browsing to 127.0.0.1:8025 works without a hitch.
Other addresses, like 127.0.0.1:8030, returned "ERR_CONNECTION_REFUSED", so there definitely is something active on the port.
I then went into the container with
docker exec -it e1cc6daae4cf /bin/sh
and checked in there with a wget what happens. No issues there either; the index.html file gets downloaded with a "Hello World".
Any ideas why Docker is not sending any data? I've also run my container with docker-compose, but no difference there.
I also ran the container on my VPS hosted externally. Same issue there... (Debian)
My code: (note the Makefile)
https://github.com/gempir/gempbotgo/tree/docker
Edit:
After getting some comments I changed my Dockerfile to a multi-stage build. This is my Dockerfile now:
FROM golang:latest
WORKDIR /go/src/github.com/gempir/gempbotgo
RUN go get github.com/gempir/go-twitch-irc \
&& go get github.com/stretchr/testify/assert \
&& go get github.com/labstack/echo \
&& go get github.com/op/go-logging
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY configs ./configs
COPY --from=0 /go/src/github.com/gempir/gempbotgo/app .
CMD ["./app"]
EXPOSE 8025
Sadly this did not change anything. I kept everything as close as possible to the guide here: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
I have also tried the minimalist Dockerfile from golang.org, which looks like this:
FROM golang:onbuild
EXPOSE 8025
But no success either with that.
Your issue is that you are binding to 127.0.0.1:8025 inside your code. This makes the code work from inside the container, but not from outside.
You need to bind to 0.0.0.0:8025 to bind to all interfaces inside the container, so that traffic coming from outside the container is also accepted by your Go app.
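As an illustration, a minimal sketch using the standard library's net/http (not the asker's echo-based code, but the same principle applies there):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello World")
	})
	// Binding to 127.0.0.1 only accepts connections from inside the
	// container's own network namespace, so -p 8025:8025 gets no answer:
	// log.Fatal(http.ListenAndServe("127.0.0.1:8025", nil))
	// Binding to all interfaces makes the published port reachable:
	log.Fatal(http.ListenAndServe("0.0.0.0:8025", nil))
}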
Adding to the accepted answer: I had the same error message trying to run docker/getting-started.
The problem was that "getting-started" uses port 80, and this was "occupied" (netsh http show urlacl) on my machine.
I had to use docker run -d -p 8888:80 docker/getting-started, where 8888 was an unused port, and then open "http://localhost:8888/tutorial/".
I had the same problem with a dockerized GatsbyJS app. As Tarun Lalwani's comment above suggests, I resolved the problem by binding to 0.0.0.0 as the hostname:
yarn develop -P 0.0.0.0 -p 8000
For me this was a problem with the docker swarm mode ingress network. I had to recreate it. https://docs.docker.com/network/overlay/#customize-the-default-ingress-network
Another possible reason you are getting that error is that the docker run command was run from a normal cmd prompt and not an admin command prompt. Make sure you run it as admin!

Use environment variables in docker

I have implemented a Docker project for automated setup. I use Docker 1.9 on Ubuntu Server and utilize the build-arg feature. I use it to set a dynamic subdomain in the Apache virtual hosts file:
docker build --no-cache --build-arg domain=demo1.myapp.com -t imagename .
docker run -d -p 8080:80 imagename
I use domain and replace it in the virtual hosts file using a sed command in my script file:
sed -i -e "s/defaulthost.com/$domain/g" /etc/apache2/sites-enabled/myApp.conf
My Dockerfile had this code:
ARG domain
RUN /bin/sh /script.sh $domain
Now I need to migrate the application to AWS, where I get an Amazon Linux AMI. But there the supported Docker version is 1.7, which does not support build-arg. I tried to upgrade, but a lot of dependencies block me.
Now I have decided to use ENV environment variables instead, like below:
docker run -d -p 8080:80 -e domain=demo1.myapp.com imagename
I also changed my Dockerfile accordingly:
RUN /bin/sh /script.sh
But this does not seem to work in my scenario: at build time the sed script replaces the placeholder in the Apache file with an empty value, and the build process fails.
Is this not possible without build-arg, or am I going the wrong way about setting/using ENV?
First, AWS can support Docker 1.9.
See for instance "Getting overlay networking to work in AWS with Docker 1.9":
use a Docker Machine version 0.5.2-dev, as explained here
use the right AMI (Amazon Machine Image), Ubuntu 15.10
set up the AWS environment variables
If you choose to remain with an old AMI and its Docker 1.7, then the -e option is for runtime only (creating/running containers), not build time (images).
That means it would work if your ENTRYPOINT or CMD were /script.sh, with the script using $domain (and then launching your main process).
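A minimal sketch of what that could look like (the sed line and the myApp.conf path come from the question; the final exec line is an assumption about how the main process is launched):

#!/bin/sh
# /script.sh, now run at container start instead of at build time, so the
# runtime value of $domain (passed via docker run -e) is available:
sed -i -e "s/defaulthost.com/${domain}/g" /etc/apache2/sites-enabled/myApp.conf
# Hand over to the main process (assumed here to be Apache in the foreground):
exec apache2ctl -D FOREGROUND

with ENTRYPOINT ["/bin/sh", "/script.sh"] in the Dockerfile, and the docker run -e command above left unchanged.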

AWS docker set --no-cache flag

I am using EB on AWS to deploy a Dockerfile.
Currently I deploy two scripts: the Dockerfile, and a run.sh file which starts a server.
The Dockerfile roughly looks like this:
FROM ubuntu:14.04
MAINTAINER xy
[...install a java server...]
ADD run.sh /run.sh
RUN chmod +x /*.sh
EXPOSE 8080
CMD ["/run.sh"]
run.sh starts the Java server.
I would like to set the --no-cache flag for the Docker build. Where can I set that?
You can't specify docker build's --no-cache flag because EB doesn't allow you to.
A workaround is to build the image locally (using --no-cache), then use docker push to publish your image to the Docker Hub public registry.
Your Dockerfile could be simplified (untested) down to:
FROM custom_java_server_build:latest
MAINTAINER xy
EXPOSE 8080
CMD ["/run.sh"]
It does sound like you're creating a large image; you might be able to mitigate this by turning the entire install sequence into a single RUN statement. Don't forget to delete all your temporary files too.
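A sketch of that workaround (the repository name mirrors the example above; the youruser account prefix is a placeholder):

# Build locally with the cache disabled, then publish to Docker Hub:
docker build --no-cache -t youruser/custom_java_server_build:latest .
docker push youruser/custom_java_server_build:latest

The Dockerfile deployed to EB would then start from FROM youruser/custom_java_server_build:latest.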
You can use --no-cache only in the docker build step. If the run script doesn't build the image, then you need to find what is building it and tell that process to use --no-cache.
