Kafka on EC2 instance for integration testing - amazon-ec2

I'm trying to set up some integration tests for the part of our project that uses Kafka. I've chosen the spotify/kafka Docker image, which contains both Kafka and ZooKeeper.
I can run my tests (and they pass!) on a local machine if I run the Kafka container as described at that project site. When I try to run it on my EC2 build server, however, the container dies. The final fatal error is "INFO gave up: kafka entered FATAL state, too many start retries too quickly".
My suspicion is that it doesn't like the address passed in. I've tried the public and the private IP address that EC2 provides, as well as localhost, but the result is the same in every case.
Any help would be appreciated. Thanks!

It magically works now even though I'm still doing exactly what I was doing before. However, in order to help others who might come along, I will post what I did to get it to work.
I created the following shell script and have Jenkins run it as a build step.
#!/bin/bash
if ! docker inspect -f 1 test_kafka &>/dev/null
then
    docker run -d --name test_kafka -p 2181:2181 -p 9092:9092 \
        --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 \
        spotify/kafka
fi
Even though localhost resolves to the private IP address, it seems to accept it now. The if block just checks whether the container already exists, so an existing container is reused rather than started again.
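If the container still dies on a new machine, a quick sanity check along these lines (just a sketch; it assumes nc is available on the build server and uses the test_kafka name from the script above) can confirm the broker actually came up before the tests run:
# Show why Kafka/ZooKeeper exited if the container failed to start
docker logs test_kafka
# Wait until the broker port answers before kicking off the tests
for i in $(seq 1 30); do
    nc -z localhost 9092 && break
    sleep 1
done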

Related

Cloud Run: deploying spring docker image causing error; Failed to start and then listen on the port defined by the PORT environment variable

I am trying to deploy a Spring Boot Docker image stored in the Docker Registry to Cloud Run.
However, when I deployed the image, I got this error:
Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I understand this could be caused by the port and address settings, so I fixed those parts by referring to the official doc, but I am still getting the same error. Concretely, I set the following in application.yml:
server:
  port: ${PORT:8080}
  address: ${ADDRESS:localhost}
I understand that the PORT variable is passed in by Cloud Run (in my case, the port number is set to 8080 on Cloud Run), and that ADDRESS is passed in by me (the value is 0.0.0.0, per the official doc).
For reference, below is the Dockerfile that builds the Spring Boot Docker image:
# Stage1 - execute build process
FROM openjdk:14-jdk-alpine as build_process
WORKDIR /back_end
COPY . .
RUN ./gradlew build -x test
# Stage2 - boot app with the build output above
FROM openjdk:14-jdk-alpine
EXPOSE ${PORT}
COPY --from=build_process /back_end/build/libs/back_end-0.0.1-SNAPSHOT.jar ./app.jar
RUN adduser -D user
USER user
ENTRYPOINT ["sh","-c","java -jar app.jar"]
Any help would be really appreciated. Thank you so much for reading!
I figured out why my Docker image was failing Cloud Run's health check. It turned out not to be about the port or IP address, but about the timing of the health check.
The health check seems to start immediately once the image is deployed, but in my case it took almost 30 seconds to launch the Spring Boot Tomcat server after deploying to Cloud Run.
That caused the health check to fail, so I changed my settings so that the Tomcat server launches immediately, which solved the issue I posted.
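The answer doesn't show the exact setting that was changed, but as a sketch (my assumption, not necessarily the poster's fix), one way to get the embedded server listening sooner in Spring Boot 2.2+ is to defer bean creation with lazy initialization, so Tomcat can bind to the port before every bean is built:
spring:
  main:
    # Create beans on first use instead of at startup (assumes Spring Boot 2.2+),
    # so the embedded Tomcat binds to the port sooner
    lazy-initialization: true
server:
  port: ${PORT:8080}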

GCloud: unable to listen on the Port defined by env variable

I am trying to deploy on Google Cloud Platform for the first time using the following two tutorials:
Gcloud build quickstart
Gcloud deploy quickstart
However, running the final command gcloud builds submit --config cloudbuild.yaml, where cloudbuild.yaml is the name of the YAML file per the tutorial, throws the following error:
Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
The image created by the build quickstart is not appropriate for the deploy quickstart. The latter, which uses Cloud Run, needs something talking HTTP on port 8080.
If you use the deploy quickstart as-is, it should work. You can test its container image locally using:
docker run \
--interactive --tty \
--publish=8080:8080 \
gcr.io/gcbdocs/hello
and then try browsing to or curling http://localhost:8080. You should see Hello world!
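For example, with the container from the previous command still running:
curl http://localhost:8080
# -> Hello world!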
The error message from Cloud Run is somewhat generic and means that something went wrong. As a result it's often unhelpful.
If you're confident you're deploying a container image that talks HTTP on port 8080, I recommend you step through the instructions to try to see where you went wrong.

Cannot forward ports when running Linux container on Windows10 as a host

I'm new to Docker. I have been trying to deploy a Linux container (with Windows as the host) that runs a Google Cloud image. Everything seems to go well and at the end the server is running, but when I check the server using localhost in the browser, I get a blank page:
Blank page
This is the Dockerfile:
FROM google/cloud-sdk
ENV PATH /usr/lib/google-cloud-sdk/bin:$PATH
WORKDIR docker_folder
COPY local_folder/ .
RUN pwd
EXPOSE 8080
CMD ["java_dev_appserver.sh", "."]
This is the command I'm using to build my image (in the command prompt):
docker build --tag serverdeploy .
This is the command I'm using to run my container
docker run -p 8080:8080 serverdeploy
This is the output I got when I ran the server, which shows that the server is running.
I did some research and it looks like Docker had a problem with ports when you use a Linux container on Windows (not sure whether it has been solved). I've already tried all the solutions I found (even replacing 'localhost' with every IP I get when I run ipconfig in the command prompt), but I still get the same result.
As a last hope, I need your help to understand what I'm doing wrong, or whether I'm missing something.
You are running your service bound to localhost, which means no remote connections are accepted (the same applies to binding to 127.0.0.1), and from the container's point of view the host is a remote connection.
Change binding to 0.0.0.0 (which I guess is default) and enjoy.
Btw sharing your java_dev_appserver.sh would be helpful for answering the question.
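Without seeing java_dev_appserver.sh it's hard to say where the bind address is set, but as a sketch, the App Engine dev server accepts a bind-address option that could be passed straight from the Dockerfile (the exact flag name is an assumption here and may differ by SDK version; check java_dev_appserver.sh --help):
# Sketch: bind the dev server to all interfaces so the published port is
# reachable from the Windows host (flag name may differ by SDK version)
CMD ["java_dev_appserver.sh", "--address=0.0.0.0", "."]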

Fail to start tasks/services in Docker Swarm: hnsCall failed in Win32: The parameter is incorrect

I am trying the Docker Get Started tutorial, Part 3 (Services). At the part where I need to init a swarm and deploy a stack, all of my services end up with status rejected:
The full error (using --no-trunc) is:
hnsCall failed in Win32: The parameter is incorrect. (0x57)
Here are the steps I am doing:
I ensured my image is correct (docker run works fine, and I can access localhost:4000 successfully). I then stopped the container to make sure it does not interfere.
When I init the swarm, it says I have multiple addresses, so I chose one of them using --advertise-addr (I tried each of them, same result).
docker stack deploy works, but when I check the status with docker service ps, none of them are up. localhost:4000 has no listener.
Note: I switched Docker to a Windows container.
I am new to Docker and this is beyond me. Can anyone please suggest a solution or a way to debug this?
I tried everything but could not get it to run with Windows containers, so I switched to Linux containers. The Get Started Part 3 now runs fine.

Running a Docker image with cron

I am using an image from Docker Hub that uses cron to perform some actions at regular intervals. I have registered and pushed it, as described in the documentation, as a worker process (not a web process). It also requires several environment variables.
I've run it from the command line, e.g. docker run -t -e E_VAR1=VAL1 registry.heroku.com/image_name/worker, and it worked for a few days, then suddenly stopped, and I had to run the command again.
Questions:
Is this the correct way to run a Docker container (as a worker process) on Heroku?
Why might it stop running after a few days? Are there any logs to check?
Is there a way to restart the process automatically?
How do I properly set environment variables for the container on Heroku?
Thanks!
If you want this to run in the background, you should use the -d flag to detach it from stdin and stdout, rather than -t.
To check logs, use docker logs [container name or id]. You can find the container's name and id with docker ps -a. That should give you an idea as to why the container stopped.
To have the container restart automatically add the --restart always flag when you run it. Alternatively, use --restart on-failure to only restart when it exited with a nonzero exit code.
The way you set environment variables seems fine.
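Putting those pieces together, the run command might look something like the following sketch (the image name and E_VAR1 come from the question; the container name my_worker is just an example):
# Detached, auto-restarting worker with its environment variables set
docker run -d --restart always \
    --name my_worker \
    -e E_VAR1=VAL1 \
    registry.heroku.com/image_name/worker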
