Getting Quarkus working with podman-compose

I installed Quarkus, but it fails when trying to download resources. I have installed podman, podman-compose, podman-docker and podman-remote.
It looks like podman-compose is not being invoked by 'docker compose'. Is there another package I need to install or configure on RHEL 9 to use Quarkus?
% ./mvnw quarkus:dev
...
2023-01-31 09:27:25,288 INFO [🐳 [docker.io/postgres:14]] (docker-java-stream--933788147) Starting to pull image
2023-01-31 09:27:55,287 ERROR [🐳 [docker.io/postgres:14]] (testcontainers-pull-watchdog-1) Docker image pull has not made progress in 30s - aborting pull

docker.io had hit its anonymous pull rate limit. Logging in cleared the limit, and now the skeleton project works. It looks like Maven is actually using podman-compose correctly under the covers.
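For anyone hitting the same wall on RHEL 9, a minimal sketch of the two pieces that mattered for me (the socket wiring is my assumption about what Testcontainers needs with rootless podman, not something confirmed above):

# Log in so docker.io pulls are no longer anonymously rate-limited
podman login docker.io

# Expose the podman API socket so Docker-compatible clients such as Testcontainers can find it
systemctl --user enable --now podman.socket
export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock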

Related

Sequelize migrations - Google Cloud Build trigger

I am currently trying to host a TypeScript/Sequelize project on Google Cloud Build.
I am connecting through a Unix socket and the Cloud SQL proxy.
The app is deployed, and a test running "sequelize.authenticate()" seems to be working.
Migrations to localhost seem to be working as well.
I have written a Cloud Build trigger that does the following:
- builds a simple docker image
- pushes the simple docker image
- npm install
- downloads the cloud_sql_proxy
- initiates the cloud_sql_proxy
The next step would be to migrate a simple table to my Cloud SQL database.
Please check out my drawing for further details: https://excalidraw.com/#json=LnvpSjngbk7h1F0RzBgUP,HPwtVWgh-sFgrmvfU9JK0A
If I try to run "npx sequelize-cli db:migrate", Cloud Build gives the following message: ERROR: connect ENOENT /cloudsql/xxxxxxx/.s.PGSQL.5432
But if I replace the command with "npx sequelize-cli --version", it simply prints out the version and moves on with the rest of the trigger operations.
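No answer is recorded here, but the ENOENT on the /cloudsql/.../.s.PGSQL.5432 socket means the migration step cannot see the proxy's Unix socket. One common pitfall is that each Cloud Build step runs in its own container, so a proxy backgrounded in an earlier step is gone by the time db:migrate runs. A hedged sketch that starts the proxy and migrates within a single step, assuming the v1 proxy binary and a placeholder instance connection name:

# Start the proxy with an explicit socket directory, wait for the socket to appear, then migrate
./cloud_sql_proxy -dir=/cloudsql -instances=PROJECT:REGION:INSTANCE &
timeout 30 sh -c 'until [ -S /cloudsql/PROJECT:REGION:INSTANCE/.s.PGSQL.5432 ]; do sleep 1; done'
npx sequelize-cli db:migrate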

Error syncing pod on starting Beam - Dataflow pipeline from Docker

We are constantly getting an error while starting our Beam Golang SDK pipeline (driver program) from a Docker image, even though it works when started from a local machine / VM instance. We are using the Dataflow runner for our pipeline and Kubernetes to deploy.
LOCAL SETUP:
We have the GOOGLE_APPLICATION_CREDENTIALS variable set with a service account for our GCP cluster. When running the job locally, it gets submitted to Dataflow and completes successfully.
DOCKER SETUP:
The build image used is FROM golang:1.14-alpine. When we pack the same program with a Dockerfile and try to run it, it fails with the error:
User program exited: fork/exec /bin/worker: no such file or directory
On checking Stackdriver logs for more details, we see this:
Error syncing pod 00014c7112b5049966a4242e323b7850 ("dataflow-go-job-1-1611314272307727-
01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"),
skipping: failed to "StartContainer" for "sdk" with CrashLoopBackOff:
"back-off 2m40s restarting failed container=sdk pod=dataflow-go-job-1-
1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"
We found a reference to this error in the Dataflow common errors doc, but it is too generic to figure out what's failing. After multiple retries, we were able to rule out any permission / access related issues with the pods. Not sure what else could be the problem here.
After multiple attempts, we decided to start the job manually from a new Debian 10 based VM instance, and it worked. This brought to our notice that we were using an Alpine-based golang image in Docker, which may not have all the dependencies required to start the job.
On the golang Docker Hub page we found golang:1.14-buster, where buster is the codename for Debian 10. Using that image for the docker build solved the issue. Self-answering here to help anyone else facing the same issue.
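A minimal sketch of the change, assuming a typical Go build (paths and names are placeholders, not from the original post). A likely explanation for the fork/exec failure is that Alpine ships musl rather than glibc, so binaries expecting glibc fail to exec with exactly this "no such file or directory" message:

# Debian-based image instead of Alpine, per the self-answer above
FROM golang:1.14-buster
WORKDIR /app
COPY . .
RUN go build -o /bin/program .
ENTRYPOINT ["/bin/program"]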

GCloud: unable to listen on the port defined by the env variable

I am trying to deploy on Google Cloud Platform for the first time using the following two tutorials:
Gcloud build quickstart
Gcloud deploy quickstart
However, running the final command gcloud builds submit --config cloudbuild.yaml, where cloudbuild.yaml is the name of the YAML file as per the tutorial, throws the following error:
Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
The image created by the build quickstart is not appropriate for the deploy quickstart. The latter, using Cloud Run, needs something talking HTTP on port 8080.
If you use the deploy quickstart as-is, that should work. You can test this container image locally using:
docker run \
--interactive --tty \
--publish=8080:8080 \
gcr.io/gcbdocs/hello
and then try browsing or curling the endpoint http://localhost:8080. You should see Hello world!.
The error message from Cloud Run is somewhat generic and just means that something went wrong, so it is often unhelpful.
If you're confident you're deploying a container image that talks HTTP on port 8080, I recommend stepping through the instructions again to see where you went wrong.
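If you are testing your own image rather than the sample, remember that Cloud Run injects the port through the PORT environment variable (8080 by default), so a hedged local check looks like this (the image name is a placeholder):

# Run the container the way Cloud Run would, then probe it
docker run --env PORT=8080 --publish 8080:8080 YOUR_IMAGE
curl http://localhost:8080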

Spring Boot takes forever to start up in OpenShift

I'm running a Spring Boot 1.4.3 app in OpenShift Origin 1.3.
It takes more than 20 minutes for Spring Boot to come up.
The Docker base container I'm using is alpine:3.4 with openjdk8-jre.
The Spring Boot embedded container is the default Tomcat one. I've installed haveged and set -Djava.security.egd=file:/dev/./urandom,
but if I run the image itself with docker run (not using OpenShift), it starts up fine.
Any idea why?
Could it be that you don't have a Maven proxy set up and are downloading all the dependencies?
If that's the case, your logs will likely show that you are downloading the same dependencies over and over.
run this command to see the logs:
oc logs _POD_NAME_
Also, have you tried the same in the OpenShift Dev Preview, and did you get similar results?
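Since the question mentions haveged and the egd flag, it may also be worth ruling out entropy starvation directly; a quick check from inside the pod (the pod name is a placeholder, as above):

# Persistently tiny values here suggest the JVM is blocking on /dev/random during startup
oc rsh _POD_NAME_ cat /proc/sys/kernel/random/entropy_avail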

default fabric8 microservice errors out on integration test - Waiting for container:spring-boot. Reason:CrashLoopBackOff

Deployed fabric8 in Google Container Engine with 12 cores and 45 GB RAM, using gofabric8 0.4.69 for the deployment.
Tried to create a microservice, but it is failing in the integration-testing phase with the following error: "Waiting for container:spring-boot. Reason:CrashLoopBackOff".
Please help to resolve this.
Which quickstart were you trying?
It sounds like the application terminated. I wonder if this shows any output:
kubectl get pod
kubectl logs nameofpod
where nameofpod is the pod that is crashing.
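If the logs are empty, the pod's events often explain a CrashLoopBackOff (exit codes, restart counts, failed probes):

kubectl describe pod nameofpod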
BTW the new fabric8-maven-plugin version (3.1.45 or later) now has a nicer fabric8:run goal.
If you clone the git repository to your local file system and update the version of fabric8-maven-plugin you should be able to run it via:
mvn fabric8:run
Then you get to see the output of the Spring app in your console and can check whether something fails.
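If you would rather not edit the pom first, Maven can also invoke a specific plugin version directly via its fully qualified goal; a sketch using the version mentioned above:

mvn io.fabric8:fabric8-maven-plugin:3.1.45:run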
