Passing active Spring profile to AWS ECS Task Container with Command option - spring

I am using AWS CodeBuild to build my Spring Boot application as a Docker image and store it in Elastic Container Registry. The following is an excerpt from my Dockerfile:
#run the app
ENTRYPOINT ["java","-jar","/app.jar"]
The build stage works fine and a Docker image is created in ECR. I want to use this same Docker image for the staging and production environments, and in order to do that I have to set the right Spring profile when the Docker container is started. I have tried passing the Spring profile through the Command option of the ECS Task Container like below, but with no luck:
-Dspring.profiles.active=test
-Dspring.profiles.active,test
"-Dspring.profiles.active=test"
I know this can be done in the ENTRYPOINT command, but I need to do it dynamically when the container starts. Can anyone advise on the right way to pass a Spring profile to an ECS Task Container?

The following is the correct command-line argument for setting the active profile through the Command option of the ECS Task Container:
--spring.profiles.active=test
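For context, a minimal sketch of where that argument goes in the container definition (only the relevant fields are shown; the image placeholder reuses the one from the build). Since the image's ENTRYPOINT is java -jar /app.jar, whatever is placed in command is appended as program arguments, and Spring Boot picks up --spring.profiles.active=test from the command line:
{
  "containerDefinitions": [
    {
      "name": "spring-boot-app",
      "image": "%REPOSITORY_URI%",
      "command": ["--spring.profiles.active=test"]
    }
  ]
}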

You should use environment variables in the container definition to pass Spring Boot profiles dynamically; you don't need to set the profile in the Dockerfile's ENTRYPOINT.
{
  "family": "spring-boot-app",
  "taskRoleArn": "99895575095",
  "containerDefinitions": [
    {
      "image": "%REPOSITORY_URI%",
      "name": "spring-boot-app",
      "cpu": 10,
      "memoryReservation": 300,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 9020,
          "hostPort": 9020
        }
      ],
      "environment": [
        {
          "name": "spring.profiles.active",
          "value": "uat"
        }
      ]
    }
  ]
}
You can also parameterize the environment if you are using a shell script or AWS CLI commands, but the variable needs the extra quoting shown below (the single-quoted JSON is closed around the double-quoted variable):
{"environment":[{"name":"spring.profiles.active","value":"'"$ENVIRONMENT"'"}]}

Related

Building a Docker Image with the Spring Boot Gradle Plugin and Colima

I'm trying to create a docker image of a Spring Boot application using the Gradle plugin. I'm using Spring Boot 2.6.4 and Gradle 7.1.1.
I'm on a Mac, and I don't have Docker Desktop installed. Instead, I run Docker using Colima.
The problem is that I cannot build the docker image with the command ./gradlew bootBuildImage since Gradle cannot find the docker daemon:
Connection to the Docker daemon at 'localhost' failed with error "[2] No such file or directory"; ensure the Docker daemon is running and accessible
Is there any configuration I have to do in Colima or my build.gradle file?
Colima creates a socket in the location ~/.colima/docker.sock by default. Running the command docker context ls should show a context named colima with the socket location shown in the DOCKER ENDPOINT column.
You can configure the Spring Boot Gradle plugin to use this socket by setting the DOCKER_HOST environment variable to unix:///Users/<user>/.colima/docker.sock or by adding the following to your build file as shown in the documentation.
tasks.named("bootBuildImage") {
    docker {
        host = "unix:///Users/<user>/.colima/docker.sock"
    }
}
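Alternatively, set the environment variable mentioned above before invoking the build; replace <user> with your macOS user name:
export DOCKER_HOST=unix:///Users/<user>/.colima/docker.sock
./gradlew bootBuildImage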

Cannot run Spring Boot application in Docker (getting ERR_EMPTY_RESPONSE) in Windows 10?

I have a problem with my Spring Boot Application running in Docker.
Here is the Dockerfile embedded in my app:
FROM adoptopenjdk:11-jre-hotspot
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","/app-0.0.1-SNAPSHOT.jar"]
After I run mvn clean install, I get app-0.0.1-SNAPSHOT.jar, which is the jar referenced in the Dockerfile above.
Next, I ran docker build -t app . and saw the image appear in Docker Desktop.
After running docker image ls, I also saw the image in the list.
I ran docker run -p 9999:8080 app to run it in Docker.
The container worked flawlessly according to docker ps.
Next, I tested a URL like http://localhost:9999/getCategoryById/1 instead of http://localhost:8080/getCategoryById/1 in Postman, but I got the message "Could not send request". When I tested the URL in the browser, I got ERR_EMPTY_RESPONSE.
I found the container's IP address via docker inspect container_id and then used http://172.17.0.2:9999/getCategoryById/1, but nothing changed.
I also checked whether the IP address receives packets (ping 172.17.0.2), but I got a "Request timed out" message.
Here is my project link : Link
How can I fix my issue?
In your application, the server.port property in the application.properties file, which configures the port of the Spring Boot embedded Tomcat server, is set to 8082.
To make the application listen on container port 8080, you need to override the server.port property. One way to override the property is with an environment variable, like below,
docker run -e SERVER_PORT=8080 -p 9999:8080 app
where SERVER_PORT corresponds to the container port specified in -p <hostPort>:<containerPort>
The other option is to update the property directly in the application.properties file, as shown below. After the update, you can use the same command you've been using to run the Docker image: docker run -p 9999:8080 app
server.port=8080
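With either option the application should answer on the mapped host port. A quick check, assuming the endpoint from the question and running the image in the background:
docker run -d -e SERVER_PORT=8080 -p 9999:8080 app
curl http://localhost:9999/getCategoryById/1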

Creating a large number of Linux machines fails with provisioning state `Canceled`

I'm trying to deploy a large number of Linux machines using the Azure CLI (v2.0.7) with this bash script:
#!/usr/bin/env bash
number_of_servers=12
for i in `seq 1 1 ${number_of_servers}`;
do
az vm create --resource-group Automationsystem --name VM${i} --image BaseImage --admin-username azureuser --size Standard_F4S --ssh-key-value ~/.ssh/mykey.pub &
done
The machines are created from a custom image.
When I ran it, I got the following error:
The resource operation completed with terminal provisioning state 'Canceled'.The operation has been preempted by a more recent operation
I tried to create fewer machines, but the error persists.
I looked at this question, but it did not solve my problem.
Can I create the machines from a custom image?
Yes, we can use a custom image to create an Azure VM scale set (VMSS).
We can use this template to deploy a VMSS with a custom image:
"sourceImageVhdUri": {
"type": "string",
"metadata": {
"description": "The source of the blob containing the custom image, must be in the same region of the deployment."
}
},
We should store the image in an Azure storage account first.
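For reference, a rough sketch of how that parameter is typically consumed in the scale set's virtualMachineProfile; the exact layout depends on the template, so treat the osDisk name and caching mode as placeholders:
"storageProfile": {
  "osDisk": {
    "name": "vmssosdisk",
    "caching": "ReadOnly",
    "createOption": "FromImage",
    "image": {
      "uri": "[parameters('sourceImageVhdUri')]"
    }
  }
}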

Is there a way to get Docker daemon REST API calls?

I'm looking for a way to see the contents of the requests the Docker CLI makes to the Docker Daemon for image creation, container creation etc. Is this possible to see, and if so, how?
You can run the Docker daemon with debugging enabled, which will show the URL being requested, but not the contents of the request. In /etc/docker/daemon.json, you can configure debugging with:
{ "debug": true }
Then restart the service. On systemd-based systems, you can see the logs with journalctl -u docker.
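On a systemd-based host that amounts to roughly the following (assuming sudo access):
sudo systemctl restart docker   # pick up the new daemon.json
sudo journalctl -u docker -f    # follow the daemon logs, including the requested API URLs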

Chronos setting environment variable leads to error

I tried this both in the Chronos config and in my job definition:
"environmentVariables": [
{
"name": "DOCKER_API_VERSION",
"value": "$(docker version --format '{{.Server.Version}}')"
}
],
It always fails with:
docker: Error response from daemon: 404 page not found.
See 'docker run --help'.
The reason I'm trying to set that variable is that I'm running Docker in Docker, and the client Docker API sometimes has a different version than the server Docker version, so it has to be started with the DOCKER_API_VERSION environment variable set in order to work.
I suspect it's because a computed value is being set instead of a plain string value.
In the logs I can see it runs as expected, and to be honest I don't know why it crashes:
docker run ... -e DOCKER_API_VERSION=$(docker version --format '{{.Server.Version}}') ...
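As a rough illustration of the suspicion above (a sketch, not from the original post; it assumes a local Docker CLI and the alpine image): if the $(...) expression is never expanded by a shell, the literal text becomes the variable's value inside the container, and a Docker client there would build API paths from that bogus version string, which is consistent with the daemon answering 404 page not found.
# The single quotes keep the local shell from expanding the expression,
# mimicking an orchestrator that passes the value through literally:
docker run --rm -e 'DOCKER_API_VERSION=$(docker version --format "{{.Server.Version}}")' alpine env | grep DOCKER_API_VERSION
# DOCKER_API_VERSION=$(docker version --format "{{.Server.Version}}")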
