When I share a folder between my host and my containers, files edited in Sublime are not synced inside the containers.
I'm using Docker version 1.13.0, build 49bf474, and I have tried many fixes suggested in GitHub issues, but none of them worked for me.
I'm sharing my C drive with the Docker host, and my Compose file is configured like this:
uwsgi:
  build: .
  links:
    - postgres
  command: ./uwsgi.sh
  env_file: .env
  volumes:
    - /static
    - /data/media:/media
    - ./api:/app
The ./api:/app volume is mounted, but when I change something the change is not reflected in the container, so I can't use it for development.
Here is the docker inspect output for this container (Mounts/Volumes):
"Mounts": [
{
"Type": "bind",
"Source": "/C/Users/tif/projetos/my/jl.api/api",
"Destination": "/app",
"Mode": "rw",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/data/media",
"Destination": "/media",
"Mode": "rw",
"RW": true,
"Propagation": ""
},
{
"Type": "volume",
"Name": "b931d6d30c2b8e1bcdc2a20d5e6d2c27dd515c5041d2ea64ca01b5dc08047879",
"Source": "/var/lib/docker/volumes/b931d6d30c2b8e1bcdc2a20d5e6d2c27dd515c5041d2ea64ca01b5dc08047879/_data",
"Destination": "/static",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
"Volumes": {
"/app": {},
"/media": {},
"/static": {}
},
These are the things I have already tried:
atomic_save: false (Sublime)
nginx.conf with sendfile off;
Has anyone experienced this?
After some research, I realized I was using uWSGI in my development environment, and I couldn't get my app to reload without py-autoreload.
All I had to do was start uWSGI with py-autoreload set to 2, and my app started reloading.
I'm now starting uWSGI in Docker with this command:
/usr/local/bin/uwsgi --socket :5000 --wsgi-file ......... --py-autoreload 2
This article may be useful if you are experiencing the same issue: http://chase-seibert.github.io/blog/2014/03/30/uwsgi-python-reload.html
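For reference, here is a minimal sketch of the same setting expressed as a uwsgi.ini file; the socket and wsgi-file values are assumptions mirroring the command line above, not the exact configuration from the project:

[uwsgi]
; assumed values, taken from the command line shown above
socket = :5000
wsgi-file = app/wsgi.py
; scan for Python file changes every 2 seconds and reload the workers
py-autoreload = 2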
Related
I am trying to run an ECS task that contains 3 containers: postgres, redis, and an image from a private ECR repository. The custom image's container definition has a command that waits, via a bash loop, until the postgres container can receive traffic:
"command": [
"/bin/bash",
"-c",
"while !</dev/tcp/postgres/5432; do echo \"Waiting for postgres database to start...\"; /bin/sleep 1; done; /bin/sh /app/start-server.sh;"
],
When I run this via docker-compose on my local machine it works, but on the Amazon Linux 2 EC2 machine the following is printed when the while loop runs:
/bin/bash: line 1: postgres: Name or service not known
/bin/bash: line 1: /dev/tcp/postgres/5432: Invalid argument
The postgres container runs without error, and the last log line from that container is:
database system is ready to accept connections
I am not sure whether this is a Docker network issue or an issue with Amazon Linux 2's bash not being compiled with --enable-net-redirections, which I found explained here.
Task Definition:
{
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "POSTGRES_DB",
          "value": "metadeploy"
        },
        {
          "name": "POSTGRES_USER",
          "value": "<redacted>"
        },
        {
          "name": "POSTGRES_PASSWORD",
          "value": "<redacted>"
        }
      ],
      "essential": true,
      "image": "postgres:12.9",
      "mountPoints": [],
      "name": "postgres",
      "memory": 1024,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "metadeploy-postgres",
          "awslogs-region": "us-east-1",
          "awslogs-create-group": "true",
          "awslogs-stream-prefix": "mdp"
        }
      }
    },
    {
      "essential": true,
      "image": "redis:6.2",
      "name": "redis",
      "memory": 1024
    },
    {
      "command": [
        "/bin/bash",
        "-c",
        "while !</dev/tcp/postgres/5432; do echo \"Waiting for postgres database to start...\"; /bin/sleep 1; done; /bin/sh /app/start-server.sh;"
      ],
      "environment": [
        {
          "name": "DJANGO_SETTINGS_MODULE",
          "value": "config.settings.local"
        },
        {
          "name": "DATABASE_URL",
          "value": "<redacted-postgres-url>"
        },
        {
          "name": "REDIS_URL",
          "value": "redis://redis:6379"
        },
        {
          "name": "REDIS_HOST",
          "value": "redis"
        }
      ],
      "essential": true,
      "image": "the private ecr image uri built from here https://github.com/SFDO-Tooling/MetaDeploy",
      "links": [
        "redis"
      ],
      "mountPoints": [
        {
          "containerPath": "/app/node_modules",
          "sourceVolume": "AppNode_Modules"
        }
      ],
      "name": "web",
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 8080
        },
        {
          "containerPort": 8000,
          "hostPort": 8000
        },
        {
          "containerPort": 6006,
          "hostPort": 6006
        }
      ],
      "memory": 1024,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "metadeploy-web",
          "awslogs-region": "us-east-1",
          "awslogs-create-group": "true",
          "awslogs-stream-prefix": "mdw"
        }
      }
    }
  ],
  "family": "MetaDeploy",
  "volumes": [
    {
      "host": {
        "sourcePath": "/app/node_modules"
      },
      "name": "AppNode_Modules"
    }
  ]
}
The corresponding docker-compose.yml contains:
version: '3'
services:
  postgres:
    environment:
      POSTGRES_DB: metadeploy
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: sample_db_password
    volumes:
      - ./postgres:/var/lib/postgresql/data:delegated
    image: postgres:12.9
    restart: always
  redis:
    image: redis:6.2
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: |
      /bin/bash -c 'while !</dev/tcp/postgres/5432; do echo "Waiting for postgres database to start..."; /bin/sleep 1; done; \
      /bin/sh /app/start-server.sh;'
    ports:
      - '8080:8080'
      - '8000:8000'
      # Storybook server
      - '6006:6006'
    stdin_open: true
    tty: true
    depends_on:
      - postgres
      - redis
    links:
      - redis
    environment:
      DJANGO_SETTINGS_MODULE: config.settings.local
      DATABASE_URL: postgres://postgres:sample_db_password@postgres:5432/metadeploy
      REDIS_URL: redis://redis:6379
      REDIS_HOST: redis
    volumes:
      - .:/app:cached
      - /app/node_modules
Do I need to recompile bash to use --enable-net-redirections, and if so how can I do that?
Without bash's net redirection feature, your best bet is to use something like nc or netcat (if available) to determine if the port is open. If those aren't available, it may be worth modifying your app logic to better handle database failure cases.
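For illustration, here is a hedged variant of the wait loop using nc instead of the /dev/tcp redirection; it assumes a netcat build that supports the -z (probe-only) flag is available in the image:

# Keep probing the postgres port until it accepts connections, then start the app
until nc -z postgres 5432; do
  echo "Waiting for postgres database to start..."
  sleep 1
done
/bin/sh /app/start-server.sh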
Alternatively, a potentially better approach would be:
Adding a healthcheck to the postgres image.
Modifying the web service's depends_on clause to use the "long syntax" and add a dependency on postgres being service_healthy instead of the default service_started (see the sketch below).
This approach has two key benefits:
The postgres image likely has the tools to detect if the database is up and running.
The web service no longer needs to manually check if the database is ready or not.
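As an illustration only, here is a minimal docker-compose sketch of this approach; the pg_isready healthcheck command and the interval/retry values are assumptions, not taken from the project:

services:
  postgres:
    image: postgres:12.9
    healthcheck:
      # pg_isready ships with the postgres image; the user name is an assumption
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10
  web:
    # requires a Compose version that supports the depends_on long syntax with conditions
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started

On the ECS side, container definitions support a similar pair of settings (a healthCheck on the postgres container and a dependsOn entry with condition HEALTHY on the web container), so the same idea should carry over to the task definition.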
I'm using Laravel, and I want to deploy it to ECS with a blue/green (B/G) deployment to see how it works.
Laravel runs fine in my development environment, and I was able to launch my Laravel project on EC2 using Docker.
Now I want to use Fargate for the first time and deploy to ECS!
Also, CodeBuild has completed successfully.
appspec.yml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "<TASK_DEFINITION>"
        LoadBalancerInfo:
          ContainerName: "nginx"
          ContainerPort: "80"
taskdef.json
{
  "taskRoleArn": "arn:aws:iam::**********:role/ecsTaskExecutionRole",
  "executionRoleArn": "arn:aws:iam::**********:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/****-system",
          "awslogs-region": "******",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": [
        "sh",
        "-c"
      ],
      "command": [
        "php artisan config:cache && php artisan migrate && chmod -R 777 storage/ && chmod -R 777 bootstrap/cache/"
      ],
      "cpu": 0,
      "environment": [
        {
          "name": "APP_ENV",
          "value": "staging"
        }
      ],
      "workingDirectory": "/var/www/html",
      "image": "<IMAGE1_NAME>",
      "name": "php"
    },
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/****-system",
          "awslogs-region": "****",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "portMappings": [
        {
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "environment": [
        {
          "name": "APP_ENV",
          "value": "staging"
        }
      ],
      "workingDirectory": "/var/www/html",
      "image": "**********.dkr.ecr.**********.amazonaws.com/**********-nginx:latest",
      "name": "nginx"
    }
  ],
  "placementConstraints": [],
  "memory": "2048",
  "family": "*****-system",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "volumes": []
}
CodeDeploy stopped at INSTALL, and there are no errors.
As you can see in the screenshots, "<TASK_DEFINITION>" has been replaced.
I'd like to know if there's any information I'm missing.
I'm not sure how to set environment variables such as those in ".env", so I'm wondering if this might be the cause.
(Screenshots: CodeDeploy Failed, Revision, Task Definitions, ECR, ECR nginx, ECR src(laravel))
If you want to change the .env file to set environment variables, you can open an SSH connection to your web server and run nano .env in the project root to edit the file.
You can also modify the file over an FTP connection.
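A minimal sketch of that workflow; the host name is a placeholder, and the project path is assumed from the task definition's workingDirectory:

ssh user@your-web-server        # placeholder host and user
cd /var/www/html                # assumed project root
nano .env                       # edit the environment variables
php artisan config:cache        # rebuild Laravel's cached config so the new values take effect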
As per the Dynamic system host example, how can I configure debugging in VS Code? I am able to debug the host application but not the remote one.
Got it working. There was a # character before the folderName which I was missing, and I stumbled upon it when I debugged the same app in WebStorm.
The application structure is:
remote
.vscode
workspace-launch-config
host
So the working launch config in VS Code is:
"launch": {
"version": "0.2.0",
"configurations": [
{
"type": "pwa-msedge",
"request": "launch",
"name": "Debug M3 # 8080",
"trace": true,
"url": "http://localhost:8080",
"webRoot": "${workspaceFolder:host}",
"sourceMaps": true,
"sourceMapPathOverrides": {
"webpack://#${workspaceFolderBasename:remote}/*": "${workspaceFolder:remote}/packages/*",
"webpack://${workspaceFolderBasename:host}/*": "${workspaceFolder:host}/*"
}
}
]
}
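For illustration, assuming the remote folder is literally named remote, the first override expands roughly as follows (the file path is made up):

webpack://#remote/src/Button.jsx  ->  ${workspaceFolder:remote}/packages/src/Button.jsx

which is why the missing # in the pattern broke the source map resolution for the remote app.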
Newbie to Microservices here.
I have been looking into developing a microservice with Spring Actuator while using Consul for service discovery and failure recovery.
I have configured a cluster as explained in Consul documentation.
Now what I'm trying to do is configure a Consul watch that triggers when any of my services goes down and executes a shell script to restart the service. The following is my configuration file.
{
  "bind_addr": "127.0.0.1",
  "datacenter": "dc1",
  "encrypt": "EXz7LsrhpQ4idwqffiFoQ==",
  "data_dir": "/data",
  "log_level": "INFO",
  "enable_syslog": true,
  "enable_debug": true,
  "enable_script_checks": true,
  "ui": true,
  "node_name": "SpringConsulClient",
  "server": false,
  "service": {
    "name": "Apache",
    "tags": ["HTTP"],
    "port": 8080,
    "check": { "script": "curl localhost >/dev/null 2>&1", "interval": "10s" }
  },
  "rejoin_after_leave": true,
  "watches": [
    {
      "type": "service",
      "handler": "/Consul-Script.sh"
    }
  ]
}
Any help/tip would be greatly appreciated.
Regards,
Chrishan
Take a closer look at the description of the service watch type in the official documentation. It has an example of how you can specify it:
{
  "type": "service",
  "service": "redis",
  "args": ["/usr/bin/my-service-handler.sh", "-redis"]
}
Note that it has no handler property, but instead takes the path to the script as an argument. And one more thing: it requires the "service" parameter.
It seems that in your case you need to specify it as follows:
"watches": [
{
"type": "service",
"service": "Apache",
"args": ["/fully/qualified/path/to/Consul-Script.sh"]
}
]
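For illustration only, here is a minimal sketch of what Consul-Script.sh could look like. Consul invokes a service watch handler with the current health entries as JSON on stdin; the use of jq and the restart command below are assumptions, not part of the question:

#!/bin/bash
# Read the watch payload (a JSON array of service health entries) from stdin
payload=$(cat)

# Count entries that have a check in "critical" status (requires jq)
critical=$(echo "$payload" | jq '[.[] | select(.Checks[].Status == "critical")] | length')

if [ "$critical" -gt 0 ]; then
  echo "Apache service reported critical, restarting..." >> /var/log/consul-handler.log
  # Placeholder restart command; replace with however the service is managed
  systemctl restart apache2
fi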
I made a Dockerized application. I intended to make the application able to access files from our HDFS. The Docker image is to be deployed, via Marathon-Mesos, on the same cluster where we have HDFS installed.
Below is the JSON to be POSTed to Marathon. It seems that my app is able to read and write files in HDFS. Can someone comment on the safety of this? Would files changed by my app be correctly changed in HDFS as well? I Googled around and didn't find any similar approaches...
{
  "id": "/ipython-test",
  "cmd": null,
  "cpus": 1,
  "mem": 1024,
  "disk": 0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/home",
        "hostPath": "/hadoop/hdfs-mount",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "my/image",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 8888,
          "hostPort": 0,
          "servicePort": 10061,
          "protocol": "tcp"
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "portDefinitions": [
    {
      "port": 10061,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
You might have a look at the Docker volume docs.
Basically, the volumes definition in the app.json would trigger the start of the Docker image with the flag -v /hadoop/hdfs-mount:/home:RW, meaning that the host path gets mapped to the Docker container as /home in read-write mode.
You should be able to verify this if you SSH into the node which is running the app and do a docker inspect <containerId>.
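For example, a hedged sketch of that check; the container name filter is an assumption (Mesos typically prefixes Docker containers it launches with mesos-), and <containerId> stays a placeholder:

# On the Mesos agent running the task, list containers started by Mesos
docker ps --filter "name=mesos"

# Show only the mount configuration of the container to confirm the read-write bind mount
docker inspect --format '{{ json .Mounts }}' <containerId>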
See also
https://mesosphere.github.io/marathon/docs/native-docker.html