Launching a simple Python script on an AWS Ray cluster with Docker

I am finding it incredibly difficult to follow Ray's guidelines for running a Docker image on a Ray cluster in order to execute a Python script. There is a real lack of simple working examples.
So I have the simplest Dockerfile:
FROM rayproject/ray
WORKDIR /usr/src/app
COPY . .
CMD ["step_1.py"]
ENTRYPOINT ["python3"]
I use this to create an image and push it to Docker Hub ("myimage" is just an example):
docker build -t myimage .
docker push myimage
"step_1.py" just prints hello every second for 200 seconds:
import time
for i in range(200):
    time.sleep(1)
    print("hello")
This is my config.yaml, again very simple:
cluster_name: simple-1
min_workers: 0
max_workers: 2
docker:
    image: "myimage"
    container_name: "my_simple_docker_container"
    pull_before_run: True
idle_timeout_minutes: 5
provider:
    type: aws
    region: eu-west-2
    availability_zone: eu-west-2a
file_mounts_sync_continuously: False
auth:
    ssh_user: ubuntu
    ssh_private_key: /home/user/.ssh/aws_ubuntu_test.pem
head_node:
    InstanceType: c5.2xlarge
    ImageId: ami-xxxxx826a6b31fd2c
    KeyName: aws_ubuntu_test
    BlockDeviceMappings:
        - DeviceName: /dev/sda1
          Ebs:
              VolumeSize: 200
worker_nodes:
    InstanceType: c5.2xlarge
    ImageId: ami-xxxxx826a6b31fd2c
    KeyName: aws_ubuntu_test
    InstanceMarketOptions:
        MarketType: spot
head_setup_commands:
    - pip install boto3==1.4.8
worker_setup_commands: []
head_start_ray_commands:
    - ray stop
    - ulimit -n 65536; ray start --head --port=6379 --object-manager-port=8076 --autoscaling-config=~/ray_bootstrap_config.yaml
worker_start_ray_commands:
    - ray stop
    - ulimit -n 65536; ray start --address=$RAY_HEAD_IP:6379 --object-manager-port=8076
I run in the terminal:
ray up simple1.yaml
and get this error every time:
shared connection to x.x.xx.119 closed.
"docker cp" requires exactly 2 arguments.
See 'docker cp --help'.
Usage: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Copy files/folders between a container and the local filesystem
Shared connection to x.x.xx.119 closed.
Just to add, the Docker image runs on any other remote machine just fine, just not on the Ray cluster.
If someone could please help me, I would be eternally grateful, and I will even promise to add a tutorial on medium after my struggles.

I think the issue might be around using ENTRYPOINT. The Ray ClusterLauncher starts docker using a command roughly like:
docker run --rm --name <NAME> -d -it --net=host <image_name> bash
When I ran docker build -t myimage . and then ran docker run --rm -it myimage bash, Docker errored with:
python3: can't open file 'bash': [Errno 2] No such file or directory
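
One way around this (a sketch, not verified end-to-end): drop the ENTRYPOINT and CMD so the cluster launcher can start the container with bash, and submit the script yourself once the cluster is up:
FROM rayproject/ray
WORKDIR /usr/src/app
COPY . .
# no ENTRYPOINT/CMD here: the Ray cluster launcher needs to be able to
# start this container with `bash` and keep it running
Then, after the cluster comes up:
ray up simple1.yaml
ray submit simple1.yaml step_1.py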

Related

Docker compose: have 'command: bash' executed and keep the container open

The docker compose yml file below keeps the container open after I run docker compose up -d but command: bash does not get executed:
version: "3.8"
services:
  service:
    container_name: pw-cont
    image: mcr.microsoft.com/playwright:v1.30.0-focal
    stdin_open: true # -i
    tty: true # -t
    network_mode: host # --network host
    volumes: # Ensures that updates in local/container are in sync
      - $PWD:/playwright/
    working_dir: /playwright
    command: bash
After I spin the container up, I want to visit Docker Desktop > the running container's terminal.
Expectation: Since the file has command: bash, I expect that in Docker Desktop, when I go to the running container's terminal, it will show root@docker-desktop:/playwright#.
Actual: The container's terminal in Docker Desktop shows only #, and I still need to type bash to see root@docker-desktop:/playwright#.
Can the yml file be updated so that bash gets auto executed when spinning up the container?
docker compose doesn't provide that sort of interactive connection. Your docker-compose.yaml file is fine; once your container is running you can attach to it using docker attach pw-cont to access stdin/stdout for the container.
$ docker compose up -d
[+] Running 1/1
⠿ Container pw-cont Started 0.1s
$ docker attach pw-cont
root@rocket:/playwright#
root@rocket:/playwright#
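Note that because bash is the container's main process here, exiting the attached shell will stop the container. If you want a disposable shell instead, a standard alternative is docker exec, which starts a new process in the running container:
docker exec -it pw-cont bash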
I'm not sure what you are trying to achieve, but using the run command
docker-compose run service
gives me the prompt you expect.

Mount net share with cifs within a container using docker-compose

I have a docker-compose setup under Windows, with cap_add and privileged set in order to mount a Windows net share inside the Dockerfile (running Debian) using CIFS.
During the build I always get "Unable to apply new capability set". However, if I open a bash shell in the running container, I can mount without any problem.
Here is the dockerfile relevant code:
RUN apt-get install cifs-utils -y
RUN mkdir /opt/shared
# RUN mount -v -t cifs //10.20.25.14/external /opt/shared -o "user=username,password=mypass-,domain=mydm,sec=ntlm"
and this is the docker-compose part:
anaconda:
  privileged: true
  image: piano_anaconda:latest
  security_opt:
    - seccomp:unconfined
  cap_add:
    - SYS_ADMIN
    - DAC_READ_SEARCH
  build:
    context: .
    dockerfile: dockerfile_anaconda
I have read this as well, but it did not really help with mounting inside the Dockerfile.
What am I missing here?
Thanks in advance to all for your help.
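A likely explanation, offered as a hunch: privileged, cap_add and security_opt apply when the container runs, not while docker build executes, so a RUN mount in a Dockerfile cannot succeed; the mount has to happen at container start. A minimal sketch, using a hypothetical entrypoint.sh added to the image:
#!/bin/sh
# entrypoint.sh (hypothetical): mount at run time, when SYS_ADMIN and
# DAC_READ_SEARCH are actually in effect, then hand off to the main command
mount -v -t cifs //10.20.25.14/external /opt/shared -o "user=username,password=mypass-,domain=mydm,sec=ntlm"
exec "$@"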

install gitlab on Windows with Docker

I searched a lot and found that GitLab Community Edition cannot be installed natively on Windows, so now I want to install it with the help of Docker. I do not know whether that is possible or how I can do it.
You need to install Docker for Windows.
Share a drive for Docker (in Docker's settings > Shared Drives), for example, drive E:
Then, you need to create 3 directories on drive E: (e:\gitlab\config, e:\gitlab\logs, e:\gitlab\data)
From the Command Prompt, run:
docker run --detach --hostname gitlab.yourdomain.ru ^
  --publish 443:443 --publish 80:80 --publish 22:22 --name gitlab ^
  --restart always --volume e:\gitlab\config:/etc/gitlab ^
  --volume e:\gitlab\logs:/var/log/gitlab ^
  --volume e:\gitlab\data:/var/opt/gitlab gitlab/gitlab-ce:latest
That's it! You have now successfully run the GitLab image.
Yes, you can run gitlab-ce on Windows using Docker. First, make sure Docker is installed on Windows; otherwise, install it.
Detailed documentation on how to run GitLab using Docker can be found under GitLab Docker images, including how to access the web interface.
You can also check the "Expose GitLab on different ports" section of the GitLab documentation.
Before starting the installation, create three folders named "config", "data", and "logs" inside a "gitlab" folder, then run your gitlab-ce image with the docker run command. GitLab needs to be up and running first.
Note that I will use port 8082 for the GitLab server. You can change it to any port number.
1- Open cmd and show your IP address. You need to look for the IPv4 Address of your network adapter:
ipconfig
2- Run your gitlab-ce image with this command:
docker run --detach --hostname YOUR-IP-ADRESS --publish 8082:8082 --publish 2282:22 --name gitlab --restart always --volume D:\DevOps\Gitlab/config:/etc/gitlab --volume D:\DevOps\Gitlab/logs:/var/log/gitlab --volume D:\DevOps\Gitlab/data:/var/opt/gitlab gitlab/gitlab-ce:latest
3- In the Docker terminal (in the Docker GUI application, press the "CLI" button), go here:
cd etc/gitlab
nano gitlab.rb
4- Go to the end of the gitlab.rb file and add these lines:
external_url "http://your-ip-address:8082"
gitlab_rails['gitlab_shell_ssh_port'] = 2282
5- After saving and closing the gitlab.rb file, enter this command to reconfigure:
gitlab-ctl reconfigure
6- Remove your Docker container and run it again with this command:
docker run --detach --hostname YOUR-IP-ADRESS --publish 8082:8082 --publish 2282:22 --name gitlab --restart always --volume D:\DevOps\Gitlab/config:/etc/gitlab --volume D:\DevOps\Gitlab/logs:/var/log/gitlab --volume D:\DevOps\Gitlab/data:/var/opt/gitlab gitlab/gitlab-ce:latest
I found the solution here; there is an issue related to volumes when installing in Docker for Windows:
https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/2280
Use the following docker-compose file:
web:
  image: 'gitlab/gitlab-ce:13.7.1-ce'
  restart: always
  hostname: 'localhost'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      #KO gitlab_rails['initial_root_password'] = 'adminadmin'
      gitlab_rails['gitlab_shell_ssh_port'] = 2222
      external_url 'http://localhost'
  ports:
    - '8185:80'
    - '1443:443'
    - '2222:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    # important here: do not mount /var/opt/gitlab but /var/opt, as stated here:
    # https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/2280
    - '/srv/gitlab/data:/var/opt'
Run (in Cygwin) docker ps | grep gitlab until the status (healthy) is shown, then open a browser at http://localhost:8185.
If you are not asked to change the root password on first login, set it like this (Cygwin):
docker exec -it $(docker ps | grep gitlab | awk '{print $1}') bash
root@dev:/# gitlab-rails console -e production
--------------------------------------------------------------------------------
Ruby: ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]
GitLab: 13.12.5 (f37a805b0b4) FOSS
GitLab Shell: 13.18.0
PostgreSQL: 12.6
--------------------------------------------------------------------------------
Loading production environment (Rails 6.0.3.6)
irb(main):001:0> user = User.where(id: 1).first => #<User id:1 @root>
irb(main):002:0> user.password = 'adminadmin' => "adminadmin"
irb(main):003:0> user.password_confirmation = 'adminadmin' => "adminadmin"
irb(main):004:0> user.save Enqueued ActionMailer::MailDeliveryJob (Job ID: d5dce701-2a79-4bed-b0a4-2abb877c2081) to Sidekiq(mailers) with arguments: "DeviseMailer", "password_change", "deliver_now", {:args=>[#<GlobalID:0x00007f621582b210 @uri=#<URI::GID gid://gitlab/User/1>>]}
=> true
irb(main):005:0> exit
Then log in, create a user, give them an initial password, log in as that user and update the password, create a project, and use the project's git URL rather than the http one, since the use of a non-standard port seems to cause trouble with the http URL. Generating a public/private key pair and registering the public one in GitLab might be required.
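For reference, generating and registering a key typically looks like this (assuming OpenSSH on the client):
ssh-keygen -t ed25519 -C "you@example.com"
cat ~/.ssh/id_ed25519.pub
# paste the printed public key into GitLab > User Settings > SSH Keys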

How to use docker run with a Meteor image?

I have 2 containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB container.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
but I have this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand because when I do a docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I write my docker run command to execute my command? (The command is not a simple ls -al, but it's fine for the demo.)
When you run the containers separately with docker run, they are not linked on the same docker network so the mongo container is not accessible from the app container. To remedy this, you should use either:
- --link, to mark the app container as linked to the mongo container. This works, but is deprecated.
- a user-defined docker network for both containers to be attached to; this is more complex, but is the recommended architecture.
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
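A minimal sketch of the user-defined network approach (the network name mgmt-net is illustrative):
docker network create mgmt-net
docker run -d --network mgmt-net --name mgmt-mongo gitlab-lab:5005/dfc/mongo:latest
docker run --network mgmt-net -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
On a user-defined network, containers can reach each other by container name, which is why mgmt-mongo resolves here.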

Passing Elasticsearch and Kibana config file to docker containers

I have found a docker image, devdb/kibana, which runs Elasticsearch 1.5.2 and Kibana 4.0.2. However, I would like to pass into this docker container the configuration files for both Elasticsearch (i.e. elasticsearch.yml) and Kibana (i.e. config.js).
Can I do that with this image itself? Or for that would I have to build a separate docker container?
Can I do that with this image itself?
Yes, just use Docker volumes to pass in your own config files.
Let's say you have the following files on your docker host:
/home/liv2hak/elasticsearch.yml
/home/liv2hak/kibana.yml
you can then start your container with:
docker run -d --name kibana -p 5601:5601 -p 9200:9200 \
-v /home/liv2hak/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml \
-v /home/liv2hak/kibana.yml:/opt/kibana/config/kibana.yml \
devdb/kibana
I was able to figure this out by looking at your image's Dockerfile parents, which are devdb/kibana → devdb/elasticsearch → abh1nav/java7 → abh1nav/baseimage → phusion/baseimage,
and also by taking a peek into a devdb/kibana container: docker run --rm -it devdb/kibana find /opt -type f -name '*.yml'.
Or for that would I have to build a separate docker container?
I assume you mean build a separate docker image? That would also work; for instance, the following Dockerfile would do it:
FROM devdb/kibana
COPY elasticsearch.yml /opt/elasticsearch/config/elasticsearch.yml
COPY kibana.yml /opt/kibana/config/kibana.yml
Now build the image: docker build -t liv2hak/kibana .
And run it: docker run -d --name kibana -p 5601:5601 -p 9200:9200 liv2hak/kibana
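If you prefer docker-compose, the volume approach above translates directly; a sketch using the same host paths:
kibana:
  image: devdb/kibana
  ports:
    - "5601:5601"
    - "9200:9200"
  volumes:
    - /home/liv2hak/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml
    - /home/liv2hak/kibana.yml:/opt/kibana/config/kibana.yml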
