I am running Airflow pods and I am facing an issue with installing a package in a pod.
When I exec into a pod, I cannot run the following command:
ps aux | grep airflow
So I tried
apt-get update && apt-get install procps
but it throws this error:
Reading package lists... Done
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
Any ideas to resolve this?
Thanks
To use apt-get on Linux you need root access. Almost all Airflow images create a non-root user precisely to drop that access from the Docker image and avoid problems, so to solve this you can create a custom image and install whatever you need.
Assuming you are using the official Docker image apache/airflow:
FROM apache/airflow
USER root
RUN apt-get update && apt-get install -y procps
USER airflow
Then build the image and use it directly if you are working on localhost, or push it to a Docker registry (Docker Hub, for example) and configure your server to use it.
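For example, assuming a hypothetical registry account named mydockerid, building and pushing would look like this:
# mydockerid is a placeholder for your Docker Hub / registry account
docker build -t mydockerid/airflow-custom:latest .
docker push mydockerid/airflow-custom:latest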
I would like to install tools on my cluster VM for debugging, like dnsutils or mysql to test connections.
My cluster VM uses Container-Optimized OS (COS).
Whenever I try
apt-get update
I get this error:
-bash: apt-get: command not found
How can I achieve this?
As explained here, execute
/usr/bin/toolbox
It will download a Docker image and, once complete, log you in as the root user.
You will then be able to execute commands like apt-get update / apt-get install and debug.
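For example, once inside the toolbox (the package names below are just illustrations, adjust to what you need):
# inside the toolbox, as root (package names are examples):
apt-get update && apt-get install -y dnsutils mysql-client
nslookup google.com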
I have a project in Amazon SageMaker. For it, I have to uninstall specific packages and install others in the terminal. But every time I close or stop the instance, I have to go back to the terminal and redo all the installations. Why is this happening?
The package I am having this trouble with is psycopg2:
import psycopg2
It gives me a warning suggesting that I should uninstall it and install psycopg2-binary instead.
So I open the terminal and run:
pip uninstall psycopg2
Then in the notebook, I run:
import psycopg2
and have no problem, but if I stop and start the instance again, I get the same warning and have to go through the whole process again.
Thanks for using SageMaker. Packages installed on a Notebook Instance are not persistent across restarts. To avoid installing them manually every time, you can create a Lifecycle Config which installs your packages and attach it to your Notebook Instance. The script in the Lifecycle Config will run every time you restart your Notebook Instance.
For more information on how to use Lifecycle Config you can check out:
https://aws.amazon.com/blogs/machine-learning/customize-your-amazon-sagemaker-notebook-instances-with-lifecycle-configurations-and-the-option-to-disable-internet-access/
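As a rough sketch, a "start notebook" Lifecycle Config script for the psycopg2 case above might look like this (the python3 environment name is an assumption, use whichever kernel environment you work in):
#!/bin/bash
sudo -u ec2-user -i <<'EOF'
# activate the conda environment backing the notebook kernel (assumed: python3)
source activate python3
pip uninstall -y psycopg2
pip install psycopg2-binary
source deactivate
EOF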
@anitasp, you have to create a Docker image by doing the following:
Be sure to set up your SageMaker Execution Role Policy permissions in AWS IAM (besides S3), and also attach AmazonEC2ContainerServiceFullAccess, AmazonEC2ContainerRegistryFullAccess and AmazonSageMakerFullAccess.
Create and start an instance in SageMaker and open a notebook. Clone the directory structure shown here into your instance: https://github.com/RubensZimbres/Repo-2018/tree/master/AWS%20SageMaker/Jupyter-Folder
Inside Jupyter, run:
! sudo service docker start
! sudo usermod -a -G docker ec2-user
! docker info
! chmod +x decision_trees/train
! chmod +x decision_trees/serve
! aws ecr create-repository --repository-name decision-trees
! aws ecr get-login --no-include-email
Copy and paste the login command returned by the previous step (the credentials below are placeholders):
! docker login -u abc -p abc12345 http://abc123
Run
! docker build -t decision-trees .
! docker tag decision-trees your_aws_account_id.dkr.ecr.us-east-1.amazonaws.com/decision-trees:latest
! docker push your_aws_account_id.dkr.ecr.us-east-1.amazonaws.com/decision-trees:latest
! aws ecs register-task-definition --cli-input-json file://decision-trees-task-def.json
Adapt this to your needs, according to the algorithm of your choice. You will need the Dockerfile, hyperparameters.json, etc.
The documented project is here: https://github.com/RubensZimbres/Repo-2018/tree/master/AWS%20SageMaker
By default, Python packages installed from a Notebook Instance will not be persisted to the next Notebook Instance session. One solution to this problem is to:
1) Create (or clone from a current conda env) a new conda environment into /home/ec2-user/SageMaker, which is persisted between sessions. For example:
conda create --prefix /home/ec2-user/SageMaker/envs/custom-environment --clone tensorflow_p36
2) Next, create a new Lifecycle Configuration for “start notebook” with the following contents:
#!/bin/bash
sudo -u ec2-user -i <<'EOF'
ln -s /home/ec2-user/SageMaker/envs/custom-environment /home/ec2-user/anaconda3/envs/custom-environment
EOF
3) Finally, attach the Lifecycle Configuration to your Notebook Instance
Now, when you restart your Notebook Instance, your custom environment will be detected by conda and Jupyter. Any new packages you install to this environment will be persisted between sessions and then soft-linked back to conda at startup.
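For example, after a restart you can activate the persisted environment from the terminal and install packages into it (psycopg2-binary is just an example package):
# psycopg2-binary is just an example package
source activate custom-environment
pip install psycopg2-binary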
I'm trying to create my own Docker image. I wrote my Dockerfile as given below:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y astyle ruby
But on running
docker build -t username/newname .
It fails with this error:
Error response from daemon: mkdir /var/lib/docker/tmp/docker-builder705720973: no space left on device
I'm new to Docker. Any help would be appreciated. Thanks in advance
You must have pulled a lot of images, I guess. Check all your images using the following command:
sudo docker images
If you see many images, try deleting the ones you don't need anymore using the following command:
sudo docker rmi <image_id>
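If you are on Docker 1.13 or newer, a single cleanup command can also reclaim space by removing stopped containers, dangling images and unused networks:
sudo docker system prune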
Hope that helps
I'm trying to write a PowerShell script to create a VM in Azure with Docker installed. From everything I've read, I should be able to do the following:
$image = "b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_3-LTS-amd64-server-20150908-en-us-30GB"
azure vm docker create -e 22 -l 'North Europe' --docker-cert-dir dockercert --vm-size Small <myvmname> $image $userName $password
docker --tls -H tcp://<myvmname>.cloudapp.net:4243 info
The VM creation works; however, the docker command fails with the following error:
An error occurred trying to connect: Get https://myvmname.cloudapp.net:4243/v1.20/info: dial tcp 40.127.169.184:4243: ConnectEx tcp: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Some articles I've found refer to port 2376 - but that doesn't work either.
Logging into the Azure portal and viewing the created VM, the Docker VM Extension doesn't seem to have been added and there are no endpoints other than the default SSH one. I was expecting these to have been created by the azure vm docker create command, although I could be wrong about that.
A couple of example articles I've looked at are here:
https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-docker-with-xplat-cli/
http://blog.siliconvalve.com/2015/01/07/get-started-with-docker-on-azure/
However, there's plenty of other articles saying the same thing.
Does anyone know what I'm doing wrong?
I know you are doing nothing wrong. My azure-cli Docker host connection had been working for months and failed recently. I re-created my Docker host using "azure vm docker create", but it does not work any more.
I believe it is a bug that the azure-docker team has to fix.
For the time being, my solution is to:
1) Launch an Ubuntu VM WITHOUT using the Azure Docker extension
2) SSH into the VM and install docker with these lines:
sudo su; apt-get -y update
apt-get -y install linux-image-extra-$(uname -r)
modprobe aufs
curl -sSL https://get.docker.com/ | sh
3) Run docker within this VM directly, without relying on a "client" and in particular the Azure CLI.
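A quick sanity check that the daemon works inside the VM:
docker run hello-world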
If you insist on using the docker client approach, my alternative suggestion would be to update your azure-cli and try 'azure vm docker create' again. Let me know how it goes.
sudo su
apt-get update; apt-get -y install nodejs-legacy; apt-get -y install npm; npm install azure-cli --global
To add an additional answer to my own question: it turns out you can do the same using the docker-machine create command ...
docker-machine create $vmname --driver azure --azure-publish-settings-file MySubscription.publishsettings
This method works for me.
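Once the machine is created, you can point your local Docker client at it using docker-machine's env helper, for example:
# $vmname as used in the create command above
eval $(docker-machine env $vmname)
docker info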
I'm looking to move Jenkins to Amazon EC2 running Amazon Linux.
Currently we have Jenkins installed as a package (via yum). I'm considering running Jenkins as the self-contained jenkins.war on EC2 (for auto-upgrades and ease of deployment).
Unfortunately I've been unable to find much documentation on managing Jenkins the latter way.
I'm trying to determine:
Which installation is preferred, and why?
If running as a self-contained war:
How do I start/stop Jenkins?
Should I create a jenkins user?
Installation steps:
Launch an Amazon Linux instance using the Amazon Linux AMI.
Log in to your Amazon Linux instance.
Become root using the “sudo su -” command.
Update your packages:
yum update
Get the Jenkins repository using the command below:
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
Get the Jenkins repository key:
rpm --import http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key
Install the Jenkins package:
yum install jenkins
Start Jenkins and make sure it starts automatically at system startup:
service jenkins start
chkconfig jenkins on
Open your browser and navigate to http://<Elastic-IP>:8080. You will see the Jenkins dashboard.
That’s it. You have your Jenkins set up and running. Now you can create jobs to build the code.
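If the dashboard asks for an initial admin password (recent Jenkins versions do), you can read it on the instance, assuming a default installation path:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword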
Reference: http://sanketdangi.com/post/62715793234/install-configure-jenkins-on-amazon-linux
Jenkins Installation on Ubuntu 14.04/16.04
Please follow the steps given below.
Switch to the root user: sudo su -
sudo apt-get update
sudo apt-get install default-jdk
sudo apt-get install default-jre
wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
echo deb https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list
sudo apt-get update
apt-get install jenkins
Get the initial Jenkins password from: cat /var/lib/jenkins/secrets/initialAdminPassword
Browse to e.g. http://192.168.xx.xx:8080
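If the page does not load, check that the service is actually running (assuming the default service name jenkins):
sudo service jenkins status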