I'm trying to set up my environment to learn Azure from the Microsoft Learn page https://learn.microsoft.com/en-us/learn/modules/microservices-data-aspnet-core/environment-setup
but when I run . <(sudo wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup) to pull the repo and run the services, I get the error below:
~/clouddrive/aspnet-learn/modules/microservices-data-aspnet-core/setup ~/clouddrive/aspnet-learn
~/clouddrive/aspnet-learn
bash: /home/username/clouddrive/aspnet-learn/src/deploy/k8s/quickstart.sh: Permission denied
bash: /home/username/clouddrive/aspnet-learn/src/deploy/k8s/create-acr.sh: Permission denied
cat: /home/username/clouddrive/aspnet-learn/deployment-urls.txt: No such file or directory
This used to work until it suddenly stopped, and I'm not sure what caused it to break or how to fix it.
I've tried deleting the storage account and the resources, but that doesn't seem to help. Also, when I delete the storage account, create a new one, and try again, the old data is somehow still there and I'm told to run a remove first, so this data isn't really being deleted when I delete the storage account:
Before running this script, please remove or rename the existing /home/username/clouddrive/aspnet-learn/ directory as follows:
Remove: rm -r /home/username/clouddrive/aspnet-learn/
Any idea what is wrong here, or how I can actually reset this to work like new storage?
Note: I saw some solutions that say to start with sudo for elevated permissions, but I didn't manage to get this to work.
I reproduced this by following the given document and was able to deploy a modified version of the eShopOnContainers reference app.
Then I executed the same command again,
. <(wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup)
and got the same error you did.
If you try to run the deploy script without cleaning up the already-created resources/app, you will get the above error.
If you want to re-run the setup script, run the command below first to clean up the resources:
cd ~ && \
rm -rf ~/clouddrive/aspnet-learn && \
az group delete --name eshop-learn-rg --yes
OR
Remove: rm -r /home/username/clouddrive/aspnet-learn/
Rename: mv /home/username/clouddrive/aspnet-learn/ ~/clouddrive/new-name-here/
The above commands remove or rename the existing /home/username/clouddrive/aspnet-learn/ directory.
Now you can run the script again
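Putting it together, a full reset-and-retry sequence looks like this (the resource group name eshop-learn-rg is the module's default; adjust it if yours differs):
# clean up the previous attempt, then re-run the setup script
cd ~ && \
rm -rf ~/clouddrive/aspnet-learn && \
az group delete --name eshop-learn-rg --yes
. <(wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup)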
I'm trying to use a Vagrantfile I received to set up a VM on Ubuntu with VirtualBox.
After using the vagrant up command I get the following error:
File provisioner:
* File upload source file /home/c-server/tools/appDeploy.sh must exist
appDeploy.sh does exist in the correct location and looks like this:
#!/bin/bash
#
# Update the app server
#
/usr/local/bin/aws s3 cp s3://dev-build-ci-server/deploy.zip /tmp/.
cd /tmp
unzip -o deploy.zip vagrant/tools/deploy.sh
cp -f vagrant/tools/deploy.sh /tmp/.
rm -rf vagrant
chmod +x /tmp/deploy.sh
dos2unix /tmp/deploy.sh
./deploy.sh
rm -rf ./deploy.sh ./deploy.zip
#
sudo /etc/init.d/supervisor stop
sudo /etc/init.d/supervisor start
#
Since the script exists in the correct location, I'm assuming it's looking for something else (maybe something that should exist on my local computer). What that is, I am not sure.
I did some research into what the file provisioner is and what it does but I cannot find an answer to get me past this error.
It may well be relevant that this Vagrantfile works correctly on Windows 10, but I need to get it working on Ubuntu.
In your Vagrantfile, check that the filenames are capitalized correctly. Windows isn't case-sensitive, but Ubuntu is.
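To confirm a case mismatch from the Ubuntu side, a quick check (a sketch; the path matches the error message above):
# list anything whose name matches case-insensitively; a result that differs
# from appDeploy.sh (e.g. appdeploy.sh) is the culprit
ls /home/c-server/tools/ | grep -i appdeploy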
I'm using cygwin as my terminal on Windows 7. I have found several suggestions to run ssh-agent in cygwin so I don't have to enter my password every time I run a git fetch/pull/push. I added the following to my .bash_profile and restarted my cygwin session:
SSH_ENV="$HOME/.ssh/environment"
function start_agent {
echo "Initialising new SSH agent..."
/usr/bin/ssh-agent -s | sed 's/^echo/#echo/' > "${SSH_ENV}"
echo succeeded
chmod 600 "${SSH_ENV}"
. "${SSH_ENV}" > /dev/null
/usr/bin/ssh-add;
}
# Source SSH settings, if applicable
if [ -f "${SSH_ENV}" ]; then
. "${SSH_ENV}" > /dev/null
#ps ${SSH_AGENT_PID} doesn't work under cygwin
ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
start_agent;
}
else
start_agent;
fi
It looks as if the ssh-agent and ssh-add are run successfully, but I am still prompted for my password.
Initialising new SSH agent...
succeeded
Enter passphrase for /cygdrive/c/Users/<username>/.ssh/id_rsa:
Identity added: /cygdrive/c/Users/<username>/.ssh/id_rsa (/cygdrive/c/Users/<username>/.ssh/id_rsa)
$ ssh-add -l
2048 <fingerprint> /cygdrive/c/Users/<username>/.ssh/id_rsa (RSA)
$ git fetch --all
Fetching origin
Enter passphrase for key '/c/Users/<username>/.ssh/id_rsa':
I am in fact using SSH and not HTTPS for my git connection (redacted private info):
$ git remote -v
origin ssh://git@XXX/XXX.git (fetch)
origin ssh://git@XXX/XXX.git (push)
The closest problem I've found for this issue is the following question:
ssh-agent doesn't work / save me from typing passphrase for git
However, I didn't rename my ssh under /git/bin.
Any suggestions on how to diagnose this issue? Thanks!
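One diagnostic worth trying is to see which ssh binary git actually invokes, since the prompt paths above differ (/cygdrive/c/Users/... vs /c/Users/...), which hints that two different ssh installations may be involved (a sketch; GIT_SSH_COMMAND requires git 2.3+):
# show which ssh is first on PATH in the cygwin session
type ssh
# run one fetch with verbose ssh to see which binary and agent are used
GIT_SSH_COMMAND="ssh -v" git fetch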
Here is an easier solution than the one above:
1. Launch Services (can be found by typing Services in the search box on the Taskbar)
2. Edit the settings of the "OpenSSH Authentication Agent"
3. Set the Startup type to Automatic and start the service
4. Launch Edit the System Environment Variables (type Environment in the search box on the Taskbar)
5. Add a GIT_SSH variable with the value set to "C:\Windows\System32\OpenSSH\ssh.exe"
Now when an SSH key is added, you will not need to continue to type the passphrase in a Windows command prompt or in a Cygwin Bash shell.
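If you prefer to stay inside Cygwin, the same variable can also be set per-session instead of through the system dialog (a sketch; the path assumes the default Windows OpenSSH location):
# point git at the Windows OpenSSH client for this shell session
export GIT_SSH=/cygdrive/c/Windows/System32/OpenSSH/ssh.exe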
The problem still remains. It seems to be related to the different paths and hasn't been fixed yet.
You may consider using expect as an alternative to ssh-agent for the login-with-passphrase interaction.
On Linux it is typically installed like this:
$ sudo apt-get update
$ sudo apt-get install --assume-yes --no-install-recommends apt-utils expect
In Cygwin you may find it under the Tcl category.
Here is a simple example of how to use it.
Create a file, say ~/.expect_push:
#!/usr/bin/expect -f
spawn git push origin master
expect "Username for 'https://github.com':"
send "<username>\n";
expect "Password for 'https://<username>#github.com':"
send "<password>\n";
interact
Change <username> and <password> above to your login credentials, with the following notes:
Modify the contents to match your git login interaction precisely.
If your <password> contains the character $, escape it as \$ or authentication will fail.
For example, if your password is mypa$$word, then write <password> as mypa\$\$word.
Create another file, ~/.git_push:
#!/usr/bin/bash
git remote set-url origin git@github.com:<username>/<repo>.git
git add .
git commit -m "my test commit using expect"
expect ~/.expect_push
Make them executable and create a symlink in /bin:
$ chmod +x ~/.expect_push
$ chmod +x ~/.git_push
$ cd /bin
$ ln -s ~/.git_push push
Then you can push to the remote without needing to fill in your username and password or passphrase:
$ cd /path/to/your/git/folder
$ push
I have copied this code from what seem to be various working Dockerfiles; here is mine:
FROM ubuntu
MAINTAINER Luke Crooks "luke#pumalo.org"
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y git python-virtualenv
# Make ssh dir
RUN mkdir /root/.ssh/
# Copy over private key, and set permissions
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN chown -R root:root /root/.ssh
# Create known_hosts
RUN touch /root/.ssh/known_hosts
# Remove host checking
RUN echo "Host bitbucket.org\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:Pumalo/docker-conf.git /home/docker-conf
This gives me the error
Step 10 : RUN git clone git@bitbucket.org:Pumalo/docker-conf.git /home/docker-conf
---> Running in 0d244d812a54
Cloning into '/home/docker-conf'...
Warning: Permanently added 'bitbucket.org,131.103.20.167' (RSA) to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
2014/04/30 16:07:28 The command [/bin/sh -c git clone git@bitbucket.org:Pumalo/docker-conf.git /home/docker-conf] returned a non-zero code: 128
This is my first time using Dockerfiles, but from what I have read (and taken from working configs) I cannot see why this doesn't work.
My id_rsa is in the same folder as my Dockerfile, and it is a copy of my local key, which can clone this repo no problem.
Edit:
In my Dockerfile I can add:
RUN cat /root/.ssh/id_rsa
And it prints out the correct key, so I know it's being copied correctly.
I have also tried doing what noah advised and ran:
RUN echo "Host bitbucket.org\n\tIdentityFile /root/.ssh/id_rsa\n\tStrictHostKeyChecking no" >> /etc/ssh/ssh_config
This sadly also doesn't work.
My key was password protected, which was causing the problem; a working file is now listed below (to help future googlers):
FROM ubuntu
MAINTAINER Luke Crooks "luke@pumalo.org"
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y git
# Make ssh dir
RUN mkdir /root/.ssh/
# Copy over private key, and set permissions
# Warning! Anyone who gets their hands on this image will be able
# to retrieve this private key file from the corresponding image layer
ADD id_rsa /root/.ssh/id_rsa
# Create known_hosts
RUN touch /root/.ssh/known_hosts
# Add bitbuckets key
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:User/repo.git
You should create a new SSH key set for that Docker image, as you probably don't want to embed your own private key there. To make it work, you'll have to add that key to the deployment keys in your git repository. Here's the complete recipe:
Generate ssh keys with ssh-keygen -q -t rsa -N '' -f repo-key which will give you repo-key and repo-key.pub files.
Add repo-key.pub to your repository deployment keys.
On GitHub, go to [your repository] -> Settings -> Deploy keys
Add something like this to your Dockerfile:
ADD repo-key /
RUN \
chmod 600 /repo-key && \
echo "IdentityFile /repo-key" >> /etc/ssh/ssh_config && \
echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config && \
git clone git@github.com:<user>/<repo>.git  # your git clone commands here
Note that the above switches off StrictHostKeyChecking, so you don't need .ssh/known_hosts, although I probably prefer the solution with ssh-keyscan in one of the other answers.
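If you go the ssh-keyscan route, the two ssh_config lines above could be replaced with a known_hosts entry instead (a sketch; swap github.com for your host):
RUN mkdir -p /root/.ssh && \
    ssh-keyscan github.com >> /root/.ssh/known_hosts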
There's no need to fiddle around with ssh configurations. Use a configuration file (not a Dockerfile) that contains environment variables, and have a shell script update your Dockerfile at runtime. You keep tokens out of your Dockerfiles and you can clone over https (no need to generate or pass around ssh keys).
Go to Settings > Personal Access Tokens
Generate a personal access token with repo scope enabled.
Clone like this: git clone https://MY_TOKEN@github.com/user-or-org/repo
Some commenters have noted that if you use a shared Dockerfile, this could expose your access key to other people on your project. While this may or may not be a concern for your specific use case, here are some ways you can deal with that:
Use a shell script to accept arguments which could contain your key as a variable, and replace a placeholder in your Dockerfile with sed or similar, i.e. calling the script with sh rundocker.sh MYTOKEN=foo, which would substitute the token into https://{{MY_TOKEN}}@github.com/user-or-org/repo. Note that you could also use a configuration file (in .yml or whatever format you want) to do the same thing, but with environment variables.
Create a GitHub user for that project only (and generate an access token for it)
You often do not want to perform a git clone of a private repo from within the docker build. Doing the clone there involves placing the private ssh credentials inside the image where they can be later extracted by anyone with access to your image.
Instead, the common practice is to clone the git repo from outside of docker in your CI tool of choice, and simply COPY the files into the image. This has a second benefit: docker caching. Docker caching looks at the command being run, environment variables it includes, input files, etc, and if they are identical to a previous build from the same parent step, it reuses that previous cache. With a git clone command, the command itself is identical, so docker will reuse the cache even if the external git repo is changed. However, a COPY command will look at the files in the build context and can see if they are identical or have been updated, and use the cache only when it's appropriate.
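A minimal sketch of that pattern (the repo URL and paths are illustrative, not prescribed):
In your CI script, outside of docker:
git clone git@bitbucket.org:User/repo.git repo
docker build -t your_image_name .
And in the Dockerfile, copy the checked-out files instead of cloning:
FROM ubuntu
COPY repo /repo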
BuildKit has a feature just for ssh which allows you to keep using your password-protected ssh keys; the result looks like:
# syntax=docker/dockerfile:experimental
FROM ubuntu as clone
LABEL maintainer="Luke Crooks <luke#pumalo.org>"
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
# Make ssh dir
# Create known_hosts
# Add bitbuckets key
RUN mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Clone the conf files into the docker container
RUN --mount=type=ssh \
git clone git@bitbucket.org:User/repo.git
And you can build that with:
$ eval $(ssh-agent)
$ ssh-add ~/.ssh/id_rsa
(Input your passphrase here)
$ DOCKER_BUILDKIT=1 docker build -t your_image_name \
--ssh default=$SSH_AUTH_SOCK .
Again, this is injected into the build without ever being written to an image layer, removing the risk that the credential could accidentally leak out.
BuildKit also has a feature that allows you to pass an ssh key in as a mount that never gets written to the image:
# syntax=docker/dockerfile:experimental
FROM ubuntu as clone
LABEL maintainer="Luke Crooks <luke@pumalo.org>"
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
# Make ssh dir
# Create known_hosts
# Add bitbuckets key
RUN mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Clone the conf files into the docker container
RUN --mount=type=secret,id=ssh_id,target=/root/.ssh/id_rsa \
git clone git@bitbucket.org:User/repo.git
And you can build that with:
$ DOCKER_BUILDKIT=1 docker build -t your_image_name \
--secret id=ssh_id,src=$(pwd)/id_rsa .
Note that this still requires your ssh key to not be password protected, but you can at least run the build in a single stage, removing a COPY command, and keeping the ssh credential from ever being part of an image.
If you are going to add credentials into your build, consider doing so with a multi-stage build, and only placing those credentials in an early stage that is never tagged and pushed outside of your build host. The result looks like:
FROM ubuntu as clone
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
# Make ssh dir
# Create known_hosts
# Add bitbuckets key
RUN mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Copy over private key, and set permissions
# Warning! Anyone who gets their hands on this image will be able
# to retrieve this private key file from the corresponding image layer
COPY id_rsa /root/.ssh/id_rsa
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:User/repo.git
FROM ubuntu as release
LABEL maintainer="Luke Crooks <luke@pumalo.org>"
COPY --from=clone /repo /repo
...
To force docker to run the git clone even when the lines before have been cached, you can inject a build ARG that changes with each build to break the cache. That looks like:
# inject a datestamp arg which is treated as an environment variable and
# will break the cache for the next RUN command
ARG DATE_STAMP
# Clone the conf files into the docker container
RUN git clone git#bitbucket.org:User/repo.git
Then you inject that changing arg in the docker build command:
date_stamp=$(date +%Y%m%d-%H%M%S)
docker build --build-arg DATE_STAMP=$date_stamp .
Another option is to use a multi-stage docker build to ensure that your SSH keys are not included in the final image.
As described in my post you can prepare your intermediate image with the required dependencies to git clone and then COPY the required files into your final image.
Additionally, if we LABEL our intermediate layers, we can even delete them from the machine when finished.
# Choose and name our temporary image.
FROM alpine as intermediate
# Add metadata identifying these images as our build containers (this will be useful later!)
LABEL stage=intermediate
# Take an SSH key as a build argument.
ARG SSH_KEY
# Install dependencies required to git clone.
RUN apk update && \
apk add --update git && \
apk add --update openssh
# 1. Create the SSH directory.
# 2. Populate the private key file.
# 3. Set the required permissions.
# 4. Add github to our list of known hosts for ssh.
RUN mkdir -p /root/.ssh/ && \
echo "$SSH_KEY" > /root/.ssh/id_rsa && \
chmod -R 600 /root/.ssh/ && \
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
# Clone a repository (my website in this case)
RUN git clone git@github.com:janakerman/janakerman.git
# Choose the base image for our final image
FROM alpine
# Copy across the files from our `intermediate` container
RUN mkdir files
COPY --from=intermediate /janakerman/README.md /files/README.md
We can then build:
MY_KEY=$(cat ~/.ssh/id_rsa)
docker build --build-arg SSH_KEY="$MY_KEY" --tag clone-example .
Prove our SSH keys are gone:
docker run -ti --rm clone-example cat /root/.ssh/id_rsa
Clean intermediate images from the build machine:
docker rmi -f $(docker images -q --filter label=stage=intermediate)
For a Bitbucket repository, generate an App Password (Bitbucket settings -> Access Management -> App Password) with read access to the repo and project.
Then the command that you should use is:
git clone https://username:generated_password@bitbucket.org/reponame/projectname.git
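If you'd rather not embed the app password in the remote URL (it ends up in .git/config and your shell history), a credential helper can supply it on demand; a sketch:
# cache credentials in memory for an hour instead of putting them in the URL
git config --global credential.helper 'cache --timeout=3600'
git clone https://username@bitbucket.org/reponame/projectname.git
# git prompts once for the app password, then caches it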
Nowadays you can use the BuildKit option --ssh default when you build your container; prior to the build, you need to add your SSH deploy key to your ssh-agent.
Here is the full process from the beginning:
Create a key pair on your deployment server. Just run ssh-keygen -t ecdsa and store the key pair in ~/.ssh.
Add the generated public key (.pub extension) at your git provider's website (GitLab, GitHub...)
Add your key to your ssh-agent (a program that basically manages your keys more conveniently than handling each file yourself):
eval $(ssh-agent)
ssh-add /path/to/your/private/key
Add this to your Dockerfile :
# these first lines add your provider's public keys to known_hosts
# so git doesn't get an error from SSH.
RUN mkdir -m 700 /root/.ssh && \
    touch /root/.ssh/known_hosts && \
    chmod 600 /root/.ssh/known_hosts && \
    ssh-keyscan your-git-provider.com > /root/.ssh/known_hosts
# now you can clone with the --mount=type=ssh option,
# which forwards your host's ssh agent to Docker
RUN --mount=type=ssh mkdir -p /wherever/you/want/to/clone && \
    cd /wherever/you/want/to/clone && \
    git clone git@gitlab.com:your-project.git
And now you can finally build your Dockerfile (with BuildKit enabled):
DOCKER_BUILDKIT=1 docker build . --ssh default
As you cannot currently pass console parameters to the build in docker-compose, this solution is not yet available for docker-compose, but it should be soon (it has been implemented on GitHub and proposed as a merge request).
P.S. this solution is quick & easy, but at the cost of reduced security (see the comments by @jrh).
Create an access token: https://github.com/settings/tokens
Pass it in as an argument to docker build.
(p.s. if you are using CapRover, set it under App Configs)
In your Dockerfile:
ARG GITHUB_TOKEN
RUN git config --global url."https://${GITHUB_TOKEN}#github.com/".insteadOf "https://github.com/"
RUN pip install -r requirements.txt
p.s. this assumes that private repos are in the following format in requirements.txt:
git+https://github.com/<YOUR-USERNAME>/<YOUR-REPO>.git
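A corresponding build invocation might look like this (the token is a placeholder; note that build-arg values can be recovered from the image with docker history, so treat such images as sensitive):
docker build --build-arg GITHUB_TOKEN=<your-token> -t my-app .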
For other people searching: I had the same issue, and adding the --ssh default flag made it work.
In addition to @Calvin Froedge's approach of using a Personal Access Token (PAT),
you may need to add oauth or oauth2 as the username before your PAT to authenticate, like this:
https://oauth:<token>@git-url.com/user/repo.git
I recently had a similar issue with a private repository in a Rust project.
I suggest avoiding ssh permissions/config altogether.
Instead:
clone the repository within the build environment, e.g. CI, where the permissions already exist (or can be easily configured)
copy the files into the image in the Dockerfile (this can also be cached natively within the CI)
Example
part 1) within CI
CARGO_HOME=tmp-home cargo fetch
part 2) within Dockerfile
COPY tmp-home $CARGO_HOME
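For concreteness, part 2 might sit in a Dockerfile like this (a sketch; the base image tag and CARGO_HOME value are assumptions, not part of the original answer):
FROM rust:1.70
ENV CARGO_HOME=/usr/local/cargo
# dependencies were already fetched in CI, so the build can run offline
COPY tmp-home $CARGO_HOME
COPY . .
RUN cargo build --release --offline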
The process is the same regardless of language/package system.
I want to write a script that will connect to my server through my id (two layers of authentication) after I run the script:
ssh id@server -> password
After this authentication, there is one more authentication, the superuser authentication:
username:
password:
My OS is MAC.
It's a lot trickier to get everything right so that this will "just work". The most poorly documented problem is getting the correct protections on the login directory, the .ssh directory, and the files in the .ssh directory. This is the script that I use to set everything up correctly:
#!/bin/tcsh -x
#
# sshkeygen.sh
#
# make sure your login directory has the right permissions
chmod 755 ~
# make sure your .ssh dir exists and has the right permissions
mkdir -pv -m 700 ~/.ssh
chmod 0700 ~/.ssh
# remove any existing rsa/dsa keys
rm -f ~/.ssh/id_rsa* ~/.ssh/id_dsa*
# if your ssh keys don't exist
set keyname = "`whoami`_at_`hostname`-id_rsa"
echo "keyname: $keyname"
if( ! -e ~/.ssh/$keyname ) then
# generate them
ssh-keygen -b 1024 -t rsa -f ~/.ssh/$keyname -P ''
endif
cd ~/.ssh
# set the permissions
chmod 0700 *id_rsa
chmod 0644 *id_rsa.pub
# create symbolic links to them for the (default) id_rsa files
ln -sf $keyname id_rsa
ln -sf $keyname.pub id_rsa.pub
I have another script that copies the "`whoami`_at_`hostname`-id_rsa.pub" file onto a shared server (as admin) and then merges it into that system's .ssh/authorized_keys file, which it then copies back onto the local machine. The first time these scripts run, the user is prompted for the admin password to the shared server, but after that everything will "just work".
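The copy-and-merge step that script performs can also be approximated with stock tooling (a sketch; admin@shared-server is a placeholder for your actual account and host):
# append the local public key to the shared server's authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub admin@shared-server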
Oh, and it's "Mac" (not "MAC"). [\pedantic] ;-)