Clone private git repo with dockerfile - bash

I have copied this code from what seem to be various working Dockerfiles; here is mine:
FROM ubuntu
MAINTAINER Luke Crooks "luke@pumalo.org"
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y git python-virtualenv
# Make ssh dir
RUN mkdir /root/.ssh/
# Copy over private key, and set permissions
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN chown -R root:root /root/.ssh
# Create known_hosts
RUN touch /root/.ssh/known_hosts
# Remove host checking
RUN echo "Host bitbucket.org\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:Pumalo/docker-conf.git /home/docker-conf
This gives me the error
Step 10 : RUN git clone git@bitbucket.org:Pumalo/docker-conf.git /home/docker-conf
---> Running in 0d244d812a54
Cloning into '/home/docker-conf'...
Warning: Permanently added 'bitbucket.org,131.103.20.167' (RSA) to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
2014/04/30 16:07:28 The command [/bin/sh -c git clone git@bitbucket.org:Pumalo/docker-conf.git /home/docker-conf] returned a non-zero code: 128
This is my first time using dockerfiles, but from what I have read (and taken from working configs) I cannot see why this doesn't work.
My id_rsa is in the same folder as my dockerfile and is a copy of my local key which can clone this repo no problem.
Edit:
In my dockerfile I can add:
RUN cat /root/.ssh/id_rsa
And it prints out the correct key, so I know it's being copied correctly.
I have also tried to do as noah advised and ran:
RUN echo "Host bitbucket.org\n\tIdentityFile /root/.ssh/id_rsa\n\tStrictHostKeyChecking no" >> /etc/ssh/ssh_config
This sadly also doesn't work.

My key was password protected, which was causing the problem. A working file is now listed below (for the help of future googlers):
FROM ubuntu
MAINTAINER Luke Crooks "luke@pumalo.org"
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y git
# Make ssh dir
RUN mkdir /root/.ssh/
# Copy over private key, and set permissions
# Warning! Anyone who gets their hands on this image will be able
# to retrieve this private key file from the corresponding image layer
ADD id_rsa /root/.ssh/id_rsa
# Create known_hosts
RUN touch /root/.ssh/known_hosts
# Add bitbuckets key
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:User/repo.git
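For reference, since the passphrase was the culprit: you can strip the passphrase from a copy of an existing key before adding it to the build context (assuming you are comfortable keeping an unencrypted copy there). A minimal sketch:
# work on a copy so the original key keeps its passphrase
cp ~/.ssh/id_rsa ./id_rsa
# -p changes the passphrase; -N '' sets it to empty (you will be prompted for the old one)
ssh-keygen -p -f ./id_rsa -N ''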

You should create a new SSH key pair for that Docker image, as you probably don't want to embed your own private key there. To make it work, you'll have to add that key to the deployment keys in your git repository. Here's the complete recipe:
Generate ssh keys with ssh-keygen -q -t rsa -N '' -f repo-key, which will give you the repo-key and repo-key.pub files.
Add repo-key.pub to your repository deployment keys.
On GitHub, go to [your repository] -> Settings -> Deploy keys
Add something like this to your Dockerfile:
ADD repo-key /
RUN \
chmod 600 /repo-key && \
echo "IdentityFile /repo-key" >> /etc/ssh/ssh_config && \
echo -e "StrictHostKeyChecking no" >> /etc/ssh/ssh_config && \
git clone git@bitbucket.org:User/repo.git  # your git clone commands here...
Note that the above switches off StrictHostKeyChecking, so you don't need .ssh/known_hosts, although I personally prefer the ssh-keyscan solution from one of the other answers.
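If you prefer to keep strict host key checking enabled, a rough sketch of the same deploy-key setup using ssh-keyscan instead (file locations as above are assumptions):
ADD repo-key /
RUN chmod 600 /repo-key && \
    echo "IdentityFile /repo-key" >> /etc/ssh/ssh_config && \
    mkdir -p /root/.ssh && \
    ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts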

There's no need to fiddle around with ssh configurations. Use a configuration file (not a Dockerfile) that contains environment variables, and have a shell script update your Dockerfile at build time. You keep tokens out of your Dockerfiles, and you can clone over https (no need to generate or pass around ssh keys).
Go to Settings > Personal Access Tokens
Generate a personal access token with repo scope enabled.
Clone like this: git clone https://MY_TOKEN@github.com/user-or-org/repo
Some commenters have noted that if you use a shared Dockerfile, this could expose your access key to other people on your project. While this may or may not be a concern for your specific use case, here are some ways you can deal with that:
Use a shell script to accept arguments which could contain your key as a variable. Replace a placeholder in your Dockerfile with sed or similar, i.e. calling the script with sh rundocker.sh MY_TOKEN=foo, which would substitute the token into https://{{MY_TOKEN}}@github.com/user-or-org/repo (see the sketch after this list). Note that you could also use a configuration file (in .yml or whatever format you want) to do the same thing but with environment variables.
Create a GitHub user (and generate an access token) for that project only
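A minimal sketch of such a wrapper script (the names rundocker.sh, Dockerfile.template and the {{MY_TOKEN}} placeholder are assumptions, not part of the original answer):
#!/bin/sh
# usage: sh rundocker.sh MY_TOKEN=yourtoken
eval "$1"                            # sets MY_TOKEN from the first argument
# substitute the placeholder and build
sed "s/{{MY_TOKEN}}/$MY_TOKEN/g" Dockerfile.template > Dockerfile
docker build -t your_image_name .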

You often do not want to perform a git clone of a private repo from within the docker build. Doing the clone there involves placing the private ssh credentials inside the image where they can be later extracted by anyone with access to your image.
Instead, the common practice is to clone the git repo from outside of docker in your CI tool of choice, and simply COPY the files into the image. This has a second benefit: docker caching. Docker caching looks at the command being run, environment variables it includes, input files, etc, and if they are identical to a previous build from the same parent step, it reuses that previous cache. With a git clone command, the command itself is identical, so docker will reuse the cache even if the external git repo is changed. However, a COPY command will look at the files in the build context and can see if they are identical or have been updated, and use the cache only when it's appropriate.
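As a rough illustration of that pattern (repository and destination paths are assumptions), the CI side and the Dockerfile side would look something like:
# on the CI host, outside of docker
git clone git@bitbucket.org:User/repo.git docker-conf
docker build -t your_image_name .
# and in the Dockerfile, instead of a RUN git clone
COPY docker-conf /home/docker-conf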
BuildKit has a feature just for ssh which allows you to keep using your password-protected ssh keys; the result looks like:
# syntax=docker/dockerfile:experimental
FROM ubuntu as clone
LABEL maintainer="Luke Crooks <luke@pumalo.org>"
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
# Make ssh dir
# Create known_hosts
# Add bitbuckets key
RUN mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Clone the conf files into the docker container
RUN --mount=type=ssh \
git clone git@bitbucket.org:User/repo.git
And you can build that with:
$ eval $(ssh-agent)
$ ssh-add ~/.ssh/id_rsa
(Input your passphrase here)
$ DOCKER_BUILDKIT=1 docker build -t your_image_name \
--ssh default=$SSH_AUTH_SOCK .
The credential is made available through the forwarded ssh agent without ever being written to an image layer, removing the risk that it could accidentally leak out.
BuildKit also has a feature that allows you to pass an ssh key in as a mount that never gets written to the image:
# syntax=docker/dockerfile:experimental
FROM ubuntu as clone
LABEL maintainer="Luke Crooks <luke@pumalo.org>"
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
# Make ssh dir
# Create known_hosts
# Add bitbuckets key
RUN mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Clone the conf files into the docker container
RUN --mount=type=secret,id=ssh_id,target=/root/.ssh/id_rsa \
git clone git@bitbucket.org:User/repo.git
And you can build that with:
$ DOCKER_BUILDKIT=1 docker build -t your_image_name \
--secret id=ssh_id,src=$(pwd)/id_rsa .
Note that this still requires your ssh key to not be password protected, but you can at least run the build in a single stage, without a COPY command, and the ssh credential never becomes part of an image.
If you are going to add credentials into your build, consider doing so with a multi-stage build, and only placing those credentials in an early stage that is never tagged and pushed outside of your build host. The result looks like:
FROM ubuntu as clone
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
# Make ssh dir
# Create known_hosts
# Add bitbuckets key
RUN mkdir /root/.ssh/ \
&& touch /root/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Copy over private key, and set permissions
# Warning! Anyone who gets their hands on this image will be able
# to retrieve this private key file from the corresponding image layer
COPY id_rsa /root/.ssh/id_rsa
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:User/repo.git
FROM ubuntu as release
LABEL maintainer="Luke Crooks <luke@pumalo.org>"
COPY --from=clone /repo /repo
...
To force docker to run the git clone even when the lines before have been cached, you can inject a build ARG that changes with each build to break the cache. That looks like:
# inject a datestamp arg which is treated as an environment variable and
# will break the cache for the next RUN command
ARG DATE_STAMP
# Clone the conf files into the docker container
RUN git clone git@bitbucket.org:User/repo.git
Then you inject that changing arg in the docker build command:
date_stamp=$(date +%Y%m%d-%H%M%S)
docker build --build-arg DATE_STAMP=$date_stamp .

Another option is to use a multi-stage docker build to ensure that your SSH keys are not included in the final image.
As described in my post you can prepare your intermediate image with the required dependencies to git clone and then COPY the required files into your final image.
Additionally, if we LABEL our intermediate images, we can even delete them from the machine when finished.
# Choose and name our temporary image.
FROM alpine as intermediate
# Add metadata identifying these images as our build containers (this will be useful later!)
LABEL stage=intermediate
# Take an SSH key as a build argument.
ARG SSH_KEY
# Install dependencies required to git clone.
RUN apk update && \
apk add --update git && \
apk add --update openssh
# 1. Create the SSH directory.
# 2. Populate the private key file.
# 3. Set the required permissions.
# 4. Add github to our list of known hosts for ssh.
RUN mkdir -p /root/.ssh/ && \
echo "$SSH_KEY" > /root/.ssh/id_rsa && \
chmod -R 600 /root/.ssh/ && \
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
# Clone a repository (my website in this case)
RUN git clone git@github.com:janakerman/janakerman.git
# Choose the base image for our final image
FROM alpine
# Copy across the files from our `intermediate` container
RUN mkdir files
COPY --from=intermediate /janakerman/README.md /files/README.md
We can then build:
MY_KEY=$(cat ~/.ssh/id_rsa)
docker build --build-arg SSH_KEY="$MY_KEY" --tag clone-example .
Prove our SSH keys are gone:
docker run -ti --rm clone-example cat /root/.ssh/id_rsa
Clean intermediate images from the build machine:
docker rmi -f $(docker images -q --filter label=stage=intermediate)

For a Bitbucket repository, generate an App Password (Bitbucket settings -> Access Management -> App Passwords) with read access to the repo and project.
Then the command that you should use is:
git clone https://username:generated_password@bitbucket.org/reponame/projectname.git
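If you need this inside a Dockerfile, one option is to pass the app password as a build argument (names below are placeholders); note that build arguments end up in the image history, so for anything sensitive prefer the BuildKit secret approach from the earlier answer:
ARG BITBUCKET_APP_PASSWORD
RUN git clone https://username:${BITBUCKET_APP_PASSWORD}@bitbucket.org/reponame/projectname.git
built with:
docker build --build-arg BITBUCKET_APP_PASSWORD=generated_password .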

Nowadays you can use the BuildKit option --ssh default when you build your image; prior to building, you need to add your SSH deploy key to your ssh-agent.
Here is the full process from the beginning:
Create a key pair on your deployment server. Just run ssh-keygen -t ecdsa and store your key pair in ~/.ssh.
Add the generated public key (.pub extension) to your git provider's website (GitLab, GitHub, ...).
Add your key to your ssh-agent (a program that basically manages your keys more easily than handling each key file):
eval $(ssh-agent)
ssh-add /path/to/your/private/key
Add this to your Dockerfile:
# these first lines add your provider's public keys to known_hosts
# so git doesn't get an error from SSH
RUN mkdir -m 700 /root/.ssh && \
    touch /root/.ssh/known_hosts && \
    chmod 600 /root/.ssh/known_hosts && \
    ssh-keyscan your-git-provider.com > /root/.ssh/known_hosts
# now you can clone with the --mount=type=ssh option,
# forwarding your host ssh agent to Docker
RUN --mount=type=ssh mkdir -p /wherever/you/want/to/clone && \
    cd /wherever/you/want/to/clone && \
    git clone git@gitlab.com:your-project.git
And now you can finally build your Dockerfile (with BuildKit enabled):
DOCKER_BUILDKIT=1 docker build . --ssh default
As you cannot currently pass command-line parameters to the build in docker-compose, this solution is not yet available for docker-compose, but it should be soon (it has been implemented on GitHub and proposed as a merge request).
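Newer Compose releases have since added a build-level ssh option; assuming your version supports it, the equivalent configuration may look like this:
services:
  app:
    build:
      context: .
      ssh:
        - default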

p.s. this solution is quick and easy, but at the cost of reduced security (see the comments by @jrh).
Create an access token: https://github.com/settings/tokens
Pass it in as an argument to docker build.
(p.s. if you are using CapRover, set it under App Configs)
In your Dockerfile:
ARG GITHUB_TOKEN
RUN git config --global url."https://${GITHUB_TOKEN}@github.com/".insteadOf "https://github.com/"
RUN pip install -r requirements.txt
p.s. this assumes that private repos are in the following format in requirements.txt:
git+https://github.com/<YOUR-USERNAME>/<YOUR-REPO>.git
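If you would rather keep the token out of the image history entirely, here is a sketch of the same insteadOf trick using a BuildKit secret (the secret id github_token and file name are assumptions):
# requires BuildKit, e.g.:
#   DOCKER_BUILDKIT=1 docker build --secret id=github_token,src=./github_token.txt .
RUN --mount=type=secret,id=github_token \
    git config --global url."https://$(cat /run/secrets/github_token)@github.com/".insteadOf "https://github.com/" && \
    pip install -r requirements.txt && \
    rm -f /root/.gitconfig   # drop the token-bearing config from the final layer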

For other people searching: I had the same issue, and adding the --ssh default flag made it work.

In addition to @Calvin Froedge's approach of using a Personal Access Token (PAT),
you may need to add oauth or oauth2 as the username before your PAT to authenticate, like this:
https://oauth:<token>@git-url.com/user/repo.git

I recently had a similar issue with a private repository in a Rust project.
I suggest avoiding ssh permissions/config altogether.
Instead:
clone the repository within the build environment e.g. CI, where the permissions already exist (or can be easily configured)
copy the files into the image via the Dockerfile (this can also be cached natively within the CI)
Example
part 1) within CI
CARGO_HOME=tmp-home cargo fetch
part 2) within Dockerfile
COPY tmp-home $CARGO_HOME
the process is the same regardless of language/package system
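Putting the two parts together for the Rust case (the image tag, paths, and --offline build are assumptions, a sketch rather than a drop-in file):
# part 1) within CI, where the git credentials already exist
CARGO_HOME=tmp-home cargo fetch
docker build -t your_image_name .
# part 2) a minimal Dockerfile
FROM rust:1 AS build
WORKDIR /app
# /usr/local/cargo is CARGO_HOME in the official rust image
COPY tmp-home /usr/local/cargo
COPY . .
RUN cargo build --release --offline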

Related

Copying a war file from GitLab to EC2

I am trying to create a CI/CD pipeline to build a war file and deploy it to EC2 from GitLab.
Once the war file is created, I would like to copy it to some folder on EC2, and from there copy it to the Tomcat server.
The following is the ".gitlab-ci.yml" file.
stages:
  - build
  - deploy

build:
  stage: build
  image: maven:3-jdk-8
  script:
    - mvn install
  artifacts:
    paths:
      - target/

deploy:
  stage: deploy
  before_script:
    # Generate SSH Key
    - mkdir -p ~/.ssh
    - echo -e "$EC2_SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - scp target/gitlabec2pipeline.war ec2-user@$EC2_DEPLOY_SERVER:/gitlabec2pipeline.war
    - bash .gitlab-deploy-ec2.sh
I have added the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY variables.
But when the above pipeline is run, the scp command in the deploy stage gives a "permission denied" error.
Any idea on how to solve this?
Error Message:
Running with gitlab-runner 14.5.2 (e91107dd)
on blue-3.shared.runners-manager.gitlab.com/default zxwgkjAP
Resolving secrets
00:00
Preparing the "docker+machine" executor
Using Docker executor with image ruby:2.5 ...
Pulling docker image ruby:2.5 ...
Using docker image sha256:27d049ce98db4e55ddfaec6cd98c7c9cfd195bc7e994493776959db33522383b for ruby:2.5 with digest ruby@sha256:ecc3e4f5da13d881a415c9692bb52d2b85b090f38f4ad99ae94f932b3598444b ...
Preparing environment
00:01
Running on runner-zxwgkjap-project-31676452-concurrent-0 via runner-zxwgkjap-shared-1639429231-955193ca...
Getting source from Git repository
00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/te2122/deploytoaws/.git/
Created fresh repository.
Checking out dc27fd6f as master...
Skipping Git submodules setup
Downloading artifacts
00:02
Downloading artifacts for build (1880203000)...
Downloading artifacts from coordinator... ok id=1880203000 responseStatus=200 OK token=9RSALYus
Executing "step_script" stage of the job script
00:02
Using docker image sha256:27d049ce98db4e55ddfaec6cd98c7c9cfd195bc7e994493776959db33522383b for ruby:2.5 with digest ruby@sha256:ecc3e4f5da13d881a415c9692bb52d2b85b090f38f4ad99ae94f932b3598444b ...
$ mkdir -p ~/.ssh
$ echo -e "$EC2_SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ scp target/gitlabec2pipeline.war ec2-user@$EC2_DEPLOY_SERVER:/gitlabec2pipeline.war
Warning: Permanently added '54.205.169.131' (ECDSA) to the list of known hosts.
scp: /gitlabec2pipeline.war: Permission denied
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
Thank you.
I have added the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY variables.
AWS keys don't matter here, since you're using an SSH key for authentication when you use ssh (or scp) to connect to the EC2 instance.
If you are sure you're using the correct key value, then your environment variable $EC2_SSH_PRIVATE_KEY is probably not well-formed. Common issues come up with newlines in the variable and with CRLF line endings (often added when copy/pasting the key in the web UI) instead of LF.
To work around this more reliably, you could:
use a file type variable, then copy the file into ~/.ssh OR
base64 encode your key before putting it into the CI variable. This avoids any possible issues with newlines and CRLF. Then decode it in the job, placing the value into the file.
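A rough sketch of the base64 route (the variable name EC2_SSH_PRIVATE_KEY_B64 is an assumption):
# locally, when creating the CI variable (paste the output into EC2_SSH_PRIVATE_KEY_B64):
base64 -w0 ~/.ssh/id_rsa
# then in .gitlab-ci.yml before_script:
- mkdir -p ~/.ssh
- echo "$EC2_SSH_PRIVATE_KEY_B64" | base64 -d > ~/.ssh/id_rsa
- chmod 600 ~/.ssh/id_rsa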
Update:
It looks like your user does not have permission to write to the destination directory on the server. You need to either give the ec2-user appropriate permissions or choose a different destination where the user does have permission to write files.
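For example, copying into the ec2-user home directory (path is an assumption) avoids needing root on the instance:
- scp target/gitlabec2pipeline.war ec2-user@$EC2_DEPLOY_SERVER:/home/ec2-user/gitlabec2pipeline.war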

Prevent git from overwriting file owner upon git pull

I've seen a handful of similar questions on here, but none of the solutions given seem to be working... wondering if they're outdated, or this case is somehow different...so I wanted to open up a new thread to talk about it.
I've run into a frustrating problem where, every time I perform a git pull, it changes the owner of the changed files to the pulling user. What happens then is that the site shows the following error:
Warning: file_get_contents(/var/www/html/wp-content/themes/<my-theme>/resources/views/<changed-file>): failed to open stream: Permission denied in /var/www/html/wp-includes/class-wp-theme.php on line 1207
which can only be fixed by running chown www-data on the changed file.
This will become an issue when more people begin to work on the site, or when important files are changed (default template/header/footer...), and the site goes blank until chown is run.
Site details
Laravel, wordpress, ubuntu 18, armor hosting
Git repo stored in custom theme
I've tried a few solutions, but none seem to work (perhaps because they're implemented incorrectly).
Solutions I've tried
1: set filemode to false - I set filemode to false, locally and globally, on my local machine and the server in question. I've tried changing the case to "fileMode" too.
2: implement post-update hook - I added a post update hook to automatically update the file permissions/ownership. Here's the script (note that the git repo is in the custom theme):
#!/bin/sh
# default owner user
OWNER="www-data:www-data"
# changed file permission
PERMISSION="664"
# web repository directory
REPO_DIR="/var/www/html/wp-content/themes/quorum-theme"
# remote repository
REMOTE_REPO="origin"
# public branch of the remote repository
REMOTE_REPO_BRANCH="master"
cd $REPO_DIR || exit
unset GIT_DIR
files="$(git diff-tree -r --name-only --no-commit-id HEAD#{1} HEAD)"
git merge FETCH_HEAD
for file in $files
do
sudo chown $OWNER $file
sudo chmod $PERMISSION $file
done
exec git-update-server-info
Let me know if there is anything else worth trying, or if you notice an issue with my code...
All the best,
Jill
You are pretty close to the correct solution.
You need to enable the following hooks:
post-merge, called after a successful git pull
post-checkout, called after a successful git checkout
If you are sure to only use git pull, the post-merge hook is enough.
Enabling both hooks guarantees the script is always called, at no extra cost.
The content of the hook should be like:
#!/bin/sh
# default owner user
OWNER="www-data:www-data"
# web repository directory
REPO_DIR="/var/www/html/wp-content/themes/quorum-theme"
echo
echo "---"
echo "--- Resetting ownership to ${OWNER} on ${REPO_DIR}"
sudo chown -R $OWNER $REPO_DIR
echo "--- Done"
echo "---"
The script will reset the ownership to OWNER for all files and directories inside REPO_DIR.
I have copied the values from your post; adjust them to your needs.
To enable the hook you should:
create a file named post-merge with the script above
move it inside the directory .git/hooks/ of your repo
give it the executable permission with chmod +x post-merge
If needed, repeat these steps for the post-checkout hook, which should be identical to the post-merge hook.
Remember to perform a sudo git pull if your user is not root. All the files and directories in the target directory are owned by www-data, so you need to run the git pull command with superuser privileges or the command will fail.
From the looks of your question, it seems you are using git pull to deploy to production.
git is not a deployment tool. If you want to deploy your code, I would invite you to write a deployment script.
The first version of your script could be :
# deploy.sh
# cd to the appropriate directory :
cd /var/www/mysite
# change to the correct user before pulling :
sudo -u www-data git pull
An updated version would be to stop depending on git pull.
Ideally, you want to be able to identify the versions of your code that can be deployed to production, and not depend on the fact that "git pull will work without triggering merge conflicts".
Here is the outline of a generic workflow you can follow :
When you want to deploy to production :
produce some artifact that packs your code from an identified commit : for php code this can be a simple .tar.gz
# set a clearly identifiable tag on target commit
git tag v-x.y.z
# create a tar.gz archive that stores the files :
# look at 'git help archive'
git archive -o ../myapp-x.y.z.tgz v-x.y.z
push that artifact to your production server
scp myapp-x.y.z.tgz production-server:
run your deployment script, without calling git anymore :
# deploy.sh :
# usage : ./deploy.sh myapp-x.y.z.tgz
archive="$1"
# extract the archive to a fresh folder :
mkdir /var/www/mysite.new
tar -C /var/www/mysite.new -xzf "$archive"
chown -R www-data: /var/www/mysite.new
# replace old folder with new folder :
mv /var/www/mysite /var/www/mysite.old
mv /var/www/mysite.new /var/www/mysite
Some extra actions you will generally want to manage around your deployment :
backup your database before deploying
handle config parameters (copy your production config file? set up the environment? ...)
apply migration actions
restart apache or nginx
...
You probably want to version that deploy.sh script along with your project.
My approach works for me.
First, add a file named post-merge to /path/to/your_project/.git/hooks/
cd /path/to/your_project/.git/hooks/
touch post-merge
Then, change its ownership to the same as the <your_project> folder (the same as the nginx and php-fpm runner); in my case, I use www:www
sudo chown www:www post-merge
Then change its file mode to 775 (so it can be executed)
sudo chmod 775 post-merge
Then put the snippet below into post-merge. To understand the snippet, see here (actually that's me).
#!/bin/sh
# default owner user
OWNER="www:www"
# changed file permission
PERMISSION="664"
# web repository directory
REPO_DIR="/www/wwwroot/your_project/"
# remote repository
REMOTE_REPO="origin"
# public branch of the remote repository
REMOTE_REPO_BRANCH="master"
cd $REPO_DIR || exit
unset GIT_DIR
files="$(git diff-tree -r --name-only --no-commit-id HEAD#{1} HEAD)"
for file in $files
do
sudo chown $OWNER $file
sudo chmod $PERMISSION $file
done
exec git-update-server-info
Everything is done. Now go back to the your_project folder:
cd /path/to/your_project/
Run git pull under the your_project folder; remember you must run it as root or with sudo (I recommend sudo):
sudo git pull
Now check the new files pulled from the remote repository and see whether their ownership has been changed to www:www (if everything worked as expected, the ownership of the newly pulled files should be www:www).
This approach is much better than sudo chown -R www:www /www/wwwroot/your_project/, because it only changes the new files' ownership, not all of them! Say I just pulled 2 new files: if you change the whole folder's ownership, it costs more time and server resources (CPU usage, memory usage, ...), which is totally unnecessary.

Setting up multiple github accounts to use with visual studio in Windows

I tried following the steps in https://code.tutsplus.com/tutorials/quick-tip-how-to-work-with-github-and-multiple-accounts--net-22574 but it fails in the very first step. I am using Windows 10
I ran the ssh-keygen command in gitbash but got the following error:
My user name has a space in between, so how do I deal with this to setup my github accounts? Thanks.
With the latest version of Git, I recommend adding -m PEM, and, in your case, the target file path:
cd /c/Users/Ab*
mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -t rsa -P "" -m PEM -f ./id_rsa
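Once both keys exist, the usual way to drive two GitHub accounts side by side is host aliases in ~/.ssh/config (the alias and key file names below are placeholders):
# ~/.ssh/config
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_work
Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_personal
Then clone with git clone git@github-work:org/repo.git so ssh picks the matching key for each account.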

Docker Build Failed "chmod: cannot access '/main.sh': No such file or directory"

[This is the error I'm getting after the build command]
Step 7/9 : RUN chmod +x /main.sh
---> Running in 6e880a009c7d
chmod: cannot access '/main.sh': No such file or directory
The command '/bin/sh -c chmod +x /main.sh' returned a non-zero code: 1
and here is my Dockerfile:
FROM centos:latest
MAINTAINER Aditya Gupta
#install git
RUN yum -y update
RUN yum -y install git
#make git repo folder, change GIT_LOCATION
RUN mkdir -p /home/centos/doimages/dockimg;cd /home/centos/doimages/dockimg;
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername);cd (foldername)/
Run chmod +x ./main.sh
RUN echo " ./main.sh\n "
EXPOSE Portnumber
When you perform a RUN step in a Dockerfile, a temporary container is launched, often with a shell parsing your command. When that command finishes, the container exits, and docker packages the filesystem changes as an image layer. That process is repeated from the beginning for each RUN line.
The key piece there is that the shell exits, losing environment variables you've set, background processes you've run, and, in this case, the current working directory you tried to set here:
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername);cd (foldername)/
Instead of a cd in a RUN command, you can update the value of WORKDIR:
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername)
WORKDIR foldername
You want to execute a shell file which does not exist on your docker machine. Use the ADD command to add your script to your docker image!
# somewhere inside your Dockerfile, before the execution:
ADD ./PATH/ON/HOST/main.sh /PATH/YOU/LIKE/ON/DOCKER/MACHINE
Then try to build your docker image.
The issue was resolved by using WORKDIR, cloning manually outside the Dockerfile, and then giving the path to main.sh in the Dockerfile.

Using cygwin ssh-agent is running but git is still prompting for passphrase

I'm using cygwin as my terminal on Windows 7. I have found several suggestions to run ssh-agent in cygwin so I don't have to enter my password every time I run a git fetch/pull/push. I added the following to my .bash_profile and restarted my cygwin session:
SSH_ENV="$HOME/.ssh/environment"
function start_agent {
echo "Initialising new SSH agent..."
/usr/bin/ssh-agent -s | sed 's/^echo/#echo/' > "${SSH_ENV}"
echo succeeded
chmod 600 "${SSH_ENV}"
. "${SSH_ENV}" > /dev/null
/usr/bin/ssh-add;
}
# Source SSH settings, if applicable
if [ -f "${SSH_ENV}" ]; then
. "${SSH_ENV}" > /dev/null
#ps ${SSH_AGENT_PID} doesn't work under cygwin
ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
start_agent;
}
else
start_agent;
fi
It looks as if the ssh-agent and ssh-add are run successfully, but I am still prompted for my password.
Initialising new SSH agent...
succeeded
Enter passphrase for /cygdrive/c/Users/<username>/.ssh/id_rsa:
Identity added: /cygdrive/c/Users/<username>/.ssh/id_rsa (/cygdrive/c/Users/<username>/.ssh/id_rsa)
$ ssh-add -l
2048 <fingerprint> /cygdrive/c/Users/<username>/.ssh/id_rsa (RSA)
$ git fetch --all
Fetching origin
Enter passphrase for key '/c/Users/<username>/.ssh/id_rsa':
I am in fact using SSH and not HTTPS for my git connection (redacted private info):
$ git remote -v
origin ssh://git@XXX/XXX.git (fetch)
origin ssh://git@XXX/XXX.git (push)
The closest problem I've found for this issue is the following question:
ssh-agent doesn't work / save me from typing passphrase for git
However, I didn't rename my ssh under /git/bin.
Any suggestions on how to diagnose this issue? Thanks!
Here is an easier solution to this than the one above:
Launch Services (can be found by typing Services in the search box on the Taskbar)
Edit the settings of the "OpenSSH Authentication Agent"
Set the Startup type to Automatic and Start the Service
Launch the Edit the System Environment Variables (type Environment in the search box on the Taskbar)
Add a GIT_SSH variable with the value set to "C:\Windows\System32\OpenSSH\ssh.exe"
Now when an SSH key is added, you will not need to continue to type the passphrase in a Windows command prompt or in a Cygwin Bash shell.
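Roughly the same setup from an elevated command prompt, assuming the built-in Windows OpenSSH client (service name ssh-agent):
sc config ssh-agent start= auto
net start ssh-agent
setx GIT_SSH "C:\Windows\System32\OpenSSH\ssh.exe"
ssh-add C:\Users\<username>\.ssh\id_rsa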
The problem still remains. It seems to be related to the differing paths and hasn't been fixed yet.
You may consider using expect as an alternative to ssh-agent for the passphrase login interaction.
On Linux it is usually installed like this:
$ sudo apt-get update
$ apt-get install --assume-yes --no-install-recommends apt-utils expect
In cygwin you may find it under Tcl:
Here are simple steps on how to use it.
Create a file; let's locate and name it ~/.expect_push
#!/usr/bin/expect -f
spawn git push origin master
expect "Username for 'https://github.com':"
send "<username>\n";
expect "Password for 'https://<username>#github.com':"
send "<password>\n";
interact
Change the above <username> and <password> to your login credentials, with the following notes:
Modify the contents to precisely follow your git login interaction.
If your <password> contains the char $, write it as \$, otherwise the authentication will fail.
For example, if your password is mypa$$word, then put <password> as mypa\$\$word.
Create another file ~/.git_push
#!/usr/bin/bash
git remote set-url origin git@github.com:<username>/<repo>.git
git add .
git commit -m "my test commit using expect"
expect ~/.expect_push
Make them executable and create a symlink in /bin:
$ chmod +x ~/.expect_push
$ chmod +x ~/.git_push
$ cd /bin
$ ln -s ~/.git_push push
Then you can push to the remote without needing to fill in a username and password or passphrase:
$ cd /path/to/your/git/folder
$ push
