Resolve "remote origin already exists" error at GitLab Runner - continuous-integration

My error at the GitLab Runner terminal:
fatal: remote origin already exists.
warning: failed to remove code/ecom_front_proj/dist/sections: Permission denied
ERROR: Job failed: exit status 1
I am trying to deploy my project to an AWS server with a GitLab Runner using CI/CD. The first time, the code deploys successfully. If I commit a second time, it shows the above error.
If I delete my runner and create a new one, it deploys successfully.
I don't know how to delete the remote origin that already exists.
My .gitlab-ci.yml
image: docker

services:
  - docker:dind

stages:
  - test
  - deploy

test:
  stage: test
  only:
    - master
  script:
    - echo run tests in this section

step-deploy-prod:
  stage: deploy
  only:
    - master
  script:
    - sudo docker system prune -f
    - sudo docker volume prune -f
    - sudo docker image prune -f
    - sudo docker-compose build --no-cache
    - sudo docker-compose up -d
  environment: development
My Dockerfile
FROM node:6
LABEL maintainer="Aathi <aathi@techardors.com>"

RUN apk update && apk add git
RUN apk add nodejs
RUN apk add nginx
RUN set -x ; \
    addgroup -g 82 -S www-data ; \
    adduser -u 82 -D -S -G www-data www-data && exit 0 ; exit 1

COPY ./nginx.conf /etc/nginx/nginx.conf
#COPY ./localhost.crt /etc/nginx/localhost.crt
#COPY ./localhost.key /etc/nginx/localhost.key
COPY ./code/ecom_front_proj /sections
WORKDIR /sections
RUN npm install
RUN npm install -g @angular/cli
RUN ng build --prod
My docker-compose file
version: '2'

services:
  web:
    container_name: nginx
    build: .
    ports:
      - "4200:4200"
    command: nginx -g "daemon off;"
    volumes:
      - ./code/ecom_front_proj/dist/sections:/www:ro
My nginx file
user www-data;
worker_processes 1;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile off;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    #access_log /var/log/nginx/access.log;
    #error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    server {
        #listen 8443 ssl;
        listen 4200;
        #server_name localhost;

        #ssl_certificate localhost.crt;
        #ssl_certificate_key localhost.key;

        location / {
            root /sections/dist/sections;
            index index.html;
        }
    }
}

It looks like you run gitlab-runner version 11.9.0, which has a bug.
Alternatively, your gitlab-runner was installed with privileges that do not allow it to change the file structure in the mentioned path; consider reinstalling it or granting those privileges.
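If the runner reuses its build directory between jobs, a generic workaround is to make the remote setup idempotent in your own script. A minimal sketch (the repository URL is a placeholder, and the throwaway repo only exists for the demo):

```shell
#!/bin/sh
# Demo in a throwaway repo; in a CI job you would run the if/else part
# inside the runner's existing checkout. REPO_URL is a placeholder.
REPO_URL=https://gitlab.example.com/group/ecom_front_proj.git
cd "$(mktemp -d)" && git init -q .

git remote add origin "$REPO_URL"       # first job: succeeds
# On the second job "git remote add" fails with "already exists",
# so re-point the remote instead of re-adding it:
if git remote get-url origin >/dev/null 2>&1; then
    git remote set-url origin "$REPO_URL"
else
    git remote add origin "$REPO_URL"
fi
git remote get-url origin
```

The `failed to remove code/ecom_front_proj/dist/sections: Permission denied` part is likely separate: files built inside the container are owned by root, so the runner user cannot delete them during cleanup; chown-ing the build directory (or removing it with sudo in a before_script) addresses that.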

Related

unable to execute a bash script in k8s cronjob pod's container

Team,
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
I cannot run the file above, but I can cat it fine. I have tried my best and am still trying to find the cause, but no luck so far.
My requirement is to mount a bash script from a ConfigMap into a directory inside the container and run it to clone a repo, but I am getting the message above.
cron job
spec:
  concurrencyPolicy: Allow
  jobTemplate:
    metadata:
    spec:
      template:
        metadata:
        spec:
          containers:
          - args:
            - -c
            - |
              set -x
              pwd && ls
              ls -ltr /
              cat /repo/clone.sh
              ./repo/clone.sh
              pwd
            command:
            - /bin/bash
            envFrom:
            - configMapRef:
                name: sonarscanner-configmap
            image: artifactory.build.team.com/product-containers/user/sonarqube-scanner:4.7.0.2747
            imagePullPolicy: IfNotPresent
            name: sonarqube-sonarscanner
            securityContext:
              runAsUser: 0
            volumeMounts:
            - mountPath: /repo
              name: repo-checkout
          dnsPolicy: ClusterFirst
          initContainers:
          - args:
            - -c
            - cd /
            command:
            - /bin/sh
            image: busybox
            imagePullPolicy: IfNotPresent
            name: clone-repo
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: /repo
              name: repo-checkout
              readOnly: true
          restartPolicy: OnFailure
          securityContext:
            fsGroup: 0
          volumes:
          - configMap:
              defaultMode: 420
              name: product-configmap
            name: repo-checkout
  schedule: '*/1 * * * *'
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
data:
  clone.sh: |-
    #!bin/bash
    set -xe
    apk add git curl
    # Containers that fail to resolve the repo url can use the step below.
    repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
    repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
    if grep ${repo_url} /etc/hosts; then
      echo "git dns entry exists locally"
    else
      echo "Adding dns entry for git inside container"
      echo ${repo_ip} ${repo_url} >> /etc/hosts
    fi
    cd / && cat /etc/hosts && pwd
    git clone "https://$RU:$RT@${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
      (cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
      curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
        https://$RU:$RT@${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
      chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
    cd ${CODE_REPO_NAME}
    pwd
Output of pod describe:
Warning FailedCreatePodSandBox 1s kubelet, node1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "sonarqube-cronjob-1670256720-fwv27": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\"": unknown
pod logs
+ pwd
+ ls
/usr/src
+ ls -ltr /repo/clone.sh
lrwxrwxrwx 1 root root 15 Dec 5 16:26 /repo/clone.sh -> ..data/clone.sh
+ ls -ltr
total 60
.
drwxr-xr-x 2 root root 4096 Aug 9 08:58 sbin
drwx------ 2 root root 4096 Aug 9 08:58 root
drwxr-xr-x 2 root root 4096 Aug 9 08:58 mnt
drwxr-xr-x 5 root root 4096 Aug 9 08:58 media
drwxrwsrwx 3 root root 4096 Dec 5 16:12 repo <<<<< MY MOUNTED DIR
.
+ cat /repo/clone.sh
#!bin/bash
set -xe
apk add git curl
#Containers that fail to resolve repo url can use below step.
repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
if grep ${repo_url} /etc/hosts; then
echo "git dns entry exists locally"
else
echo "Adding dns entry for git inside container"
echo ${repo_ip} ${repo_url} >> /etc/hosts
fi
cd / && cat /etc/hosts && pwd
git clone "https://$RU:$RT@${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
(cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
https://$RU:$RT@${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
cd code_dir
+ ./repo/clone.sh
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
+ pwd
/usr/src
Assuming the working directory is different than /:
If you want to source the script in the current bash process (shorthand .), you have to add a space between the dot and the path:
. /repo/clone.sh
If you want to execute it in a child process, remove the dot:
/repo/clone.sh
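The logs bear this out: the pod's working directory is /usr/src, so `./repo/clone.sh` resolves to /usr/src/repo/clone.sh, which doesn't exist. A minimal reproduction of the path behavior (using a throwaway directory instead of a real mount):

```shell
#!/bin/sh
# Recreate the layout: script under <base>/repo, shell running elsewhere.
base=$(mktemp -d)
mkdir -p "$base/repo"
printf '#!/bin/sh\necho cloned\n' > "$base/repo/clone.sh"
chmod +x "$base/repo/clone.sh"

cd "$base"                    # like running from /
./repo/clone.sh               # works: relative path matches

cd "$(mktemp -d)"             # like the pod's /usr/src
./repo/clone.sh 2>&1 || true  # "No such file or directory"
"$base/repo/clone.sh"         # an absolute path works from anywhere
```

Two further things visible in the post, though the answer doesn't cover them: the ConfigMap script's shebang is `#!bin/bash` (missing the leading slash), and `defaultMode: 420` mounts clone.sh as 0644 (not executable), so `bash /repo/clone.sh` may be needed rather than executing the file directly.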

Shell Script To Automate Nginx Blocks

I'm trying to automate nginx setup through a script.
Command - SECRET_KEY='xxxx' HTTP='$http_update' AMCU_HOST='$host' AMCU_RURI='$redirect_uri' sh ./script.sh domain.com port username
In the server block file I want the literal text $http_update, but the script replaces it and leaves just ''. The same happens for AMCU_HOST='$host' and AMCU_RURI='$redirect_uri'. I also tried without passing the variables on the command line, but the same thing happens again: the script treats them as shell variables and leaves ''.
script.sh
#!/bin/bash
domain=$1
port=$2
user=$3
block="/etc/nginx/sites-available/$domain"
ssh="/home/$user/.ssh/authorized_keys"
#Create User
echo "▶ Creating User"
sudo useradd $user
#User mkdir
echo "▶ Updating home dir"
sudo mkdir /home/$user
#Create .ssh/authkeys
echo "▶ Updating SSH dir"
cd /home/$user && mkdir .ssh/
#Create the SSH Auth file:
echo "▶ Updating SSH AuthKey"
sudo tee $ssh > /dev/null <<EOF
$SECRET_KEY
EOF
#Create the Nginx server block file:
echo "▶ Updating NGINX Server Block"
sudo tee $block > /dev/null <<EOF
server {
listen 80;
server_name $domain;
return 301 https://$domain$AMCU_RURI;
}
server {
#Secure HTTP (HTTPS)
listen 443 ssl;
server_name $domain;
error_page 500 502 503 504 /500.html;
location /500.html {
root /var/www/html;
internal;
}
ssl_certificate /etc/letsencrypt/live/$domain/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/$domain/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
location / {
proxy_pass http://localhost:$port;
proxy_http_version 1.1;
proxy_set_header Upgrade $HTTP;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $AMCU_HOST;
proxy_cache_bypass $HTTP;
}
}
EOF
#Link to make it available
echo "▶ Linking Server Blocks"
sudo ln -s $block /etc/nginx/sites-enabled/
#Test configuration and reload if successful
echo "▶ Reloading Server"
sudo nginx -t && sudo service nginx reload
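The blank values come from heredoc expansion: inside an unquoted heredoc the shell expands every `$name` when the script runs, and `$host`, `$redirect_uri`, and `$http_update` (nginx's variable is normally `$http_upgrade`; `$http_update` looks like a typo) are empty in the script's environment. To emit a literal `$` into the generated file, either escape it or quote the heredoc delimiter. A minimal sketch:

```shell
#!/bin/bash
domain=example.com
out=$(mktemp)

# Option 1: escape the $ you want nginx (not the shell) to see.
cat > "$out" <<EOF
server_name $domain;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Host \$host;
EOF

# Option 2: quote the delimiter; nothing in the body is expanded.
cat >> "$out" <<'EOF'
proxy_cache_bypass $http_upgrade;
EOF

grep -F '$host' "$out"   # the literal $host survived into the file
```

Option 1 suits this script, since lines like `server_name $domain;` still need expansion while the `proxy_set_header` lines need literal nginx variables.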

How to export environment variable on remote host with GitlabCI

I'm using GitlabCI to deploy my Laravel applications.
I'm wondering how I should manage the .env file. As far as I've understood, I just need to put .env.example under version control, not the one with the real values.
I've set all the keys my app needs in Gitlab Settings -> CI/CD -> Environment Variables, and I can use them on the runner, for example to retrieve the SSH private key to connect to the remote host. But how should I deploy these variables to the remote host as well? Should I write them with bash into a "runtime generated" .env file and then copy it? Should I export them via ssh on the remote host? What is the correct way to manage this?
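For reference, the "runtime generated" variant mentioned in the question can be done directly in the job script. A sketch, where the variable names and target path are placeholders for your own GitLab CI/CD variables and host:

```shell
#!/bin/sh
# These would normally come from GitLab Settings -> CI/CD -> Environment
# Variables; the defaults here are only so the sketch runs standalone.
DB_DATABASE="${DB_DATABASE:-laravel}"
DB_USERNAME="${DB_USERNAME:-app}"
DB_PASSWORD="${DB_PASSWORD:-secret}"

cd "$(mktemp -d)"
cat > .env <<EOF
APP_ENV=production
DB_CONNECTION=mysql
DB_DATABASE=$DB_DATABASE
DB_USERNAME=$DB_USERNAME
DB_PASSWORD=$DB_PASSWORD
EOF

# Then copy it to the remote host over the SSH connection the job
# already has, e.g.:
# scp .env deploy@your-server:/var/www/app/.env
cat .env
```
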
If you are open to another solution, I propose using Fabric (a fabfile). I'll give you an example:
create a .env.default with variables like:
DB_CONNECTION=mysql
DB_HOST=%(HOST)s
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=%(USER)s
DB_PASSWORD=%(PASSWORD)s
After installing Fabric, add a fabfile to your project directory:
from fabric.api import env, run, put

prod_env = {
    'name': 'prod',
    'user': 'user_ssh',
    'deploy_to': '/path_to_project',
    'hosts': ['ip_server'],
}

def set_config(env_config):
    for key in env_config:
        env[key] = env_config[key]

def prod():
    set_config(prod_env)

def deploy(password, host, user):
    run("cd %s && git pull -r" % env.deploy_to)
    process_template(".env.default", ".env", {'PASSWORD': password, 'HOST': host, 'USER': user})
    put(".env", "/path_to_project/.env")

def process_template(template, output, context):
    import os
    basename = os.path.basename(template)
    outputfile = open(output, "w+b")
    text = None
    with open(template) as inputfile:
        text = inputfile.read()
    if context:
        text = text % context
    # print " processed \n : %s" % text
    outputfile.write(text)
    outputfile.close()
Now you can run this from your local machine to test the script:
fab prod deploy:password="pass",user="user",host="host"
It will deploy the project on your server; check that it processes .env correctly.
If it works, it's time for GitLab CI. This is an example file:
image: python:2.7

before_script:
  - pip install 'fabric<2.0'
  # Setup SSH deploy keys
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

deploy_staging:
  type: deploy
  script:
    - fab prod deploy:password="$PASSWORD",user="$USER",host="$HOST"
  only:
    - master
$SSH_PRIVATE_KEY, $PASSWORD, $USER and $HOST are GitLab environment variables; $SSH_PRIVATE_KEY should be a private key that has access to the server.
Hope I didn't miss a step.

Running docker commands in bash script leads to segmentation fault

The commands are like:
docker run / stop / rm ...
which work in a terminal but cause a segmentation fault in a bash script.
I compared the environments between bash script and terminal, as shown below.
2c2
< BASHOPTS=cmdhist:complete_fullquote:extquote:force_fignore:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
---
> BASHOPTS=cmdhist:complete_fullquote:expand_aliases:extquote:force_fignore:hostcomplete:interactive_comments:login_shell:progcomp:promptvars:sourcepath
7,8c7,8
< BASH_LINENO=([0]="0")
< BASH_SOURCE=([0]="./devRun.sh")
---
> BASH_LINENO=()
> BASH_SOURCE=()
10a11
> COLUMNS=180
14a16,18
> HISTFILE=/home/me/.bash_history
> HISTFILESIZE=500
> HISTSIZE=500
19a24
> LINES=49
22a28
> MAILCHECK=60
28c34,37
< PPID=12558
---
> PIPESTATUS=([0]="0")
> PPID=12553
> PS1='[\u#\h \W]\$ '
> PS2='> '
32,33c41,42
< SHELLOPTS=braceexpand:hashall:interactive-comments
< SHLVL=2
---
> SHELLOPTS=braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor
> SHLVL=1
42,52c51
< _=./devRun.sh
< dao ()
< {
< echo "Dao";
< docker run -dti -v /tmp/projStatic:/var/projStatic -v ${PWD}:/home --restart always -p 50000:50000 --name projDev daocloud.io/silencej/python3-uwsgi-alpine-docker sh;
< echo "Dao ends."
< }
< docker ()
< {
< docker run -dti -v ${PWD}:/home --restart always -p 50000:50000 --name projDev owen263/python3-uwsgi-alpine-docker sh
< }
---
> _=/tmp/env.log
UPDATE:
The info and version:
docker version
Client:
Version: 1.13.1
API version: 1.26
Go version: go1.7.5
Git commit: 092cba3727
Built: Sun Feb 12 02:40:56 2017
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 092cba3727
Built: Sun Feb 12 02:40:56 2017
OS/Arch: linux/amd64
Experimental: false
docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: aa8187dbd3b7ad67d
You've redefined the docker command as a shell function, and the definition is recursive: the docker inside the function body resolves to the function itself, not the binary. Remove this from your environment:
docker ()
{
docker run -dti -v ${PWD}:/home --restart always -p 50000:50000 --name projDev owen263/python3-uwsgi-alpine-docker sh
}
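You can verify and clear the shadowing in the affected shell. A quick sketch (the function body is reproduced from the environment dump, but never called here, since calling it is exactly what recurses until the stack blows, producing the segfault):

```shell
#!/bin/bash
# The problematic definition from the environment dump:
docker () { docker run "$@"; }

[ "$(type -t docker)" = function ] && echo "docker is shadowed by a function"
unset -f docker                # remove the function definition
[ "$(type -t docker)" != function ] && echo "function removed"

# Alternatively, bypass functions and aliases without removing them:
# command docker run ...
```

Also remove the definition from whatever startup file (.bashrc, devRun.sh) defines it, or every new shell will re-create the problem.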

How to run a shell script on every request?

I want to run a shell script every time my nginx server receives any HTTP request. Any simple ways to do this?
You can execute a shell script via Lua code from the nginx.conf file to achieve this. You need to have the HttpLuaModule to be able to do this.
Here's an example to do this.
location /my-website {
content_by_lua_block {
os.execute("/bin/myShellScript.sh")
}
}
I found the following information online at this address: https://www.ruby-forum.com/topic/2960191
This does expect that you have fcgiwrap installed on the machine. It is really as simple as:
sudo apt-get install fcgiwrap
Example script (Must be executable)
#!/bin/sh
# -*- coding: utf-8 -*-
NAME="cpuinfo"
echo "Content-type:text/html\r\n"
echo "<html><head>"
echo "<title>$NAME</title>"
echo '<meta name="description" content="'$NAME'">'
echo '<meta name="keywords" content="'$NAME'">'
echo '<meta http-equiv="Content-type" content="text/html;charset=UTF-8">'
echo '<meta name="ROBOTS" content="noindex">'
echo "</head><body><pre>"
date
echo "\nuname -a"
uname -a
echo "\ncpuinfo"
cat /proc/cpuinfo
echo "</pre></body></html>"
This can also be used as an include file, and it is not restricted to shell scripts only:
location ~ (\.cgi|\.py|\.sh|\.pl|\.lua)$ {
gzip off;
root /var/www/$server_name;
autoindex on;
fastcgi_pass unix:/var/run/fcgiwrap.socket;
include /etc/nginx/fastcgi_params;
fastcgi_param DOCUMENT_ROOT /var/www/$server_name;
fastcgi_param SCRIPT_FILENAME /var/www/$server_name$fastcgi_script_name;
}
I found it extremely helpful for what I am working on; I hope it helps you out with your RaspberryPI project.
1. Install OpenResty (an enhanced version of Nginx with addon modules); refer to https://openresty.org/en/getting-started.html
2. Configure the aws cli on the instance
3. Write a shell script that downloads a file from the specified S3 bucket
4. Make the required changes in the nginx.conf file
5. Restart the nginx server
I tested the HTTP request using curl, and the file gets downloaded to the /tmp directory of the respective instance:
curl -I http://localhost:8080/
Output:
HTTP/1.1 200 OK
Server: openresty/1.13.6.2
Date: Tue, 14 Aug 2018 07:34:49 GMT
Content-Type: text/plain
Connection: keep-alive
Content of nginx.conf file:
worker_processes 1;
error_log logs/error.log;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        # A single "location /" (nginx rejects duplicate locations):
        location / {
            content_by_lua_block {
                os.execute("sh /tmp/s3.sh")
                ngx.say("<p>hello, world</p>")
            }
        }
    }
}
If you prefer full control in Python:
Create /opt/httpbot.py:
#!/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler
import subprocess

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._handle()

    def do_POST(self):
        self._handle()

    def _handle(self):
        try:
            self.log_message("command: %s", self.path)
            if self.path == '/foo':
                subprocess.run(
                    "cd /opt/bar && GIT_SSH_COMMAND='ssh -i .ssh/id_rsa' git pull",
                    shell=True,
                )
        finally:
            self.send_response(200)
            self.send_header("content-type", "application/json")
            self.end_headers()
            self.wfile.write('{"ok": true}\r\n'.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 4242), Handler).serve_forever()
No concurrency/parallelism here, so httpbot runs one command at a time, no conflicts.
Run apt install supervisor
Create /etc/supervisor/conf.d/httpbot.conf:
[program:httpbot]
environment=PYTHONUNBUFFERED="TRUE"
directory=/opt
command=/opt/httpbot.py
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/httpbot.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
Add to your nginx server:
location /foo {
proxy_pass http://127.0.0.1:4242/foo;
}
Run:
chmod u+x /opt/httpbot.py
service supervisor status
# If stopped:
service supervisor start
supervisorctl status
# If httpbot is not running:
supervisorctl update
curl https://example.com/foo
# Should return {"ok": true}
tail /var/log/httpbot.log
# Should show `command: /foo` and the output of shell script
You can also use the nginx mirror module and proxy_pass the mirrored request to a web script that runs whatever you need. In my case I just added this to my main site's location { ... }:
mirror /mirror;
mirror_request_body off;
and then added a new location called /mirror that runs a PHP script which executes whatever is needed:
location = /mirror {
internal;
proxy_pass http://localhost/run_script.php;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Original-URI $request_uri;
}
https://nginx.org/en/docs/http/ngx_http_mirror_module.html
You can use nginx's perl module, which is usually available from a repo and can be easily installed. A sample that calls the system curl command:
location /mint {
perl '
sub {
my $r = shift;
$r->send_http_header("text/html");
$r->print(`curl -X POST --data \'{"method":"evm_mine"}\' localhost:7545`);
return OK;
}
';
}
