circleci won't finish build on successful server startup - amazon-ec2

I have set up CircleCI, AWS CodeDeploy and EC2 to work together so that after I push code to git, it goes through CircleCI to EC2 and starts a server there.
Everything is working: the server starts and runs correctly, but CircleCI won't give me a successful build status. The build always stays in the "running" state.
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu
permissions:
  - object: /home/ubuntu/scripts
    pattern: "**"
    mode: 777
    type:
      - file
hooks:
  ApplicationStart:
    - location: scripts/start.sh
      timeout: 3800
start.sh
#!/bin/bash
node server.js
Does anyone know how to solve this?

The host agent is waiting for your script to exit. You need to run node as a daemon.
#!/bin/bash
node server.js > /var/log/my_node_log 2> /var/log/my_node_log < /dev/null &
See http://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting.html#troubleshooting-long-running-processes
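If you'd rather not manage backgrounding and output redirection yourself, a process manager is a common alternative. This is only a sketch, assuming pm2 is installed globally on the instance and the app lives in /home/ubuntu:

#!/bin/bash
# Hypothetical alternative start.sh: hand the process to pm2 so the
# ApplicationStart hook can exit and CodeDeploy (and CircleCI) can finish.
cd /home/ubuntu
pm2 start server.js --name my-app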

Related

Ansible having problems authenticating with Google Cloud Platform

We are using Ansible to deploy an image to Google Kubernetes Cluster (GKE).
We have set up Ubuntu 20.04 and Python 3.8.5.
playbook.main.yml:
---
- hosts: localhost
  vars:
    k8s_file_path: /home/pesinn/Documents/...
  become: yes
  become_method: sudo
  roles:
    - k8s
main.yml:
- name: First Deployment
  k8s:
    kubeconfig: /home/pesinn/.kube/config
    src: "{{k8s_file_path}}/deployment.yml"
When trying to deploy the image defined in deployment.yml file, by running the playbook, we get this error:
kubernetes.config.config_exception.ConfigException: cmd-path: process returned 1
Cmd: /home/pesinn/y/google-cloud-sdk/bin/gcloud config config-helper --format=json
Stderr: WARNING: Could not open the configuration file: [/root/.config/gcloud/configurations/config_default].
ERROR: (gcloud.config.config-helper) You do not currently have an active account selected.
Please run:
$ gcloud auth login
What we've already done:
Initialized the cloud: gcloud init
Logged in and chosen a project: gcloud auth login
Run export GOOGLE_APPLICATION_CREDENTIALS="path_to_service_account_key.json"
Run gcloud container clusters get-credentials {gke_project} --region {region}
Run the playbook: sudo ansible-playbook playbook.main.yml -vvv
Run gcloud config config-helper --format=json on the local machine without any problems
What is very strange here is that we're logged in for sure. We can access the GKE cluster through kubectl command on the local machine. However, Ansible complains about us not being logged in. Also, in the error logs, we see that it is trying to open /root/.config/gcloud/configurations/config_default. Our default config file is, on the other hand, located in the home folder.
This error occurs randomly. Sometimes Ansible detects our login and deploys the image, but sometimes it gives us this error. Both scenarios can happen without any code changes being made.
For some reason, Ansible does not use GCP's default environment variables for authentication.
You can set:
GCP_AUTH_KIND
GCP_SERVICE_ACCOUNT_EMAIL
GCP_SERVICE_ACCOUNT_FILE
GCP_SCOPES
GCP_SERVICE_ACCOUNT_FILE is the equivalent of GOOGLE_APPLICATION_CREDENTIALS
Reference: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html#providing-credentials-as-environment-variables
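A sketch of what that could look like before invoking the playbook; the key path and the auth kind are assumptions, and -E is there because the playbook is run under sudo (which is also why the error mentions /root/.config):

export GCP_AUTH_KIND=serviceaccount
export GCP_SERVICE_ACCOUNT_FILE=/home/pesinn/path_to_service_account_key.json  # assumed path to the key
export GCP_SCOPES=https://www.googleapis.com/auth/cloud-platform
sudo -E ansible-playbook playbook.main.yml -vvv   # -E preserves the exported variables under sudo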

Deploying to FTP server in Github Actions does not work

As the title says, deploying to an FTP server isn't working for me from a GitHub Action. I've tried a couple of actions to accomplish this (FTP-Deploy and ftp-action), but FTP-Deploy just kept running with sporadic
curl: (7) Failed to connect to ftpservername.com port 21: Connection timed out
messages, and ftp-action kept running without any output. Note: the server is available; I connected and transferred some files using FileZilla without any issues.
After that I tried using lftp; this is the command I used on a local Ubuntu machine:
lftp -c "open -u username,password ftpservername.com; mirror -R locfolder remote/remotefolder"
and the file transfer worked, but when used in a GitHub Action it produced this output:
---- Connecting to ftpservername.com (123.456.789.123) port 21
mkdir `remote/remotefolder' [Connecting...]
**** Socket error (Connection timed out) - reconnecting
---- Closing control socket
---- Connecting to ftpservername.com (123.456.789.123) port 21
I tried setting both ftp:ssl-allow and ssl:verify-certificate to false, but this did not produce any results. Also, I do not have access to the server, so I can't check the server logs.
This is the workflow file:
name: Test
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install pip
        run: python -m pip install --upgrade pip
      - name: Install packages
        run: |
          sudo apt install lftp
          sudo apt install expect
      .
      .
      .
      - name: FTP Deploy
        run: |
          echo Starting...
          unbuffer lftp -c "debug; set ftp:ssl-allow false; set ssl:verify-certificate false; open -u username,${{ secrets.PASSWORD }} ftpservername.com; mirror -R -v locfolder remote/remotefolder"
          echo Done transferring files.
Any help is appreciated, thank you!
Found the issue: the hosting service was blocking the IP address (it was an IP address from outside the country). After setting up a self-hosted runner and whitelisting the runner's IP, everything works fine.
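For reference, the only workflow change a self-hosted runner usually needs is the runs-on key; the label set here is just an assumption about how the runner was registered:

jobs:
  build:
    runs-on: self-hosted   # or e.g. [self-hosted, linux] if the runner has extra labels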

Can't connect to docker inside jenkins docker container MacOS

After two full days of reading and trying things, I humbly come here to ask how to make this work, because none of the other answers helped me.
I'm on macOS 10.13.6 (High Sierra)
Running Docker Desktop for Mac 2.2.0.5 (43884)
Engine: 19.03.8
Compose: 1.25.4
I want to run Jenkins to study some pipeline stuff, so this is my docker-compose.yml:
version: "3.2"
services:
jenkins:
build:
dockerfile: dockerfile
context: ./build
ports:
- "8080:8080"
- "50000:50000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./data:/var/jenkins_home
The first problem is that the image I'm using, jenkins/jenkins:lts, does not have a Docker client installed, so even with the socket mapped through volumes I can't use docker version; the output of that command is bash: docker: command not found.
This is my pipeline just for test (from jenkins documentation):
pipeline {
    agent { docker { image 'node:6.3' } }
    stages {
        stage('build') {
            steps {
                sh 'npm --version'
            }
        }
    }
}
So through this plugin https://plugins.jenkins.io/docker-plugin/ I can go to "Manage Jenkins > Manage Nodes and Clouds > Configure Clouds > Add a new cloud", and under "Docker Cloud details..."
there is a Docker Host URI where I can put "unix:///var/run/docker.sock" so that Jenkins uses the Docker daemon from my macOS host to do what it needs to do.
I tried all the suggestions from the internet, from creating the jenkins user and docker user, to putting the jenkins user in the docker group, and other stuff, but none of them work on the Mac.
The big majority of the questions asked are for Linux, and all of them seem to have solved the problem, but when I try to replicate the steps on macOS it just doesn't work.
Maybe there is some step that I'm missing, or something people already know they have to do at one of the steps, but I'm failing miserably.
Some of the steps that I tried:
Created the user and group jenkins:
sudo dscl . create /Users/jenkins UniqueID 1000 PrimaryGroupID 1000
sudo dscl . create /Groups/jenkins gid 1000
Created the group docker:
sudo dscl . create /Groups/docker gid 1001
Added the jenkins user to the docker group
sudo dscl . append /Groups/docker GroupMembership jenkins
Checked that the user really is in the group:
$ dsmemberutil checkmembership -u 1000 -g 1001
user is a member of the group
Tried to change the owner of the socket from inside the Jenkins container (that's why I was building the image), but it didn't work.
Tried to change the ownership of the socket on the macOS host, but it just doesn't change.
The socket always has these permissions:
lrwxr-xr-x 1 root daemon 68B Apr 28 10:14 docker.sock -> /Users/metasix/Library/Containers/com.docker.docker/Data/docker.sock
For Jenkins, the best approach is to have agents that run all the jobs and a master that only does orchestration.
Some years ago, I built a JNLP agent that registers itself with the Jenkins master; you can check my repo here: https://github.com/jmaitrehenry/docker-jenkins-jnlp
As I say, I made it about 3 years ago and it may be a bit outdated.
About your problem, you need to know that Docker for Mac runs containers inside a small VM, so when you add a user on macOS, the VM doesn't have it. And Docker for Mac does a lot of magic to map UIDs on your Mac to UIDs inside containers.
You can try to add the Docker client inside your Dockerfile; for that, try adding these steps:
FROM jenkins/jenkins:lts
[...]
# Switch to root as the base image switch to jenkins user
USER root
# Download docker-cli and install it
RUN curl -o docker-ce-cli.deb https://download.docker.com/linux/debian/dists/stretch/pool/stable/amd64/docker-ce-cli_19.03.8~3-0~debian-stretch_amd64.deb && \
dpkg -i docker-ce-cli.deb && \
rm docker-ce-cli.deb
# Switch back to jenkins user
USER jenkins
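After rebuilding, a quick sanity check that the client can reach the host daemon through the mounted socket might look like this (commands assume the docker-compose.yml from the question; if this still fails with a permission error, see the socket-permission notes below):

docker-compose build jenkins
docker-compose up -d jenkins
docker-compose exec jenkins docker version   # both the Client and Server sections should print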
You need to enable host mode networking by adding network: host to your compose file:
services:
  jenkins:
    build:
      dockerfile: dockerfile
      context: ./build
    network: host
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/var/jenkins_home
This will allow your guest docker container to see the host's network. The problem is that Docker Desktop for macOS doesn't support listening on the TCP port. There is a known workaround using socat: https://www.ivankrizsan.se/2016/05/21/docker-api-over-http-on-mac-os-x-with-docker-for-mac-beta/. Once you have socat set up to route from docker.sock to TCP 2376, set your Host URI to tcp://0.0.0.0:2376. And of course you will need to create a new Dockerfile that extends jenkins/jenkins:lts with FROM jenkins/jenkins:lts and adds Docker to the container, as suggested in another answer.
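A sketch of that socat workaround using the commonly used bobrik/socat image (the image choice and port numbers are assumptions based on the linked article):

docker run -d --restart=always --name socat-docker-api \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 2376:2375 \
  bobrik/socat TCP-LISTEN:2375,fork UNIX-CONNECT:/var/run/docker.sock
# With host networking in the Jenkins container, the Docker Cloud Host URI
# can then point at tcp://0.0.0.0:2376 as described above.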
I ran into the same issue: the jenkins user was not able to run docker commands even after being added to the docker group.
When I checked the permissions on the host machine (macOS), the docker.sock file was owned by root:daemon.
ls -lart /var/run/docker.sock
lrwxr-x--x 1 root daemon 37 Feb 1 14:56 /var/run/docker.sock -> /Users/....
I updated the permissions to 755 and it started working; I am able to run docker commands in the container as the jenkins user.
Please change the host file permissions like this only in a development environment.
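For completeness, the change described above is just:

# Development environments only: loosen the socket so the jenkins user can use it
sudo chmod 755 /var/run/docker.sock
ls -lL /var/run/docker.sock   # -L follows the symlink to show the socket's own permissions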

Bitbucket fatal: Can't access remote

I tried to set up an SFTP connection between Bitbucket and a RunCloud server. RunCloud only accepts SFTP connections. Bitbucket config:
image: php:7.3
pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - apt-get update
            - apt-get -qq install git-ftp
            - git ftp init --user $SFTP_username --passwd $FTP_password sftp://runcloud@1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
Connection always fails with error fatal: Can't access remote 'sftp://1.111.111.11', exiting...
I tried different SFTP path combinations but the result is always the same.
sftp://1.111.111.11/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
sftp://mywebsite/home/runcloud/webapps/mywebsite/wp-content/themes/mywebsiteTheme
My website
Root Path: /home/runcloud/webapps/mywebsite
Public Path: /home/runcloud/webapps/mywebsite
RunCloud has a different setup than a "normal" FTP server. For example, to connect with FileZilla the host is my server IP, and to get to my website I have to navigate to /webapps/mywebsite.
Not sure what I'm doing wrong; is my SFTP path incorrect?

GitLab CI gives curl: (7) Failed to connect to localhost port 8090: Connection refused

The issue is that I get the curl: (7) Failed to connect to localhost port 8090: Connection refused error in GitLab CI, but this does not happen on my laptop, where I get the source HTML of the webpage. The .gitlab-ci.yml below is a simple reproduction of the issue. I have spent numerous hours trying to figure this out; I'm sure someone else has too.
Aside: this isn't the same as a similar question, since they don't offer a solution there.
GitLab Repo: https://gitlab.com/mudassir-ahmed/wordpress-testing-with-gitlab-ci/tree/another-approach but the only file it contains is the .gitlab-ci.yml shown below...
image: docker:stable
variables:
  # When using dind service we need to instruct docker, to talk with the
  # daemon started inside of the service. The daemon is available with
  # a network connection instead of the default /var/run/docker.sock socket.
  #
  # The 'docker' hostname is the alias of the service container as described at
  # https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services
  #
  # Note that if you're using the Kubernetes executor, the variable should be set to
  # tcp://localhost:2375/ because of how the Kubernetes executor connects services
  # to the job container
  # DOCKER_HOST: tcp://localhost:2375/
  #
  # For non-Kubernetes executors, we use tcp://docker:2375/
  DOCKER_HOST: tcp://docker:2375/
  # When using dind, it's wise to use the overlayfs driver for
  # improved performance.
  DOCKER_DRIVER: overlay2
services:
  - docker:dind
before_script:
  - docker info
build:
  stage: build
  script:
    - apk update
    - apk add curl
    #- hostname -i
    - docker container ls
    - docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
    - curl localhost:8090 # This works on my laptop but not on a GitLab runner.
Referring to the answer from here: gitlab-ci.yml & docker-in-docker (dind) & curl returns connection refused on shared runner
There are two ways to fix this:
Option 1: Replace localhost in curl localhost:8090 with docker, like this: curl docker:8090
Option 2:
services:
  - name: docker:dind
    alias: localhost
docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
curl localhost:8090 # This works on my laptop but not on a GitLab runner.
Assuming that is your code, I think you should add some kind of wait between docker run and curl.
I had a similar issue some time ago: after starting a docker container on the GitLab runner machine, I wasn't able to access my URL either. Adding a check that waits for up to about one minute for the container to be running resolved my problem.
The check itself is docker inspect -f '{{.State.Running}}' containerName, but you need a small additional script around it; a sketch follows.
