Running lambdas in localstack in gitlab-ci - aws-lambda

So I have localstack running locally (on my laptop) and can deploy a serverless app to it and then invoke a Lambda.
However, I am really struggling with doing the same thing in gitlab-ci.
This is the relevant part of .gitlab-ci.yml:
integration-test:
  stage: integration-test
  image: node:14-alpine3.12
  tags:
    - docker
  services:
    - name: localstack/localstack
      alias: localstack
  variables:
    LAMBDA_EXECUTOR: docker
    HOSTNAME_EXTERNAL: localstack
    DEFAULT_REGION: eu-west-1
    USE_SSL: "false"
    DEBUG: "1"
    AWS_ACCESS_KEY_ID: test
    AWS_SECRET_ACCESS_KEY: test
    AWS_DEFAULT_REGION: eu-west-1
  script:
    - npm ci
    - npx sls deploy --stage local
    - npx jest --testMatch='**/*.integration.js'
  only:
    - merge_requests
The localstack gets started and the deployment works fine. But as soon as a lambda is invoked (in an integration test), localstack tries to create a container for the lambda to run in, and that's when it fails with the following:
Lambda process returned error status code: 1. Result: . Output:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
must specify at least one container source (.....)
I tried to set DOCKER_HOST to tcp://docker:2375 but then it fails with:
Lambda process returned error status code: 1. Result: . Output:
error during connect: Post http://docker:2375/v1.29/containers/create: dial tcp: lookup docker on 169.254.169.254:53: no such host
DOCKER_HOST set to tcp://localhost:2375 complains too:
Lambda process returned error status code: 1. Result: . Output:
Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
must specify at least one container source
Did anyone ever get lambdas to run within localstack on shared GitLab runners?
Thanks for your help :)

Running Docker in Docker is usually a bad idea, since it's a big security vulnerability. Granting access to the local Docker daemon is equivalent to granting root privileges on the runner.
If you still want to use the Docker daemon installed on the host to spawn containers, refer to the official documentation - https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-socket-binding
which boils down to adding
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
to the [runners.docker] section in your runner config.
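For reference, a minimal sketch of how that section of the runner's config.toml could look (the runner name, URL and token shown here are placeholders):

[[runners]]
  name = "my-docker-runner"            # placeholder
  url = "https://gitlab.example.com/"  # placeholder
  token = "RUNNER_TOKEN"               # placeholder
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    # bind the host Docker socket into job containers
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]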
The question is, why do you need Docker at all? According to https://github.com/localstack/localstack , setting LAMBDA_EXECUTOR to local will
run Lambda functions in a temporary directory on the local machine
which should be the best approach for your problem and won't compromise the security of your runner host.
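In the job from the question that is a one-line change in the variables block, roughly like this (a sketch; the remaining variables stay as they are):

variables:
  # run functions in a temporary directory inside the job container
  # instead of spawning sibling Docker containers
  LAMBDA_EXECUTOR: local
  HOSTNAME_EXTERNAL: localstack
  DEFAULT_REGION: eu-west-1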

Related

Laravel Vapor Docker runtime with GitLab CI does not want to work

I use Laravel Vapor for deploying our microservices based on Laravel. This works very well as long as the app and its dependencies are not too large; if they are, it gets a little tricky.
Vapor provides a Docker runtime for this case, where you are able to deploy apps up to 10GB in size.
For local development we usually use Laradock.io because it's easy and flexible.
That means if we deploy from our local environment, it is easy to enter the workspace container and run the Vapor deploy commands. After enabling the Docker client for the workspace container, it works properly with the Vapor Docker runtime.
But now we have integrated the deployment process into the GitLab CI pipeline. That works very well for our small services with the Vapor PHP runtime.
But for the Docker runtime I am stuck on the CI deployment.
The Docker runtime needs a running Docker instance where Vapor will be invoked. That means in the .gitlab-ci.yml I have to use an image with Docker and PHP installed to invoke the Vapor scripts.
So I created a Docker image based on the Laradock workspace container, but the GitLab runner always exits with the error message that no Docker daemon is available.
This is the related part of my .gitlab-ci.yml (the image is only available locally):
testing:
  image:
    name: lexitaldev/vapor-docker-deploy:latest
    pull_policy: never
  securityContext:
    privileged: true
  environment: testing
  stage: deploy
  only:
    - test
  script:
    - composer install
    - php vendor/bin/vapor deploy test
This is the specific output:
Error Output:
================
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the
docker daemon running?
I've tried to use the standard 'laravelphp/vapor:php80' image and to install Docker in the before_script section as well.
before_script:
  - apk add docker
  - addgroup root docker
But nothing helped. It seems there is a problem with the docker.sock.
Did anybody manage to add Vapor Docker runtime deployment to CI scripts?
Best,
Michael
You might think that you only need to add the dind service, but after you do that it will throw an error related to the image that GitLab creates for your pipelines. So you need to create a runner with volumes, the privileged flag, and tags.
I did it, using gitlab-runner on my machine.
sudo gitlab-runner register -n \
  --url {{ your_url }} \
  --registration-token {{ your_token }} \
  --executor docker \
  --description "{{ Describe your runner }}" \
  --docker-image "docker:20.10.12-alpine3.15" \
  --docker-privileged \
  --docker-volumes="/certs/client" \
  --docker-volumes="cache" \
  --docker-volumes="/var/run/docker.sock:/var/run/docker.sock" \
  --tag-list {{ a_tag_for_your_pipeline }}
Once you have done that, you need to use a stable Docker version in your .gitlab-ci.yml file. For some reason it didn't work when I tried to use version 20 or latest.
image: docker:stable

services:
  - name: docker:stable-dind

before_script:
  - echo $CI_JOB_TOKEN | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin

build:
  tags:
    - {{the tag you defined in your runner}}
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - echo $IMAGE_TAG
    - docker build -t $CI_REGISTRY_IMAGE -f {{your Dockerfile}} .
    - docker push $CI_REGISTRY_IMAGE
All the variables are predefined by GitLab, so don't worry, you can copy & paste. I also added some of the advice that GitLab gives in its documentation for pushing your Docker image to the GitLab Container Registry.

manifest unknown: Failed to fetch "latest" from request "/v2/$PROJECT_ID/$IMAGE:latest/manifests/latest"

I have some Python files inside of a VM that will run every week to scrape info from a website. This is automated through Cloud Scheduler and Cloud Functions and it is confirmed that it works. I wanted to use Cloud Build and Cloud Run to update the code inside of the VM each time I update the code in GitHub. I read somewhere that in order to deploy a container image to a VM, the VM has to run a container-optimized OS, so I manually made a new VM matching that criterion through Compute Engine. The container-OS VM is already made; I just need to have its container image updated with the new container image built from the updated code in GitHub.
I'm trying to build a container image that I will later use to update the code inside of a virtual machine. Cloud Run is triggered every time I push to a folder in my GitHub repository.
I checked Container Registry and the images are being created, but I keep getting this error when I check the virtual machine:
"Error: Failed to start container: Error response from daemon:
{
"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE:latest/manifests/latest\"."
}"
Why is the request being made for the latest tag when I wanted the tag with the commit hash and how can I fix it?
This is the virtual machine log (sudo journalctl -u konlet-startup):
Started Containers on GCE Setup.
Starting Konlet container startup agent
Downloading credentials for default VM service account from metadata server
Updating IPtables firewall rules - allowing tcp traffic on all ports
Updating IPtables firewall rules - allowing udp traffic on all ports
Updating IPtables firewall rules - allowing icmp traffic on all ports
Launching user container $CONTAINER
Configured container 'preemptive-public-email-vm' will be started with name 'klt-$IMAGE-xqgm'.
Pulling image: 'gcr.io/$PROJECT_ID/$IMAGE'
Error: Failed to start container: Error response from daemon: {"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE/manifests/latest\"."}
Saving welcome script to profile.d
Main process exited, code=exited, status=1/FAILURE
Failed with result 'exit-code'.
Consumed 96ms CPU time
This is the cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'run-public-email'
      - '--image'
      - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
      - '--region'
      - 'us-central1'
images:
  - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
This is the dockerfile:
FROM python:3.9.7-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "hello.py" ]
This is hello.py:
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello world"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
You can update the GCE container-OS VM with the latest image by using a gcloud command in the cloudbuild.yaml file (gcloud compute instances update-container, shown below), which updates Compute Engine VM instances running container images.
Note that the VM restarts whenever its container image is updated. When this happens a new ephemeral IP may be allocated to the VM; you can use a static IP to avoid that, if required.
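If you need that, a reserved address can be created and attached roughly like this (a sketch; the instance name, zone, region and address name are placeholders):

# reserve a regional static external IP and print it
gcloud compute addresses create my-static-ip --region=us-central1
gcloud compute addresses describe my-static-ip --region=us-central1 --format='get(address)'

# swap the instance's ephemeral access config for one using the reserved address
gcloud compute instances delete-access-config my-instance --zone=us-central1-a
gcloud compute instances add-access-config my-instance --zone=us-central1-a --address=<IP printed above>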
Example cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'compute'
      - 'instances'
      - 'update-container'
      - 'Instance Name'
      - '--container-image'
      - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'

Drone CI/CD stuck in pending, but only for exec pipelines

When I use a docker pipeline the build succeeds.
But when I use an exec pipeline it's always stuck in pending,
and I don't know what is going wrong.
kind: pipeline
type: exec
name: deployment

platform:
  os: linux
  arch: amd64

steps:
  - name: backend image build
    commands:
      - echo start build images...
      # - export MAJOR_VERSION=1.0.rtm.
      # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
      # - export WORKSPACE=`pwd`
      # - bash ./jenkins_build.sh
    when:
      branch:
        - master
The docker pipeline is fine:
kind: pipeline
type: docker
name: deployment

steps:
  - name: push image to repo
    image: plugins/docker
    settings:
      dockerfile: src/ZR.DataHunter.Api/Dockerfile
      tags: latest
      insecure: true
      registry: "xxx"
      repo: "xxx"
      username:
        from_secret: username
      password:
        from_secret: userpassword
First of all, it's important to note that exec pipelines can be used only when Drone is self-hosted. This is stated in the official docs:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting
When Drone is self-hosted, make sure that:
The exec runner is installed
It is configured properly in its config file so that it can connect to the Drone server (see the sketch after this list)
And the drone-runner-exec service is running
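As a rough sketch, the runner's config file (on Linux typically /etc/drone-runner-exec/config) is a plain environment file along these lines; the host and secret values are placeholders:

# your Drone server address (placeholder)
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.example.com
# must match the server's RPC secret (placeholder)
DRONE_RPC_SECRET=super-duper-secret
DRONE_LOG_FILE=/var/log/drone-runner-exec/log.txt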
After the service is started, look at its log file and you should see info messages saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
It's also possible to view the web UI (Dashboard) of the running service if you enable it.
So if you see it can poll your server, then your exec pipeline should run as expected.

Docker fails to provide creds for awslogs logging driver

My docker-compose file:
version: "2"
services:
  app:
    build:
      # Build an image from the Dockerfile in the current directory
      context: .
    ports:
      - 5000:5000
    environment:
      PORT: 5000
      NODE_ENV: production
And docker-compose.override
version: "2"
networks:
  # This special network is configured so that the local metadata
  # service can bind to the specific IP address that ECS uses
  # in production
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"
          gateway: 169.254.170.1
services:
  # This container vends credentials to your containers
  ecs-local-endpoints:
    # The Amazon ECS Local Container Endpoints Docker Image
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      # Mount /var/run so we can access docker.sock and talk to Docker
      - /var/run:/var/run
      # Mount the shared configuration directory, used by the AWS CLI and AWS SDKs
      # On Windows, this directory can be found at "%UserProfile%\.aws"
      - $HOME/.aws/:/home/.aws/
    environment:
      # define the home folder; credentials will be read from $HOME/.aws
      HOME: "/home"
      # You can change which AWS CLI Profile is used
      AWS_PROFILE: "default"
    networks:
      credentials_network:
        # This special IP address is recognized by the AWS SDKs and AWS CLI
        ipv4_address: "169.254.170.2"

  # Here we reference the application container that we are testing
  # You can test multiple containers at a time, simply duplicate this section
  # and customize it for each container, and give it a unique IP in 'credentials_network'.
  app:
    logging:
      driver: awslogs
      options:
        awslogs-region: eu-west-3
        awslogs-group: sharingmonsterlog
    depends_on:
      - ecs-local-endpoints
    networks:
      credentials_network:
        ipv4_address: "169.254.170.3"
    environment:
      AWS_DEFAULT_REGION: "eu-west-3"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
I have added my AWS credentials to my mac using the aws configure command and the credentials are stored correctly in ~/.aws/credentials.
I am using docker 2.2.0.4, docker-compose 1.25.4 and docker-machine 0.16.2.
When I run docker-compose up I get the following error:
ERROR: for scraper Cannot start service scraper: Failed to initialize logging driver: NoCredentialProviders: no valid providers in chain.
Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
ERROR: Encountered errors while bringing up the project.
I believe this is because I need to set the AWS credentials in the Docker daemon, but I cannot work out how this is done on macOS High Sierra.
We need to pass the credentials to the Docker daemon. On systemd-based Linux, since the Docker daemon is managed by systemd, we need to set up the systemd configuration for the Docker daemon, as described in:
Pass Credentials to the awslogs Docker Logging Driver on Ubuntu
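On such a systemd-based host, that approach boils down to a drop-in unit file that exports the credentials to the daemon, roughly like this (a sketch; the file path follows the usual drop-in convention and the key values are placeholders):

# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIA...placeholder..."
Environment="AWS_SECRET_ACCESS_KEY=...placeholder..."

# then reload systemd and restart the daemon
sudo systemctl daemon-reload
sudo systemctl restart docker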
For Mac, we need to find a way to do something similar and understand how the Docker daemon on Mac is configured and started.
Apparently there is an issue with the Mac Docker daemon not being able to pass environment variables, which limits the options. The person who posted the issue ended up using docker-cloudwatchlogs:
Add ability to set environment variables for docker daemon
There are a few ways for Mac mentioned on Stack Overflow:
How do I provide credentials to the docker awslogs driver using Docker for Mac?
However, since this is about Docker Compose, there could be another way to pass the AWS credentials via environment variables using Docker Compose features:
Environment variables in Compose
Or simply set the environment variables when running the docker-compose command line:
AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_SESSION_TOKEN=... docker-compose up ...
Please refer to Docker for Mac can't use loaded environment variable from file as well.
Regarding AWS IAM permissions, please make sure the AWS account behind the credentials has the IAM permissions specified in the Docker documentation:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Integration testing with drone.io

I am using a CI tool called Drone (drone.io), and I really want to do some integration tests with it. What I want is for Drone to start my application container on some port on the Drone host so that I can run integration tests against it. For example, in the .drone.yml file:
build:
  image: python3.5-uwsgi
  pull: true
  auth_config:
    username: some_user
    password: some_password
    email: email
  commands:
    - pip install --user --no-cache-dir -r requirements.txt
    - python manage.py integration_test -h 127.0.0.1:5000
    # this should send various requests to 127.0.0.1:5000
    # to test my application's behaviour
compose:
  my_application:
    # build and run a container based on dockerfile in local repo on port 5000
publish:
deploy:
Drone 0.4 can't start a service from your Dockerfile. If you want to start a Docker container, you should build it beforehand, outside this build, push it to Docker Hub or your own registry, and reference that image in the compose section (a sketch of what that could look like is below); see http://readme.drone.io/usage/services/#images:bfc9941b6b6fd7b4ef09dd0ccd08af0c
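Not an exact configuration, just a sketch under the assumption that the image has already been pushed to a registry (the image name is a placeholder):

compose:
  my_application:
    # prebuilt image pulled from a registry instead of being built from the local Dockerfile
    image: your-registry/my_application:latest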
You can also start your application in the build section with nohup python manage.py server -h 127.0.0.1:5000 & before running your integration tests, as in the sketch below. Make sure that your application has started and is listening on port 5000 before you run integration_test.
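A rough sketch of that approach inside the build commands (the sleep is just a crude wait for the server to come up; adjust it to your app's startup time):

build:
  image: python3.5-uwsgi
  commands:
    - pip install --user --no-cache-dir -r requirements.txt
    # start the app in the background so the next commands can reach it
    - nohup python manage.py server -h 127.0.0.1:5000 &
    # crude wait until the app is up and listening on port 5000
    - sleep 5
    - python manage.py integration_test -h 127.0.0.1:5000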
I recommend you use Drone 0.5 with pipelines: you can build the Docker image and push it to a registry in an earlier step, and use it as a service inside your build.
