ebextensions not working in AWS Elastic Beanstalk - Laravel

When deploying to Elastic Beanstalk, I am trying to execute a script through an .ebextensions file.
The log says the script executed normally, but its effect doesn't seem to be applied.
The script configures the CloudWatch agent to send Laravel's logs to CloudWatch.
Below are the contents of the config file and the log showing that the script executed successfully.
ebextention_file.config
files:
  "/etc/awslogs/config/laravel_log.conf":
    mode: "060606"
    owner: root
    group: root
    content: |
      [/var/app/current/storage/logs/laravel*]
      datetime_format = %Y-%m-%d %H:%M:%S
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/storage/logs/laravel.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/storage/logs/laravel*
      multi_line_start_pattern = {datetime_format}

commands:
  "01":
    command: chkconfig awslogs on
  "02":
    command: service awslogs restart
cfn-init.log:
2020-09-17 03:43:19,921 [INFO] Command 01 succeeded
2020-09-17 03:43:22,056 [INFO] Command 02 succeeded
The commands don't take effect after deployment, but if I SSH directly into each instance and run commands 01 and 02 manually, the logs flow into CloudWatch normally.
I don't know what is causing this.
Deployment policy: rolling with additional batch
Rolling update type: rolling based on health
What are some more things I should check?
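One avenue worth checking, sketched here as an assumption rather than a verified fix: in .ebextensions, commands run early in the deployment, before the new application version is extracted, while container_commands run after the application has been staged. If the awslogs restart fires before the Laravel log files exist on a freshly launched batch instance, moving the restart into container_commands may help:

container_commands:
  "01_enable_awslogs":
    command: chkconfig awslogs on
  "02_restart_awslogs":
    command: service awslogs restart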


Is it possible to use short-lived credentials, with docker-compose, to run a bash scripted gcloud command?
Related posts that I attempted to use, but they are 5+ years old and I've been led to believe the gcloud auth commands have changed since then:
- ERROR: (gcloud.auth.activate-service-account) Failed to activate the given service account. Please ensure provided key file is valid
- gcloud auth activate-service-account [ERROR] Please ensure provided key file is valid
Setup
There is a lot going on, but I've abbreviated it to the relevant parts.
Makefile
auth: ## commands for short lived auth
	@gcloud config set project ${GCP_PROJECT}
	@gcloud auth application-default login --impersonate-service-account="inst-dataflow-svc@${GCP_PROJECT}.iam.gserviceaccount.com"
	@gcloud auth configure-docker $(REGION)-docker.pkg.dev

gcloud-flex-build: ## build & push base docker image
	docker-compose build gcloud-build-flex-local
	docker-compose run gcloud-build-flex-local
docker-compose.yaml
version: '3.4'
services:
  gcloud-build-flex-local:
    build:
      dockerfile: docker/gcloud-build-flex-template.dockerfile
      context: .
    image: us-central1-docker.pkg.dev/gcp-project/dataflow-docker-registry/local-build/pubsub-to-gbq-build-flex-template
    volumes:
      - type: bind
        source: ${HOME}/.config/gcloud/
        target: /tmp
docker/gcloud-build-flex-template.dockerfile
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:408.0.1
COPY docker/scripts/gcloud-build-flex-template.sh /app/gcloud-build-flex-template.sh
COPY dataflow/pubsub-to-gbq/pubsub-to-gbq-metadata /app/pubsub-to-gbq-metadata
WORKDIR /app
ENTRYPOINT "/app/gcloud-build-flex-template.sh"
/app/gcloud-build-flex-template.sh
#!/bin/bash
set -euo pipefail
SERVICE_ACCOUNT_EMAIL=inst-dataflow-svc@gcp-project.iam.gserviceaccount.com
GCP_PROJECT=gcp-project
export GOOGLE_APPLICATION_CREDENTIALS=/tmp/application_default_credentials.json
# debugging
echo $GOOGLE_APPLICATION_CREDENTIALS
ls -lah /tmp/
cat $GOOGLE_APPLICATION_CREDENTIALS
gcloud auth activate-service-account $SERVICE_ACCOUNT_EMAIL --project=$GCP_PROJECT --key-file=$GOOGLE_APPLICATION_CREDENTIALS
Execution
make auth
make gcloud-flex-build
Error
ERROR: (gcloud.auth.activate-service-account) The .json key file is not in a valid format.
make: *** [gcloud-flex-build] Error 1
stdout (abbreviated)
docker-compose build gcloud-build-flex-local
[+] Building 0.4s (9/9) FINISHED
...
docker-compose run gcloud-build-flex-local
drwxr-xr-x 17 root root 544 Dec 30 10:36 .
drwxr-xr-x 1 root root 4.0K Dec 30 10:40 ..
-rw------- 1 root root 591 Dec 30 10:36 application_default_credentials.json
{
  "delegates": [],
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/inst-dataflow-svc@gcp-project.iam.gserviceaccount.com:generateAccessToken",
  "source_credentials": {
    "client_id": "alphanumeric string.apps.googleusercontent.com",
    "client_secret": "alphanumeric string",
    "refresh_token": "alphanumeric string",
    "type": "authorized_user"
  },
  "type": "impersonated_service_account"
}
I can make it work via docker run by spoofing the credentials to include only the "source_credentials" object, passed in as a volume, but this same trick doesn't seem to work with docker-compose running a script inside a container...
There is a similar type of configuration mentioned in this document. It involves three major steps:
1. Create short-lived credentials for your service account and download your service account keys.
2. Create the configuration files for bringing your docker environment up, using the above credential files to grant the required permissions.
3. Once you have all the configuration files in place, use your docker-compose commands to bring your environment up.
Follow this documentation for more details.
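Separately, one detail that may explain the original error, offered as an observation rather than a confirmed diagnosis: gcloud auth activate-service-account expects a service account key file (JSON with "type": "service_account"), while the file printed above is an impersonated-credential ADC file ("type": "impersonated_service_account"), which that command does not accept. A sketch of two alternatives, reusing the paths and account name from the setup above:

#!/bin/bash
# Hand gcloud the ADC file directly; `gcloud auth login --cred-file`
# accepts credential configuration files that activate-service-account rejects.
gcloud auth login --cred-file=/tmp/application_default_credentials.json

# Or keep the mounted user credentials and have gcloud impersonate the
# service account on every subsequent call:
gcloud config set auth/impersonate_service_account \
    inst-dataflow-svc@gcp-project.iam.gserviceaccount.com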

GitLab runner with 2 workers: 1st worker (BE) fine, 2nd worker (FE) uses Docker instead of shell

First I set up one worker for one job, deploying my backend for the API.
I'm using "shell" as the executor. The .toml file has this structure:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Gitlab Runner Josere Backend"
  url = "https://gitlab.com/"
  token = "sOmEtOkeN1G0Tfr0mGitlab"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
  [some mumbo jumbo about caching.. does it matter?]
With some struggle I got that to work fine with this .gitlab-ci.yml:
deploy-production:
  stage: deploy
  variables:
    GIT_STRATEGY: clone
  script:
    - cd ./lumen/
    - composer install
    - sudo cp -r $CI_PROJECT_DIR/lumen/. /home/josere/public_html/api/
    - sudo cp /home/josere/env/.env /home/josere/public_html/api
This is the execution output of the runner:
Running with gitlab-runner 15.2.1 (32fc1585)
on Gitlab Runner Josere backend 9JxGrMLz
Preparing the "shell" executor
00:00
Using Shell executor...
Preparing environment
00:00
Running on ####[my server]#####...
Getting source from Git repository
00:03
Fetching changes with git depth set to 50...
Initialized empty Git repository in /home/gitlab-runner/builds/9JxGrMLz/0/paspalas/josere/.git/
Created fresh repository.
... etc ...
In my frontend repo in GitLab I went to the same runner settings. I can't really install a new runner (it's already running, I guess), but I can copy the token that is shown there.
Then I changed my .toml file according to this doc from GitLab (https://docs.gitlab.com/runner/fleet_scaling/):
concurrent = 2
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Gitlab Runner Josere Backend"
  url = "https://gitlab.com/"
  token = "sOmEtOkeN1G0Tfr0mGitlab"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
  [some mumbo jumbo about caching.. does it matter?]

[[runners]]
  name = "Gitlab Runner Josere Frontend"
  url = "https://gitlab.com/"
  token = "TheOtherTokenThatIgotFromFrontendRepo!"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
  [some mumbo jumbo about caching.. does it matter?]
Notice I keep the executor set to "shell".
This is the .gitlab-ci.yml that goes in the root of the frontend repo:
deploy-production:
  stage: deploy
  variables:
    GIT_STRATEGY: clone
  script:
    - npm install
    - npm run build
    - sudo cp -r $CI_PROJECT_DIR/public/. /home/josere/public_html/
But when I commit to the frontend repo and check the (failing) job log, it shows this:
Running with gitlab-runner 15.4.0~beta.5.gdefc7017 (defc7017)
on green-1.shared.runners-manager.gitlab.com/default JLgUopmM
Preparing the "docker+machine" executor
00:06
Using Docker executor with image ruby:2.5 ...
Pulling docker image ruby:2.5 ...
Using docker image sha256:27d###mumbojumbo###2383b for ruby:2.5 with digest ruby@sha256:ecc3###mumbojumbo###444b ...
Preparing environment
00:00
Running on runner-jlguopmm-project-39467125-concurrent-0 via runner-jlguopmm-shared-1665674167-6adf45bf...
Getting source from Git repository
00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/paspalas/josere-frontend/.git/
Created fresh repository.
Checking out c39e641c as materialui...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:01
Using docker image sha256:27d###mumbojumbo###3b for ruby:2.5 with digest ruby@sha256:ecc3e###mumbojumbo####44b ...
$ sudo npm install
/bin/bash: line 126: sudo: command not found
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
Clearly multiple things go wrong here. To start with: why is it using Docker while I explicitly set the executor to "shell"?
I fixed the issue. Even though the GitLab docs differentiate between a "runner" and a "job", gitlab-runner calls these "registrations" of a runner. I did the (extra) registration like so:
- gitlab-runner register
  [filling in the info]
- nano /etc/gitlab-runner/config.toml
  [check that you have the additional runner]
- gitlab-runner run
  [according to gitlab-runner help, this fires up multiple runners]
- gitlab-runner list
  [now you can check whether all "runners" (jobs) are running]
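For completeness, a non-interactive version of that extra registration, sketched with flags from gitlab-runner register --help (the tag name is an invented example). Tagging the runner and pinning the job to that tag also stops GitLab from handing the job to its shared docker+machine runners, which is where the ruby:2.5 Docker image in the failing log came from:

# Register the frontend project's runner on the same host, shell executor:
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "TheOtherTokenThatIgotFromFrontendRepo!" \
  --name "Gitlab Runner Josere Frontend" \
  --executor "shell" \
  --tag-list "frontend-shell"

With the tag in place, the frontend job would pin itself to that runner:

deploy-production:
  tags:
    - frontend-shell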

Spring Boot app in Docker container not starting in Cloud Run after building successfully - cannot access jarfile

I've set up continuous deployment to Cloud Run from GitHub for my Spring Boot project, and while it's successfully building in Cloud Build, when I go over to Cloud Run, I get the following error under Creating Revision:
The user-provided container failed to start and listen on the port defined by the PORT=8080 environment variable.
When I go over to the Logs, I see the following errors:
2022-09-23 09:42:47.881 BST
Error: Unable to access jarfile /app/target/educity-manager-0.0.1-SNAPSHOT.jar
{
  insertId: "632d7187000d739d29eb84ad"
  labels: {5}
  logName: "projects/educity-manager/logs/run.googleapis.com%2Fstderr"
  receiveTimestamp: "2022-09-23T08:42:47.883252595Z"
  resource: {2}
  textPayload: "Error: Unable to access jarfile /app/target/educity-manager-0.0.1-SNAPSHOT.jar"
  timestamp: "2022-09-23T08:42:47.881565Z"
}
2022-09-23 09:43:48.800 BST
run.googleapis.com
…ager/revisions/educity-manager-00011-fod
Ready condition status changed to False for Revision educity-manager-00011-fod with message: Deploying Revision.
{
  insertId: "w6ptr6d20ve"
  logName: "projects/educity-manager/logs/cloudaudit.googleapis.com%2Fsystem_event"
  protoPayload: {
    @type: "type.googleapis.com/google.cloud.audit.AuditLog"
    resourceName: "namespaces/educity-manager/revisions/educity-manager-00011-fod"
    response: {6}
    serviceName: "run.googleapis.com"
    status: {2}
  }
  receiveTimestamp: "2022-09-23T08:43:49.631015104Z"
  resource: {2}
  severity: "ERROR"
  timestamp: "2022-09-23T08:43:48.800371Z"
}
Dockerfile is as follows (and looking at the build log all of the commands in it completed successfully):
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY . /app
ENTRYPOINT [ "java","-jar","/app/target/educity-manager-0.0.1-SNAPSHOT.jar" ]
I've read that Cloud Run defaults to exposing Port 8080, but just to be on the safe side I've put server.port=${PORT:8080} in my application.properties file (but it seems to make no difference one way or the other).
I have run into similar issues in the past. Usually, I am able to resolve them by:
- specifying the port in the application itself (as you indicated in your post), and
- exposing the required port in my Dockerfile, e.g. EXPOSE 8080
Oh my good god, I have done it. After two full days of digging, I realised that because I was deploying through GitHub, my .gitignore file was excluding the /target folder containing the jar file, so Cloud Build never got the jar file mentioned in the Dockerfile.
I am going to have a cry and then go to the pub.
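For anyone hitting the same thing, a way to sidestep this class of problem entirely, sketched under the assumption of a standard Maven layout: build the jar in a multi-stage Dockerfile, so the .gitignore'd /target directory never has to reach the build context.

# Build stage: compile the jar inside the image (the Maven image tag here is
# an assumption; any JDK 17 Maven image works)
FROM maven:3.8-openjdk-17 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: same base and user setup as the original Dockerfile
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY --from=build /build/target/educity-manager-0.0.1-SNAPSHOT.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]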

Gitlab runner upload artifact timeout

I have a GitLab runner installed on macOS using Homebrew. The runner configuration is located at ${HOME}/.gitlab-runner/config.toml, and the service configuration at ${HOME}/Library/LaunchAgents/homebrew.mxcl.gitlab-runner.plist is the default.
Below is my gitlab-runner toml configuration file.
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "MY_RUNNER_NAME"
  url = "https://gitlab.com/"
  token = "MY_GITLAB_TOKEN"
  executor = "shell"
  shell = "bash"
  environment = ["PATH=/usr/local/opt/openjdk@8/bin:/usr/local/opt/ruby@2.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin", "LC_ALL=en_US.UTF-8", "LANG=en_US.UTF-8"]
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
The runner connects to gitlab.com correctly and executes the build steps, but it gets stuck at the Uploading artifacts step until the build times out.
Below are my Uploading artifacts logs.
Uploading artifacts...
Runtime platform arch=amd64 os=darwin pid=16690 revision=775dd39d version=13.8.0
android/app/build/outputs/bundle/release/app-release.aab: found 1 matching files and directories
ERROR: Job failed: execution took longer than 1h0m0s seconds
As a debugging step, I tried running gitlab-runner artifacts-uploader locally to trace the behavior, using this command:
gitlab-runner --debug --log-level debug artifacts-uploader --verbose --id MY_BUILD_ID --token MY_GITLAB_TOKEN --url https://gitlab.com/ --path android/app/build/outputs/bundle/release/app-release.aab --expire-in "1 week"
Below are my gitlab-runner artifacts-uploader logs.
Runtime platform arch=amd64 os=darwin pid=25259 revision=775dd39d version=13.8.0
android/app/build/outputs/bundle/release/app-release.aab: found 1 matching files and directories
Dialing: tcp gitlab.com:443 ...
It seems clear that gitlab-runner artifacts-uploader gets stuck dialing gitlab.com:443, and I am now out of ideas on how to trace or solve this issue.
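Two connectivity checks that might narrow this down, assuming curl and openssl are available in the runner's shell (both ship with macOS):

# Can this environment reach gitlab.com over HTTPS at all?
curl -v --max-time 30 https://gitlab.com/ -o /dev/null

# Does the TLS handshake complete, and whose certificate comes back?
# (An intercepting proxy or firewall on port 443 would show up here.)
openssl s_client -connect gitlab.com:443 -servername gitlab.com </dev/null

If both succeed from an interactive shell but the job still hangs, comparing proxy-related environment variables (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) between your shell and the launchd service environment would be the next thing to check.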

Codedeploy-agent error when run in an Amazon Linux instance

When I create a new deployment in AWS CodeDeploy with GitHub, I receive this failure message:
Error Code: ScriptFailed
Script Name: scripts/stop_server.sh
Message: Script at specified location: scripts/stop_server.sh run as user ubuntu failed with exit code 1
Log Tail: LifecycleEvent - ApplicationStop
          Script - scripts/stop_server.sh
          [stderr]su: user ubuntu does not exist
But my instance is an Amazon Linux instance and doesn't have an ubuntu user. Does anybody know anything about this?
The script that I try to run is:
# scripts/stop_server.sh
#!/bin/bash
forever stop .
My appspec.yml file:
version: 0.0
os: linux
files:
- source: /
destination: /home/ec2-user
hooks:
AfterInstall:
- location: scripts/install_dependencies.sh
timeout: 5
runas: root
ApplicationStart:
- location: scripts/start_server.sh
timeout: 5
runas: root
ApplicationStop:
- location: scripts/stop_server.sh
timeout: 5
runas: root
Codedeploy-agent version: OFFICIAL_1.0-1.1095_rpm
ApplicationStop usually refers to the appspec.yml from the previous successful deployment's archive. Either empty /opt/codedeploy-agent/deployment-archive/deployment-instructions/ or use a BeforeInstall hook to execute the stop script.
CodeDeploy keeps a temporary copy of the application under /opt/code-deploy/.... If a deployment fails, it normally starts from that temporary directory the next time. If you want to get rid of the error the deployment is pointing at, check the particular script file in that temporary directory and edit it.
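Putting the first answer into concrete commands, as a sketch (the directory is the one quoted in that answer; verify the exact path on your agent version before deleting anything):

# Clear the cached previous revision so ApplicationStop can't run the
# stale stop_server.sh from the last successful deployment:
sudo rm -rf /opt/codedeploy-agent/deployment-archive/deployment-instructions/*
sudo service codedeploy-agent restart

Alternatively, a single deployment can be told to skip ApplicationStop failures; --ignore-application-stop-failures is a create-deployment flag, and the application and group names below are placeholders:

aws deploy create-deployment \
    --application-name MyApp \
    --deployment-group-name MyGroup \
    --github-location repository=owner/repo,commitId=COMMIT_SHA \
    --ignore-application-stop-failures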
