I have set up a CodePipeline for an Angular application in AWS. The pipeline works until the build is generated and the artifacts are uploaded to S3, but the deployment fails every time.
I am sure there is a configuration issue with my appspec.yml, but I am not able to correct it.
My appspec.yml:
version: 0.0
os: windows
files:
  - source: /
    destination: C:\sandboxBuildData\project\project-ng\dist\project\dist
overwrite: true
file_exists_behavior: OVERWRITE
hooks:
  ApplicationStop:
    - location: application_stop.sh
      timeout: 300
      runas: administrator
  ApplicationStart:
    - location: application_start.sh
      timeout: 300
      runas: administrator
I don't know if this is correct, because I have seen that runas is not supported on Windows Server. Also, the Windows server has a user with a password to access it.
Do I need to install the CodeDeploy agent on the Windows server, just like on Linux?
How do I stop the existing process and start a new one?
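For reference, a minimal Windows appspec.yml sketch along those lines: the hooks omit runas (not supported on Windows Server), the hook scripts are PowerShell files shipped inside the revision bundle rather than .sh files, and only the documented file_exists_behavior setting is used. The scripts/ folder and script names here are illustrative placeholders, not part of the original pipeline:

version: 0.0
os: windows
files:
  - source: /
    destination: C:\sandboxBuildData\project\project-ng\dist\project\dist
file_exists_behavior: OVERWRITE
hooks:
  ApplicationStop:
    # placeholder: a PowerShell script under scripts/ inside the bundle
    - location: scripts/application_stop.ps1
      timeout: 300
  ApplicationStart:
    - location: scripts/application_start.ps1
      timeout: 300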
Related
I am writing an appspec.yml file for a deployment that downloads the artifacts from the CodeBuild step.
I can see that the appspec.yml is working partially: copying the artifacts to a certain location works, but the hooks that should trigger the script do not.
I am using a Windows-based OS and want to trigger a PowerShell file.
version: 0.0
os: windows
files:
  - source: \abc.zip
    destination: C:\Downloads
  - source: \bcd.zip
    destination: C:\Scripts
file_exists_behavior: OVERWRITE
Hooks:
  AfterInstall:
    - location: \after-install.ps1
      runas: administrator
      timeout: 900
I tried with this appspec.yml as well, but it gives the same result: the files section works but the hooks do not.
version: 0.0
os: windows
files:
  - source: \abc.zip
    destination: C:\Downloads
  - source: /
    destination: C:\Scripts
file_exists_behavior: OVERWRITE
Hooks:
  AfterInstall:
    - location: C:\Scripts\after-install.ps1
      runas: administrator
      timeout: 900
after-install.ps1 contains a small script that creates some environment variables and a folder for logs; it has been tested and verified by running it manually inside the server.
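One detail worth checking in both files above: appspec.yml keys are case-sensitive, so the section must be spelled hooks rather than Hooks, the hook location must be a path relative to the root of the revision bundle rather than an absolute C:\ path, and runas is not supported on Windows Server. A corrected sketch under those assumptions (it assumes after-install.ps1 sits at the bundle root, as in the first attempt):

version: 0.0
os: windows
files:
  - source: \abc.zip
    destination: C:\Downloads
  - source: \bcd.zip
    destination: C:\Scripts
file_exists_behavior: OVERWRITE
hooks:
  AfterInstall:
    # location is resolved relative to the revision bundle root
    - location: after-install.ps1
      timeout: 900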
I have created a pipeline for code deployment with GitHub, but it is failing at the DownloadBundle step with an Access Denied error.
I have created a role with AmazonEC2FullAccess and AWSCodeDeployRole for the CodeDeploy service, and also created a role for EC2 with AmazonEC2FullAccess.
I have attached a couple of screenshots of the deployment group settings.
I have also placed appspec.yml in the root directory of my repo.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/
overwrite: true
file_exists_behavior: OVERWRITE
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root
Note: I am using Auto Scaling.
For this to work, your EC2 instance has to be able to access S3 as well: check that your EC2 instance has permission to access the related S3 bucket. Also, if you are using KMS to encrypt your bucket, your EC2 instance has to have KMS permissions as well.
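To make that concrete, here is a minimal sketch of the S3 read permissions the EC2 instance role needs, written as a CloudFormation-style YAML fragment; the role and bucket names are placeholders, not values from the question:

# Hypothetical fragment: lets instances using my-ec2-instance-role
# read deployment revisions from my-revision-bucket.
EC2CodeDeployS3Policy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: codedeploy-s3-read
    Roles:
      - my-ec2-instance-role            # placeholder EC2 instance role
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:GetObjectVersion
            - s3:ListBucket
          Resource:
            - arn:aws:s3:::my-revision-bucket        # placeholder bucket
            - arn:aws:s3:::my-revision-bucket/*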
I have some Python files inside of a VM that run every week to scrape info from a website. This is automated through Cloud Scheduler and a Cloud Function, and it is confirmed that it works. I wanted to use Cloud Build and Cloud Run to update the code inside of the VM each time I update the code in GitHub. I read somewhere that in order to deploy a container image to a VM, the VM has to run a Container-Optimized OS, so I manually made a new VM matching that criterion through Compute Engine. The Container-Optimized OS VM is already made; I just need its container image updated with the new image built from the updated GitHub code.
I'm trying to build a container image that I will later use to update the code inside of the virtual machine. The build is triggered every time I push to a folder in my GitHub repository.
I checked Container Registry and the images are being created, but I keep getting this error when I check the virtual machine:
"Error: Failed to start container: Error response from daemon:
{
"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE:latest/manifests/latest\"."
}"
Why is the request being made for the latest tag when I wanted the tag with the commit hash, and how can I fix it?
This is the virtual machine log (sudo journalctl -u konlet-startup):
Started Containers on GCE Setup.
Starting Konlet container startup agent
Downloading credentials for default VM service account from metadata server
Updating IPtables firewall rules - allowing tcp traffic on all ports
Updating IPtables firewall rules - allowing udp traffic on all ports
Updating IPtables firewall rules - allowing icmp traffic on all ports
Launching user container $CONTAINER
Configured container 'preemptive-public-email-vm' will be started with name 'klt-$IMAGE-xqgm'.
Pulling image: 'gcr.io/$PROJECT_ID/$IMAGE'
Error: Failed to start container: Error response from daemon: {"message":"manifest for gcr.io/$PROJECT_ID/$IMAGE:latest not found: manifest unknown: Failed to fetch \"latest\" from request \"/v2/$PROJECT_ID/$IMAGE/manifests/latest\"."}
Saving welcome script to profile.d
Main process exited, code=exited, status=1/FAILURE
Failed with result 'exit-code'.
Consumed 96ms CPU time
This is the cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'run-public-email'
      - '--image'
      - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
      - '--region'
      - 'us-central1'
images:
  - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
This is the Dockerfile:
FROM python:3.9.7-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "hello.py" ]
This is hello.py:
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello world"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
You can update the Container-Optimized OS VM on Compute Engine with the new image by using a gcloud command in the cloudbuild.yaml file; gcloud compute instances update-container updates Compute Engine VM instances running container images. As for why the latest tag is requested: the VM's container declaration references gcr.io/$PROJECT_ID/$IMAGE without a tag, and an untagged image reference defaults to :latest, which your build never pushes. Updating the VM to the :$COMMIT_SHA image, as below, fixes this.
You may encounter a VM restart whenever the image on the VM is updated. When this happens, the VM may be allocated a new IP; you can use a static IP to avoid that, if required.
Example cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA', './folder_name']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'compute'
      - 'instances'
      - 'update-container'
      - 'Instance Name'
      - '--container-image'
      - 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA'
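Alternatively, if the VM's existing untagged (:latest) reference should keep working, the build can tag the image with both the commit SHA and latest and push both tags. This is a sketch beyond the original answer; note that update-container typically needs an explicit --zone in a non-interactive build, and the instance name and zone below are placeholders:

steps:
  # Build once; tag the image with both the commit SHA and latest.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build',
           '-t', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA',
           '-t', 'gcr.io/$PROJECT_ID/$IMAGE:latest',
           './folder_name']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$IMAGE:latest']
  # Point the VM at the freshly pushed image.
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['compute', 'instances', 'update-container', 'INSTANCE_NAME',
           '--zone', 'us-central1-a',
           '--container-image', 'gcr.io/$PROJECT_ID/$IMAGE:$COMMIT_SHA']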
When I use a Docker pipeline, the build succeeds.
But when I use an exec pipeline, it's always stuck in pending.
And I don't know what's going wrong.
kind: pipeline
type: exec
name: deployment

platform:
  os: linux
  arch: amd64

steps:
  - name: backend image build
    commands:
      - echo start build images...
      # - export MAJOR_VERSION=1.0.rtm.
      # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
      # - export WORKSPACE=`pwd`
      # - bash ./jenkins_build.sh
    when:
      branch:
        - master
The Docker pipeline is fine:
kind: pipeline
type: docker
name: deployment

steps:
  - name: push image to repo
    image: plugins/docker
    settings:
      dockerfile: src/ZR.DataHunter.Api/Dockerfile
      tags: latest
      insecure: true
      registry: "xxx"
      repo: "xxx"
      username:
        from_secret: username
      password:
        from_secret: userpassword
First of all, it's important to note that exec pipelines can be used only when Drone is self-hosted. As the official docs put it:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting
When Drone is self-hosted, be sure that:
- the exec runner is installed,
- it is configured properly in its config file (so it can connect to the Drone server), and
- the drone-runner-exec service is running.
After the service is started, look at its log file; you should see info messages saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
It's also possible to view the web UI (dashboard) of the running service if you enable it.
So if you can see that the runner polls your server, your exec pipeline should run as expected.
This is my .travis.yml file. I am trying to automate deployment to AWS CodeDeploy.
language: node_js
node_js:
  - 7.10.0
services:
  - mongodb
env:
  - PORT=6655 IP="localhost" NODE_ENV="test"
script:
  - npm start &
  - sleep 25
  - npm test
deploy:
  provider: codedeploy
  access_key_id:
    secure: $Access_Key_Id
  secret_access_key:
    secure: $Access_Key_Secret
  revision_type: github
  application: Blog
  deployment_group: Ayush-Bahuguna
  region: us-east-2
after_deploy:
  - "./build.sh"
Here build.sh is a shell script that generates the build files:
#!/bin/bash
cd /var/www/cms
sudo yarn install
npm run build-prod
And here is the .gitignore file:
node_modules/
client/dashboard/dist/
client/blog/dist/
The issue is that even though the Travis CI build succeeds and after_deploy runs successfully, no build files are generated on the AWS EC2 instance where my project is hosted.
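One thing worth noting when reading this setup: after_deploy runs on the Travis build machine, not on the EC2 instance, so build.sh above cannot create files on the server. If the build is meant to happen on the instance, one option, sketched here on the assumption that the instance is managed by CodeDeploy, is to run the script from an appspec.yml hook (the paths and timeout are illustrative):

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/cms
hooks:
  AfterInstall:
    # runs on the EC2 instance after the revision is copied
    - location: build.sh
      timeout: 300
      runas: root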
Are you able to see any deployment created in your AWS CodeDeploy console, and are you able to see its status? If a deployment was created but failed, you can look into the reason why it failed. Even when a deployment succeeds, that doesn't mean every instance was deployed to; it depends on the deployment configuration: http://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html
Thanks,
Binbin