When I run gcloud app deploy I get the message:
ERROR: (gcloud.app.deploy) The required property [project] is not currently set.
You may set it for your current workspace by running:
$ gcloud config set project VALUE
or it can be set temporarily by the environment variable [CLOUDSDK_CORE_PROJECT]
Setting the project in my current workspace caused me to deploy the wrong app, so I don't want to do that again. I want to try deploying using the environment variable option listed above. How do I do this? What is the deploy syntax that uses CLOUDSDK_CORE_PROJECT? I thought this would come from my app.yaml, but I haven't gotten it working.
You should configure it with your project ID:
gcloud config set project <PROJECT_ID>
You can find your project ID in your GCP account.
You should be able to pass the project as part of the app deploy command:
gcloud app deploy ~/my_app/app.yaml --project=PROJECT
Look at the examples in the Documentation.
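To use the environment-variable option from the error message instead: app.yaml does not carry the project ID, so gcloud will not pick it up from there. You can set CLOUDSDK_CORE_PROJECT for a single invocation, which leaves your workspace config untouched (my-project-id is a placeholder):
CLOUDSDK_CORE_PROJECT=my-project-id gcloud app deploy ~/my_app/app.yaml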
I had the same error message recently in a job which worked fine before.
Old:
# Deploy template
.deploy_template: &deploy_definition
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo $GOOGLE_KEY > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone $GC_REGION
    - gcloud config set project $GC_PROJECT
    - gcloud container clusters get-credentials $GC_XXX_CLUSTER
    - kubectl set image deployment/XXXXXX --record
New:
# Deploy template
.deploy_template: &deploy_definition
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo $GOOGLE_KEY > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set project $GC_PROJECT
    - gcloud config set compute/zone $GC_REGION
    - gcloud container clusters get-credentials $GC_XXX_CLUSTER
    - kubectl set image deployment/XXXXXX --record
I just changed the order so that the project is set before the compute/zone, and it worked like before.
I want to create a pipeline for ECS deployment.
So my .gitlab-ci.yml file looks like this:
stages:
  - build
  - test
  - review
  - deploy
  - production
  - cleanup

variables:
  AUTO_DEVOPS_PLATFORM_TARGET: ECS
  CI_AWS_ECS_CLUSTER: shopservice
  CI_AWS_ECS_SERVICE: sample-app-service
  CI_AWS_ECS_TASK_DEFINITION: first-run-task-definition:1
  AWS_REGION: us-west-2

include:
  - template: Jobs/Build.gitlab-ci.yml
  - template: Jobs/Deploy/ECS.gitlab-ci.yml
But the pipeline failed at the production_ecs stage, and I'm getting this error:
Using docker image sha256:9920fa2b45873efd675cf992d7130e8a691133fd51ec88b2765d977f82695263 for registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs:latest with digest registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs@sha256:c833e508b00451a09639e96fb7551f62abb4f75ba1d31f88b4dc8299c608e0dd ...
$ ecs update-task-definition
Unable to locate credentials. You can configure credentials by running "aws configure".
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
Why am I getting this error? How can I configure AWS credentials for the ECS deployment?
You need to pass your credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) as environment variables and also set the region as the environment variable AWS_DEFAULT_REGION. You can set them by going to your project (or group) and navigating to Settings > CI/CD.
You can find the documentation for ecs deployments here: https://docs.gitlab.com/ee/ci/cloud_deployment/#deploy-your-application-to-the-aws-elastic-container-service-ecs
and for configuring the aws cli (which is used in the template and GitLab AWS Docker images) here : https://docs.gitlab.com/ee/ci/cloud_deployment/#run-aws-commands-from-gitlab-cicd
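If you want the pipeline to fail fast when credentials are missing, one option is a small sanity-check job that runs before the ECS template's deploy stage (a sketch; the job and stage names are made up, and aws-base is the image the GitLab cloud-deploy docs above use for running AWS CLI commands):
check_aws_credentials:
  stage: test
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    # Prints the calling identity; fails if AWS_ACCESS_KEY_ID,
    # AWS_SECRET_ACCESS_KEY or AWS_DEFAULT_REGION are missing or invalid.
    - aws sts get-caller-identity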
I am trying to deploy my Go 1.14 microservices application to Google App Engine's flexible environment. I read that there are issues with finding GOROOT and with obtaining the correct dependencies.
I wanted to use the flexible environment because I needed to do port forwarding. Since my domain name was used to run the actual application, I wanted port 8081 to run my microservices.
I followed the instruction from this link:
https://blog.cubieserver.de/2019/go-modules-with-app-engine-flexible/
I tried option 3.
This is my gitlab-ci.yaml configuration file.
# .gitlab-ci.yaml
stages:
  - build
  - deploy

build_server:
  stage: build
  image: golang:1.14
  script:
    - go mod vendor
    - go install farmcycle.us/user/farmcycle
  artifacts:
    paths:
      - vendor/

deploy_app_engine:
  stage: deploy
  image: google/cloud-sdk:270.0.0
  script:
    - echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud --quiet --project $PROJECT_ID app deploy app.yaml
  after_script:
    - rm /tmp/$CI_PIPELINE_ID.json
This is my app.yaml configuration file:
runtime: go
env: flex

network:
  forwarded_ports:
    - 8081/tcp
When I deployed this using the GitLab CI pipeline, the Build stage passed but the Deploy stage failed:
Running with gitlab-runner 13.4.1 (e95f89a0)
on docker-auto-scale 72989761
Preparing the "docker+machine" executor
Preparing environment
00:03
Getting source from Git repository
00:04
Downloading artifacts
00:02
Executing "step_script" stage of the job script
00:03
$ echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
$ gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
Activated service account credentials for: [farmcycle-hk1996@appspot.gserviceaccount.com]
$ gcloud --quiet --project $PROJECT_ID app deploy app.yaml
ERROR: (gcloud.app.deploy) Staging command [/usr/lib/google-cloud-sdk/platform/google_appengine/go-app-stager /builds/JLiu1272/farmcycle-backend/app.yaml /builds/JLiu1272/farmcycle-backend /tmp/tmprH6xQd/tmpSIeACq] failed with return code [1].
------------------------------------ STDOUT ------------------------------------
------------------------------------ STDERR ------------------------------------
2020/10/10 20:48:27 staging for go1.11
2020/10/10 20:48:27 Staging Flex app: failed analyzing /builds/JLiu1272/farmcycle-backend: cannot find package "farmcycle.us/user/farmcycle/api" in any of:
($GOROOT not set)
/root/go/src/farmcycle.us/user/farmcycle/api (from $GOPATH)
GOPATH: /root/go
--------------------------------------------------------------------------------
Running after_script
00:01
Running after script...
$ rm /tmp/$CI_PIPELINE_ID.json
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
This was the error. Honestly, I am not really sure what this error means or how to fix it.
Surprisingly, even with the latest Go runtime (runtime: go1.15), go modules appear not to be used; see golang-docker.
However, Flex builds your app into a container regardless of runtime and so, in this case, it may be better to use a custom runtime and build your own Dockerfile?
runtime: custom
env: flex
Then you get to use e.g. Go 1.15 and go modules (without vendoring) and whatever else you'd like. For a simple main.go that uses modules e.g.:
FROM golang:1.15 as build
ARG PROJECT="flex"
WORKDIR /${PROJECT}
COPY go.mod .
RUN go mod download
COPY main.go .
RUN GOOS=linux \
    go build -a -installsuffix cgo \
    -o /bin/server \
    .
FROM gcr.io/distroless/base-debian10
COPY --from=build /bin/server /
USER 999
ENV PORT=8080
EXPOSE ${PORT}
ENTRYPOINT ["/server"]
This ought to be possible with Google's recently announced support for buildpacks but I've not tried it.
We are new to Bitbucket CI/CD pipeline creation and AWS deployments.
We are trying to build and deploy/publish a .Net core (3.1) API to AWS Lambda as Serverless Application Model using Bitbucket pipeline. We have come across the below link to deploy to AWS.
https://bitbucket.org/atlassian/aws-sam-deploy/src/master/?_ga=2.18577786.215737733.1592837802-1805179932.1542868670
We were able to get the project building and testing, but the AWS Lambda SAM deployment is not working correctly. Please find our yaml file below:
image: mcr.microsoft.com/dotnet/core/sdk:3.1

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export PROJECT_NAME=MyProject
          - export TEST_NAME=MyProject.Tests
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - dotnet test $TEST_NAME
    - step:
        name: deploy AWS SAM
        script:
          - pipe: atlassian/aws-sam-deploy:0.3.4
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} # Optional if already defined in the context.
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} # Optional if already defined in the context.
              COMMAND: 'deploy'
I think we are missing a few steps in the "deploy AWS SAM" step, but we are not sure how to provide the path to the published libraries, or what other steps are needed. We are not using Docker in our project.
Can someone help?
I was facing the same issue. I found a solution that is not the best one, but it works.
You can install the Amazon Lambda Tools in your pipeline image and call dotnet lambda deploy-serverless. You also need to install something to compress (zip) the package.
dotnet lambda deploy-serverless does the same as MS VS 2019 with the AWS Toolkit.
image: mcr.microsoft.com/dotnet/core/sdk:3.1

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - apt-get update && apt-get install -y zip
          - dotnet tool install -g Amazon.Lambda.Tools
          - export PATH="$PATH:/root/.dotnet/tools"
          - dotnet lambda deploy-serverless
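Note that dotnet lambda deploy-serverless typically reads its non-interactive settings from aws-lambda-tools-defaults.json in the project directory; without it, the tool prompts for missing values, which fails in a pipeline. A sketch of that file, with placeholder values:
{
  "region": "us-west-2",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "s3-bucket": "my-deployment-bucket",
  "stack-name": "my-sam-stack",
  "template": "serverless.template"
}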
I'm using gitlab CI for deployment.
I'm running into a problem when the review branch is deleted.
stop_review:
  variables:
    GIT_STRATEGY: none
  stage: cleanup
  script:
    - echo "$AWS_REGION"
    - echo "Stopping review branch"
    - serverless config credentials --provider aws --key ${AWS_ACCESS_KEY_ID} --secret ${AWS_SECRET_ACCESS_KEY}
    - echo "$CI_COMMIT_REF_NAME"
    - serverless remove --stage=$CI_COMMIT_REF_NAME --verbose
  only:
    - branches
  except:
    - master
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
The error is: "This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file".
I have tried different GIT_STRATEGY values; can someone point me in the right direction?
In order to run serverless remove, you'll need to have the serverless.yml file available, which means the actual repository will need to be cloned. (or that file needs to get to GitLab in some way).
It's required to have a serverless.yml configuration file available when you run serverless remove because the Serverless Framework allows users to provision infrastructure using not only the framework's YML configuration but also additional resources (like CloudFormation in AWS) which may or may not live outside of the specified app or stage CF Stack entirely.
In fact, you can also provision infrastructure into other providers as well (AWS, GCP, Azure, OpenWhisk, or actually any combination of these).
So it's not sufficient to simply identify the stage name when running sls remove, you'll need the full serverless.yml template.
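In practical terms for the job above: GIT_STRATEGY: none is what prevents the repository (and serverless.yml with it) from ever reaching the runner. A minimal sketch of the fix, assuming serverless.yml sits at the repository root:
stop_review:
  variables:
    GIT_STRATEGY: fetch # fetch the repo so serverless.yml is available in the job
  stage: cleanup
  script:
    - serverless config credentials --provider aws --key ${AWS_ACCESS_KEY_ID} --secret ${AWS_SECRET_ACCESS_KEY}
    - serverless remove --stage=$CI_COMMIT_REF_NAME --verbose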
I configured a Google Cloud demo project and created a cluster for it in the GitLab Serverless settings for a Hello world Spring Boot application. The only information I can find on deploying applications is https://docs.gitlab.com/ee/user/project/clusters/serverless/#deploying-serverless-applications, which might explain how to deploy a Ruby application only. I'm not sure about that, because none of the variables used in the script are explained, and the hint
Note: You can reference the sample Knative Ruby App to get started.
is somewhat confusing: since I'm not familiar with building Ruby applications, that won't get me started.
Following the instruction to put
stages:
  - build
  - deploy

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  only:
    - master
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE

deploy:
  stage: deploy
  image: gcr.io/triggermesh/tm@sha256:e3ee74db94d215bd297738d93577481f3e4db38013326c90d57f873df7ab41d5
  only:
    - master
  environment: production
  script:
    - echo "$CI_REGISTRY_IMAGE"
    - tm -n "$KUBE_NAMESPACE" --config "$KUBECONFIG" deploy service "$CI_PROJECT_NAME" --from-image "$CI_REGISTRY_IMAGE" --wait
in .gitlab-ci.yml causes the deploy stage to fail due to
$ tm -n "$KUBE_NAMESPACE" --config "$KUBECONFIG" deploy service "$CI_PROJECT_NAME" --from-image "$CI_REGISTRY_IMAGE" --wait
2019/02/09 11:08:09 stat /root/.kube/config: no such file or directory, falling back to in-cluster configuration
2019/02/09 11:08:09 Can't read config file
ERROR: Job failed: exit code 1
My Dockerfile, which builds fine locally, looks as follows:
FROM maven:3-jdk-11
COPY . .
RUN mvn --batch-mode --update-snapshots install
EXPOSE 8080
CMD java -jar target/hello-world-0.1-SNAPSHOT.jar
(the version in the filename doesn't make sense for further deployment, but that's a follow-up problem).
The reason is a mismatch between the environment value specified in .gitlab-ci.yml and the environment scope in the GitLab Kubernetes cluster configuration; see https://docs.gitlab.com/ee/user/project/clusters/#troubleshooting-missing-kubeconfig-or-kube_token for details.
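As a sketch of what the fix looks like on the .gitlab-ci.yml side (the cluster side is configured in the GitLab UI; production here is an assumption and must fall within the cluster's environment scope, e.g. production or *):
deploy:
  stage: deploy
  environment: production # must match the Kubernetes cluster's environment scope, or KUBECONFIG is not injected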