How to update loggingService of container.v1.cluster with deployment-manager

I want to set the loggingService field of an existing container.v1.cluster through deployment-manager.
I have the following config
resources:
- name: px-cluster-1
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      description: "dev cluster"
      initialClusterVersion: "1.13"
      nodePools:
      - name: cluster-pool
        config:
          machineType: "n1-standard-1"
          oauthScopes:
          - https://www.googleapis.com/auth/compute
          - https://www.googleapis.com/auth/devstorage.read_only
          - https://www.googleapis.com/auth/logging.write
          - https://www.googleapis.com/auth/monitoring
        management:
          autoUpgrade: true
          autoRepair: true
        initialNodeCount: 1
        autoscaling:
          enabled: true
          minNodeCount: 3
          maxNodeCount: 10
      ipAllocationPolicy:
        useIpAliases: true
      loggingService: "logging.googleapis.com/kubernetes"
      masterAuthorizedNetworksConfig:
        enabled: false
      locations:
      - "europe-west1-b"
      - "europe-west1-c"
When I try to run gcloud deployment-manager deployments update ..., I get the following error
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1582040492957-59edb819a5f3c-7155f798-5ba37285]: errors:
- code: NO_METHOD_TO_UPDATE_FIELD
message: No method found to update field 'cluster' on resource 'px-cluster-1' of
type 'container.v1.cluster'. The resource may need to be recreated with the new
field.
The same succeeds if I remove loggingService.
Is there a way to update loggingService using deployment-manager without deleting the cluster?

The error NO_METHOD_TO_UPDATE_FIELD is caused by the initialClusterVersion field in the update call sent to GKE. This field is only used when the cluster is created, and the type definition doesn't currently allow it to be updated later. Either leave it at its original value (it has no effect on the deployment going forward) or delete/comment out that line.
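For example, the version line in the config above could simply be removed or commented out before running the update (a sketch of just that part of the cluster block):
cluster:
  description: "dev cluster"
  # initialClusterVersion: "1.13"  # only used at creation time; remove or comment out when updating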
Even with that fixed, there is no method to update the logging service either; Deployment Manager doesn't expose many update methods. Instead, try using gcloud to update the cluster directly. Keep in mind that the monitoring service has to be set together with the logging service, so the command would look like:
gcloud container clusters update px-cluster-1 --logging-service=logging.googleapis.com/kubernetes --monitoring-service=monitoring.googleapis.com/kubernetes --zone=europe-west1-b

Related

How to resolve "input ConnectedServiceName expects a service connection of type AzureRM" error?

I am learning how to create azure pipeline and ran into the following error:
The pipeline is not valid. Job Phase_1: Step
AzureResourceGroupDeployment input ConnectedServiceName expects a
service connection of type AzureRM but the provided service connection
"MY-SERVICE-CONNECTION-NAME" is of type generic.
What am I missing here?
azure-pipelines.yml
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - cosmos
  batch: True
jobs:
- job: Phase_1
  displayName: Phase 1
  cancelTimeoutInMinutes: 1
  pool:
    vmImage: ubuntu-latest
  steps:
  - checkout: self
  - task: AzureResourceGroupDeployment@2
    displayName: Azure Deployment:Create Or Update Resource Group action on DISPLAY-NAME
    inputs:
      # azureSubscription: 'SUBSCRIPTION'
      ConnectedServiceName: MY-SERVICE-CONNECTION-NAME
      resourceGroupName: DISPLAY-NAME
      location: West US # TBD
      csmFile: cosmos/deploy.json
      csmParametersFile: cosmos/parameters-dev.json
      deploymentName: DEPLOYMENT-NAME
I tried values from "service connections" but I'm not sure what the issue is here.
The error message is telling you the exact problem. Your service connection needs to be an Azure Resource Manager service connection. Create a service connection of the appropriate type.
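As a rough sketch, once a service connection of type Azure Resource Manager exists, the task inputs would look something like this (my-arm-connection is a placeholder name, not something from the question):
- task: AzureResourceGroupDeployment@2
  displayName: Create Or Update Resource Group
  inputs:
    azureSubscription: my-arm-connection   # must reference an Azure Resource Manager service connection
    resourceGroupName: DISPLAY-NAME
    location: West US
    csmFile: cosmos/deploy.json
    csmParametersFile: cosmos/parameters-dev.json
    deploymentName: DEPLOYMENT-NAME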
I can reproduce the issue:
As Daniel said, this is caused by the service connection type.
This document describes what the parameters of the task are:
https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureResourceGroupDeploymentV2/README.md#parameters-of-the-task
Let me share a little trick that can help you avoid this type of problem in the future. When you type '- task: sometask@version' in the correct place in the pipeline's YAML file in DevOps, you will see a 'Settings' button in the upper left. Click it and you can set the values through the UI, which filters the appropriate options for you.

Helmfile - "needs" keyword has no effect

I have been trying to make use of the keyword needs (following the doc) to control the order of installation of the releases.
Here is my helmfile:
helmDefaults:
  createNamespace: false
  timeout: 600
helmBinary: /usr/local/bin/helm
releases:
- name: dev-sjs-pg
  chart: ../helm_charts/sjs-pg
- name: dev-sjs
  chart: ../helm_charts/sjs
  needs: ['dev-sjs-pgg']
Regarding versions:
helmfile version v0.139.9
helm version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
When I run helmfile sync, both releases are installed simultaneously. In particular, there is no error despite my spelling mistake (dev-sjs-pgg instead of dev-sjs-pg). It is as if needs is simply not read.
Could you help me understand what I am doing wrong, please?
I tried to reproduce this. When executing helmfile --log-level=debug sync I see in the debug log:
processing 2 groups of releases in this order:
GROUP RELEASES
1 dev-sjs-pg
2 dev-sjs
I also see these are deployed one after another (just a few seconds apart, because I am deploying a fast nginx chart).
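For reference, a sketch of the releases section from the question with the needs typo corrected (dev-sjs-pg instead of dev-sjs-pgg), which is what the dependency declaration would need to look like for the ordering to apply:
releases:
- name: dev-sjs-pg
  chart: ../helm_charts/sjs-pg
- name: dev-sjs
  chart: ../helm_charts/sjs
  needs: ['dev-sjs-pg']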

Unable to deploy pre built image in app engine standard environment (GCP)

My Spring Boot application was building fine in Cloud Build and deploying without any issue until September.
Now my trigger fails at gcloud app deploy.
Step #4: ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: Deployment cannot use a pre-built image. Pre-built images are only allowed in the App Engine Flexible Environment.
app.yaml
runtime: java11
env: standard
service: service
handlers:
- url: /.*
  script: this field is required, but ignored
cloudbuild.yaml
steps:
# backend deployment
# Step 1:
- name: maven:3-jdk-14
  entrypoint: mvn
  dir: 'service'
  args: ["test"]
# Step 2:
- name: maven:3-jdk-14
  entrypoint: mvn
  dir: 'service'
  args: ["clean", "install", "-Dmaven.test.skip=true"]
# Step 3:
- name: docker
  dir: 'service'
  args: ["build", "-t", "gcr.io/service-base/base", "."]
# Step 4:
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/service-base/base"]
# Step 5:
- name: 'gcr.io/cloud-builders/gcloud'
  dir: 'service/src/main/appengine'
  args: ['app', 'deploy', "--image-url=gcr.io/service-base/base"]
  timeout: "30m0s"
# Step 6:
# dispatch.yaml deployment
- name: "gcr.io/cloud-builders/gcloud"
  dir: 'service/src/main/appengine'
  args: ["app", "deploy", "dispatch.yaml"]
  timeout: "30m0s"
timeout: "100m0s"
images: ["gcr.io/service-base/base"]
Cloud build error
Thanks in advance. I'm confused about how my build was working fine before and what I am doing wrong now.
You can't deploy a custom container on App Engine standard. You have to provide your code and the runtime environment; a buildpack is then used to create a standard container on Google's side (for information, a new Cloud Build job is run for this), and that is what gets deployed on App Engine.
I recommend you have a look at Cloud Run for your custom container. It's very close to App Engine (and even better on many points!) and very customizable.
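If you go the Cloud Run route, the deploy step could look roughly like this (the service name and region are placeholders, not values from the question):
gcloud run deploy my-service \
  --image=gcr.io/service-base/base \
  --region=us-central1 \
  --platform=managed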
What your cloudbuild.yaml comments refer to as Step 5 corresponds to Step #4 in the error because the system numbers steps from 0.
The error message is accurate; App Engine standard (!) differs from App Engine flexible in that the latter (flexible) permits container image deployments. App Engine standard deploys from sources.
See Google's example.
It's possible that something has changed on Google's side that's causing the issue, but the env: standard in your app.yaml suggests the build file has changed.
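A sketch of what the deploy step could look like when deploying from sources rather than a pre-built image; for the Java 11 standard runtime, gcloud app deploy can be pointed at the built jar (the jar path here is an assumption, not something from the question):
# Step 5 (sketch): deploy the built artifact instead of an image
- name: 'gcr.io/cloud-builders/gcloud'
  dir: 'service'
  args: ['app', 'deploy', 'target/service.jar']
  timeout: "30m0s"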

update existing infrastructure on heroku using terraform

I've got this infrastructure description
variable "HEROKU_API_KEY" {}
provider "heroku" {
email = "sebastrident#gmail.com"
api_key = "${var.HEROKU_API_KEY}"
}
resource "heroku_app" "default" {
name = "judge-re"
region = "us"
}
Originally I forgot to specify a buildpack, and it created the application on Heroku. I decided to add it to the resource entry:
buildpacks = [
  "heroku/java"
]
But when I try to apply the plan in Terraform I get this error:
Error: Error applying plan:
1 error(s) occurred:
* heroku_app.default: 1 error(s) occurred:
* heroku_app.default: Post https://api.heroku.com/apps: Name is already taken
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Terraform plan looks like this
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ heroku_app.judge_re
    id:                 <computed>
    all_config_vars.%:  <computed>
    buildpacks.#:       "1"
    buildpacks.0:       "heroku/java"
    config_vars.#:      <computed>
    git_url:            <computed>
    heroku_hostname:    <computed>
    name:               "judge-re"
    region:             "us"
    stack:              <computed>
    web_url:            <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
As a workaround I tried to add destroy in my deploy.sh script
terraform init
terraform plan
terraform destroy -force
terraform apply -auto-approve
But it does not destroy the resource as I get the message Destroy complete! Resources: 0 destroyed.
What is the problem?
Link to build
It looks like you also changed the name of the resource. Your original example has the resource name heroku_app.default while your plan has heroku_app.judge_re.
To point your state to the remote resource, so Terraform knows you are editing and not trying to recreate the resource, use terraform import:
terraform import heroku_app.judge_re judge-re
In Terraform, you normally needn't destroy the whole stack when you just want to re-build one or several resources in it.
terraform taint does the trick. The terraform taint command manually marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
terraform taint heroku_app.default
Second, when troubleshooting why the resource isn't listed among the resources to destroy, please make sure you point at the right Terraform tfstate file.
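One quick way to check which resources the current state file actually tracks (a generic command, not from the original answer):
terraform state list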
When you ran terraform plan, did you see any resources that had already been created?

Configurable Replica Number in Template using ValueFrom

I have a template.yml file that is used when deploying to any OpenShift project. Each project has a specific project-props configMap to be used; this is part of our CI/CD pipeline, so each project has a unique project.props available to it.
I would like to be able to control the number of replicas and CPU/Memory limits based on what project I am deploying to. For example a branch testing OpenShift project vs Performance testing OpenShift project would have a different CPU request and limit than an ephemeral OpenShift project.
My template.yml file looks something like this:
// <snip>
spec:
  replicas: "${OS_REPLICAS}"
// <snip>
resources:
  limits:
    cpu: "${OS_CPU_LIMIT}"
    memory: "${OS_MEMORY_LIMIT}"
  requests:
    cpu: "${OS_CPU_REQUEST}"
    memory: "${OS_MEMORY_REQUEST}"
// <snip>
parameters:
- name: OS_REPLICAS
  displayName: OS Number of Replicas
  valueFrom:
    configMapKeyRef:
      name: project-props
      key: os.replicas
// rest of params are similar
My relevant project-props section is:
os.replicas=2
os.cpu.limit=2
os.cpu.request=250m
os.memory.limit=1Gi
os.memory.request=1Gi
When I try to deploy this I get the following error:
quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
If I change template.yml to have a parameter defined it works fine
parameters:
- name: OS_REPLICAS
  displayName: OS Number of Replicas
  value: 2
It seems that valueFrom and value behave differently. Is this impossible to do using valueFrom? Is there another way I can dynamically change spec and resources using a configMap?
The alternative is to deploy and then use oc scale dc <deploy_config_name> --replicas=<number> but it's not very elegant.
Where you have:
spec:
  replicas: "${OS_REPLICAS}"
you should have:
spec:
  replicas: "${{OS_REPLICAS}}"
With template parameter of:
parameters:
- name: OS_REPLICAS
  displayName: OS Number of Replicas
  value: 2
See https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters for use of "${{}}".
What it does is interpret the contents of the parameter as JSON/YAML, rather than a string value. This allows you to supply an integer, which replicas requires.
So you don't need valueFrom, which wouldn't work anyway as that is only usable for environment variables and not arbitrary fields like replicas.
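For completeness, valueFrom of that shape is valid for container environment variables, just not for fields like replicas — a sketch using the question's configMap (key names are illustrative):
env:
- name: OS_REPLICAS
  valueFrom:
    configMapKeyRef:
      name: project-props
      key: os.replicas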
As to trying to set a default for memory and CPU for pods deployed in a project, you should look at having a LimitRange resource defined against the project and set a default.
https://docs.openshift.com/container-platform/3.5/dev_guide/compute_resources.html#dev-limit-ranges
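A minimal LimitRange sketch for project-level defaults (the numbers are illustrative, not taken from the question):
apiVersion: v1
kind: LimitRange
metadata:
  name: project-limits
spec:
  limits:
  - type: Container
    default:
      cpu: "2"
      memory: 1Gi
    defaultRequest:
      cpu: 250m
      memory: 1Gi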
I figured out the answer. It does not read the values from the file, but at least they can be dynamic.
OpenShift has an oc process command that can be run when using a template.
So this works by doing:
oc process -f <template_name>.yaml -v <param_name>=<param_value>
This will overwrite the parameter value with the one inserted by -v.
An actual example would be:
oc process -f ./src/main/openshift/service.template.yaml -v OS_REPLICAS=2
You can read more about it in the OpenShift template documentation.
It seems that the OpenShift Origin team does not want to support using files for parameter insertion. You can read more about it here:
https://github.com/openshift/origin/pull/10952
https://github.com/openshift/origin/issues/10687
