I am using client-go to generate reports about resources for my team. We are using VPA (https://github.com/kubernetes/autoscaler) to compare its recommendations for pods against our own. I have everything working well except for the part where I get the recommendations. In kubectl, I would enter the following to get descriptions of the VPAs in the namespace:
k describe vpa -n ABC
The description for each VPA includes the following section:
Status:
Conditions:
Last Transition Time: 2022-02-09T10:33:28Z
Status: True
Type: RecommendationProvided
Recommendation:
Container Recommendations:
Container Name: my-service
Lower Bound:
Cpu: 25m
Memory: 442246463
Target:
Cpu: 25m
Memory: 442809964
Uncapped Target:
Cpu: 25m
Memory: 442809964
Upper Bound:
Cpu: 25m
Memory: 724829578
I cannot figure out how to get this information using the client-go package. I am using another package, "k8s.io/metrics/pkg/client/clientset/versioned", to get current resource information for a pod.
I'd appreciate your suggestions... The only thing I can think to do is execute kubectl inside my Go app, capture the output, and go from there... pretty hacky and not very scalable.
Thanks for your time and interest,
Mike
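One possible approach, sketched below: the autoscaler repository ships a generated clientset for the VerticalPodAutoscaler CRD that can be used alongside client-go. This is a minimal sketch, assuming a recent version of the k8s.io/autoscaler/vertical-pod-autoscaler module is added to go.mod and the namespace ABC from the question; error handling is abbreviated.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	vpaclientset "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/clientset/versioned"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the default kubeconfig; in-cluster config works too.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	vpaClient, err := vpaclientset.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List the VPA objects in the namespace, mirroring `kubectl describe vpa -n ABC`.
	vpas, err := vpaClient.AutoscalingV1().VerticalPodAutoscalers("ABC").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for _, vpa := range vpas.Items {
		if vpa.Status.Recommendation == nil {
			continue // no recommendation provided yet
		}
		for _, rec := range vpa.Status.Recommendation.ContainerRecommendations {
			// LowerBound, UpperBound, and UncappedTarget are available on the same struct.
			fmt.Printf("%s/%s target: cpu=%s memory=%s\n",
				vpa.Name, rec.ContainerName,
				rec.Target.Cpu().String(), rec.Target.Memory().String())
		}
	}
}

The ContainerRecommendations entries carry the same Lower Bound / Target / Uncapped Target / Upper Bound fields shown in the kubectl describe output, as ResourceList values.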
Related
I am using Google Cloud App Engine to deploy my quic-go server, but I am getting the error:
failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB).
I am using an app.yaml file to build from a Dockerfile, which is as follows:
FROM golang:1.18.3
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN apt-get update && apt-get install -y ffmpeg
CMD sudo --sysctl net.core.rmem_default=15000000
CMD sudo --sysctl net.core.rmem_max=15000000
RUN go build -x server.go
ENV GCS_BUCKETNAME xyz
ENV AI_CLIENT_SSL_CERT /path to cert
ENV AI_CLIENT_SSL_KEY /path to key
ENV GCP_BUCKET_SERVICE_ACCOUNT_CREDS /path to google cloud service account credential
CMD [ "./server" ]
This is my app.yaml:
runtime: custom
env: flex
env_variables:
GCS_BUCKETNAME : "xyz"
AI_CLIENT_SSL_CERT : "./path to cert"
AI_CLIENT_SSL_KEY : "./path to key"
GCP_BUCKET_SERVICE_ACCOUNT_CREDS : "./path to google cloud credential.json file"
service: streaming-app
automatic_scaling:
min_num_instances: 1
max_num_instances: 20
cpu_utilization:
target_utilization: 0.85
target_concurrent_requests: 100
Any sort of help will be appreciated.
sysctl is an OS-level configuration that doesn't fit App Engine's principal use case, and App Engine does not currently have any way of configuring the underlying sysctl settings. I believe Google Kubernetes Engine may be a better fit for running that server, as App Engine environments have a limited set of configurable settings.
Can you tell me the scenarios in which this file is not present in the kernel?
I'm not sure about the scenarios, as I have little experience with the kernel. That seems like a different question from the original post; you could raise a new Stack Overflow question about it.
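If the server does move to Kubernetes/GKE, note that net.core.rmem_max and net.core.rmem_default are node-level settings that typically cannot be set from a regular pod. One common pattern is a privileged DaemonSet that applies them on every node. A minimal sketch, assuming a cluster where privileged pods are allowed (the name and image are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rmem-tuner
spec:
  selector:
    matchLabels:
      name: rmem-tuner
  template:
    metadata:
      labels:
        name: rmem-tuner
    spec:
      hostNetwork: true        # write in the host's network namespace
      initContainers:
        - name: set-rmem
          image: busybox
          securityContext:
            privileged: true   # required for node-level sysctls
          command:
            - sh
            - -c
            - sysctl -w net.core.rmem_max=15000000 net.core.rmem_default=15000000
      containers:
        - name: pause          # keeps the pod running after the init container finishes
          image: registry.k8s.io/pause:3.9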
I have been trying to make use of the keyword needs (following the doc) to control the order of installation of the releases.
Here is my helmfile:
helmDefaults:
createNamespace: false
timeout: 600
helmBinary: /usr/local/bin/helm
releases:
- name: dev-sjs-pg
chart: ../helm_charts/sjs-pg
- name: dev-sjs
chart: ../helm_charts/sjs
needs: ['dev-sjs-pgg']
Regarding versions:
helmfile version v0.139.9
helm version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
When I run helmfile sync, both releases are installed simultaneously. In particular, there is no error despite my spelling mistake (dev-sjs-pgg instead of dev-sjs-pg). It is as if needs were simply not read.
Could you help me understand what I am doing wrong, please?
I tried to reproduce this. When executing helmfile --log-level=debug sync, I see in the debug log:
processing 2 groups of releases in this order:
GROUP RELEASES
1 dev-sjs-pg
2 dev-sjs
I also see that the releases are deployed one after another (just a few seconds apart, because I am deploying a fast nginx chart).
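For reference, here is the helmfile from the question with the needs entry corrected to match the actual release name (dev-sjs-pgg changed to dev-sjs-pg), which is what a working setup should look like:

helmDefaults:
  createNamespace: false
  timeout: 600

helmBinary: /usr/local/bin/helm

releases:
  - name: dev-sjs-pg
    chart: ../helm_charts/sjs-pg
  - name: dev-sjs
    chart: ../helm_charts/sjs
    needs: ['dev-sjs-pg']   # must match the release name exactly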
NOTE: The actual problem I am trying to solve is running Testcontainers in CircleCI.
To make it reusable, I decided to extend the existing orb in my organisation.
The question is: how can I create several executors for a job? I was able to create the executor itself.
Executor ubuntu.yml:
description: >
The executor to run testcontainers without extra setup in Circle CI builds.
parameters:
# https://circleci.com/docs/2.0/configuration-reference/#resource_class
resource-class:
type: enum
default: medium
enum: [medium, large, xlarge, 2xlarge]
tag:
type: string
default: ubuntu-2004:202010-01
resource_class: <<parameters.resource-class>>
machine:
image: <<parameters.tag>>
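For reference, a parameterized executor like this is invoked by name, with its parameters supplied at the call site. A sketch, assuming the executor above is registered in the orb as ubuntu:

jobs:
  integration-test:
    executor:
      name: ubuntu            # the executor defined above
      resource-class: large   # overrides the default of medium
    steps:
      - checkout
      - run: echo "runs on the parameterized machine executor"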
One of the jobs:
parameters:
executor:
type: executor
default: openjdk
resource-class:
type: enum
default: medium
enum: [small, medium, medium+, large, xlarge]
executor: << parameters.executor >>
resource_class: << parameters.resource-class >>
environment:
# Customize the JVM maximum heap limit
MAVEN_OPTS: -Xmx3200m
steps:
# Instead of checking out code, just grab it the way it is
- attach_workspace:
at: .
# Guessing this is still necessary (we only attach the project folder)
- configure-maven-settings
- cloudwheel/fetch-and-update-maven-cache
- run:
name: "Deploy to Nexus without running tests"
command: mvn clean deploy -DskipTests
I couldn't find a good example of adding several executors, and I assume that I will need to add ubuntu and openjdk for every job. Am I right?
I continue looking into other orbs and documentation but cannot find a similar case to mine.
As stated in the CircleCI documentation, executors can be defined like this:
executors:
my-executor:
machine: true
my-openjdk:
docker:
- image: openjdk:11
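A job then chooses one of them by name, e.g.:

jobs:
  build-on-machine:
    executor: my-executor   # the machine executor above
    steps:
      - checkout
  build-on-jdk:
    executor: my-openjdk    # the docker executor above
    steps:
      - checkout
      - run: java -version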
As a side note, there can be many executors of any type, such as docker, machine (Linux), macos, or win.
See also this Stack Overflow question on how to invoke executors from CircleCI orbs.
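And since the job in the question already takes an executor-typed parameter, a workflow can pick the executor per invocation instead of duplicating the job. A sketch, assuming a hypothetical orb job name of deploy:

workflows:
  deploy-all:
    jobs:
      - my-orb/deploy:        # hypothetical orb/job name
          executor: ubuntu    # or openjdk
          resource-class: large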
I have a template.yml file that is used when deploying to any OpenShift project. Each project has a specific project-props ConfigMap to be used; as part of our CI/CD pipeline, each project has a unique project.props available to it.
I would like to be able to control the number of replicas and CPU/memory limits based on the project I am deploying to. For example, a branch-testing OpenShift project or a performance-testing OpenShift project would have different CPU requests and limits than an ephemeral OpenShift project.
My template.yml file looks something like this:
// <snip>
spec:
replicas: "${OS_REPLICAS}"
// <snip>
resources:
limits:
cpu: "${OS_CPU_LIMIT}"
memory: "${OS_MEMORY_LIMIT}"
requests:
cpu: "${OS_CPU_REQUEST}"
memory: "${OS_MEMORY_REQUEST}"
// <snip>
parameters:
- name: OS_REPLICAS
displayName: OS Number of Replicas
valueFrom:
configMapKeyRef:
name: project-props
key: os.replicas
// rest of params are similar
My relevant project-props section is:
os.replicas=2
os.cpu.limit=2
os.cpu.request=250m
os.memory.limit=1Gi
os.memory.request=1Gi
When I try to deploy this I get the following error:
quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
If I change template.yml to have a parameter value defined, it works fine:
parameters:
- name: OS_REPLICAS
displayName: OS Number of Replicas
value: 2
It seems that valueFrom and value behave differently. Is it impossible to do this using valueFrom? Is there another way I can dynamically change spec and resources using a ConfigMap?
The alternative is to deploy and then use oc scale dc <deploy_config_name> --replicas=<number>, but that's not very elegant.
Where you have:
spec:
replicas: "${OS_REPLICAS}"
you should have:
spec:
replicas: "${{OS_REPLICAS}}"
With template parameter of:
parameters:
- name: OS_REPLICAS
displayName: OS Number of Replicas
value: 2
See:
https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters
for use of "${{}}".
What it does is interpret the contents of the parameter as JSON/YAML, rather than a string value. This allows you to supply an integer, which replicas requires.
So you don't need valueFrom, which wouldn't work anyway as that is only usable for environment variables and not arbitrary fields like replicas.
As for setting a default for memory and CPU for pods deployed in a project, you should look at defining a LimitRange resource against the project and setting the defaults there.
https://docs.openshift.com/container-platform/3.5/dev_guide/compute_resources.html#dev-limit-ranges
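A minimal LimitRange sketch along those lines (the name and values here are illustrative, not from the question):

apiVersion: v1
kind: LimitRange
metadata:
  name: project-defaults
spec:
  limits:
    - type: Container
      default:            # default limits for containers that omit them
        cpu: "2"
        memory: 1Gi
      defaultRequest:     # default requests for containers that omit them
        cpu: 250m
        memory: 1Gi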
I figured out an answer. It does not read the values from the file, but at least they can be dynamic.
OpenShift has an oc process command that you can run when using a template.
So this works by doing:
oc process -f <template_name>.yaml -v <param_name>=<param_value>
This will overwrite the parameter's default value with the one supplied via -v.
An actual example would be:
oc process -f ./src/main/openshift/service.template.yaml -v OS_REPLICAS=2
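oc process only renders the template, so its output is typically piped straight into oc to create the objects, e.g.:

oc process -f ./src/main/openshift/service.template.yaml -v OS_REPLICAS=2 | oc create -f -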
You can read more about it in the OpenShift template documentation.
It seems that the OpenShift Origin team does not want to support using files for parameter insertion. You can read more about that here:
https://github.com/openshift/origin/pull/10952
https://github.com/openshift/origin/issues/10687
We had Slack notifications working in drone.io 0.4 just fine, but since we updated to 0.5 I can't get them working despite following the documentation.
Before, it was like this:
build:
build and deploy stuff...
notify:
slack:
webhook_url: $$SLACK_WEBHOOK_URL
channel: continuous_integratio
username: drone
You can see here that I used the $$ syntax to reference the special Drone config file of old.
Now my latest attempt looks like this:
pipeline:
build and deploy stuff...
slack:
image: plugins/slack
webhook: https://hooks.slack.com/services/...
channel: continuous_integratio
username: drone
According to the documentation, slack is now nested within the pipeline (previously build) level.
I tried swapping slack out for notify like it was before, I used the SLACK_WEBHOOK secret only via the Drone CLI, and there were other things I attempted as well.
Does anyone know what I might be doing wrong?
This is an (almost exact) copy of the yaml I am using, with Slack notifications enabled, except that I've masked the credentials:
pipeline:
build:
image: golang
commands:
- go build
- go test
slack:
image: plugins/slack
webhook: https://hooks.slack.com/services/XXXXXXXXX/YYYYYYYYY/ZZZZZZZZZZZZZZZZZZZZZZZZ
when:
status: [ success, failure ]
There is unfortunately nothing in your example that jumps out, with the possible exception of the channel name having a typo (although I'm not sure whether that represents your real yaml configuration or not).
If you are attempting to use secrets (via the CLI), you need to make sure you sign your yaml file and commit the signature file to your repository. You can then reference your secret in the yaml similarly to 0.4, but with a slightly different syntax:
pipeline:
build:
image: golang
commands:
- go build
- go test
slack:
image: plugins/slack
webhook: ${SLACK_WEBHOOK}
when:
status: [ success, failure ]
You can read more about secrets at http://readme.drone.io/usage/secret-guide/
You can also invoke the plugin directly from the command line to test different input values, which can help with debugging. See https://github.com/drone-plugins/drone-slack#usage
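A sketch of that kind of direct invocation, assuming Docker is available locally; Drone plugins read their settings from PLUGIN_* environment variables (the webhook value here is a placeholder):

docker run --rm \
  -e PLUGIN_WEBHOOK=https://hooks.slack.com/services/... \
  -e PLUGIN_CHANNEL=continuous_integration \
  plugins/slack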
The issue was that in 0.4 the notify plugin was located outside the scope of the pipeline (then called build), whereas since 0.5 it is located inside the pipeline. Combined with the fact that a failing pipeline exits its scope immediately, this means the slack (formerly notify) step never gets reached at all anymore.
The solution is to explicitly tell it to execute the step on failure as well, using the when block:
when:
status: [ success, failure ]
This is actually mentioned in the getting-started guide, but I didn't read it through to the end, as I was aiming to get things up and running quickly and didn't worry about what I considered to be edge cases.