Unknown processors type "resourcedetection" for "resourcedetection"

I'm running the OpenTelemetry Collector with the image ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector:0.58.0.
In config.yaml I have:
processors:
  batch:
  resourcedetection:
    detectors: [ env ]
    timeout: 2s
    override: false
The collector is deployed as a sidecar, but it keeps failing with
collector server run finished with error: failed to get config: cannot unmarshal the configuration: unknown processors type "resourcedetection" for "resourcedetection" (valid values: [resource span probabilistic_sampler filter batch memory_limiter attributes])
Any idea what is causing this? I haven't found any relevant documentation or questions about it.

The Resource Detection Processor is part of the upstream otelcol-contrib distribution, so you would need to use otel/opentelemetry-collector-contrib:0.58.0 (or the equivalent on your container registry of choice) for this processor to be available in your collector.
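For example, if the sidecar is defined in a Kubernetes pod spec, only the image reference needs to change; a minimal sketch, where the container name, config path, and volume are illustrative:
containers:
  - name: otel-collector
    # the contrib distribution bundles resourcedetection alongside the core components
    image: otel/opentelemetry-collector-contrib:0.58.0
    args: ["--config=/etc/otelcol/config.yaml"]
    volumeMounts:
      - name: otel-config
        mountPath: /etc/otelcol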


How can I create several executors for a job in Circle CI orb?

NOTE: The actual problem I am trying to solve is running Testcontainers in CircleCI.
To make it reusable, I decided to extend the existing orb in my organisation.
The question is: how can I create several executors for a job? I was able to create the executor itself.
Executor ubuntu.yml:
description: >
  The executor to run testcontainers without extra setup in Circle CI builds.
parameters:
  # https://circleci.com/docs/2.0/configuration-reference/#resource_class
  resource-class:
    type: enum
    default: medium
    enum: [medium, large, xlarge, 2xlarge]
  tag:
    type: string
    default: ubuntu-2004:202010-01
resource_class: <<parameters.resource-class>>
machine:
  image: <<parameters.tag>>
One of the jobs:
parameters:
  executor:
    type: executor
    default: openjdk
  resource-class:
    type: enum
    default: medium
    enum: [small, medium, medium+, large, xlarge]
executor: << parameters.executor >>
resource_class: << parameters.resource-class >>
environment:
  # Customize the JVM maximum heap limit
  MAVEN_OPTS: -Xmx3200m
steps:
  # Instead of checking out code, just grab it the way it is
  - attach_workspace:
      at: .
  # Guessing this is still necessary (we only attach the project folder)
  - configure-maven-settings
  - cloudwheel/fetch-and-update-maven-cache
  - run:
      name: "Deploy to Nexus without running tests"
      command: mvn clean deploy -DskipTests
I couldn't find a good example of adding several executors, and I assume that I will need to add ubuntu and openjdk for every job. Am I right?
I keep looking into other orbs and the documentation but cannot find a case similar to mine.
As stated in the CircleCI documentation, executors can be defined like this:
executors:
  my-executor:
    machine: true
  my-openjdk:
    docker:
      - image: openjdk:11
Side note: there can be many executors of any type, such as docker, machine (Linux), macos, or win.
See the StackOverflow question on how to invoke executors from CircleCI orbs.
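Putting the pieces together for this question: a single job can accept an executor parameter, and the orb can declare several executors for callers to choose from. A minimal sketch, where the executor and job names are illustrative:
executors:
  ubuntu:
    machine:
      image: ubuntu-2004:202010-01
  openjdk:
    docker:
      - image: openjdk:11
jobs:
  deploy:
    parameters:
      executor:
        type: executor
        default: openjdk
    executor: << parameters.executor >>
    steps:
      - run: mvn clean deploy -DskipTests
A consuming config then selects the executor per invocation (for example, executor: my-orb/ubuntu on the my-orb/deploy job), so you do not need a separate copy of the job for each executor.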

How to update loggingService of container.v1.cluster with deployment-manager

I want to set the loggingService field of an existing container.v1.cluster through deployment-manager.
I have the following config
resources:
- name: px-cluster-1
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      description: "dev cluster"
      initialClusterVersion: "1.13"
      nodePools:
      - name: cluster-pool
        config:
          machineType: "n1-standard-1"
          oauthScopes:
          - https://www.googleapis.com/auth/compute
          - https://www.googleapis.com/auth/devstorage.read_only
          - https://www.googleapis.com/auth/logging.write
          - https://www.googleapis.com/auth/monitoring
        management:
          autoUpgrade: true
          autoRepair: true
        initialNodeCount: 1
        autoscaling:
          enabled: true
          minNodeCount: 3
          maxNodeCount: 10
      ipAllocationPolicy:
        useIpAliases: true
      loggingService: "logging.googleapis.com/kubernetes"
      masterAuthorizedNetworksConfig:
        enabled: false
      locations:
      - "europe-west1-b"
      - "europe-west1-c"
When I try to run gcloud deployment-manager deployments update ..., I get the following error
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1582040492957-59edb819a5f3c-7155f798-5ba37285]: errors:
- code: NO_METHOD_TO_UPDATE_FIELD
message: No method found to update field 'cluster' on resource 'px-cluster-1' of
type 'container.v1.cluster'. The resource may need to be recreated with the new
field.
The same succeeds if I remove loggingService.
Is there a way to update loggingService using deployment-manager without deleting the cluster?
The NO_METHOD_TO_UPDATE_FIELD error is due to updating "initialClusterVersion" when you issued the update call to GKE. This field is only used at cluster creation, and the type definition doesn't currently allow it to be updated later. So it should remain static at the original value, where it has no effect on the deployment moving forward, or you can delete/comment out that line.
Even with that fixed, there is still no method to update the logging service; Deployment Manager doesn't offer many update methods. Instead, try using the gcloud command to update the cluster directly. Keep in mind that you have to set the monitoring service together with the logging service, so the command would look like:
gcloud container clusters update px-cluster-1 --logging-service=logging.googleapis.com/kubernetes --monitoring-service=monitoring.googleapis.com/kubernetes --zone=europe-west1-b
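Once the operation finishes, one way to confirm the field actually changed (a usage sketch with standard gcloud flags):
gcloud container clusters describe px-cluster-1 --zone=europe-west1-b --format="value(loggingService)"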

How to monitor a specific process using metricbeats?

I have the following basic Metricbeat config:
metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - filesystem
    - memory
    - network
    - process
  enabled: true
  period: 10s
  processes: ['.*']
  cpu_ticks: false
Now I want to monitor only a specific process, with process id (pid) = 27056.
I know that I have to make some modifications under the "processes" field of the above config file. Can anyone please help on how to proceed further?
You can monitor the processes which match any of a list of regular expressions you pass. For example, this reports on all processes with nginx, java, or python in the command line:
processes: ['nginx','java', 'python']
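Since processes matches on names/patterns rather than PIDs, filtering to a fixed pid like 27056 can instead be done on the emitted events. A minimal sketch using Beats event processors, assuming your Metricbeat version reports the metricset.name and system.process.pid fields:
processors:
  # drop process events that are not about pid 27056; other metricsets are untouched
  - drop_event:
      when:
        and:
          - equals:
              metricset.name: process
          - not:
              equals:
                system.process.pid: 27056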

Configurable Replica Number in Template using ValueFrom

I have a template.yml file that is used when deploying to any OpenShift project. Each project has a specific project-props configMap to be used; this is part of our CICD pipeline, so each project has a unique project.props available to it.
I would like to be able to control the number of replicas and CPU/memory limits based on which project I am deploying to. For example, a branch-testing OpenShift project or a performance-testing OpenShift project would have a different CPU request and limit than an ephemeral OpenShift project.
My template.yml file looks something like this:
// <snip>
spec:
  replicas: "${OS_REPLICAS}"
// <snip>
resources:
  limits:
    cpu: "${OS_CPU_LIMIT}"
    memory: "${OS_MEMORY_LIMIT}"
  requests:
    cpu: "${OS_CPU_REQUEST}"
    memory: "${OS_MEMORY_REQUEST}"
// <snip>
parameters:
- name: OS_REPLICAS
  displayName: OS Number of Replicas
  valueFrom:
    configMapKeyRef:
      name: project-props
      key: os.replicas
// rest of params are similar
My relevant project-props section is:
os.replicas=2
os.cpu.limit=2
os.cpu.request=250m
os.memory.limit=1Gi
os.memory.request=1Gi
When I try to deploy this I get the following error:
quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
If I change template.yml to define the parameter with a literal value, it works fine:
parameters:
- name: OS_REPLICAS
  displayName: OS Number of Replicas
  value: 2
It seems that valueFrom and value behave differently. Is this impossible to do using valueFrom? Is there another way I can dynamically change spec and resources using a configMap?
The alternative is to deploy and then run oc scale dc <deploy_config_name> --replicas=<number>, but that's not very elegant.
Where you have:
spec:
  replicas: "${OS_REPLICAS}"
you should have:
spec:
  replicas: "${{OS_REPLICAS}}"
With template parameter of:
parameters:
- name: OS_REPLICAS
  displayName: OS Number of Replicas
  value: 2
See:
https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters
for use of "${{}}".
What it does is interpret the contents of the parameter as JSON/YAML, rather than a string value. This allows you to supply an integer, which replicas requires.
So you don't need valueFrom, which wouldn't work anyway as that is only usable for environment variables and not arbitrary fields like replicas.
As to setting a default for memory and CPU for pods deployed in a project, you should look at defining a LimitRange resource against the project to set defaults.
https://docs.openshift.com/container-platform/3.5/dev_guide/compute_resources.html#dev-limit-ranges
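A minimal LimitRange sketch along those lines, reusing the values from project-props above (the resource name is illustrative):
apiVersion: v1
kind: LimitRange
metadata:
  name: project-defaults
spec:
  limits:
  - type: Container
    default:            # default limits applied to containers that don't set one
      cpu: "2"
      memory: 1Gi
    defaultRequest:     # default requests
      cpu: 250m
      memory: 1Gi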
I figured out the answer. It doesn't read the values from the file, but at least they can be dynamic.
OpenShift has an oc process command that can be run when using a template.
So this works by doing:
oc process -f <template_name>.yaml -v <param_name>=<param_value>
This will overwrite the parameter value with the one passed via -v.
An actual example would be
oc process -f ./src/main/openshift/service.template.yaml -v OS_REPLICAS=2
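To actually create or update the objects from the processed template, the output can be piped back into oc (a usage sketch):
oc process -f ./src/main/openshift/service.template.yaml -v OS_REPLICAS=2 | oc apply -f -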
You can read more about it in the OpenShift template documentation.
It seems that the OpenShift Origin team does not want to support using files for parameter insertion. You can read more about it here:
https://github.com/openshift/origin/pull/10952
https://github.com/openshift/origin/issues/10687

Where are Gossip parameters set in Cassandra?

When I restart C*, I see the following message:
GossipTasks:1 ....FailureDetector.java:249 - Not marking nodes down due to local pause of 61578581871 > 5000000000
Where is 5000000000 set? Can it be changed?
env: C* 2.19 on Ubuntu 14.04
The default is defined in FailureDetector.java.
It can be overridden by specifying the system property cassandra.max_local_pause_in_ms:
-Dcassandra.max_local_pause_in_ms=3000
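One common place to add the flag is cassandra-env.sh, which is sourced at startup and appends to JVM_OPTS (a sketch; the exact path depends on your install):
# conf/cassandra-env.sh (or /etc/cassandra/cassandra-env.sh on package installs)
JVM_OPTS="$JVM_OPTS -Dcassandra.max_local_pause_in_ms=10000"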
Note that this only produces another warning, so if the purpose was to get rid of the initial warning, it doesn't matter :)
WARN [Background_Reporter:1] 2016-08-19 11:46:55,778
FailureDetector.java:59 - Overriding max local pause time to 10000ms
