How to add Elastic APM integration from API/CMD/configuration file - elasticsearch

I've created a docker-compose file that deploys Elasticsearch, Kibana, and Elastic Agent, all version 8.7.0.
In the Kibana configuration file I define the policy I need under xpack.fleet.agentPolicies; with a single command my whole environment comes up and every component connects successfully.
The only remaining issue is one manual step: I have to go to Kibana -> Observability -> APM -> Add Elastic APM and then fill in the server configuration.
I want to automate this and manage it from an API, the command line, or a configuration file; I don't want to do it from the UI.
What is the way to do this? In which component, and at which path should the configuration live?
I tried to look for an API or a command to do this, but with no luck. I'm hoping for help automating this remaining step.
Update 1
I've tried to add it as below, but I still can't see the integration added.
package_policies:
  - name: fleet_server-apm
    id: default-fleet-server
    package:
      name: fleet_server
    inputs:
      - type: apm
        enabled: true
        vars:
          - name: host
            value: "0.0.0.0:8200"
          - name: url
            value: "http://0.0.0.0:8200"
          - name: enable_rum
            value: true
            frozen: true

Tldr;
Yes, I believe there is a way to do it, but I am pretty sure it is poorly documented.
You can find some ideas in the apm-server repository.
Solution
In the kibana.yml file you can add some Fleet-related settings.
The section below is taken from the repository above and let me set up APM automatically.
If you have specific settings you would like enabled, though, I am unsure where you would provide them.
xpack.fleet.packages:
  - name: fleet_server
    version: latest
xpack.fleet.agentPolicies:
  - name: Fleet Server (APM)
    id: fleet-server-apm
    is_default_fleet_server: true
    is_managed: false
    namespace: default
    package_policies:
      - name: fleet_server-apm
        id: default-fleet-server
        package:
          name: fleet_server

It is true that the Kibana Fleet API is very poorly documented at the moment. I think your problem is that you are trying to add the variables to the fleet_server package instead of the apm package. Your YAML should look like this:
package_policies:
  - name: fleet_server-apm
    id: default-fleet-server
    package:
      name: fleet_server
  - name: apm-1
    package:
      name: apm
    inputs:
      - type: apm
        keep_enabled: true
        vars:
          - name: host
            value: 0.0.0.0:8200
            frozen: true
          - name: url
            value: "http://0.0.0.0:8200"
            frozen: true
          - name: enable_rum
            value: true
            frozen: true
Source
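If you prefer an API call over kibana.yml, the Kibana Fleet API can also create the package policy at runtime. The sketch below is an assumption-laden example, not a verified recipe: the Kibana URL, the credentials, and the fleet-server-apm agent policy id are placeholders taken from the config above, and the exact payload shape can vary between stack versions.

```shell
# Hypothetical sketch: create the APM package policy via the Kibana Fleet API.
# KIBANA_URL, AUTH, and policy_id are assumptions; adjust for your deployment.
KIBANA_URL="http://localhost:5601"
AUTH="elastic:changeme"

PAYLOAD='{
  "name": "apm-1",
  "namespace": "default",
  "policy_id": "fleet-server-apm",
  "package": { "name": "apm" },
  "inputs": [
    {
      "type": "apm",
      "enabled": true,
      "streams": [],
      "vars": {
        "host": { "type": "text", "value": "0.0.0.0:8200" },
        "url": { "type": "text", "value": "http://0.0.0.0:8200" },
        "enable_rum": { "type": "bool", "value": true }
      }
    }
  ]
}'

create_apm_policy() {
  # kbn-xsrf is required on all mutating Kibana API calls.
  curl -s -X POST "$KIBANA_URL/api/fleet/package_policies" \
    -u "$AUTH" \
    -H "kbn-xsrf: true" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
}
```

Calling create_apm_policy once Kibana is healthy (e.g. from a one-shot setup container in the compose file) would remove the manual UI step.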

Related

How can I provide a templated manifest for a Helm chart dependency?

I have an application deployed with a Helm chart that has dependencies. I have templated YAML manifests in the templates directory for the main chart, but I also need to provide templated manifests for the dependency.
The dependency is a zipped tar file in the charts directory - I believe this is what was pulled in when I ran helm dependency build (or update - I forget which I used). I can manually un-tar this file and access all of the dependent chart's components within, including its templates directory. Can I add the appropriate Go template code to a manifest in there? Will that work and is it good practice? Is there a "better" way to do this?
Here are example files:
Chart.yaml:
apiVersion: v2
name: spoe-staging
type: application
version: 1.0.0
dependencies:
  - name: keycloak
    version: 18.3.0
    repository: https://codecentric.github.io/helm-charts
    condition: keycloak.enabled
values.yaml:
...
keycloak:
  enabled: true
  extraEnv: |
    - name: X509_CA_BUNDLE
      value: "/usr/share/pki/ca-trust-source/anchors/*.crt"
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
  extraVolumeMounts: |
    - name: trusted-certs
      mountPath: /usr/share/pki/ca-trust-source/anchors/
  extraVolumes: |
    - name: trusted-certs
      configMap:
        name: trusted-certs
...
As you can see, the keycloak dependency needs a ConfigMap named trusted-certs, containing certificate information.
This is just one example, there may be other things I may need to templatize at a dependency level. I don't think I should locate the ConfigMap in the main chart templates directory, since it has nothing to do with that chart.
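For concreteness, the ConfigMap that the keycloak values above reference would look roughly like this; the key name and certificate body are placeholders, not taken from the question:

```yaml
# Hypothetical sketch of the trusted-certs ConfigMap the extraVolumes block mounts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-certs
data:
  internal-ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```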

represent helm chart values.yaml in helmfile.yaml

I am trying to represent the following in the helmfile.yaml but I am getting an error. Can anyone help me to set it up?
values.yaml
extraVolumes:
  - name: google-cloud-key
    secret:
      secretName: gcloud-auth
I tried the following in helmfile.yaml
repositories:
  - name: loki
    url: https://grafana.github.io/loki/charts
releases:
  - name: loki
    namespace: monitoring
    chart: loki/loki
    set:
      - name: extraVolumes.name
        value: google-cloud-key
      - name: extraVolumes.secret.secretName
        value: gcloud-auth
The error I am getting is
coalesce.go:160: warning: skipped value for extraVolumes: Not a table.
I also tried with the following in helmfile.yaml
- name: extraVolumes.name[]
  value: google-cloud-key
This gave me the following error
Error: failed parsing --set data: key map "extraVolumes" has no value
Any idea?
Helmfile has two ways to provide values to the charts it installs. You're using set:, which mimics the finicky helm install --set option. However, Helmfile also supports values:, which generally maps to helm install -f. Helmfile's values: supports two extensions: if a filename in the list ends in .gotmpl, the values file itself is processed as a template before being given to Helm; or you can put inline YAML-syntax values directly in helmfile.yaml.
This last option is probably easiest. Instead of using set:, use values:, and drop that block of YAML directly into helmfile.yaml.
releases:
  - name: loki
    namespace: monitoring
    chart: loki/loki
    values: # not `set:`
      - extraVolumes: # inline YAML content as a single list item
          - name: google-cloud-key
            secret:
              secretName: gcloud-auth
values: is set to a list of either filenames or inline mappings. If you're not deeply familiar with YAML syntax, this means you need to put a - list-item indicator before the inline YAML block. If you already have a list of values: files you can add this additional item into the list wherever appropriate.
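The file-based alternative mentioned above can be sketched like this; the filename and the environment variable are assumptions for illustration, not part of the original question:

```yaml
# helmfile.yaml: reference an external values file; the .gotmpl suffix
# tells Helmfile to render the file as a template before passing it to Helm.
releases:
  - name: loki
    namespace: monitoring
    chart: loki/loki
    values:
      - loki-values.yaml.gotmpl

# loki-values.yaml.gotmpl (hypothetical):
# extraVolumes:
#   - name: google-cloud-key
#     secret:
#       secretName: {{ env "GCLOUD_SECRET_NAME" | default "gcloud-auth" }}
```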

"config should have required property 'media_folder'" - when it does

It's been a while since I've played with Netlify CMS, and I feel I've had this problem before but can't find the answer. My config.yml has media_folder in it, but I'm getting an error that the setting can't be found - anyone have any ideas? This is my config (full file):
backend:
  name: github
  repo: acecentre/nhs-service-finder
  branch: master
collections:
  - name: "nhs-service"
    label: "Service"
    folder: "content/ccg"
    media_folder: "static/images/uploads"
    media_library:
      name: uploads
    create: false
But on loading the page (here) I get
Config Errors:
config should have required property 'media_folder'
config should have required property 'media_library'
config should match some schema in anyOf
Check your config.yml file.
What am I doing wrong?
Well, it took me a while... media_folder and media_library should be root settings, not inside collections:
backend:
  name: github
  repo: acecentre/nhs-service-finder
  branch: master
media_folder: "static/images/uploads"
media_library:
  name: uploads
collections:
  - name: "nhs-service"
    label: "Service"
    folder: "content/ccg"

Why doesn't the metricbeat index name change each day?

I am using a Metricbeat (7.3) docker container alongside several other docker containers, sending the results to an Elasticsearch (7.3) instance. This works, and the first time everything spins up I get an index in Elasticsearch called metricbeat-7.3.1-2019.09.06-000001.
The initial problem is that I have a Grafana dashboard set up to look for an index with today's date, so it seems to ignore the one created several days ago altogether. I could try to figure out what's wrong with those Grafana queries, but more generally I need those index names to roll over at some point - the existing index is already over 1.3GB, and at some point that will just be too big for the system.
My initial metricbeat.yml config:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
Searching around a bit, it seems like the index field on the elasticsearch output should configure the index name, so I tried the following:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"
That throws an error about needing setup.template settings, so I settled on this:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"

setup.template:
  overwrite: true
  name: "metricbeat"
  pattern: "metricbeat-*"
I don't really know what the setup.template section does, so most of that is a guess from Google searches.
I'm not really sure if the issue is on the metricbeat side, or on the elasticsearch side, or somewhere in-between. But bottom line - how do I get them to roll the index to a new one when the day changes?
This is the setting/steps that worked for me:
metricbeat.yml file:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["<es-ip>:9200"]
  index: metricbeat-%{[beat.version]}
  index_pattern: -%{+yyyy.MM.dd}
  ilm.enabled: true

Then, over in Kibana (port 5601), go to "Stack Monitoring" and select "metricbeat-*".
Start with that kind of setting; what follows from there is self-explanatory too.
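In Beats 7.x the documented way to control rollover is through the setup.ilm.* options, which take precedence over a custom output index name. A minimal sketch, assuming the default ILM policy is otherwise acceptable:

```yaml
# metricbeat.yml: let index lifecycle management handle rollover
# instead of baking the date into the index name.
output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]

setup.ilm.enabled: true
setup.ilm.rollover_alias: "metricbeat"   # writes go through this alias
setup.ilm.pattern: "{now/d}-000001"      # e.g. metricbeat-2019.09.06-000001
```

With this, Elasticsearch rolls the underlying index over according to the ILM policy (by age or size), and dashboards can simply query the alias rather than a date-stamped name.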

Share variables between steps in drone.io

It seems to me that drone.io does not share parameters across pipeline steps.
Is it possible to read the parameters for the plugins from a file, e.g. a directive
like "from_file" similar to the already existing "from_secret"? This is how one could use it:
kind: pipeline
name: default

steps:
  - name: get_repo_name
    image: alpine
    commands:
      - echo "hello" > .repo_name

  - name: docker
    image: plugins/docker
    settings:
      repo:
        from_file: .repo_name
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password
The ability to read input from a file is largely the choice of the plugin author, but creating plugins is fairly simple: most settings are just passed in as PLUGIN_<VARIABLE> environment variables, so a plugin can offer such an option itself.
https://docs.drone.io/plugins/bash/
To show that some of the plugins do read from file, one such example is drone-github-comment:
steps:
  - name: github-comment
    image: jmccann/drone-github-comment:1.2
    settings:
      message_file: file_name.txt
    when:
      status:
        - success
        - failure
FWIW, looking at your example, it seems you only want to pass the repo name? Those variables are already present in every pipeline, depending of course on the runner you are using; for the Docker runner you get all of these:
https://docs.drone.io/pipeline/environment/reference/
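If it really is just the repository name, the runner already exposes it, so no file passing is needed. A sketch using the documented DRONE_REPO_NAME variable (the registry prefix here is a placeholder):

```yaml
kind: pipeline
name: default

steps:
  - name: show_repo_name
    image: alpine
    commands:
      - echo "building $DRONE_REPO_NAME"

  - name: docker
    image: plugins/docker
    settings:
      # Drone substitutes ${DRONE_*} variables in the pipeline definition.
      repo: myregistry/${DRONE_REPO_NAME}
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password
```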
