I am writing a Go template to create a YAML config.
My YAML config:
logs:
  - type: file
    path: /opt/nomad/data/alloc/*/alloc/logs/*App1.stderr.*
    service: App1
    source: nomad
  - type: file
    path: /opt/nomad/data/alloc/*/alloc/logs/*App2.stderr.*
    service: App2
    source: nomad
My Go template:
logs:
{{- range $value := . -}}
  - type: file
    path: /opt/nomad/data/alloc/*/alloc/logs/*{{$value.Task}}.stderr.*
    exclude_paths: /opt/nomad/data/alloc/*/alloc/logs/*{{$value.Task}}.stderr.fifo
    service: {{$value.Task}}
    source: nomad
{{end}}
How do I need to change the template to get my YAML config?
The Go yaml package enables programs to comfortably encode and decode YAML values.
You can use that: https://pkg.go.dev/gopkg.in/yaml.v2
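For illustration, a minimal sketch of that approach, assuming the data you currently range over is a slice of items with a Task field (as in your template); the logEntry type and its field names are made up for this example:

package main

import (
    "fmt"
    "log"

    "gopkg.in/yaml.v2"
)

// logEntry mirrors one item of the desired "logs" list.
type logEntry struct {
    Type    string `yaml:"type"`
    Path    string `yaml:"path"`
    Service string `yaml:"service"`
    Source  string `yaml:"source"`
}

func main() {
    // Hypothetical input, matching the .Task field used in the template.
    tasks := []struct{ Task string }{{"App1"}, {"App2"}}

    entries := make([]logEntry, 0, len(tasks))
    for _, t := range tasks {
        entries = append(entries, logEntry{
            Type:    "file",
            Path:    fmt.Sprintf("/opt/nomad/data/alloc/*/alloc/logs/*%s.stderr.*", t.Task),
            Service: t.Task,
            Source:  "nomad",
        })
    }

    // Marshal the whole structure; yaml.v2 takes care of indentation and escaping.
    out, err := yaml.Marshal(map[string][]logEntry{"logs": entries})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Print(string(out))
}

Letting the YAML encoder produce the document avoids having to get whitespace trimming right in a text template.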
Related
I want to parse the following structure using Go:
---
prjA:
  user1:
    metadata:
      namespace: prj-ns
    spec:
      containers:
        - image: some-contaner:latest
          name: containerssh-client-image
          resources:
            limits:
              ephemeral-storage: 4Gi
            requests:
              ephemeral-storage: 2Gi
          securityContext:
            runAsGroup: 1000
            runAsNonRoot: true
            runAsUser: 1000
      imagePullSecrets:
        - docker-registry-secret
I'm using sigs.k8s.io/yaml to unmarshal YAML:
var userConfig map[string]map[string]kubernetes.PodConfig
err = yaml.UnmarshalStrict(yamlFile, &userConfig)
where kubernetes is imported from github.com/containerssh/kubernetes. Everything works fine, except the imagePullSecrets, which gives the following error:
ERROR unmarshal user config file; error [error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go struct field PodSpec.spec.imagePullSecrets of type v1.LocalObjectReference]
What is the correct way to specify / parse an imagePullSecrets in go?
This is a problem with the input - and maybe not a very clear error message.
The imagePullSecrets must be specified using the name key, like:
imagePullSecrets:
  - name: docker-registry-secret
I leave the question up as it might help other people who run into the same problem.
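For completeness, a minimal sketch showing why the name key is required: sigs.k8s.io/yaml decodes the list into []v1.LocalObjectReference, whose only field is Name. The wrapper struct here is hypothetical and stands in for kubernetes.PodConfig, unmarshalling straight into the core/v1 types:

package main

import (
    "fmt"
    "log"

    v1 "k8s.io/api/core/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    // With the corrected input, each imagePullSecrets item is an object with a
    // "name" key, which maps onto v1.LocalObjectReference{Name: ...}.
    input := []byte(`
spec:
  imagePullSecrets:
    - name: docker-registry-secret
`)

    // Hypothetical wrapper with just the spec field, for illustration only.
    var cfg struct {
        Spec v1.PodSpec `json:"spec"`
    }
    if err := yaml.UnmarshalStrict(input, &cfg); err != nil {
        log.Fatal(err)
    }
    fmt.Println(cfg.Spec.ImagePullSecrets[0].Name) // prints: docker-registry-secret
}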
I am using the tpl function (https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function) to include a YAML config file inside a template.
templates/omer.yaml:
{{ $omer := tpl (.Files.Get "files/app.conf") . }}
{{- $omer -}}
The file files/app.conf contains:
pod1:
  custom:
    deployment:
      annotations:
        sidecar.istio.io/proxyCPU: 10m
        sidecar.istio.io/proxyMemory: "16Mi"
        sidecar.istio.io/proxyCPULimit: 50m
        sidecar.istio.io/proxyMemoryLimit: "64Mi"
When I run helm template, it works fine. What I'm trying to accomplish is to merge the app.conf YAML into another YAML that is also defined in the templates folder. The merge command works for other YAML values, but when using tpl with Files.Get, the result is recognised as a string and not as a map. Trying this in templates/omer.yaml:
{{ $omer := tpl (.Files.Get "files/app.conf") . }}
{{- $output := mergeOverwrite $omer }}
{{- $output -}}
gives me an error:
Error: template: chart/templates/omer.yaml:2:30: executing "chart/templates/omer.yaml" at <$omer>: wrong type for value; expected map[string]interface {}; got string
Also when trying to convert:
{{- $output := fromYaml ($omer ) }}
Getting an error:
Error: YAML parse error on chart/templates/omer.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type releaseutil.SimpleHead
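For reference, a sketch of the general pattern, under the assumption that the goal is to merge the parsed file into another dict and render the result as YAML. mergeOverwrite needs dicts on both sides, so the tpl result has to go through fromYaml first, and the final output should go through toYaml so the rendered template is itself valid YAML. $base is a hypothetical dict standing in for the other YAML to merge with:

{{- /* Parse the rendered file into a dict so it can be merged with other dicts. */ -}}
{{- $omer := fromYaml (tpl (.Files.Get "files/app.conf") .) -}}
{{- /* $base is a placeholder dict; in a real chart it might come from .Values or another template. */ -}}
{{- $base := dict "pod1" (dict "custom" (dict "replicas" 2)) -}}
{{- /* mergeOverwrite merges into $base, with values from $omer winning on conflicts. */ -}}
{{- $output := mergeOverwrite $base $omer -}}
{{ toYaml $output }}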
My question: how can CodePipeline read the value of a field in a JSON file that is in SourceCodeArtifact?
I have a GitHub repo that contains a file imageManifest.json, which looks like this:
{
  "image_id": "docker.pkg.github.com/my-org/my-repo/my-app",
  "image_version": "1.0.1"
}
I want my AWS CodePipeline Source stage to be able to read the value of image_version from imageManifest.json and pass it as a parameter to a CloudFormation action in a subsequent stage of my pipeline.
For reference, here is my source stage.
Stages:
  - Name: GitHubSource
    Actions:
      - Name: SourceAction
        ActionTypeId:
          Category: Source
          Owner: ThirdParty
          Version: '1'
          Provider: GitHub
        OutputArtifacts:
          - Name: SourceCodeArtifact
        Configuration:
          Owner: !Ref GitHubOwner
          Repo: !Ref GitHubRepo
          OAuthToken: !Ref GitHubAuthToken
And here is my deploy stage:
  - Name: DevQA
    Actions:
      - Name: DeployInfrastructure
        InputArtifacts:
          - Name: SourceCodeArtifact
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: CloudFormation
          Version: '1'
        Configuration:
          StackName: !Ref AppName
          Capabilities: CAPABILITY_NAMED_IAM
          RoleArn: !GetAtt [CloudFormationRole, Arn]
          ParameterOverrides: !Sub '{"ImageId": "${image_version??}"}'
Note that image_version in the last line above is just my aspirational placeholder to illustrate how I hope to use the image_version JSON value.
How can CodePipeline read the value of a field in a JSON file that is in SourceCodeArtifact?
Step Functions? Lambda? CodeBuild?
You can use a CodeBuild step in between Source and Deploy stages.
In the CodeBuild step, read the image_version from SourceArtifact (the artifact produced by the source stage) and write it into a 'template configuration' file in an output artifact; that file is a configuration property of the CloudFormation action and can hold parameter values for your CloudFormation stack. Use it instead of the ParameterOverrides you are currently using.
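As an illustration of that approach, a sketch of a buildspec that extracts image_version with jq and writes a CloudFormation template configuration file (the file and artifact names are placeholders):

version: 0.2
phases:
  build:
    commands:
      # Pull the version out of the manifest committed in the source repo.
      - IMAGE_VERSION=$(jq -r '.image_version' imageManifest.json)
      # Write it into the documented template configuration format.
      - printf '{"Parameters":{"ImageId":"%s"}}' "$IMAGE_VERSION" > template-config.json
artifacts:
  files:
    - template-config.json

The CloudFormation action's TemplateConfiguration property would then point at this file in the CodeBuild output artifact (for example BuildArtifact::template-config.json, where BuildArtifact is whatever you name that artifact), and the Parameters block in the file replaces the ParameterOverrides.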
Fn::GetParam is what you want. It returns a value from a key-value pair in a JSON-formatted file, and the JSON file must be included in an artifact.
Here is the documentation and it gives you some examples: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-parameter-override-functions.html#w2ab1c13c20b9
It should be something like:
ParameterOverrides: |
  {
    "ImageId" : { "Fn::GetParam" : ["SourceCodeArtifact", "imageManifest.json", "image_id"]}
  }
Creating a private GKE cluster with YAML.
I am currently looking into creating a private GKE cluster. I tried adding private settings in the YAML file but I'm getting an error:
resources:
  - name: myclus
    type: gcp-types/container-v1:projects.locations.clusters
    properties:
      parent: projects/[PROJECT_ID]/locations/[REGION]
      cluster:
        name: my-clus
        zone: [ZONE]
        network: [NETWORK]
        subnetwork: [SUBNETWORK]  ### leave this field blank if using the default network ###
        initialClusterVersion: "1.13"
        nodePools:
          - name: my-clus-pool1
            initialNodeCount: 1
            autoscaling:
              enabled: true
              minNodeCount: 1
              maxNodeCount: 12
            management:
              autoUpgrade: true
              autoRepair: true
            config:
              machineType: n1-standard-1
              diskSizeGb: 15
              imageType: cos
              diskType: pd-ssd
              oauthScopes:  ### Change scope to match needs ###
                - https://www.googleapis.com/auth/cloud-platform
              preemptible: false
I'm looking for it to create a private cluster with no external IPs.
Did you ever have the chance to go over this documentation?
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#public_master
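From those docs, what the YAML above seems to be missing are the VPC-native and private-cluster settings on the cluster itself. A rough sketch of what could be added under properties.cluster, with placeholder CIDRs (the exact field names should be double-checked against the container-v1 API reference):

cluster:
  # ...existing fields from above...
  ipAllocationPolicy:
    useIpAliases: true               # private clusters must be VPC-native
  privateClusterConfig:
    enablePrivateNodes: true         # nodes get no external IPs
    enablePrivateEndpoint: false     # set to true to also make the control plane endpoint private
    masterIpv4CidrBlock: 172.16.0.0/28   # placeholder
  masterAuthorizedNetworksConfig:
    enabled: true
    cidrBlocks:
      - displayName: allowed-range   # placeholder
        cidrBlock: 10.0.0.0/8        # placeholder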
Well, I also found this other official Google document that can help you achieve what you want:
https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
On the "Creating the Docker Image" section there's a Dockerfile example.
Best of Luck!
I am looking for a way to dynamically set the key using the path of the file below.
For example, if I have this YAML:
prospectors.config:
  - fields:
      queue_name: <somehow get the globbed string below in here>
    paths:
      - /var/log/casino/*.log
    type: log
output.redis:
  hosts:
    - "producer:6379"
  key: "%{[fields.queue_name]}"
And if I then had a file called /var/log/casino/test.log, the key would become test.
I'm not sure that what you want is possible.
You could use the source field and configure your Redis output using that as the key:
output.redis:
  hosts:
    - "producer:6379"
  key: "%{source}"
This has the disadvantage that the key would be the absolute path of the source file, not the basename your question asks for.
But suppose you have a small number of possible basename patterns and want a queue for each. For example, you have the files:
/common/path/test-1.log
/common/path/foo-0.log
/common/path/01-bar.log
/common/path/test-3.log
...
and want three queues in Redis named test, foo and bar. Then you could use the source field and the conditionals available in the keys configuration of the Redis output, something like this:
output.redis:
  hosts:
    - "producer:6379"
  key: "default_key"
  keys:
    - key: "test_key"
      when.contains:
        source: "test"
    - key: "foo_key"
      when.contains:
        source: "foo"
    - key: "bar_key"
      when.contains:
        source: "bar"