Setting up Idempotency for Parse-Platform on k8s

I'm trying to enable the Idempotency options on a Parse-Platform server.
I've tried adding the following to my k8s config:
spec:
  containers:
    - args:
        - --idempotencyOptions
        - '{"paths":["classes/.*"],"ttl":30}'
I've also tried adding it as an env var (according to https://github.com/parse-community/parse-server/issues/7151 you have to write the JSON object as a string):
env:
  - name: PARSE_SERVER_EXPERIMENTAL_IDEMPOTENCY_OPTIONS
    value: '{"paths":["classes/.*"],"ttl":30}'
No luck. With the first option I get an error/crash loop stating idempotencyOptions isn't valid. With the second, it boots up fine, but duplicates still occur even when the proper header (X-Parse-Request-Id) is added.
Anyone have any other ideas?

So, it's always good to double-check your versions.
Both of these work; my Parse Server was 4.2.0, and this feature was introduced in 4.3.0. Once I upgraded, everything worked as intended.
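For reference, once the server image is on 4.3.0 or later, a minimal sketch of the container spec (the container name, image tag, and the valid quoted-key JSON are assumptions here, not from the original post):

spec:
  containers:
    - name: parse-server
      image: parseplatform/parse-server:4.3.0
      args:
        - --idempotencyOptions
        - '{"paths":["classes/.*"],"ttl":30}'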

Related

kubebuilder debug webhooks locally

We have a kubebuilder controller which is working as expected; now we need to create webhooks.
I followed the tutorial
https://book.kubebuilder.io/reference/markers/webhook.html
and now I want to run & debug the webhooks locally.
However, I'm not sure what to do regarding the certificate. Is there a simple way to create it? Any example would be very helpful.
BTW, I've installed cert-manager and applied the sample YAML below, but I'm not sure what to do next.
I need the simplest solution that lets me run and debug the webhooks locally, as I'm already doing with the controller (before using webhooks):
https://book.kubebuilder.io/cronjob-tutorial/running.html
Cert-manager
I've created the following inside my cluster
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: test
spec:
  # Secret names are always required.
  secretName: example-com-tls
  # secretTemplate is optional. If set, these annotations and labels will be
  # copied to the Secret named example-com-tls. These labels and annotations will
  # be re-reconciled if the Certificate's secretTemplate changes. secretTemplate
  # is also enforced, so relevant label and annotation changes on the Secret by a
  # third party will be overwritten by cert-manager to match the secretTemplate.
  secretTemplate:
    annotations:
      my-secret-annotation-1: "foo"
      my-secret-annotation-2: "bar"
    labels:
      my-secret-label: foo
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  subject:
    organizations:
      - jetstack
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  commonName: example.com
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
    - client auth
  # At least one of a DNS Name, URI, or IP address is required.
  dnsNames:
    - example.com
    - www.example.com
  uris:
    - spiffe://cluster.local/ns/sandbox/sa/example
  ipAddresses:
    - 192.168.0.5
  # Issuer references are always required.
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer
    # This is optional since cert-manager will default to this value however
    # if you are using an external issuer, change this to that issuer group.
    group: cert-manager.io
I'm still not sure how to wire this up with kubebuilder so that it works locally;
when I run the operator in debug mode I get the following error:
setup problem running manager {"error": "open /var/folders/vh/_418c55133sgjrwr7n0d7bl40000gn/T/k8s-webhook-server/serving-certs/tls.crt: no such file or directory"}
What I need is the simplest way to run webhooks locally.
Let me walk you through the process from the start.

1. Create the webhook as described in the CronJob tutorial: kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation. This will create webhooks for implementing defaulting logic and validation logic.
2. Implement the logic as instructed in Implementing defaulting/validating webhooks.
3. Install cert-manager. I find the easiest way to install it is via this command: kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml
4. Edit the config/default/kustomization.yaml file by uncommenting everything that has [WEBHOOK] or [CERTMANAGER] in its comments. Do the same for the config/crd/kustomization.yaml file (a sketch of the relevant sections appears after these steps).
5. Build your image locally using make docker-build IMG=<some-registry>/<project-name>:tag. You don't need to docker-push your image to a remote repository. If you are using a kind cluster, you can load your local image directly into it:
   kind load docker-image <your-image-name>:tag --name <your-kind-cluster-name>
6. Deploy it to your cluster with make deploy IMG=<some-registry>/<project-name>:tag.

You can also run the operator locally using the make run command, but that's a little tricky if you have enabled webhooks. I would suggest running it in a kind cluster this way; then you don't need to worry about injecting certificates, as cert-manager will do that for you. You can check out the /config/certmanager folder to figure out how this is functioning.
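For orientation on step 4, a rough sketch of the relevant sections of a scaffolded config/default/kustomization.yaml after uncommenting; the exact scaffold varies by kubebuilder version, so treat this as illustrative rather than authoritative:

bases:
- ../crd
- ../rbac
- ../manager
# [WEBHOOK] webhook Service and configurations
- ../webhook
# [CERTMANAGER] Certificate/Issuer so cert-manager provisions the serving cert
- ../certmanager

patchesStrategicMerge:
# [WEBHOOK] mounts the serving-cert secret into the manager pod
- manager_webhook_patch.yaml
# [CERTMANAGER] adds the cert-manager CA-injection annotations
- webhookcainjection_patch.yaml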

PROJECT_ID env and Secret Manager Access

I would like to use Secret Manager to store a credential for our Artifactory within a Cloud Build step. I have it working using a build similar to:
steps:
  - name: 'busybox:glibc'
    entrypoint: 'sh'
    args: ['-c', 'env']
    secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
    - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
      env: 'SECRET_VALUE'
All great, no problems - I then try and slightly improve it to:
steps:
  - name: 'busybox:glibc'
    entrypoint: 'sh'
    args: ['-c', 'env']
    secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
    - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
      env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a trigger-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, as a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID of a Secret Manager from within Cloud Build without using a trigger param?
For now, it's not possible to set a dynamic value in the secret field. I have already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more info to share, especially regarding availability.
EDIT 1
(January 22): Thanks to Seza443's comment, I tested again, and now it works with the automatically populated variables (PROJECT_ID and PROJECT_NUMBER), and also with user-defined substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
      env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration file schema. In the documentation they refer to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions
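Applied to the configuration from the question, that looks something like this (a sketch, using the same secret name as above; per the edit above, both PROJECT_ID and PROJECT_NUMBER are substituted in availableSecrets):

availableSecrets:
  secretManager:
    - versionName: "projects/$PROJECT_NUMBER/secrets/TEST-SECRET/versions/1"
      env: 'SECRET_VALUE'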

Helm, evaluate linux env variable in values.yaml

I have the following variable JVM_ARGS
base-values.yaml
app:
  env:
    PORT: 8080
    ...
    JVM_ARGS: >
      -Dspring.profiles.active=$(SPRING_PROFILE),test
      -Dspring.config.additional-location=/shared
      -javaagent:/app/dd-java-agent.jar
service-x-values.yaml
app:
  env:
    SPRING_PROFILE: my-local-profile
The values files are evaluated in the order:
base-values.yaml
service-x-values.yaml
I need JVM_ARGS to be evaluated against SPRING_PROFILE, and so far I cannot make it work.
What is the best way to do something like that?
I'm new to Helm and Kubernetes and have a feeling that I'm missing something basic.
What I tried: defining JVM_ARGS surrounded by double quotes and without them.
UPD:
The problem was that I had some custom Helm charts built by other devs, and I had little knowledge of how those charts worked. I only worked with values files which were applied against the chart templates.
I wanted the property to be resolved by Helm to
-Dspring.profiles.active=my-local-profile,vault
In the end I decided to look at how Spring Boot itself resolves properties and came up with the following:
-Dspring.profiles.active=${SPRING_PROFILE},vault
Since spring.profiles.active is a regular property, environment variables are allowed there, and Spring resolves the property at runtime, which worked for me.
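In values-file terms, the working entry then looks like this (a sketch based on the base-values.yaml above; note that SPRING_PROFILE must be present as an environment variable in the container at runtime for Spring to resolve ${SPRING_PROFILE}):

app:
  env:
    JVM_ARGS: >
      -Dspring.profiles.active=${SPRING_PROFILE},vault
      -Dspring.config.additional-location=/shared
      -javaagent:/app/dd-java-agent.jar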
I'm a bit confused: are you referring to an environment variable (as in the title of the question) or to a helm value?
1. Helm does not evaluate environment variables in the values files.
2. $(SPRING_PROFILE) is treated as a literal string; it's not evaluated.
3. Actually, Helm does not evaluate ANYTHING in the values files. They are a source of data, not templates. Placeholders (actually Go templates) are evaluated only inside template files.
4. As a consequence of point 3, you cannot reference one Helm value from another.
If you really need to get Spring profiles from a Linux environment variable, you could achieve it by setting a Helm value when calling helm install and the like (although using --set is considered a bad practice):
helm install --set app.env.spring_profile=$SPRING_PROFILE ...
Although even then, app.env.spring_profile couldn't be evaluated inside base-values.yaml. You would need to reference it directly in your template file, e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: my-app
          ...
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "{{ .Values.app.env.spring_profile }},test"

How to reuse anchored entry under already unwrapped anchor?

I am trying to write a CircleCI config that will allow me to reuse both whole list/mapping entries and their properties.
Having the following:
image_definitions:
  docker:
    - &default_localstack_image
      image: localstack/localstack:0.10.3
      environment:
        KINESIS_LATENCY: 0

defaults_env: &defaults_env
  environment:
    PG_PORT: 5432
    PG_USER: root
I would like to be able to replace:
test: &test
  docker:
    - image: localstack/localstack:0.10.3
      <<: *defaults_env
with something like:
test: &test
  docker:
    - *default_localstack_image
      <<: *defaults_env
but it doesn't work this way.
I've also tried:
test: &test
  docker:
    - *default_localstack_image
      *defaults_env
but that also didn't work.
How can I do that?
According to the documentation:
test: &test
  docker:
    - <<: [*default_localstack_image, *defaults_env]
However, be aware that the merge feature is not part of the YAML spec and has only been defined for the outdated YAML 1.1; I don't know whether CircleCI's parser actually implements it. Even if it does, be aware that this merge key is the odd man out: violating the spec's rule that every tag maps to a type, it is instead interpreted as a transformation instruction, even though the loading process defined by the spec has no place for executing transformation steps.
Similar functionality (for example, concatenating scalars) is more or less frequently requested on SO but is not available (and probably never will be). If you need to do something like this, my advice is to do what e.g. Ansible and SaltStack do and use a templating engine as a preprocessor for your YAML file.
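For what it's worth, if your loader does implement YAML 1.1 merge keys, << performs a shallow merge in which earlier maps in the sequence take precedence, so the snippet above would resolve to roughly the following; note that the environment map from *defaults_env is shadowed by the one from *default_localstack_image rather than deep-merged:

test: &test
  docker:
    - image: localstack/localstack:0.10.3
      environment:
        KINESIS_LATENCY: 0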

Generating filebeat custom fields

I have an Elasticsearch cluster (ELK) and some nodes sending logs to Logstash using Filebeat. All the servers in my environment are CentOS 6.5.
The filebeat.yml file on each server is enforced by a Puppet module (both my production and test servers get the same configuration).
I want to have a field in each document which tells whether it came from a production or test server.
I wanted to generate a dynamic custom field in every document which indicates the environment (production/test) using the filebeat.yml file.
To work this out, I thought of running a command which returns the environment (it is possible to know the environment through Facter) and adding it under an "environment" custom field in the filebeat.yml file, but I couldn't find any way of doing so.
Is it possible to run a command through filebeat.yml?
Is there any other way to achieve my goal?
In your filebeat.yml:
filebeat:
  prospectors:
    -
      paths:
        - /path/to/my/folder
      input_type: log
      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files
      fields:
        mycustomvar: production
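Since the question mentions that filebeat.yml is enforced by Puppet, the value itself can also be made dynamic on the Puppet side (a sketch, not part of the original answer: it assumes the file is rendered from an ERB template and that a custom fact, here hypothetically server_env, distinguishes production from test). Puppet resolves the fact when the file is rendered, so Filebeat never needs to run a command:

# Fragment of the hypothetical filebeat.yml.erb Puppet template;
# <%= ... %> is evaluated by Puppet at render time, not by Filebeat.
fields:
  environment: <%= @server_env %>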
In filebeat 7.2.0 I use the following syntax:

processors:
  - add_fields:
      target: ''
      fields:
        mycustomfieldname: customfieldvalue

Note: target: '' means that mycustomfieldname is a top-level field.
official 7.2 docs
Yes, you can add fields to the document through Filebeat.
The official doc shows you how.
