Before creating an object in Kubernetes (Service, ReplicationController, etc.), I'd like to test that the JSON or YAML specification of the object is valid. But I don't want to actually create the object.
Is there some way to do a "dry run" that would be equivalent to running kubectl create --validate=true -f file.json, but would just let me know that it passes validation and not actually create the object?
Ideally, it would be great if I could do this via API, and not require the use of kubectl. But I could make it work if it required me to use kubectl.
Thanks.
This works for me (Kubernetes 1.7 and 1.9):
kubectl apply --validate=true --dry-run=client --filename=file.yaml
(On kubectl clients older than 1.18, --dry-run is a plain boolean flag rather than --dry-run=client.)
Some kubectl commands support a --dry-run flag (like kubectl run, kubectl expose, and kubectl rolling-update).
There is an issue open to add the --dry-run flag to more commands.
There is a tool called kubeval which validates configs against the expected schema and does not require a connection to a cluster, making it a good fit for uses such as CI.
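For example, a quick sketch of how it is typically invoked (exact flags may vary between kubeval releases):
kubeval file.yaml
kubeval --strict deployment.yaml service.yaml
The --strict flag makes kubeval reject properties that are not part of the schema, which helps catch typos in field names.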
The use of --dry-run and --validate only seems to partially solve the issue.
Client-side validation is not exhaustive. It primarily ensures the field names and types in the yaml file are valid. Full validation is always done by the server, and can always impose additional restrictions/constraints over client-side validation.
Source - kubectl --validate flag pass when yaml file is wrong #64830
Given this, you cannot get a full set of validations without handing the spec off to the server for vetting.
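If you want to go through the API rather than kubectl, a server-side dry run gets you the full server validation without persisting the object. A rough sketch, assuming Kubernetes 1.13+, a bearer token in $TOKEN, and a Pod manifest in file.json (adjust the resource path for other kinds):
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data @file.json \
  "https://<api-server>/api/v1/namespaces/default/pods?dryRun=All"
The request runs through validation and admission control and returns the object the server would have created, but nothing is stored. From the command line, kubectl apply --dry-run=server does the same thing.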
I have a spring boot app which loads a yaml file at startup containing an encryption key that it needs to decrypt properties it receives from spring config.
Said yaml file is mounted as a k8s secret file at etc/config/springconfig.yaml
If my Spring Boot app is running, I can still shell into the container with "docker exec -it 123456 sh" and view the YAML file. How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, provided your app doesn't need to read it again.
OR,
You can set those properties via --env-file, and your app should then read them from the environment. But if someone can still log in to the container, they can read environment variables too.
OR,
Set those properties as JVM system properties rather than in the system environment, using -D. Spring can read properties from the JVM as well.
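A minimal sketch, with config.encryption.key as a made-up property name:
java -Dconfig.encryption.key="s3cr3t" -jar app.jar
Spring resolves JVM system properties through its Environment, so the app can read it as ${config.encryption.key}, for example via @Value("${config.encryption.key}").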
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to worker nodes and no one can use the Docker daemon directly, there is still a way to read the secret.
If anyone in the namespace has access to create pods (which includes the ability to create deployments/statefulsets/daemonsets/jobs/cronjobs and so on), they can easily create a pod, mount the secret inside it, and simply read it. Even someone who can only patch pods/deployments and so on can potentially read every secret in the namespace. There is no way to escape that.
For me that's the biggest security flaw in Kubernetes, and it is why you must be very careful about granting access to create and patch pods/deployments and so on. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid handing out pod-creation capability.
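As an illustration of that last point, here is a rough sketch of a namespace-scoped, read-only role created with kubectl (the namespace myapp, role name app-viewer, and user jane are made-up examples):
# read access to pods and deployments only; secrets and write verbs
# (create/patch) are deliberately left out, per the advice above
kubectl create role app-viewer --namespace myapp \
  --verb=get,list,watch --resource=pods,deployments
kubectl create rolebinding app-viewer-binding --namespace myapp \
  --role=app-viewer --user=jane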
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to immediately kill the container, so the secret cannot be read, and Kubernetes will restart the container to avoid service interruption.
Note that you must also forbid access to the node itself to prevent direct Docker daemon access.
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can unset that variable, rendering the secret inaccessible from then on.
I have a system service, foo that is started and stopped via /usr/sbin/service restart foo. It in turn appears to be controlled by a shell script /etc/init.d/foo
How can I create a "pre-start" hook, so that I can run an extra shell script prior to this service starting? In this case, the pre-start hook is extra config that has to be fetched from a cloud provider metadata catalog, and then jacked into a configuration file necessary for foo to start properly.
I have considered modifying /etc/init.d/foo directly, which would work, but that would complicate the frequent patch-level upgrades I expect to pick up via apt-get upgrade; I want to avoid a solution that requires re-establishing the hook after every upgrade.
A second option is to create a fooWrapper service, remove foo from all runlevels, and then just start/stop fooWrapper. The implementation of that script would just be my secret sauce plus invoking /etc/init.d/foo. The trouble with that is again package upgrades: foo might re-insert itself into the various runlevels, and I would then end up running two conflicting copies.
Your setup suggests that you use sysv init and not yet systemd. If this is the case, read on. Otherwise ignore this answer.
In general, you will have a link like S20foo in /etc/rc.d/rc3.d. The 20 and the 3 may be different for you. Normally, you would create a script /etc/init.d/pre_foo that fetches your extra config and link it to /etc/rc.d/rc3.d/S19pre_foo. This will start pre_foo before foo.
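A minimal sketch of that setup, assuming runlevel 3 and the S20foo link mentioned above (the metadata URL and config path are placeholders; substitute your cloud provider's endpoint and foo's real config file):
# /etc/init.d/pre_foo -- fetch extra config before foo starts
cat > /etc/init.d/pre_foo <<'EOF'
#!/bin/sh
# pull settings from the metadata catalog and write foo's config file
curl -s http://169.254.169.254/latest/meta-data/some-key > /etc/foo/extra.conf
EOF
chmod +x /etc/init.d/pre_foo
ln -s ../init.d/pre_foo /etc/rc.d/rc3.d/S19pre_foo
Because the hook lives in its own script and symlink, package upgrades of foo will not overwrite it.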
We have a fully dockerized web app with a valid Swagger definition for the API. The API runs in its own docker container, and we're using docker-compose to orchestrate everything. I want to generate a Ruby client based on the Swagger definition located at http://api:8443/apidocs.json.
I've pored through the documentation here, which led me to Swagger's public Docker image for generating client and server code. Sadly the documentation is lacking and offers no examples of actually generating a client with the Docker image.
The Dockerfile indicates its container runs a web service, which I can only assume is the dockerized version of http://generator.swagger.io. As such, I would expect to be able to generate a client with the following command:
curl -X POST -H "content-type:application/json" -d \
'{"swaggerUrl":"http://api:8443/apidocs"}' \
http://swagger-generator:8080/api/gen/clients/ruby
No luck here. I keep getting "invalid swagger definition" even though I've confirmed the swagger definition is valid with (npm -q install -g swagger-tools >/dev/null) && swagger-tools validate http://api:8443/apidocs.
Any ideas?
Indeed you are correct: the Docker image you're referring to is the same image used at http://generator.swagger.io.
The issue you're having is that the input parameter isn't correct.
Now to get it right, please note that the swagger-generator has a web interface. So once you start it up, like the instructions say, open it in a browser. For example (replace the GENERATOR_HOST with your machine's IP address):
docker run -d -e GENERATOR_HOST=http://192.168.99.100 -p 80:8080 swaggerapi/swagger-generator
Then you can open the swagger-ui at http://192.168.99.100.
The important part here is that you can use the UI to see the call syntax. If you're generating a client, go to http://192.168.99.100/#!/clients/generateClient select the language you want to generate and click the payload on the right. Replace the swaggerUrl field with the address of your server and voila.
You can use the output in the curl to figure out how to call from the command line. It should be easy from there.
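One detail worth double-checking (this is an assumption based on the URLs in the question, not something the generator reports): the question says the definition lives at http://api:8443/apidocs.json, while the failing curl posts http://api:8443/apidocs. If that is the mismatch, the corrected call would be:
curl -X POST -H "content-type:application/json" -d \
  '{"swaggerUrl":"http://api:8443/apidocs.json"}' \
  http://swagger-generator:8080/api/gen/clients/ruby
Keep in mind the generator fetches swaggerUrl itself, so that address has to be resolvable from inside the swagger-generator container, not just from your host.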
Please keep in mind that just because a 3rd party tool says the swagger definition is valid doesn't mean it actually is. I don't think that's your issue, though, but 3rd party tool mileage may vary...
How to validate an LDIF?
Similar to XML with XML Schema and Schematron, are there any libraries to validate an LDIF against an LDAP schema?
A better way to solve this is to run the LDAP commands with flags that don't actually commit results to the server. An example would be:
ldapadd -H ldap:/// -D "cn=admin,dc=nodomain" -w '<secretThatNobodyKnows>' -n -f your.ldif
Here the -n flag tells it to only show you what would happen. The advantage this method holds over running the query against a fake server is that you are actually validating against the same rules in place where you eventually want to commit.
LDAP servers like OpenLDAP or OpenDS usually check the LDIF against the current schema on insertion. So if you need to check your LDIF without touching your production LDAP server, you could use a small Java-based LDAP server like OpenDS that uses the same LDAP schema.
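If you happen to run OpenLDAP, its offline tools also have a dry-run mode, so you can check an LDIF against a schema without any live server at all (a sketch assuming a slapd.conf that loads your schema; adjust the paths to your setup):
slapadd -u -v -f /etc/ldap/slapd.conf -l your.ldif
The -u flag puts slapadd in dry-run mode: entries are parsed and checked, but nothing is written to the database.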
Are there any means of formatting the output of shell commands to a structured data format like JSON or XML to be processed by another application?
Use case: Bunch of CentOS servers on a network. I'd like to programmatically log in to them via SSH, run commands to obtain system stats, and eventually run basic maintenance commands. Instead of parsing all the text output myself, I'm wondering if there is anything out there that will help me get the data back in a structured format? Even if only some shell commands were supported, that would be a head start.
Sounds like a task for SNMP.
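For example, with the Net-SNMP tools installed and snmpd running on the target hosts (the community string public and the address 192.0.2.10 are placeholders):
# query uptime and the load averages from a remote CentOS box
snmpget -v2c -c public 192.0.2.10 SNMPv2-MIB::sysUpTime.0
snmpwalk -v2c -c public 192.0.2.10 UCD-SNMP-MIB::laLoad
The output is structured name/value pairs, which is far easier to consume programmatically than free-form command output.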
It is possible to use Puppet fairly lightly. You can configure it to run its checks only on what you want to check for.
Your entire puppet config could consist of:
exec { "yum install foo":
unless => "some-check for software",
}
That would run yum install foo, but only if some-check for software failed.
That said, if you're managing more than a couple of servers, there are lots of benefits to getting as much of your config and build as possible into Puppet manifests (or CFEngine, Bcfg2, or similar).
Check out Nagios (http://www.nagios.org/) for remote system monitoring. What you are looking for may already exist out there.