How to do a full export (including users) with embedded keycloak - spring-boot

I have a Spring Boot application using embedded Keycloak.
What I am looking for is a way to load the Keycloak server from it, make changes to the configuration, add users, and then export this new version of Keycloak.
This question got an answer on how to do a partial export, but I can't find anything in the Keycloak Admin REST API documentation about a full export.
With the standalone keycloak server I would be able to simply use the CLI and type
-Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=/tmp/keycloak-dump.json
But this is the embedded version.
This is most likely trivial, since newly created users clearly have to be stored somewhere.
I added a user and restarting the application doesn't remove it, so Keycloak persists it somehow. But the JSON files I use for the Keycloak server and realm setup haven't changed.
So, with no CLI access without a standalone server and no REST endpoint for a full export, how do I load the server, make some changes, and generate a new JSON export that I can simply put into my Spring app instead?

You can make a full export with the following command (if your Spring Boot app runs in a Docker container):
[podman | docker] exec -it <pod_name> /opt/jboss/keycloak/bin/standalone.sh
-Djboss.socket.binding.port-offset=<integer_value> (an offset of at least 100 is recommended for Docker)
-Dkeycloak.migration.action=[export | import]
-Dkeycloak.migration.provider=[singleFile | dir]
-Dkeycloak.migration.dir=<DIR TO EXPORT TO> (only if .migration.provider=dir)
-Dkeycloak.migration.realmName=<REALM_NAME_TO_EXPORT>
-Dkeycloak.migration.usersExportStrategy=[DIFFERENT_FILES | SKIP | REALM_FILE | SAME_FILE]
-Dkeycloak.migration.usersPerFile=<integer_value> (only if .usersExportStrategy=DIFFERENT_FILES)
-Dkeycloak.migration.file=<FILE TO EXPORT TO>
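Put together, a concrete invocation might look like this (the container name `keycloak`, the realm `demo`, and the offset are placeholder values — substitute your own):

```shell
# Run the export inside the running Keycloak container; the port offset
# lets this second server instance start alongside the one already running.
docker exec -it keycloak /opt/jboss/keycloak/bin/standalone.sh \
  -Djboss.socket.binding.port-offset=100 \
  -Dkeycloak.migration.action=export \
  -Dkeycloak.migration.provider=singleFile \
  -Dkeycloak.migration.realmName=demo \
  -Dkeycloak.migration.usersExportStrategy=REALM_FILE \
  -Dkeycloak.migration.file=/tmp/keycloak-dump.json
```

With `usersExportStrategy=REALM_FILE` the users land in the same JSON file as the realm, which is the convenient layout for re-importing into an embedded setup.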
I am creating an open-source Keycloak example with documentation; you can find a full import/export guide in my company's GitHub.

Related

Minio URL - Credentials

We are using MinIO for storing file releases.
We are using the Go CDK library to convert S3 to HTTP.
The problem is that when I try to execute a release I get this error: **NoCredentialProviders: no valid providers in chain. Deprecated.**
This is the URL we are using: "s3://integration-test-bucket?endpoint=minio.9001&region=us-west-2". Is there any way to pass credentials in the URL itself? In this case it wouldn't be sensitive data, as we are running it locally.
Note: I'm using a docker-compose YAML with the default environment for minio_access_key and minio_secret_key (minioadmin & minioadmin).
I tried several kinds of query parameters in the URL to pass credentials. The goal is to avoid touching the Go CDK library itself and instead pass credentials through the URL, or pass dummy credentials / skip the credentials check.
You can provide the following environment variables to the service/container that tries to connect to MinIO:
AWS_ACCESS_KEY_ID=${MINIO_USER}
AWS_SECRET_ACCESS_KEY=${MINIO_PASSWORD}
AWS_REGION=${MINIO_REGION_NAME}
The library should pick them up at container startup and use them when executing requests.
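For a local run with the default MinIO credentials from the question, that could simply be (values taken from the question's docker-compose defaults; the region matches the one in the URL):

```shell
# The AWS SDK's default credential chain reads these variables,
# so the Go CDK s3 opener will pick them up without code changes.
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
export AWS_REGION=us-west-2
```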

SSH Access Netconf Server in OpenDaylight

I need to access the config subsystem (a.k.a. the datastore) in OpenDaylight. I have read the user guide and know that the way to access it is via:
ssh admin@localhost -p 2830 -s netconf
or (the way I shell into it):
# netopeer2-cli
> connect --ssh --port 2830 --login admin
Once logged in, I noticed after running get-config I don't see the actual data in the subsystem.
> get-config --source=running
DATA
<network-topology xmlns="urn:TBD:params:xml:ns:yang:network-topology">
<topology>
<topology-id>topology-netconf</topology-id>
</topology>
</network-topology>
In a previous project, I was running netopeer2-server and sysrepo and the data in get-config was fleshed out. I believe the reason I am seeing such little information is because the netconf-server I am looking at is the MDSAL netconf-server on port 2830. Based on the user guide, there should be another netconf-server on port 1830 that has direct access to the config subsystem.
How do I access the normal netconf-server on port 1830?
My main goal is to access the data in the full subsystem via get-config and edit the data via edit-config -- how do I do that?
My versions:
OpenDaylight Sodium (based off of 0.11.0)
netopeer2-cli v1.1.39
It looks like the config subsystem endpoint was deprecated back in Fluorine, but the documentation has not been updated; even the latest Sodium release notes still list a CSS NETCONF server among the standard questions the dev team answers. I found this here:
https://jira.opendaylight.org/browse/NETCONF-535
I believe the MDSAL server is the only one available now, and in its HELLO response it does seem to advertise capabilities for all YANG-compliant modules. However, I cannot access these elements using netopeer2-cli, as the libyang parsing issues a lot of errors. I suspect this is a problem with how netopeer2-cli requests and parses the various YANG files after the initial HELLO, and how it works with libyang to construct a local version of the model for handling the various NETCONF requests.

How do I prevent access to a mounted secret file?

I have a spring boot app which loads a yaml file at startup containing an encryption key that it needs to decrypt properties it receives from spring config.
Said YAML file is mounted as a Kubernetes secret at /etc/config/springconfig.yaml.
Even while my Spring Boot app is running, I can still shell in and view the YAML file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster the access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, provided your app doesn't need to read it again.
OR,
You can set those properties via --env-file, and your app can then read them from the environment. But if someone can log in to the container, they can read the environment variables too.
OR,
Set those properties as JVM system properties using -D, rather than in the system environment; Spring can read properties from the JVM environment too.
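A minimal sketch of the JVM-property variant (the property name `config.encryption-key`, the `ENCRYPTION_KEY` variable, and the jar name are hypothetical):

```shell
# Pass the secret as a JVM system property instead of a mounted file;
# Spring can then resolve it, e.g. via @Value("${config.encryption-key}").
java -Dconfig.encryption-key="$ENCRYPTION_KEY" -jar app.jar
```

Note that JVM system properties are still visible to anyone who can inspect the process command line inside the container (e.g. via ps), so this only narrows, not eliminates, the exposure.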
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to worker nodes and no one can use the Docker daemon directly, there is still a way to read the secret.
If anyone in the namespace has access to create pods (which includes the ability to create Deployments/StatefulSets/DaemonSets/Jobs/CronJobs and so on), they can easily create a pod, mount the secret inside it, and simply read it. Even someone who can only patch pods/deployments and so on can potentially read all secrets in the namespace. There is no way to escape that.
For me that's the biggest security flaw in Kubernetes, and it's why you must be very careful about granting permission to create and patch pods/deployments and so on. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid granting pod-creation capability.
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action is to kill the container immediately, so the secret cannot be read; Kubernetes will then restart the container to avoid service interruption.
Note that you must also forbid access to the node itself to prevent Docker daemon access.
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can unset that variable, rendering the secret inaccessible from then on.

Access legacy GAE Datastore from standalone python

Trying to find out how to access the GAE Cloud Datastore from outside GAE, ideally using a standalone Python script (on a Mac) to run GQL like
q = GqlQuery("SELECT * FROM user WHERE email = 'test@example.com'")
or maybe even a local instance of GAE using GoogleAppEngineLauncher if a standalone script is not possible.
I have done the following
Accessing an existing App Engine Datastore from another platform (https://cloud.google.com/datastore/docs/activate) - permissions with service account + private key
Installed the Python SDK - confirmed the SDK files are in
/usr/local/lib/python2.7/site-packages/google_appengine
/usr/local/google_appengine/google/appengine/*** (api,base,client,datastore,...,ext,etc.)
Running print sys.path shows (among other paths):
/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google_appengine
/usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/gtk-2.0
/Library/Python/2.7/site-packages
/usr/local/lib/python2.7/site-packages
/usr/local/lib/python2.7/site-packages/google_appengine
/usr/local/lib/python2.7/site-packages/gtk-2.0
Did the export
export DATASTORE_SERVICE_ACCOUNT=...
export DATASTORE_PRIVATE_KEY_FILE=... (full path to .p12 file)
export DATASTORE_DATASET=...
export PYTHONPATH="$PYTHONPATH:/usr/local/google_appengine"
Ran the example adams.py file
Created a new entity called "Trivia" with name= hgtg, answer=42 as a record in PROD
However, running a standalone script
from appengine.ext.db import GqlQuery (or from google.appengine.ext.db import GqlQuery)
gives me ImportError:
No module named appengine.ext.db
I then tried to use a local GAE instance, but can't figure out how to connect it to the PROD GAE datastore instance.
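(For context: the Python SDK ships a remote API shell for exactly this kind of connection. A sketch, assuming the production app exposes the remote_api endpoint and using a placeholder app id:)

```shell
# Opens an interactive Python shell connected to the production Datastore;
# from there, Datastore queries run against PROD data.
python /usr/local/google_appengine/remote_api_shell.py -s your-app-id.appspot.com
```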
===
The existing GAE application is using the Datastore (Java) in PRODUCTION. In the Developer Console (https://console.developers.google.com/project), this is under Storage > Cloud Datastore > Query, where I can see the "Entities" (or kinds). Obviously, there is a fairly limited amount you can do there, and I don't really want to touch PRODUCTION code to run a query.
Thanks,
Chris

How does one run Spring XD in distributed mode?

I'm looking to start Spring XD in distributed mode (more specifically deploying it with BOSH). How does the admin component communicate to the module container?
If it's via TCP/HTTP, surely I'll have to tell the admin component where all the containers are? If it's via Redis, I would've thought that I'll need to tell the containers where the Redis instance is?
Update
I've tried running xd-admin and Redis on one box, and xd-container on another with redis.properties updated to point to the admin box. The container starts without reporting any exceptions.
Running the example stream submission curl -d "time | log" http://{admin IP}:8080/streams/ticktock yields no output on either console and no output in the logs.
If you are using the xd-container script, then redis.properties is expected to be under "XD_HOME/config", where XD_HOME points to the base directory containing the bin, config, lib & modules directories of XD.
Communication between the Admin and Container runtime components is via the messaging bus, which by default is Redis.
Make sure the environment variable XD_HOME is set as per the documentation; if it is not, you will see a log message that suggests the properties file has been loaded correctly when in fact it has not:
13/06/24 09:20:35 INFO support.PropertySourcesPlaceholderConfigurer: Loading properties file from URL [file:../config/redis.properties]
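Putting the answers together, a minimal two-box startup might look like this (the install path /opt/spring-xd is a placeholder; the layout assumes the standard XD distribution with bin, config, lib and modules under XD_HOME):

```shell
# On the admin box (Redis, the default message bus, also runs here):
export XD_HOME=/opt/spring-xd/xd
$XD_HOME/bin/xd-admin

# On each container box, after editing $XD_HOME/config/redis.properties
# to point at the Redis instance on the admin box:
export XD_HOME=/opt/spring-xd/xd
$XD_HOME/bin/xd-container
```

The admin and containers never need to know each other's addresses directly; they coordinate entirely through the shared Redis bus, which is why only redis.properties has to change.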
