1 Client for multiple instances - Spring Authorization Server

I have a client that runs in multiple environments (each environment has its own domain), and one Authorization Server that serves all of those instances.
Should I create a RegisteredClient for each instance, or is it enough to create only one client for all instances?
Thanks!

Based on the information you've provided, you only need a single RegisteredClient. When no issuer is configured explicitly in ProviderSettings, the issuerUri is resolved from the current request, so each domain gets a distinct issuer without any per-instance configuration. See the notes on the ProviderContext in the reference documentation.
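As an illustration, here is a minimal sketch (assuming Spring Authorization Server 0.2.x/0.3.x, where ProviderSettings is used; the client id, secret and redirect URIs below are placeholders) of a single RegisteredClient covering several environments, with no issuer set so that it is resolved per request:

```java
import java.util.UUID;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.core.AuthorizationGrantType;
import org.springframework.security.oauth2.core.ClientAuthenticationMethod;
import org.springframework.security.oauth2.core.oidc.OidcScopes;
import org.springframework.security.oauth2.server.authorization.client.InMemoryRegisteredClientRepository;
import org.springframework.security.oauth2.server.authorization.client.RegisteredClient;
import org.springframework.security.oauth2.server.authorization.client.RegisteredClientRepository;
import org.springframework.security.oauth2.server.authorization.config.ProviderSettings;

@Configuration
public class AuthorizationServerConfig {

    @Bean
    public RegisteredClientRepository registeredClientRepository() {
        // One RegisteredClient shared by all environments; each environment's
        // domain is simply an additional redirect URI (values are placeholders).
        RegisteredClient client = RegisteredClient.withId(UUID.randomUUID().toString())
                .clientId("my-client")
                .clientSecret("{noop}my-secret")
                .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC)
                .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
                .authorizationGrantType(AuthorizationGrantType.REFRESH_TOKEN)
                .redirectUri("https://env1.example.com/login/oauth2/code/my-client")
                .redirectUri("https://env2.example.com/login/oauth2/code/my-client")
                .scope(OidcScopes.OPENID)
                .build();
        return new InMemoryRegisteredClientRepository(client);
    }

    @Bean
    public ProviderSettings providerSettings() {
        // No .issuer(...) set: the issuer is resolved from the current request
        // (via the ProviderContext), so each domain gets its own issuer value.
        return ProviderSettings.builder().build();
    }
}
```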

Related

Hide Keycloak admin console from public access

We are considering using Keycloak for our public REST APIs (mostly Spring Boot apps) to authorize and authenticate our users.
In order not to make the admin UI publicly available, we want to restrict it.
Our idea is to create two instances that access the same database:
the public Keycloak instance, which only exposes what is necessary, e.g. the admin path is not accessible. In this instance, only paths like those recommended here should be reachable: https://www.keycloak.org/server/reverseproxy#_exposed_path_recommendations.
a private Keycloak instance, which is only accessible from the internal network but offers the admin UI (console), through which users/permissions can then be managed.
Is this a valid solution - having two different instances share the same database - or are there other best practices for not publishing the admin UI/paths?
Yes, this is definitely a common setup. It is generally recommended to run more than one Keycloak instance on the same database anyway, for availability reasons. Keycloak shares some in-memory data (like sessions) in an Infinispan cache, which is shared between one or more Keycloak instances (generally referred to as a cluster).
You would then use a load balancer (HAProxy, nginx, Apache - the choices are practically endless) and configure it to send requests to the actual Keycloak instances.
A possible setup could be the following, using 4 Keycloak instances on 4 servers:
public-keycloak-1.internal.example.com
public-keycloak-2.internal.example.com
private-keycloak-1.internal.example.com
private-keycloak-2.internal.example.com
You can then add 2 load balancers:
keycloak.example.com (sending requests to public-keycloak-*)
keycloak.internal.example.com (sending requests to private-keycloak-*)
In this example, keycloak.internal.example.com is the host you connect to in order to perform administrative tasks in Keycloak via the Admin Console or the Admin API, and keycloak.example.com is the host your applications use for authentication and authorization.
Restricting access to the Admin API and Admin Console can be done at the load balancer level (blocking requests to those paths), but since Keycloak 20 it is also possible to disable the Admin API and Admin Console completely. This is done by disabling the respective features, as described in the documentation: you can disable the features "admin-api", "admin" and "admin2". If you do this on the public-keycloak-* instances, requests hitting the public load balancer can never reach the Admin API or Console, because Keycloak is configured to simply not serve those requests in the first place.
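To illustrate the split from an application's point of view, here is a minimal sketch of a Spring Boot resource server that validates tokens against the public load balancer (the realm name "myrealm" and the Spring Security setup are assumptions for illustration, not part of the answer above); admins would only ever use the internal host:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.oauth2.jwt.JwtDecoders;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class ResourceServerConfig {

    @Bean
    public JwtDecoder jwtDecoder() {
        // Applications validate tokens against the PUBLIC load balancer;
        // keycloak.internal.example.com is reserved for the Admin Console/API.
        return JwtDecoders.fromIssuerLocation("https://keycloak.example.com/realms/myrealm");
    }

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(authz -> authz.anyRequest().authenticated())
            .oauth2ResourceServer(oauth2 -> oauth2.jwt());
        return http.build();
    }
}
```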

Access Cloudflare Worker from local environments

I've set up a functional Cloudflare Worker via its route and domain, and am using the Worker playground and the quick editor to avoid a deployment.
However, when developing locally I cannot make a request to the Worker and get a CORS error.
I've read all the docs and implemented most CF security features within Zero Trust. However, nothing is getting us access to our deployed Worker due to strict CORS rules (which we want).
On my machine I am routing through WARP and it is configured for my team name.
I have installed and configured a root access certificate, though that is perhaps not applicable to this issue.
I have also tried to manually authenticate by visiting the Worker URL and getting a login code emailed to me. Perhaps CF Access is not related to Workers?
We need clarification because the docs do not clearly explain the flow for access to Worker URLs when working on localhost.
Community question here.
How do we develop apps with Workers and strict CORS by authenticating a computer or user?
I think you can use Transform Rules to set/remove/update CORS headers.
It should work for you because, according to the traffic sequence diagram, header modifications are performed before Workers run.

SaaS implementation with Micro-services

I'm trying to build a web-based SaaS solution in ASP.NET Core 2.0 using a microservices architecture with token-based authentication, with the services hosted on Docker. Each client has its own users, products and other details, kept in multiple databases with a shared schema. Each microservice has its own database (schema-per-service).
I've hit a roadblock: I need to locate the logged-in user's database credentials (connection string) so that the database connection can be passed dynamically to the respective microservice to fetch data from that client's database.
I suppose you have some sort of microservice that handles a client's authentication into their SaaS account and generates a token (like a "private key") used to consume the SaaS microservices, correct?
It's the perfect case for a microservices architecture:
Create a microservice that owns the client's environment configuration resources
This microservice receives requests carrying the client's private key
It then asks the authentication service to validate the passed private key
It gets the authentication service's response, along with some sort of unique client key
It responds with the environment configuration corresponding to that unique client key (or 404 if the auth token doesn't match any client)
Now, having this microservice (I'll call it the "environment microservice"), any other microservice in your SaaS just needs to request the environment microservice to get the client's configuration (database connection string, storage system, etc.). From that point, you can implement a caching policy at each service that maps private keys to a set of configurations (and persistent database connections, if your model permits). Just make sure the cache periodically re-validates the tokens and configurations against the environment microservice.
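A rough sketch of what the environment microservice's lookup endpoint could look like (shown in Java/Spring for consistency with the other examples on this page, although the question targets ASP.NET Core; the endpoint path, the TenantConfig shape and the in-memory store are all made up for illustration):

```java
import java.util.Map;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EnvironmentController {

    // Per-client configuration, keyed by the client's unique key.
    // In a real system this would live in the environment service's own database.
    private final Map<String, TenantConfig> configs = Map.of(
            "client-a", new TenantConfig("Server=db1;Database=client_a;...", "blob-store-a"),
            "client-b", new TenantConfig("Server=db2;Database=client_b;...", "blob-store-b"));

    @GetMapping("/environments/{clientKey}")
    public ResponseEntity<TenantConfig> getEnvironment(@PathVariable String clientKey) {
        // In the flow described above, the caller's token would first be validated
        // against the authentication service to resolve the client's unique key.
        TenantConfig config = configs.get(clientKey);
        return config != null ? ResponseEntity.ok(config) : ResponseEntity.notFound().build();
    }

    record TenantConfig(String connectionString, String storageAccount) {}
}
```

Each consuming microservice would call this endpoint once per client and cache the result, re-validating on an interval as described above.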

Can we create multiple instances of the Hyperledger Composer REST server for the same business network?

To understand the application architecture of the Composer REST server, I would like to understand the following.
Let's say we have 4 peers from different organizations. What would be the recommended approach for managing the Composer REST server?
1) Having one Composer REST server per peer
2) Having one Composer REST server per network, with all peers sharing it
3) Having one Composer REST server per channel
Firstly, just to confirm: it's called 'Hyperledger Composer'; 'Fabric Composer' was its old name from way back.
Secondly, the answer is that you would have one or more (think HA and availability) REST server instances per Composer business network, per organisation that participates in that business network. It also depends on how each organisation deploys REST server instances within its own infrastructure zones, and somewhat on which users will be authenticating to it (the likelihood is that each organisation would authenticate its own users / REST clients to consume the REST server APIs, by whatever Passport strategy is chosen for a multi-user environment, e.g. LDAP, OAuth2, etc.). So the REST server is not strictly tied to 'peers' per se; the peer information is defined in the connection profile, in the business network card.
Composer business networks are deployed to a specific channel/ledger and are configured as such in the business network cards that access them: the connection info specifies the channel, and the REST server instance is started with an appropriate business network card.
See more on deploying your REST server here:
https://hyperledger.github.io/composer/integrating/deploying-the-rest-server.html

What is the best practice to architecture an oAuth server and an API server separately?

I am setting up an API for a mobile app (and, down the line, a website). I want to use OAuth 2.0 for authentication of the mobile client. To optimize my server setup, I wanted to set up an OAuth server (Lumen) separate from the API server (Laravel). Also, my DB lives on its own separate server.
My question is: if I use separate servers and a package like lucadegasperi/oauth2-server-laravel, do I need to have the package running on both servers?
I am assuming this would be the case, because the OAuth server will handle all of the authentication to issue access tokens and refresh tokens, but the API server will need to check the access token on protected endpoints.
Am I correct with the above assumptions? I have read so many different people recommending that the OAuth server be separate from the API server, but I can't find any tutorials about how the multi-server dynamic works.
BONUS: I am running my DB migrations from my API server, so I assume I would also need to run the OAuth package's migrations from the API server. Correct?
