I have the following schedule config:
And the SA has the following roles:
But I am getting an error that it is not able to authenticate. Any thoughts on what the issue could be?
For googleapis.com endpoints, you want OAuth rather than OIDC authentication. Note you can add Scheduler triggers in the Workflows UI, which is somewhat simpler, as it sets these values for you, and also formats arguments as required.
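For illustration, a minimal gcloud sketch of a Scheduler job that calls the Workflows executions API with OAuth; the project, location, workflow, and service-account names are placeholders, not taken from the question:

gcloud scheduler jobs create http trigger-my-workflow \
  --schedule="*/5 * * * *" \
  --uri="https://workflowexecutions.googleapis.com/v1/projects/my-project/locations/us-central1/workflows/my-workflow/executions" \
  --http-method=POST \
  --oauth-service-account-email=scheduler-sa@my-project.iam.gserviceaccount.com

Using the OIDC flag (--oidc-service-account-email) instead is what typically produces the authentication error against googleapis.com endpoints.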
I am using the AppRole authentication type, which takes in a role-id and secret-id, along with a root token in the header, to generate a client token that can then be used as an auth token in the header to create and retrieve secrets. This is what happens internally when using Spring Cloud Vault, I guess. Correct me if I'm wrong.
Now, I need to rotate my secret-id every 30 days and the client token every 24 hours. How do I achieve this? Does Spring Cloud Vault provide an inbuilt library to do this? If not, where should I make the changes?
You need to do the equivalent of a vault write -f auth/approle/role/my-role/secret-id to get a new secret id. Where you do this is where it gets interesting...
I assume you already have a Vault policy that allows you to generate a new secret_id. Make sure that the role_name parameter is fixed to your application's current role. Chances are you will want to limit the metadata, too.
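As a minimal sketch, such a policy might look like the following, assuming the default approle mount and a role named my-role (both placeholders):

# Allow generating new secret-ids for one specific role only.
path "auth/approle/role/my-role/secret-id" {
  capabilities = ["create", "update"]
}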
I would suggest this pattern:
Something (a pipeline or scheduled job of some kind) creates the new secret-id. Bonus points if it is wrapped and single use, but let's save that for another question.
That something will store the secret-id in a secure place. It could be the Vault KV version 2 store, in a path the application can read.
After creating the new secret-id, that something lists the secrets and keeps the N most recent secret ids. Say the last 5. This makes the process asynchronous and allows running applications to keep going.
Now in your application, you must have a periodic task that looks up the latest secret id and reauthenticates to Vault with it.
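As a sketch of that periodic task, here is a plain-Java re-login against the AppRole HTTP API (java.net.http, JDK 11+). The Vault address and the way you fetch the latest secret-id are assumptions for illustration; Spring users would typically wrap this in a @Scheduled method:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AppRoleReauth {
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    // VAULT_ADDR is assumed to point at your Vault server, e.g. https://vault:8200
    private static final String VAULT = System.getenv("VAULT_ADDR");

    // Exchange role-id + latest secret-id for a fresh client token.
    // The token is returned at auth.client_token in the JSON response;
    // parse it with the JSON library of your choice.
    static String login(String roleId, String secretId) throws Exception {
        String body = String.format(
                "{\"role_id\":\"%s\",\"secret_id\":\"%s\"}", roleId, secretId);
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(VAULT + "/v1/auth/approle/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> resp =
                HTTP.send(req, HttpResponse.BodyHandlers.ofString());
        return resp.body();
    }
}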
If possible, I would suggest that you avoid the problem altogether and use the authentication method provided by the platform you are on, if Vault supports it, such as GCP, AWS or Kubernetes.
Problem 1: I am integrating the Camunda workflow engine in my Spring Boot application. I have users in a separate business db and need to sync them with the Camunda workflow engine to assign the users to particular tasks. The users in the business db are not categorized into groups but have certain roles. I want to sync these roles with the groups in Camunda.
Problem 2: I also want only the assigned users to be able to complete the tasks via the REST API: localhost:8080/rest/task/{id}/complete
How can I achieve this? I cannot find a solid guide that can help me.
Edit: I am able to load the users from my business db to Camunda using this example https://github.com/hashlash/example-camunda-custom-identity-service. This solves problem 1.
Now, I need a way to make sure only the assigned user can complete the assigned task via authorization i.e. Problem 2. Any guides on this?
I'm not sure I've understood what you want, but I think it makes more sense to associate your users with authorizations than with groups.
If you define the Assignee, Candidate User or Candidate Groups attributes on your UserTask, Camunda will automatically create the authorization for you.
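For example, a hypothetical task definition in the BPMN XML (the ids and group name are placeholders, and the process must declare the camunda extension namespace):

<userTask id="approveRequest" name="Approve request"
          camunda:assignee="${assignee}"
          camunda:candidateGroups="accounting" />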
I think you have some additional information on this link:
Additional Task Permissions
You seem to be on the right track. By default, Camunda is configured not to enforce authorizations. You need to enable it using the property:
camunda:
  bpm:
    authorization:
      enabled: true
(RE the previous comment: it is better to assign the Camunda authorizations to groups and get the assignment of users to groups from the external identity provider. This way, fine-grained, application-specific authorization management remains in the application.)
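If you ever need to create such a grant programmatically rather than relying on the automatic creation, a minimal sketch using Camunda's AuthorizationService might look like this (the group and task ids are placeholders, and the exact permission to grant depends on your setup):

import org.camunda.bpm.engine.authorization.Authorization;
import org.camunda.bpm.engine.authorization.Permissions;
import org.camunda.bpm.engine.authorization.Resources;

// Grant the "accounting" group permission to update (work on) one task.
Authorization auth = authorizationService
        .createNewAuthorization(Authorization.AUTH_TYPE_GRANT);
auth.setGroupId("accounting");
auth.setResource(Resources.TASK);
auth.setResourceId(taskId); // id of the task instance
auth.addPermission(Permissions.UPDATE);
authorizationService.saveAuthorization(auth);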
Our organization has SSO authentication via an Apache reverse proxy, which is currently working flawlessly. I was surprised how easy it was to configure!
Now that I have this set up, however, I find that I am not able to submit jobs using sonar-runner. When I look at the logs, I see that every access is redirected to the Federated SSO login page.
Is there some additional configuration that needs to be done to allow scans to go through without being authenticated? Or perhaps some configuration options that need to be passed to the sonar-runner itself?
EDIT: We did consider a couple of options.
First, we thought about allowing only the URLs necessary to submit jobs to pass through the reverse proxy without authentication. This is a tedious process at best and allows a path of entry into the service itself without authentication.
Second, we thought about passing a user token along with the request. There are two issues with this approach. First, the existing URL is set up to authenticate using a three-legged approach. As far as I know, I can't set up both two-legged and three-legged authentication for the same URL. Second, we are submitting jobs using the SonarQube plugin in Jenkins. Without modifying the plugin itself, there is no way to get a user token to pass through to the submit request.
Our workaround for the moment: since both systems are running in Docker containers, we submit from Jenkins by pointing it at the IP address of the SonarQube container. This has the undesired effect of formatting the SonarQube report links with a 172.17.0.x address rather than the FQDN.
I've got a Spring application set up with Spring Security. My service methods are annotated with @PreAuthorize(...). So everyone from the web needs specific rights to access those methods, which is fine.
But now I've got a new use case. There's a @Scheduled method running to do some checks and send messages. Currently only people with ROLE_USER are allowed to send messages. But now the application itself also has to send those messages.
How should I manage to have some kind of invisible user (= the application), which is logged in all the time and has specific rights? Or maybe "all rights" would be nice as well, so it just ignores all those security annotations.
Or maybe I don't need a "user" at all?
Thanks for your help!
EDIT: The two main questions are:
Should I create a real user for the application? That is: an entry in the user table of the database? How did you solve this? Or do you simply use the account of the admin user (which is a real person)?
If I now have this "system" user, what's the best way to "use" it? For example, I'd use @Autowired User systemUser to access this user wherever I need it. (Of course there's some point in the application config where I create a bean with this specific user.)
EDIT2: Some more thoughts:
I think in the future I want to send messages from different subsystems of the application, so using the admin user is not an option: I need a few different users with different names.
I was faced with similar problem and the solution I implemented was an internal and an "external" service implementation. The external one has the internal one autowired in. Any application-internal component, like your scheduled job, would have the internal service wired in. Any Web-exposed components would have the secured "external" service wired in, which would have the #PreAuthorize etc. annotations in place, but otherwise would act just as a delegate to the internal service.
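A minimal sketch of that split, assuming a hypothetical MessageService (all names are illustrative, not from the question):

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

// Internal implementation: no security annotations.
// Scheduled jobs and other internal components wire this one in.
@Service
class InternalMessageService {
    public void send(String recipient, String body) {
        // ... actual message-sending logic ...
    }
}

// Web-facing delegate: enforces security, then hands off.
@Service
class SecuredMessageService {
    private final InternalMessageService delegate;

    SecuredMessageService(InternalMessageService delegate) {
        this.delegate = delegate;
    }

    @PreAuthorize("hasRole('USER')")
    public void send(String recipient, String body) {
        delegate.send(recipient, body);
    }
}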
I also log, before passing the message on to the internal service, the principal of the authentication object that was used for authorization. You know you'll have one available in the SecurityContext, so pick it out and make a note in your logs of someone external invoking internal services. I do the following (your principal might not be a username, but I still wanted to share):
final String currentUser = SecurityContextHolder.getContext().getAuthentication().getPrincipal().toString();
I think that all the answers you provided are fairly common solutions, so it depends very much on your requirements. The app I'm working on has some intense audit requirements, so we have a user set up for the application itself to use when it needs to invoke services through a scheduler. This allows us to log against a given principal. Perhaps you should clarify your requirements?
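If you do go with a dedicated application user, one common way to "use" it is to populate the SecurityContext inside the scheduled method itself. A hedged sketch (the "system" principal name and the granted role are assumptions):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.context.SecurityContextHolder;

@Scheduled(fixedRate = 60_000)
public void checkAndSendMessages() {
    // Authenticate this thread as a synthetic "system" principal
    // so @PreAuthorize("hasRole('USER')") checks pass.
    Authentication systemAuth = new UsernamePasswordAuthenticationToken(
            "system", null, AuthorityUtils.createAuthorityList("ROLE_USER"));
    SecurityContextHolder.getContext().setAuthentication(systemAuth);
    try {
        // ... invoke the secured service here ...
    } finally {
        SecurityContextHolder.clearContext();
    }
}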
I have a web app that uses some backend servers (UNC, HTTP and SQL). To get this working I need to configure ServicePrincipalNames for the account running the IIS AppPool and then allow Kerberos delegation to the backend services.
I know how to configure this through the "Delegation" tab of the AD Users and Computers tool.
However, the application is going to be deployed to a number of Active Directory environments. Configuring delegation manually has proved to be error prone and debugging the issues misconfiguration causes is time consuming. I'd like to create an installation script or program that can do this for me.
Does anyone know how to script or programmatically set constrained delegation within AD?
Failing that, how can I script reading the allowed services for a user, to validate that it has been set up correctly?
OK, after much digging on the internet and some testing, I've got a way forward.
Setting an SPN for a user or computer can be achieved via the setspn utility. Alternatively, the following C# code can do the same:
using System.DirectoryServices;

// usersDN is the distinguished name of the service account;
// spnString is the SPN to register, e.g. "HTTP/webserver.example.com".
DirectoryEntry de = new DirectoryEntry("LDAP://" + usersDN);
if (!de.Properties["servicePrincipalName"].Contains(spnString))
{
    de.Properties["servicePrincipalName"].Add(spnString);
    de.CommitChanges();
}
To set constrained delegation:
if (!de.Properties["msDS-AllowedToDelegateTo"].Contains(backendSpnString))
{
de.Properties["msDS-AllowedToDelegateTo"].Add(backendSpnString);
de.CommitChanges();
}
If the user has had unconstrained delegation enabled, you may need to turn this off before enabling constrained delegation, but I didn't fully test this scenario.
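To validate the setup (the "reading the allowed services" part of the question), a minimal sketch that just prints the account's current delegation list:

// Print every backend SPN the account is currently allowed to delegate to.
foreach (object allowedSpn in de.Properties["msDS-AllowedToDelegateTo"])
{
    Console.WriteLine(allowedSpn);
}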