I want to implement OAuth2 with Spring Boot. Most demos I have seen use a hard-coded client ID and secret. In a real project, how should these be handled?
I am attaching two images: one of the client code, which I built with Angular, and one of the server code, built with Spring Boot OAuth2.
You can save your client ID on the client side (property file, database, JNDI, ...), because it is a public identifier; see RFC 6749:
2.2. Client Identifier
The authorization server issues the registered client a client identifier -- a unique string representing the registration information provided by the client. The client identifier is not a secret; it is exposed to the resource owner and MUST NOT be used alone for client authentication. The client identifier is unique to the authorization server.
You can save your client secret on the client side (property file, database, JNDI, ...) if you have a confidential client; see RFC 6749:
2.1. Client Types
OAuth defines two client types, based on their ability to authenticate securely with the authorization server (i.e., ability to maintain the confidentiality of their client credentials):
confidential
Clients capable of maintaining the confidentiality of their credentials (e.g., client implemented on a secure server with restricted access to the client credentials), or capable of secure client authentication using other means.
public
Clients incapable of maintaining the confidentiality of their credentials (e.g., clients executing on the device used by the resource owner, such as an installed native application or a web browser-based application), and incapable of secure client authentication via any other means.
For public clients, no client authentication is required; see RFC 6749:
2.3. Client Authentication
[...]
The authorization server MAY establish a client authentication method with public clients. However, the authorization server MUST NOT rely on public client authentication for the purpose of identifying the client.
But not all authorization servers support public clients for the Authorization Code Grant.
Another way is to use the Implicit Grant (without a client secret), see RFC 6749:
4.2. Implicit Grant
The implicit grant type is used to obtain access tokens (it does not support the issuance of refresh tokens) and is optimized for public clients known to operate a particular redirection URI. These clients are typically implemented in a browser using a scripting language such as JavaScript.
[...]
The implicit grant type does not include client authentication, and relies on the presence of the resource owner and the registration of the redirection URI. Because the access token is encoded into the redirection URI, it may be exposed to the resource owner and other applications residing on the same device.
But not all authorization servers support the Implicit Grant; GitHub, for example, does not.
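On the Spring Boot side, the practical consequence is that the client ID and secret should be externalized rather than hard-coded. Here is a minimal sketch using Spring Boot's @ConfigurationProperties binding; the "oauth" prefix and field names are hypothetical placeholders, and the values would come from environment variables, an application.yml kept outside version control, a vault, etc.:

```java
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

// Sketch: credentials are bound from external configuration instead of
// being hard-coded in the source.
@Configuration
@ConfigurationProperties(prefix = "oauth")
public class OAuthClientProperties {

    private String clientId;      // public identifier, safe to expose to the Angular client
    private String clientSecret;  // only meaningful for a confidential (server-side) client

    public String getClientId() { return clientId; }
    public void setClientId(String clientId) { this.clientId = clientId; }

    public String getClientSecret() { return clientSecret; }
    public void setClientSecret(String clientSecret) { this.clientSecret = clientSecret; }
}
```

With this split, a public (browser-based) client only ever ships the client ID; the secret stays in the server's deployment environment.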
I have a microservice architecture which works with a Spring Zuul gateway, as in the image below.
My authentication service returns an x-auth-token which is generated by the Spring authentication resolver, and my token repository is Redis. So users should use this service to authenticate and then use the other services.
All my other services connect to the same Redis instance, so when they receive an x-auth-token they can get the user's session details. I normally do the authorization by using the @PreAuthorize annotation and specifying the roles that can access the controller or method.
Everything worked fine so far. Then I was asked to add rate-limit functionality to this architecture. So, for example, a single user should not be able to make more than 1 POST request to a specific API in the books service. Also, if there were two books service instances, I would want both to be counted as a single service when it comes to rate limiting.
I found tons of documents that referred me to a project called spring-cloud-zuul-ratelimit. Looking at the documentation I realized it supports Redis as storage (good for me because I already have Redis there) and it also supports handling rate limits per user.
The problem is that my Zuul gateway knows nothing about the users! It has no access to the Redis storage. If I give it access to Redis, that problem might be solved but another one would arise: I would need to authorize the user twice, which takes more time and more Redis traffic (once at the gateway and once at each service, to check the roles and session details).
I'm looking for a solution that comes closest to this list of needs:
It does not change my authentication method (I can't just switch to JWT or OAuth).
It does not duplicate authorization or Redis queries.
Balancing the requests between my services should not affect the rate limit. If each instance of service X is requested once by a single user, then the user has sent two requests.
Hopefully there is good Spring support for the answer.
I would prefer to be able to change the limits dynamically.
The Zuul gateway rate limiter plugin basically tracks a counter of a user's requests during a time interval, based on a specific key (which could be the user's IP, some ID, the request path, or a custom combination built with a custom key generator). You can add it to the existing Zuul gateway application.
Let's say the ratelimiter-gateway is using "[clientIP][userID][method][path]" as the request counter key stored in Redis, e.g. "10.8.14.58:some@mail.com:POST:/books".
Here are some options I can think of:
1. If the client sends some ID, you can use it directly in the rate limiter key combination.
2. If the client only sends a JWT, you can verify its claims to get the user ID (assuming it is embedded in the token), using the same secret key that the authentication service uses to sign the JWT, supplied to the Zuul gateway via its application properties (OS environment variables, Vault, etc.). Or you can just use the token itself as the user ID.
3. Move the authorization logic into the Spring Zuul + rate limiter service. It validates incoming requests to the author and books services, gets the user ID from the token, and then passes it as another header, e.g. "x-app-user-id", to the upstream services using a Spring Boot filter (see the sketch after this list). This way the upstream services don't do any authentication logic; they just read the user ID from the header. Communication between the author and books services might use the same header. This, of course, assumes the upstream servers can't be accessed directly from the outside network.
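As a rough sketch of option 3: a Zuul "pre" filter can resolve the user once at the gateway and relay the result upstream. The SessionUserResolver type and its method below are hypothetical stand-ins for whatever lookup you already do against the shared Redis session store; only the Zuul filter API calls are real:

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.stereotype.Component;

// Hypothetical stand-in for the existing Redis-backed session lookup.
interface SessionUserResolver {
    String userIdForToken(String xAuthToken);
}

// Sketch: resolve the user once at the gateway from the incoming x-auth-token
// and forward the result to the upstream author/books services as a header.
@Component
public class UserIdRelayFilter extends ZuulFilter {

    private final SessionUserResolver resolver;

    public UserIdRelayFilter(SessionUserResolver resolver) {
        this.resolver = resolver;
    }

    @Override public String filterType() { return "pre"; }

    @Override public int filterOrder() { return 1; }

    @Override public boolean shouldFilter() {
        return RequestContext.getCurrentContext()
                .getRequest().getHeader("x-auth-token") != null;
    }

    @Override public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        String token = ctx.getRequest().getHeader("x-auth-token");
        String userId = resolver.userIdForToken(token);
        if (userId != null) {
            // Upstream services read this header instead of hitting Redis again.
            ctx.addZuulRequestHeader("x-app-user-id", userId);
        }
        return null;
    }
}
```

The same user ID can then be fed into the rate limiter's key (e.g. via a custom key generator), so both books instances count against a single per-user limit.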
It might also be a good idea to use a different Redis instance as the rate limit key storage.
As for dynamic config: based on its documentation, you can adjust the rate limit configuration via properties. I don't know whether it can be adjusted dynamically at runtime via Spring Cloud Config or other remote config implementations without the gateway app needing to be restarted.
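For completeness, a hedged sketch of what the gateway's rate limit configuration could look like. The property names follow the spring-cloud-zuul-ratelimit documentation, but the route name "books", the limits and the available key types depend on your version and route setup, so treat this as illustrative:

```yaml
zuul:
  ratelimit:
    enabled: true
    repository: REDIS            # reuse Redis as the shared counter store
    behind-proxy: true
    policy-list:
      books:                     # Zuul route id for the books service
        - limit: 1               # max requests ...
          refresh-interval: 60   # ... per 60-second window
          type:
            - user               # counted per authenticated user
            - url                # and per URL
```

Because the counters live in Redis rather than in each gateway or service instance, load balancing across multiple books instances does not reset or split the limit.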
I'm trying to integrate an Openfire XMPP server with my company's current Spring server, but I have a few major questions I cannot find the answer to.
I'll start with my current architecture first:
1. The XMPP server has a DB server of its own, separate from the Spring server DB. This is a dedicated machine that keeps the users' chat history, etc.
2. The Spring server has a DB of its own where it keeps the user credentials (MD5-hashed) and also client application data.
3. The Spring server is dedicated to serving HTTP requests (a dedicated REST server).
All in all I have two DB servers, one chat server and one REST server.
Now for the questions -
1. Can I forbid registration on the XMPP server (i.e. whitelist the REST server's IP and let it be the only one that can create users, after a user registers on it)?
2. For security reasons the REST server rotates the session for a logged-in user every 2 days; the iOS and Android clients handle session management locally. How can I use those sessions with the XMPP server?
To clarify: I want the users to be able to use the XMPP server for chat purposes only, and only after they have logged in to the application itself. Since the user session may expire, the chat client will also have to re-authenticate against the REST server. How can I achieve this?
3. Won't this create an overload on the REST server (i.e. the REST server will now have to handle client requests and also XMPP server requests)?
4. What is the best architecture to achieve this kind of system (chat server, DB server for the chat server, REST server, DB server for the REST server) so that it can scale horizontally?
I searched Google for an article or something that describes the general architecture, but couldn't find anything relevant. Since I'm not "inventing the wheel" here, I would love to hear good advice or be directed to an article that explains the how-tos.
Thanks in advance.
The standard way in the XMPP world for user authentication is SASL.
SASL has a very simple model: the server sends the client some "challenge" string, the client sends a "response" string back, and they repeat this until the server decides the client has sent all required data. What data to send is defined by the SASL "mechanism". There are a number of well-known SASL mechanisms, e.g. SCRAM, and they are provided by most XMPP servers and clients out of the box.
Your problem is that you already have an authentication system and user database and want to reuse them for chat purposes. There are two ways:
1. Add your custom REST authentication as a SASL module to your server. Google says it is already possible to write and add an Openfire SASL plugin. Your SASL REST mechanism will do the same things as a browser would, but the required URLs, tokens, etc. will be wrapped as "challenges" and "responses": e.g. the server will send the REST auth URL as a "challenge" to the client, and the client will open the URL, post credentials, get a token and send it as a "response" back to the server. Of course you need to add this SASL REST mechanism to the client too.
2. Adapt your XMPP server to use your authentication database directly. In this case you only need to modify the Openfire code to link it to your users/passwords tables (maybe there is already an admin tool for this), and clients will continue to use standard SASL mechanisms without modification. While this way may be easier than the first one, remember that your XMPP server would need access to plain-text passwords, which may be insecure.
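Whichever way you go, the chat server ends up needing to ask the REST backend whether a given user/token is valid. Here is a hypothetical sketch of such an internal check endpoint on the Spring side; the path, header name and SessionService type are made up for illustration, not an existing API:

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical stand-in for the REST server's existing session storage.
interface SessionService {
    String usernameForToken(String token);
}

// Sketch: an internal endpoint the XMPP server (via a custom SASL/auth plugin)
// can call to validate a client's token and learn which user it belongs to.
@RestController
public class ChatAuthController {

    private final SessionService sessions;

    public ChatAuthController(SessionService sessions) {
        this.sessions = sessions;
    }

    @GetMapping("/internal/chat-auth")
    public ResponseEntity<String> check(@RequestHeader("X-Session-Token") String token) {
        String username = sessions.usernameForToken(token);
        return username != null
                ? ResponseEntity.ok(username)                           // token valid: return the username
                : ResponseEntity.status(HttpStatus.UNAUTHORIZED).build(); // token missing/expired: re-authenticate
    }
}
```

Because the chat client re-authenticates through the same backend when its session expires, the 2-day session rotation is handled by the existing REST logic rather than by Openfire.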
Your questions in order:
1. Yes, you can disable registration from the XMPP client and point users to the registration website.
2. You will see chat sessions in the Openfire administration console and will be able to stop them; you can also write a module to do this on your own schedule.
3. If you write a SASL REST mechanism, there will be no difference between requests from chat clients and web clients as far as your REST backend is concerned; they will look the same.
4. As described above, you don't need a separate DB for the chat server, and you can set up multiple chat servers connected to your REST backend.
I am currently looking for a web security solution. As far as I know, HTTPS makes the web more secure, but I found another security solution, JOSE (JWT & JWE). So I want to know: if I use it in the future, can I just use plain HTTP, without HTTPS?
Thanks,
Kris
Your question is legit to me and I am sorry to see that you received downvotes.
As far as i known the HTTPS will bring more security web, but i found another Security solution of JOSE(JWT&JWE)
I think there is some confusion between the two technologies.
JWE is just a format that represents content using JSON-based data structures and provides integrity protection and encryption, whereas HTTPS is a secured layer for the HTTP communication protocol.
JWE is not a replacement for the HTTPS protocol.
The use of one technology, the other, or both of them depends only on your application context. HTTPS may not be absolutely necessary in some contexts, where the secured communication is provided by other means.
You mentioned that you want to find a solution for a security application. A secured connection should always be used in that context.
You absolutely need HTTPS even if you are using JWTs and JWEs. HTTPS allows your client to verify that they are talking to the server they are expecting to talk to. It also protects the content of the communication, including the JWT/JWE tokens that you are using. Without HTTPS, anybody who can listen to the communication between your client and your server can impersonate your clients.
JWTs in particular can carry information about your user. You may not need to forward it to the authorization server that granted the token (if you are using an asymmetric signing key) and still have enough information about the identity and permissions of your user to grant or deny them access to the resources that you are protecting.
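To illustrate that last point, here is a hedged sketch of local verification with the jjwt library (io.jsonwebtoken, 0.11.x API; method names differ in other versions). The signature check protects the token's integrity using only the authorization server's public key, while the confidentiality of the overall exchange still has to come from HTTPS:

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import java.security.PublicKey;

public class LocalTokenCheck {

    // Sketch: a resource server validates a signed JWT locally, without a
    // round trip to the authorization server that issued it.
    public static Claims verify(String jwt, PublicKey authServerPublicKey) {
        return Jwts.parserBuilder()
                .setSigningKey(authServerPublicKey) // only the auth server holds the matching private key
                .build()
                .parseClaimsJws(jwt)                // throws JwtException if the signature or expiry is invalid
                .getBody();                         // claims: subject, roles, expiry, ...
    }
}
```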
Scenario: Sensitive information is exchanged (1) from client to server AND (2) from server to client.
Problem: Data exchanged is not encrypted, so sniffing is easy (it's always theoretically possible, right?)
Solution: Encrypt all data transmitted in either direction (server-to-client and client-to-server).
Implementation:
(1) Client to server - Generate a certificate, install the private key on the server and configure Tomcat to work over HTTPS (many tutorials for this online).
(2) Server to client - The private key goes to (or is generated by) the clients; however, some tutorials strongly emphasize that every client should have their own certificate for the sake of authentication.
Question: If I am already authenticating my users through a database username/password combo (hashed with salt), but I still need to encrypt server-to-client data transmissions to reduce the chance of sniffing, can I just generate one private key for all clients? Are there other ways of achieving what I need with Tomcat/Spring?
It seems you're mixing something up:
Regular HTTPS includes encryption in both directions, and only a private key + certificate on the server side. Once a client requests resources through HTTPS, they get the answer encrypted. So you'll just need to enforce the HTTPS connection, e.g. by redirecting certain requests to HTTPS and not delivering data over plain HTTP (see the sketch after this list).
If you want client certificates, these are purely used for client authentication, so sharing a common client key/certificate with all possible clients would defeat that purpose. Having client keys/certs does not add any more encryption to your data transfer.
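For the "enforce HTTPS" part, a commonly used sketch if you happen to run embedded Tomcat with Spring Boot 2 (for a standalone Tomcat the equivalent is a security-constraint in web.xml plus a redirectPort in server.xml): HTTPS itself is configured through the standard server.ssl.* properties (keystore, password, port), and the bean below adds a plain-HTTP connector whose only job is to redirect to the HTTPS port. The ports are placeholders:

```java
import org.apache.catalina.Context;
import org.apache.catalina.connector.Connector;
import org.apache.tomcat.util.descriptor.web.SecurityCollection;
import org.apache.tomcat.util.descriptor.web.SecurityConstraint;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HttpsRedirectConfig {

    @Bean
    public TomcatServletWebServerFactory servletContainer() {
        TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory() {
            @Override
            protected void postProcessContext(Context context) {
                // Mark every resource as CONFIDENTIAL so Tomcat insists on HTTPS.
                SecurityConstraint constraint = new SecurityConstraint();
                constraint.setUserConstraint("CONFIDENTIAL");
                SecurityCollection collection = new SecurityCollection();
                collection.addPattern("/*");
                constraint.addCollection(collection);
                context.addConstraint(constraint);
            }
        };
        tomcat.addAdditionalTomcatConnectors(httpToHttpsRedirectConnector());
        return tomcat;
    }

    private Connector httpToHttpsRedirectConnector() {
        Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
        connector.setScheme("http");
        connector.setPort(8080);         // plain-HTTP entry point
        connector.setSecure(false);
        connector.setRedirectPort(8443); // HTTPS port configured via server.port / server.ssl.*
        return connector;
    }
}
```

No client-side key material is involved anywhere in this setup; the single server-side key pair is enough for encryption in both directions.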
Answering your follow-up question in the comment:
For HTTPS, the server keeps its private key; the public key is what is shared with the client. With typical HTTPS, the client can be reasonably sure who the server is (authentication, done through the trustworthy signature on the server's public key; this is what you pay trust centers for). The server, however, has no clue who the client is (here client certificates would come into play, but purely for authentication, not for encryption).
Server and client negotiate a common session key. For this purpose there are many different implementations of the key-exchange protocol. This forum is probably not the right place to describe session negotiation and the SSL handshake again, but you can be sure that you only need a server-side key for the purpose you describe above. Take any website as an example: if you go to Google Mail, their HTTPS encryption works through them having a private key and a certified (signed) public key. You have no client-side certificate, but you provide your username and password to them over the encrypted connection. Otherwise you'd have to install a client-side key/certificate for a lot of services, and that would be too much of a burden for the average internet user.
Hope that helps.
I wanted to understand the mechanism of message encryption and signing used by NetTcpBinding when 'Windows' credentials are being used with Transport security. What if my AD uses NTLM instead of Kerberos? Will the messages still get signed and encrypted? If so, how?
Thanks in Advance,
Akshat
The short answer is that, yes, with NTLM authentication the messages will still get signed and encrypted if you have set the Transport security ProtectionLevel to EncryptAndSign (the default).
Here's an outline of how it works:
- Selecting Transport security configures a WindowsStreamSecurityBindingElement in the channel stack. This inserts a stream upgrade provider (see below).
- In the NetTcpBinding, message exchange between the client and service happens within the .NET Message Framing Protocol, which provides both message framing and a mechanism for client and service to negotiate stream upgrades, the principal use of which is to establish transport security. If there is a stream upgrade provider configured in the channel stack, it will be invoked during the Preamble stage of the Framing Protocol when the client opens the channel.
- The upgrade provider for WindowsStreamSecurityBindingElement invokes an SSPI handshake between the client and the server using the SPNEGO security package: in the NetTcpBinding this will normally result in Kerberos being selected as the underlying security provider if available, but will choose NTLM if not.
- If NTLM is the resulting authentication provider, the SSPI handshake will involve the three-leg NTLM challenge-response exchange of tokens described in the NTLM specification. This protocol includes a mechanism for exchanging keys for message signing and encryption. Once the SSPI handshake has generated an appropriate security context, all messages exchanged thereafter are signed and encrypted in the sending channel stack's stream upgrade provider, and decrypted and verified in the receiving channel stack's stream upgrade provider, in each case by using calls to the NTLM security provider via the abstracted SSPI message support functions.
This is a Microsoft proprietary implementation and not properly documented, perhaps on purpose, to prevent intruders from taking advantage of it.
As far as I know, this usually happens at the TCP level: a special token is generated from the user's credentials and passed along with the request. This is intercepted by the Windows security channel and authenticated against AD.
This token is used as a key (or as a basis for generating the key) for encrypting the communication.
I think if you look at the TCP packets, you should be able to see the token, although I have never seen it myself.
If you are doing this all in code then you can find the options here (search for 'NetTcpBinding'). Transport security is via Windows' built-in TLS.
The diagram here should be helpful for your scenario.