Is there any configuration in Guvnor to limit the number of unsuccessful login attempts? I need this to prevent brute-force attacks on my production Guvnor server.
Environment:
1. Drools-Guvnor 5.5.0-Final
2. JBoss EAP 6.1.0
Thanks and Best Regards,
Zahid Ahmed
Guvnor does not. If you implement a custom authenticator (e.g. https://gist.github.com/gratiartis/4545962), you could enforce such restrictions yourself.
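For instance, here is a minimal sketch of such a check, assuming your authenticator boils down to a single authenticate(username, password) call; every name below is illustrative, not part of any Guvnor or JBoss API:

    // Hypothetical sketch: wrap the real credential check with a per-user
    // failed-attempt counter. MAX_ATTEMPTS and delegateCheck are placeholders.
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    public class LockingAuthenticator {
        private static final int MAX_ATTEMPTS = 5;
        private final ConcurrentHashMap<String, AtomicInteger> failures =
                new ConcurrentHashMap<String, AtomicInteger>();

        public boolean authenticate(String username, String password) {
            AtomicInteger count = failures.get(username);
            if (count == null) {
                AtomicInteger fresh = new AtomicInteger();
                count = failures.putIfAbsent(username, fresh);
                if (count == null) {
                    count = fresh;
                }
            }
            if (count.get() >= MAX_ATTEMPTS) {
                return false; // locked out until the counter is reset
            }
            boolean ok = delegateCheck(username, password);
            if (ok) {
                failures.remove(username); // reset the counter on success
            } else {
                count.incrementAndGet();
            }
            return ok;
        }

        private boolean delegateCheck(String username, String password) {
            // placeholder for the actual credential check (LDAP, database, ...)
            return false;
        }
    }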
I'm trying to load-balance 2 web servers (running Apache/PHP) by putting Nginx in front of them. I need to use the Round Robin algorithm, but when I do, I can't keep sessions stable.
(I understand that with Round Robin, the session information is lost once I hit the other server on the next request.)
Is there a proper way to achieve this? Any advice on the industry standard for this, please?
FYI, I have already put these 2 web servers into a GlusterFS cluster, so I have common storage (in case you are going to suggest something based on that).
The nginx manual says that session affinity (the "sticky" directive) is available in the commercial distribution only. If you don't use the commercial distribution, you'll have to grab a third-party module and rebuild the server with support for it (searching for "sticky" should help you find the third-party add-ons).
If there isn't any specific reason for using Round Robin, you can try the ip_hash load-balancing mechanism instead:
upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
If there is the need to tie a client to a particular application server — in other words, make the client’s session “sticky” or “persistent” in terms of always trying to select a particular server — the ip-hash load balancing mechanism can be used.
Please refer to the nginx load-balancing documentation for more information.
We are working on an application where we will create and store XACML policies in a WSO2 server for authorization.
We are looking for the best way to authorize a user whenever he tries to access anything in the application, but we are not sure how much of a performance hit this approach will cause.
One way we can deal with this is to fetch all of the user's details from the IdP when he logs in, so that we can cache them at the application level and don't have to make a round trip to the WSO2 IdP every time the user performs an action. That may slow down login, but everything after it will be fast.
Is this the correct approach? Is there any issue with this design, or is there a better way we can use?
I don't think it's the correct approach, especially when we are talking about attribute-based access control (ABAC) and the attributes change frequently.
Also, when you are doing the policy evaluation it is better to let the PIP fetch the required attributes instead of the application sending all attributes; furthermore, you may use caching on the WSO2 IS side for XACML policy decisions or attributes.
Apart from that, for better performance you may implement your PEP as Thrift-based. We did the same implementation and ran a successful load test for one of our most heavily used applications.
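As a rough illustration of that request shape (all class and method names below are hypothetical, not a WSO2 API), the PEP ships only identifiers and leaves attribute resolution to the PIP on the server side:

    // Hypothetical PEP sketch: send only subject/resource/action identifiers
    // to the PDP; the PIP on the WSO2 side fetches whatever the policy needs.
    interface PolicyDecisionClient {
        // e.g. a Thrift or SOAP call to the entitlement service
        String getDecision(String subject, String resource, String action);
    }

    class SimplePep {
        private final PolicyDecisionClient pdp;

        SimplePep(PolicyDecisionClient pdp) {
            this.pdp = pdp;
        }

        boolean isPermitted(String user, String resource, String action) {
            // No attributes are shipped from the application side
            return "Permit".equals(pdp.getDecision(user, resource, action));
        }
    }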
I would not recommend caching on the application side, for the following reasons:
You still have to make a round trip for policy evaluation even if you cache attributes locally in the application.
Caching attributes locally inside the application defeats the purpose if the same policy is to be used by other applications in the future.
Letting the PIP fetch the required attributes on the WSO2 side is recommended, as it eases new application integrations: you need not worry about fetching attributes for each new application.
Caching can be done centrally at the WSO2 IS server instead of at each application.
P.S. These are my personal views and opinions; they may not be perfect or the best fit for different requirements and business needs.
I have two online systems running. Both of them use EclipseLink.
The first system is an administration system, where the prices for the second application are managed.
The second system is an online shop, where customers can buy articles.
Both of them run on the same server and use the same Oracle database.
To provide fast access, the price objects are cached by EclipseLink.
If I change the value of a price in the administration system, the shop system should flush its cache in order to get the new price value.
What is the best way to solve this problem?
I have a similar problem but it's with user credentials.
1) Configure caching on the shop side
You can configure the EclipseLink cache to have an expiry: either a time-to-live or an expire-at value. For example, you could configure prices to expire after 1, 5 or 10 minutes. Not instant, but pretty quick and very easy to implement. Check out the @Cache annotation in EclipseLink. This is what I ended up using.
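As a minimal sketch (the entity and values here are illustrative), a five-minute time-to-live could look like this:

    import java.math.BigDecimal;

    import javax.persistence.Entity;
    import javax.persistence.Id;

    import org.eclipse.persistence.annotations.Cache;

    // Cached Price instances expire 5 minutes (300,000 ms) after being
    // loaded into the shared cache; the next read goes to the database.
    @Entity
    @Cache(expiry = 300000)
    public class Price {
        @Id
        private Long id;
        private BigDecimal amount;
        // getters and setters omitted for brevity
    }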
2) Have the admin application communicate with the shop application
It might be worth creating a web service on the shop side that invalidates the cache when called. Kind of fragile, but it might be necessary depending on your setup.
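For example, a hypothetical JAX-RS endpoint on the shop side, reusing the illustrative Price entity from above and the standard JPA Cache API:

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.PersistenceUnit;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;

    @Path("/cache")
    public class CacheInvalidationResource {

        @PersistenceUnit
        private EntityManagerFactory emf;

        // The admin application calls this after changing a price. It drops
        // all Price instances from the shared (L2) cache, so the next read
        // fetches the fresh value from the database.
        @POST
        @Path("/prices/evict")
        public void evictPrices() {
            emf.getCache().evict(Price.class);
        }
    }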
3) Use cache coordination
EclipseLink has functionality for cache coordination. I have never used it, but it looks like it might be the best fit for you. You can check out the EclipseLink documentation for more information.
I'm a newbie to OAuth. I have a high-volume customer using OAuth: a load balancer with 12 servers, but only 1 server storing the OAuth tokens. Today, when testing, I can only get 1,000 concurrent users on the site, and I need to support an SLA of 10,000.
I'm looking at the following alternatives:
1) Look for a more robust OAuth library - must be Java based
2) Store the tokens in a database - will be slower, but all servers will have access to the tokens
Is there anything else I'm missing? Any recommendations from more experienced OAuth developers/architects?
Much Appreciated!
Steve
You're not missing anything; solving this is not the purpose of OAuth itself, so the second alternative sounds good to me. That said, avoid COTS clustering solutions and plain database storage here if you want to achieve that level of scalability easily and at low cost.
Instead, start scaling your token repository horizontally using a distributed caching system on its own tier of servers.
If you're on Java, maybe investigate spymemcached or an equivalent.
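A rough sketch of what that might look like with spymemcached (the key prefix, TTL, and string serialization are illustrative choices):

    import java.io.IOException;
    import java.net.InetSocketAddress;

    import net.spy.memcached.MemcachedClient;

    public class DistributedTokenStore {
        private static final int TTL_SECONDS = 3600; // illustrative token lifetime
        private final MemcachedClient client;

        public DistributedTokenStore(String host, int port) throws IOException {
            // All 12 app servers point at the same memcached tier
            this.client = new MemcachedClient(new InetSocketAddress(host, port));
        }

        public void save(String token, String serializedGrant) {
            client.set("oauth:" + token, TTL_SECONDS, serializedGrant);
        }

        public String lookup(String token) {
            return (String) client.get("oauth:" + token);
        }
    }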
You can store your OAuth access tokens in any distributed, persistent cache (for example, MongoDB with replica sets). With this setup your OAuth access tokens will be available on all 12 boxes and you will be able to scale horizontally. Tokens created on any box are automatically replicated, and it should be very fast compared to a regular database.
More info on MongoDB and replica sets can be found in the MongoDB documentation.
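A hedged sketch with the MongoDB Java driver (database, collection, and field names are illustrative):

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class MongoTokenStore {
        private final MongoCollection<Document> tokens;

        public MongoTokenStore() {
            // Connect to the replica set; writes replicate to all members
            MongoClient client = MongoClients.create(
                    "mongodb://host1,host2,host3/?replicaSet=rs0");
            this.tokens = client.getDatabase("oauth").getCollection("tokens");
        }

        public void save(String token, String grantJson) {
            tokens.insertOne(new Document("_id", token).append("grant", grantJson));
        }

        public String lookup(String token) {
            Document doc = tokens.find(new Document("_id", token)).first();
            return doc == null ? null : doc.getString("grant");
        }
    }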
My company develops a CDN / web-hosting solution. We have a middleware that serves as a business-logic layer and exposes web services for the front-end.
I am looking for a clean solution to feature management. There are uncertainties and ugly workarounds in the software, where the devs would say "when it happens or breaks, we will fix it".
For example, here are the features that a web publisher can have:
Sites limit
Bandwidth limit
SSL feature + SSL configuration per site
If we downgrade a web publisher who has 10 sites to a 5-site limit, we can choose not to suspend the remaining 5 sites, or we can prompt for suspension before the downgrade.
For the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded the limit, we suspend his account.
For the SSL feature, every SSL configuration is tied to a site; what should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled?
So as you can see, there are many different situations and different ways of handling them.
I can make a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade (see the sketch after these options).
Or a system that ignores the impacts and just upgrades/downgrades. Bad.
Or a system designed in a way that the client code needs to be aware of the complex feature matrix (or I can expose a helper to the client code to check whether a feature is DEFUNCT).
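To make the first option concrete, here is a hypothetical sketch (all types and checks are illustrative) of an impact check that runs before any downgrade is applied:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical impact check: compute what a downgrade would break so the
    // caller can prompt the user before applying it.
    enum Impact { SUSPEND_EXTRA_SITES, SUSPEND_OVER_BANDWIDTH, DISABLE_SSL_CONFIGS }

    interface Publisher {
        int siteCount();
        long bandwidthUsed();
        boolean hasSslConfigs();
    }

    interface Plan {
        int siteLimit();
        long bandwidthLimit();
        boolean sslEnabled();
    }

    class DowngradePlanner {
        List<Impact> planDowngrade(Publisher current, Plan target) {
            List<Impact> impacts = new ArrayList<Impact>();
            if (current.siteCount() > target.siteLimit()) {
                impacts.add(Impact.SUSPEND_EXTRA_SITES);
            }
            if (current.bandwidthUsed() > target.bandwidthLimit()) {
                impacts.add(Impact.SUSPEND_OVER_BANDWIDTH);
            }
            if (current.hasSslConfigs() && !target.sslEnabled()) {
                impacts.add(Impact.DISABLE_SSL_CONFIGS);
            }
            return impacts; // empty means the downgrade can be applied silently
        }
    }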
There are many more approaches I am still thinking through, but I remain puzzled. How would you tackle this issue, and are there any recommended patterns, books, or software that I can refer to?
Appreciate your help.