What is the ideal sequence of policies to apply when creating an API proxy in Apigee? Following is the list of policies:
Spike Arrest
OAuth
Regular expression protection
JSON Threat protection
Request Quota
How will performance be impacted if OAuth is kept last?
Thanks in advance.
From a security perspective, you would want to keep OAuth near the top of your policy order. This ensures that attackers cannot extract information about your proxy without providing authentication.
From a performance perspective, a successful request passes through every policy regardless of order, so the ordering does not change overall performance for successful requests.
If fast failure detection is important to you, your best bet is to place the policies whose failures occur most frequently near the top of the policy order, so those requests fail as early as possible.
Additionally, you can view the time each policy takes to run using the Trace feature.
There are dozens of attempts every day, from several different countries, trying to access files like "~~~.php" or "./env", or other strange URLs.
In the AWS configuration I opened only the ports required for the service, and the application has a Spring Security config, so those URL-based hacking attempts only get "access denied" (I check the error log on the monitoring system sometimes). There has been no problem so far.
But I'm a little worried: if there were a "massive" number (a million?) of hacking accesses to my app server, each from a different IP, could returning an "access denied" error that many times itself cause a traffic problem on the server? Or can I just ignore these errors?
I couldn't find the answer by searching. Any advice would be appreciated.
Spring Security is implemented as a stack of filters, and URL validation occurs very early in that stack, so the load for each individual rejected request should remain low.
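For illustration, here is a minimal sketch of that idea using Spring Security's 6.x lambda DSL (the URL patterns are hypothetical); the point is that anyRequest().denyAll() rejects probe URLs such as scanner requests for .php files inside the filter chain, before any controller or business logic runs:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

// Sketch only: whitelist the URL patterns the application actually serves and deny
// everything else, so scanner probes are rejected cheaply inside the filter chain.
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/", "/login", "/public/**").permitAll() // hypothetical paths
                .requestMatchers("/api/**").authenticated()               // hypothetical paths
                .anyRequest().denyAll())                                  // probes end here
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}
```

Rejecting a request at this layer is cheap compared with running controllers, views, or database queries, which is why the per-request load stays low.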
But the second part of your question is about a quite different attack: a Distributed Denial of Service (DDoS). If tons of requests from high-throughput origins reach your server, it will no longer be able to answer them all, including the legitimate ones. Worse, as most Java applications are not protected against this, you could crash the JVM by exhausting memory or other key resources.
Mitigation techniques are listed on the linked Wikipedia page, most of them based on identifying and rejecting the illegitimate traffic. Apart from that, you could include in your application or infrastructure a limit on the number of concurrent requests, to at least prevent an application crash (a rough sketch follows).
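Here is one way such a limit could look as a plain servlet filter that sheds load with 503 once a fixed number of requests are in flight; the limit of 200 and the jakarta.servlet namespace are assumptions, so tune and adapt to your stack:

```java
import java.io.IOException;
import java.util.concurrent.Semaphore;

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletResponse;

// Sketch only: cap the number of requests processed concurrently so a flood degrades
// into fast 503 responses instead of exhausting threads, memory, or database connections.
public class ConcurrencyLimitFilter implements Filter {

    private static final int MAX_CONCURRENT_REQUESTS = 200; // arbitrary; size to your server
    private final Semaphore permits = new Semaphore(MAX_CONCURRENT_REQUESTS);

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (!permits.tryAcquire()) {
            ((HttpServletResponse) response)
                    .sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            return;
        }
        try {
            chain.doFilter(request, response);
        } finally {
            permits.release();
        }
    }
}
```

This only protects the application itself; real DDoS mitigation still has to happen upstream (load balancer, CDN, or provider-level filtering).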
I have a requirement to persist some data in a table (a single table). The data comes from the UI. Do I need to write just the System API and persist the data, or do I need to write both a Process API and a System API? I don't see a use for a Process API in this case. Please suggest: is it always necessary to access a System API through a Process API, or can a System API be invoked without a Process API as well?
I would recommend a fine-grained approach to this. We should still go through the Experience layer even though we do not have much customization to the data.
In short: an Experience layer API directly calling the System layer API (if there is no orchestration, data conversion, or formatting needed).
Why do we need a System API and an Experience API? A couple of points:
The System API should be closely tied to the underlying system, so that if the system changes in the future, the change does not impact any of the clients.
Secondly, adding an upper layer gives us the flexibility to apply different SLAs, policies, logging, and much more to different clients. Even if you have a single client right now, it is better to architect for the future. Reuse is the key advantage of these APIs.
Please check Pattern 2 in this document
That is a question for the enterprise architect in your organisation. In this case, the process API would probably be a simple proxy for the system API, but that might not always be the case in future. Also, it is sometimes useful to follow a standard architectural pattern even if it creates some spurious complexity in the implementation. As always, there are design trade-offs and the answer will depend on factors that cannot be known by people outside of your organisation.
We are working on an application where we will create and store XACML policies in a WSO2 server for authorization.
We are looking for the best way to authorize a user whenever they try to access anything in the application. We are not sure how much of a performance issue this approach will cause.
One way we can deal with this is to fetch all of the user's details from the IdP at login time, so we can cache them at the application level and avoid a trip to the WSO2 IdP each time the user performs an action. It may cause a slow login, but after that the rest of the application experience will be fast.
We just wanted to confirm: is this the correct approach? Is there any issue with this design, or is there a better way?
I don't think this is the correct approach, especially when we are talking about attribute-based access control (ABAC) and the attributes change frequently.
Also, when you are doing the policy evaluation, it is better to let the PIP fetch the required attributes instead of the application sending all of them. Furthermore, you can use caching on the WSO2 IS side for XACML policy decisions and attributes.
Apart from that, for better performance you may implement your PEP as Thrift-based. We did the same implementation and ran a successful load test for one of our most heavily used applications.
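As a rough illustration of the PEP side, here is a sketch that asks the PDP for a decision on every access check, over HTTPS with a XACML/JSON-style payload rather than Thrift; the endpoint URL, attribute IDs, and response check are placeholders, so consult the WSO2 IS entitlement API documentation for the real contract:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch only: every authorization check is a round trip to the PDP, so the application
// never serves stale decisions. The endpoint URL and payload shape below are placeholders.
public class PdpClient {

    private static final String PDP_DECISION_URL = "https://idp.example.com/pdp/decision"; // placeholder
    private final HttpClient http = HttpClient.newHttpClient();

    public boolean isPermitted(String subject, String resource, String action)
            throws IOException, InterruptedException {
        String requestBody = """
            {"Request": {
               "AccessSubject": [{"Attribute": [{"AttributeId": "subject-id",  "Value": "%s"}]}],
               "Resource":      [{"Attribute": [{"AttributeId": "resource-id", "Value": "%s"}]}],
               "Action":        [{"Attribute": [{"AttributeId": "action-id",   "Value": "%s"}]}]
            }}""".formatted(subject, resource, action);

        HttpRequest request = HttpRequest.newBuilder(URI.create(PDP_DECISION_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(requestBody))
                .build();

        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        // Naive check for the sketch; a real PEP would parse the XACML response properly.
        return response.statusCode() == 200 && response.body().contains("Permit");
    }
}
```

The point of the sketch is the shape of the interaction, not the transport: because the PDP is consulted per action, any attribute or decision caching can live centrally on the IS side rather than in each application.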
I would not recommend caching on the application side, for the following reasons:
You still have to make a round trip for policy evaluation even if you cache attributes locally in the application.
Caching attributes locally inside the application defeats the purpose if the same policy is to be used by other applications in the future.
Allowing the PIP to fetch the required attributes on the WSO2 side is recommended, as it eases new application integrations: you do not need to worry about fetching attributes for each new application.
Caching can be done centrally on the WSO2 IS server instead of applying a cache at each application.
P.S. These are my personal views and opinions; they may not be the perfect or best fit for every requirement and business need.
I would like to build an LDAP cache with the following goals:
Decrease connection attempts to the LDAP server
Read from the local cache if the entry exists and is still valid
Fetch from LDAP if the entry has not been requested before or the cached entry is invalid
Currently I am using the UnboundID LDAP SDK to query LDAP, and it works.
After doing some research, I found a persistent search example that may work. When an entry is updated on the LDAP server, it is passed to searchEntryReturned, so updating the cache is possible.
https://code.google.com/p/ldap-sample-code/source/browse/trunk/src/main/java/samplecode/PersistentSearchExample.java
http://www.unboundid.com/products/ldapsdk/docs/javadoc/com/unboundid/ldap/sdk/AsyncSearchResultListener.html
But I am not sure how to do this since it is asynchronous, or whether there is a better way to implement the cache. Examples and ideas are greatly welcomed.
The LDAP server is Apache DS, and it supports persistent search.
The program is a JSF2 application.
I believe that Apache DS supports the use of the content synchronization controls as defined in RFC 4533. These controls may be used to implement a kind of replication or data synchronization between systems, and caching is a somewhat common use of that. The UnboundID LDAP SDK supports these controls (http://www.unboundid.com/products/ldap-sdk/docs/javadoc/index.html?com/unboundid/ldap/sdk/controls/ContentSyncRequestControl.html). I'd recommend looking at those controls and the information contained in RFC 4533 to determine whether that might be more appropriate.
Another approach might be to see if Apache DS supports an LDAP changelog (e.g., in the format described in draft-good-ldap-changelog). This allows you to retrieve information about entries that have changed so that they can be updated in your local copy. By periodically polling the changelog to look for new changes, you can consume information about changes at your own pace (including those which might have been made while your application was offline).
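For example, a minimal polling loop with the UnboundID SDK might look like the following; it assumes a draft-good-ldap-changelog style cn=changelog container with changeNumber, targetDN, and changeType attributes, which you should verify against your Apache DS configuration:

```java
import java.util.Map;

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPSearchException;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.ldap.sdk.SearchScope;

// Sketch only: periodically ask the changelog for changes newer than the last one seen
// and invalidate the affected cache entries. Attribute names follow draft-good-ldap-changelog.
public class ChangelogPoller {

    private long lastChangeNumber = 0;

    public void poll(LDAPConnection connection, Map<String, SearchResultEntry> cache)
            throws LDAPSearchException {
        String filter = "(changeNumber>=" + (lastChangeNumber + 1) + ")";
        SearchResult result = connection.search("cn=changelog", SearchScope.ONE, filter,
                "changeNumber", "targetDN", "changeType");

        for (SearchResultEntry change : result.getSearchEntries()) {
            String targetDN = change.getAttributeValue("targetDN");
            if (targetDN != null) {
                cache.remove(targetDN); // drop the stale copy; refetch lazily on next read
            }
            Long changeNumber = change.getAttributeValueAsLong("changeNumber");
            if (changeNumber != null && changeNumber > lastChangeNumber) {
                lastChangeNumber = changeNumber;
            }
        }
    }
}
```

Persisting lastChangeNumber between restarts is what lets you catch up on changes made while your application was offline.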
Although persistent search may work in your case, there are a few issues that might make it problematic. The first is that you don't get any control over the rate at which updated entries are sent to your client, and if the server can apply changes faster than the client can consume them, then this can overwhelm the client (which has been observed in a number of real-world cases). The second is that a persistent search will let you know what entries were updated, but not what changes were made to them. In the case of a cache, this may not have a huge impact because you'll just replace your copy of the entire entry, but it's less desirable in other cases. Another big problem is that a persistent search will only return information about entries updated while the search was active. If your client is shut down or the connection becomes invalid for some reason, then there's no easy way to get information about any changes while the client was in that state.
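That said, since you asked how the asynchronous listener ties into a cache, here is a minimal sketch of a persistent-search-backed cache; the base DN, filter, and connection handling are placeholders, and it glosses over deletions and the recovery problems described above:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.unboundid.ldap.sdk.AsyncRequestID;
import com.unboundid.ldap.sdk.AsyncSearchResultListener;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SearchRequest;
import com.unboundid.ldap.sdk.SearchResult;
import com.unboundid.ldap.sdk.SearchResultEntry;
import com.unboundid.ldap.sdk.SearchResultReference;
import com.unboundid.ldap.sdk.SearchScope;
import com.unboundid.ldap.sdk.controls.PersistentSearchChangeType;
import com.unboundid.ldap.sdk.controls.PersistentSearchRequestControl;

// Sketch only: entries are served from the map; the async listener refreshes them
// whenever the server reports a change. Base DN and filter are placeholders.
public class PersistentSearchCache implements AsyncSearchResultListener {

    private final Map<String, SearchResultEntry> cache = new ConcurrentHashMap<>();
    private final LDAPConnection connection;

    public PersistentSearchCache(LDAPConnection connection) throws LDAPException {
        this.connection = connection;
        SearchRequest request = new SearchRequest(this, "ou=people,dc=example,dc=com",
                SearchScope.SUB, "(objectClass=*)");
        // changesOnly=true: only deliver entries as they change; returnECs=false.
        request.addControl(new PersistentSearchRequestControl(
                PersistentSearchChangeType.allChangeTypes(), true, false));
        connection.asyncSearch(request);
    }

    public SearchResultEntry get(String dn) throws LDAPException {
        SearchResultEntry entry = cache.get(dn);
        if (entry == null) {
            entry = connection.getEntry(dn); // cache miss: fetch from the server
            if (entry != null) {
                cache.put(dn, entry);
            }
        }
        return entry;
    }

    @Override
    public void searchEntryReturned(SearchResultEntry entry) {
        cache.put(entry.getDN(), entry); // server pushed a changed entry
    }

    @Override
    public void searchReferenceReturned(SearchResultReference reference) {
        // referrals are ignored in this sketch
    }

    @Override
    public void searchResultReceived(AsyncRequestID requestID, SearchResult result) {
        // the persistent search ended (connection closed, etc.); a real implementation
        // would re-establish it and refresh the cache
    }
}
```

Note that this sketch inherits all of the persistent-search limitations above, particularly the lack of any way to learn about changes made while the search was not active.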
Client-side caching is generally a bad thing, for many reasons. It can serve stale data to applications, which has the potential to cause incorrect behavior or in some cases pose a security risk, and it's absolutely a huge security risk if you're using it for authentication. It could also pose a security risk if not all of the clients have the same level of access to the data contained in the cache. Further, implementing a cache for each client application isn't a scalable solution, and if you were to try to share a cache across multiple applications, then you might as well just make it a full directory server instance. It's much better to use a server that can simply handle the desired load without the need for any additional caching.
I've been having performance issues with a high-traffic ASP.NET 2.0 site running on Windows 2000. While editing the web.config file I noticed that the authentication mode was set to 'Windows'. I changed it to 'None'. The only users this site has are anonymous, and it gets 25,000+ page views a day. Could this have been causing the performance issues?
There is a small potential, but if you are not securing any folders, it shouldn't be an issue.
In reality it would mostly be an issue if you needed to secure a folder path.
There might be a SMALL performance hit but I can't imagine it would be that bad.
It's very unlikely. Windows authentication is performed within IIS, and then a token is sent on to ASP.NET, so if you're using Anonymous Authentication, then it'll be effectively instantaneous, as this token will be created when the security context is created and that'll be it.
The 'None' authentication mode is intended for custom authentication rather than for anonymous access; anonymous is one of the Windows authentication choices (i.e. IIS auth).
Perhaps you should set up tracing on the app and have methods log timings, to see where it's slow. It's likely to be a slow-running query, a timeout issue, a lack of disk space or swap space, or something like that.
Check out: http://msdn.microsoft.com/en-us/library/aa291347(VS.71).aspx for more detail on the authentication methods.