We recently had a Windows 10 upgrade to patch version 10.0.18362. Before this we were able to connect to any URL.
The aftermath is that we are now unable to connect to a website hosted behind a security device (a WAF, web application firewall). The CI_session cookie value is now in a different format, like session_id%22%3Bs%3A32%3A%2
I'm just wondering: does an operating system change the HTTPS request format after a client sends it to the server?
I need help with this, and I'm not sure how to frame the problem and look for answers.
There seem to be two separate paths going on here:
Windows 10 patches
A WAF implemented on your web app (is it F5's ASM, Advanced WAF, or NGINX WAF? F5 was tagged).
That version of Windows by itself doesn't have issues related to connecting to URLs. If you have a VPN in the mix, then that does pose some changes that could affect your browsing sessions (split-dns versus full tunnel and such).
If the WAF is our potential culprit (using F5 as an example since it was tagged) and there are blocking events, then by default the WAF will give you a message stating the request was blocked, along with an error code.
When rolling out a WAF policy in front of an application, the standard process is to run in transparent mode while learning the application. The WAF then understands the default behavior of that application (if going beyond default attack signatures). If the application is changed, it's standard practice to rerun learning and update the WAF policy as needed (usually done during test/stg processes).
Regardless, the WAF would generate warning or blocking events and this would be visible in analytics, logging, and a blocking page would present itself to the user being blocked (unless disabled - bad experience though).
Moving beyond the WAF aspect of this, if the application is indeed behind a BIG-IP, there may be load balancing methods involved using cookie persistence for the session. The F5 BIG-IP will use a cookie insert or rewrite which clients use until the cookie/session expires (expiration based on persistence within the BIG-IP - more on that here: AskF5 K6917).
Depending on which system is responsible for the session, (A) you should not see two separate ci_sessions, and (B) the BIG-IP would be responsible for the session state to the back-end node.
Your client COULD be connecting to two back-end nodes and receiving two separate sessions independent of each other. If that's possibly the case, then you need to investigate how the F5 BIG-IP is determining persistence.
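One quick way to investigate is to decode the default BIG-IP persistence cookie (the BIGipServer<pool> cookie; its encoding is documented in AskF5 K6917). Here is a minimal sketch in Java, assuming the default IPv4 cookie-insert encoding; the sample value is illustrative:

```java
// Decodes a default BIG-IP persistence cookie value such as
// "1677787402.36895.0000" into the pool member's IP and port.
// Per AskF5 K6917, the first field is the IPv4 address as a
// little-endian 32-bit integer and the second is the byte-swapped port.
public class BigIpCookieDecoder {

    public static String decode(String cookieValue) {
        String[] parts = cookieValue.split("\\.");
        long ip = Long.parseLong(parts[0]);
        int port = Integer.parseInt(parts[1]);

        // IP octets are stored least-significant byte first.
        String address = (ip & 0xFF) + "." + ((ip >> 8) & 0xFF) + "."
                + ((ip >> 16) & 0xFF) + "." + ((ip >> 24) & 0xFF);

        // The 16-bit port has its two bytes swapped.
        int realPort = ((port & 0xFF) << 8) | ((port >> 8) & 0xFF);

        return address + ":" + realPort;
    }

    public static void main(String[] args) {
        // Prints 10.1.1.100:8080 for this sample value.
        System.out.println(decode("1677787402.36895.0000"));
    }
}
```

If two requests carry persistence cookies that decode to two different pool members, the client really is landing on two separate back-end nodes.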
If another persistence method was used, you'll need to find out and resolve it with the BIG-IP admin/app owners (see the list of persistence methods for BIG-IP v15). Either way, you'll need to find out how the application is deployed behind the BIG-IP and whether that changed. If the answer cannot be found within F5's DevCentral community or at AskF5, then a ticket should be created. Cookie persistence on BIG-IP isn't difficult to implement, but it all depends on how the application behaves.
If not, gather some more details and I can update this answer. Hope this helped you at least understand the WAF and BIG-IP LB methods.
Related
There are dozens of attempts every day, from several different countries, trying to access files like "~~~.php" or "./env", or other strange URLs.
In the AWS configuration I opened only the ports required for the service, and the application has a Spring Security config, so those URL-based hacking attempts only get "access denied" (I check the error log on the monitoring system sometimes). There has been no problem so far.
But I'm a little worried: if there were a "massive" number (a million?) of hacking accesses to my app server, each from a different IP, could returning an "access denied" error that many times itself cause a traffic problem on the server? Or can I just ignore this error?
I couldn't find the answer by searching. Any advice would be appreciated.
Spring Security is implemented as a stack of filters, and URL validation occurs very early in the stack, so the load for each individual rejected request should remain low.
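For illustration, here is a minimal sketch of such a filter chain, assuming Spring Security 6's Java configuration style (the matcher patterns are invented for this example, not taken from your app). Requests matching no known pattern are denied before any controller code runs:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                // Known application endpoints require authentication.
                .requestMatchers("/api/**").authenticated()
                // Probes like "/xyz.php" or "/.env" are rejected early
                // in the filter chain, before any controller runs.
                .anyRequest().denyAll());
        return http.build();
    }
}
```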
But the second part of your question is about quite a different attack: a Distributed Denial of Service (DDoS). If tons of requests coming from high-throughput origins reach your server, it will no longer be able to answer them all, including the legitimate ones. Worse, as most Java applications are not protected against that, you could crash the JVM by exhausting memory or some other key resource.
Mitigation techniques are listed in the linked Wikipedia page, most of them based on identifying and rejecting the illegitimate traffic. Apart from that, you could try to include in your application or infrastructure a limit on the number of concurrent requests, to at least prevent an application crash.
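As a sketch of that last idea, here is a plain servlet filter that caps concurrent requests with a semaphore, assuming the Jakarta Servlet API (the limit of 200 and the class name are arbitrary choices for illustration):

```java
import java.io.IOException;
import java.util.concurrent.Semaphore;
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletResponse;

// Rejects requests beyond a fixed concurrency limit instead of letting
// them queue up and exhaust threads or memory.
public class ConcurrencyLimitFilter implements Filter {

    private final Semaphore permits = new Semaphore(200); // assumed limit

    @Override
    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain) throws IOException, ServletException {
        if (!permits.tryAcquire()) {
            // Fail fast with 503 rather than letting requests pile up.
            ((HttpServletResponse) res).sendError(
                    HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            return;
        }
        try {
            chain.doFilter(req, res);
        } finally {
            permits.release();
        }
    }
}
```

This only protects the application itself; a real DDoS still has to be absorbed upstream, at the load balancer, CDN, or network layer.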
Web application frameworks such as Sinatra (Ruby), Play (Scala), and Lift (Scala) produce a web server listening on a specific port.
I know there are some reasons, like security, clustering, and in some cases performance, that may lead me to put an Apache web server in front of my web application's server.
Do you have any reasons for this from your experience?
Part of any web application is fully standardized and commoditized functionality. Mature web servers like nginx or Apache can do the following things in a way that is very likely more correct, more efficient, more stable, more secure, more familiar to sysadmins, and easier to configure than anything you could rewrite in your application server.
Serve static files such as HTML, images, CSS, JavaScript, fonts, etc.
Handle virtual hosting (multiple domains on a single IP address)
URL rewriting
Hostname rewriting/redirecting
TLS termination (thanks #emt14)
Compression (thanks #JacobusR)
A separate web server provides the ability to serve a "down for maintenance" page while your application server restarts or crashes
Reverse proxies can provide load balancing and fault tolerance for your application framework
Web servers have built-in and tested mechanisms for binding to privileged ports (below 1024) as root and then executing as a non-privileged user. Most web application frameworks do not do this by default.
Mature web servers are battle-hardened and stable. By stable, I mean that they quite literally almost never crash. Your web application is almost certainly far less stable. This gives you the ability to at least serve a pretty error page to the user saying your application is down, instead of the web browser just displaying a generic "could not connect" error.
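To make those last two points concrete, here is a toy sketch using only the JDK's built-in com.sun.net.httpserver and java.net.http packages (the ports and the maintenance message are made up). It forwards GET requests to an app server and serves a friendly page when that server is down; a real deployment would of course use nginx or Apache for this rather than hand-rolled code:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Toy front server: proxies GET requests to a backend app server and
// serves a maintenance page if the backend is unreachable.
public class TinyFrontProxy {

    private static final HttpClient client = HttpClient.newHttpClient();

    public static void main(String[] args) throws IOException {
        HttpServer front = HttpServer.create(new InetSocketAddress(8080), 0);
        front.createContext("/", exchange -> {
            int status;
            byte[] body;
            try {
                HttpRequest req = HttpRequest.newBuilder(URI.create(
                        "http://localhost:9090" + exchange.getRequestURI())).build();
                HttpResponse<byte[]> resp =
                        client.send(req, HttpResponse.BodyHandlers.ofByteArray());
                status = resp.statusCode();
                body = resp.body();
            } catch (Exception e) {
                // Backend down or restarting: show a friendly page instead
                // of letting the browser report "could not connect".
                status = 503;
                body = "<h1>Down for maintenance, back shortly.</h1>".getBytes();
            }
            exchange.sendResponseHeaders(status, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        front.start();
    }
}
```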
Anecdotal case in point: nginx handles attack that would otherwise DoS node.js: http://blog.nodejs.org/2013/10/22/cve-2013-4450-http-server-pipeline-flood-dos/
And just in case you want the semi-official answer: at the Airbnb tech talk on January 30, 2013, around 40 minutes in, Isaac Schlueter addresses the question of whether node is stable and secure enough to serve connections directly to the Internet. His answer is essentially "yes, it is fine". So you can do it, and you will probably be fine from a stability and security standpoint (assuming you are using cluster to handle unexpected termination of an app server process). But as detailed above, the reality of current operations is that almost everybody still runs node behind a separate web server or reverse proxy/cache.
I would add:
SSL handling
for some servers like Apache, lots of modules (e.g. NTLM/Kerberos authentication)
Web servers are much better than your application at some things, like serving static content.
Quite often the frameworks do everything you need, but sometimes adding a layer on top of them can give you seemingly free functionality like compression, security, session management, load balancing, etc. Still, adding a web server may also introduce security issues: for example, chances are your web server's security can be compromised more easily than Lift by itself. Also, some of the web frameworks are extremely scalable and may even be hampered by an ill-chosen web server.
In summary, if you require web server like functionality that is not provided by the framework, then a web server may be a very good option, but keep in mind that it's one more thing to configure properly and update regularly with security patches, etc.
If, for example, you just need encryption or compression, then you may find that adding the correct library or plug-in to your framework does just that (and only that).
With a proxy HTTP server in front, the framework doesn't need to keep an HTTP connection open while delivering the computed content, and can move on to serving some other request. The proxy acts as a buffer.
It's an issue of reinventing the wheel. Most frameworks will give you a development environment but for production it's usually good practice to use a commercial/open source project that is able to deal with all issues that arise during production.
People building a framework have the framework to concentrate on, whilst people building a server are doing just the same (perfecting the server).
! For the sake of simplifying things I will refer to Windows Store applications (also known as Metro or Modern UI) as "app" and to common desktop applications as "application" !
I believe this is still one of the most unclear yet important questions concerning app-development for developers who already have established applications on the market:
How to manage communication between apps and applications on a Windows 8 system? (please let's not start a debate on principles - there're so many use cases where this is really required!)
I have basically read hundreds of articles in the last few days, but it still remains unclear how to do this right the first time, mainly because I found several pieces of conflicting information.
With my question here I'd like to re-approach this problem from the viewpoint of the final Windows 8 possibilities.
Given situation:
App and application run on same system
1:1 communication
Application is native (written in Delphi)
Administrator or if required even system privileges are available for the application
In 90% of the use cases, the app requests an action to be performed by the application and receives some textual result. The app should neither have to be exited nor freeze while this happens!
In 10% the application performs an action (triggered by some event) and informs the app - the result might be: showing certain info on the tile or in the already running and active app or if possible running the app / bringing it to the foreground.
Now the "simple" question is, how to achieve this?
Is local web server access actually allowed now? (I believe it wasn't for a long time, but now is since the final release.)
WCF? (-> apparently MS doesn't recommend that anymore)
HTTP requests on a local REST/SOAP server?
WinRT syndication API? (another form of webservice access with RSS/atom responses)
WebSockets (like MessageWebSocket)?
Some other form of TCP/IP communication?
Sharing a text file for input and output (simply thinking of this actually hurts, but at least that's a possibility MS can't block...)
Named Pipes are not allowed, right?
There are some discussions on this topic here on SO, however most of them are not up-to-date anymore as MS changed a lot before releasing the final version of Windows 8. Instead of mixing up old and new information I'd like to find a definite and current answer to this problem for me and for all the other Windows application and app developers. Thank you!
If you are talking about an application going into the Store, communication with the local system via any mechanism is not allowed. Communication with the local system is supported in some debug scenarios to make app development easier.
You can launch desktop applications from Windows Store applications with file or protocol handlers, but there is no direct communication.
So, to reiterate the point... communication between WinRT and the desktop is not allowed for released Windows Store applications. Communication between the two environments is allowed in debug only.
The PG (product group) has posted in different places the reasons why communication is not allowed, ranging from security, to the WinRT lifecycle (i.e., your app gets suspended: how does that get handled with respect to resources, sockets, the remote app, etc.? Lots of failure points), to the fact that Store apps cannot have a dependency on external programs (i.e., I need your local desktop app/service for my app to run, but how do I get your app/service installed? I cannot integrate it into the Store app. I can provide another Store desktop app entry, but that is a bad user experience). Those are high-level summaries, of course.
What should we take care of before moving an application from a single WebSphere Application Server to a WebSphere cluster?
This is my list from experience. It is not complete but should cover the most common problem areas:
Plan ahead the distributed session management configuration (i.e., will you use memory-to-memory or database-based replication). Note that if you are still on a 32-bit platform, the resource overhead from clustering might cause you instability issues if your application already uses lots of memory.
Make sure that everything you put into user sessions can be serialized with the default serializer (implements Serializable); see the sketch after this list. You might otherwise run into problems with distributed sessions.
The same goes for everything you put into DynaCache. Make sure everything serializes properly.
Specify and make sure all the resource definitions (JDBC providers etc) will be made to a proper scope. I would usually recommend using the actual Cluster scope for everything that your applications installed to cluster use. That ensures the testing features work properly from proper points, and that you don't make conflicting definitions.
Make sure your application uses relative paths for resources in web interfaces. Once you start load balancing and stuff you can run into some serious problems if you have bolted down a lot of stuff.
If you have any sort of timers, make sure they work well with clusters. With Quartz, that probably means you should use the JDBC store for timer tasks. With EJB Timers, make sure you register the timers only once (it is possible to corrupt the timer database of WAS if several nodes attempt the registration at exactly the same time) and make sure you install them at Cluster scope.
Make sure you use the WAS provided SSO mechanisms. If you have a custom implementation please make sure it handles moving the user between servers in cluster well.
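Regarding the session-serialization point above, here is a minimal sketch of a safe session attribute (the class and field names are invented for illustration): everything reachable from the object must itself be serializable, and anything that isn't must be marked transient and re-acquired on demand:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// A session attribute safe for distributed sessions: the class and all
// of its non-transient fields implement Serializable.
public class ShoppingCart implements Serializable {

    private static final long serialVersionUID = 1L;

    private String customerId;                      // Serializable
    private List<String> items = new ArrayList<>(); // Serializable contents

    // A DataSource is not serializable: mark it transient and re-acquire
    // it after failover instead of letting session replication fail.
    private transient DataSource dataSource;

    public DataSource dataSource() {
        if (dataSource == null) {
            dataSource = lookupDataSource();
        }
        return dataSource;
    }

    private DataSource lookupDataSource() {
        // JNDI lookup omitted; illustration only.
        throw new UnsupportedOperationException("wire this to JNDI");
    }
}
```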
Keep it simple: depending on your requirements, try configuring your load balancer to use sticky sessions and avoid holding state in your HTTP session. That way you don't need to use resource-hungry in-memory session replication.
Single Sign-On isn't an issue for a single cluster, as your HTTP clients will not be moving off the same http://server.acme.com/... host domain name.
Most of your testing should focus on database contention. If you have a highly transactional application (i.e., many writes to the same table), make sure you look at your database isolation levels so that locks are not held unnecessarily. The same goes for your transaction demarcation: keep transactions as brief as possible. If you don't have database skills yourself, make sure you get a database analyst to help you monitor the database while you test.
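As a sketch of the "keep transactions brief" advice, assuming plain JDBC (the table and column names are placeholders): do the slow work before entering the method, so locks are held only around the writes:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderWriter {

    // All slow work (validation, pricing, remote calls, etc.) happens
    // BEFORE this method, so locks are held only for the write itself.
    public void saveOrder(DataSource ds, String orderId, int quantity)
            throws SQLException {
        try (Connection con = ds.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO orders(id, quantity) VALUES (?, ?)")) {
                ps.setString(1, orderId);
                ps.setInt(2, quantity);
                ps.executeUpdate();
                con.commit(); // brief transaction: one round of writes
            } catch (SQLException e) {
                con.rollback();
                throw e;
            }
        }
    }
}
```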
It is also good advice to raise a PMR with IBM Support ahead of any major change, such as this one or upgrading to a new version. Raise it as a "Software Usage Question" and they can provide you with feedback from their knowledge database, based on other customers' input. The same applies to any product you have a support agreement for: ask support before problems occur.
My company develops CDN / Web-Hosting solution. We have a middleware that's served as a business logic layer and exposes web service for the front-end.
I would like to find a clean solution for feature management; there are uncertainties and ugly workarounds/solutions in the software, about which the devs would say "when it happens or is broken, we will fix it".
For example, here're the following features that a web publisher can have:
Sites limit
Bandwidth limit
SSL feature + SSL configuration per site
If we downgrade a web publisher who has 10 sites down to a 5-site limit, we can choose not to suspend the remaining 5 sites, or we could prompt for suspension before the downgrade.
For the case of the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded the limit, then we will suspend his account.
For the case of the SSL feature, every SSL configuration is tied to a site. What should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled?
So as you can see, there are many different situations, and there are different ways of handling them.
I can make a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade.
Or a system that ignores the impacts and just upgrades/downgrades. Bad.
Or a system designed in such a way that the client code needs to be aware of the complex feature matrix (or I can expose a helper to the client code to check whether a feature is DEFUNCT). A sketch of the first option follows below.
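To illustrate that first option, a hedged sketch (every name here is invented; none of this is a real API): each feature contributes an impact rule, and the downgrade proceeds only after the caller confirms the collected impacts:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a downgrade impact check: each feature reports what a plan
// change would break, and the UI prompts the user before committing.
public class DowngradeAnalyzer {

    interface FeatureRule {
        // Returns a human-readable impact, or null if the change is safe.
        String impactOf(PublisherState current, Plan target);
    }

    record PublisherState(int siteCount, boolean sslEnabled) {}
    record Plan(int siteLimit, boolean sslAllowed) {}

    private final List<FeatureRule> rules = new ArrayList<>();

    public DowngradeAnalyzer() {
        rules.add((s, p) -> s.siteCount() > p.siteLimit()
                ? (s.siteCount() - p.siteLimit()) + " site(s) must be suspended"
                : null);
        // For SSL, deactivating rather than deleting the per-site
        // configuration keeps a later re-upgrade painless.
        rules.add((s, p) -> s.sslEnabled() && !p.sslAllowed()
                ? "SSL configurations will be deactivated (kept for re-upgrade)"
                : null);
    }

    // Collect all impacts so the UI can prompt before the downgrade runs.
    public List<String> impacts(PublisherState current, Plan target) {
        List<String> result = new ArrayList<>();
        for (FeatureRule rule : rules) {
            String impact = rule.impactOf(current, target);
            if (impact != null) {
                result.add(impact);
            }
        }
        return result;
    }
}
```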
There can be many ways that I am still thinking of, but I am puzzled. I am wondering how you would tackle this issue, and whether there are any recommended patterns, books, or software that you think I could refer to?
Appreciate your help.