Authenticating a client-side web service request in a cached environment

We're building a set of external web services to be consumed client-side (using jQuery/AJAX) by visitors to our site. The web services need to be publicly available, but we'd like to limit access to site visitors.
Importantly, the site in question sits behind a CDN and we cache page content for 24 hours; AJAX requests would preferably be cached as well, but I'm conscious that doing so will limit our authentication options. Our visitors access the site and services anonymously.
What are some standard patterns for authenticating client requests? I'm not dealing with confidential data per se, but I do want to deter other users/sites from hijacking these services for liability (think data distribution) and performance reasons.
I'm thinking of a shared secret that's refreshed daily and used site-wide by all clients; any web service request would include the secret. Pretty basic, but are there other, better ways for the service to detect the caller's origin in a manner that can't be spoofed?
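To make the idea concrete, here's a rough sketch of what I have in mind (Node/TypeScript; SERVER_KEY and the HMAC construction are illustrative, not an existing system):

```typescript
// Sketch only: derive a site-wide token from a private server key and the
// current UTC date, so it rotates daily without any client-side state.
import { createHmac, timingSafeEqual } from "node:crypto";

const SERVER_KEY = process.env.SERVER_KEY ?? "change-me"; // never shipped to clients

// Token for a given day: HMAC-SHA256(serverKey, "YYYY-MM-DD").
function dailyToken(date: Date = new Date()): string {
  const day = date.toISOString().slice(0, 10);
  return createHmac("sha256", SERVER_KEY).update(day).digest("hex");
}

// Accept today's and yesterday's token so pages cached for up to 24 hours
// keep working across the daily rollover.
function isValidToken(token: string): boolean {
  return [new Date(), new Date(Date.now() - 86_400_000)].some((d) => {
    const expected = Buffer.from(dailyToken(d), "hex");
    const supplied = Buffer.from(token, "hex");
    return expected.length === supplied.length && timingSafeEqual(expected, supplied);
  });
}
```

The token would be emitted into the cached page server-side and attached to every AJAX call (e.g. as a header). It's obviously visible to anyone who loads a page, so it only raises the bar rather than proving origin, which is why I'm asking for better patterns.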

If the threat to your web service is someone automating the client calls, you can implement rate limiting on the server side. As you rightly mentioned, the client can be required to provide a key with each request. Alternatively, if only mortals are going to interact with the web service, you can implement a Human Interaction Proof such as a CAPTCHA.
One thing to make sure of is that the "key" used by the client is handed out in a controlled manner. I once came across a system that basically gave away unlimited keys; that makes automation control ineffective, since an attacker can request as many keys as they like and make unlimited calls.
If you are limiting by IP address, make sure you throttle requests on the network part of the address (the A.B.C of A.B.C.X), as the host part (X) can change when users are behind proxies. If your clients are anonymous, the closest thing to an "identifier" you have is indeed the address.
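A minimal sketch of that throttling idea (Node/TypeScript; the window size and threshold are made-up numbers), keyed on the network part of the address as described:

```typescript
// Fixed-window rate limiter keyed on the network part (A.B.C) of an IPv4
// address, so hopping host addresses behind a proxy doesn't reset the count.
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 100;
const counters = new Map<string, { count: number; windowStart: number }>();

function networkPart(ip: string): string {
  return ip.split(".").slice(0, 3).join("."); // "A.B.C.D" -> "A.B.C"
}

function allowRequest(ip: string, now: number = Date.now()): boolean {
  const key = networkPart(ip);
  const entry = counters.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(key, { count: 1, windowStart: now }); // start a fresh window
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```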

Related

Office add-in using bootstrap token to get other tokens

We have a requirement to call APIs other than Graph (like Dynamics, Power Automate, etc.) from our Add-in. All examples in Office Add-in Samples suggest using a bootstrap token and then exchanging it to get tokens for subsequent APIs, making the calls on the server. This forces all communication from our Add-in to be proxied via our server, which can be an unnecessary performance bottleneck. Can we not send the OBO tokens back to our client-side Add-in and call the other services directly from the client? Is there a known security issue with this approach?
The "received wisdom" about whether access tokens should be sent to clients or stored on clients has fluctuated over the last 10 -15 years, but in recent years the pendulum has swung pretty decisively to the idea that access tokens should not be on the clients. Client-to-server communication is much more vulnerable than server-to-server communication, because there are a wide variety of well-known ways to attack clients and trick users. At the same time, bad actors don't know when server-to-server communication is going to take place and it is much harder to get access to the server computers on either end of the communication.

Client-side load balancing in practice seems to be almost the same as server-side load balancing. Is that so?

In server-side load balancing, the clients call an intermediate server, which then decides which instance of the actual server (or microservice) to call.
In client-side load balancing too, the clients call an intermediate server (an API gateway such as Zuul, configured with a load balancer such as Ribbon and a naming server such as Eureka), which then decides which instance of the microservice to call.
Unless we include the API gateway as part of the client, the client still doesn't know the IP address of the exact server to which it should send the request. That seems to me a lot like server-side load balancing. Is there something I'm missing?
(Including the API gateway as part of the client seems weird, since it's usually deployed on a different server from the client.)
In client-side load balancing, the client does the heavy lifting of discovery and connection to the origin server. The client consults a registry (Eureka, Consul, maybe DDNS) to discover the end destination, and the registry doles out a valid origin. The communication is then direct, client to server, with no middleman.
In server-side load balancing, the client is dumb and makes a call to a predetermined address (usually a DNS name or static IP). That device then proxies the connection (at the TCP or protocol level) to an origin server chosen via a lookup, heartbeats, etc.
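To illustrate the difference in code, here is a hedged sketch in TypeScript (not the actual Ribbon/Eureka API; the registry URL is made up, and global fetch assumes Node 18+):

```typescript
// Client-side load balancing: the client itself asks a registry for live
// instances, picks one, and connects directly. No middleman on the data path.
type Instance = { host: string; port: number };

async function discover(service: string): Promise<Instance[]> {
  // Hypothetical registry endpoint; Eureka and Consul expose equivalents.
  const res = await fetch(`http://registry.internal/services/${service}`);
  return (await res.json()) as Instance[];
}

async function callService(service: string, path: string): Promise<Response> {
  const instances = await discover(service);
  // The "load balancer" lives in the client: here, a naive random pick.
  const target = instances[Math.floor(Math.random() * instances.length)];
  return fetch(`http://${target.host}:${target.port}${path}`);
}

// Server-side load balancing, by contrast, collapses to a single line:
//   fetch(`http://service.example.com${path}`)
// The balancer behind that one fixed address does the choosing.
```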
I've seen benefits in client-side routing: as long as you have IP connectivity between client and server, the infrastructure work to add new services, locations, products, apps, etc. is trivial. As long as the new server can "register" with the registry and the client has IP access to the server, it just works, and IT does not have to be involved in rolling out your new service.
The drawback is that it makes the client a little heavier, it requires direct IP access from client to server, and it may be confusing for traditional IT folks and auditors. Each client needs to be aware of the registry and have code to make the calls (or use a sidecar/sidekick).
I've seen it in practice where a group started to transition their apps to a Docker environment, and they were able to run their Docker-based apps alongside the non-Docker versions at the same time without having to get IT involved, and to do a lot of experimentation and testing quickly and autonomously.
If you have autonomous teams, are highly advanced on the devops spectrum, and have a lot of trust with your teams, Client Side routing and load balancing may be a good experience for you.

Mac spoof HTTP response

If a program sends an HTTP request, is there a way to spoof the data returned by the request?
For example:
Program that sends name to server to check for permission: http://example.com/test.php?name=Stackoverflow
Actual Response: HI
Response I want to spoof: HELLO
Also, are there good forms of authentication to protect against this (if it is possible)?
This question is pretty open-ended, so it's hard to answer it with something terribly specific. Depending on exactly what you're trying to do, a simple proxy like Fiddler (Windows-only), Burp, etc. might do the trick. You could also play tricks with hosts files, iptables (see Otto's comment), etc. It's definitely possible, but depending on exactly what you're trying to do, some methods may be more suitable than others.
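For instance, the hosts-file variant might look like this (a sketch, assuming you control the machine the program runs on): point example.com at 127.0.0.1 in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts), then run something like:

```typescript
// Minimal local server that answers with the spoofed body. With example.com
// mapped to 127.0.0.1 in the hosts file, the program's request lands here.
// Binding port 80 typically requires elevated privileges.
import { createServer } from "node:http";

createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("HELLO"); // instead of the real server's "HI"
}).listen(80, "127.0.0.1");
```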
As for the second part of your question (authentication to ensure this doesn't happen), this is one of the primary purposes of HTTPS.
In its popular deployment on the internet, HTTPS provides authentication of the web site and associated web server that one is communicating with, which protects against Man-in-the-middle attacks. Additionally, it provides bidirectional encryption of communications between a client and server, which protects against eavesdropping and tampering with and/or forging the contents of the communication. In practice, this provides a reasonable guarantee that one is communicating with precisely the web site that one intended to communicate with (as opposed to an impostor), as well as ensuring that the contents of communications between the user and site cannot be read or forged by any third party.
http://en.wikipedia.org/wiki/HTTP_Secure
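To see why, consider what happens if the client in the example switches to HTTPS (a sketch; the exact error you'd see depends on the impostor's certificate):

```typescript
// Against the hosts-file spoof above, this request fails during the TLS
// handshake: the impostor can't present a certificate for example.com that
// chains to a trusted CA, so no forged "HELLO" is ever delivered.
import { get } from "node:https";

get("https://example.com/test.php?name=Stackoverflow", (res) => {
  res.on("data", (chunk: Buffer) => process.stdout.write(chunk));
}).on("error", (err) => {
  // e.g. "self-signed certificate" / DEPTH_ZERO_SELF_SIGNED_CERT
  console.error("TLS validation failed:", err.message);
});
```

That guarantee only holds if the client actually validates the certificate; a client that disables validation (e.g. rejectUnauthorized: false in Node) is spoofable again.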

rubycas: CAS over SSL, sites over non-SSL

I'm trying to determine how much of a security risk I'm looking at when I have rubycas itself running over HTTPS, but my actual sites running under HTTP. The reason I'm faced with this issue is that the sites are deployed on Heroku, which means SSL is either really expensive or really a pain.
In addition to the login details, I also pass user roles (authorization) to each site, which are then stored in a session.
Any input is greatly appreciated.
The problem with this approach is that neither the session ID (URL or cookie) nor the exchanged data is encrypted. The data can therefore be read and manipulated both on the way from the server to the user and on the way from the user to the server.
Even a passive attacker who can just sniff the traffic without being able to manipulate it can do damage: the attacker can simply copy the session ID into his or her own browser. Public wireless connections often use a transparent proxy, so the attacker and the victim share the same public IP address, which makes it difficult for the application to tell them apart.
There is a tool called Firesheep that makes this kind of attack extremely easy.
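For completeness, the standard defense once HTTPS is available is to mark the session cookie so it never travels in plaintext. A minimal sketch (Node/TypeScript; the file paths and cookie value are placeholders):

```typescript
// Serve the session cookie only over TLS. The Secure flag keeps it off plain
// HTTP connections, and HttpOnly keeps it away from injected scripts. Under
// the all-HTTP Heroku setup described above this isn't available, which is
// exactly the risk.
import { createServer } from "node:https";
import { readFileSync } from "node:fs";

createServer(
  { key: readFileSync("server.key"), cert: readFileSync("server.crt") }, // placeholder paths
  (req, res) => {
    res.setHeader("Set-Cookie", "session=opaque-id; Secure; HttpOnly; SameSite=Lax");
    res.end("ok");
  }
).listen(443);
```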

SSL Client Cert Verification optimisation

We currently have a group of web-services exposing interfaces to a variety of different client types and roles.
Background:
Authentication is handled through SSL Client Certificate Verification. This is currently being done in web-service code (not by the HTTP server). We don't want to use any scheme less secure than this. This post is not talking about Authorisation, only Authentication.
The web-services talk both SOAP and REST(JSON) and I'm definitely not interested in starting a discussion about the merits of either approach.
All operations exposed via the web-services are stateless.
My problem is that verifying the client certificate on each request is very heavyweight and easily dominates CPU time on the application server. I've already tried separating the Authentication and Application portions onto different physical servers to reduce load, but that doesn't improve dispatch speed overall: the request still takes a constant time to authenticate, no matter where that is done.
I'd like to try limiting the number of authentications by generating an HTTP cookie (with an associated server-side session) after successful client certificate verification, which when supplied by the client will cause client certificate verification to be skipped (though still talking over SSL). I'd also like to time-limit the sessions, and make the processes as transparent as possible from a client perspective.
My questions:
Is this still as secure? (and how can we optimise for security and pragmatism?)
Are there free implementations of this scheme? (I'm aware of the SiteMinder product by CA)
Given the above, should we continue to do Authentication in-application, or move to in-server?
generating an HTTP cookie (with an associated server-side session) after successful client certificate verification, which when supplied by the client will cause client certificate verification to be skipped
Is this still as secure? (and how can we optimise for security and pragmatism?)
It's not quite as secure in theory, because the server can no longer prove to itself that there's no man-in-the-middle.
When the client presents a client-side certificate, the server can trust it cryptographically: client and server encrypt the exchanged data (well, the session key) based on the client's key. Without a client-side cert, the server can only hope that the client has done a good job of validating the server's certificate (as perceived by the client), thereby eliminating the possibility of Mr. MitM.
An out-of-the-box Windows client trusts over 200 root CA certificates. In the absence of a client-side cert, the server ends up trusting all of them by extension.
Here's a nice writeup of what to look for in a packet capture to verify that a client cert is providing defense against MitM:
http://www.carbonwind.net/ISA/ACaseofMITM/ACaseofMITMpart3.htm
Here's an explanation of this type of MitM:
http://www.networkworld.com/community/node/31124
This technique is actually used by some firewall appliances to perform deep inspection of SSL traffic.
MitM used to seem like a big Mission Impossible-style production that took a lot to pull off. Really, though, it takes no more than a compromised DNS resolver or router anywhere along the way. There are a lot of little Linksys and Netgear boxes out there in the world, and probably two or three of them don't have the latest security updates.
In practice, this seems to be good enough for major financial institutions' sites, although recent evidence suggests that their risk assessment strategies are somewhat less than ideal.
Are there free implementations of this scheme? (I'm aware of the SiteMinder product by CA)
Just a client-side cookie, right? That seems to be a pretty standard part of every web app framework.
Given the above, should we continue to do Authentication in-application, or move to in-server ?
Hardware crypto accelerators (either a SSL proxy front end or an accelerator card) can speed this stuff up dramatically.
Moving the cert validation into the HTTP server might help. You may be doing some duplication in the crypto math anyway.
See if you would benefit from a cheaper algorithm or smaller key size on the client certs.
Once you validate a client cert, you could try caching a hash digest of it (or even the whole thing) for a short time. That might save you from having to repeat the signature validations all the way up the chain of trust on every hit.
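A sketch of that digest cache (TypeScript; the TTL and names are illustrative, and the digest is computed over the DER-encoded certificate):

```typescript
// Remember certs that passed full chain validation for a short TTL, and skip
// the expensive re-validation on repeat hits within that window.
import { createHash } from "node:crypto";

const TTL_MS = 5 * 60_000; // illustrative
const verified = new Map<string, number>(); // cert digest -> expiry timestamp

function certDigest(derCert: Buffer): string {
  return createHash("sha256").update(derCert).digest("hex");
}

function isRecentlyVerified(derCert: Buffer, now: number = Date.now()): boolean {
  const expiry = verified.get(certDigest(derCert));
  return expiry !== undefined && expiry > now;
}

function markVerified(derCert: Buffer, now: number = Date.now()): void {
  verified.set(certDigest(derCert), now + TTL_MS);
}
```

Note that this only skips re-walking the chain of trust; presenting the cert at all still requires the client to prove possession of the private key during the TLS handshake. You'd also want to keep the TTL short enough that revocation is honoured acceptably.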
How often do your clients transact? If the ones making up the bulk of your transactions hit you frequently, you may be able to convince them to combine multiple transactions in a single SSL negotiation/authentication. Look into setting the HTTP Keep-Alive header; they may be doing that already to some extent. Is your app doing client cert validation on every HTTP request/response, or just once at the beginning of each session?
Anyway, those are some ideas, best of luck!
