I need to use the session key with my geocode requests to limit my billable transactions. The search manager class takes the map as an argument, but I see no way to set the session key. A cursory investigation with Charles (an HTTP debugging proxy) suggests it doesn't use the session key, even after calling getCredentials on its own.
If this can't be done readily, it would seem like a glaring oversight.
The Search, AutoSuggest, and Directions modules all automatically use session keys when you pass a map in while loading that module's manager, so there is no need to get the session key yourself. You only really need to retrieve the session key manually if you want to connect to the REST services directly.
Depending on your application's needs, directly accessing the REST services may be useful. Here are a few examples of when you would want to access the REST services directly:
You only need the raw data.
You want full control over rendering the results.
You want to access a service that is not exposed as a module, such as the Elevation service.
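If you do end up calling the REST services directly, you pass the session key as the key parameter on each request. Here is a minimal Java sketch; the endpoint shape follows the Bing Maps REST Locations (geocoding) API, and the session key value is a placeholder for one obtained from the map's getCredentials callback:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class GeocodeRest {
    public static void main(String[] args) throws Exception {
        // Placeholder: a real session key comes from map.getCredentials() in the page.
        String sessionKey = "YOUR_SESSION_KEY";
        String query = URLEncoder.encode("1 Microsoft Way, Redmond, WA", "UTF-8");

        // Requests made with a session key count against the map session
        // rather than as separate billable transactions.
        URL url = new URL("https://dev.virtualearth.net/REST/v1/Locations?query="
                + query + "&key=" + sessionKey);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // raw JSON geocode response
        }
        in.close();
    }
}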
I am learning Elasticsearch. I want to know how safe it is (in terms of access control and validating user access) to access an ES server directly from the JavaScript API rather than going through your backend. Is it safe to access ES directly from the JavaScript API?
Depends on what you mean by "safe".
If you mean "safe to expose to the internet", then no, definitely not, as there isn't any access control and anyone will be able to insert data or even drop all the indexes.
This discussion gives a good overview of the issue. Relevant section:
Just as you would not expose a database directly to the Internet and let users send arbitrary SQL, you should not expose Elasticsearch to the world of untrusted users without sanitizing the input. Specifically, these are the problems we want to prevent:
Exposing private data. This entails limiting the searches to certain indexes, and/or applying filters to the searches.
Restricting who can update what.
Preventing expensive requests that can overwhelm or crash nodes and/or the entire cluster.
Preventing arbitrary code execution through dynamic scripts.
It's most certainly possible, and it can be "safe" if, say, you're using it as an internal tool behind some kind of authentication. In general, no, it's not secure. The Elasticsearch API can create, delete, update, and search, and that is not something you want to give a client access to, since they could essentially do any or all of these things.
You could, in theory, create role-based auth with Elasticsearch Shield, but it's far from standard practice. It's not really any more difficult to implement search on your backend and just have a simple call that returns search results.
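To illustrate that last point, here is a minimal, hypothetical Java sketch of such a backend search call. It accepts only a plain query string from the client and restricts the search to a single index and field; the host, index, and field names are assumptions for illustration, and a real implementation should build the JSON with a proper library rather than by hand:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SearchProxy {
    // Only this one index is ever queried; clients never talk to ES directly.
    private static final String SEARCH_URL = "http://localhost:9200/articles/_search";

    public static String search(String userQuery) throws Exception {
        // Escape characters that would break out of the JSON string literal.
        String safe = userQuery.replace("\\", "\\\\").replace("\"", "\\\"");
        String body = "{\"query\":{\"match\":{\"title\":\"" + safe + "\"}}}";

        HttpURLConnection conn = (HttpURLConnection) new URL(SEARCH_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();

        StringBuilder response = new StringBuilder();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            response.append(line);
        }
        in.close();
        return response.toString(); // raw JSON hits, render however you like
    }
}
Because the client never reaches Elasticsearch itself, the problems quoted above (arbitrary updates, expensive requests, dynamic scripts) are confined to whatever this one endpoint allows.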
Are there any examples of encrypting the disk cache used by OkHttp's HttpResponseCache? Naively, I don't think this is a very hard thing to do, but I'd appreciate any advice or experience that would help avoid security pitfalls.
Without too many specifics, here's what I'm trying to achieve: a server that accepts users' API keys (typically 40-character random strings) for established service X and makes many API calls on the users' behalf. The server won't persist users' API keys, but a likely use case is that users will periodically call the server, supplying the API key each time. Established service X uses reasonable rate limiting, but supports conditional (ETag, If-Modified-Since) requests, so server-side caching by my server makes sense. The information is private, though, and the server will be hosted on Heroku or the like, so I'd like to encrypt the files cached by HttpResponseCache so that if the machine is compromised, they don't yield any information.
My plan would be to create a wrapper around HttpResponseCache that accepts a secret key, which would actually be a hash of half of the API key string. This would be used to AES-encrypt the cached contents and keys used by HttpResponseCache. Does that sound reasonable?
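For what it's worth, the key-derivation and encryption half of that plan is straightforward with the standard javax.crypto API. Here is a minimal sketch, assuming AES-GCM (available from Java 8; 256-bit keys may additionally require the unlimited-strength JCE policy on older JDKs) and the hash-half-the-key scheme from the question, shown as-is rather than as a vetted key-derivation function:
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CacheCrypto {
    // Derive a 256-bit AES key from half of the API key, per the plan above.
    // A production design should prefer a real KDF (e.g. PBKDF2) with a salt.
    static SecretKeySpec deriveKey(String apiKey) throws Exception {
        String half = apiKey.substring(0, apiKey.length() / 2);
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(half.getBytes("UTF-8"));
        return new SecretKeySpec(digest, "AES");
    }

    // Encrypt one cached entry. The random 12-byte IV is prepended to the
    // ciphertext so each entry is self-contained and needs no extra state.
    static byte[] encrypt(SecretKeySpec key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}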
Very difficult to do with the existing cache code. It's a journaled on-disk data structure that is not designed to support privacy, and privacy is not a feature you can add on top.
One option is to mount an encrypted disk image and put the cache in there. Similar to Mac OS X's FileVault for example. If you can figure out how to do that, you're golden.
Your other option is to implement your own cache, using the existing cache as a guide. Fair warning: the OkResponseCache is subject to change in the next release!
By browsing through the code of the Auth and Session snaplets I observed that session information is only stored on the client (as an encrypted key/value store in a cookie). A common approach to sessions is to only store a session token with the client and then have the rest of the session information (expiry date, key/value pairs) in a data store on the server. What is the rationale for Snap's approach?
For me, the disadvantages of a client-side-only session are:
The key/value store might get large and use lots of bandwidth. This is not an issue if the session is only used to authenticate a user.
One relies on the client to expire/delete the cookie. Without having at least part of the session on the server, one is effectively handing out a token that is valid forever when setting the cookie.
A follow-up question is what the natural way of implementing server side sessions in Snap would be. Ideally, I'd only want to write/modify auth and/or session backends.
Simplicity and minimizing dependencies. We've always taken a strong stance that the core snap framework should be DB-agnostic. If you look closely at the organization, you'll see that we carefully designed the session system with a core API that is completely backend-agnostic. Then we included a cookie backend. This provides users with workable functionality out of the gate without forcing a particular persistence system on them. It also serves as an example of how to write your own backend based on any other mechanism you choose.
We also used the same pattern with the auth system. It's a core API that lets you use whatever backend you want. If you want to write your own backend for either of these, then look at the existing implementations and use them as a guide. The cookie backend is the only one I know of for sessions, but auth has several: the simple file-based one that is included, and the ones included in snaplet-postgresql-simple, snaplet-mysql-simple, and snaplet-persistent.
I'm trying to write a C++ wrapper for an out-of-process COM server (on another machine). I'm hoping to hide all the COM-related nastiness from users of the class.
The security requirements force me to call CoSetProxyBlanket on the server proxy. That is:
CoCreateInstance(CLSID_OutOfProcServer, &proxy);
CoSetProxyBlanket(proxy);
(I've left out lots of parameters). In addition, I must specify credentials in this call since the server requires a local account.
Now here's the problem. This server has lots of methods that return interfaces, and each of these interfaces is a brand new proxy on my side. Thus, I have to call CoSetProxyBlanket() each time I get one. Here's what I want to accomplish:
Have my wrapper hide the CoSetProxyBlanket calls (easy enough)
Avoid storing the credentials in memory (devilishly difficult!)
So far, I've tried copying the blanket from one object to another using CoQueryProxyBlanket and CoSetProxyBlanket. This doesn't work because I can't recover the credentials (unless I store them in memory—which I'd like to avoid).
What's really frustrating is that I have an authenticated connection to the server. It seems like I should be able to copy its security context into the new proxy. (Or at least tell COM to do this for me when it creates the new proxy.) Is there any way to do this or am I stuck storing the credentials?
Try this:
Obtain an impersonation token by calling LogonUser() and store this token instead of the credentials
ImpersonateLoggedOnUser() with the token
Set proxy blanket with authinfo set to NULL
RevertToSelf()
I haven't tried this, just suggesting an idea...
Is there any way to use MongoDB as central session storage for Tomcat 6? If so, could we have a cluster of Tomcat servers reading session data from MongoDB so that the cluster could be dynamically resized (adding more boxes on the fly) without the need for sticky sessions?
I think I found what I was looking for.
https://github.com/dawsonsystems/Mongo-Tomcat-Sessions
If anyone has used it in production, I would love to hear your experiences.
Tomcat/J2EE sessions have a getId() method which returns the session ID for the current user. You can certainly use this as a key into a sessions collection in MongoDB, and store any data you'd like.
I'm not aware of any pre-built tools that integrate specifically with Tomcat 6, but that doesn't mean they don't exist. This is a fairly straightforward task, though; it might be simplest just to write your own DAO to access session data given an HttpSession or HttpServletRequest.
If your session data is the only shared state you maintain, then moving it to MongoDB (or to any off-appserver database or tool) will enable you to scale like you propose. If you have other state maintained on the application servers, then you will need to determine how to move that off of the app servers and onto a shared resource.
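As a rough illustration, a minimal, hypothetical DAO along those lines using the classic 2.x MongoDB Java driver could look like this (database and collection names are illustrative):
import javax.servlet.http.HttpSession;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;

public class MongoSessionDao {
    private final DBCollection sessions;

    public MongoSessionDao(Mongo mongo) {
        // "webapp" and "sessions" are illustrative names.
        this.sessions = mongo.getDB("webapp").getCollection("sessions");
    }

    // Store one attribute under the Tomcat session ID (upsert).
    public void put(HttpSession session, String key, Object value) {
        BasicDBObject query = new BasicDBObject("_id", session.getId());
        BasicDBObject update = new BasicDBObject("$set", new BasicDBObject(key, value));
        sessions.update(query, update, true, false); // upsert=true, multi=false
    }

    // Read one attribute back, or null if absent.
    public Object get(HttpSession session, String key) {
        DBObject doc = sessions.findOne(new BasicDBObject("_id", session.getId()));
        return doc == null ? null : doc.get(key);
    }
}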
I think there is a better way to use MongoDB to store sessions, using only Servlet API functions and no proprietary app-server features:
First, you need to create your own implementation of HttpSession, based on a Map for storing attributes.
You need to create an implementation of HttpServletRequest (using HttpServletRequestWrapper) that overrides the getSession method and returns your implementation.
You need to create a filter that replaces the given request with your wrapper and does the MongoDB handling to load and store the attribute map.
You can find some code samples (sadly, in German) here: http://mibutec.wordpress.com/2013/09/23/eigenes-session-handling-in-webapplikationen/
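If the German is a barrier, here is a minimal, hypothetical sketch of that approach against the Servlet 2.5 API (as in Tomcat 6) and the classic MongoDB Java driver. Names and wiring are illustrative, and concerns like session expiry and serialization of complex attribute values are omitted:
import java.io.IOException;
import java.util.Collections;
import java.util.Enumeration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.*;
import javax.servlet.http.*;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;

// Map-backed HttpSession: attribute handling is real; the rest are stubs.
class MapSession implements HttpSession {
    private final String id;
    final Map<String, Object> attrs = new ConcurrentHashMap<String, Object>();

    MapSession(String id) { this.id = id; }

    public Object getAttribute(String name) { return attrs.get(name); }
    public void setAttribute(String name, Object value) {
        if (value == null) attrs.remove(name); else attrs.put(name, value);
    }
    public void removeAttribute(String name) { attrs.remove(name); }
    public Enumeration<String> getAttributeNames() {
        return Collections.enumeration(attrs.keySet());
    }
    public String getId() { return id; }

    // Stubs to satisfy the interface; not used by this sketch.
    public long getCreationTime() { return 0; }
    public long getLastAccessedTime() { return 0; }
    public ServletContext getServletContext() { return null; }
    public void setMaxInactiveInterval(int interval) {}
    public int getMaxInactiveInterval() { return 0; }
    @SuppressWarnings("deprecation")
    public HttpSessionContext getSessionContext() { return null; }
    public Object getValue(String name) { return getAttribute(name); }
    public String[] getValueNames() { return attrs.keySet().toArray(new String[0]); }
    public void putValue(String name, Object value) { setAttribute(name, value); }
    public void removeValue(String name) { removeAttribute(name); }
    public void invalidate() { attrs.clear(); }
    public boolean isNew() { return false; }
}

// Request wrapper that hands out the Map-backed session instead of the container's.
class MongoSessionRequest extends HttpServletRequestWrapper {
    private final MapSession session;
    MongoSessionRequest(HttpServletRequest request, MapSession session) {
        super(request);
        this.session = session;
    }
    public HttpSession getSession() { return session; }
    public HttpSession getSession(boolean create) { return session; }
}

// Filter that loads the attribute map from MongoDB before the request
// and writes it back afterwards.
public class MongoSessionFilter implements Filter {
    private DBCollection sessions;

    public void init(FilterConfig config) throws ServletException {
        try {
            // Illustrative wiring; host, database, and collection names are assumptions.
            sessions = new Mongo("localhost").getDB("webapp").getCollection("sessions");
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }

    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest) req;
        String id = httpReq.getSession().getId(); // reuse the container's session ID as the key

        MapSession session = new MapSession(id);
        DBObject doc = sessions.findOne(new BasicDBObject("_id", id));
        if (doc != null) {
            for (String key : doc.keySet()) {
                if (!"_id".equals(key)) session.attrs.put(key, doc.get(key));
            }
        }

        chain.doFilter(new MongoSessionRequest(httpReq, session), res);

        // Persist the (possibly modified) attribute map; upsert keyed by session ID.
        sessions.update(new BasicDBObject("_id", id),
                new BasicDBObject(session.attrs).append("_id", id), true, false);
    }
}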