Is it a good practice to use useMasterKey() in cloud code? - parse-platform

For security reasons we are planning to disable all write access to classes. Client applications (the Android and iOS SDKs) will only have read-only access to the Parse data (classes).
Data stored in the Parse servers will only be modified by Cloud functions. The Cloud functions will call
Parse.Cloud.useMasterKey();
We have come up with this solution because it is nearly impossible to hide the Parse application ID and client key from attackers.
So is this a good solution? Are there any drawbacks? Does the Parse.Cloud.useMasterKey() method have performance implications?
Thank you...

This is pretty standard practice: you implement your own security in your Cloud Functions and use the master key.
In theory it might even be more efficient to use the master key, as the Parse servers don't have to process Roles/Users/ACLs at all when it is used. Of course, you need to balance that against any extra checks your own security logic performs.
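As a minimal sketch of what that looks like (assuming a modern Parse Server, where the deprecated Parse.Cloud.useMasterKey() is replaced by a per-request useMasterKey option; the addScore function and Score class are hypothetical):

    // Hypothetical Cloud Function: clients have no write access to Score,
    // so all writes go through here, where we enforce our own rules.
    Parse.Cloud.define("addScore", async (request) => {
      if (!request.user) {
        throw new Parse.Error(Parse.Error.OPERATION_FORBIDDEN, "Login required");
      }
      const points = Number(request.params.points);
      if (!Number.isFinite(points) || points < 0 || points > 100) {
        throw new Parse.Error(Parse.Error.VALIDATION_ERROR, "Invalid points value");
      }
      const score = new Parse.Object("Score");
      score.set("player", request.user);
      score.set("points", points);
      // The master key bypasses the read-only ACLs set on the class.
      await score.save(null, { useMasterKey: true });
      return score.id;
    });

The validation at the top is the "your own security" part; the master key is applied only to the final save, not to the whole function.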

Related

Encrypt sensitive data with Spring Boot

Hello to all
I did a lot of research on encrypting sensitive data, such as credit card numbers, in Spring Boot, and three ways to securely encrypt data caught my attention:
Protect secrets with Hashicorp Vault
Column-level encryption
Data Encryption with Java Cryptographic Extensions
All three methods have their advantages and disadvantages. The initial setup of Vault requires a lot of configuration, and I could not find a complete, integrated source for learning it. Column-level data encryption imposes a large processing load on the server and requires management of the cryptographic keys. The third option requires creating, managing, and maintaining encryption keys for each client request. Is there a better choice for managing sensitive customer data such as email addresses or credit card numbers? Or is it recommended to use Vault to manage the secrets of website users?
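For reference, this is roughly what the second option (column-level encryption done in the application layer) looks like: a minimal sketch in TypeScript using Node's built-in crypto module, with the same idea applying to JCE's Cipher in Java. The hard part the question rightly identifies, key management, is deliberately reduced here to a single in-memory key:

    import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

    // In reality the key comes from a KMS or Vault, never generated ad hoc.
    const KEY = randomBytes(32);

    // AES-256-GCM encryption of a single column value.
    function encryptField(plaintext: string): string {
      const iv = randomBytes(12); // standard GCM nonce size
      const cipher = createCipheriv("aes-256-gcm", KEY, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
      // Store iv + auth tag + ciphertext together in the column.
      return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
    }

    function decryptField(encoded: string): string {
      const raw = Buffer.from(encoded, "base64");
      const decipher = createDecipheriv("aes-256-gcm", KEY, raw.subarray(0, 12));
      decipher.setAuthTag(raw.subarray(12, 28));
      return Buffer.concat([decipher.update(raw.subarray(28)), decipher.final()]).toString("utf8");
    }

Everything beyond these two functions (key rotation, per-tenant keys, auditing) is where the real cost of this approach lives.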
Can I encourage you to take a look at our product? I don't want this to be a shameless plug, but as a developer who has felt your pain, I think you would want to take a look at what we have. We have designed it to address some of your concerns: ubiqsecurity.com.
To address your specific considerations:
Vault: MUCH easier than setting up HashiCorp Vault. We have demos of creating an account and sharing encrypted data in two different languages within 5 minutes. The demos should help you get started if necessary, but I wouldn't expect you to need them. Our client libraries also have fully functional examples to help you get started.
Column-level encryption: this seems to be the reason DBAs are hesitant to turn on encryption within the DB layer. We leave the encryption at the application layer. If your encrypted DB is up and running and someone is on the DB server with harvested credentials, is your DB really secure?
Key management: we manage the encryption keys for you. The client uses an API key (similar to other SaaS offerings), and data is encrypted on the client.
Please feel free to reach out to us if you have any questions. Again, not trying to be a shameless plug, but we know the problems developers face when working with encryption and feel our solution addresses a number of the issues you are facing, as well as others you haven't even mentioned.

System API in MuleSoft

I have a requirement to persist some data in a table (a single table). The data is coming from the UI. Do I need to write just the System API and persist the data, or do I need to write both Process and System APIs? I don't see a use for a Process API in this case. Please suggest. Is it always necessary to access a System API through a Process API, or can a System API be invoked without a Process API as well?
I would recommend a fine-grained approach to this. We should still go through the Experience layer even though we do not have much customization to the data.
In short: an Experience layer API directly calling the System layer API (if there is no orchestration, data conversion, or formatting needed).
Why do we need a System API and an Experience API? A couple of points:
The System API should stay closely tied to the underlying system, so that if the system changes in the future, the change does not impact any of the clients.
Secondly, an upper layer gives us the flexibility to apply different SLAs, policies, logging, and more to different clients. Even if you have a single client right now, it's better to architect for the future; reuse is the key advantage of these APIs.
Please check Pattern 2 in this document.
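Purely as an illustration of that layering (sketched in TypeScript/Express rather than a Mule flow; the /ui/records path and SYSTEM_API_URL are hypothetical), an Experience API that forwards straight to the System API when no orchestration is needed can be this thin:

    import express from "express";

    const app = express();
    app.use(express.json());

    // Hypothetical System API endpoint that owns the table persistence.
    const SYSTEM_API_URL = "http://system-api.internal/records";

    // Experience API: the only layer clients see. SLAs, policies and logging
    // attach here, while the System API stays tied to the database.
    app.post("/ui/records", async (req, res) => {
      // No orchestration or mapping needed, so the payload passes straight through.
      const upstream = await fetch(SYSTEM_API_URL, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(req.body),
      });
      res.status(upstream.status).json(await upstream.json());
    });

    app.listen(8081);

Because clients only ever see the Experience layer, the System API (and the system behind it) can change without touching any of them.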
That is a question for the enterprise architect in your organisation. In this case, the process API would probably be a simple proxy for the system API, but that might not always be the case in future. Also, it is sometimes useful to follow a standard architectural pattern even if it creates some spurious complexity in the implementation. As always, there are design trade-offs and the answer will depend on factors that cannot be known by people outside of your organisation.

Will it be safe to access Elasticsearch directly from the JavaScript API?

I am learning Elasticsearch. I wanted to know how safe (in terms of access control and validating user access) it is to access the ES server directly from the JavaScript API rather than through your backend. Will it be safe to access ES directly from the JavaScript API?
Depends on what you mean by "safe".
If you mean "safe to expose to the internet", then no, definitely not, as there isn't any access control and anyone will be able to insert data or even drop all the indexes.
This discussion gives a good overview of the issue. Relevant section:
Just as you would not expose a database directly to the Internet and let users send arbitrary SQL, you should not expose Elasticsearch to the world of untrusted users without sanitizing the input. Specifically, these are the problems we want to prevent:
Exposing private data. This entails limiting the searches to certain indexes, and/or applying filters to the searches.
Restricting who can update what.
Preventing expensive requests that can overwhelm or crash nodes and/or the entire cluster.
Preventing arbitrary code execution through dynamic scripts.
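A minimal sketch of the usual mitigation follows: a thin backend endpoint that pins the index and builds the query itself, so the browser never sends raw Elasticsearch DSL. This assumes the official Node client's v8-style API; the products index and field names are hypothetical.

    import express from "express";
    import { Client } from "@elastic/elasticsearch";

    const app = express();
    const es = new Client({ node: "http://localhost:9200" });

    // Clients send only a search term; the server controls index, fields and size,
    // so no arbitrary DSL, updates or index deletions can come from the browser.
    app.get("/search", async (req, res) => {
      const term = String(req.query.q ?? "").slice(0, 100); // crude input sanitizing
      const result = await es.search({
        index: "products", // pinned: private indexes stay unreachable
        size: 20,          // bounded: prevents expensive requests
        query: { multi_match: { query: term, fields: ["title", "description"] } },
      });
      res.json(result.hits.hits);
    });

    app.listen(3000);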
It's most certainly possible, and it can be "safe" if, say, you're using it as an internal tool behind some kind of authentication. In general, no, it's not secure. The Elasticsearch API can create, delete, update, and search, which is not something you want to give a client direct access to, or they could essentially do any or all of these things.
You could, in theory, create role-based auth with Elasticsearch Shield, but it's far from standard practice. It's not really any more difficult to implement search on your backend and then just have a simple call that returns search results.

Rate-Limit an API (Spring MVC)

I'm looking for the best, most efficient way to implement (or use one that is already set up) a rate limiter that would protect all my REST API URLs. The protection I'm looking for is a "calls per second per user" limiter.
I had a look around the net, and what came up was the use of either Redis or Guava's RateLimiter.
To be honest, I have never used Redis and I'm really not familiar with it. But looking at its docs, it seems to have a quite robust rate-limiting system.
I have also had a look at Guava's RateLimiter, and it looks a bit easier to use (no need for a Redis installation, etc.).
So I would like some suggestions on what would be, in my case, the best solution. Is using Redis "too much"?
Have any of you already tried RateLimiter? Is it a good solution? Is it scalable?
PS: I am also open to solutions other than the two I mentioned above if you think there are better choices.
Thank you!
If you are trying to limit access to your Spring-based REST API, you should use the token-bucket algorithm.
There is the bucket4j-spring-boot-starter project, which uses the bucket4j library to rate-limit access to the REST API. You can configure it via the application properties file. There is an option to limit access based on IP address or username.
If you are using Netflix Zuul, you could use Spring Cloud Zuul RateLimit, which supports several storage options: Consul, Redis, Spring Data, and Bucket4j.
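For intuition, here is a minimal per-user token-bucket sketch (in TypeScript, with hypothetical numbers; bucket4j implements the same idea with far more care around concurrency and shared storage):

    // Minimal token bucket: each bucket holds up to `capacity` tokens,
    // refilled at `refillPerSecond`. A request is allowed if a token remains.
    class TokenBucket {
      private tokens: number;
      private lastRefill = Date.now();

      constructor(private capacity: number, private refillPerSecond: number) {
        this.tokens = capacity;
      }

      tryConsume(): boolean {
        const now = Date.now();
        const elapsedSec = (now - this.lastRefill) / 1000;
        this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSecond);
        this.lastRefill = now;
        if (this.tokens >= 1) {
          this.tokens -= 1;
          return true;
        }
        return false;
      }
    }

    // Per-user limiter: burst of 10, sustained 5 requests/second.
    const buckets = new Map<string, TokenBucket>();
    function allow(userId: string): boolean {
      let bucket = buckets.get(userId);
      if (!bucket) {
        bucket = new TokenBucket(10, 5);
        buckets.set(userId, bucket);
      }
      return bucket.tryConsume();
    }

Each user can burst up to the bucket capacity and is then held to the refill rate, which is exactly the "calls per second per user" behaviour the question asks for.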
Guava's RateLimiter blocks the current thread, so if there is a burst of asynchronous calls against the throttled service, lots of threads will be blocked and the supply of free threads might be exhausted.
Perhaps the Spring-based library Kite meets your needs. Kite's "rate-limiting throttle" rejects requests after the principal reaches a configurable limit on the number of requests in some time period. The rate limiter uses Spring Security to determine the principal involved.
But Kite is still a single-JVM approach. If you do need a cluster-aware approach, Redis is the way to go.
There is no hard rule; it totally depends on your specific situation. Given that "I have never used Redis", I would recommend Guava's RateLimiter. Compared to Redis, a completely new NoSQL system for you, Guava's RateLimiter is much easier to get started with. By adding a few lines of code, you can dispense permits at a configurable rate. What is left to do is adapt it to fit your needs, like providing the rate limit on a per-user basis.

Should I make my CouchDB database server public-facing?

I'm new to CouchDB and am trying to comprehend how to properly make use of it. I'm coming from MongoDB, where I would always write a web layer and put it in front of Mongo so that I could allow users to access the data inside it. In fact, this is how I've used all databases for every web site I've ever written. Looking at Couch, I see that its native API is HTTP and that it has built-in things like OAuth support, and other features that hint to me that perhaps I should no longer have my code layer sitting in front of Couch, but instead write Views and such and just give my users accounts on Couch itself. I'm thinking in terms of an HTTP-based API for a site of mine, or something through which users would consume my data. Opening up Couch like this seems odd to me, though. Is OAuth, in Couch's sense, meant more for remote access by software that I'd write and run "officially" inside my own network, or is it literally meant for end users?
I know there might be things that could only be done through a code layer on top of CouchDB, such as having additional non-database-related things occur during API requests. So, thinking along those lines, I suspect I will still need a code layer anyway.
Dealer's choice.
Nodejitsu has a great writeup on this sort of topic here.
Not knowing your application specifics, I'll take a broad approach...
Back-end
If you want to prevent users from ever seeing your database, then keep it on the back-end. You can pipe everything through something like Node.js and present only what the user needs to see; they'll never know anything about the database.
See Resource View Presenter
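A minimal sketch of that piping, assuming a local CouchDB with a hypothetical articles database: the browser hits only the Node endpoint and only ever sees the fields we choose to expose.

    import express from "express";

    const app = express();
    // CouchDB stays on a private network; only this server holds credentials.
    const COUCH = "http://admin:password@localhost:5984";

    app.get("/articles/:id", async (req, res) => {
      const doc = await (await fetch(`${COUCH}/articles/${encodeURIComponent(req.params.id)}`)).json();
      if (doc.error) return res.status(404).json({ error: "not found" });
      // Present only what the user needs to see.
      res.json({ title: doc.title, body: doc.body });
    });

    app.listen(3000);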
Front-end
If you are not concerned about data security, you can host an entire app on CouchDB; see CouchApp. This approach has the benefit of using the replication mechanism to control the publishing of your site/data. The drawback here is that you will almost certainly run into some technical limitations that will require moving CouchDB closer to the back-end.
Bl-end
Have the app server present the interface and have the client pull the data from the database separately. This gives the most flexibility, but it can be a bag of hurt: even with good design, it could lead to supportability and scalability issues.
My recommendation
Use CouchDB on the back-end. If you need mobile clients to synchronize, then use a secondary, publicly exposed DB for this purpose and selectively sync that data to wherever it needs to go.
Simply put, no.
There's no way to secure Couch properly on a public-facing site. There's no way to restrict access at a fine enough level of granularity. If someone has access to any of the data, they have access to all of the data.
Not all data on a site is meant for public consumption, save for the most trivial of sites.
