Setting read access using permissions.acl - hyperledger-composer

Looking at the Hyperledger Composer tutorials, it seems access control can be implemented for transactions, but not for reading from asset registries.
Is there a way to prevent some participants from reading certain assets?

It's actually the reverse: the ACLs are defined in terms of participants attempting to perform operations, including READ, on resources such as assets.
Please refer to the ACL language docs for details:
https://hyperledger.github.io/composer/reference/acl_language.html
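For example, a pair of rules along these lines should hide one asset type from a participant type while still allowing broader read access. This is a sketch only, using hypothetical org.example names; rules are evaluated in order and the first matching rule applies:

rule DenyMemberReadOfPrivateAsset {
    description: "Members cannot read PrivateAsset entries"
    participant: "org.example.Member"
    operation: READ
    resource: "org.example.PrivateAsset"
    action: DENY
}

rule MemberReadEverythingElse {
    description: "Members can read all other resources"
    participant: "org.example.Member"
    operation: READ
    resource: "org.example.*"
    action: ALLOW
}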

Related

NEFilterProvider record network activity

NEFilterProvider, or more specifically its two subclasses NEFilterDataProvider and NEFilterPacketProvider, has the functionality to allow or deny network activity. However, I couldn't find any way to log that activity for debugging purposes.
I know the documentation says this:
it runs in a very restrictive sandbox. The sandbox prevents the Filter Data Provider extension from moving network content outside of its address space by blocking all network access, IPC, and disk write operations.
but is there any trick to log this anyway in debug mode? Maybe using os_log or something like that?
Yes, you can use os_log and read the output in the Console app. If you want to work around the privacy redaction (while developing/testing), use the %{public} modifier, like so:
import os.log

// ...somewhere in the provider class; %{public} disables Console redaction
os_log("something I want to log: %{public}@", String(describing: someVar))
You're right, the documentation is really lacking in this area, beyond the SimpleFirewall sample code and the WWDC video. I have an app in production using NEFilterDataProvider, but it nearly cost me my sanity to figure out how to put it all together. At some point I'm going to try to write some blog posts or make a demo repo to create a central community resource that shares hard-won knowledge and fills in the gaps in the documentation.

How to handle user roles with RethinkDB?

In RethinkDB, there does not seem to be built-in support for user roles/access permissions.
This seems to be a common feature in most established databases, including MongoDB. We are worried that this gives processes with access to the database too much power, and us as developers little control over who can access what, leading to potential security issues.
I'm wondering: how big of an issue is this? Is there an alternative way to replicate this functionality without RethinkDB supporting it out of the box?
EDIT:
As of RethinkDB 2.3, which was just released, you can now add users and ACLs!
2.3 Release Blog Post
Users documentation
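A minimal sketch with the 2.3 Python driver (the user name, password, and database below are hypothetical): create a user by inserting into the rethinkdb.users system table, then grant it permissions per database or table:

import rethinkdb as r

conn = r.connect('localhost', 28015, user='admin', password='admin_password')
# Create a user; the id field is the user name.
r.db('rethinkdb').table('users').insert({'id': 'app_reader', 'password': 'secret'}).run(conn)
# Grant that user read-only access to one database.
r.db('mydb').grant('app_reader', {'read': True, 'write': False, 'config': False}).run(conn)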
Original Answer
Access control (sometimes called ACLs) for RethinkDB is on the roadmap, but in the meantime I recommend setting up multiple RethinkDB instances, divided by user permissions, along with an auth key:
https://rethinkdb.com/docs/security/#securing-the-driver-port
RethinkDB allows you to set an authentication key by modifying the cluster_config system table. Once you set an authentication key, client drivers will be required to pass the key to the server in order to connect.
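Setting the key is a one-line ReQL update against the system table (a sketch, assuming the pre-2.3 Python driver; choose your own key value):

import rethinkdb as r

conn = r.connect('localhost', 28015)
# Set the cluster-wide authentication key that drivers must present.
r.db('rethinkdb').table('cluster_config').get('auth').update({'auth_key': 'newkey'}).run(conn)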
Hope that helps!

What are my alternatives for managing RabbitMQ channel changes as a part of CD process

I am looking for alternatives for managing my RabbitMQ setup, the same way I manage my RDBMS with Liquibase/Flyway or MongoDB with Mongeez.
After looking around a bit I haven't found many resources on it (which gets me wondering how companies actually do it).
I read a thread suggesting that each component declares the channels it needs: either they already exist, or they are created at runtime when needed.
Other than that I haven't found any mention of a requirement like mine. Am I looking at this the wrong way?
We manage it the following way. It's not a clean, straightforward solution, but it works.
Installation, update and base configuration of RabbitMQ is done via an Ansible role.
Creation, update and deletion of virtual hosts, users and access permissions is done via a second Ansible role.
Management, i.e. creation, update and deletion of queues and exchanges, is done from within the application (see the sketch below).
With this setup we were able to provide a multi-tenant configuration and efficiently manage several installations across several stages.
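The in-application part works because AMQP declarations are idempotent, so it is safe to declare the topology on every start. A minimal sketch with the Python pika client (the exchange and queue names are hypothetical); note that redeclaring an existing queue or exchange with different arguments raises a channel error, which effectively acts as drift detection:

import pika

# Declarations are idempotent: re-running with identical arguments is a no-op.
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='events', exchange_type='topic', durable=True)
channel.queue_declare(queue='billing.events', durable=True)
channel.queue_bind(queue='billing.events', exchange='events', routing_key='billing.*')
connection.close()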

Using the geocoder gem to fetch more requests

I am working with the geocoder gem and would like to process a larger number of requests from one IP. By default the Google API allows only 2,500 requests per day.
Please share your thoughts on how I can make more requests than the limit.
As stated before: using only the Google API, the only way around the limitation is to pay for it. Or, in a more shady way, make the requests from more than one IP/API key, which I would not recommend.
But to stay on the safe side I would suggest mixing the services up, since there are a few more geocoding APIs out there, for free.
With the right gem, mixing them is also not a big issue:
http://www.rubygeocoder.com/
It supports a couple of them with a nice interface. You would pretty much only have to add some rate-limiting counters to make sure you stay within the limits of each provider (see the sketch below).
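The counter idea is language-agnostic; here it is sketched in Python with hypothetical quota numbers (check each provider's current terms before relying on them):

import time
from collections import defaultdict

QUOTAS = {'google': 2500, 'other_provider': 10000}  # hypothetical daily limits

class QuotaTracker:
    def __init__(self, quotas):
        self.quotas = quotas
        self.counts = defaultdict(int)
        self.day = time.strftime('%Y-%m-%d')

    def pick(self):
        # Reset the counters when the day rolls over.
        today = time.strftime('%Y-%m-%d')
        if today != self.day:
            self.day, self.counts = today, defaultdict(int)
        # Hand out the first provider that still has quota left, or None.
        for name, quota in self.quotas.items():
            if self.counts[name] < quota:
                self.counts[name] += 1
                return name
        return None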
Or go the heavy way of implementing your own geocoding, for example with your own running OpenStreetMap database. The data can be downloaded here: http://wiki.openstreetmap.org/wiki/Planet.osm#Worldwide_data
Which way is best depends on what your actual requirements are and what resources you have available.

What open source cloud storage systems offer an append-only mode for buckets, directories, etc.?

I'm curious if any cloud storage system could be configured to provide the following workflow:
Anonymous users may upload messages/files into identifiable locations which we'll call buckets.
All users should have read access to all messages/files, but no anonymous user should have permissions to modify or delete them.
Buckets have associated public keys which a moderator uses to authenticate approvals or deletions of uploads.
Unapproved messages/files are eventually culled by the system to save space.
I suspect the answer might be "Tahoe-LAFS would love for someone to implement append-only mutable files, but nobody has done so yet."
I've surveyed a number of OSS projects in the storage space and not encountered anything that would provide this workflow purely by configuration and without writing code.
While not OSS, the lowest level of Windows Azure storage is actually implemented via an append-only mechanism. A video, presentation, and whitepaper can all be found here, and the details in the whitepaper would be useful to anyone looking to implement something like this for Tahoe-LAFS or any other OSS cloud storage system.
