Overhead to using composer-rest-server over JavaScript / Node SDK? - hyperledger-composer

We are trying to figure out best design practice while using Hyperledger Composer. We have the following questions:
1) If we are using composer-rest-server, then we will have to manage two server-side components:
i) running composer-rest-server; ii) running the application which will send requests to composer-rest-server to communicate with the network.
Isn't that overhead? What additional advantages do we get by using composer-rest-server? In fact, the client will probably have to authenticate twice, I guess.
2) If we are using the JavaScript SDK, then we will have to manage only one server-side application; kindly correct me in case of misunderstanding.
3) When I generated an AngularJS application using the Yeoman generator, it also asked me for composer-rest-server information, but I am not planning to use composer-rest-server and want to use 'composer-client' and 'composer-admin' only.

Yes, there will be some overhead in running the composer-rest-server in its own process; however, it will allow you to secure and scale your REST servers independently of your application. Depending on your scenario, that may be an overhead worth paying.
Another option would be to generate a LoopBack application (using the lb tools) that uses the loopback-connector-composer LoopBack connector directly. This may give you access to the underlying Express server and would allow you to merge your application and the REST server.
You are correct; however, you would have to build the REST API for your business network yourself, and manage authentication and certificates.
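For context, here is a minimal sketch of what building it yourself with the JavaScript SDK looks like, using the pre-card composer-client API of that era; the connection profile, network name, enrollment details, and transaction type below are hypothetical:

// Hedged sketch: connect with composer-client and submit a transaction.
// Profile, network, credentials, and the transaction type are placeholders.
const BusinessNetworkConnection = require('composer-client').BusinessNetworkConnection;

const connection = new BusinessNetworkConnection();

connection.connect('defaultProfile', 'my-network', 'admin', 'adminpw')
  .then(() => {
    const factory = connection.getBusinessNetwork().getFactory();
    // Build a transaction that is defined in the business network model.
    const tx = factory.newTransaction('org.example', 'SampleTransaction');
    return connection.submitTransaction(tx);
  })
  .then(() => connection.disconnect())
  .catch((err) => console.error(err));

You would still have to wrap something like this in your own routes, plus authentication, to end up with the equivalent of what composer-rest-server generates for you.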
The generated Angular application uses the REST API exposed by the composer-rest-server to interact with HLF.
Here is a DRAFT topology diagram that I am working on for contribution to the documentation.

Related

How to migrate REST APIs to GraphQL Apollo Federation

Planning to migrate my PHP APIs to GraphQL using Apollo Federation. After a bit of research, I see it is done in the following way:
My questions are:
Is there any better way to create the federated services so it is not a separate layer (1 for each REST API)? Maybe something close to the previous schema stitching approach where all can sit in one place and be stitched together at the end (instead of a specific federated layer for each service).
If this is the recommended way, how do I deploy this infrastructure? From the diagram, does it mean I have 5 instances running to cover all of the services?
Is it recommended to run Gateway and Federated services all inside one instance (from diagram - 3 servers running in one instance)?
Federated services are great when you want to break up the monolithic structure of a non-federated Apollo Server implementation. They can be designed by following micro-service best practices. Instead of blindly having one federated service per REST endpoint, you can have federated services based on the functionality each service is supposed to take care of; one service can call multiple REST endpoints. This gives you better control over scaling, securing, and managing services at the infrastructure level. An example can be as simple as Amazon, where item-browsing hits will be far more numerous than buying transactions. In that case you can have one federated service that provides browsing data, and another for managing transactions. You can then scale the first to multiple instances to handle user load and put additional security in place for the one handling transactions.
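For illustration, a rough sketch of a gateway sitting in front of two such functionality-based services; the service names, ports, and the Apollo Server 2.x-era serviceList API are assumptions:

// Hedged sketch: an Apollo gateway composing two federated services.
// Names and URLs are placeholders for your own services.
const { ApolloServer } = require('apollo-server');
const { ApolloGateway } = require('@apollo/gateway');

const gateway = new ApolloGateway({
  serviceList: [
    { name: 'browsing', url: 'http://localhost:4001/graphql' },
    { name: 'transactions', url: 'http://localhost:4002/graphql' },
  ],
});

// Subscriptions are not supported through the gateway in this era.
const server = new ApolloServer({ gateway, subscriptions: false });
server.listen({ port: 4000 }).then(({ url }) => console.log(`Gateway ready at ${url}`));

Each federated service is just an ordinary Apollo Server built with buildFederatedSchema, so you can scale the browsing service to many instances while keeping the transactions service behind stricter security.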
2 & 3. Yes, you would need to deploy all the components separately. I would recommend having all the services in the same VPC cluster so that you don't have to worry about network-layer security. If the services are deployed across multiple clusters, you will have to handle firewalls and HTTPS/TLS for every request, which adds unnecessary delay because of the network calls. It would only be milliseconds, but it can easily be avoided. Let me know if this helps.

What is the recommended way to invoke and query data/transactions that were modeled using Fabric Composer?

I am building a PoC using Fabric v0.6 and composer-ui. My question is about how to interact with the Fabric peers once I have deployed my .bna file to the Fabric network. In the past I have made invoke and query calls to my chaincode using gRPC, passing the function name and arguments through the call. In the case of chaincode deployed through Composer, there is a whole abstraction happening, so I am not sure whether the names of the transactions created in Composer translate exactly to names I can call via my gRPC calls on the client side (my Node application). I also don't know whether the arguments that I pass to the chaincode are the same, or whether any special argument is expected.
So I guess my question is: from the client side, how do I make calls to transactions in my chaincode that have been created using Composer? Are there client examples out there for Fabric v0.6? Thanks!
The first example that comes to mind is the sample-applications repository at https://github.com/fabric-composer/sample-applications
If you look in sample-applications/packages/getting-started, there is an example of a client application. The landRegistry.js file in the lib directory contains the bulk of the code used to interact with the business network.
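A hedged sketch in the spirit of landRegistry.js (composer-client 0.x era; the profile, enrollment identity, network, and registry names follow the getting-started sample but may differ in your setup):

// Connect to a deployed business network and list assets from a registry.
const BusinessNetworkConnection = require('composer-client').BusinessNetworkConnection;
const connection = new BusinessNetworkConnection();

connection.connect('defaultProfile', 'digitalproperty-network', 'WebAppAdmin', 'DJY27pEnl16d')
  .then(() => connection.getAssetRegistry('net.biz.digitalPropertyNetwork.LandTitle'))
  .then((registry) => registry.getAll())
  .then((titles) => {
    titles.forEach((title) => console.log(title.getIdentifier()));
    return connection.disconnect();
  })
  .catch((err) => console.error(err));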
There is also an application generator which is described in more detail at
https://fabric-composer.github.io/applications/genapp.html
You can also find reference documentation for both client-side and business network implementations at
https://fabric-composer.github.io/jsdoc/
You should also consider using the REST API that Composer can generate for your business network.
npm install -g composer-rest-server
composer-rest-server
Then fill in the details required to connect to your business network and the composer-rest-server will expose a Swagger defined REST API that you can exercise using Swagger UI. The REST API is expressed in terms of the assets, participants and transactions that are modeled in your business network.
More docs here:
https://fabric-composer.github.io/integrating/getting-started-rest-api.html
The advantage of using the REST API is that it keeps the coupling between the client application and the blockchain loose; the client doesn't need any Composer libraries and doesn't even need to know that the data source is a blockchain.
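Once the REST server is running, any HTTP client can consume it without Composer libraries; a minimal sketch in Node (the Vehicle asset type and the default port 3000 are assumptions based on your model and setup):

// Hedged sketch: read assets through the generated REST API.
// 'Vehicle' is a hypothetical asset type from your model.
const http = require('http');

http.get('http://localhost:3000/api/Vehicle', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(JSON.parse(body)));
}).on('error', (err) => console.error(err));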

is rethinkdb horizon well suited for a rest api web service?

I see that RethinkDB now has an app server called Horizon, and its examples include a lot of client apps without any backend server code.
If I wanted to create a REST api service with rethinkdb - does horizon still add value or should I just create a standard node.js rest api using rethinkdb libraries directly?
I see that horizon has some authentication, authorization and permissions built in which could be useful but I'm not sure if turning it into an api instead of a standard web app is making horizon bend into something it's not supposed to be.
If I wanted to create a REST api service with rethinkdb - does horizon still add value
No, if all you want is a REST API endpoint mapping CRUD operations onto your RethinkDB data, then Horizon won't help you there.
Horizon is great if you want a websocket API with "real-time" features and plan to use the Horizon client in the browser.
Horizon is opinionated in how it handles users and permissions (it enforces them on the server side using different users/permissions for each app instead of the RethinkDB users table).
# RethinkDB
r.db('rethinkdb').table('users')
r.db('rethinkdb').table('permissions')
# Horizon
r.db('myapp_internal').table('users')
r.db('myapp_internal').table('users_auth')
I'm currently playing around with a stack that uses feathers to design common services that can be exposed over a REST or websocket transport. It's more complex, but I might use both feathers and Horizon, though there will be some work to map permissions correctly across both endpoints. (Plus schema enforcement...) Feathers supports various authentication providers that return a JWT, which you could then pass to Horizon (if you set the same secret_key)...
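A rough sketch of the feathers side of that stack, using the feathers 2.x-era modules; the service name and in-memory store are placeholders:

// One service definition exposed over both REST and websocket transports.
const feathers = require('feathers');
const rest = require('feathers-rest');
const socketio = require('feathers-socketio');
const memory = require('feathers-memory');
const bodyParser = require('body-parser');

const app = feathers()
  .configure(rest())
  .configure(socketio())
  .use(bodyParser.json())
  .use('/messages', memory()); // hypothetical service backed by memory

app.listen(3030);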
If you don't need the real-time features in your database, you might want to check out PostgREST, as it has out-of-the-box JWT authentication and uses actual database roles for row-level authorization. "One source of truth". You could use that together with PostGraphQL if you want both REST and GraphQL! Plus you can store JSON data in columns these days, so it's all good!
So many options!
Good luck!
You can embed Horizon in a node app and only use a subset of its features: http://horizon.io/docs/embed/. You should be able to piggyback on the authentication pretty easily. It would be harder to piggyback on the permissions if you're implementing your own REST API, because the permissions system only controls access to collections.
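Embedding looks roughly like this, going by the embed docs; the project name and token secret are placeholders:

// Hedged sketch: Horizon attached to an existing Express HTTP server.
const express = require('express');
const horizon = require('@horizon/server');

const app = express();
const httpServer = app.listen(8181);

const horizonServer = horizon(httpServer, {
  project_name: 'myapp',                   // placeholder project name
  auth: { token_secret: 'shared-secret' }, // reuse this secret if you issue JWTs elsewhere
});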
Personally I suspect it will be more trouble than it's worth if you're embedding it just for the authentication.

Cloud / server side code with Apigee or Usergrid

Is it possible to execute server-side code (something like Parse "Cloud Code") with Apigee, as the backend for a mobile app client?
I'd want to use the out-of-the-box "App Services" functionality, but perform some extra stuff (like updating data) from the server side.
The only (naive?) way I can think of is this:
1) Have my own server running.
2) The mobile app uses the standard "App Services" API on Apigee.
3) If necessary, the client calls some custom API on my server, which lets my server call Apigee via REST to fetch data, calculate some results, and post the updated data back to Apigee, and then returns the result to the client.
Sounds a bit complicated (especially in terms of handling authentication) - are there any best practices to achieve something like I described?
Thanks!
Consider App Services as your database in the cloud, to which you can talk using APIs. Therefore, you really don't need that server in between unless you are doing some heavy lifting in it. You could make the API call directly from the app.
Even if you want to have a back-end server for your app, you can leverage the Node.js functionality that Apigee Edge provides and have a server up and running in the cloud quickly. More info can be found here
If you want to do server side validation, you should use a Node.js proxy that incorporates Usergrid. This will allow you to perform a query on the database and do processing of the results. Check out this presentation: https://speakerdeck.com/timanglade/coders-workshop-at-i-apis. In particular, see Section 7, which discusses using Usergrid and Node.js.
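For example, a hedged sketch of such a Node.js proxy using the old usergrid JavaScript SDK; the org/app names and the 'item' collection are hypothetical:

// Query Usergrid server-side, post-process, and return JSON to the client.
const express = require('express');
const usergrid = require('usergrid');

const client = new usergrid.client({
  orgName: 'my-org', // placeholder
  appName: 'my-app', // placeholder
});

const app = express();

app.get('/items', (req, res) => {
  const options = { type: 'item', qs: { ql: 'select *' } };
  client.createCollection(options, (err, collection) => {
    if (err) return res.status(500).send('error talking to Usergrid');
    const results = [];
    while (collection.hasNextEntity()) {
      results.push(collection.getNextEntity().get()); // raw entity data
    }
    res.json(results); // do any extra processing here before returning
  });
});

app.listen(3000);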

real time number crunching and storage on cloud

I have some hardware devices that send data which needs to be stored on a cloud server, and I also need to do some real-time processing on it.
The data they send needs to be preserved for months in custom binary files. The files for each device can grow up to 10GB in size over time.
There will be client programs (mobile / web) that will be looking at the processed data in real time.
My preferred choice of language is C/C++/C#, since there is time-sensitive number crunching involved.
The goal is to write a scalable application that can have thousands of such devices monitored on the cloud.
Do I have to write the code for running on the cloud (I understand Azure / Amazon EC2) up front? Can I write a multi-threaded desktop application and later migrate it to the cloud?
I have used Message passing interface (MPI) in the past for clusters. Can I still use MPI ?
If I use microsoft azure API can I still host my software on Amazon cloud ?
For mobile devices to talk to the server, I understand that I need to have a web service running. How can I convert a desktop program written in C++ / C# to act as a web service talking to the client?
Are there any 3rd-party frameworks or tools that can help me with my work?
With most cloud compute services you can deploy an off-the-shelf server and install your own software on it. So, yes, you can write and test your application locally, then migrate to the cloud once you get all the bugs worked out. Here are the available EC2 server configurations.
I have not tried MPI, but you should be able to run just about anything you want on the servers in the cloud. However, Amazon does offer the Simple Queue Service (SQS), which provides message passing in the cloud. Your software does not need to run in the cloud to use this service.
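For illustration, a minimal sketch of sending a device reading through SQS with the AWS SDK for Node.js (the queue URL and payload are placeholders; AWS also ships C++ and .NET SDKs with equivalent calls):

// Hedged sketch: push one device reading onto an SQS queue.
const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'us-east-1' });

sqs.sendMessage({
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/device-data', // placeholder
  MessageBody: JSON.stringify({ deviceId: 42, reading: 3.14 }),
}, (err, data) => {
  if (err) console.error(err);
  else console.log('queued message', data.MessageId);
});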
I have not used Azure. I doubt there are any restrictions regarding which external servers you use for storage and/or compute. However, keeping your cloud storage and compute resources within a single provider will reduce costs, improve performance and provide you with a unified management interface and billing system.
Web servers are fairly simple things. See this post. That took me about 10 seconds to find.
There is plenty of third-party software out there. Figure out what you need in more detail and ask more specific questions.
