What is happening when you deploy a BNA file to Hyperledger Composer?

After I've put together my business network definition, what actually happens on the peers when I deploy that package? I'm especially interested in how a Hyperledger peer can interpret JavaScript, since that doesn't appear to be a supported language for chaincode.

The Composer chaincode is written in Go. It uses the Duktape JavaScript interpreter to execute the user (and system) JS code within a Go process.
The Composer chaincode maps the public JS API to the underlying Fabric Go API calls.
From a Fabric perspective this is just a "normal" piece of Go chaincode, albeit quite a complex one!
When you "deploy" a business network using the Composer CLI, you are actually doing 2 things:
deploying the Composer chain code (Go) and starting it
deploying the bytes of the business network archive and storing it in world-state, so that it is available to the interpreter when you submit transactions
In the future we would like to replace Duktape with native Node.js execution. Thanks to Fabric's modular architecture (and its use of Docker containers and gRPC) this should be possible.
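To make the "user JS code" concrete: a business network contains transaction processor functions like the minimal sketch below (the org.example.trading namespace, Trade transaction, and Commodity asset are assumed model types, not part of the original question). Duktape executes this JavaScript inside the Go chaincode process, and API calls such as getAssetRegistry are mapped onto the underlying Fabric Go API:

```javascript
/**
 * Transfer a commodity to a new owner.
 * @param {org.example.trading.Trade} trade - the trade to process
 * @transaction
 */
function tradeCommodity(trade) {
    // Reassign ownership, then persist the change to the asset registry.
    trade.commodity.owner = trade.newOwner;
    return getAssetRegistry('org.example.trading.Commodity')
        .then(function (assetRegistry) {
            return assetRegistry.update(trade.commodity);
        });
}
```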

Related

Is Hyperledger Composer meant for production?

Based on the training videos I have watched, they all talk about Composer as something you use to "define and test your business network", but it almost feels like once it's verified and tested, you throw it away and use something more serious. Am I wrong? Even the term "playground" feels "experimental" -- something you mess with, but that's all. So, would you build a production system based on Composer? How do you split it across multiple machines for redundancy? If Composer is so slick and the "obvious" choice for building Hyperledger applications, when would you NOT use it and use Fabric or Sawtooth instead? I am trying to get started and don't want to waste time on the wrong path. If you were to build a "serious" supply chain application with multiple players, what framework approach would you take? Thank you
Hyperledger Composer is not a separate platform; it uses Hyperledger Fabric "under the hood". The main limitation of Composer is that it does not expose all of Fabric's options and features through its own interface, so it allows for rapid prototyping at the expense of flexibility. Composer does have an impressive user interface.
Unfortunately, Composer is now in maintenance mode with no new features being put into it. See https://lists.hyperledger.org/g/composer/message/125
I would consider Hyperledger Sawtooth or Hyperledger Fabric for permissioned blockchain applications.

Is it possible to use Node.js crypto module functionality in Fabric Composer?

I would like to be able to use various functions from the Node.js crypto module in my Fabric Composer chaincode. Is that possible? I tried hashing a simple string within my chaincode but it didn't work: in Playground I received the error message 'TypeError: Cannot read property 'apply' of null', and using the CLI against a local HLF instance the transaction never completed. I tested my JavaScript hashing code separately and it works, but not when I try to run it within chaincode.
At the moment you cannot use Node modules in transaction processor functions; you're limited to base JavaScript. It's possible that Node.js support will come in the future, and making crypto APIs available in Composer is another option being considered.
If you want to try something in the meantime, crypto-browserify might be worth investigating. I don't know if anyone has got that to work though, so please report back if you try it.
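As an illustration of the "base JavaScript only" constraint, here is a minimal sketch of a transaction processor function that hashes a string without any modules (the HashDemo transaction with a payload string property is a hypothetical model type, and djb2 is a toy hash, not a cryptographic one):

```javascript
/**
 * Hash a string using base JavaScript only.
 * @param {org.example.HashDemo} tx - hypothetical transaction with a `payload` string
 * @transaction
 */
function hashDemo(tx) {
    // var crypto = require('crypto'); // fails: modules cannot be loaded here
    // djb2 string hash: plain JS, so it runs, but it is NOT cryptographically secure
    var hash = 5381;
    for (var i = 0; i < tx.payload.length; i++) {
        hash = ((hash << 5) + hash + tx.payload.charCodeAt(i)) | 0;
    }
    console.log('djb2 hash: ' + (hash >>> 0).toString(16));
}
```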

What is the best way to setup the API application using Protractor?

I'm setting up my front-end application to use continuous integration in CircleCI. Unit tests work fine, but end-to-end tests do not.
The problem is that they require the backend (API) server to be running, and ours lives in a completely separate application. So what is the best way to set up this backend server (with CI in mind)?
I thought about uploading it to Heroku, but then I'd have to keep manually updating the code via git. Another option was to download the code to the CI VM and run the server directly there, but that is a lot of work (installing Ruby, Postgres, gems...), and it hardly seems like the best option.
Has anyone been through the same situation? How do you usually deal with this kind of situation?
I ended up doing everything inside the CI. I made some custom scripts that configure the backend project every time the test suite is run. I also cached the folder with the backend code and the gems (which were taking ~2 min to install).
The configuration step now adds ~20 seconds to the total time, so it wasn't a big deal. Although I still think this is probably not the best way to do it, it has some advantages, such as not having to worry about updating the backend code (it pulls from master automatically) or its database (it runs rake db:reset after updating the code).
Assuming the API server is running somewhere, configure the front-end application to point there while in the test/CI environment, at least to start out. If there are multiple API environments, choose the one that most closely matches the front-end environment (e.g. dev, staging, etc.).
It gets more complicated if/when you need to run the e2e tests each time the API is built, or need to match up specific build versions of the front-end and the API. In that case you will have to run the API server as part of the test.
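As a sketch of the first approach, the Protractor config can read the API location from environment variables that CI sets per environment (the variable names, URLs, and spec path here are assumptions, not part of the original setup):

```javascript
// protractor.conf.js
exports.config = {
    framework: 'jasmine',
    specs: ['e2e/**/*.spec.js'],
    // Base URL of the front-end under test; browser.get('/') resolves against it.
    baseUrl: process.env.E2E_BASE_URL || 'http://localhost:4200',
    params: {
        // Exposed to specs as browser.params.apiUrl, e.g. for seeding test data.
        apiUrl: process.env.E2E_API_URL || 'https://staging-api.example.com'
    }
};
```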

How to implement a CLI client for a golang daemon?

I have a Linux daemon, written in Go, that exposes an HTTP API. At startup it initializes its state, and from then on it answers whenever I call the API. Initialization is expensive: it reads many configs, creates many objects, etc.
My problem is that if the main process dies, I can't use the HTTP API ;). My code isn't perfect, and sometimes it hangs or crashes, or users disable the Linux service. But I still need some low-level functionality to keep working.
If I implement all of the web API's functions in the CLI as well, the CLI's startup will be very slow and heavy on the system. And splitting the implementation between the CLI and the web API creates an even bigger problem: inconsistency. For example, the web API could start a create while, at the same time, the CLI deletes everything. I would have to implement locking to prevent this. (I don't think duplicating the code on both sides is a good idea.)
I don't use a database server (and don't need one). Maybe I could store the data in files, or use some kind of shared memory?
My question is: how can I share objects' data between the Go daemon and a CLI client?
Go has a built-in RPC system (the net/rpc package) for easy communication between Go processes. You could also take a look at ZeroMQ (0mq), or use D-Bus.

Can I use Meteor for this?

I'm looking for a way to create an app which has a realtime web interface as well as an API which can be called by a node.js client while sharing most of its code.
I'd like to be able to manage data, monitor and execute tasks inside of my app via browser, but also have an automation/scheduling program which connects to my web app and tells it to run various tasks and get results of each task.
Unfortunately, it doesn't look like I can connect to Meteor from the server, so I'm wondering if there's another approach. Is what I described even possible using Meteor?
I have done some testing using socket.io and I think I may be able to do it this way, but Meteor seems like it'd be really great for the realtime user interface.
Yes, you can use npm packages to do what you want, just like in standard Node.js programming.
There is one error you might run into when calling Meteor code from external callbacks, but it is easy to solve (see the sketch below).
I guess in your case you could set up a TCP server that way and have it update a collection; clients would then get the results of each task through Meteor's reactive collection publishing mechanism.
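A minimal sketch of that idea, assuming a hypothetical tasks collection and an arbitrary port; Meteor.bindEnvironment is the usual fix for the "code must always run within a Fiber" error that plain Node callbacks trigger when they touch Meteor collections:

```javascript
// server/tcp-api.js -- runs inside the Meteor server process
import net from 'net';
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Tasks = new Mongo.Collection('tasks'); // hypothetical collection

Meteor.startup(() => {
  const server = net.createServer((socket) => {
    // bindEnvironment lets this plain Node callback use Meteor collections.
    socket.on('data', Meteor.bindEnvironment((data) => {
      Tasks.insert({ command: data.toString().trim(), createdAt: new Date() });
      socket.write('ok\n');
    }));
  });
  server.listen(3500); // arbitrary port for the automation/scheduler client
});
```

The automation program can then connect with a plain TCP client (or you could expose an HTTP endpoint instead) while the browser UI stays reactive.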
