Is there a way to see 'raw' blockchain data using Hyperledger Composer?

Composer seems to add quite a bit of abstraction on top of Fabric - is there any way to see the underlying cryptography?
For example:
- Is there a way to see transaction hashes?
- Is there a way to examine past blocks?
Thanks!

From my experience, Composer does not give you a "block"-level view of your transactions. To see transaction hashes and other transaction information, you can use a query. Create a query.qry file in the root of your project directory, then add this:
query getAllHistorianRecords {
  description: "getTradeRelatedHistorianRecords"
  statement:
    SELECT org.hyperledger.composer.system.HistorianRecord
      WHERE (transactionTimestamp > '0000-01-01T00:00:00.000Z')
}
This will let you see data such as:
{
  "$class": "org.hyperledger.composer.system.HistorianRecord",
  "transactionId": "b7b202906deba4d4bca1581ae6033562699361d67d31c2af45cd60b0225d5624",
  "transactionType": "org.hyperledger.composer.system.AddParticipant",
  "transactionInvoked": "resource:org.hyperledger.composer.system.AddParticipant#b7b202906deba4d4bca1581ae6033562699361d67d31c2af45cd60b0225d5624",
  "eventsEmitted": [],
  "transactionTimestamp": "2017-10-03T16:24:14.864Z"
}
...
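If you want to run that named query programmatically, a minimal sketch using the composer-client Node.js API could look like this (the card name admin@my-network is a placeholder, not from the original answer):

const { BusinessNetworkConnection } = require('composer-client');

async function listHistorianRecords() {
  // 'admin@my-network' is a placeholder card name - use your own business network card.
  const connection = new BusinessNetworkConnection();
  await connection.connect('admin@my-network');

  // Runs the named query defined in the .qry file above.
  const records = await connection.query('getAllHistorianRecords');
  records.forEach(record => {
    console.log(record.transactionId, record.transactionType, record.transactionTimestamp);
  });

  await connection.disconnect();
}

listHistorianRecords().catch(console.error);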

Related

Prevent new ARN Version generation on lambda layer update

I have created a layer which contains some code that can be used by multiple Lambda functions.
Something like this:
module.exports = {
  "INVALID_RFC": { "code": "111", "message": "RFC Code is invalid" }
};
The problem is that whenever I make an update to the layer, AWS generates a new version for it. Because of this, I have to manually update the version in all of the Lambda functions. I understand its importance and use case, but I want to prevent it. Manually updating every Lambda each time takes too much time.
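Since Lambda layer versions are immutable, one possible workaround (not from the original question) is to script the version bump instead of doing it by hand. A rough sketch using the AWS SDK for JavaScript v3, where the region, layer name, and function names are placeholders:

const {
  LambdaClient,
  ListLayerVersionsCommand,
  UpdateFunctionConfigurationCommand
} = require('@aws-sdk/client-lambda');

const client = new LambdaClient({ region: 'us-east-1' }); // placeholder region

async function pinFunctionsToLatestLayer(layerName, functionNames) {
  // Find the highest published version of the layer.
  const { LayerVersions } = await client.send(
    new ListLayerVersionsCommand({ LayerName: layerName })
  );
  const latest = LayerVersions.reduce((a, b) => (a.Version > b.Version ? a : b));

  for (const name of functionNames) {
    // Note: Layers replaces the function's whole layer list;
    // merge with the existing list if the function uses more than one layer.
    await client.send(
      new UpdateFunctionConfigurationCommand({ FunctionName: name, Layers: [latest.LayerVersionArn] })
    );
    console.log(`${name} now uses ${latest.LayerVersionArn}`);
  }
}

// 'shared-error-codes' and the function names are placeholders.
pinFunctionsToLatestLayer('shared-error-codes', ['function-one', 'function-two'])
  .catch(console.error);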

Grafana to use subtraction of two fields in Elasticsearch data source

I have two fields, called 'status_codes' and 'requests'.
I want to get the number of failed requests.
My equation is [requests - number of successful requests].
In the script I wrote something like this: _value - doc['#status_codes.200'].value
But the value returned in the graph is 'N/A'.
I'm using Elasticsearch (7.6.0) and Grafana (6.6.2).
Following is the output file which I'm sending to Elasticsearch:
{
  "latencies": {
    "total": 3981710268690,
    "mean": 43876078,
    "50th": 916913,
    "90th": 2217744,
    "95th": 5162430,
    "99th": 60233348,
    "max": 60000209373,
    "min": 43652
  },
  "#version": "1",
  "latest": "2020-03-05T16:14:44.23387091Z",
  "path": "test23.json",
  "duration": 61163899322,
  "wait": 552109,
  "status_codes": {
    "0": 90624,
    "200": 125
  },
  "earliest": "2020-03-05T16:13:43.069971588Z",
  "rate": 1483.702004057131,
  "throughput": 2.0436707446156577,
  "#timestamp": "2020-03-05T16:14:44.453Z",
  "errors": [
    "Post http://www: dial tcp 0.0.0.0:0->10.133.9.87:8688: socket: too many open files",
    "Post http://www: dial tcp: lookup internal-netty-load-balancer-937469711.us-east-1.elb.amazonaws.com on 10.20.30.30: dial udp 10.20.30:45: socket: too many open files"
  ],
  "bytes_in": {
    "mean": 70.90298515686123,
    "total": 6434375
  },
  "requests": 90749,
  "Report_Title": "test23",
  "host": "ABS",
  "success": 0.0013774256465635985,
  "end": "2020-03-05T16:14:44.234423019Z",
  "bytes_out": {
    "mean": 70.90298515686123,
    "total": 6434375
  }
}
Also, I have used the Singlestat plugin as #yash mentioned, but I still could not resolve the issue.
[Screenshots attached: query section and visualization section]
Can someone help me?
This is a fairly easy task. You just need to use either the 'Singlestat Math' or the 'MetaQueries' plugin. Use the Count metric in two queries in the same panel: one to get the count of successful status codes, and the other for unsuccessful status codes. Then use either plugin to subtract the result of one query from the other.
https://grafana.com/grafana/plugins/blackmirror1-singlestat-math-panel
https://grafana.com/grafana/plugins/goshposh-metaqueries-datasource
I suggest you go with the Singlestat Math plugin; from my experience it is easier to work with.
Note: in the Singlestat Math plugin, the calculation (A-B) is done in the Visualization section, not in the Query section.
P.S. The Singlestat Math plugin actually adds a new panel type in the Visualization section; it is a different panel from the default Singlestat panel.
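If you would rather have Elasticsearch compute the difference itself instead of relying on a panel plugin, a bucket_script pipeline aggregation is another option. This is only a sketch, not part of the original answer; it assumes the document shown above, and depending on your mapping the success field may be status_codes.200 or #status_codes.200 as in your script:

{
  "size": 0,
  "aggs": {
    "per_minute": {
      "date_histogram": { "field": "#timestamp", "fixed_interval": "1m" },
      "aggs": {
        "total_requests": { "max": { "field": "requests" } },
        "success_requests": { "max": { "field": "status_codes.200" } },
        "failed_requests": {
          "bucket_script": {
            "buckets_path": {
              "total": "total_requests",
              "success": "success_requests"
            },
            "script": "params.total - params.success"
          }
        }
      }
    }
  }
}

Recent versions of Grafana's Elasticsearch query editor also expose a Bucket Script metric type that builds this kind of aggregation for you.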
Finally I found the solution, as follows:
[solution screenshot]
Thanks everyone.

Video Inside Silhouette using kinect

I am using this Git repository.
I am getting the error: data/config.json does not exist or could not be read.
How do I create the JSON file, or load one?
The configuration JSON file is missing from the source code; however, checking how the JSON is parsed in ConfigManager should allow you to create one from scratch.
Just by looking at the property names and types in that class, you can work out something like this:
{
  "clips": [
    {
      "silhouette": "add your silhouette filename here",
      "background": "add your background filename here",
      "duration": 0
    }
  ],
  "useKinect": true,
  "useGpu": false,
  "name": "Your Application Name here",
  "resizeSilhouette": false,
  "mirrorSilhouette": false,
  "resizeSilouhette": false,
  "overlayVideo": true,
  "useActionClips": false,
  "silhouettePadding": {
    "top": 5,
    "right": 5,
    "bottom": 5,
    "left": 5
  },
  "centerOfMass": true,
  "showTime": true,
  "smoothSilhouette": 0,
  "crossfade": 0,
  "silhouetteCache": {
    "enabled": false,
    "minFrames": 3,
    "maxFrames": 10
  },
  "scale": {
    "width": 640,
    "height": 480
  },
  "osc": {
    "enabled": false,
    "serverPort": 12000,
    "clientAddress": "127.0.0.1",
    "clientPort": 12001,
    "channels": 0
  },
  "actions": {
    "frequency": 1,
    "clips": [
      "clipName1", "clipName2"
    ]
  }
}
If you save this as config.json in your sketch's data folder it should load.
However, bear in mind this may also crash, as I've just put in dummy placeholder data to give you an idea. Fill in the values you know for your project and figure out the rest as you go along.
Unfortunately, the GitHub project you decided to use isn't documented, so you will have to go through the source code and understand it before you can use it.
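To sanity-check the file before wiring it into the full application, a tiny Processing sketch along these lines could help (this assumes a Processing project, since the answer refers to the sketch's data folder; it is not part of the original repository):

// Minimal check that data/config.json exists and parses; prints a few values.
JSONObject config;

void setup() {
  config = loadJSONObject("config.json"); // fails loudly if the file is missing or malformed
  println("name: " + config.getString("name"));
  println("useKinect: " + config.getBoolean("useKinect"));
  println("clips: " + config.getJSONArray("clips").size());
  exit();
}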

Ruby neo4j-core mass processing data

Has anyone used Ruby neo4j-core to mass process data? Specifically, I am looking at taking in about 500k lines from a relational database and insert them via something like:
Neo4j::Session.current.transaction.query
  .merge(m: { Person: { token: person_token } })
  .merge(i: { IpAddress: { address: ip, country: country,
                           city: city, state: state } })
  .merge(a: { UserToken: { token: token } })
  .merge(r: { Referrer: { url: referrer } })
  .merge(c: { Country: { name: country } })
  .break # This will make sure the query is not reordered
  .create_unique("m-[:ACCESSED_FROM]->i")
  .create_unique("m-[:ACCESSED_FROM]->a")
  .create_unique("m-[:ACCESSED_FROM]->r")
  .create_unique("a-[:ACCESSED_FROM]->i")
  .create_unique("a-[:ACCESSED_FROM]->r")
  .create_unique("i-[:IN]->c")
  .exec
However, doing this locally takes hours on hundreds of thousands of events. So far, I have attempted the following:
* Wrapping Neo4j::Connection in a ConnectionPool and multi-threading it - I did not see much speed improvement here.
* Doing tx = Neo4j::Transaction.new and tx.close every 1000 events processed - looking at a TCP dump, I am not sure this actually does what I expected. It makes the exact same requests, with the same frequency, but just gets a different response.
With Neo4j::Transaction I see a POST every time the .query(...).exec is called:
Request: {"statements":[{"statement":"MERGE (m:Person{token: {m_Person_token}}) ...{"m_Person_token":"AAA"...,"resultDataContents":["row","REST"]}]}
Response: {"commit":"http://localhost:7474/db/data/transaction/868/commit","results":[{"columns":[],"data":[]}],"transaction":{"expires":"Tue, 10 May 2016 23:19:25 +0000"},"errors":[]}
With Non-Neo4j::Transactions I see the same POST frequency, but this data:
Request: {"query":"MERGE (m:Person{token: {m_Person_token}}) ... {"m_Person_token":"AAA"..."c_Country_name":"United States"}}
Response: {"columns" : [ ], "data" : [ ]}
(Not sure if that is intended behavior, but it looks like less data is transmitted via the non-Neo4j::Transaction technique - it is highly possible I am doing something incorrectly.)
Some other ideas I had:
* Post process into a CSV, SCP up and then use the neo4j-import command line utility (although, that seems kinda hacky).
* Combine both of the techniques I tried above.
Has anyone else run into this / have other suggestions?
Ok!
So you're absolutely right. With neo4j-core you can only send one query at a time, and with transactions all you're really getting is the ability to roll back. Neo4j does have a nice HTTP JSON API for transactions which allows you to send multiple Cypher requests in the same HTTP request, but neo4j-core doesn't currently support that (I'm working on a refactor for the next major version which will allow this). So there are a number of options:
* You can submit your requests via raw HTTP JSON to the APIs (see the sketch after this list). If you still want to use the Query API, you can use the to_cypher and merge_params methods to get the Cypher and params for that (merge_params is a private method currently, so you'd need to send(:merge_params)).
* You can load via CSV, as you said. You can either:
  * use the neo4j-import command, which is very fast but requires your CSV to be in a specific format, requires that you create the DB from scratch, and requires that you create indexes/constraints after the fact, or
  * use the LOAD CSV command, which isn't as fast, but is still pretty fast.
* You can use the neo4apis gem to build a DSL to import your data. The gem will create Cypher queries under the covers and will batch them for performance. See examples of the gem in use in neo4apis-twitter and neo4apis-github.
* If you are a bit more adventurous, you can use the new Cypher API in neo4j-core via the new_cypher_api branch on the GitHub repo. The README in that branch has some documentation on the API, but also feel free to drop by our Gitter chat room if you have questions on this or anything else.
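As a rough illustration of the raw-HTTP option above (not code from the answer): the transactional endpoint accepts a statements array, so many MERGEs can be batched into one request. The endpoint, port, and the people collection below are assumptions for a default local install:

require 'net/http'
require 'json'
require 'uri'

# Default transactional endpoint for a local Neo4j 2.x/3.x server.
uri = URI('http://localhost:7474/db/data/transaction/commit')

# 'people' is a placeholder for your batch of rows from the relational DB.
statements = people.map do |person|
  {
    statement: 'MERGE (m:Person {token: {token}})', # old-style {param} syntax, as in the thread
    parameters: { token: person[:token] }
  }
end

request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json', 'Accept' => 'application/json')
request.body = { statements: statements }.to_json

response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
puts JSON.parse(response.body)['errors']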
If you're implementing a solution which is going to make queries like above where you have multiple MERGE clauses, you'll probably want to profile your queries to make sure that you are avoiding the eager (that post is a bit old and newer versions of Neo4j have alleviated some of the need for care, but you can still look for Eager in your PROFILE)
Also worth a look: Max De Marzi's post on Scaling Cypher Writes

How to do an upsert / push with mongoid / moped

I'm using Mongoid (v3) to access MongoDB, and want to perform this action:
db.sessionlogs.update(
  { sessionid: '12345' },                    /* selection criteria */
  { '$push': { rows: "new set of data" } },  /* modification */
  true                                       /* upsert */
);
This works fine in the mongo shell. It's also exactly what I want, since it's a single atomic operation, which is important because I'm going to be calling it a lot. I don't want to have to do two operations - a fetch and then an update. I've tried a bunch of things through Mongoid, but can't get it to work.
How can I get Mongoid out of the way and just send this command to MongoDB? I'm guessing there's some way to do this at the Moped level, but the documentation for that library is basically non-existent.
[Answer found while writing the question...]
criteria = Sessionlogs.collection.find(:sessionid => sessionid)
criteria.upsert("$push" => {"rows" => datarow})
Here is one way to do it:
session_log = SessionLog.new(session_id: '12345')
session_log.upsert
session_log.push(:rows, "new set of data")
Or another:
SessionLog.find_or_create_by(session_id: '12345').
push(:rows, "new set of data")
#push performs an atomic $push on the field. It is explained on the
Atomic Persistence page.
(Note: the examples use UpperCamelCase and snake_case as is Ruby convention.)
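For reference, a minimal sketch of the kind of model the snippets above assume; the class, collection, and field names are guesses based on the question (Mongoid 3 style), so adjust them to your schema:

class SessionLog
  include Mongoid::Document
  store_in collection: 'sessionlogs'  # matches the db.sessionlogs collection in the question

  field :session_id, type: String
  field :rows, type: Array, default: []
end

# Atomic upsert + $push, as in the answer above:
SessionLog.find_or_create_by(session_id: '12345').push(:rows, 'new set of data')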
Don't go down to Moped just yet; you can use the find_and_modify operation to achieve the same thing (with all the default scope and inheritance goodies).
Sample that saves an edge in a graph if it does not already exist:
edge = { source_id: session[:user_id], dest_id: product._id, name: edge_name }
ProductEdge.where(edge).find_and_modify(ProductEdge.new(edge).as_document, { upsert: true })
