Hyperledger Composer REST API filter not working for asset - hyperledger-composer

I am trying to call the REST API using the following filter
api/Commodity?filter={"where":{"owner":"resource:org.example.mynetwork.Trader%231"},%20"include":"resolve"}
but I am getting the following error:
{"error":{"statusCode":500,"name":"Error","message":"2 UNKNOWN: error executing chaincode: transaction returned with failure: Error: ExecuteQuery not supported for leveldb","code":2,"metadata":{"_internal_repr":{}},"details":"error executing chaincode: transaction returned with failure: Error: ExecuteQuery not supported for leveldb","stack":"Error: 2 UNKNOWN: error executing chaincode: transaction returned with failure: Error: ExecuteQuery not supported for leveldb\n at new createStatusError (/home/composer/.npm-global/lib/node_modules/#ibmblockchain/composer-rest-server/node_modules/grpc/src/client.js:64:15)\n at /home/composer/.npm-global/lib/node_modules/#ibmblockchain/composer-rest-server/node_modules/grpc/src/client.js:583:15"}}
Kindly suggest what is wrong here?

The important part of the response is this: Error: ExecuteQuery not supported for leveldb
This says that your Fabric is configured to use the built-in LevelDB store for the world state. LevelDB does not support rich queries, and that includes REST filters.
You need to change your Fabric setup to use CouchDB as the world state store instead.
The Fabric documentation covers building Fabric networks, and that page has a specific section about enabling CouchDB:
see https://hyperledger-fabric.readthedocs.io/en/release-1.2/build_network.html
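For reference, the switch is usually made in the peer's docker-compose definition. A minimal sketch is below; the peer and CouchDB service names are taken from the standard sample network and are assumptions, not your actual configuration:

peer0.org1.example.com:
  environment:
    # Use CouchDB instead of the default LevelDB for the world state.
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=

If you are using the "first network" sample from that page, it can also be brought up with CouchDB directly via ./byfn.sh up -s couchdb.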

Related

Vertex Pipelines component throws "User does not have bigquery.jobs.create permission in project xxx-tp."

When a Vertex Pipelines component launches a BigQuery job, I encounter the following error
google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/ddde1b02a7e52415cp-tp/jobs?prettyPrint=false:
Access Denied: Project ddde1b02a7e52415cp-tp: User does not have bigquery.jobs.create permission in project ddde1b02a7e52415cp-tp.
This is due to the BigQuery client not being initialized with an explicit project.
The code runs in a managed environment that belongs to a different project than the one running the pipeline, so the client cannot automatically identify the project that should run the job.
Initializing the BigQuery client with an explicitly specified project ID in the BigQuery code solved the issue:
bigquery.Client(project=[your-project], credentials=credentials)
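A minimal sketch of the fix is below; the project ID is a placeholder and the credentials come from the runtime's application default credentials, so adapt both to your setup:

from google.cloud import bigquery
import google.auth

# Application default credentials of the managed environment.
credentials, _ = google.auth.default()

# Explicitly name the project that should create (and be billed for) the job,
# instead of letting the client infer it from the managed environment's project.
client = bigquery.Client(project="your-project-id", credentials=credentials)

# Jobs created through this client now run in "your-project-id".
rows = client.query("SELECT 1").result()
print(list(rows))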

Laravel Horizon: ErrorException: Warning: PDO::prepare(): MySQL server has gone away

Laravel Version: 5.7.28
PHP Version: 7.2.15
Database Driver & Version: MariaDB 10.2.23
I am struggling with a bug on my production server using Horizon.
ErrorException: Warning: PDO::prepare(): MySQL server has gone away
[internal] in unserialize
You can see a stack trace of the error here: https://sentry.io/share/issue/b105b7946b524a9e841f56f44445ea14/
As far as I can tell, this error should be caught by the Laravel framework. I'm not sure why it's not being caught and turned into a QueryException which would then trigger the reconnection and/or killing the worker.
See: https://github.com/laravel/framework/blob/9fb420cc29a7dd5de5051f09c523ffc3ea01b969/src/Illuminate/Database/Connection.php#L663
And then: https://github.com/laravel/framework/blob/9fb420cc29a7dd5de5051f09c523ffc3ea01b969/src/Illuminate/Database/Connection.php#L735
My understanding is that any Exception should be caught and re-thrown as a QueryException, which the framework would then catch properly, reconnecting to the database.
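For illustration, this is roughly the path I expect (a paraphrased sketch of the linked code using the Laravel 5.7 QueryException constructor, not a copy of the framework source):

<?php
// Sketch only: shows the wrap-and-rethrow behaviour described above.
function runQueryCallback(string $query, array $bindings, callable $callback)
{
    try {
        // Execute the statement via PDO.
        return $callback($query, $bindings);
    } catch (\Exception $e) {
        // Expectation: any failure, including "server has gone away", is
        // re-thrown as a QueryException so the framework can inspect it.
        throw new \Illuminate\Database\QueryException($query, $bindings, $e);
    }
}
// Higher up, the framework should catch the QueryException, check
// causedByLostConnection() on the previous exception, reconnect, and retry once.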
This is an occasional error so it's difficult to reproduce; I've tried to manually throw a similar error, but it is caught and handled properly.
Any general guidance on why this error might be different in production and ideas on how I can isolate the error would be appreciated.
In case anyone else runs into this, the current theory is that Sentry is catching errors that are still being handled properly by the framework.
Essentially, the job still completes correctly, because MySQL connection errors are handled automatically by the framework. However, Sentry still catches an error in that error handling process, though the reason is currently unknown.
For reference, see this discussion on Github:
https://github.com/laravel/horizon/issues/583

Error starting container: API error (500) Hyperledger

I am using a Bluemix network to deploy and test my custom chaincode (link to the chaincode). I'm using the Swagger API to deploy, invoke and query my chaincode. The deploy and invoke work fine, but when I try to query my chaincode, I keep getting the following error.
Following are the validating peer logs:
Is it a problem with my query code or a network issue? Any help is appreciated.
The error likely happened during the deploy phase (the logs just show the query). "Deploy" is an asynchronous transaction that returns an ID (it just submits the transaction to be processed later), so it cannot indicate whether the actual execution of the transaction will succeed. The "query" request, however, is synchronous and surfaces the failure.
Looking at the chaincode, the error is almost certainly due to the import and use of the "github.com/op/go-logging" package. As the fabric only copies the chaincode itself and does not pick up its dependencies, that package is not available at deploy time.
Note that the same code will work when placed under the "github.com/hyperledger/fabric" path, because "github.com/op/go-logging" is available as a "vendor" package there.
To test this, try commenting out the import statement and all logging from the code (make sure "go build" works locally first with the changes).
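For example, a minimal sketch of that change, using only the standard library so nothing needs to be vendored (the logger name being removed is illustrative):

package main

import (
	"log"
	// "github.com/op/go-logging" // commented out: fabric copies only the chaincode, not its dependencies
)

// var logger = logging.MustGetLogger("mychaincode") // replaced by the standard library logger below

func main() {
	// "log" ships with Go, so the chaincode builds without any extra packages.
	log.Println("chaincode starting")
}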

[Error]: Failed to run command with error: Error Domain=Parse Code=428

I get this error sometimes when trying to save things to Parse or to fetch data from it.
This is not constant and appears once in a while, causing the operation to fail.
I have contacted Parse for that. Here is their answer:
Starting on 4/28/2016, apps that have not migrated their database may see a "428" error code if the request cannot be handled by the remaining shared pool of resources. If you see this error in your logs, we highly recommend migrating the database for your app without delay.
This means that, starting on that date, all apps run at lower priority except those that have started the DB migration. So migrating the DB should resolve it.

Configuration Issue for IBM Filenet 5.2

I installed IBM FileNet Content Engine 5.2 on my machine. I am getting a problem while configuring GCD data sources for a new profile.
Let me first explain the steps I did, then I will mention the problem I am getting.
First, I created the GCD database in DB2, then I created the data sources required for the profile configuration in the WAS Admin Console. I created a J2C Authentication Alias for the user which has access to the GCD database and configured it with the data sources. The test database connection is successful, but when I run the task of configuring the GCD data sources, it fails with the following error:
Starting to run Configure GCD JDBC Data Sources
Configure GCD JDBC Data Sources ******
Finished running Configure GCD JDBC Data Sources
An error occurred while running Configure GCD JDBC Data Sources
Running the task failed with the following message: The data source configuration failed:
WASX7209I: Connected to process "server1" on node Poonam-PcNode01 using SOAP connector; The type of process is: UnManagedProcess
testing Database connection
DSRA8040I: Failed to connect to the DataSource. Encountered java.sql.SQLException: [jcc][t4][2013][11249][3.62.56] Connection authorization failure occurred. Reason: User ID or Password invalid. ERRORCODE=-4214, SQLSTATE=28000 DSRA0010E: SQL State = 28000, Error Code = -4,214.
It looks like a simple error of invalid user ID or password. I am using the same alias for other data sources as well and they are working fine, so I am not sure why I am getting this error. I have also tried changing the scope of the data sources, but with no success. Can somebody please help?
running "FileNet Configuration Manager" task of configuring GCD datasources will create all the needs things in WAS (including Alias), do not created it before manually.
I suspect it had an issue with exciting JDBC data sources/different names Alias
It seems from your message that you are running it from the FileNet Configuration Manager. Could you please double-check in your database whether the user ID is authorised to execute queries in the GCD database? It definitely has to do with a permission issue.
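As a quick check, you can verify the same credentials and query access from the DB2 command line. This is a sketch, with GCDDB and gcd_user standing in for your actual database name and user ID:

db2 connect to GCDDB user gcd_user
db2 "select count(*) from syscat.tables"
db2 connect reset

If the connect or the query fails with SQLSTATE 28000 or an authorization error, the problem lies with the credentials or privileges rather than with the Configuration Manager task itself.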
