Error starting container: API error (500) Hyperledger - go

I am using the Bluemix network to deploy and test my custom chaincode (link to the chaincode). I'm using the Swagger API to deploy, invoke and query my chaincode. The deploy and invoke work fine, but when I try to query my chaincode, I keep getting the following error.
Following are the validating peer logs:
Is it some problem with my query code, or a network issue? Any help is appreciated.

The error likely happened during the deploy phase (the logs only show the query). "Deploy" is an asynchronous transaction that returns an ID (it just submits the transaction to be processed later), so it cannot indicate whether the actual execution of the transaction will succeed. The "query" request, however, is synchronous and shows the failure.
Looking at the chaincode, the error is almost certainly due to the import and use of the "github.com/op/go-logging" package. As the fabric only copies the chaincode itself and does not pick up its dependencies, that package is not available at deploy time.
Note that the same code will work when under "github.com/hyperledger/fabric" path as "github.com/op/go-logging" is available as a "vendor" package in that path.
To test this, try commenting out the import statement and all logging from the code (make sure "go build" works locally first with the changes).

Related

Vertex Pipelines components throws "User does not have bigquery.jobs.create permission in project xxx-tp."

When a Vertex Pipelines component launched a BigQuery job, I encountered the following error:
google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/ddde1b02a7e52415cp-tp/jobs?prettyPrint=false:
Access Denied: Project ddde1b02a7e52415cp-tp: User does not have bigquery.jobs.create permission in project ddde1b02a7e52415cp-tp.
This is due to an uninitialized BigQuery client.
The code runs in a managed environment that is in a different project than the one running the pipeline, so it won't be able to automatically identify the project running the pipeline.
Initializing the BigQuery client by explicitly specifying the project ID solved the issue:
from google.cloud import bigquery
bigquery.Client(project=[your-project], credentials=credentials)

Vertex pipeline model training component stuck running forever because of metadata issue

I'm attempting to run a Vertex pipeline (custom model training) which I was able to run successfully in a different project. As far as I'm aware, all the pieces of infrastructure (service accounts, buckets, etc.) are identical.
The error appears in a gray box in the pipeline UI when I click on the model training component and reads the following:
Retryable error reported. System is retrying.
com.google.cloud.ai.platform.common.errors.AiPlatformException: code=ABORTED, message=Specified Execution `etag`: `1662555654045` does not match server `etag`: `1662555533339`, cause=null System is retrying.
I've looked in the log explorer and found that the error logs are audit logs with the following associated tags:
protoPayload.methodName="google.cloud.aiplatform.internal.MetadataService.RefreshLineageSubgraph"
protoPayload.resourceName="projects/724306335858/locations/europe-west4/metadataStores/default"
Leading me to think that there's an issue with the Vertex Metadatastore or the way my pipeline is using it. The audit logs are automatic though, so I'm not sure.
I've tried purging the metadata store as well as deleting it completely. I've also tried running a different model training pipeline that worked before in a different project as well but with no luck.
screenshot of ui
The retryable error you were getting was a temporary issue, which has now been resolved.
You should now be able to rerun the pipeline, and it is not expected to enter the infinite retry loop.

Getting ECONNREFUSED when setting the --record flag for Cypress test

We've got Cypress Dashboard running for some of our tests. I'm looking to expand that into some new tests, and have been trying to run it locally to confirm it's working.
Everything in our current pipelines is working fine with Dashboard.
Running tests locally without the --record flag is working fine as well.
However when I'm trying to run locally with Dashboard, I'm getting the following error:
We encountered an unexpected error talking to our servers.
We will retry 0 more times in ...
The server's response was:
RequestError: Error: connect ECONNREFUSED 127.0.0.1:1234
It does this 3 times and then gives up. I've not got anything else running on port 1234, and it runs absolutely fine on my colleague's machine.
The command I'm running is:
npx cypress run --record --key {record-key}
I've been through the Cypress docs for setting up Dashboard access, and besides setting up the project in Dashboard, and setting the Record Key and Project ID, there's no other setup I can see that's needed to get it running.
The only thing I've noticed is that it very consistently tries to hit :1234, but I'm not sure if that's notable at all. Has anyone got suggestions for things I may have set up on my local machine that might be blocking this?
I've also checked my HOSTS file and not seen anything obvious in there. I don't think I've actually made any amendments myself in there either; it seems like Kubernetes has just added an address. Any suggestions of things I can look at or try would be greatly appreciated.
Try running cypress cache clear.

Mesos framework stays inactive due to "Authentication failed: EOF"

I'm currently trying to deploy Eremetic (version 0.28.0) on top of Marathon using the configuration provided as an example. I actually have been able to deploy it once, but suddenly, after trying to redeploy it, the framework stays inactive.
By inspecting the logs I noticed a constant attempt to connect to some service that apparently never succeeds because of some authentication problem.
2017/08/14 12:30:45 Connected to [REDACTED_MESOS_MASTER_ADDRESS]
2017/08/14 12:30:45 Authentication failed: EOF
It looks like the service returning an error is ZooKeeper; more precisely, the error can be traced back to this line in the Go ZooKeeper library. ZooKeeper itself seems to work, however: I've tried querying it directly with zkCli and running a small Spark job (where the Mesos master is given as a zk:// URL), and everything works.
Unfortunately I'm not able to diagnose the problem further. What could it be?
It turned out to be a configuration problem: the master URL was simply wrong, and this is how the error was reported.

Distributed JMeter test fails with java error but test will run from JMeter UI (non-distributed)

My goal is to run a load test using 4 Azure servers as load generators and 1 Azure server to initiate the test and gather results. I had the distributed test running and was getting good data. But today, when I remote-start the test, 3 of the 4 load generators fail, with all the HTTP transactions erroring. The failed transactions log the following error:
Non HTTP response message: java.lang.ClassNotFoundException: org.apache.commons.logging.impl.Log4jFactory (Caused by java.lang.ClassNotFoundException: org.apache.commons.logging.impl.Log4jFactory)
I confirmed the presence of commons-logging-1.2.jar in the jmeter\lib folder on each machine.
To try to narrow down the issue I set up one Azure server to both initiate the load and run JMeter-server but this fails too. However, if I start the test from the JMeter UI on that same server the test runs OK. I think this rules out a problem in the script or a problem with the Azure machines talking to each other.
I also simplified my test plan down to where it only runs one simple http transaction and this still fails.
I've gone through all the basics: reinstalled jmeter, updated java to the latest version (1.8.0_111), updated the JAVA_HOME environment variable and backed out the most recent Microsoft Security update on the server. Any advice on how to pick this problem apart would be greatly appreciated.
I'm using JMeter 3.0r1743807 and Java 1.8
The Azure servers are running Windows Server 2008 R2
I did get a resolution to this problem. It turned out to be a conflict between some extraneous code in a jar file and a component of JMeter. It was “spooky” because something influenced the load order of referenced jar files and JMeter components.
I had included a jar file in my JMeter script using the “Add directory or jar to classpath” function in the Test Plan. This jar file has a piece of code I needed for my test along with many other components, and one of those components, probably a similar logging function, conflicted with a logging function in JMeter. The test ran fine for months but started failing at the maximally inconvenient time.
The problem was revealed by creating a very simple JMeter test that would load and run just fine. If I opened the simple test in JMeter and then, without closing JMeter, opened my problem test, the problem test would not fail. If I reversed the order, opening the problem test followed by the simple test, then the simple test would fail too. Given that the problem followed the order in which things loaded, I started looking at the jar files and found my suspect.
When I built the script I had left the jar file alone, thinking that the functions I needed might have dependencies on other pieces within the jar. Now that things were broken I needed to find out whether that was true, and happily it was not. So, to fix the problem, I changed the extension on my jar file to zip and edited it in 7-Zip, removing all the code except what I needed. I kept all the folders in the path to my needed code for two reasons: I did not have to update the code that called the functions, and when I tried changing the path the functions did not work.
Next I changed the extension on the file back to jar and changed the reference in JMeter’s “Add directory or jar to classpath” function to point to the revised jar. I haven’t seen the failure since.
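The manual zip-and-7-Zip steps above can also be scripted. Here is a minimal sketch using Python's standard-library zipfile module; the trim_jar helper, the file names, and the com/example/util/ package path are all hypothetical, standing in for your own jar and the package you need to keep:

```python
import zipfile

def trim_jar(src_jar, dst_jar, keep_prefixes):
    """Copy only the entries under keep_prefixes (plus the manifest) into a new jar."""
    with zipfile.ZipFile(src_jar) as src, \
         zipfile.ZipFile(dst_jar, "w", zipfile.ZIP_DEFLATED) as dst:
        for entry in src.namelist():
            # Keep the manifest and anything under the wanted package paths,
            # preserving the original folder structure so callers need no changes.
            if entry == "META-INF/MANIFEST.MF" or entry.startswith(tuple(keep_prefixes)):
                dst.writestr(entry, src.read(entry))

# Example (hypothetical paths): keep only the com/example/util/ package.
# trim_jar("helper.jar", "helper-trimmed.jar", ["com/example/util/"])
```

A jar is just a zip archive, so no Java tooling is needed for this step; keeping the full folder paths matches the reasoning above about not having to update the calling code.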
Many thanks to the folks who looked at this. I hope the resolution will help someone out.
