The problem occurs on macOS when trying to connect to a VM through IAP. The same command works from another system with the same credentials, so it must be something to do with my local environment.
This is the command I'm trying to run:
gcloud compute ssh **** --project=**** --zone=europe-west4-a --tunnel-through-iap
And it fails with this error:
ERROR: gcloud crashed (AttributeError): 'Thread' object has no attribute 'isAlive'
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
kex_exchange_identification: Connection closed by remote host
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Googling this gives me no answer whatsoever.
This looks like a Python problem. Thread.isAlive() was removed in Python 3.9, and older gcloud releases still call it, so your Python 3 is probably too new. Please use either Python 3.5 - 3.8, or Python 2.7.9 or later.
Here is the documentation:
https://cloud.google.com/sdk/docs/install#mac
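If you would rather keep your newer default Python installed, you can also point the SDK at a supported interpreter explicitly via the CLOUDSDK_PYTHON environment variable. A minimal sketch, assuming a Python 3.7 binary named python3.7 is on your PATH (substitute whichever supported version you actually have):
python3 --version                          # if this reports 3.9 or newer, Thread.isAlive no longer exists
export CLOUDSDK_PYTHON=$(which python3.7)  # tell gcloud to run with a supported interpreter
gcloud compute ssh **** --project=**** --zone=europe-west4-a --tunnel-through-iap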
Related
I'm trying to run pishrink on macOS using a Docker host, as explained here. The pishrink script shrinks the size of an .img so it's quicker to burn onto an SD card.
I have Docker Desktop running, and I've added the repo at the top level of my file system (/pishrink) and am running the following command:
docker-compose run pishrink /pishrink/pishrink.sh /pishrink/big-image.img /pishrink/small-image.img
When I do, I get the following error:
Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/pishrink/pishrink.sh\": permission denied": unknown
Can someone help me debug this issue? I'm relatively new to using Docker so I might be making some simple + fundamental mistakes.
I was able to fix this with the following command, using sudo as suggested:
sudo docker-compose run pishrink /pishrink/pishrink.sh /pishrink/big-image.img /pishrink/small-image.img
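If you would rather not run everything through sudo, the "permission denied" on exec often just means the script is missing its execute bit on the host side of the bind mount. A sketch of that alternative, assuming the repo really does live at /pishrink as above:
chmod +x /pishrink/pishrink.sh   # make the script executable before the container tries to exec it
docker-compose run pishrink /pishrink/pishrink.sh /pishrink/big-image.img /pishrink/small-image.img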
I am seeing the following error:
bin/wsk package get --summary /whisk.system/alarms --insecure
error: Unable to get package 'alarms': The supplied authentication is not authorized to access this resource. (code 7320)
I am using guest authentication
(I have downloaded the openwhisk source on my Ubuntu 16.04 machine and installed it using ./gradlew distDocker).
Other features are working: action, triggers, rules, etc.
I tried downloading /whisk.system/alarms from GitHub and ran installCatalog.sh - it gave EOF for a PUT request:
~/openwhisk-package-alarms$ ./installCatalog.sh $AUTH_KEY $API_HOST $API_HOST $API_PORT $API_HOST
error: Package update failed: Put https://172.17.0.1:5984/api/v1/namespaces/_/packages/alarms?overwrite=true: EOF
techie@serverless02:~/openwhisk-package-alarms$
You need to set up the alarms package.
Please refer to https://github.com/apache/incubator-openwhisk-package-alarms
There isn't easy-to-follow documentation yet. I wrote notes on how to install it on Vagrant here: https://github.com/apache/incubator-openwhisk-package-alarms/issues/51#issuecomment-294010619
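Once the install script completes without errors, it may help to verify the package is actually visible in the system namespace before retrying the summary call. A quick check, assuming the same guest credentials and self-signed certificate as above:
bin/wsk package list /whisk.system --insecure            # the alarms package should appear in this listing
bin/wsk package get --summary /whisk.system/alarms --insecure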
I am testing a business network I created. I ran composer-rest-server and all worked fine, then shut the server down as suggested in the developer guide. I then used yo hyperledger-composer to create the skeleton of the Angular app. Now the Angular app is showing in the local browser, but the composer-rest-server is not.
Expected Behavior:
The composer-rest-server should start on localhost:3000, and the Angular app as well.
Actual Behavior:
I get this message:
Discovering types from business network definition ...
Connection fails: Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
It will be retried for the next request.
Exception: Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
at _checkRuntimeVersions.then.catch (/home/node/.nvm/versions/node/v6.11.2/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:696:34)
Your Environment
composer-cli@0.11.3
generator-hyperledger-composer@0.11.3
composer-rest-server@0.11.3
Docker version 17.06.0-ce, build 02c1d87
docker-compose version 1.13.0, build 1719ceb
The Problem
If you kill your Fabric instance using ./stopFabric after starting it with ./startFabric, then all the containers that were part of the business network are killed as well, so you need to reinstall the .bna and start the network again. (The development flow provided is purposely volatile for rapid development.)
The Solution
1.) Type docker ps to see all of your running containers. You should see none if you are getting that error, because your peer is not responding to pings.
2.) Open a separate terminal, navigate to where you have fabric-dev-servers, and run ./startFabric. This will start all the containers, like your network Certificate Authority, the peer, the orderer, etc.
3.) Return to your project in another terminal. Do steps 1 and 2 from the developer tutorial (you likely won't need to do step 3, since you probably already imported the network administrator identity while going through the tutorial).
4.) Run composer network ping --card admin@tutorial-network. The ping should go through.
5.) Run docker ps. You should see 4 containers running
6.) Run composer-rest-server and follow the steps from the tutorial.
7.) Run cd tutorial-network-app to switch to where your angular application is (or wherever you generated it with the yo command)
8.) Navigate to http://localhost:3000 and everything should work.
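Condensed, the recovery sequence above looks roughly like this. This is only a sketch: it assumes fabric-dev-servers sits in your home directory and reuses the card and app names from the tutorial, and you still need to redeploy the .bna (steps 1 and 2 of the tutorial) between starting Fabric and pinging:
cd ~/fabric-dev-servers && ./startFabric            # restart the CA, peer and orderer containers
composer network ping --card admin@tutorial-network
docker ps                                           # should now list 4 running containers
composer-rest-server                                # answer the prompts as in the tutorial
cd ~/tutorial-network-app && npm start              # serve the generated Angular application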
For any other questions or problems, just reply here and I can help.
The expected behaviour is that the REST server is already running (the generator uses LoopBack to spin up a REST server; that's why you shut down the previous REST server). It's described here https://hyperledger.github.io/composer/unstable/tutorials/developer-guide.html under 'Generate your Skeleton Web Application'.
After you have created the application (by answering the yo hyperledger-composer questions), you run it using npm start from within the generated application directory. Your app is then accessible at http://localhost:4200.
I am getting a PERMISSION_DENIED error when attempting to run the sample bookshelf app provided by Google CloudPlatform.
On https://cloud.google.com/java/getting-started/using-forms it says to run the following command:
mvn -Plocal clean jetty:run-exploded -DprojectID=[YOUR-PROJECT-ID]
The error I am getting is:
[WARNING] Failed startup of context o.e.j.m.p.JettyWebAppContext#25e203e6{/,file:///Users/markfriesen/Documents/workspace/getting-started-java/bookshelf/2-structured-data/target/bookshelf-2-1.0-SNAPSHOT/,UNAVAILABLE}{/Users/markfriesen/Documents/workspace/getting-started-java/bookshelf/2-structured-data/target/bookshelf-2-1.0-SNAPSHOT}
com.google.cloud.datastore.DatastoreException: Missing or insufficient permissions.
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.translate(HttpDatastoreRpc.java:128)
...
Caused by: com.google.datastore.v1.client.DatastoreException: Missing or insufficient permissions., code=PERMISSION_DENIED
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:126)
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:169)
I am able to launch the datastore emulator using the command:
gcloud beta emulators datastore start --project=[YOUR-PROJECT-ID]
I am running:
macOS Sierra
Java version: 1.8.0_112
Anyone got ideas?
Thanks
From the error message, it appears that the code is in fact connecting to the real GCD (Google Cloud Datastore), not the emulator, and you do not have the Datastore API enabled for the project. I would recommend logging into the Cloud Console, enabling the Datastore API, and trying again.
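If the intent was to use the emulator, note that the client library only talks to it when the emulator environment variables are set in the shell that launches the app; otherwise it falls back to the real Datastore and you get exactly this kind of permission error. A sketch of wiring that up, assuming the emulator is already running as described above:
$(gcloud beta emulators datastore env-init)        # exports DATASTORE_EMULATOR_HOST etc. into this shell
mvn -Plocal clean jetty:run-exploded -DprojectID=[YOUR-PROJECT-ID]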
I ran into this problem in a Docker context:
I was running the Datastore emulator in a Docker container within a docker-compose network
I was trying to connect to it from my local machine.
When I ran the command/script from another container connected to the same network as the Datastore emulator, it worked. I guess it's the localhost exception or something like that.
Following the getting-started guide, I attempt to create and connect to a Datalab VM instance with the command:
datalab create demo
but I get the following pop-up:
Then, on OK-ing the error, I get
connection broken
Attempting to reconnect...
in the command prompt.
Any idea how to have the keys generated a different way to allow me to connect?
As a workaround, you can try either running the datalab connect demo command from inside of Cloud Shell, or downgrading to version 153.0.0 of the Cloud SDK.
As for your error, this seems to be a newly introduced bug in the 154.0.0 release of the Cloud SDK.
Prior to that, running a command like gcloud compute ssh --ssh-flag=-o --ssh-flag=LogLevel=info demo would have resulted in the "-o LogLevel=info" flag being stripped out of the command prior to it running on Windows.
With the most recent release (154.0.0), however, those flags are now passed to the SSH command as-is. This causes an error on Windows, as the PuTTY CLI does not support the -o flag.
I've filed https://github.com/googledatalab/datalab/issues/1356 to track fixing this issue.
Sorry that you got hit by this.
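If you want to try the downgrade workaround mentioned above, a sketch of the command (assuming your Cloud SDK install uses the component manager rather than a distro package):
gcloud components update --version 153.0.0   # roll back to the release before the ssh-flag handling changed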