I am following this tutorial to get started with substrate https://substrate.dev/docs/en/tutorials/create-your-first-substrate-chain/.
However, since I am doing this on a cloud server, instead of accessing the front end at "http://localhost:8000/substrate-front-end-template",
I have to use "http://my-cloud-ip-address:8000/substrate-front-end-template".
The front end then fails to connect to the backend with the following error:
WebSocket connection to 'ws://my-cloud-ip-address:9944/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
API-WS: disconnected from ws://my-cloud-ip-address:9944: 1006:: Abnormal Closure
Basically, the WebSocket connection is not working on my server. How do I expose the WebSocket endpoint of the Substrate node running on my server so that it accepts remote connections and not just connections from localhost?
Note: I have disabled the firewall on my server, so all ports are open.
It is likely best not to disable the firewall; instead, only open the ports you need (see the firewall sketch below). You likely need to set flags on your node to allow remote access. Here is an example:
./target/release/node-template \
--telemetry-url 'wss://telemetry.polkadot.io/submit/ 0' \
--base-path /tmp/node \
--port 30333 \
--ws-port 9944 \
--rpc-port 9933 \
--rpc-cors all \
--validator \
--ws-external \
--rpc-external \
--rpc-methods=Unsafe \
--prometheus-external \
--name node_validator-TESTING
Note that the unsafe flags (--ws-external, --rpc-external, --rpc-methods=Unsafe) should only be used for testing, and you should set the ports to match your needs and your firewall rules. Telemetry is optional here, as is your node's name.
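For example, a minimal firewall sketch with ufw (an assumption; use whatever firewall your cloud provider or distribution gives you), opening only the ports used by the command above:
sudo ufw allow 22/tcp      # SSH access to the server
sudo ufw allow 30333/tcp   # p2p port (--port)
sudo ufw allow 9944/tcp    # WebSocket RPC used by the front end (--ws-port)
sudo ufw allow 9933/tcp    # HTTP RPC, only if you need it remotely (--rpc-port)
sudo ufw enable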
Related
A few minutes ago I cloned the Search Guard branch from here and did everything the README said.
After docker-compose up -d all services are working, but elasticsearch_1 logs the following error every few seconds:
elasticsearch_1 | [2018-09-14T08:59:49,614][ERROR][c.f.s.a.BackendRegistry ] Not yet initialized (you may need to run sgadmin)
After that I ran docker-compose exec -T elasticsearch bin/init_sg.sh, which produced this output:
Search Guard Admin v6
Will connect to localhost:9300 ... done
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/usr/share/elasticsearch/plugins/search-guard-6/netty-common-4.1.16.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Elasticsearch Version: 6.3.2
Search Guard Version: 6.3.2-23.0
Connected as CN=kirk,OU=client,O=client,L=Test,C=DE
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
ERR: Timed out while waiting for a green or yellow cluster state.
* Try running sgadmin.sh with -icl (but no -cl) and -nhnv (If that works you need to check your clustername as well as hostnames in your TLS certificates)
* Make also sure that your keystore or PEM certificate is a client certificate (not a node certificate) and configured properly in elasticsearch.yml
* If this is not working, try running sgadmin.sh with --diagnose and see diagnose trace log file)
* Add --accept-red-cluster to allow sgadmin to operate on a red cluster.
I guess that sgadmin can't connect to the Elasticsearch cluster, but I did everything the README said.
Any suggestions on how to fix this?
Thanks for any answers.
I already resolved this. Your product works completely fine. I had an error with an index in Kibana which put the Elasticsearch cluster in RED status, so it never reached YELLOW.
If you want to connect sgadmin to the Elasticsearch cluster without waiting for YELLOW status, add the --accept-red-cluster flag to the init_sg.sh script:
#!/bin/sh
plugins/search-guard-6/tools/sgadmin.sh \
-cd config/sg/ \
-ts config/sg/truststore.jks \
-ks config/sg/kirk-keystore.jks \
-nhnv \
-icl \
--accept-red-cluster
Then everything works fine, and Kibana will show you why the cluster is RED - in my case it was a problem with the Kibana index.
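If you first want to see which index is holding the cluster RED, a rough sketch along these lines can help (the host, port, and admin:admin credentials are assumptions based on the Search Guard demo setup; adjust to your TLS and auth configuration):
curl -k -u admin:admin "https://localhost:9200/_cluster/health?pretty"
curl -k -u admin:admin "https://localhost:9200/_cat/indices?v&health=red"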
My code:
gcloud compute instances create mongodb --zone europe-west3-a --image-family coreos-stable \
--image-project coreos-cloud --tags http-server
gcloud compute ssh --zone europe-west3-a mongodb \
--command="some_commands"
It creates the Google Compute Engine instance nicely, but when it tries to ssh, an error comes up:
WARNING - POTENTIAL SECURITY BREACH!
The server's host key does not match the one PuTTY has
cached in the registry. This means that either the
server administrator has changed the host key, or you
have actually connected to another computer pretending
to be the server. Update cached key? (y/n, Return cancels connection) Server refused our key
Using keyboard-interactive authentication.
How can I skip this annoying prompt? Take into account that I must create a new instance; I cannot ssh into one that already exists.
I'm experiencing that the Kubernetes API server fails to start during cluster bootstrapping with the following error log, apparently due to being unable to initialize its "client CA configmap":
E1029 14:35:56.211083 5 client_ca_hook.go:78] Timeout: request did not complete within allowed duration
F1029 14:35:56.211121 5 hooks.go:126] PostStartHook "ca-registration" failed: unable to initialize client CA configmap: timed out waiting for the condition
It seems to happen here in the Kubernetes source code. What might cause this error?
See the full log here.
Update: It seems that my etcd cluster isn't accessible from master nodes, even though the same command works from etcd member machines:
$ sudo ETCDCTL_API=3 etcdctl --cacert=/opt/tectonic/tls/etcd-client-ca.crt \
--cert=/opt/tectonic/tls/etcd-client.crt --key=/opt/tectonic/tls/etcd-client.key \
--endpoints=https://coreos-testing-etcd-0.socialfoodie.club:2379 \
endpoint health
https://coreos-testing-etcd-0.socialfoodie.club:2379 is unhealthy: failed to connect: grpc: timed out when dialing
Error: unhealthy cluster
I found out that, despite the cryptic error message from the API server, the cause was that it couldn't write to the etcd cluster. The reason was that the API server was configured with a different client certificate authority than the one the etcd cluster was using, due to a timing issue with copying certificates in my Terraform cluster setup. I figured out that the CA was the problem by using curl to contact the etcd cluster instead of etcdctl, as curl gave a clear error message.
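For reference, a sketch of that kind of curl check, reusing the certificate paths and endpoint from the etcdctl command above (adjust them to your setup); a client-CA mismatch should then surface as an explicit TLS error rather than a gRPC dial timeout:
curl --cacert /opt/tectonic/tls/etcd-client-ca.crt \
  --cert /opt/tectonic/tls/etcd-client.crt \
  --key /opt/tectonic/tls/etcd-client.key \
  https://coreos-testing-etcd-0.socialfoodie.club:2379/health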
Thanks to #johnharris85 for suggesting that etcd connectivity was the issue!
BrowserStackLocal gives Error: Could not connect to www.browserstack.com!
I am trying to use Charles Proxy with BrowserStackLocal; specifically, I want to use the Rewrite feature of Charles Proxy. Both Charles Proxy and BrowserStackLocal are running on the same Mac laptop.
I am getting the following error. Has anybody run into this problem?
$ ./BrowserStackLocal myKey -proxyHost 192.168.160.113 -proxyPort 8888 -force -forcelocal
BrowserStackLocal v5.5
*** Error: Could not connect to www.browserstack.com!
Configuration Options:
-v
Provides verbose logging
-f
If you want to test local folder rather internal server
-h
Prints this help
-version
Displays the version
-force
Kill other running Browserstack Local
-only
Restricts Local Testing access to specified local servers and/or folders
-forcelocal
Route all traffic via local machine
-onlyAutomate
Disable Live Testing and Screenshots, just test Automate
-proxyHost HOST
Hostname/IP of proxy, remaining proxy options are ignored if this option is absent
-proxyPort PORT
Port for the proxy, defaults to 3128 when -proxyHost is used
-proxyUser USERNAME
Username for connecting to proxy (Basic Auth Only)
-proxyPass PASSWORD
Password for USERNAME, will be ignored if USERNAME is empty or not specified
-localIdentifier SOME_STRING
If doing simultaneous multiple local testing connections, set this uniquely for different processes
To test an internal server, run:
./BrowserStackLocal <KEY>
Example:
./BrowserStackLocal DsVSdoJPBi2z44sbGFx1
To test HTML files, run:
./BrowserStackLocal -f <KEY> <full path to local folder>
Example:
./BrowserStackLocal -f DsVSdoJPBi2z44sbGFx1 /Applications/MAMP/htdocs/example/
View more configuration options at http://www.browserstack.com/local-testing
Charles Proxy generates its own certificates, which are signed by the 'Charles Root Certificate'. It seems Charles Proxy is substituting this certificate for the one BrowserStackLocal expects, which is why the request to BrowserStack fails and you receive "Could not connect to www.browserstack.com!". More information on SSL certificates and Charles is available here.
Can you disable SSL proxying for this traffic in Charles? That will let BrowserStackLocal see the original certificate and connect to BrowserStack.
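If you want to confirm the substitution, a quick check along these lines (using the proxy address from the question; openssl s_client needs OpenSSL 1.1.0+ for -proxy) should show the Charles Root Certificate as the issuer while SSL proxying is enabled:
openssl s_client -connect www.browserstack.com:443 -servername www.browserstack.com \
  -proxy 192.168.160.113:8888 </dev/null 2>/dev/null | openssl x509 -noout -issuer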
I disabled the SSL proxying in Charles Proxy and turned on SOCKS. That solved the problem.
Created a new EC2 instance of Neo4j via the CloudFormation template found here (Ubuntu host).
https://github.com/neo4j-contrib/ec2neo
Got the web interface to work fine, and DB is up and running.
Trying to connect with neo4j-shell from my local dev machine, and I am able to establish a connection to the remote EC2 server.
$ neo4j-shell -host ec2-xx-xx-xx-xx.compute-1.amazonaws.com
Welcome to the Neo4j Shell! Enter 'help' for a list of commands
NOTE: Remote Neo4j graph database service 'shell' at port 1337
neo4j-sh (?)$
netstat confirms that a connection has been ESTABLISHED
tcp6 0 0 xx.xx.xx.xx:1337 my.local.ip.add:13785 ESTABLISHED
At this point, I type help, or any neo4j command, and I get no response back from the server. The console just hangs. As soon as I stop the neo4j service on the server, I get the following exception on the client console.
java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is:
java.io.EOFException
at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:229)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:162)
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:194)
at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:148)
at com.sun.proxy.$Proxy1.interpretLine(Unknown Source)
at org.neo4j.shell.impl.AbstractClient.evaluate(AbstractClient.java:149)
at org.neo4j.shell.impl.AbstractClient.evaluate(AbstractClient.java:133)
at org.neo4j.shell.impl.AbstractClient.grabPrompt(AbstractClient.java:101)
at org.neo4j.shell.StartClient.grabPromptOrJustExecuteCommand(StartClient.java:383)
at org.neo4j.shell.StartClient.startRemote(StartClient.java:330)
at org.neo4j.shell.StartClient.start(StartClient.java:196)
at org.neo4j.shell.StartClient.main(StartClient.java:135)
Caused by: java.io.EOFException
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:215)
... 11 more
I have made the following change to neo4j-wrapper.conf:
wrapper.java.additional=-Djava.rmi.server.hostname=ec2-xx-xx-xx-xx.compute-1.amazonaws.com
All iptables rules are disabled, to eliminate variables. I am able to run neo4j-shell on the server itself, connecting to 127.0.0.1.
What am I missing in my network config or neo4j server config?
Try to ssh into the instance and run neo4j-shell there; remote connections have been a pain for a long time because of the underlying Java RMI port handling.
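A rough sketch of that approach (the user and host are placeholders based on the question; adjust to your instance), which keeps all the RMI traffic on the box:
ssh ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
neo4j-shell -host 127.0.0.1 -port 1337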
You can also try out cycli, which supports HTTP and auth.