How to connect to RSK public nodes over websockets?

I am trying to connect to RSK Mainnet or RSK Testnet over websockets.
Here's what I tried for Mainnet:
const wsProvider = new Web3.providers.WebsocketProvider('ws://public-node.rsk.co');
const web3 = new Web3(wsProvider);
web3.eth.subscribe('newBlockHeaders', function(error, blockHeader) {
  if (!error) {
    console.log("new blockheader " + blockHeader.number);
  } else {
    console.error(error);
  }
});
with this result:
connection not open on send()
Error: connection not open
I did the same with Testnet, but using ws://public-node.testnet.rsk.co, and got a similar outcome.
Neither of these works, as seen in the errors above.
How can I connect?

Milton
I am not sure, but I think websockets are not enabled on the public nodes.
They are usually not enabled on other public blockchain nodes that I know of, either.

RSK public nodes expose JSON-RPC endpoints over HTTP only.
They do not expose JSON-RPC endpoints over websockets,
so unfortunately you are not able to do exactly what you have described.
However, you can achieve something equivalent
by running your own RSK node
and using it to establish websocket connections.
See the RSK configuration options for RPC.
You can also see the default configuration values
in the "base" configuration file, for
rpc.providers.ws:
ws {
    enabled = false
    bind_address = localhost
    port = 4445
}
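To enable websockets on your own node, you can override these defaults in your node's configuration file. Here is a minimal sketch, assuming the same rpc.providers.ws structure shown above:
rpc {
    providers {
        ws {
            enabled = true
            bind_address = localhost
            port = 4445
        }
    }
}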
Additionally, you should include the /websocket suffix in your endpoint. Default websocket endpoint when running your own node is: ws://localhost:4445/websocket.
Therefore, update the initial part of your code,
such that it looks like this:
const wsProvider = new Web3.providers.WebsocketProvider('ws://localhost:4445/websocket');
const web3 = new Web3(wsProvider);
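Putting it all together, here is a minimal sketch of the full flow, assuming your own node is running locally with websockets enabled on the default port:
const Web3 = require('web3');

// Connect to your own RSK node's websocket endpoint
// (assumes rpc.providers.ws.enabled = true in its configuration).
const wsProvider = new Web3.providers.WebsocketProvider('ws://localhost:4445/websocket');
const web3 = new Web3(wsProvider);

// Subscribe to new block headers, as in the original snippet.
web3.eth.subscribe('newBlockHeaders', function(error, blockHeader) {
  if (!error) {
    console.log('new blockheader ' + blockHeader.number);
  } else {
    console.error(error);
  }
});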

Related

grpc custom load balancer not detecting new server addition in cluster

I am building a distributed workflow orchestrator; grpc is used by workers to communicate with the server cluster. If a new server is added to the cluster, the grpc client is not able to detect this change. However, I have a workaround: adding a max connection age to the server options.
grpc.KeepaliveParams(keepalive.ServerParameters{
    MaxConnectionAge: time.Minute * 1,
})
We have two implementations of workers, one in Golang and the other in Java. This workaround works perfectly with the Golang client: every minute the client makes a new connection and is able to detect new servers in the cluster. But it is not working with the Java client.
public CustomNameResolverFactory(String host, int port) {
    ManagedChannel managedChannel = NettyChannelBuilder
            .forAddress(host, port)
            .withOption(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000)
            .usePlaintext().build();
    GetServersRequest request = GetServersRequest.newBuilder().build();
    GetServersResponse servers = TaskServiceGrpc.newBlockingStub(managedChannel).getServers(request);
    List<Server> serversList = servers.getServersList();
    System.out.println(servers);
    LOGGER.info("found servers {}", servers);
    for (Server server : serversList) {
        String rpcAddr = server.getRpcAddr();
        String[] split = rpcAddr.split(":");
        String hostName = split[0];
        int portN = Integer.parseInt(split[1]);
        addresses.add(new EquivalentAddressGroup(new InetSocketAddress(hostName, portN)));
    }
}
Java client code- https://github.com/Mohitkumar/orchy-worker-java/blob/master/src/main/java/com/orchy/client/CustomNameResolverFactory.java
Golang client code- https://github.com/Mohitkumar/orchy/blob/main/worker/lb/resolver.go

UnknownHostException when trying to connect using websocket

I have a use case where I need to send two requests to the server. The output of the first request is used in the second request, so the calls have to be synchronous. I am using the ktor (OkHttp) client websocket for this. I am failing at the first attempt to even connect to the server, with this error:
Exception in thread "main" java.net.UnknownHostException: https: nodename nor servname provided, or not known
I suspect I haven't split my URL properly and that's why it's not able to connect to the host.
A couple of questions:
Is there any benefit to using a websocket instead of 2 separate HTTP requests?
Is there a way I can just pass the URL to the websocket request?
What is the best and easiest way to get the response and send another request?
I have been able to find very limited documentation on the ktor client websocket.
const val HOST = "https://sample.com"
const val PATH1 = "/path/to/config?val1=<val1>&val2=<val2>"
const val PATH2 = "/path/to/config?val=<response_from_first_req>"

fun useSocket() {
    val client = HttpClient() {
        install(WebSockets)
    }
    runBlocking {
        client.webSocket(method = HttpMethod.Get, host = HOST, path = PATH1) {
            val othersMessage = incoming.receive() as? Frame.Text
            println(othersMessage?.readText())
            println("Testing")
        }
    }
    client.close()
}
Thanks in advance.

Examples of integrating moleculer-io with moleculer-web using moleculer-runner instead of ServiceBroker?

I am having fun with using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file, and create the connected functions in the appropriate service files. For example, aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
BUT when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-Fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (which has the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
The example below has everything that you need.
As a skeleton I've used the moleculer-demo project.
What I have:
API service api.service.js, which handles the HTTP requests and passes them to sensor.service.js.
The sensor.service.js service is responsible for communicating with the remote socket.io server, so it needs a socket.io client. In its started() hook I establish a connection with the remote server located at port 8071. After that I can use this connection in my service actions to communicate with the socket.io server. This is exactly what I'm doing in the sensor.list action (see the sketch at the end of this answer).
I've also created remote-server.service.js to mock your socket.io server. Despite being a moleculer service, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether or not your services use socket.io; all the services are declared in the same way, i.e., module.exports = {}.
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
  name: "api",
  // SocketIOService should be after moleculer-web
  // Load the HTTP API Gateway to be able to reach "greeter" action via:
  // http://localhost:3000/hello/greeter
  mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
  name: "hello",
  actions: {
    greeter() {
      return "Hello Via Socket";
    }
  }
};

const broker = new ServiceBroker();
broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
  const socket = io("http://localhost:3000", {
    reconnectionDelay: 300,
    reconnectionDelayMax: 300
  });

  socket.on("connect", () => {
    console.log("Connection with the Gateway established");
  });

  socket.emit("call", "hello.greeter", (error, res) => {
    console.log(res);
  });
});
To make it work with moleculer-runner, just copy the service declarations into my-service.service.js. So, for example, your api.service.js could look like:
// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
  name: "api",
  // SocketIOService should be after moleculer-web
  // Load the HTTP API Gateway to be able to reach "greeter" action via:
  // http://localhost:3000/hello/greeter
  mixins: [ApiGateway, SocketIOService]
};
and your greeter service:
// greeter.service.js
module.exports = {
  name: "hello",
  actions: {
    greeter() {
      return "Hello Via Socket";
    }
  }
};
And run npm run dev or moleculer-runner --repl --hot services
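For completeness, here is a minimal sketch of what the socket.io client wiring in sensor.service.js (described above) could look like. The remote address, event name, and response shape are assumptions for illustration, not part of the original demo:
// sensor.service.js (sketch; event name and reply shape are assumed)
const io = require("socket.io-client");

module.exports = {
  name: "sensor",
  actions: {
    // Reached via the gateway alias 'GET sensors': 'sensor.list'
    list(ctx) {
      // Ask the local program for its data and resolve with the reply.
      return new Promise((resolve, reject) => {
        this.socket.emit("list", (err, data) => {
          if (err) reject(err);
          else resolve(data);
        });
      });
    }
  },
  started() {
    // Connect to the local program listening on port 8071.
    this.socket = io("http://localhost:8071");
  },
  stopped() {
    // Close the connection when the service stops.
    this.socket.close();
  }
};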

Service Fabric https endpoint with kestrel and reverse proxy

I've been trying to set up HTTPS on a stateless API endpoint following the instructions in the Microsoft documentation and various posts/blogs I could find. It works fine locally, but I'm struggling to make it work after deploying it to my dev server, getting:
Browser : HTTP ERROR 504
Vm event viewer : HandlerAsyncOperation EndProcessReverseProxyRequest failed with FABRIC_E_TIMEOUT
SF event table : Error while processing request: request url = https://mydomain:19081/appname/servicename/api/healthcheck/ping, verb = GET, remote (client) address = xxx, request processing start time = 2018-03-13T14:50:17.1396031Z, forward url = https://0.0.0.0:44338/api/healthcheck/ping, number of successful resolve attempts = 48, error = 2147949567, message = , phase = ResolveServicePartition
In code, I have this in the instance listener:
.UseKestrel(options =>
{
    options.Listen(IPAddress.Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})
Service manifest:
<Endpoint Protocol="https" Name="SslServiceEndpoint" Type="Input" Port="44338" />
Startup:
services.AddMvc(options =>
{
    options.SslPort = 44338;
    options.Filters.Add(new RequireHttpsAttribute());
});
+
var options = new RewriteOptions().AddRedirectToHttps(StatusCodes.Status301MovedPermanently, 44338);
app.UseRewriter(options);
Here is what I have in Azure (deployed through an ARM template):
Health probes:
NAME                    PROTOCOL  PORT   USED BY
AppPortProbe            TCP       44338  AppPortLBRule
FabricGatewayProbe      TCP       19000  LBRule
FabricHttpGatewayProbe  TCP       19080  LBHttpRule
SFReverseProxyProbe     TCP       19081  LBSFReverseProxyRule
Load balancing rules:
NAME                  LOAD BALANCING RULE               BACKEND POOL               HEALTH PROBE
AppPortLBRule         AppPortLBRule (TCP/44338)         LoadBalancerBEAddressPool  AppPortProbe
LBHttpRule            LBHttpRule (TCP/19080)            LoadBalancerBEAddressPool  FabricHttpGatewayProbe
LBRule                LBRule (TCP/19000)                LoadBalancerBEAddressPool  FabricGatewayProbe
LBSFReverseProxyRule  LBSFReverseProxyRule (TCP/19081)  LoadBalancerBEAddressPool  SFReverseProxyProbe
I have a cluster certificate and a reverse proxy certificate, and I authenticate to the API through Azure AD. In the ARM template:
"fabricSettings": [
{
"parameters": [
{
"name": "ClusterProtectionLevel",
"value": "[parameters('clusterProtectionLevel')]"
}
],
"name": "Security"
},
{
"name": "ApplicationGateway/Http",
"parameters": [
{
"name": "ApplicationCertificateValidationPolicy",
"value": "None"
}
]
}
],
I'm not sure what else could be relevant; if you have any ideas or suggestions, they are really welcome.
Edit: code for GetCertificate():
private X509Certificate2 GetCertificate()
{
    var certificateBundle = Task.Run(async () => await GetKeyVaultClient()
        .GetCertificateAsync(Environment.GetEnvironmentVariable("KeyVaultCertifIdentifier")));
    var certificate = new X509Certificate2();
    certificate.Import(certificateBundle.Result.Cer);
    return certificate;
}

private KeyVaultClient GetKeyVaultClient() => new KeyVaultClient(async (authority, resource, scope) =>
{
    var context = new AuthenticationContext(authority, TokenCache.DefaultShared);
    var clientCred = new ClientCredential(Environment.GetEnvironmentVariable("KeyVaultClientId"),
        Environment.GetEnvironmentVariable("KeyVaultSecret"));
    var authResult = await context.AcquireTokenAsync(resource, clientCred);
    return authResult.AccessToken;
});
Digging into your code, I've realized that there is nothing wrong with it except one thing. Since you use Kestrel, you don't need to set up anything extra in the AppManifest, as those settings are for the Http.Sys implementation. You don't even need an endpoint in the ServiceManifest (although it is recommended), as all those things are about URL reservation for the service account and SSL binding configuration, neither of which is required with Kestrel.
What you do need to do is use IPAddress.IPv6Any when you configure SSL, i.e. options.Listen(IPAddress.IPv6Any, 44338, ...). Aside from the fact that this is the recommended way, allowing you to accept both IPv4 and IPv6 connections, it also does a 'correct' endpoint registration in Service Fabric. When you use IPAddress.Any, Service Fabric registers an endpoint like https://0.0.0.0:44338, and that's how the reverse proxy will try to reach the service, which obviously won't work: 0.0.0.0 doesn't correspond to any particular IP, it's just the way to say 'any IPv4 address at all'. When you use IPAddress.IPv6Any instead, you get a correct endpoint mapped to the VM's IP address, which can be resolved from within the vnet. You can see this for yourself in SF Explorer if you go down to the endpoint section in the service instance blade.

Socket.io - using multiple nodes

So I was looking into running socket.io across multiple processes.
The guide here: https://socket.io/docs/using-multiple-nodes/ left me with some questions.
It mentions configuring nginx to load balance between socket.io processes, but it also mentions using the built-in cluster module in Node.js.
Am I supposed to be using nginx AND the cluster module in Node.js for this?
Also how do I tell if load balancing is working?
I tested the nginx option with two socket.io processes running, using the redis adapter and the cluster module.
This is what I had in my nginx config:
http {
    upstream io_nodes {
        ip_hash;
        server 127.0.0.1:6001;
        server 127.0.0.1:6002;
    }
    server {
        listen 3000;
        server_name example.com;
        location / {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            proxy_pass http://io_nodes;
        }
    }
}
This is an example of my socket.io code (most of it taken from here: https://github.com/elad/node-cluster-socket.io):
var express = require('express'),
    cluster = require('cluster'),
    net = require('net'),
    redis = require('redis'),
    sio = require('socket.io'),
    sio_redis = require('socket.io-redis');

var port = 6001,
    num_processes = require('os').cpus().length;

if (cluster.isMaster) {
    console.log('is master 6001');

    // This stores our workers. We need to keep them to be able to reference
    // them based on source IP address. It's also useful for auto-restart,
    // for example.
    var workers = [];

    // Helper function for spawning worker at index 'i'.
    var spawn = function(i) {
        workers[i] = cluster.fork();

        // Optional: Restart worker on exit
        workers[i].on('exit', function(code, signal) {
            console.log('respawning worker', i);
            spawn(i);
        });
    };

    // Spawn workers.
    for (var i = 0; i < num_processes; i++) {
        spawn(i);
    }

    // Helper function for getting a worker index based on IP address.
    // This is a hot path so it should be really fast. The way it works
    // is by converting the IP address to a number by removing non numeric
    // characters, then compressing it to the number of slots we have.
    //
    // Compared against "real" hashing (from the sticky-session code) and
    // "real" IP number conversion, this function is on par in terms of
    // worker index distribution only much faster.
    var worker_index = function(ip, len) {
        var s = '';
        for (var i = 0, _len = ip.length; i < _len; i++) {
            if (!isNaN(ip[i])) {
                s += ip[i];
            }
        }
        return Number(s) % len;
    };

    // Create the outside facing server listening on our port.
    var server = net.createServer({ pauseOnConnect: true }, function(connection) {
        // We received a connection and need to pass it to the appropriate
        // worker. Get the worker for this connection's source IP and pass
        // it the connection.
        var worker = workers[worker_index(connection.remoteAddress, num_processes)];
        worker.send('sticky-session:connection', connection);
    }).listen(port);
} else {
    // Note we don't use a port here because the master listens on it for us.
    var app = new express();

    // Here you might use middleware, attach routes, etc.

    // Don't expose our internal server to the outside.
    var server = app.listen(0, 'localhost'),
        io = sio(server);

    // Tell Socket.IO to use the redis adapter. By default, the redis
    // server is assumed to be on localhost:6379. You don't have to
    // specify them explicitly unless you want to change them.
    io.adapter(sio_redis({ host: 'localhost', port: 6379 }));

    // Here you might use Socket.IO middleware for authorization etc.
    io.on('connection', function(socket) {
        console.log('port 6001');
        console.log(socket.id);
    });

    // Listen to messages sent from the master. Ignore everything else.
    process.on('message', function(message, connection) {
        if (message !== 'sticky-session:connection') {
            return;
        }

        // Emulate a connection event on the server by emitting the
        // event with the connection the master sent us.
        server.emit('connection', connection);
        connection.resume();
    });
}
Connections worked just fine with this, although I'm testing it all locally.
How do I know if it's working properly? Every time the client connects, it seems to connect to the socket.io process on port 6001.
The client connect code connects to port 3000.
Am I supposed to be using nginx AND the cluster module in Node.js for this?
If all your server processes are on one computer, you can use the cluster module without NGINX.
If you're using multiple server computers, then you need a piece of network infrastructure like NGINX to load balance among the different servers since node.js clustering cannot do that for you.
And, you can use both together (multiple servers load balanced by something like NGINX and each server running clustering on each server). The key here is that node.js clustering only spreads the load among different processes on the same host.
Also how do I tell if load balancing is working?
You can have each process log the activity it is processing and add the process ID as part of the logging and if you are loading your server with multiple requests at the same time, you should see some load being handled by each process. If you do actual load testing, you should get significantly more throughput when clustering is on and working vs. not using clustering. Keep in mind that total throughput depends upon where your bottlenecks are so if your server is actually database bound and all clustered processes are using the same database, you may not benefit much from clustering the node.js process. If, on the other hand, your node.js process is compute intensive and you have multiple cores in your server, you may get a significant benefit from clustering.
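For example, here is a minimal sketch of such logging, meant to replace the connection handler in the worker branch of your code above; tagging each log line with process.pid shows which worker handled the connection:
// Log the worker's process ID with every connection it handles,
// so load distribution across workers is visible in the output.
io.on('connection', function(socket) {
  console.log('worker ' + process.pid + ' handled connection ' + socket.id);
});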
Adding one more point to the above solution.
Also how do I tell if load balancing is working?
I am using node-debug for this; it opens a separate debugger per worker process. You can then add a breakpoint in each to check whether the load is being distributed properly.
