Service Fabric HTTPS endpoint with Kestrel and reverse proxy

I've been trying to set up HTTPS on a stateless API endpoint following the instructions in the Microsoft documentation and the various posts/blogs I could find. It works fine locally, but I'm struggling to make it work after deploying it to my dev server, where I get:
Browser : HTTP ERROR 504
Vm event viewer : HandlerAsyncOperation EndProcessReverseProxyRequest failed with FABRIC_E_TIMEOUT
SF event table : Error while processing request: request url = https://mydomain:19081/appname/servicename/api/healthcheck/ping, verb = GET, remote (client) address = xxx, request processing start time = 2018-03-13T14:50:17.1396031Z, forward url = https://0.0.0.0:44338/api/healthcheck/ping, number of successful resolve attempts = 48, error = 2147949567, message = , phase = ResolveServicePartition
In code, I have this in the instance listener:
.UseKestrel(options =>
{
    options.Listen(IPAddress.Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})
ServiceManifest:
<Endpoint Protocol="https" Name="SslServiceEndpoint" Type="Input" Port="44338" />
Startup:
services.AddMvc(options =>
{
    options.SslPort = 44338;
    options.Filters.Add(new RequireHttpsAttribute());
});
+
var options = new RewriteOptions().AddRedirectToHttps(StatusCodes.Status301MovedPermanently, 44338);
app.UseRewriter(options);
Here is what I have in Azure (deployed through an ARM template):
Health probes
NAME                    PROTOCOL  PORT   USED BY
AppPortProbe            TCP       44338  AppPortLBRule
FabricGatewayProbe      TCP       19000  LBRule
FabricHttpGatewayProbe  TCP       19080  LBHttpRule
SFReverseProxyProbe     TCP       19081  LBSFReverseProxyRule
Load balancing rules
NAME                  LOAD BALANCING RULE               BACKEND POOL               HEALTH PROBE
AppPortLBRule         AppPortLBRule (TCP/44338)         LoadBalancerBEAddressPool  AppPortProbe
LBHttpRule            LBHttpRule (TCP/19080)            LoadBalancerBEAddressPool  FabricHttpGatewayProbe
LBRule                LBRule (TCP/19000)                LoadBalancerBEAddressPool  FabricGatewayProbe
LBSFReverseProxyRule  LBSFReverseProxyRule (TCP/19081)  LoadBalancerBEAddressPool  SFReverseProxyProbe
I have a cluster certificate and a reverse proxy certificate, and authentication to the API goes through Azure AD. In the ARM template:
"fabricSettings": [
{
"parameters": [
{
"name": "ClusterProtectionLevel",
"value": "[parameters('clusterProtectionLevel')]"
}
],
"name": "Security"
},
{
"name": "ApplicationGateway/Http",
"parameters": [
{
"name": "ApplicationCertificateValidationPolicy",
"value": "None"
}
]
}
],
I'm not sure what else could be relevant; if you have any ideas/suggestions, they are really welcome.
Edit: code for GetCertificate()
private X509Certificate2 GetCertificate()
{
    // Fetch the certificate bundle from Key Vault (blocking on the async call)
    var certificateBundle = Task.Run(async () => await GetKeyVaultClient()
        .GetCertificateAsync(Environment.GetEnvironmentVariable("KeyVaultCertifIdentifier")));
    var certificate = new X509Certificate2();
    certificate.Import(certificateBundle.Result.Cer);
    return certificate;
}

private KeyVaultClient GetKeyVaultClient() => new KeyVaultClient(async (authority, resource, scope) =>
{
    // Authenticate against Azure AD using the client id/secret from environment variables
    var context = new AuthenticationContext(authority, TokenCache.DefaultShared);
    var clientCred = new ClientCredential(Environment.GetEnvironmentVariable("KeyVaultClientId"),
        Environment.GetEnvironmentVariable("KeyVaultSecret"));
    var authResult = await context.AcquireTokenAsync(resource, clientCred);
    return authResult.AccessToken;
});

Digging into your code, I've realized that there is nothing wrong with it except one thing. Since you use Kestrel, you don't need to set up anything extra in the AppManifest, as those things are for the HTTP.sys implementation. You don't even need to have an endpoint in the ServiceManifest (although it's recommended), as all of those things are about URL reservation for the service account and SSL binding configuration, neither of which is required with Kestrel.
What you do need to do is use IPAddress.IPv6Any when you configure SSL. Aside from the fact that it turns out to be the recommended way (it allows you to accept both IPv4 and IPv6 connections), it also does a 'correct' endpoint registration in SF. See, when you use IPAddress.Any, SF sets up an endpoint like https://0.0.0.0:44338, and that's how the reverse proxy will try to reach the service, which obviously won't work: 0.0.0.0 doesn't correspond to any particular IP, it's just the way to say 'any IPv4 address at all'. When you use IPAddress.IPv6Any instead, you get a correct endpoint mapped to the VM's IP address, which can be resolved from within the vnet. You can see this for yourself in SF Explorer if you go down to the endpoint section in the service instance blade.
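So, as a minimal sketch, the listener configuration from your question would become (everything else unchanged):
.UseKestrel(options =>
{
    // IPv6Any makes SF register the endpoint with the actual VM address
    // instead of https://0.0.0.0:44338, which the reverse proxy can't resolve
    options.Listen(IPAddress.IPv6Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})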

Related

Consul proxy change health endpoint

I have deployed a Consul proxy on a different host than 'localhost', but Consul keeps checking health on 127.0.0.1.
Config of the service and its sidecar:
service {
    name = "counting"
    id = "counting-1"
    port = 9005
    address = "169.254.1.1"

    connect {
        sidecar_service {
            proxy {
                config {
                    bind_address = "169.254.1.1"
                    bind_port = 21002
                    tcp_check_address = "169.254.1.1"
                    local_service_address = "localhost:9005"
                }
            }
        }
    }

    check {
        id = "counting-check"
        http = "http://169.254.1.1:9005/health"
        method = "GET"
        interval = "10s"
        timeout = "1s"
    }
}
The proxy was deployed using the following command:
consul connect proxy -sidecar-for counting-1 > counting-proxy.log
Consul UI's health check message shows the check still targeting 127.0.0.1.
How do I change the health check to 169.254.1.1?
First, I recommend using the Envoy proxy (consul connect envoy) instead of the built-in proxy (consul connect proxy) since the latter is not recommended for production use.
As for changing the health check address, you can do that by setting proxy.local_service_address. This address is used when configuring the health check for the local application.
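For example, a sketch of the sidecar registration with the local service address pointed at the host IP from your question instead of localhost (note this uses the registration-level proxy fields rather than the built-in proxy's opaque config block):
connect {
    sidecar_service {
        proxy {
            # The sidecar's generated health check for the local app
            # is built from these fields instead of 127.0.0.1
            local_service_address = "169.254.1.1"
            local_service_port = 9005
        }
    }
}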
See https://github.com/hashicorp/consul/issues/11008#issuecomment-929832280 for a related discussion on this issue.

How to connect to RSK public nodes over websockets?

I am trying to connect to RSK Mainnet or RSK Testnet over websockets.
Here's what I tried for Mainnet:
const wsProvider = new Web3.providers.WebsocketProvider('ws://public-node.rsk.co');
const web3 = new Web3(wsProvider);
web3.eth.subscribe('newBlockHeaders', function(error, blockHeader){
    if (!error) {
        console.log("new blockheader " + blockHeader.number);
    } else {
        console.error(error);
    }
});
with this result:
connection not open on send()
Error: connection not open
And I did the same with Testnet, using ws://public-node.testnet.rsk.co, with a similar outcome.
Neither of these work, as seen in the errors above.
How can I connect?
Websockets are usually not enabled on public blockchain nodes, and RSK is no exception: RSK public nodes expose JSON-RPC endpoints only over HTTP. They do not expose JSON-RPC endpoints over websockets, so unfortunately, you are not able to do exactly what you have described. However, you can achieve something equivalent by running your own RSK node and using it to establish websocket connections.
Here are the RSK configuration options for RPC. You can also see the default values in the "base" configuration file, under rpc.providers.ws:
ws {
    enabled = false
    bind_address = localhost
    port = 4445
}
Additionally, you should include the /websocket suffix in your endpoint. The default websocket endpoint when running your own node is ws://localhost:4445/websocket.
Therefore, update the initial part of your code,
such that it looks like this:
const wsProvider = new Web3.providers.WebsocketProvider('ws://localhost:4445/websocket');
const web3 = new Web3(wsProvider);

Examples of integrating moleculer-io with moleculer-web using moleculer-runner instead of ServiceBroker?

I am having fun using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file, and create the connected functions in the appropriate service files. For example, aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
But when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (the one with the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
The setup below has everything that you need. As a skeleton, I've used the moleculer-demo project.
What I have:
API service api.service.js, which handles the HTTP requests and passes them to sensor.service.js.
sensor.service.js, which is responsible for communicating with the remote socket.io server, so it needs a socket.io client. When the service starts (in its started() hook), I establish a connection with the remote server located at port 8071. After this, I can use that connection in my service actions to communicate with the socket.io server. This is exactly what I'm doing in the sensor.list action (see the sketch below).
I've also created remote-server.service.js to mock your socket.io server. Despite being a moleculer service, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether or not your services use socket.io; all the services are declared in the same way, i.e., module.exports = {}.
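For instance, here is a minimal sketch of what sensor.service.js could look like (the "sensors" event name and the err/data callback convention are illustrative assumptions about your local program):
// sensor.service.js
const io = require("socket.io-client");

module.exports = {
    name: "sensor",
    actions: {
        // Wired to the gateway via aliases: { 'GET sensors': 'sensor.list' }
        list() {
            // Ask the local program for data over the socket and
            // resolve the action with whatever it replies
            return new Promise((resolve, reject) => {
                this.socket.emit("sensors", (err, data) => {
                    if (err) reject(err);
                    else resolve(data);
                });
            });
        }
    },
    started() {
        // Open the socket.io connection once, when the service starts
        this.socket = io("http://localhost:8071");
    }
};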
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};

const broker = new ServiceBroker();
broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
    const socket = io("http://localhost:3000", {
        reconnectionDelay: 300,
        reconnectionDelayMax: 300
    });
    socket.on("connect", () => {
        console.log("Connection with the Gateway established");
    });
    socket.emit("call", "hello.greeter", (error, res) => {
        console.log(res);
    });
});
To make it work with moleculer-runner, just copy each service declaration into its own my-service.service.js file. So, for example, your api.service.js could look like:
// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
}
and your greeter service:
// greeter.service.js
module.exports = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
}
And run npm run dev or moleculer-runner --repl --hot services

Akka HTTP not allowing incoming connections from remote hosts on macOS

I wanted to try out the following small example:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{StatusCodes, Uri}
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer
import scala.io.StdIn

object Webserver {
    def main(args: Array[String]) {
        implicit val system = ActorSystem("my-system")
        implicit val materializer = ActorMaterializer()
        // needed for the future flatMap/onComplete in the end
        implicit val executionContext = system.dispatcher

        val route =
            path("hello") {
                get {
                    redirect(Uri("https://google.com"), StatusCodes.PermanentRedirect)
                }
            }

        val bindingFuture = Http().bindAndHandle(route, "localhost", 8080)

        println(s"Server online at http://localhost:8080/\nPress RETURN to stop...")
        StdIn.readLine() // let it run until user presses return
        bindingFuture
            .flatMap(_.unbind()) // trigger unbinding from the port
            .onComplete(_ => system.terminate()) // and shutdown when done
    }
}
This works perfectly when accessing from the same host on macOS. However, when I am accessing the host remotely, I can't access the akka webserver.
I have checked my Firewall options and I verified that the program java allows incoming connections.
One more suspicious thing: when I run python -m SimpleHTTPServer 8080, macOS shows its firewall dialog asking whether to allow incoming network connections. I don't get this dialog when starting my akka application. Do I have to implement custom logic to ask for permission or something?
To enable remote access to your server, you need to bind it to the external interface. To simply bind to all interfaces, you can set the host/IP to 0.0.0.0, like:
Http().bindAndHandle(route, "0.0.0.0", 8080)
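Note that 0.0.0.0 exposes the server on every network interface of the machine; if you only want it reachable on a particular network, you can bind to that interface's address instead, e.g. (the IP here is a hypothetical LAN address):
Http().bindAndHandle(route, "192.168.1.20", 8080) // hypothetical LAN address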

Javascript get request from https server to localhost:port with self signed SSL

I have two servers configured and running on my Debian server: one main server and one Elasticsearch (search engine) server.
The main server is running on an https node server behind an NGINX proxy with a purchased SSL certificate. The Elasticsearch server is running over http. I've added a new NGINX proxy server to redirect https://localhost:9999 to http://localhost:9200 with a self-signed SSL certificate. There's also authentication configured on the Elasticsearch server with a username and a password.
Everything seems to be properly configured, since I get a successful response from the server when I do a curl from the server's terminal towards https://localhost:9999 with the -k option to bypass verification of the self-signed certificate; without it, it does not work.
I cannot do a cross-domain request from my https main server to my http localhost server. Therefore I need to configure https on my localhost server.
Without the -k option:
curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
With the -k option:
{
    "name" : "server-name",
    "cluster_name" : "name",
    "cluster_uuid" : "uuid",
    "version" : {
        "number" : "x.x.x",
        "build_hash" : "abc123",
        "build_date" : "Timestamp",
        "build_snapshot" : false,
        "lucene_version" : "x.x.x"
    },
    "tagline" : "You Know, for Search"
}
Which is a successful Elasticsearch server response.
So the full curl request looks something like curl -k https://localhost:9999/ --user username:password.
So, the actual question:
I would like to be able to do a simple jQuery AJAX request towards this server. I'm trying the following request: $.get('https://username:password@localhost:9999/'), but I'm getting ERR_CONNECTION_REFUSED.
My guess is that the AJAX request does not bypass the self-signed certificate verification and therefore refuses to connect.
Is there any simple way to solve this with request headers or something like that? Or do I need to purchase a CA certificate to make this work with AJAX?
You are right, the problem is the self-signed certificate. If you try the same request over http, it will work.
Here is a workaround to make ElasticSearch work with https:
You need to implement your own Http Connector:
var HttpConnector = require('elasticsearch/src/lib/connectors/http');
var inherits = require('util').inherits;
var qs = require('querystring');
var fs = require('fs');

function CustomHttpConnector(host, config) {
    HttpConnector.call(this, host, config);
}
inherits(CustomHttpConnector, HttpConnector);

// This function is copied and modified from elasticsearch-js/src/lib/connectors/http.js
CustomHttpConnector.prototype.makeReqParams = function (params) {
    params = params || {};
    var host = this.host;
    var reqParams = {
        method: params.method || 'GET',
        protocol: host.protocol + ':',
        auth: host.auth,
        hostname: host.host,
        port: host.port,
        path: (host.path || '') + (params.path || ''),
        headers: host.getHeaders(params.headers),
        agent: this.agent,
        rejectUnauthorized: true,
        ca: fs.readFileSync('publicCertificate.crt', 'utf8')
    };
    if (!reqParams.path) {
        reqParams.path = '/';
    }
    var query = host.getQuery(params.query);
    if (query) {
        reqParams.path = reqParams.path + '?' + qs.stringify(query);
    }
    return reqParams;
};

module.exports = CustomHttpConnector;
Then register it like so:
var elasticsearch = require('elasticsearch');
var CustomHttpConnector = require('./customHttpConnector');

var Elasticsearch = function() {
    this.client = new elasticsearch.Client({
        host: {
            host: 'my.server.com',
            port: '443',
            protocol: 'https',
            auth: 'user:passwd'
        },
        keepAlive: true,
        apiVersion: "1.3",
        connectionClass: CustomHttpConnector
    });
}
https://gist.github.com/fractalf/d08de3b59c32197ccd65
If you want to make simple AJAX calls without the ES client, the only thing you can do is prompt the user to visit the page and accept the certificate themselves when the request is denied.
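For example, a rough sketch of that fallback (the retry flow is illustrative):
// If the AJAX call fails because the self-signed certificate was rejected,
// open the endpoint in a new tab so the user can accept the certificate,
// then let them retry the original request
$.get('https://localhost:9999/')
    .fail(function () {
        window.open('https://localhost:9999/', '_blank');
    });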
Also see: https://stackoverflow.com/a/4566055/5758328
