Webmachine with http and https?

What is the recommended way of getting https working with webmachine?
I see that there is an example for getting mochiweb working with https and http, but I just can't seem to translate that to webmachine. In particular, how do you handle both http and https requests in one app?

I had some success getting multiple listeners running with the following change to mywebdemo_sup.erl in the demo app. I haven't tested it much further than that, but hopefully it's enough to get you started.
init([]) ->
    Ip = case os:getenv("WEBMACHINE_IP") of false -> "0.0.0.0"; Any -> Any end,
    {ok, Dispatch} = file:consult(filename:join(
                       [filename:dirname(code:which(?MODULE)),
                        "..", "priv", "dispatch.conf"])),
    %% Plain HTTP listener on port 8000
    WebConfig = [
        {name, one},
        {ip, Ip},
        {port, 8000},
        {log_dir, "priv/log"},
        {dispatch, Dispatch}],
    Web = {one,
           {webmachine_mochiweb, start, [WebConfig]},
           permanent, 5000, worker, dynamic},
    %% HTTPS listener on port 8443
    WebSSLConfig = [
        {name, two},
        {ip, Ip},
        {port, 8443},
        {ssl, true},
        {ssl_opts, [{certfile, "/tmp/api_server.crt"},
                    {cacertfile, "/tmp/api_server.ca.crt"},
                    {keyfile, "/tmp/api_server.key"}]},
        {log_dir, "priv/log"},
        {dispatch, Dispatch}],
    WebSSL = {two,
              {webmachine_mochiweb, start, [WebSSLConfig]},
              permanent, 5000, worker, dynamic},
    Processes = [Web, WebSSL],
    {ok, {{one_for_one, 10, 10}, Processes}}.

Related

Load balancer in Spring Boot that ignores failing servers

I have a load balancer in Spring Boot, but I want it to ignore a server when that server fails and use another one instead.
So, for example, if the second localhost instance fails, I would like it to try another one without the user noticing that a server is down. What can I do?
This is what I am using:
@Override
public Flux<List<ServiceInstance>> get() {
    // Return the full list of known instances for this service id
    return Flux.just(Arrays.asList(
            new DefaultServiceInstance(serviceId + "1", serviceId, "localhost", 8080, false),
            new DefaultServiceInstance(serviceId + "2", serviceId, "localhost", 9092, false),
            new DefaultServiceInstance(serviceId + "3", serviceId, "localhost", 9999, false)));
}
}
This tells my code all the possible servers, but when one of them fails it shows an error. I would like it to try another server instead of returning an HTTP 500 response, so the user does not notice that there was an error in the first place.

Examples of integrating moleculer-io with moleculer-web using moleculer-runner instead of ServiceBroker?

I am having fun using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file, and create the connected functions in the appropriate service files. For example aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
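For reference, this is roughly what such an api.service.js looks like under moleculer-runner; the route path and port below are illustrative, not taken from the project in question:

// api.service.js -- illustrative sketch of the setup described above
const ApiGateway = require("moleculer-web");

module.exports = {
    name: "api",
    mixins: [ApiGateway],
    settings: {
        port: 3000,
        routes: [
            {
                path: "/",
                // 'GET /sensors' is forwarded to the list() action of sensors.service.js
                aliases: { "GET sensors": "sensors.list" }
            }
        ]
    }
};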
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
BUT when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (the one with the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
The setup described below should have everything that you need.
As a skeleton I've used the moleculer-demo project.
What I have:
An API service, api.service.js, that handles the HTTP requests and passes them on to sensor.service.js.
sensor.service.js, which is responsible for communicating with the remote socket.io server, so it needs its own socket.io client. In the service's started() hook I establish a connection to the remote server on port 8071; after that the connection can be used inside the service actions to talk to the socket.io server, which is exactly what the sensor.list action does (see the sketch after this list).
remote-server.service.js, which I've created to mock your socket.io server. Even though it is itself a moleculer service, sensor.service.js talks to it over the socket.io protocol.
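A minimal sketch of what such a sensor.service.js could look like, assuming the local program listens on port 8071 and answers a getSensorData event (both the port and the event name are placeholders; plain socket.io-client is used on this side, since moleculer-io is only needed on the gateway):

// sensor.service.js -- hypothetical sketch, not the exact code from the original answer
const io = require("socket.io-client");

module.exports = {
    name: "sensor",
    actions: {
        // The action that the gateway alias (e.g. 'GET sensors') points to
        list() {
            // Ask the local program for its data over the open socket and
            // resolve the action with whatever it acknowledges back.
            return new Promise((resolve, reject) => {
                this.socket.emit("getSensorData", (err, data) => {
                    if (err) return reject(err);
                    resolve(data);
                });
            });
        }
    },
    started() {
        // Open the socket.io connection to the local program on port 8071
        this.socket = io("http://localhost:8071");
    },
    stopped() {
        // Close the connection when the service stops
        if (this.socket) this.socket.disconnect();
    }
};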
It doesn't matter whether or not your services use socket.io. All the services are declared in the same way, i.e. module.exports = {}.
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};

const broker = new ServiceBroker();
broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
    const socket = io("http://localhost:3000", {
        reconnectionDelay: 300,
        reconnectionDelayMax: 300
    });
    socket.on("connect", () => {
        console.log("Connection with the Gateway established");
    });
    socket.emit("call", "hello.greeter", (error, res) => {
        console.log(res);
    });
});
To make it work with moleculer-runner just copy the service declarations into my-service.service.js. So for example, your api.service.js could look like:
// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};
and your greeter service:
// greeter.service.js
module.exports = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};
And run npm run dev or moleculer-runner --repl --hot services

Redirect to custom protocol/scheme is not accessible when using headless watir/firefox

I'm working on a set of integration tests for the Dropbox OAuth sequence, which finishes in a series of 302 redirects, the last of which is to a custom protocol/scheme. Everything works as expected in the mobile app which the testing mimics, and everything bar this works in the integration tests.
The testing environment runs on Ubuntu Server (no GUI) and is headless using xvfb.
Objectively, I don’t actually need the custom protocol URI to be followed, I just need to get access to the URI to confirm the contents match expectations.
I have tried everything I can think of to access the URI containing the custom scheme from within watir/selenium, but all the references I can find say that the underlying detail is deliberately hidden by design.
I have also tried all the options I can find for creating a custom protocol handler within the firefox profile, but no matter what happens the script isn’t called.
Nothing useful is being left in the watir/selenium logs.
Any thoughts?
Custom protocol handler snippet:
# initialise headless
headless = Headless.new( reuse: false )
headless.start
# initialise profile
profile = Selenium::WebDriver::Firefox::Profile.new
profile[ 'general.useragent.override' ] = 'agent'
profile[ 'network.protocol-handler.app.biscuit' ] = '/usr/bin/biscuit'
profile[ 'network.protocol-handler.external.biscuit' ] = true
profile[ 'network.protocol-handler.expose.biscuit' ] = true
profile[ 'network.protocol-handler.warn-external.biscuit' ] = false
# initialise client
client = Selenium::WebDriver::Remote::Http::Persistent.new
# initialise browser
browser = Watir::Browser.new :firefox, profile: profile, accept_insecure_certs: true, http_client: client
# run dropbox authentication cycle
# cleanup
browser.close
headless.destroy
After chasing this around for ages, it turns out that most of the documentation for adding custom schemes on the mozilla site and forums is deprecated and there’s nothing new to replace it. Grrr.
Through a process of trial and error, I found that the model profile used by the webdriver does not need to be complete; anything that is missing will be pulled from the default profile. So all that is required is a handlers.json file containing the custom scheme(s) and nothing more.
Snippet to demonstrate:
# create a temporary model profile
profilePath = '/tmp/modelProfile'
FileUtils.mkpath profilePath
File.chmod( 0700, profilePath )
FileUtils.chown 0, 0, profilePath
open( profilePath + '/handlers.json', 'w' ) { |file| file.write '{ "defaultHandlersVersion": { "en-US": 4 }, "schemes": { "biscuit": { "action": 2, "handlers": [ { "name": "biscuit", "uriTemplate": "https://www.biscuit.me?url=%s" } ] } } }' }
# create profile
profile = Selenium::WebDriver::Firefox::Profile.new( '/tmp/modelProfile' )
# initialise browser
browser = Watir::Browser.new :firefox, profile: profile, accept_insecure_certs: true

Service Fabric https endpoint with kestrel and reverse proxy

I've been trying to set up HTTPS on a stateless API endpoint, following the instructions in the Microsoft documentation and various posts/blogs I could find. It works fine locally, but I'm struggling to make it work after deploying it to my dev server, where I'm getting:
Browser : HTTP ERROR 504
Vm event viewer : HandlerAsyncOperation EndProcessReverseProxyRequest failed with FABRIC_E_TIMEOUT
SF event table : Error while processing request: request url = https://mydomain:19081/appname/servicename/api/healthcheck/ping, verb = GET, remote (client) address = xxx, request processing start time = 2018-03-13T14:50:17.1396031Z, forward url = https://0.0.0.0:44338/api/healthcheck/ping, number of successful resolve attempts = 48, error = 2147949567, message = , phase = ResolveServicePartition
In code, I have this in the instance listener:
.UseKestrel(options =>
{
    options.Listen(IPAddress.Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})
ServiceManifest:
<Endpoint Protocol="https" Name="SslServiceEndpoint" Type="Input" Port="44338" />
Startup:
services.AddMvc(options =>
{
    options.SslPort = 44338;
    options.Filters.Add(new RequireHttpsAttribute());
});
+
var options = new RewriteOptions().AddRedirectToHttps(StatusCodes.Status301MovedPermanently, 44338);
app.UseRewriter(options);
Here is what I have in Azure (deployed through an ARM template):
Health probes
NAME                     PROTOCOL  PORT   USED BY
AppPortProbe             TCP       44338  AppPortLBRule
FabricGatewayProbe       TCP       19000  LBRule
FabricHttpGatewayProbe   TCP       19080  LBHttpRule
SFReverseProxyProbe      TCP       19081  LBSFReverseProxyRule

Load balancing rules
NAME                   LOAD BALANCING RULE                BACKEND POOL               HEALTH PROBE
AppPortLBRule          AppPortLBRule (TCP/44338)          LoadBalancerBEAddressPool  AppPortProbe
LBHttpRule             LBHttpRule (TCP/19080)             LoadBalancerBEAddressPool  FabricHttpGatewayProbe
LBRule                 LBRule (TCP/19000)                 LoadBalancerBEAddressPool  FabricGatewayProbe
LBSFReverseProxyRule   LBSFReverseProxyRule (TCP/19081)   LoadBalancerBEAddressPool  SFReverseProxyProbe
I have a cluster certificate and a reverse proxy certificate, and auth to the API goes through Azure AD. In the ARM template:
"fabricSettings": [
{
"parameters": [
{
"name": "ClusterProtectionLevel",
"value": "[parameters('clusterProtectionLevel')]"
}
],
"name": "Security"
},
{
"name": "ApplicationGateway/Http",
"parameters": [
{
"name": "ApplicationCertificateValidationPolicy",
"value": "None"
}
]
}
],
Not sure what else could be relevant; if you have any ideas/suggestions, they are really welcome.
Edit: code for GetCertificate()
private X509Certificate2 GetCertificate()
{
    var certificateBundle = Task.Run(async () => await GetKeyVaultClient()
        .GetCertificateAsync(Environment.GetEnvironmentVariable("KeyVaultCertifIdentifier")));
    var certificate = new X509Certificate2();
    certificate.Import(certificateBundle.Result.Cer);
    return certificate;
}

private KeyVaultClient GetKeyVaultClient() => new KeyVaultClient(async (authority, resource, scope) =>
{
    var context = new AuthenticationContext(authority, TokenCache.DefaultShared);
    var clientCred = new ClientCredential(Environment.GetEnvironmentVariable("KeyVaultClientId"),
        Environment.GetEnvironmentVariable("KeyVaultSecret"));
    var authResult = await context.AcquireTokenAsync(resource, clientCred);
    return authResult.AccessToken;
});
Digging into your code, I've realized that there is nothing wrong with it except one thing. Since you use Kestrel, you don't need to set up anything extra in the AppManifest, as those settings are for the Http.Sys implementation. You don't even need to have an endpoint in the ServiceManifest (although it is recommended), as all of that is about URL reservation for the service account and SSL binding configuration, neither of which is required with Kestrel.
What you do need to do is use IPAddress.IPv6Any when you configure SSL. Aside from being the recommended way, which allows you to accept both IPv4 and IPv6 connections, it also does a 'correct' endpoint registration in SF. When you use IPAddress.Any, SF sets up an endpoint like https://0.0.0.0:44338, and that's how the reverse proxy will try to reach the service, which obviously won't work: 0.0.0.0 doesn't correspond to any particular IP, it's just the way to say 'any IPv4 address at all'. When you use IPAddress.IPv6Any instead, you get a correct endpoint mapped to the VM's IP address, which can be resolved from within the vnet. You can see this for yourself in SF Explorer if you go down to the endpoint section in the service instance blade.
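A minimal sketch of the corresponding listener change, reusing the port and the GetCertificate() helper from the question:

.UseKestrel(options =>
{
    // IPv6Any (rather than Any) lets SF register an endpoint that resolves to the
    // VM's address instead of https://0.0.0.0:44338, and accepts IPv4 as well as IPv6.
    options.Listen(IPAddress.IPv6Any, 44338, listenOptions =>
    {
        listenOptions.UseHttps(GetCertificate());
    });
})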

Connection lost with crossbar.io whatever the long-poll settings

I'm using Crossbar.io to test websockets and long polling.
But each time I try using long polling as the default transport, whatever settings I set, I get a "connection lost" every 2 seconds in my console.
By the way, it works perfectly with websockets.
Here are the settings I want to test:
On the server side:
{
    "lp": {
        "type": "longpoll",
        "options": {
            "request_timeout": 0,
            "session_tiemout": 0,
            "queue_limit_bytes": 0,
            "queue_limit_messages": 0
        }
    }
}
On the client side:
var connection = new autobahn.Connection({
    transports: [{
        url: [my url],
        type: "longpoll",
        max_retries: 1,
        initial_retry_delay: 1,
        retry_delay_growth: 3,
        retry_delay_jitter: 3
    }], ...
I'm using Python on the server side and Chrome 43 as the default browser (also tested on Firefox).
Is something wrong with my settings?
Sorry, I cannot replicate this. I'm using the longpoll example (https://github.com/crossbario/crossbarexamples/tree/master/longpoll) and have modified the config and the connection data to mirror what you list here. (I assume that the "tiemout" is just a typo here, since Crossbar.io doesn't start with this.)
This works fine in Chrome 43.
My best guess is that the problem is with something you didn't list.
My suggestion: Start from the example, and see whether this works for you.

Resources