AppHarbor self-hosted Web API using background worker

Do you know if it's possible to self-host a Web API service on an AppHarbor background worker?
Thanks

It's not currently possible, I'm afraid. That's because there's no way for background workers to accept incoming HTTP requests or network connections.

Related

How to deploy a Flask-SocketIO application on an IIS server?

My use case:
I am trying to build an API that takes images as input, performs some image-processing operations, and returns the output JSON to the client.
Multiple clients can request the server concurrently, and the server takes 2 to 3 minutes to process each request.
Initially I thought of a normal Flask application, where the client would poll the server periodically for a response.
But since Flask-SocketIO can respond to the client event-based, I want to use Flask-SocketIO.
As the other APIs in my project are hosted on IIS, I wanted to use the same IIS as the hosting server.
My questions:
Can I use Flask-SocketIO for my use case, where the API takes 2 to 3 minutes to respond?
If not IIS, how do I deploy Flask-SocketIO on a Windows machine? I have gone through the documentation but did not find any deployment strategy for hosting it on a Windows machine.
What is the best way to achieve concurrency in this case?
Thanks in advance,
Prasad.
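Flask-SocketIO can cover this pattern: the event handler hands the slow job to a background task and emits the result back to the requesting client when it finishes, so no polling is needed. A minimal sketch, assuming made-up event names ('process_image', 'result') and a placeholder for the real processing job:

    from flask import Flask, request
    from flask_socketio import SocketIO

    app = Flask(__name__)
    socketio = SocketIO(app)

    def do_image_processing(data):
        # Hypothetical stand-in for the real 2-3 minute image-processing job.
        socketio.sleep(2)
        return {'status': 'done', 'received_bytes': len(data) if data else 0}

    def process_and_reply(sid, data):
        result = do_image_processing(data)
        # 'to' targets a single client by its session id (Flask-SocketIO 5.x).
        socketio.emit('result', result, to=sid)

    @socketio.on('process_image')
    def handle_process_image(data):
        # Hand the slow job to a background task so this handler returns
        # immediately and the socket stays responsive for other clients.
        socketio.start_background_task(process_and_reply, request.sid, data)

    if __name__ == '__main__':
        # With eventlet installed this serves WebSocket traffic directly;
        # eventlet also runs on Windows, which sidesteps IIS entirely.
        socketio.run(app, host='0.0.0.0', port=5000)

Concurrency here comes from the background tasks rather than the request handlers, so many clients can be waiting on results at once.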

Is it possible to set up the NATS protocol on Heroku?

I've seen a lot of documentation and tutorials on how to set up HTTPS and WebSockets on Heroku, but is it possible to set up another protocol, like raw TLS or NATS?
If it is possible, how can I do that?
Unfortunately, no.
"Inbound requests are received by a load balancer that offers SSL termination. From here they are passed directly to a set of routers. The routers are responsible for determining the location of your application’s web dynos and forwarding the HTTP request to one of these dynos."
https://devcenter.heroku.com/articles/http-routing#routing
TCP routing is explicitly listed under "Not supported":
https://devcenter.heroku.com/articles/http-routing#not-supported
Heroku offers only HTTP/HTTPS routing for applications hosted on it.

One Web API calls the other Web APIs

I have 3 Web API servers which have the same functionality. I am going to add another Web API server which will be used only as a proxy. All clients, from anywhere and on any device, will call the Web API proxy server, and the proxy server will randomly forward the client requests to one of the other 3 Web API servers.
I am doing it this way because:
There are a lot of client requests per minute, and I cannot use only 1 Web API server.
If one server dies, clients can still send requests to the other servers. (I need at least 1 web server responding to the clients.)
The questions are:
What is the best way to implement the Web API proxy server?
Is there a better way to handle a high volume of client requests?
I need at least 1 web server responding to the clients, even if I have 3 servers and 2 of them are dead.
Please give me some links or documents that can help me.
Thanks
Sounds like you need a reverse proxy. Apache HTTP Server and NGINX can both be configured to act as a load-balancing reverse proxy.
NGINX documentation: http://nginx.com/resources/admin-guide/reverse-proxy/
Apache HTTP Server documentation: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
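As a rough illustration, the NGINX version of this needs little more than an upstream block; the backend addresses below are made up, and by default requests are spread round-robin with a failed server temporarily taken out of rotation:

    # Goes inside the http {} block of nginx.conf.
    upstream webapi_backends {
        # Hypothetical addresses of the three Web API servers.
        server 10.0.0.1:80;
        server 10.0.0.2:80;
        server 10.0.0.3:80;
    }

    server {
        listen 80;
        location / {
            # Forward every incoming request to one of the backends above.
            proxy_pass http://webapi_backends;
        }
    }

That covers both requirements: load is spread across all three servers, and as long as at least one backend is still up, clients keep getting responses.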
What you are describing is called load balancing, and Azure (which it seems you are using, judging from your comments) provides it out of the box, both for Cloud Services and for Websites. You should create as many instances as you like under the same cloud service and open a specific port (which will be load-balanced) under the cloud service endpoints.

ASP.NET Web API load balancing system

I want to build an ASP.NET Web API server that can re-route incoming HTTP requests to other Web API servers. The main server will be the master, and its only job will be accepting requests and routing them to the other servers. The slave servers will inform the master server when they have started and are ready to accept HTTP requests. The slave servers must not only report that they are alive, but also send which APIs they support. I think I have to re-map the routing tables on the master server at runtime. Is that possible?
This seems like load balancing according to functionality. Is there any way to do this? I have to write a load balancer for Web API; any suggestion is welcome.

How to do load testing on long polling?

I'm trying to figure out how to do load testing on a long-polling or WebSocket type of architecture.
I need to set up multiple clients which subscribe to channels on one side and wait for responses. The load test should measure the time it takes for messages from the publishing server to reach the clients.
Any ideas?
As said here,
"SignalR uses a particular protocol to communicate, so it's best that you use our tool to generate load for testing your server(s)."
So SignalR comes with Crank. Crank can only connect to PersistentConnection-based apps, not Hub-based apps.
This other answer could help you for Hub-based apps.
You can use Crank, as mentioned above. One of its parameters is Transport, so you can restrict the test to long polling only:
crank.exe /Url:http://someUri /Transport:LongPolling
Alternatively, use JMeter (https://jmeter.apache.org/) and flood the server with HTTP connections, passing the transport type as a header.
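If you want to measure the publish-to-client latency yourself, independent of any particular tool, a small script is enough: park N long-poll requests, publish one message, and record when each client receives it. A minimal Python sketch; the two URLs and the assumption that the server echoes back the publisher's timestamp are hypothetical:

    import threading, time
    import requests  # third-party: pip install requests

    POLL_URL = 'http://localhost:8080/poll'        # hypothetical long-poll endpoint
    PUBLISH_URL = 'http://localhost:8080/publish'  # hypothetical publish endpoint
    NUM_CLIENTS = 50

    latencies = []
    lock = threading.Lock()

    def client():
        # Each simulated client parks a long-poll request and waits for
        # the server to answer it with the next published message.
        response = requests.get(POLL_URL, timeout=120)
        received_at = time.time()
        # Assumes the server echoes the publisher's timestamp in the payload.
        sent_at = float(response.json()['published_at'])
        with lock:
            latencies.append(received_at - sent_at)

    threads = [threading.Thread(target=client) for _ in range(NUM_CLIENTS)]
    for t in threads:
        t.start()
    time.sleep(1)  # let all clients connect before publishing
    requests.post(PUBLISH_URL, json={'published_at': time.time()})
    for t in threads:
        t.join()
    if latencies:
        print('clients: %d, avg latency: %.3fs, max: %.3fs'
              % (len(latencies), sum(latencies) / len(latencies), max(latencies)))

The same shape works for WebSocket clients; only the client() function changes.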
