Is there a SaaS provider for an HTTPS hub/load-balancer for bandwidth-constrained IoT devices?

I'm writing this post because I'd like to avoid reinventing the wheel if possible. I think this is a solved problem, but I am struggling to find a SaaS provider that offers an existing solution fitting my use case.
The problem goes as follows:
I am running a fleet of data-hungry IoT devices.
These devices need to send data to multiple third-party endpoints (all HTTPS).
Currently, as each new endpoint is added, mobile data usage (3G/4G/5G) scales linearly, because each device sends to every third-party endpoint itself.
Proposed solution:
The IoT devices transmit their data to an HTTPS "hub", which then forwards the data to a list of specified endpoints, much like an HTTPS load-balancer but operating in a send-to-all mode.
This would keep the IoT device data usage constant while increasing cloud data usage (which is orders of magnitude cheaper), resulting in a cost saving.
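As a rough illustration, here is a minimal sketch of such a hub in Python (Flask and requests are arbitrary choices here, and the endpoint list is a made-up placeholder, not an existing service):

    # Minimal fan-out hub sketch (assumed names/endpoints, not a real product).
    # A device POSTs once to /ingest; the hub replays the body to every endpoint.
    import requests
    from flask import Flask, request

    app = Flask(__name__)

    # Hypothetical third-party endpoints the hub fans out to.
    ENDPOINTS = [
        "https://third-party-a.example.com/data",
        "https://third-party-b.example.com/data",
    ]

    @app.route("/ingest", methods=["POST"])
    def ingest():
        body = request.get_data()
        headers = {"Content-Type": request.content_type or "application/octet-stream"}
        results = {}
        for url in ENDPOINTS:
            try:
                resp = requests.post(url, data=body, headers=headers, timeout=10)
                results[url] = resp.status_code
            except requests.RequestException as exc:
                results[url] = f"failed: {exc}"  # a real hub would queue this for retry
        return results, 202

    if __name__ == "__main__":
        app.run(port=8080)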
I had imagined this is a fairly common problem with IoT devices, but I am struggling to find any provider already offering this type of service. I may simply lack the right terminology to search for if it already exists. If anybody knows the name of a service that offers something like this, that is the type of answer I am looking for.
Ideal features:
HTTPS retry. If the service fails to forward a request to one or more destinations, it should cache the request and attempt to re-transmit to the failed destinations after some amount of time (see the sketch after this list).
An SLA regarding uptime, since downtime of this service would be expected to cause a larger outage than the original approach of duplicating requests from the device.
Reverse proxy to preserve the original IP address if possible.
A web GUI (a nice-to-have but not essential).
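And a hedged sketch of the retry idea mentioned in the list above, with an in-memory loop standing in for a durable cache (the function name, backoff values, and timeouts are all assumptions):

    # Sketch of the retry idea: failed forwards are retried with exponential backoff.
    # A real service would persist the request rather than hold it in memory.
    import time
    import requests

    def forward_with_retry(url: str, body: bytes, max_attempts: int = 5) -> bool:
        """Try to deliver `body` to `url`, backing off between attempts."""
        delay = 2.0
        for attempt in range(1, max_attempts + 1):
            try:
                resp = requests.post(url, data=body, timeout=10)
                if resp.ok:
                    return True
            except requests.RequestException:
                pass
            time.sleep(delay)  # wait before re-transmitting to the failed destination
            delay *= 2
        return False           # give up after max_attempts; alert or dead-letter here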
If this doesn't exist I will likely write my own solution. Thanks in advance.

Related

KDB+/Q: GRPC implementation?

gRPC is a modern open source high performance RPC framework that can
run in any environment. It can efficiently connect services in and
across data centers with pluggable support for load balancing,
tracing, health checking and authentication. It is also applicable in
last mile of distributed computing to connect devices, mobile
applications and browsers to backend services.
I'm finding gRPC is becoming increasingly pertinent in backend infrastructure, and would've liked to have it in my favorite language/tsdb, kdb+/q.
I was surprised to find that kdb+ does not have a gRPC implementation. Obviously, the protobuf interface (https://code.kx.com/q/interfaces/protobuf/) doesn't support parsing of RPCs. Is there anything fundamentally preventing a KDB+ implementation of the RPC requests/services etc. found in gRPC?
Why would one not want to implement RPCs (gRPC) in kdb+, and would it be a good idea to wrap a C/C++ implementation in order to achieve this functionality?
Thanks for your advice.
Interesting post:
https://zimarev.com/blog/event-sourcing/myth-busting/2020-07-09-overselling-event-sourcing/
outlines event sourcing, which I think might be a better fit for kdb?
What is the main issue with services using RPC calls to exchange information? Well, it’s the high degree of coupling introduced by RPC by its nature. The whole group of services or even the whole system can go down if only one of the services stops working. This approach diminishes the whole idea of independent components.
In my practice I hardly encounter any need to use RPC for inter-service communication. Partially because I often use Event Sourcing, more about it later. But we always use asynchronous communication and exchange information between services using events, even without Event Sourcing.
For example, an order microservice in an e-commerce system needs customer data from the customer microservice. These dependencies between microservices are not ideal: other microservices can go down, and synchronous RESTful requests over HTTPS do not scale well due to their blocking nature. If there were a way to completely eliminate dependencies between microservices, the result would be a more robust architecture with fewer bottlenecks.
You don’t need Event Sourcing to fix this issue. Event-driven systems are perfectly capable of doing that. Event Sourcing can eliminate some of the associated issues like two-phase commits, but again, not a requirement to remove the temporal coupling from your system.
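To make the contrast concrete, here is a minimal sketch of the event-driven style the quote describes, using a tiny in-process bus as a stand-in for a real broker (all names here are illustrative, not from the article):

    # Sketch: the order service keeps its own copy of customer data by consuming
    # CustomerUpdated events, instead of calling the customer service via RPC.
    from collections import defaultdict

    class EventBus:
        """Tiny in-process stand-in for a message broker such as Kafka or RabbitMQ."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self._subscribers[event_type].append(handler)

        def publish(self, event_type, payload):
            for handler in self._subscribers[event_type]:
                handler(payload)

    bus = EventBus()

    # Order service: maintains a local read model, so it never blocks on the customer service.
    customer_cache = {}
    bus.subscribe("CustomerUpdated", lambda e: customer_cache.update({e["id"]: e["name"]}))

    # Customer service: publishes a fact about what happened, not a reply to a request.
    bus.publish("CustomerUpdated", {"id": 42, "name": "Alice"})

    print(customer_cache)  # {42: 'Alice'} -- available even if the customer service later goes down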

Must Microservices based systems be all in the same network?

I have a web application that is separated into several components. For some reasons (pricing), I'm considering deploying future components in different clouds.
Does anybody have references or experience with this to tell me whether it is definitely a bad idea? I know that components being in different networks will decrease performance. At the same time, I do not like the idea of losing the freedom to choose where new components will run.
Must Microservices-based systems all be in the same network? How do you handle this problem?
Having worked with multiple services in the past, I can tell you that services are made to work across separate networks. This is why there are security protocols like CAS, SAML, OAuth, HTTPS, and HMAC, to name a few.
So as long as you are able to deal with managing the networks, and you have good security around your services (and I assume you do), I would not worry about breaking some unspoken microservices rule. Remember that microservices, if well written and useful, are expected to be used across the Internet, especially for the Internet of Things, so they are expected to be used across multiple networks.
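As one concrete example of the kind of security mentioned above, here is a hedged sketch of HMAC request signing using only Python's standard library (the shared secret and header convention are assumptions):

    # Sketch: HMAC-signing a request body so services on different networks can
    # verify that a call came from a trusted peer. Secret and header are illustrative.
    import hmac
    import hashlib

    SHARED_SECRET = b"replace-with-a-real-secret"  # distributed out of band

    def sign(body: bytes) -> str:
        """Return a hex HMAC-SHA256 signature to send, e.g. in an X-Signature header."""
        return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

    def verify(body: bytes, signature: str) -> bool:
        """Constant-time comparison on the receiving service."""
        return hmac.compare_digest(sign(body), signature)

    body = b'{"order_id": 123}'
    sig = sign(body)
    assert verify(body, sig)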
When you start trying this, pay very close attention to bandwidth charges. With AWS, for example, you are fine if everything is in the same region: bandwidth between services will not cost much, if anything. But say you use AWS and Google Cloud; now you will be paying for the bandwidth between the two providers.
As a suggestion, I would look at Docker as a possible solution to your concern about vendor lock-in.
You would be restricted to providers that support Docker, but in theory you could migrate between providers easily, since your application would be abstracted from each cloud provider's architecture.
Performance will take a hit for anything leaving a provider's data center. With some investigation you might try researching providers that use a common internet exchange; this would help minimize at least a few hops.

ZeroMQ capabilities

I am looking for solutions for a scenario.
Let's assume a service-oriented architecture (SOA) with hundreds of services. The services are completely isolated – what is behind their APIs is an implementation detail.
Different services can have different security policies – i.e. who can access the service. For example, a service can be fully public, accessible to a subset of employees, accessible to a subset of other services, etc. Some services may even have that specified on the API level, for example a public service with some internal API calls (is that a bad idea?).
I have touched on ZMQ a bit, but not enough to know whether this interconnection of services can be accomplished with it. Any help deciding whether or not to keep focusing on ZMQ would be highly appreciated.
Are you asking about how to handle security in a SOA? Or are you asking whether or not it is feasible to build a SOA with 0MQ?
The former requires you to build it yourself. You need to define your own security policy between services. Not really 0MQ's domain.
For the latter, yes, 0MQ should allow you to build an SOA. In fact we're doing it right now. Services are encapsulated into containers with an HTTP endpoint handled by nginx, which reverse proxies the request to one or more Node.js servers inside (via Express), which then PUSH messages to workers' PULL sockets on a fair-queue basis. Upon finishing processing the request, a worker PUSHes its reply back to the server's PULL socket. This way we can spin up any number of workers with minimal disruption to the server. And this is one service.
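For illustration, here is a minimal sketch of that PUSH/PULL pattern in Python with pyzmq (the setup described above uses Node.js; the ports and message format here are assumptions):

    # Sketch of the described pattern: a server PUSHes work to workers' PULL sockets
    # (fair-queued), and each worker PUSHes its reply back to the server's PULL socket.
    # Run server() and worker() in separate processes.
    import zmq

    def server(ctx: zmq.Context) -> None:
        work_out = ctx.socket(zmq.PUSH)    # fair-queues jobs across connected workers
        work_out.bind("tcp://*:5557")
        replies_in = ctx.socket(zmq.PULL)  # collects results from any worker
        replies_in.bind("tcp://*:5558")
        work_out.send_json({"request_id": 1, "payload": "hello"})
        print(replies_in.recv_json())

    def worker(ctx: zmq.Context) -> None:
        work_in = ctx.socket(zmq.PULL)
        work_in.connect("tcp://localhost:5557")
        replies_out = ctx.socket(zmq.PUSH)
        replies_out.connect("tcp://localhost:5558")
        job = work_in.recv_json()
        replies_out.send_json({"request_id": job["request_id"], "result": "done"})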
Service-to-service communication is handled through REST over HTTP.

Existing Event Driven Network Protocols

I am building a set of programs that consist of multiple clients and a single server.
The clients frequently push small packets of data to the server, which validates the information (returning an error if the data is invalid) and processes it. The information may then trigger events, which clients can subscribe to, allowing clients to be notified instantly (or as close to instantly as possible), along with a small amount of data.
I have some ideas about how to do this, but I am trying to avoid creating a protocol of my own, mainly as I'm sure it would take forever and I would probably make a few errors. So I was wondering if there are any existing protocols that I could implement into my system that would provide such functionality.
The number of clients will initially be quite small, but will grow over time to potentially include thousands of clients (each with their own subscriptions) and several front-end servers (each handling a subset of subscriptions) passing the information back and forth with back-end servers for improved capability.
So, if anyone knows of any existing protocols that implement these requirements and functionality, that would be fantastic.
EDIT
I am currently looking at the XMPP protocol and the JXTA protocol suite (for reference, and to implement with another language). Both seem quite good and provide the necessary connectivity, but I have not had the opportunity to test either of them in my environment, or to confirm whether they are even suitable for what I am attempting.
Additionally, some of the network clients will be outside of the local network, operating over a WAN. Security is not so much of an issue, but I need to take into account the increased latency, and firewall rules (local to the connection hosting the application, plus ISP firewalls) that could be blocking certain ports or transport protocols. (I have read that some ISPs were blocking UDP packets, but I am not sure how widespread this is; I can use UDP at home, the office, on mobile, at friends' houses, etc., and have yet to experience it myself.)
I'm sorry if the following is not exactly what you're after, but I am slightly confused by your use of the word 'protocol'. I understand a protocol to be a 'communication specification' only, where the implementation is left entirely to you. If that is the case, I always find the following graphic useful, link.
If, on the other hand, you are looking for a solution which allows you to easily implement the networking side of your application, helping save time, then check out the following network libraries, which implement their own custom protocols:
NetworkComms.Net
Lidgren
ZeroMQ
Disclaimer: I'm a developer for NetworkComms.Net
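As an illustration of how the ZeroMQ option above can cover the subscribe/notify behavior described in the question, here is a minimal PUB/SUB sketch in Python with pyzmq (the topic name and port are assumptions):

    # Sketch: the server PUBlishes events; clients SUBscribe only to topics they care about.
    # Start the subscriber before publishing: PUB/SUB drops messages with no subscribers.
    import zmq

    def event_server() -> None:
        ctx = zmq.Context.instance()
        pub = ctx.socket(zmq.PUB)
        pub.bind("tcp://*:5556")
        # After validating an incoming packet, notify subscribers of the resulting event.
        pub.send_multipart([b"sensor.updated", b'{"device": 7, "value": 42}'])

    def event_client() -> None:
        ctx = zmq.Context.instance()
        sub = ctx.socket(zmq.SUB)
        sub.connect("tcp://localhost:5556")
        sub.setsockopt(zmq.SUBSCRIBE, b"sensor.updated")  # this client's subscription
        topic, payload = sub.recv_multipart()
        print(topic, payload)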

How to implement a secure distributed social network?

I'm interested in how you would approach implementing a BitTorrent-like social network. It might have a central server, but it must be able to run in a peer-to-peer manner, without communicating with it:
If a whole region's network is disconnected from the internet, it should be able to pass updates from users inside the region to each other
However, if some computer gets the posts from the central server, it should be able to pass them around.
There is some reasonable level of identification; some computers might be disseminating incomplete/incorrect posts or performing DoS attacks. It should be possible to label some information as coming from more trusted computers and some from less trusted ones.
It should theoretically be able to use any computer as a server, while dynamically optimizing the network so that typically only fast computers with ample internet connections work as seeders.
The network should be able to scale to hundreds of millions of users; however, each particular person is interested in less than a thousand feeds.
It should include some Tor-like privacy features.
Purely theoretical question, though inspired by recent events :) I do hope somebody implements it.
Interesting question. Using already-existing Tor, P2P, and darknet features, together with some public/private-key infrastructure, you could possibly come up with some great things. It would be nice to see something like this in action. However, I see a major problem: not people using it for file sharing, but people flooding the network with useless information. I would therefore suggest a Twitter-like approach where you can ban and subscribe to particular people, and start with a very reduced set of functions at the beginning.
Incidentally, we programmers could make a good start toward accomplishing that goal by NOT saving and analyzing too much information about users, and by using safe ways of storing and accessing user-related data!
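A minimal sketch of that public/private-key idea using the third-party cryptography package (Ed25519; everything here is illustrative, not a full design):

    # Sketch: each user signs their posts; peers verify before relaying, which supports
    # the "more trusted / less trusted" distinction without a central server.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # A user's identity is their keypair; the public key can travel with the post.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    post = b"status update: hello from inside the disconnected region"
    signature = private_key.sign(post)

    # Any peer holding the public key can check the post wasn't tampered with in transit.
    try:
        public_key.verify(signature, post)
        print("post verified, safe to relay")
    except InvalidSignature:
        print("bad signature, drop the post")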
Interesting, the rendezvous protocol does something similar to this (it grabs "buddies" in the local network)
BitTorrent is a means of transferring static information; it's not intended to have everyone become a producer of new content. Also, BitTorrent requires that the producer act as a dedicated server until all of the clients are able to grab the information.
Diaspora claims to be one such thing.