Improve Backend Response Time - protocol-buffers

While analyzing how to improve back-end response time, I came across several ways to reduce it. One is to have the back-end return responses in Protocol Buffers (protobuf) format instead of JSON.
Questions
1. Are all browsers able to parse protobuf messages?
2. Is protobuf supported by all HTTP servers - Apache, Play, Tomcat?
3. Postgres has a JSON column type. Does any database have a protobuf column type?
Others say Apache Thrift is best for improving back-end response time, and it uses a different mechanism than protobuf messages.
Please help me understand this better.
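For context, here is roughly what I understand the browser side would need to do - a minimal sketch using the protobufjs library and a made-up Product message (not something I have working, so please correct me if this is off):

```typescript
import * as protobuf from "protobufjs";

// Hypothetical schema, just for illustration:
//   message Product { string name = 1; double price = 2; }
async function fetchProduct(): Promise<void> {
  // Browsers cannot decode protobuf natively, so the schema and a JS
  // decoder (here protobufjs) have to be shipped to the client.
  const root = await protobuf.load("product.proto");
  const Product = root.lookupType("Product");

  // Read the response as raw bytes instead of res.json().
  const res = await fetch("/api/products/42");
  const bytes = new Uint8Array(await res.arrayBuffer());

  const message = Product.decode(bytes);
  console.log(Product.toObject(message)); // back to a plain JS object
}
```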

Related

Spring Boot Microservices - Design of API to get the response as a List by passing Ids

I am using Spring Boot and Spring Cloud for a microservices architecture, using various things like API Gateway, Distributed Config, Zipkin + Sleuth, Cloud and 12-factor methodologies, where we have a single DB server with the same schema but tables are private to each service.
Now I am looking at the options below. Note - the response object is nested and returns data in a hierarchy.
Can we ask the downstream system to develop an API that accepts a list of CustomerIds and returns the response in one go?
Or can we simply call the same API multiple times, passing a single CustomerId each time, to get the responses?
Please advise for both a complex response set and a simple response set. Which would be better with performance and microservices in mind?
I would go with option 1. This may be less RESTful, but it is more performant, especially if the list of CustomerIds is large. Following standards is certainly good, but sometimes the use case requires us to bend the standards a bit so that the system stays useful.
With option 2 you will most probably "waste" more time on the HTTP connection "dance" than on your actual use case of getting the data. Imagine having to call the same downstream service 50 times if you are required to retrieve the data for 50 CustomerIds.
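To make the difference concrete, here is a rough TypeScript sketch of the two calling patterns (the endpoint paths and payload shape are made up for illustration; the same idea applies whatever language the caller is written in):

```typescript
const BASE = "http://downstream-service/api"; // hypothetical base URL

// Option 1: one batched call carrying all CustomerIds.
async function fetchCustomersBatch(ids: string[]): Promise<unknown[]> {
  const res = await fetch(`${BASE}/customers/search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ customerIds: ids }),
  });
  return res.json(); // one HTTP round trip, however many ids there are
}

// Option 2: one call per CustomerId - the HTTP "dance" is paid N times.
async function fetchCustomersOneByOne(ids: string[]): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const id of ids) {
    const res = await fetch(`${BASE}/customers/${id}`);
    results.push(await res.json());
  }
  return results;
}
```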

Providing REST APIs: Apache NiFi vs Spring Boot [closed]

We're currently evaluating some open-source tools at our company for creating our REST APIs in the future.
The final candidates are Apache NiFi and Spring. I'm familiar with Spring, and it's relatively easy to implement APIs that satisfy our needs with it.
However, I'm not sure if NiFi is the better tool, or whether it is even designed to be used purely as an API provider.
Generally, our APIs do the following:
Parse JSON payload/input parameters (sometimes quite complex XQuery stuff on the payload)
Send that information to Oracle DB functions, where the main logic resides
Parse the Oracle output and send an appropriate HTTP response
If anyone with NiFi or Spring experience (or both) has some more insights on what's the better alternative here, I'd greatly appreciate it. Thanks in advance!
NiFi isn't specifically designed for creating RESTful APIs, but there's no reason you couldn't achieve this in NiFi. After all, the use case you describe is pretty much just moving data: receive data payload -> parse data -> send to Oracle -> respond.
You can build complex HTTP handling logic with the NiFi HandleHTTPRequest and HandleHTTPResponse processors.
You can easily work with JSON in NiFi; either using the concept of Records with JsonTreeReader, or using something like JoltTransformJSON.
You can interact with DBs, including Oracle, using the DBCPConnectionPool and then run SQL using PutSQL, ExecuteSQL, QueryDatabaseTable (and their corresponding Record variants, e.g. ExecuteSQLRecord).
You'll also gain some of the benefits of NiFi out of the box, e.g. fault tolerance, clustering, scaling out, visibility, lineage etc.
NiFi is a no-code approach, so it's a vastly different experience from developing a Spring application. You'll need to learn the dos and don'ts of NiFi, how to properly structure flows, how to scale, etc. You can also extend NiFi with custom development, but you'd have to learn the NiFi structure and APIs.
Obviously, you could achieve all of this with Spring too; if your needs are very simple (you won't need to scale out, you don't need guaranteed fault tolerance, etc.), or if your API is going to branch into wider use cases than you described here, it will probably be easier as you already have Spring experience.
There are other considerations: how you do version control (NiFi has NiFi Registry), external dependencies (NiFi requires ZooKeeper), overhead (NiFi has its UI for building flows), deployment (NiFi's disk requirements for repositories, OS support, etc.), management/support (are you comfortable supporting NiFi/Registry/ZooKeeper if there are issues?), upgrades, etc.
I would also go for the Spring Framework based approach. NiFi is more of an ETL-like tool.
#McLovin, love the name.
As a long-time user of NiFi, I have NiFi APIs in production at several huge enterprises. I also have experience with Spring and can appreciate both approaches.
However, the number one qualifier for NiFi here, in my opinion, is that I, or whoever I am training in delivery, do not have to write any code. This is amazing!! I also love the ability to set an inbound port in NiFi to accept anything, or to allow or deny requests based on my flow logic, which I can change at any time. I can modify the flow while it is live. I can add more logic while it is live. I can capture exceptions, send notifications, and build replay ability into requests.
I would absolutely choose NiFi over Spring to create a scalable API.

How to handle file uploads in GraphQL over Websocket Protocol?

I'm creating a web app that uses GraphQL, and the requirement is to handle GraphQL operations over WebSocket. I managed to achieve this by using subscriptions-transport-ws; however, I'm quite stuck on handling file uploads. I came across streaming files from client to server using socket.io-stream, but this leads to having two separate APIs for textual data and files. So I was wondering if there is a way to combine this functionality into GraphQL.
I ran into the exact same problem, and my file sizes were such that converting to base64 wasn't a feasible option. I also didn't want to use a separate library outside of GraphQL because that would require substantial setup changes in my server (to handle both GraphQL and non-GraphQL).
Fortunately, the solution ended up being fairly simple. I created two GraphQL clients on the front-end - one for the majority of my traffic exclusively over WebSockets, and another exclusively over HTTP just for operations that involved file uploads.
Now I can simply specify which client I want depending on whether the operation involves file uploads, without complex changes on my server or impacting the real-time benefits of all of my other queries.
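For what it's worth, a minimal sketch of that two-client setup, assuming Apollo Client with subscriptions-transport-ws on the WebSocket side and apollo-upload-client on the HTTP side (the answer above doesn't name specific libraries, so treat these as one possible combination):

```typescript
import { ApolloClient, InMemoryCache } from "@apollo/client";
import { WebSocketLink } from "@apollo/client/link/ws";
import { createUploadLink } from "apollo-upload-client";

// Client for the majority of traffic, exclusively over WebSockets.
export const wsClient = new ApolloClient({
  link: new WebSocketLink({
    uri: "ws://localhost:4000/graphql", // hypothetical endpoint
    options: { reconnect: true },
  }),
  cache: new InMemoryCache(),
});

// Second client, exclusively over HTTP, used only for operations that
// involve file uploads (multipart/form-data).
export const uploadClient = new ApolloClient({
  link: createUploadLink({ uri: "http://localhost:4000/graphql" }),
  cache: new InMemoryCache(),
});
```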
Currently, I am also looking for a proper solution to this, but as an alternative approach you can convert your file data into base64 and pass it as a string. This approach only works for small files, since the string data type cannot store large amounts of data the way buffers do.
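A rough sketch of that base64 approach on the client (my own illustration; the mutation and field names would be whatever your schema defines):

```typescript
// Convert a browser File into a base64 data URL so it can travel as an
// ordinary GraphQL String variable over the existing WebSocket transport.
// As noted above, this is only reasonable for small files.
function fileToBase64(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string); // "data:<mime>;base64,..."
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}

// Usage: const content = await fileToBase64(file);
// then send { name: file.name, content } as variables of an upload mutation.
```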

OpenWhisk and binary data from Google Flatbuffers

We have data created by a simulated device being put onto the network with NanoMsg, with a payload of Google FlatBuffers (binary).
We would like to trigger on patterns in this data with OpenWhisk, and respond with FlatBuffer-encoded responses.
Assume latency and throughput are not a big concern here.
Which approach should we take?
Write a repeater that converts the FlatBuffers to JSON (FlatBuffers has a utility to do this) and then places the data onto an AMQP bus which is listened to by OpenWhisk? (We have folks familiar with AMQP, but not Kafka.)
Try to do something with Kafka, which seems (maybe it is only the IBM version) to directly handle the binary FlatBuffers (we probably still need a shim from NanoMsg to Kafka). E.g.:
How to invoke an OpenWhisk action from IoT Platform in Bluemix
https://medium.com/openwhisk/serverless-transformation-of-iot-data-in-motion-with-openwhisk-272e36117d6c
Not sure whether we would still need the FlatBuffers JavaScript deserializer and serializer to convert the binary base64 data to JSON in JavaScript.
Learn Kafka, and then transform the NanoMsg payload (FlatBuffers to JSON).
Something else?
Anyone have direct experience in this?
Update
Thank you James, those are spot-on links. But it does raise some secondary issues:
If the data is in a Google FlatBuffers schema, there does not seem to be any advantage to using Kafka's binary transformation, since the mux/demux from base64 still needs to be done in the JavaScript layer.
It is slightly disturbing that Kafka (which is known for its low latency) batches the events. That does affect latency when one has IoT (sensor) data that needs to be responded to in a closed loop to actuators (sensor -> control -> actuator is a common robotics model, and that is pretty much what we are doing). For the moment we are not pushing the latency issue, but I can see emerging cases where we will need low latency. What is the thinking in the Kafka Whisk provider community about this?
I must be missing something, but the AMQP provider says it is using RHEA (https://github.com/amqp/rhea#receiver). That seems to provide all one needs in terms of writing simple triggers/rules for dealing with sensor stream data. Why would one use OpenWhisk?
Either option makes sense. OpenWhisk actions receive and return JSON messages. Binary data passed into those functions must be Base64 encoded.
If you use an AMQP feed, you can convert the binary data to JSON manually.
The Kafka feed provider does support automatic encoding of the binary input values (using the isBinary* parameters).
Kafka feeds push batches of messages to the OpenWhisk actions. This is different from a message queue, which would push one message at a time. This feed provider is built into OpenWhisk.
There is an external community feed provider for AMQP here. You would need to install and run it yourself.
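As a rough sketch of the Base64 point, a Node.js-style OpenWhisk action might look like the following (the payload parameter name and the SensorReading accessor are hypothetical; the real names depend on your feed configuration and flatc-generated code):

```typescript
import * as flatbuffers from "flatbuffers";

interface Params {
  payload: string; // base64-encoded FlatBuffer bytes (assumed field name)
}

// OpenWhisk actions receive and return JSON, so the binary FlatBuffer
// arrives base64-encoded (e.g. via the Kafka provider's isBinary* options
// or your own AMQP repeater) and must be decoded inside the action.
export function main(params: Params): { byteLength: number } {
  const bytes = Buffer.from(params.payload, "base64");
  const buf = new flatbuffers.ByteBuffer(new Uint8Array(bytes));

  // From here you would hand `buf` to your flatc-generated accessor,
  // e.g. SensorReading.getRootAsSensorReading(buf) - schema-specific,
  // so omitted here.
  return { byteLength: buf.capacity() };
}
```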

Do any optimized web servers for single-page applications exist?

When we build a single-page application, the web server basically does only one thing: it returns some data when the client asks for it (using JSON format, for example). So any server-side language (PHP, RoR) or tool (Apache, nginx) can do it.
But is there a language/tool that works better with this sort of single-page application, which generates lots of small requests that need low latency and sometimes a permanent connection (for realtime and push features)?
SocketStream seems like it matches your requirements quite well: "A phenomenally fast real-time web framework for Node.js ... dedicated to creating single-page real time websites."
SocketStream uses WebSockets to get lowest latency for the real-time portion. There are several examples on the site to build from.
If you want a lot of small requests in realtime, with data pushed to the client, you should take a look at socket-type connections.
Check out Node.js with Socket.io.
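For instance, a minimal Socket.io server that pushes data over a persistent connection might look like this (port and event names are arbitrary):

```typescript
import { Server } from "socket.io";

const io = new Server(3000); // listens on port 3000

io.on("connection", (socket) => {
  // Push data to the client without a new HTTP request per message.
  socket.emit("welcome", { msg: "connected" });

  // Answer lots of small requests over the same persistent connection.
  socket.on("getData", (id: string) => {
    socket.emit("data", { id, value: Math.random() });
  });
});
```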
If you really want to optimize for speed, you could try implementing a custom HTTP server that just fits your needs, for example with the help of Netty.
It's blazingly fast and has examples for HTTP and WebSocket servers included.
Also, taking a look at GWAN may be worthwhile (though I have not tried that one yet).
http://en.wikipedia.org/wiki/Nginx could be appropriate
