Reactive streaming approach for file upload in Spring (Boot)

We have spent many hours on the internet and on Stack Overflow, but none of our findings satisfied us with regard to how we planned to do file upload in a Spring context.
A few words about our architecture. We have a node.js client which uploads files into a Spring Boot app. Let us call this REST endpoint our "client endpoint". Our Spring Boot application acts as middleware and calls endpoints of a "foreign system", so we call that endpoint the "foreign" one, for the sake of distinction. The main purpose is the file handling between these two endpoints and some business logic in between.
Currently, the interface to our client looks like this:
public class FileDO {
    private String id;
    private byte[] file;
    ...
}
Here we are very flexible because it is our client and our interface definition.
Because our system has sometimes run out of memory under load, we plan to reorganize our code into a more stream-based, reactive approach. When I write "under load", I mean heavily under load, e.g. hundreds of file uploads at the same time with big files ranging from a few MB up to 1 GB. We know that these tests don't represent real application use cases, but we want to be prepared.
We did some research into our challenge, and profiler tools showed us that in our REST endpoints we hold the files completely in memory as byte arrays. That works, but it is not efficient.
Currently we are facing the requirement to provide a REST endpoint for file upload and to push these files into another REST endpoint of some foreign system. So our application's main intention is to be a middle tier for file upload. Given this situation, we would like to avoid holding those files as a whole in memory. Best would be a stream, maybe reactive. We are already partially reactive in some business functions, but still at the very beginning of getting familiar with all that stuff.
So, what are our steps so far? We introduced a new client (node.js --> Spring Boot) interface, shown below. It works so far. But is it really a stream-based approach? First metrics have shown that it doesn't reduce memory utilization.
@PostMapping(value = "/uploadFile", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
@ResponseStatus(HttpStatus.CREATED)
public Mono<Void> upload(@RequestPart(name = "id") String id, @RequestPart(name = "file") Mono<FilePart> file) {
    fileService.save(id, file);
    ...
}
First question: is the type Mono<> right here? Or should we rather have a Flux of DataBuffer or something? And, if so, how should the client behave and deliver data in such a format that it is really a streaming approach?
The FileService class should then post the file(s) to the foreign system, perhaps do something else with the given data, and at the very least log the id and the file name. :-)
Our code in FileService.save(..) currently looks roughly like this:
...
MultipartBodyBuilder bodyBuilder = new MultipartBodyBuilder();
bodyBuilder.asyncPart(...take mono somehow...);
bodyBuilder.part("id", id);
return WebClient.create("url-of-foreign-system")
        .post()
        .uri("/uploadFile")
        .syncBody(bodyBuilder.build())
        .retrieve()
        .bodyToMono(Result.class);
...
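For reference, this is roughly how we imagine the pieces could fit together. It is a minimal sketch only, not our working code, and essentially part of what we are asking about: the FilePart's content() is a Flux<DataBuffer> that is handed to asyncPart(..), so the bytes should be relayed as they arrive instead of being collected into a byte[] first ("Result" is the foreign system's response type from above, the WebClient is created as in the snippet):
public Mono<Result> save(String id, Mono<FilePart> file) {
    return file.flatMap(part -> {
        MultipartBodyBuilder bodyBuilder = new MultipartBodyBuilder();
        // part.content() streams the upload as DataBuffers instead of one big byte[]
        bodyBuilder.asyncPart("file", part.content(), DataBuffer.class);
        bodyBuilder.part("id", id);
        return WebClient.create("url-of-foreign-system")
                .post()
                .uri("/uploadFile")
                .body(BodyInserters.fromMultipartData(bodyBuilder.build()))
                .retrieve()
                .bodyToMono(Result.class);
    });
}
Is something along these lines the intended way to keep it streaming end to end?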
Unfortunately, the second REST endpoint, the one of our foreign system, looks a little different from our first one. It will be enriched with data from another system. It takes some FileDO2 with an id, a byte array, and some other metadata specific to the second foreign system.
As said, our approach should minimize the memory footprint of the actions between client and foreign system. Sometimes we not only have to deliver data to that system, but also run some business logic that may slow down the whole streaming process.
Any ideas how to do this as a whole? Currently we have no clue how to do all of that...
We appreciate any help or ideas.

Related

Processing incoming payloads as batch not working as expected in spring-cloud-streams

I say 'not working as expected', but actually it's more like 'I don't really know if I'm doing the proper work here'; I feel like I'm mixing stuff from different approaches that doesn't really fit together.
Right now I've been using Spring Cloud Stream to process String-type messages from a PubSub subscription, and so far so good: message in, message out without much of a hassle.
What I'm trying to achieve now is to gather, let's say, 1000 messages, process them and send them altogether to another PubSub Topic. Still unsure about sending them as a List or individually like now, but all at the same time (this shouldn't be related to this question though).
Now I just discovered the following property.
spring.cloud.stream.bindings.input.consumer.batch-mode=true
Together with the following ones, which are more specific to the GCP stuff.
spring.cloud.gcp.pubsub.publisher.batching.enabled=true
spring.cloud.gcp.pubsub.publisher.batching.delay-threshold-seconds=300
spring.cloud.gcp.pubsub.publisher.batching.element-count-threshold=100
So first question is... Are they linked by any means? Must I have the first one together with the other three?
What happened after I added the previous properties to my application.properties file is actually no change at all. Messages keep arriving and leaving the application without any issue and with no batch approach whatsoever.
Currently I'm using the functional features the following way.
@Bean
public Function<Message<String>, String> sampleFunction() {
    return message -> {
        ... // Stream processing in here
        return processedString;
    };
}
I was expecting this to crash with some message since the method only receives a String, not a list of String. Since it didn't crash, I modified the method above to receive a list of String (maybe Spring does some magic behind the scenes to still receive messages as String but collect them in a list for the method to process afterwards?).
@Bean
public Function<Message<List<String>>, String> sampleFunction() {
    return message -> {
        ... // Stream processing in here
        return processedString;
    };
}
But this just crashes since it's trying to parse a single String message as a List of String.
How could I prepare the code to batch all those String messages into a List? Is there any example on this?
...batch-mode only works with binders that support it (e.g. Kafka, RabbitMQ). It doesn't look like the GCP binder supports it (I see no references to the property).
https://github.com/spring-cloud/spring-cloud-gcp/blob/master/spring-cloud-gcp-pubsub-stream-binder/src/main/java/org/springframework/cloud/gcp/stream/binder/pubsub/PubSubMessageChannelBinder.java
https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_batch_consumers
Publisher batching is not related to consumer batching.
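For comparison, with a binder that does support batch mode (e.g. Kafka), the functional consumer receives the whole accumulated batch as a List, roughly like this. This is only a sketch based on the docs linked above; the binding name sampleFunction-in-0 and the per-message processing are illustrative:
spring.cloud.stream.bindings.sampleFunction-in-0.consumer.batch-mode=true

@Bean
public Function<List<String>, List<String>> sampleFunction() {
    // the binder delivers the accumulated records as one List per invocation
    return batch -> batch.stream()
            .map(String::trim)   // placeholder per-message processing
            .collect(Collectors.toList());
}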

Spring Data MongoDB Reactive - Dealing with findAll for a large number of documents?

Let's say I have a ReactiveMongoRepository defined like this:
@Repository
interface MyRepo extends ReactiveMongoRepository<MyDTO, String> {}
Given that the repository contains a lot of MyDTO documents (hundreds of thousands at least) and you do a simple findAll() followed by a deletion:
myRepo.findAll()
    .doOnNext(myDto -> System.out.println(myDto.message))
    .flatMap(myDto -> myRepo.deleteById(myDto.id))
This will be executed roughly once a month.
Is it safe to use Spring Data / MongoDB like this when streaming large sets of data? Or is it recommended to use some sort of batching or pagination to avoid cursor issues etc.?
The general answer is "it depends", but in your specific case my opinion is no, at least not in the way you presented it.
First of all, I think a findAll over a whole collection rarely makes sense.
A use case that genuinely needs to handle hundreds of thousands of documents in one request is close to impossible. If you have implemented a data-ingestion pipeline, then yes, you have to handle an effectively infinite stream of data, but for that use case I would suggest a more suitable architecture, such as streaming with Kafka via Spring Cloud Stream, for example.
The problem is not the ability to handle a lot of data: the reactive Mongo driver is very performant and, by tuning the backpressure mechanism, you can protect your server. The problem is that a findAll over such a big collection is hardly applicable in a request/response flow: if you fire a findAll, your server and Mongo will probably be fine, but your user will wait many hours before the request finishes. If instead the use case is an offline process, then, as said before, a messaging middleware with Spring Cloud Stream may be the best option for processing an effectively infinite data stream.
UPDATE
Considering the use case of, let's say, a batch job that runs once a month, I can say that the picture changes a lot.
Reading the code of Spring Data Reactive MongoDB, I see that:
@NoRepositoryBean
public interface ReactiveMongoRepository<T, ID> extends ReactiveSortingRepository<T, ID>, ReactiveQueryByExampleExecutor<T> {
    ....
}
instead of
@NoRepositoryBean
public interface MongoRepository<T, ID> extends PagingAndSortingRepository<T, ID>, QueryByExampleExecutor<T> {
    ...
}
The key point here is that the reactive version of the repository does not have the pagination feature; in fact, the name of its base interface does not contain the word Paging. The reason lies in the kind of technology.
In blocking I/O, pagination is necessary because of the one-thread-per-request model: keeping a connection and the client busy for the whole query is dangerous with regard to timeouts and load, so splitting the query into pages helps to avoid stressing the system too much. In non-blocking I/O the behaviour is different: you attach to a stream of data, and the driver is non-blocking. You do not use the classical Mongo driver; Spring Data uses the dedicated reactive Mongo driver, which is optimized for this job and is based on an event-loop model.
That said, the key point is that using an I/O-intensive model for an offline process is not so much unsafe as it is not very useful. The reactive model is useful for software that is mainly I/O-bound and under high traffic, because the model supports high concurrency. If your use case is cleaning a collection once a month, I guess that using reactive programming is probably safe, since the driver is designed to handle a lot of data in high-load and streaming use cases, but a classical blocking batch model with pagination is the more suitable approach. In short: I suppose it should be safe, but there is little point in using this approach for a batch use case.
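As a rough sketch of that classical approach (illustrative only; blockingRepo stands for an assumed blocking MongoRepository<MyDTO, String> counterpart of MyRepo):
Pageable firstPage = PageRequest.of(0, 1000);
Page<MyDTO> batch;
do {
    batch = blockingRepo.findAll(firstPage);
    batch.forEach(dto -> System.out.println(dto.message));
    blockingRepo.deleteAll(batch.getContent());
    // keep requesting page 0: deleting the current batch shifts the remaining documents forward
} while (batch.hasNext());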
I hope this helps you.

REST API for main page - one JSON or many?

I'm providing RESTful API to my (JS) client from (Java Spring) server.
The main site page contains a number of logical blocks (news, latest comments, some trending stuff), and each of them has a corresponding entity on the server. Which is the right way to go: handle one request like
/api/main_page/ ->
{
    news: {...},
    comments: {...},
    ...
}
or let the client do a few requests like
/api/news/
/api/comments/
...
I know in general it's better to have one large request/response, but does that apply to this situation as well?
Ideally, you should have different API calls for fetching individual configurable content blocks of the page from the same API.
This way your content blocks are loosely coupled to each other.
You can extend, port (to a new framework), and modify them independently at any time you want.
This comes in extremely useful when the application grows.
Switching off a feature is fairly easy in this case.
A/B testing is also easy in this case.
Writing automation is also very easy.
Overall it helps in reducing the testing effort.
But if you really want to fetch this in one call, then you should add an additional param to the request, and when the server sees that additional param it adds the additional independent JSON to the response by calling its own method from the BL layer.
And, if speed is your concern, then try caching these calls on the server for some time (this depends on the type of application).
I think in general multiple requests can be justified when the requested resources reflect parts of the system state (my personal rule of thumb, still WIP).
I.e. if a news item gets displayed in your client application a lot, I would request it once and reuse it wherever I can. If you aggregate here, you would need to request it again later, maybe some items never actually get displayed, and you have some magic to do if the representation of a news item differs between the aggregation and the /news/{id} resource.
This approach would increase communication if the page gets loaded for the first time, but decrease communication throughout your client application the longer it runs.
The state on the server gets copied request by request to your client or updated when needed (Etags, last-modified, etc.).
In your example it looks like /news and /comments are some sort of "latest" or "since last visit", not everything.
If this is true, I would design them as resources as well, like /comments/latest or similar.
But in any case I would have them contain only self-links to /news/{id} or /comments/{id} respectively. Then you would have a request to /comments/latest, which results in a list of self-links, for which I would start a request only if I don't already have that item (maybe I want to check whether the cached copy is still up to date).
It is also possible to trigger the request to /news/{id} only when it actually gets displayed (scrolling, swiping).
The lifespan of a news item or a comment is probably a criterion for answering this question, meaning the caching in the client is not that vital to the system, as opposed to, say, a book in a book store app.

Micro Services and noSQL - Best practice to enrich data in micro service architecture

I want to plan a solution that manages enriched data in my architecture.
To be more clear, I have dozens of microservices,
let's say Country, Building, Floor, Worker,
all running on top of separate NoSQL data stores.
When I get the data from the Worker service, I also want to present the floor name (the one the worker is working on), the building name, and the country name.
Solution 1.
The client will query all microservices.
Problem - multiple requests, and the client has to be aware of the structure.
I know multiple requests shouldn't bother me, but I believe that returning a JSON describing the entity in one single call is better.
Solution 2.
Create an orchestration that retrieves the data from multiple services.
Problem - if the data (entity names, for example) is not stored in the same document in the DB it is very hard to sort and filter by these fields.
Solution 3.
Before saving the entity, e.g. worker, call all the other services and fill in the related data (building name, country name).
Problem - when the building name is changed, the change is not reflected in the worker service.
Solution 4.
(This is the best one I can come up with.)
Create a process that subscribes to a broker and receives all entity changes.
For each entity it updates all the relevant entities.
When an entity changes, let's say the building name changes, it updates all the documents that hold the building name.
Problem:
Each service has to know what can be updated.
When a trailing update happens, it shouldn't update the broker again (recursive update), which can complicate the microservices.
Solution 5.
Keep everything normalized. Filter and sort in Elasticsearch.
Problem: keeping normalized data in ES is too expensive performance-wise.
One thing I saw Netflix do (which I like) is create intermediary services for stuff like this. So maybe a new intermediary service that can call the other services to gather all the data, then create the unified output with the Country, Building, Floor, and Worker.
You can even go one step further and try to come up with a scheme for providing as input which resources you want to include in the output.
So I guess this closely matches your Solution 2. I notice that you mention for Solution 2 that there are concerns with sorting/filtering in the DBs. I think that if you are using NoSQL then it has to be for a reason, and more often than not the reason is performance. If this was done wrong then yes, you will have problems, but if all the appropriate searchable fields are properly keyed and indexed (as @Roman Susi mentioned in his bullet points 1 and 2) then I don't see this being a problem. This service will only be as fast as the culmination of your other services and data stores, so they have to be fast.
Now you keep your individual microservices as they are, keep the client calling one service, and encapsulate the complexity of merging the data into this new service.
This is the video where I saw this (https://www.youtube.com/watch?v=StCrm572aEs)... it's a long video but very informative.
It is hard to advise at the level of Solution N, but certain problems can be avoided by following these pieces of advice:
Use globally unique identifiers for entities, for example by assigning key values some kind of URI.
The global ids also simplify updates, because you track what has actually changed: the name or the entity (an entity has a one-to-one relation with its global URI).
The CAP theorem says you can choose only two of C, A and P. Do you want a CA architecture? Or CP? Or maybe AP? This will strongly affect the way you distribute data.
For "sort and filter" there is the MapReduce approach, which can distribute the load of figuring those things out.
Think carefully about the balance of normalization / denormalization. If your services operate on URIs, you can have a service which turns URIs into labels (names, descriptions, etc.), but you do not need to keep the redundant information everywhere and update it. Do not do premature optimization; try to keep data normalized as long as possible. This way, the worker may not even need the building name, only its global id, and the microservice looks up the metadata from another microservice.
In other words, minimize the number of keys shared between services, as part of separation of concerns.
Focus on the underlying model, not the JSON going to and fro. Getting the modelling of the data in your system(s) right gains you more than saving JSON calls.
As for NoSQL, take a look at the Riak database: it has adjustable CAP properties, IIRC. Even if you do not use it as such, reading its documentation may help you come up with a suitable architecture for your distributed microservices system. (Of course, this applies if you have an essentially parallel system.)
First of all, thanks for your question. It is similar to the main problem of document DBs: how to sort a collection by a field from another collection? I have my own answer for that, so I'll try to comment on all your solutions:
Solution 1: It is good if the client wants to work with Countries/Buildings/Floors independently. But it does not solve the problem you mentioned in Solution 2 - sorting 10k workers by building is going to be slow.
Solution 2: Similar to Solution 1 if all the client wants is a list of enriched workers without knowing how to combine it from multiple pieces.
Solution 3: As you said, unacceptable because of inconsistent data.
Solution 4: It is going to work, most of the time. But:
Huge data duplication. If you have 20 entities, you are going to have x20 data.
Large complexity. 20 entities -> 20 different procedures to update related data.
Tight coupling. All your services must know about each other. A data model change will propagate to every service because of the update procedures.
Questionable eventual consistency. It can be done so that data will be consistent after failures, but it is not going to be easy.
Solution 5: Kind of the answer :-)
But - you do not want everything in there. Keep separate services that serve separate entities, and build other services on top of them.
If the client wants enriched data - build a service that returns enriched data, as in Solution 2.
If the client wants to display a list of enriched data with filtering and sorting - build a service that provides enriched data with filtering and sorting capability! Likely, the implementation of such a service will contain an ES instance that holds cached and indexed data from the lower-level services. The point here is that ES does not have to contain everything or be shared between every service - it is up to you to decide the right balance between performance and infrastructure resources.
This is a case where Linked Data can help you.
Basically the floor attribute of the worker would be a URI (a link) to the floor itself, and any other linked data should be expressed as URIs as well.
Modeled with some JSON-LD it would look like this:
worker = {
  '@id': '/workers/87373',
  name: 'John',
  floor: {
    '@id': '/floors/123'
  }
}
floor = {
  '@id': '/floors/123',
  level: 12,
  building: { '@id': '/buildings/87' }
}
building = {
  '@id': '/buildings/87',
  name: "John's home",
  city: { '@id': '/cities/908' }
}
This way, all the client has to do is append the base URL (like api.example.com) to the @id and make a simple GET call.
To remove the burden of the extra calls from the client (in case it's a slow mobile device), we use the gateway pattern with microservices. The gateway can expand those links with very little effort and augment the returned object. It can also do multiple calls in parallel.
So the gateway will make a GET /floors/123 call and replace the floor object on the worker with the reply.
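A rough sketch of what that expansion could look like on the gateway (illustrative only; the Worker/Floor types and their accessors are assumptions, and a Spring WebClient is used here merely as an example HTTP client):
Mono<Worker> enriched = webClient.get().uri("/workers/87373")
        .retrieve()
        .bodyToMono(Worker.class)
        .flatMap(worker -> webClient.get().uri(worker.getFloorLink()) // e.g. "/floors/123"
                .retrieve()
                .bodyToMono(Floor.class)
                .map(floor -> {
                    worker.setFloor(floor); // the link is replaced by the expanded object
                    return worker;
                }));
Several such link expansions can be combined with Mono.zip so the calls run in parallel.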

Storing, Loading, and Updating a Trie in ASP.NET MVC 3

I have a trie-based word detection algorithm for a custom dictionary. Note that regular expressions are too brittle with this dictionary as entries may contain spaces, periods, etc.
I've implemented the algorithm in a local C# app that reads in the dictionary from file and stores the trie in memory (it's compact, so no RAM size issues at all). Now I would like to use this algorithm in an MVC 3 app on a cloud host like AppHarbor, with the added twist that I want a web interface to enable adding/editing words.
It's fast enough that loading the dictionary from file and building the trie every time a user uploads their text would not be an issue (< 1s on my laptop). However, if I want to enable admins to edit the dictionary via the web interface, that would seem tricky since the dictionary would potentially be getting updated while a user is trying to upload text for analysis.
What is the best strategy for storing, loading, and updating the trie in an MVC 3 app?
I'm not sure if you are looking for specific implementation details or more conceptual ideas about how to handle this, but I'll throw some ideas out there for now.
Actual Trie Classes - Here is a good C# example of classes for setting up a Trie. It sounds like you already have this part figured out.
Storing: I would persist the trie data to XML unless you are already using a database and have some need to have it in a DBMS. The XML will be simple to work with in the MVC application and you don't need to worry about database connectivity issues or the added cost of a database. I would also have two versions of the trie data on the server: a production copy and a production support copy, the latter being the one your admin performs transactions against.
Loading: In your admin module of the application, you may implement a feature for loading the trie data into memory; the frequency of data loading depends on your application's needs. It could be scheduled or available as a manual function. Like on WordPress sites, if a user accesses it while it is updating, they would receive a message that the site is undergoing maintenance. You may choose to load into memory on demand only, or keep the trie loaded at all times except when problems occur.
Updating: I'd have a second database (or XML file) that is used for applying updates. The method of applying updates to production would depend partially on the frequency, quantity, and time of updates. One safe method might be to store the transactions entered by the admin.
For example:
trie.put("John", 112);
trie.put("Doe", 222);
trie.Remove("John");
Then apply these transactions to your production data as needed via an admin function. If needed, put your site into "maint" mode. If the updates are few and fast, you may be able to code the site so that it holds all work until the transactions are processed; a user might have to wait a few milliseconds longer for a result, but you wouldn't have to worry about mutating-data issues.
This is pretty vague but just throwing some ideas out there... if you provide comments I'll try to give more.
1 Store trie in cache:
It is not dynamic data, and caching helps us with other tasks (like concurrent access to the trie by admin and user).
2 Make access to cache clear:
public class TrieHelper
{
    public Trie MyTrie
    {
        get
        {
            if (HttpContext.Current.Cache["myTrieKey"] == null)
                HttpContext.Current.Cache["myTrieKey"] = LoadTrieFromFile(); // returns a Trie object
            return (Trie)HttpContext.Current.Cache["myTrieKey"];
        }
    }
3 Lock the trie object while an add operation is in progress
    public void AddWordToTrie(string word)
    {
        var trie = MyTrie;
        lock (HttpContext.Current.Cache["myTrieKey"])
        {
            trie.AddWord(word);
        } // note: locking the trie object while writing the data to file is not required
        WriteNewWordToTrieFile(word); // should lock a FileWriter object
    }
}
4 If editing is performed by one admin at a time, store the trie in an XML file. It will be easy to implement the logic of finding the word after which your new word should be added (you can create a function that uses the MyTrie object in memory), and then add it using LINQ to XML.
I've got kind of the same thing, but 10 times bigger :)
The client designs its own calendar with questions and possible answers, while in the meantime a version is online and being used by the normal users.
What I came up with was something like test and deploy. The admin enters the calendar values and sets it up correctly, and afterwards he can use a Preview button to see if it's the way he needs/wants it; then, to make the changes valid for all end users, he needs to push Deploy.
He, as an ADMIN, will know that, until he pushes the DEPLOY button, all users accessing the calendar will see the old values. As soon as he hits Deploy, everything is set in the database, and the files he uploaded are pushed into Amazon S3 (for faster access).
I update the cache with the new calendar, and the new Calendar object stays cached until the app pool says otherwise or he hits the Deploy button again.
You could do something like this.
As you are going to run your application in a cloud environment, I'd suggest you take a look at CQRS and durable messaging, and provide some concurrency model (possibly optimistic concurrency and intelligent conflict detection, http://skillsmatter.com/podcast/design-architecture/cqrs-not-just-for-server-systems at 5:00).
Also, obviously, you need to analyze your business requirements more precisely because, as Udi Dahan mentioned, race conditions are the result of a lack of business analysis.
