I have a list of answers to questions. I want to save those answers and then (after Mongo gives them an id) add them to their questions.
Now I'm doing it this way:
public Flux<Answer> createAnswers(List<Answer> answers) {
return answerRepository.saveAll(answers)
.map(answer -> {
questionRepository.findById(answer.getQuestionId())
.subscribe(question -> {
question.getAnswers().removeIf(ans -> Objects.equals(ans.getId(), answer.getId()));
question.getAnswers().add(answer);
questionRepository.save(question).block();
});
return answer;
});
}
I also tried saveAll(answers).doOnNext() and doOnEach(), but that way the questions are not saved.
It seems that map is meant to transform data rather than to perform operations on each element. I'm also a bit confused by the call to block().
Is there a better way to achieve my aim?
You should never call subscribe or block on a Flux or a Mono within a method that itself returns a reactive type.
Doing so decouples the current pipeline from what you're trying to achieve. In the best case, it breaks backpressure support. In many cases it can also break in surprising ways; for example, if your method is dealing with an HTTP request/response or a session of some sort, the response/session might get closed while your other subscriber is still trying to do something with it.
I believe something like this is more consistent (although I'm missing a lot of context here, so it might not be the best way to achieve this):
public Flux<Answer> createAnswers(List<Answer> answers) {
return answerRepository.saveAll(answers)
.flatMap(answer -> {
return questionRepository
.findById(answer.getQuestionId())
.flatMap(question -> {
question.getAnswers().removeIf(ans -> Objects.equals(ans.getId(), answer.getId()));
question.getAnswers().add(answer);
return questionRepository.save(question);
})
.thenReturn(answer);
});
}
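Note that flatMap subscribes to the inner question lookups eagerly and possibly concurrently, so the questions may be saved out of order relative to the incoming answers. If ordering matters, a minimal variant (a rough, untested sketch using the same repositories) would swap flatMap for concatMap, which processes one answer at a time:
public Flux<Answer> createAnswers(List<Answer> answers) {
return answerRepository.saveAll(answers)
// concatMap preserves order and runs the inner saves sequentially
.concatMap(answer -> questionRepository
.findById(answer.getQuestionId())
.flatMap(question -> {
question.getAnswers().removeIf(ans -> Objects.equals(ans.getId(), answer.getId()));
question.getAnswers().add(answer);
return questionRepository.save(question);
})
.thenReturn(answer));
}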
Problem
We're developing a Spring Boot service to upload data to different back end databases. The idea is that, in one multipart/form-data request a user will send a "model" (basically a file) and "modelMetadata" (which is JSON that defines an object of the same name in our code).
We got the below to work in the WebFlux annotated controller syntax, when the user sends the "modelMetadata" in the multipart form with the content-type of "application/json":
@PostMapping(consumes = [MediaType.MULTIPART_FORM_DATA_VALUE], produces = [MediaType.APPLICATION_JSON_VALUE])
fun saveModel(@RequestPart("modelMetadata") monoModelMetadata: Mono<ModelMetadata>,
@RequestPart("model") monoModel: Mono<FilePart>,
@RequestHeader headers: HttpHeaders) : Mono<ResponseEntity<ModelMetadata>> {
return modelService.saveModel(monoModelMetadata, monoModel, headers)
}
But we can't seem to figure out how to do the same thing with WebFlux's functional router definition. Below are the relevant code snippets we have:
@Bean
fun modelRouter() = router {
accept(MediaType.MULTIPART_FORM_DATA).nest {
POST(ROOT, handler::saveModel)
}
}
fun saveModel(r: ServerRequest): Mono<ServerResponse> {
val headers = r.headers().asHttpHeaders()
val monoModelPart = r.multipartData().map { multiValueMap ->
multiValueMap["model"] // What do we do with this List<Part!> to get a Mono<FilePart>?
multiValueMap["modelMetadata"] // What do we do with this List<Part!> to get a Mono<ModelMetadata>?
}
From everything we've read, we should be able to replicate the same functionality found in the annotated controller syntax with the functional router syntax, but this particular aspect doesn't seem to be well documented. Our goal was to move over to the new functional router syntax since this is a new application we're developing and there are some nice forward-thinking features/benefits as described here.
What we've tried
Googling to the ends of the Earth for a relevant example
this is a similar question, but hasn't gained any traction and doesn't relate to our need to create an object from one piece of the multipart request data
this may be close to what we need for uploading the file component of our multipart request data, but doesn't handle the object creation from JSON
Tried looking at the @RequestPart annotation code to see how things are done on that side; there's a nice comment that seems to hint at how they are converting the parts to objects, but we weren't able to figure out where that code lives or any relevant example of how to use an HttpMessageConverter on the ``
the content of the part is passed through an {@link HttpMessageConverter} taking into consideration the 'Content-Type' header of the request part.
Any and all help would be appreciated! Even just some links to help us better understand the Part/FilePart types and their role in multipart requests would be helpful!
I was able to come up with a solution to this issue using an autowired ObjectMapper. From the below solution I could turn the modelMetadata and modelPart into Monos to mirror the @RequestPart return types, but that seems ridiculous.
I was also able to solve this by creating a MappingJackson2HttpMessageConverter and turning the metadataDataBuffer into a MappingJacksonInputMessage, but this solution seemed better for our needs.
fun saveModel(r: ServerRequest): Mono<ServerResponse> {
val headers = r.headers().asHttpHeaders()
return r.multipartData().flatMap {
// We're only expecting one Part of each to come through...assuming we understand what these Parts are
if (it.getOrDefault("modelMetadata", listOf()).size == 1 && it.getOrDefault("model", listOf()).size == 1) {
val modelMetadataPart = it["modelMetadata"]!![0]
val modelPart = it["model"]!![0] as FilePart
modelMetadataPart
.content()
.map { metadataDataBuffer ->
// TODO: Only do this if the content is JSON?
objectMapper.readValue(metadataDataBuffer.asInputStream(), ModelMetadata::class.java)
}
.next() // We're only expecting one object to be serialized from the buffer
.flatMap { modelMetadata ->
// Function was updated to work without needing the Mono's of each type
// since we're mapping here
modelService.saveModel(modelMetadata, modelPart, headers)
}
}
else {
// Send bad request response message
// (left unimplemented in the original post; something like the line below would satisfy the Mono<ServerResponse> return type)
ServerResponse.badRequest().build()
}
}
}
Although this solution works, I feel like it's not as elegant as the one alluded to in the @RequestPart annotation comments. Thus I will accept this as the solution for now, but if someone has a better solution please let us know and I will accept it!
I have a small issue with doing a blocking operation in Spring WebFlux. I retrieve a list of article documents, and from that list of article documents I would like to update another object.
When I execute the code below, sometimes it works and sometimes it throws "block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-nio-2". Could you please suggest how to fix this? I don't really want to make it blocking, but I'm not sure how to proceed. There are similar threads on Stack Overflow, but none specific to my requirement.
It would be really nice if someone could suggest a workaround.
private OrderInfo setPrices(final OrderInfo orderInfo) {
final List<ArticleDocument> articleDocuments = getArticleDocuments(orderInfo).block(); // Problematic line
for (ArticleDocument article : articleDocuments) {
//Update orderInfo based on one of the article price and few more condition.
}
return orderInfo;
}
private Mono<List<ArticleDocument>> getArticleDocuments(final OrderInfo orderInfo) {
return this.articleRepository.findByArticleName(orderInfo.getArticleName()).collectList();
}
It has to be something like this. Please note that I have not tested it in my IDE. If anything needs modifying, please comment and we can figure it out together.
private Mono<OrderInfo> setPrices(final OrderInfo orderInfo) {
return getArticleDocuments(orderInfo)
.map(articleDocuments -> {
articleDocuments.forEach(article -> {
// UPDATE orderInfo AS YOU NEED based on the article price and your other conditions
});
return orderInfo;
});
}
private Mono<List<ArticleDocument>> getArticleDocuments(final OrderInfo orderInfo) {
return this.articleRepository.findByArticleName(orderInfo.getArticleName()).collectList();
}
Remember, you have to keep everything in the chain. That's why you have to return Mono<OrderInfo> instead of OrderInfo from the setPrices method. If you find my suggested code tough to adapt to your current code structure, you can show me the full code and we can find out together whether we can build a good chain or not.
By the way, you were using getArticleDocuments(orderInfo).block(). See? You were calling .block(). Don't do that in a chain; never block anything in the request-to-response chain. Return a Mono or Flux from the controller and everything will be handled by WebFlux.
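To illustrate how the chain stays non-blocking end to end, here is a rough, untested sketch; the public method name, the controller mapping, and the orderService field are my guesses, not taken from your code:
// In the service: a public, chained version that never blocks
public Mono<OrderInfo> getOrderInfoWithPrices(final OrderInfo orderInfo) {
return setPrices(orderInfo); // setPrices now returns Mono<OrderInfo>
}
// In the controller: return the Mono directly and let WebFlux subscribe to it
@PostMapping("/orders/prices")
public Mono<OrderInfo> prices(@RequestBody final OrderInfo orderInfo) {
return orderService.getOrderInfoWithPrices(orderInfo);
}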
So I have a Spring Batch app that gets a list of ids and then calls read() to fetch 1 to many results per id. The issue is, I have no control over how many results I get back for each id, meaning that my chunking is spotty at best. Is there a suggested way to avoid spikes in memory/CPU? An example is below:
@Before
public void getIds() {
*getListOfIds* //Usually around 10,000 or so
}
@Override
public List<AccountObject> read() {
if(list of ids haven't all been used) {
List<AccountObject> myAccounts = myService.getAccounts(id);
return myAccounts; //This could be anywhere from 1 result to 100,000 results.
} else {
return null;
}
}
So the myAccounts object above could be small or huge. This causes chunking to basically be useless because at the moment I am chunking by List. I'd really rather chunk by straight AccountObject but don't see an easy way to do this.
Is there a class, strategy, etc. that I am missing here?
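One pattern that is sometimes used for this (a rough, untested sketch; the AccountService type, the Long id type, and the field names are assumptions, not from the question) is to buffer the accounts for each id inside the reader and hand them back one at a time, so the chunk boundary counts individual AccountObjects rather than whole lists:
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import org.springframework.batch.item.ItemReader;

public class AccountItemReader implements ItemReader<AccountObject> {

private final Iterator<Long> idIterator; // the ~10,000 ids loaded up front
private final Deque<AccountObject> buffer = new ArrayDeque<>();
private final AccountService myService; // assumed service wrapping getAccounts(id)

public AccountItemReader(List<Long> ids, AccountService myService) {
this.idIterator = ids.iterator();
this.myService = myService;
}

@Override
public AccountObject read() {
// Refill the buffer from the next id until there is something to return
while (buffer.isEmpty() && idIterator.hasNext()) {
buffer.addAll(myService.getAccounts(idIterator.next()));
}
return buffer.poll(); // returns null once ids and buffer are exhausted, ending the step
}
}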
I need some help understanding the latest recommended approach to wiring up and using ReactiveUI for a WPF project.
While researching ReactiveUI on the internet I came across various (few) posts spanning a long period during which the library evolved, with the unfortunate result that some of these how-to articles now refer to older ways of doing things which are no longer applicable.
I am trying to understand the recommended way to wire up commands (usually to invoke a web service which returns a DTO), and I've found multiple ways mentioned to do it.
My current understanding is that
// this is the first thing to do
MyCommand = ReactiveCommand.Create()
// variations to wire up the delegates / tasks to be invoked
MyCommand.CreateAsyncTask()
MyCommand.CreateAsyncFunc()
MyCommand.CreateAsyncAction()
// this seems to be only way to wire handler for receiving result
MyCommand.Subscribe
// not sure if these below are obsolete?
MyCommand.ExecuteAsync
MyCommand.RegisterAsyncTask()
Could someone explain which of these variations is the latest API and which are obsolete, with perhaps a few words about when to use each of them?
The changes on the ReactiveCommand API are documented in this blog post:
http://log.paulbetts.org/whats-new-in-reactiveui-6-reactivecommandt/
The first option - ReactiveCommand.Create() - just creates a reactive command.
To define a command which asynchronously returns data from a service you would use:
MyCommand = ReactiveCommand.CreateAsyncTask(
canExec, // optional
async _ => await api.LoadSomeData(...));
You may use the Subscribe method to handle data when it is received:
this.Data = new ReactiveList<SomeDTO>();
MyCommand.Subscribe(items =>
{
this.Data.Clear();
foreach (var item in items)
this.Data.Add(item);
});
Though the simplest thing is to use the ToProperty method instead, like this:
this._data = MyCommand
.Select(items => new ReactiveList<SomeDTO>(items))
.ToProperty(this, x => x.Data);
where you have defined an output property for Data:
private readonly ObservableAsPropertyHelper<ReactiveList<SomeDTO>> _data;
public ReactiveList<SomeDTO> Data
{
get { return _data.Value; }
}
I'm experimenting with MongoDB in Node.js using the node-mongodb-native plugin. A problem I'm running into is the number of nested callbacks. I'm trying to simplify a few things by reducing the code required for a query.
Instead of this ...
db.collection("test", function(err, collection) {
collection.find(...).toArray(function(err, results) {
// ...
});
});
... I was thinking of building an object which acts as a cache of collections so that the first callback is not necessary. I'm using the following code for building the object:
var collections = {};
["test", "foo"].forEach(function(name) {
db.collection(name, function(err, coll) {
collections[name] = coll;
});
});
With it, I'm able to clean up the first code snippet to:
collections.test.find(...).toArray(function(err, results) {
// ...
});
I was wondering whether this is a good practice. It works just fine, but I guess the callback of getting a collection is there for a reason. Does it make sense to build a collection cache as I'm doing now?
That completely depends on what a collection object is.
- Is it live?
- Is it connected to the database?
- Does it do any internal caching?
- Does it reflect new data?
Without knowing those details I recommend you create a lazy evaluation proxy.
Mongo.collection("test").find(...).toArray(function(err, results) {
// ...
});
The idea here is that you internally store the find command and when you call toArray you get the collection and invoke the find command on it, then invoke toArray.
This means you're getting a new collection every time and you avoid the "is caching safe" problem, but you still have a nice API.