Is there a way to collect Uni.combine().all().unis(...) failures, as Uni.join().all(...).andCollectFailures() does?
I need to call different services concurrently (with heterogeneous results) and fail the whole operation if any one of them fails.
Moreover, what's the difference between Uni.combine().all().unis(...) and Uni.join(...)?
Uni Combine Exception
Yes: use collectFailures() on the combined group. The code should look like this:
return Uni.combine().all().unis(getObject1(), getObject2())
        .collectFailures() // wait for all Unis instead of failing fast
        .asTuple()
        .flatMap(tuple -> Uni.createFrom().item(Response.ok().build()))
        .onFailure().recoverWithUni(failure -> {
            // with collectFailures(), the propagated failure is a CompositeException
            System.out.println(failure instanceof CompositeException);
            CompositeException exception = (CompositeException) failure;
            for (Throwable error : exception.getCauses()) {
                System.out.println(error.toString());
            }
            return Uni.createFrom().item(Response.status(500).build());
        });
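Note that without collectFailures() the combination is fail-fast: the first failure is propagated as soon as it happens. With collectFailures(), the combined Uni waits for all Unis to complete and, if at least one failed, emits a CompositeException aggregating every failure, which is the same behaviour as Uni.join().all(...).andCollectFailures().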
Difference:
Quarkus (via SmallRye Mutiny) provides parallel processing through these two features:
Uni.join() - iterate over a list of items and perform the same operation on each of them in parallel. You can either
iterate over a list of orders and perform activities on each order one by one (that is a Multi),
OR
iterate over a list of orders and add a method wrapper for each one to the UniJoin builder; when the builder executes, the method wrappers are called in parallel and their responses are collected into a list:
List<RequestDTO> reqDataList = request.getRequestData(); // your input data
UniJoin.Builder<ResponseDTO> builder = Uni.join().builder();
for (RequestDTO requestDTO : reqDataList) {
    builder.add(process(requestDTO)); // process(...) returns a Uni<ResponseDTO>
}
return builder.joinAll().andFailFast()
        .map(responseList -> responseList.stream()
                .filter(Objects::nonNull)
                .collect(Collectors.toList()));
Each object in the list is wrapped in a call to 'process', and the resulting Unis are executed in parallel once 'joinAll().andFailFast()' runs, with the responses collected into a list.
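The join-side equivalent of collecting failures is joinAll().andCollectFailures(): instead of failing fast, it waits for all Unis to complete and aggregates any failures into a CompositeException.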
'Uni.combine' - call separate methods that return different response types, in parallel.
List<OrderDTO> orders = new ArrayList<>();
return Uni.combine().all()
        .unis(getCountryMasters(), getCurrencyMasters(updateDto))
        .asTuple()
        .flatMap(tuple -> {
            List<CountryDto> countries = tuple.getItem1();
            List<CurrencyDto> currencies = tuple.getItem2();
            // Get the country code and currency code from each order and
            // convert them to the corresponding technical ids.
            return convert(orders, countries, currencies);
        });
As you can see, the two methods passed to 'combine' return different result types, yet they are executed in parallel.
Related
I am trying to call getProductContract(), but it returns an empty list. I think that's because the Mono is never executed.
Can someone please help me with how to execute the calls so that I get the populated resultList back?
Sample code
//Controller.java:
service.getProductContract()
// Service.java
public Mono<List<ProductContract>> getProductContract() {
Set<String> productIdList = new HashSet<>();
productIdList.add("p123");
productIdList.add("p456");
List<ProductContract> resultList = new ArrayList<>();
productIdList.forEach(productId -> prodRepository.getProductDetails(productId)
.flatMap(productDetail -> prodRepository.getProductContracts(productDetail.getProductContractId()))
.mapNotNull(contracts -> resultList.add(contracts.stream().findFirst().isPresent()? contracts.stream().findFirst().get():null))
.subscribeOn(Schedulers.boundedElastic())
);
log.info("size {}",String.valueOf(resultList.size())); //-> Size is ZERO
return Mono.just(resultList);
}
// Repository.java
public Mono<List<ProductContract>> getProductContracts (String contractId){...} // can return multiple contracts for one id
public Mono<String> getProductDetails(String productId){...}
The productIdList...flatMap... block only assembles reactive pipelines; nothing ever subscribes to them, so by the time log.info prints the size, none of them has executed and the list is still empty.
A better way to assemble all the resources in your case is like this:
return Flux.fromIterable(productIdList)
    .flatMap(productId -> prodRepository.getProductDetails(productId))
    .flatMap(p -> /* ...fetch the contracts for p... */)
    .map(/* ...build a contract DTO instance... */)
If you want to return a Mono<List<ProductContract>>, just call collectList() on the Flux.
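Putting it together, a minimal sketch of the whole service method could look like this (assuming, as the question's code does, that getProductDetails returns an object exposing getProductContractId()):
public Mono<List<ProductContract>> getProductContract() {
    Set<String> productIdList = Set.of("p123", "p456");
    return Flux.fromIterable(productIdList)
            .flatMap(productId -> prodRepository.getProductDetails(productId))
            .flatMap(detail -> prodRepository.getProductContracts(detail.getProductContractId()))
            // keep the first contract of each result, skipping empty lists
            .flatMap(contracts -> Mono.justOrEmpty(contracts.stream().findFirst()))
            .collectList(); // resolves once all inner calls have completed
}
Nothing runs until the caller subscribes (for a controller return value, WebFlux does that for you); there is no blocking and no manually assembled resultList.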
I am trying to make a local aggregation.
The input topic has records containing multiple elements, and I am using flatMap to split each record into multiple records with another key (here element_id). This triggers a re-partition, as I apply a grouping for aggregation later in the stream process.
Problem: there are way too many records in this repartition topic and the app cannot handle them (lag is increasing).
Here is an example of the incoming data:
key: another ID
value:
{
  "cat_1": {
    "element_1": 0,
    "element_2": 1,
    "element_3": 0
  },
  "cat_2": {
    "element_1": 0,
    "element_2": 1,
    "element_3": 1
  }
}
And an example of the desired aggregation result:
key: element_2
value:
{
  "cat_1": 1,
  "cat_2": 1
}
So I would like to perform a first "local aggregation" and stop splitting incoming records; that is, aggregate all elements locally (no re-partition), for example in a 30-second window, and then produce one result per element to a topic. A stream consuming this topic would later aggregate at a higher level.
I am using the Streams DSL, but I am not sure it is enough. I tried the process() and transform() methods, which give access to the Processor API, but I don't know how to properly produce records in a punctuation, or how to put records back into a stream.
How could I achieve that? Thank you
transform() returns a KStream on which you can call to() to write the results into a topic.
stream.transform(...).to("output_topic");
In a punctuation you can call context.forward() to send a record downstream. You still need to call to() to write the forwarded record into a topic.
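For example, a punctuation scheduled in init() that forwards every aggregate currently in the state store might look like this (a minimal sketch, assuming a KeyValueStore<Integer, Integer> field named state as in the code below):
context.schedule(Duration.ofSeconds(30), PunctuationType.STREAM_TIME, timestamp -> {
    // iterate over the whole store and emit each aggregate downstream
    try (final KeyValueIterator<Integer, Integer> it = state.all()) {
        while (it.hasNext()) {
            final KeyValue<Integer, Integer> entry = it.next();
            context.forward(entry.key, entry.value);
        }
    }
});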
To implement a custom aggregation, consider the following pseudo-ish code:
final StreamsBuilder builder = new StreamsBuilder();
final StoreBuilder<KeyValueStore<Integer, Integer>> keyValueStoreBuilder =
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(stateStoreName),
                                Serdes.Integer(),
                                Serdes.Integer());
builder.addStateStore(keyValueStoreBuilder);
final KStream<Integer, Integer> stream = builder.stream(topic, Consumed.with(Serdes.Integer(), Serdes.Integer()));
stream.transform(() ->
    new Transformer<Integer, Integer, KeyValue<Integer, Integer>>() {
        private KeyValueStore<Integer, Integer> state;

        @Override
        public void init(final ProcessorContext context) {
            state = (KeyValueStore<Integer, Integer>) context.getStateStore(stateStoreName);
            context.schedule(
                Duration.ofMinutes(1),
                PunctuationType.STREAM_TIME,
                timestamp -> {
                    // You can get aggregates from the state store here.
                    // Then you can send the aggregates downstream
                    // with context.forward().
                    // Alternatively, you can output the aggregate in the
                    // transform() method as shown below.
                }
            );
        }

        @Override
        public KeyValue<Integer, Integer> transform(final Integer key, final Integer value) {
            // Get existing aggregates from the state store with state.get().
            // Update aggregates and write them back with state.put().
            // Depending on some condition, e.g., 10 seen records,
            // output an aggregate downstream by returning it here
            // (return null to emit nothing for this record).
            // You can output multiple aggregates by using KStream#flatTransform().
            // Alternatively, you can output the aggregate in a
            // punctuation as shown above.
            return null;
        }

        @Override
        public void close() {
        }
    }, stateStoreName);
With this manual aggregation you could implement the higher-level aggregation in the same Streams app and still leverage re-partitioning, now on far fewer records.
process() is a terminal operation, i.e., it does not return anything.
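For instance (MyTransformer and MyProcessor are hypothetical names standing in for implementations like the one above):
// transform() returns a KStream, so its output can be written to a topic:
stream.transform(() -> new MyTransformer(), stateStoreName).to("output_topic");
// process() is terminal and returns void, so nothing can be chained after it:
stream.process(() -> new MyProcessor(), stateStoreName);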
I'm working on a project which uses Spring WebFlux and MongoDB, and I'm very new to reactive programming and WebFlux.
I have a scenario of saving into 3 collections using one service. For each collection I'm generating an id using a sequence and then saving the document. I have a FieldMaster which has a List<FieldInfo>, and every FieldInfo has a List<FieldOption>. I need to save the FieldMaster, FieldInfo and FieldOption. Below is the code I'm using. The code works only when I'm running in debug mode; otherwise it gets blocked on the line below:
Integer field_seq_id = Integer.parseInt(sequencesCollection.getNextSequence(FIELDINFO).block().getSeqValue());
Here is the full code
public Mono<FieldMaster> createMasterData(Mono<FieldMaster> fieldmaster)
{
    return fieldmaster.flatMap(fm -> {
        return sequencesCollection.getNextSequence(FIELDMASTER).flatMap(seqVal -> {
            LOGGER.info("Generated Sequence value :" + seqVal.getSeqValue());
            fm.setId(Integer.parseInt(seqVal.getSeqValue()));
            List<FieldInfo> fieldInfo = fm.getFieldInfo();
            fieldInfo.forEach(field -> {
                // saving Field goes here
                Integer field_seq_id = Integer.parseInt(sequencesCollection.getNextSequence(FIELDINFO).block().getSeqValue()); // stops execution at this line
                LOGGER.info("Generated Sequence value Field Sequence:" + field_seq_id);
                field.setId(field_seq_id);
                field.setMasterFieldRefId(fm.getId());
                mongoTemplate.save(field).block();
                LOGGER.info("Field Details Saved");
                List<FieldOption> fieldOption = field.getFieldOptions();
                fieldOption.forEach(option -> {
                    // saving Field Option goes here
                    Integer opt_seq_id = Integer.parseInt(sequencesCollection.getNextSequence(FIELDOPTION).block().getSeqValue());
                    LOGGER.info("Generated Sequence value Options Sequence:" + opt_seq_id);
                    option.setId(opt_seq_id);
                    option.setFieldRefId(field_seq_id);
                    mongoTemplate.save(option).log().block();
                    LOGGER.info("Field Option Details Saved");
                });
            });
            return mongoTemplate.save(fm).log();
        });
    });
}
First: in reactive programming it is not good to use .block(), because you turn non-blocking code into blocking code. If you want to take items from one stream and save them into 3 collections, you can do it like this.
There are many different ways to do this for performance purposes, but it depends on the amount of data.
Here is a sample using simple data and the concat operator; there are also zip and merge, depending on your needs.
public void run(String... args) throws Exception {
    Flux<Integer> dbData = Flux.range(0, 10);
    dbData.flatMap(integer -> Flux.concat(
            saveAllInFirstCollection(integer),
            saveAllInSecondCollection(integer),
            saveAllInThirdCollection(integer)))
        .subscribe();
}

Flux<Integer> saveAllInFirstCollection(Integer integer) {
    System.out.println(integer);
    // process and save in the first collection
    return Flux.just(integer);
}

Flux<Integer> saveAllInSecondCollection(Integer integer) {
    System.out.println(integer);
    // process and save in the second collection
    return Flux.just(integer);
}

Flux<Integer> saveAllInThirdCollection(Integer integer) {
    System.out.println(integer);
    // process and save in the third collection
    return Flux.just(integer);
}
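For comparison, here is a minimal sketch of the same fan-out using zip instead of concat: zip subscribes to the three saves together and fails the combination if any of them fails, whereas concat subscribes to them one after another.
dbData.flatMap(integer -> Flux.zip(
        saveAllInFirstCollection(integer),
        saveAllInSecondCollection(integer),
        saveAllInThirdCollection(integer)))
    .subscribe();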
Just a scenario:
I have 4 classes created in Parse cloud database for a particular Application - ClassA, ClassB, ClassC, ClassD.
I can retrieve data related to ClassA using a REST URL like https://api.parse.com/1/classes/ClassA
Is it possible to retrieve the data of all 4 classes using a single REST URL?
No, it's not possible to do this. You can query from a single class at a time, and a maximum of 1,000 objects.
A cloud function can make multiple queries and merge the results, meaning that a single REST call (to call the function) could return results from multiple classes (but a maximum of 1,000 objects per query). Something like this:
Parse.Cloud.define("GetSomeData", function(request, response) {
    var query1 = new Parse.Query("ClassA");
    var query2 = new Parse.Query("ClassB");
    query1.limit(1000);
    query2.limit(1000);
    var output = {};
    query1.find().then(function(results) {
        output['ClassA'] = results;
        return query2.find();
    }).then(function(results) {
        output['ClassB'] = results;
        response.success(output);
    }, function(error) {
        response.error(error);
    });
});
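From the client's point of view this is still a single request: the function would be invoked with one REST call such as POST https://api.parse.com/1/functions/GetSomeData (with the usual X-Parse-Application-Id and X-Parse-REST-API-Key headers).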
I'm still having a hard time with Linq.
I need to write an update function that receives an object containing a list. Concretely, a Region has a list of Cities: I want to pass a "Region" object that has a name field and a list of cities. The problem is that the City objects came from another context, and I am unable to attach them to this context. I have tried several approaches and always get an error like "EntitySet was modified during enumeration" or similar. I am trying to make the code below work, but if anyone has a different approach, please help.
public int Updateregion(region E)
{
    try
    {
        using (var ctx = new AppDataDataContext())
        {
            var R = (from edt in ctx.regiaos
                     where edt.ID == E.ID
                     select edt).SingleOrDefault();
            if (R != null)
            {
                R.name = E.name;
                R.description = E.description;
            }
            R.cities = null;
            R.cities.AddRange(Edited.Cities);
            ctx.SubmitChanges();
            return 0; // OK!
        }
    }
    catch (Exception e)
    {
        ......
    }
}
You can't attach objects retrieved from one DataContext to another; it's not supported by Linq-to-SQL. You need to somehow detach the objects from their original context, but this isn't supported either. One can wonder why a detach method isn't available, but at least you can fake it by mapping the list to new objects:
var cities = Edited.Cities.Select(city => new City {
    ID = city.ID,   // remember to map the primary key
    Name = city.Name,
    /* etc */
});
The key here is to remember to map the primary key and NOT map any of the relation properties. They must be set to null. After this, you should be able to attach the new cities list, and have it work as expected.
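(In Linq-to-SQL the attach itself is typically done with Table<T>.Attach or Table<T>.AttachAll on the target DataContext before calling SubmitChanges.)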