couchbase upsert/insert silently failing with ttl - insert

I am trying to upsert 10 documents using Spring Boot. It fails to upsert a few of the documents when a TTL is set. There is no error or exception. If I do not provide a TTL, it works as expected.
In addition, if I increase the TTL to a different value, all of the documents are created.
On the other hand, if I reduce the TTL, it fails to insert a few more documents.
I tried to insert one of the failed documents (a single document out of the 10) from another POC with the same TTL, and the document was created.
public Flux<JsonDocument> upsertAll(final List<JsonDocument> jsonDocuments) {
    return Flux
            .from(keys())
            .flatMap(key -> Flux
                    .fromIterable(jsonDocuments)
                    .parallel()
                    .runOn(Schedulers.parallel())
                    .flatMap(jsonDocument -> {
                        final String arg = String.format("upsertAll-%s", jsonDocument);
                        return Mono
                                .just(asyncBucket
                                        .upsert(jsonDocument, 1000, TimeUnit.MILLISECONDS)
                                        .doOnError(error -> log.error(jsonDocument.content(), error, "failed to upsert")))
                                .map(obs -> Tuples.of(obs, jsonDocument.content()))
                                .map(tuple2 -> log.observableHandler(tuple2))
                                .map(observable1 -> Tuples.of(observable1, jsonDocument.content()))
                                .flatMap(tuple2 -> log.monoHandler(tuple2));
                    })
                    .sequential());
}

List<JsonDocument> jsonDocuments = new LinkedList<>();
dbService.upsertAll(jsonDocuments)
        .subscribe();
Could someone please suggest how to resolve this issue?

Due to an oddity in the Couchbase server API, TTL values of less than 30 days are treated differently from larger values.
To get consistent behavior with Couchbase Java SDK 2.x, you'll need to adjust the TTL value before passing it to the SDK:
// adjust TTL for Couchbase Java SDK 2.x
public static int adjustTtl(int ttlSeconds) {
    return ttlSeconds < TimeUnit.DAYS.toSeconds(30)
            ? ttlSeconds
            : (int) (ttlSeconds + (System.currentTimeMillis() / 1000));
}
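For illustration, here is a minimal sketch of how the adjusted expiry might be applied with SDK 2.x; the document id, content, and the 45-day TTL are made-up values, and asyncBucket is assumed to be an open AsyncBucket as in the question:

int ttlSeconds = (int) TimeUnit.DAYS.toSeconds(45); // hypothetical TTL above the 30-day threshold
JsonObject content = JsonObject.create().put("field", "value");
// pass the adjusted expiry when the document is built, then upsert as usual
JsonDocument doc = JsonDocument.create("my-doc-id", adjustTtl(ttlSeconds), content);
asyncBucket.upsert(doc, 1000, TimeUnit.MILLISECONDS)
        .toBlocking() // blocking here only for the sake of the example
        .single();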
In Couchbase Java SDK 3.0.6 this is no longer required; just pass a Duration and the SDK will adjust the value behind the scenes if necessary.

Related

KTable & LogAndContinueExceptionHandler

I have a very simple consumer from which I create a materialized view. I have enabled validation on my value object (throwing a ConstraintViolationException for invalid JSON data). When I receive a value for which validation fails, I expect the value to be logged and the consumer to read the next offset, since I have the LogAndContinueExceptionHandler enabled.
However, the LogAndContinueExceptionHandler is never invoked, and consumePojo transitions from the PENDING_ERROR state to ERROR.
Code
@Bean
public Consumer<KTable<String, Pojo>> consume() {
    return values ->
            values
                    .filter((key, value) -> Objects.nonNull(key))
                    .mapValues(value -> value, Materialized.<String, Pojo>as(Stores.inMemoryKeyValueStore("POJO_STORE_NAME"))
                            .withKeySerde(Serdes.String())
                            .withValueSerde(SerdeUtil.pojoSerde())
                            .withLoggingDisabled())
                    .toStream()
                    .peek((key, value) -> log.debug("Receiving Pojo from topic with key: {}, and UUID: {}", key, value == null ? 0 : value.getUuid()));
}
Why is the LogAndContinueExceptionHandler not invoked in the case of a KTable?
Note: if the code is changed to use KStream, I do see the logging and the records being skipped, but not with KTable.
To handle exceptions not handled by Kafka Streams, use the KafkaStreams.setUncaughtExceptionHandler method with a StreamsUncaughtExceptionHandler implementation; the handler needs to return one of three available enumeration values:
REPLACE_THREAD
SHUTDOWN_CLIENT
SHUTDOWN_APPLICATION
and in your case REPLACE_THREAD is the best option, as you can see in KIP-671:
REPLACE_THREAD:
The current thread is shutdown and transits to state DEAD.
A new thread is started if the Kafka Streams client is in state RUNNING or REBALANCING.
For the Global thread this option will log an error and revert to shutting down the client until the option had been added
In Spring Kafka you can replace the default StreamsUncaughtExceptionHandler via the StreamsBuilderFactoryBean:
@Autowired
void setMyStreamsUncaughtExceptionHandler(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
    streamsBuilderFactoryBean.setStreamsUncaughtExceptionHandler(
            exception -> StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD);
}
I was able to solve the problem after looking at the logs carefully: the valueSerde for the Pojo was using useNativeDecoding (the default being JsonSerde), and because of this the DeserializationExceptionHandler wasn't invoked and the thread terminated.
The problem went away when I fixed the valueSerde in application.properties.
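For reference, a sketch of the kind of application.properties change involved, assuming the Spring Cloud Stream Kafka Streams binder, a binding named consume-in-0 (derived from the consume() bean above), and Spring Kafka's JsonSerde; adjust both names to your actual setup:

# assumed binding name; replace with your actual binding id
spring.cloud.stream.kafka.streams.bindings.consume-in-0.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde
# log and skip records that fail deserialization instead of killing the stream thread
spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler=logAndContinue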

Jdbi transaction - multiple methods - Resources should be closed

Suppose I want to run two SQL queries in a transaction; I have code like the below:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
    handle.createUpdate("INSERT INTO SOMETABLE (id) " +
            "VALUES (:id , xxx);")
            .bind("id", id)
            .execute();
}));
Now as the complexity grows I want to extract each update into its own method:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = someQuery1(h);
    someQuery2(id, h);
}));
...with someQuery1 looking like:
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}
Now when I refactor to the latter, I get a SonarQube blocker bug on the someQuery1 handle.createUpdate call, stating:
Resources should be closed
Connections, streams, files, and other classes that implement the Closeable interface or its super-interface, AutoCloseable, need to be closed after use...
I was under the impression that, because I'm using jdbi.useHandle (and passing the same handle to the called methods), a callback would be used and the handle released immediately upon return. As per the Jdbi docs:
Both withHandle and useHandle open a temporary handle, call your
callback, and immediately release the handle when your callback
returns.
Any help / suggestions appreciated.
TIA
SonarQube doesn't know the specifics of the JDBI implementation and simply triggers on an AutoCloseable/Closeable not being closed. Just suppress the Sonar issue and/or file a feature request with the SonarQube team to improve this behavior.
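If suppression is the route you take, a minimal sketch (assuming the standard rule key java:S2095 for "Resources should be closed") is to annotate the extracted method:

@SuppressWarnings("java:S2095") // handle lifecycle is managed by jdbi.useHandle/useTransaction
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}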

Flowfiles stuck in queue (Apache NiFi)

I have following flow:
ListFTP -> RouteOnAttribute -> FetchFTP -> UnpackContent -> ExecuteScript.
Some of the files are stuck in the queue UnpackContent -> ExecuteScript.
ExecuteScript ate some flowfiles and they just disappeared: the failure and success relationships are empty; it only showed some activity in the Tasks/Time field. The rest are stuck in the queue before ExecuteScript. I tried to empty the queue, but not all of the flowfiles were deleted from it; about 1/3 of them are still stuck. I tried to disable all processors and empty the queue again, but it returns: 0 FlowFiles (0 bytes) were removed from the queue.
When I try to change the Connection destination it returns:
Cannot change destination of Connection because FlowFiles from this Connection are currently held by ExecuteScript[id=d33c9b73-0177-1000-5151-83b7b938de39]
The ExecuteScript script is from this answer (it uses Python).
So I can't empty the queue, because it always says there are no flowfiles, and I can't remove the connection. This has been going on for several hours.
Connection configuration:
Scheduling is set to 0 sec, no penalties for flowfiles, etc.
Is it a script problem?
UPDATE
Changed script to:
flowFile = session.get()
if (flowFile != None):
    # All processing code starts at this indent
    if errorOccurred:
        session.transfer(flowFile, REL_FAILURE)
    else:
        session.transfer(flowFile, REL_SUCCESS)
# implicit return at the end
Same result.
UPDATE v2
I set concurrent tasks to 50, ran ExecuteScript again and then terminated it. I got this error:
UPDATE v3
I created an additional ExecuteScript processor with the same script and it worked fine. But after I stopped this new processor and created new flowfiles, the new processor now has the same problem: it just gets stuck.
Hilarious. Is ExecuteScript single-use?
You need to modify the code in NiFi 1.13.2, because NIFI-8080 introduced these bugs, or just use NiFi 1.12.1.
JythonScriptEngineConfigurator:
@Override
public Object init(ScriptEngine engine, String scriptBody, String[] modulePaths) throws ScriptException {
    // Always compile when first run
    if (engine != null) {
        // Add prefix for import sys and all jython modules
        prefix = "import sys\n"
                + Arrays.stream(modulePaths)
                        .map((modulePath) -> "sys.path.append(" + PyString.encode_UnicodeEscape(modulePath, true) + ")")
                        .collect(Collectors.joining("\n"));
    }
    return null;
}

@Override
public Object eval(ScriptEngine engine, String scriptBody, String[] modulePaths) throws ScriptException {
    Object returnValue = null;
    if (engine != null) {
        returnValue = ((Compilable) engine).compile(prefix + scriptBody).eval();
    }
    return returnValue;
}

Ignite 2.4.0 - SqlQuery results do not match with results of query from H2 console

We implemented a caching solution using Ignite version 2.0.0 for a data structure that looks like this:
public class EntityPO {
    @QuerySqlField(index = true)
    private Integer accessZone;
    @QuerySqlField(index = true)
    private Integer appArea;
    @QuerySqlField(index = true)
    private Integer parentNodeId;
    private Integer dbId;
}

List<EntityPO> nodes = new ArrayList<>();
SqlQuery<String, EntityPO> sql =
        new SqlQuery<>(EntityPO.class, "accessZone = ? and appArea = ? and parentNodeId is not null");
sql.setArgs(accessZoneId, appArea);

CacheConfiguration<String, EntityPO> cacheconfig = new CacheConfiguration<>(cacheName);
cacheconfig.setCacheMode(CacheMode.PARTITIONED);
cacheconfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheconfig.setIndexedTypes(String.class, EntityPO.class);
cacheconfig.setOnheapCacheEnabled(true);
cacheconfig.setBackups(numberOfBackUpCopies);
cacheconfig.setName(cacheName);
cacheconfig.setQueryParallelism(1);
cache = ignite.getOrCreateCache(cacheconfig);
We have a method that looks for nodes in a particular accessZone and appArea. This method works fine with 2.0.0. We upgraded to the latest version, 2.4.0, and the method no longer returns anything (zero records). We enabled the H2 debug console, ran the same query, and can see at least 3k matching records. Downgrading the library back to 2.0.0 makes the code work again. Please let me know if you need more information to help with this question.
Results from the H2 console (screenshot omitted).
If you use persistence, please check the baseline topology of your cluster.
Baseline Topology is a major feature introduced in version 2.4.
Briefly, the baseline topology is the set of server nodes that can store data. Most probably the cause of your issue is that one or several server nodes are not in the baseline.
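As a minimal sketch (assuming native persistence is enabled and every currently running server node should hold data), the baseline can be set programmatically after activating the cluster; the configuration file name here is a placeholder:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class BaselineFix {
    public static void main(String[] args) {
        // "ignite-config.xml" is a placeholder for your own configuration file
        try (Ignite ignite = Ignition.start("ignite-config.xml")) {
            ignite.cluster().active(true); // activate the cluster if it is not active yet
            // set the baseline to all server nodes of the current topology version
            ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());
        }
    }
}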

Reactor changes in Spring Boot 2 M4

I have updated from Spring Boot 2.0.0.M3 to 2.0.0.M4, which updates Reactor from 3.1.0.M3 to 3.1.0.RC1. This causes my code to break in a number of places.
Mono.and() now returns Mono<Void>, where previously it returned Mono<Tuple>
This is also the case for Mono.when()
The following code compiles with the older versions, but not with the new version
Mono<String> m1 = Mono.just("A");
Mono<String> m2 = Mono.just("B");
Mono<String> andResult = m1.and(m2).map(t -> t.getT1() + t.getT2());
Mono<String> whenResult = Mono.when(m1, m2).map(t -> t.getT1() + t.getT2());
Have there been any changes to how this should work?
The when and and variants that produced a Tuple have been replaced with zip/zipWith, their exact equivalents in the Flux API, in order to align the APIs. The remaining when and and methods, which are found only in Mono, are now purely about combining completion signals and discard the onNext values (hence they return a Mono<Void>).
I switched to Mono.zip(...):
mono1.and(mono2).map(...)
=>
Mono.zip(mono1, mono2).map(...)
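Applied to the snippet from the question, for example:

Mono<String> m1 = Mono.just("A");
Mono<String> m2 = Mono.just("B");
// Mono.zip still emits a Tuple2 once both sources complete
Mono<String> zipped = Mono.zip(m1, m2).map(t -> t.getT1() + t.getT2()); // emits "AB"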
