Spring Data Redis implementation of the Redis ZREVRANGEBYSCORE operation

I have a question because something is not working as expected with Spring Data Redis.
The query causes no problem when I use redis-cli, but when I use the Spring Data Redis API
reverseRangeByScore
https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/core/ZSetOperations.html#reverseRangeByScore-K-double-double-
it fails to retrieve any results, so I am asking whether there is another way or whether I made a mistake.
With redis-cli:
$ zrevrangebyscore redis_key +inf (1664142666 withscores
1) "189:Z0000539"
2) "1664432446"
3) "192:Z0000288"
4) "1664332797"
5) "178:0000cq4e"
6) "1664256182"
With Spring Data Redis:
private val stringRedisTemplate: StringRedisTemplate

val now = Instant.now().epochSecond - (86400 * 7L)
val res = stringRedisTemplate.opsForZSet().reverseRangeByScore(
    "redis_key",
    0.0,
    now.toDouble()
)
res // <- empty
I'd appreciate it if you could give me your opinion.

You are replacing +inf from redis-cli with 0.0 in spring-data-redis, which is not a logical replacement.
reverseRangeByScore(key, min, max) expects the minimum score first, so pass the seven-day cutoff as min and Double.POSITIVE_INFINITY as max instead of 0.0.
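As a sanity check on the argument order, here is a minimal plain-Java sketch (no Redis connection required) of how the CLI query maps onto reverseRangeByScore's (min, max) pair; the key name and the commented-out template call come from the question, and the helper names are mine:

```java
// Maps the CLI query "zrevrangebyscore redis_key +inf (1664142666" onto the
// (min, max) pair expected by ZSetOperations#reverseRangeByScore(key, min, max).
// Note: the double-based overload cannot express the CLI's exclusive bound "(",
// so the cutoff is treated as inclusive here.
public class ReverseRangeBounds {
    public static double minScore(long cutoffEpochSeconds) {
        return (double) cutoffEpochSeconds;   // lower bound of the score range
    }

    public static double maxScore() {
        return Double.POSITIVE_INFINITY;      // stands in for the CLI's +inf
    }

    public static void main(String[] args) {
        long cutoff = java.time.Instant.now().getEpochSecond() - 86400L * 7;
        // The Spring call would then be:
        // stringRedisTemplate.opsForZSet()
        //     .reverseRangeByScore("redis_key", minScore(cutoff), maxScore());
        System.out.println(minScore(cutoff) + " .. " + maxScore());
    }
}
```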

Related

nodejs + grpc-node server much slower than REST

I have implemented 2 services A and B, where A can talk to B via both gRPC (using grpc-node with Mali) and pure HTTP REST calls.
The request size is negligible.
The response size is 1000 items that look like this:
{
  "productId": "product-0",
  "description": "some-text",
  "price": {
    "currency": "GBP",
    "value": "12.99"
  },
  "createdAt": "2020-07-12T18:03:46.443Z"
}
Both A and B are deployed in GKE as services, and they communicate over the internal network using kube-proxy.
What I discovered is that the REST version is a lot faster than gRPC. The REST call's p99 sits at < 1s, and the gRPC's p99 can go over 30s.
Details
Node version and OS: node:14.7.0-alpine3.12
Dependencies:
"google-protobuf": "^3.12.4",
"grpc": "^1.24.3",
"mali": "^0.21.0",
I have even created client-side-TCP-pooling by setting the gRPC option grpc.use_local_subchannel_pool=1, but this did not seem to help.
The problem seems to be on the server side: the logs show that the grpc lib's call.startBatch call took many seconds to send ~51 KB of data, which is far slower than the REST version.
I also checked that the CPU and network of the services are healthy. The REST version can send > 2 Mbps, whereas the gRPC version only manages ~150 Kbps.
Running netstat on service B (in gRPC) shows a number of ESTABLISHED TCP connections (as expected because of TCP pooling).
My suspicion is that the grpc-core C++ code is somehow less optimal than REST, but I have no proof.
Any ideas where I should look next? Thanks for any help.
Update 1
Here are some benchmarks:
Setup
Blazemeter --REST--> service A --gRPC/REST--> service B
request body (both legs) is negligible
service A is a node service + Koa
service B has 3 options:
grpc-node: node with grpc-node
gRPC + Go: Go implementation of the same gRPC service
REST + Koa: node with Koa
Blazemeter --> service A: response payload is negligible, and the same for all tests
service A --> service B: the gRPC/REST response payload is 1000 instances of ProductPrice:
message ProductPrice {
  string product_id = 1;  // Hard-coded to "product-x", x in [0 ... 999]
  string description = 2; // Hard-coded to a random string, length = 10
  Money price = 3;
  google.protobuf.Timestamp created_at = 4; // Hard-coded
}

message Money {
  Currency currency = 1; // Hard-coded to GBP
  string value = 2;      // Hard-coded to "12.99"
}

enum Currency {
  CURRENCY_UNKNOWN = 0;
  GBP = 1;
}
The services are deployed to Kubernetes in GCP,
instance type: n1-highcpu-4
5 pods each service
2 CPU, 1 GB memory each pod
kube-proxy using cluster IP (not going via the internet) (I've also tested a headless service with clusterIP: None, which gave similar results)
Load
50rps
Results
(benchmark charts not reproduced: service B using grpc-node; service B using Go gRPC; service B using REST with Koa; network IO)
Observations
gRPC + Go is roughly on par with REST (I thought gRPC would be faster)
grpc-node is 4x slower than REST
Network isn't the bottleneck

couchbase upsert/insert silently failing with ttl

I am trying to upsert 10 documents using Spring Boot. It fails to upsert a few of the documents when a TTL is set. There is no error or exception. If I do not provide a TTL, it works as expected.
In addition, if I increase the TTL to a different value, all the documents are created.
On the other hand, if I reduce the TTL, a few more documents fail to insert.
I tried to insert one of the failed documents (a single document out of the 10) from another POC with the same TTL, and the document was created.
public Flux<JsonDocument> upsertAll(final List<JsonDocument> jsonDocuments) {
    return Flux
        .from(keys())
        .flatMap(key -> Flux
            .fromIterable(jsonDocuments)
            .parallel()
            .runOn(Schedulers.parallel())
            .flatMap(jsonDocument -> {
                final String arg = String.format("upsertAll-%s", jsonDocument);
                return Mono
                    .just(asyncBucket
                        .upsert(jsonDocument, 1000, TimeUnit.MILLISECONDS)
                        .doOnError(error -> log.error(jsonDocument.content(), error, "failed to upsert")))
                    .map(obs -> Tuples.of(obs, jsonDocument.content()))
                    .map(tuple2 -> log.observableHandler(tuple2))
                    .map(observable1 -> Tuples.of(observable1, jsonDocument.content()))
                    .flatMap(tuple2 -> log.monoHandler(tuple2));
            })
            .sequential());
}
List<JsonDocument> jsonDocuments = new LinkedList<>();
dbService.upsertAll(jsonDocuments)
.subscribe();
Could someone please suggest how to resolve this issue?
Due to an oddity in the Couchbase server API, TTL values less than 30 days are treated differently than values greater than 30 days.
In order to get consistent behavior with Couchbase Java SDK 2.x, you'll need to adjust the TTL value before passing it to the SDK:
// adjust TTL for Couchbase Java SDK 2.x
public static int adjustTtl(int ttlSeconds) {
    return ttlSeconds < TimeUnit.DAYS.toSeconds(30)
        ? ttlSeconds
        : (int) (ttlSeconds + (System.currentTimeMillis() / 1000));
}
In Couchbase Java SDK 3.0.6 this is no longer required; just pass a Duration and the SDK will adjust the value behind the scenes if necessary.
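As a quick illustration of the rule above, here is the same adjustment in a runnable form (the adjustTtl helper is from the answer; the sample TTL values are mine): TTLs under 30 days pass through as relative seconds, while longer ones become an absolute Unix timestamp.

```java
import java.util.concurrent.TimeUnit;

// TTLs below 30 days are sent to Couchbase SDK 2.x as relative seconds;
// anything at or above 30 days must be converted to an absolute Unix timestamp.
public class TtlAdjuster {
    public static int adjustTtl(int ttlSeconds) {
        return ttlSeconds < TimeUnit.DAYS.toSeconds(30)
                ? ttlSeconds
                : (int) (ttlSeconds + (System.currentTimeMillis() / 1000));
    }

    public static void main(String[] args) {
        int oneHour = 3600;
        int sixtyDays = (int) TimeUnit.DAYS.toSeconds(60);
        System.out.println(adjustTtl(oneHour));   // unchanged: 3600
        System.out.println(adjustTtl(sixtyDays)); // absolute expiry timestamp
    }
}
```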

How to set the starting point when using the Redis scan command in spring boot

I want to migrate 70 million records from Redis (sentinel mode) to Redis (cluster mode).
ScanOptions options = ScanOptions.scanOptions().build();
Cursor<byte[]> c = sentinelTemplate.getConnectionFactory().getConnection().scan(options);
long count = 0;
while (c.hasNext()) {
    count++;
    String key = new String(c.next()).trim();
    String value = (String) sentinelTemplate.opsForHash().get(key, "tc");
    //Thread.sleep(1);
    clusterTemplate.opsForHash().put(key, "tc", value);
}
I want to resume the scan from a certain point, because the Redis connection was dropped partway through.
How do I set the starting point when using the Redis SCAN command in Spring Boot?
Moreover, whenever the program runs with the code above, the connection breaks after almost 20 million records have been moved.
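One resumption pattern, sketched below with a stand-in scan function rather than a real Redis client: persist the cursor that each SCAN batch returns, and on restart pass the saved cursor back instead of 0. Note the caveats: real SCAN cursors are opaque tokens (the list-index cursor here is purely for illustration), and ScanOptions in the Spring API shown above does not accept a starting cursor, so in practice you would likely have to drop down to the underlying client (e.g. Jedis exposes scan(cursor, params)).

```java
import java.util.*;

// Illustrative sketch only (not a real Redis client): SCAN-style iteration
// with a resumable cursor. The idea is to checkpoint `cursor` after every
// batch, so a restart can resume from the saved value instead of 0.
public class ResumableScan {
    // Fake "keyspace"; here the cursor is simply an index into this list.
    public static List<String> keys = new ArrayList<>();

    // Returns (nextCursor, batch); a next cursor of 0 means iteration is done.
    public static Map.Entry<Long, List<String>> scan(long cursor, int count) {
        int from = (int) cursor;
        int to = Math.min(from + count, keys.size());
        long next = (to == keys.size()) ? 0 : to;
        return new AbstractMap.SimpleEntry<>(next, keys.subList(from, to));
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) keys.add("key:" + i);
        long cursor = 0; // on a restart, load the last checkpointed value here
        List<String> seen = new ArrayList<>();
        do {
            Map.Entry<Long, List<String>> page = scan(cursor, 3);
            seen.addAll(page.getValue());
            cursor = page.getKey(); // checkpoint this value somewhere durable
        } while (cursor != 0);
        System.out.println(seen.size()); // prints 10
    }
}
```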

Is there any way to view the physical SQLs executed by Calcite JDBC?

Recently I have been studying Apache Calcite. So far I can use explain plan for via JDBC to view the logical plan, and I am wondering how I can view the physical SQL executed as part of the plan. Since there may be bugs in the physical SQL generation, I need to verify its correctness.
val connection = DriverManager.getConnection("jdbc:calcite:")
val calciteConnection = connection.asInstanceOf[CalciteConnection]
val rootSchema = calciteConnection.getRootSchema()
val dsInsightUser = JdbcSchema.dataSource("jdbc:mysql://localhost:13306/insight?useSSL=false&serverTimezone=UTC", "com.mysql.jdbc.Driver", "insight_admin","xxxxxx")
val dsPerm = JdbcSchema.dataSource("jdbc:mysql://localhost:13307/permission?useSSL=false&serverTimezone=UTC", "com.mysql.jdbc.Driver", "perm_admin", "xxxxxx")
rootSchema.add("insight_user", JdbcSchema.create(rootSchema, "insight_user", dsInsightUser, null, null))
rootSchema.add("perm", JdbcSchema.create(rootSchema, "perm", dsPerm, null, null))
val stmt = connection.createStatement()
val rs = stmt.executeQuery("""explain plan for select "perm"."user_table".* from "perm"."user_table" join "insight_user"."user_tab" on "perm"."user_table"."id"="insight_user"."user_tab"."id" """)
val metaData = rs.getMetaData()
while (rs.next()) {
    for (i <- 1 to metaData.getColumnCount) printf("%s ", rs.getObject(i))
    println()
}
The result is:
EnumerableCalc(expr#0..3=[{inputs}], proj#0..2=[{exprs}])
  EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])
    JdbcToEnumerableConverter
      JdbcTableScan(table=[[perm, user_table]])
    JdbcToEnumerableConverter
      JdbcProject(id=[$0])
      JdbcTableScan(table=[[insight_user, user_tab]])
There is a Calcite Hook, Hook.QUERY_PLAN that is triggered with the JDBC query strings. From the source:
/** Called with a query that has been generated to send to a back-end system.
* The query might be a SQL string (for the JDBC adapter), a list of Mongo
* pipeline expressions (for the MongoDB adapter), et cetera. */
QUERY_PLAN;
You can register a listener to log any query strings, like this in Java:
Hook.QUERY_PLAN.add((Consumer<String>) s -> LOG.info("Query sent over JDBC:\n" + s));
It is also possible to see the generated SQL query by setting the calcite.debug=true system property. The exact place where this happens is JdbcToEnumerableConverter. Since this occurs during query execution, you will have to remove the "explain plan for" prefix from the statement passed to stmt.executeQuery.
Note that with debug mode enabled you will also get a lot of other messages, including the generated code.

Ignite 2.4.0 - SqlQuery results do not match with results of query from H2 console

We implemented a caching solution using Ignite 2.0.0 for a data structure that looks like this:
public class EntityPO {
    @QuerySqlField(index = true)
    private Integer accessZone;
    @QuerySqlField(index = true)
    private Integer appArea;
    @QuerySqlField(index = true)
    private Integer parentNodeId;
    private Integer dbId;
}
List<EntityPO> nodes = new ArrayList<>();
SqlQuery<String, EntityPO> sql =
new SqlQuery<>(EntityPO.class, "accessZone = ? and appArea = ? and parentNodeId is not null");
sql.setArgs(accessZoneId, appArea);
CacheConfiguration<String, EntityPO> cacheconfig = new CacheConfiguration<>(cacheName);
cacheconfig.setCacheMode(CacheMode.PARTITIONED);
cacheconfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheconfig.setIndexedTypes(String.class, EntityPO.class);
cacheconfig.setOnheapCacheEnabled(true);
cacheconfig.setBackups(numberOfBackUpCopies);
cacheconfig.setName(cacheName);
cacheconfig.setQueryParallelism(1);
cache = ignite.getOrCreateCache(cacheconfig);
We have a method that looks for nodes in a particular accessZone and appArea. This method works fine in 2.0.0. We upgraded to the latest version, 2.4.0, and the method no longer returns anything (zero records). We enabled the H2 debug console, ran the same query, and saw the expected records (at least 3k of them). Downgrading the library back to 2.0.0 makes the code work again. Please let me know if you need more information to help with this question.
Results from the H2 console (screenshot not reproduced).
If you use persistence, please check the baseline topology for your cluster.
Baseline Topology is a major feature introduced in version 2.4.
Briefly, the baseline topology is the set of server nodes that can store data. Most probably the cause of your issue is that one or several server nodes are not in the baseline.
