Insertion timeout error in Couchbase from Spring Boot - spring-boot

Unable to save identifier in Couchbase, identifier: identifiertail::123::459, Details: com.couchbase.client.core.error.AmbiguousTimeoutException:
UpsertRequest, Reason: TIMEOUT
"scope":"_default","type":"kv"},"timeoutMs":30000,"timings":{"encodingMicros":102,"totalMicros":36987410}}
at com.couchbase.client.core.msg.BaseRequest.cancel(BaseRequest.java:163)
at com.couchbase.client.core.Timer.lambda$register$2(Timer.java:157)
at com.couchbase.client.core.deps.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:672)
at com.couchbase.client.core.deps.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:747)
at com.couchbase.client.core.deps.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:472)
at com.couchbase.client.core.deps.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
I got the above error in the application logs while inserting records. But when I checked, the document had actually been inserted, and the errors stopped coming after a restart of the pods.
What could be the reason that this got fixed by the pod restart?
A few things to add to this:
The Couchbase cluster was healthy.
Network connectivity was good.
Spring Boot version: 4.1
Couchbase: 6.0.3
Connect timeout: 60s
Thanks
Ritz
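For reference, the timeout that actually fired here is the key-value timeout visible in the log ("timeoutMs":30000), not the 60s connect timeout. A minimal sketch of how both can be tuned through Spring Boot's Couchbase properties (assuming Spring Boot 2.x property names; the values are illustrative, not a recommendation):

```properties
# application.properties -- illustrative values
spring.couchbase.env.timeouts.connect=60s
# Per-operation KV timeout that governs upserts; the log above shows
# it was already raised to 30s in this setup (the SDK default is far lower)
spring.couchbase.env.timeouts.key-value=30s
```

Raising the timeout only masks the symptom, though; a timeout that clears on pod restart usually points at stale connections or resource exhaustion inside the pod rather than at the cluster.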

Related

Elasticsearch doesn't run because "No network interfaces configured"

I am getting the following error message in my WSL terminal when attempting to run Elasticsearch (./elasticsearch):
Exception in thread "main" java.net.SocketException: No network interfaces configured
I've just installed Elasticsearch and haven't used it before, so I don't know whether the issue is with the 'network' as Elasticsearch uses it.
Unfortunately I have no idea how to fix it at this point (I've tried a few things from googling).
I've downloaded the Java JDK and would like to get Elasticsearch running for my project. Any help would be appreciated!

Spring Boot service unable to recover after database failover

We have a Spring Boot service which has the capability to recover itself after a database restart. But all of a sudden we noticed "recoverer is already running, abandoning this recovery request" in the logs, and the healthcheck of the service failed. We had to restart the service in both of our datacenters.
Has anybody faced a similar issue?
Edit:
Below are the configurations:
spring.jta.log-dir=target/transaction-logs
spring.jta.bitronix.datasource.className=bitronix.tm.resource.jdbc.lrc.LrcXADataSource
spring.jta.bitronix.datasource.driverProperties.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.jta.bitronix.datasource.driverProperties.url=
spring.jta.bitronix.datasource.driverProperties.user=
spring.jta.bitronix.datasource.driverProperties.password=
spring.jta.bitronix.datasource.test-query=select 1
spring.jta.bitronix.datasource.max-pool-size=100
spring.jta.bitronix.datasource.prepared-statement-cache-size=100
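The "recoverer is already running, abandoning this recovery request" message comes from Bitronix's background recoverer, which runs on a fixed interval and abandons a run if the previous one is still in flight (for example, hung on a dead connection after the failover). If tuning is needed, the relevant knobs live in bitronix-default-config.properties; a sketch, assuming the standard Bitronix property names, with illustrative values:

```properties
# bitronix-default-config.properties -- values are illustrative
# How often the background recoverer runs (Bitronix default: 60 seconds)
bitronix.tm.timer.backgroundRecoveryIntervalSeconds=60
# Default transaction timeout, so in-flight work cannot block recovery forever
bitronix.tm.timer.defaultTransactionTimeout=60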

Deleted Horizon still get errors of redis

I've deleted the Laravel Horizon package completely from my project and removed Redis entirely, using Memcached instead.
But at 00:00 every day, some logs appear showing Horizon trying to connect to Redis; since Redis is not available on my server, it can't connect and produces errors.
I've tried everything I could think of, but the issue is still there.
I deleted the Predis package too, but still get errors:
Connection refused [unix:/var/run/redis/redis.sock] {"exception":"[object] (Predis\\Connection\\ConnectionException(code: 111): Connection refused

Error getting a JDBC connection to Hive via Knox

I have a Hadoop cluster running Hortonworks Data Platform 2.4.2 which has been running well for more than a year. The cluster is Kerberised and external applications connect via Knox. Earlier today, the cluster stopped accepting JDBC connections via Knox to Hive.
The Knox logs show no errors, but the Hive Server2 log shows the following error:
"Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User: knox is not allowed to impersonate org.apache.hive.service.cli.HiveSQLException: Failed to validate proxy privilege of knox for "
Having looked at other users' questions, the suggestions mostly seem to centre on correctly setting the configuration options hadoop.proxyusers.users and hadoop.proxyusers.groups.
However, in my case I don't see how these settings could be the problem. The cluster has been running for over a year and we have a number of applications connecting to Hive via JDBC on a daily basis. The configuration of the server has not been changed and connections were previously succeeding on the current configuration. No changes had been made to the platform or environment and the cluster was not restarted or taken down for maintenance between the last successful JDBC connection and JDBC connections being declined.
I have now stopped and started the cluster, but after restart the cluster still does not accept JDBC connections.
Does anyone have any suggestions on how I should proceed?
Do you have Hive Impersonation turned on?
hive.server2.enable.doAs=true
This could be the issue assuming hadoop.proxyusers.users and hadoop.proxyusers.groups are set properly.
Also, check whether the user 'knox' exists on the Hive Server2 node (and the others used for impersonation).
The known workaround seems to be to set:
hadoop.proxyuser.knox.groups = *
hadoop.proxyuser.knox.hosts = *
I have yet to find a real fix that lets you keep this layer of added security.
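In core-site.xml form, that workaround looks like the following (a sketch; the wildcard values are the workaround from above, and should be tightened back to concrete groups and hosts once connections succeed again, to keep the impersonation restriction):

```xml
<!-- core-site.xml: allow the knox service user to impersonate end users -->
<property>
  <name>hadoop.proxyuser.knox.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.knox.hosts</name>
  <value>*</value>
</property>
```

After editing, the affected services (HDFS, Hive) need their configuration refreshed or a restart for the proxyuser change to take effect.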

Infinispan cartridge with HotRod in OpenShift

I have created a scaled application and added the Infinispan cartridge from the URL below:
raw.github.com/bdecoste/openshift-origin-cartridge-infinispan/master/metadata/manifest.yml
Now I want to connect to the Infinispan server from an application running in a separate gear. I am using a hotrod-client.properties file with the following content:
infinispan.client.hotrod.server_list = $OPENSHIFT_INIFINISPAN_HOST:$OPENSHIFT_INFINISPAN_TCP_PROXY_PORT
infinispan.client.hotrod.socket_timeout = 500
infinispan.client.hotrod.connect_timeout = 10
When I run the application I get the following in the error logs:
ISPN004007: Exception encountered. Retry 9 out of 10: org.infinispan.client.hotrod.exceptions.TransportException:: java.net.SocketTimeoutException at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.readByte(TcpTransport.java:184) [infinispan-client-hotrod-5.2.1.Final.jar:5.2.1.Final]
What causes this, and how can we resolve it?
Thanks a lot in advance.
You cannot communicate between gears that are not part of the same scaled application unless it is over ports 80, 443, 8000, or 8443 using your public URL.
