RedisCommandTimeoutException while connecting a Micronaut Lambda to ElastiCache - aws-lambda

I am trying to create a Lambda using Micronaut 2 that connects to ElastiCache.
I have used the redis-lettuce dependency in the project with the following configuration, and in-transit encryption is enabled in the ElastiCache config.
redis:
  uri: redis://{aws master node endpoint}
  password: {password}
  tls: true
  ssl: true
  io-thread-pool-size: 5
  computation-thread-pool-size: 4
I am getting the exception below:
Command timed out after 1 minute(s): io.lettuce.core.RedisCommandTimeoutException
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
    at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
    at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:119)
    at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:75)
    at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:79)
    at com.sun.proxy.$Proxy22.set(Unknown Source)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:29)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:16)
    at io.micronaut.function.aws.MicronautRequestHandler.handleRequest(MicronautRequestHandler.java:73)
I have tried a Spring Cloud Function on the same network (literally on the same Lambda) with the same ElastiCache setup, and it works fine.
Any direction that can help me debug this issue would be appreciated.

This might be late.
The first thing to mention here is that ElastiCache can only be accessed from within a VPC. If you want to access it from the internet, it needs a NAT gateway enabled.
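One hedged thing worth checking once the Lambda is inside the cache's VPC: with in-transit encryption enabled, Lettuce expects the TLS scheme rediss:// in the URI rather than redis://. A minimal sketch of the question's configuration with only the scheme changed (everything else kept exactly as posted; I have not verified this against an actual cluster):

redis:
  uri: rediss://{aws master node endpoint}
  password: {password}
  tls: true
  ssl: true
  io-thread-pool-size: 5
  computation-thread-pool-size: 4

If the command still times out after that change, it usually points back at networking (security groups, subnets, the Lambda's VPC attachment) rather than at the client configuration.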

Kibana says "Kibana server is not ready yet." and "elasticsearch-reset-password" returns an error

I am new to Elasticsearch/Kibana and am trying to set up a basic installation via Docker. I've backed myself into a corner, and I need help finding my way out.
I have the following docker-compose.yml.
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:8.4.0
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - IPC_LOCK
    ports:
      - "9200:9200"
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:8.4.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticssearch:9200
    ports:
      - "5601:5601"
I run docker compose up and the logs look mostly good. However, when I try to connect to http://localhost:5601/, I see a message "Kibana server is not ready yet." that never goes away.
The end of the Elasticsearch log looks like this.
{"#timestamp":"2022-08-26T15:26:25.616Z", "log.level":"ERROR", "message":"exception during geoip databases update", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#4]","log.logger":"org.elasticsearch.ingest.geoip.GeoIpDownloader","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.ElasticsearchException","error.message":"not all primary shards of [.geoip_databases] index are active","error.stack_trace":"org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active\n\tat org.elasticsearch.ingest.geoip#8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:134)\n\tat org.elasticsearch.ingest.geoip#8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:274)\n\tat org.elasticsearch.ingest.geoip#8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:102)\n\tat org.elasticsearch.ingest.geoip#8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:48)\n\tat org.elasticsearch.server#8.4.0/org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:42)\n\tat org.elasticsearch.server#8.4.0/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769)\n\tat org.elasticsearch.server#8.4.0/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
2022-08-26T15:26:26.005783998Z {"#timestamp":"2022-08-26T15:26:26.002Z", "log.level": "INFO", "current.health":"GREEN","message":"Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.geoip_databases][0]]]).","previous.health":"RED","reason":"shards started [[.geoip_databases][0]]" , "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.routing.allocation.AllocationService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
2022-08-26T15:26:26.264786433Z {"#timestamp":"2022-08-26T15:26:26.264Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-Country.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#2]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
2022-08-26T15:26:26.304814423Z {"#timestamp":"2022-08-26T15:26:26.304Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-ASN.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#3]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
2022-08-26T15:26:27.017126446Z {"#timestamp":"2022-08-26T15:26:27.016Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-City.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
I'm not sure if that ERROR about "geoip databases" is a problem. It does look like cluster health is "GREEN".
The end of the Kibana logs looks like this.
[2022-08-26T15:26:25.032+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
2022-08-26T15:26:25.091816903Z [2022-08-26T15:26:25.091+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
2022-08-26T15:26:26.081102019Z [2022-08-26T15:26:26.080+00:00][INFO ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
2022-08-26T15:26:26.155818080Z [2022-08-26T15:26:26.155+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND elasticssearch
2022-08-26T15:26:26.982333104Z [2022-08-26T15:26:26.981+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell
That "Unable to retrieve version information from Elasticsearch nodes." ERROR looks more like it could be a problem, but I'm not sure what to do about it. One online question that sounds similar comes down to the difference between ELASTICSEARCH_HOSTS and ELASTICSEARCH_URL for an earlier version of Elastic that doesn't seem relevant here.
Poking around online also turns up situations in which the "Kibana server is not ready yet." error is a problem with the security setup. The whole security setup is a bit confusing to me, but it seems like one thing that might have happened is that I failed to set up passwords correctly. I'm trying to start over, so I shelled into the Elasticsearch instance and ran elasticsearch-reset-password --username elastic. I saw the following error.
elasticsearch#1de6b5b3d4cb:~$ elasticsearch-reset-password --username elastic
15:24:34.593 [main] WARN org.elasticsearch.common.ssl.DiagnosticTrustManager - failed to establish trust with server at [172.18.0.2]; the server provided a certificate with subject name [CN=1de6b5b3d4cb], fingerprint [cc4a98abd8b44925c631d7e4b05f048317c8e02b], no keyUsage and extendedKeyUsage [serverAuth]; the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [IP:172.18.0.3,DNS:localhost,IP:127.0.0.1,DNS:1de6b5b3d4cb]; the certificate is issued by [CN=Elasticsearch security auto-configuration HTTP CA]; the certificate is signed by (subject [CN=Elasticsearch security auto-configuration HTTP CA] fingerprint [ba8730cc6481e4847e4a14eff4f774ca1c96ad0b] {trusted issuer}) which is self-issued; the [CN=Elasticsearch security auto-configuration HTTP CA] certificate is trusted in this ssl context ([xpack.security.http.ssl (with trust configuration: Composite-Trust{JDK-trusted-certs,StoreTrustConfig{path=certs/http.p12, password=<non-empty>, type=PKCS12, algorithm=PKIX}})])
java.security.cert.CertificateException: No subject alternative names matching IP address 172.18.0.2 found
Those are all the problems I have encountered. I don't know what they mean or which are significant, and Googling doesn't turn up any clear next steps. Any suggestions as to what is going on here?
Never mind. Stupid mistake. I misspelled elasticsearch in this line:
ELASTICSEARCH_HOSTS=http://elasticssearch:9200
"ss" instead of "s". Easy to overlook. The error message in the Kibana logs was telling me what the problem was. I just didn't know how to interpret it.
Even though this was just a typo I'm going to leave this question up in case someone makes the same mistake and gets confused in the same way.
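For anyone who lands here, the corrected kibana service block is just the one from the question with the hostname fixed (nothing else needs to change):

kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:8.4.0
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  ports:
    - "5601:5601"

The "getaddrinfo ENOTFOUND elasticssearch" error in the Kibana log was Kibana failing to resolve the misspelled hostname on the compose network.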

Some problems with the QUIC-GO example server

The situation is that I want to establish a QUIC connection based on quic-go from my local machine to an ECS server. The related tests using localhost pass on both the local and the remote device. That is:
#local: .$QUIC-GO-PATH/example/client/main -insecure -keylog ssl.log -qlog trial.log -v https://127.0.0.1:6121/demo/tile
#local: .$QUIC-GO-PATH/example/main -qlog -tcp -v
These tests complete successfully.
Now the problem: when I start a local-to-remote connection, an error occurs:
#remote: .$QUIC-GO-PATH/example/main -qlog -tcp -v
#local: .$QUIC-GO-PATH/example/client/main -insecure -keylog ssl.log -qlog trial.log -v https://$REMOTE_IPADDR:6121/demo/tile
timeout: no recent network activity
When I examine the traffic in Wireshark, it seems like the CRYPTO handshake never finishes:
Wireshark
The client qlog file is also attached here:
Qlog file
The code is all the same as in https://github.com/lucas-clemente/quic-go
Help!
This problem has been solved.
The code in $QUIC-GO-PATH/example/main.go binds to 127.0.0.1:6121 by default, which meant the server could not be reached by a client from outside. Just run the server with:
-bind 0.0.0.0:6121
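Putting it together, the full server invocation from the question then becomes (assuming the example binary accepts the -bind flag exactly as described above):

#remote: .$QUIC-GO-PATH/example/main -qlog -tcp -v -bind 0.0.0.0:6121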

Can't hit spring-cloud-dataflow HTTP(source) application

I have been following a tutorial to create a stream with spring-cloud-dataflow. It creates the following stream:
http --port=7171 | transform --expression=payload.toUpperCase() | file --directory=c:/dataflow-output
All three applications start up fine. I am using RabbitMQ, and if I log in to the RabbitMQ UI I can see that two queues get created for the stream. The tutorial said that I should be able to POST a message to http://localhost:7171 using Postman. When I do this, nothing happens. I do not get a response, I do not see anything in the queues, and no file is created. In my Data Flow logs I can see this being listed:
local: [{"targets":["skipper-server:20060","skipper-server:20052","skipper-server:7171"],"labels":{"job":"scdf"}}]
The tutorial was using an older version of Data Flow that I do not believe made use of Skipper. Since I am using Skipper, does that change the URL? I tried http://skipper-server:7171 and http://localhost:7171, but neither of these seems to reach the endpoint. I did turn off SSL cert verification in the Postman settings.
Sorry for asking so many Data Flow questions this week. Thanks in advance.
I found that the port I was trying to hit (7171), which was on my Skipper server, was not exposed. I had to add and expose the port in the skipper-server configuration in my .yml file (a quick way to verify is shown after the compose snippet below). I found this post, which clued me in.
How to send HTTP requests to my server running in a docker container?
skipper-server:
  image: springcloud/spring-cloud-skipper-server:2.1.2.RELEASE
  container_name: skipper
  expose:
    - "7171"
  ports:
    - "7577:7577"
    - "9000-9010:9000-9010"
    - "20000-20105:20000-20105"
    - "7171:7171"
  environment:
    - SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_PORTRANGE_LOW=20000
    - SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_PORTRANGE_HIGH=20100
    - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:1111/dataflow
    - SPRING_DATASOURCE_USERNAME=xxxxx
    - SPRING_DATASOURCE_PASSWORD=xxxxx
    - SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.mariadb.jdbc.Driver
    - SPRING_RABBITMQ_HOST=127.0.0.1
    - SPRING_RABBITMQ_PORT=xxxx
    - SPRING_RABBITMQ_USERNAME=xxxxx
    - SPRING_RABBITMQ_PASSWORD=xxxxx
  entrypoint: "./wait-for-it.sh mysql:1111-- java -Djava.security.egd=file:/dev/./urandom -jar /spring-cloud-skipper-server.jar"

How to set the "cluster" property in prisma.yml

Thanks for reading my question. I am just starting to use GraphQL and Prisma, following this tutorial.
I get the following error when deploying the Prisma database service:
Error: No cluster set. Please set the "cluster" property in your prisma.yml
at /Users/judy/howtographql/server/node_modules/graphql-config-extension-prisma/src/index.ts:89:11
at step (/Users/judy/howtographql/server/node_modules/graphql-config-extension-prisma/dist/index.js:40:23)
at Object.next (/Users/judy/howtographql/server/node_modules/graphql-config-extension-prisma/dist/index.js:21:53)
at fulfilled (/Users/judy/howtographql/server/node_modules/graphql-config-extension-prisma/dist/index.js:12:58)
at <anonymous>
error Command failed with exit code 1.
ERROR: "playground" exited with 1.
error Command failed with exit code 1.
I looked over the tutorial and found nothing about how to set the cluster. I wonder how to fix this problem.
The default prisma.yml is:
# the name for the service (will be part of the service's HTTP endpoint)
service: hackernews-graphql-js
# the cluster and stage the service is deployed to
stage: dev
# to disable authentication:
# disableAuth: true
secret: mysecret123
# the file path pointing to your data model
datamodel: datamodel.graphql
It could just be that you entered an incorrect endpoint address. Please refer to https://github.com/prisma/graphql-config-extension-graphcool/issues/8
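For comparison, a hedged sketch of a prisma.yml that identifies the target by an explicit endpoint instead of a cluster (the URL below is only a placeholder; use whatever endpoint prisma deploy prints for your service, and note that the exact set of supported properties depends on your Prisma version):

# placeholder endpoint; replace with the endpoint printed by prisma deploy
endpoint: http://localhost:4466/hackernews-graphql-js/dev
datamodel: datamodel.graphql
secret: mysecret123

Depending on the Prisma version, running prisma deploy interactively and choosing a target may also write the missing property for you.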

Running two SonarQube instances in Windows Server 2008 R2

I already have one SonarQube instance running at port 9000 and am able to access it at localhost:9000.
Now I would like to run another SonarQube instance for my new project at port 10000. I've changed the following in the sonar.properties file:
sonar.web.port: 10000
sonar.web.context: /
However, when I run C:\SonarMAP\bin\windows-x86-64\StartSonar.bat, I get the ERROR message:
wrapper | ERROR: Another instance of the SonarQube application is already running.
Press any key to continue . . .
I did some research on this problem but can't find any helpful information.
Any suggestions? Thanks!
UPDATE
The instance 1 configuration:
sonar.jdbc.username=username
sonar.jdbc.password=password
sonar.jdbc.url=jdbc:postgresql://server15/sonarQube
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.validationQuery: select 1
sonar.jdbc.maxActive=20
sonar.jdbc.maxIdle=5
sonar.jdbc.minIdle=2
sonar.jdbc.maxWait=5000
sonar.jdbc.minEvictableIdleTimeMillis=600000
sonar.jdbc.timeBetweenEvictionRunsMillis=30000
The instance 2 configuration:
sonar.jdbc.username=username
sonar.jdbc.password=password
sonar.jdbc.url: jdbc:postgresql://localhost/sonarMAP
sonar.jdbc.driverClassName: org.postgresql.Driver
sonar.jdbc.validationQuery: select 1
sonar.jdbc.maxActive: 20
sonar.jdbc.maxIdle: 5
sonar.jdbc.minIdle: 2
sonar.jdbc.maxWait: 5000
sonar.jdbc.minEvictableIdleTimeMillis: 600000
sonar.jdbc.timeBetweenEvictionRunsMillis: 30000
sonar.web.port: 9100
sonar.web.context: /
sonar.search.port=9101
sonar.notifications.delay: 60
Apparently you can't run multiple instances on Windows because of wrapper.single_invocation=true in conf/wrapper.conf.
Setting it to false seems to allow this (you'll still have to use different ports, as Fabrice explained in his answer), but this is getting into a grey zone: a non-recommended and untested setup.
You also need to change other settings inside the conf/sonar.properties file, namely:
sonar.search.port: the port of Elasticsearch
sonar.search.httpPort: if you enabled it in the first instance, you've got to change it as well
and obviously you can't connect to the same schema of the same DB. A combined sketch follows below.
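To make that concrete, a minimal sketch of the second instance's conf/sonar.properties; the port numbers are arbitrary free ports, not required values, and the JDBC URL is the second database from the question:

sonar.web.port=10000
sonar.web.context=/
sonar.search.port=10001
# only needed if sonar.search.httpPort is enabled on the first instance
sonar.search.httpPort=10002
sonar.jdbc.url=jdbc:postgresql://localhost/sonarMAP

Combined with wrapper.single_invocation=false in conf/wrapper.conf, this has a chance of working, with the caveat above that it is an unsupported setup.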
