Deepset Haystack Secure Connection to Elasticsearch

I am trying to create a Haystack document store with Elasticsearch (running on Docker with security enabled), but I am getting the following errors. I know I need to tell Haystack to use https, but I couldn't find how in the docs. I tried specifying both port 9300 and port 9200.
document_store = ElasticsearchDocumentStore(
    host='https://localhost',
    username='elastic',
    password='{PASSWORD}',
    port=9200,
    index='squad_docs'
)
Haystack errors in the Python console
ConnectionError: ConnectionError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))) caused by: ProtocolError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))
Elasticsearch WARN in the logs
{"@timestamp":"2022-11-08T14:01:54.875Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.18.0.2:9200, remoteAddress=/172.18.0.1:56568}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[c759978b5b07][transport_worker][T#1]","log.logger":"org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4HttpServerTransport","elasticsearch.cluster.uuid":"vDXCcs4uQM-gcuf5vciZkA","elasticsearch.node.id":"VunPyrAPRbG9PrLFyZ1E8w","elasticsearch.node.name":"c759978b5b07","elasticsearch.cluster.name":"docker-cluster"}

The ElasticsearchDocumentStore has a scheme parameter which you need to set to "https". You might want to have a look at the documentation to see all available parameters.
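A minimal sketch of the corrected call, assuming the Haystack 1.x ElasticsearchDocumentStore keyword names; it is built as a kwargs dict here because the real constructor needs a reachable cluster:

```python
# Sketch of the fix: pass the bare hostname and set scheme="https"
# instead of embedding "https://" in the host. Keyword names assume
# the Haystack 1.x ElasticsearchDocumentStore signature.
store_kwargs = {
    "host": "localhost",       # bare hostname -- no "https://" prefix
    "scheme": "https",         # the parameter that enables TLS
    "username": "elastic",
    "password": "{PASSWORD}",
    "port": 9200,
    "index": "squad_docs",
}

# from haystack.document_stores import ElasticsearchDocumentStore
# document_store = ElasticsearchDocumentStore(**store_kwargs)
```

With the self-signed certificates that the Docker security auto-configuration generates, you may also need to point the store at the generated CA (for example via a `ca_certs`-style argument, if your Haystack version supports one).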

Related

Socket closed abruptly during opening handshake: rabbitmq using adonisjs 5 connection fail

I'm using node 14.17.0 and adonisjs 5.8.5.
This is my RabbitMQ .env:
RABBITMQ_HOSTNAME=localhost
RABBITMQ_USER=
RABBITMQ_PASSWORD=
RABBITMQ_PORT=15672
RABBITMQ_PROTOCOL= 'amqp://'
I try sendToQueue, but I get that error. Can anyone help?
When you get this problem, just change RABBITMQ_PORT=15672 to RABBITMQ_PORT=5672 and it will pass. Port 15672 is RabbitMQ's management UI; AMQP connections use 5672.
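For reference, a corrected .env sketch (the credentials here are placeholders; guest/guest is RabbitMQ's default for local connections):

```
RABBITMQ_HOSTNAME=localhost
RABBITMQ_USER=guest
RABBITMQ_PASSWORD=guest
RABBITMQ_PORT=5672
RABBITMQ_PROTOCOL=amqp://
```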

Kibana says "Kibana server is not ready yet." and "elasticsearch-reset-password" returns an error

I am new to Elasticsearch/Kibana and am trying to set up a basic installation via Docker. I've backed myself into a corner, and I need help finding my way out.
I have the following docker-compose.yml.
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:8.4.0
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - IPC_LOCK
    ports:
      - "9200:9200"
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:8.4.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticssearch:9200
    ports:
      - "5601:5601"
I run docker compose up and the logs look mostly good. However, when I try to connect to http://localhost:5601/, I see a message "Kibana server is not ready yet." that never goes away.
The end of the Elasticsearch log looks like this.
{"@timestamp":"2022-08-26T15:26:25.616Z", "log.level":"ERROR", "message":"exception during geoip databases update", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#4]","log.logger":"org.elasticsearch.ingest.geoip.GeoIpDownloader","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.ElasticsearchException","error.message":"not all primary shards of [.geoip_databases] index are active","error.stack_trace":"org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active\n\tat org.elasticsearch.ingest.geoip@8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:134)\n\tat org.elasticsearch.ingest.geoip@8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:274)\n\tat org.elasticsearch.ingest.geoip@8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:102)\n\tat org.elasticsearch.ingest.geoip@8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:48)\n\tat org.elasticsearch.server@8.4.0/org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:42)\n\tat org.elasticsearch.server@8.4.0/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769)\n\tat org.elasticsearch.server@8.4.0/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
2022-08-26T15:26:26.005783998Z {"@timestamp":"2022-08-26T15:26:26.002Z", "log.level": "INFO", "current.health":"GREEN","message":"Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.geoip_databases][0]]]).","previous.health":"RED","reason":"shards started [[.geoip_databases][0]]" , "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.routing.allocation.AllocationService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
2022-08-26T15:26:26.264786433Z {"@timestamp":"2022-08-26T15:26:26.264Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-Country.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#2]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
2022-08-26T15:26:26.304814423Z {"@timestamp":"2022-08-26T15:26:26.304Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-ASN.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#3]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
2022-08-26T15:26:27.017126446Z {"@timestamp":"2022-08-26T15:26:27.016Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-City.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
I'm not sure if that ERROR about "geoip databases" is a problem. It does look like cluster health is "GREEN".
The end of the Kibana logs looks like this.
[2022-08-26T15:26:25.032+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
2022-08-26T15:26:25.091816903Z [2022-08-26T15:26:25.091+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
2022-08-26T15:26:26.081102019Z [2022-08-26T15:26:26.080+00:00][INFO ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
2022-08-26T15:26:26.155818080Z [2022-08-26T15:26:26.155+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND elasticssearch
2022-08-26T15:26:26.982333104Z [2022-08-26T15:26:26.981+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell
That "Unable to retrieve version information from Elasticsearch nodes." ERROR looks more like it could be a problem, but I'm not sure what to do about it. One online question that sounds similar comes down to the difference between ELASTICSEARCH_HOSTS and ELASTICSEARCH_URL for an earlier version of Elastic that doesn't seem relevant here.
Poking around online also turns up situations in which the "Kibana server is not ready yet." error is a problem with the security setup. The whole security setup part is a bit confusing to me, but it seems like one thing that might have happened is that I failed to set up passwords correctly. To start over, I shelled into the Elasticsearch instance and ran elasticsearch-reset-password --username elastic. I saw the following error.
elasticsearch@1de6b5b3d4cb:~$ elasticsearch-reset-password --username elastic
15:24:34.593 [main] WARN org.elasticsearch.common.ssl.DiagnosticTrustManager - failed to establish trust with server at [172.18.0.2]; the server provided a certificate with subject name [CN=1de6b5b3d4cb], fingerprint [cc4a98abd8b44925c631d7e4b05f048317c8e02b], no keyUsage and extendedKeyUsage [serverAuth]; the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [IP:172.18.0.3,DNS:localhost,IP:127.0.0.1,DNS:1de6b5b3d4cb]; the certificate is issued by [CN=Elasticsearch security auto-configuration HTTP CA]; the certificate is signed by (subject [CN=Elasticsearch security auto-configuration HTTP CA] fingerprint [ba8730cc6481e4847e4a14eff4f774ca1c96ad0b] {trusted issuer}) which is self-issued; the [CN=Elasticsearch security auto-configuration HTTP CA] certificate is trusted in this ssl context ([xpack.security.http.ssl (with trust configuration: Composite-Trust{JDK-trusted-certs,StoreTrustConfig{path=certs/http.p12, password=<non-empty>, type=PKCS12, algorithm=PKIX}})])
java.security.cert.CertificateException: No subject alternative names matching IP address 172.18.0.2 found
Those are all the problems I have encountered. I don't know what they mean or which are significant, and Googling doesn't turn up any clear next steps. Any suggestions as to what is going on here?
Never mind. Stupid mistake. I misspelled elasticsearch in the line.
ELASTICSEARCH_HOSTS=http://elasticssearch:9200
"ss" instead of "s". Easy to overlook. The error message in the Kibana logs was telling me what the problem was. I just didn't know how to interpret it.
Even though this was just a typo I'm going to leave this question up in case someone makes the same mistake and gets confused in the same way.
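For anyone scanning quickly, this is the corrected kibana service block from the compose file above; the only change is the hostname spelling:

```yaml
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:8.4.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200   # "elasticsearch", not "elasticssearch"
    ports:
      - "5601:5601"
```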

GeoNode - Add new Haystack search index (Connection Error when changing HAYSTACK_SEARCH to True)

I am using a GeoNode project (version 3.3.2, installation without Docker) and I have created some models that I would like to add to the Haystack indexes.
I have already created the indexes for my models, and according to the Haystack documentation I need to run the update_index or rebuild_index commands, which are not available in GeoNode.
According to the GeoNode documentation, I have to set the HAYSTACK_SEARCH property to True to enable the Haystack search backend configuration. I tried to set this property in my geonode-project settings.py, but I still couldn't execute the commands. I tried changing it directly in the GeoNode settings.py, and the commands appeared, but I get this error:
elasticsearch.exceptions.ConnectionError:
ConnectionError(<urllib3.connection.HTTPConnection object at 0x7f37210ff8e0>: Failed to establish a new connection: [Errno 111] Connection refused) caused by:
NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7f37210ff8e0>: Failed to establish a new connection: [Errno 111] Connection refused)
Do you have any idea how to solve this error, or a workaround to index my models in GeoNode's Haystack?
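"Connection refused" usually means nothing is listening at the URL django-haystack was given. A hypothetical settings.py sketch, assuming an Elasticsearch backend; the engine path, URL and index name are assumptions to adjust for your setup:

```python
# Hypothetical django-haystack configuration for GeoNode's settings.py.
# ENGINE, URL and INDEX_NAME are assumptions -- point them at the backend
# and Elasticsearch instance you actually run.
HAYSTACK_SEARCH = True
HAYSTACK_CONNECTIONS = {
    "default": {
        "ENGINE": "haystack.backends.elasticsearch7_backend.Elasticsearch7SearchEngine",
        "URL": "http://127.0.0.1:9200/",
        "INDEX_NAME": "geonode_haystack",
    },
}
```

With this in place, and an Elasticsearch instance actually reachable at that URL, `python manage.py rebuild_index` (or `update_index`) should become available and stop refusing the connection.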

RedisCommandTimeoutException while connecting a Micronaut lambda to ElastiCache

I am trying to create a lambda using Micronaut 2 that connects to ElastiCache.
I have used the redis-lettuce dependency in the project with the following configuration; encryption in transit is enabled in the ElastiCache config.
redis:
  uri: redis://{aws master node endpoint}
  password: {password}
  tls: true
  ssl: true
  io-thread-pool-size: 5
  computation-thread-pool-size: 4
I am getting the exception below:
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
    at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
    at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:119)
    at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:75)
    at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:79)
    at com.sun.proxy.$Proxy22.set(Unknown Source)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:29)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:16)
    at io.micronaut.function.aws.MicronautRequestHandler.handleRequest(MicronautRequestHandler.java:73)
I have tried Spring Cloud Function on the same network (literally on the same lambda) with the same ElastiCache setup, and it works fine.
Any direction that could help me debug this issue, please.
This might be late.
The first thing to mention here is that an ElastiCache cluster can only be accessed from within its VPC. If you want to access it from the internet, it needs a NAT gateway enabled.

Bug CleverBeagle Pup 2.0 Meteor GraphQL deployment on Heroku

1) The first time, when I deployed the original code to the Heroku server with git clone https://github.com/cleverbeagle/pup, the application didn't launch.
I managed to correct this by copying the content of the 'settings-development.json' file and pasting it in Heroku => myProject => Settings => Reveal Config Vars => Key: METEOR_SETTINGS, Value: the pasted content.
Thanks to:
- https://github.com/cleverbeagle/pup/issues/9
- https://github.com/cleverbeagle/pup/issues/197
So now the app is showing on the server.
2) On Chrome console, I have this error :
50d72c91808ef7fba57f920b67d152d2d57698eb.js?meteor_js_resource=true:9 WebSocket connection to 'ws://localhost:4001/graphql' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
so I changed this in METEOR_SETTINGS
"graphQL": {
  "httpUri": "http://localhost:3000/graphql",
  "wsUri": "ws://localhost:4001/graphql"
},
to
"graphQL": {
  "httpUri": "https://myproject.herokuapp.com:3000/graphql",
  "wsUri": "wss://myproject.herokuapp.com:4001/graphql"
},
Note: without https and wss, the app does not show.
3) Now on Chrome Console, I have :
this warning :
50d72c91808ef7fba57f920b67d152d2d57698eb.js?meteor_js_resource=true:9 WebSocket connection to 'wss://myproject.herokuapp.com:4001/graphql' failed: WebSocket is closed before the connection is established.
and after several warning above, I have this error :
50d72c91808ef7fba57f920b67d152d2d57698eb.js?meteor_js_resource=true:9 WebSocket connection to 'wss://myproject.herokuapp.com:4001/graphql' failed: WebSocket opening handshake timed out
Using the original source code from Pup, I can sign up on the server but I cannot create a new document.
Any help, please?
Thank you
EDIT 15 JAN 2019
4) I removed the port, like this:
"httpUri": "https://myproject.herokuapp.com/graphql",
"wsUri": "wss://myproject.herokuapp.com/graphql"
Now, I can create New document on https://myproject.herokuapp.com/documents
but I still have this warning :
fe6fa1ac83e19aa2513ac3f97293600e8dc99e8e.js?meteor_js_resource=true:9
WebSocket connection to 'wss://myproject.herokuapp.com/graphql'
failed: WebSocket is closed before the connection is established.
and this error :
WebSocket connection to 'wss://myproject.herokuapp.com/graphql'
failed: Error during WebSocket handshake: Unexpected response code:
503
any idea ?
Thanks
