I am trying to connect to my Elasticsearch server using the Java API and Shield. I can execute index, get, delete and search operations on the existing cluster using the Sense plugin, for example, and via curl on port 9200. I've seen other threads about this, but none of them worked, and none of them were trying to connect to an Elasticsearch server secured with Shield.
I used the same API to connect to my local Elasticsearch instance and it worked fine; however, when I try to connect to my web server I always get the same error:
Error
1342 [main] DEBUG org.elasticsearch.shield.transport.netty - [Benjamin Jacob Grimm] connected to node [{#transport#-1}{HOST_IP}{HOST/HOST_IP:9300}]
1431 [elasticsearch[Benjamin Jacob Grimm][generic][T#1]] DEBUG org.elasticsearch.shield.transport.netty - [Benjamin Jacob Grimm] disconnecting from [{#transport#-1}{HOST_IP}{HOST/HOST_IP:9300}], channel closed event
1463 [main] INFO org.elasticsearch.client.transport - [Benjamin Jacob Grimm] failed to get node info for {#transport#-1}{HOST_IP}{HOST/HOST_IP:9300}, disconnecting...
NodeDisconnectedException[[][HOST/HOST_IP:9300][cluster:monitor/nodes/liveness] disconnected]
...9200/_nodes
"cluster_name": "elasticsearch",
"nodes": {
"UYdZbCQKQZavtFYOoUpawg": {
"name": "Desmond Pitt",
"transport_address": "HOST_IP:9300",
"host": "HOST_IP",
"ip": "HOST_IP",
"version": "2.3.3",
"build": "218bdf1",
"http_address": "HOST_IP:9200",
"settings": {
"pidfile": "/var/run/elasticsearch/elasticsearch.pid",
"cluster": {
"name": "elasticsearch"
},
"path": {
"conf": "/etc/elasticsearch",
"data": "/var/lib/elasticsearch",
"logs": "/var/log/elasticsearch",
"home": "/usr/share/elasticsearch"
},
"shield": {
"http": {
"ssl": "true"
},
"https": {
"ssl": "true"
},
"transport": {
"ssl": "true"
}
},
"name": "Desmond Pitt",
"client": {
"type": "node"
},
"http": {
"cors": {
"allow-origin": "*",
"allow-headers": "Authorization, Origin, X-Requested-With, Content-Type, Accept",
"allow-credentials": "true",
"allow-methods": "OPTIONS, HEAD, GET, POST, PUT, DELETE",
"enabled": "true"
}
},
"index": {
"queries": {
"cache": {
"type": "opt_out_cache"
}
}
},
"foreground": "false",
"config": {
"ignore_system_properties": "true"
},
"network": {
"host": "HOST_IP",
"bind_host": "0.0.0.0",
"publish_host": "HOST_IP"
}
}
Java code:
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(Settings.builder()
.put("cluster.name", ClusterName)
.put("shield.user", "USER:PASSWORD")
.build())
.build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(HOST), 9300));
As stated in "Can't connect to ElasticSearch server using Java API", I've tried to sync the Java version of my Java API and my server; currently I'm using:
Java API:
C:\Program Files\Java\jdk1.8.0_92
Server:
"version": "1.8.0_91",
"vm_name": "OpenJDK 64-Bit Server VM",
I don't know whether using ...0_91 and 0_92 is a problem, but it doesn't seem to make any difference, because the Java API works well on my localhost server.
If you need more information feel free to ask.
Thanks in advance!
UPDATE:
Changes I made to elasticsearch.yml
shield.ssl.keystore.path: /usr/share/elasticsearch/bin/shield/elastic.jks
shield.ssl.keystore.password: password
shield.ssl.keystore.key_password: password
shield.transport.ssl: true
shield.http.ssl: true
shield.https.ssl: true
network.host: HOST_IP
network.publish_host: HOST_IP
shield.ssl.hostname_verification.resolve_name: false
Result of https://HOST:9200/_cluster/health?pretty=true
{
"cluster_name": "elasticsearch",
"status": "yellow",
"timed_out": false,
"number_of_nodes": 1,
"number_of_data_nodes": 1,
"active_primary_shards": 5,
"active_shards": 5,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 5,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 50
}
UPDATE2:
I've tried to activate SSL according to the official documentation and got the following errors:
2082 [elasticsearch[Steel Serpent][transport_client_worker][T#1]{New I/O worker #1}] DEBUG org.elasticsearch.shield.transport.netty - [Steel Serpent] SSL/TLS handshake failed, closing channel: null
java.nio.channels.ClosedChannelException
at org.jboss.netty.handler.ssl.SslHandler.channelDisconnected(SslHandler.java:575)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:396)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:360)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:93)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Temporary Solution
After that attempt, I did as Vladislav Kysliy suggested and disabled SSL, and it worked, but I'm looking for a real solution and not a temporary one.
As I can see, you enabled SSL encryption, but your Java code didn't activate SSL. According to the official documentation, you should use something like this:
TransportClient client = TransportClient.builder()
.addPlugin(ShieldPlugin.class)
.settings(Settings.builder()
.put("cluster.name", "myClusterName")
.put("shield.user", "transport_client_user:changeme")
.put("shield.ssl.keystore.path", "/path/to/client.jks")
.put("shield.ssl.keystore.password", "password")
.put("shield.transport.ssl", "true")
...
.build())
Moreover, I would test my code without any encryption first and then add new features (e.g. SSL) to the config and code step by step.
UPD: To be honest, fixing SSL issues remotely is tricky. This error often appears when the client sends an invalid SSL certificate. You probably need to disable client auth.
Because you use SSL + Shield, the main idea is to check your functionality step by step: disable SSL and check with the Java API client, then enable SSL and check again.
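For reference, here is a minimal client-side sketch that mirrors the keystore settings from the elasticsearch.yml update above; the paths, passwords, host and cluster name are placeholders, and the hostname-verification setting is an assumption worth double-checking against the Shield docs for your version:
import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.shield.ShieldPlugin;

public class ShieldSslClientSketch {
    public static void main(String[] args) throws Exception {
        // Keystore settings mirror the server-side elasticsearch.yml shown above (placeholders, not real values).
        TransportClient client = TransportClient.builder()
                .addPlugin(ShieldPlugin.class)
                .settings(Settings.builder()
                        .put("cluster.name", "elasticsearch")
                        .put("shield.user", "USER:PASSWORD")
                        .put("shield.ssl.keystore.path", "/path/to/client.jks")
                        .put("shield.ssl.keystore.password", "password")
                        .put("shield.transport.ssl", "true")
                        // Assumption: relax hostname verification while testing with certificates
                        // whose CN/SAN does not match the host; verify this setting for your Shield version.
                        .put("shield.ssl.hostname_verification", "false")
                        .build())
                .build()
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("HOST"), 9300));

        // ... use the client, then close it ...
        client.close();
    }
}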
I've installed apisix and apisix-dashboard with helm on my k8s cluster.
I used all defaults except the API keys for the admin and viewer accounts, and a custom username/password for the dashboard. So I'm currently running version 2.15.
My installation steps
helm repo add apisix https://charts.apiseven.com
helm repo update
# installing apisix/apisix
helm install --set-string admin.credentials.admin="new_api_key" \
  --set-string admin.credentials.viewer="new_api_key" \
  apisix apisix/apisix --create-namespace --namespace my-apisix
# installing apisix/apisix-dashboard, where values.yaml contains username/password
helm install -f values.yaml apisix-dashboard apisix/apisix-dashboard --create-namespace --namespace my-apisix
I'm unable to configure the mocking plugin, even though I've been following the docs.
In the provided example I'm unable to call the API on the route with ID 1, so I've created a custom route and then used the VIEW JSON, where I've changed the configuration according to the provided sample.
All calls on this route return 502 errors, and in the logs I can see the route forwarding traffic to a non-existent server. All of that leads me to believe that the mocking plugin is disabled.
Example of my route:
{
"uri": "/mock-test.html",
"name": "mock-sample-read",
"methods": [
"GET"
],
"plugins": {
"mocking": {
"content_type": "application/json",
"delay": 1,
"disable": false,
"response_schema": {
"$schema": "http://json-schema.org/draft-04/schema#",
"properties": {
"a": {
"type": "integer"
},
"b": {
"type": "integer"
}
},
"required": [
"a",
"b"
],
"type": "object"
},
"response_status": 200,
"with_mock_header": true
}
},
"upstream": {
"nodes": [
{
"host": "127.0.0.1",
"port": 1980,
"weight": 1
}
],
"timeout": {
"connect": 6,
"send": 6,
"read": 6
},
"type": "roundrobin",
"scheme": "https",
"pass_host": "node",
"keepalive_pool": {
"idle_timeout": 60,
"requests": 1000,
"size": 320
}
},
"status": 1
}
Can anyone provide me with an actual working example or point out what I'm missing? Any suggestions are welcome.
EDIT:
Looking at the logs of the apache/apisix:2.15.0-alpine container, it looks like the mocking plugin is disabled. According to the docs: "The mocking Plugin is used for mocking an API. When executed, it returns random mock data in the format specified and the request is not forwarded to the Upstream."
The error logs, where I've changed the domain and IP address, suggest that the traffic is being forwarded to the upstream:
10.10.10.24 - - [23/Sep/2022:11:33:16 +0000] my.domain.com "GET /mock-test.html HTTP/1.1" 502 154 0.001 "-" "PostmanRuntime/7.29.2" 127.0.0.1:1980 502 0.001 "http://my.domain.com"
Plugins are enabled globally; I've tested this using the Keycloak plugin.
EDIT 2: Could this be a bug in version 2.15 of APISIX? There is currently no open issue on the GitHub repo.
Yes, the mocking plugin is not enabled.
You can just add it here:
https://github.com/apache/apisix-helm-chart/blob/7ddeca5395a2de96acd06bada30f3ab3580a6252/charts/apisix/values.yaml#L219-L269
You can also submit a PR directly to fix it.
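A rough sketch of that change, assuming the chart's enabled-plugin list is the top-level plugins key shown at that link (keep the chart's full default list and just append mocking; the list below is abbreviated):
# values.yaml override for apisix/apisix (abbreviated; keep all default plugins from the chart)
plugins:
  - api-breaker
  - authz-keycloak
  # ... the rest of the chart's default plugin list ...
  - mocking
Then roll it out with something like:
helm upgrade -f values.yaml apisix apisix/apisix --namespace my-apisix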
I'm attempting to connect to a CloudSQL instance via a cloudsql-proxy container on my Kubernetes deployment. I have the cloudsql credentials mounted and the value of GOOGLE_APPLICATION_CREDENTIALS set.
However, I'm still receiving the following error in my logs:
2018/10/08 20:07:28 Failed to connect to database: Post https://www.googleapis.com/sql/v1beta4/projects/[projectID]/instances/[appName]/createEphemeral?alt=json&prettyPrint=false: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: x509: certificate signed by unknown authority
My connection string looks like this:
[dbUser]:[dbPassword]@cloudsql([instanceName])/[dbName]?charset=utf8&parseTime=True&loc=Local
And the proxy dialer is blank-imported (for its side effects) as:
_ github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql
Anyone have an idea what might be missing?
EDIT:
Deployment Spec looks something like this (JSON formatted):
{
"replicas": 1,
"selector": {
...
},
"template": {
...
"spec": {
"containers": [
{
"image": "[app-docker-imager]",
"name": "...",
"env": [
...
{
"name": "MYSQL_PASSWORD",
...
},
{
"name": "MYSQL_USER",
...
},
{
"name": "GOOGLE_APPLICATION_CREDENTIALS",
"value": "..."
}
],
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/secrets/cloudsql",
"name": "[secrets-mount-name]",
"readOnly": true
}
]
},
{
"command": [
"/cloud_sql_proxy",
"-instances=...",
"-credential_file=..."
],
"image": "gcr.io/cloudsql-docker/gce-proxy:1.11",
"name": "...",
"ports": [
{
"containerPort": 3306,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/secrets/cloudsql",
"name": "[secrets-mount-name]",
"readOnly": true
}
]
}
],
"volumes": [
{
"name": "[secrets-mount-name]",
"secret": {
"defaultMode": 420,
"secretName": "[secrets-mount-name]"
}
}
]
}
}
}
The error message indicates that your client is not able to trust the certificate of https://www.googleapis.com. There are two possible causes for this:
Your client does not know what root certificates to trust. The official cloudsql-proxy Docker image includes root certificates, so if you are using that image, this is not your problem. If you are not using that image, you should use it (or at least install CA certificates in your image).
Your outbound traffic is being intercepted by a proxy server that is using a different, untrusted certificate. This might be malicious (in which case you need to investigate who is intercepting your traffic). More benignly, you might be in an organization that uses an outbound proxy to inspect traffic according to policy. If this is the case, you should build a new Docker image that includes the CA certificate used by your organization's outbound proxy.
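If the first cause applies and your application image is simply missing root certificates, a minimal sketch of the fix (assuming a Debian-based app image; on Alpine the equivalent would be apk add --no-cache ca-certificates):
FROM debian:bullseye-slim
# Install the standard CA bundle so outbound TLS (e.g. to googleapis.com) can be verified.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# ... copy and run your application as before ...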
I have a cluster of 3 Consul servers. I have registered one service (FooService) with one of the servers (Server1). When I check the registered services over HTTP (/v1/agent/services) from that server (Server1), it shows up correctly. But when I try the same with any of the other servers (i.e. Server2/Server3), this registered service is not listed. This issue is not happening for the KV store. Can someone suggest a fix for this?
consul version : 1.2.1
I have pasted my configuration below
{
"bootstrap_expect": 3,
"client_addr": "0.0.0.0",
"datacenter": "DC1",
"data_dir": "/var/consul",
"domain": "consul",
"enable_script_checks": true,
"dns_config": {
"enable_truncate": true,
"only_passing": true
},
"enable_syslog": true,
"encrypt": "3scwcXQpgNVo1CZuqlSouA==",
"leave_on_terminate": true,
"log_level": "INFO",
"rejoin_after_leave": true,
"server": true,
"start_join": [
"10.0.0.242",
"10.0.0.243",
"10.0.0.244"
],
"ui": true
}
What I understood is that a Spring Boot app should always connect to the local Consul client; then this issue will not occur.
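As a side note, /v1/agent/services only lists services registered with that particular agent, while the catalog endpoints give the cluster-wide view. A quick way to check (default HTTP port 8500 assumed; the IP is one of the servers from the start_join list above):
# Agent-local view: only services registered with this agent
curl http://10.0.0.242:8500/v1/agent/services

# Cluster-wide view: every node where FooService is registered in the datacenter
curl http://10.0.0.242:8500/v1/catalog/service/FooService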
Newbie to Microservices here.
I have been looking into developing a microservice with Spring Actuator while using Consul for service discovery and failure recovery.
I have configured a cluster as explained in the Consul documentation.
Now what I'm trying to do is configure a Consul watch that triggers when any of my services goes down and executes a shell script to restart the service. The following is my configuration file.
{
"bind_addr": "127.0.0.1",
"datacenter": "dc1",
"encrypt": "EXz7LsrhpQ4idwqffiFoQ==",
"data_dir": "/data",
"log_level": "INFO",
"enable_syslog": true,
"enable_debug": true,
"enable_script_checks": true,
"ui":true,
"node_name": "SpringConsulClient",
"server": false,
"service": { "name": "Apache", "tags": ["HTTP"], "port": 8080,
"check": {"script": "curl localhost >/dev/null 2>&1", "interval": "10s"}},
"rejoin_after_leave": true,
"watches": [
{
"type": "service",
"handler": "/Consul-Script.sh"
}
]
}
Any help/tip would be greatly appreciated.
Regards,
Chrishan
Take a closer look at the description of the service watch type in the official documentation. It has an example of how you can specify it:
{
"type": "service",
"service": "redis",
"args": ["/usr/bin/my-service-handler.sh", "-redis"]
}
Note that it has no handler property but instead takes the path to the script as an argument. And one more thing:
It requires the "service" parameter
It seems that in your case you need to specify it as follows:
"watches": [
{
"type": "service",
"service": "Apache",
"args": ["/fully/qualified/path/to/Consul-Script.sh"]
}
]
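After updating the agent configuration, the agent has to pick the change up; watches are part of the reloadable configuration, so a reload (or a full agent restart) should be enough:
# Reload the local agent so it picks up the updated watch definition
consul reload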
I am trying to set up mesos-exporter on my Mesosphere DC/OS cluster. The link I am referring to is https://github.com/prometheus/mesos_exporter. The JSON file I have used is:
{
"id": "/mesosexporter",
"instances": 6,
"cpus": 0.1,
"mem": 25,
"constraints": [["hostname", "UNIQUE"]],
"acceptedResourceRoles": ["slave_public","*"],
"container": {
"type": "DOCKER",
"docker": {
"image": "prom/mesos-exporter",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 9105,
"hostPort": 9105,
"protocol": "tcp"
}
]
}
},
"healthChecks": [{
"protocol": "TCP",
"gracePeriodSeconds": 600,
"intervalSeconds": 30,
"portIndex": 0,
"timeoutSeconds": 10,
"maxConsecutiveFailures": 2
}]
}
But the only meter exposed to Prometheus is 'mesos_exporter_slave_scrape_errors_total'. What are the other meters that mesos-exporter exposes to Prometheus? The README in the mesos-exporter GitHub repo says that we need to provide command-line flags, but if I want to run mesos-exporter as a Docker container, how should I specify the configuration?
EDIT - The meter 'mesos_exporter_slave_scrape_errors_total' gives a non-zero value, indicating that errors occurred during the scrape.
EDIT - After adding the 'parameters' primitive, my JSON file looks like:
{
"id": "/mesosexporter",
"instances": 1,
"cpus": 0.1,
"mem": 25,
"constraints": [["hostname", "UNIQUE"]],
"acceptedResourceRoles": ["slave_public"],
"container": {
"type": "DOCKER",
"docker": {
"image": "prom/mesos-exporter",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 9105,
"hostPort": 9105,
"protocol": "tcp"
}
],
"privileged": true,
"parameters": [
{ "key": "-exporter.discovery", "value": "true" },
{ "key": "-exporter.discovery.master-url",
"value": "http://mymasterDNS.amazonaws.com:5050" }
]
}
},
"healthChecks": [{
"protocol": "TCP",
"gracePeriodSeconds": 600,
"intervalSeconds": 30,
"portIndex": 0,
"timeoutSeconds": 10,
"maxConsecutiveFailures": 2
}]
}
Mesos version: 0.22.1
Marathon version: 0.8.2-SNAPSHOT
The app remains in the 'deploying' state after using this JSON.
But the only meter exposed to Prometheus is 'mesos_exporter_slave_scrape_errors_total'. What are the other meters that mesos-exporter exposes to Prometheus?
If the mesos-exporter is listening on port 9100, you can check http://<hostname>:9100/metrics to see what metrics are being exposed. I am referring to the prom/node-exporter that I have set up on one of my systems.
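For the setup in this question, that check would look something like the following (port 9105 comes from the port mapping above; the agent hostname depends on where Marathon scheduled the task):
curl http://<agent-hostname>:9105/metrics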
but if I want to run mesos-exporter as a Docker container, how should I specify the configuration?
I am assuming you're POSTing this JSON file to the Marathon REST API to start Docker containers. If that is indeed the case, you can specify additional options using the parameters JSON directive. More info can be found in the Marathon docs under the section "Privileged Mode and Arbitrary Docker Options".
Hope that helps!
Using the args primitive solved the problem. The equivalent docker command is docker run -p 9105:9105 prom/mesos-exporter -exporter.discovery=true -exporter.discovery.master-url="mymasternodeDNS:5050" -log.level=debug
As the flags 'exporter.discovery', 'exporter.discovery.master-url' and 'log.level' are meant for the image's entrypoint and not for 'docker run', 'args' has to be used.
The format for 'args' as added in the working JSON is:
"args": [
"-exporter.discovery=true",
"-exporter.discovery.master-url=http://mymasternodeDNS:5050",
"-log.level=debug"]