Open Distro Elasticsearch - Authenticate to Kibana with JWT

I was able to get Open Distro running with basic auth (using the internal user database); now I need to use JWT tokens to authenticate to the Kibana dashboard.
Elasticsearch config:
basic_internal_auth_domain:
  http_enabled: false
  transport_enabled: true
  order: 4
  http_authenticator:
    type: basic
    challenge: true
  authentication_backend:
    type: intern
proxy_auth_domain:
  http_enabled: false
  transport_enabled: false
  order: 3
  http_authenticator:
    type: proxy
    challenge: false
    config:
      user_header: "x-proxy-user"
      roles_header: "x-proxy-roles"
  authentication_backend:
    type: noop
jwt_auth_domain:
  enabled: true
  http_enabled: true
  transport_enabled: true
  order: 0
  http_authenticator:
    type: jwt
    challenge: false
    config:
      signing_key: "EdzdXd5weiuSVFyddfjhjhfjjchJGRrZmpkayZPUA=="
      jwt_header: "Authorization"
      jwt_url_parameter: "token"
      roles_key: "roles"
      subject_key: "sub"
  authentication_backend:
    type: noop
Kibana Config:
server.name: kibana
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.url: https://127.0.0.1:9200
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.auth.type: "jwt"
opendistro_security.jwt.url_param: token
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
After this, when I open http://localhost:5601?token=dfkhdfjdfhdjfhdhfkhdjfhjdhfjdhffdjhfdjhf, authentication fails and the Elasticsearch logs show this message:
[c.a.o.s.h.HTTPBasicAuthenticator] [node-1] No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'
I have followed the documentation thoroughly, but there is very little material on the internet right now; I guess it's still at the POC stage for most people. Any suggestions?

For those who are looking for an answer: my JWT token was wrong. Make sure you set "iat", "nbf" and "exp" according to your server's time, not your local time.
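For anyone reproducing this, here is a minimal sketch of generating a token that the jwt_auth_domain above should accept. It assumes Python with the PyJWT library and HS256 signing (neither is specified in the original post); the claim names follow the subject_key and roles_key settings from the config, and the HMAC key is the base64-decoded signing_key value:

# Sketch only: PyJWT (pip install PyJWT), HS256, and the user/role values below
# are illustrative placeholders, not values from the original post.
import base64
import datetime

import jwt  # PyJWT

# In practice, use the base64 string configured as signing_key in config.yml.
signing_key_b64 = base64.b64encode(b"replace-with-your-shared-secret").decode()
key = base64.b64decode(signing_key_b64)

now = datetime.datetime.now(datetime.timezone.utc)  # server clock, not local wall-clock
payload = {
    "sub": "jwt-user",                         # matches subject_key: "sub"
    "roles": "admin",                          # matches roles_key: "roles" (placeholder role)
    "iat": now,                                # issued at
    "nbf": now,                                # not valid before
    "exp": now + datetime.timedelta(hours=1),  # expiry
}

token = jwt.encode(payload, key, algorithm="HS256")  # PyJWT 2.x returns a str
print(f"http://localhost:5601?token={token}")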

Related

Why is FSCrawler refusing to trust the certificate when I have set SSL verification to 'false'?

Here's my YAML file for FSCrawler:
name: "data_science"
fs:
url: "C:\\tmp\\DS_books"
update_rate: "15m"
excludes:
- "*/~*"
json_support: false
filename_as_id: false
add_filesize: true
remove_deleted: true
add_as_inner_object: false
store_source: false
index_content: true
attributes_support: false
raw_metadata: false
xml_support: false
index_folders: true
lang_detect: false
continue_on_error: false
ocr:
language: "eng"
enabled: true
pdf_strategy: "ocr_and_text"
follow_symlinks: false
elasticsearch:
nodes:
- url: "https://127.0.0.1:9200"
username: "elastic"
password: "8u4c0pEXmjYwq_Pd4zeX"
bulk_size: 100
flush_interval: "5s"
byte_size: "10mb"
ssl_verification: false
Yet I get the following message when I try to build the index:
"WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [DESKTOP-0MS6MUS] http client did not trust this server's certificate, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:51966}"
I replaced it with the most recent Elasticsearch (8.4.3) and the problem isn't occurring anymore...
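If the warning ever comes back, one way to separate client-side verification from what the server logs is to call the cluster directly with verification turned off, which is what ssl_verification: false is meant to do. A minimal sketch in Python with the requests library (my assumption, not part of FSCrawler), reusing the URL and credentials from the settings above:

# Sketch: talk to the self-signed HTTPS endpoint without verifying its certificate,
# mirroring FSCrawler's ssl_verification: false.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

resp = requests.get(
    "https://127.0.0.1:9200",
    auth=("elastic", "8u4c0pEXmjYwq_Pd4zeX"),  # credentials from the settings above
    verify=False,                              # skip certificate verification
)
resp.raise_for_status()
print(resp.json()["version"]["number"])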

Kibana not able to connect to ES services

I am trying to set up ES with Kibana on AKS and am having a bit of an issue. The setup worked before I needed the security plugin enabled. Now I need the security plugin enabled, but I am not able to get Kibana connected. Do you have any ideas? I have tried adding and disabling settings, and calling with/without HTTPS; it all seems to behave the same. Thanks.
Deploying with helm:
ES: image: docker.elastic.co/elasticsearch/elasticsearch imageTag: 7.16.2
Kibana: image: "docker.elastic.co/kibana/kibana" imageTag: "7.10.2"
My full configs:
elasticsearch.yml
xpack.security.enabled: "true"
xpack.security.transport.ssl.enabled: "true"
xpack.security.transport.ssl.supported_protocols: "TLSv1.2"
xpack.security.transport.ssl.client_authentication: "none"
xpack.security.transport.ssl.key: "/usr/share/elasticsearch/config/certkey/apps-com-key.pem"
xpack.security.transport.ssl.certificate: "/usr/share/elasticsearch/config/cert/apps-com-fullchain.pem"
xpack.security.transport.ssl.certificate_authorities: "/usr/share/elasticsearch/config/certs/fullchain-ca.pem"
xpack.security.transport.ssl.verification_mode: "certificate"
xpack.security.http.ssl.enabled: "false"
xpack.security.http.ssl.client_authentication: "none"
xpack.security.http.ssl.key: "/usr/share/elasticsearch/config/certkey/key.pem"
xpack.security.http.ssl.certificate: "/usr/share/elasticsearch/config/cert/fullchain.pem"
xpack.security.http.ssl.certificate_authorities: "/usr/share/elasticsearch/config/certs/fullchain-ca.pem"
kibana.yml
logging.root.level: all
logging.verbose: true
elasticsearch.hosts: ["https://IP:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: ${KIBANA_PASSWORD}
server.ssl:
  enabled: "true"
  key: "/usr/share/kibana/config/certkey/key.pem"
  certificate: "/usr/share/kibana/config/cert/fullchain.pem"
  clientAuthentication: "none"
  supportedProtocols: [ "TLSv1.2" ]
elasticsearch.ssl:
  certificateAuthorities: [ "/usr/share/kibana/config/certs/fullchain-ca.pem" ]
  verificationMode: "certificate"
elasticsearch.requestHeadersWhitelist: [ authorization ]
newsfeed.enabled: "false"
telemetry.enabled: "false"
telemetry.optIn: "false"
The errors I receive on the Kibana pod:
{"type":"log","#timestamp":"2022-10-10T13:24:57Z","tags":["error","elasticsearch","data"],"pid":8,"message":"[ConnectionError]: write EPROTO 140676394411840:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:\n

Missing authentication credentials for REST request when using sniffing when Kibana starts

I just upgraded ELK from 7.1.0 to 7.5.0, and Kibana fails to start with:
{"type":"log","#timestamp":"2020-01-22T17:27:54Z","tags":["error","elasticsearch","data"],"pid":23107,"message":"Request error, retrying\nGET http://localhost:9200/_xpack => socket hang up"}
{"type":"log","#timestamp":"2020-01-22T17:27:55Z","tags":["info","plugins-system"],"pid":23107,"message":"Starting [8] plugins: [security,licensing,code,timelion,features,spaces,translations,data]"}
{"type":"log","#timestamp":"2020-01-22T17:27:55Z","tags":["warning","plugins","licensing"],"pid":23107,"message":"License information could not be obtained from Elasticsearch for the [data] cluster. [security_exception] missing authentication credentials for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"missing authentication credentials for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"missing authentication credentials for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}"}
when having the following two options enabled:
elasticsearch.sniffOnStart: true
elasticsearch.sniffOnConnectionFault: true
Any idea what I am doing wrong?
The complete Kibana config follows:
server.port: 5601
server.host: 0.0.0.0
server.name: kibana
kibana.index: ".kibana"
kibana.defaultAppId: "discover"
elasticsearch.hosts: ["http://node1.test.com:9200", "http://node2.test.com:9200", "http://node3.test.com:9200", "http://node4.test.com:9200", "http://node5.test.com:9200"]
elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 30000
elasticsearch.logQueries: true
elasticsearch.sniffOnStart: true
elasticsearch.sniffOnConnectionFault: true
elasticsearch.username: "kibana"
elasticsearch.password: "XXX"
logging.dest: /var/log/kibana.log
logging.verbose: false
xpack.security.enabled: true
xpack.monitoring.enabled: true
xpack.monitoring.ui.enabled: true
xpack.security.encryptionKey: "XXX"
If I remove elasticsearch.sniffOnStart: true, all is well.
This "xpack.security.enabled: false" worked for 6.2.x version as well

elasticsearch.yml SES email config

I took the config from the Elasticsearch documentation and added it to the Elastic Cloud YAML.
xpack.notification.email.account:
  ses_account:
    smtp:
      auth: true
      starttls.enable: true
      starttls.required: true
      host: email-smtp.us-east-1.amazonaws.com
      port: 587
      user: <username>
      password: <password>
This gives me the error below:
'xpack.notification.email.account.ses_account.profile': is not allowed

JHipster test: NoCacheRegionFactoryAvailableException when second level cache is disabled

When I used JHipster to generate an app, I disabled the second-level cache. However, when I run either "gradle test" or "run as JUnit test", the tests fail with a NoCacheRegionFactoryAvailableException. I have checked the application.yml in the directory "src/test/resources/config" and made sure that the second-level cache is disabled. I do not know why the app is still looking for a second-level cache. Is there any clue as to how this happens, or how to disable the second-level cache completely?
Apart from the test failure, everything else works well; the app runs successfully.
application.yml in src/test/resources/config
spring:
  application:
    name: EMS
  datasource:
    url: jdbc:h2:mem:EMS;DB_CLOSE_DELAY=-1
    name:
    username:
    password:
  jpa:
    database-platform: com.espion.ems.domain.util.FixedH2Dialect
    database: H2
    open-in-view: false
    show_sql: true
    hibernate:
      ddl-auto: none
      naming-strategy: org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
    properties:
      hibernate.cache.use_second_level_cache: false
      hibernate.cache.use_query_cache: false
      hibernate.generate_statistics: true
      hibernate.hbm2ddl.auto: validate
  data:
    elasticsearch:
      cluster-name:
      cluster-nodes:
      properties:
        path:
          logs: target/elasticsearch/log
          data: target/elasticsearch/data
  mail:
    host: localhost
  mvc:
    favicon:
      enabled: false
  thymeleaf:
    mode: XHTML
liquibase:
  contexts: test
security:
  basic:
    enabled: false
server:
  port: 10344
  address: localhost
jhipster:
  async:
    corePoolSize: 2
    maxPoolSize: 50
    queueCapacity: 10000
  security:
    rememberMe:
      # security key (this key should be unique for your application, and kept secret)
      key: jhfasdhflasdhfasdkfhasdjkf
  metrics: # DropWizard Metrics configuration, used by MetricsConfiguration
    jmx.enabled: true
  swagger:
    title: EMS API
    description: EMS API documentation
    version: 0.0.1
    termsOfServiceUrl:
    contactName:
    contactUrl:
    contactEmail:
    license:
    licenseUrl:
    enabled: false
Move src/test/resources/config/application.yml to the src/test/resources directory.
You can find that solution at https://github.com/jhipster/generator-jhipster/issues/3730
