Why is FSCrawler refusing to trust the certificate when I have set ssl_verification to "false"? - elasticsearch

Here's my YAML file for FSCrawler:
name: "data_science"
fs:
url: "C:\\tmp\\DS_books"
update_rate: "15m"
excludes:
- "*/~*"
json_support: false
filename_as_id: false
add_filesize: true
remove_deleted: true
add_as_inner_object: false
store_source: false
index_content: true
attributes_support: false
raw_metadata: false
xml_support: false
index_folders: true
lang_detect: false
continue_on_error: false
ocr:
language: "eng"
enabled: true
pdf_strategy: "ocr_and_text"
follow_symlinks: false
elasticsearch:
nodes:
- url: "https://127.0.0.1:9200"
username: "elastic"
password: "8u4c0pEXmjYwq_Pd4zeX"
bulk_size: 100
flush_interval: "5s"
byte_size: "10mb"
ssl_verification: false
Yet I get the following message when I try to build the index:
"WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [DESKTOP-0MS6MUS] http client did not trust this server's certificate, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:51966}"

I replaced Elasticsearch with the most recent release (8.4.3) and the problem isn't occurring anymore...
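For anyone who hits the same warning and cannot upgrade: one thing worth double-checking, since YAML is indentation-sensitive, is that ssl_verification sits directly under elasticsearch, at the same level as nodes, which is where FSCrawler's documented settings layout expects it. A minimal sketch of just that section (URL and credentials taken from the question above):

elasticsearch:
  nodes:
  - url: "https://127.0.0.1:9200"
  username: "elastic"
  password: "8u4c0pEXmjYwq_Pd4zeX"
  ssl_verification: false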

Related

org.axonframework.commandhandling.distributed.CommandDispatchException: An error occurred while trying to dispatch a command on the DistributedCommandBus: 404 null

Facing the below issue:
org.axonframework.commandhandling.distributed.CommandDispatchException: An error occurred while trying to dispatch a command on the DistributedCommandBus: 404 null
The application is deployed on the OpenShift platform using K8s.
This issue occurs when increasing the number of pods to greater than 1 in a specific environment.
Below is the configuration file:
eventbus:
  type: jms
eventBus:
  server:
  selector:
  topicName: EventBus
  queueName: EventBus.management-inventory-api
eventstore:
  jdbc:
    validateOnly: true
endpoints:
  health:
    sensitive: false
  jmx:
    uniqueNames: true
liquibase:
  enabled: false
server:
  port: 8087
  http:
    port: 8088
  ssl:
    enabled: false
    keyStore: ${certs.path}/inventorydomain.jks
    keyStoreType: JKS
    trustStore: ${certs.path}/inventorydomain_truststore.jks
    trustStoreType: JKS
    keyAlias: inventorydomain
spring:
  jmx:
    default-domain: com.inventory.domain.inventory-command
  jpa:
    hibernate:
      ddlAuto: validate
    show-sql: true
  messages:
    basename: errors,platform-errors
  autoconfigure.exclude: |
    org.axonframework.boot.autoconfig.JpaAutoConfiguration,
    org.axonframework.boot.autoconfig.AxonAutoConfiguration
annotation:
  eventHandler:
    lookupPrefix: com.consumercard.command.listener.external
  eventMapping:
    lookupPrefix: com.consumercard.command.event
jmsBrokers:
  producer:
    brokerUrl: vm://localhost?broker.persistent=false
  consumer:
    brokerUrl: vm://localhost?broker.persistent=false
spring.cloud.config.discovery.enabled: false
eureka:
  instance:
    preferIpAddress: true
    nonSecurePort: ${server.http.port}
    securePort: ${server.port}
    nonSecurePortEnabled: true
    securePortEnabled: false
    metadata-map:
      zone: zone-1
  client:
    enabled: false
    serviceUrl:
      defaultZone: https://localhost:8448/eureka
logging:
  level: DEBUG
basicAuth.enabled: true
rest:
  client:
    connection:
      defaultMaxPerRoute: 50
      maxTotal: 100
      connectionTimeout: 10000
      readTimeout: 30000
NOTE:
This was an additional property added:
eureka:
  instance:
    metadata-map:
      zone: zone-1
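For context on why that Eureka metadata matters (an assumption based on the config above, since the exact Axon version and routing setup are not shown): Axon's Spring Cloud command router advertises each member's command-routing information through the discovery client, so once you scale past one pod every instance has to be discoverable by the others for the DistributedCommandBus to dispatch commands; a 404 typically means a member could not be resolved or queried over HTTP. A hedged sketch of the Eureka client section with discovery actually enabled (property names are standard Spring Cloud Netflix; the service URL is the one from the question):

eureka:
  client:
    enabled: true                 # discovery must be on for members to see each other
    serviceUrl:
      defaultZone: https://localhost:8448/eureka
  instance:
    preferIpAddress: true
    metadata-map:
      zone: zone-1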

Witnessing strange surge in RDS Aurora MySQL

I have a Spring Boot based microservice application.
id 'org.springframework.boot' version '2.5.4'
id 'io.spring.dependency-management' version '1.0.11.RELEASE'
id "com.palantir.docker" version "0.26.0"
id "com.palantir.docker-run" version "0.26.0"
id 'pl.allegro.tech.build.axion-release' version '1.13.2'
Database is MySQL 5.7 Aurora RDS
'mysql', name: 'mysql-connector-java', version: '8.0.28'
HikariCP - 4.0.3
I am witnessing a strange surge in CPU utilisation on the RDS Performance Insights dashboard: even when there are no requests on my app server, MySQL still shows high CPU utilisation.
Here are the screenshots:
We can observe from the logs that there are no requests on the server, but when a connection passes its max lifetime there is a surge in CPU utilisation on RDS Aurora MySQL:
"connection has passed maxLifetime" -> and the top SQL shows set autocommit = 0 as the query fired the highest number of times.
Here are my configurations:
application.yml
spring:
  application:
    name: catalogue
  profiles:
    # The commented value for `active` can be replaced with valid Spring profiles to load.
    # Otherwise, it will be filled in by gradle when building the JAR file
    # Either way, it can be overridden by `--spring.profiles.active` value passed in the commandline or `-Dspring.profiles.active` set in `JAVA_OPTS`
    active: dev
    group:
      dev:
        - dev
        - api-docs
        # Uncomment to activate TLS for the dev profile
        #- tls
      prod:
        - prod
        - api-docs
        # Uncomment to activate TLS for the dev profile
        #- tls
      stage:
        - stage
  jmx:
    enabled: false
  data:
    web:
      pageable:
        default-page-size: 20
        max-page-size: 20
    jpa:
      repositories:
        bootstrap-mode: deferred
  jpa:
    open-in-view: false
    properties:
      hibernate.jdbc.time_zone: UTC
      hibernate.id.new_generator_mappings: true
      hibernate.connection.provider_disables_autocommit: true #https://vladmihalcea.com/why-you-should-always-use-hibernate-connection-provider_disables_autocommit-for-resource-local-jpa-transactions/
      hibernate.cache.use_second_level_cache: true
      hibernate.cache.region.factory_class: org.hibernate.cache.ehcache.EhCacheRegionFactory
      hibernate.cache.use_query_cache: false
      hibernate.javax.cache.missing_cache_strategy: create
      # modify batch size as necessary
      hibernate.jdbc.batch_size: 20
      hibernate.order_inserts: true
      hibernate.order_updates: true
      hibernate.batch_versioned_data: true
      hibernate.query.fail_on_pagination_over_collection_fetch: true
      hibernate.query.in_clause_parameter_padding: true
      hibernate.dialect: org.hibernate.dialect.MySQL5InnoDBDialect
      javax.persistent.sharedCache.mode: ENABLE_SELECTIVE
    hibernate:
      ddl-auto: none
      naming:
        physical-strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
        implicit-strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
  messages:
    basename: i18n/messages
  main:
    allow-bean-definition-overriding: true
  task:
    execution:
      thread-name-prefix: catalogue-task-
      pool:
        core-size: 2
        max-size: 50
        queue-capacity: 10000
    scheduling:
      thread-name-prefix: catalogue-scheduling-
      pool:
        size: 2
  thymeleaf:
    mode: HTML
  output:
    ansi:
      console-available: true
server:
  servlet:
    session:
      cookie:
        http-only: true
  tomcat:
    mbeanregistry:
      enabled: true
    threads:
      max: 100
  compression:
    enabled: true
    mime-types: "text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json"
    min-response-size: 1024
  port: 8080
# Properties to be exposed on the /info management endpoint
info:
  # Comma separated list of profiles that will trigger the ribbon to show
  display-ribbon-on-profiles: 'dev'
management:
  endpoints:
    web:
      exposure:
        include: "health,info,metrics,prometheus"
  endpoint:
    health:
      probes:
        enabled: true
      show-details: always
      show-components: always
application-prod.yml
logging:
  level:
    ROOT: ERROR
    org.hibernate.SQL: ERROR
    com.pitstop.catalogue: ERROR
    com.zaxxer.hikari: ERROR
  config: classpath:logback-prod.xml
spring:
  devtools:
    restart:
      enabled: true
      additional-exclude: static/**
  jackson:
    serialization:
      indent-output: true
  datasource:
    auto-commit: false
    type: com.zaxxer.hikari.HikariDataSource
    url: ${SPRING_DATASOURCE_URL}
    username: ${SPRING_DATASOURCE_USERNAME}
    password: ${SPRING_DATASOURCE_PASSWORD}
    hikari:
      poolName: CatalogJPAHikariCP
      minimumIdle: 10
      maximumPoolSize: 120
      connectionTimeout: 30000
      idleTimeout: 300000
      maxLifetime: 600000
      auto-commit: false
      data-source-properties:
        cachePrepStmts: true
        prepStmtCacheSize: 250
        prepStmtCacheSqlLimit: 2048
        useServerPrepStmts: true
        useLocalSessionState: true
        rewriteBatchedStatements: true
        cacheResultSetMetadata: true
        cacheServerConfiguration: true
        maintainTimeStats: true
  servlet:
    multipart:
      location: /data/tmp
  jpa:
    hibernate:
      ddl-auto: none
    properties:
      spring.jpa.show-sql: true
      hibernate.generate_statistics: true
  liquibase:
    contexts: prod
  messages:
    cache-duration: PT1S # 1 second, see the ISO 8601 standard
  thymeleaf:
    cache: false
  sleuth:
    sampler:
      probability: 1 # report 100% of traces

Open Distro Elasticsearch - Authenticate to Kibana with JWT

I could get Open Distro running with basic auth (using the internal user database); now I need to use JWT tokens to authenticate to the Kibana dashboard.
Elasticsearch config:
basic_internal_auth_domain:
  http_enabled: false
  transport_enabled: true
  order: 4
  http_authenticator:
    type: basic
    challenge: true
  authentication_backend:
    type: intern
proxy_auth_domain:
  http_enabled: false
  transport_enabled: false
  order: 3
  http_authenticator:
    type: proxy
    challenge: false
    config:
      user_header: "x-proxy-user"
      roles_header: "x-proxy-roles"
  authentication_backend:
    type: noop
jwt_auth_domain:
  enabled: true
  http_enabled: true
  transport_enabled: true
  order: 0
  http_authenticator:
    type: jwt
    challenge: false
    config:
      signing_key: "EdzdXd5weiuSVFyddfjhjhfjjchJGRrZmpkayZPUA=="
      jwt_header: "Authorization"
      jwt_url_parameter: "token"
      roles_key: "roles"
      subject_key: "sub"
  authentication_backend:
    type: noop
Kibana Config:
server.name: kibana
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.url: https://127.0.0.1:9200
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.auth.type: "jwt"
opendistro_security.jwt.url_param: token
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
After this, when I open http://localhost:5601?token=dfkhdfjdfhdjfhdhfkhdjfhjdhfjdhffdjhfdjhf, the auth fails and the Elasticsearch logs show this message -
[c.a.o.s.h.HTTPBasicAuthenticator] [node-1] No 'Basic Authorization'
header, send 401 and 'WWW-Authenticate Basic'
I have followed the documentation thoroughly, yet there is very little material on the internet right now; it's still in the POC stage for most people, I guess. Any suggestions?
For those who are looking for an answer: my JWT token was wrong. Make sure you configure "iat", "nbf" and "exp" according to your server time and not your local time.
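To make that concrete, this is roughly the claim set the token needs (shown as YAML for readability; the values are illustrative and only the keys and time semantics matter). roles and sub correspond to roles_key and subject_key in the jwt_auth_domain config, the timestamps are Unix epoch seconds checked against the Elasticsearch server's clock, and the token must be signed with the configured signing_key (typically HS256 for a base64-encoded shared secret):

sub: "john.doe"          # matched via subject_key
roles: "admin,readall"   # matched via roles_key
iat: 1670000000          # issued-at: server time, not local time
nbf: 1670000000          # not-before: must already have passed on the server
exp: 1670003600          # expiry: must still lie in the future on the server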

JHipster test: NoCacheRegionFactoryAvailableException when second level cache is disabled

When I used JHipster to generate an app, I disabled the second level cache. However, when I run either "gradle test" or "run as JUnit test" to test the app, it fails with a "NoCacheRegionFactoryAvailableException". I have checked the application.yml in the directory "src/test/resources/config" and made sure that the second level cache is disabled. I do not know why the app is still looking for a second level cache. Is there any clue as to how this happens, or how to disable the second level cache completely?
Apart from the test failure, everything else works well; the app runs successfully.
application.yml in src/test/resources/config
spring:
  application:
    name: EMS
  datasource:
    url: jdbc:h2:mem:EMS;DB_CLOSE_DELAY=-1
    name:
    username:
    password:
  jpa:
    database-platform: com.espion.ems.domain.util.FixedH2Dialect
    database: H2
    open-in-view: false
    show_sql: true
    hibernate:
      ddl-auto: none
      naming-strategy: org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
    properties:
      hibernate.cache.use_second_level_cache: false
      hibernate.cache.use_query_cache: false
      hibernate.generate_statistics: true
      hibernate.hbm2ddl.auto: validate
  data:
    elasticsearch:
      cluster-name:
      cluster-nodes:
      properties:
        path:
          logs: target/elasticsearch/log
          data: target/elasticsearch/data
  mail:
    host: localhost
  mvc:
    favicon:
      enabled: false
  thymeleaf:
    mode: XHTML
liquibase:
  contexts: test
security:
  basic:
    enabled: false
server:
  port: 10344
  address: localhost
jhipster:
  async:
    corePoolSize: 2
    maxPoolSize: 50
    queueCapacity: 10000
  security:
    rememberMe:
      # security key (this key should be unique for your application, and kept secret)
      key: jhfasdhflasdhfasdkfhasdjkf
  metrics: # DropWizard Metrics configuration, used by MetricsConfiguration
    jmx.enabled: true
  swagger:
    title: EMS API
    description: EMS API documentation
    version: 0.0.1
    termsOfServiceUrl:
    contactName:
    contactUrl:
    contactEmail:
    license:
    licenseUrl:
    enabled: false
Move src/test/resources/config/application.yml to the src/test/resources directory.
You can find this solution at https://github.com/jhipster/generator-jhipster/issues/3730
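If moving the file alone does not do it, a belt-and-braces option (a sketch, assuming a Hibernate version that ships org.hibernate.cache.internal.NoCachingRegionFactory) is to pin the region factory explicitly in the test properties so nothing can fall back to an Ehcache-backed one:

spring:
  jpa:
    properties:
      hibernate.cache.use_second_level_cache: false
      hibernate.cache.use_query_cache: false
      # explicitly pin the no-op region factory so no cache provider is looked up
      hibernate.cache.region.factory_class: org.hibernate.cache.internal.NoCachingRegionFactory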

PhantomJS + Vagrant + Codeception

I was wondering if anyone has had the chance to get all of this (plus Laravel and WordPress) working in the same environment (every page is working and I have no problems with Laravel or WordPress).
My question is whether anyone has a configuration working in the right way.
Here is my copy of each:
functional.suite.yml
class_name: FunctionalTester
modules:
  enabled:
    # add framework module here
    - Laravel5
    - Db
    - Dbh
    - Asserts
    - WebDriver:
        url: 'http://vagrant.test.com/'
        host: '127.0.0.1'
        #host: '192.168.56.102'
        browser: phantomjs
        window_size: 1024x768
        port: 4444
        window_size: 'maximize'
        clear_cookies: 1
        restart: 1
    - \Helper\Acceptance
  config:
    Db:
      dsn: 'mysql:host=192.168.56.102;dbname=wordpress'
      user: 'wordpress_user'
      password: 'wordpress_password'
      dump: 'tests/_data/dump.sql'
      populate: false
      cleanup: false
      reconnect: true
env:
  phantom:
    modules:
      config:
        WebDriver:
          browser: 'phantomjs'
  chrome:
    modules:
      config:
        WebDriver:
          browser: 'chrome'
acceptance.suite.yml
class_name: AcceptanceTester
modules:
  enabled:
    # add framework module here
    - Laravel5
    - Asserts
    - WebDriver:
    - \Helper\Acceptance
  config:
    Laravel5:
      cleanup: false
      environment: test
    WebDriver:
      browser: phantomjs
      window_size: 1024x768
      url: 'http://vagrant.test.com/'
    Db:
      dsn: 'mysql:host=localhost;dbname=wordpress'
      user: 'wordpress_user'
      password: 'wordpress_password'
      dump: tests/_data/test-dump.sql
      populate: true
      cleanup: false
env:
  phantom:
    modules:
      config:
        WebDriver:
          browser: 'phantomjs'
  chrome:
    modules:
      config:
        WebDriver:
          browser: 'chrome'
codeception.yml
actor: Tester
paths:
  tests: tests
  log: tests/_output
  data: tests/_data
  support: tests/_support
  envs: tests/_envs
settings:
  bootstrap: _bootstrap.php
  colors: true
  #randomise test order
  random: true
  memory_limit: 1024M
extensions:
  enabled:
    - Codeception\Extension\RunFailed
coverage:
  whitelist:
    include:
      - app/*
  remote: true
modules:
  enabled:
    - Laravel5
    - WebDriver
    - Db
  config:
    Db:
      dsn: 'mysql:host=vagrant.test.com;dbname=wordpress'
      user: 'wordpress_user'
      password: 'wordpress_password'
      dump: 'tests/_data/myDump.sql'
      populate: true
      cleanup: true
///////////////////////////////////////////////////
My PhantomJS starting command:
phantomjs --webdriver=4444
And this is my error:
[Codeception\Exception\ModuleException]
Db: SQLSTATE[HY000] [2002] Operation timed out while creating PDO connection
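One thing that stands out from the three files above is that the Db module points at a different MySQL host in each place (192.168.56.102, localhost, and vagrant.test.com). The SQLSTATE[HY000] [2002] timeout usually just means that the host/port in the DSN is not reachable from wherever codeception is being executed, so it is worth making all three agree on an address that works from that machine. A hedged sketch, assuming the tests run on the host and the Vagrant box exposes MySQL on 192.168.56.102 (adjust to whichever host actually answers):

modules:
  config:
    Db:
      dsn: 'mysql:host=192.168.56.102;dbname=wordpress'   # the same reachable host in every suite
      user: 'wordpress_user'
      password: 'wordpress_password'
      dump: 'tests/_data/dump.sql'
      populate: true
      cleanup: false
      reconnect: true

MySQL inside the box also has to listen on a non-localhost interface and the wordpress_user account must be allowed to connect from that address, otherwise the same timeout appears regardless of the DSN.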
