Adding requestTimeout causes Kibana to fail at startup - elasticsearch

I am experimenting with Elasticsearch-Kibana-Logstash for processing web log files.
Some of the queries take a bit of time and Kibana times out quickly, so I wanted to increase the amount of time that Kibana will wait for a response from elasticsearch. After a bit of searching, I found some suggestions to set elasticsearch.requestTimeout. I tried to increase the timeout by adding this to my kibana.yml file:
elasticsearch.requestTimeout: 5000
This causes Kibana to fail immediately on startup with this error:
kibserver_1 | FATAL { Error: Payload timeout must be shorter than socket timeout: POST /elasticsearch/{index}/_search
kibserver_1 | at Object.exports.assert (/usr/share/kibana/node_modules/hoek/lib/index.js:736:11)
kibserver_1 | at new module.exports.internals.Route (/usr/share/kibana/node_modules/hapi/lib/route.js:69:10)
kibserver_1 | at internals.Connection._addRoute (/usr/share/kibana/node_modules/hapi/lib/connection.js:387:19)
kibserver_1 | at internals.Connection._route (/usr/share/kibana/node_modules/hapi/lib/connection.js:379:18)
kibserver_1 | at internals.Plugin._apply (/usr/share/kibana/node_modules/hapi/lib/plugin.js:572:14)
kibserver_1 | at internals.Plugin.route (/usr/share/kibana/node_modules/hapi/lib/plugin.js:542:10)
kibserver_1 | at createProxy (/usr/share/kibana/src/core_plugins/elasticsearch/lib/create_proxy.js:85:14)
kibserver_1 | at ScopedPlugin.init [as externalInit] (/usr/share/kibana/src/core_plugins/elasticsearch/index.js:110:37)
kibserver_1 | at ScopedPlugin.tryCatcher (/usr/share/kibana/node_modules/bluebird/js/main/util.js:26:23)
kibserver_1 | at Promise.attempt.Promise.try (/usr/share/kibana/node_modules/bluebird/js/main/method.js:30:24)
kibserver_1 | at /usr/share/kibana/src/server/plugins/plugin.js:196:46
kibserver_1 | at next (native)
kibserver_1 | at step (/usr/share/kibana/src/server/plugins/plugin.js:25:191)
kibserver_1 | at /usr/share/kibana/src/server/plugins/plugin.js:25:361
kibserver_1 | cause:
kibserver_1 | Error: Payload timeout must be shorter than socket timeout: POST /elasticsearch/{index}/_search
kibserver_1 | at Object.exports.assert (/usr/share/kibana/node_modules/hoek/lib/index.js:736:11)
kibserver_1 | at new module.exports.internals.Route (/usr/share/kibana/node_modules/hapi/lib/route.js:69:10)
kibserver_1 | at internals.Connection._addRoute (/usr/share/kibana/node_modules/hapi/lib/connection.js:387:19)
kibserver_1 | at internals.Connection._route (/usr/share/kibana/node_modules/hapi/lib/connection.js:379:18)
kibserver_1 | at internals.Plugin._apply (/usr/share/kibana/node_modules/hapi/lib/plugin.js:572:14)
kibserver_1 | at internals.Plugin.route (/usr/share/kibana/node_modules/hapi/lib/plugin.js:542:10)
kibserver_1 | at createProxy (/usr/share/kibana/src/core_plugins/elasticsearch/lib/create_proxy.js:85:14)
kibserver_1 | at ScopedPlugin.init [as externalInit] (/usr/share/kibana/src/core_plugins/elasticsearch/index.js:110:37)
kibserver_1 | at ScopedPlugin.tryCatcher (/usr/share/kibana/node_modules/bluebird/js/main/util.js:26:23)
kibserver_1 | at Promise.attempt.Promise.try (/usr/share/kibana/node_modules/bluebird/js/main/method.js:30:24)
kibserver_1 | at /usr/share/kibana/src/server/plugins/plugin.js:196:46
kibserver_1 | at next (native)
kibserver_1 | at step (/usr/share/kibana/src/server/plugins/plugin.js:25:191)
kibserver_1 | at /usr/share/kibana/src/server/plugins/plugin.js:25:361,
kibserver_1 | isOperational: true }
This one baffles me. I can't seem to find any reference to a "payload timeout" in the ElasticSearch documentation. My web searches suggest this might be coming from hapijs, but I'm not sure how to resolve this. Does anyone out there know?
(Kibana, Elasticsearch, and Logstash are all v6.1.0.)

I think the problem is that you're setting the timeout to a value that's too small: it's in milliseconds, and the default is 30000 (see https://www.elastic.co/guide/en/kibana/6.1/settings.html):
elasticsearch.requestTimeout:
Default: 30000 Time in milliseconds to wait for responses from the back end or Elasticsearch. This value must be a positive integer.
What might be happening is that elasticsearch.requestTimeout is used to set the socket timeout in hapi.js, and since the default payload timeout seems to be 10 s (from here):
route.options.payload.timeout
Default value: 10000 (10 seconds).
The check that the payload timeout must be shorter than the socket timeout then fails. But that's just a hypothesis, and I haven't been able to find any proof of it in Kibana's code.
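If that hypothesis is right, the fix is simply to pass a value that really is larger than the defaults, keeping in mind the unit is milliseconds. A minimal kibana.yml sketch (60000 is just an illustrative value, not an official recommendation):
# kibana.yml
# anything above hapi's 10 s payload timeout (and above the 30 s default) avoids the startup assertion
elasticsearch.requestTimeout: 60000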

Related

How to get execution times of Azure Function steps?

I have an Azure Function set up as illustrated below. I need to understand the execution times for the Trigger, Function, and Output steps, because even after the function is "warm" the first request takes up to 7 seconds. After that the execution time drops to something like 100 ms.
So far I have switched the logging level in host.json to
"logging": {
"fileLoggingMode": "always",
"logLevel": {
"default": "Information",
"Host.Results": "Error",
"Function": "Trace",
"Host.Aggregator": "Trace"
}
}
and watched the simple telemetry in live logs:
8:48:07 AM | Trace Request successfully matched the route with name 'main' and template 'api/{*segments}'
8:48:06 AM | Trace Executing 'Functions.main' (Reason='This function was programmatically called via the host APIs.', Id=...)
That's pretty much all I see. Also, when opening the Application - Functions - Function - main log from Visual Studio, the log levels still have an [Information] preamble.
What I would like to see is basically a duration output like the one in the Monitor section (Functions - main in the web portal), but split by step. For example:
date | step | success | result code | duration (ms)
---------------------------------------------------------
.... | trigger | success | 200 | 39
.... | function | success | 200 | 32
.... | output | success | 200 | 37
How can I get the duration of the Trigger, Function, and Output steps for every execution?
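One thing that may help, although it will not split out the Trigger/Function/Output steps: if the Function App has Application Insights connected, per-invocation durations can be queried directly. A sketch only; the requests table and its columns are the standard Application Insights names, and the one-hour window is arbitrary:
requests
| where timestamp > ago(1h)
| project timestamp, name, success, resultCode, duration
| order by timestamp desc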

Kafka Connect MySQL "server time zone value 'EDT' is unrecognized"

I'm new to Kafka, but I had Debezium Connect/MySQL up and running in Docker just fine. All of a sudden all the Docker containers were gone, and upon restarting and attempting to reconnect to MySQL, the JDBC connection fails with this response from the Connect REST API:
Using ZOOKEEPER_CONNECT=0.0.0.0:2181
Using KAFKA_LISTENERS=PLAINTEXT://172.19.0.5:9092 and KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.19.0.5:9092
HTTP/1.1 400 Bad Request
Date: Fri, 30 Apr 2021 20:39:56 GMT
Content-Type: application/json
Content-Length:
Server: Jetty(9.4.33.v20201020)
{"error_code":400,"message":"Connector configuration is invalid
and contains the following 1 error(s): \nUnable to connect: The server time
zone value 'EDT' is unrecognized or represents more than one time zone. You
must configure either the server or JDBC driver (via the 'serverTimezone'
configuration property) to use a more specifc time zone value if you want
to utilize time zone support.\nYou can also find the above list of errors
at the endpoint `/connector-plugins/{connectorType}/config/validate`"}
in response to running this:
docker run -it --rm --net mynet_default debezium/kafka \
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
mynet_connect_1:8083/connectors/ \
-d '{
  "name": "my-connector-001",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "database.hostname": "my.domain.com",
    "database.port": "3306",
    "database.user": "myuser",
    "database.password": "mypassword",
    "database.server.id": "6400",
    "database.server.name": "dbserver001",
    "database.include.list": "mydb",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "dbhistory.metrics.connector004",
    "table.include.list": "mydb.users,mydb.applications"
  }
}'
This worked fine for a couple of hours. Then, as I was watching updates mostly straight from the Debezium tutorial, all the Kafka containers were gone, and ever since then (with the exact same config) it will no longer connect, citing the time zone issue. I can connect with the same credentials in the mysql client (via the Docker network), and the MySQL permissions and grants did not change. There's a Gitter mention from last July that this error is itself an erroneous indication of some other connection failure, and there are multiple reports of it being a bug in JDBC. Is there any other possibility besides someone having changed something on our database?
This is the Connect log:
connect_1 | 2021-04-30 20:28:47,254 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
connect_1 | 2021-04-30 20:39:56,996 ERROR || Failed testing connection for jdbc:mysql://my.domain.com:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'myuser' [io.debezium.connector.mysql.MySqlConnector]
connect_1 | java.sql.SQLException: The server time zone value 'EDT' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specifc time zone value if you want to utilize time zone support.
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:73)
connect_1 | at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:76)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
connect_1 | at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197)
connect_1 | at io.debezium.jdbc.JdbcConnection.lambda$patternBasedFactory$1(JdbcConnection.java:231)
connect_1 | at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:872)
connect_1 | at io.debezium.connector.mysql.MySqlConnection.connection(MySqlConnection.java:79)
connect_1 | at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:867)
connect_1 | at io.debezium.jdbc.JdbcConnection.connect(JdbcConnection.java:413)
connect_1 | at io.debezium.connector.mysql.MySqlConnector.validateConnection(MySqlConnector.java:98)
connect_1 | at io.debezium.connector.common.RelationalBaseSourceConnector.validate(RelationalBaseSourceConnector.java:52)
connect_1 | at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:375)
connect_1 | at org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$1(AbstractHerder.java:326)
connect_1 | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
connect_1 | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
connect_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
connect_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
connect_1 | at java.base/java.lang.Thread.run(Thread.java:834)
connect_1 | Caused by: com.mysql.cj.exceptions.InvalidConnectionAttributeException: The server time zone value 'EDT' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specifc time zone value if you want to utilize time zone support.
connect_1 | at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
connect_1 | at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
connect_1 | at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
connect_1 | at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
connect_1 | at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
connect_1 | at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:85)
connect_1 | at com.mysql.cj.util.TimeUtil.getCanonicalTimezone(TimeUtil.java:132)
connect_1 | at com.mysql.cj.protocol.a.NativeProtocol.configureTimezone(NativeProtocol.java:2120)
connect_1 | at com.mysql.cj.protocol.a.NativeProtocol.initServerSession(NativeProtocol.java:2143)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:1310)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:967)
connect_1 | at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:826)
connect_1 | ... 17 more
connect_1 | 2021-04-30 20:39:56,998 INFO || AbstractConfig values:
connect_1 | [org.apache.kafka.common.config.AbstractConfig]
Is there a parameter I could pass in via the Connect REST API to specify the time zone in the JDBC string (shown near the top of this log)? I'm using the Debezium (1.5) Docker images per this tutorial.
I think EDT is not in the mysql.time_zone_name table; you can check with:
SELECT * FROM mysql.time_zone_name;
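If the named time zones really are missing on the MySQL server, one way to load them is to populate the time zone tables from the system zoneinfo database. A sketch only, assuming shell access to the MySQL host and zoneinfo files in the usual /usr/share/zoneinfo location:
# run on the MySQL host; loads mysql.time_zone_name and friends
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql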
Adding the configuration "database.serverTimezone" solves this issue on my end, e.g.
"config": {
...
"database.serverTimezone": "America/Los_Angeles",
...
}
In my case, using Kafka Connect, I had to modify the connector config's MySQL connection string (the connection.url property)
like this:
jdbc:mysql://<server ip or dns>:<port>?serverTimezone=GMT%2b8:00&<more parameters>
Note: %2b is the + character.

Weird output of pg_stat_activity

I'm having trouble with the output of this simple query:
select
pid,
state
from pg_stat_activity
where datname = 'My_DB_name'
while running it in different ways:
1. In an IDE
2. Via psql in a terminal
3. In a bash script:
QUERY="copy (select pid, state from pg_stat_activity where datname = 'My_DB_name') to stdout with csv"
psql -h host -U user -d database -t -c "$QUERY" >> result
1 and 2 return results as I need them:
1:
pid state
------ -----------------------------
23126 idle
25573 active
2642 active
20420 idle
23391 idle
5339 idle
7710 idle
1558 idle
12506 idle
2862 active
716 active
9834 idle in transaction (aborted)
2:
pid | state
-------+-------------------------------
23126 | idle
25573 | idle
2642 | active
20420 | idle
23391 | idle
5339 | active
7710 | idle
1558 | idle
12506 | idle
2211 | active
716 | active
9834 | idle in transaction (aborted)
3 is weird: it doesn't give me any state name except 'active':
23126,
25573,
2642,
20420,
23391,
5339,
7710,
1558,
12506,
1660,active
716,active
1927,active
9834,
What am I missing? How can I get all the state names via the bash script?
pg_stat_activity is a catalog view that shows different content depending on whether you're logged in as a superuser or as a non-privileged user.
From your output, it looks like you're logged in as superuser in #1 and #2, but as a normal user in #3.
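A quick way to confirm that, and (on PostgreSQL 10 or later) to give the script's role visibility into other sessions without making it a superuser, is something along these lines; 'script_user' is a placeholder for whatever role the bash script connects as:
-- check whether the role the script connects as is a superuser
SELECT usename, usesuper FROM pg_user WHERE usename = current_user;
-- PostgreSQL 10+: allow a non-superuser to see other sessions' state in pg_stat_activity
GRANT pg_read_all_stats TO script_user;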

Floating ip pool not found

I'm trying to use vagrant-openstack-provider to manage Bluemix VMs.
All is looking good, except for an error message at the end: Floating ip pool not found.
2015-09-27 11:17 | DEBUG | request => method : POST
2015-09-27 11:17 | DEBUG | request => url : https://api2-dal09.open.ibmcloud.com:8774/v2/.../os-floating-ips
2015-09-27 11:17 | DEBUG | request => headers : {"X-Auth-Token"=>"...", :accept=>:json, :content_type=>:json}
2015-09-27 11:17 | DEBUG | request => body : {"pool":"private"}
2015-09-27 11:17 | DEBUG | response => code : 404
2015-09-27 11:17 | DEBUG | response => headers : {:content_length=>"73", :content_type=>"application/json; charset=UTF-8", :x_compute_request_id=>"...", :date=>"Sun, 27 Sep 2015 10:17:30 GMT"}
2015-09-27 11:17 | DEBUG | response => body : {"itemNotFound": {"message": "Floating ip pool not found.", "code": 404}}
2015-09-27 11:17 | WARN | Error allocating ip in pool private : Floating ip pool not found.
2015-09-27 11:17 | WARN | Impossible to allocate a new IP
ERROR warden: Error occurred: Floating ip pool not found.
Is it possible to create an IP pool in the horizon console? If so, how do I do this? I couldn't find any documentation online.
I specified 'private' in the Vagrantfile:
os.floating_ip_pool = 'private'
I should have been using 'Public-Network' instead:
os.floating_ip_pool = 'Public-Network'
I didn't realise at the time, but you can find the floating_ip_pool with:
snowch$ vagrant openstack floatingip-list
+-------------------+
| Floating IP pools |
+-------------------+
| Public-Network |
+-------------------+
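For reference, the setting lives in the provider block of the Vagrantfile. A minimal sketch; the auth URL and credentials below are placeholders, not values taken from the question:
# Vagrantfile (vagrant-openstack-provider)
Vagrant.configure('2') do |config|
  config.vm.provider :openstack do |os|
    os.openstack_auth_url = 'https://example.com:5000/v2.0/tokens' # placeholder
    os.username           = 'my-user'                              # placeholder
    os.password           = 'my-password'                          # placeholder
    os.tenant_name        = 'my-tenant'                            # placeholder
    os.floating_ip_pool   = 'Public-Network'
  end
end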

SeleniumRC Script error - selenium-browserbot.js

Problem:
In our application, we have the same locators (id) for the submit images (link + form submit) in the popup and the main window. The main window's submit image executes two JavaScript functions on its click event. When I try this through Selenium, I get the following Selenium error and the submit action does not happen. I then can't proceed, even if I click manually in the browser that Selenium opened. However, this scenario works fine if I do everything manually end to end.
I tried FormSubmit and ClickAt with no luck. Any trick/solution?
I found a similar thread without any solution: Selenium RC - selenium-browserbot.js error (http://stackoverflow.com/questions/2380543/selenium-rc-selenium-browserbot-js-error)
Environment:
Browser: IE8
Java: Sun Microsystems Inc. 16.0-b13
OS: Windows XP 5.1 x86
Selenium RC- selenium-server-1.0.3
Selenium Error Message
An error has occurred in the script on this page
Line: 2120
Char: 9
Error: Permission denied
Code: 0
URL: file:///C:/DOCUME~1/script1/LOCALS~1/Temp/customProfileDira540839f44a5460e8f29cdcb8f3632a7/core/scripts/selenium-browserbot.js
Do you want to continue running scripts on this page?
HTML Source
<A onmouseover="imgOn('cmdSubmitbutton', 'submit');" onmouseout="imgOff('cmdSubmitbutton', 'submit');" onclick="return verifyAttachmentJS();submitOrder();" href="javascript:void(0);"><IMG title="Submit this form." border=0 name=cmdSubmitbutton src="/webtop/images/buttons/submit_off.gif"> </A>
Selenium RC command history
type(desc1, asdasdas)
click(cmdAttachbutton)
click(cmdSubmitbutton)
click(cmdFinishbutton)
selectWindow(null)
type(ORD_TrackingNbr, 6666)
isElementPresent(cmdSubmitbutton)
clickAt(cmdSubmitbutton, 20,8)
Selenium Log Console
info(1331131785603): Executing: |type | findCnum | ABCDP60 |
info(1331131789088): Executing: |click | gobutton | |
error(1331131792666): Caught an exception attempting to log location; this should get noticed soon!
error(1331131792666): Unexpected Exception: Permission denied
error(1331131792666): Exception details: name -> Error, number -> -2146828218, description -> Permission denied, message -> Permission denied
info(1331131795494): Executing: |selectFrame | relative=up | |
info(1331131796275): Executing: |selectFrame | relative=up | |
info(1331131797056): Executing: |getLocation | | |
info(1331131797494): Executing: |click | xpath=//a[@id="ABCDP60" or @name="ABCDP60" or @href="ABCDP60" or normalize-space(descendant-or-self::text())="ABCDP60" or @href="http://172.18.70.63/webtop/ABCDP60"] | |
error(1331131806338): Caught an exception attempting to log location; this should get noticed soon!
error(1331131806338): Unexpected Exception: Permission denied
error(1331131806338): Exception details: name -> Error, number -> -2146828218, description -> Permission denied, message -> Permission denied
info(1331131808056): Executing: |click | //form[@id='form1']/table/tbody/tr/td[6]/a[4]/font/b | |
info(1331131815478): Executing: |click | cmdZipbutton | |
error(1331131815712): Caught an exception attempting to log location; this should get noticed soon!
error(1331131815712): Unexpected Exception: Permission denied
error(1331131815712): Exception details: name -> Error, number -> -2146828218, description -> Permission denied, message -> Permission denied
info(1331131835384): Executing: |selectWindow | name=attachdoc | |
info(1331131865634): Executing: |type | desc1 | asdasdas |
info(1331131876431): Executing: |click | cmdAttachbutton | |
info(1331131882290): Executing: |click | cmdSubmitbutton | |
info(1331131888290): Executing: |click | cmdFinishbutton | |
info(1331131894196): Executing: |selectWindow | null | |
info(1331131894306): Executing: |type | ORD_TrackingNbr | 6666 |
info(1331131898399): Executing: |isElementPresent | cmdSubmitbutton | |
info(1331131902290): Executing: |clickAt | cmdSubmitbutton | 20,8 |
info(1331132070023): Done appending missed logging messages
error(1331132070023): Caught an exception attempting to log location; this should get noticed soon!
error(1331132070023): Unexpected Exception: Permission denied
error(1331132070039): Exception details: name -> Error, number -> -2146828218, description -> Permission denied, message -> Permission denied
error(1331132115492): Caught an exception attempting to log location; this should get noticed soon!
error(1331132115492): Unexpected Exception: Permission denied
error(1331132115492): Exception details: name -> Error, number -> -2146828218, description -> Permission denied, message -> Permission denied
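For what it's worth, one generic Selenium RC workaround when a native click on a locator keeps failing is to invoke the page's own handlers through the browserbot. This is only a sketch, not a confirmed fix for the IE "Permission denied" error; the function names come from the posted HTML and the window handling mirrors the command history above:
// Java, Selenium RC (DefaultSelenium)
selenium.selectWindow(null); // back to the main application window, as in the log
// run the same handlers the anchor's onclick would run, inside the application window
selenium.getEval("var w = selenium.browserbot.getCurrentWindow(); w.verifyAttachmentJS(); w.submitOrder();");
selenium.waitForPageToLoad("30000");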
