Send an email when the application shuts down - Quarkus

I need to send an email when the application shuts down.
I have created the following method:
void onStop(@Observes ShutdownEvent event) throws ParserConfigurationException {
    mailer.send(Mail.withText("test@email.com", "STOP TEST", "HERE IS STOP A TEST"));
    Quarkus.waitForExit();
}
But I always receive the following error:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
    at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
    at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
    at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368)
    at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
    ... 32 more
I am quite sure that all the connections are already closed by the time I try to send the email. This is very similar to the question asked here -> How to accept http requests after shutdown signal in Quarkus?, but it is not clear to me how to implement the solution proposed there.
Any help is greatly appreciated.
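
For reference, here is a minimal sketch of how such a shutdown observer is usually structured as a standalone bean, assuming the quarkus-mailer extension is configured and reusing the question's placeholder address (on pre-3.0 Quarkus the CDI annotations live under javax rather than jakarta). Note that Quarkus.waitForExit() is meant to be called from a @QuarkusMain main method, so the sketch leaves it out of the observer:

import io.quarkus.mailer.Mail;
import io.quarkus.mailer.Mailer;
import io.quarkus.runtime.ShutdownEvent;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

@ApplicationScoped
public class ShutdownMailNotifier {

    @Inject
    Mailer mailer; // the blocking mailer: send() returns only after the mail is handed off

    // Observer runs during ordered shutdown, while CDI beans are still usable.
    // Quarkus.waitForExit() is deliberately not called here; it belongs in a
    // @QuarkusMain main method and can stall shutdown if invoked from an observer.
    void onStop(@Observes ShutdownEvent event) {
        mailer.send(Mail.withText("test@email.com", "STOP TEST", "HERE IS STOP A TEST"));
    }
}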

Related

ElasticSearch MasterNotDiscoveredException

I've set up an Elasticsearch cluster in Kubernetes, but I'm getting the error "MasterNotDiscoveredException". I'm not really sure where to even begin debugging this error, as there does not appear to be anything really useful in the logs of any of the nodes:
│ elasticsearch {"type": "server", "timestamp": "2022-06-15T00:44:17,226Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "logging-ek", "node.name": "logging-ek-es-master-0", "message": "path: /_bulk, params: {}", │
│ elasticsearch "stacktrace": ["org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized, SERVICE_UNAVAILABLE/2/no master];", │
│ elasticsearch "at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:179) ~[elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.handleBlockExceptions(TransportBulkAction.java:635) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:481) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$2.onTimeout(TransportBulkAction.java:669) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:660) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]", │
│ elasticsearch "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]", │
│ elasticsearch "at java.lang.Thread.run(Thread.java:833) [?:?]", │
│ elasticsearch "Suppressed: org.elasticsearch.discovery.MasterNotDiscoveredException", │
│ elasticsearch "\tat org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.onTimeout(TransportMasterNodeAction.java:297) ~[elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "\tat org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "\tat org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "\tat org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:660) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718) [elasticsearch-7.17.1.jar:7.17.1]", │
│ elasticsearch "\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]", │
│ elasticsearch "\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]", │
│ elasticsearch "\tat java.lang.Thread.run(Thread.java:833) [?:?]"] }
That is pretty much the only log output I've ever seen.
It does appear that the cluster sees all of my master nodes:
elasticsearch {"type": "server", "timestamp": "2022-06-15T00:45:41,915Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "logging-ek", "node.name": "logging-ek-es-master-0", "message": "master not discovered yet, this node has not previously joine │
│ d a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered [{logging-ek-es-master-0}{fHLQrvLsTJ6UvR_clSaxfg}{iLoGrnWSTpiZxq59z7I5zA}{10.42.64.4}{10.42.64.4:9300}{mr}, {logging-ek-es-master-1}{EwF8WLIgSF6Q1Q46_51VlA}{wz5rg74iThicJdtzXZg29g}{ │
│ 10.42.240.8}{10.42.240.8:9300}{mr}, {logging-ek-es-master-2}{jtrThk_USA2jUcJYoIHQdg}{HMvZ_dUfTM-Ar4ROeIOJlw}{10.42.0.5}{10.42.0.5:9300}{mr}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]: │
│ 9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305, 10.42.0.5:9300, 10.42.240.8:9300] from hosts providers and [{logging-ek-es-master-0}{fHLQrvLsTJ6UvR_clSaxfg}{iLoGrnWSTpiZxq59z7I5zA}{10.42.64.4}{10.42.64.4:9300}{mr}] from last-known cluster state; node term 0, last-accepted version │
│ 0 in term 0" }
and I've verified that they can in fact reach each other over the network. Is there anything else, or anywhere else, I need to look for errors? I installed Elasticsearch via the Elastic operator.
Start by running curl against port 9300 on each node. Make sure you get a valid response going both ways.
Also make sure your MTU is set correctly, or this can happen as well. It's a network problem most of the time.
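
For example, a quick connectivity check between two of the master pods might look like this (pod names and IPs are taken from the logs above; everything else is a placeholder, and it assumes curl is available in the image). Note the log also says [cluster.initial_master_nodes] is empty on this node, which is worth ruling out alongside the network:

# Hypothetical check: exec into one master pod and probe another's transport port.
kubectl exec logging-ek-es-master-0 -- curl -s --max-time 5 10.42.240.8:9300
# A reachable transport port answers immediately (with ECK's transport TLS you may
# get a handshake error instead of the "This is not an HTTP port" banner); a hang
# or "connection refused" points at the network. Repeat in the opposite direction.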

Liferay DXP 7.3 Theme Creation: Error during gulp build

I'm getting this error when I try to use the gulp build command:
[09:08:11] Starting 'build:compile-css'...
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($spacer, 2) or calc($spacer / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
306 │ $headings-margin-bottom: $spacer / 2 !default;
│ ^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 306:31 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($input-padding-y, 2) or calc($input-padding-y / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
501 │ $input-height-inner-quarter: add($input-line-height * .25em, $input-padding-y / 2) !default;
│ ^^^^^^^^^^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 501:73 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($custom-control-indicator-size, 2) or calc($custom-control-indicator-size / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
571 │ $custom-switch-indicator-border-radius: $custom-control-indicator-size / 2 !default;
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 571:49 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($spacer, 2) or calc($spacer / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
717 │ $nav-divider-margin-y: $spacer / 2 !default;
│ ^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 717:37 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
Deprecation Warning: Using / for division outside of calc() is deprecated and will be removed in Dart Sass 2.0.0.
Recommendation: math.div($spacer, 2) or calc($spacer / 2)
More info and automated migrator: https://sass-lang.com/d/slash-div
╷
722 │ $navbar-padding-y: $spacer / 2 !default;
│ ^^^^^^^^^^^
╵
build\_css\clay\bootstrap\_variables.scss 722:37 @import
build\_css\clay\base.scss 10:9 @import
build\_css\clay.scss 1:9 root stylesheet
[09:08:14] 'build:compile-css' errored after 3.14 s
[09:08:14] Error in plugin "sass"
Message:
build\_css\compat\components\_dropdowns.scss
Error: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\compat\components\_dropdowns.scss 34:11 root stylesheet
Details:
formatted: Error: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\compat\components\_dropdowns.scss 34:11 root stylesheet
line: 34
column: 11
file: C:\Users\fmateosg\IdeaProjects\test\themes\base-theme\build\_css\compat\components\_dropdowns.scss
status: 1
messageFormatted: build\_css\compat\components\_dropdowns.scss
Error: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\compat\components\_dropdowns.scss 34:11 root stylesheet
messageOriginal: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build\_css\compat\components\_dropdowns.scss 34:11 root stylesheet
relativePath: build\_css\compat\components\_dropdowns.scss
domainEmitter: [object Object]
domainThrown: false
[09:08:14] 'build' errored after 6.36 s
I know that there is a similar question to this one, but the answers there couldn't solve my problem. This is my folder structure:
[screenshot: folder structure]
I copied the _dropdowns.scss file into src/css/compat/components/ and made the modification there, but it still gives me the error when I retry the build.
I had the same problem because I accidentally upgraded liferay-theme-tasks in package.json to version 11.2.2.
If that's the case, downgrade liferay-theme-tasks to version ^10.0.2, remove the node_modules folder, and run npm install again. gulp build should pass after that.
I'm using Node.js version 14.17.0, gulp version 4.0.2
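
Concretely, the relevant package.json entry would end up looking like this (a sketch; the rest of the file is omitted):

{
  "devDependencies": {
    "liferay-theme-tasks": "^10.0.2"
  }
}

followed by a clean reinstall (deleting package-lock.json as well is my addition here; it just keeps the lock file from pinning the old version):

rm -rf node_modules package-lock.json
npm install
gulp build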

Create table on cluster of clickhouse error

When I create table as follows:
CREATE TABLE partition_v3_cluster ON CLUSTER perftest_3shards_3replicas (
    ID String,
    URL String,
    EventTime Date
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(EventTime)
ORDER BY ID;
I get errors:
Query id: fe98c8b6-16af-44a1-b8c9-2bf10d9866ea
┌─host────────┬─port─┬─status─┬─error──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 10.18.1.131 │ 9000 │    371 │ Code: 371, e.displayText() = DB::Exception: There are two exactly the same ClickHouse instances 10.18.1.131:9000 in cluster perftest_3shards_3replicas (version 21.6.3.14 (official build)) │                   2 │                0 │
│ 10.18.1.133 │ 9000 │    371 │ Code: 371, e.displayText() = DB::Exception: There are two exactly the same ClickHouse instances 10.18.1.133:9000 in cluster perftest_3shards_3replicas (version 21.6.3.14 (official build)) │                   1 │                0 │
│ 10.18.1.132 │ 9000 │    371 │ Code: 371, e.displayText() = DB::Exception: There are two exactly the same ClickHouse instances 10.18.1.132:9000 in cluster perftest_3shards_3replicas (version 21.6.3.14 (official build)) │                   0 │                0 │
└─────────────┴──────┴────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────┴──────────────────┘

3 rows in set. Elapsed: 0.149 sec.

Received exception from server (version 21.6.3):
Code: 371. DB::Exception: Received from localhost:9000. DB::Exception: There was an error on [10.18.1.131:9000]: Code: 371, e.displayText() = DB::Exception: There are two exactly the same ClickHouse instances 10.18.1.131:9000 in cluster perftest_3shards_3replicas (version 21.6.3.14 (official build)).
And here is my metrika.xml:
<?xml version="1.0" encoding="utf-8"?>
<yandex>
    <remote_servers>
        <perftest_3shards_3replicas>
            <shard>
                <replica>
                    <host>10.18.1.131</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.132</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.133</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>10.18.1.131</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.132</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.133</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>10.18.1.131</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.132</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>10.18.1.133</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_3shards_3replicas>
    </remote_servers>
    <zookeeper>
        <node>
            <host>10.18.1.131</host>
            <port>2181</port>
        </node>
        <node>
            <host>10.18.1.132</host>
            <port>2181</port>
        </node>
        <node>
            <host>10.18.1.133</host>
            <port>2181</port>
        </node>
    </zookeeper>
    <macros>
        <shard>01</shard>
        <replica>01</replica>
    </macros>
</yandex>
I don't know what is wrong; can someone help me? Thank you.
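
The error text is literal: each of the three host:port pairs appears in all three shards, and ClickHouse refuses a cluster definition in which the same instance occurs more than once. A 3-shard/3-replica layout therefore needs nine distinct host:port endpoints. A sketch of one common way to do that on three machines, assuming three ClickHouse instances per host listening on ports 9000/9001/9002 (the extra ports are hypothetical), laid out circularly so every endpoint appears exactly once:

<remote_servers>
    <perftest_3shards_3replicas>
        <shard>
            <replica><host>10.18.1.131</host><port>9000</port></replica>
            <replica><host>10.18.1.132</host><port>9001</port></replica>
            <replica><host>10.18.1.133</host><port>9002</port></replica>
        </shard>
        <shard>
            <replica><host>10.18.1.132</host><port>9000</port></replica>
            <replica><host>10.18.1.133</host><port>9001</port></replica>
            <replica><host>10.18.1.131</host><port>9002</port></replica>
        </shard>
        <shard>
            <replica><host>10.18.1.133</host><port>9000</port></replica>
            <replica><host>10.18.1.131</host><port>9001</port></replica>
            <replica><host>10.18.1.132</host><port>9002</port></replica>
        </shard>
    </perftest_3shards_3replicas>
</remote_servers>

If replication is not actually needed, the simpler fix is one replica per shard, so each host appears exactly once on its default port.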

Empty karma html report

Here are the facts:
I have a nodejs app
I've written some Jasmine tests
I'm using karma to generate the code coverage report
Here is the karma.conf.js:
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine', 'browserify'],
    files: [
      'utils/dates.js',
      'utils/maths.js',
      'spec/server_utils_spec.js'
    ],
    preprocessors: {
      'utils/*.js': ['coverage', 'browserify'],
      'spec/server_utils_spec.js': ['coverage', 'browserify']
    },
    reporters: ['progress', 'coverage'],
    coverageReporter: {
      type: 'html',
      dir: 'coverage/'
    },
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    singleRun: false
  })
}
These are my packages (I know they are global):
→ npm -g list | grep -i "karma"
├─┬ karma@2.0.4
├─┬ karma-browserify@5.3.0
├─┬ karma-coverage@1.1.2
├─┬ karma-html-reporter@0.2.7
├── karma-jasmine@1.1.2
├─┬ karma-mocha@1.3.0
and despite the terminal log reporting success:
/usr/bin/karma start server/karma.conf.js
23 07 2018 15:39:36.033:INFO [framework.browserify]: registering rebuild (autoWatch=true)
23 07 2018 15:39:36.817:INFO [framework.browserify]: 3073 bytes written (0.03 seconds)
23 07 2018 15:39:36.817:INFO [framework.browserify]: bundle built
23 07 2018 15:39:36.818:WARN [karma]: No captured browser, open http://localhost:9876/
23 07 2018 15:39:36.822:INFO [karma]: Karma v2.0.4 server started at http://0.0.0.0:9876/
23 07 2018 15:39:39.135:INFO [Chrome 67.0.3396 (Linux 0.0.0)]: Connected on socket H9J96y-fFn-7PfkHAAAA with id manual-9692
Chrome 67.0.3396 (Linux 0.0.0): Executed 0 of 2 SUCCESS (0 secs / 0 secs)
Chrome 67.0.3396 (Linux 0.0.0): Executed 1 of 2 SUCCESS (0 secs / 0.001 secs)
Chrome 67.0.3396 (Linux 0.0.0): Executed 2 of 2 SUCCESS (0 secs / 0.002 secs)
Chrome 67.0.3396 (Linux 0.0.0): Executed 2 of 2 SUCCESS (0.033 secs / 0.002 secs)
TOTAL: 2 SUCCESS
I get this result when I open the generated HTML report:
[screenshot: empty coverage report]
I've tried the solutions to all the other related questions and I've come up with nothing.
My questions:
why is the report empty?
how can I get the report to be generated?
Thank you in advance!
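
A commonly suggested configuration for this symptom, sketched here without being verified against this exact setup: when karma-browserify bundles the files, the browser executes the bundle rather than the copies the 'coverage' preprocessor instrumented, so the report comes out empty. Instrumenting inside the browserify pipeline with the browserify-istanbul transform (npm install --save-dev browserify-istanbul) avoids that:

module.exports = function (config) {
  config.set({
    frameworks: ['jasmine', 'browserify'],
    files: [
      'utils/dates.js',
      'utils/maths.js',
      'spec/server_utils_spec.js'
    ],
    preprocessors: {
      // browserify only -- coverage happens via the transform below
      'utils/*.js': ['browserify'],
      'spec/server_utils_spec.js': ['browserify']
    },
    browserify: {
      debug: true,
      // instrument sources during bundling; skip the spec files themselves
      transform: [['browserify-istanbul', { ignore: ['**/spec/**'] }]]
    },
    // the coverage reporter still collects the results from the instrumented bundle
    reporters: ['progress', 'coverage'],
    coverageReporter: {
      type: 'html',
      dir: 'coverage/'
    },
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    singleRun: false
  })
}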

java.io.IOException: Job status not available about hive

When I use Hive with select * from table_name;, it works.
When I use select t.a from table_name t or select * from table_name where ..., the following error happens:
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1414949555870_118360, Tracking URL = N/A
Kill Command = /usr/local/hadoop-2.5.1/bin/hadoop job -kill job_1414949555870_118360
java.io.IOException: Job status not available
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:322)
at org.apache.hadoop.mapreduce.Job.getJobState(Job.java:347)
at org.apache.hadoop.mapred.JobClient$NetworkedJob.getJobState(JobClient.java:295)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.isJobPreparing(HadoopShimsSecure.java:104)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:242)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:541)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:431)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1485)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1263)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:921)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Ended Job = job_1414949555870_118360 with exception java.io.IOException(Job status not available )
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
So, what's wrong with Hive?
DEBUG info follows; it may be useful. Please help me!
15/04/14 16:53:48 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@60cb201b
15/04/14 16:53:48 DEBUG mapred.ClientServiceDelegate: Failed to contact AM/History for job job_1414949555870_118441 retrying..
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "slave109":43759; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
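
The UnknownHostException in the debug output suggests the machine running the Hive CLI cannot resolve the worker hostname "slave109" (and "local host is: (unknown)" hints at a local hostname problem too), so the client never reaches the MapReduce ApplicationMaster/history server to fetch the job status. A sketch of the usual fix, with a placeholder IP: add the missing hostname to /etc/hosts on the client machine, or fix DNS.

# /etc/hosts on the Hive client -- the IP below is a placeholder for slave109's real address
10.0.0.109   slave109

The Hadoop wiki page cited in the error message (http://wiki.apache.org/hadoop/UnknownHost) covers the other resolution pitfalls.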
