Why does CentOS restart my service (Spring Boot, nginx)?

This is my service file (runapp.service) for starting the app after a system restart:
[Unit]
Description=Spring Boot Oulinaart
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/java -jar /var/www/oulina/data/newproject-1.0-SNAPSHOT.jar
Restart=on-failure
RestartSec=5s
TimeoutStartSec=0
[Install]
WantedBy=default.target
I start my service with the following commands:
systemctl daemon-reload
systemctl enable runapp.service
systemctl start runapp.service
Part of the output of journalctl --since "10 min ago":
May 08 13:39:11 194-58-118-141.ovz.vps.regruhosting.ru sshd[30203]: Invalid user berta from 106.12.57.47 port 55164
May 08 13:39:11 194-58-118-141.ovz.vps.regruhosting.ru sshd[30203]: input_userauth_request: invalid user berta [preauth]
May 08 13:39:11 194-58-118-141.ovz.vps.regruhosting.ru sshd[30203]: pam_unix(sshd:auth): check pass; user unknown
May 08 13:39:11 194-58-118-141.ovz.vps.regruhosting.ru sshd[30203]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10
May 08 13:39:13 194-58-118-141.ovz.vps.regruhosting.ru sshd[30203]: Failed password for invalid user berta from 106.12.57.47 port 55164 ssh2
May 08 13:39:13 194-58-118-141.ovz.vps.regruhosting.ru sshd[30210]: Invalid user vendeg from 223.247.153.131 port 35690
May 08 13:39:13 194-58-118-141.ovz.vps.regruhosting.ru sshd[30210]: input_userauth_request: invalid user vendeg [preauth]
May 08 13:39:13 194-58-118-141.ovz.vps.regruhosting.ru sshd[30210]: pam_unix(sshd:auth): check pass; user unknown
May 08 13:39:13 194-58-118-141.ovz.vps.regruhosting.ru sshd[30210]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=22
May 08 13:39:15 194-58-118-141.ovz.vps.regruhosting.ru sshd[30210]: Failed password for invalid user vendeg from 223.247.153.131 port 35690 ssh2
May 08 13:39:15 194-58-118-141.ovz.vps.regruhosting.ru sshd[30210]: Received disconnect from 223.247.153.131 port 35690:11: Bye Bye [preauth]
May 08 13:39:15 194-58-118-141.ovz.vps.regruhosting.ru sshd[30210]: Disconnected from 223.247.153.131 port 35690 [preauth]
May 08 13:39:42 194-58-118-141.ovz.vps.regruhosting.ru sshd[30238]: Invalid user ubuntu from 62.234.120.192 port 56178
May 08 13:39:42 194-58-118-141.ovz.vps.regruhosting.ru sshd[30238]: input_userauth_request: invalid user ubuntu [preauth]
May 08 13:39:42 194-58-118-141.ovz.vps.regruhosting.ru sshd[30238]: pam_unix(sshd:auth): check pass; user unknown
May 08 13:39:42 194-58-118-141.ovz.vps.regruhosting.ru sshd[30238]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=62
May 08 13:39:44 194-58-118-141.ovz.vps.regruhosting.ru sshd[30238]: Failed password for invalid user ubuntu from 62.234.120.192 port 56178 ssh2
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Started Session 1307662 of user root.
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Started Session 1307663 of user root.
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Started Session 1307665 of user root.
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru CROND[30255]: (root) CMD (/usr/local/mgr5/sbin/cron-ispmgr sbin/mgrctl -m ispmgr problems.autosolve >/d
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru CROND[30256]: (root) CMD (/usr/local/mgr5/sbin/cron-ispmgr sbin/mgrctl -m ispmgr periodic >/dev/null 2>
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru CROND[30257]: (root) CMD (/usr/local/mgr5/sbin/cron-core sbin/mgrctl -m core sysinfostat >/dev/null 2>&
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Started Session 1307666 of user root.
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Started Session 1307667 of user root.
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru CROND[30258]: (root) CMD ( /opt/php71/bin/php -c /usr/local/mgr5/addon/revisium_antivirus/php.ini /us
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru CROND[30260]: (root) CMD (/usr/local/mgr5/sbin/cron-core sbin/mgrctl -m core problems.autosolve >/dev/n
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Started Session 1307675 of user root.
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru CROND[30263]: (root) CMD ( /opt/php71/bin/php -c /usr/local/mgr5/addon/revisium_antivirus/php.ini /us
May 08 13:40:01 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: runapp.service: main process exited, code=killed, status=9/KILL
May 08 13:40:02 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Unit runapp.service entered failed state.
May 08 13:40:02 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: runapp.service failed.
May 08 13:40:07 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: runapp.service holdoff time over, scheduling restart.
May 08 13:40:07 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Stopped Spring Boot Oulinaart.
May 08 13:40:07 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Started Spring Boot Oulinaart.
May 08 13:40:07 194-58-118-141.ovz.vps.regruhosting.ru sshd[30287]: Invalid user kiki from 133.242.155.85 port 57262
May 08 13:40:07 194-58-118-141.ovz.vps.regruhosting.ru sshd[30287]: input_userauth_request: invalid user kiki [preauth]
May 08 13:40:07 194-58-118-141.ovz.vps.regruhosting.ru sshd[30287]: pam_unix(sshd:auth): check pass; user unknown
May 08 13:40:07 194-58-118-141.ovz.vps.regruhosting.ru sshd[30287]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=ww
May 08 13:40:09 194-58-118-141.ovz.vps.regruhosting.ru sshd[30306]: Authentication refused: bad ownership or modes for directory /var/www/oulina/data
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru sshd[30287]: Failed password for invalid user kiki from 133.242.155.85 port 57262 ssh2
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: . ____ _ __ _ _
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: \\/ ___)| |_)| | | | | || (_| | ) ) ) )
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: ' |____| .__|_| |_|_| |_\__, | / / / /
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: =========|_|==============|___/=/_/_/_/
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: :: Spring Boot :: (v2.2.2.RELEASE)
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:10.672 INFO 30289 --- [ main] c.o.c.ServingWebContentApplicati
May 08 13:40:10 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:10.679 INFO 30289 --- [ main] c.o.c.ServingWebContentApplicati
May 08 13:40:11 194-58-118-141.ovz.vps.regruhosting.ru sshd[30306]: Accepted password for oulina from 37.110.124.162 port 65512 ssh2
May 08 13:40:11 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Created slice User Slice of oulina.
May 08 13:40:11 194-58-118-141.ovz.vps.regruhosting.ru systemd-logind[154]: New session 1308145 of user oulina.
May 08 13:40:11 194-58-118-141.ovz.vps.regruhosting.ru systemd[1]: Started Session 1308145 of user oulina.
May 08 13:40:11 194-58-118-141.ovz.vps.regruhosting.ru sshd[30306]: pam_unix(sshd:session): session opened for user oulina by (uid=0)
May 08 13:40:13 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:13.973 INFO 30289 --- [ main] .s.d.r.c.RepositoryConfiguration
May 08 13:40:14 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:14.249 INFO 30289 --- [ main] .s.d.r.c.RepositoryConfiguration
May 08 13:40:16 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:16.485 INFO 30289 --- [ main] trationDelegate$BeanPostProcesso
May 08 13:40:16 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:16.604 INFO 30289 --- [ main] trationDelegate$BeanPostProcesso
May 08 13:40:16 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:16.615 INFO 30289 --- [ main] trationDelegate$BeanPostProcesso
May 08 13:40:16 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:16.622 INFO 30289 --- [ main] trationDelegate$BeanPostProcesso
May 08 13:40:16 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:16.627 INFO 30289 --- [ main] trationDelegate$BeanPostProcesso
May 08 13:40:16 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:16.647 INFO 30289 --- [ main] trationDelegate$BeanPostProcesso
May 08 13:40:17 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:17.644 INFO 30289 --- [ main] o.s.b.w.embedded.tomcat.TomcatWe
May 08 13:40:17 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:17.690 INFO 30289 --- [ main] o.apache.catalina.core.StandardS
May 08 13:40:17 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:17.693 INFO 30289 --- [ main] org.apache.catalina.core.Standar
May 08 13:40:17 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:17.923 INFO 30289 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[
May 08 13:40:17 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:17.924 INFO 30289 --- [ main] o.s.web.context.ContextLoader
May 08 13:40:18 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:18.589 INFO 30289 --- [ main] o.f.c.internal.license.VersionPr
May 08 13:40:18 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:18.626 INFO 30289 --- [ main] com.zaxxer.hikari.HikariDataSour
May 08 13:40:19 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:19.780 INFO 30289 --- [ main] com.zaxxer.hikari.HikariDataSour
May 08 13:40:20 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:20.067 INFO 30289 --- [ main] o.f.c.internal.database.Database
May 08 13:40:20 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:20.498 INFO 30289 --- [ main] o.f.core.internal.command.DbVali
May 08 13:40:20 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:20.566 INFO 30289 --- [ main] o.f.core.internal.command.DbMigr
May 08 13:40:20 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:20.568 INFO 30289 --- [ main] o.f.core.internal.command.DbMigr
May 08 13:40:20 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:20.956 INFO 30289 --- [ main] o.hibernate.jpa.internal.util.Lo
May 08 13:40:21 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:21.207 INFO 30289 --- [ main] org.hibernate.Version
May 08 13:40:21 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:21.680 INFO 30289 --- [ main] o.hibernate.annotations.common.V
May 08 13:40:22 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:22.115 INFO 30289 --- [ main] org.hibernate.dialect.Dialect
May 08 13:40:25 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:25.249 INFO 30289 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiat
May 08 13:40:25 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:25.279 INFO 30289 --- [ main] j.LocalContainerEntityManagerFac
May 08 13:40:28 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:28.589 WARN 30289 --- [ main] JpaBaseConfiguration$JpaWebConfi
May 08 13:40:29 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:29.444 INFO 30289 --- [ main] o.s.s.web.DefaultSecurityFilterC
May 08 13:40:29 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:29.774 INFO 30289 --- [ main] o.s.s.concurrent.ThreadPoolTaskE
May 08 13:40:31 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:31.091 INFO 30289 --- [ main] s.a.ScheduledAnnotationBeanPostP
May 08 13:40:31 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:31.240 INFO 30289 --- [ main] o.s.b.w.embedded.tomcat.TomcatWe
May 08 13:40:31 194-58-118-141.ovz.vps.regruhosting.ru java[30289]: 2020-05-08 13:40:31.244 INFO 30289 --- [ main] c.o.c.ServingWebContentApplicati
Output of netstat -tulnp:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 532/mysqld
tcp 0 0 0.0.0.0:587 0.0.0.0:* LISTEN 456/exim
tcp 0 0 0.0.0.0:110 0.0.0.0:* LISTEN 477/dovecot
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN 477/dovecot
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/init
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 511/nginx: master p
tcp 0 0 127.0.0.1:8081 0.0.0.0:* LISTEN 504/httpd
tcp 0 0 0.0.0.0:465 0.0.0.0:* LISTEN 456/exim
tcp 0 0 194.58.118.141:53 0.0.0.0:* LISTEN 476/named
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 476/named
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 442/sshd
tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 476/named
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 456/exim
tcp 0 0 127.0.0.1:48314 0.0.0.0:* LISTEN 31081/shellinaboxd
tcp 0 0 194.58.118.141:443 0.0.0.0:* LISTEN 511/nginx: master p
tcp 0 0 0.0.0.0:1500 0.0.0.0:* LISTEN 513/ihttpd
tcp 0 0 0.0.0.0:993 0.0.0.0:* LISTEN 477/dovecot
tcp 0 0 0.0.0.0:995 0.0.0.0:* LISTEN 477/dovecot
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 440/php-fpm: master
tcp6 0 0 :::587 :::* LISTEN 456/exim
tcp6 0 0 :::111 :::* LISTEN 6221/rpcbind
tcp6 0 0 :::8080 :::* LISTEN 31063/java
tcp6 0 0 2a00:f940:2:1:2::bda:80 :::* LISTEN 511/nginx: master p
tcp6 0 0 :::465 :::* LISTEN 456/exim
tcp6 0 0 :::21 :::* LISTEN 520/proftpd: (accep
tcp6 0 0 :::53 :::* LISTEN 476/named
tcp6 0 0 :::22 :::* LISTEN 442/sshd
tcp6 0 0 ::1:953 :::* LISTEN 476/named
tcp6 0 0 :::25 :::* LISTEN 456/exim
tcp6 0 0 2a00:f940:2:1:2::bd:443 :::* LISTEN 511/nginx: master p
udp 0 0 194.58.118.141:53 0.0.0.0:* 476/named
udp 0 0 127.0.0.1:53 0.0.0.0:* 476/named
udp 0 0 0.0.0.0:111 0.0.0.0:* 1/init
udp 0 0 0.0.0.0:881 0.0.0.0:* 6221/rpcbind
udp6 0 0 :::53 :::* 476/named
udp6 0 0 :::111 :::* 6221/rpcbind
udp6 0 0 :::881 :::* 6221/rpcbind
Why is my service killed every 10-20 minutes? In journalctl I see only this message:
runapp.service: main process exited, code=killed, status=9/KILL
OS: centos7-x86_64_isp_lite5.
The Spring port is 8080 (the default).
Thanks for your answers!

I found a solution.
My server had 512 MB of RAM, and free -m consistently showed 420 MB or more in use. The server stopped crashing when I increased it to 1 GB.
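A SIGKILL (status=9/KILL) with no preceding error from the application on a low-memory VPS usually points at the kernel OOM killer. A quick way to confirm, plus a stopgap if the RAM cannot be increased right away (a sketch; the heap sizes below are illustrative, not tuned, and on an OpenVZ container the host may hide some kernel messages):
# Look for OOM-killer entries around the time the service was killed:
dmesg -T | grep -i -E 'out of memory|oom'
journalctl -k --since "1 hour ago" | grep -i oom
# Stopgap: cap the JVM so the whole process fits in 512 MB, e.g. in runapp.service:
#   ExecStart=/usr/bin/java -Xmx256m -XX:MaxMetaspaceSize=96m -jar /var/www/oulina/data/newproject-1.0-SNAPSHOT.jar
systemctl daemon-reload
systemctl restart runapp.service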

Related

Webclient SslHandshakeTimeoutException: handshake timed out after 10000ms

I recently made a component using WebFlux and WebClient.
A few requests use an external resource to retrieve some data.
Locally everything works fine: data is retrieved from the external resource and correctly displayed to the user.
When I deploy to the remote server it no longer works and shows me this error:
i.n.r.DefaultHostsFileEntriesResolver : -Dio.netty.hostsFileRefreshInterval: 0
Jan 08 23:59:31 dlv-izac-user app/web.1 2023-01-09 07:59:30.715 DEBUG 4 --- [or-http-epoll-7] i.n.util.ResourceLeakDetectorFactory : Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@5abf28c7
Jan 08 23:59:31 dlv-izac-user app/web.1 2023-01-09 07:59:30.717 DEBUG 4 --- [or-http-epoll-7] io.netty.resolver.dns.DnsQueryContext : [id: 0x477cca20] WRITE: UDP, [45834: /10.1.0.2:53], DefaultDnsQuestion(izac-dlv.herokuapp.com. IN A)
Jan 08 23:59:31 dlv-izac-user app/web.1 2023-01-09 07:59:30.722 WARN 4 --- [or-http-epoll-7] io.netty.channel.epoll.EpollEventLoop : Unexpected exception in the selector loop.
Jan 08 23:59:31 dlv-izac-user app/web.1 io.netty.channel.unix.Errors$NativeIoException: epoll_wait(..) failed: Function not implemented
Jan 08 23:59:32 dlv-izac-user app/web.1 2023-01-09 07:59:31.722 WARN 4 --- [or-http-epoll-7] io.netty.channel.epoll.EpollEventLoop : Unexpected exception in the selector loop.
Jan 08 23:59:32 dlv-izac-user app/web.1 io.netty.channel.unix.Errors$NativeIoException: epoll_wait(..) failed: Function not implemented
Jan 08 23:59:33 dlv-izac-user app/web.1 2023-01-09 07:59:32.723 WARN 4 --- [or-http-epoll-7] io.netty.channel.epoll.EpollEventLoop : Unexpected exception in the selector loop.
Jan 08 23:59:33 dlv-izac-user app/web.1 io.netty.channel.unix.Errors$NativeIoException: epoll_wait(..) failed: Function not implemented
Jan 08 23:59:34 dlv-izac-user app/web.1 2023-01-09 07:59:33.723 WARN 4 --- [or-http-epoll-7] io.netty.channel.epoll.EpollEventLoop : Unexpected exception in the selector loop.
Jan 08 23:59:34 dlv-izac-user app/web.1 io.netty.channel.unix.Errors$NativeIoException: epoll_wait(..) failed: Function not implemented
Jan 08 23:59:35 dlv-izac-user app/web.1 2023-01-09 07:59:34.724 WARN 4 --- [or-http-epoll-7] io.netty.channel.epoll.EpollEventLoop : Unexpected exception in the selector loop.
Jan 08 23:59:35 dlv-izac-user app/web.1 io.netty.channel.unix.Errors$NativeIoException: epoll_wait(..) failed: Function not implemented
Jan 08 23:59:36 dlv-izac-user app/web.1 2023-01-09 07:59:35.734 DEBUG 4 --- [or-http-epoll-7] io.netty.resolver.dns.DnsNameResolver : [id: 0x477cca20] RECEIVED: UDP [45834: /10.1.0.2:53], DatagramDnsResponse(from: /10.1.0.2:53, 45834, QUERY(0), NoError(0), RD RA)
Jan 08 23:59:36 dlv-izac-user app/web.1 DefaultDnsQuestion(izac-dlv.herokuapp.com. IN A)
Jan 08 23:59:36 dlv-izac-user app/web.1 DefaultDnsRawRecord(izac-dlv.herokuapp.com. 9 IN A 4B)
Jan 08 23:59:36 dlv-izac-user app/web.1 DefaultDnsRawRecord(izac-dlv.herokuapp.com. 9 IN A 4B)
Jan 08 23:59:36 dlv-izac-user app/web.1 DefaultDnsRawRecord(izac-dlv.herokuapp.com. 9 IN A 4B)
Jan 08 23:59:36 dlv-izac-user app/web.1 DefaultDnsRawRecord(OPT flags:0 udp:4096 0B)
Jan 09 00:00:16 dlv-izac-user app/web.1 2023-01-09 08:00:15.814 WARN 4 --- [or-http-epoll-8] r.netty.http.client.HttpClientConnect : [fc253c96, L:/172.19.105.50:34258 - R:izac-dlv.herokuapp.com/46.137.15.86:443] The connection observed an error
Jan 09 00:00:16 dlv-izac-user app/web.1 io.netty.handler.ssl.SslHandshakeTimeoutException: handshake timed out after 10000ms
Jan 09 00:00:16 dlv-izac-user app/web.1 at io.netty.handler.ssl.SslHandler$7.run(SslHandler.java:2113) ~[netty-handler-4.1.76.Final.jar!/:4.1.76.Final]
Jan 09 00:00:16 dlv-izac-user app/web.1 at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.76.Final.jar!/:4.1.76.Final]
Jan 09 00:00:16 dlv-izac-user app/web.1 at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170) ~[netty-common-4.1.76.Final.jar!/:4.1.76.Final]
Jan 09 00:00:16 dlv-izac-user app/web.1 at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[netty-common-4.1.76.Final.jar!/:4.1.76.Final]
Jan 09 00:00:16 dlv-izac-user app/web.1 at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469) ~[netty-common-4.1.76.Final.jar!/:4.1.76.Final]
Jan 09 00:00:16 dlv-izac-user app/web.1 at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:391) ~[netty-transport-classes-epoll-4.1.76.Final.jar!/:4.1.76.Final]
Jan 09 00:00:16 dlv-izac-user app/web.1 at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) ~[netty-common-4.1.76.Final.jar!/:4.1.76.Final]
Jan 09 00:00:16 dlv-izac-user app/web.1 at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.76.Final.jar!/:4.1.76.Final]
Jan 09 00:00:16 dlv-izac-user app/web.1 at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.76.Final.jar!/:4.1.76.Final]
Jan 09 00:00:16 dlv-izac-user app/web.1 at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
This is my call using WebClient:
URI uri = UriComponentsBuilder.fromHttpUrl(izacComponentUrl)
        .queryParam("request", request)
        .build().toUri();
WebClient client = WebClient.builder()
        .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
        .build();
return client.get()
        .uri(uri)
        .retrieve()
        .bodyToFlux(RestaurantInfoResponseBody.class)
        .onErrorContinue(RuntimeException.class,
                (ex, o) -> log.error("Error while retrieving data via the external service!"));
}
You have a timeout.
Have you checked that the network flow is open between the two servers?
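One quick way to check from the deployment host (a sketch; the hostname below is taken from the error log, so substitute your own target):
# Test TCP reachability and the full TLS handshake:
curl -v --connect-timeout 10 -o /dev/null https://izac-dlv.herokuapp.com/
# Or inspect the handshake directly:
openssl s_client -connect izac-dlv.herokuapp.com:443 -servername izac-dlv.herokuapp.com </dev/null
If the TCP connect succeeds but either command hangs during the handshake, an intermediate firewall or proxy is likely dropping the traffic, which would match the SslHandshakeTimeoutException.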

How to grep 3 results from the whois command at once?

Bash 4.3
Ubuntu 16.04
Each iteration of the while read loop takes a little under a second. How can I grep for all 3 results at once?
#!/bin/bash
#-- tmp files
tmp_dir="$(mktemp -d -t 'text.XXXXX' || mktemp -d 2>/dev/null)"
tmp_input1="${tmp_dir}/temp_input1.txt"
tmp_input2="${tmp_dir}/temp_input2.txt"
wDir="/home/work"
list="${wDir}/.ip-list.txt"
finalResults="${wDir}/final-results.txt"
cd "$wDir"
awk '{ print $11 }' "$list" | sort -u > "$tmp_input1"
while read ip; do
    echo "-- IP Address: $ip" >> "$tmp_input2"
    whois "$ip" | grep inetnum >> "$tmp_input2"
    whois "$ip" | grep route >> "$tmp_input2"
    whois "$ip" | grep mnt-by | head -n 2 | sed -n '1!p' >> "$tmp_input2"
    echo "" >> "$tmp_input2"
done < "$tmp_input1"
mv "$tmp_input2" "$finalResults"
cat "$finalResults"
rm -rf "$tmp_dir"
Here is my .ip-list.txt file
> Tue Oct 16 21:15:59 2018 TCP 147.135.23.98 80 => 95.217.197.238 62293
> Tue Oct 16 21:16:52 2018 TCP 147.135.23.98 1160 => 95.217.243.116 44076
> Tue Oct 16 21:16:51 2018 TCP 147.135.23.98 1160 => 159.69.253.26 43842
> Tue Oct 16 21:16:47 2018 TCP 147.135.23.98 1160 => 95.217.49.21 13288
> Tue Oct 16 21:16:18 2018 TCP 147.135.23.98 80 => 95.217.223.72 21969
> Tue Oct 16 21:16:42 2018 TCP 147.135.23.98 1160 => 95.216.232.46 9834
> Tue Oct 16 21:16:54 2018 TCP 147.135.23.98 1160 => 88.198.149.27 23388
> Tue Oct 16 21:15:57 2018 TCP 147.135.23.98 80 => 95.217.72.11 38498
> Tue Oct 16 21:16:41 2018 TCP 147.135.23.98 1160 => 159.69.250.160 8549
> Tue Oct 16 21:16:27 2018 TCP 147.135.23.98 80 => 95.217.57.97 52546
> Tue Oct 16 21:16:28 2018 TCP 147.135.23.98 80 => 95.216.225.43 60635
> Tue Oct 16 21:16:32 2018 TCP 147.135.23.98 80 => 213.239.244.5 17729
> Tue Oct 16 21:16:05 2018 TCP 147.135.23.98 80 => 95.217.27.233 24669
> Tue Oct 16 21:16:46 2018 TCP 147.135.23.98 1160 => 94.130.60.83 21203
> Tue Oct 16 21:16:52 2018 TCP 147.135.23.98 1160 => 95.217.191.48 1070
> Tue Oct 16 21:16:22 2018 TCP 147.135.23.98 80 => 95.217.219.152 15617
> Tue Oct 16 21:16:44 2018 TCP 147.135.23.98 1160 => 95.217.35.111 55808
> Tue Oct 16 21:16:46 2018 TCP 147.135.23.98 1160 => 95.216.224.158 37768
> Tue Oct 16 21:16:13 2018 TCP 147.135.23.98 80 => 159.69.241.84 24365
> Tue Oct 16 21:16:21 2018 TCP 147.135.23.98 80 => 95.217.169.49 33710
> Tue Oct 16 21:16:07 2018 TCP 147.135.23.98 80 => 95.217.186.121 21758
> Tue Oct 16 21:16:00 2018 TCP 147.135.23.98 80 => 78.47.228.239 21199
> Tue Oct 16 21:16:30 2018 TCP 147.135.23.98 80 => 95.217.23.171 8670
> Tue Oct 16 21:16:49 2018 TCP 147.135.23.98 1160 => 95.216.244.96 22087
> Tue Oct 16 21:16:20 2018 TCP 147.135.23.98 80 => 95.217.64.54 13638
> Tue Oct 16 21:16:40 2018 TCP 147.135.23.98 1160 => 95.217.55.104 3377
> Tue Oct 16 21:16:09 2018 TCP 147.135.23.98 80 => 95.217.242.169 13627
> Tue Oct 16 21:16:54 2018 TCP 147.135.23.98 1160 => 95.217.192.169 6566
> Tue Oct 16 21:16:53 2018 TCP 147.135.23.98 1160 => 95.217.101.221 41547
> Tue Oct 16 21:16:54 2018 TCP 147.135.23.98 1160 => 159.69.227.235 62092
> Tue Oct 16 21:16:45 2018 TCP 147.135.23.98 1160 => 95.217.235.228 63643
> Tue Oct 16 21:16:08 2018 TCP 147.135.23.98 80 => 95.216.227.162 51332
> Tue Oct 16 21:16:54 2018 TCP 147.135.23.98 1160 => 95.217.68.128 38480
There are hundreds of lines.
How can I make these commands more efficient? Can they be combined?
whois "$ip" | grep inetnum >> "$tmp_input2"
whois "$ip" | grep route >> "$tmp_input2"
whois "$ip" | grep mnt-by | head -n 2 | sed -n '1!p' >> "$tmp_input2"
Write the output of whois "$ip" to a variable and use the variable:
out=$(whois "$ip")
grep -e 'inetnum' -e 'route' <<< "$out" >> "$tmp_input2"
grep 'mnt-by' <<< "$out" | sed '2!d' >> "$tmp_input2"
Not in this way.
The first two greps you can replace by
whois "$ip" | egrep 'inetnum|route' >> "$tmp_input2"
But because the third grep is run through additional filters, you cannot fold it into the egrep.
But grep is not the problem; whois is the big time consumer, and you run it multiple times.
So it would be a good idea to limit the number of whois calls.
hop=$(mktemp)
while read ip; do
    echo "-- IP Address: $ip" >> "$tmp_input2"
    whois "$ip" > "$hop"
    grep inetnum "$hop" >> "$tmp_input2"
    grep route "$hop" >> "$tmp_input2"
    grep mnt-by "$hop" | head -n 2 | sed -n '1!p' >> "$tmp_input2"
    echo "" >> "$tmp_input2"
done < "$tmp_input1"
rm -f "$hop"
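If a single whois per IP is still too slow, the lookups themselves can run in parallel. A sketch, assuming GNU xargs and bash; the variables $tmp_input1 and $finalResults come from the script above, and the hypothetical lookup function writes each result to its own numbered file so output is not interleaved before being concatenated in input order:
lookup() {
    local ip=$1 out
    out=$(whois "$ip")
    {
        echo "-- IP Address: $ip"
        grep -e inetnum -e route <<< "$out"
        grep mnt-by <<< "$out" | sed -n '2p'
        echo ""
    } > "$2"
}
export -f lookup

results_dir=$(mktemp -d)
export results_dir
# Prefix each IP with a zero-padded line number, then run up to 8 lookups at once:
nl -b a -n rz "$tmp_input1" | xargs -n 2 -P 8 bash -c 'lookup "$1" "$results_dir/$0"'
cat "$results_dir"/* > "$finalResults"
rm -rf "$results_dir"
Keep -P modest; some whois servers rate-limit or block aggressive parallel queries.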

Artifactory issue (maybe Derby related)

A few days back I noticed that my Artifactory instance was no longer running, and now I am not able to start it again. In the localhost logs in /opt/jfrog/artifactory/tomcat/logs I found a long stack trace, but I am not sure whether that is the actual problem, because it already seems to appear at a time when everything was still working fine.
Update 2 weeks later: A few days after initially writing this question, Artifactory was suddenly running again. I did not understand why, since nothing I had tried seemed to help. Now the same issue is back...
The localhost log:
19-Jul-2018 10:50:34.781 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/var/opt/jfrog/artifactory' resolved from: System property
19-Jul-2018 10:50:38.843 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
19-Jul-2018 10:51:57.405 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/var/opt/jfrog/artifactory' resolved from: System property
19-Jul-2018 10:52:03.598 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
19-Jul-2018 10:53:07.428 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/var/opt/jfrog/artifactory' resolved from: System property
19-Jul-2018 10:53:15.436 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
19-Jul-2018 10:53:32.409 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.artifactory.webapp.servlet.ArtifactoryHomeConfigListener]
java.lang.RuntimeException: Could't establish connection with db: jdbc:derby:/var/opt/jfrog/artifactory/data/derby;create=true
at org.jfrog.config.db.TemporaryDBChannel.<init>(TemporaryDBChannel.java:35)
at org.artifactory.common.ArtifactoryConfigurationAdapter.getDbChannel(ArtifactoryConfigurationAdapter.java:183)
at org.jfrog.config.wrappers.ConfigurationManagerImpl.getDBChannel(ConfigurationManagerImpl.java:422)
at org.artifactory.config.MasterKeyBootstrapUtil.dbChannel(MasterKeyBootstrapUtil.java:206)
at org.artifactory.config.MasterKeyBootstrapUtil.tryToCreateTable(MasterKeyBootstrapUtil.java:95)
at org.artifactory.config.MasterKeyBootstrapUtil.validateOrInsertKeyInformation(MasterKeyBootstrapUtil.java:65)
at org.artifactory.config.MasterKeyBootstrapUtil.handleMasterKey(MasterKeyBootstrapUtil.java:46)
at org.artifactory.webapp.servlet.BasicConfigManagers.initHomes(BasicConfigManagers.java:95)
at org.artifactory.webapp.servlet.BasicConfigManagers.initialize(BasicConfigManagers.java:81)
at org.artifactory.webapp.servlet.ArtifactoryHomeConfigListener.contextInitialized(ArtifactoryHomeConfigListener.java:53)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4745)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:630)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1842)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Failed to start database '/var/opt/jfrog/artifactory/data/derby' with class loader java.net.URLClassLoader@e9e54c2, see the next exception for details.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
at org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(Unknown Source)
at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
at org.apache.derby.jdbc.EmbeddedDriver.connect(Unknown Source)
at org.jfrog.config.db.TemporaryDBChannel.<init>(TemporaryDBChannel.java:31)
... 22 more
Caused by: ERROR XJ040: Failed to start database '/var/opt/jfrog/artifactory/data/derby' with class loader java.net.URLClassLoader@e9e54c2, see the next exception for details.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 32 more
Caused by: ERROR XSDB6: Another instance of Derby may have already booted the database /var/opt/jfrog/artifactory/data/derby.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.privGetJBMSLockOnDB(Unknown Source)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.getJBMSLockOnDB(Unknown Source)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startModule(Unknown Source)
at org.apache.derby.impl.services.monitor.FileMonitor.startModule(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Unknown Source)
at org.apache.derby.impl.store.raw.RawStore.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startModule(Unknown Source)
at org.apache.derby.impl.services.monitor.FileMonitor.startModule(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Unknown Source)
at org.apache.derby.impl.store.access.RAMAccessManager.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startModule(Unknown Source)
at org.apache.derby.impl.services.monitor.FileMonitor.startModule(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Unknown Source)
at org.apache.derby.impl.db.BasicDatabase.bootStore(Unknown Source)
at org.apache.derby.impl.db.BasicDatabase.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startProviderService(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Unknown Source)
... 29 more
19-Jul-2018 10:53:32.503 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.artifactory.webapp.servlet.logback.LogbackConfigListener]
java.lang.IllegalStateException: Artifactory home not initialized
at org.artifactory.webapp.servlet.logback.LogbackConfigListener.initArtifactoryHome(LogbackConfigListener.java:55)
at org.artifactory.webapp.servlet.logback.LogbackConfigListener.contextInitialized(LogbackConfigListener.java:47)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4745)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:630)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1842)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
19-Jul-2018 10:53:32.569 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [org.artifactory.webapp.servlet.ArtifactoryContextConfigListener]
java.lang.IllegalStateException: Artifactory home not initialized.
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.getArtifactoryHome(ArtifactoryContextConfigListener.java:176)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.setSessionTrackingMode(ArtifactoryContextConfigListener.java:150)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.contextInitialized(ArtifactoryContextConfigListener.java:77)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4745)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:630)
at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1842)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
19-Jul-2018 10:54:44.743 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
19-Jul-2018 10:56:21.997 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log jolokia: No access restrictor found, access to any MBean is allowed
19-Jul-2018 17:22:06.575 INFO [localhost-startStop-5] org.apache.catalina.core.ApplicationContext.log Closing Spring root WebApplicationContext
19-Jul-2018 17:24:22.721 INFO [localhost-startStop-1] org.apache.catalina.core.ApplicationContext.log Using artifactory.home at '/var/opt/jfrog/artifactory' resolved from: System property
19-Jul-2018 17:24:27.463 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log 1 Spring WebApplicationInitializers detected on classpath
19-Jul-2018 17:26:29.617 INFO [localhost-startStop-2] org.apache.catalina.core.ApplicationContext.log Initializing Spring embedded WebApplicationContext
artifactory.log with several restart attempts. Note: it always gets stuck at '[art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:281) -'
_ _ __ _ ____ _____ _____
/\ | | (_)/ _| | | / __ \ / ____/ ____|
/ \ _ __| |_ _| |_ __ _ ___| |_ ___ _ __ _ _ | | | | (___| (___
/ /\ \ | '__| __| | _/ _` |/ __| __/ _ \| '__| | | | | | | |\___ \\___ \
/ ____ \| | | |_| | || (_| | (__| || (_) | | | |_| | | |__| |____) |___) |
/_/ \_\_| \__|_|_| \__,_|\___|\__\___/|_| \__, | \____/|_____/_____/
Version: 5.9.0 __/ |
Revision: 50900900 |___/
Artifactory Home: '/var/opt/jfrog/artifactory'
2018-08-01 16:52:01,039 [art-init] [WARN ] (o.a.f.l.ArtifactoryLockFile:65) - Found existing lock file. Artifactory was not shutdown properly. [/var/opt/jfrog/artifactory/data/.lock]
2018-08-01 16:52:03,744 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:484) - Artifactory application context set to NOT READY by refresh
2018-08-01 16:52:03,777 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:227) - Refreshing artifactory: startup date [Wed Aug 01 16:52:03 UTC 2018]; root of context hierarchy
2018-08-01 16:53:29,163 [art-init] [INFO ] (o.a.s.d.DbServiceImpl:217) - Database: Apache Derby 10.11.1.1 - (1616546). Driver: Apache Derby Embedded JDBC Driver 10.11.1.1 - (1616546) Pool: derby
2018-08-01 16:53:29,178 [art-init] [INFO ] (o.a.s.d.DbServiceImpl:220) - Connection URL: jdbc:derby:/var/opt/jfrog/artifactory/data/derby
2018-08-01 16:54:05,176 [art-init] [INFO ] (o.j.s.b.p.t.BinaryProviderClassScanner:76) - Added 'blob' from jar:file:/opt/jfrog/artifactory/tomcat/webapps/artifactory/WEB-INF/lib/artifactory-storage-db-5.9.0.jar!/
2018-08-01 16:54:05,524 [art-init] [INFO ] (o.j.s.b.p.t.BinaryProviderClassScanner:76) - Added 'empty, external-file, external-wrapper, file-system, cache-fs, retry' from jar:file:/opt/jfrog/artifactory/tomcat/webapps/artifactory/WEB-INF/lib/binary-store-core-2.0.37.jar!/
2018-08-01 16:54:49,865 [art-init] [INFO ] (o.a.s.ArtifactorySchedulerFactoryBean:647) - Starting Quartz Scheduler now
2018-08-01 16:54:52,478 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:234) - Artifactory context starting up 39 Spring Beans...
2018-08-01 16:55:04,519 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:356) - Initialzed new service id: jfrt#01c72726yd871j0wqyck0k0n0z
2018-08-01 16:55:05,338 [art-init] [INFO ] (o.j.s.c.EncryptionWrapperFactory:33) - createArtifactoryKeyWrapper EncryptionWrapperBase{ encodingType=ARTIFACTORY_MASTER, topEncrypter=BytesEncrypterBase{ Cipher='DESede', keyId='22QC5'}, formatUsed=OldFormat, decrypters=[BytesEncrypterBase{ Cipher='DESede', keyId='22QC5'}]}
2018-08-01 16:55:05,687 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:556) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2018-08-01 16:55:09,887 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:279) - Waiting for access server...
2018-08-01 16:56:49,406 [art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:281) -
_ _ __ _ ____ _____ _____
/\ | | (_)/ _| | | / __ \ / ____/ ____|
/ \ _ __| |_ _| |_ __ _ ___| |_ ___ _ __ _ _ | | | | (___| (___
/ /\ \ | '__| __| | _/ _` |/ __| __/ _ \| '__| | | | | | | |\___ \\___ \
/ ____ \| | | |_| | || (_| | (__| || (_) | | | |_| | | |__| |____) |___) |
/_/ \_\_| \__|_|_| \__,_|\___|\__\___/|_| \__, | \____/|_____/_____/
Version: 5.9.0 __/ |
Revision: 50900900 |___/
Artifactory Home: '/var/opt/jfrog/artifactory'
2018-08-01 16:56:49,510 [art-init] [WARN ] (o.a.f.l.ArtifactoryLockFile:65) - Found existing lock file. Artifactory was not shutdown properly. [/var/opt/jfrog/artifactory/data/.lock]
2018-08-01 16:56:52,853 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:484) - Artifactory application context set to NOT READY by refresh
2018-08-01 16:56:52,908 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:227) - Refreshing artifactory: startup date [Wed Aug 01 16:56:52 UTC 2018]; root of context hierarchy
2018-08-01 17:00:01,718 [art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:281) -
_ _ __ _ ____ _____ _____
/\ | | (_)/ _| | | / __ \ / ____/ ____|
/ \ _ __| |_ _| |_ __ _ ___| |_ ___ _ __ _ _ | | | | (___| (___
/ /\ \ | '__| __| | _/ _` |/ __| __/ _ \| '__| | | | | | | |\___ \\___ \
/ ____ \| | | |_| | || (_| | (__| || (_) | | | |_| | | |__| |____) |___) |
/_/ \_\_| \__|_|_| \__,_|\___|\__\___/|_| \__, | \____/|_____/_____/
Version: 5.9.0 __/ |
Revision: 50900900 |___/
Artifactory Home: '/var/opt/jfrog/artifactory'
2018-08-01 17:00:01,827 [art-init] [WARN ] (o.a.f.l.ArtifactoryLockFile:65) - Found existing lock file. Artifactory was not shutdown properly. [/var/opt/jfrog/artifactory/data/.lock]
2018-08-01 17:00:04,617 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:484) - Artifactory application context set to NOT READY by refresh
2018-08-01 17:00:04,619 [art-init] [INFO ] (o.a.s.ArtifactoryApplicationContext:227) - Refreshing artifactory: startup date [Wed Aug 01 17:00:04 UTC 2018]; root of context hierarchy
2018-08-01 17:03:09,821 [art-init] [INFO ] (o.a.w.s.ArtifactoryContextConfigListener:281) -
When I try to start the service via systemctl start artifactory.service it fails with "Job for artifactory.service failed because a timeout was exceeded. See "systemctl status artifactory.service" and "journalctl -xe" for details.".
Output systemctl status artifactory.service:
nikl@nikls-droplet-1:/opt/jfrog/artifactory/tomcat/logs$ sudo systemctl status artifactory.service
● artifactory.service - Setup Systemd script for Artifactory in Tomcat Servlet Engine
Loaded: loaded (/lib/systemd/system/artifactory.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Fri 2018-07-20 23:19:14 UTC; 25min ago
Process: 18829 ExecStart=/opt/jfrog/artifactory/bin/artifactoryManage.sh start (code=killed, signal=TERM)
Jul 20 23:18:23 nikls-droplet-1 artifactoryManage.sh[18829]: /usr/bin/java
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Starting Artifactory tomcat as user artifactory...
Jul 20 23:18:24 nikls-droplet-1 su[18851]: Successful su for artifactory by root
Jul 20 23:18:24 nikls-droplet-1 su[18851]: + ??? root:artifactory
Jul 20 23:18:24 nikls-droplet-1 su[18851]: pam_unix(su:session): session opened for user artifactory by (uid=0)
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Max number of open files: 1024
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Using ARTIFACTORY_HOME: /var/opt/jfrog/artifactory
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Using ARTIFACTORY_PID: /var/opt/jfrog/run/artifactory.pid
Jul 20 23:18:24 nikls-droplet-1 artifactoryManage.sh[18829]: Tomcat started.
Jul 20 23:19:14 nikls-droplet-1 systemd[1]: Stopped Setup Systemd script for Artifactory in Tomcat Servlet Engine.
Output journalctl -xe:
nikl@nikls-droplet-1:/opt/jfrog/artifactory/tomcat/logs$ sudo journalctl -xe
-- Kernel start-up required KERNEL_USEC microseconds.
--
-- Initial RAM disk start-up required INITRD_USEC microseconds.
--
-- Userspace start-up required 108849 microseconds.
Jul 20 23:54:42 nikls-droplet-1 systemd[1]: Started User Manager for UID 1001.
-- Subject: Unit user@1001.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user@1001.service has finished starting up.
--
-- The start-up result is done.
Jul 20 23:54:42 nikls-droplet-1 artifactoryManage.sh[19282]: Max number of open files: 1024
Jul 20 23:54:42 nikls-droplet-1 artifactoryManage.sh[19282]: Using ARTIFACTORY_HOME: /var/opt/jfrog/artifactory
Jul 20 23:54:42 nikls-droplet-1 artifactoryManage.sh[19282]: Using ARTIFACTORY_PID: /var/opt/jfrog/run/artifactory.pid
Jul 20 23:54:42 nikls-droplet-1 artifactoryManage.sh[19282]: Tomcat started.
Jul 20 23:54:42 nikls-droplet-1 su[19305]: pam_unix(su:session): session closed for user artifactory
Jul 20 23:56:11 nikls-droplet-1 systemd[1]: artifactory.service: Start operation timed out. Terminating.
Jul 20 23:56:11 nikls-droplet-1 systemd[1]: Failed to start Setup Systemd script for Artifactory in Tomcat Servlet Engine.
-- Subject: Unit artifactory.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit artifactory.service has failed.
--
-- The result is failed.
Jul 20 23:56:11 nikls-droplet-1 systemd[1]: artifactory.service: Unit entered failed state.
Jul 20 23:56:11 nikls-droplet-1 systemd[1]: artifactory.service: Failed with result 'timeout'.
Jul 20 23:56:11 nikls-droplet-1 polkitd(authority=local)[1443]: Unregistered Authentication Agent for unix-process:19273:10938586 (system bus name :1.317, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jul 20 23:56:16 nikls-droplet-1 systemd[1]: artifactory.service: Service hold-off time over, scheduling restart.
Jul 20 23:56:16 nikls-droplet-1 systemd[1]: Stopped Setup Systemd script for Artifactory in Tomcat Servlet Engine.
-- Subject: Unit artifactory.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit artifactory.service has finished shutting down.
Jul 20 23:56:16 nikls-droplet-1 systemd[1]: Starting Setup Systemd script for Artifactory in Tomcat Servlet Engine...
-- Subject: Unit artifactory.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit artifactory.service has begun starting up.
Jul 20 23:56:16 nikls-droplet-1 artifactoryManage.sh[19863]: /usr/bin/java
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Starting Artifactory tomcat as user artifactory...
Jul 20 23:56:17 nikls-droplet-1 su[19885]: Successful su for artifactory by root
Jul 20 23:56:17 nikls-droplet-1 su[19885]: + ??? root:artifactory
Jul 20 23:56:17 nikls-droplet-1 su[19885]: pam_unix(su:session): session opened for user artifactory by (uid=0)
Jul 20 23:56:17 nikls-droplet-1 systemd-logind[1395]: New session c140 of user artifactory.
-- Subject: A new session c140 has been created for user artifactory
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID c140 has been created for the user artifactory.
--
-- The leading process of the session is 19885.
Jul 20 23:56:17 nikls-droplet-1 systemd[1]: Started Session c140 of user artifactory.
-- Subject: Unit session-c140.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-c140.scope has finished starting up.
--
-- The start-up result is done.
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Max number of open files: 1024
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Using ARTIFACTORY_HOME: /var/opt/jfrog/artifactory
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Using ARTIFACTORY_PID: /var/opt/jfrog/run/artifactory.pid
Jul 20 23:56:17 nikls-droplet-1 artifactoryManage.sh[19863]: Tomcat started.
Jul 20 23:56:17 nikls-droplet-1 su[19885]: pam_unix(su:session): session closed for user artifactory
Jul 20 23:56:18 nikls-droplet-1 systemd-logind[1395]: Removed session c140.
-- Subject: Session c140 has been terminated
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID c140 has been terminated.
I have tried deleting the .lck files in the derby folder, because that seems to have solved the "Another instance of Derby may have already booted the database /var/opt/jfrog/artifactory/data/derby" issue for someone else before, but nothing changed for me. After the next start of Artifactory the files were simply back, and the same error showed up in the log files.
Since the output of systemctl start artifactory.service complains about a timeout, I raised START_TMO in artifactory.default to 300, but the same problem still persists. I also raised START_TMO in a different file according to the first answer of this SO question.
I don't understand what is going on and would be very grateful for any help/advice.
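Two checks that often help in this situation (a sketch based on the paths in the logs above, not official JFrog guidance): make sure no stale JVM still holds the Derby lock before deleting the .lck files, and raise the systemd-side start timeout with a drop-in override rather than only the START_TMO script variable.
# Is a leftover java process still holding the Derby database?
sudo fuser -v /var/opt/jfrog/artifactory/data/derby/*.lck
ps -ef | grep [c]atalina
# Raise the systemd start timeout with a drop-in (value illustrative):
sudo systemctl edit artifactory.service
#   [Service]
#   TimeoutStartSec=600
sudo systemctl daemon-reload
sudo systemctl restart artifactory.service
If a stale process shows up, stopping it first (and only then removing the .lck files) keeps the XSDB6 "another instance of Derby" error from reappearing on the next start.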

How to search logs between dates using sed

I'm trying to search through some logs while grepping for a specific line. How can I reduce the following logs even further by date and time, for example to all lines between 2018/02/27 13:10:31 and 2018/02/27 13:17:34? I've tried using sed but I can't get it to work correctly on either date column.
grep "Eps=" file.log
INFO | jvm 3 | 2018/02/27 13:02:27 | [Tue Feb 27 13:02:27 EST 2018] [INFO ] {Eps=5618.819672131148, Evts=2077762260}
INFO | jvm 3 | 2018/02/27 13:03:27 | [Tue Feb 27 13:03:27 EST 2018] [INFO ] {Eps=5288.8, Evts=2078079588}
INFO | jvm 3 | 2018/02/27 13:04:27 | [Tue Feb 27 13:04:27 EST 2018] [INFO ] {Eps=5176.633333333333, Evts=2078390186}
INFO | jvm 3 | 2018/02/27 13:05:28 | [Tue Feb 27 13:05:28 EST 2018] [INFO ] {Eps=5031.633333333333, Evts=2078692084}
INFO | jvm 3 | 2018/02/27 13:06:28 | [Tue Feb 27 13:06:28 EST 2018] [INFO ] {Eps=5047.433333333333, Evts=2078994930}
INFO | jvm 3 | 2018/02/27 13:07:30 | [Tue Feb 27 13:07:29 EST 2018] [INFO ] {Eps=5314.183333333333, Evts=2079313781}
INFO | jvm 3 | 2018/02/27 13:08:31 | [Tue Feb 27 13:08:31 EST 2018] [INFO ] {Eps=5182.934426229508, Evts=2079629940}
INFO | jvm 3 | 2018/02/27 13:09:31 | [Tue Feb 27 13:09:31 EST 2018] [INFO ] {Eps=5143.459016393443, Evts=2079943691}
INFO | jvm 3 | 2018/02/27 13:10:31 | [Tue Feb 27 13:10:31 EST 2018] [INFO ] {Eps=5519.266666666666, Evts=2080274847}
INFO | jvm 3 | 2018/02/27 13:11:31 | [Tue Feb 27 13:11:31 EST 2018] [INFO ] {Eps=5342.8, Evts=2080595415}
INFO | jvm 3 | 2018/02/27 13:12:32 | [Tue Feb 27 13:12:32 EST 2018] [INFO ] {Eps=5230.183333333333, Evts=2080909226}
INFO | jvm 3 | 2018/02/27 13:13:32 | [Tue Feb 27 13:13:32 EST 2018] [INFO ] {Eps=4975.533333333334, Evts=2081207758}
INFO | jvm 3 | 2018/02/27 13:14:32 | [Tue Feb 27 13:14:32 EST 2018] [INFO ] {Eps=5225.283333333334, Evts=2081521275}
INFO | jvm 3 | 2018/02/27 13:15:33 | [Tue Feb 27 13:15:33 EST 2018] [INFO ] {Eps=5261.766666666666, Evts=2081836981}
INFO | jvm 3 | 2018/02/27 13:16:34 | [Tue Feb 27 13:16:34 EST 2018] [INFO ] {Eps=5257.688524590164, Evts=2082157700}
INFO | jvm 3 | 2018/02/27 13:17:34 | [Tue Feb 27 13:17:34 EST 2018] [INFO ] {Eps=5634.133333333333, Evts=2082495748}
INFO | jvm 3 | 2018/02/27 13:18:34 | [Tue Feb 27 13:18:34 EST 2018] [INFO ] {Eps=5490.5, Evts=2082825178}
INFO | jvm 3 | 2018/02/27 13:19:35 | [Tue Feb 27 13:19:35 EST 2018] [INFO ] {Eps=5351.05, Evts=2083146241}
INFO | jvm 3 | 2018/02/27 13:20:37 | [Tue Feb 27 13:20:37 EST 2018] [INFO ] {Eps=5022.983606557377, Evts=2083452643}
INFO | jvm 3 | 2018/02/27 13:21:37 | [Tue Feb 27 13:21:37 EST 2018] [INFO ] {Eps=5302.196721311476, Evts=2083776077}
INFO | jvm 3 | 2018/02/27 13:22:37 | [Tue Feb 27 13:22:37 EST 2018] [INFO ] {Eps=5096.2, Evts=2084081849}
INFO | jvm 3 | 2018/02/27 13:23:37 | [Tue Feb 27 13:23:37 EST 2018] [INFO ] {Eps=5074.45, Evts=2084386316}
INFO | jvm 3 | 2018/02/27 13:24:38 | [Tue Feb 27 13:24:38 EST 2018] [INFO ] {Eps=5264.566666666667, Evts=2084702190}
Tools like sed or grep operate on strings, even though you can do really sophisticated stuff using regular expressions.
But these tools lack the ability to do something like "range queries" on things like dates.
You might find various solutions to this question; mine would include a small Python snippet:
#!/usr/bin/env python
import sys
from datetime import datetime

begin = datetime(2018, 2, 27, 13, 10, 31)
end = datetime(2018, 2, 27, 13, 17, 34)

# The timestamp is the third pipe-separated field of each log line.
for line in sys.stdin.readlines():
    if begin <= datetime.strptime(line.split('|')[2].strip(), '%Y/%m/%d %H:%M:%S') <= end:
        print(line[:-1])
That snippet, saved as filter.py and made executable (e.g. with chmod +x), could then be called like this:
grep "Eps=" file.log | ./filter.py
Something like this will do the job in shell, but as Stefan Sonnenberg-Carstens said in his answer, consider using Python for this job:
#!/usr/bin/env sh
from=$(grep -n '2018/02/27 13:10:31' file.log | cut -d: -f1)
to=$(grep -n '2018/02/27 13:17:34' file.log | cut -d: -f1)
head -n "$to" file.log | tail -n +"$from"
Output:
INFO | jvm 3 | 2018/02/27 13:10:31 | [Tue Feb 27 13:10:31 EST 2018] [INFO ] {Eps=5519.266666666666, Evts=2080274847}
INFO | jvm 3 | 2018/02/27 13:11:31 | [Tue Feb 27 13:11:31 EST 2018] [INFO ] {Eps=5342.8, Evts=2080595415}
INFO | jvm 3 | 2018/02/27 13:12:32 | [Tue Feb 27 13:12:32 EST 2018] [INFO ] {Eps=5230.183333333333, Evts=2080909226}
INFO | jvm 3 | 2018/02/27 13:13:32 | [Tue Feb 27 13:13:32 EST 2018] [INFO ] {Eps=4975.533333333334, Evts=2081207758}
INFO | jvm 3 | 2018/02/27 13:14:32 | [Tue Feb 27 13:14:32 EST 2018] [INFO ] {Eps=5225.283333333334, Evts=2081521275}
INFO | jvm 3 | 2018/02/27 13:15:33 | [Tue Feb 27 13:15:33 EST 2018] [INFO ] {Eps=5261.766666666666, Evts=2081836981}
INFO | jvm 3 | 2018/02/27 13:16:34 | [Tue Feb 27 13:16:34 EST 2018] [INFO ] {Eps=5257.688524590164, Evts=2082157700}
INFO | jvm 3 | 2018/02/27 13:17:34 | [Tue Feb 27 13:17:34 EST 2018] [INFO ] {Eps=5634.133333333333, Evts=2082495748}
Using a perl one-liner; try to find a more concise and clear way :)
perl -ne 'print if m|2018/02/27 13:10:31| .. m|2018/02/27 13:17:34|' file
Output:
INFO | jvm 3 | 2018/02/27 13:10:31 | [Tue Feb 27 13:10:31 EST 2018] [INFO ] {Eps=5519.266666666666, Evts=2080274847}
INFO | jvm 3 | 2018/02/27 13:11:31 | [Tue Feb 27 13:11:31 EST 2018] [INFO ] {Eps=5342.8, Evts=2080595415}
INFO | jvm 3 | 2018/02/27 13:12:32 | [Tue Feb 27 13:12:32 EST 2018] [INFO ] {Eps=5230.183333333333, Evts=2080909226}
INFO | jvm 3 | 2018/02/27 13:13:32 | [Tue Feb 27 13:13:32 EST 2018] [INFO ] {Eps=4975.533333333334, Evts=2081207758}
INFO | jvm 3 | 2018/02/27 13:14:32 | [Tue Feb 27 13:14:32 EST 2018] [INFO ] {Eps=5225.283333333334, Evts=2081521275}
INFO | jvm 3 | 2018/02/27 13:15:33 | [Tue Feb 27 13:15:33 EST 2018] [INFO ] {Eps=5261.766666666666, Evts=2081836981}
INFO | jvm 3 | 2018/02/27 13:16:34 | [Tue Feb 27 13:16:34 EST 2018] [INFO ] {Eps=5257.688524590164, Evts=2082157700}
INFO | jvm 3 | 2018/02/27 13:17:34 | [Tue Feb 27 13:17:34 EST 2018] [INFO ] {Eps=5634.133333333333, Evts=2082495748}
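For completeness, since the question asks about sed: sed has the same two-address range form as the perl flip-flop above. Using \| as the address delimiter avoids having to escape the slashes in the dates (this assumes each pattern matches exactly one line):
sed -n '\|2018/02/27 13:10:31|,\|2018/02/27 13:17:34|p' file.log
The output is identical to the perl version above.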

How to debug this condition of "eth2: tx hang 1 detected on queue 11, resetting adapter"?

I want to send an sk_buff via dev_queue_xmit. When I send just 2 packets, the network card hangs.
I want to know how to debug this condition.
The relevant part of /var/log/messages is:
[root@10g-host2 test]# tail -f /var/log/messages
Sep 29 10:38:22 10g-host2 acpid: waiting for events: event logging is off
Sep 29 10:38:23 10g-host2 acpid: client connected from 2018[68:68]
Sep 29 10:38:23 10g-host2 acpid: 1 client rule loaded
Sep 29 10:38:24 10g-host2 automount[2210]: lookup_read_master: lookup(nisplus): couldn't locate nis+ table auto.master
Sep 29 10:38:24 10g-host2 mcelog: failed to prefill DIMM database from DMI data
Sep 29 10:38:24 10g-host2 xinetd[2246]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
Sep 29 10:38:24 10g-host2 xinetd[2246]: Started working: 0 available services
Sep 29 10:38:25 10g-host2 abrtd: Init complete, entering main loop
Sep 29 10:39:41 10g-host2 kernel: vmalloc mmap_buf=ffffc90016e29000 mmap_size=4096
Sep 29 10:39:41 10g-host2 kernel: insmod module wsmmap successfully!
Sep 29 10:39:49 10g-host2 kernel: mmap_buf + 1024 is ffffc90016e29400
Sep 29 10:39:49 10g-host2 kernel: data ffffc90016e2942a, len is 42
Sep 29 10:39:49 10g-host2 kernel: udp data ffffc90016e29422
Sep 29 10:39:49 10g-host2 kernel: ip data ffffc90016e2940e
Sep 29 10:39:49 10g-host2 kernel: eth data ffffc90016e29400
Sep 29 10:39:49 10g-host2 kernel: h_source is ffffc90016e29406, dev_addr is ffff880c235c4750, len is 6result is 0
Sep 29 10:39:50 10g-host2 kernel: mmap_buf + 1024 is ffffc90016e29400
Sep 29 10:39:50 10g-host2 kernel: data ffffc90016e2942a, len is 42
Sep 29 10:39:50 10g-host2 kernel: udp data ffffc90016e29422
Sep 29 10:39:50 10g-host2 kernel: ip data ffffc90016e2940e
Sep 29 10:39:50 10g-host2 kernel: eth data ffffc90016e29400
Sep 29 10:39:50 10g-host2 kernel: h_source is ffffc90016e29406, dev_addr is ffff880c235c4750, len is 6result is 0
Sep 29 10:39:52 10g-host2 kernel: ixgbe 0000:03:00.0: eth2: Detected Tx Unit Hang
Sep 29 10:39:52 10g-host2 kernel: Tx Queue <11>
Sep 29 10:39:52 10g-host2 kernel: TDH, TDT <0>, <5>
Sep 29 10:39:52 10g-host2 kernel: next_to_use <5>
Sep 29 10:39:52 10g-host2 kernel: next_to_clean <0>
Sep 29 10:39:52 10g-host2 kernel: ixgbe 0000:03:00.0: eth2: tx_buffer_info[next_to_clean]
Sep 29 10:39:52 10g-host2 kernel: time_stamp <fffd3dd8>
Sep 29 10:39:52 10g-host2 kernel: jiffies <fffd497f>
Sep 29 10:39:52 10g-host2 kernel: ixgbe 0000:03:00.0: eth2: tx hang 1 detected on queue 11, resetting adapter
Sep 29 10:39:52 10g-host2 kernel: ixgbe 0000:03:00.0: eth2: Reset adapter
Sep 29 10:39:52 10g-host2 kernel: ixgbe 0000:03:00.0: master disable timed out
Sep 29 10:39:53 10g-host2 kernel: ixgbe 0000:03:00.0: eth2: detected SFP+: 5
Sep 29 10:39:54 10g-host2 kernel: ixgbe 0000:03:00.0: eth2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
Some information about my machine:
ethtool -i eth2
driver: ixgbe
version: 3.21.2
firmware-version: 0x1bab0001
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: CentOS
Description: CentOS release 6.5 (Final)
Release: 6.5
Codename: Final
The kernel version is 2.6.32-431.el6.x86_64.
Thank you for your help.
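Some generic first steps for a Tx unit hang like this (a sketch; it assumes the ixgbe driver and the ethtool utility already shown above):
# Per-queue statistics often show which Tx queue stalled:
ethtool -S eth2 | grep -i -E 'tx_queue_11|timeout|restart|busy'
# Turn up the driver's message level for more verbose ixgbe logging:
ethtool -s eth2 msglvl 0xffff
# Watch the stalled queue's counters while reproducing the hang:
watch -n1 'ethtool -S eth2 | grep tx_queue_11'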
I used vmalloc() to allocate the memory for skb->data, and that is what brought the NIC down: the card's DMA engine needs physically contiguous buffers, which vmalloc() does not guarantee. I fixed it by using kmalloc() instead.
