Dronekit-sitl fails to bind on default port 5760 - Windows

I have dronekit-sitl installed in a Python 3 virtual environment on my Windows 10 machine and have used it before, running dronekit-sitl copter with no issues. However, as of today I am running into what seems to be a permission issue when trying to execute the ArduCopter SITL.
$ dronekit-sitl copter
os: win, apm: copter, release: stable
SITL already Downloaded and Extracted.
Ready to boot.
Execute: C:\Users\kyrlon\.dronekit\sitl\copter-3.3\apm.exe --home=-35.363261,149.165230,584,353 --model=quad -I 0
SITL-0> Started model quad at -35.363261,149.165230,584,353 at speed 1.0
SITL-0.stderr> bind port 5760 for 0
Starting sketch 'ArduCopter'
bind failed on port 5760 - Operation not permitted
Starting SITL input
I'm not sure what triggered this new permission issue. I tried starting over with a fresh Python environment, but even after a complete PC shutdown I am still getting the error shown above.

It turns out that having Docker on my system was the culprit: it had excluded the port range I was attempting to use, as mentioned in this SO post, which led me to this GitHub issue. Running this command in an elevated terminal:
netsh interface ipv4 show excludedportrange protocol=tcp
showed me the following excluded port ranges:
Protocol tcp Port Exclusion Ranges
Start Port End Port
---------- --------
1496 1595
1658 1757
1758 1857
1858 1957
1958 2057
2058 2157
2180 2279
2280 2379
2380 2479
2480 2579
2702 2801
2802 2901
2902 3001
3002 3101
3102 3201
3202 3301
3390 3489
3490 3589
3590 3689
3693 3792
3793 3892
3893 3992
3993 4092
4093 4192
4193 4292
4293 4392
4393 4492
4493 4592
4593 4692
4768 4867
4868 4967
5041 5140
5141 5240
5241 5340
5357 5357
5358 5457
5458 5557
5558 5657
5700 5700
5701 5800
8005 8005
8884 8884
15202 15301
15302 15401
15402 15501
15502 15601
15602 15701
15702 15801
15802 15901
15902 16001
16002 16101
16102 16201
16202 16301
16302 16401
16402 16501
16502 16601
16602 16701
16702 16801
16802 16901
16993 17092
17093 17192
50000 50059 *
* - Administered port exclusions.
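A quick way to check whether a given port falls in one of these ranges is to parse the netsh output programmatically. A sketch (the sample text below just mimics the table format above; in practice you would feed in the real output of the netsh command):

```python
import re

def parse_excluded_ranges(netsh_output):
    """Parse `netsh interface ipv4 show excludedportrange` output
    into a list of (start, end) port tuples."""
    ranges = []
    for line in netsh_output.splitlines():
        m = re.match(r"\s*(\d+)\s+(\d+)", line)
        if m:
            ranges.append((int(m.group(1)), int(m.group(2))))
    return ranges

def is_excluded(port, ranges):
    """True if `port` lies inside any excluded range."""
    return any(start <= port <= end for start, end in ranges)

sample = """Start Port    End Port
----------    --------
      5701        5800
      8005        8005"""

ranges = parse_excluded_ranges(sample)
print(is_excluded(5760, ranges))  # True: 5760 falls in 5701-5800
```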
It turns out that Docker, or possibly Hyper-V, had excluded the range containing 5760:
5701 5800
As mentioned in the GitHub issue, I had probably resolved this before because a series of restarts shifted the excluded port ranges, or I had simply been lucky enough in the past to start dronekit-sitl before Docker ran on my system.
Either way, to resolve the Operation not permitted error, running these commands as admin:
net stop winnat
net start winnat
fixed dronekit-sitl without having to specify a different port than the default 5760.
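If the winnat restart only helps temporarily (Hyper-V may re-reserve a different range on the next boot), a commonly suggested follow-up is to permanently exclude the port you need before winnat can grab it. A sketch, to be run in an elevated prompt:

```shell
# Stop the Windows NAT driver so its dynamic reservations are released
net stop winnat

# Reserve TCP 5760 ourselves so Hyper-V/Docker can't claim it later
netsh int ipv4 add excludedportrange protocol=tcp startport=5760 numberofports=1

# Restart the NAT driver
net start winnat
```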

Unable to access tomcat manager 8080 in Google Cloud

I had been using Amazon EC2 to run my Tomcat+MySQL website for a while and am now migrating to Google Cloud Platform. I started a Compute Engine instance (Ubuntu 16.04), connected to it via ssh, and used apt-get to install mysql/tomcat7.
The problem I encountered is that Tomcat will not start. The catalina.out log doesn't have a "Server startup in xxx ms" message, and I can't connect to port 8080 via a browser.
The last several lines of catalina.out are:
Jul 10, 2017 7:06:20 PM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 928 ms
Jul 10, 2017 7:06:20 PM org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina
Jul 10, 2017 7:06:20 PM org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.68 (Ubuntu)
Jul 10, 2017 7:06:20 PM org.apache.catalina.startup.HostConfig deployDescriptor INFO: Deploying configuration descriptor /etc/tomcat7/Catalina/localhost/host-manager.xml
Jul 10, 2017 7:06:21 PM org.apache.catalina.startup.TldConfig execute INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
When I use netstat to check, it shows that user tomcat7 is listening on 8080:
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 115 32984 -
$ id -u tomcat7
115
When I try to wget localhost:8080 in the ssh terminal, it shows
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response...
and just hangs there.
Any ideas or suggestions would be greatly appreciated!
Update
It turns out that the firewall is not the root cause of the problem; Tomcat works even without allowing port 8443 (of course you need to allow 8080). The reason there's no "Server startup" message is that Tomcat takes an extremely long time to start (1346049 ms the first time, 354034 ms when restarted, with no web app installed except for the default index.html), and the reason it doesn't respond to requests is also that it has not finished starting up yet.
This is the first time I have seen Tomcat take so long to start, which is why I didn't realize it at first. I suspect (after some searching) this is caused by Tomcat's JAR scanning. I will keep updating this question once I have more detail.
Update - Problem Solved
It turns out that I encountered the same problem described here, and the solution is here. In short, much of the time is consumed by the following task:
Creation of SecureRandom instance for session ID generation using [SHA1PRNG]
which requires Java to read /dev/random for random numbers. /dev/random typically gets its entropy from keyboard/mouse input, which cannot provide enough randomness on a headless virtual machine. The entropy pool gets "used up" during computation, causing long waits. The solution is to install haveged, which uses other sources to provide randomness (details in the link).
I installed haveged, and now Tomcat takes only about a second to start and everything works normally.
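As an alternative to haveged, a common JVM-side workaround is to point the SecureRandom seed source at the non-blocking /dev/urandom. A sketch; the JAVA_OPTS location assumes Ubuntu's tomcat7 packaging:

```shell
# /etc/default/tomcat7 -- append to the existing JAVA_OPTS line.
# The extra "/./" works around a historical JDK check that special-cases
# the literal string "file:/dev/urandom".
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"

# On the VM, you can see how starved the blocking entropy pool is:
cat /proc/sys/kernel/random/entropy_avail
```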
Thanks for asking such an interesting question.
I've never used Google Cloud services, but I managed to replicate your issue.
After reading a little, I found that you need to update your firewall rules to allow access to port 8080.
Go to:
1) Hamburger icon (upper left)
2) Networking
3) Firewall Rules
4) Add new
I created one called 'allow-tomcat7' with these properties:
Description
Enables Tomcat 7 access
Network
default
Priority
1000
Direction
Ingress
Action on match
Allow
Source filters
IP ranges
0.0.0.0/0
Protocols and ports
tcp:8080
tcp:8443
udp:8080
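The same rule can be created from the gcloud CLI instead of the console. A sketch; the flags assume a reasonably current gcloud, and the rule name, network, and port list mirror the values above:

```shell
gcloud compute firewall-rules create allow-tomcat7 \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8080,tcp:8443,udp:8080 \
    --source-ranges=0.0.0.0/0 \
    --description="Enables Tomcat 7 access"
```

As noted below, 0.0.0.0/0 opens the port to everyone; restrict --source-ranges to your own IP where possible.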
There's an option for 'target tags' when you edit the configuration; although I created a tag and applied it only to my new instance, it didn't work. I had to remove the target tags, and then it worked like a charm.
Make sure you allow access only for your IP address!
You'll need to adjust your security settings, otherwise you'll become a honeypot. Once I enabled the port for everyone, several bots started to scan it:
daychuzleo@testing-tomcat:~$ sudo tcpdump -i ens4 port 8080
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes
20:39:31.437634 IP 170.251.221.183.54162 > testing-tomcat.c.hip-river-163201.internal.http-alt: Flags [.], seq 1638030511:1638030512, ack 1250919796, win 259, length 1: HTTP
20:39:31.437665 IP testing-tomcat.c.hip-river-163201.internal.http-alt > 170.251.221.183.54162: Flags [.], ack 1, win 231, options [nop,nop,sack 1 {0:1}], length 0
20:39:37.133899 IP 170.251.221.183.53878 > testing-tomcat.c.hip-river-163201.internal.http-alt: Flags [.], seq 2436191518:2436191519, ack 4071767590, win 259, length 1: HTTP
20:39:37.133930 IP testing-tomcat.c.hip-river-163201.internal.http-alt > 170.251.221.183.53878: Flags [.], ack 1, win 222, options [nop,nop,sack 1 {0:1}], length 0
20:39:51.379839 IP 170.251.221.183.54162 > testing-tomcat.c.hip-river-163201.internal.http-alt: Flags [F.], seq 1, ack 1, win 259, length 0
20:39:51.392375 IP 170.251.221.183.47923 > testing-tomcat.c.hip-river-163201.internal.http-alt: Flags [S], seq 1420913913, win 8192, options [mss 1386,nop,wscale 8,nop,nop,sackOK,unknown-76 0x01010a18e9680005,unknown-76 0x0c01,nop,eol], length 0
20:39:51.392410 IP testing-tomcat.c.hip-river-163201.internal.http-alt > 170.251.221.183.47923: Flags [S.], seq 507557961, ack 1420913914, win 28400, options [mss 1420,nop,nop,sackOK,nop,wscale 7], length 0
20:39:51.421934 IP testing-tomcat.c.hip-river-163201.internal.http-alt > 170.251.221.183.54162: Flags [.], ack 2, win 231, length 0
20:39:51.586555 IP 170.251.221.183.47923 > testing-tomcat.c.hip-river-163201.internal.http-alt: Flags [.], ack 1, win 259, length 0
20:39:51.590317 IP 170.251.221.183.47923 > testing-tomcat.c.hip-river-163201.internal.http-alt: Flags [P.], seq 1:389, ack 1, win 259, length 388: HTTP: GET / HTTP/1.1
20:39:51.590337 IP testing-tomcat.c.hip-river-163201.internal.http-alt > 170.251.221.183.47923: Flags [.], ack 389, win 231, length 0
I was unable to make it work with wget, but I think with this you'll figure it out.
UPDATE:
I forgot to mention some things you may not have configured:
- Allow the firewall rules for HTTP and HTTPS on your VM instance.
- Try using a web browser (Chrome, Firefox); don't use wget.
- Verify that you're not being filtered by your company firewall: test with 4G on your cell phone or an unrestricted network, or just ask your IT team to allow you access to the temporary public IP (and port) generated each time.
- Start the service using:
sudo service tomcat7 start
- Try reinstalling Tomcat.
Other things I did (in the research process):
Moving the service to IPv4 instead of IPv6
daychuzleo@testing-tomcat:~$ netstat -ntpl
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
To do this, edit the Tomcat defaults file and add the IPv4 option to JAVA_OPTS:
vim /etc/default/tomcat
JAVA_OPTS="-Djava.awt.headless=true -Xmx128m -XX:+UseConcMarkSweepGC -Djava.net.preferIPv4Stack=true"
Disable the 8443 redirection
Comment out the "redirectPort" attribute in server.xml:
vim /etc/tomcat/server.xml
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           address="0.0.0.0" />
<!-- redirectPort="8443" -->
Verify each change by restarting your tomcat instance.

TFTP error: 'File not found'

I am using minicom on native Kali Linux (Linux 4.6.0-kali1-amd64 x86_64) to install embedded Linux on an STM32F746G-DISCO.
After setting up the TFTP protocol and the Ethernet connection with the board, and after building the kernel and putting the image in the appropriate folder (/tftpboot/stm32f7/uImage is the path and name of the image), I start minicom to communicate with the board.
Communication with the board works fine, but the problem is that the board somehow cannot read the kernel image, even though the path is correct:
STM32F746-DISCO> reset
resetting ...
U-Boot 2010.03 (Dec 21 2015 - 04:18:19)
CPU : STM32F7 (Cortex-M7)
Freqs: SYSCLK=200MHz,HCLK=200MHz,PCLK1=50MHz,PCLK2=100MHz
Board: STM32F746 Discovery Rev 1.A, www.emcraft.com
DRAM: 8 MB
In: serial
Out: serial
Err: serial
Net: STM32_MAC
Hit any key to stop autoboot: 0
Auto-negotiation...completed.
STM32_MAC: link UP (100/Full)
Using STM32_MAC device
TFTP from server 172.17.4.1; our IP address is 172.17.4.206
Filename 'stm32f7/uImage'.
Load address: 0xc0007fc0
Loading: *
TFTP error: 'File not found' (1)
Not retrying...
Wrong Image Format for bootm command
ERROR: can't get kernel image!
The image folder and file are inside the tftp root with open permissions:
root@DESKTOP-26MQUER:/tftpboot/stm32f7# ls -la
drwxrwxrwx 2 root root 4096 gen 12 16:06 .
drwxrwxrwx 3 root root 4096 gen 10 14:36 ..
-rw-r--r-- 1 root root 0 gen 12 16:06 pippo
-rwxrwxrwx 1 root root 1384352 gen 12 16:02 uImage
The tftp xinetd config is this:
root@DESKTOP-26MQUER:/tftpboot/stm32f7# cat /etc/xinetd.d/tftp
service tftp
{
protocol = udp
port = 69
socket_type = dgram
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = /tftpboot
disable = no
}
Please note that the xinetd service is active.
I can't understand the problem; any guidance will be appreciated.
If you have checked every possible point in the tftp config and the issue is still there, you can try a standalone tftp server (standalone meaning not managed by xinetd):
1. Install tftpd-hpa
2. Configure tftpd-hpa
$ sudo vi /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/tftpboot"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="-l -c -s"
3. Start the tftp server
$ sudo service tftpd-hpa restart
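Whichever server you use, it's worth verifying the transfer locally before involving U-Boot. A sketch; it assumes the tftp-hpa client package is installed and uses the uImage path from the question:

```shell
# Confirm something is listening on UDP port 69
ss -ulpn | grep ':69'

# Fetch the image the same way U-Boot will (path relative to the tftp root)
cd /tmp
tftp -v 127.0.0.1 -c get stm32f7/uImage
ls -l uImage
```

If this local fetch also returns 'File not found', the problem is the server configuration (root directory or path), not the board.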
Even though this is an old thread: in my case, the problem was that (on CentOS) /usr/lib/systemd/system/tftp.service contained only -s [path to dir], and xinetd wasn't using the tftp config. Adding the switches from the xinetd tftp config to tftp.service solved my problem.

Ambari dashboard retrieving no statistics

I have a fresh install of Hortonworks Data Platform 2.2 on a small cluster (4 machines), but when I log in to the Ambari GUI, the majority of the dashboard stat boxes (HDFS disk usage, network usage, memory usage, etc.) are not populated with any statistics; instead they show the message:
No data There was no data available. Possible reasons include inaccessible Ganglia service
Clicking on the HDFS service link gives the following summary:
NameNode Started
SNameNode Started
DataNodes 4/4 DataNodes Live
NameNode Uptime Not Running
NameNode Heap n/a / n/a (0.0% used)
DataNodes Status 4 live / 0 dead / 0 decommissioning
Disk Usage (DFS Used) n/a / n/a (0%)
Disk Usage (Non DFS Used) n/a / n/a (0%)
Disk Usage (Remaining) n/a / n/a (0%)
Blocks (total) n/a
Block Errors n/a corrupt / n/a missing / n/a under replicated
Total Files + Directories n/a
Upgrade Status Upgrade not finalized
Safe Mode Status n/a
The Alerts and Health Checks box to the right of the screen is not displaying any information but if I click on the settings icon this opens the Nagios frontend and again, everything looks healthy here!
The install went smoothly (CentOS 6.5) and everything looks good as far as the services are concerned (all started, with a green tick next to each service name). There are some stats displayed on the dashboard: 4/4 DataNodes are live, 1/1 NodeManagers live, and 1/1 Supervisors are live. I can write files to HDFS, so it looks like a Ganglia issue?
The Ganglia daemon seems to be working ok:
ps -ef | grep gmond
nobody 1720 1 0 12:54 ? 00:00:44 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPHistoryServer/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPHistoryServer/gmond.pid
nobody 1753 1 0 12:54 ? 00:00:44 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPFlumeServer/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPFlumeServer/gmond.pid
nobody 1790 1 0 12:54 ? 00:00:48 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPHBaseMaster/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPHBaseMaster/gmond.pid
nobody 1821 1 1 12:54 ? 00:00:57 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPKafka/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPKafka/gmond.pid
nobody 1850 1 0 12:54 ? 00:00:44 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPSupervisor/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPSupervisor/gmond.pid
nobody 1879 1 0 12:54 ? 00:00:45 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPSlaves/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPSlaves/gmond.pid
nobody 1909 1 0 12:54 ? 00:00:48 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPResourceManager/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPResourceManager/gmond.pid
nobody 1938 1 0 12:54 ? 00:00:50 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPNameNode/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPNameNode/gmond.pid
nobody 1967 1 0 12:54 ? 00:00:47 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPNodeManager/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPNodeManager/gmond.pid
nobody 1996 1 0 12:54 ? 00:00:44 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPNimbus/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPNimbus/gmond.pid
nobody 2028 1 1 12:54 ? 00:00:58 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPDataNode/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPDataNode/gmond.pid
nobody 2057 1 0 12:54 ? 00:00:51 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPHBaseRegionServer/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPHBaseRegionServer/gmond.pid
I have checked the Ganglia service on each node; the processes are running as expected:
ps -ef | grep gmetad
nobody 2807 1 2 12:55 ? 00:01:59 /usr/sbin/gmetad --conf=/etc/ganglia/hdp/gmetad.conf --pid-file=/var/run/ganglia/hdp/gmetad.pid
I have tried restarting the Ganglia services with no luck, and have restarted all services, but it's still the same. Does anyone have any ideas how I can get the dashboard to work properly? Thank you.
It turned out to be a proxy issue. To access the internet, I had to add my proxy details to the file /var/lib/ambari-server/ambari-env.sh:
export AMBARI_JVM_ARGS=$AMBARI_JVM_ARGS' -Xms512m -Xmx2048m -Dhttp.proxyHost=theproxy -Dhttp.proxyPort=80 -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false'
When Ganglia tried to access each node in the cluster, the request went via the proxy and never resolved. To overcome this, I added my nodes to the exclude list (with the -Dhttp.nonProxyHosts flag), like so:
export AMBARI_JVM_ARGS=$AMBARI_JVM_ARGS' -Xms512m -Xmx2048m -Dhttp.proxyHost=theproxy -Dhttp.proxyPort=80 -Dhttp.nonProxyHosts="localhost|node1.dms|node2.dms|node3.dms|etc" -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false'
After adding the exclude list the stats were retrieved as expected!
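For the JVM args in ambari-env.sh to take effect, the Ambari server has to be restarted. A sketch:

```shell
sudo ambari-server restart

# Tail the log to confirm it came back up cleanly
tail -n 20 /var/log/ambari-server/ambari-server.log
```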

RethinkDB: why does rethinkdb service use so much memory?

After encountering situations where I found the rethinkdb service down for unknown reasons, I noticed it uses a lot of memory:
# free -m
total used free shared buffers cached
Mem: 7872 7744 128 0 30 68
-/+ buffers/cache: 7645 226
Swap: 4031 287 3744
# top
top - 23:12:51 up 7 days, 1:16, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 133 total, 1 running, 132 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8061372k total, 7931724k used, 129648k free, 32752k buffers
Swap: 4128760k total, 294732k used, 3834028k free, 71260k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1835 root 20 0 7830m 7.2g 5480 S 1.0 94.1 292:43.38 rethinkdb
29417 root 20 0 15036 1256 944 R 0.3 0.0 0:00.05 top
1 root 20 0 19364 1016 872 S 0.0 0.0 0:00.87 init
# cat log_file | tail -9
2014-09-22T21:56:47.448701122 0.052935s info: Running rethinkdb 1.12.5 (GCC 4.4.7)...
2014-09-22T21:56:47.452809839 0.057044s info: Running on Linux 2.6.32-431.17.1.el6.x86_64 x86_64
2014-09-22T21:56:47.452969820 0.057204s info: Using cache size of 3327 MB
2014-09-22T21:56:47.453169285 0.057404s info: Loading data from directory /rethinkdb_data
2014-09-22T21:56:47.571843375 0.176078s info: Listening for intracluster connections on port 29015
2014-09-22T21:56:47.587691636 0.191926s info: Listening for client driver connections on port 28015
2014-09-22T21:56:47.587912507 0.192147s info: Listening for administrative HTTP connections on port 8080
2014-09-22T21:56:47.595163724 0.199398s info: Listening on addresses
2014-09-22T21:56:47.595167377 0.199401s info: Server ready
That seems like a lot considering the size of the data files:
# du -h
4.0K ./tmp
156M .
Do I need to configure a different cache size? Do you think this has something to do with the service surprisingly disappearing? I'm using v1.12.5.
There were a few leaks in previous versions, the main one being https://github.com/rethinkdb/rethinkdb/issues/2840
You should probably update RethinkDB -- the current version is 1.15.
If you run 1.12, you need to export your data first, but that should be the last time you need to, since 1.14 introduced seamless migrations.
From Understanding RethinkDB memory requirements - RethinkDB:
By default, RethinkDB automatically configures the cache size limit according to the formula (available_mem - 1024 MB) / 2.
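Working backwards from the asker's log line ("Using cache size of 3327 MB"), that formula can be sanity-checked in a couple of lines. A sketch; the 7678 MB figure is just the formula inverted, not a measured value:

```python
def default_cache_mb(available_mem_mb):
    """RethinkDB's documented default cache limit: (available_mem - 1024 MB) / 2."""
    return (available_mem_mb - 1024) // 2

# Inverting the formula for the log line above: 3327 * 2 + 1024 = 7678 MB available
print(default_cache_mb(7678))   # 3327
print(default_cache_mb(8192))   # 3584, i.e. an 8 GB box gets a ~3.5 GB cache
```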
You can change this via a config file as they document, or change it with a size (in MB) from the command line:
rethinkdb --cache-size 2048
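The equivalent permanent setting goes in the instance config file. A sketch; the path assumes the packaged Linux install:

```ini
# /etc/rethinkdb/instances.d/default.conf
cache-size=2048
```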

H2 Console starts on IP address which is not mine

If I run
H2JAR=/common/home/jjs/.m2/repository/com/h2database/h2/1.3.168/h2-1.3.168.jar
java -cp $H2JAR org.h2.tools.Server $*
I get
Web Console server running at http://68.178.232.99:8082 (only local connections)
TCP server running at tcp://68.178.232.99:9092 (only local connections)
PG server running at pg://68.178.232.99:5435 (only local connections)
But I have
1004 ~\>traceroute 68.178.232.99
traceroute to 68.178.232.99 (68.178.232.99), 30 hops max, 60 byte packets
1 190.33.189.161 (190.33.189.161) 9.145 ms 9.023 ms 9.467 ms
2 172.31.36.254 (172.31.36.254) 171.169 ms 171.083 ms 170.976 ms
3 10.255.6.9 (10.255.6.9) 170.811 ms 170.641 ms 170.529 ms
4 ge-0-0-0.bal1-int-1.jf1-agr-1.cwpanama.net (201.224.254.237) 170.416 ms 170.306 ms 170.193 ms
5 so-7-1-3.mia11.ip4.tinet.net (216.221.158.49) 185.066 ms 186.763 ms 188.797 ms
6 xe-2-2-0.mia10.ip4.tinet.net (89.149.184.254) 189.751 ms xe-8-0-0.mia10.ip4.tinet.net (89.149.180.185) 202.777 ms xe-1-0-0.mia10.ip4.tinet.net (89.149.183.21) 202.611 ms
7 ge-0-2-2.mpr2.mia1.us.above.net (64.125.13.81) 211.130 ms 215.839 ms 217.518 ms
8 xe-4-0-0.cr2.iah1.us.above.net (64.125.30.202) 219.719 ms 221.003 ms 228.238 ms
9 xe-1-1-0.mpr4.phx2.us.above.net (64.125.30.149) 219.337 ms 225.518 ms 228.868 ms
10 209.66.64.6.t01121-04.above.net (209.66.64.6) 228.763 ms 214.909 ms 215.359 ms
my host file is:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
H2 tries to detect the IP address of your computer, and it seems this doesn't work correctly in your case. Could you run the network test of the H2 database? You would need to download the H2 .zip file, expand it, chmod the build.sh file, and then run:
./build.sh testNetwork
In my case the result is:
Target: testNetwork
localhost:
localhost/127.0.0.1
localhost/127.0.0.1
localhost/0:0:0:0:0:0:0:1
localhost/fe80:0:0:0:0:0:0:1%1
getLocalHost:Thomass-MacBook-Pro.local/192.168.0.104
/192.168.0.104
byName:/192.168.0.104
ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=63498]
time: 0
server accepting
client:/192.168.0.104:63498
time: 8
server accepted:Socket[addr=/192.168.0.104,port=63499,localport=63498]
client:Socket[addr=/192.168.0.104,port=63498,localport=63499]
time: 2
server read:123
client read:234
server closing
server done
time: 202
done
Done in 1626 ms
This will not solve the problem, but it will give more information about what H2 tries to do to detect the IP address.
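If it turns out the detected address is simply wrong (e.g. your hostname resolves to a stale public IP), H2 also honors a bind-address override via the h2.bindAddress system property, which is documented for the server tools. A sketch, reusing the launch command from the question:

```shell
H2JAR=/common/home/jjs/.m2/repository/com/h2database/h2/1.3.168/h2-1.3.168.jar
java -Dh2.bindAddress=127.0.0.1 -cp $H2JAR org.h2.tools.Server
```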
