I'm developing a website and I'm having problems with AWS Route 53.
Initially I was hosting on DigitalOcean, so I set up the A records to point at the server's IP and updated the nameservers at my registrar (GoDaddy).
Everything was working fine.
Now I'm hosting on Heroku, but the name isn't resolving.
If I visit teamcomp.net, I correctly get redirected to www.teamcomp.net, so I'd think the S3 bucket is configured correctly.
But after that, Firefox can't find the server at www.teamcomp.net.
Server not found
Check the address for typing errors such as ww.example.com instead of www.example.com
If you are unable to load any pages, check your computer's network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.
I followed the instructions at https://devcenter.heroku.com/articles/route-53: I added the domains to the Heroku app, and the nameservers at the registrar are set correctly.
$ heroku domains
=== mysterious-reef-3637 Heroku Domain
mysterious-reef-3637.herokuapp.com
=== mysterious-reef-3637 Custom Domains
Domain Name DNS Target
──────────────── ──────────────────────────────────
www.teamcomp.net mysterious-reef-3637.herokuapp.com
teamcomp.net mysterious-reef-3637.herokuapp.com
This is the configuration I currently have. It looks to me like I'm doing everything the guide says, but I cannot reach the website from that domain (it works if I use the Heroku domain).
$ host teamcomp.net
Host teamcomp.net not found: 2(SERVFAIL)
What can I do to fix it?
Additional info
$ dig +recurse +trace teamcomp.net any
; <<>> DiG 9.9.5-3ubuntu0.5-Ubuntu <<>> +recurse +trace teamcomp.net any
;; global options: +cmd
. 304830 IN NS g.root-servers.net.
. 304830 IN NS h.root-servers.net.
. 304830 IN NS i.root-servers.net.
. 304830 IN NS j.root-servers.net.
. 304830 IN NS k.root-servers.net.
. 304830 IN NS l.root-servers.net.
. 304830 IN NS m.root-servers.net.
. 304830 IN NS a.root-servers.net.
. 304830 IN NS b.root-servers.net.
. 304830 IN NS c.root-servers.net.
. 304830 IN NS d.root-servers.net.
. 304830 IN NS e.root-servers.net.
. 304830 IN NS f.root-servers.net.
;; Received 755 bytes from 127.0.1.1#53(127.0.1.1) in 3393 ms
net. 172800 IN NS f.gtld-servers.net.
net. 172800 IN NS a.gtld-servers.net.
net. 172800 IN NS b.gtld-servers.net.
net. 172800 IN NS c.gtld-servers.net.
net. 172800 IN NS l.gtld-servers.net.
net. 172800 IN NS h.gtld-servers.net.
net. 172800 IN NS g.gtld-servers.net.
net. 172800 IN NS j.gtld-servers.net.
net. 172800 IN NS i.gtld-servers.net.
net. 172800 IN NS k.gtld-servers.net.
net. 172800 IN NS d.gtld-servers.net.
net. 172800 IN NS e.gtld-servers.net.
net. 172800 IN NS m.gtld-servers.net.
net. 86400 IN DS 35886 8 2 7862B27F5F516EBE19680444D4CE5E762981931842C465F00236401D 8BD973EE
net. 86400 IN RRSIG DS 8 1 86400 20151224170000 20151214160000 62530 . Kmd1EaxlpKS2T8PZIV/HWmZe8glRgOKjgtjfuvx4D4YDPGRnyOxWXVql 4Y8srSFmvDPDR382hMQWLaOwjnVCO4dMWPnRIoYvzqo05a2/7EOJDXlV 6WczFZKy+9M7oUj4XeeHrpi04ypUj/gXvnCMNKk3/5QJl4T8MovWEHeu hXw=
;; Received 733 bytes from 192.228.79.201#53(b.root-servers.net) in 7182 ms
teamcomp.net. 172800 IN NS ns-1351.awsdns-40.org.
teamcomp.net. 172800 IN NS ns-2043.awsdns-63.co.uk.
teamcomp.net. 172800 IN NS ns-210.awsdns-26.com.
teamcomp.net. 172800 IN NS ns-526.awsdns-01.net.
A1RT98BS5QGC9NFI51S9HCI47ULJG6JH.net. 86400 IN NSEC3 1 1 0 - A1RUUFFJKCT2Q54P78F8EJGJ8JBK7I8B NS SOA RRSIG DNSKEY NSEC3PARAM
A1RT98BS5QGC9NFI51S9HCI47ULJG6JH.net. 86400 IN RRSIG NSEC3 8 2 86400 20151218061702 20151211050702 37703 net. QEvwTsJgNbCEgO6sLxxz09CG5Ugs4hPXoRo+8W5o4Xn5nrkdN7lw0rNI DFS/C6dJtShsOkX2/0EIpp8DaGAvjgTs6jLu+oZzgTaedKHSk0cQUPVf EcGNbbpp8FCHz0yUMBes9FPg8WAe+DXNZ++lCjtK5aO89EEWJqNEOjfP vmA=
UC4NBKDSCVJ8ARDU0BVH1BBDDQ15GR8I.net. 86400 IN NSEC3 1 1 0 - UC518K8QR415HBJULMN8MLAPPT20CKR1 NS DS RRSIG
UC4NBKDSCVJ8ARDU0BVH1BBDDQ15GR8I.net. 86400 IN RRSIG NSEC3 8 2 86400 20151220062946 20151213051946 37703 net. Re7MIW4RzyQdlEfoIM1TrQIq8mG5VvLlGvDfba+NeUAbnKNZMmW+WCYr n3Jktc9xVXJoecBZg+CSTG03CWqG8GkA8RiQQjAVKF1dcWRph6mGLglM crgzBFvK4H+uo6WJDkOowm7jA736J/9/FWJ1GoBXMoFMvz/HmPiujpRR Hgs=
;; Received 695 bytes from 192.43.172.30#53(i.gtld-servers.net) in 548 ms
teamcomp.net. 5 IN A 54.231.131.76
teamcomp.net. 172800 IN NS ns-1351.awsdns-40.org.
teamcomp.net. 172800 IN NS ns-2043.awsdns-63.co.uk.
teamcomp.net. 172800 IN NS ns-210.awsdns-26.com.
teamcomp.net. 172800 IN NS ns-526.awsdns-01.net.
teamcomp.net. 900 IN SOA ns-210.awsdns-26.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
;; Received 255 bytes from 205.251.194.14#53(ns-526.awsdns-01.net) in 494 ms
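Note that in the trace above the Route 53 nameservers do answer: the final query to ns-526.awsdns-01.net returns the A record. Querying that server directly gives the same result, for example:
$ dig @ns-526.awsdns-01.net teamcomp.net
So the zone itself seems reachable; only my local resolver returns SERVFAIL.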
So it looks like an incorrectly configured university DNS resolver. Switching to any other public DNS resolver should solve the problem: http://www.circleid.com/posts/20110407_top_public_dns_resolvers_compared/.
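Before changing anything, you can confirm this by querying a public resolver directly, e.g. Google's 8.8.8.8:
$ dig @8.8.8.8 www.teamcomp.net
If that resolves while your default resolver returns SERVFAIL, the local resolver is indeed at fault.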
Here are some tutorials on changing your DNS server on Windows 10 and OS X:
http://solverbase.com/w/Windows_10:_Changing_DNS_Servers
http://www.plus.net/support/software/dns/changing_dns_mac.shtml
http://osxdaily.com/2015/06/02/change-dns-command-line-mac-os-x/
https://use.opendns.com/
https://developers.google.com/speed/public-dns/docs/using?hl=en
Hope it helps!
I have dronekit-sitl installed in a python3 virtual environment on my Windows 10 machine and have used it before by running dronekit-sitl copter with no issues. However, as of today I am running into what seems to be a permission issue when trying to execute the ArduCopter SITL.
$ dronekit-sitl copter
os: win, apm: copter, release: stable
SITL already Downloaded and Extracted.
Ready to boot.
Execute: C:\Users\kyrlon\.dronekit\sitl\copter-3.3\apm.exe --home=-35.363261,149.165230,584,353 --model=quad -I 0
SITL-0> Started model quad at -35.363261,149.165230,584,353 at speed 1.0
SITL-0.stderr> bind port 5760 for 0
Starting sketch 'ArduCopter'
bind failed on port 5760 - Operation not permitted
Starting SITL input
I'm not sure what might have triggered this new permission issue. I tried starting over with a fresh Python environment, but even after a complete PC shutdown I am still getting the error shown above.
It turns out that having Docker on my system was the culprit: it had excluded the port I was attempting to use, as mentioned in this SO post, which led me to this GitHub issue. Running the following command in an elevated terminal:
netsh interface ipv4 show excludedportrange protocol=tcp
This showed me the following excluded port ranges:
Protocol tcp Port Exclusion Ranges
Start Port End Port
---------- --------
1496 1595
1658 1757
1758 1857
1858 1957
1958 2057
2058 2157
2180 2279
2280 2379
2380 2479
2480 2579
2702 2801
2802 2901
2902 3001
3002 3101
3102 3201
3202 3301
3390 3489
3490 3589
3590 3689
3693 3792
3793 3892
3893 3992
3993 4092
4093 4192
4193 4292
4293 4392
4393 4492
4493 4592
4593 4692
4768 4867
4868 4967
5041 5140
5141 5240
5241 5340
5357 5357
5358 5457
5458 5557
5558 5657
5700 5700
5701 5800
8005 8005
8884 8884
15202 15301
15302 15401
15402 15501
15502 15601
15602 15701
15702 15801
15802 15901
15902 16001
16002 16101
16102 16201
16202 16301
16302 16401
16402 16501
16502 16601
16602 16701
16702 16801
16802 16901
16993 17092
17093 17192
50000 50059 *
* - Administered port exclusions.
It turns out that Docker (or possibly Hyper-V) had excluded a range that includes 5760:
5701 5800
As mentioned in the GitHub issue, I had probably resolved this before because restarts shift the excluded port ranges, or I simply got lucky in the past by starting dronekit-sitl before Docker ran on my system.
Either way, to resolve the Operation not permitted error, running these commands as admin:
net stop winnat
net start winnat
solved the issue with dronekit-sitl without having to specify a different port besides the default 5760.
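If the dynamic exclusions come back after a reboot, a follow-up workaround often suggested alongside this one (not something I needed here) is to permanently reserve the port yourself from an elevated terminal while winnat is stopped, so it cannot be claimed again:
netsh interface ipv4 add excludedportrange protocol=tcp startport=5760 numberofports=1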
I am using Elasticsearch 1.4.2 with the river plugin (org.xbib.elasticsearch.plugin.jdbc.river.JDBCRiverPlugin, version 1.4.0.4) on an AWS instance with 8 GB of RAM. Everything was working fine for a week, but then the river plugin stopped working, and I was also unable to log in to the server over SSH. After a server restart, SSH login worked fine, and when I checked the Elasticsearch logs I found this error:
[2015-01-29 09:00:59,001][WARN ][river.jdbc.SimpleRiverFlow] no river mouth
[2015-01-29 09:00:59,001][ERROR][river.jdbc.RiverThread ] java.lang.OutOfMemoryError: unable to create new native thread
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: unable to create new native thread
After restarting the service, everything works normally, but after a certain interval the same thing happens again. Can anyone tell me what the reason and solution could be? If any other details are required, please let me know.
When I checked the number of file descriptors using
sudo ls /proc/1503/fd/ | wc -l
I can see that it keeps increasing over time: it was 320 and has now reached 360. And
sudo grep -E "^Max open files" /proc/1503/limits
this shows 65535
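Since the error complains about native threads rather than heap, it may also be worth tracking the thread count of the process and the per-user process limit (a diagnostic sketch, assuming 1503 is still the Elasticsearch PID):
sudo grep Threads /proc/1503/status
ulimit -u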
processor info
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
stepping : 4
microcode : 0x415
cpu MHz : 2500.096
cache size : 25600 KB
siblings : 8
cpu cores : 4
memory
MemTotal: 62916320 kB
MemFree: 57404812 kB
Buffers: 102952 kB
Cached: 3067564 kB
SwapCached: 0 kB
Active: 2472032 kB
Inactive: 2479576 kB
Active(anon): 1781216 kB
Inactive(anon): 528 kB
Active(file): 690816 kB
Inactive(file): 2479048 kB
Do the following. First, run these two commands as root:
ulimit -l unlimited
ulimit -n 64000
In /etc/elasticsearch/elasticsearch.yml make sure you uncomment or add a line that says:
bootstrap.mlockall: true
In /etc/default/elasticsearch uncomment the line (or add a line) that says MAX_LOCKED_MEMORY=unlimited and also set the ES_HEAP_SIZE line to a reasonable number. Make sure it's a high enough amount of memory that you don't starve elasticsearch, but it should not be higher than half the memory on your system generally and definitely not higher than ~30GB. I have it set to 8g on my data nodes.
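For example, the relevant lines in /etc/default/elasticsearch could end up looking like this (8g is just the value from my data nodes; size it for your own machine):
MAX_LOCKED_MEMORY=unlimited
ES_HEAP_SIZE=8g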
In one way or another the process is obviously being starved of resources. Give your system plenty of memory and give elasticsearch a good part of that.
I think you need to analyze your server log. Maybe in /var/log/messages.
I'm trying to run Apache Bench via Ruby with the backticks method. Normal Apache Bench output looks like this:
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking stackoverflow.com (be patient).....done
Server Software:
Server Hostname: stackoverflow.com
Server Port: 80
Document Path: /
Document Length: 0 bytes
Concurrency Level: 2
Time taken for tests: 0.752 seconds
Complete requests: 10
Failed requests: 0
Total transferred: 4600 bytes
HTML transferred: 0 bytes
Requests per second: 13.30 [#/sec] (mean)
Time per request: 150.421 [ms] (mean)
Time per request: 75.210 [ms] (mean, across all concurrent requests)
Transfer rate: 5.97 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 72 73 0.4 73 73
Processing: 74 77 2.2 77 81
Waiting: 74 77 2.2 77 81
Total: 147 150 2.2 151 153
The Ruby command I'm trying is:
results = `ab -n 10 -c 2 http://stackoverflow.com/`
It seems to work fine in IRB, and I get all the results back. But when I try to hook it into my Sinatra app, I only get this back:
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking stackoverflow.com (be patient).....
It's as if the command isn't waiting for the ab tests to finish. What's going on here?
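One detail worth keeping in mind here: backticks capture only stdout, and anything ab writes to stderr bypasses results entirely. A quick way to check which stream carries which part of the output (a diagnostic sketch):
$ ab -n 10 -c 2 http://stackoverflow.com/ 2>/dev/null    # stdout only
$ ab -n 10 -c 2 http://stackoverflow.com/ 1>/dev/null    # stderr only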
After encountering situations where the rethinkdb service was down for unknown reasons, I noticed it uses a lot of memory:
# free -m
total used free shared buffers cached
Mem: 7872 7744 128 0 30 68
-/+ buffers/cache: 7645 226
Swap: 4031 287 3744
# top
top - 23:12:51 up 7 days, 1:16, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 133 total, 1 running, 132 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8061372k total, 7931724k used, 129648k free, 32752k buffers
Swap: 4128760k total, 294732k used, 3834028k free, 71260k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1835 root 20 0 7830m 7.2g 5480 S 1.0 94.1 292:43.38 rethinkdb
29417 root 20 0 15036 1256 944 R 0.3 0.0 0:00.05 top
1 root 20 0 19364 1016 872 S 0.0 0.0 0:00.87 init
# cat log_file | tail -9
2014-09-22T21:56:47.448701122 0.052935s info: Running rethinkdb 1.12.5 (GCC 4.4.7)...
2014-09-22T21:56:47.452809839 0.057044s info: Running on Linux 2.6.32-431.17.1.el6.x86_64 x86_64
2014-09-22T21:56:47.452969820 0.057204s info: Using cache size of 3327 MB
2014-09-22T21:56:47.453169285 0.057404s info: Loading data from directory /rethinkdb_data
2014-09-22T21:56:47.571843375 0.176078s info: Listening for intracluster connections on port 29015
2014-09-22T21:56:47.587691636 0.191926s info: Listening for client driver connections on port 28015
2014-09-22T21:56:47.587912507 0.192147s info: Listening for administrative HTTP connections on port 8080
2014-09-22T21:56:47.595163724 0.199398s info: Listening on addresses
2014-09-22T21:56:47.595167377 0.199401s info: Server ready
That seems like a lot considering the size of the files:
# du -h
4.0K ./tmp
156M .
Do I need to configure a different cache size? Do you think it has something to do with the service unexpectedly going down? I'm using v1.12.5.
There were a few leaks in previous versions, the main one being https://github.com/rethinkdb/rethinkdb/issues/2840.
You should probably update RethinkDB; the current version is 1.15.
If you run 1.12, you will need to export your data, but that should be the last time you need to do so, since 1.14 introduced seamless migrations.
From Understanding RethinkDB memory requirements - RethinkDB
By default, RethinkDB automatically configures the cache size limit according to the formula (available_mem - 1024 MB) / 2.
You can change this via a config file as they document, or change it with a size (in MB) from the command line:
rethinkdb --cache-size 2048
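For the config-file route, the equivalent setting is a single line (the path below assumes the stock Linux packaging layout and may differ on your system):
# e.g. in /etc/rethinkdb/instances.d/default.conf
cache-size=2048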
If I run
H2JAR=/common/home/jjs/.m2/repository/com/h2database/h2/1.3.168/h2-1.3.168.jar
java -cp $H2JAR org.h2.tools.Server $*
I get
Web Console server running at http://68.178.232.99:8082 (only local connections)
TCP server running at tcp://68.178.232.99:9092 (only local connections)
PG server running at pg://68.178.232.99:5435 (only local connections)
But when I traceroute that address, I get:
1004 ~\>traceroute 68.178.232.99
traceroute to 68.178.232.99 (68.178.232.99), 30 hops max, 60 byte packets
1 190.33.189.161 (190.33.189.161) 9.145 ms 9.023 ms 9.467 ms
2 172.31.36.254 (172.31.36.254) 171.169 ms 171.083 ms 170.976 ms
3 10.255.6.9 (10.255.6.9) 170.811 ms 170.641 ms 170.529 ms
4 ge-0-0-0.bal1-int-1.jf1-agr-1.cwpanama.net (201.224.254.237) 170.416 ms 170.306 ms 170.193 ms
5 so-7-1-3.mia11.ip4.tinet.net (216.221.158.49) 185.066 ms 186.763 ms 188.797 ms
6 xe-2-2-0.mia10.ip4.tinet.net (89.149.184.254) 189.751 ms xe-8-0-0.mia10.ip4.tinet.net (89.149.180.185) 202.777 ms xe-1-0-0.mia10.ip4.tinet.net (89.149.183.21) 202.611 ms
7 ge-0-2-2.mpr2.mia1.us.above.net (64.125.13.81) 211.130 ms 215.839 ms 217.518 ms
8 xe-4-0-0.cr2.iah1.us.above.net (64.125.30.202) 219.719 ms 221.003 ms 228.238 ms
9 xe-1-1-0.mpr4.phx2.us.above.net (64.125.30.149) 219.337 ms 225.518 ms 228.868 ms
10 209.66.64.6.t01121-04.above.net (209.66.64.6) 228.763 ms 214.909 ms 215.359 ms
My hosts file is:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
H2 tries to detect the IP address of your computer. It seems that doesn't work correctly in your case. Could you run the network test of the H2 database? You would need to download the H2 .zip file, expand it, chmod the build.sh file, and then run:
./build.sh testNetwork
In my case the result is:
Target: testNetwork
localhost:
localhost/127.0.0.1
localhost/127.0.0.1
localhost/0:0:0:0:0:0:0:1
localhost/fe80:0:0:0:0:0:0:1%1
getLocalHost:Thomass-MacBook-Pro.local/192.168.0.104
/192.168.0.104
byName:/192.168.0.104
ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=63498]
time: 0
server accepting
client:/192.168.0.104:63498
time: 8
server accepted:Socket[addr=/192.168.0.104,port=63499,localport=63498]
client:Socket[addr=/192.168.0.104,port=63498,localport=63499]
time: 2
server read:123
client read:234
server closing
server done
time: 202
done
Done in 1626 ms
This will not solve the problem, but it will give more information about what H2 tries to do to detect the IP address.
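If the test confirms that the auto-detection picks the wrong address, one thing to try is pinning the bind address with the h2.bindAddress system property (hedged: verify that your H2 version supports it):
java -Dh2.bindAddress=127.0.0.1 -cp $H2JAR org.h2.tools.Server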