IBM Cloud Private CE - Change all console URLs to DNS name instead of IP address - ibm-cloud-private

I have a successful single-node install of ICP 3.1.0 CE. I want to access the console using a fully qualified DNS name instead of an IP address, and have a public wildcard certificate which I wish to use to secure console access.
I was able to add both the myhostname-only and myhostname.mydomain.com variants to the console and change the console to use my public certificate, so that is all working properly. But when I log into the console using myhostname.mydomain.com and look at the URLs associated with the interface items, some refer (correctly) to paths anchored at myhostname.mydomain.com, while others (e.g. Catalog and some items under Platform) refer to paths anchored off the IP address.
Is there a way to change this behavior, such that FQDNs are used consistently throughout, without reinstalling ICP?
If not, and if the mixed results I see are because I did something boneheaded during install, can someone clarify what I should do to ensure that all paths post-installation are FQDNs instead of IP addresses?
Thanks!

I was unable to find a way to correct the discrepancies in URLs, so I followed Justin's lead... I deleted the cluster, explicitly set cluster_lb_address to the desired FQDN in config.yaml, and reinstalled. Now all URLs are FQDNs.
I don't know if this is the recommended way to fix this issue, or if it is a bug, or if I was simply missing something... but setting cluster_lb_address achieved the desired result.
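For reference, a minimal sketch of what that looks like before reinstalling (the hostname is a placeholder; proxy_lb_address is an assumption, included only in case the proxy endpoints should use the FQDN as well):

# Relevant lines in the installer's cluster/config.yaml (values are illustrative);
# set these before running the install so the generated console URLs use the FQDN.
cluster_lb_address: myhostname.mydomain.com
proxy_lb_address: myhostname.mydomain.com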

Related

Connect a Heroku app to an Ionos domain name

I have developed an app and made it available via Heroku. Now I would like to add a custom domain name via Ionos, but I don't know how to configure it. When using EC2 instances I would configure a static IP address, but for Heroku I don't know what to do. I have checked other posts about this, but none are precise or recent about what to do.
Thanks for your attention and have a great day.
Had the exact same issue and here's how I made it work (just specifying I'm not an expert, so take this answer with a grain of salt):
First, go to your app's settings in Heroku and add the domain name you bought. It's important that you include the host when adding it, i.e. put either www. or *. at the beginning of the domain. Heroku will give you back a DNS target, which you will then need to use on Ionos.
Secondly, bind this DNS target on Ionos using a CNAME. Go to your domains, click the one in your list, then open DNS and click Add a record. Choose CNAME, put www as the host, and paste the DNS target you copied into the target field. Finally, confirm the changes.
Wait a few seconds/minutes, navigate to www.yourdomain.whatever and tada!
About static IP addresses: Heroku's docs cover this, and that approach won't work because the IPs are dynamic. So in a nutshell, use CNAMEs instead of A records.
Here are some docs if you want to dig more into this
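If you prefer the Heroku CLI to the dashboard for the first step, here is a rough sketch (the app and domain names are placeholders; the DNS target Heroku prints is what goes into the Ionos CNAME):

# Add the custom domain to the app; Heroku responds with a DNS target
# (something like whispering-willow-abc123.herokudns.com, illustrative).
heroku domains:add www.yourdomain.example --app your-app-name

# List the app's domains later to look up the DNS target again.
heroku domains --app your-app-name

# On the Ionos side, create a CNAME record with host "www" pointing at
# that DNS target (this part is done in the Ionos web UI).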

EC2, RHEL - No Route To Domain

This is probably incredibly simple and I'm just missing one step. The problem I was (originally) trying to solve was how to get a statically allocated hostname, one that would not change with each restart. I've done the following steps:
I have a domain registered on GoDaddy, and it points to my EIP. I use it to connect over SSH (putty) to my EC2 instance, so I know that part is working. I've opened ports 9080, 9060, 9043, and 9443 as well as SSH and FTP ports. And I've installed and started the software that uses those ports, and that stuff normally just works on a local RHEL install, so I think what's different here is the custom domain name.
I've added my EIP and fully qualified host name to my /etc/hosts file.
I've added my fully qualified host name to my /etc/hostname file and modified the /etc/rc.local script to set the hostname properly on a restart, and that works. If I execute the command hostname, it returns my fully qualified hostname, so that looks ok.
I cannot ping my server, but I think that's ok, because probably amazon blocks pings. So I don't think that's a symptom of anything.
I cannot open a browser connection to http://myserver.mydomain:9080/, which normally just works. Here it just times out.
If I do a wget http://myserver.mydomain:9080 from inside the EC2 instance, it returns failed: No Route To Host
But if I do a wget against localhost instead of the fully qualified name I get what I expect as a response.
So.... routing tables? Do those need to change? And if so how?
You probably don't want to do what you did. Everything in EC2 is NAT'd, meaning the IP assigned to your instance is a private/internal IP and the public IP is mapped to it by the routing system.
So internally, you want everything to resolve to the private IP, or you will get charged for traffic as it has to get routed out to the edge and then back in. The public DNS name will resolve correctly (to the private IP) when queried from the default DNS servers inside EC2.
If you are using RHEL, you will need to make sure both the security group and the internal firewall (iptables) have the ports opened. You could just disable the internal firewall since it's a bit redundant with the security groups. On the other hand, it can provide some options security groups do not, if you need them.
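As a hedged example of the checks described above on an iptables-based RHEL instance (the hostname is illustrative, the ports are the ones from the question, and firewalld-based systems use different commands):

# From inside EC2, the public DNS name should resolve to the private IP
# (hostname is illustrative).
dig +short ec2-203-0-113-10.compute-1.amazonaws.com

# See what the instance-level firewall currently allows.
sudo iptables -L -n

# Open the application ports and persist the rules.
for port in 9080 9060 9043 9443; do
  sudo iptables -I INPUT -p tcp --dport "$port" -j ACCEPT
done
sudo service iptables save

# The EC2 security group must also allow these ports inbound.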

Recaptcha IP addresses

Okay, so we implement Recaptcha in production. We get errors because our server can't reach the IP address it needs to use the service. No problem: we open a port for that IP address to reach Google and configure that IP address explicitly, and it works great. Then, the next day, we start getting errors again because Recaptcha is using a different IP address. I can allow requests from that IP address too, but now I'm unsettled. Where are these addresses coming from? How do I configure this to work reliably?
Recaptcha from Google can use any Google IP address, and there are lots of them.
Ran this from Windows:
nslookup -type=TXT _netblocks.google.com
_netblocks.google.com text =
"v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"
Those are all the networks Google uses currently. These can change, so check them often.
Google suggests allowing port 80 outbound to all IPs, but this is highly insecure. They recommend going through a proxy server, but again that is highly insecure if your web server is in a DMZ. Proxy-aware trojans do exist. All that needs to be done is to exploit a vulnerability to execute arbitrary code, and an attacker can create a reverse connection on port 80 through a proxy server to download the payload. Then it is trivial to escalate privileges and own the box. I don't mean just Windows servers but Linux as well. I've done it in a lab environment with security turned on. It's really easy to do.
This is the Google website I got this from:
http://code.google.com/p/recaptcha/wiki/FirewallsAndRecaptcha
I wanted to append to this answer with more recent information. The documentation that Chris is pointing to does not include all of the TXT records necessary to dig (thanks Google):
_netblocks2.google.com (IPv6 subnets)
_netblocks3.google.com (Additional IPv4 subnets)
In my particular case, the _netblocks3 entry contained two large /19s that made my initial rule ineffective.
(I found additional references here: https://support.google.com/a/answer/60764?hl=en)
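If you want to pull all three records in one go, something like this works (it assumes dig is available and that the TXT records keep the ip4:/ip6: SPF format shown above):

# Collect Google's published netblocks from the three SPF TXT records.
for record in _netblocks.google.com _netblocks2.google.com _netblocks3.google.com; do
  dig +short TXT "$record"
done | tr ' ' '\n' | grep -E '^ip[46]:' | sed 's/^ip[46]://'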
Perhaps you should be using a hostname rather than an IP.

How do I get bind to use the DHCP dns for lookup?

I've got XAMPP set up on my laptop (OS X 10.6) for dev, and I wanted to use VirtualDocumentRoot so that I could do *.localhost and it would automap to the folder under my sites directory. I've got this all set up fine, and it works great, but when I got to work today, I found an issue with the way our LAN handles DNS.
Long story short, instead of checking the LAN DNS server for local domains, it goes out to the root. Is there a way to get bind to check the DHCP-supplied DNS server for addresses it's not responsible for? Or alternatively, is there a way to get my OS to use the DHCP DNS server first, and then fall back to the local one with minimal performance hit?
Thanks!
I'm using Arch Linux, but as Mac OS X is based on a *nix system, maybe these ideas help you:
Take a look at the file /etc/resolv.conf. In my setup this file is automatically generated by NetworkManager.
This document describes ways to update /etc/resolv.conf when dhcpcd, NetworkManager, or dhclient is used: https://wiki.archlinux.org/index.php/Dnsmasq#DHCP_Setup
This way you just prepend the local DNS before the DHCP-supplied DNS (or the static one, if you're switching to a static configuration). Make sure you remove all forwarders from your DNS server.
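As a sketch of the resulting resolver order (addresses are illustrative, and macOS may not honor /etc/resolv.conf the same way a Linux libc does):

# /etc/resolv.conf — local DNS first, DHCP-supplied LAN resolver second as a fallback.
nameserver 127.0.0.1
nameserver 192.168.1.1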
If macOS does not use these mechanisms, maybe this workaround gives you a hint, even if it's very limited:
Add a global name server (like Google's 8.8.8.8) to your DNS server's list of forwarders.
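A minimal sketch of that forwarder setup on the BIND side (a named.conf fragment; the addresses are illustrative, with the LAN's DNS listed ahead of the public one):

# Fragment for the options {} block of named.conf.
options {
    forwarders { 192.168.1.1; 8.8.8.8; };
    forward first;   # try forwarders before resolving from the roots
};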

How best to validate that a URL is on the public internet?

I want to validate that a hostname/IP address is on the public internet; that is, that as far as is reasonable, I'd get the same response from it no matter where I access it from (obviously that's impossible to guarantee).
I.e. I want to exclude localhost, 127.0.0.1, anything in the private IP ranges, and anything that has an invalid TLD.
Am I missing anything else that I ought to be checking?
And is there a better list than http://data.iana.org/TLD/tlds-alpha-by-domain.txt for a list of valid TLDs?
A valid TLD may still resolve to a local address if you do not have strict control over the DNS or /etc/hosts, so resolving and then excluding by IP range (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, 127.0.0.0/8) is best.
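A rough shell sketch of that resolve-then-exclude check (the glob patterns are a shortcut rather than proper CIDR matching, and only the first line of dig output is examined, so CNAME chains would need extra handling):

# Resolve the host and reject obviously non-public addresses.
host="example.com"
ip=$(dig +short A "$host" | head -n1)
case "$ip" in
  ""|127.*|10.*|192.168.*|169.254.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*)
    echo "not a public address: $host ($ip)" ;;
  *)
    echo "looks public: $host -> $ip" ;;
esac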
Your TLD list is up-to-date.
IANA is the official source for information on domain names, so you can't get a better list - or at least, you can't get any more authoritative.
Wouldn't the ultimate validation be to create a list of networks local to you (for instance, behind your own firewall) and, if the host isn't on one of them, try to connect to it? If you can connect and it's not local, you would have no reason to expect that any other location on the internet couldn't connect.
