Is there a way to get a single static IP address for a Heroku app? I'm trying to integrate various APIs that ask for an IP address. Because of Heroku's server setup, you never have one server with a static IP - instead your IP is dynamic.
I've looked into add-ons like Proximo; however, this appears to be a paid solution. Is there a way to get a static IP that you don't have to pay for?
You can use QuotaGuard Static Heroku add-on.
QuotaGuard can be attached to a Heroku application via the command line:
$ heroku addons:add quotaguardstatic
After installing, the application needs to be configured to integrate with the add-on.
When you sign up you will be given a unique username and password to use when configuring the proxy in your application.
A QUOTAGUARDSTATIC_URL setting will be available in the app configuration and will contain the full URL you should use to proxy your API requests.
This can be confirmed with the following command:
$ heroku config:get QUOTAGUARDSTATIC_URL
http://user:pass@static.quotaguard.com:9293
All requests that you make via this proxy will appear to the destination server to originate from one of the two static IPs you will be assigned when you sign up.
You can use rest-client (a simple HTTP and REST client for Ruby) to check your outbound IP:
$ gem install rest-client
Next, you can run the below example in an IRB session and verify that the final IP returned is one of your two static IPs.
$ irb
> require "rest-client"
> RestClient.proxy = 'http://user:pass@static.quotaguard.com:9293'
> res = RestClient.get("http://ip.jsontest.com")
> puts res.body   # should print one of your two static IPs
That's it:)
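If your app is in Python rather than Ruby, a comparable sketch (just an illustration, assuming the requests library and the QUOTAGUARDSTATIC_URL config var described above) would be:
import os
import requests

# QUOTAGUARDSTATIC_URL holds the full proxy URL, credentials included
proxies = {
    "http": os.environ.get("QUOTAGUARDSTATIC_URL", ""),
    "https": os.environ.get("QUOTAGUARDSTATIC_URL", ""),
}

# The echo service should report one of your two static IPs
r = requests.get("http://ip.jsontest.com", proxies=proxies)
print(r.text)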
Fixie is another option. Fixie is an add-on that provides Heroku applications with a fixed set of static IP addresses for outbound requests. It is language- and framework-agnostic.
Fixie is easy to set up and has "get started" documentation for Ruby, Node, Java, and Go; the Python walkthrough below follows the same pattern.
First you need to sign up for the free plan:
$ heroku addons:open fixie
Opening fixie for sharp-mountain-4005…
Next, the FIXIE_URL environment variable will be set in your app's configuration. To route a specific request through Fixie using the requests library:
import os, requests

proxyDict = {
    "http": os.environ.get('FIXIE_URL', ''),
    "https": os.environ.get('FIXIE_URL', ''),
}

r = requests.get('http://www.example.com', proxies=proxyDict)
With urllib2, the same functionality looks like this:
import os, urllib2
proxy = urllib2.ProxyHandler({'http': os.environ.get('FIXIE_URL', '')})
auth = urllib2.HTTPBasicAuthHandler()
opener = urllib2.build_opener(proxy, auth, urllib2.HTTPHandler)
response = opener.open('http://www.example.com')
html = response.read()
In both cases, these requests would come through a known IP address assigned by Fixie.
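To double-check which IP the destination actually sees, you could route a request to an echo service (ip.jsontest.com, as used earlier in this thread) through the same proxy; a rough sketch, assuming requests and the FIXIE_URL variable:
import os
import requests

fixie = os.environ.get('FIXIE_URL', '')
r = requests.get('http://ip.jsontest.com', proxies={"http": fixie, "https": fixie})
print(r.text)  # should show one of the static IPs Fixie assigned to you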
You can use Nginx as your reverse proxy. Edit your nginx.conf and set proxy_pass; make sure to set the Host header (via proxy_set_header) to your herokuapp domain:
upstream backend {
    server xxx.talenox.com;
}

server {
    listen 80;
    server_name rpb1.talenox.com;

    location / {
        proxy_pass http://backend;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host 'xxxxx.herokuapp.com';
    }
}
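Once DNS for rpb1.talenox.com points at the Nginx box (whose address is the fixed IP you would hand to the API provider), a quick sanity check could look like this; a sketch only, assuming the hostnames in the config above:
import requests

# Hits the Nginx reverse proxy, which forwards to the Heroku app
# with the Host header rewritten as configured above.
resp = requests.get('http://rpb1.talenox.com/')
print(resp.status_code)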
I'm stuck on how to fix this SSL error --
My SSL certs work fine on Chrome, but in Safari and Firefox I get an error that there is a host name mismatch if I go to www.domain.com instead of just domain.com
I've set up SSL Certificates using Certbot for my domain for both domain.com and www.domain.com
When I check on nginx to make sure that the certificates exist, I run sudo certbot --nginx and select both domains when asked "Which names would you like to activate HTTPS for?". For both domain.com and www.domain.com I get the message "You have an existing certificate that has exactly the same domains or certificate name you requested and isn't close to expiry", and I'm asked whether I'd like to attempt to reinstall or to renew and replace the cert.
I'm not sure what other steps I can take; last time I installed certbot I simply followed the instructions, did the above for both the www and non-www addresses, and it just worked for both!
Does anyone have any suggestions what to do next?
TLDR:
domain.com: works fine in firefox/safari, nginx says cert exists
www.domain.com: host name mismatch in firefox/safari, nginx says cert exists
why?!
After messing with it for a while and trying @xyz's SSL checker, I figured out the following things:
Both certs were valid
When I re-installed the certs using certbot, the most recent cert would start working and the previous one would stop working
It turned out that I needed to add the other URL as a subdomain to the existing cert, and that fixed it!
I used:
sudo certbot -d domain.com -d www.domain.com
and that did the trick
You can check both domains from an external service, e.g. here:
https://www.sslshopper.com/ssl-checker.html
It will tell you if the certificate is correctly installed on both.
You should also open a new tab in Chrome, open developer tools, record the network requests, then go to www.domain.com and see what redirects Chrome makes and what URLs it actually requests. Maybe it does some automatic URL rewriting based on previously successfully resolved URLs.
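If you prefer checking from the command line, here is a minimal Python sketch (domain.com and www.domain.com are placeholders for your real hostnames) that shows which names the served certificate actually covers:
import socket
import ssl

for host in ("domain.com", "www.domain.com"):
    ctx = ssl.create_default_context()
    # If the served certificate does not cover this hostname, the handshake
    # fails with a verification error - the same mismatch the browsers report.
    with ctx.wrap_socket(socket.socket(), server_hostname=host) as s:
        s.connect((host, 443))
        print(host, "->", s.getpeercert().get("subjectAltName"))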
I'm trying to use the Heroku CLI.
But when I run commands like
heroku login, heroku logs, etc.,
the following error shows up:
SELF_SIGNED_CERT_IN_CHAIN self signed certificate in certificate chain
How can I solve it?
I had the same issue; however, this helped me:
Verify your proxy export:
export NO_PROXY='localhost,localnets,<company proxy IP settings>'
Then set NODE_EXTRA_CA_CERTS to my company's .pem file stored in my user directory:
export NODE_EXTRA_CA_CERTS=~/.ssh/bc.pem
(or wherever you store it.)
Then try
heroku login
It's most likely related to security and firewall settings on your machine and network.
If you are on a secured network, try connecting via a proxy or a public network; then you should be able to run heroku commands.
Or manually acquire the SSL/TLS certificate on the machine; kindly refer to this link.
The Heroku documentation says that I should use the following proxy settings when I use the heroku create command:
> set HTTP_PROXY=http://proxy.server.com:portnumber
or
> set HTTPS_PROXY=https://proxy.server.com:portnumber
> heroku login
Unfortunately, I am receiving the following error message:
! ECONNRESET: tunneling socket could not be established, cause=getaddrinfo ENOTFOUND proxy.server.com
! proxy.server.com:8080
How can I fix this error?
I am also having trouble cloning the GitHub repo which is mentioned in the Heroku documentation, so I have to download it manually.
That documentation is under the heading Using an HTTP proxy. Are you sure that you need to use an HTTP proxy? In many cases you won't need one; simply running heroku create will work.
If you are sure that you need an HTTP proxy you should make sure to replace proxy.server.com with your actual proxy server's name or IP address. proxy.server.com is just an example.
I have a fresh install of Deis on AWS, but I get this error when I try to register a user:
http://deis.XXXX.com does not appear to be a valid Deis controller.
Also, when I try to curl the ELB or any node it returns a timeout, but I think that's normal behaviour due to the security group configuration.
Could it be a proxy configuration error? When I installed Deis I got this error:
Enabling proxy protocol failed, please enable proxy protocol manually after finishing your deis cluster installation.
And I enabled it manually with:
deisctl config router set proxyProtocol=1
Thanks!
Once you have enabled proxyProtocol on the router you should be able to run deisctl install platform without issues.
Is that not the case?
I had this issue when I hadn't registered my deis cluster domain with global dns - i.e., I had only added it to a Route 53 hosted zone that wasn't actually public.
I fixed it by adding an A ALIAS record in Route 53 pointing a wildcard sub-domain under my existing domain to the deiswebelb host.
Name:  *.apps.example.com
Type:  A
Value: ALIAS dualstack.deis-deiswebelb-1abcdefghijkl-1234567890.us-east-1.elb.amazonaws.com
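For reference, the same record could also be created programmatically; a rough boto3 sketch (the two hosted zone IDs below are hypothetical placeholders: your own public zone and the ELB's own zone; the alias target is the ELB name above):
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="ZYOURPUBLICZONE",  # hypothetical: your Route 53 public hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "*.apps.example.com.",
                "Type": "A",
                "AliasTarget": {
                    # hypothetical: the hosted zone ID of the ELB itself
                    "HostedZoneId": "ZELBZONEID",
                    "DNSName": "dualstack.deis-deiswebelb-1abcdefghijkl-1234567890.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)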
I have 2 Linux Servers (with LAMP):
Web Server with SSL (https://www.example.com)
Admin Server (needs to connect to Web Server, via https)
When I connect from the Admin Server to the Web Server via the curl command, the connection is rejected. When I use curl with the --cacert option, it goes through, like this:
# curl --cacert CAchain.crt -I https://www.example.com
HTTP/1.1 200 OK
..
I'm getting 200 OK only because of --cacert CAchain.crt.
Obviously I need the plain curl command, without specifying --cacert, to work, like this:
# curl -I https://www.example.com
HTTP/1.1 200 OK
..
That way my Admin application will definitely be able to connect to it via https.
But right now, when I connect to https://www.example.com from the Admin Server (via its application), it bounces back; it can't reach the site over SSL.
How do I make my Linux (RHEL) server install the client's CA cert system-wide, so that I can automatically avoid specifying the cert file and any communication to "https://www.example.com" via curl or a web browser (from Admin) just goes through? (Is it something like the "SSH without keys" approach? But how, please?)
You need to add the CA cert to somewhere that curl can use it - it looks like you're just keeping it in your local directory (which isn't where curl looks for it - typically in some /etc/pki/ssl/ca-bundle.crt-type location). There's a handful of ways to do this. I don't have much experience doing it in RHEL (or CentOS), but have done it for Debian.
This ServerFault Post might help.
Likewise, This Post might help you install/import the CA cert properly.
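On recent RHEL/CentOS releases the shared trust store is usually managed with update-ca-trust. A minimal sketch of that idea (wrapped in Python purely for illustration; run as root, and assuming CAchain.crt is the same bundle you currently pass to --cacert):
import shutil
import subprocess

# Copy the CA bundle into the system trust anchors and rebuild the store.
shutil.copy("CAchain.crt", "/etc/pki/ca-trust/source/anchors/CAchain.crt")
subprocess.run(["update-ca-trust", "extract"], check=True)

# After this, curl (and other TLS clients on the box) should trust certificates
# signed by that CA without an explicit --cacert flag.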