I added a custom wildcard SSL certificate to my environment and a CNAME record at my registrar pointing to my provider. The provider's public IP is 185.54.7.222 and my environment's public IP is 185.54.6.184. The public IP of my environment is never accessed, so why is it required? I have to pay for it, but I don't think this is something I should be forced to activate...
Without a public IP, your environment is accessed via a proxy server.
That is bad for performance and reliability: the proxy is a shared resource, shared with other platform users, so in theory it could become overloaded by something that another user does.
The proxy server also does not have your SSL certificate; it only has a wildcard certificate for your provider's domain name in that particular Jelastic region of their platform (a quick check is sketched below).
Your custom SSL certificate is installed on your server (e.g. Apache), so your users need a direct connection to it from the internet (not via the proxy server). That is why the public IP address is required: it gives a direct connection straight to your server.
More info in the Jelastic docs:
https://docs.jelastic.com/shared-load-balancer
https://docs.jelastic.com/public-ip
Note also that production environments are expected to use their own public IP address (as mentioned in the documentation).
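To see the difference for yourself, here is a minimal Node/TypeScript sketch (the two IPs are the ones from your question; the SNI hostname is a placeholder for your own domain) that connects to each address and prints the common name of the certificate it presents. The shared proxy should answer with the platform's wildcard certificate, while your environment's public IP should answer with your custom one.

```typescript
// Minimal sketch: print which certificate each endpoint presents.
// 185.54.7.222 is the shared proxy / provider IP and 185.54.6.184 is the
// environment's public IP (both from the question above); the SNI hostname
// is a placeholder for your own domain.
import * as tls from "node:tls";

function printCert(host: string, servername: string): void {
  const socket = tls.connect(
    { host, port: 443, servername, rejectUnauthorized: false },
    () => {
      const cert = socket.getPeerCertificate();
      console.log(`${host} presents certificate CN=${cert.subject?.CN ?? "(none)"}`);
      socket.end();
    }
  );
  socket.on("error", (err) => console.error(`${host}: ${err.message}`));
}

printCert("185.54.7.222", "www.example.com"); // shared proxy: provider's wildcard cert
printCert("185.54.6.184", "www.example.com"); // public IP: your custom cert
```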
I am trying to build a new DNS server, which will act as a proxy for certain domain names and use a public DNS server as its upstream.
My understanding of DNS:
The client asks the DNS server (x.x.x.x) about example.com.
The DNS server looks inside its zones (or asks the parent and root servers) and finds that example.com can be reached at i.i.i.i.
The DNS server sends i.i.i.i to the client.
Now the client asks for the IP address of restricted.test. The DNS server knows it is a restricted website, so instead of giving out the website's real IP, it gives its own proxy address p.p.p.p to the client.
Please correct me if I'm wrong so far, but when the client tries to connect to p.p.p.p, how does the proxy server know which website the client wants to reach?
I really want to know how this works under the hood.
Thanks in advance.
The mechanism you are asking about is the Proxy Auto-Configuration (PAC) file.
Read more about it here:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Proxy_servers_and_tunneling/Proxy_Auto-Configuration_PAC_file
And here:
https://www.websense.com/content/support/library/web/v76/pac_file_best_practices/PAC_explained.aspx
Essentially, in corporate networks a PAC file is pushed out to every computer, and browser settings are configured to use it. It can also be set up manually: check your browser's proxy settings to see the location of the PAC file it points to.
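To make that concrete, a PAC file is just a JavaScript function the browser evaluates for every request to decide whether to go direct or through a proxy. A minimal sketch, reusing the p.p.p.p proxy address and the restricted.test name from the question (the port 8080 is an assumption), could look like this:

```typescript
// Minimal PAC file sketch. A PAC file is plain JavaScript evaluated by the
// browser; the proxy address p.p.p.p and the restricted.test domain come
// from the question, the port 8080 is an assumption.
function FindProxyForURL(url, host) {
  // Requests for the restricted site go through the proxy. The proxy still
  // learns the intended website from the request itself (the full URL for
  // HTTP, or the CONNECT host for HTTPS).
  if (host === "restricted.test" || host.endsWith(".restricted.test")) {
    return "PROXY p.p.p.p:8080";
  }
  // Everything else connects directly. (PAC also provides helpers such as
  // dnsDomainIs and shExpMatch for fancier matching.)
  return "DIRECT";
}
```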
Does anyone know how to easily set up HTTPS for a REST API on Google Compute Engine? I currently have a static IP and the API works over HTTP, but when I call it in the browser I get a mixed content error because the client is served over HTTPS (Firebase Hosting).
Is it possible to set up HTTPS with only a static IP (and not a domain name)?
-Jani
Is it possible to set up HTTPS with only a static IP (and not a domain name)?
Yes, it is possible, but since 2016 you cannot purchase an SSL certificate with a public IP address. You can use a self-signed certificate but you will have even more browser issues. Not recommended.
Possible Options:
1. Use your domain name (or purchase one) and use Let's Encrypt for SSL, which is free and is one of your better options (a small sketch follows below).
2. Use a different service such as Cloud Run, Cloud Functions, Firebase or App Engine, which offer SSL and do not require a domain name that you own, since you can use Google's endpoint.
3. Attach a Google Load Balancer in front of your Compute Engine instance and configure a frontend with a Google-managed SSL certificate. However, this will require a domain name.
If you do not want to use your own domain name, then option #2 is your only choice.
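For example, if you go with option 1, once Let's Encrypt has issued a certificate for a domain you own, serving the API over HTTPS can be as small as the following sketch (Node/TypeScript; api.example.com and the certbot default paths are assumptions, so adjust them to your own domain and setup):

```typescript
// Minimal sketch: serve the REST API over HTTPS with a Let's Encrypt
// certificate. api.example.com and the certbot default paths are
// assumptions; adjust them to your own domain and setup. Binding to
// port 443 requires appropriate privileges.
import * as https from "node:https";
import * as fs from "node:fs";

const options = {
  key: fs.readFileSync("/etc/letsencrypt/live/api.example.com/privkey.pem"),
  cert: fs.readFileSync("/etc/letsencrypt/live/api.example.com/fullchain.pem"),
};

https
  .createServer(options, (_req, res) => {
    // Once the API is served over HTTPS, the Firebase-hosted client no
    // longer triggers mixed content errors when calling it.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ ok: true }));
  })
  .listen(443, () => console.log("HTTPS API listening on port 443"));
```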
To set up HTTPS for a REST API on Google Compute Engine:
1- You have to buy a domain.
2- You have to buy an SSL certificate.
3- Create a load balancer resource in Google Cloud and assign the domain and the certificate to it.
4- Alternatively, you can install the certificate on the server directly.
If you want to use HTTPS over an IP address instead of a domain name, please click here.
I have an Azure web app up and running, using a custom domain purchased outside of Azure... and that all runs fine. So I have https://myappname.azurewebsites.net/ loading fine with my domain name URL https://www.myappname.com
I'm trying to upgrade the web app, though using Azure Traffic Manager. I've cloned the app a few times, each on its own app service plan, and I have the traffic manager all up and running fine. I can successfully hit different versions of my cloned website based on the traffic manager configuration profile... so no issues there.
The only issue is that I can only access the "traffic managed" version of my website via the standard Azure URL -> myappname.trafficmanager.net.
All the examples I've seen say that all I really need to do now is go into my DNS management screen and add domain forwarding; however, my online DNS management tool does not offer this option.
I can't really change my A record in the DNS management screen, because I don't know the IP address of myappname.trafficmanager.net.
Every place I've tried to change the name of the current/working Azure URL (like in awverify text files, www CNAMEs, etc.) does nothing. The DNS still points to the single instance, which remains as the IP address in the DNS manager's A record.
Also, since my live/single instance is linked to the domain name (along with the SSL binding), I can't add those properties to the clones, which makes sense... only one version can be live. I could unbind that when I make the switch from the single-instance web app to the traffic-managed set of clones, but I fear I can only bind it to one of the clones. I can't seem to bind it to the myappname.trafficmanager.net version, which might then cascade down to all of its endpoints. Is there a way to bind my domain name and SSL cert to more than one version of my web app?
Thanks!
Is there a way to bind my domain name and SSL cert to more than one version of my web app?
I don't think you can do that unless you have two different domains or subdomains, each with its own SSL cert. Each web app hostname is globally unique, and each SSL binding is attached to the web app's domain name.
If you have a purchased domain, you can just keep the default xxx.azurewebsites.net as each app's hostname. Then you could configure the two Azure App Service apps as the endpoints of Traffic Manager.
By default, Azure provides a wildcard cert for the *.azurewebsites.net domain, so you can access that hostname over HTTPS without any extra cert. Then, at your DNS provider, use a CNAME record for www in your domain domain.com to point to the Traffic Manager hostname myappname.trafficmanager.net. Since Traffic Manager works at the DNS level, it does not validate server or client SSL, so you can safely ignore the SSL warning when accessing the Traffic Manager hostname directly.
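Since Traffic Manager only answers DNS queries, you can verify the whole chain without touching any server. A small Node/TypeScript sketch (the hostnames are the placeholders from the question) that prints the CNAME chain and the resulting addresses:

```typescript
// Minimal sketch: show the DNS chain from the custom domain to the Traffic
// Manager name and on to the currently selected endpoint. The hostnames are
// the placeholders from the question.
import { promises as dns } from "node:dns";

async function showChain(name: string): Promise<void> {
  const cnames = await dns.resolveCname(name).catch(() => [] as string[]);
  console.log(`${name} CNAME -> ${cnames.join(", ") || "(none)"}`);
  const addrs = await dns.resolve4(name).catch(() => [] as string[]);
  console.log(`${name} resolves to ${addrs.join(", ") || "(no A record)"}`);
}

(async () => {
  // www.myappname.com should CNAME to myappname.trafficmanager.net, which in
  // turn points to whichever endpoint Traffic Manager has currently selected.
  await showChain("www.myappname.com");
  await showChain("myappname.trafficmanager.net");
})();
```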
Feel free to let me know if you have any question.
Please help with the use case below.
1. We have an AWS EC2 instance with a public IP or load balancer DNS name --> public.ip or application.lb.amazonaws.com (where a custom web app is running as the target).
2. We have another VM instance (e.g. private.ip) within our data center (DC), where the same web app is running as the source.
3. We need web-based communication between these two instances, but currently it happens over HTTP. We have already handled all connectivity issues and are now able to communicate between the two instances.
4. We're accessing the source and target URLs as http://public.ip:31415 or http://application.lb.amazonaws.com:31416.
5. Now we need to convert the HTTP URLs in (4) to HTTPS along with a custom domain name. This domain name will not be PUBLIC; it will be resolved only within our office network. E.g. domain names: test.source.apps and test.target.apps.
6. We would make an entry in our local machine's /etc/hosts (similar to below) so that the name resolution in (5) works for now in test; for other environments we plan to make an entry in our internal office DNS servers.
Example /etc/hosts entries:
Target:
public.ip.ec2.server  test.target.app
(or) application.lb.amazonaws.com  test.target.app
Source:
data.center.ip  test.source.app
We don't want any paid SSL option (like a commercial CA or a public domain) because this URL will be used by only 2-3 developers and only within the office network. But as part of security compliance we definitely need to make this an HTTPS URL.
The web apps are running on the Jetty web server. We've planned to do this using Let's Encrypt + a custom domain.
Can anyone suggest whether this is possible in AWS, and any steps on how to make this change (i.e. creating a subdomain that is internal to our host/network and using Let's Encrypt SSL)?
I'm developing a web application using Laravel, hosted on a public cloud. The application can currently be accessed publicly on the internet via its domain address. However, I want to restrict access so that only users connecting from the organization's networks can use the application, since we do not want it to be used at home or elsewhere.
At the moment, the organization has two sites (two public internet connections) from which the application must be accessible. Both of them use consumer-grade internet where the IP address changes every time the connection is re-established. As we do not have static IP addresses, I cannot filter users with an IP address filter; the filter rule would have to be changed every time the organization's network reconnects.
My application already has a solid authentication and authorization mechanism, and of course the users must know this information since they have to access the app for work. However, this doesn't meet the requirement.
I have thought about a VPN, but it (probably) doesn't work, because if we allow users access to the VPN, they will still be able to connect to the VPN from anywhere and use the application outside the workplaces. If we restrict VPN clients to specific source IP addresses, then the same problem occurs when the IP changes.
To sum up, I would like to ask for advice on how to restrict access to a web application, hosted on the public internet, to users connecting from public IP addresses that can change every time the internet connection is re-established. The requirement may sound strange, but it is what it is. Please feel free to ask for more details and to discuss the suggestions.
Thank you in advance.
You could set up a client for a dynamic DNS service (e.g. DynDNS) on the client side.
Then, on the server side, you could always check the request's IP against the current IP behind that DNS name.
As an alternative, you could bind the website to localhost only and let it be accessed solely via a pubkey-enforced SSH tunnel (and have that tunnel auto-established by a script/scheduler on the client side, at a permission level outside of the users' reach, so that they can't take the private key needed for the connection anywhere).
You can use various PHP methods and variables to detect where a request originated from. Just whitelist your domains and organizations, and allow only them by adding a middleware (a sketch follows below).
Additionally, you can generate a token using Laravel Passport, or create your own mechanism, and then use that token to determine whether the request is valid.
Since the IP changes, you can set up a dynamic DNS, as suggested above.
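As a rough illustration of both suggestions, here is a conceptual sketch written as an Express middleware in TypeScript (the dynamic DNS hostnames are hypothetical); in Laravel the same check would live in a PHP middleware that compares the request IP against the freshly resolved office addresses.

```typescript
// Conceptual sketch (Express + TypeScript) of the idea above: resolve the
// offices' dynamic DNS hostnames and only let requests through when the
// client IP matches one of them. The hostnames are hypothetical; in Laravel
// the same check would live in a PHP middleware.
import express, { NextFunction, Request, Response } from "express";
import { promises as dns } from "node:dns";

const OFFICE_DDNS_HOSTS = [
  "office1.example-ddns.net", // kept up to date by each office's DDNS client
  "office2.example-ddns.net",
];

async function officeOnly(req: Request, res: Response, next: NextFunction) {
  // Resolving on every request keeps the allow-list current; cache the
  // result for a minute or two in a real deployment.
  const lookups = await Promise.all(
    OFFICE_DDNS_HOSTS.map((h) => dns.resolve4(h).catch(() => [] as string[]))
  );
  const allowed = lookups.flat();

  const clientIp = (req.ip || "").replace("::ffff:", ""); // strip IPv4-mapped prefix
  if (allowed.includes(clientIp)) return next();
  res.status(403).send("Access is restricted to the office networks");
}

const app = express();
app.use(officeOnly);
app.get("/", (_req, res) => res.send("Hello from the office-only app"));
app.listen(3000);
```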