Trying to Deploy a PCF Spring Boot App which requires a static IP - spring-boot

I have an application that uses Spring Boot for the backend and Vue.js as the front end. I have packaged the app into a jar file and deployed it to PCF with ease. The problem is that the application uses API keys from https://developer.clashroyale.com/#/getting-started, and these keys require you to register the IP address that the requests will come from.
Obviously my key will not work unless I give the correct IP address, so how do I retrieve the IP address of my PCF application so I can generate the proper API key?
Also, the documentation says that the IP will change with every deployment of my application, which prompts the question:
Is it impossible to use API Keys that require static IP Addresses with PCF applications?
I have deployed this same application to AWS and it worked, because there I have a static IP address that I can use to register a key. I would prefer to use PCF, but am having trouble setting it up.

I don't think you will be able to use that API on the PCF platform. Every time you cf restage, or do anything else that causes the container to be rebuilt or redeployed, the IP will change.
So in short, yes, it's impossible: https://docs.run.pivotal.io/marketplace/external-ips.html

Your app will be run on any number of Diego Cells, which all have different IP addresses. There are a couple of ways that traffic can leave your app and its Cell.
In some cases, outbound traffic may go through a NAT, in which case the number of possible IPs may be small and the IPs may not change often (or at all). In other cases, traffic may leave directly from the Diego Cell on which your application is running. In that case, there are many more possible IPs, and the IPs will change any time your app is restarted.
If you're talking about some general installation of Cloud Foundry, it will depend on how the operators for that environment have set up the traffic to flow so you'd need to confirm with your operator to be certain.
If you're talking about Pivotal Web Services, outbound traffic will originate from the IP of the Cell on which your app is running. See the link in Francisco's post.
Having said all that, there's a hack that you can use to work around the behavior above. Route your traffic through a proxy. Traffic coming out of the proxy can be made to have a fixed IP address.
On PWS, there is a service in the marketplace available to do exactly this. It's called QuotaGuard.
https://docs.run.pivotal.io/marketplace/services/quotaguard.html
You don't have to use that service though; you could use any other provider, or you could even set up your own proxy. I would recommend using a service unless you know exactly what you are doing, though. Setting up and securing a proxy is not trivial, and an improperly secured proxy is bad not just for you as the owner but for the whole Internet.
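If you do go the do-it-yourself route, one minimal sketch is an nginx instance on a small VM with a static IP that simply relays calls to the game API; your PCF app calls the VM, and the VM's IP is what you register with the key. Everything below is illustrative: the server name, certificate paths, shared token, and the api.clashroyale.com upstream are assumptions, not something PCF or the API vendor prescribes.

    # Hypothetical nginx vhost on a static-IP VM that relays calls to the game API.
    server {
        listen 443 ssl;
        server_name proxy.example.com;                     # assumed hostname for the VM

        ssl_certificate     /etc/nginx/certs/proxy.crt;    # your own cert/key pair
        ssl_certificate_key /etc/nginx/certs/proxy.key;

        # Require a shared secret from callers -- an open relay is dangerous.
        if ($http_x_proxy_token != "change-me-to-a-long-random-value") {
            return 403;
        }

        location / {
            proxy_pass https://api.clashroyale.com;        # assumed API host
            proxy_set_header Host api.clashroyale.com;
            proxy_ssl_server_name on;                      # send SNI to the upstream
        }
    }

Your app then points its base URL at proxy.example.com and keeps sending its normal Authorization header; the API only ever sees the VM's static IP.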

Related

What source IP ranges to add to google cloud firewall to only allow access from the domain of my API

I currently have a Google Cloud Redis instance running which allows all connections (IP range 0.0.0.0/0) and which I would like to secure.
I have an API that is hosted on Heroku and is being forwarded to via a Google domain. What I want to know is which IP I should add to the Source IP ranges field in the Google Cloud firewall config tab to only allow connections from my API.
There are a few things I am confused about:
I need to specify an IP range, but I'm only going to be connecting to it from one IP (the domain pointing to my API).
Which IP do I provide? The IP of the domain that is pointing to my API, or the IP of the API instance itself as it is on Heroku?
Any help would be great!
Thanks
Heroku itself is hosted on AWS, so it uses a subset of their EC2 range.
Looking at this answer, you could use
heroku regions --json
to find the currently used IP ranges.
Problem with that: they can change!
If you need a static source IP coming from a Heroku app, you might want to use one of the SOCKS5 proxy addons.
But:
There is a performance impact for this cross-datacenter usage between your application and the Redis instance, so I would actually recommend switching to a Redis instance by Heroku, or at least one from a provider that lives inside the same AWS region.

Deploying an Internal API

I basically have an API that is going to be used with a web app and a mobile app. I don't want the API to be publicly available, so where should I deploy it? Is there a way without using AWS? Thanks, Nav :)
There are multiple ways of doing this. This is a sensitive topic, as it is an opinion-based field.
However, I will try to answer below and challenge your way of approaching this.
It really depends on your 'operational' skills, funds, need for security, deadline(s) etc.
Basically you need to make an endpoint available on the www, without everybody being able to connect.
You could either:
Deploy a virtual machine or web app in Azure/AWS/GCP/... and whitelist the IPs you need to connect from.
Rent a VPS from any provider and deploy your application there, again with whitelisting (see the nginx sketch just after this list). (Edit: Not phone IPs, since those change constantly. A proxy can be implemented here (a potential bottleneck), or any authentication mechanism like OAuth, JWT, certificates, etc. can be implemented either on the ingress controller (e.g. NGINX) or in the application itself.)
Deploy the application on your home PC, order a static IP for your home, set up a forwarded port, and handle security on your premises (not recommended, and it raises a bunch of other headaches).
Get in touch with a company that hosts web applications (can be quite expensive).
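To make the whitelisting idea from the first two options concrete, here is a minimal nginx sketch; the hostname, certificate paths, and the allowed addresses are placeholders you would replace with your web app's and office's egress IPs.

    # Hypothetical nginx vhost in front of the internal API: whitelisted callers only.
    server {
        listen 443 ssl;
        server_name api.internal.example.com;              # assumed name

        ssl_certificate     /etc/nginx/certs/api.crt;
        ssl_certificate_key /etc/nginx/certs/api.key;

        allow 198.51.100.10;       # placeholder: web app server
        allow 203.0.113.0/24;      # placeholder: office network
        deny  all;                 # everyone else gets 403

        location / {
            proxy_pass http://127.0.0.1:8080;              # the API process on the same box
        }
    }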
Based on the limited information provided in your question, there are a ton of options, nice-to-haves, and factors that come into play when choosing the setup that suits your needs.
You should also consider: VPN usage, backup/disaster recovery, data leaks, redundancy, the need for future deploys, and how you would access your environment in six months...
I hope this answered your question, but also raised a few for you to answer yourself.
Finally, I'd recommend looking for inspiration here.
EDIT:
Question:
Whitelisting mobile IPs.
VPS selected.
Answer:
This becomes quite a task, since mobile phones tend to change IPs frequently.
Since you are looking further into the VPS setup, you are more in control of the setup and can choose to look into OAuth and JWT.
Links:
OAuth - https://oauth.net/getting-started/ https://developer.okta.com/blog/2019/01/22/oauth-api-keys-arent-safe-in-mobile-apps
NGINX JWT - https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-jwt-authentication/
So, at the end of the day, you can make your app use a proxy (a potential bottleneck) and whitelist its IP, or make the endpoint open (any -> 443) and implement an authentication mechanism like the ones mentioned above.
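As a rough sketch of the NGINX-side JWT validation described in the second link, the relevant directives look like the following. Note that auth_jwt ships with the commercial NGINX Plus build (the open-source build needs a third-party module, or you validate the token in the application instead); the realm name, key file path, and backend address are placeholders.

    # NGINX Plus: reject requests to /api/ that don't carry a valid JWT.
    location /api/ {
        auth_jwt          "internal api";               # realm returned on 401 responses
        auth_jwt_key_file /etc/nginx/keys/api.jwk;      # JWK set used to verify signatures

        proxy_pass http://127.0.0.1:8080;               # your backend
    }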
Consider implementing a DMZ zone for incoming traffic from the web.
https://en.wikipedia.org/wiki/DMZ_(computing)
and put your application behind this zone, making sure that only the DMZ is facing the internet and that the server hosting your application only talks to the server in the DMZ.
Again, this is quite a big topic and is hard to simplify to a stackoverflow post.
If you are hosting the app on AWS you have a couple of options.
API Gateway now supports private endpoints. These endpoints cannot be called via the public internet. That means that if your app is hosted on AWS, only the internal services of the app can call the endpoint, i.e. front end to database etc. I've used this method for internal microservices, such as placing in-house app data onto Kinesis streams.
Alternatively, if you don't want to use API Gateway, you have lots of options. Most of them would involve you creating REST APIs from wherever you plan on hosting your code. This could be on the server itself or in some sort of container.
API Gateway Private Endpoint Reference:
https://aws.amazon.com/blogs/compute/introducing-amazon-api-gateway-private-endpoints/

Webserver for Angular and Spring application

I'm building a small web application for a personal project. It will be an Angular web application which will talk to a Spring-Boot service layer which in turn will read/write stuff to MongoDb.
I hope to host all this on a single EC2 instance in AWS. My question is how to configure a web server (like Apache, but it doesn't have to be) to 'beautify' the URLs a bit. For example, without touching anything, Angular will run at something like host:4200 and the service layer at host:8080. I will then have to map a proper domain to the host in AWS, but the hiding of ports etc. is where it gets murky for me.
I want to be able to hit my web app at domain.com (no ports etc) and I also want my service layer to ideally have a similar setup e.g. domain.com/service (no ports etc).
How do I configure a web server to do this for me? Examples or pointers to specific examples would be ideal, but even a pointer to the right documentation would be helpful.
This thread is kind of similar to what I want but not too helpful: How to deploy Spring framework backend and Angular 2 frontend application in any online server?
You can use a setup with AWS CloudFront as a reverse proxy and CDN cache. You can map the domain name and SSL certificates (you can use free SSL certificates issued through AWS Certificate Manager) to CloudFront, while the EC2 instance is plugged in as an origin behind CloudFront.
Optionally, as is common practice when designing applications in AWS, you can also consider:
Hosting the Angular App in S3
Using Autoscaling & Loadbalancing for EC2 instances.
You need to use Apache or another web server as a reverse proxy. Start here -
https://devops.profitbricks.com/tutorials/configure-apache-as-a-reverse-proxy-using-mod_proxy-on-ubuntu/
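If you end up preferring nginx over Apache, the equivalent reverse-proxy setup is a single public vhost; the ports come from the question, and domain.com is a placeholder for whatever you map in AWS.

    # Hypothetical nginx vhost: one public entry point, ports 4200/8080 hidden behind it.
    server {
        listen 80;
        server_name domain.com;

        # Angular app (dev server on 4200; in production serve the built dist/ files instead)
        location / {
            proxy_pass http://127.0.0.1:4200;
        }

        # Spring Boot service layer under /service
        location /service/ {
            proxy_pass http://127.0.0.1:8080/;   # trailing slash strips the /service prefix
        }
    }

For a real deployment you would typically run ng build and let the web server serve the compiled Angular files directly, keeping only the /service proxy.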
You then will need to set up a custom domain name. The easiest option is to just use an ELB (now called Classic Load Balancer). More details are here -
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-domain-names-with-elb.html

Custom domains for Multi-tenant web app

I am developing an app (RoR + Heroku) which allows users to create their own websites, either using my subdomain (pagename.myapp.com) or using their own domain (pagename.com).
An important point is that this option is the key to my business: subdomains are the free plans and custom domains are the paid ones. So I have a table where I store the custom domain of each user and check whether the page is active (exists and has paid the quota).
For that I need to give users the ability to point their domain to my servers. As we all know, Heroku doesn't recommend the use of DNS A records.
I would also like to abstract this feature as much as possible, so that I can switch my infrastructure (Heroku to AWS) in the future without having to ask all my users to change their DNS zone. Taking this into account, I think the best option would be to run something like an EC2 proxy (using an AWS Elastic IP), which gives me ownership of the IP. This proxy, I think, should redirect to proxy.myapp.com, and I would resolve the request at the app level.
Since I couldn't find clear information about this, I am not sure whether this hypothesis is the best solution, or how to set up the proxy (which type of proxy should I use? Nginx maybe?).
Said that, I would like to ask recommendations/best practices to solve this "common" feature.
Thanks
What you are wanting to do is fairly straightforward to implement. Your assumptions about setting up the proxy are correct. Nginx or haproxy will both work great for this (I personally would use haproxy). Here are some of the gotchas that you will run into, though:
Changing the host header at a proxy server can cause the end web application to generate incorrect links. You can use relative paths to fix this, but it requires the web application developer to be aware of the environment they are running in.
user connects to www.example.com (proxy server)
proxy server connects to www.realdomain.com (web app)
the web app has a link for a shopping cart. www.realdomain.com/shoppingcart
the end user clicks on the link but the link is www.realdomain.com/shoppingcart instead of www.example.com/shoppingcart
The cost of the host acting as the proxy server: this can spiral out of control really quickly. For example, do you want redundancy, and if so, how are you planning to implement it? Do you plan on having SSL termination? If so, you will have to increase the CPU count to accommodate the additional load. Do you want a secure connection to Heroku from your proxy? If you do, then you will need to increase the CPU count for that as well. You may also have to add additional RAM, depending on the number of concurrent connections.
Heroku also changes their load balancers regularly. This is important because your proxy service will need to reload the config / update the IP addresses of the Heroku instances every 60 seconds. In my experience they may change once or twice a day, but the DNS entry that they use has a 60 second TTL. That means you should make sure that you are capable of updating your config as often as every 60 seconds.
My company has been doing something very similar to this for almost a year now. We use haproxy and simply have it reload the config regularly. We have never had an outage or an interruption for our end users. Nginx is also a very good product. It has built-in DNS caching, so if you go that route you will need to make sure that you configure it so that the DNS cache TTL is 60 seconds.
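If you pick nginx, the DNS point above usually comes down to two things: a resolver with a short validity window, and a variable in proxy_pass, which forces nginx to re-resolve the Heroku hostname at request time instead of only once at startup. The hostnames and resolver address below are placeholders.

    # Sketch: keep nginx's view of the Heroku DNS entry fresh (roughly the 60s TTL above).
    server {
        listen 80;
        server_name www.example.com;                     # the customer-facing domain

        resolver 8.8.8.8 valid=60s;                      # placeholder resolver

        location / {
            set $heroku_backend "myapp.herokuapp.com";   # using a variable forces runtime lookups
            proxy_set_header Host myapp.herokuapp.com;   # Heroku routes on the Host header
            proxy_pass https://$heroku_backend;
            proxy_ssl_server_name on;
        }
    }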
Will many of your clients want to use your app on their domain apex? E.g. example.com rather than theapp.example.com? If not, I would recommend having them CNAME to proxy.myapp.com, which CNAMEs to myapp.herokuapp.com. Then you can update proxy.myapp.com without customer interruption.
If you do need apex or A record support, you would want to set up Nginx as a reverse proxy for your Heroku app. Keep in mind that if you need HTTPS support for client domains, you will need to do some sort of certificate management on your proxy.
I like the answer dtorgo gave and that he mentioned the TLS termination, which many online tutorials on custom domains don't touch at all.
I'll go into more detail on how to implement the custom domains feature for your SaaS while also handling the TLS/HTTPS.
If your customers just CNAME to your domain or create the A record to your IP and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS termination reverse proxy in front of your webserver. This proxy can be run on a separate machine but you can run it on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the hostname of the incoming request via SNI on the TLS handshake (and the Host header once decrypted), e.g. app.customer1.com or customer2.com, and will use that to decide which TLS certificate to serve.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddyserver, greenlock.js, OpenResty (Nginx).
tl;dr on what happens here:
Caddyserver listens on 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
How to handle it on my backend
Your proxy is terminating TLS and proxying requests to your backend. However, your backend doesn't know who is the original customer behind the request. This is why you need to tell your proxy to include additional headers in proxied requests to identify the customer. Just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com or whatever the Host header is of the original request.
Now when you receive the proxied request on the backend, you can read this custom header and you know who is the customer behind the request. You can implement your logic based on that, show data belonging to this customer, etc.
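On the proxy side this is a one-line addition per location block; shown here for nginx, with X-Serve-For being the custom header name from above and the backend address a placeholder.

    # Forward the original hostname so the backend knows which customer the request is for.
    location / {
        proxy_set_header X-Serve-For $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:3000;                # placeholder backend address
    }

The backend then reads X-Serve-For, looks the domain up in its customers table, and scopes the request accordingly.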
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise you may be waking up in the middle of the night restarting machines or restarting your proxy manually.
If you need more detail you can DM me on Twitter #dragocrnjac

how to allow communication with an application from outside the network?

I have an application running on my personal network. This application can send emails to users, and the users can acknowledge receipt via the email they receive, as long as they are on my personal network. This is because they have to access the application to perform the acknowledgement action.
I want to extend this and see if I can allow acknowledgements via email from outside the network as well. I know I have to change my application to do this, but I'm not sure which way to go. Can someone shed some light?
My application is a Spring-based web application.
You need to configure your firewall to allow outside access to whatever port the app runs on.
You need to configure port forwarding on your gateway to direct outside traffic to the system running your app (unless your gateway is the server running the app).
After that you should just be able to go to your outward-facing IP and the app's port, for example http://123.456.78.90:12345, in a web browser from anywhere.
You can set up DNS if you want to use a URL instead of an IP.
Keep in mind that anyone can go to this URL, so make sure the app has access control.
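If you put a small web server in front of the Spring app, even HTTP basic auth is better than leaving the forwarded port wide open. A minimal nginx sketch, with the port and file paths as placeholders:

    # Hypothetical nginx vhost on the home server, in front of the Spring app on 8080.
    server {
        listen 12345;                                      # the port you forward on the router
        server_name _;

        location / {
            auth_basic           "acknowledgement app";    # login prompt shown to users
            auth_basic_user_file /etc/nginx/.htpasswd;     # create with: htpasswd -c /etc/nginx/.htpasswd user
            proxy_pass http://127.0.0.1:8080;
        }
    }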
