Rancher, Traefik, and Let's Encrypt with two domains

I have two containers running in Rancher 1.6.25, behind a Traefik load balancer (version 1.7.9), and I am not using the built-in Let's Encrypt function. It is my understanding that I can't use the built-in LE support since I have two LE accounts.
One container is hosting www.domain1.com and the other is hosting www.domain2.com. I have two Let's Encrypt containers running which are generating the certificates. (I have to have two because each certificate belongs to a separate Let's Encrypt account). Both are configured to save files to /shared/drive/LE/production.
How do I tell Traefik to look in this directory for certificates? Should I just mount the directory somewhere inside the Traefik container?
I have searched high and low and have not been able to find a solution for this exact scenario. Any help is appreciated!
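In case a concrete starting point helps, here is a minimal sketch for Traefik 1.7, assuming the shared directory is mounted into the Traefik container at /certs and that the Let's Encrypt containers write out PEM cert/key pairs with the file names shown (both the mount path and the file names are assumptions that depend on your LE client):

```toml
# traefik.toml -- static TLS configuration (Traefik 1.7)
[entryPoints]
  [entryPoints.http]
    address = ":80"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      # /shared/drive/LE/production mounted into the container as /certs
      [[entryPoints.https.tls.certificates]]
        certFile = "/certs/www.domain1.com/fullchain.pem"
        keyFile  = "/certs/www.domain1.com/privkey.pem"
      [[entryPoints.https.tls.certificates]]
        certFile = "/certs/www.domain2.com/fullchain.pem"
        keyFile  = "/certs/www.domain2.com/privkey.pem"
```

Traefik then picks the right certificate per request via SNI. Note that with this static form a restart is needed when the certificates renew; declaring equivalent [[tls]] entries in a watched file provider instead lets Traefik pick up renewed files on change.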

Related

Trying to Deploy a PCF Spring Boot App which requires a static IP

I have an application that uses Spring Boot for the backend and Vue.js as the front end. I have packaged the app into a jar file and deployed it to PCF with ease. The problem is that the application uses API keys from https://developer.clashroyale.com/#/getting-started, and these keys require you to input the IP address that will be used.
Obviously my key will not work unless I give the correct IP address, so how do I retrieve the IP address for my PCF application so I can generate the proper API key?
Also, the documentation says that the IP will change with every deployment of my application, which prompts the question:
Is it impossible to use API Keys that require static IP Addresses with PCF applications?
I have deployed this same application to AWS and it worked because I have a static IP address that I can use to register a key. I prefer to use PCF, but am having trouble setting it up.
I don't think you will be able to use that API on the PCF platform. Every time you cf restage or do anything else that causes the container to be rebuilt/redeployed, the IP will change.
So in short yes, it's impossible: https://docs.run.pivotal.io/marketplace/external-ips.html
Your app will be run on any number of Diego Cells, which all have different IP addresses. There are a couple ways that traffic can leave your app and the Cell.
In some cases, outbound traffic may go through a NAT, in which case the number of possible IPs may be small and the IPs may not change often (or at all). In other cases, traffic may leave directly from the Diego Cell on which your application is running. In that case there are many more possible IPs, and they will change any time your app is restarted.
If you're talking about some general installation of Cloud Foundry, it will depend on how the operators for that environment have set up the traffic to flow so you'd need to confirm with your operator to be certain.
If you're talking about Pivotal Web Services, outbound traffic will originate from the IP of the Cell on which your app is running. See the link in Francisco's post.
Having said all that, there's a hack that you can use to work around the behavior above. Route your traffic through a proxy. Traffic coming out of the proxy can be made to have a fixed IP address.
On PWS, there is a service in the marketplace available to do exactly this. It's called QuotaGuard.
https://docs.run.pivotal.io/marketplace/services/quotaguard.html
You don't have to use that service, though; you could use any other provider or even set up your own proxy. I would recommend using a service unless you know exactly what you are doing, because setting up and securing a proxy is not trivial, and an improperly secured proxy is bad not just for you as the owner but for the whole Internet.
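To make the proxy idea concrete, here is a minimal sketch in Java/Spring of sending outbound requests through such a proxy. The host, port, and target URL are placeholders; in practice they would come from the QuotaGuard (or other) service credentials, and QuotaGuard's proxies also require authentication, which is omitted here.

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class ProxiedApiClient {

    public static void main(String[] args) {
        // Placeholder proxy endpoint -- replace with the values from your
        // proxy service's credentials (proxy authentication not shown).
        String proxyHost = "proxy.example.com";
        int proxyPort = 9293;

        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setProxy(new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort)));

        // Every request made through this RestTemplate leaves via the proxy,
        // so the API provider sees the proxy's fixed IP instead of the Cell's.
        RestTemplate restTemplate = new RestTemplate(factory);
        String body = restTemplate.getForObject("https://api.example.com/v1/resource", String.class);
        System.out.println(body);
    }
}
```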

How do you deploy (using Roots Trellis) to a domain that has CloudFlare proxying it?

I built a site using Roots' Trellis that now sits behind CloudFlare, which proxies traffic and, understandably, prevents Trellis from deploying via MyExample.com.
I'm aware I can connect via the IP or an un-proxied CNAME (e.g. ssh.MyExample.com), but I am unclear about which file(s) I edit in Trellis so the deploy uses the IP or un-proxied domain.
It seems that editing the /hosts/production file would do the trick, but since the rest of the Roots ecosystem depends on the values in these files, I'm afraid that re-running the deploy will corrupt the server. This has been my experience with similar issues in the past.
Can anyone confirm the steps to achieve this?
Edit /trellis/hosts/production at lines 5 and 8, below [production] and [web], and update them with the un-proxied domain (e.g. ssh.myexample.com).
Save, and Git commit your change.
Navigate to the /trellis/ directory and run your deploy using the proxied domain, e.g. ./bin/deploy.sh production myexample.com
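For illustration only, assuming a stock Trellis layout, the edited trellis/hosts/production would end up looking something like the following (the un-proxied hostname goes under both groups, while the site key in wordpress_sites.yml and the deploy command keep the real domain):

```ini
# trellis/hosts/production (Ansible inventory)
[production]
ssh.myexample.com

[web]
ssh.myexample.com
```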

Webserver for Angular and Spring application

I'm building a small web application for a personal project. It will be an Angular web application which will talk to a Spring-Boot service layer which in turn will read/write stuff to MongoDb.
I hope to host all this on a single EC2 instance in AWS. My question is how to configure a web server (like Apache, but it doesn't have to be) to 'beautify' the URLs a bit. For example, without touching anything, Angular will run at something like host:4200 and the service layer at host:8080. I will then have to map a proper domain to the host in AWS, but hiding the ports etc. is where it gets murky for me.
I want to be able to hit my web app at domain.com (no ports etc) and I also want my service layer to ideally have a similar setup e.g. domain.com/service (no ports etc).
How do I configure a web server to do this for me? Examples or pointers to specific examples would be ideal, but even a pointer to the right documentation would be helpful.
This thread is kind of similar to what I want but not too helpful: How to deploy Spring framework backend and Angular 2 frontend application in any online server?
You can use a setup with AWS CloudFront as a reverse proxy and CDN cache. You can map the domain name and SSL certificates (you can use free SSL certificates issued through AWS Certificate Manager) to CloudFront, while the EC2 instance is plugged in as an origin behind CloudFront, as shown in the following diagram.
In the diagram I have optionally added the following, which is common practice when designing applications in AWS:
Hosting the Angular App in S3
Using Autoscaling & Loadbalancing for EC2 instances.
You need to use Apache or another web server as a reverse proxy. Start here:
https://devops.profitbricks.com/tutorials/configure-apache-as-a-reverse-proxy-using-mod_proxy-on-ubuntu/
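As a rough sketch, assuming Apache with mod_proxy and mod_proxy_http enabled and the default ports from the question, the virtual host could look something like this (adjust the /service mapping if the Spring Boot app serves under its own context path):

```apacheconf
<VirtualHost *:80>
    ServerName domain.com

    # Send API traffic to the Spring Boot service layer
    ProxyPass        /service/ http://localhost:8080/
    ProxyPassReverse /service/ http://localhost:8080/

    # Send everything else to the Angular app
    ProxyPass        / http://localhost:4200/
    ProxyPassReverse / http://localhost:4200/
</VirtualHost>
```

The more specific /service/ mapping is listed before the catch-all / so that API requests are not swallowed by the Angular proxy rule.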
You will then need to set up a custom domain name. The easiest option is to just use an ELB (now called a Classic Load Balancer). More details are here:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-domain-names-with-elb.html

How do I connect up my Amazon EC2 instances without manually modifying config files?

I have a three-tier Windows-based web application bundled into 3 AMIs on Amazon EC2 that I use for load testing.
An ASP.NET web application on IIS
A .NET application server
SQL Server
After I launch them, the config files of each tier need modifying to update the IP addresses.
At the moment I am doing this manually: I connect to the webserver instance via remote desktop and modify the config file to point to the new IP of the application server instance. Then I do the same with the application server to change the IP in the connection string.
This must be a common requirement and I must be missing something obvious. There must be a better way!
I could use Elastic IP addresses, but these machines are only provisioned for a couple of hours at a time, and I would be charged for the addresses when they were NOT in use (which would be most of the time).
Is there some way of persistently naming the machines? Can I somehow get all the machines on the same network and use machine names instead of IP addresses?
I could write some nifty PowerShell script that would perform the modifications remotely. Is there an example somewhere?
I could use a dynamic IP address service. I'm not sure if this would have any negative effect on performance or availability... Are there any downsides to this approach?
I could install some sort of self-configuring service on each machine (which connects to S3? SNS? SimpleDB?) to publish/retrieve the addresses of the other machines and update the config files automatically. Is there an example somewhere?
What is best practice?
You could use Amazon Virtual Private Cloud (Amazon VPC). You get a private subnet where you can assign an IP address to an instance, though it may require launching the instance from the command line to assign the IP. VPC is charged the same way as EC2.
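As a hedged illustration of the "assign an IP at launch" step with today's AWS CLI (the AMI, subnet, and address values below are placeholders):

```bash
# Launch the application-server instance with a fixed private IP inside the
# VPC subnet, so the web tier's config can always point at 10.0.0.10.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --subnet-id subnet-0123456789abcdef0 \
    --private-ip-address 10.0.0.10 \
    --key-name my-key-pair
```

With fixed private IPs (or private DNS names inside the VPC), the config files no longer need to change between launches.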

Amazon VPC testing

I sell a product that runs on Amazon EC2. A company now wants to purchase and install it within their perimeter... This also implies the use of a VPN connection to the EC2 datacenter.
I want to test my product using Amazon VPC before handing over the code. Must I change my code to make it work across a VPC? If I run on Windows, what is the quickest and easiest desktop VPN client available that will allow me to connect over VPN to the Amazon datacenter?
Make sure you set up NAT servers and set your routes in the AWS console. Your client may have some security infrastructure for extending their data center to the cloud, such as firewall rules at the VPC level. Disable firewall rules on the server you deploy to, since your VPC already takes care of this: as root, execute service iptables stop (you probably already know this, I am guessing).
Is it important for your app to run across VPCs? Depending on how large the company you are selling to is, their security team may give them the run-around before allowing VPC-to-VPC communication.
