Load balancer not working with http2 - AWS - amazon-ec2

I have a network load balancer set up on EC2 and everything works fine, though I'd like to enable HTTP/2 to get better performance and scores from auditing tools (e.g. Lighthouse).
My setup is Nginx on Ubuntu, and the load balancer only points to 2 instances.
When I set up the listener to use HTTP/2, it doesn't work, even though Nginx and everything on the instances is configured properly. Is it because I'm using Certificate Manager, and there's no way for the balancer to use the certificate if it's installed through there?
Thanks a lot!

In case someone comes across this, I ended up solving the issue recently.
My problem was that I needed to install a self-signed certificate on both of my EC2 instances. That secures the back-and-forth between the load balancer and the instances and, in turn, allows the load balancer to advertise that HTTP/2 is available. For some reason, I thought this was already configured, but it wasn't.
Now everything works fine!
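For reference, here is a minimal sketch of the Nginx server block on each instance once the self-signed certificate is in place (the certificate paths, server name and web root below are placeholders, not the actual values from this setup):

    # per-instance nginx config (sketch); the load balancer forwards to port 443
    server {
        listen 443 ssl http2;                                # HTTP/2 is negotiated over TLS
        server_name example.com;                             # placeholder
        ssl_certificate     /etc/nginx/ssl/selfsigned.crt;   # the self-signed certificate
        ssl_certificate_key /etc/nginx/ssl/selfsigned.key;
        root /var/www/html;
    }

With the instances terminating TLS themselves, the listener can forward to port 443 and HTTP/2 gets negotiated on the secured connection.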

Related

Load balance Fiddler Proxy

For a POC I want to have 2+ separate instances of Fiddler Proxy running on separate servers and have nginx do the round-robin balancing (or sticky sessions if necessary). The problem I'm running into is that, per Telerik, each Fiddler instance generates its own root certificate in order to decrypt HTTPS traffic, so when the traffic starts going to different servers the client side freaks out and closes the connections. I tried to solve it with sticky sessions, but that still requires me to install all 2+ Fiddler root certificates on the client, and if I add or remove Fiddler instances I need to manually update all the clients. I've tried exporting the root certificate from one (let's say master) instance and importing it into the other (secondary) instances, but it doesn't seem to work. I'm not really a security/certificate expert, so I'm not sure whether I'm doing something wrong or this is just not possible. I appreciate any help and the opportunity to be educated in the matter :-p Thanks!

Memcache on kubernetes

I have a Spring Boot API running on a Google Cloud Kubernetes cluster, and I want a caching server for my API, so I thought I'd use memcached.
I tried two ways of doing it:
I deployed memcached from the Google launcher, which basically deploys an instance of memcached on a VM. I then assigned an external IP to my VM, whitelisted my IP to try it locally, and of course opened port 11211 (the default one). For the client side I used this client and specified the IP address, but I still get a cancelled connection: java.util.concurrent.CancellationException: Cancelled, and the documentation is poor so I couldn't find anything that helps.
I then tried another way, following this tutorial, and now I have the memcached cluster, but I don't know how to consume these pods from my other cluster. Or should the pods be on the same cluster my API is running on?
I would appreciate any help; this is my first encounter with global caching.
So I figured it out based on Jonah Benton's advice.
It was actually pretty simple: I used this tutorial to create a new pod running memcached in my cluster, and then I used this client to connect to it, and it worked like a charm!
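For anyone who wants a concrete starting point, here is a minimal sketch using the spymemcached client (not necessarily the exact client linked above); the Service hostname and namespace are placeholders for whatever your memcached Service is called in the cluster:

    // Minimal smoke test with spymemcached. The in-cluster hostname
    // "memcached.default.svc.cluster.local" is a placeholder; adjust it to
    // your Service name and namespace.
    import net.spy.memcached.MemcachedClient;
    import java.net.InetSocketAddress;

    public class MemcachedSmokeTest {
        public static void main(String[] args) throws Exception {
            MemcachedClient client = new MemcachedClient(
                    new InetSocketAddress("memcached.default.svc.cluster.local", 11211));
            client.set("greeting", 3600, "hello from the api pod"); // key, TTL in seconds, value
            System.out.println(client.get("greeting"));             // should print the value back
            client.shutdown();
        }
    }

Because the API pods and the memcached pod live in the same cluster, the client reaches memcached through the Service's cluster-internal DNS name, so no external IP or open port is needed.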
Hope it helps someone.

Classic load balancer HTTPS ACM issue

I have a website -- portaldevservices.com
The domain is managed by route 53 and works fine with http.
I have one ec2 instance.
I recently decided to move to https and put a load balancer in front of the ec2 instance.
From there I created a load balancer and pointed the A record and the CNAME at the load balancer's DNS name. The health check is fine and the EC2 instance was added.
Using Amazon Certificate manager I created a cert and added it to the load balancer.
When I try to access https://portaldevservices.com I get an error (see the website screenshot).
Here is some more info from my setup, as screenshots: hosted zones, load balancer port config, load balancer basic config, load balancer listener, ACM certificate.
Thanks for the help. I'm a mobile dev so this is my first time really stepping into the backend world.
Solved:
OK, that was a lot easier than I thought. If anyone else experiences this issue, all I had to do was add "www." to the front of my A record:
From portaldevservices.com -> www.portaldevservices.com
HTTPS access now works well.
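For illustration, the hosted zone ends up with roughly these two records, both pointing at the load balancer (the ELB DNS name below is a placeholder, not my real one):

    portaldevservices.com.      A (alias)  ->  my-classic-elb-123456789.us-east-1.elb.amazonaws.com.
    www.portaldevservices.com.  A (alias)  ->  my-classic-elb-123456789.us-east-1.elb.amazonaws.com.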

Can't connect to Tigase server running on EC2 Instance: Connection Refused

After installing Tigase on an AWS EC2 instance, I keep getting the error message 'connection refused' when I try to connect to it using an XMPP client.
The instance is attached to a security group with rules allowing traffic to the necessary ports (Tigase needs 5223 primarily, and some others for more exotic features). I've also tried rules allowing all traffic to all ports from all sources, but I still get the same message.
I've also checked iptables, because I noticed some people needed to configure it as well in specific cases. I made sure it allows all connections, but I still can't connect to Tigase.
- Yes, Tigase is running, and there are no relevant errors in the Tigase logs
- SSH (port 22) and HTTP (port 80) work fine
- Enabling ICMP (ping) works fine
- I've tried several XMPP clients, same problem
- I've deleted and recreated instances several times
- Re-installed Tigase on fresh instances several times with various configuration options
- Tried the domain name associated with the Elastic IP, the plain public IP, and the public DNS name directly
- Configured the DNS in the way necessary for Tigase, as described here
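A quick way to narrow down where the refusal happens is to check, on the instance, whether anything is listening on 5223 and, from the client, whether a plain TCP connection gets through (the IP below is a placeholder for the Elastic IP):

    # on the EC2 instance: is Tigase actually listening on 5223, and on which address?
    sudo ss -tlnp | grep 5223
    # from the client machine: does a bare TCP connection succeed?
    nc -vz 203.0.113.10 5223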
I've looked everywhere and have not been able to find anything to fix this. Networking isn't my main area of expertise and I'd really appreciate any advice.
Wow. In case anyone runs into the same problem in the future, it turns out this was related to the AMI. I was using an Amazon Linux AMI and switched to Ubuntu Server 14.04 LTS. I wish I had tried this sooner, but I didn't really consider it a possible solution earlier. Apparently Amazon Linux doesn't play well with Tigase.

WSO2 WSAS Application Server and ELB not working

I successfully followed the guides at http://docs.wso2.org/display/Cluster/Clustering+Application+Server and have deployed a scenario of one ELB 2.1.0 balancing and connecting the nodes as follows:
- One machine with the ELB, manager and worker node
- Another, physically different machine with another worker node
All the management on the ELB, manager and workers seems to be fine, as the logs and the console show every member connecting to every other, and so on.
My problem is that if I configure the "proxyPort" properties in the catalina-server.xml of the manager and the worker nodes, I can't connect through the ELB ports 8280 or 8243: I get a blank page (if I try to use the manager administration console) or an empty web server response (if I try to consume any web service through the ELB port).
If I don't configure the proxyPort properties on each node, and point to each IP and port separately, I can successfully use the manager console and consume the web services on each worker node as expected, but... of course, this way I don't get load balancing and high availability.
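For reference, the proxyPort change in question goes on the Tomcat connectors in repository/conf/tomcat/catalina-server.xml of each manager/worker node. A rough sketch, showing only the attributes relevant to fronting with the ELB (everything else stays as shipped):

    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
               port="9763"
               proxyPort="8280"/>
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
               port="9443"
               proxyPort="8243"
               scheme="https"
               secure="true"/>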
Sorry to bother you, as I'm new to these matters, but I've searched all around the internet and have managed to fix every problem up to the point I'm at now... It seems that the problem should only be some transport-level configuration in the axis2.xml of some node, or maybe the fact that our network is behind a proxy has something to do with it... I don't know.
Could anybody give any advice? Versions are WSAS 5.2.0 and ELB 2.1.0.
We have now found that, when working with the previous versions (with Tribes rather than Hazelcast as the clustering class), WSAS 5.1.0 and ELB 2.0.3, and not changing the domain parameter in the axis2.xml of the ELB (leaving it as it comes, which is different from the domain established in the loadbalancer.conf and from the domain in the axis2.xml of the workers and manager), it works well. But if we don't change the domain in the axis2.xml when working with the 5.2.0 and 2.1.0 versions, trying to replicate the setup we had with the previous versions, the ELB doesn't realize that a manager and worker are connecting to it (we can't see anything in the ELB logs when launching the manager and worker), so I suppose in this case the clustering is not working. For it to work properly, we need to set the domain in the axis2.xml of the load balancer to the same value as in the loadbalancer.conf and in the axis2.xml of the rest of the nodes in the cluster.
We need to deploy this for testing and evaluation purposes at a customer, and we would like to understand whether there is something wrong with the latest versions, or whether this is just a lack of knowledge on our part, in which case we will need the help of this forum :).
At this link you can find the configuration files involved in the ELB 2.1.0 and manager WSAS 5.2.0 issue: http://www.dravencrow.com/varios/configuration_files.rar
Thank you very much in advance
Just for the sake of others with the same problem, we finally worked this out. It seems that with the newest version of the ELB (2.1.0) you need to establish the port on which to listen for other members at the loadbalancer.conf level, with the property "group_mgt_port", which does not come by default in the file shipped with the ELB 2.1.0 release. Also, the domain set in the axis2.xml at the ELB level must be totally different from the domain set in the axis2.xml files of the other nodes (and from the domain in the loadbalancer.conf itself).
With this setup, the latest versions of the ELB and AS work fine, handling the connections between each other, and the proxy ports work as well, connecting to the manager and workers through the ELB port 8243.
Thank you very much for your patience, and apologies for bothering you... it's really difficult to find reliable documentation on the latest versions of each product in the WSO2 suite, as most of the available info is spread across different blogs and forums. Regards
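To make that concrete, here is a rough sketch of the two pieces involved; the host name, domain names and port are placeholders and the rest of the stock files is omitted:

    # loadbalancer.conf on the ELB: the relevant service block (sketch)
    appserver {
        hosts           as.example.com;
        domains {
            wso2.as.domain {            # same domain as in the manager/worker axis2.xml
                tenant_range    *;
                group_mgt_port  4000;   # not present by default in the ELB 2.1.0 file
            }
        }
    }

    <!-- axis2.xml clustering section on the ELB: its own domain, different from
         the wso2.as.domain used by the manager and worker nodes -->
    <parameter name="domain">wso2.elb.domain</parameter>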
