Vert.x Hazelcast in Multidocker AWS - amazon-ec2

I am trying to run multiple Vert.x instances on one EC2 node via multiple Docker containers.
Container A:
Port Forwarding: 5071 -> 5071
Local IP: 172.17.0.4
Container B:
Port Forwarding: 5072 -> 5072
Local IP: 172.17.0.5
Container C:
Port Forwarding: 5073 -> 5073
Local IP: 172.17.0.6
I use the Hazelcast Amazon EC2 setup, but it is not working because the node itself has just one public IP (set in the Hazelcast setup) and there is no way to add ports.
How can I run multiple Vert.x instances via Hazelcast on AWS on different ports (maybe this different-port approach is not the best one)?
Thanks
Marcel
P.S.: I tried to add the nodes via the TCP/IP setup, but mixing the AWS and TCP/IP join mechanisms is not allowed.
P.P.S.: I cannot and don't want to use "--net=host" on AWS Elastic Beanstalk.
It looks like this one: https://github.com/hazelcast/hazelcast/issues/4537
Update:
My Hazelcast config:
JsonObject amazonConfig = clusterConfig.getJsonObject("aws");
String publicIp = null;
String privateIp = null;
String localIp = InetAddress.getLocalHost().getHostAddress();
logger.info("Found local IP: " + localIp);
try {
    publicIp = doHttpUrlConnectionAction("http://169.254.169.254/latest/meta-data/public-ipv4");
    logger.info("Found public IP: " + publicIp);
    privateIp = doHttpUrlConnectionAction("http://169.254.169.254/latest/meta-data/local-ipv4");
    logger.info("Found private IP: " + privateIp);
} catch (IOException | InterruptedException e) {
    logger.fatal("Cannot detect public cloud ip");
    throw e;
}
logger.info("AWS Cluster config loaded");
hazelcastConfig.getNetworkConfig().setPublicAddress(privateIp);
hazelcastConfig.getNetworkConfig().setPortAutoIncrement(false);
if (amazonConfig.containsKey("hazelcastPort")) {
    logger.info("Use port " + amazonConfig.getString("hazelcastPort") + " for hazelcast");
    hazelcastConfig.getNetworkConfig()
            .setPublicAddress(privateIp + ":" + amazonConfig.getString("hazelcastPort"));
    hazelcastConfig.getNetworkConfig().setPort(Integer.valueOf(amazonConfig.getString("hazelcastPort")));
}
// hazelcastConfig.setProperty("hazelcast.local.localAddress", localIp);
hazelcastConfig.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
hazelcastConfig.getNetworkConfig().getJoin().getAwsConfig().setEnabled(true);
// hazelcastConfig.getNetworkConfig().getInterfaces().setEnabled(true).addInterface(localIp);
if (amazonConfig.containsKey("region")) {
    hazelcastConfig.getNetworkConfig().getJoin().getAwsConfig().setRegion(amazonConfig.getString("region"));
}
if (amazonConfig.containsKey("accessKey")) {
    hazelcastConfig.getNetworkConfig().getJoin().getAwsConfig()
            .setAccessKey(amazonConfig.getString("accessKey"));
}
if (amazonConfig.containsKey("secretKey")) {
    hazelcastConfig.getNetworkConfig().getJoin().getAwsConfig()
            .setSecretKey(amazonConfig.getString("secretKey"));
}
try {
    String hazelcastGroup = System.getenv("HAZELCASTGROUP");
    logger.info("Join Hazelcast Nodes with Tag HAZELCASTGROUP and Value " + hazelcastGroup);
    hazelcastConfig.getNetworkConfig().getJoin().getAwsConfig().setTagKey("HAZELCASTGROUP");
    hazelcastConfig.getNetworkConfig().getJoin().getAwsConfig().setTagValue(hazelcastGroup);
} catch (Exception e) {
    logger.error("Cannot detect hazelcastgroup: " + e.getMessage(), e);
    throw e;
}
mgr = new HazelcastClusterManager(hazelcastConfig);
vertxOptions = new VertxOptions().setClusterManager(mgr).setClustered(true);
Solution
// privateIp = doHttpUrlConnectionAction("http://169.254.169.254/latest/meta-data/local-ipv4");
hazelcastConfig.getNetworkConfig().setPublicAddress(privateIp);
Don't disable setPortAutoIncrement.
For the first Docker image, set the port to 5701 via
hazelcastConfig.getNetworkConfig().setPort(5701);
On the second Docker image use 5702, and so on.
You don't need to link the Docker containers; just create a port mapping for each image.
Create a security group for these ports so that the other nodes can access them.
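Put together, the per-container setup above can be sketched in the same Java config style as the question's code (a hedged sketch, not a tested drop-in; `privateIp` is the variable from the question's snippet):

```java
// Container A binds 5701; use 5702 / 5703 in containers B and C,
// and publish the same port in each container's Docker port mapping.
int hazelcastPort = 5701;
hazelcastConfig.getNetworkConfig().setPort(hazelcastPort);
// Per the solution above: leave auto-increment enabled.
hazelcastConfig.getNetworkConfig().setPortAutoIncrement(true);
hazelcastConfig.getNetworkConfig().setPublicAddress(privateIp);
```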

Here are my recommendations. If they don't solve the problem, please post the HZ log statements.
Uncomment the line that sets the localAddress property: hazelcastConfig.setProperty("hazelcast.local.localAddress", localIp);
Disable the TCP/IP join configuration explicitly.
Remove the second setting of the public address:
hazelcastConfig.getNetworkConfig().setPublicAddress(privateIp);
hazelcastConfig.getNetworkConfig().setPortAutoIncrement(false);
if (amazonConfig.containsKey("hazelcastPort")) {
    logger.info("Use port " + amazonConfig.getString("hazelcastPort") + " for hazelcast");
    hazelcastConfig.getNetworkConfig().setPort(Integer.valueOf(amazonConfig.getString("hazelcastPort")));
}
If you can, try to use the default ports. As you pointed out in the comments, there was an issue with HZ not supporting custom ports. The AWSClient specification also doesn't allow specifying custom ports; it tends to use the default ports 5701, 5702, and 5703. Here is the enhancement request I created a few months back: https://github.com/hazelcast/hazelcast-aws/issues/3
Also make sure the Docker containers are able to communicate with each other.
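Taken together, the recommendations above would look roughly like this (a sketch against the Hazelcast 3.x API, not a tested drop-in):

```java
// 1. Pin the local bind address inside the container.
hazelcastConfig.setProperty("hazelcast.local.localAddress", localIp);
// 2. Explicitly disable the TCP/IP and multicast joins so only the AWS join is active.
hazelcastConfig.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(false);
hazelcastConfig.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
hazelcastConfig.getNetworkConfig().getJoin().getAwsConfig().setEnabled(true);
// 3. Set the public address exactly once.
hazelcastConfig.getNetworkConfig().setPublicAddress(privateIp);
```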

Related

Cannot connect using service mesh in nomad / consul

When I try to connect to an upstream service via a sidecar service in Consul Connect, I get the following error.
2023-02-01T09:31:33-08:00 Setup Failure failed to setup alloc: pre-run hook "group_services" failed: unable to get address for service "sequrercbase": invalid port "base_port": port label not found
The upstream service is named 'sequrercbase' and creates a dynamic port named 'base_port' that I'd like downstream services to connect to.
network {
  mode = "bridge"
  port "base_port" { }
}

service {
  name = "sequrercbase"
  port = "base_port"
  connect {
    sidecar_service {}
  }
}
This service is trying to connect to 'securercbase' on the named port 'base_port'.
network {
  mode = "bridge"
  port "api_port" { }
}

service {
  name = "sequrercbase"
  port = "base_port"
  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "sequrercbase"
          local_bind_port  = 9989
        }
      }
    }
  }
}
Any thoughts on how to work around this issue?

Configure Outlier detection for consul service mesh using nomad job

I am trying to configure outlier detection for a Consul Connect service mesh based on this documentation:
https://learn.hashicorp.com/tutorials/consul/service-mesh-circuit-breaking?in=consul/developer-mesh
The documentation shows that outlier detection and circuit breaking can be configured using the config stanza inside proxy.upstreams, but the following job file throws the error: Blocks of type "config" are not expected here.
job "docs" {
  datacenters = ["dc1"]
  group "docs" {
    network {
      mode = "bridge"
    }
    service {
      name = "docs"
      port = "5678"
      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "demo"
              local_bind_port  = 10082
              config {
                connect_timeout_ms = 3000
              }
            }
          }
        }
      }
    }
    task "server" {
      driver = "docker"
      config {
        image = "hashicorp/http-echo"
        args = [
          "-listen",
          ":5678",
          "-text",
          "hello world",
        ]
      }
    }
  }
}
Am I doing anything wrong? Is this not the right way to configure circuit breaking in a Nomad job file?
Sidecar proxies, circuit breaking, ingress, and egress must be configured in Consul directly, not from Nomad. Also, in your job you didn't map the port inside Docker to the outside port. Consul works with a specific version of the Envoy load balancer.
First, launch your job without the connect stanza and do the port mapping.
Install Envoy and set up the proxy connection manually to test.
Once the test works, create a service proxy to launch your sidecar and your circuit breaking.
1. Launch the job (in this example, your port inside Docker is 8080):
job "docs" {
  datacenters = ["dc1"]
  group "docs" {
    network {
      mode = "bridge"
    }
    task "server" {
      driver = "docker"
      config {
        image = "hashicorp/http-echo"
        args = [
          "-listen",
          ":5678",
          "-text",
          "hello world",
        ]
        port_map {
          docs = 8080
        }
      }
      resources {
        network {
          mbits = 10
          port "docs" { static = 5678 }
        }
      }
      service {
        name = "docs"
        port = "docs"
        check {
          name     = "docs port alive"
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
2. Check your Consul version and install the supported Envoy version from here. I use Consul 1.11, so I will install the supported Envoy 1.18.4:
yum -y -q install tar
curl https://func-e.io/install.sh | bash -s -- -b /usr/local/bin
func-e use 1.18.4
Make the Envoy binary available:
cp /root/.func-e/versions/1.18.4/bin/envoy /usr/local/bin/
Proxy integration
Insert the following at the end of your Consul config (for me, the config is stored in /etc/consul.d/config.hcl):
config_entries {
  bootstrap = [
    {
      kind = "proxy-defaults"
      name = "global"
      config {
        protocol = "http"
      }
    }
  ]
}
Restart your Consul service to check that the Envoy proxy integration worked:
systemctl restart consul
Overwrite your service registration in Consul with a Consul config file:
cat > /etc/consul.d/docs.hcl <<- EOF
service {
  name = "docs"
  port = 5678
  #token = "" # put api service token here
  check {
    id = "docs"
    name = "HTTP API on Port 5678"
    http = "http://localhost:5678"
    interval = "30s"
  }
  connect {
    sidecar_service {
      port = 20000
      check {
        name = "Connect Envoy Sidecar"
        tcp = "127.0.0.1:20000"
        interval = "10s"
      }
    }
  }
}
EOF
Restart the Consul service or reload it:
systemctl restart consul
Test that the proxy sidecar is working:
consul connect envoy -sidecar-for=docs
Create the docs service proxy: create /etc/systemd/system/consul-envoy-docs.service and input the following:
cat > /etc/systemd/system/consul-envoy-docs.service <<- EOF
[Unit]
Description=Consul Envoy
After=syslog.target network.target
[Service]
ExecStart=/usr/local/bin/consul connect envoy -sidecar-for=docs
ExecStop=/bin/sleep 5
Restart=always
[Install]
WantedBy=multi-user.target
EOF
Restart Consul and start consul-envoy-docs:
systemctl daemon-reload
systemctl restart consul
systemctl start consul-envoy-docs
In the event that consul-envoy-docs fails, restart it with:
systemctl restart consul-envoy-docs
3. If everything works correctly, adapt the config in /etc/systemd/system/consul-envoy-docs.service as described here to set up circuit breaking.
If anyone has an issue with Nomad, Consul, Vault, Envoy, or the HashiStack, tag me.

Create private network with Terraform with starting script - Google Cloud Platform

Having started with Terraform on GCP recently, I would like to finish an exercise:
Create a new VPC network with a single subnet.
Create a firewall rule that allows external RDP traffic to the bastion host system.
Deploy two Windows servers that are connected to both the VPC network and the default network.
Create a virtual machine that points to the startup script.
Configure a firewall rule to allow HTTP access to the virtual machine.
Here is my solution:
Create a new VPC network called securenetwork, then create a new VPC subnet inside securenetwork. Once the network and subnet have been configured, configure a firewall rule that allows inbound RDP traffic (TCP port 3389) from the internet to the bastion host.
# Create the securenetwork network
resource "google_compute_network" "securenetwork" {
  name                    = "securenetwork"
  auto_create_subnetworks = false
}

# Create the securesubnet-eu subnetwork
resource "google_compute_subnetwork" "securesubnet-eu" {
  name          = "securesubnet-eu"
  region        = "europe-west1"
  network       = "${google_compute_network.securenetwork.self_link}"
  ip_cidr_range = "10.130.0.0/20"
}

# Create a firewall rule to allow RDP (TCP 3389) and ICMP traffic on securenetwork
resource "google_compute_firewall" "securenetwork-allow-http-ssh-rdp-icmp" {
  name    = "securenetwork-allow-http-ssh-rdp-icmp"
  network = "${google_compute_network.securenetwork.self_link}"
  allow {
    protocol = "tcp"
    ports    = ["3389"]
  }
  allow {
    protocol = "icmp"
  }
}

# Create the vm-securehost instance
module "vm-securehost" {
  source              = "./instance/securehost"
  instance_name       = "vm-securehost"
  instance_zone       = "europe-west1-d"
  instance_subnetwork = "${google_compute_subnetwork.securesubnet-eu.self_link}"
  instance_network    = "${google_compute_network.securenetwork.self_link}"
}

# Create the vm-bastionhost instance
module "vm-bastionhost" {
  source              = "./instance/bastionhost"
  instance_name       = "vm-bastionhost"
  instance_zone       = "europe-west1-d"
  instance_subnetwork = "${google_compute_subnetwork.securesubnet-eu.self_link}"
  instance_network    = "${google_compute_network.securenetwork.self_link}"
}
Deploy Windows instances
A Windows 2016 server instance called vm-securehost with two network interfaces: configure the first network interface with an internal-only connection to the new VPC subnet, and the second network interface with an internal-only connection to the default VPC network. This is the secure server.
variable "instance_name" {}
variable "instance_zone" {}
variable "instance_type" {
  default = "n1-standard-1"
}
variable "instance_subnetwork" {}
variable "instance_network" {}

resource "google_compute_instance" "vm_instance" {
  name         = "${var.instance_name}"
  zone         = "${var.instance_zone}"
  machine_type = "${var.instance_type}"
  boot_disk {
    initialize_params {
      image = "windows-cloud/windows-2016"
    }
  }
  network_interface {
    subnetwork = "${var.instance_subnetwork}"
    access_config {
      # Allocate a one-to-one NAT IP to the instance
    }
  }
}
A second Windows 2016 server instance called vm-bastionhost with two network interfaces: configure the first network interface to connect to the new VPC subnet with an ephemeral public (external NAT) address, and the second network interface with an internal-only connection to the default VPC network. This is the jump box or bastion host.
variable "instance_name" {}
variable "instance_zone" {}
variable "instance_type" {
  default = "n1-standard-1"
}
variable "instance_subnetwork" {}
variable "instance_network" {}

resource "google_compute_address" "default" {
  name   = "default"
  region = "europe-west1"
}

resource "google_compute_instance" "vm_instance" {
  name         = "${var.instance_name}"
  zone         = "${var.instance_zone}"
  machine_type = "${var.instance_type}"
  boot_disk {
    initialize_params {
      image = "windows-cloud/windows-2016"
    }
  }
  network_interface {
    subnetwork = "${var.instance_subnetwork}"
    network    = "${var.instance_network}"
    access_config {
      # Allocate a one-to-one NAT IP to the instance
      nat_ip = "${google_compute_address.default.address}"
    }
  }
}
My questions:
How do I configure the Windows compute instance vm-securehost so that it does not have a public IP address?
How do I configure vm-securehost to run Microsoft IIS web server software on startup?
Thanks for any comments on the solution.
To create a VM without any external IP address, omit the access_config block in your Terraform script, as it is the one responsible for the creation of the external IP address.
To run Microsoft IIS web server software on your VM at startup, attach a startup script to your VM creation block. Note that Windows images do not execute the generic startup-script metadata key that Terraform's metadata_startup_script argument sets; a PowerShell startup script goes under the windows-startup-script-ps1 metadata key instead, with a value such as:
Import-Module ServerManager; Add-WindowsFeature Web-Server -IncludeAllSubFeature
Please refer to the following links for detailed information:
https://cloud.google.com/compute/docs/tutorials/basic-webserver-iis
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance#metadata_startup_script
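For reference, a hedged sketch of how the startup script could be attached to the vm_instance resource from the question, assuming the windows-startup-script-ps1 metadata key (the key Windows images execute for PowerShell startup scripts):

```hcl
resource "google_compute_instance" "vm_instance" {
  # ... name, zone, machine_type, boot_disk, network_interface as above ...

  metadata = {
    # Runs once at boot on Windows images and installs IIS.
    windows-startup-script-ps1 = "Import-Module ServerManager; Add-WindowsFeature Web-Server -IncludeAllSubFeature"
  }
}
```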

Elasticsearch client does not fetch result when a single client node goes down

We have a very standard Elasticsearch setup with 3 master nodes, 6 data nodes, and 3 client nodes. Here is the code our Java application uses to connect to the Elasticsearch client nodes:
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", configuration.getString("clusterName"))
        .put("client.transport.sniff", false)
        .put("client.transport.ping_timeout", "5s")
        .build();
TransportClient client = TransportClient.builder().settings(settings).build();
for (String hostname : (Collection<String>) configuration.get("hostnames")) {
    try {
        client = client.addTransportAddresses(
                new InetSocketTransportAddress(InetAddress.getByName(hostname), 9300)
        );
        break;
    } catch (UnknownHostException e) {
        e.printStackTrace();
    }
}
We currently have three different hosts in the hostnames list, but any time a single client from this list goes down, the Elasticsearch transport client stops responding. I have gone through the transport client documentation on the Elasticsearch site and have also looked through their GitHub issues. According to those, whenever a node goes down, Elasticsearch should simply remove it from the list of nodes and continue working with the other nodes, but in our case things just break down. Does anyone have any idea what the problem might be?
We are using Elasticsearch 2.4.3 right now.
It looks like you are breaking out of the loop after a single node has been added, so the client only ever knows about one host. Try removing the break statement:
for (String hostname : (Collection<String>) configuration.get("hostnames")) {
    try {
        client = client.addTransportAddresses(
                new InetSocketTransportAddress(InetAddress.getByName(hostname), 9300)
        );
    } catch (UnknownHostException e) {
        e.printStackTrace();
    }
}
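One more knob that may be worth checking (my suggestion, not part of the answer above): the ES 2.x transport client can keep its node list fresh on its own when sniffing is enabled, instead of relying only on the statically configured hosts. Note that sniffing only helps if the addresses the nodes publish are reachable from the client:

```java
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", configuration.getString("clusterName"))
        // Let the client discover cluster nodes and drop dead ones itself.
        .put("client.transport.sniff", true)
        .put("client.transport.ping_timeout", "5s")
        .build();
```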

Build Url using ServletUriComponentsBuilder without port number in Spring

I am using ServletUriComponentsBuilder in my service class to build some URLs, but the problem is that it includes the port number of the servlet container. This is a problem when deploying my app in production behind a proxy server that is supposed to run only on port 80.
Code that I am using is:
String sUri = ServletUriComponentsBuilder.fromCurrentContextPath().path("/student/edit/" + st.getId()).build().toUriString();
The c:url tag that I am using in JSP works perfectly fine; it does not include the port number. Is there any way for ServletUriComponentsBuilder to detect whether it needs to include the port number? That is, if the application is accessed on port 8080, include the port number, but when the app is accessed on port 80, leave it out.
What's happening: my Tomcat is running on port 8080 behind a proxy server that serves requests on port 80, but the URLs built by ServletUriComponentsBuilder still append port 8080 after the host; I need it to be 80.
Take a look at ServletUriComponentsBuilder#fromRequest:
String scheme = request.getScheme();
int port = request.getServerPort();
String host = request.getServerName();
String header = request.getHeader("X-Forwarded-Host");
if (StringUtils.hasText(header)) {
    String[] hosts = StringUtils.commaDelimitedListToStringArray(header);
    String hostToUse = hosts[0];
    if (hostToUse.contains(":")) {
        String[] hostAndPort = StringUtils.split(hostToUse, ":");
        host = hostAndPort[0];
        port = Integer.parseInt(hostAndPort[1]);
    }
    else {
        host = hostToUse;
    }
}
....
Especially this line:
String header = request.getHeader("X-Forwarded-Host");
will do the trick. All you have to do is set the X-Forwarded-Host header in your proxy server and use ServletUriComponentsBuilder#fromRequest instead of ServletUriComponentsBuilder#fromCurrentContextPath. Your URL should then contain your public proxy hostname and no port.
This is a bug in this method:
public static ServletUriComponentsBuilder fromRequest(HttpServletRequest request) {
    String scheme = request.getScheme();
    int port = request.getServerPort();
    String host = request.getServerName();
    String header = request.getHeader("X-Forwarded-Host");
    if (StringUtils.hasText(header)) {
        String[] hosts = StringUtils.commaDelimitedListToStringArray(header);
        String hostToUse = hosts[0];
        if (hostToUse.contains(":")) {
            String[] hostAndPort = StringUtils.split(hostToUse, ":");
            host = hostAndPort[0];
            port = Integer.parseInt(hostAndPort[1]);
        }
        else {
            host = hostToUse;
        }
    }
    ServletUriComponentsBuilder builder = new ServletUriComponentsBuilder();
    builder.scheme(scheme);
    builder.host(host);
    if ((scheme.equals("http") && port != 80) || (scheme.equals("https") && port != 443)) {
        builder.port(port);
    }
    builder.pathFromRequest(request);
    builder.query(request.getQueryString());
    return builder;
}
If X-Forwarded-Host is set and contains no port, it is because we are on port 80. But the else branch only replaces the host:
else {
host = hostToUse;
}
So this is a mixed case where the host is read from the X-Forwarded-Host value but the port is read directly from the request (and that is the request Apache uses to call Tomcat).
We are working on this issue here and didn't find any alternative to writing a new UriComponentsBuilder (ok... maybe just extending it!).
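The mixed case is easy to reproduce in isolation. Below is a minimal, dependency-free re-implementation of the parsing logic quoted above (class and method names are mine, for illustration only); it shows the backend's port leaking into the URL whenever X-Forwarded-Host carries no explicit port:

```java
public class ForwardedHostDemo {

    // Mirrors the fromRequest() logic above: the host may come from
    // X-Forwarded-Host, but the port falls back to the backend request's port.
    static String baseUrl(String scheme, String serverName, int serverPort, String forwardedHost) {
        String host = serverName;
        int port = serverPort;
        if (forwardedHost != null && !forwardedHost.isEmpty()) {
            String hostToUse = forwardedHost.split(",")[0].trim();
            int colon = hostToUse.indexOf(':');
            if (colon >= 0) {
                host = hostToUse.substring(0, colon);
                port = Integer.parseInt(hostToUse.substring(colon + 1));
            } else {
                host = hostToUse; // port is NOT reset here -> stays at e.g. 8080
            }
        }
        boolean defaultPort = ("http".equals(scheme) && port == 80)
                || ("https".equals(scheme) && port == 443);
        return scheme + "://" + host + (defaultPort ? "" : ":" + port);
    }

    public static void main(String[] args) {
        // Proxy forwards the host without a port: Tomcat's 8080 leaks into the URL.
        System.out.println(baseUrl("http", "tomcat.internal", 8080, "www.example.com"));
        // Proxy forwards "host:80": the default port is correctly omitted.
        System.out.println(baseUrl("http", "tomcat.internal", 8080, "www.example.com:80"));
    }
}
```

A proxy that appends the port to X-Forwarded-Host (or, in later Spring versions, ForwardedHeaderFilter) avoids the fallback branch entirely.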
