I have Cloud Build and Nexus (repository manager) running in the same project, but my Cloud Build job is not able to reach the Nexus node in the VPC. Do I need to provision a public IP for my Nexus node, or is there a way to reach it, some sort of VPC peering between Cloud Build and my VPC?
After further research it seems this is not possible without exposing Nexus publicly. For the reverse flow, however, Private Google Access can be used: https://cloud.google.com/vpc/docs/configure-private-google-access
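For that reverse flow, Private Google Access is enabled per subnet. A minimal sketch (subnet and region names here are hypothetical; substitute your own):

```shell
# Hypothetical subnet/region names -- replace with your own.
SUBNET="my-subnet"
REGION="us-central1"

# Enable Private Google Access on the subnet so VMs without public IPs
# can still reach Google APIs and services.
# (Requires configured gcloud credentials; skipped when the CLI is absent.)
if command -v gcloud >/dev/null 2>&1; then
  gcloud compute networks subnets update "$SUBNET" \
    --region="$REGION" \
    --enable-private-ip-google-access
fi

echo "private-google-access requested for ${SUBNET} in ${REGION}"
```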
Related
I have created 2 kubernetes clusters on AWS within a VPC.
1) Cluster dedicated to micro services (MI)
2) Cluster dedicated to Consul/Vault (Vault)
So basically both of those clusters can be reached through distinct classic public load balancers which expose the k8s APIs:
MI: https://api.k8s.domain.com
Vault: https://api.vault.domain.com
I also set up OpenVPN on both clusters, so you need to be logged in to the VPN to "curl" or "kubectl" into the clusters.
To do that I just added a new rule to the ELBs' security groups allowing the VPN's IP on port 443:
HTTPS 443 VPN's IP/32
At this point all works correctly, which means I'm able to successfully "kubectl" in both clusters.
The next thing I need to do is to be able to curl from a pod's container within the Vault cluster into the MI cluster. Basically:
Vault Cluster --------> curl https://api.k8s.domain.com --header "Authorization: Bearer $TOKEN" --------> MI cluster
The problem is that at the moment clusters only allow traffic from VPN's IP.
To solve that, I've added new rules in the security group of MI cluster's load balancer.
Those new rules allow traffic from the private IPs of each of the Vault cluster's node and master instances.
But for some reason it does not work!
Please note that before adding restrictions to the ELB's security group, I made sure the communication works when both clusters allow all traffic (0.0.0.0/0).
So the question is: when I execute a curl command from a pod's container to another cluster's API within the same VPC, what is the source IP I should add to the security group?
The NAT gateway's EIP for the Vault VPC had to be added to the ELB's security group to allow traffic.
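Adding that rule can be sketched with the AWS CLI; the EIP and security group ID below are hypothetical placeholders:

```shell
# Hypothetical IDs -- replace with your own.
NAT_EIP="203.0.113.10"           # EIP of the Vault VPC's NAT gateway
ELB_SG_ID="sg-0123456789abcdef0" # security group attached to the MI ELB

# Allow HTTPS only from the NAT gateway's public IP.
# (Requires configured AWS credentials; skipped when the CLI is absent.)
if command -v aws >/dev/null 2>&1; then
  aws ec2 authorize-security-group-ingress \
    --group-id "$ELB_SG_ID" \
    --protocol tcp --port 443 \
    --cidr "${NAT_EIP}/32"
fi

echo "rule: tcp/443 from ${NAT_EIP}/32 on ${ELB_SG_ID}"
```

Outbound pod traffic leaving the Vault VPC is NATed, which is why the NAT gateway's EIP, not the pod or node IPs, is what the MI ELB sees as the source.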
I am migrating my Spring Cloud Eureka application to AWS ECS and am currently having some trouble doing so.
I have an ECS cluster on AWS in which two EC2-backed services were created:
Eureka-server
Eureka-client
Each service has one task running on it.
QUESTION:
How do I establish a "docker network" between these two services so that I can register my eureka-client with the eureka-server's registry? Having them in the same cluster doesn't seem to do the trick.
Locally I am able to establish a "docker network" to achieve this. Is it possible to have a "docker network" on AWS?
The problem here lies in the way ECS clusters work. If you go to your dashboard and check out your task definition, you'll see an IP address which AWS assigns to the resource automatically.
In Eureka's case, you need to somehow obtain this IP address while deploying your eureka client apps and use it to register with your eureka-server. But of course your tasks get destroyed and recreated, so that address is easily lost.
I've done this before and there are a couple of ways to achieve it. Here is one of them:
For the EC2 instances on which you intend to place the eureka-server (registry) tasks, assign Elastic IP addresses so you always know which host IP to connect to.
You also need to tag them properly so you can refer them in the next step.
Then, switching back to ECS: when deploying your eureka-server tasks, the task definition configuration has an argument called placement_constraint.
This allows you to constrain your tasks to the instances you assigned Elastic IP addresses in the previous steps.
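As a concrete sketch of the constraint mechanism (the cluster name, container instance ARN, and image are all hypothetical, and the placement here uses a custom ECS instance attribute rather than an EC2 tag):

```shell
# Hypothetical names -- replace with your own.
CLUSTER="my-cluster"
TARGET_ID="arn:aws:ecs:us-east-1:123456789012:container-instance/abc123"  # hypothetical

# 1) Mark the container instance that holds the Elastic IP with a custom attribute.
#    (Requires AWS credentials; skipped when the CLI is absent.)
if command -v aws >/dev/null 2>&1; then
  aws ecs put-attributes --cluster "$CLUSTER" \
    --attributes "name=eureka-role,value=server,targetId=${TARGET_ID}"
fi

# 2) Constrain the eureka-server task to instances carrying that attribute.
cat > eureka-server-taskdef.json <<'EOF'
{
  "family": "eureka-server",
  "containerDefinitions": [
    {
      "name": "eureka-server",
      "image": "example/eureka-server:latest",
      "memory": 512,
      "portMappings": [{ "containerPort": 8761, "hostPort": 8761 }]
    }
  ],
  "placementConstraints": [
    { "type": "memberOf", "expression": "attribute:eureka-role == server" }
  ]
}
EOF

if command -v aws >/dev/null 2>&1; then
  aws ecs register-task-definition --cli-input-json file://eureka-server-taskdef.json
fi
```

With this in place, the eureka-server task can only land on instances whose Elastic IP you already know, so clients have a stable address to register against.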
Now if this is all good and you've deployed everything, you should be able to point your eureka-client apps at that IP and have them registered.
I know this looks dirty and kind of complicated, but the thing is the Netflix OSS Eureka project has missing parts, which I believe are their proprietary implementation for internal use that they don't want to share.
Another, and probably cleaner, way of doing this is to use a Route53 domain or alias record for your instances, so that instead of an Elastic IP you can refer to them by DNS name.
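The Route53 approach can be sketched like this; the hosted zone ID, record name, and Elastic IP below are hypothetical:

```shell
# Hypothetical zone ID, record name, and Elastic IP -- replace with your own.
ZONE_ID="Z0000000EXAMPLE"
EIP="203.0.113.10"

# An A record pointing a stable DNS name at the eureka-server instance's EIP.
cat > eureka-record.json <<EOF
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "eureka.internal.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "${EIP}" }]
      }
    }
  ]
}
EOF

# (Requires AWS credentials; skipped when the CLI is absent.)
if command -v aws >/dev/null 2>&1; then
  aws route53 change-resource-record-sets \
    --hosted-zone-id "$ZONE_ID" \
    --change-batch file://eureka-record.json
fi
```

Clients then register against the DNS name, so re-pointing the record is all it takes if the instance behind it ever changes.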
I don't see a way to configure the cluster FQDN for an on-premises installation.
I created a 6-node cluster (each node running on a physical server) and I'm only able to contact each node on its own IP instead of contacting the cluster on a general FQDN. With this model, I have to be aware of which node is up and which node is down.
Does somebody know how to achieve it, based on the sample configurations files provided with Service Fabric standalone installation package?
You need to add a network load balancer to your infrastructure for that. This will be used to route traffic to healthy nodes.
My objective is to set up automatic node recovery upon failure. I came across the ec2-autoscale-reactor formula in Salt, which ties in with Auto Scaling groups in AWS.
This mandates that the AWS SNS service issue notifications over HTTP(S) to the Salt Master when a node is created or deleted. However, AWS SNS needs a publicly reachable host to send notifications to; hence this may not be a solution, as my Salt Master will stay private within the VPC.
Is there any other option in Salt using which I may achieve automatic-node-healing for components in AWS?
In my microservices system I plan to use docker swarm and Consul.
In order to ensure the high availability of Consul I’m going to build a cluster of 3 server agents (along with a client agent per node), but this doesn’t save me from local consul agent failure.
Am I missing something?
If not, how can I configure swarm to be aware of more than 1 consul agents?
Consul is the only service discovery backend that doesn't support multiple endpoints when used with Swarm.
Both ZooKeeper and etcd support the etcd://10.0.0.4,10.0.0.5 format for providing multiple IPs for the "cluster" of discovery backends when used with Swarm.
To answer your question of how you can configure Swarm to support more than one Consul server: I don't have a definitive answer, but I can point you in a direction with something you can test (no guarantees):
One suggestion worth testing (though not recommended for production) is to use a load balancer that passes requests from the Swarm manager to one of the three Consul servers.
So when starting the swarm managers you can point to consul://ip_of_loadbalancer:port
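For legacy standalone Swarm, that would look roughly like the following; the load balancer and manager addresses are hypothetical:

```shell
# Hypothetical addresses -- the LB fronts the three Consul servers on 8500.
CONSUL_LB="10.0.0.100:8500"
MANAGER_IP="10.0.0.20"

# Legacy standalone Swarm: the manager stores cluster state in the
# discovery backend given on the command line.
# (Requires a Docker daemon; skipped when docker is absent.)
if command -v docker >/dev/null 2>&1; then
  docker run -d -p 4000:4000 swarm manage \
    -H :4000 \
    --replication \
    --advertise "${MANAGER_IP}:4000" \
    "consul://${CONSUL_LB}"
fi

echo "manager would register via consul://${CONSUL_LB}"
```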
This will, however, make the LB a single point of failure: if it goes down, discovery goes down with it.
I have not tested the above and can't answer if it will work or not - it is merely a suggestion.