How to set up a bridge as an AWS EC2 instance

I am trying to set up a test environment in my Amazon cloud for a proxy component.
The proxy component is an EC2 instance that all network traffic passes through.
client
 _____
|_____| ------|
              |
client        |        proxy
 _____        |         _______
|_____| ------| ----> |_______| -----> Internet
              |
client        |
 _____        |
|_____| ------|
I created a VPC, but I can't understand how to "connect" each client so that its traffic passes through the proxy.
EDIT
The way our proxy works is by using a bridge interface (br0) that transfers the network data between eth0 and eth1 and back; a minimal sketch of such a bridge is shown after the diagram below.
EC2-instance  |
 _____        |
|_____| ------|
              |        proxy (bridge)
EC2-instance  |        (also an EC2 instance)
 _____        |        __________________        ________________
|_____| ------|------>|        br0       | ---->|Internet Gateway|---> Internet
              |       |__________________|
EC2-instance  |
 _____        |
|_____| ------|
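For reference, on a plain Linux box the br0 bridge described above is built with iproute2 commands roughly like these (interface names taken from the diagram):
# create the bridge and attach both interfaces to it
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth1 master br0
ip link set dev br0 up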
Is this kind of network topology also configurable in AWS?

Here is my understanding of your question; if you can provide more detail or need adjustments, let me know.
1. Assign all application clients to private subnets; don't add an internet gateway route to their route table.
2. Create a NAT instance. You can use an existing Amazon NAT AMI to create it, e.g. from Community AMIs (amzn-ami-vpc-nat-pv-2015.03.0.x86_64-ebs - ami-XXXXXX).
3. Auto-assign a public IP address or attach an EIP (Elastic IP) to this NAT instance.
4. Disable source/destination checks on this NAT instance.
5. Update the main route table of the private subnets that hold your application clients to point 0.0.0.0/0 at this NAT instance.
6. Now all clients should be able to access the Internet via this NAT instance.
Refer to Setting up the NAT Instance.
Of course, you can also build a custom NAT instance in your AWS VPC.
If you need something like a Squid proxy server, please give more detail.
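As a rough sketch, steps 4 and 5 above can also be done with the AWS CLI (the instance and route table IDs below are placeholders):
# disable the source/destination check on the NAT instance
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
# route all non-local traffic from the private subnet's route table through the NAT instance
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 --instance-id i-0123456789abcdef0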

Related

Kubernetes NodePort url getting changed with "minikube service <service>"

I have created a NodePort Service to forward requests from port 30101 -> 80 -> 8089:
apiVersion: v1
kind: Service
metadata:
  name: gorest-service
spec:
  type: NodePort
  selector:
    app: gorest
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8089
      nodePort: 30101
When I try the service URL http://192.168.49.2:30101, I am unable to access it, but with the URL http://127.0.0.1:64741, retrieved by using minikube service <service>, I can access it.
Query: I am unable to understand how http://192.168.49.2:30101 was changed to http://127.0.0.1:64741 as retrieved by minikube service <service>.
% minikube service gorest-service
|-----------|----------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------------|-------------|---------------------------|
| default | gorest-service | 8089 | http://192.168.49.2:30101 |
|-----------|----------------|-------------|---------------------------|
๐Ÿƒ Starting tunnel for service gorest-service.
|-----------|----------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------------|-------------|------------------------|
| default | gorest-service | | http://127.0.0.1:64741 |
|-----------|----------------|-------------|------------------------|
🎉 Opening service default/gorest-service in default browser...
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
In your manifest you map the Service's port 80 to container port 8089.
If you want to reach that workload from inside the cluster, you go through the Service on port 80, which forwards to port 8089 in the container.
BUT you cannot access services inside K8s from the outside world directly: you need to expose them (for example with a load balancer or an ingress).
Minikube is meant to be used for development. The 64741 port you see is a tunnel that minikube starts, which lets you test and debug your service from outside the cluster without setting up an ingress (which might do more than just connect the service to the outside world, for instance authenticating or authorizing requests).
Your understanding is correct: a service exposed using NodePort should be reachable at minikube_IP:NodePort. I first checked it on a Linux VM with minikube installed, and it worked.
Then I noticed you're using macOS:
Because you are using a Docker driver on darwin
which leads us to some limitations of minikube running with the Docker driver on macOS. Please see this GitHub issue.
There are at least two options (there are more, but these are simple to do):
1. Use minikube tunnel, which is what you did and it worked for you. The tunnel exposes the service from inside the VM where minikube is running to the host machine's network. Please refer to access applications in minikube. This is how minikube_IP:NodePort turns into localhost:different_port.
2. Start minikube with the VirtualBox driver to get a routable IP (if you really need to access your service on the NodePort). Below is the command to start it with the VirtualBox driver (VirtualBox must be installed on your machine):
minikube start --driver=virtualbox
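As a quick sanity check, the assigned NodePort and the tunnel URL can both be printed from the command line (standard kubectl/minikube commands, using the service name from the question):
# confirm the NodePort that was actually assigned to the service
kubectl get svc gorest-service -o jsonpath='{.spec.ports[0].nodePort}'
# print the reachable URL (with the Docker driver on macOS this opens a tunnel)
minikube service gorest-service --url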

IPsec - Clients cannot ping each other

I'm having a hard time finalizing a first working configuration with IPsec.
I want to have an IPsec server that creates a network with its clients, and I want the clients to be able to communicate with each other through the server. I'm using strongSwan on both server and clients, and I'll have a few clients with other IPsec implementations.
Problem
So the server is reachable at 10.231.0.1 for every client, and the server can ping the clients. That works well. But the clients cannot reach each other.
Here is the output of tcpdump when I try to ping 10.231.0.2 from 10.231.0.3:
# tcpdump -n host 10.231.0.3
[..]
21:28:49.099653 ARP, Request who-has 10.231.0.2 tell 10.231.0.3, length 28
21:28:50.123649 ARP, Request who-has 10.231.0.2 tell 10.231.0.3, length 28
I thought of the farp plugin, mentioned here: https://wiki.strongswan.org/projects/strongswan/wiki/ForwardingAndSplitTunneling, but the ARP request is not making its way to the server; it stays local.
Information
Server ipsec.conf
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn ikev2-vpn
    auto=add
    compress=no
    type=tunnel
    keyexchange=ikev2
    fragmentation=yes
    forceencaps=yes
    dpdaction=clear
    dpddelay=300s
    esp=aes256-sha256-modp4096!
    ike=aes256-sha256-modp4096!
    rekey=no
    left=%any
    leftid=%any
    leftcert=server.crt
    leftsendcert=always
    leftsourceip=10.231.0.1
    leftauth=pubkey
    leftsubnet=10.231.0.0/16
    right=%any
    rightid=%any
    rightauth=pubkey
    rightsourceip=10.231.0.2-10.231.254.254
    rightsubnet=10.231.0.0/16
Client ipsec.conf
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn ikev2-vpn
    auto=route
    compress=no
    type=tunnel
    keyexchange=ikev2
    fragmentation=yes
    forceencaps=yes
    dpdaction=clear
    dpddelay=60s
    esp=aes256-sha256-modp4096!
    ike=aes256-sha256-modp4096!
    rekey=no
    right=server.url
    rightid=%any
    rightauth=pubkey
    rightsubnet=10.231.0.1/32
    left=%defaultroute
    leftid=%any
    leftauth=pubkey
    leftcert=client.crt
    leftsendcert=always
    leftsourceip=10.231.0.3
    leftsubnet=10.231.0.3/32
There should be nothing special or relevant in strongSwan's and charon's configuration files, but I can provide them if you think they could be useful.
I've taken a few shortcuts in the configuration: I'm using virtual IPs, but I'm not using a DHCP plugin or anything else to distribute the IPs. I'm setting the IP address manually on the clients like so:
ip address add 10.231.0.3/16 dev eth0
And here is the routing table on the client's side (set automatically by adding the IP, and by strongSwan for table 220):
# ip route list | grep 231
10.231.0.0/16 dev eth0 proto kernel scope link src 10.231.0.3
# ip route list table 220
10.231.0.1 via 192.168.88.1 dev eth0 proto static src 10.231.0.3
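For completeness, the policy rule that makes the kernel consult table 220 (which strongSwan installs by default) can be listed with standard iproute2 tooling:
# charon installs a rule pointing at its own routing table (220 by default)
ip rule show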
I've also played with iptables and this rule
iptables -t nat -I POSTROUTING -m policy --pol ipsec --dir out -j ACCEPT
on both client and server, because I understood that existing MASQUERADE rules could be a problem, but that did not change anything.
I've also set these kernel parameters through sysctl on both the client and the server:
sysctl net.ipv4.conf.default.accept_redirects=0
sysctl net.ipv4.conf.default.send_redirects=0
sysctl net.ipv4.conf.default.rp_filter=0
sysctl net.ipv4.conf.eth0.accept_redirects=0
sysctl net.ipv4.conf.eth0.send_redirects=0
sysctl net.ipv4.conf.eth0.rp_filter=0
sysctl net.ipv4.conf.all.proxy_arp=1
sysctl net.ipv4.conf.eth0.proxy_arp=1
sysctl net.ipv4.ip_forward=1
Lead 1
This could be related to my subnets being declared as /32 in my clients' configurations. At first I declared the subnet as /16, but I could not connect two clients with that configuration: the second client was grabbing all the traffic for itself. So I understood I should narrow the traffic selectors, and this is how I did it, but maybe I'm wrong.
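One way to check which traffic selectors were actually negotiated and installed (plain strongSwan and iproute2 commands; the connection name is the one from ipsec.conf above):
# show established SAs and their negotiated traffic selectors
ipsec statusall | grep -A 3 ikev2-vpn
# show the IPsec policies the kernel actually installed
ip xfrm policy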
Lead 2
This could be related to my way of assigning the IPs manually, and the mess it can introduce in the routing table. When I play with the routing table and manually assign a gateway (like the public IP of the client as a gateway), the ARP requests in tcpdump disappear and I see the ICMP requests instead. But absolutely nothing reaches the server.
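For what it's worth, the routing decision for another client's virtual IP can be checked directly (standard iproute2; addresses as in my example above):
# show which route and routing table the kernel would use for the peer's virtual IP
ip route get 10.231.0.2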
Any thoughts on what I've done wrong?
Thanks

Configure pacemaker's src addr

I'm trying to configure my Corosync cluster with 2 IPs:
- 1 public
- 1 private
I have 3 primitives (roughly sketched below):
- 2 IPaddr2 resources to mount the IPs
- 1 IPsrcaddr resource to use the private IP as source address in its subnet
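For context, a setup like the one described might look roughly like this with crmsh (resource names and the public address are placeholders, not my actual values):
# public and private virtual IPs
crm configure primitive ip-public ocf:heartbeat:IPaddr2 params ip=198.51.100.10 cidr_netmask=24
crm configure primitive ip-private ocf:heartbeat:IPaddr2 params ip=192.168.0.11 cidr_netmask=24
# set the private address as preferred source address
crm configure primitive src-private ocf:heartbeat:IPsrcaddr params ipaddress=192.168.0.11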
My problem is that IPsrcaddr sets my private address as the source on the default route, like this:
~# ip r s
default via 92.181.55.1 dev ens3 src 192.168.0.11
92.181.55.1 dev ens3 scope link
192.168.0.0/24 dev ens4 scope link via 192.168.0.11
I can no longer send traffic from my public IP after the resource starts :/
Has anyone experienced the same issue? Any advice?
Thanks

Unable to access internet from Private subnet | Error: Cannot find a valid baseurl

I am trying to use a NAT instance rather than a NAT gateway; I am also not using any Community AMIs for the NAT instance configuration.
I am trying to run yum update from my private instance, but I am thrown the following error: Cannot find a valid baseurl for repo: amzn-main/latest
My AWS stack is as follows:
VPC: A VPC VPC1 with an Internet Gateway IGW1 attached.
Subnets: Two subnets - public in us-east-1a and private in us-east-1b.
Public subnet: Subnet1.1-1a has Route table [Public-IGW-1 with local and IGW1 - 0.0.0.0/0].
Private subnet: Subnet1.2-1b has Route table [Private-1 with local and NAT instance NAT EC2 1- 0.0.0.0/0].
Route tables:
Private-1 has routes local and NAT EC2 1 instance - 0.0.0.0/0.
Public-IGW-1 has routes local and IGW1 - 0.0.0.0/0.
Security groups: Subnet-1.1-1a-Public from us-east-1a in VPC1 has SSH from MyIP and HTTP from anywhere.
Subnet1.1-1a-Private from us-east-1b (I have to rename it, otherwise it's misleading) in VPC1 has inbound 22 from anywhere.
Instances:
NAT EC2 1 lives in Subnet1.1-1a of VPC1 with security group NAT SG, inbound 80 from anywhere and 22. The private instance has an SG with 22 from anywhere. The public instance has an SG with 22 from MyIP and 80 from anywhere.
I copied my keypair to the public instance with scp and SSH-ed into the private instance with ssh -i keypair ec2-user@private-ip-addr. When I do a sudo yum update, the error "Cannot find a valid baseurl" is shown.
I have made sure that NACL is allowing all traffic.
I figured it out. The NAT instance and the public instance have to be in the same security groups.
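If it helps anyone checking the same thing, the security groups attached to both instances can be compared quickly with the AWS CLI (instance IDs are placeholders):
# list the security groups attached to the NAT instance and the private instance
aws ec2 describe-instances --instance-ids i-0aaaaaaaaaaaaaaaa i-0bbbbbbbbbbbbbbbb \
    --query 'Reservations[].Instances[].[InstanceId,SecurityGroups[].GroupId]' --output text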

How can I deploy a BOSH Director on BOSH Lite

I'm currently trying to install a BOSH Director on BOSH Lite. It's clear to me that BOSH Lite already ships with a Director, but I would like to test a release containing a Director "on top of that". Here is my setup:
Everything works fine until I add the warden_cpi job. I would like to configure the Warden CPI to connect to the Warden server running on the virtual machine hosting BOSH Lite while still being available to the Director. So what I tried is this:
releases:
- name: bosh-warden-cpi
  url: https://bosh.io/d/github.com/cppforlife/bosh-warden-cpi-release?v=29
  sha1: 9cc293351744f3892d4a79479cccd3c3b2cf33c7
  version: latest

instance_groups:
- name: bosh-components
  jobs:
  - name: warden_cpi
    release: bosh-warden-cpi
    properties:
      warden_cpi:
        host_ip: 10.254.50.4 # host IP of the BOSH Lite Vagrant box
        warden:
          connect_network: tcp
          connect_address: 10.254.50.4:7777 # again, host IP and port of garden-linux on the BOSH Lite Vagrant box
        agent:
          mbus: nats://user:password@127.0.0.1:4222
          blobstore:
            provider: dav
            options:
              endpoint: http://127.0.0.1:25250
              user: user
              password: password
where 10.254.50.4 is the IP address of the Vagrant Box and 7777 is the port of garden-linux.
During the deployment, I get this message from bosh vms
+----------------------------------------------------------+--------------------+-----+---------+--------------+
| VM | State | AZ | VM Type | IPs |
+----------------------------------------------------------+--------------------+-----+---------+--------------+
| bosh-components/0 (37a1938e-e1df-4650-bec6-460e4bc3916e) | unresponsive agent | n/a | small | |
| bosh-director/0 (2bb47ce1-0bba-49aa-b9a3-86e881e91ee9) | running | n/a | small | 10.244.102.2 |
| jumpbox/0 (51c895ae-3563-4561-ba3f-d0174e90c3f4) | running | n/a | small | 10.244.102.4 |
+----------------------------------------------------------+--------------------+-----+---------+--------------+
As an error message from bosh deploy, I get this:
Error 450002: Timed out sending `get_state' to e1ed3839-ade4-4e12-8f33-6ee6000750d0 after 45 seconds
After the error occurs, I can see the VM with bosh vms:
+----------------------------------------------------------+---------+-----+---------+--------------+
| VM | State | AZ | VM Type | IPs |
+----------------------------------------------------------+---------+-----+---------+--------------+
| bosh-components/0 (37a1938e-e1df-4650-bec6-460e4bc3916e) | running | n/a | small | 10.244.102.3 |
| bosh-director/0 (2bb47ce1-0bba-49aa-b9a3-86e881e91ee9) | failing | n/a | small | 10.244.102.2 |
| jumpbox/0 (51c895ae-3563-4561-ba3f-d0174e90c3f4) | running | n/a | small | 10.244.102.4 |
+----------------------------------------------------------+---------+-----+---------+--------------+
But when I ssh into the bosh-components VM, there are no jobs in /var/vcap/jobs.
When I remove the warden_cpi block from the jobs list, everything runs as expected. The full jobs list for my BOSH components VM:
nats
postgres
registry
blobstore
The Director itself runs on another machine. Without the Warden CPI the two machines can communicate as expected.
Can anybody point out to me how I have to configure the Warden CPI so that it connects to the Vagrant Box as expected?
The question is very old (it uses the BOSH v1 CLI whereas BOSH v2 is now the established standard, and Garden Linux was deprecated a long time ago in favor of Garden runC), but still, having experimented a lot with BOSH-Lite, I'd like to answer this one.
First, a semantics remark: you shouldn't say "on top of that" but rather "as instructed by", because a BOSH Director merely instructs some underlying (API-based) infrastructure to do something, which eventually makes it run some workloads.
Second, there are two hurdles you might hit here:
1. The main problem is that the Warden CPI talks both to the Garden backend and to the local Linux kernel in order to set up various things around those Garden containers. As a direct consequence, you cannot run a Warden CPI inside a BOSH-Lite container.
2. The filesystem used (here by the long-gone Garden Linux, but nowadays the issue would be similar with Garden runC) might not work inside a Garden container, as managed by the pre-existing Warden CPI.
All in all, the main thing to be aware of is that the Warden CPI does not just talk to the Garden backend through its REST API. More than that, the Warden CPI needs to be co-located with the Linux kernel that runs Garden, in order to make system calls and run local commands for mounting persistent storage and other things.
