AWS: Why do we have to specify 2 BGP ASNs when configuring a DX Gateway? - aws-direct-connect

When creating a DX Gateway, we specify a BGP ASN. Another BGP ASN is required when creating a VIF. Could someone explain why two BGP ASNs are required, and what purpose each serves?
Thanks.

As you already know, a BGP session is established between two peers, so a BGP ASN is required for each peer.
While creating the Direct Connect Gateway (DXGW), you specify the Amazon-side BGP ASN; while creating a VIF, you specify the on-premises (customer network) BGP ASN.
While creating the DXGW, it asks for the Amazon-side ASN
(the Autonomous System Number for the new Direct Connect gateway).
While creating the VIF, it asks for a BGP ASN
(the Border Gateway Protocol Autonomous System Number of your on-premises router for the new virtual interface).
I hope this clears up your query.
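For illustration, the two ASNs show up in two different AWS CLI calls; a hedged sketch, where the names, IDs, VLAN, and ASN values are all placeholders:

```shell
# Amazon-side ASN: chosen once, when the Direct Connect gateway is created
aws directconnect create-direct-connect-gateway \
    --direct-connect-gateway-name example-dxgw \
    --amazon-side-asn 64512

# Customer-side ASN: supplied per VIF, for the on-premises router
aws directconnect create-private-virtual-interface \
    --connection-id dxcon-example \
    --new-private-virtual-interface \
        virtualInterfaceName=example-vif,vlan=101,asn=65000,directConnectGatewayId=example-dxgw-id
```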

Related

Mikrotik PPPOE Routing

I have set up a PPPoE server successfully in MikroTik. All is good. I am able to give out public IPs to clients over PPPoE. But I have an issue: I have 2 clients with public IPs that want to be able to connect to each other, and this is not working. The two public IPs cannot reach each other.
Client 1: Local IP a.a.a.a, Remote IP b.b.b.b
Client 2: Local IP a.a.a.b, Remote IP b.b.b.b
Is there a way for these two IPs to talk to each other?
Although it's a very old question, here is an answer:
1. Both clients will be connected to the PPPoE server over their PPP links.
2. Enable IPv4/IPv6 routing on the PPPoE server; if it is a Linux box, you need to enable IP forwarding.
The above is the logical answer; I haven't tried it myself.
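Assuming the PPPoE server is a Linux box (step 2 above), enabling IP forwarding is a one-liner; a sketch (on RouterOS itself, forwarding between the two PPP interfaces is typically a firewall question rather than a sysctl):

```shell
# Enable IPv4 forwarding for the running kernel
sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ip-forward.conf
```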

Retun parameters while creating BGP Group in Alibaba Cloud VPC

BGP groups are used for communication between the Virtual Border Router (VBR) and the local IDC in Alibaba Cloud VPC. I create a BGP group using the request below:
https://vpc.aliyuncs.com/?Action=CreateBgpGroup
&RegionId=cn-beijing
&PeerAsn=2010
&RouterId=vbr-2zeff11o2sqhnp1u7ci93
&CommonParameters
But how can I get the return parameters (i.e. RequestId and BgpGroupId)?
As per the documentation (https://www.alibabacloud.com/help/doc-detail/63231.htm?spm=a2c63.p38356.b99.103.4c0559d8LKgNw6), you should have received a response in XML or JSON format.
What was your response?
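For reference, the return parameters the question asks about arrive in that response body; a sketch of the JSON shape, with placeholder values:

```json
{
  "RequestId": "<request-id>",
  "BgpGroupId": "<bgp-group-id>"
}
```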

What is difference between proxy server and anonymizer?

Both a proxy server and an anonymizer act as a mediator between your IP and the target website.
As per my understanding, an anonymizer, especially a "network anonymizer", passes the request through all the computers connected to the same network, so it is complex to trace your IP.
Is my understanding correct?
What is the main difference, or are they the same?
I found this article:
So, while the proxy server is the actual server that passes your information, anonymizer is just the name given to the online service per se. An anonymizer uses underlying proxy servers to be able to give you anonymity.

Concept of remote controling several consul stacks securely

Introduction
I am running multiple of what I call consul stacks. They always look like:
- 1 consul server
- 9 consul nodes
Each node offers some services: just a classic web stack and more (not relevant to this question).
Gossip encryption is used to protect the server from being queried by arbitrary nodes and revealing data.
Several consul-template / tiller "watchers" wait to dynamically reconfigure the nodes/services on KV changes.
Goal
Let's say I have 10 of those stacks (the number is dynamic) and I want to build a web app controlling the consul KV of each stack using specific logic.
What I have right now
I have created a thor+diplomat tool to wrap the logic I need to create specific KV entries. I implemented it while running it on the "controller" container in the stack, talking to localhost:8500, which then authenticates with gossip and writes to the server.
Question
What concept would I now use to move this tool to a remote server (not part of the consul stack), while still being able to write into each consul stack's KV?
Sure, I can use diplomat to connect to stack1.tld:8500, but this would mean I open the HTTP port and need to secure it somehow (it is not protected by gossip, which covers only the RPC side?) and also protect the /ui.
Is there a better way to connect to each of those stacks?
- Use an nginx proxy with basic auth in front of 8500 to protect the access?
- Also use SSL interception on this port and keep using 8500, or rather use a configured HTTPS port (the Consul HTTPS API)?
- Use ACLs to protect the access? (A lot of setup to allow access for the stack members; is TLS needed?)
In general, without using TLS (which requires too much setup work on the clients), what concepts would fit the need of communicating with the stack server to write into its KV securely?
If I missed something, I am happy to add anything you ask for.
The answer to this is:
Enable ACLs on the consul server:
{
"acl_datacenter": "stable",
"acl_default_policy": "deny",
"acl_down_policy": "deny"
}
Create a general ACL token with key/event/service write access:
consul-cli acl create --management=false --name="general_node" --rule "key::write" --rule "event::write" --rule "service::write" --token=<master-token>
Ensure you use your master token here, created during the server start.
Optionally also configure gossip encryption so your clients communicate encrypted (otherwise the ACLs hardly make sense).
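A minimal sketch of the gossip-encryption step, assuming a key generated with `consul keygen` (the key itself is a placeholder):

```shell
# Generate a shared gossip encryption key once
consul keygen

# Put the key into every agent's config, e.g. /etc/consul.d/encrypt.json:
# {
#   "encrypt": "<key-from-consul-keygen>"
# }
```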
Add the general token to the consul client you use remotely to be able to talk to the remote consul, since this consul will no longer do anything publicly without a token.
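With the ACL token in hand, a remote KV write can be sketched directly against the HTTP API (hostname, key, value, and token are placeholders):

```shell
# Write a value into the remote stack's KV, authenticated with the general token
curl --request PUT \
     --header "X-Consul-Token: <general-node-token>" \
     --data "myvalue" \
     http://stack1.tld:8500/v1/kv/myapp/config/some-key
```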

How to connect to Elasticsearch server remotely using load balancer

There might already be a post covering what I am looking for; I have very limited time and got this requirement at the last moment. I need to push the code to QA and set up Elasticsearch with the admin team. Please respond as soon as possible, or share a link to a similar post.
I have a scenario wherein I will have multiple Elasticsearch servers: one hosted in the USA, another in the UK, and one more in India, all within the same (company) network and sharing the same cluster name. I can set multicast to false and use unicast to provide the host and IP address information to form a topology.
Now, in my application, I know that I have to use TransportClient as follows:
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "myClusterName").build();
Client client = new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress("host1", 9300))
.addTransportAddress(new InetSocketTransportAddress("host2", 9300));
The following are my concerns:
1) As per the above information, the admin team will just provide a single IP address, that of the load balancer, and the load balancer will manage request and response handling; I mean the load balancer is responsible for redirecting to the respective Elasticsearch server. My question here is: is it okay to use TransportClient to connect to that host and port as follows?
new TransportClient(settings)
    .addTransportAddress(new InetSocketTransportAddress("loadbalancer-ip-address", loadBalancerPort)); // the port is an int, e.g. 9300
If the load balancer redirects the request to an Elasticsearch server, what should the load balancer configuration look like? Do we need to provide all the Elasticsearch host or IP address details to it, so that at any given point in time, if the master Elasticsearch server fails, it will pick another master?
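One point worth sketching for concern 1): when the client only knows the load balancer's address, node sniffing should stay off, otherwise the TransportClient would discover the individual node addresses and try to bypass the load balancer. A sketch against the same 1.x-era API used above; `client.transport.sniff` defaults to false, and the address is a placeholder:

```java
// Sketch only: assumes the Elasticsearch 1.x TransportClient API from the question
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "myClusterName")
        // keep sniffing disabled so all transport traffic stays on the load balancer
        .put("client.transport.sniff", false)
        .build();

Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("loadbalancer-ip-address", 9300));
```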
2) What is the best configuration for 4 nodes or Elasticsearch servers, e.g. shards, replicas, etc.?
Should each node have one primary shard and 1 replica? This can be configured in elasticsearch.yml.
Please reply as soon as possible.
Thanks in advance.
