Pull data from icinga satellite to master behind firewall - ssh-tunnel

I have the following situation:
A private enterprise network with an Icinga2 master, monitoring the internal servers. The firewall blocks all inbound access; however, all servers do have outbound internet access (multiple protocols, such as SSH, HTTP, HTTPS).
We also have an environment in Azure with one publicly accessible VM (nginx) and, behind that, application servers in a private network. I'd also like to monitor these servers. I read that I can set up an Icinga2 satellite (in Azure) that monitors the Azure environment and sends the data to the master.
This would be a great solution. However, my master is in our private enterprise network, so the Icinga satellite can't push any data to the master. The only option would be for the master to pull the data periodically from the satellite(s). It is possible for the master to log in via SSH agent forwarding to the servers in Azure. Is this possible, or is there a better solution? I'd rather not create a reverse SSH tunnel.

You might just use the Icinga2 client and let the master connect to the client (ensure that the Endpoint object contains the host/port attributes). Once the connection is established, the client will send its check results (and replay its historical logs, if there are any).
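For illustration, a minimal sketch of what the satellite's Endpoint and Zone definitions might look like in zones.conf on the master, assuming the satellite is reachable as satellite.azure.example.com on the default port and your master zone is literally named "master" (all names here are placeholders):

object Endpoint "satellite.azure.example.com" {
  // host/port here make the master open the connection outbound to the satellite
  host = "satellite.azure.example.com"
  port = 5665
}

object Zone "azure" {
  endpoints = [ "satellite.azure.example.com" ]
  parent = "master"
}

Because the master initiates the TLS connection, no inbound access through the enterprise firewall is required, only outbound 5665 from the master to the Azure VM.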

Related

Automatic Failover between Azure Internal Load Balancers

We are moving a workflow of our business to Azure. I currently have two VMs as an HA pair behind an internal load balancer in the North Central US Region as my production environment. I have mirrored this architecture in the South Central US Region for disaster recovery purposes. A vendor recommended I place an Azure Traffic Manager in front of the ILBs for automatic failover, but it appears that I cannot spec ILBs as endpoints for ATM. (For clarity, all connections to these ILBs are through VPNs.)
Our current plan is to put the IPs for both ILBs in a custom-built appliance placed on-prem, and the failover would happen on that appliance. However, it would greatly simplify things if we could present a single IP to that appliance, and let the failover happen in Azure instead.
Is there an Azure product or service, or perhaps more appropriate architecture that would allow for a single IP to be presented to the customer, but allow for automatic failover across regions?
It seems that you could configure an Application Gateway with an internal load balancer (ILB) endpoint. In this case, you will have a private frontend IP configuration for the Application Gateway. The Application Gateway is deployed in a dedicated subnet and lives on the same VNet as your internal backend VMs. Note that in this case you can add the private VMs directly as the backends instead of the internal load balancer's frontend IP address, because a private Application Gateway is itself an internal load balancer.
Moreover, the Application Gateway can also be given a public frontend IP configuration; if so, you can use that public frontend IP as an endpoint of Azure Traffic Manager.
Hope this helps.
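If it helps, a rough Azure CLI sketch of deploying an Application Gateway with a private frontend IP in its own subnet, with the internal VMs added directly as backends (resource names, addresses, and SKU are placeholders, so treat this as an outline rather than a tested deployment):

az network application-gateway create \
  --resource-group my-rg \
  --name appgw-internal \
  --vnet-name my-vnet \
  --subnet appgw-subnet \
  --sku Standard_Medium \
  --capacity 2 \
  --private-ip-address 10.0.2.10 \
  --servers 10.0.1.4 10.0.1.5

For the Traffic Manager variant in the second paragraph you would instead give the gateway a public frontend (--public-ip-address) and register that public IP as the Traffic Manager endpoint.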

Site-to-site VPN vs point-to-site VPN

I have a scenario where I have a Windows VM in Windows Azure that needs to connect to an external customer network (and connect to a database that is not in Azure).
This traffic is unidirectional, in that only my VM needs to connect to the customer's databases and not the other way around. Site-to-site is managed on Azure, which I cannot really test locally.
Conceptually, connecting to the customer's network via a point-to-site VPN seems more suitable (by creating the VPN connection in Windows itself via the network config).
The customer prefers site-to-site even though they don't need to connect to my VM. Am I missing something?
In point-to-site, you have to connect manually to the network you want to access. Usually, if you log off or restart the workstation it loses the connection, and you have to reconnect every time. It's common to use this type of VPN when we are working remotely and need to access our company assets. The channel is bi-directional, but it's 1-to-many.
Site-to-site is used when you want to connect two networks and keep the communication up all the time. It's also bi-directional, but it's many-to-many and stays up no matter if your server/workstation is running or not because the connection is established through a network gateway and not from the computer operating system.
In Azure, the Virtual Network Gateway is the platform providing both functionalities. You can configure site-to-site to connect to your customer network. If this network is not running in Azure, they usually have an appliance to establish dedicated tunnels. As long as it supports IPsec IKE, you are good to go.
If you are using the VM in Azure as a workstation, then point-to-site may be enough, but if your application needs to get data from the customer database automatically, with or without someone logged in to the VM, then site-to-site is the better approach.
A better explanation can be found here
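For reference, the site-to-site pieces on the Azure side are the Virtual Network Gateway, a Local Network Gateway describing the customer's appliance, and an IPsec connection between them. A hedged Azure CLI sketch, assuming the VNet gateway already exists and with all names, addresses, prefixes, and the shared key as placeholders that must match whatever the customer configures on their appliance:

# describe the customer's on-premises VPN appliance and their address space
az network local-gateway create \
  --resource-group my-rg \
  --name customer-site \
  --gateway-ip-address 203.0.113.10 \
  --local-address-prefixes 192.168.10.0/24

# create the IPsec/IKE connection from the existing VNet gateway to that site
az network vpn-connection create \
  --resource-group my-rg \
  --name to-customer \
  --vnet-gateway1 my-vnet-gateway \
  --local-gateway2 customer-site \
  --shared-key 'REPLACE_WITH_PSK'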

Access hadoop nodes web UI from multiple links

I am using the following setup for Hadoop's node web UI access:
dfs.namenode.http-address : 127.0.0.1:50070
With this I am able to access the node web UI only from the local machine, as:
http://127.0.0.1:50070
Is there any way I can make it accessible from outside as well? Say, like:
http://<Machine-IP>:50070
Thanks in advance!
You can use the hostname or IP address instead of localhost/127.0.0.1.
Make sure you can ping the hostname or IP from the remote machine. If you can ping it, then you should be able to access the web UI.
To check, open cmd/terminal on the remote machine and run
ping <hostname or IP>
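As for the actual setting, a hedged hdfs-site.xml sketch of the change described above; 0.0.0.0 binds every interface (you could equally put the machine's own hostname or IP), and 50070 is just the classic default port:

<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:50070</value>
</property>

After changing it, restart the NameNode and make sure no firewall or security group is blocking TCP 50070 from the remote machine.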
From http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-web-interfaces.html
The following table lists web interfaces that you can view on the core
and task nodes. These Hadoop interfaces are available on all clusters.
To access the following interfaces, replace slave-public-dns-name in
the URI with the public DNS name of the node. For more information
about retrieving the public DNS name of a core or task node instance,
see Connecting to Your Linux/Unix Instances Using SSH in the Amazon
EC2 User Guide for Linux Instances. In addition to retrieving the
public DNS name of the core or task node, you must also edit the
ElasticMapReduce-slave security group to allow SSH access over TCP
port 22. For more information about modifying security group rules,
see Adding Rules to a Security Group in the Amazon EC2 User Guide for
Linux Instances.
YARN ResourceManager
YARN NodeManager
Hadoop HDFS NameNode
Hadoop HDFS DataNode
Spark HistoryServer
Because there are several application-specific interfaces available on
the master node that are not available on the core and task nodes, the
instructions in this document are specific to the Amazon EMR master
node. Accessing the web interfaces on the core and task nodes can be
done in the same manner as you would access the web interfaces on the
master node.
There are several ways you can access the web interfaces on the master
node. The easiest and quickest method is to use SSH to connect to the
master node and use the text-based browser, Lynx, to view the web
sites in your SSH client. However, Lynx is a text-based browser with a
limited user interface that cannot display graphics. The following
example shows how to open the Hadoop ResourceManager interface using
Lynx (Lynx URLs are also provided when you log into the master node
using SSH).
lynx http://ip-###-##-##-###.us-west-2.compute.internal:8088/
There are two remaining options for accessing web interfaces on the
master node that provide full browser functionality. Choose one of the
following:
Option 1 (recommended for more technical users): Use an SSH client to connect to the master node, configure SSH tunneling with local port
forwarding, and use an Internet browser to open web interfaces hosted
on the master node. This method allows you to configure web interface
access without using a SOCKS proxy.
To do this, use the command
$ ssh -gnNT -L 9002:localhost:8088 user@example.com
where user@example.com is your username and the master node's address. Note the use of -g to open the forwarded port to external IP addresses (beware: this is a security risk).
You can check that this is running using
nmap localhost
To close this SSH tunnel when done, use
ps aux | grep 9002
to find the PID of your running ssh process, and kill it.
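Once that tunnel is up, the ResourceManager UI forwarded in the example above answers on local port 9002, so a quick check might be

curl -I http://localhost:9002/

or simply opening http://localhost:9002/ in a browser (because of -g, other machines can also reach it via http://<your-ip>:9002/).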
Option 2 (recommended for new users): Use an SSH client to connect to the master node, configure SSH tunneling with dynamic port
forwarding, and configure your Internet browser to use an add-on such
as FoxyProxy or SwitchySharp to manage your SOCKS proxy settings. This
method allows you to automatically filter URLs based on text patterns
and to limit the proxy settings to domains that match the form of the
master node's DNS name. The browser add-on automatically handles
turning the proxy on and off when you switch between viewing websites
hosted on the master node, and those on the Internet. For more
information about how to configure FoxyProxy for Firefox and Google
Chrome, see Option 2, Part 2: Configure Proxy Settings to View
Websites Hosted on the Master Node.
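For completeness, the dynamic (SOCKS) forwarding that Option 2 relies on is a single ssh flag. A hedged sketch, with the key path, the local port 8157, and the master's DNS name as placeholders (hadoop is the usual login user on EMR):

ssh -i ~/mykeypair.pem -N -D 8157 hadoop@ec2-###-##-##-###.compute-1.amazonaws.com

-N keeps the session open without running a remote command, and -D 8157 starts a local SOCKS proxy that the FoxyProxy/SwitchySharp rules then point the browser at.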
This seems like insanity to me, but I have been unable to find out how to configure access in core-site.xml to override the web interface for the ResourceManager, which by default is available at localhost:8088/. If Amazon think this is the way, then I tend to go along with it.

How does a service such as tunnlr work?

The website says:
Tunnlr uses SSH remote tunneling. It securely connects a port on your
local machine to an open port on our public server. Once you start
your Tunnlr client, the web server on your local machine will be
available to the rest of the world through your special Tunnlr URL.
Could someone please go into a bit more detail over how this entire process works? Or maybe point to something open source that allows the same thing?
The SSH protocol allows tunneling of connections in either direction. So based on the description above here's what is happening:
You download a client program (an SSH client) to your computer and run it.
The client establishes an SSH connection out from your computer to the tunnlr remote server.
On the tunnlr server, an access port is opened for incoming connections. Let's say port 1234.
Now when anyone connects to tunnlr:1234 the tunnlr server will instruct your client program through the connection established in step 2 to open a connection inside your computer - let's say to port 80 (e.g. you're running a webserver there).
The tunnel connection will now shuffle data between tunnlr:1234 and your_computer:80.
So effectively this is what is running:
[some_remote_computer]<->[tunnlr:1234]<->[SSH tunnel]<->[your_computer:80]
Assume some_remote_computer is your friend or anyone else you want to be able to connect to your local web server.
SSH is available for many platforms (Linux, Windows, OSX and more). You can build such tunnels quite easily with it, but you will of course need access to both computers you want to build the tunnel between. Let's say one computer is your own computer and another is a VPS you've rented (or any other remote server with SSH access). Now you can run exactly the same setup.
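A hedged sketch of that do-it-yourself setup with plain OpenSSH, assuming a VPS reachable as vps.example.com and a web server on your local port 80 (hostnames and ports are placeholders):

ssh -N -R 0.0.0.0:1234:localhost:80 you@vps.example.com

-R asks the remote sshd to listen on port 1234 and hand every incoming connection back through the tunnel to localhost:80 on your machine. For that listener to accept connections from other hosts, the VPS's sshd_config needs GatewayPorts set to yes (or clientspecified); visitors then browse to http://vps.example.com:1234/ and land on your local web server.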
The advantage with tunnlr is they manage the remote server for you, and they have a registered hostname you can use for your tunnels.

How I can access FTP server based on different network

I have set up an FTP server with Apache FTP server on a local machine; this machine can access the internet, but its IP address cannot be reached externally.
I also have another machine in a different city. It can access the internet, but likewise its IP address cannot be reached externally. The two computers are not on the same network, so they cannot ping each other.
How can I use an FTP client from the other machine to access the FTP server? I know it may seem impossible, but do you have any workarounds (whether a code change or some other approach)?
I am in the US. Do you have any idea how I can make my home IP publicly accessible?
It is very possible if you control the firewall that the server is behind. This is standard network configuration, and you can find hundreds of tutorials online, but the most important bit is the firewall, not the FTP server: you configure port forwarding on your firewall to forward incoming FTP requests to your internal FTP server. Also, you will want to use "passive" FTP from the client, because the client is also behind a firewall.
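As an illustration only, if the firewall in front of the server happened to be a Linux box doing NAT, the port-forwarding part might look roughly like this (the internal address 192.168.1.10 and the passive port range 50000-50100 are made-up values, and the FTP server must be configured to advertise that same passive range and its public IP):

# forward the FTP control channel to the internal server
iptables -t nat -A PREROUTING -p tcp --dport 21 -j DNAT --to-destination 192.168.1.10:21
# forward the passive-mode data ports as well
iptables -t nat -A PREROUTING -p tcp --dport 50000:50100 -j DNAT --to-destination 192.168.1.10

On a typical consumer router the same thing is a "port forwarding" page in the web UI rather than iptables rules.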
