How to use Ansible to update a client machine without having its IP address

Ansible can be used to update a machine over SSH, and to establish such a connection you need an accessible IP address.
How can it be used to update a fleet of distributed machines on different networks (consumer devices) that don't have a public address?
One solution I was thinking of is to reverse the procedure: configure Ansible on the client machine so that it connects to the server each day, reads a file to see whether there is a new update for it, and applies that update.
This doesn't seem straightforward; is there another way?

ansible-pull is exactly what you are looking for.
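For example, a minimal sketch of a daily pull on each client (the repository URL, playbook name, and paths are placeholders): every machine runs ansible-pull from cron, fetches the playbook repository, and applies it locally, so no inbound connection to the client is ever needed.

    # /etc/cron.d/ansible-pull on the client: fetch and apply the playbook once a day
    0 4 * * * root ansible-pull -U https://git.example.com/ops/client-config.git local.yml >> /var/log/ansible-pull.log 2>&1

ansible-pull clones the repo and runs the playbook (local.yml here) against localhost, which is exactly the "client connects to the server each day" model described in the question.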

Related

I don't have a fixed IP address; how can I let others access my database?

I run Memgraph Platform on my laptop inside a Docker container. When I'm at the office my colleague can access it, but when I work from home he cannot reach the database. I don't have a fixed IP address, my ISP doesn't allow me to do port forwarding, and dynamic DNS also doesn't work for me. What can I do to make my database accessible to others?
Try to follow the advice given by @Martheen. I have experience running Tailscale for this purpose, and it works; see the sketch below.
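A minimal sketch of that setup, assuming Tailscale is installed on both your laptop and your colleague's machine (the address below is a placeholder):

    sudo tailscale up     # join your tailnet on the laptop running Memgraph
    tailscale ip -4       # prints the laptop's stable tailnet IP, e.g. 100.64.0.12

Your colleague can then point their client at that tailnet IP and the usual Bolt port (7687 by default), and the connection keeps working regardless of which network your laptop is on.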
I don't know why you can't deploy it to a server (perhaps there are regulatory issues or company policies involved), but if it's nothing like that, you could use Memgraph Cloud and host your data there. That way you could be sure that everyone with the right credentials can access your data. It all depends on your setup and usage scenario. Since you are using Docker, I presume your whole environment is already configured the way you want on your laptop.

Can't join a cluster on MarkLogic

I'm working with the MarkLogic database and I tried to create a cluster.
I already have a development key. The OS is the same on all the nodes (Windows 7 x64).
When you add a node to the cluster, you need to type the host name or the IP address. For some reason, when I type the host name, MarkLogic sometimes can't find the node, but that doesn't matter, because with the IP the connection is successful.
The main problem appears while continuing through the process. At the end, when MarkLogic tries to transfer the cluster configuration information to the new host, the process never finishes, and eventually a message like "No data received" appears in the web browser.
I know this message doesn't necessarily mean the process failed, because the same message appears when I change, for example, the host name.
When I check the summary on the first node, the second node appears, which means the node did "join" the cluster, but I'm not able to start the Admin interface, and the second node always appears disconnected, even if I restart the service.
Additionally, I can ping each computer from the other.
I tried creating another network, because some ports are not allowed at my school; I also tried using a different development key, as well as the same key on both nodes;
and finally, I already have all the services enabled, but the problem persists.
Any help or comments would be appreciated.
Make sure ports 7998 - 8003 are open on both computers for both inbound and outbound traffic and that you don't have a firewall (Windows firewall, or iptables) blocking these.
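For example, a hedged sketch of opening those ports in the Windows Firewall from an elevated command prompt (the rule names are arbitrary; adjust the port range if your cluster uses different ports):

    netsh advfirewall firewall add rule name="MarkLogic inbound" dir=in action=allow protocol=TCP localport=7998-8003
    netsh advfirewall firewall add rule name="MarkLogic outbound" dir=out action=allow protocol=TCP localport=7998-8003

Run this on both nodes, then retry the join.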
You can also start looking into the Logs/ErrorLog.txt file and see if something obvious shows up.
Stick to IP addresses for now as it seems your DNS isn't fully working.
Your error looks like a network connectivity problem between the hosts.
You might also get more detailed, or at least different, answers on the MarkLogic developer mailing list.
http://developer.marklogic.com/discuss
-David Lee
Make sure the host names in the MarkLogic configuration match the DNS names at which the hosts can see each other. If those are unreliable, then simply use IP addresses as host names: go to the Admin interface on both ends, look up the host name, change the DNS name to the IP address, and try again.
Also look at DALDEI's suggestion about ports and firewalls; that could be interfering as well.
HTH!

Amazon EC2 - seeing files between instances

I've set up 2 instances of Windows Server 2008 on EC2. I want one to act as the database server and the other as the client. For the client app to work it needs to be able to connect to the server instance with ALL of these things:
IP address of the database instance
access through a given UDP port
server name e.g. \\MyServer
an actual physical path through to its database e.g. \\UNC\SharedFolder\MyDatabaseFolder
I'm a complete novice with EC2. Is there any way I can set this up?
Many thanks
At least three of the four are completely possible and I have worked with similar setups. Maybe someone else knows more about the UDP bit.
IP address of the database instance
That is standard on EC2. All instances have two network interfaces, one EC2 internal and one to the outside world. For communication between instances use the internal one. Data traffic over these interfaces is free.
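If you need to discover those addresses from the instance itself, one option (a sketch, using the standard EC2 instance metadata service, which answers on a fixed link-local address) is:

    # run on the instance itself
    curl http://169.254.169.254/latest/meta-data/local-ipv4    # internal address
    curl http://169.254.169.254/latest/meta-data/public-ipv4   # public address

On a Windows instance you would fetch the same URLs with PowerShell or a browser.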
Access through a given UDP port
I have never tried UDP communication in EC2, but if it works you should probably keep it within a local network of your own, i.e. a virtual private cloud (VPC).
Server name e.g. \\MyServer
This kind of host name lookup does not need a name server, although you certainly could run one (preferably within a VPC). If you put the server name and (internal) IP into your hosts file (%systemroot%\system32\drivers\etc\hosts), no name server is required.
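A minimal sketch of such a hosts file entry on the client instance (the internal IP is a placeholder; use your database instance's actual internal address):

    # %systemroot%\system32\drivers\etc\hosts
    10.254.34.12    MyServer

After that, \\MyServer resolves on the client without any name server.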
An actual physical path through to its database e.g. \\UNC\SharedFolder\MyDatabaseFolder
Folder sharing should work the same as with any other Windows machine, but even that should probably be kept within a VPC.
Setting up a VPC can be a little steep to start with, but the documentation is good, and the hard parts (such as VPN tunnels) are often not needed. Have a look at the example scenarios and follow the one that best matches your needs.

Solution for local IP changes of AWS EC2 instances

Amazon only gives you a certain number of static IP addresses, and the local (private) IPs of each EC2 instance can change when the machine is restarted. As far as I can tell, this makes it ridiculously hard to build a stable platform where EC2 instances depend on each other.
I've searched online a lot for various solutions, and so far I have found nothing reasonable beyond assigning an Elastic IP address to every EC2 instance, even those that aren't public facing. Does anyone have any other good ideas that are actually easy to execute?
Thanks!
See the AWS team's response to question Static local IP:
The internal IP address of EC2 instances is allocated via DHCP. On instance shutdown, or when the DHCP lease expires, the IP address is returned to the general EC2 DHCP pool of addresses available for other instances.
There is no way to guarantee that you will obtain the same DHCP address across reboots.
Edit: The answer is to use Amazon VPC. There is no downside except a trivial amount of extra setup, because now you control the router. It's a world apart from plain old EC2 instances on AWS. It's so necessary, in fact, that VPC will be enabled by default for all future AWS setups. See this post for more information: http://www.reddit.com/r/aws/comments/1a3n0r/ec2_update_virtual_private_clouds_for_everyone/
The stock answers are:
Use AWS VPC so you have complete control over instance addressing
Use Elastic IPs, which, when used to communicate between EC2 instances, resolve to the instance's local address (not the public one, as you might expect); see the sketch below
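To see the second point in action, resolve the instance's public DNS name from another instance in the same region (the name and addresses below are illustrative):

    $ nslookup ec2-203-0-113-10.compute-1.amazonaws.com
    Name:    ec2-203-0-113-10.compute-1.amazonaws.com
    Address: 10.248.34.12

From inside EC2 the name resolves to the private 10.x address, so the traffic stays on the free internal network; from the outside, the same name resolves to the public IP.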
I stumbled upon a third option. There's ec2-ssh by the Instagram folks. It's a Python shell script that you install globally; it lets you query the public DNS of your EC2 instances by tag name, and also SSH in via tag name.
The documentation for it is virtually nonexistent, so I've written down the installation steps below:
To install ec2-ssh:
sudo yum install python-boto (the Python wrapper for the EC2 API)
git clone https://github.com/Instagram/ec2-ssh
In your ~/.bash_profile set your AWS access key and secret like so:
export AWS_ACCESS_KEY_ID=XYZ123
export AWS_SECRET_ACCESS_KEY=XYZ123
cd into the bin folder of the repo; there will be two files, ec2-host and ec2-ssh.
Copy them to /usr/bin or /usr/local/bin.
Now you can do awesome stuff like:
$ ec2-host ZenWorker
ec2-999-xy-999-99.compute-1.amazonaws.com
and
$ ec2-ssh ZenWorker
Connecting to ec2-999-xy-999-99.compute-1.amazonaws.com.
Note that in your regular shell scripts you can use backticks to call these global tools, as shown below. I've timed these calls and they take between 0.25 and 0.5 seconds on an EC2 instance, so that's really the only downside. Perhaps you can live with the delay, or use the fact that the public DNS of an instance only changes on reboot to work up a solution.
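For instance, a short sketch of using backticks in a deploy script (the tag name and file are placeholders):

    #!/bin/sh
    # look up the worker's current public DNS by tag name, then copy a build to it
    WORKER=`ec2-host ZenWorker`
    scp build.tar.gz "$WORKER":/tmp/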
Note that these two programs are command-line scripts, and you don't need any Python knowledge to use them. For PHP fans, or those who also want an easy way to scp files without knowing the changing public DNS, check out ec2dns.
I was in the same situation once. I still don't have the expertise to solve it properly. My ugly solution was to use an ELB, not really for load balancing but just for the stable endpoint.
But I think a good solution can be obtained by using AWS VPC.
Here's another Ruby solution for updating Route 53 DNS from an instance on AWS. You shouldn't reference raw third-party system IP addresses in your applications or server configurations.
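If you'd rather not use Ruby, a hedged sketch of the same idea with the AWS CLI, run from the instance at boot (the hosted zone ID and record name are placeholders):

    # read this instance's current private IP from the metadata service
    IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
    # build a change batch that upserts an A record pointing at it
    cat > /tmp/record.json <<EOF
    {"Changes": [{"Action": "UPSERT", "ResourceRecordSet":
      {"Name": "db.internal.example.com", "Type": "A", "TTL": 60,
       "ResourceRecords": [{"Value": "$IP"}]}}]}
    EOF
    aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch file:///tmp/record.json

Your other instances then reference db.internal.example.com instead of a raw IP.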
You can change the IP address using an Elastic IP.
You can do this with C# code (using the AWS SDK for .NET; the variable names are placeholders):
// associate the Elastic IP with the instance
var associateRequest = new AssociateAddressRequest
{
    PublicIp = elasticIp,     // your Elastic IP address
    InstanceId = instanceId   // the instance you want it assigned to
};
amazonEc2Client.AssociateAddress(associateRequest);
After that, disassociate it:
// dropping the association causes EC2 to give the instance a new public IP
var disassociateRequest = new DisassociateAddressRequest { PublicIp = elasticIp };
amazonEc2Client.DisassociateAddress(disassociateRequest);
Your public IP will change.

How do I connect up my Amazon EC2 instances without manually modifying config files?

I have a three-tier Windows-based web application bundled into 3 AMIs on Amazon EC2 that I use for load testing.
An ASP.NET web application on IIS
A .NET application server
SQL Server
After I launch them, the config files for each tier need modifying to update the IP addresses.
At the moment I am doing this manually: I connect to the webserver instance via remote desktop and modify the config file to point to the new IP of the application server instance. Then I do the same with the application server to change the IP in the connection string.
This must be a common requirement and I must be missing something obvious. There must be a better way!
I could use Elastic IP addresses, but these machines are only provisioned for a couple of hours at a time, and I would be charged for the addresses when they were NOT in use (which would be most of the time).
Is there some way of persistently naming the machines? Can I somehow get all the machines on the same network and use machine names instead of IP addresses?
I could write some nifty PowerShell script that would perform the modifications remotely. Is there an example somewhere?
I could use a dynamic IP address service. I'm not sure if this would have any negative effect on performance or availability... Are there any downsides to this approach?
I could install some sort of self-configuring service on each machine (which connects to S3? SNS? SimpleDB?) to publish/retrieve the addresses of the other machines and update the config files automatically. Is there an example somewhere?
What is best practice?
You could use Amazon Virtual Private Cloud (Amazon VPC). You get a private subnet in which you can assign a fixed IP address to an instance, although it may require launching the instance from the command line to assign the IP. VPC is charged the same way as EC2.
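A sketch of that command-line step with the AWS CLI (the AMI, subnet, and address are placeholders; the subnet must belong to your VPC and the IP must be free within it):

    aws ec2 run-instances \
        --image-id ami-12345678 \
        --instance-type t2.micro \
        --subnet-id subnet-0abc1234 \
        --private-ip-address 10.0.0.10

Instances launched this way keep the same private address across stops and restarts, which removes the need to rewrite the config files at all.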
