I have an EC2 instance running which is linked with an Elastic IP.
When I ping it from my local machine it shows a request timeout, and because of this I am not able to connect to it via PuTTY or WinSCP.
I have been facing this issue for the last 2 days.
It was working well for the last 2 months.
Please help.
My instance is running and healthy.
If you want to ping an EC2 instance from your local machine, you need to allow inbound Internet Control Message Protocol (ICMP) traffic. Please check your Security Groups to make sure this is allowed. Remember that all inbound traffic is disabled by default. You may need to establish a rule similar to this one (CloudFormation JSON format):
"AllowIngressICMP": {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties": {
"GroupId": <Your Security Group here>,
"IpProtocol": "icmp",
"FromPort": "-I",
"ToPort": "-I",
"CidrIp": "0.0.0.0/0"
** The -I means "every port"
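If you are not managing the group through CloudFormation, a roughly equivalent rule can be added with the AWS CLI; this is just a sketch, with sg-0123456789abcdef0 standing in for your security group ID:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol icmp --port -1 --cidr 0.0.0.0/0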
I'm using Docker on a Windows 10 laptop. I recently tried to get some code to run in a container to connect to another server on the network. I ended up making an Ubuntu container and found that the issue is an IP conflict between the Docker network and the server resource (172.17.1.3).
There appears to be an additional layer of networking in the Windows Docker setup which isn't present on Unix systems, and the Docker advice to "simply use a bridge network" doesn't resolve this issue.
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "d60dd1169153e8299a7039e798d9c313f860e33af1b604d05566da0396e5db19",
"Created": "2020-02-28T15:24:32.531675705Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Is it possible to change the subnet/gateway to avoid the IP conflict? If so, how? I tried the simple thing and made a new Docker network:
docker network create --driver=bridge --subnet=172.15.0.0/28 --gateway=172.15.0.1 new_subnet_1
There still appears to be a conflict somewhere: I can reach other devices, just nothing in 172.17.0.0/16. I'm guessing it's somewhere in Hyper-V, the vEthernet adapter, or the vSwitch.
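A quick sanity check here is to dump the new network's actual configuration:
docker network inspect new_subnet_1 -f '{{json .IPAM.Config}}'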
UPDATE 1
I took a look with Wireshark (at the PC level) while using the new_subnet_1 network, and I did not see these packets leave the vSwitch interface or the PC's NIC.
I did see a Docker forum thread indicating an issue with Hyper-V and the vSwitch that could be the cause.
Docker Engine v19.03.5
DockerDesktopVM created by Docker for Windows install
UPDATE 2
After several Hyper-V edits and putting the environment back together, I checked the DockerDesktopVM. After getting in from a privileged container, I found that the docker0 network had the IP conflict. docker0 appears to be the same default bridge network that I was avoiding; because it is a pre-defined network it cannot be removed, and all my traffic was being sent to it.
After several offshoots, and breaking my environment at least once, I found that the solution was easier than I had thought.
Turned off the Docker Desktop Service
Added the following "bip" line (the new, non-conflicting range) to the %userprofile%\.docker\daemon.json file in Windows 10:
...
"bip": "172.15.1.6/24"
}
Restarted the Docker Desktop Service
Easy solution after chasing options in Hyper-V and the Docker Host Linux VM.
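As a quick check after the restart, the bridge range the daemon actually picked up can be read back with:
docker network inspect bridge -f '{{(index .IPAM.Config 0).Subnet}}'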
I am trying to run multiple Docker daemons configured to run containers with Hyper-V isolation and LCOW on the same Windows 10 machine.
I was able to configure the daemons to manage their own data files, but I am still struggling to get the network configuration clean.
When the first daemon starts, it binds to the local "nat" network for DNS resolution. When the second daemon starts, it tries to bind to the same "nat" network and fails, as port 53 is already being used by the first daemon.
ERRO[2019-02-15T15:50:58.194988300Z] Resolver Setup/Start failed for container nat, "error in opening name server socket listen udp 172.18.64.1:53: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted."
Containers started by this daemon then cannot perform any name resolution. Access through IP still works properly.
Here is the dockerd configuration I am currently using:
{
"registry-mirrors": [],
"insecure-registries": [],
"bridge": "mydaemon1",
"data-root": "C:\\Users\\myuser\\Desktop\\Docker\\Docker",
"deprecated-key-path": "C:\\Users\\myuser\\Desktop\\Docker\\Docker\\config\\key.json",
"debug": true,
"exec-root": "C:\\Users\\myuser\\Desktop\\Docker\\Docker\\exec-root",
"exec-opts": [
"isolation=hyperv"
],
"experimental": true,
"group": "mydaemon-docker",
"hosts": [
"npipe:////./pipe/mydaemon1_engine"
],
"pidfile": "C:\\Users\\myuser\\Desktop\\Docker\\Docker\\docker.pid",
"storage-opts": [
"lcow.kirdpath=C:\\Users\\myuser\\Desktop\\Docker\\server\\resources",
"lcow.kernel=lcow-kernel",
"lcow.initrd=lcow-initrd.img"
]
}
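For reference, each daemon is then reached through its own named pipe (the name comes from the "hosts" entry above), for example:
docker -H npipe:////./pipe/mydaemon1_engine version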
I tried to tweak the bridge configuration, but it didn't change anything; the daemon always tries to connect to the nat network. It looks like the only supported value is none, which removes the default eth0 in the containers along with any DNS support.
Is it possible to configure the network used for DNS resolution, i.e. nat here?
Ideally I want the daemon to have its own, dedicated, nat network.
I know it is not possible to do this in Docker for Windows while using the MobyVM, as WinNAT, which is used in that case, does not support it.
While using Hyper-V isolation and LCOW, it seems WinNAT is no longer used, as Get-NetNat does not return any NAT network configuration even though DNS works properly. I am not sure I am right about any of this, whether it is possible at all, or whether any other Windows limitation applies...
I'm using dnsmasq but I'm a little confused as to what gets set where. Everything works as expected, but I wasn't sure if any of my config parameters are redundant or would cause issues down the road.
1 - Do I need to set the recursors option in Consul's config?
2 - Do I still need both nameserver entries in /etc/resolv.conf?
3 - Do I need dnsmasq on all Consul clients or just the servers?
# /etc/dnsmasq.d/dnsmasq.conf
server=/consul/127.0.0.1#8600
My Consul config looks like this:
{
"server": false,
"client_addr": "0.0.0.0",
"bind_addr": "0.0.0.0",
"datacenter": "us-east-1",
"advertise_addr": "172.16.11.144",
"data_dir": "/var/consul",
"encrypt": "XXXXXXXXXXXXX",
"retry_join_ec2": {
"tag_key": "SOMEKEY",
"tag_value": "SOMEVALUE"
},
"log_level": "INFO",
"recursors" : [ "172.31.33.2" ],
"enable_syslog": true
}
My /etc/resolv.conf looks like this:
nameserver 127.0.0.1
nameserver 172.31.33.2
1) Read the documentation: https://www.consul.io/docs/agent/options.html#recursors. Having a recursor set up is great if you have external services registered in Consul; otherwise it's probably moot. You likely don't want ALL of your DNS traffic to hit Consul directly, just the Consul-specific DNS traffic.
2 & 3:
It's up to you. Some people run dnsmasq on every machine; some centralize dnsmasq on their internal DNS servers. Both are valid configurations. If you run it on every single machine, then you probably just need one nameserver entry, pointed at localhost. If you run it centralized (i.e. just on your internal DNS servers), then you just point every machine at your internal DNS servers.
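For example, with dnsmasq running locally on each machine, one possible setup (reusing the 172.31.33.2 upstream from the question) forwards only .consul queries to the local agent and sends everything else upstream, with resolv.conf pointing solely at localhost:
# /etc/dnsmasq.d/dnsmasq.conf
server=/consul/127.0.0.1#8600
server=172.31.33.2
# /etc/resolv.conf
nameserver 127.0.0.1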
How can I access the Consul UI externally?
I want to access the Consul UI by browsing to
<ANY_MASTER_OR_SLAVE_NODE_IP>:8500
I have tried doing an SSH tunnel to access it:
ssh -N -f -L 8500:localhost:8500 root@172.16.8.194
Then if I access http://localhost:8500
It works, but it is not what I want. I need to access it externally, without an SSH tunnel.
My config.json file is the next:
{
"bind_addr":"172.16.8.216",
"server": false,
"datacenter": "nyc2",
"data_dir": "/var/consul",
"ui_dir": "/home/ikerlan/dist",
"log_level": "INFO",
"enable_syslog": true,
"start_join": ["172.16.8.211","172.16.8.212","172.16.8.213"]
}
Any help?
Thanks
Add
{
"client_addr": "0.0.0.0"
}
to your configuration or add the option -client 0.0.0.0 to the command line of consul to make your Web UI accessible from the outside (see the docs for more information).
Please note that this will also make your Consul REST API accessible from the outside. Depending on your environment you might want to activate Consul's ACLs to restrict access.
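For example, when starting the agent by hand this might look like the following (the config path is only illustrative):
consul agent -client=0.0.0.0 -config-file=/etc/consul.d/config.json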
You can use socat in this case.
socat -d -d TCP-L:8500,bind=172.16.93.128,fork TCP:localhost:8500 &
where 172.16.93.128 is my IP.
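With the relay in place, a quick check from another machine might be:
curl http://172.16.93.128:8500/ui/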
I run it as a Docker image; I ran
docker pull consul
docker run -p 8500:8500 consul
and I am able to access the Consul UI at http://<hostname>:8500/ui
Finally I found the solution.
Add to the config file the bind_addr, which is the IP of the machine, and the client_addr, which is the address it binds the client interfaces to. I use 0.0.0.0 to listen on all IPs.
"bind_addr":"<machine-ip>",
"client_addr":"0.0.0.0",
I don't have hands-on experience with Consul yet, but here are a few tips:
Run sudo netstat -peanut | grep :8500 and check whether Consul is bound to 0.0.0.0 or to an explicit IP. Check the docs to see whether this is configurable.
On each node, install Squid, Nginx, or any other software that can act as an HTTP proxy.
There is no way to get a user interface if no user interface is installed.
A classic UI is a stack of desktop environment components (an X terminal, etc.), so before you can get it, you need to install it on the node.
I have figured out how to use Sublime SFTP with Vagrant, but I am constantly switching between multiple Vagrant VMs and running several VMs at once. In order to connect Sublime SFTP to a VM, you have to set the host:
"host": "127.0.0.1",
"user": "vagrant",
//"password": "",
"port": "2222",
"ssh_key_file": "/home/jeremy/.vagrant/machines/inspire/virtualbox/private_key",
The only problem is that the "port": "2222" field will change depending on when I start up which VMs and how many I am running. So it is impossible to use Sublime with these VMs without having to reconfigure the sftp_servers file first. Is there any way to permanently assign the port to the VM, or a better way to accomplish what I am trying to do?
In your Vagrantfile you can define the SSH port with the config.ssh.port property.
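For instance, a minimal Vagrantfile sketch (the box name and host port 2201 are only examples) pins the SSH forward with id "ssh" so Vagrant stops auto-assigning it, and tells vagrant ssh to use that port:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"  # example box
  # Override the default SSH forward so the host port stays fixed
  config.vm.network "forwarded_port", guest: 22, host: 2201, id: "ssh", auto_correct: false
  # Connect on the pinned port
  config.ssh.port = 2201
end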