Docker Toolbox Tutorial Client.Timeout exceeded while awaiting headers - windows

I'm following the guide at https://docs.docker.com/get-started/part2/#publish-the-image
Throughout the guide this error has come up intermittently; sometimes when I rerun the commands they work.
docker push %username%/%repository%:%tag%
I get a response of Using default tag: latest followed by:
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I'm using Windows Home with Docker Toolbox.
Please let me know if any additional information is needed.

Simply go to Docker's Settings > Network and change the DNS Server radio button to Fixed.
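Docker Toolbox has no Settings UI, so a roughly equivalent approach there (a sketch, assuming the default VirtualBox machine created by Toolbox and Google's 8.8.8.8 resolver; note that removing the machine deletes its VM) is to recreate the machine with a fixed DNS server passed to the engine:
docker-machine rm default
docker-machine create -d virtualbox --engine-opt dns=8.8.8.8 default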

Unfortunately the answers above didn't help in my case, but restarting Docker did.

For anyone else who's looking, I found the answer here, which led me here.
Specifically, these steps were helpful for me; I'm recording them in case the links break in the future.
This worked for Windows 10 Home/Docker Toolbox:
Right-click the Wi-Fi icon at the bottom right of the screen and open Network and Sharing Center.
Right-click the connection listed under "Connections:".
Click Properties.
Uncheck IPv6.
Select IPv4.
Click Properties.
Select the "Use the following DNS server addresses" radio button.
For the preferred DNS server, use 8.8.8.8.
Restart the computer and try again.
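If you prefer the command line, the same DNS change can be made from an elevated prompt with netsh (a sketch; "Wi-Fi" is an assumed adapter name, substitute yours):
netsh interface ipv4 set dnsservers name="Wi-Fi" source=static address=8.8.8.8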

In case someone runs their own Docker registry:
I had a similar "Client.Timeout exceeded while awaiting headers" when running
docker login myownrepo.com:5000
It turned out that I had port forwarding set up only for port 5000 and had forgotten to add it for port 5001.
The issue was resolved by forwarding port 5001 (on my router) to the same docker-repo host.
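A quick way to spot this kind of problem is to probe both ports from the client before running docker login (a sketch, reusing the myownrepo.com host from above; /v2/ is the standard registry API root, while port 5001 may serve a different service in your setup):
curl -v https://myownrepo.com:5000/v2/
curl -v https://myownrepo.com:5001/
If one of these hangs until it times out rather than returning any HTTP status, the missing port forward is the likely culprit.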

Do not add a proxy unless it is required to access the registry. In my case, behind a corporate network, I had added a proxy, which resulted in the timeout error; after removing the proxy configuration in Docker Desktop, it was resolved. Hope it helps somebody.
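To check whether the daemon currently has a proxy configured (a minimal check; findstr plays the role of grep on Windows, and no output means no proxy is set):
docker info | findstr /i proxy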

"Client.Timeout exceeded while awaiting headers" is a symptom of several possible causes. In my case it was simply a case of the private network firewall blocking the docker client machine from accessing the registry host machine.
To test if that's the case (for whoever may be reading this), first try temporarily disabling the private network firewall.
For instance, if the private docker registry is hosted on Windows 10:
1) Open Windows Security
2) Click on Firewall & network protection
3) Ensure Private Network is "active" and click on it
4) Under "Microsoft Defender Firewall", switch OFF the private firewall
If the IP is suddenly accessible, then you need to re-enable the firewall on the host and configure it to allow access to the Docker registry.
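Rather than leaving the firewall off, you can switch it back on and add a scoped inbound rule from an elevated prompt (a sketch, assuming the registry listens on its default port 5000):
netsh advfirewall set privateprofile state on
netsh advfirewall firewall add rule name="Private Docker registry" dir=in action=allow protocol=TCP localport=5000 profile=private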

I had this problem as I was using docker-reg:5000 under WSL2.
Adding it to /etc/hosts did not work.
Since Docker actually runs on the Windows side, you need to add the entry to the WINDOWS hosts file:
C:\Windows\System32\drivers\etc\hosts
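The entry itself is an ordinary hosts-file line (the IP below is a placeholder for wherever docker-reg actually lives):
192.168.1.50    docker-reg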

In my case a slow and unstable internet connection was causing the issue. If it is possible to improve your connection, do that; in my case I simply retried multiple times and it worked after a while.

Related

can't connect to vulnserver using netcat

I am not able to connect to vulnserver using netcat.
I type this to connect
nc -nv 192.168.70.130 9999
(UNKNOWN) [192.168.70.130] 9999 (?) open
and it stays like this forever and nothing happens.
I have disabled real time protection, allowed in firewall and also VM is set to NAT mode.
Is there any other way to connect, or what might be the possible issue?
I have also encountered the same issue. I thought it was my VM acting up, so I restarted the network access. I tried allowing vulnserver.exe through my Windows firewall. Neither of them solved the issue. Finally, I disabled Windows Defender Firewall and now it works like a charm. But before doing this, try to ping the Windows machine from the Linux box. If there's a response, it should work. If there is no response, however, try enabling file and printer sharing in Windows. For more info, read this: https://superuser.com/questions/1137912/ping-to-windows-10-not-working-if-file-and-printer-sharing-is-turned-off
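If the ping test fails, instead of disabling the firewall entirely you can allow just ICMP echo requests on the Windows box from an elevated prompt (a sketch):
netsh advfirewall firewall add rule name="Allow ICMPv4 ping" protocol=icmpv4:8,any dir=in action=allow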
Both Immunity Debugger and vulnserver have to run as administrator; then Immunity Debugger can see vulnserver. Otherwise Immunity Debugger can't see it, because it is less privileged than vulnserver.
Also, Immunity Debugger needs to be running.
That means port 9999 has already been opened, which is expected if you've already executed vulnserver:
wolf@linux:~$ nc -nv 127.0.0.1 9999
(UNKNOWN) [127.0.0.1] 9999 (?) open
Welcome to Vulnerable Server! Enter HELP for help.
It also means that you've already connected to the server. Go ahead and type HELP to see more info.
Try setting up a different network mode, such as internal or host-only mode.
I just had a similar issue - I'm not sure if you are using it in conjunction with Immunity Debugger like me (as part of an ethical hacking course) but I kept getting that situation because I forgot to hit 'play' on the debugger.

Docker login from cmd line yields i/o timeouts

I'm trying to login to docker from the Docker Quickstart Terminal but it doesn't work.
I always get an error saying: "Error response from daemon: Server Error: Post https://index.docker.io/v1/users/: dial tcp: i/o timeout"
I have been working with docker the last few days and was always able to login and push/pull stuff. The only thing different I can think of is that I'm currently on a different Wifi.
Any help is greatly appreciated, thanks.
Karsten
This is a known issue, and is probably related to VirtualBox; after switching networks, boot2docker/VirtualBox sometimes loses connectivity or uses incorrect DNS settings.
DNS confused when switching WiFi networks
Container connectivity lost after switching wireless networks
Manual restart required every time network connectivity changes (contains some workarounds)
Docker Machine 0.5.3 adds two new options that may circumvent this:
--virtualbox-dns-proxy
--virtualbox-host-dns-resolver
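For example, a new machine could be created with the host DNS resolver enabled (a sketch; "default" is just the machine name the Quickstart Terminal expects, and creating it requires removing any existing machine of that name first):
docker-machine create -d virtualbox --virtualbox-host-dns-resolver default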
After restarting the machine it worked again; I don't know what caused the problem, though...

Hortonworks Sandbox URL not working

I downloaded the latest sandbox from this URL - http://hortonworks.com/products/hortonworks-sandbox/ - and ran it with VMware Player. I have reached the screen where the URL is displayed. Now when I try to log in with that URL I get 'Web page not available' (Chrome), 'Can't reach page' (IE), or 'The connection has timed out' (Firefox); basically it's not working in any browser.
Can someone help me troubleshoot this?
I am using Windows 10 and VMware Player 7.3; here is the page that gets displayed in VM Player -
I am using Windows 10, VMPlayer 15 and HDP Sandbox 3.0.1.
Setting the network adapter to Bridged Networking and restarting the virtual machine can solve this problem.
Try these methods:
Ping 192.168.183.141. If you are getting a reply, then the problem is with the VM network adapter configuration.
If it is not pinging, check your firewall settings.
Lastly, try this: repair your VMware Player (it will configure the default services), or remove and re-install VMware Player.
Alright, I had the same problem. It is actually not related to the VM or the browser; it is the network adapters installed by the VM software, which allow it to use the host machine's networks.
In my case I was using VMware, so it had installed two network adapters.
So all I had to do was enable them.
[Windows Users]
Go to Control Panel > Network and Internet > Network Connections
Find the VMware network adapters
Right-click on each one and enable it
Done. Try to open the URL now
Cheers!
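The same thing can be done from an elevated command prompt (a sketch; the adapter names below are the usual VMware ones and may differ on your machine):
netsh interface set interface name="VMware Network Adapter VMnet1" admin=enabled
netsh interface set interface name="VMware Network Adapter VMnet8" admin=enabled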

Filezilla FTP Server Fails to Retrieve Directory Listing

I'm running Filezilla Server 0.9.45 beta to manage my server remotely.
After setting it up, I tested connecting to it using the IP 127.0.0.1, and it worked successfully. However, to connect to the server remotely, I port forwarded to port 21, and tried to connect using my computer's IP.
Status: Connecting to [My IP]:21...
Status: Connection established, waiting for welcome message...
Response: 220 Powered By FileZilla Server version 0.9.45 beta
Command: USER hussain khalil
Response: 331 Password required for user
Command: PASS *********
Response: 230 Logged on
Status: Connected
Status: Retrieving directory listing...
Command: CWD /
Response: 250 CWD successful. "/" is current directory.
Command: PWD
Response: 257 "/" is current directory.
Command: TYPE I
Response: 200 Type set to I
Command: PORT 192,168,0,13,205,63
Response: 200 Port command successful
Command: MLSD
Response: 150 Opening data channel for directory listing of "/"
Response: 425 Can't open data connection for transfer of "/"
Error: Failed to retrieve directory listing
This continues to work locally, but not when connecting remotely... How can I fix this?
I just changed the encryption from "Use explicit FTP over TLS if available" to "Only use plain FTP" (insecure) at site manager and it works!
File > Site Manager > Select your site > Transfer Settings > Active
Works for me.
Most of the answers here involve configuration changes; actually, just by adding sftp:// in front of your host you can instantly fix that kind of problem. It worked for me.
Also take note that if you follow Vaggelis' guide you are lowering your security; SFTP is better than using plain FTP.
When you send the PORT command to your server, you are asking the server to connect to you (on the remote network). If the remote network also has a NAT router, and you have not port-forwarded the port you are sending with your PORT command, the server will not be able to reach you.
The most common solution would be to send the PASV command to the server instead of the PORT command. The PASV command will ask the server to create a listening socket and accept a connection from the remote machine to establish the data connection.
For the PASV command to work, you will also need to port-forward a range of ports for the passive data connections. The passive connection ports (which need to be forwarded) should be listed in the FileZilla documentation.
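As a concrete example, the PORT 192,168,0,13,205,63 command in the log above asks the server to connect back to 192.168.0.13 on port 205*256 + 63 = 52543, a private LAN address and a high port that the remote server almost certainly cannot reach through the client's NAT router, which is exactly why the data connection fails.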
OK, this helped a lot; I couldn't find a fix before.
Simply put, I had already port-forwarded the FTP port to my server (14147 in my case; I'll use this as the example).
Go to Edit > General Settings; the listening port should be the one you're using, in this case 14147.
Then go to Passive Mode Settings; I checked "Use Custom Port" and entered the range 50000 - 50100.
Then on your router, port-forward 50000 - 50100 to the server's local IP.
I left the IPv4-specific settings at their defaults, reconnected my client, and bam, now the file listing appears.
Ensure your server's firewall has an inbound rule set to accept 14147 and 50000-50100.
Basically what Evan stated. I can't attest to the security of opening these ports, but this is what finally got my Filezilla client and server to communicate and view files. Hope this helps someone.
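For reference, the two inbound rules from the last step can also be added from an elevated command prompt (a sketch using the ports from this answer):
netsh advfirewall firewall add rule name="FileZilla FTP listener" dir=in action=allow protocol=TCP localport=14147
netsh advfirewall firewall add rule name="FileZilla passive range" dir=in action=allow protocol=TCP localport=50000-50100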
My experience is that the new version of Filezilla has this problem, but not the old versions. I was using Filezilla and everything was OK. After I upgraded to version 3.10, I faced this problem and I couldn't solve it. I uninstalled version 3.10 and reinstalled version 3.8 and the problem was gone! Now I am using version 3.8 and everything is OK. I prefer to face no problems even if I have to use old versions. ;)
Try installing the old version and do not upgrade, however odd that may sound.
I solved this by going into Site Manager -> selecting the connection that failed to retrieve the directory listing -> switching to the "Transfer Settings" tab and setting "Transfer Mode" to "Active" instead of "Default". Also check whether you are connected via a VPN or anything similar, as this can also interfere.
Run Windows Defender Firewall with Advanced Security
Start > Run : wf.msc
I had the same problem. What worked for me, on Windows, was adding FileZilla as a firewall exception under the "Allow a program through the firewall" feature.
I've had the same problem; it was due to the firewall. I use Windows Server.
Allow the connection for the program instead of allowing ports 21 and 22:
Windows Firewall with Advanced Security->
Inbound Rules->
Add Rule->
Program->
"Select Filezilla path with Browse button"->
Allow the Connection
I had FileZilla 3.6 and had the same issue as the OP. I upgraded to 3.10.3 thinking it would fix it. Nope, still the same.
Then I did a bit digging around the options, and what worked for me is:
Edit -> Settings -> FTP -> Passive Mode and switched from "Fall back to active mode" to "Use the server's external IP address instead"
I experienced the same problem with the FileZilla client while my notebook was connected via WLAN and a DSL router. In the Site Manager connection settings I applied Host: ftp.domain-name, Encryption: Only use plain FTP (insecure), and User: username@domain-name. Then the FTP client successfully connected to my website's server.
More FTP connection information can be found in the cPanel of the web server. Hope this helps.
It worked for me:
General -> Encryption -> Only use plain FTP
Transfer settings -> Transfer Mode -> Active
Bear in mind that this is very insecure and must be used only for testing.
After about 2 hours of trial and error:
Open > Windows Defender Firewall with Advanced Security
Select > Inbound Rules
Click > New Rule...
Choose > Custom
Choose > This program path:
Click > Browse
Find > filezilla-server.exe (possibly C:\Program Files\FileZilla Server)
Click > Open
Click > Next
Click > Next
Click > Next (Allow the connection is already selected)
Click > Next (if you do not need change)
Fill > Name
Click > Finish
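The same rule can be created in one line with netsh from an elevated prompt (a sketch; adjust the executable path to wherever FileZilla Server is actually installed):
netsh advfirewall firewall add rule name="FileZilla Server" dir=in action=allow program="C:\Program Files\FileZilla Server\filezilla-server.exe" enable=yes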
I also had the problem after upgrading to 3.10. I had version 3.6.02 hanging around and installed that. Problem solved.
I had that problem with my server hosted in the cloud. I only need the server a couple of times a year, so when I boot up my server the IP address changes. The new IP address then has to be updated in the FTP server's passive mode settings!
The latest version of Filezilla works just fine!
If you're using VestaCP, you might want to allow ports 12000-12100 TCP on your Linux Firewall.
You can do this in VestaCP settings.
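If you would rather do it at the shell than through the VestaCP UI, the underlying change is just an ordinary iptables accept rule (a sketch; VestaCP normally manages iptables itself, so the UI is the safer route):
iptables -I INPUT -p tcp --dport 12000:12100 -j ACCEPT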
Check whether the IP address on the router is the same as the one on the FTP server. If not, make sure it is. This should work perfectly.
In my case, restarting the router I used to connect to the internet worked. I think too many connections were going out from the same IP address; when I restarted my router, a new IP was possibly assigned, and now everything works fine and passive mode gives good speed for directory listings.
My issue was also the firewall. I'm using a Linux server with WHM/cPanel. Adding my IP to the quick allow solved my issue. I hadn't updated Filezilla and I don't think there were any changes to the server that should have caused it. However, I did move and my IP changed so maybe that was the problem. Good luck to everyone else with this insanely annoying issue.
My issue was the same, but the solution was a little different.
I used an AWS EC2 server to host the WHM service, and found that the passive ports were enabled but were not opened in my EC2 security group.
[root@94367392 ~]# egrep -i passiveport /etc/pure-ftpd.conf
Output:
PassivePortRange 49152 65534
I then went ahead and opened ports 49152 to 65534 in the EC2 security group, and the FileZilla "Failed to retrieve directory listing" problem was solved; it worked like a charm.
This cPanel doc is helpful.
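The same security-group change can be made with the AWS CLI (a sketch; the group ID below is a placeholder for your instance's security group, and the 0.0.0.0/0 source should be narrowed if you can):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 49152-65534 --cidr 0.0.0.0/0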
I've seen solutions that involve granting FileZilla full access via Windows Firewall. This is an alternative to that: if you know the IP of the connecting system and it's static, simply grant it full access to all ports via Windows Firewall.
Windows Firewall, Inbound Rules > Create a Rule > All Local Ports > Scope > This IP Address (the IP of the connecting system).
To me this is much safer than granting FileZilla full access from all incoming IP addresses.
Once you've completed your transfer, you can then disable the rule.
I tried all the solutions; in the end I used Cyberduck and it works.
Now in FileZilla, create a new account:
1. Host is the FTP Address - e.g. ftp.somewhere.com
2. Protocol is "SFTP-SSH File Transfer Protocol"
3. User ID is your Bluehost User Id
4. Password is your Bluehost Password
5. Click "Connect" to establish a connection with Directory Listing!
This resolved the issue with 3.10 for me, and I'm glad to have secure access for all of my future file transfers. It should prevent security issues in the future.

Self Hosted WebApi Accessible over LAN

I'm very new to self-hosted Web API, but I am very impressed with its ease of use and extensibility, at least through this tutorial. Everything I've done so far works on my development machine whether I use localhost, 127.0.0.1, or my LAN IP (192.168.0.x), but I am baffled as to why I can't access the service from any other computer, even others in the same subnet.
In short after going through the tutorial on the machine where it is running:
Browsing to
localhost:3636/api/products/
results in the expected xml return.
On another machine on the LAN browsing to:
192.168.0.x:3636/api/products/
results in a timeout
Data points for those who might know how this all interacts:
1.) My dev machine (192.168.0.x; server, host, whatever you want to call it) has IIS on it; I was so paranoid it was in the way that I stopped it via the administration GUI
2.) I have reserved the URL/Port with the following command line executions:
>netsh http add urlacl url=http://+:3636/ user=DOMAIN\USER listen=yes delegate=yes
>netsh http add urlacl url=http://192.168.0.x:3636/ user=DOMAIN\USER listen=yes delegate=yes
2.b) I've tried both of those together and individually, and tried changing the user to "everyone" to no avail
3.) I have tried to change the code in the tutorial to set the
config.HostNameComparisonMode = HostNameComparisonMode.Exact; // default is StrongWildcard
4.) I can successfully ping and tracert to 192.168.0.x from other machines on the LAN
5.) A friend recommended I set up a TcpListener and ensure I could telnet to it, to eliminate the firewall as a possibility. If that logic is sound, the firewall isn't the problem
EDIT: Thanks for your help; here's another data point that I believe confirms it's not a firewall issue. I previously posted this when behind a rather obtuse (at least to a non-certified guy like me) Juniper firewall/router. I have since redone the tutorial on another machine (without IIS) on my home network and still cannot publish the service to other computers within my LAN. Any ideas?
Well, it wasn't the hardware firewall, it was the Windows firewall! Yikes, I wasted a bunch of time on that. Once I turned off the Windows firewall (the code runs on an intranet anyway), everything worked.
Anyone know of a good site that explains how firewalls and Wireshark interact? Or I suppose that just has to be one's first test.
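Rather than leaving the Windows firewall off, an inbound rule for just the service port also works (a sketch, assuming the port 3636 used in the question):
>netsh advfirewall firewall add rule name="WebApi self-host 3636" dir=in action=allow protocol=TCP localport=3636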
I would try a couple things:
First off, get rid of the HostNameComparisonMode line. That might actually disable requests coming from other machines.
If things still don't work, try getting rid of the URL ACLs and run your application as an administrator and see if that works. If that works, you may be able to add the URL ACL back on and not have to run as an administrator. You should only need the one with '+' as the hostname.
I faced the same problem when I tried to self-host using OWIN. What worked for me was:
Run Visual Studio as an Admin
Remove any and all netsh urlacl port registrations that I had added while debugging this issue
Add an inbound rule to my Windows firewall
I followed the instructions on this link
https://learn.microsoft.com/en-us/dotnet/framework/wcf/samples/firewall-instructions
Check out the section "To enable a port range in advance".
That's it! I was able to call my api from other computers on the network.
Hope this helps...
