macOS Docker max connections limit

I installed Docker CE (ver 17.03.1-ce-mac12 17661) on macOS Sierra (ver 10.12.5).
I created a container and ran a simple socket echo server inside it.
I then attempted to connect to the container's echo server from the host.
Initially, when the number of open sockets reached 370, connections started to fail.
I found the following via a Google search:
https://github.com/docker/for-mac/issues/1009
To summarize, Docker for Mac has its own maximum number of connections.
I raised the maximum number of connections as described in that issue.
I then connected to the Docker host (the VM behind Docker for Mac) in the way described here:
http://pnasrat.github.io/2016/04/27/inside-docker-for-os-x-ii
I changed the ulimit configuration of the Docker host as well, and changed the macOS and container settings accordingly.
I tried again; this time the number of sockets exceeded the 370 limit mentioned above, but it got stuck again at about 930~940.
I keep changing settings like this, but it does not get better.
Note that Docker running on top of an Ubuntu server needs no settings changes and works without any socket restrictions.
An echo server running inside a container on Ubuntu can maintain at least 4000 sockets.
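For context, the client side of my test looks roughly like this (a sketch rather than the exact harness; the port, connection count, and 60-second hold time are placeholders):
#!/bin/bash
# Open COUNT concurrent TCP connections to the echo server published on
# localhost:PORT, hold each one open for 60 seconds, then count how many
# are actually established.
PORT=9000
COUNT=1000
for i in $(seq 1 "$COUNT"); do
  # Each background pipeline holds one idle connection open via nc.
  sleep 60 | nc 127.0.0.1 "$PORT" > /dev/null 2>&1 &
done
sleep 5
echo -n "established connections: "
netstat -an | grep "\.$PORT " | grep -c ESTABLISHED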
The problem only occurs with Docker for Mac.
If anyone is familiar with this situation, can you suggest a solution?
Thank you.

Related

Localhost refused to connect on WSL2 when accessed via https://localhost:8000/ but works when using internal WSL IP address

What I'm Trying to Achieve
To access localhost from my local machine during the development of a Symfony web app.
My Environment
WSL2 running on Windows 10
Linux, Apache2, MySQL, PHP-7.4 stack (with Xdebug3 installed)
Debian 10
Symfony 5.4 (although not sure on if relevant to this problem)
Steps I've Taken
Set up WSL2 according to this Microsoft WSL2 tutorial
Set up LAMP stack according to this Digital Ocean tutorial
Set up Symfony according to this Symfony tutorial
Run the following bash script on startup to start my services and set the Xdebug client host in my xdebug.ini file to the Windows host's address on the WSL virtual network:
#!/bin/sh
# The Windows host's IP, as seen from inside WSL2, is the nameserver in resolv.conf.
REMOTEIP=$(grep nameserver /etc/resolv.conf | sed 's/nameserver\s//')
# Point Xdebug's client_host at the Windows host so it can reach the IDE.
sed -i -E "s/client_host=[0-9\.]+/client_host=$REMOTEIP/g" /etc/php/7.4/mods-available/xdebug.ini
service php7.4-fpm start
service apache2 start
service mysql start
Run my Symfony project on the development server using symfony serve -d (Symfony then tells me "The Web server is using PHP FPM 7.4.23 https://127.0.0.1:8000")
Go to https://localhost:8000/ in Chrome where the app is running
What I Expect to Happen
My Symfony web app to be running on https://localhost:8000/ when I visit the URL in my Chrome browser
What Actually Happens
I get "This site can't be reached localhost refused to connect." in the Chrome browser
What I've Tried
This used to happen less frequently and I would give my laptop a restart, repeat the process above, and I could connect via https://localhost:8000/. However, it refuses to connect more regularly now (like 8/10 times I start up for the day)
Connecting to https://127.0.0.1:8000 yields the same result.
Connecting to the site using the internal WSL IP address, found with hostname -I, and replacing localhost with this IP (still on port 8000). This is an adequate workaround to use my app; however, I am unable to interact with my database via MySQL Workbench without setting up a new connection, so a fix that lets me keep using localhost would be very helpful!
(Based on comments) Ran only symfony serve -d without starting the Apache and PHP services separately; it still sometimes allows connections to localhost and sometimes doesn't.
Conclusion
The behaviour is odd as it works sometimes but other times it doesn't when the exact same steps are carried out. I am unsure where else to look for answers and I can't seem to find anything online with this same problem. Please let me know if any config files, etc would be helpful. Thank you so much for your help! :)
When it's working normally, as you are clearly aware, the "localhost forwarding" feature of WSL2 means that you can access services running inside WSL2 using the "localhost" address of the Windows host.
Sometimes, however, that feature breaks down. This is known to happen when you either:
Hibernate
Have the Windows "Fast Startup" feature enabled (and it is the default). Fast Startup is a pseudo-hibernation which triggers the same problem.
Typically the best solution is to disable Hibernation and Fast Startup. However, if you do need these features, you can reset the WSL localhost feature by:
Exiting any WSL instances
Issuing wsl --shutdown
Restarting your instance
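For example, from a Windows command prompt or PowerShell (the distro name here is a placeholder; wsl -l lists the installed distributions):
wsl --shutdown
wsl -d Debian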
It's my experience that localhost forwarding will work after that. However, if it doesn't, thanks to @lwohlhart in the comments for mentioning that another thing to try is disabling IPv6 on WSL2, since (I believe) there's a possibility that the application is listening on only one address family (e.g. IPv4) while the Windows->WSL2 localhost connection is being attempted over the other (IPv6).
You can disable IPv6 on WSL2 per this Github comment by creating or editing .wslconfig in your Windows user profile directory with the following:
[wsl2]
kernelCommandLine=ipv6.disable=1
A wsl --shutdown and restart will be necessary to complete the changes.
If you find that this works, it may be possible to solve the issue by making sure to either use the IPv4 (127.0.0.1) or IPv6 (::1) address specifically in place of localhost on the Windows side, or by configuring the service to listen on both addresses.
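If it helps to confirm which address family is actually answering, a quick check from the Windows side might look like this (this assumes curl.exe, which ships with recent Windows 10 builds, and the dev server still listening on port 8000; -k skips certificate verification because the Symfony dev server's certificate is locally generated):
curl.exe -k https://127.0.0.1:8000/
curl.exe -k "https://[::1]:8000/"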
Try running the command netstat -nltp. It shows active addresses and ports. Your nginx process should be listening at 0.0.0.0:8000; 0.0.0.0 means the nginx process is reachable from anywhere.
If your nginx process is bound to a specific IP address, you should access it by that IP address, e.g. http://192.168.4.2:8000.
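A sketch of what to look for (the output line below is illustrative, not taken from the asker's machine):
# Inside WSL2: list listening TCP sockets and their owning processes (may need sudo).
sudo netstat -nltp
# A service reachable through localhost forwarding should be bound to 0.0.0.0
# (or ::) rather than a single interface address, e.g.:
#   tcp   0   0 0.0.0.0:8000   0.0.0.0:*   LISTEN   1234/php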

Docker container slow network access (Windows Server 2016 containers)

Notes:
Slow network performance in Docker container does not address either of my issues.
Very slow network performance of Docker containers with host's network might be related, but the one response there is definitely not the problem.
I am running the Windows Server 2016 integrated version of Docker, with the microsoft/mssql-server-windows-developer image. (Windows containers; Linux is not an option for the ultimate purpose.) My goal is to use this image as a temporary SQL server for repeated acceptance test runs.
As of now, everything works as I need it to, except for performance. As a measurement of performance I have a set of scripts (invoked by PowerShell) that will set up a database with tables, schema, roles, etc. and some small amount of initial data.
When I share a drive with the host system, I can connect to the container and run this powershell script inside the container. It takes 30 seconds to complete. No errors, and when I inspect the database with SSMS, it is all correct.
When I run the script from the host machine (via the exposed port 1433), the script takes about 6000 percent longer. (i.e. about 30 minutes.) However, it also runs correctly and produces correct results.
The above measurements were made using the default "nat" network, with the container run with -p 1433:1433. My main question is, how can I get remotely reasonable performance when running my script from the host system? (Ultimately running anything under test from within the container is not an option. Also ultimately this same performance issue must be resolved in order for our container deployment plans to be realistic.)
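Roughly how the container is run for the measurements above (a sketch; the container name is a placeholder, and image-specific environment variables and authentication options are omitted):
docker run -d -p 1433:1433 --name sqltest microsoft/mssql-server-windows-developer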
Thanks!
What I have tried so far.
First, there are no internal CPU or memory performance issues from within the container. I have nevertheless experimented with the --cpus related options and the -m option and given the container far more resources than it really needed. The internal performance does not change; it is very fast regardless of any of these settings.
I have also investigated creating a "transparent" network. Using the PowerShell cmdlet New-ContainerNetwork, I created a transparent network and started the container with the "--net Trans" switch. I got a valid DHCP address from the external network and had connectivity to the internet and other domain machines on the intranet. Using netstat -a (and PowerShell Get-WMIObject win32_service), I was able to determine that the MSSQLSERVER instance was running and listening on port 1433. I installed telnet inside the container and could connect to that port using the command "telnet [ipaddressfromipconfig] 1433".
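A rough sketch of that setup using the docker CLI equivalent (the network name "Trans" matches the switch above; the container name is a placeholder):
docker network create -d transparent Trans
docker run -d --net Trans --name sqltest-trans microsoft/mssql-server-windows-developer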
From a host command prompt, I could ping the container's IP address and get replies, but the telnet command (above) would not connect from the host, so naturally SSMS would not connect either. The -P, or -p 1433:1433, port mapping option is not supported with a transparent network, but I had been imagining that it should not be necessary for access from the host machine on a transparent network.
Suspecting that a firewall was somehow blocking the connection, I verified that the firewall service in the container is not even running. I also turned off the firewall completely on the host; however, nothing changed and I still cannot connect. I tried both the "--expose 1433" parameter on docker run and rebuilding the image with an EXPOSE 1433 line in the Dockerfile. No change in conditions.
I have no idea if a transparent network will even solve the issue, but I would like advice on this.
It would be okay for the performance to be somewhat slower, within reason, but a 6000 percent degradation is a problem for my intended purpose.
It turns out that I did not initially supply enough information on this issue.
The PowerShell code we are using to create and populate the test database sends a series of 328 script.sql files to the SQL Server. It is using Windows authentication. This works with the container because we are using a gMSA credential spec, which is documented here:
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
That method of authentication may or may not be relevant. Using Wireshark to monitor the container adapter, I noticed that a connection took just under 4 seconds to authenticate, but I have no other method to provide as a comparison, so I cannot say whether that method of authentication is significantly slower than some other method. What is definitely relevant is that when our main PowerShell code sends a particular script.sql file, it does not use Invoke-Sqlcmd. Rather, it invokes sqlcmd via Invoke-Expression, similar to:
$args = "-v DBName = $dbName JobOwnerName = $jobOwnerName -E -i $fileName -S $server -V 1 -f 65001 -I";
if (!$skipDBFlag)
{
$args += " -d $dbName";
}
Invoke-Expression "& sqlcmd --% $args";
In this case, sqlcmd will reconnect to the database in the container, run a script.sql file, and then disconnect. It will not cache the connection the way that Invoke-Sqlcmd does.
So, because of the lack of connection pooling, authentication was happening 328 times, once for each script.sql file: 4 seconds * 328 / 60 = ~22 minutes. That's where the source of the above issue was, not in any container network issue.
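As an aside, one way to cut the reconnect count would be to pass several files to a single sqlcmd invocation, since -i accepts a comma-separated list and runs them all over one connection. A sketch only (the file names and variables are placeholders, not our actual harness):
sqlcmd -S $server -E -d $dbName -V 1 -f 65001 -I -i script001.sql,script002.sql,script003.sql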
I apologize for not being able to supply all of the relevant information initially. I hope that this answer will help someone who runs into a similar issue when using containers in this way, given how long authentication with SQL Server takes in this configuration.

Can I run a Docker container with its own, externally and host accessible, IP address on a Mac?

I want to run a WebRTC gateway in a Docker container on my Mac.
I need to expose essentially all ports (TCP and UDP) on the container's own IP address (specifying -p does not help because there seems to be a limit on the number of ports). Using --net=host does not work on Mac.
Is there another option?
You can publish all exposed ports using -P (note the uppercase) or --publish-all=true (which is the same) on the docker run command.
Link to the Docker docs about this.
Then you can check the mappings docker assigned using:
docker port yourContainerName
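For example (a sketch; the container and image names are placeholders):
# -P publishes every port EXPOSEd by the image to a random high port on the host.
docker run -d -P --name webrtc-gw my-webrtc-gateway-image
# Show which host ports were assigned to which container ports.
docker port webrtc-gw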
My previous answer is identical to one for a similar question about doing essentially the same thing on a different platform (i.e. Windows).
The problems encountered on the two platforms are different (because macOS and Windows have different network stacks), but the workaround is the same.
I think the answer would help someone encountering the problem in either case.

Docker-Compose ELK stack

I'd like to get an ELK stack running on my virtual machine. I read up on the topic a while ago but only recently figured out how to use docker-compose. I found a complete compose file on GitHub (https://github.com/deviantony/docker-elk) which is capable of composing the whole stack; however, I ran into two questions:
Kibana is awaiting input from Elasticsearch on the URL "http://elasticsearch:9200". That URL obviously doesn't resolve to anything on its own, so how is that supposed to work? I changed it so Kibana tries to connect to localhost:9200 instead, but Elasticsearch refuses the connection.
I checked the running containers and saw that Elasticsearch is indeed running, but "lsof -i :9200" (or 9300) did not show anything, which means it is not listening on the port, right? Or would it not show up because it is running in Docker?
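For reference, a sketch of checks along these lines (the exact service names and the availability of curl inside the images are assumptions based on the docker-elk compose file):
# From inside the Elasticsearch service container - is the process answering locally?
docker-compose exec elasticsearch curl -s http://localhost:9200
# From another service on the same compose network, the service name resolves via Docker's embedded DNS:
docker-compose exec kibana curl -s http://elasticsearch:9200
# On the VM host, lsof only sees the port if it is published ("ports:") in the compose file:
sudo lsof -i :9200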
Thanks in advance.

Docker login from cmd line yields i/o timeouts

I'm trying to log in to Docker from the Docker Quickstart Terminal, but it doesn't work.
I always get an error saying: "Error response from daemon: Server Error: Post https://index.docker.io/v1/users/: dial tcp: i/o timeout"
I have been working with docker the last few days and was always able to login and push/pull stuff. The only thing different I can think of is that I'm currently on a different Wifi.
Any help is greatly appreciated, thanks.
Karsten
This is a known issue, and is probably related to VirtualBox; after switching networks, boot2docker/VirtualBox sometimes loses connectivity or uses incorrect DNS settings.
DNS confused when switching WiFi networks
Container connectivity lost after switching wireless networks
Manual restart required everytime network connectivity changes (contains some workarounds)
Docker Machine 0.5.3 adds two new options that may circumvent this:
--virtualbox-dns-proxy
--virtualbox-host-dns-resolver
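For example, a sketch (these are creation-time flags for the virtualbox driver, and "default" is a placeholder machine name):
docker-machine create -d virtualbox --virtualbox-host-dns-resolver default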
After restarting the machine it worked again; I don't know what caused the problem though...
