ddev Xdebug and step debugging not working with PhpStorm

I'm new to ddev, but not new to Ansible, Vagrant, and the like, or to setting up Xdebug, and I can't for the life of me get step debugging to work with PhpStorm and ddev. I've tried changing ports and followed all the steps in the thread "How do I get xdebug/step-debugging working with ddev?", to no avail. What am I missing?
Here are my settings:
APIVersion: v1.13.0
name: ee-dev-clean
type: php
docroot: ""
php_version: "7.3"
webserver_type: nginx-fpm
router_http_port: "80"
router_https_port: "443"
xdebug_enabled: true
additional_hostnames: []
additional_fqdns: []
nfs_mount_enabled: false
provider: default
use_dns_when_possible: true
timezone: ""
I'm supplying my own 20-xdebug.ini file, which is definitely loaded and shows the correct values when I run phpinfo(). These are basically the same settings that work in the other Ansible/Vagrant-based machines I run locally (all of which were suspended while testing ddev).
[XDebug]
zend_extension=xdebug.so
; zend_extension="/usr/lib/php/20190902/xdebug.so"
xdebug.remote_host=host.docker.internal
xdebug.remote_enable=1
xdebug.remote_connect_back=1
xdebug.show_exception_trace=0
; xdebug.remote_port=11011
xdebug.idekey=PHPSTORM
xdebug.remote_log="/var/log/xdebug.log"
The xdebug.log file shows the following output after I make a request and try to hit a breakpoint. I also have "stop at first line" enabled.
[1038] I: Remote address found, connecting to 172.18.0.1:9000.
[1037] I: Remote address found, connecting to 172.18.0.1:9000.
[1037] W: Creating socket for '172.18.0.1:9000', poll success, but error: Operation now in progress (29).
[1038] W: Creating socket for '172.18.0.1:9000', poll success, but error: Operation now in progress (29).
[1037] E: Could not connect to client. :-(
[1037] Log closed at 2020-02-19 19:56:05
[1038] E: Could not connect to client. :-(
[1038] Log closed at 2020-02-19 19:56:05
[1057] Log opened at 2020-02-19 19:56:50
[1057] I: Checking remote connect back address.
[1057] I: Checking header 'HTTP_X_FORWARDED_FOR'.
[1057] I: Remote address found, connecting to 172.18.0.1:9000.
[1057] W: Creating socket for '172.18.0.1:9000', poll success, but error: Operation now in progress (29).
[1057] E: Could not connect to client. :-(
[1057] Log closed at 2020-02-19 19:56:50
[1037] Log opened at 2020-02-19 19:56:51
[1037] I: Checking remote connect back address.
[1037] I: Checking header 'HTTP_X_FORWARDED_FOR'.
[1037] I: Remote address found, connecting to 172.18.0.1:9000.
[1038] Log opened at 2020-02-19 19:56:51
[1038] I: Checking remote connect back address.
[1038] I: Checking header 'HTTP_X_FORWARDED_FOR'.
[1038] I: Remote address found, connecting to 172.18.0.1:9000.
[1037] W: Creating socket for '172.18.0.1:9000', poll success, but error: Operation now in progress (29).
[1038] W: Creating socket for '172.18.0.1:9000', poll success, but error: Operation now in progress (29).
[1037] E: Could not connect to client. :-(
[1037] Log closed at 2020-02-19 19:56:51
[1038] E: Could not connect to client. :-(
[1038] Log closed at 2020-02-19 19:56:51

OK, it was the port. I had to set it to use port 11011 instead of 9000; apparently because ddev's nginx-fpm web container already uses port 9000 for php-fpm, ddev expects Xdebug to connect on 11011 instead. I removed the entire 20-xdebug.ini override and added an xdebug-port.ini file with just the following:
[XDebug]
xdebug.remote_port=11011
Then I changed the Xdebug port in the PhpStorm project settings (Settings > Languages & Frameworks > PHP > Debug) to match. I also added the corresponding server settings, though PhpStorm figured those out for me when it saw the incoming connection.
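For anyone else hitting this: in ddev, per-project PHP overrides like the one above go in the project's .ddev/php/ directory and are picked up on restart. A minimal sketch of the whole change, assuming a Unix-like shell on the host:

mkdir -p .ddev/php
printf '[XDebug]\nxdebug.remote_port=11011\n' > .ddev/php/xdebug-port.ini
ddev restart

After the restart, make sure PhpStorm is listening on the same port (Run > Start Listening for PHP Debug Connections) before reloading the page.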

Related

Docker service tasks stuck in "Preparing" state after reboot on Windows

Restarting a Windows server that is a swarm worker causes Windows containers to get stuck in a "Preparing" state indefinitely once the server and Docker daemon are back online.
Image of tasks/containers stuck in preparing state:
https://user-images.githubusercontent.com/4528753/65180353-4e5d6e80-da22-11e9-8060-451150865177.png
Steps to reproduce the issue:
1. Create a swarm (in my case, CentOS 7 managers and a few Windows Server 1903 workers).
2. Create a "global" docker service that only runs on the Windows machines. They should start up fine initially and work just fine.
3. Drain one or more of the Windows nodes that are running the Windows container(s) from step 2 (docker node update --availability=drain nodename).
4. Restart one or more of the nodes that were drained in step 3 and wait for them to come back up.
5. Set the Windows node(s) back to active (docker node update --availability=active nodename).
At this point, observe that the docker service created in step 2 stays "Preparing" the containers on those nodes indefinitely (docker service ps servicename --no-trunc); you can observe this and run these commands from any manager node.
Meanwhile, the Docker logs on the worker fill with errors like these:
memberlist: Refuting a suspect message (from: c9347e85405d)
memberlist: Failed to send ping: write udp 10.60.3.40:7946->10.60.3.110:7946: wsasendto: The requested address is not valid in its context.
grpc: addrConn.createTransport failed to connect to {10.60.3.110:2377 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.60.3.110:2377: connectex: A socket operation was attempted to an unreachable host.". Reconnecting... [module=grpc]
memberlist: Failed to send ping: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
grpc: addrConn.createTransport failed to connect to {10.60.3.110:2377 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.60.3.110:2377: connectex: A socket operation was attempted to an unreachable host.". Reconnecting... [module=grpc]
agent: session failed [node.id=wuhifvg9li3v5zuq2xu7c6hxa module=node/agent error=rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.60.3.69:2377: connectex: A socket operation was attempted to an unreachable host." backoff=6.3s]
Failed to send gossip to 10.60.3.110: write udp 10.60.3.40:7946->10.60.3.110:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.105: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.186: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.105: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.186: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.105: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.109: write udp 10.60.3.40:7946->10.60.3.109:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.110: write udp 10.60.3.40:7946->10.60.3.110:7946: wsasendto: The requested address is not valid in its context.
memberlist: Failed to send gossip to 10.60.3.105:7946: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
memberlist: Failed to send gossip to 10.60.3.186:7946: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
Many of these errors are odd; for example, port 7946 is completely open between the cluster nodes, and telnet confirms this.
I expect the docker service containers to start promptly and not get stuck in a Preparing state. The docker image is already pulled, so it should be fast.
docker version output
Client: Docker Engine - Enterprise
Version: 19.03.2
API version: 1.40
Go version: go1.12.8
Git commit: c92ab06ed9
Built: 09/03/2019 16:38:11
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Enterprise
Engine:
Version: 19.03.2
API version: 1.40 (minimum version 1.24)
Go version: go1.12.8
Git commit: c92ab06ed9
Built: 09/03/2019 16:35:47
OS/Arch: windows/amd64
Experimental: false
docker info output
Client:
Debug Mode: false
Plugins:
cluster: Manage Docker clusters (Docker Inc., v1.1.0-8c33de7)
Server:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 19.03.2
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd gcplogs gelf json-file local logentries splunk syslog
Swarm: active
NodeID: wuhifvg9li3v5zuq2xu7c6hxa
Is Manager: false
Node Address: 10.60.3.40
Manager Addresses:
10.60.3.110:2377
10.60.3.186:2377
10.60.3.69:2377
Default Isolation: process
Kernel Version: 10.0 18362 (18362.1.amd64fre.19h1_release.190318-1202)
Operating System: Windows Server Datacenter Version 1903 (OS Build 18362.356)
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 8GiB
Name: SWARMWORKER1
ID: V2WJ:OEUM:7TUQ:WPIO:UOK4:IAHA:KWMN:RQFF:CAUO:LUB6:DJIJ:OVBX
Docker Root Dir: E:\docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: this node is not a swarm manager - check license status on a manager node
Additional Details
These nodes are not using Docker Desktop for Windows. I provisioned Docker on each box primarily following the PowerShell instructions here: https://docs.docker.com/install/windows/docker-ee/
Windows firewall is disabled
iptables/firewalld is disabled
Communication is completely open between the cluster nodes
Totally up-to-date on cumulative updates
I posted on the moby repo issues but never heard a peep:
https://github.com/moby/moby/issues/39955
The ONLY way I've found to temporarily fix the issue is to drain the node from the swarm, delete the Docker files, reinstall the Windows "Containers" feature, and then rejoin the swarm. But it happens again on reboot.
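For reference, the temporary workaround amounts to roughly the following on the affected worker, in PowerShell (a sketch, assuming the E:\docker data root from the docker info output above; the join token comes from 'docker swarm join-token worker' on a manager):

# Leave the swarm and stop the engine
docker swarm leave
Stop-Service docker
# Delete the Docker data files (data root is E:\docker on these nodes)
Remove-Item -Recurse -Force E:\docker
# Reinstall the Containers feature and reboot
Uninstall-WindowsFeature Containers
Install-WindowsFeature Containers
Restart-Computer
# After the reboot, rejoin the swarm from the worker
docker swarm join --token <worker-token> 10.60.3.110:2377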
What's interesting is that when I see a swarm task in a "Preparing" state on the Windows worker, the server doesn't seem to be doing anything at all; it's as if the manager thinks the worker is preparing the container, but it isn't...
Anyone have any suggestions?

Xdebug 2.7.0 does not work on PhpStorm with Laravel Homestead

I've been trying to make Xdebug work on my machine for over a month now, and it's driving me crazy. I have the following configuration:
A Vagrant box with Laravel Homestead (PHP 7.3, Xdebug 2.7.0rc1 on Ubuntu 18.04.1)
On my own computer I have the most recent version of PhpStorm. I've checked the settings more than ten times now; I could recite them in my sleep. The remote CLI interpreter is configured, and I also have the matching entries in the PHP > Servers dialog.
To be sure that PHP loads the extension, I run php -i | grep xdebug, with this result:
vagrant@vrfy:~$ php -i | grep xdebug
/etc/php/7.3/cli/conf.d/20-xdebug.ini,
xdebug
...
xdebug.idekey => no value => no value
...
xdebug.remote_enable => On => On
...
xdebug.remote_host => 192.168.10.1 => 192.168.10.1
...
xdebug.remote_port => 9000 => 9000
Note: I also tried PHPSTORM as the IDE key.
As you can see:
Xdebug is installed on the VM
PhpStorm knows how to reach the VM with Xdebug (validation succeeded)
What am I doing wrong?
EDIT: phpinfo() confirms Xdebug is loaded. Here is the Xdebug log:
[31052] Log opened at 2019-04-05 12:50:48
[31052] I: Checking remote connect back address.
[31052] I: Checking header 'HTTP_X_FORWARDED_FOR'.
[31052] I: Checking header 'REMOTE_ADDR'.
[31052] I: Remote address found, connecting to 192.168.10.1:9000.
[31052] E: Time-out connecting to client (Waited: 200 ms). :-(
[31052] Log closed at 2019-04-05 12:50:48
[31052]
[31052] Log opened at 2019-04-05 12:50:49
[31052] I: Checking remote connect back address.
[31052] I: Checking header 'HTTP_X_FORWARDED_FOR'.
[31052] I: Checking header 'REMOTE_ADDR'.
[31052] I: Remote address found, connecting to 192.168.10.1:9000.
[31052] E: Time-out connecting to client (Waited: 200 ms). :-(
[31052] Log closed at 2019-04-05 12:50:49
[31052]
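The "Time-out connecting to client (Waited: 200 ms)" line means Xdebug found the right address but nothing accepted the connection within the timeout, which usually points to PhpStorm not actually listening or to a firewall on the host. A quick check from inside the Homestead VM, assuming nc is installed (it ships with Ubuntu 18.04):

# Does anything answer on the IDE side?
nc -zv 192.168.10.1 9000

If that fails while PhpStorm's Run > Start Listening for PHP Debug Connections is enabled, look at the host firewall rules for port 9000.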

FileZilla - can't access folder when connecting from another computer using IP address, but it works on localhost

When I connect using localhost on the computer the FileZilla server runs on, it works perfectly fine, but when I connect by IP address (it is port-forwarded correctly, I'm 100% sure of that) this happens:
Status: Connecting to **.**.**.**:800...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is current directory.
Command: TYPE I
Response: 200 Type set to I
Command: PASV
Response: 227 Entering Passive Mode (**,**,**,**,***,***)
Command: MLSD
Error: The data connection could not be established: ECONNREFUSED - Connection refused by server
Response: 425 Can't open data connection for transfer of "/"
Error: Failed to retrieve directory listing
When this happens, it's usually a firewall configuration problem.
Besides the control connection, FTP also uses a data connection on a different port that needs to be assigned before data transfers.
This means that you must open ports on your firewall to allow data transfers and, of course, you should make FileZilla Server aware of that.
For passive mode transfers, you should set a range of ports in FileZilla Server's passive mode settings.
Of course, those ports should be open at the firewall too. A longer discussion can be found here.
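For example, if you pick 50000-51000 as the passive range (an arbitrary range for illustration) in FileZilla Server's passive mode settings, the matching inbound rule on Windows Firewall would look something like:

netsh advfirewall firewall add rule name="FileZilla passive ports" dir=in action=allow protocol=TCP localport=50000-51000

If the server is behind NAT, also set the external IP in the passive mode settings so the PASV reply advertises the public address instead of the LAN one.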

How to properly configure an FTP connection with a Windows Azure server?

I'm new to Windows Azure server configuration, and I'm trying to configure an FTP connection. But when I access the server with FileZilla, it doesn't work. What am I doing wrong here?
I'm using IIS with the FTP Server role installed.
Following is the error log from FileZilla
Status: Resolving address of AZR-SRV-map01.cloudapp.net
Status: Connecting to 52.187.64.207:990...
Status: Connection established, initializing TLS...
Status: Verifying certificate...
Status: TLS connection established, waiting for welcome message...
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is current directory.
Command: TYPE I
Response: 200 Type set to I.
Command: PASV
Response: 227 Entering Passive Mode (52,187,64,207,195,237).
Command: LIST
Response: 150 Opening BINARY mode data connection.
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
Status: Disconnected from server
I also tried the following steps when configuring the FTP connection:
The endpoints have been configured in the Azure Portal.
This is how I published the FTP site.
I configured FTP Firewall Support with the Azure server's public IP.
And I enabled the inbound and outbound firewall rules.
After completing all the steps, I restarted the Microsoft FTP Service, but the problem still exists.
For now, we can't configure active mode FTP on an Azure VM; we should configure a data channel port range in FTP Firewall Support so that FTP works in passive mode. For example, we can use ports 10000-10010 as the data channel port range. We should also add those ports to the VM's endpoints and add them to the VM's firewall inbound rules.
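A sketch of setting that range from an elevated command prompt (appcmd is IIS's command-line tool; 10000-10010 matches the example above), plus the matching firewall rule:

%windir%\system32\inetsrv\appcmd set config -section:system.ftpServer/firewallSupport /lowDataChannelPort:10000 /highDataChannelPort:10010 /commit:apphost
netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=10000-10010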
By the way, although the Windows firewall seems to allow all the traffic that's required, we also need to enable stateful FTP filtering on the firewall:
netsh advfirewall set global StatefulFtp enable
Then restart the Microsoft FTP service, and we should be up and running:
net stop ftpsvc
net start ftpsvc
Here is a case similar to yours; please refer to it.

SFTP Connection Issue "Connection reset by peer"

I am unable to connect to a secured FTP server using FileZilla, and with PSFTP too.
While connecting, a popup appears asking me to accept the server's key; after that, I get this error.
Error message:
Status: Connecting to idx.XYZ.com...
Response: fzSftp started
Command: open "abc_mnp@idx.XYZ.com" 22
Command: Pass: ****
Error: Network error: Connection reset by peer
Error: Could not connect to server
Any ideas, guys?
I feel this is an issue with the server.
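A "Connection reset by peer" at this stage generally means the server (or something in between, e.g. an IP allowlist or intrusion prevention) dropped the TCP connection during the handshake, so it usually is server-side. If an OpenSSH client is available, a verbose attempt shows how far the handshake gets:

# -v may be repeated (-vv, -vvv) for more detail
sftp -v abc_mnp@idx.XYZ.com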
