Xdebug not working after storage symbolic link - Laravel

I have worked on a few Laravel projects. I use PhpStorm as my IDE, which I have set up to work with Xdebug.
This is the first time I have used Laravel's symbolic link (php artisan storage:link) to create a link from the public/storage directory to the storage/app/public directory.
After this, Xdebug stopped hitting breakpoints, even though it seems to be configured the same as always.
I have tested Xdebug in another Laravel project in PhpStorm and it worked, so the problem is not PhpStorm itself. I also opened my current project (the one with the symbolic link) in IntelliJ IDEA, and Xdebug did not work there either, so something is fishy in this project specifically.
After reading the JetBrains Xdebug troubleshooting guide, I came to know that I have to use path mappings in
Settings | Languages & Frameworks | PHP | Servers.
Now the problem is that I really don't know which path should be mapped to which folder, and I found no related solution on the internet (maybe I'm bad at googling).
Here is my bad attempt:
I tried to map the storage directory (in public) to the public directory (in storage/app).
Please guide me regarding this.
xdebug.log file
[3172] Log opened at 2020-08-25 17:39:58
[3172] I: Checking remote connect back address.
[3172] I: Checking header 'HTTP_X_FORWARDED_FOR'.
[3172] I: Checking header 'REMOTE_ADDR'.
[3172] W: Remote address not found, connecting to configured address/port: http://127.0.0.1:8000/:9000. :-|
[3172] W: Creating socket for 'http://127.0.0.1:8000/:9000', getaddrinfo: 0.
[3172] E: Could not connect to client. :-(
[3172] Log closed at 2020-08-25 17:39:58
[7416] Log opened at 2020-08-25 17:43:53
[7416] I: Checking remote connect back address.
[7416] I: Checking header 'HTTP_X_FORWARDED_FOR'.
[7416] I: Checking header 'REMOTE_ADDR'.
[7416] I: Remote address found, connecting to ::1:9000.
[7416] E: Time-out connecting to client (Waited: 200 ms). :-(
[7416] Log closed at 2020-08-25 17:43:53
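For reference, the Xdebug 2 settings involved in this log would be roughly the following (a sketch inferred from the log output, not a copy of my actual php.ini; the first log entry suggests the configured address was a full URL, whereas xdebug.remote_host expects a bare host name or IP):
xdebug.remote_enable = 1
xdebug.remote_connect_back = 1
xdebug.remote_host = 127.0.0.1
xdebug.remote_port = 9000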

I had this same problem. Path mappings are indeed the answer.
Instead of putting a mapping on the location of the symlink (public/storage), you need to put the mapping on the target of the symlink (storage/app/public). This needs to be mapped to the absolute path on the server (which you identify as C:/xampp/htdocs/HRMS/storage/app/public).
This is made somewhat easier if you can put a breakpoint on a line which calls a script within the symlinked directory. When you hit that breakpoint and then "step into" the called function, PhpStorm will complain:
remote path <something> is not mapped to
any file path in project
Click to set up path mappings
Click on that, find the storage/app/public directory in your project files and type in the <something> from the debugger message. Hit OK, and all should be good.
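For reference, the finished mapping would look something like this (a sketch; the server-side path is the one from your question, and <project root> stands for wherever the project lives on your machine):
File path in project:         <project root>/storage/app/public
Absolute path on the server:  C:/xampp/htdocs/HRMS/storage/app/public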

Related

Nifi: Failed to retrieve directory listing when connecting to FTP server

I have a ListenFTP processor listening on a port, and when I try to connect to it via FileZilla I get the error "Failed to retrieve directory listing".
The connection seems to be established at first, but then this error occurs.
NiFi is hosted on an Ubuntu server, running in a Docker container.
The ListenFTP processor is listening on port 2221.
I tried to change some configuration in FileZilla based on this issue, but nothing worked.
The connection works well on localhost: I can connect to the FTP server and transfer files.
Does someone have an idea how to solve this?
If you look at the documentation of the processor, it states:
"After starting the processor and connecting to the FTP server, an
empty root directory is visible in the client application. Folders can
be created in and deleted from the root directory and any of its
subdirectories. Files can be uploaded to any directory. Uploaded files
do not show in the content list of directories, since files are not
actually stored on this FTP server, but converted into FlowFiles and
transferred to the next processor via the 'success' relationship. It
is not possible to download or delete files like on a regular FTP
server. All the folders (including the root directory) are virtual
directories, meaning that they only exist in memory and do not get
created in the file system of the host machine. Also, these
directories are not persisted: by restarting the processor all the
directories (except for the root directory) get removed. Uploaded
files do not get removed by restarting the processor, since they are
not stored on the FTP server, but transferred to the next processor as
FlowFiles."

Firefox refuses a relative path for local files

I load a local HTML file from my Windows 7 filesystem:
file:///C:/Users/...etc.../myfile.html
Inside it, I load an existing file relative to the directory of myfile.html:
....load("../common/events.json");
Firefox refuses it, with this error in the console:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote
resource at file:///C:/Users/...etc.../common/events.json?timeshift=-60. (Reason: CORS request not http).
With the link: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSRequestNotHttp
So I set privacy.file_unique_origin to false in about:config and restarted Firefox: same issue.
NB: all is OK with ... IE 11!
You could start your own local server:
python3 -m http.server
which tells you the port (e.g. Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/)).
Then enter something like http://localhost:8000/myfile.html in the browser address bar.
The URL path is resolved relative to the directory where the server was started, not your filesystem root.
The security feature you disabled only controls access to files in the same directory as the HTML document or below it.
Accessing files in other directories (i.e. if your relative path starts with ../, or you use an absolute path) is always forbidden.
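Note that because myfile.html requests ../common/events.json, the server should be started one level up, so that both the HTML file's folder and common/ sit under the web root. A sketch (the folder name "site" is hypothetical, standing in for whichever directory actually contains myfile.html):
# run from the directory that contains both site/ and common/
python3 -m http.server 8000
# then browse to http://localhost:8000/site/myfile.html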

cURL error 60: SSL certificate problem: unable to get local issuer certificate (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)

GuzzleHttp\Exception\RequestException
cURL error 60: SSL certificate problem: unable to get local issuer certificate (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
How do I solve this error? I have tried everything available on Google, including the steps below, several times:
Download this file: http://curl.haxx.se/ca/cacert.pem
Place this file in the C:\wamp64\bin\php\php7.1.9 folder
Open php.ini and find this line:
;curl.cainfo
Change it to:
curl.cainfo = "C:\wamp64\bin\php\php7.1.9\cacert.pem"
But it still didn't work for me.
Please help me; I am so frustrated right now...
You need to put the .pem file inside C:\wamp64\bin\php\php7.1.9\extras\ssl instead of C:\wamp64\bin\php\php7.1.9.
Make sure the file mod_ssl.so is inside C:\wamp64\bin\apache\apache(version)\modules.
Enable mod_ssl in httpd.conf, inside the Apache directory C:\wamp64\bin\apache\apache2.4.27\conf.
Enable php_openssl.dll in php.ini.
In WAMP there are two php.ini files, and you need to do this in both of them.
The first one can be reached through your WAMP taskbar icon,
and the other one is located in C:\wamp64\bin\php\php(Version).
In both php.ini files, find the line curl.cainfo = and give it a path like this:
curl.cainfo = "C:\wamp64\bin\php\php7.1.9\extras\ssl\cacert.pem"
Now save the files and restart your server.
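To check that the CA bundle is actually being picked up, a quick test script (a sketch; run it through the same PHP installation that serves your site):
<?php
// If curl.cainfo points at a valid bundle, this HTTPS request succeeds;
// otherwise curl_error() reports the same certificate problem.
$ch = curl_init('https://curl.haxx.se/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
echo $body === false ? 'cURL error: ' . curl_error($ch) : 'SSL verification OK';
curl_close($ch);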
In my case this error also happened when using Laravel's development server.
To fix it, I simply stopped that server and ran Apache through XAMPP instead, and that was enough to solve the problem.
If you're having trouble with this approach, please read the answers to this question for more details.

NOT your standard Curl error: WAMP/local copy of remote Magento site, curl_setopt() not defined

I have created a local copy of my remote store (Magento Community 1.6.2.0) using WampServer 2.2E:
Cleared the entire Magento cache on the remote site
Exported the remote MySQL database using phpMyAdmin
Tarred up the entire remote public_html folder and downloaded it to my local PC
Recreated the directory structure locally under C:\wamp\www\
Created a new database locally (I'm using WampServer) with the appropriate user/pass/DB name according to /app/etc/local.xml -- note: the database host in local.xml is "localhost"
Imported the database with no errors
Modified the mage_core_config_data table's baseurl variables to both point to http://www.localhost.com/
Modified the local .htaccess to prevent configuration that would result in crashing, as well as to modify the rewrite rule that does the 301 redirect from domain.com to www.domain.com (I changed domain.com to localhost.com)
Deleted everything in var/cache, var/session, var/tmp, and the system /tmp folder, as suggested in another Q&A
Verified that WAMP has the curl PHP extension enabled
So now everything loads except for the admin panel. When I go to http://localhost.com/index.php/admin and log in, the error is:
( ! ) SCREAM: Error suppression ignored for
( ! ) Fatal error: Call to undefined function curl_setopt() in C:\wamp\www\includes\src\Varien_Http_Adapter_Curl.php on line 52
I assume that curl_setopt() is defined in the curl library, and that extension is enabled in WampServer... does anyone know what's going on with this?
I just realized that my PHP version is 5.4.3. Wampserver.com offers a 2.2E package with 5.3.13, but that also doesn't work.
I got this to work only by finally trying WampServer 2.2D, which has PHP version 5.3.10. I used the 64-bit installer -- I didn't try the 32-bit one, but I assume it will work. This information is notably absent from the magentocommerce.com wiki entry on setting up Magento with WampServer.
A new problem is that no page other than the home page loads (404 Not Found for all product pages and categories). This is solved by ensuring that Apache rewrites are enabled (WampServer menu -> Apache -> Apache modules -> rewrite_module).
Fatal error: Call to undefined function curl_setopt()
This error means that the curl extension (php5-curl) is not available/activated on your system. You should be able to activate it through the WAMP configuration panel.
For Linux users, install php5-curl through your package manager.
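If toggling it through the panel doesn't stick, the underlying change is a single php.ini line (a sketch for PHP 5.x on Windows; the exact php.ini path depends on your WAMP version, and as noted in the previous question WAMP keeps separate copies for Apache and the CLI):
; in php.ini, uncomment or add:
extension=php_curl.dll
You can then confirm from a console that the function exists:
php -r "var_dump(function_exists('curl_setopt'));"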

Mercurial reported error number 255: abort: Resource busy

Using MacHG I get this message:
"Mercurial reported error number 255:abort: Resource busy"
I'm trying to push changes across a local network from my Mac to an SMB-mounted shared directory. It was working earlier today for 2 pushes and a clone.
I have read all the forum posts about lock files and symlinks, and about SMB needing to support symlinks for the file locking to work.
Also there are no .hg/store/lock or .hg/wlock files for me to delete to resolve the locking scenario.
EDIT: After trying CIFS as the protocol for mounting the share, it would appear CIFS is now reporting the same issue/error message...
After repeated tests of:
switching from SMB to CIFS,
performing a verify on each repository,
closing MacHG on all computers involved,
closing Xcode on all computers involved,
and restarting all computers involved,
it would seem the only consistent solution is to NOT map to a networked share folder...
http://hginit.com/02.html
The above link is a really great guide on getting a simple intranet share happening.
You'll need to edit the .hg/hgrc file so that it includes the following lines:
[web]
push_ssl=False
allow_push=*
Then, for our situation, we created a startup script (a batch file for Windows in our case) that runs when the server is turned on and performs the following:
taskkill /f /im hg.exe /t
cd pathtorepository\MyProject
hg serve -d -p <portnumber1>
cd pathtosecondproject\MySecondProject
hg serve -d -p <portnumber2>
Visit the Mercurial wiki or search SO for more details on setting up hg serve if you require secure connections and authentication:
https://www.mercurial-scm.org/wiki/hgserve
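With the scripts above in place, clients push over HTTP instead of the file share. A sketch (the <servername> and <portnumber1> placeholders match the batch file above):
hg push http://<servername>:<portnumber1>/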
