PHPStorm - Invalid descendent file name - Windows

I'm attempting to sync my local PHPStorm project from my Windows 7 PC with my Ubuntu server.
When I try any kind of connection (e.g. "Test SFTP connection"), it fails with
Invalid descendent file name "C:\nppdf32Log\debuglog.txt"
The folder mentioned doesn't exist on my Windows machine, and of course not on my Ubuntu server either.
Even the most basic operation of connecting to the Ubuntu server fails because of this. JetBrains support suggested asking here, so does anyone have a clue?

You have a file on your Ubuntu server with that C:\nppdf32Log\debuglog.txt name. YES -- it's on Ubuntu, and YES -- it's actually a file name and not a full path (Linux allows : and \ characters in file names).
Unfortunately, such a file name is invalid on Windows, and the library PhpStorm uses for SFTP communication cannot process such files at all (yes, it's valid as a full path, but not as a file name on its own).
The solution is to connect to your server over SFTP with another program (e.g. FileZilla) and delete that file. After that you will be able to continue with PhpStorm's built-in SFTP functionality.
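If you prefer the command line, you can also remove it over SSH (a sketch, assuming you have shell access and that the file sits in the directory you land in after login; adjust the path otherwise):
ssh user@yourserver "rm 'C:\nppdf32Log\debuglog.txt'"
The single quotes matter so that the remote shell does not treat the backslashes as escape characters.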
P.S.
Such a file is usually created by Firefox on Linux (google the file name for additional details):
https://askubuntu.com/questions/144408/what-is-the-file-c-nppdf32log-debuglog-txt
"JetBrains support suggested asking here"
That's odd (and hard for me to believe) -- they should certainly know about this issue; you are not the first to run into this error.
In any case, this is the ticket to watch -- hopefully the library used for SFTP communication will handle such situations better in the future:
http://youtrack.jetbrains.com/issue/WI-2449

I ran into the same problem, but I enabled error logging (described here: https://devnet.jetbrains.com/docs/DOC-1202) and saw that I had created a file with an invalid name.

I had this same problem, but it was not due to Firefox, and I wonder if the original asker might have made the same mistake I did in configuring xdebug.
As a newbie, when setting the value for xdebug.remote_log in my php.ini (actually in a separate xdebug.ini), I used the Windows file path to my project on my local machine. Why? Because the setting is called "remote_log", so I mistakenly thought it wanted the path on my Windows machine, which seemed very strange to me at the time. But I am new to remote debugging, so... oops.
Using a Windows path is wrong:
xdebug.remote_log="C:\Users\Buttle\PhpstormProjects\xdebug_log.txt"
And it results in:
/var/www/myproject/C:\Users\Buttle\PhpstormProjects\xdebug_log.txt
(the C:\Users\... part is the actual file name)
This is right:
xdebug.remote_log="xdebug_log.txt"
And presumably results in:
/var/www/myproject/xdebug_log.txt
(xdebug_log.txt is the actual file name)
It appears that Xdebug saves the log file inside the folder that the requested PHP file was served from (in my case, the folder containing my project's index.php).
I imagine that if I enter a valid Linux path, I might be able to put the file somewhere else. E.g., this might work:
xdebug.remote_log="/var/www/xdebug_log.txt"
So this solves two problems: 1) why the heck doesn't Xdebug log anything on its server (it does!), and 2) the descendent file name problem.
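For completeness, a minimal sketch of the relevant settings (assuming Xdebug 2.x setting names and a target directory that the web server user can write to; adjust to your setup):
; xdebug.ini -- keep the log on the server, using an absolute Linux path
xdebug.remote_enable=1
xdebug.remote_log="/var/www/xdebug_log.txt"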

Related

Bash: cannot create directory with unicode character

I am trying to check out a git repo (this) which requires the creation of a folder with Unicode characters in its name: uni¢𐍈d€, where the character between ¢ and d is the Gothic letter Hwair. On my workstation it works fine, but when I try to do it on a server (Red Hat Enterprise Server, where I do NOT have sudo powers), I get the error
fatal: cannot create directory at 'python/ycm/tests/testdata/uni¢𐍈d€': Permission denied
As a side note, I have tried to create the folder manually, and the Hwair character is the only one that causes problems; that is, the folder uni¢d€ can be created successfully.
The LANG variable is set to en_US.UTF-8 on both systems. I have tried to find the differences between my workstation's fonts and the server's fonts. Grepping for 'hwair', I found a font on my workstation that was missing on the server, so I copied the font folder to ~/.fonts on the server and ran fc-cache, but that didn't help.
Googling hwair and grepping here and there on my workstation and the server, I found that the Hwair character should be in the Code2001 font, which I think should be installed on both systems, since both systems have the file /etc/fonts/conf.d/69-unifont.conf; the two files are identical and contain the line
<family>Code2001</family> <!-- plane1 and beyond -->
I have never really dealt with fonts at this level of detail, so I'm not sure whether this is something solvable by adding fonts locally (perhaps the system blocks such fonts). I don't even know whether copying font files to ~/.fonts/ and running fc-cache is enough, or whether there is more to it.
So I guess the questions are:
Is the problem related to missing fonts on the server?
If yes, how can I add the missing font?
If no, is it related to something I cannot fix (perhaps it requires sudo privileges)?
Edit: the folder that cannot be created is part of the git repository. The error appears when cloning the repo (or any time you try to check out the master branch). I don't think there is an issue with permissions, otherwise it would also fail on my workstation (the permissions of the 'host' folder are the same). Also, trying mkdir $( printf 'uni¢\xF0\x90\x8D\x88d€' ) does not work either. Bash does seem to interpret the Unicode encoding correctly, since when it prints 'cannot create...', the folder name DOES show the Hwair symbol.
Edit 2: if you think the question has some flaws, please add a comment rather than just voting to close. I'm happy to change/edit/improve if need be.
I had the exact same issue!
It seems that, for some reason unknown to me, it is not possible to have the Hwair character in file names on some filesystems, as has already been mentioned.
So as a workaround, I suggest you look at the available filesystems with something like:
df -Th
and then, e.g., use a symbolic link to a filesystem that uses ext4.
FYI: The filesystem where it didn't work for me is of type nfs4.
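A rough sketch of that workaround (all paths and the repository URL are placeholders; the idea is just to make the checkout land on an ext4 filesystem and reach it from your home directory):
df -Th                                      # find a mount whose Type column says ext4
git clone <repo-url> /scratch/ext4/myrepo   # clone onto the ext4 filesystem
ln -s /scratch/ext4/myrepo ~/myrepo         # reach it through a link in your (nfs4) home directory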

ClickOnce Error "different computed hash than specified in manifest" when transferring published files

I am in an interesting situation where I maintain the code for a program that is used and distributed primarily by our sister company. We are ready to distribute the program to all of the third-party users, and since it is technically our sister company's program, we want to host it on their website. (In the interest of anonymity, I'll use 'program' everywhere instead of the actual application name, and 'www.SisterCompany.com' instead of their actual URL.)
So I get everything ready to go, set up the Publish settings to check for updates at program start and the minimum required version, and I set the Installation Folder URL and Update Location to "http://www.SisterCompany.com/apps/program/", with the actual Publishing Folder Location as "C:\LocalProjects\Program\Publish\". Everything else is pretty standard.
After publishing, I confirm that everything installs and works correctly when running directly from the publish location on my C: drive. So I put everything on our FTP server, and the guy at our sister company pulls it down and places everything in the '/apps/program/' directory on their web server.
This is where it goes bad. When I try to install it from their site, I get the error "File, Program.exe.config, has a different computed hash than specified in manifest." I tested it a bit, and I even get that error when trying to install from any network location on our network other than my local C: drive.
After doing the initial publish in Visual Studio, I have changed no files (which is the answer/reason I've found by searching about this error).
What could be causing this? Is it because I set the Installation Folder URL to a location that it isn't initially published to?
Let me know if any additional info is needed.
Thanks.
After bashing my head against this all weekend, I have finally found the answer. After unsigning the project and removing the hash on the offending file (an XML file), I got the program to install, but it was giving me 'Windows Side by Side' errors. I drilled down into the app cache where the file was, and instead of a config .xml file, it was one of the HTML files from the website the ClickOnce installer was hosted on. It turns out the web server didn't seem to like serving up an .xml (or, as it also turned out, .mdb) file.
This MSDN article ended up giving me the final solution:
I had to make sure that the 'Use ".deploy" file extension' option was selected so that the web server wouldn't mangle files with extensions it didn't like.
I couldn't figure out why that one file's hash would be different. It turns out it wasn't even the same file at all.
Is it possible that one of the FTP transfers is happening in text mode, rather than binary?
For me the problem was that .config transformations were applied after the manifest had been generated.
To anyone else who's still having trouble, five years later:
The first problem was configuring the MIME type, which on nginx (/etc/nginx/mime.types) should look like this:
application/x-ms-manifest application
See Click Once Server and Client Configuration.
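For reference, a slightly fuller sketch of the ClickOnce-related entries inside the types block of /etc/nginx/mime.types; the extension-to-type pairs below follow the MSDN guidance rather than the single line quoted above, so treat them as an assumption to verify against that article:
application/x-ms-application    application;
application/x-ms-manifest       manifest;
application/octet-stream        deploy;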
The weirder problem for me was that I was using git to handle the push to the server, i.e.
git remote add live ssh://user@mybox/path/to/publish
git commit -am "committing..."; git push live master
This works great for most things, but the push was probably being registered as a "change", which prevented the app from installing locally. Once I started using scp instead:
scp -r * user@mybox:/path/to/dir/
It worked without a hitch.
It is unfortunate that there is not a lot of helpful information out there about this.

How to show currently debugged line in PHPStorm?

I got remote debugging working in PHPStorm (version 6.0.2), and every time execution stops on a breakpoint it shows me the current variables, etc.
But it does not actually show me the file and line where the debugger has halted. I would expect the editor to jump to the file and highlight the line with the breakpoint that has just been reached. Can this be configured somewhere, or is it like this by design?
I've found the answer. PHPStorm needs to be able to match the code repository file locations with those in the web root. Therefore we need to map them in the server settings of PHPStorm.
I have been using a Quickstart virtual machine (https://drupal.org/project/quickstart) on a Windows PC. The repository is located on the PC but shared with the virtual machine.
So it was debugging, as I said, but it kept saying "Remote file path 'path/to/script/on/server.php' is not mapped to any file path in project". I figured the problem of focusing on the currently debugged file and line might be related to mapping.
In the server settings of PHPStorm I had mapped the Windows root of the repository directory to the Linux web path on the virtual server (in my case /home/quickstart/websites/mysite.local), and that was apparently what confused the debugger. When I later used the Linux mount path instead, i.e. the path to the shared location that actually lives on the PC (in my case /mnt/vbox-shared/mysite.local), the file and line started to come into focus!
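In other words, the mapping that works looks roughly like this (the Windows path is a hypothetical example; the server-side path is the mount path quoted above):
Local path (project root on the PC):  C:\path\to\shared\mysite.local
Mapped path on the server:            /mnt/vbox-shared/mysite.local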
If the debugger successfully stops at a breakpoint, the IDE should focus that file and line -- it works like that here (unless I have completely misunderstood you).
In any case: Run | Show Execution Point (Alt+F10) is always available (it is also on the debug toolbar).

Logging into TFS on a Mac

I got Team Explorer Everywhere so we can use TFS on the Mac Mini we got to test iPhone apps. Since we're using Xcode for PhoneGap, we need to use the command-line program, and it is giving me a lot of grief.
What I've done so far (listing it out for anyone who stumbles on this so they can use it):
-Downloaded the trial (free)
-Set the path using PATH=$PATH\:/FOLDERLOCATION
-Accepted the EULA and got a trial product key for the command-line program (tf eula / tf productkey -trial)
-Set up workspace:
tf workspace -new WORKSPACENAME -server:http://SERVERNAME:PORT/FILEPATH -comment:"WORKSPACENAME" && prompted for username -> domain -> password
-Trying to set up the folder path (fixed):
tf workfold -map SERVERFOLDERPATH LOCALFOLDERPATH -collection:http://SERVERNAME:PORT/FILEPATH -workspace:WORKSPACENAME && prompted for username -> domain -> password
-Make sure I can check out/check in (On hold):...
The error I'm getting right now is "An argument error occurred: First free argument must be a server path." This is what I've been following ever since I got the path set, but I think the versions are different because mine doesn't seem to be set up the same way. Any help at all would be appreciated, and I'll keep the post updated as I figure parts out, because there doesn't seem to be much that I can find online about TFS on Macs.
Update: As usual, I'm an idiot. You have to put the options at the end of the command and have the server folder path as the first thing after -map (see the sketch below). Now I just need to figure out how to use the damn thing. I'll post any other questions I have and try to get all the correct commands up, for the selfish reason of having them somewhere in case I forget them later.
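A sketch of the command in that shape (all names are placeholders; verify the exact option spelling against your version of Team Explorer Everywhere):
tf workfold -map '$/PROJECT/SERVERFOLDERPATH' LOCALFOLDERPATH -collection:http://SERVERNAME:PORT/FILEPATH -workspace:WORKSPACENAME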
Update 2: The mapping hasn't worked out as well as I'd hoped; it seems a combination of my unfamiliarity with Unix/Mac file systems and some missing settings is keeping me from using 'tf get' to load all of the test data I was trying to get. I'm planning on trying again after I find out where my boss wants the data saved, and after I look into something that will save the workspace so it won't say that it can't find the mapped path every time...
It looks like you're setting up your workspace and some working folder mappings just fine, after the edit. If you're having problems doing a tf get after this, then there are some common problems that might be occurring. TFS workspaces can be a little bit opaque and having a better understanding of them can sometimes help you understand where the problem is:
Team Foundation Server requires a workspace to be configured before you can get files out of source control, edit them or check them back in. A workspace basically simply contains working folder mappings that map your local path(s) to server path(s).
Workspaces are stored on the server and are uniquely identified by your computer's hostname, your username and the workspace's name. A cache of this information for the local host is saved on the client. This implies:
If you remove a workspace on the server, your workstation will be unable to connect.
If you remove the cache, your local computer will not be able to identify the workspace based on working folder mappings until the cache is rebuilt (which happens every time you connect to the server.)
If you change your username or local workstation's name, you cannot access those workspaces.
(Note that very early versions of the Teamprise command line client had certain issues on Mac OS that made identifying the local workstation name difficult. This is fixed, however, in Team Explorer Everywhere.)
Because you can have multiple workspaces for a single server on a single workstation, you can't always simply provide server paths to tf commands, since server paths are ambiguous. ($/ exists in every workspace, for example.) So the command line client resolves paths based on the current working directory and/or the arguments provided. Meaning that you can run tf get foo.txt if you're in a working folder, or you can run tf get /tmp/foo.txt if /tmp is mapped.
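For example (the paths here are hypothetical, assuming /Users/me/project is mapped in your workspace):
cd /Users/me/project                   # inside a mapped working folder
tf get foo.txt                         # resolved against this folder's mapping
tf get /Users/me/project/foo.txt       # an explicit, mapped local path also works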
One more point - the configuration data for Team Explorer Everywhere is shared between the TFS plug-in for Eclipse and the command line client. So if you're more comfortable using a GUI to set up your workspace(s), you can do that and then use the CLC as you see fit. You don't need to be a Java programmer to use Eclipse - simply download Eclipse and install the TFS plug-in for Eclipse into it, and select Window > Open Perspective > Team Foundation Server Exploring. After that, you'll have the full GUI Team Explorer experience and this perspective will be restored when you open Eclipse, so you won't even need to worry about the Java IDE bits if you don't want to.

Path too long error when building a Windows Azure service

I have been trying to publish my service to Windows Azure. The service consists of a single web role; however, I added remote login functionality, published it and built it a few times, and now all of a sudden it will not build. The error it gives is below:
"Error 56 The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters. C:\Program Files (x86)\MSBuild\Microsoft\Cloud Service\1.0\Visual Studio 10.0\Microsoft.CloudService.targets 202 5 FileSystemCreator"
I have gone through all the forums. I have used the CSPack command line for packaging the service, which is fine, but I'm having a really hard time configuring the certificate for Remote Desktop connections, and I would like to take advantage of that feature, since I am creating some websites in the OnStart event and would like to peek into IIS. Some Microsoft employees do agree that this is a bug, and they have promised a fix for this issue (refer to the post). I am using VS2010 and I do not know how to work around this bug.
Can anyone please help, or point me to a place where I can get help?
I ran into the same problem with a new solution.
Note that, contrary to what Eugenio Pace's response suggests, the error occurs only when deploying to Azure (and not when running the project in the Azure Compute Emulator).
Try adding the following line to the first property group of your Windows Azure Visual Studio Project file (*.ccproj):
<ServiceOutputDirectory>C:\Azure\</ServiceOutputDirectory>
The trailing slash (for whatever path you select) appears to be required. If the folder already exists, it will be deleted each time you create a package.
This setting seems to redirect the working folder for the package to a shorter base path, preventing the path too long error.
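A sketch of where the element lands in the .ccproj (the surrounding contents of your project file will differ; only the ServiceOutputDirectory line comes from the answer above):
<PropertyGroup>
  <!-- ...existing properties... -->
  <ServiceOutputDirectory>C:\Azure\</ServiceOutputDirectory>
</PropertyGroup>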
Credit goes to: http://govada.blogspot.com/2011/12/windows-azure-package-build-error.html
Perhaps the local folder the development fabric uses for temporary files is too long. See Windows Azure - Resolving "The Path is too long after being fully qualified" Error Message.
I was having this problem as well when deploying a Node.js project to Azure.
To fix it, I had to change my TEMP and TMP user environment variables to something shorter than their default values.
In my case they were pointing to the default %USERPROFILE%\AppData\Local\Temp; changing them to C:\Temp solved it.
Make sure you restart Windows afterwards.
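One way to set the per-user variables from a command prompt (the System Properties dialog works just as well; make sure C:\Temp actually exists first):
setx TEMP "C:\Temp"
setx TMP "C:\Temp"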
The better solution may be to create a symbolic link to your project folder. This doesn't require moving files or changing system variables. Open a command prompt as an administrator and run this:
mklink /D C:\Dev C:\Users\danzo\Source\Workspaces
Obviously you can change "C:\Dev" to whatever you want it to be, and you'll need to change the longer path above to the root directory of your solutions/projects folder.
The same problem happened to me when I tried packaging an Umbraco project for Azure (https://github.com/WindowsAzure-Accelerators/wa-accelerator-umbraco/wiki/Deployment). I found the solution was to copy and rename the long-named path and folder to something like "C:\someshortname".
(The solution was suggested by this: link)
I tried both of the approaches above:
-changing the TEMP and TMP environment variables
-setting the <ServiceOutputDirectory> path
and neither worked.
In my case, I had to move the whole project to a shorter path (C:\) and that worked.
I'm using W7 and VS12.
When you run a cloud service on the development fabric, the development fabric uses a temporary folder to store a number of files, including local storage locations, cached binaries, configuration, diagnostics information and cached compiled web site content.
By default this location is: C:\Users\<username>\AppData\Local\dftmp
Credit goes to Jim Nakashima of Microsoft:
https://blogs.msdn.microsoft.com/jnak/2010/01/14/windows-azure-resolving-the-path-is-too-long-after-being-fully-qualified-error-message/
In order to change the temporary folder, a user environment variable has to be created:
It is named _CSRUN_STATE_DIRECTORY
Give it the value of a short-named directory, for example:
c:\AzureTemp
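One way to create it from a command prompt (the variable name comes from the answer above; the directory must exist, and Visual Studio still needs the restart mentioned below):
mkdir c:\AzureTemp
setx _CSRUN_STATE_DIRECTORY c:\AzureTemp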
Don't forget to restart Visual Studio so that the environment variable is read again.
It fixed many compilation problems!

Resources