A client's Magento site had weird characters at the top of Magento Connect:
We tried installing a plugin and got the following error:
It turns out the problem was a bunch of (hidden) duplicate PHP files in lib/Mage/Connect. For example, alongside Remote.php there was also ._Remote.php. This forum post was how we found out the details.
(Deleting the duplicate files corrected the problem.)
I'm wondering -- has anyone else experienced this duplicate PHP file problem in Magento before? Any idea what the cause is?
These files are most likely metadata files for OS X's HFS+ file system. See this entire thread on the Apple Stack Exchange for some good starting points if you're interested in the details.
Oversimplifying things: when you create a tar archive on OS X, these files are included along with the "real" files. This allows Macintosh-specific metadata to survive the trip into a file format that wasn't created specifically for the Mac. If you untar the archive on a Mac, the metadata is preserved. If you untar it on a non-Mac, the ._ files show up as regular files in case the metadata is needed.
My guess is that at some point someone tarred up those files on their Mac to move them to the production server, which brought the ._ files along for the ride. You can avoid this in the future by running
export COPYFILE_DISABLE=true
from the terminal prior to copying the files. Details on this here.
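As a minimal sketch (the paths assume the Magento document root; double-check the matches before deleting anything):

# Tar without the AppleDouble (._*) files on OS X:
export COPYFILE_DISABLE=true
tar -czf lib.tar.gz lib/

# If the ._ files already reached the server, list them first, then remove:
find lib/Mage/Connect -name '._*' -type f
find lib/Mage/Connect -name '._*' -type f -delete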
(It's pretty bizarre that PHP would attempt to include those files instead of the correct ones. Did you debug this far enough to know why, or what Connect thought it was doing?)
Related
I'm trying to figure out the root cause of a strange TFS error we are seeing in our current instance. It wasn't noticed until after a server move, but I'm not sure if they're directly related, because the error seems to be showing up for check-ins about a week prior to the move, as well as all those following it.
We first noticed the problem when I tried to get latest, and got several errors indicating:
"The downloaded file is corrupt. Please get the file again."
Upon looking into the error, we noticed that, starting with a single check-in, every code update has resulted in files being replaced with the contents of other files (ranging from project files to binary executables, presumably assembly DLLs) rather than the expected content, which is still present on our local development machines.
I don't have admin access to the servers myself, but am looking for ideas on possible causes and/or fixes for our team to investigate.
After weeks of searching, I finally found another mention of this sort of thing happening, along with a solution that appears to have worked.
Clear the Application Tier cache.
MSDN Archived Forums: TFS swapping contents of files
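For anyone else hitting this, a rough sketch of clearing that cache on the application-tier machine. The cache location varies by TFS version (it's configured in the application tier's web.config), so the path below is only an assumption based on TFS 2010-era defaults:

rem Run from an elevated prompt on the TFS application-tier server.
iisreset /stop
rem Assumed default cache location; check web.config for the real path.
rd /s /q "C:\Program Files\Microsoft Team Foundation Server 10.0\Application Tier\Web Services\_tfs_data"
iisreset /start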
The National Speech Corpus is a Natural Language Processing corpus of Singaporeans speaking English, which can be found here: https://www.imda.gov.sg/programme-listing/digital-services-lab/national-speech-corpus.
When you sign up for the free corpus, you are directed to a Dropbox folder. The corpus is 1 TB and (as of this writing) has four parts. I only wanted to download PART1, but even this has 1446 zip files that are each quite large. My question is: how do I programmatically download many large files from Dropbox onto a Linux (Ubuntu 16.04) VM using only the command line?
The directory tree for the relevant part looks like:
root
|- LEXICON
|- PART1
   |- DATA
      |- CHANNEL0
         |- WAVE
            |- SPEAKER0001.zip
            |- SPEAKER0002.zip
            ...
            |- SPEAKER1446.zip
I looked into a few different approaches:
1. Downloading the WAVE parent directory using a shared link via the wget command, as described in this question (see the sketch after this list). However, this didn't work, as I received this error:
Reusing existing connection to www.dropbox.com:443
HTTP request sent, awaiting response... 400 Bad Request
2021-01-06 23:09:06 ERROR 400: Bad Request.
I assumed this was because the WAVE directory was too large for Dropbox to zip.
2. Based on this post, it was suggested that I could download the HTML of the WAVE parent directory and find the direct links to the individual zip files, but those links were not in the HTML file.
3. Based on the same post as in (2), I could also try to create shared links for each zip file using the Dropbox API, though this seemed too cumbersome.
4. Downloading the Linux Dropbox client and syncing the relevant files, as outlined in these installation instructions.
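For reference, the shared-link attempt from option 1 looked something like this (the link itself is a placeholder):

# ?dl=1 asks Dropbox to serve the whole folder as a single zip; this is
# the request that came back with 400 Bad Request, presumably because
# the WAVE folder is too large for Dropbox to zip on the fly.
wget "https://www.dropbox.com/sh/<share_id>/<key>?dl=1" -O WAVE.zip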
In the end, the 4th option did work for me, but I wanted to post this investigation for anyone who needs to download this dataset in the future. Also, I wanted to see if anyone else had better approaches.
As I described, the approach that worked for me was to use Dropbox's Linux client to sync the files onto my Linux VM. You can follow these instructions to download the Linux client; they worked for me on my Ubuntu 16.04 VM.
One issue I encountered with the sync client was how to selectively exclude directories. I only had 630 GB on my VM and the entire National Speech Corpus is 1 TB, so I needed to exclude files before the Dropbox sync filled up my disk.
You can selectively exclude files using the dropbox.py Python script at the bottom of the installation page. A link to the script is here. Calling the script from my home directory (where the Dropbox sync folder is automatically created) worked using the command:
python dropbox.py exclude add ~/Dropbox/<path_to_excluded_dir>
You may want to stop and start the Dropbox client, which can be done with:
python dropbox.py start
python dropbox.py stop
Finally, see the script's help for more information:
python dropbox.py --help
With this approach, I was able to easily download the desired files without overwhelming my VM.
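For completeness, the full sequence I'd expect to run looks roughly like this (the excluded paths are illustrative; the exclude command needs the client running, so add exclusions as soon as the sync starts):

python dropbox.py start
python dropbox.py exclude add ~/Dropbox/PART2 ~/Dropbox/PART3
python dropbox.py exclude list
python dropbox.py status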
I asked this earlier, but the question got downvoted and deleted for some reason. The original is here:
https://stackoverflow.com/questions/47749448/filemaker-14-runtime-high-sierra-download-versus-copy-from-server
I am posting it again since I now have found the solution that is highly relevant for others having the same issue.
Original question: FileMaker 14 Runtime / High Sierra / Download versus copy from Server
Our solution consists of a couple of FileMaker files built with the runtime option (FileMaker 14).
When packed as a ZIP archive or a DMG package, the result depends on how the package is downloaded.
Option 1: Distributed as a download link. When unpacking, the application does not work as expected; FileMaker asks for missing files in the solution.
Option 2: The same file put on an FTP area and mounted as a remote disk, then dragged onto the computer. After unpacking, all works well.
Does anyone have a clue what is going on here?
I have found what is causing this.
Apple has introduced something called "Gatekeeper Path Randomization" (also known as app translocation). Basically, the app file is moved by Gatekeeper to a random location, which prevents it from accessing the database files that sit relative to the app itself. This occurs when an app is downloaded from the internet and is not signed with any certificate. To solve this, you need to sign the DMG archive with a proper certificate. I used DropDMG to accomplish this; the translocation then does not occur when the user launches the app.
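For anyone who wants to do this without DropDMG, here is a sketch of what I believe the equivalent command-line signing looks like (the identity name is a placeholder for your own Developer ID certificate):

codesign --sign "Developer ID Application: Your Company (TEAMID)" MySolution.dmg
spctl --assess --type open --context context:primary-signature -v MySolution.dmg

The first command signs the disk image; the second asks Gatekeeper whether it would accept the signed image.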
References:
https://community.filemaker.com/thread/165583
https://mjtsai.com/blog/2016/06/16/gatekeeper-path-randomization/
I'm using Node.js to start Watchman on Windows 2016 with a number of file type filters on a specific directory. This directory is being used for staging. Uploaded files will be routed to other folders depending on the filename.
The problem I'm having is that Watchman is picking up files while they are still being uploaded. This causes the moving process to fail because the file is locked. I'm thinking about using this package (#ronomon/opened) to check the file status before marking it as a candidate for moving. Is there a better way to do it?
Thanks,
Paul
Please take a look at this issue that sounds almost identical to your question; it has some other alternatives and details beyond what I've put below: https://github.com/facebook/watchman/issues/562#issuecomment-355450096
Summarizing that issue here: you need to allow the filesystem to settle. There is a settle option you can set in your .watchmanconfig to control this:
{"settle": 60000}
You'd place that file in the upload directory (and make sure that you don't mistake it for an uploaded file and move it out) and then re-create your watch.
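Re-creating the watch is just a delete and re-add; a minimal sketch, assuming C:\staging is your upload directory:

watchman watch-del C:\staging
watchman watch C:\staging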
I am in an interesting situation where I maintain the code for a program that is used and distributed primarily by our sister company. We are ready to distribute the program to all of the 3rd-party users, and since it is technically our sister company's program, we want to host it on their website. (In the interest of anonymity, I'll use 'program' everywhere instead of the actual application name, and 'www.SisterCompany.com' instead of their actual URL.)
So I got everything ready to go: set up the Publish settings to check for updates at program start, set the minimum required version, and set the Installation Folder URL and Update Location to "http://www.SisterCompany.com/apps/program/", with the actual Publishing Folder Location as "C:\LocalProjects\Program\Publish\". Everything else is pretty standard.
After publishing, I confirmed that everything installs and works correctly when running directly from the publish location on my C: drive. So I put everything on our FTP server, and the guy at our sister company pulled it down and placed everything in the '/apps/program/' directory on their webserver.
This is where it goes bad. When I try to install it from their site, I get the "File, Program.exe.config, has a different computed hash than specified in manifest" error. I tested it a bit, and I even get that error trying to install from any network location on our network other than my local C: drive.
After doing the initial publish in Visual Studio, I changed no files (changed files being the usual cause I found when searching on this error).
What could be causing this? Is it because I set the Installation Folder URL to a location that it isn't initially published to?
Let me know if any additional info is needed.
Thanks.
After bashing my head against this all weekend, I have finally found the answer. After unsigning the project and removing the hash on the offending file (an XML file), I got the program to install, but it was giving me 'Windows Side by Side' errors. I drilled down into the app cache where the file was, and instead of a config XML file, it was one of the HTML files from the website the ClickOnce installer was hosted on. It turned out the web server didn't like serving up .xml (or, as it also turned out, .mdb) files.
This MSDN article ended up giving me the final solution:
I had to make sure that the 'Use ".deploy" file extension' was selected so that the web server wouldn't mangle files with extensions it didn't like.
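With that option on, everything in the publish output except the manifests gets the extra extension, so the web server only ever has to serve .deploy, .application, and .manifest files. For example (illustrative names):

Program.exe         ->  Program.exe.deploy
Program.exe.config  ->  Program.exe.config.deploy
Data.mdb            ->  Data.mdb.deploy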
I couldn't figure out why that one file's hash would be different. Turns out it wasn't even the same file at all.
Is it possible that one of the FTP transfers is happening in text mode, rather than binary?
For me, the problem was that .config transformations were done after generating the manifest.
To anyone else who's still having trouble, five years later:
The first problem was configuring the MIME type, which on nginx (/etc/nginx/mime.types) should look like this:
application/x-ms-manifest    application;
See Click Once Server and Client Configuration.
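For a fuller picture, this is the mime.types fragment I'd expect based on the MIME types Microsoft documents for ClickOnce (treat the exact mappings as an assumption and verify against that article):

application/x-ms-application    application;
application/x-ms-manifest       manifest;
application/octet-stream        deploy;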
The weirder problem to me was that I was using git to handle the push to the server, i.e.
git remote add live ssh://user@mybox/path/to/publish
git commit -am "committing..."; git push live master
That works great for most things, but the push was probably being registered as a "change", which prevented the app from installing locally. Once I started using scp instead:
scp -r * user@mybox:/path/to/dir/
It worked without a hitch.
It is unfortunate that there is not a lot of helpful information out there about this.