Convert Unicode decomposition when transferring files to web server - macos

I am doing website development on OS X, and fairly often I find myself in situations where I move some part of a live website (running Linux/LAMP) to a development server running on my own machine. One such instance involves downloading images (user-generated content, e.g. via FTP), processing them in one way or another, and then putting them back on the production site.
The image files involved, being created on a Linux machine, appear to have their filenames encoded in UTF-8 using NFC normalization. OS X's HFS+ file system, on the other hand, does not allow NFC-normalized filenames and converts them to NFD. However, once I am done and want to upload the files, their names will now be in NFD, since Linux supports both forms. As a result, the newly uploaded (and in some cases replaced) files will not be accessible at the expected URL.
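To see the two forms side by side, you can round-trip a sample string through iconv. This is just an illustration; UTF-8-MAC is the NFD-style encoding understood by the libiconv that ships with OS X, and may be absent from glibc's iconv on Linux:
printf 'é' | xxd                                   # NFC: c3 a9 (one precomposed code point)
printf 'é' | iconv -f UTF-8 -t UTF-8-MAC | xxd     # NFD: 65 cc 81 ('e' plus combining accent)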
I'm looking for a way to change the Unicode normalization of the filenames during (preferably) or after transfer (convmv looks like a good option, but since I don't have sufficient permissions on this server, it's not possible in this particular case), as I'm guessing it's impossible to do beforehand. I've tried FTP upload using Transmit and rsync (using a deploy script I normally use) to no avail. The --iconv option in rsync seemed ideal, but unfortunately my server, running rsync 2.6.9, did not recognize it.
I'm guessing quite a few people are having similar issues, so I'll be happy to hear any solution or workaround!
UPDATE: In this case I ended up rsyncing the files to a virtual machine running Ubuntu, running convmv on them there, and then rsyncing them again to my staging server. While this works fairly well, it is a bit time-consuming. Perhaps it would be possible to mount an ext file system on OS X and just store the files there instead, using their original NFC-normalized file names?
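For reference, that round-trip looks roughly like this (hostnames and paths are placeholders; convmv's --nfc flag converts the names to NFC, and --notest is needed to actually rename rather than dry-run). Note the last step runs on the VM, since rsync cannot copy remote-to-remote:
rsync -a /Users/username/images/ user@ubuntu-vm:images/
ssh user@ubuntu-vm 'convmv -f utf-8 -t utf-8 --nfc --notest -r ~/images/'
ssh user@ubuntu-vm 'rsync -a ~/images/ user@staging:/var/www/images/'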
Also, to avoid this problem altogether on future WordPress installs (which was my use case), you could add a simple add_filter('sanitize_file_name', 'remove_accents'); so accented characters are stripped before any files are uploaded, and you should be fine.

It seems that rsync --iconv is the best solution, as you can transfer the files and transcode the names all in one step. You just need to convince your host to upgrade their rsync. Given that the --iconv feature was introduced in rsync 3.0.0, which was released in 2008, it's a bit odd that your host is still running rsync 2.6.9.
If you can't convince your host to install an up-to-date rsync, you could compile your own rsync, upload it somewhere like ~/bin on the server, and make sure it is found ahead of the system-installed rsync. Then you should be able to use the --iconv option. This works as long as you are using rsync over SSH (the default), not the rsync daemon, because rsync over SSH works by SSHing to the remote machine and running rsync --server there with the same options that you passed to your local rsync.
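A rough sketch of that approach, with placeholder version numbers and paths; alternatively, rsync's --rsync-path option lets you point at the private binary explicitly instead of editing PATH:
# on the server: build a private rsync into ~/bin
wget https://rsync.samba.org/ftp/rsync/src/rsync-3.2.7.tar.gz
tar xzf rsync-3.2.7.tar.gz && cd rsync-3.2.7
./configure --prefix=$HOME && make && make install    # installs ~/bin/rsync

# on the Mac: run the upload against the private remote binary
rsync --rsync-path='~/bin/rsync' --iconv=UTF-8-MAC,UTF-8 -av /local/path/ username@server:/remote/path/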
Or you could find a host that has up-to-date tools and Perl (which convmv requires) installed.

Currently I'm using rsync --iconv like this:
Given a Linux server and an OS X machine:
Copying files from server to machine
You should execute this command from the server (it won't work from OS X):
rsync --iconv=UTF-8,UTF-8-MAC /home/username/path/on/server/ 'username@your.ip.address.here:/Users/username/path/on/machine/'
Copying files from machine to server
You should execute this command from the machine:
rsync --iconv=UTF-8-MAC,UTF-8 /Users/username/path/on/machine/ 'username@server.ip.address.here:/home/username/path/on/server/'

Related

How can I zip and transfer files using PuTTY in Windows?

I have a problem transferring files from one server to another. I tried using PuTTY's PSCP; it worked the first time from local to server. What I'm trying to do is zip all the files and then transfer them to another server. What commands should I use to achieve this?
pscp -P 22 test.zip root@domain.com:/root
This line works when transferring from local to a remote server; however, I want to compress files on one server and transfer them to another remote server, or at least remote to local and then local to remote, whatever method is possible. The files are almost 50 GB in total, so I am searching for a much faster way to achieve this.
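One hedged approach, assuming you have SSH access to both servers (hostnames and paths are placeholders): create the archive on the source server and copy it straight to the destination, so the 50 GB never passes through your local machine:
ssh root@source-server
zip -r files.zip /path/to/files        # or: tar czf files.tar.gz /path/to/files
scp -P 22 files.zip root@destination-server:/root/
If the two servers can't reach each other directly, scp -3 root@source-server:/path/files.zip root@destination-server:/root/ relays the copy through your local machine instead.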

Elixir Phoenix and Symlinks on Windows SMB Drive

So I have an interesting issue, and I just can't figure out why I'm getting it or what to do about it.
So basically I store all my development projects on my Synology NAS for local access between my various devices. There has never been a problem with this until I started playing around with Elixir and, more importantly, Phoenix. The issue occurs when running mix phx.server; I get the following:
[warn] Phoenix is unable to create symlinks. Phoenix' code reloader will run considerably faster if symlinks are allowed. On Windows, the lack of symlinks may even cause empty assets to be served. Luckily, you can address this issue by starting your Windows terminal at least once with "Run as Administrator" and then running your Phoenix application.
[info] Running DiscussWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:4000 (http)
[error] Could not start node watcher because script "z:/elHP/assets/node_modules/webpack/bin/webpack.js" does not exist. Your Phoenix application is still running, however assets won't be compiled. You may fix this by running "cd assets && npm install".
[info] Access DiscussWeb.Endpoint at http://localhost:4000
So I tried as it stated and ran it in CMD as admin, but to no avail. After some further inspection I tried to create the symlinks manually, but every time I tried I got an Access is denied. error (yes, this is an elevated CMD).
c:\> mklink "z:\elHP\deps\phoenix" "z:\elHP\assets\node_modules\phoenix"
Access is denied.
So I believe it has something to do with the symlinks being created on the NAS, because if I move the project and host it locally it works. Now I know what you're thinking: yes, I could just store the projects locally on my PC, but I like to have them available between PCs without having to transfer files or rely on git etc. (i.e. offline access), not to mention that the NAS has a full backup routine.
What I have tried:
Setting guest read write access on the SMB share
Adding to /etc/samba/smb.conf on my Synology NAS:
[global]
unix extensions = no
[share]
follow symlinks = yes
wide links = yes
Extra logging on SMB to see what is happening when I try it (nothing extra logged)
Creating a symbolic link from my Mac (works)
Setting all of fsutil behavior query SymlinkEvaluation to enabled (see the commands after this list)
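For reference, this is what that fsutil step looks like from an elevated CMD (L2L/L2R/R2L/R2R are the local/remote symlink evaluation directions):
fsutil behavior query SymlinkEvaluation
fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2L:1 R2R:1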
At the moment I am stuck and unsure of what to try next, or even whether it is possible. I'm considering just using NFS instead, but will I face the same issues as with SMB?
P.S. I faced a similar issue with Python venvs a while ago, just a straight-up Access is denied. error, and I just gave up and moved only the venv locally, keeping the bulk of the code on the NAS. (This actually ended up being the best solution for that, because the environments of each device on my network clashed, etc.)
Any ideas are greatly appreciated.

Getting vagrant synced folders to work on windows with openstack

I'm using vagrant-openstack-plugin to handle OpenStack integration, and it uses a built-in rsync synced folder provider; I can't get it to use the plugin I specify (winrm), even by hacking the code a little.
For the record, I can get rsync to work (with WinSSHD, cwRsync, and a junction point from C:\cygdrive\C to C:\), but it's really gnarly.
I'm open to non-SSH/non-rsync synced folder options for Windows (NFS? SMB only works if the host is Windows), but I need the OpenStack plugin to respect my choice of synced folder plugin first.
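For what it's worth, the junction-point part of that workaround is just two commands from an elevated CMD; the C:\cygdrive\C path is an assumption based on cwRsync's POSIX-style view of the drive:
mkdir C:\cygdrive
mklink /J C:\cygdrive\C C:\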

How to point a perforce workspace to a project that is synchronized across two machines?

I have an Eclipse synchronized project where I do the work on my Windows machine and then synchronize and compile it on the Linux build server. However, the Windows workspace is connected to Perforce and the Linux one is not. The problem is that when Eclipse synchronizes the two, the permissions get messed up on the Linux side, so that I cannot execute certain shell scripts that usually run during the build. The workaround I have would be to somehow chmod all the *.sh files before executing a build (see the sketch below), but I would much rather have Perforce know about both places (that way I could also commit from either the Linux side or the Windows side). For performance reasons, I couldn't run Eclipse on the remote build server, so this was the only solution I found. Also, when I tried setting up a second workspace for the Linux side, it gave me errors saying "could not clobber X". I think the main problem is that I'm dealing with some sort of permissions issue here.
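That chmod workaround amounts to a single find invocation on the Linux side before each build (the project path is a placeholder):
find /path/to/project -name '*.sh' -exec chmod +x {} +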
I may have found the answer. According to the Perforce documentation:
By default, you can only use a workspace on the machine that is specified by the Host: field. If you want to use the same client workspace on multiple machines with different platforms, delete the Host: entry and set the AltRoots: field in the client specification. You can specify a maximum of two alternate client workspace roots. The locations must be visible from all machines that will be using them, for example through NFS or Samba mounts.
Perforce compares the current working directory against the main Root: first, and then against the two AltRoots: if specified. The first root to match the current working directory is used. If no roots match, the main root is used.
Note
If you are using a Windows directory in any of your client roots, specify the Windows directory as your main client Root: and specify your other workspace root directories in the AltRoots: field.
In the following example, if user bruno's current working directory is located under /usr/bruno, Perforce uses the UNIX path as his client workspace root, rather than c:\bruno_ws. This approach allows bruno to use the same client workspace specification for both UNIX and Windows development.
Client: bruno_ws
Owner: bruno
Description:
Created by bruno.
Root: c:\bruno_ws
AltRoots:
/usr/bruno/
To find out which workspace root is in effect, issue the p4 info command and check the Client root: field.
If you edit text files in the same workspace from different platforms, ensure that the editors and settings you use preserve the line endings. For details about line-endings in cross-platform settings, refer to the Perforce System Administrator's Guide.
You could have two workspaces, one for each machine. When you've done the work on the Windows machine, you could shelve the changes, and then unshelve them on the Linux machine and do the build there.
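In p4 terms that round-trip might look like the following sketch, where 12345 stands in for your real pending changelist number and each machine uses its own workspace:
p4 shelve -c 12345      # on Windows: upload the pending changes to the shelf
p4 unshelve -s 12345    # on Linux: pull the shelved changes into this workspace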
I did the following to set up alternate roots in Perforce:
Created a workspace in Windows
Set the current client (workspace) with "p4 set P4CLIENT=workspace_name" (I used Windows+Cygwin to do this)
Removed the "Host:" field from this client (workspace) with "p4 client -o | grep -v Host: | p4 client -i" (again from Windows+Cygwin)
On Linux, added the alternate root in P4V; don't use ~, use the absolute path (I didn't test environment variables)
Switched to the workspace in P4V
Got the latest revision in P4V, and checked the "Force" option
References:
http://answers.perforce.com/articles/KB/3412
Override host to null in perforce client creation
https://www.perforce.com/perforce/r12.1/manuals/cmdref/client.html
sqenixs's answer here

Remove execute permission on file downloaded on a Mac

We have a web app running on a Windows server, which allows a user to do some processing and download the results. The result is a set of files which are dynamically created on the server and zipped into a single file for facilitating the download process.
Everything works fine on Windows, but when users download the file from the web app on a Mac, the contents of the zip file have the execute (chmod +x) permission set (I presume the same happens on *NIX and Linux machines). This can, of course, be removed by running chmod -x, but is there a way to remove the execute permission from the files so that, when downloaded on a Mac, they don't have it set by default?
I believe it's not possible: zip files created on Windows don't record Unix permissions, so on a Mac the extractor has to default to "most permissive" (otherwise applications inside the zip might not be marked as executable when they need to be).
Tar archives, for instance, do record permissions, but those would be a bit more difficult to create on a Windows server.
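If changing the archive format isn't an option, one client-side cleanup (sketched here with a placeholder directory name) is to strip the execute bit from regular files after extraction, leaving directories traversable:
find extracted_files/ -type f -exec chmod -x {} +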
