Mac cannot copy because file is in use - macOS

I have an external hard drive formatted as NTFS. I used my Mac to copy a zip file of about 23 GB onto it.
However, when I try to copy it back to the Mac, the transfer gets almost to the end and then an alert appears: "cannot copy because file is in use", and the copy is cancelled.
I already have Paragon NTFS installed, but I still get that message.
I also tried copying in Terminal with the following command: cp -r /Volumes/source /destination.
Please help me solve this. The data in the zip file is very important to me. Thanks
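A hedged workaround sketch, not a confirmed fix: rsync can resume an interrupted transfer, which sometimes gets a large file off a flaky NTFS volume where Finder and cp give up partway through. The volume and destination names below are placeholders.
# Copy with progress; --partial keeps the partial file so a retry can resume
rsync -av --progress --partial /Volumes/ExternalDrive/archive.zip ~/Destination/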

Related

How to zip a folder in macOS without creating an extra directory inside the zip file

I got a .zip file from my friend. It was compressed under Windows and contains three subfolders. When I check its contents in my Mac terminal, it looks like this:
Now I unzip this file and then zip it again through the terminal, and when I check the contents of the new zip file it becomes like this:
I have googled how to zip on a Mac without creating a subfolder with the same name, but none of the results solves the problem. My question is: how do I zip on a Mac so that the zip file looks exactly the same as the initial one I got?
Thanks very much
New edit:
I may not have summarized my problem well. The initial folder contains three subfolders, all of them created and compressed in a Windows environment. When I unzip it on my macOS machine, the unzipped folder still looks fine, but when I compress it on the Mac and then view the .zip file with unzip -l xxxx.zip, it shows 6 entries, in which the three subfolders are also listed as files. As far as I know, this is because BSD-derived systems treat folders as files while Windows does not. What I'm currently doing is deleting the entries that represent folders with "zip -d", which I know is very clumsy. I would be more than happy to discuss this from an operating-system point of view with anybody who is interested. Thanks in advance.
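A hedged note on that zip -d workaround: Info-ZIP's zip (the one shipped with macOS) has a -D (--no-dir-entries) flag that skips creating separate entries for directories in the first place, which may remove the need for the manual cleanup. A minimal sketch, with the archive and folder names as placeholders:
# -r recurses into the folder; -D omits the explicit directory entries
zip -r -D archive.zip myFolder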
For me this command works fine:
zip -j zippedFolder.zip myFolder/*
To unzip I used
unzip zippedFolder.zip
and I got only the data from the folder.
Example: The folder I want to zip is on the desktop and it's called testFolder.
Open Terminal
cd /Users/yourUser/Desktop
zip -j myZip.zip testFolder/*
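Note that -j junks all paths, so any subfolders get flattened. If the goal is to keep the subfolder structure but avoid the extra top-level directory, a minimal sketch is to change into the folder first, so entries are stored relative to it:
cd /Users/yourUser/Desktop/testFolder
# Paths inside the archive are now relative to testFolder, with no wrapper directory
zip -r ../myZip.zip .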

Copy a large number of files from Mac to Windows Server and preserve modify and creation dates

I have an issue with moving a large number of files and folders from an old Mac OS X Server (10.9.4) to Windows Server 2016.
I have already tried robocopy, but I always get an error: "File creation error - The file or directory is not a reparse point."
The same thing happens when I use xcopy.
I also tried copying files from the Mac to Windows after mounting a share on the Mac, and using scp, but there are also errors and not all files are moved.
Does anyone know a way to copy the files and preserve their creation and modified dates?
rsync works fine now, but I needed to give full permissions over the network to prevent any issues.
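A sketch of the kind of rsync invocation that preserves modification times when copying to a mounted Windows share (the paths are placeholders, and creation dates are a separate matter on Windows):
# -a preserves permissions and modification times; -v prints each file as it copies
rsync -av /path/to/source/ /Volumes/WindowsShare/destination/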
I generally use tar.
tar, like rsync and lsyncd, preserves modification times, etc.
For tar you would (see the sketch after this list):
tar up all the files, just like a backup
gzip the tar ball
copy the gzipped tar ball to Windows
Extract the files using WinRAR. The file modification times will be preserved.
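A minimal sketch of those steps; the archive name and paths are placeholders, and the -z flag combines the tar and gzip steps into one:
# Create a gzip-compressed archive; tar records each file's modification time
tar -czf backup.tar.gz /path/to/files
# Copy the archive to the Windows machine, e.g. via a mounted share
cp backup.tar.gz /Volumes/WindowsShare/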

How do I diagnose input/output errors?

I am totally baffled. I used
sudo cp -a /my/home/dir /backup/home/dir
to back up my home directory and found a few files (about 20) that didn't copy due to input/output errors. These files look fine: some are .jpgs, some are .gifs, one is my VirtualBox VDI file. They work fine in the original home dir, but they JUST WON'T COPY. I tried copying them manually. I tried using Nautilus. I tried changing the permissions to 777 and made sure the ownership was non-root... still no dice. I get:
cp: reading `/my/home/dir/subfolder/abc_def.gif': Input/output error
I'm scratching my head, and while I could lose a gif or jpg here and there, I don't want to lose my VDI file. Do I need to add a --force to the cp command? Is there any way to find out more about why these particular files aren't copying? In the case of the .jpgs, they're all in one folder of images I took during a recent trip, shot with the same camera and the same CF card, and transferred the same way at the same time.
Totally baffled. Any help would be fantastic. Ideally, I'd like a way to force-copy these files. They seem fine and usable, and I trust them, so I've no clue why they're not getting copied.
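A hedged diagnostic sketch: input/output errors on read usually point at the disk itself, so checking the kernel log and SMART data is a sensible first step, and GNU ddrescue can often salvage most of a file that cp refuses to read. The tool and device names below are assumptions (smartmontools and gddrescue may need installing, and /dev/sda is a placeholder for your disk):
# Look for low-level read errors logged by the kernel
dmesg | grep -i -e error -e 'I/O'
# Query the drive's SMART self-assessment
sudo smartctl -a /dev/sda
# Copy as much of a damaged file as possible, recording unreadable spots in a map file
sudo ddrescue /my/home/dir/subfolder/abc_def.gif /backup/abc_def.gif rescue.map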

I can unzip on a remote machine but not on my computer

On a cluster I zipped a large directory (61 GB; 9.2 GB when zipped).
zip -r zzDirectory Directory
I then scp'd zzDirectory.zip to my personal computer.
scp -r name@host.com:/path/to/zzDirectory.zip path/in/my/computer/zzDirectory.zip
And finally I unzipped it. I tried to unzip from bash, but it failed:
warning [zzDirectory.zip]: 5544449626 extra bytes at beginning or within zipfile
(attempting to process anyway)
error [zzDirectory.zip]: start of central directory not found;
zipfile corrupt.
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
So I double-clicked the icon in the Finder and the system started to unzip zzDirectory.zip. However, some files are missing, and it looks like (I am not 100% sure yet) some newline characters (\n) are missing as well. unzip used to work fine on my computer before.
To investigate where the problem comes from, I unzipped zzDirectory.zip on the cluster, and everything seems to work fine (no missing files).
I repeated the transfer and unzipped again, but the problem persists. Note that transfers are made over the internet. My OS is Mac OS X Yosemite 10.10.2.
How can I solve this issue? I would prefer not to transfer the data unzipped because of bandwidth issues. Do you think I should try tar, or should I use specific options with the unzip command line?
On OS X you could try:
ditto -x -k the_over4gb.zip /path/to/dir/where/want/unzip
e.g.:
ditto -x -k zzDirectory.zip .
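If ditto doesn't help either, the errors above are typical of a zip larger than 4 GB being read by an unzip build without Zip64 support, and tar sidesteps the zip format entirely. A sketch, reusing the names from the question:
# On the cluster: create a gzip-compressed tar archive instead of a zip
tar -czf zzDirectory.tar.gz Directory
# On the Mac, after transferring it with scp as before:
tar -xzf zzDirectory.tar.gz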

How does Windows know if a file was downloaded from the Internet?

If I open a file in Windows that was downloaded by Chrome or another browser, Windows pops up a warning that the file was downloaded from the internet. The same happens for documents you open in Microsoft Word.
But how does Windows know that this file originated from the internet? I think it's the same as every other file on my hard drive. Does it have something to do with the file properties?
Harry Johnston got it!
It had nothing to do with the temporary folder or media cache. It's an NTFS stream.
For further reading: MSDN File Streams
This blocking information is written with the following commands on the CLI:
(echo [ZoneTransfer]
More? echo ZoneId=3) > test.docx:Zone.Identifier
This creates an alternate data stream.
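To confirm the stream exists, you can list and read it back with standard cmd.exe tools (test.docx is the same placeholder file as above):
rem List alternate data streams attached to the file
dir /r test.docx
rem Print the contents of the Zone.Identifier stream
more < test.docx:Zone.Identifier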
When you download a file from the internet, it is first downloaded to a media cache instead of the temp folder. Only after that is it moved to the location where you chose to save the file.
If you copy and paste a file, it is moved through the temp folder only. Before opening a file, Windows checks its location, and if it is the media cache folder you get errors such as "File is downloading" or others related to this.
