How to transfer files between computers through the clipboard

I have often been in a situation where I have to work on a VDI desktop/RDP session from my laptop, and only copying to the clipboard is available.
So if I want to copy a zip file, I can't.
I think one option could be to convert the file to a text representation and copy that to the clipboard, but how do I paste it into my laptop's clipboard and retrieve the file?

If you use Windows, you can use the following procedure.
To prepare your file for the transfer, run:
certutil.exe -encode file.zip filezip.txt
Next, open filezip.txt with Notepad, select all, and copy it to the clipboard.
Open Notepad on the server with the same file name:
notepad.exe filezip.txt
and paste the content from the clipboard. Save the file.
The last step is to convert the file back to binary using certutil again:
certutil.exe -decode filezip.txt file.zip
Now you can extract your files from file.zip on the server.
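If the archive is large, it is worth checking that nothing got mangled in the paste. One simple check, assuming certutil is available on both machines, is to compare hashes of the original and the reconstructed file:
certutil.exe -hashfile file.zip SHA256
Run this on both sides; if the two SHA256 values match, the transfer was clean.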

As far as I know, if you are using Microsoft's Remote Desktop, you can copy and paste files between the local and remote machine.
It is very difficult to implement copy and paste between two machines yourself. If you can use a web browser on the remote machine, you can try converting the binary to base64 content, then converting it back to binary with an online base64 converter.
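If the receiving end is a Mac or Linux machine rather than Windows, the base64 round trip can also be done locally instead of through an online converter. A minimal sketch, assuming a coreutils-style base64 utility:
base64 file.zip > filezip.txt     # encode to text that can go through the clipboard
base64 -d filezip.txt > file.zip  # decode the pasted text back to binary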

Related

Windows Batch Compressed file is Invalid

I have a batch file that compresses files into a zip folder so I can email the file. The file runs with no issues. However, when I try to open the zip folder, I get the following error:
'Folder is invalid'
and it won't open. Below is the step that compresses the file on Windows Server 2012:
COMPRESS "%DIR_IP_INTERFACES%\SP_Backups\Outbound_GDC_Req.txt" "%DIR_IP_INTERFACES%\SP_Backups\Outbound_GDC_Req_%DATE:/=%_%TIME::=%.zip"
I need it to zip during the batch file, but I will not be the one unzipping. So, I was looking for a standard way to allow anybody to open/unzip the folder when they receive the email (that doesn't require multiple downloads/changes on each computer that might open these files).
Any ideas?
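A hedged sketch of one standard alternative: if PowerShell 5.0 or later is available on the server, Compress-Archive produces a plain .zip that recipients can open from Explorer with nothing extra installed, and it can be called from the same batch file (reusing the variables from the question, with a simplified output name):
powershell -NoProfile -Command "Compress-Archive -Path '%DIR_IP_INTERFACES%\SP_Backups\Outbound_GDC_Req.txt' -DestinationPath '%DIR_IP_INTERFACES%\SP_Backups\Outbound_GDC_Req.zip' -Force"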

Is it possible to get a files owner url metadata in the macOS terminal?

I can access the metadata property "owner url" through Photoshop, but am hoping that there's a way to access it from the command line without having to open the file.
Does anyone know of a way to do this?
mdls doesn't list this particular metadata field.
There is no built-in command line tool to achieve this.
However, you can utilize exiftool, which is a platform-independent Perl library plus a command-line application for reading, writing and editing meta information in a wide variety of files.
Installation:
The guidelines for installing it on macOS can be found on the ExifTool home page. In summary:
Download the ExifTool OS X Package from the ExifTool home page. (The file you download should be named ExifTool-11.17.dmg.)
Install it as a normal OS X package. (Open the disk image, double-click on the install package, and follow the instructions.)
You can now run exiftool by typing exiftool in a Terminal window.
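If you already use Homebrew, installing it through the package manager may be simpler (this assumes Homebrew is set up and that the exiftool formula is current):
$ brew install exiftool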
Processing a single file:
Reading the "owner url" via the command line:
Run the following command in a Terminal window:
$ exiftool -b -xmp:WebStatement ~/Desktop/path/to/image.psd
Note: the ~/Desktop/path/to/image.psd part in the command above should be replaced with a real image filepath.
This command will log the URL to the console only if the image metadata contains one. For instance:
https://www.example.com
Writing the "owner url" via the command line:
You can also write the "owner url" to a file by running the following command:
$ exiftool -xmp:WebStatement="https://www.foobar.com" ~/Desktop/path/to/image.psd
Note: As mentioned previously, the ~/Desktop/path/to/image.psd part in the command above should be replaced with a real image filepath, and the https://www.foobar.com part should be replaced with the actual URL you want to apply.
Processing multiple files:
Reading the "owner url" for multiple files via the command line:
If you want to read the "owner url" for all image files within a given folder (including those in subfolders) and generate a JSON report, you can run the following command:
$ exiftool -j -r -xmp:WebStatement ~/Desktop/path/to/folder/ -ext jpg -ext png -ext psd -ext tif > ~/Desktop/owner-urls.json
Breakdown of the command (above):
-j - Use JSON formatting for output.
-r - Recursively process subdirectories.
-xmp:WebStatement - Retrieve the WebStatement value, i.e. the "owner url".
~/Desktop/path/to/folder/ - The path to the folder containing images (this should be replaced with a real path to a folder).
-ext jpg -ext png -ext psd -ext tif - The file extension(s) to process.
> ~/Desktop/owner-urls.json - Save the JSON output to a file on the Desktop named owner-urls.json.
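For reference, the -j report is a JSON array with one object per matching file; its shape is roughly as follows (the path and URL here are made-up placeholders):
[{
  "SourceFile": "path/to/folder/image.psd",
  "WebStatement": "https://www.example.com"
}]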

Mac cannot copy because file is in use

I have an external hard drive in NTFS format. I used my Mac to copy a zip file onto it; the size is about 23 GB.
However, when I bring it back to the Mac, the transfer gets almost complete and then an alert appears saying "cannot copy because file is in use", and the copy process is cancelled.
Although I have already installed Paragon NTFS, I still get that message.
I also tried to copy using the Terminal with the following command: cp -r /Volumn/source /destination.
Please help me solve this. The data in the zip file is very important to me. Thanks

How does Windows know if a file was downloaded from the Internet?

If I open a file in Windows that was downloaded by Chrome or another browser, Windows pops up a warning that this file was downloaded from the internet. The same goes for documents you open in Microsoft Word.
But how does Windows know that this file originated from the Internet? I think it's the same kind of file as every other file on my hard drive. Does it have something to do with the file properties?
Harry Johnston got it!
It had nothing to do with the temporary folder or a media cache. It's an NTFS stream.
For further reading: MSDN File Streams
This blocking information is achieved with the following commands on the CLI (the More? on the second line is just cmd.exe's line-continuation prompt, not part of what you type):
(echo [ZoneTransfer]
More? echo ZoneId=3) > test.docx:Zone.Identifier
This creates an alternate data stream.
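To see this in action, you can read the stream back or clear it again; a quick sketch, assuming PowerShell 3.0 or later for the last two commands:
more < test.docx:Zone.Identifier
Get-Content test.docx -Stream Zone.Identifier
Unblock-File test.docx
The first two display the [ZoneTransfer] block written above; Unblock-File removes the Zone.Identifier stream, which is what the "Unblock" button in the file's Properties dialog does.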
When you download a file from the internet, it is first downloaded into a media cache instead of the temp folder. Only after that does it move to the actual location where you chose to save the file.
If you copy and paste a file, it only moves through the temp folder. Before opening any file, Windows checks its location, and if it is the media folder you get "File is downloading" or other related errors.

What does "download or create" mean in a tofrodos context?

I keep seeing this phrase "download or create" in tutorials right after tofrodos. What would be an example of how to download or create? I just get stuck in
^
^
^
mode.
apt-get -y install tofrodos
Download or create ZPX_ubuntu_12-04_auto_installer.sh
The author means you need to transfer the "ZPX_ubuntu_12-04_auto_installer.sh" shell script to the server, and if necessary, change to the directory you saved the script in using the cd command, before entering the commands below:
fromdos ZPX_ubuntu_12-04_auto_installer.sh
chmod +x ZPX_ubuntu_12-04_auto_installer.sh
./ZPX_ubuntu_12-04_auto_installer.sh
He seems to be referring to a particular script included in a .zip file posted on a web forum.
You may be able to download the .zip file to your computer, extract it, and then use the sftp or scp program to transfer just the shell script to the server. Alternatively, you could use wget or curl to download the .zip file to the server and then the unzip command to unzip it.
A graphical SFTP client like FileZilla may help for the former approach. The Firefox add-on cliget may help for the latter, especially because the file is hosted on a password-protected web forum.
fromdos is just a utility program to convert a text file from DOS/Windows format to Unix format by stripping out all the carriage return characters. Perhaps using this command is necessary because the author of the script used a Windows text editor or an ASCII-mode FTP transfer before zipping up the file. Of course, you need the file on the server if you are trying to run the command on the server.
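As a concrete sketch of the two approaches above (the URL, username, and host here are placeholders, since the actual forum attachment link isn't given):
# On the server: fetch and unpack the archive, then run the commands above
wget -O installer.zip "https://example.com/forum/attachment.zip"
unzip installer.zip
# Or, from your own machine: copy just the extracted script up to the server
scp ZPX_ubuntu_12-04_auto_installer.sh user@server:~/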
