Terminal in Mac: Put files in FTP Server - macos

I put files on the FTP server using the macOS Terminal. I use the command
"PUT filename.txt"
While doing this, the text file, which is UTF-16, gets to the FTP server but does not retain the double-byte characters (e.g. Japanese characters). I believe this is because I did not specify a file format for the destination.
What command should be used to specify the destination "File Format", "Type", "Structure", etc.?

The default FTP transfer type is ASCII. Before you PUT your file, first change the type to binary.
Here is how it looks at my prompt:
ftp> binary
200 Type set to I

I used BBEdit to save the file as UTF-8 and then sent the file to the server. This worked fine.
BBEdit gave me the option to save the file as UTF-8 in Unix format. When I used Excel, Access, or any other software that offers only UTF-8 Windows format (not UTF-8 Unix) when saving, the text file had problems: the Unix side mangles non-English characters in files with the UTF-8 Windows encoding.
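That manual re-encoding step can also be scripted. Here is a minimal Python sketch (file names are hypothetical) that rewrites a UTF-16 file as UTF-8 with Unix line endings before upload:

```python
def to_utf8_unix(src_path: str, dst_path: str) -> None:
    """Re-encode a UTF-16 text file as UTF-8 with Unix (LF) line endings."""
    # Python's "utf-16" codec reads the BOM and picks the right byte order;
    # newline="" disables any implicit line-ending translation.
    with open(src_path, "r", encoding="utf-16", newline="") as src:
        text = src.read()
    # Normalize Windows CRLF line endings to Unix LF
    text = text.replace("\r\n", "\n")
    with open(dst_path, "w", encoding="utf-8", newline="") as dst:
        dst.write(text)
```

The resulting file can then be uploaded in binary mode so the server does not touch the bytes.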

Related

File line ending not changed to CR LF by Windows FTP server and client

I downloaded a file (created on Linux, with LF line endings) from Windows 10 to Windows 7 using ftp.
After downloading it on Windows 7 from Windows 10, the file's line endings had not changed to CRLF.
I downloaded in ASCII mode only. Why were the line endings not converted to CRLF?
The server is the IIS Windows FTP server, and the client is the Windows ftp client.
Also, please explain whether the line-ending conversion (LF to CRLF, or CRLF to LF) is done by the FTP server or the FTP client.
I checked the RFC as well, and found no clear definition.
In ASCII mode, the server converts the file from its native format to the canonical format specified by RFC 959 (section 3.1.1.1, ASCII TYPE). The canonical format is plain ASCII text with CRLF line endings.
The client then converts the file from the canonical format to its native format.
As the FTP canonical format matches that of Windows, it's quite probable that both the Windows server and client actually transfer the files without any modification.
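The two conversions can be sketched as plain byte transforms (a simplified model of the RFC 959 rule, not actual FTP code):

```python
# ASCII-mode model: the server converts native text to the canonical form
# (CRLF line endings); the client converts canonical text back to its
# native form.
def to_canonical(data: bytes) -> bytes:
    # Normalize any existing CRLF first so lone LFs become CRLF exactly once
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

def from_canonical(data: bytes, native_eol: bytes) -> bytes:
    return data.replace(b"\r\n", native_eol)

# A Linux file (LF) served to a Windows client (native CRLF):
linux_file = b"line1\nline2\n"
windows_copy = from_canonical(to_canonical(linux_file), b"\r\n")
```

For a Windows server and a Windows client, both the native and canonical forms are already CRLF, so both transforms are no-ops, which matches the behaviour observed in the question.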

In Windows 10 how do I rename a file to a filename that includes a character with an umlaut?

I'm on Win10 and I have a .bat file to rename a bunch of files. Some of the entries need to be renamed to a non-English name, e.g.
RENAME "MyFile1.txt" "Eisenhüttenstadt.txt"
However, when I run this, the 'ü' comes out as something else, and other characters with an umlaut are also replaced by different characters.
I've tried saving the .bat file in Notepad with Unicode and UTF-8 encoding but then Windows doesn't recognise the command when I try to run it.
I've read this and other similar issues but not found a solution, surely it's simple when you know how?
Any suggestions?
The console's default code page is 437 (USA) or 850 (Western Europe), while Notepad saves "ANSI" files using code page 1252 (Western European Latin), so the bytes for umlaut characters are misread. Change the console to 1252 with the Chcp command at the beginning of your batch file, like this:
Chcp 1252
Example: see the screenshot at http://www.pctipp.ch/tipps-tricks/kummerkasten/windows-7/artikel/windows-7-umlaute-in-batch-dateien-55616/
Sources: http://ss64.com/nt/chcp.html and http://www.pctipp.ch/tipps-tricks/kummerkasten/windows-7/artikel/windows-7-umlaute-in-batch-dateien-55616/ (the article targets Windows 7, but this applies to Windows 10 too)
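The mismatch is easy to reproduce outside cmd.exe. In this Python sketch, 'ü' saved as Notepad "ANSI" (Windows-1252) is decoded with the console's default Western European OEM code page 850:

```python
# 'ü' in Notepad's "ANSI" (Windows-1252) is the single byte 0xFC...
ansi_bytes = "Eisenhüttenstadt".encode("cp1252")
# ...but a console running code page 850 decodes 0xFC as '³'
as_seen_by_console = ansi_bytes.decode("cp850")
print(as_seen_by_console)  # Eisenh³ttenstadt
```

Switching the console to code page 1252 makes it decode the batch file's bytes with the same table Notepad used to write them, so the umlauts come out intact.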

Jenkins Windows Batch Command: How to pass non-Latin file path?

I ran into a problem today with a Windows batch command in Jenkins (v1.607 at the time of writing).
I'm trying to create a job that converts a DOCX to a PDF file with OfficeToPDF.
With Latin filenames everything is OK, but when I pass a Cyrillic filename as a parameter it simply says "Can not find file". I guess it's all about encoding.
The workaround is to use a short 8.3 filename for the input parameter, and a Latin transliteration for the output PDF file name.
But how can I pass a non-Latin file path correctly, so the tool can export to a Cyrillic file path?

CMD: '■m' is not recognized as an internal or external command

I am trying to get a batch file to work. Whenever I attempt to run a .bat, the command line returns the error '■m' is not recognized..., where "m" is the first letter of the file's contents. For example, a file containing:
md c:\testsource
md c:\testbackup
Returns
C:>"C:\Users\Michael\Dropbox\Documents\Research\Media\Method Guide\Program\test
.bat"
C:>■m
'■m' is not recognized as an internal or external command,
operable program or batch file.
Things I have tried:
Changing Path variables, rebooting, etc.
Changing file directory (i.e. run from C:)
Running example files from web (like above) to check for syntax errors.
Thanks
What text editor are you writing this in? It sounds like your text editor saves the file as UTF-16 encoded text, which cmd.exe can't handle. Saving as UTF-16 puts a byte-order mark at the start of the file (telling other editors how to decode it), and cmd.exe can't deal with those leading bytes. Try setting the "coding"/"file encoding" to "ANSI" when saving the file.
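The '■m' itself is the tell. A small Python sketch of what cmd.exe actually reads from a UTF-16 LE batch file:

```python
# A batch file saved as UTF-16 LE starts with the byte-order mark FF FE,
# and every ASCII character is followed by a 0x00 byte -- none of which
# cmd.exe understands, so it reports the mangled first "word" as unknown.
content = "md c:\\testsource".encode("utf-16-le")
bom_plus_content = b"\xff\xfe" + content
print(bom_plus_content[:6])  # b'\xff\xfem\x00d\x00'
```

The FF FE pair is what renders as '■', immediately followed by the 'm' of "md" — exactly the '■m' from the error message.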
In addition to the accepted answer, I would add the case where a PowerShell command creates the file: PowerShell's Out-File defaults to UTF-16 ("Unicode") encoding.
To solve your problem, force the file encoding like this: | out-file foo.txt -encoding utf8
Answer based on this other answer.
In Windows 10 I had the same issue.
Changing the character set to UTF-8 made it worse.
It worked correctly when I selected the encoding UTF-8 without BOM.
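The difference between the two UTF-8 variants is just a three-byte prefix; Python's utf-8-sig codec models the with-BOM variant:

```python
# "UTF-8 with BOM" prepends the bytes EF BB BF; "UTF-8-NO BOM" does not.
with_bom = "md c:\\testsource".encode("utf-8-sig")
without_bom = "md c:\\testsource".encode("utf-8")
print(with_bom[:3])     # b'\xef\xbb\xbf' -- the prefix cmd.exe chokes on
print(without_bom[:3])  # b'md ' -- plain ASCII, runs fine
```

Everything after the BOM is identical, which is why only the first command of the batch file fails.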

In Windows, how do I find all files in a certain directory that are encoded using unicode?

I am having trouble searching a large directory of files for a string. The search command I'm using is skipping any file encoded in Unicode. I want to find all the files in this directory that are encoded in Unicode. I am on Windows XP.
Thank you!
You can't know a file's encoding before you open it and read from it. So enumerate the directory's files, then go through the list, open each file, and check either the BOM or the content itself (for example, a certain number of leading bytes).
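A BOM check along those lines can be sketched in a few lines of Python (files without a BOM still need content heuristics):

```python
# Map well-known byte-order marks to encoding names; longest BOMs first so
# the UTF-32 marks are not mistaken for their UTF-16 prefixes.
BOMS = [
    (b"\xff\xfe\x00\x00", "utf-32-le"),
    (b"\x00\x00\xfe\xff", "utf-32-be"),
    (b"\xef\xbb\xbf", "utf-8-sig"),
    (b"\xff\xfe", "utf-16-le"),
    (b"\xfe\xff", "utf-16-be"),
]

def sniff_bom(path: str):
    """Return the encoding named by the file's BOM, or None if there is no BOM."""
    with open(path, "rb") as f:
        head = f.read(4)
    for bom, name in BOMS:
        if head.startswith(bom):
            return name
    return None  # no BOM: probably ANSI or BOM-less UTF-8
```

Files that return "utf-16-le" or "utf-16-be" here are the "Unicode" files that the plain search tools skip.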
The find command in Windows supports Unicode text files. findstr doesn't.
You can do it with my script below; the input encoding does not matter, as long as you specify the output encoding, e.g. -Encoding ASCII.
Go to the directory you want: cd c:\MyDirectoryWithCrazyCharacterEncodingAndUnicode
Then fire this script away!
Copy and paste the script into your PowerShell window; you get the idea, just play with it to fix the syntax:
foreach ($FileNameInUnicodeOrWhatever in Get-ChildItem -File)
{
    # Read the first few bytes so any byte-order mark is visible
    $tempEncoding = Get-Content $FileNameInUnicodeOrWhatever -Encoding Byte -TotalCount 4
    Write-Output "$FileNameInUnicodeOrWhatever has leading bytes: $tempEncoding"
    # The BOM bytes can then be matched against [System.Text.Encoding]
}
If you still cannot find files because of their encoding, re-save them with a different encoding type.
