I've written the following shell script. It creates a folder inside a directory, moves all the images into it, then converts each image and puts the result in the parent folder. If I do this process manually it works fine, but apparently there's a conflict between the "convert" command and my loop. The "big" folder is the one that retains the images at their original sizes, and the "tabloid" folder will contain thumbnail-sized versions of the same images.
cd tabloid
mkdir big
mv * big
cd big
for i in 00 01 02 03 04 05 06 07 08 09 10 11
do
convert -resize 351×383 "$i.jpg" "../$i.jpg"
done
It returns the following error:
convert.im6: invalid argument for option `-resize': 351×383 # error/convert.c/ConvertImageCommand/2382.
I don't know what could be wrong with my script. It's apparently in the loop, because if I do the process manually (converting the images one by one in the terminal), it works fine.
That's not an ASCII x (0x78) in your convert line; it's × (MULTIPLICATION SIGN, U+00D7).
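You can confirm the difference from the shell: the two glyphs look alike but have different bytes (this assumes a UTF-8 locale, where × is the two-byte UTF-8 encoding of U+00D7):

```shell
# ASCII letter x is a single byte, 0x78
printf 'x' | od -An -tx1
# MULTIPLICATION SIGN is two bytes in UTF-8 (c3 97, i.e. U+00D7)
printf '×' | od -An -tx1
```

Retype the offending x in the script by hand and the loop should behave the same as the manual commands.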
When I try to upload images to a CMS I'm using, it fails on some JPEGs but not others. After some searching I found that at the bits-and-bytes level there is a signature that tells whether a file is a JPEG or something else. Apparently JPEGs exported from Photoshop have a slightly different signature, and my system seems to fail on them. They are still JPEGs of a sort, but more like Photoshop JPEGs. If I open and re-export the file in Gimp, for example, it works without a problem.
The one that works starts with: FF D8 FF E0 00 10 4A 46 49 46
The one that doesn't work: FF D8 FF ED 00 2C 50 68
Both files end with: FF D9
So my question is: is there a way to export those images from Photoshop as plain JPEG images?
As you haven't shared a Photoshop-created JPEG file and I can't generate one easily, I can only suggest an untested command to rectify the issue.
You should be able to use exiftool to remove Photoshop-related stuff from a single image like this:
exiftool -photoshop:all= IMAGE.JPG
That may remove more information than you want, so you could copy the original IPTC data back in like this. This variant processes all the JPEGs in the current directory:
exiftool -photoshop:all= -tagsfromfile @ -iptc:all -ext jpg .
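As a sanity check after running exiftool, you can inspect the leading bytes of a file, since that is where the two signatures from the question differ. A minimal sketch, using a hypothetical sample.jpg containing just a fabricated JFIF-style header:

```shell
# Fabricate a four-byte JFIF-style header (FF D8 FF E0) for illustration
printf '\377\330\377\340' > sample.jpg
# The fourth byte distinguishes JFIF (E0) from a Photoshop APP13 segment (ED)
od -An -tx1 sample.jpg
```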
I have a folder full of shortcuts pointing to F:. Recently, due to complications, I cannot use the F: drive, and its contents now reside on D:. There are 100+ shortcuts in the folder; is there any way to change them en masse? I'd prefer using Batch for this, but an external program would work just as well.
This can be done with the free tool Xchang32.exe from Clay's Utilities for Win32, which of course can also be used on Windows x64.
Download ClaysUtils32.zip and extract Xchang32.exe from the ZIP archive into the folder with the shortcut files.
Important note: Clay Ruth, the author of Xchang32.exe and owner of the domain clayruth.com, died years ago. For that reason the ZIP file with this tool is no longer available on the web, although the read-me file in the ZIP archive explicitly permitted distributing the package free of charge.
Run the following two commands:
xchang32.exe /i *.lnk "F:^x5C" "D:^x5C"
xchang32.exe /i *.lnk "F^x00:^x00^x5C" "D^x00:^x00^x5C"
The first line performs a case-insensitive replacement of every F:\ with D:\ in ASCII in every *.lnk file in the current directory.
The second line does the same case-insensitive replacement in Unicode, since shortcut files usually contain file and directory paths in ASCII/ANSI as well as in Unicode.
Note: this quick solution could also modify binary data streams containing 46 3A 5C, 66 3A 5C, 46 00 3A 00 5C, or 66 00 3A 00 5C in the *.lnk files that do not belong to a directory or file path. But I suppose those byte sequences do not occur in the binary data of the shortcut files.
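The byte sequences above are easy to verify: 'F:\' is 46 3A 5C in ASCII, and the Unicode (UTF-16LE) form interleaves a 00 byte after each character. A quick check, assuming iconv and od are available:

```shell
# 'F:\' in ASCII: 46 3a 5c
printf 'F:\\' | od -An -tx1
# The same string in UTF-16LE: 46 00 3a 00 5c 00
printf 'F:\\' | iconv -f UTF-8 -t UTF-16LE | od -An -tx1
```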
Most high-level representations of files are as streams. C's fopen, ActiveX's Scripting.FileSystemObject and ADODB.Stream - in fact, anything built on top of C is very likely to use a stream representation to edit files.
However, when modifying large (~4MiB) fixed-structure binary files, it seems wasteful to read the whole file into memory and write almost exactly the same thing back to disk - this will almost certainly have a performance penalty attached. Looking at the majority of uncompressed filesystems, there seems to me to be no reason why a section of the file couldn't be read from and written to without touching the surrounding data. At a maximum, the block would have to be rewritten, but that's usually on the order of 4KiB; much less than the entire file for large files.
Example:
00 01 02 03
04 05 06 07
08 09 0A 0B
0C 0D 0E 0F
might become:
00 01 02 03
04 F0 F1 F2
F3 F4 F5 0B
0C 0D 0E 0F
A solution using existing ActiveX objects would be ideal, but any way of doing this without having to re-write the entire file would be good.
Okay, here's how to do this in PowerShell (save it as, e.g., hello.ps1):
$path = "hello.txt"
# Open the existing file for read/write without truncating its contents
$bw = New-Object System.IO.BinaryWriter([System.IO.File]::Open($path, [System.IO.FileMode]::Open, [System.IO.FileAccess]::ReadWrite, [System.IO.FileShare]::ReadWrite))
# Seek to byte offset 5, then overwrite six bytes in place
$bw.BaseStream.Seek(5, [System.IO.SeekOrigin]::Begin)
$bw.Write([byte] 0xF0)
$bw.Write([byte] 0xF1)
$bw.Write([byte] 0xF2)
$bw.Write([byte] 0xF3)
$bw.Write([byte] 0xF4)
$bw.Write([byte] 0xF5)
# Close the writer (and the underlying stream) so the changes are flushed
$bw.Close()
You can test it from command line:
powershell -file hello.ps1
Then, you can invoke this from your HTA as:
var wsh = new ActiveXObject("WScript.Shell");
wsh.Run("powershell -file hello.ps1");
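On a unix-like system, the same seek-and-overwrite pattern can be sketched with dd: write six bytes at offset 5 without touching the rest of the file (hello.txt here is just a stand-in sample file):

```shell
# Create a 13-byte sample file
printf 'Hello, world!' > hello.txt
# Overwrite 6 bytes at offset 5; conv=notrunc keeps the rest of the file intact
printf '\360\361\362\363\364\365' | dd of=hello.txt bs=1 seek=5 conv=notrunc 2>/dev/null
# The surrounding bytes are unchanged
od -An -tx1 hello.txt
```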
I need help in the conceptual area surrounding:
/usr/bin/screen,
~/.screenrc,
termcap
My goal is to create a 'correctly' formatted log file via 'screen'.
Symptom: The log file contains hundreds of carriage-return bytes [i.e. (\015) or (\r) ]. I would like to replace every carriage-return byte with a linefeed byte [i.e. (\012) or (\n)].
My Approach: I have created the file: ~/.screenrc and added a 'termcap' line to it with the hope of intercepting the inbound bytes and translating the carriage-return bytes into linefeed bytes BEFORE they are written to the log file. I cycled through nine different syntactical forms of my request. None had the desired effect (see below for all nine forms).
My Questions:
Can my goal be accomplished with my approach?
If yes, what changes do I need to make to achieve my goal?
If no, what alternative should I implement?
Do I need to mix in the 'stty' command?
If yes, how?
Note: I can create a 'correctly' formatted file using the log file as input to 'tr':
$ /usr/bin/tr '\015' '\012' <screenlog.0 | head
<5 BAUD ADDRESS: FF>
<WAITING FOR 5 BAUD INIT>
<5 BAUD ADDRESS: 33>
<5 BAUD INIT: OK>
Rx: C233F1 01 00 # 254742 ms
Tx: 86F110 41 00 BE 1B 30 13 # 254753 ms
Tx: 86F118 41 00 88 18 00 10 # 254792 ms
Tx: 86F128 41 00 80 08 00 10 # 254831 ms
Rx: C133F0 3E # 255897 ms
Tx: 81F010 7E # 255903 ms
$
The 'screen' log file ( ~/screenlog.0 ) is created using the following command:
$ screen -L /dev/tty.usbserial-000014FA 115200
where:
$ ls -dl /dev/*usb*
crw-rw-rw- 1 root wheel 17, 25 Jul 21 19:50 /dev/cu.usbserial-000014FA
crw-rw-rw- 1 root wheel 17, 24 Jul 21 19:50 /dev/tty.usbserial-000014FA
$
$
$ ls -dl ~/.screenrc
-rw-r--r-- 1 scottsmith staff 684 Jul 22 12:28 /Users/scottsmith/.screenrc
$ cat ~/.screenrc
#termcap xterm* 'XC=B%,\015\012' # 01 no effect
#termcap xterm* 'XC=B%\E(B,\015\012' # 02 no effect
#termcap xterm* 'XC=B\E(%\E(B,\015\012' # 03 no effect
#terminfo xterm* 'XC=B%,\015\012' # 04 no effect
#terminfo xterm* 'XC=B%\E(B,\015\012' # 05 no effect
#terminfo xterm* 'XC=B\E(%\E(B,\015\012' # 06 no effect
#termcapinfo xterm* 'XC=B%,\015\012' # 07 no effect
#termcapinfo xterm* 'XC=B%\E(B,\015\012' # 08 no effect
termcapinfo xterm* 'XC=B\E(%\E(B,\015\012' # 09 no effect
$
$ echo $TERM
xterm-256color
$ echo $SCREENRC
$ ls -dl /usr/lib/terminfo/?/*
ls: /usr/lib/terminfo/?/*: No such file or directory
$ ls -dl /usr/lib/terminfo/*
ls: /usr/lib/terminfo/*: No such file or directory
$ ls -dl /etc/termcap
ls: /etc/termcap: No such file or directory
$ ls -dl /usr/local/etc/screenrc
ls: /usr/local/etc/screenrc: No such file or directory
$
System:
MacBook Pro (17-inch, Mid 2010)
Processor 2.53 GHz Intel Core i5
Memory 8 GB 1067 MHz DDR3
Graphics NVIDIA GeForce GT 330M 512 MB
OS X Yosemite Version 10.10.4
Screen(1) Mac OS X manual page (possibly relevant content):
CHARACTER TRANSLATION
Screen has a powerful mechanism to translate characters to arbitrary strings depending on the current font and terminal type. Use this feature if you want to work with a common standard character set (say ISO8851-latin1) even on terminals that scatter the more unusual characters over several national language font pages.
Syntax: XC=<charset-mapping>{,,<charset-mapping>}
<charset-mapping> := <designator><template>{,<mapping>}
<mapping> := <char-to-be-mapped><template-arg>
The things in braces may be repeated any number of times.
A <charset-mapping> tells screen how to map characters in font <designator> ('B': Ascii, 'A': UK, 'K': german, etc.) to strings. Every <mapping> describes to what string a single character will be translated. A template mechanism is used, as most of the time the codes have a lot in common (for example strings to switch to and from another charset). Each occurrence of '%' in <template> gets substituted with the <template-arg> specified together with the character. If your strings are not similar at all, then use '%' as a template and place the full string in <template-arg>. A quoting mechanism was added to make it possible to use a real '%'. The '\' character quotes the special characters '\', '%', and ','.
Here is an example:
termcap hp700 'XC=B\E(K%\E(B,\304[,\326\\,\334]'
This tells screen how to translate ISOlatin1 (charset 'B') upper case umlaut characters on a hp700 terminal that has a german charset. '\304' gets translated to '\E(K[\E(B' and so on. Note that this line gets parsed three times before the internal lookup table is built, therefore a lot of quoting is needed to create a single '\'.
Another extension was added to allow more emulation: If a mapping translates the unquoted '%' char, it will be sent to the terminal whenever screen switches to the corresponding <designator>. In this special case the template is assumed to be just '%', because the charset switch sequence and the character mappings normally haven't much in common.
This example shows one use of the extension:
termcap xterm 'XC=K%,%\E(B,[\304,\\\326,]\334'
Here, a part of the german ('K') charset is emulated on an xterm. If screen has to change to the 'K' charset, '\E(B' will be sent to the terminal, i.e. the ASCII charset is used instead. The template is just '%', so the mapping is straightforward: '[' to '\304', '\' to '\326', and ']' to '\334'.
The section on character translation is describing a feature which is unrelated to logging. It is telling screen how to use ISO-2022 control sequences to print special characters on the terminal. In the manual page's example
termcap xterm 'XC=K%,%\E(B,[\304,\\\\\326,]\334'
this tells screen to send escape(B (to pretend it is switching the terminal to character-set "K") when it has to print any of [, \ or ]. Offhand (referring to XTerm Control Sequences) the reasoning in the example seems obscure:
xterm handles character set "K" (German)
character set "B" is US-ASCII
assuming that character set "B" is actually rendered as ISO-8859-1, those three characters are Ä, Ö and Ü (which is a plausible use of German, to print some common umlauts).
Rather than being handled by this feature, screen's logging is expected to record the original characters sent to the terminal — before translation.
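So post-processing the captured log, as the tr command in the question already does, is the practical route. A minimal round trip for illustration, simulating a log whose lines end in bare carriage returns:

```shell
# Simulate a screen log whose lines end in bare carriage returns
printf 'line one\rline two\r' > screenlog.0
# Translate every CR to LF after capture
tr '\015' '\012' < screenlog.0 > screenlog.txt
cat screenlog.txt
```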
I am writing a script that takes a UTF-16 encoded text file as input and outputs a UTF-16 encoded text file.
use open "encoding(UTF-16)";
open INPUT, "< input.txt"
or die "cannot open > input.txt: $!\n";
open(OUTPUT,"> output.txt");
while(<INPUT>) {
print OUTPUT "$_\n"
}
Let's just say that my program writes everything from input.txt into output.txt.
This WORKS perfectly fine in my cygwin environment, which is using "This is perl 5, version 14, subversion 2 (v5.14.2) built for cygwin-thread-multi-64int"
But in my Windows environment, which is using "This is perl 5, version 12, subversion 3 (v5.12.3) built for MSWin32-x64-multi-thread",
Every line in output.txt is prepended with garbage characters except the first line.
For example:
<FIRST LINE OF TEXT>
㈀ Ⰰ ㈀Ⰰ 嘀愀 ㌀ 䌀栀椀愀 䐀⸀⸀⸀ 儀甀愀渀最 䠀ഊ<SECOND LINE OF TEXT>
...
Can anyone give some insight on why it works on cygwin but not windows?
EDIT: After printing the encoded layers as suggested.
In Windows environment:
unix
crlf
encoding(UTF-16)
utf8
unix
crlf
encoding(UTF-16)
utf8
In Cygwin environment:
unix
perlio
encoding(UTF-16)
utf8
unix
perlio
encoding(UTF-16)
utf8
The only difference is between the perlio and crlf layer.
[ I was going to wait and give a thorough answer, but it's probably better if I give you a quick answer than nothing. ]
The problem is that crlf and the encoding layers are in the wrong order. Not your fault.
For example, say you do print "a\nb\nc\n"; using UTF-16le (since it's simpler and it's probably what you actually want). You'd end up with
61 00 0D 0A 00 62 00 0D 0A 00 63 00 0D 0A 00
instead of
61 00 0D 00 0A 00 62 00 0D 00 0A 00 63 00 0D 00 0A 00
I don't think you can get the right results with the open pragma or with binmode, but it can be done using open.
open(my $fh, '<:raw:encoding(UTF-16):crlf', $qfn)
You'll need to append a :utf8 with some older version, IIRC.
It works on Cygwin because the crlf layer is only added on Windows. There you'd get
61 00 0A 00 62 00 0A 00 63 00 0A 00
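You can reproduce the correct byte stream (the second dump above) outside Perl: encoding CRLF-terminated text as UTF-16LE shows each of CR and LF carrying its own 00 byte (this assumes iconv is installed):

```shell
# "a\r\nb\r\nc\r\n" as UTF-16LE: every character, including CR and LF, is two bytes
printf 'a\r\nb\r\nc\r\n' | iconv -f ASCII -t UTF-16LE | od -An -tx1
```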
You have a typo in your encoding: it should be use open ":encoding(UTF-16)". Note the colon. I don't know why it would work on Cygwin but not Windows; it could also be a 5.12 vs 5.14 thing. Perl seems to make up for the missing colon, but it could be what's causing your problem.
If that doesn't do it, check if the encoding is being applied to your filehandles.
print map { "$_\n" } PerlIO::get_layers(*INPUT);
print map { "$_\n" } PerlIO::get_layers(*OUTPUT);
Use lexical filehandles (i.e. open my $fh, "<", $file). Glob filehandles are global, so something else in your program might be interfering with them.
If all that checks out, if lexical filehandles are getting the encoding(UTF-16) applied, let us know and we can try something else.
UPDATE: This may provide your answer: "BOMed UTF files are not suitable for streaming models, and they must be slurped as binary files instead." Looks like you have to read the file in as binary and do the encoding as a string. This may have been a bug fixed in 5.14.
UPDATE 2: Yep, I can confirm this is a bug that was fixed in 5.14.