How to convert .epsf to .eps? - wolfram-mathematica

I'm looking for a method of converting .epsf to .eps for a publication I'm submitting. The submission site requires .eps (even though my understanding is that modern renderers should be able to read .epsf as well; the site is archaic, and I have to upload all 100 images individually). My co-author sent me the zipped files to upload (and now to convert); I didn't make them myself. Furthermore, the programs that made these images may exist on my co-author's computer, but exactly where is uncertain.
I've tried this in Mathematica 8 with reasonable but not full success: colored files become black and white, and files with duplicate entries (e.g. Fig11a.eps and Fig11a.epsf both exist, though they are different; it seems the .eps is the background and the .epsf is the foreground layer) convert incorrectly. My approach was to import the .epsf files into Mathematica and export them as .eps.
Also, I've tried using an intermediate format, e.g. GIF/TIFF/PNG/JPG, with similar results. I haven't been able to find a free program that could do this (I assume Photoshop could pull it off), and I'd like to do it as a batch. A method that requires Python/Mathematica or an XP/Linux OS would be fine. Thanks.

You do not need to convert anything. Encapsulated PostScript files can have either extension (both EPS and EPSF). If your publisher refuses to accept files with an EPSF extension, just rename them to EPS.
Any processing/conversion you do on the files (using Ghostscript, Mathematica, etc.) carries the risk of corrupting the graphics in some way, and there's no need for it. Send them as they are, or rename them if you prefer.
(If you have any doubt, you can check the EPS Format Specification from 1992, which says that on the Macintosh the recommended file extension is .epsf, while on DOS it's .EPS.)
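If you want to do the rename as a batch (the question mentions roughly 100 files), here is a minimal Python sketch of that renaming approach, assuming all the files sit in one directory; the file contents are left untouched, only the extension changes:

from pathlib import Path

# Rename every .epsf file in the current directory to .eps.
for path in Path(".").glob("*.epsf"):
    target = path.with_suffix(".eps")
    if target.exists():
        # e.g. Fig11a.eps and Fig11a.epsf both exist; don't overwrite anything
        print(f"Skipping {path}: {target} already exists")
        continue
    path.rename(target)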

Related

How to compress a file on HFS+ programmatically?

macOS HFS+ supports transparent filesystem-level compression. How can I enable this compression for certain files via a programmatic API (e.g. a Cocoa or C interface)?
I'd like to achieve the effect of ditto --hfsCompression src dst, but without shelling out.
To clarify: I'm asking how to make an uncompressed file compressed. I'm not interested in reading or preserving the existing HFS compression state.
There's afsctool, which is an open-source implementation of HFS+ compression. It was originally written by brkirch (see the MacRumors forums, which he still visits) but has since been expanded and improved a great deal by @rjvb, who is doing amazing things with it.
The copyfile.c file discloses some of the implementation details.
There's also a compression tool based on that: afsctool.
I think you're asking two different questions (and might not know it).
If you're asking "How can I make arbitrary file 'A' an HFS compressed file?" the answer is: you can't. HFS compressed files are created by the installer and (technically[1]) only Apple can create them.
If you are asking "How can I emulate the --hfsCompression logic in ditto such that I can copy an HFS compressed file from one HFS+ volume to another HFS+ volume and preserve its compression?" the answer to that is pretty straightforward, albeit not well documented.
HFS compressed files have a special UF_COMPRESSED file flag. If you see that, the data fork of the file is actually an uncompressed image of a hidden resource. The compressed version of the file is stored in a special extended attribute. It's special because it normally doesn't appear in the list of attributes when you request them (so if you just ls -l@ the file, for example, you won't see it). To list and read this special attribute you must pass the XATTR_SHOWCOMPRESSION flag to both the listxattr() and getxattr() functions.
To restore a compressed file, you reverse the process: Write an empty file, then restore all of its extended attributes, specifically the special one. When you're done, set the file's UF_COMPRESSED flag and the uncompressed data will magically appear in its data fork.
[1] Note: It's rumored that the compressed resource of a file is just a zipped version of the data, possibly with some wrapper around it. I've never taken the time to experiment, but if you're intent on creating your own compressed files you could take a stab at reverse-engineering the compressed extended attribute.
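Not the creation side, but here is a minimal Python/ctypes sketch of the inspection part described above. The constant values are copied from my reading of the macOS headers and the example path is just a guess, so treat both as assumptions:

import ctypes
import ctypes.util
import os

# Values taken from the macOS headers (assumptions, double-check on your SDK):
UF_COMPRESSED = 0x00000020          # <sys/stat.h>: file's data is stored compressed
XATTR_SHOWCOMPRESSION = 0x00000020  # <sys/xattr.h>: reveal the hidden compression xattr

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.listxattr.restype = ctypes.c_ssize_t

def xattrs_including_compression(path):
    """List extended attribute names, including the normally hidden one."""
    raw = path.encode()
    size = libc.listxattr(raw, None, ctypes.c_size_t(0), XATTR_SHOWCOMPRESSION)
    if size <= 0:
        return []
    buf = ctypes.create_string_buffer(size)
    libc.listxattr(raw, buf, ctypes.c_size_t(size), XATTR_SHOWCOMPRESSION)
    return [name.decode() for name in buf.raw.split(b"\x00") if name]

path = "/usr/bin/true"  # hypothetical example; many Apple-shipped files are stored compressed
if os.stat(path).st_flags & UF_COMPRESSED:
    print(path, "is compressed; xattrs:", xattrs_including_compression(path))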

overlay one pdf with another from the command line: pdftk alternative?

I use a bash script to auto-generate a pdf calendar each month. I use the wonderful remind program as the basis for this routine. As great as the calendars I get from that program are, I need a more detailed header for the calendar (more than just the name of the month and the year). I couldn't puzzle out a way to get the remind program to enhance the header, but I was able to get the enhanced results I wanted by creating a second pdf containing the header enhancements I need, then overlaying that pdf onto the calendar I produce with remind, via the pdftk utility (pdftk calendar.pdf stamp calendar_overlay.pdf output MONTH-YEAR-cal.pdf). Unfortunately, I recently lost the ability to use pdftk, since keeping it on my system would require me to stop doing other system updates. In short, I had to remove it in order to continue updating my system.
So now I'm looking for some alternative that I can incorporate into my bash script. I am not finding any utility that will allow me to overlay one pdf with another the way pdftk does. It seems I may be able to do something like what I'm after using ImageMagick's convert, though I would likely need to overlay the pdf with an image file like a .jpg rather than with a pdf. Another possible solution may be to use TeX/LaTeX to insert text into the pdf, as described at https://rsmith.home.xs4all.nl/howto/adding-text-or-graphics-to-a-pdf-file.html.
I wanted to ask here, before investing a lot of time and effort into pursuing one or the other of the two potential options I've identified, whether there is some other way, using command-line tools that can be incorporated into a bash script, of overlaying one pdf with another in the manner described. Input will be appreciated.
LATER EDIT: another link with indications of how to do such things using LaTeX: https://askubuntu.com/questions/712691/batch-add-header-footer-to-pdf-files
Assuming for simplicity that both of your files are of size 500pt x 200pt,
you can use pdfjam with nup and delta options to trick it into overlaying your source pdf files.
pdfjam bottom.pdf top.pdf --outfile merged.pdf \
--nup "1x2" \
--noautoscale true \
--delta "0 -200pt" \
--papersize "{500pt, 200pt}"
Unfortunately, I've found in my tests that I needed to increase the y delta by one point to get perfect alignment.
pdftk-java is a Java-based port of pdftk which looks to be in active development. Given that its only real requirement appears to be Java 7+, it should work even in environments such as yours that no longer support the requirements of the original pdftk, so long as a Java runtime is installed.

How do I decompress a .astc file with an additional .ccz extension? How do I view .ita files?

First, full disclosure: I'm very new to coding and very new to file dissecting, but it's something I anticipate studying in school very soon, so please pardon my ignorance in future interactions.
As a project, I've decided to dissect the files of a mobile app I greatly enjoy. The app is Futurama: Worlds of Tomorrow. I'm a big fan of the cartoon and have even spent money on the stuff, so I figured it was a natural one for me to pick.
Extracting the .apk file was easy; I found some of the assets they use in the game, like the music, the sound bites, and some .pngs. All simple stuff.
However, there are two file types I'm absolutely baffled by: files with an .astc.czz extension, and .ita files that are not Italian readme files; the developers informed me that those are animation files.
Allow me to go into what I know and what I don't know:
Filename.astc.czz
Example file here
I recognize .astc as a compressed texture format and was informed that .astc files are common in mobile games. Fair enough, but the real extension is .czz, and that "real" extension leads me to a dead end. I've found the ASTC Evaluation Codec by ARM-Software on GitHub, so I tried that. I changed the extension to .astc, and I also tried keeping .czz, but the codec gives me an error every time. This is where I show my ignorance: I didn't know the right way to do this, so I'm showing you every combination of what I tried. I replaced my name with user.
C:\Users\user\Downloads\astc-encoder-master\Binary\Win32
λ astcenc -d C:\Users\user\Downloads\astc-encoder-master\Binary\Win32\AC0001-dialogue1-003#2x.astc C:\Users\user\Downloads\astc-encoder-master\Binary\Win32\AC0001-dialogue1-003#2x.tga
File C:\Users\user\Downloads\astc-encoder-master\Binary\Win32\AC0001-dialogue1-003#2x.astc not recognized
C:\Users\user\Downloads\astc-encoder-master\Binary\Win32
λ astcenc -d AC0001-dialogue1-003#2x.astc AC0001-dialogue1-003#2x.tga
File AC0001-dialogue1-003#2x.astc not recognized
C:\Users\user\Downloads\astc-encoder-master\Binary\Win32
λ astcenc -d C:\Users\user\Downloads\astc-encoder-master\Binary\Win32\AC0001-dialogue1-003#2x.astc.czz C:\Users\user\Downloads\astc-encoder-master\Binary\Win32\AC0001-dialogue1-003#2x.tga
Failed to open file C:\Users\user\Downloads\astc-encoder-master\Binary\Win32\AC0001-dialogue1-003#2x.astc.czz
C:\Users\user\Downloads\astc-encoder-master\Binary\Win32
λ astcenc -d AC0001-dialogue1-003#2x.astc.czz AC0001-dialogue1-003#2x.tga
Failed to open file AC0001-dialogue1-003#2x.astc.czz
No success there.
So then I learned that .CZZ files are apparently associated with visECAD Viewer, so I downloaded that, and the .astc.czz files became associated with the program. I tried opening them, but visECAD says it can't open them because they are "outdated." So that's another dead end.
Right, so that's all I know.
Filename.ita
Example file here
Out of curiosity, I actually emailed the developers about this file type (and the astc ones too) and they said those are the game's animations. They couldn't send me a viewer, which is perfectly fine, but I don't even know what .ita files are associated with, other than Italian readmes. Any insight would be appreciated; the animations are great and I would love to see them.
For full disclosure here are snippets of what the developers sent me:
Those strange file types are actually compressed files (like
".astc.ccz"). Different devices use different compression methods, so
we support many types to maintain low storage and memory usage. Some
devices don't use compression and just use .png versions of the same
file names.
The .lta files are the game's animations. I wish I could help you out
with viewing them, but there's no way for me to send you a viewer. :(
Well that's all folks, sorry it was so long, and thank you so much in advance. I'm grateful already!
I realise this is a few months old, but in case you're still interested, I've just cracked it. Basically, it's a compressed texture, the ccz part being the compression, and the astc being the texture format. I managed to decompress the file using QuickBMS (http://aluigi.altervista.org/quickbms.htm), using the following script for ccz files (copy the following into a txt file):
endian big
comtype zlib_dynamic
get ZSIZE asize
math ZSIZE - 0x10
get NAME basename
idstring "\x43\x43\x5a\x21"
goto 0xc
get SIZE long
clog NAME 0x10 ZSIZE SIZE
On running QuickBMS, it will first ask for a script; point it to your new txt file. Then it will ask for the file you want to decompress; point it at your ccz file. Then it will ask where you want to save your astc file.
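If you would rather skip QuickBMS, here is a small Python sketch that does the same thing the script above does (check the "CCZ!" magic, read the big-endian uncompressed size at offset 0xc, then inflate the zlib stream that starts at offset 0x10); the file names are just examples:

import sys
import zlib

def ccz_to_astc(src, dst):
    with open(src, "rb") as f:
        data = f.read()
    if data[:4] != b"CCZ!":
        raise ValueError(src + " does not look like a CCZ file")
    expected = int.from_bytes(data[0xC:0x10], "big")   # uncompressed size
    raw = zlib.decompress(data[0x10:])                 # payload starts after the 16-byte header
    assert len(raw) == expected, "size field does not match decompressed data"
    with open(dst, "wb") as f:
        f.write(raw)

if __name__ == "__main__":
    ccz_to_astc(sys.argv[1], sys.argv[2])  # e.g. in.astc.czz out.astc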
Now you will need a program that can open astc files! I used this one, Noesis: http://www.richwhitehouse.com/index.php?content=inc_projects.php&showproject=91
Find your astc file (the interface is quite straightforward), then from there you can double click the file to open it, then right-click and export to a variety of formats. For proof of concept, here is the extracted pf0001-action5-001#4xout (PF being Philip Fry I assume). https://www.dropbox.com/s/t2l3mesi2psbd1p/pf0001-action5-001%404xout.png?dl=0
Both programs allow for batch processing as well, so you should have everything you need! However, the lta files are skeletal animations, I believe, so unfortunately the character animations are all in pieces; I'm looking into that next. Hope this helps!
EDIT: The above information covers your specific query, i.e. decompressing and reading the contents of those files. HOWEVER, if your end goal is to view the assets of the game, it's worth knowing that many of the assets are only downloaded AFTER the game is run, so looking in the "com.tinyco.futurama" folder on your Android device will show all kinds of assets not present in the apk file. Many of them will be ready-extracted as well, having been made ready for gameplay, so I would highly recommend copying the contents of this folder periodically. I think it re-compresses unused assets as well, so I would copy out the ccz files too; either way you should reap the maximum benefits.

Steganography - Is there a smarter/more complex way of hiding files?

So far, every tutorial I've seen regarding this topic, whether for Linux or Windows, involves compressing files and putting them inside an image, thereby creating a new image that contains those files.
--
A zip file appended to a jpg file can be easily detected. With a little analysis you can easily see that the jpg file has some extra information at the end, and you can recognize the header of a zip file after the normal jpeg data (even if the zip file is encrypted).
The question is: is there a smarter/more complex method for hiding files inside an image?
Thanks for any help.
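To illustrate the point about detection, here is a crude Python sketch of that kind of check (a heuristic with a hypothetical file name, not a real analyzer):

def has_appended_zip(path):
    """Flag a JPEG that appears to have a ZIP archive tacked onto the end."""
    with open(path, "rb") as f:
        data = f.read()
    zip_sig = data.find(b"PK\x03\x04")       # ZIP local-file-header signature
    if zip_sig == -1:
        return False
    # The signature should sit after the JPEG image data, i.e. after an
    # end-of-image marker (FF D9) somewhere earlier in the file.
    return data.rfind(b"\xff\xd9", 0, zip_sig) != -1

print(has_appended_zip("suspect.jpg"))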
Steganography can be done in three domains: the spatial, frequency, and compressed domains. Each of these domains has its pros and cons.
For example, in the compressed domain, you can hide a large secret in a compressed image and it will be very difficult to detect as the binary stream is what will be transmitted from sender to receiver. However, in the spatial domain, the existence of a secret message becomes easier to detect as the natural image is transferred as a whole.
There are many new steganographic algorithms being published in journals each month; you can look at the top journals in this field (such as IEEE or Information Sciences) to find a suitable technique.
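For a concrete feel for the spatial domain, here is a toy least-significant-bit (LSB) embedding sketch in Python using Pillow. LSB is the textbook spatial-domain technique rather than anything specific to the answer above, and, as noted, it is comparatively easy to detect:

from PIL import Image  # third-party: pip install Pillow

def embed_lsb(cover_path, payload, out_path):
    """Hide the payload in the least significant bit of each pixel's red channel."""
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    # 4-byte big-endian length prefix, then the payload, as a string of bits.
    bits = "".join(f"{byte:08b}" for byte in len(payload).to_bytes(4, "big") + payload)
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this cover image")
    stego = [
        ((r & ~1) | int(bits[i]), g, b) if i < len(bits) else (r, g, b)
        for i, (r, g, b) in enumerate(pixels)
    ]
    img.putdata(stego)
    img.save(out_path, "PNG")  # must be lossless; JPEG recompression would destroy the bits

embed_lsb("cover.png", b"secret message", "stego.png")  # hypothetical file names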

Duplicate photo searching by comparing only pure image data and image similarity?

I have approximately 600GB of photos collected over 13 years, now stored on a FreeBSD ZFS server.
The photos come from family computers, from several partial backups to different external USB HDDs, from images reconstructed after disk disasters, and from different photo manipulation programs (iPhoto, Picasa, HP and many others :( ), in several deep subdirectories. In short: a TERRIBLE MESS with many duplicates.
So as a first pass I:
searched the tree for files of the same size (fast) and computed MD5 checksums for those
collected duplicate images (same size + same MD5 = duplicate)
This helped a lot, but there are still MANY MANY duplicates:
photos that differ only in the EXIF/IPTC data added by some photo management software, but the image is the same (or at least "looks the same" and has the same dimensions)
or they are only resized versions of the original image
or they are "enhanced" versions of the originals, etc.
Now the questions:
How can I find duplicates by checksumming only the "pure image bytes" of a JPG, ignoring EXIF/IPTC and similar metadata? I want to filter out the photo duplicates that differ only in EXIF tags but have the same image data (so file checksumming doesn't work, but image checksumming could...). This is (I hope) not very complicated, but I need some direction.
Which Perl module can extract the "pure" image data from a JPG file in a form usable for comparison/checksumming?
More complex
How can I find "similar" images that are only:
resized versions of the originals
"enhanced" versions of the originals (from some photo manipulation programs)
Is there already an algorithm available as a Unix command or Perl module (XS?) that I can use to detect these special "duplicates"?
I'm able to write complex scripts in bash and "+-" :) know Perl. I can use FreeBSD/Linux utilities directly on the server, and over the network I can use OS X (but working with 600GB over the LAN is not the fastest way)...
My rough idea:
delete images only at the end of the workflow
use an Image::ExifTool script to collect candidate duplicates based on image-creation date and camera model (and maybe other EXIF data too)
make a checksum of the pure image data (or extract a histogram; identical images should have the same histogram) - not sure about this
use some similarity detection to find duplicates based on resizing and photo enhancement - no idea how to do this...
Any idea, help, or (software/algorithm) hint on how to bring order to the chaos?
PS:
Here is a nearly identical question: Finding Duplicate image files, but I'm already done with that answer (MD5) and am looking for more precise checksumming and image-comparison algorithms.
Assuming you can work with a locally mounted FS:
rmlint : fastest tool I've ever used to find exact duplicates
findimagedupes : automates the whole ImageMagick approach (much like Randal Schwartz's script, it seems, though I haven't tested that)
Detecting Similar and Identical Images Using Perceptual Hashes goes all the way (a great reference post)
dupeguru-pe (gui) : dedicated tool that is fast and does an excellent job
geeqie (gui) : I find it fast/excellent for finishing the job, using the granular deduplication options. You can then generate an ordered collection of images such that similar images are next to each other, allowing you to "flip" between two of them to see the changes.
Have you looked at this article by Randal Schwartz? He uses a Perl script with ImageMagick to create resized (4x4 RGB grid) versions of the pictures, which he then compares in order to flag "similar" pictures.
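If you want to prototype that idea without Perl, a rough Python/Pillow equivalent of the 4x4-thumbnail comparison might look like this (the threshold is a made-up number you would need to tune on your own collection):

from PIL import Image  # third-party: pip install Pillow

def tiny_fingerprint(path):
    """Shrink the image to a 4x4 RGB grid, in the spirit of the linked script."""
    img = Image.open(path).convert("RGB").resize((4, 4), Image.LANCZOS)
    return list(img.getdata())

def distance(fp_a, fp_b):
    """Sum of absolute channel differences; small values mean 'probably similar'."""
    return sum(abs(a - b) for pa, pb in zip(fp_a, fp_b) for a, b in zip(pa, pb))

# Hypothetical usage: a small distance flags likely near-duplicates (resized/enhanced copies).
d = distance(tiny_fingerprint("IMG_0001.jpg"), tiny_fingerprint("IMG_0001_resized.jpg"))
print("similar" if d < 200 else "different", d)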
You can remove EXIF data with mogrify -strip from the ImageMagick toolset. So you could, for each image, make a copy without EXIF data, md5sum it, and then compare the md5sums.
When it comes to visually similar images, you can, for example, use compare (also from the ImageMagick toolset) to produce a black/white diff map, as described here, then make a histogram of the difference and check whether there is "enough" white for the images to count as different.
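A variation on the same idea in Python, hashing the decoded pixel data directly so that EXIF/IPTC differences are ignored without writing stripped copies (file names are hypothetical):

import hashlib
from PIL import Image  # third-party: pip install Pillow

def pixel_md5(path):
    """MD5 of the decoded pixel data only, so metadata changes don't affect the hash."""
    img = Image.open(path).convert("RGB")
    return hashlib.md5(img.tobytes()).hexdigest()

# Two files that differ only in metadata should hash identically:
print(pixel_md5("holiday.jpg") == pixel_md5("holiday_tagged_by_iphoto.jpg"))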
I had a similar dilemma: several hundred gigs of photos and videos spread and duplicated over about a dozen drives. I know this may not be the exact approach you are looking for, but the FSlint Janitor application (on Ubuntu 16.x, then 18.x) was a lifesaver for me. I took the project on in chunks, eventually cleaned it all up, and ended up with three complete sets (I wanted two off-site backups).
FSLint Janitor:
sudo apt install fslint
