hwpf, xwpf, hssf, and xslf POI picture extraction

I'm looking to extract all images from new and legacy Word documents and spreadsheets to assist in a real-time document classification system, and looking at the documentation, I seem to have run into a problem. I have no trouble finding documentation within the hwpf module and packages for extracting images from a file, but the other three don't seem to support the same methods.
What I want is one block of code that is document-type agnostic across the four above-mentioned types. I just want fast, easy access to the pictures in the files so I can move on to my next task, but at this point it looks like only the hwpf module supports picture extraction via the methods in 'PicturesTable'.
I'm also somewhat concerned about the library's performance: it looks like it loads the entire file when all I want to do is scrape the images out of it. Any suggestions for a library that operates directly on the 'Data' bytestream and the folder structure of the .***x zip files?
I've already tried using OLEtools to extract pictures from the streams, and I'm now moving on to this tool. I haven't tried any tools that operate on the lower levels of the documents yet, though.
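For the OOXML formats, the images are ordinary entries in the zip container (under word/media/, xl/media/, or ppt/media/), so they can be scraped out without parsing the document at all. A minimal sketch of that approach in Python, using only the standard-library zipfile module (the file and directory names are placeholders; the legacy binary formats would still need an OLE2-aware tool):

```python
import os
import zipfile

# Media folders used by the .docx, .xlsx, and .pptx zip containers.
MEDIA_PREFIXES = ("word/media/", "xl/media/", "ppt/media/")

def extract_ooxml_images(path, out_dir):
    """Copy every image entry out of an OOXML zip container."""
    os.makedirs(out_dir, exist_ok=True)
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            if name.startswith(MEDIA_PREFIXES) and not name.endswith("/"):
                target = os.path.join(out_dir, os.path.basename(name))
                with open(target, "wb") as out:
                    out.write(zf.read(name))

extract_ooxml_images("report.docx", "extracted_images")
```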

Related

Getting most relevant content from page

I need to create a universal web scraper to parse articles on different websites. Of course, I know about XPath, but I want to try to make it universal for any website, regardless of the HTML markup of a page.
I need to determine whether there is an article on the page, and if there is, parse the text of the title, body, and tags (if they exist).
Frankly speaking, my knowledge of data science is not very deep, but I assume this task (determining whether a page is an article, and parsing only the needed parts) is possible to solve.
What tools should I use? Any help?
Actually, for the second task, I need to implement something similar to what Google Chrome mobile does: when a page is not optimised for mobile, it proposes showing the page in an adaptive mode (just the title and main content).
If you are using Python, some libraries to look at are:
scrapy, which crawls pages and can also extract some of the results, and
BeautifulSoup, which is more geared towards the extraction part itself (see the sketch after this list).
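As a rough sketch of the extraction side, here is what pulling a title, body, and tags out of a page could look like with BeautifulSoup (the URL is a placeholder, and treating an <article> element as "the article" is only a heuristic):

```python
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/some-post").text
soup = BeautifulSoup(html, "html.parser")

# Heuristic: treat the page as an article if it contains an <article> element.
article = soup.find("article")
if article is not None:
    title = soup.title.string if soup.title else ""
    body = article.get_text(separator="\n", strip=True)
    # Many blogs mark tag links with rel="tag"; this will often come up empty.
    tags = [a.get_text(strip=True) for a in article.find_all("a", rel="tag")]
    print(title, body[:200], tags)
else:
    print("No obvious article on this page.")
```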
It is possible to request a version of a website (e.g. for Chrome, Safari, Mobile, old-school systems) by creating a custom header for your scraper.
Have a look at the relevant documentation, and you can get an idea of how to use headers in scrapy here.
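For instance, a per-request User-Agent header in scrapy is just a keyword argument (a sketch; the spider name, URL, and UA string below are illustrative):

```python
import scrapy

class ArticleSpider(scrapy.Spider):
    name = "articles"

    # Illustrative mobile User-Agent; substitute whatever browser you want to mimic.
    MOBILE_UA = "Mozilla/5.0 (Linux; Android 10) AppleWebKit/537.36 Mobile Safari/537.36"

    def start_requests(self):
        yield scrapy.Request(
            "https://example.com/some-post",
            headers={"User-Agent": self.MOBILE_UA},
            callback=self.parse,
        )

    def parse(self, response):
        # response.text now holds the page as served to a mobile browser.
        self.logger.info("Fetched %s (%d bytes)", response.url, len(response.text))
```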
I do not know of any more specialised tools. Your tasks are more analytical, and are typically not solved with models that estimate, for example, what content sits where on a webpage. It might be an interesting research direction, though, to see whether you can create a model that generalises across many websites to extract the desired content.
That leads me to my last point: creating a single scraper that works for any website (containing your article type) is not usually possible. People create websites differently, however they see fit, which means they also change them. This usually leads to a good scraper requiring constant updates as time (and developers) move on.
EDIT:
Then, if you have lots of labelled examples, it might be possible to train a model. The challenge might be the look-back range of the model. For example, a typical LSTM model is given a parameter that tells it how far to look back into the past, which bounds how much context it can retain in its internal memory. In your case, you might be looking for the start and end HTML tags of an article in order to extract just that part. Those tags could be thousands of words apart, which is more than a standard LSTM is likely to retain and use.
If you could pose your problem a little differently, other approaches might be plausible. For example, you could frame it as a question-answering problem: "I have this HTML; where is the article content?" If that sounds OK for your use case, have a look here for some model-based approaches.

Create Multiple Slides from a List with Common Template

I have created a certificate design with PowerPoint.
Now I have to create 100+ copies of it... each with a different name (the recipient's).
I was wondering if there was an easy way to do it...
I can have the list of names in Excel or a txt file.
I am open to other ideas as well, like changing the slide into an image and batch-processing it in a simple way.
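One scriptable route is python-pptx: a minimal sketch, assuming the certificate is a single-slide template whose text contains the literal placeholder {{NAME}} (typed in one go, so PowerPoint keeps it in a single run) and that the names sit in names.txt, one per line:

```python
from pptx import Presentation

with open("names.txt") as f:
    names = [line.strip() for line in f if line.strip()]

for name in names:
    # Reopen the pristine template for every certificate.
    prs = Presentation("certificate_template.pptx")
    for shape in prs.slides[0].shapes:
        if not shape.has_text_frame:
            continue
        for paragraph in shape.text_frame.paragraphs:
            for run in paragraph.runs:
                if "{{NAME}}" in run.text:
                    run.text = run.text.replace("{{NAME}}", name)
    prs.save(f"certificate_{name}.pptx")
```

Replacing text at the run level keeps the font, size, and colour you designed for the placeholder.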
You may also try out SlideMight, a tool for merging hierarchical data with PowerPoint templates. SlideMight supports iteration over data, to generate slides or to populate tables. There is more functionality, but you don't seem to need that. SlideMight is, in effect, a merge system, like mail merge for Word.
The input data format is currently just JSON; you would need to convert your Excel sheets first, e.g. using this Excel to JSON add-in for Excel.
There are versions for Windows and Mac OS X.
More information is at www.SlideMight.com
Disclaimer: I am the owner of Delftware Technology, the company that developed SlideMight, and I am one of its developers.
This is a question that really belongs on Super User, not Stack Overflow (which is intended for coding questions, not software how-to questions).
But ...
Save your names to a plain TXT file, one name per line.
Start PowerPoint, choose File, Open and point to your TXT file (you may force the matter by choosing All Files in Files of type).
Apply whatever template you like to the result.
I have a commercial add-in that'll do this and quite a bit more, but from your description, you don't need it.

How can one create a polyglot PDF?

I like reading the PoC||GTFO issues, and one thing I found remarkable when I first discovered them was the "polyglot" nature of their PDF files.
Let me explain: consider, for example, their 8th issue. You can unzip files from it, and you can execute the encryption they discuss by running the file as a script; even better (worse?), their 9th issue can be played as a music file!
I'm currently writing small scripts every week, along with a little one-page PDF in LaTeX each time to explain said scripts. So I would really enjoy being able to create the same kind of PDF files. Sadly, they explained only partly, in their first issue, how to include zip files, and they did so through three small sketches of command lines without actual explanations.
So my question is basically:
how can one create such a polyglot PDF file, containing things like a zip, while also being a shell script that can be run with arguments just like a normal script?
I'm asking here about the process of creation, not just an explanation of how this is possible. Ideally, there would already be scripts or programs that make it easy to create such PDF files.
I've tried searching the net for the keywords "polyglot files" and others of the kind, but wasn't able to find any useful matches. Maybe this process has another name?
I've already read the presentation by Julia Wolf, which explains how things work, but I sadly haven't had time to apply that knowledge to the real world, because I'm not used to playing with file headers and the way a PDF is constructed.
EDIT:
Okay, I've read more and found the 7th issue of PoC||GTFO to be really informative on this subject. I may end up being able to create my own scripts for such polyglot PDF files if I have some more time to think it through.
I played around with polyglots myself after attending Ange's talks and talking to him in person. You really need to understand the file formats to be able to nest them inside each other.
However, long story short, here are some links I found extremely useful for creating polyglots:
Some older Google Code Trunk
PoC of the polyglot stuff
The second link (to GitHub) in particular will help you create polyglots, and also understand how they work and how they are implemented. Since it is mostly Python and very cleanly written, it is useful and easy to follow.
I feel dissecting some file formats would be a good place to start. You can find specifications for many file formats through Google, but they can be a tough read and will likely take you some time to translate into whatever language you are using.
PDF: https://www.adobe.com/content/dam/acom/en/devnet/pdf/pdfs/PDF32000_2008.pdf
ELF: https://www.cs.cmu.edu/afs/cs/academic/class/15213-s00/doc/elf.pdf
ZIP: http://kat.sdf.org/zip_file_format.txt
The language(s) you select will need a way to read and write raw bytes (not just ASCII alphanumerics), so C would be a good choice for more direct access to memory; Python also handles raw bytes well and makes it easy to open-source the scripts.
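For instance, a first step in that direction is just probing leading magic bytes, which any of these languages can do; a small Python sketch (the file name is a placeholder):

```python
# Signatures taken from the PDF, ELF, and ZIP specifications linked above.
SIGNATURES = {
    b"%PDF": "PDF",
    b"\x7fELF": "ELF",
    b"PK\x03\x04": "ZIP (local file header)",
}

def identify(path):
    """Return a best-guess format name based on the file's first bytes."""
    with open(path, "rb") as f:
        head = f.read(4)
    for magic, name in SIGNATURES.items():
        if head.startswith(magic):
            return name
    return "unknown"

print(identify("somefile.bin"))
```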
To dissect the files, you may want to build a tool somewhat like https://github.com/kvesel/zipbrk/ to take them apart, then put them back together in a polyglot format. For example, zip does not require the section headers to be at the start of the file (or even contiguous, for that matter), and the PDF magic number can appear in multiple places within the file as well. I also believe I recall a polyglot tool being included in one of the PoC||GTFO issues (maybe issue 8 or 2?) as a polyglot in the PDF file itself.
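Those two properties are exactly what makes the simplest PDF/zip polyglot possible: zip readers locate the central directory by scanning backwards from the end of the file (and most compensate for prepended data, the same allowance that makes self-extracting archives work), while PDF readers tolerate trailing bytes. So plain concatenation already works in many readers; a sketch with placeholder file names:

```python
import zipfile

# Append a zip archive to a PDF; the result opens as both.
with open("paper.pdf", "rb") as f:
    pdf_bytes = f.read()
with open("scripts.zip", "rb") as f:
    zip_bytes = f.read()

with open("polyglot.pdf", "wb") as f:
    f.write(pdf_bytes + zip_bytes)

# Sanity check: Python's zipfile compensates for the prepended PDF bytes.
print(zipfile.ZipFile("polyglot.pdf").namelist())
```

Stricter tools may want the zip's internal offsets rewritten afterwards, which Info-ZIP's `zip -A polyglot.pdf` can do.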
Don't forget the hacker's bible! :)
https://nostarch.com/gtfo

How to get list of figures in Asciidoc

I am using AsciiDoc for an article. At the end of my document I want to have a list of figures. How do I create one? I did not find anything useful in the documentation.
Nope, there isn't one at the time of this answer. I checked the docs (which you indicated you did as well) and I also grepped the codebase. There is good news, though! You should be able to do this with an extension.
Extensions can be written in any JVM language if you're using AsciidoctorJ, or in Ruby if you're using core Asciidoctor (I'm not sure about JavaScript for Asciidoctor.js). You'll probably need to create two extensions: a TreeProcessor that walks the whole AST looking for images and collects them into a storage structure, and then an inline or block macro to actually place the list within the page.
I strongly recommend examining the API for the nodes and functions you'll want to use. There are some other examples of processors that may also be helpful to study.

ExpressionEngine: File Manager

I’m new to EE and trying to learn the basics. Some questions about the File Manager:
1. I upload a photo and put "cat, kitten" in the description. When I search for "kitten", it finds the photo, but when I search for "cat", I get nothing. Any idea what's going on?
2. The file metadata fields are: file title, file name, description, credit, and location. What if I wanted to add custom fields? How do I do that?
3. In the template files, how do I access a particular manipulation (I call this a "rendition") of an image? Say I define a rendition "thumbnail" to be 100x100. How do I access that particular rendition in a template?
4. Is there a way to randomize the file names of uploaded files?
5. After uploading an image and testing it against PageSpeed, it turns out that the image can still be optimized by losslessly compressing it. How can this be addressed?
Ah, the file manager. Not EE's brightest spot.
1. It would not surprise me if the search in the File Manager were not very robust. I'd try more variations to narrow it down (what kinds of characters affect the results: commas, dashes, spaces, etc.? Do partial terms match?).
2. You cannot currently add custom metadata to files in the file manager.
3. Use this syntax: {field_name:rendition}, e.g. {my_image:thumbnail} (docs).
4. Nope.
5. EE just uses the GD library available in your PHP install to resize images. If you want the highest possible optimization, you'll have to do your image manipulations yourself.
Given your questions, I would suggest you have a look at Assets by Pixel & Tonic. It offers a far superior file-management experience on most of these fronts.
