dompdf - Create PDF file once, but make it impossible to reopen it later?

I am using dompdf to create PDF files. What I want is to create the file once, so the user can see the contents, but protect the file so that once the user closes it, he can't reopen it later. Is that possible, or should I use another program?

This really isn't possible. It sounds like what you want is for the document to be destroyed after first reading (Mission Impossible style). That's not how the web works. A file that can be accessed over the web can be easily downloaded and opened offline.
Certainly there are hacks around this, but they would be fairly involved to implement. I once created a Flash-based viewer that loaded another file that contained the actual document. Any tech-savvy user could still obtain the original document by examining the network traffic, but your average non-technical user wouldn't know how to do it.
You do have options for enabling restrictions in a PDF, but the user will always be able to save it and re-open it later. Probably what you want to do is implement restrictions on the document and load it in an iframe to make saving less obvious.
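For the iframe part, the embed itself is trivial (the /docs/report.pdf path here is hypothetical):
<iframe src="/docs/report.pdf" width="100%" height="600"></iframe>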
You can implement print/copy restrictions as follows:
$dompdf = new DOMPDF();
$dompdf->load_html($html);
$dompdf->render();
// Encrypt after rendering: empty user password, 'ownerpass' unlocks the
// document, and the empty permissions array grants the user no extra rights.
$dompdf->get_canvas()->get_cpdf()->setEncryption('', 'ownerpass', array());
$dompdf->stream();
The parameters of setEncryption are:
string, the user password (restrictions apply; if empty, the document opens without a password prompt)
string, the owner password (unlocks the document)
array, strings naming the actions allowed when the document is opened with the user password (e.g. 'print', 'copy'). If left empty, the user is limited to saving the document.
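For example, sticking with the dompdf 0.6-style API shown above ('ownerpass' is a placeholder), a call that additionally allows printing and copying would look like:
$dompdf->get_canvas()->get_cpdf()->setEncryption('', 'ownerpass', array('print', 'copy'));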

A PDF is a document; it has no scripting instructions. Maybe you want to embed it in an exe, have the exe extract it, and keep checking the lock bit; as soon as the bit is clear, delete the file.

Related

An opened document which isn't found in Word.Application.Documents but is still locked?

I do some Word automation, filling in the blanks in some Word documents which are used as templates.
One template is used more often than the others, and this causes the error: the file ends up locked, and Word is unable to open it normally, even though I wish to open it read-only.
Opening the document:
// Close any documents that are still open, discarding changes
do until lole_word.Documents.Count = 0
    lole_word.Documents[1].Close(lole_word.SaveOptions.wdDoNotSaveChanges)
loop
boolean lb_readOnly
lb_readOnly = true
// Open the template, requesting read-only mode
lole_word.Documents.Open(as_fileIn, lb_readOnly)
The problem is that the template document is opened once, with no flaws of any kind. But when the same template has to be reused, although lole_word.Documents.Count always returns 0, Word finds the previously used template locked and ends up asking me whether I want to open it in read-only mode.
I wish to avoid this annoyance and simply open the file in read-only mode, as it will be saved elsewhere once it is filled in.
My problem is that even though I specify read-only mode by setting the second parameter to true, Word doesn't seem to see it that way and still pops up its "File in Use" dialog, at which point my application loses control over Word and crashes.
We had a similar problem, and I wish I could remember how we solved it. We may have used the Quit command. I know that we also attempted a FileOpen in exclusive mode (with no intention of using the file) and immediately closed it; if we got a file-locked return code, we prompted the user to close Excel first, because there were times they had the program open outside of OLE. I know this isn't exactly what you were looking for, but I hope it leads you somewhere. I recall this being an intermittent problem, and in some cases users had to open Task Manager and kill the extraneous Excel process.
I vaguely remember the locking being caused by the file system and not Word, as we were opening read-only as well.
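If you want to try that lock probe, here is a rough PowerBuilder sketch (reusing the question's variable names; the prompt wording is illustrative):
integer li_file
// Try to open the file exclusively; -1 means something still holds a lock
li_file = FileOpen(as_fileIn, StreamMode!, Read!, LockReadWrite!)
if li_file = -1 then
    // Likely an orphaned Word/Excel process still holds the file
    MessageBox("File locked", "Close any program using the document and retry.")
    return
end if
// The open was only a probe, so release the handle immediately
FileClose(li_file)
// The read-only OLE open should now proceed without the in-use dialog
lole_word.Documents.Open(as_fileIn, lb_readOnly)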

AppleScript: system-wide global variable accessible by all scripts

We have a PDF document processing system, implemented in AppleScript (we call the scripts from the shell using osascript). In some of the scripts, we call Acrobat Preflight droplets from the AppleScript.
This usually works without problems. However, in some cases, where the processed document is big and/or complex, the droplet returns control to the script before the report is written and the document is moved to the "success" or "failure" folder. The consequence is that the process continues but, missing the moved file, eventually fails.
The workaround so far has been to add a delay after those droplet calls. This does help, but it is a waste of time for small documents, and there will always be a document big and complex enough to take longer than the delay.
We also found out that the time needed to finish writing the report and move the document depends on the speed of the system (which was to be expected…).
A better workaround would be to calculate the delay from the document size, its number of pages, and a machine-dependent parameter. Document size and number of pages are no big deal; they can be retrieved in the AppleScript.
The problem is the machine-dependent parameter, which can be determined experimentally. But how do I make that parameter available to all the scripts needing it?
Incorporating it into the scripts is not an option, because we have a number of systems installed, and if we did that, we'd end up in a maintenance nightmare. Passing it as an argument in the initial system call is also not possible, because the calls are many, and that again would lead to a maintenance nightmare.
So, is there a way to set up a place where that machine parameter can be stored and easily read from any AppleScript, no matter how the script itself is called?
Thanks a lot for your advice.
You might find the Property List Suite in System Events useful. It’s a standard means of storing and then retrieving such information. Property List files themselves are simply XML files, so you can even create them outside of AppleScript and then read them within your scripts.
There’s a description with examples at https://apple.stackexchange.com/questions/58007/how-do-i-pass-variables-values-between-subsequent-applescript-runs-persistent
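As a minimal sketch (assuming a plist stored at a hypothetical path, with a single key named delayFactor):
tell application "System Events"
    tell property list file "/Library/Application Support/Preflight Scripts/machine.plist"
        -- read the stored machine parameter back out
        set machineParam to value of property list item "delayFactor"
    end tell
end tell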
A simple suggestion, if you only have one parameter to keep track of, would be to just have a text file in a known location on each machine. The only content of the text file would be the machine parameter. I like to use the Application Support folder for this kind of thing.
Assuming your machine parameter is CPU speed, you could save a text file at /Library/Application Support/Preflight Scripts/machinecpu.txt with the contents:
2.4
Then in AppleScript, you would just read the text file:
-- "read" returns text; coerce it if you need a number (e.g. append "as number")
set machineParam to read file "Macintosh HD:Library:Application Support:Preflight Scripts:machinecpu.txt"

Store parsable backup of all printed documents

What I'm trying to accomplish is to always keep a parsable duplicate of all printed documents, and execute a secondary process for each print.
(i.e., be able to parse all text and account for pages, vectors, images, etc.).
Processing the document can either be done immediately or deferred (immediately is desirable).
As formats go, any PDL might be suitable. My best guess is that XPS would be the best bet for a parsable format; recommendations for other formats are appreciated.
Ideally, I'd like not to mess with the user's interaction with printing (e.g., add a print settings page, or create a virtual printer which saves an XPS file and then forwards the print job to the physical printer), since users might not be tech-savvy enough to set it up or use it properly, and might mess up the process at a later date.
What I'm looking for at this time:
Documentation on the print process and flow (WDK, PDL, what else?)
How this could be accomplished, if at all possible; are there any existing solutions?
Any directions into what I should be looking at.
It's only part of an answer, but rumor has it you can tell Windows to keep spooled documents (right-click the printer, choose "Printer Properties", Advanced, "Keep Printed Documents").
You could enable this, and then create a scheduled task (or system service, etc.) that watches the spool directory and moves all files older than a certain threshold to a more appropriate location for further processing. (The age threshold would be a reasonable way to avoid trying to move files that are currently being written.)
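A rough sketch of such a watcher in Python (the paths and the 30-second age threshold are assumptions; it needs to run under an account with access to the spool directory):
import shutil
import time
from pathlib import Path

SPOOL_DIR = Path(r"C:\Windows\System32\spool\PRINTERS")  # default spool location
ARCHIVE_DIR = Path(r"C:\PrintArchive")                   # hypothetical destination
AGE_THRESHOLD = 30  # seconds since last write; assume the job has finished

ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
while True:
    now = time.time()
    # .SPL files hold the job data (the .SHD files are the job headers)
    for spooled in SPOOL_DIR.glob("*.SPL"):
        if now - spooled.stat().st_mtime > AGE_THRESHOLD:
            shutil.move(str(spooled), ARCHIVE_DIR / spooled.name)
    time.sleep(10)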
Then you'd have to find a program to convert the .spl files to whatever format you like, or try interpreting it yourself. It looks pretty low-level but Microsoft does offer some documentation about the MS-EMF and MS-EMFSPOOL formats that might be a start.

Adding Processing code to a webpage using processing.js

I have created a Processing sketch (.pde file) to make a time series (coffee production vs. time) which takes its data from an Excel file (.tsv table). Can anyone tell me how to include this in my webpage?
I have tried it with processing.js, but it does not show anything in the browser.
Without additional information: you probably have your .tsv file in a "data" directory, but aren't explicitly loading it from "./data/myfile.tsv", instead relying on Processing to autoresolve the location. If you intend to use your sketch online, always include "data/" in your file locations, because browsers resolve locations relative to "where the page is right now".
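A minimal sketch of that explicit loading ("data/coffee.tsv" and its column layout are assumptions):
String[] rows;

void setup() {
  size(400, 300);
  // Explicit "data/" prefix so the browser resolves the path relative to the page
  rows = loadStrings("data/coffee.tsv");
}

void draw() {
  background(255);
  if (rows == null) return;                 // data not available yet
  for (int i = 1; i < rows.length; i++) {   // skip the header row
    String[] cols = split(rows[i], '\t');   // e.g. cols[0] = year, cols[1] = production
    // ...plot the point here...
  }
}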

How do you do full page caching with dynamic information

I imagine a very common scenario is one where an entire dynamic page can be cached in such a way that the entire framework/CMS stack is bypassed, except that some small amount of information changes depending on whether somebody is logged in or not. For example, the menu might change from "login" to "Welcome Somebody!". Then there's no way to cache the whole page, obviously.
One solution I was thinking of would be to load this information via AJAX after the page has already loaded.
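Something along these lines (plain JavaScript; the /userbox endpoint and element id are hypothetical):
fetch('/userbox')                            // returns the per-user HTML fragment
  .then(function (r) { return r.text(); })
  .then(function (html) {
    document.getElementById('userbox').innerHTML = html;
  });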
Does anybody have any advice here?
Write the page stream to the file system. Name the file with the entire URL, including the query string. If the page contains session data, include a session id in the file name. Keep a list of cached pages with their names somewhere, so that you can look up whether something is in the cache without having to go to the file system.
This is essentially what FatWire Content Server does.
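A rough sketch of that approach in PHP (the cache directory, md5 naming, and render_page() entry point are assumptions, not FatWire's actual scheme):
<?php
// Build the cache key from the full URL (query string included),
// plus the session id when the page depends on session data.
$key = $_SERVER['REQUEST_URI'];
if (session_id() !== '') {
    $key .= '|' . session_id();
}
$file = '/var/cache/pages/' . md5($key) . '.html';

if (is_file($file)) {
    readfile($file);    // cache hit: bypass the framework entirely
    exit;
}

ob_start();             // cache miss: capture the page as it renders
render_page();          // hypothetical framework entry point
file_put_contents($file, ob_get_contents(), LOCK_EX);
ob_end_flush();         // send the captured output to the client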
Since this appears to be language-agnostic, you could create a temp file with the raw output of the page, and then when the same page is loaded again, dump the contents of the temp file directly into the HTTP response of the current page.
