Run a script on files sent to a CUPS print queue? - macos

I am trying to configure a Mac OS X print queue so that a script can do some processing on each file printed before forwarding it on to another CUPS printer (on the same host).
I have been reading up on CUPS and found an article describing how to use lpadmin to configure a queue with a "System V style interface script", but the caveat is that such a queue is seen as a "Generic Printer". I presume that means the user loses all ability to choose paper trays, etc. when submitting a job from the Print dialog. Is that correct?
[That makes this approach undesirable for my purposes, because the final destination is a POS receipt printer with non-standard paper sizes and print job options for cutting the paper roll, opening the cash drawer, etc.]
Is there a better way to accomplish my goal, which is simply to run a script on each receipt printed through a particular CUPS print queue?

I believe you need to configure a CUPS filter, which can be created for just about any printer. It basically works as an input/output filter, modifying the job in flight.
Anyway, here is a link I found that explains an approach to doing so (although it is for a slightly different use case). Hope it helps.
Programming a Filter/Backend to 'Print to PDF' with CUPS from any Mac OS X application
be well.
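To make the idea concrete: a CUPS filter is just an executable that receives the job and writes the (possibly modified) data to stdout, so the downstream driver still sees a normal job with all its options intact. A minimal pass-through sketch (the log path and processing step are hypothetical placeholders, not part of the original answer):

```shell
#!/bin/sh
# Sketch of a pass-through CUPS filter.
# CUPS invokes filters with: job-id user title copies options [file]
JOB_ID="$1"; USER_NAME="$2"; TITLE="$3"

# When a sixth argument is present, the job data is in that file;
# otherwise it arrives on stdin.
if [ -n "$6" ]; then
    exec < "$6"
fi

# Custom processing would go here; this sketch only records the job ...
echo "job $JOB_ID ($TITLE) for $USER_NAME" >> /tmp/receipt-filter.log

# ... and forwards the data unchanged to the next filter/backend.
cat
```

The filter would then be wired into the queue (for example via a cupsFilter line in the PPD, as the linked article describes), leaving the receipt printer's paper sizes and cut/drawer options available in the Print dialog.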

Related

Making STDIN unbuffered under Windows in Perl

I am trying to do input processing (from the console) in Perl asynchronously. My first approach was to use IO::Select but that does not work under Windows.
I then came across the post Non-buffered processor in Perl which roughly suggests this:
binmode STDIN;
binmode STDOUT;
STDIN->blocking(0) or warn $!;
STDOUT->autoflush(1);
while (1) {
    my $buffer;
    my $read_count = sysread(STDIN, $buffer, 4096);
    if (not defined($read_count)) {
        next;
    } elsif (0 == $read_count) {
        exit 0;
    }
}
That works as expected for regular Unix systems but not for Windows, where the sysread actually does block. I have tested that on Windows 10 with 64-bit Strawberry Perl 5.32.1.
When you check the return value of blocking() (as done in the code above), it turns out that the call fails with the funny error message "An operation was attempted on something that is not a socket".
Edit: My application is a chess engine that theoretically can be run interactively in a terminal but usually communicates via pipes with a GUI. Therefore, Win32::Console does not help.
Has something changed since the blog post was published? The author explicitly claims that this approach works on Windows. Is there any other option that I can go with, maybe some module from the Win32:: namespace?
The solution I now implemented in https://github.com/gflohr/Chess-Plisco/blob/main/lib/Chess/Plisco/Engine.pm (search for the method __msDosSocket()) can be outlined as follows:
If Windows is detected as the operating system, create a temporary file as a Unix domain socket with IO::Socket::UNIX for writing.
Do a fork(), which on Windows actually creates a thread in Perl because the system does not have a real fork().
In the "parent", create another instance of IO::Socket::UNIX with the same path for reading.
In the "child", read from standard input with getline(). This blocks, of course. Every line read is echoed to the write end of the socket.
The "parent" uses the read-end of the socket as a replacement for standard input and puts it into non-blocking mode. That works even under Windows because it is a socket.
From here on, everything is working the same as under Unix: All input is read in non-blocking mode with IO::Select.
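The outline above can be sketched roughly as follows (this is not the author's exact code; it uses a loopback socket rather than a Unix domain socket, and the names are illustrative):

```perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

# Listener on the loopback interface; the OS picks a free port.
my $listen = IO::Socket::INET->new(Listen => 1, LocalAddr => '127.0.0.1')
    or die "listen: $!";

my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # "Child" (a thread on Windows): read STDIN blockingly and
    # echo every line into the socket.
    my $w = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => $listen->sockport,
    ) or die "connect: $!";
    while (defined(my $line = <STDIN>)) {
        print {$w} $line;
        exit 0 if $line =~ /^quit\b/;   # crude shutdown, as described below
    }
    exit 0;
}

# "Parent": the accepted socket replaces STDIN and can be put into
# non-blocking mode even on Windows, because it is a socket.
my $in = $listen->accept or die "accept: $!";
$in->blocking(0);
my $sel = IO::Select->new($in);
# The main loop would now use $sel->can_read($timeout) and sysread($in, ...).
```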
Instead of a Unix domain socket, it is probably wiser to route the communication through the loopback interface, because under Windows it is hard to guarantee that a temporary file gets deleted when the process terminates (you cannot unlink it while it is in use). It is also stated in the comments that IO::Socket::UNIX may not work under older Windows versions, so INET sockets are probably more portable.
I also had trouble terminating both threads. A call to kill() does not seem to work. In my case, the protocol that the program implements specifies that the command "quit", read from standard input, should cause the program to terminate. The child thread therefore checks whether the line read was "quit" and terminates with exit in that case. A proper solution should find a better way for the parent to kill the child.
I did not bother to ignore SIGCHLD (it does not exist under Windows) or call wait*(), because fork() does not spawn a new process image under Windows but only a new thread.
This approach is close to the one suggested in one of the comments to the question, only that the thread comes in disguise as a child process created by fork().
The other suggestion was to use the module Win32::Console. This does not work for two reasons:
As the name suggests, it only works for the console. But my software is a backend for a GUI frontend and rarely runs in a console.
The underlying API is for keyboard and mouse events. It works fine for keystrokes and most mouse events, but polling an event blocks as soon as the user has selected something with the mouse. So even for a real console application, this approach would not work. A solution built on Win32::Console must also handle events like pressing the CTRL, ALT, or Shift key, because such events do not guarantee that input can be read immediately from the tty.
It is somewhat surprising that a task as trivial as non-blocking I/O on a file descriptor is so hard to implement in a portable way in Perl because Windows actually has a similar concept called "overlapped" I/O. I tried to understand that concept, failed at it, and concluded that it is true to the Windows maxim "make easy things hard, and hard things impossible". Therefore I just cannot blame the Perl developers for not using it as an emulation of non-blocking I/O. Maybe it is simply not possible.

Is it possible to set a setting to show how many pages a spool queue print job has with a raw PostScript print file in Windows

Currently it will show N/A as the number of pages in the Windows spool queue. Is it possible to show the number of pages, and which page is printing, with bi-directional support on and with this file format? Surely the printer itself must know what page is printing and how many pages have been printed. Thanks.
I can only answer the PostScript part of this question, not the Windows spool queue. I don't know how you are inserting the PostScript into the spool queue, but assuming you are printing from an application then my very hazy memory suggests that it's the application which declares the number of pages in the job.
The printer doesn't know how many pages a PostScript program contains. In fact, in the most general case, it's not possible to determine how many times a PostScript program will execute the showpage operator. That's because PostScript is a programming language and the program might execute differently on different interpreters, or under different conditions. It might print an extra page on Tuesdays for example.
Or to take an admittedly silly example:
%!
0 1 rand {
    showpage
} for
That should print a random number of empty pages.
There are conventions which can indicate how many pages a PostScript program is expected to execute: the Document Structuring Conventions (DSC) comprise comments which tell a PostScript processor (not a PostScript interpreter, which ignores comments; a processor treats the PostScript as just data and doesn't execute it) how many pages a document has, amongst other data such as the requested media size, fonts, etc.
Not all PostScript programs contain this information, and it can sometimes be awkward to extract (the '(atend)' value is particularly annoying, forcing the processor to read to the end of the file to find the value). Additionally, people have been known to treat PostScript programs like Lego bricks and just concatenate them together, which makes the comments at the start untrustworthy. This is especially true if the first program has %%Pages: (atend)
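For illustration, a DSC-conforming program that defers its page count to the trailer looks something like this (comments only; the interpreter ignores them, and a processor reads them as data):

```postscript
%!PS-Adobe-3.0
%%Pages: (atend)
%%Page: 1 1
showpage
%%Page: 2 2
showpage
%%Trailer
%%Pages: 2
%%EOF
```

A spooler that wants the page count up front has to scan all the way to %%Trailer here, which is exactly the "(atend)" annoyance described above.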
Most printers do execute a job server loop, which means that it is possible for them to track which page they are executing, even if they don't know how many pages there will be.
It's also certainly true that a PostScript program running on a printer with a bi-directional interface could send information back to the print spooler, but there's no guarantee that a printer has a bi-directional communication interface, and there's no standard that I know of for the printer to follow when sending such information back to the computer. What should it send? Will it be the same for Windows as for Mac?

Inconsistent printing with CUPS on Raspberry Pi

I am printing an image with a thermal printer off of a Raspberry Pi. This works fine most of the time, but I have an issue where the job never completes.
The command is just a simple lp <filename>, and I have the thermal printer set as my default. This works, but occasionally it won't print in that terminal; then just opening a new terminal and sending the exact same command works. I've had to keep a handful of terminals open and jump between them until one of them prints it.
Does anyone have any insight to why this would be happening and what a possible solution might be?
I am running this from:
Raspbian Stretch
CUPS v2.2.1
Zebra ZD410
Here is the end of the output of an unsuccessful job (Job 118) and a successful job (Job 119) from /var/log/cups/error_log
I ran into a very similar issue to the one you are describing with Zebra ZD410 printers and CUPS printing from a Raspberry Pi.
By default, CUPS will reattach the USB printer between print jobs. The ZD410 does not like that: the next print job appears to go to the printer, but nothing prints, and nothing in the logs shows any error. This happens intermittently, but frequently enough to make the printer unusable.
I was able to fix this problem by deleting the ZD410 printer from CUPS and adding it back using lpadmin, putting this configuration flag at the end of the lpadmin -p command:
-o usb-no-reattach-default=true
Try my suggestion and please let me know if it resolves your issue. Hopefully, this will help others with this problem as well.
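Put together, the re-add might look roughly like this (the queue name and device URI below are placeholders; take the real URI from lpinfo -v and your existing queue definition):

```shell
# Remove the existing queue, then re-add it with USB reattach disabled.
lpadmin -x ZD410
lpadmin -p ZD410 -E \
        -v "usb://Zebra/ZD410" \
        -o usb-no-reattach-default=true
```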
You do not even name the exact Linux distro you are running, and which version of CUPS this involves.
No idea what your problem is...
But here is an idea how to start narrowing it down:
Enable LogLevel debug in /etc/cups/cupsd.conf and restart the CUPS daemon.
Now you can follow whatever CUPS does with your print jobs:
less /var/log/cups/error_log
This may give you a hint. Note that all lines in the log starting with E denote a message with log level 'error'. (I denotes log level 'info', W is 'warning', and N is 'notice'.)
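The steps above, as commands (paths per a standard CUPS install; the service name may differ on your distro):

```shell
# Turn on debug logging (equivalent to editing /etc/cups/cupsd.conf
# and restarting the daemon by hand):
sudo cupsctl --debug-logging

# Re-submit the job, then look for error/warning/notice lines:
grep -E '^[EWN] ' /var/log/cups/error_log | tail -n 20
```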
There's one more thing you could do:
Determine whether exactly the same job file sometimes prints and sometimes doesn't. If so, it points to a "random" failure of the print device, or of the transfer path from CUPS to the printer, where some bit flip introduces an error. If not (the same file never prints), that gives you another hint to narrow down the real problem.

Yes/No Dialog on every page the printer will print

I'm doing this on a print server. I am trying to make a program that will show a Yes/No dialog box before proceeding to print each page submitted by any application or from the network.
So far it has been easy to pause and resume a job, but I haven't found a solution to pause and resume at the level of an individual page.
For reference, here's more about controlling the printer on Windows
https://msdn.microsoft.com/en-us/library/windows/desktop/ff686805(v=vs.85).aspx
I am using Windows 10 64bit. Any language will do.
Thanks.
I'm prepared to be proven wrong, but I don't believe this is trivially possible. The print spooler doesn't have a notion of where pages begin and end in a spooled print session.
The location of each page is language-dependent, and the print spooler can hardly be expected to parse the stream to find the start and end of each page; it would be very slow, for one thing.
The alternative would be for the print processor to note the beginning and end of each page and the offset of the spool file at that moment, and then pass that information separately to the spooler.
But there are problems with that too. The 'RAW print' capability allows the printing application to inject 'stuff' straight into the print stream, bypassing the print processor altogether. This is common with page-layout applications when printing to a PostScript printer, for example, particularly on Windows, as supporting CMYK isn't really possible when printing on Windows, and spot colours are even harder.
The injected code can contain any number of pages and the print spooler has no way to know where each page is.
So in the general case, I don't think this is possible, at least not in the print spooler.

WebSphere MQ q program read/write to file

I use the following to write queue contents to a file:
q -xb -ITESTQ -mTEST > messages.out
I had 3 binary messages in the queue that were written to the file successfully. Now I need to load the same file back to the queue (the same queue, at a later time). When I do:
q -xb -oTESTQ -mTEST < messages.out
It puts 9 messages instead of 3. I am guessing the formatting is misread while the file is loaded. I've noticed there is a -X option in the q program. What is its usage? What other options do I have?
You really need to look at the QLoad program (SupportPac MO03) for this. It's by the same author as the Q program and every bit as good a tool. Also free. As the author explains in the manual:
Ever since I released my MA01 (Q Utility) SupportPac I have had
periodic requests to explain how it can be used to unload, and
subsequently reload, messages from a queue. The answer has always been
that this is not what MA01 is for and that surely there must be a
utility available. Well, after sufficient numbers of these requests I
looked for a utility myself and didn’t really find anything which
fitted the bill. What was needed was a very simple, some would say
unsophisticated, program which unloaded a queue into a text file. The
notion of a text file was important because a number of users wanted
the ability to change the file once it had been created. I also find
that text based files are more portable and so this seemed useful if
we want to unload a queue, say on Windows, and then load the messages
again on a Solaris machine. The disadvantage of this approach is that
the file is larger than it would be in binary mode. Storing data using
the hex representation of the character rather than the character
itself essentially uses twice as much space. However, in general I do
not envisage people using this program to unload vast amounts of
message data but a few test messages or a few rogue messages on the
dead letter queue which are then changed and reloaded elsewhere.
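From memory, the QLoad equivalents of the two q commands above look roughly like this (flag meanings per the MO03 manual; check qload's own help output for your version, as the exact options may differ):

```shell
# Unload TESTQ into an editable text file (messages stored as hex/text,
# so the file is roughly twice the binary size, as the manual notes):
qload -m TEST -i TESTQ -f messages.qld

# Later, load the file back onto the queue:
qload -m TEST -o TESTQ -f messages.qld
```

Because the unload format is text, the round trip preserves message boundaries, avoiding the 3-becomes-9 splitting you saw when piping q's raw output back in.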
