I recently upgraded Ghostscript from 9.10 to 9.53.3 (also tried 9.50 first). Scripts that have run for years started failing with
Unrecoverable error: rangecheck in .putdeviceprops
After some research and trial-and-error testing, it seems that Ghostscript changed some of the command-line switches from "-s" to "-d". For example:
-sGrayImageResolution=600 now errors but -dGrayImageResolution=600 does not.
Some switches appear to accept either form. For example:
-sColorImageResolution=600 and -dColorImageResolution=600 both work.
(Note: When I say "work" I mean they do not throw the error.)
I have two questions:
Where can I find a complete list of Ghostscript command line parameters? The Ghostscript documents seem to be incomplete.
What is the difference between -s and -d for a switch? (this is really just a curiosity question)
Thanks
ColorImageResolution and GrayImageResolution are PostScript distiller parameters, documented in VectorDevices.htm#PDFWRITE, so they are used with setdistillerparams and currentdistillerparams in PostScript code for -sDEVICE=pdfwrite. Also see TN 5151, Acrobat Distiller Parameters. For example:
<< /MonoImageResolution 72 >> setdistillerparams
EDIT: Here are some more: distillerparams
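As a rough sketch of how such parameters are typically passed on the command line when producing a PDF (input.pdf and out.pdf are placeholder names here):
gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite \
   -dGrayImageResolution=600 -dColorImageResolution=600 \
   -sOutputFile=out.pdf input.pdf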
Some settings can be defined in the system dictionary without error but are never used for anything, so be sure to check all of the documentation. There are also some settings not listed in the documentation that can be found by searching the Ghostscript Resource/Init files, which is especially useful for advanced users. Some of these are unique to Ghostscript and go beyond normal PostScript.
This is from the Ghostscript Use.htm#Options:
-Dname
-dname
Define a name in systemdict with value=true.
-Dname=token
-dname=token
Define a name in systemdict with the given value. The value must be a valid PostScript token (as defined by the token operator). If the token is a non-literal name, it must be true, false, or null. It is recommended that this is used only for simple values -- use -c (above) for complex values such as procedures, arrays or dictionaries.
Note that these values are defined before other names in systemdict, so any name that conflicts with one usually in systemdict will be replaced by the normal definition during the interpreter initialization.
-Sname=string
-sname=string
Define a name in systemdict with a given string as value. This is different from -d. For example, -dXYZ=35 on the command line is equivalent to the program fragment
/XYZ 35 def
whereas -sXYZ=35 is equivalent to
/XYZ (35) def
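To see the difference in practice, here is a small sketch (XYZ is just an arbitrary name, as in the documentation's example; the PostScript type operator reports the type of the defined value):
gs -q -dNODISPLAY -dXYZ=35 -c "XYZ type == quit"   # prints /integertype
gs -q -dNODISPLAY -sXYZ=35 -c "XYZ type == quit"   # prints /stringtype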
I have installed the Ruby environment manager rbenv, Ruby, RubyGems, and PNGlitch (on macOS). Now how do I use PNGlitch?
The best documentation I have been able to find is on this page, and here's an example of the syntax given:
How to use this library: The Simple Way
png = PNGlitch.open '/path/to/your/image.png'
png.glitch do |data|
data.gsub /\d/, 'x'
end
png.save '/path/to/broken/image.png'
png.close
Okay, great. When I insert my file paths, save that code as an .rb file, and open it, I just get:
test.rb: command not found
If I paste it directly into Terminal I get:
png = PNGlitch.open '/Users/username/Documents/testimage.png'
-bash: png: command not found
ComputerName:~ username$ png.glitch do |data|
> data.gsub /\d/, 'x'
-bash: png.glitch: command not found
-bash: data: command not found
-bash: data.gsub: command not found
ComputerName:~ username$ end
-bash: end: command not found
ComputerName:~ username$ png.save '/Users/username/Documents/testimage_glitched.png'
-bash: png.save: command not found
ComputerName:~ username$ png.close
I also tried the syntax given on this page and entered:
pnglitch /Users/username/Documents/testimage.png --filter=Sub /Users/username/Documents/testimage_glitched.png
...this resulted in getting the following message:
tried to create Proc object without a block
Usage:
pnglitch <infile> [--filter=<n>] <outfile>
Options:
-f, --filter=<n> Fix all filter types as passed value before glitching.
A number (0..4) or a type name (none|sub|up|average|paeth).
--version Show version.
-h, --help Show this screen.
↑ I guess this is the developer's idea of documentation. 🤣
Well, trying to follow that example I also did this:
pnglitch </Users/username/Documents/testimage.png> [--filter=<2>] </Users/username/Documents/testimage_glitched.png>
...but that only resulted in:
-bash: syntax error near unexpected token 2
(I chose 2 because apparently that corresponds to the "Sub" filter.)
I tried variants of this syntax as well, including omitting characters <> and [].
There must be some assumed knowledge here that I don't have. So what I would like to know is:
How can I actually use PNGlitch to glitch a PNG image?
How can I use PNGlitch to glitch all the PNG images in a folder?
Any additional advice on using different filters would also be appreciated.
Thank you.
There's a lot going on here that needs to be cleared up.
Ruby scripts need Ruby to run
You can't just paste these into bash and expect anything useful to happen.
The usual procedure is one of two variants. Either:
Create a .rb script, like example.rb
Run ruby example.rb where that's your script name at the end.
Or use the "hash-bang" method:
Create a script with #!/usr/bin/env ruby as the very first line.
Make this script executable with chmod +x example.rb
Run this script directly, ./example.rb or whatever path it has.
Note that example.rb by itself will not work unless it is in your path, hence the ./ is necessary.
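Concretely, here is a minimal sketch of the second variant, using the library example from the question (the require 'pnglitch' line and the file paths are assumptions -- adjust them to match your gem setup and your actual files):
cat > glitch.rb <<'EOF'
#!/usr/bin/env ruby
require 'pnglitch'    # assumed require name for the PNGlitch gem

png = PNGlitch.open '/Users/username/Documents/testimage.png'
png.glitch do |data|
  data.gsub /\d/, 'x'
end
png.save '/Users/username/Documents/testimage_glitched.png'
png.close
EOF
chmod +x glitch.rb    # make it executable
./glitch.rb           # run it directly (or simply: ruby glitch.rb)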
Command line example syntax
Here <name> has special meaning: it's just a way of writing name as if it had italics or other special formatting. On a text-mode terminal it's not practical to add markup like that, since output is limited to plain ASCII in most cases, so this convention evolved.
Within the POSIX shell > and < have special meaning: they redirect output to a file and read input from a file, respectively. For example, ls > ls.txt dumps the output of ls into a file called ls.txt, while cat < ls.txt reads in the contents of ls.txt and displays it.
Things like [name] mean optional arguments, like [--help] means the --help argument is optional.
Within the POSIX shell [ and ] have special meaning. They can be used in an if construct, but more commonly in file wildcards, like l[abc].txt means any of la.txt, lb.txt or lc.txt.
Putting this together it's possible to understand the notation used here:
pnglitch <infile> [--filter=<n>] <outfile>
Where that means infile is your "input file" argument, and outfile is your "output file" argument, and --filter is an optional argument taking n as an input.
So you call it like this:
pnglitch input.png output.png
Or with an option, like you did:
pnglitch testimage.png --filter=sub testimage_glitched.png
Though note I've used lower-case sub, as that's precisely what's in the help output, and following casing conventions usually matters.
I am creating an IDE and I wish to implement jump to definition.
I found the perfect tool for it: ctags (https://github.com/universal-ctags/ctags)
Now the only problem is that the tags file that ctags creates looks something like this:
QLineNumberArea 2point56mb.py /^class QLineNumberArea(QWidget):$/;" c
I understand the format: {tagname}Tab{tagfile}Tab{tagaddress}
So from what I understand: tagname: QLineNumberArea, tagfile: 2point56mb.py and tagaddress: /^class QLineNumberArea(QWidget):$/;" c
The tagaddress looks like gibberish but it's a vim/ex editor command that takes you to the definition.
Now from what I read on this website: https://github.com/cztchoice/ctags/wiki/Tag-Format
Under Security it states:
Specifically, these two Ex commands are allowed:
A decimal line number:
89
A search command. It is a regular expression pattern, as used by Vi, enclosed in // or ??:
/^int c;$/
?main()?
Now here comes the problem:
I need my tags file to have a line number, instead of the search command.
I tried looking through the documentation for ctags (http://docs.ctags.io/en/latest/) but I couldn't find anything that would help me.
Does anyone know how to make ctags give tag addresses as line numbers, rather than as search commands?
That documentation is only for the changes introduced by universal ctags. What you're looking for is the documentation for exuberant ctags:
--excmd=type
Determines the type of EX command used to locate tags in the source file. [Ignored in etags mode]
Which can also be achieved with -n.
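For example, a quick sketch using the file from the question (the line number in the output is only illustrative; it depends on your source):
ctags -n 2point56mb.py      # equivalent to: ctags --excmd=number 2point56mb.py
# the generated tags file then uses line numbers as addresses, e.g. (fields are Tab-separated):
# QLineNumberArea   2point56mb.py   2;"   c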
Pandoc supports a YAML metadata block in markdown documents. This can set the title and author, etc. It can also manipulate the appearance of the PDF output by changing the font size, margin width and the frame sizes given to figures that are included. Lots of details are given here.
I'd like to use the metadata block to remember the command line arguments that I'm supposed to be using, such as --toc and --number-sections. I tried this, adding the following to the top of my markdown:
---
title: My Title
toc: yes
number-sections: yes
---
Then I used the command line:
pandoc -o guide.pdf articheck_guide.md
This did produce a table of contents, but didn't number the sections. I wondered why this was, and if there is a way I can specify this kind of thing from the document so that I don't need to add it on the command line.
YAML metadata are not passed to pandoc as arguments, but as variables. When you call pandoc on your MWE, it does not behave like this:
pandoc -o guide.pdf articheck_guide.md --toc --number-sections
as one might expect. Rather, it effectively calls:
pandoc -o guide.pdf articheck_guide.md -V toc:yes -V number-sections:yes
Why, then, does your MWE produce a TOC? Because the default LaTeX template makes use of a toc variable:
~$ pandoc -D latex | grep toc
$if(toc)$
\setcounter{tocdepth}{$toc-depth$}
So setting toc to any value should produce a table of contents, at least in LaTeX output. In this template, there is no number-sections variable, so that one doesn't work. However, there is a numbersections variable:
~$ pandoc -D latex | grep number
$if(numbersections)$
Setting numbersections to any value will produce numbering in LaTeX output with the default template:
---
title: My Title
toc: yes
numbersections: yes
---
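For comparison, the same effect can be requested explicitly on the command line through pandoc's -V/--variable option (a sketch reusing the question's file names):
pandoc -o guide.pdf articheck_guide.md -V numbersections=true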
The trouble with this solution is that it only works with some output formats. I thought I had read somewhere on the pandoc mailing list that we would soon be able to use metadata in YAML blocks as intended (i.e. as arguments rather than variables), but I can't find it anymore, so maybe it won't happen very soon.
Have a look at panzer (GitHub repository).
This was recently announced and released by Mark Sprevak -- a piece of software that adds the notion of 'styles' to Pandoc.
It's basically a wrapper around Pandoc. It exploits the concept of YAML metadata blocks to the maximum.
The 'styles' provide a way to set all options for a Pandoc document conversion process with one line ("I want this document to be an article/CV/notes/letter.").
You can regard this as a more general abstraction than Pandoc templates. Styles are combinations of...
...Pandoc command line options,
...metadata settings,
...templates,
...instructions to run filters, and
...instructions to run pre/postprocessors.
These settings can be customized on a per-output-type as well as a per-document basis. Styles can be...
...combined and
...can bear inheritance relations to each other.
panzer styles simplify Makefiles: they bundle everything concerning the look of a document in one place -- the YAML metadata (a block in the Markdown file, or a separate file).
You just add one line of metadata (style: ...) to your document, and it will be treated as a letter/article/CV/notebook or whatever.
I created an empty directory in zsh and added a file called hello.rb by doing the following:
echo 'Hello, world.' >hello.rb
If I want to make changes in this file using the terminal, what's the proper way of doing it without opening the file itself in, let's say, TextEditor?
I want to be able to make changes in the file hello.rb strictly by using my zsh terminal. Is this at all possible?
Zsh is not a terminal but a shell. The terminal is the window in which the shell executes. The shell is the text program that prompts you for commands and executes them.
If you want to edit the file within the terminal, then using vim, nano, emacs -nw or any other text-mode text editor will do it. They are not Zsh commands, but external commands that you can call from Zsh or from any other shell.
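For instance, with any of the editors just mentioned (vim is only the one picked for this example):
vim hello.rb      # or: nano hello.rb, or: emacs -nw hello.rb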
If you want to edit the file within Zsh, then use zed. You will need to run this once (in ~/.zshrc):
autoload zed
and then you can edit hello.rb using:
zed hello.rb
(exit and save with Control-j)
You have already created and edited the file.
To edit it again, you can use >> to append.
For example
echo "\nAnd you too!\n" >> hello.rb
This would edit the file by concatenating the additional string.
Edit: of course, by your use and definition of 'changing' a file, this is the simplest way to do so using the shell. Normally, though, you would probably want to use a terminal editor.
Zed is a great answer, but to be even more stripped down -- for a level of editing that even a script can do -- zsh can handle all 256 characters/byte-values (including null) in variables. This means you can edit line by line or chunk by chunk almost any kind of file data directly from the command line. This is approximately what zed/vared does.
If you have a current version with all the standard modules included, it is a great benefit to have zsh/mapfile or zsh/system loaded so that you can capture any of the characters that are left out by command expansion (zed uses $(<$file) to read a file to memory). Here is an example of a way you could use this variable-manipulation method:
% typeset -T Buffer buffer $'\n'
% typeset -T Edit edit $'\n'
It is most common to use newline to divide a text file one wishes to edit.
This handy feature will make zsh give you full access to one line or a range of lines at a time without unintentionally messing with the data.
% zmodload zsh/mapfile
% Buffer=$mapfile[path/to/file]
Here, I use the handy mapfile module because I can load the contents of a file byte-for-byte. Alternatively, you can use % Buffer="$(<path/to/file)", like zed does, but then the trailing newlines are always removed and other word splitting is possible with a typo or environment variation, so the simplicity of the module's method is best. When finished, you save the changes by simply assigning the $Buffer value back to $mapfile[file], or by using a more classic command like printf '%s' $Buffer >path/to/file (this is exact string writing, byte-for-byte, so any newlines or formatting you added back will be written).
You transfer the lines between Buffer and Edit using the mapped arrays as follows, however, remember that in its simplest form assigning one array to another drops elements that are completely empty (one \n \n two \n three becomes one \n two \n three). You can suppress this empty-element removal by quoting the input array and adding an '#' symbol to its index "$buffer[#]", if using the whole array; and adding the '#' symbol to the flags if using a range of the array "${(#)buffer[2,50]}". Preserving empty lines can be a bit troublesome for typing, but these multiple arrays should only be used in a script or function, since you can just edit one line at a time from the command line with buffer[54]="echo This is a newly written line."
% edit=($buffer[50,70])
...
% buffer[50,70]=($edit)
This is standard Zsh syntax; it means that in the ... area you can edit and manipulate the $edit array of lines or the $Edit scalar block of text all you want, including adding more lines or taking some away. When you add the lines back into $buffer, it will replace the specified block of lines (50-70) with the new lines, automatically expanding or reducing the other array elements to accommodate the reintegrated lines. Because of the dynamic array accommodation, you can also just insert whatever you need as a new line, like this: buffer[40]=("new string as new line" "$buffer[40]"). This inserts it before the index given, while swapping the order of the elements ("$buffer[40]" "new string as new line") inserts the new line after the index given. Either will adjust all following elements, including totally empty elements, to their current index plus one.
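Putting the pieces above together, here is a minimal end-to-end sketch (the file name hello.rb and the replacement line are purely illustrative; mind the empty-line caveat described above):
zmodload zsh/mapfile
typeset -T Buffer buffer $'\n'        # tie the scalar Buffer to the line array buffer
Buffer=$mapfile[hello.rb]             # load the file byte-for-byte
buffer[1]="puts 'Hello, world.'"      # replace line 1 with new content
printf '%s' $Buffer >hello.rb         # write the edited text back, byte-for-byte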
If you wanted to re-write the zed function to use this method in some complex way like: newzed /path/to/file [start-line] [end-line], that would be great and handy too.
Before I leave, I wanted to mention that when using vared directly, once you have these commands typed on the interactive terminal, you may find it frustrating that you can't use "Enter" for inserting or appending new lines. I found that with my terminal and Zsh version, ESC-ENTER worked well, but I don't know about older versions (Mac usually comes stocked with a not-most-recent version, if my memory is right). If that doesn't work, you may have to do some documentation digging to learn how to set up your ZLE (Zsh Line Editor, a component of Zsh) or acquire a newer version of Zsh.
Also, some other shells, when indexing a scalar variable, may count by the byte because in ASCII and C a byte is the same as a character, but Zsh supports UTF-8 and will index a scalar string by the UTF-8 character unless you turn off the shell option multibyte (on by default). This will help with manipulating each line if you need to use the old byte-character indexing.
Also, if you have a version of Zsh that for whatever reason was not compiled with zsh/mapfile or zsh/system, then you can achieve a similar effect using a number of options to the read builtin, like <path/to/file |read -u 0 -k $[5 * 2**20] -r -s Buffer ||(($#Buffer)). As you can see here, you have to make the read length big enough to accommodate the file's size or it will leave off part of the file, and the read return code will nearly always be an error because of not being able to read the full length of the string. We fix this with ||(($#Buffer)), but this builtin was simply not meant to handle large-scale byte manipulation efficiently, so what you see is what you can get from it.
I am reading the Software Foundations book and I came across a command that declares parameters as implicit:
Arguments nil {X}.
where, for example:
Inductive list (X:Type) : Type :=
| nil : list X
| cons : X -> list X -> list X.
However, whenever I try to execute such commands I get the following message:
Error: No focused proof (No proof-editing in progress).
The same message appears even if I try to compile scripts that come with the book. What could be the problem?
I am using Coq version 8.3pl4 and CoqIDE editor.
I just tried it on my (somewhat old) Coq 8.4 and I don't have any problem with the implicit declaration.
However, if I write Argument instead of Arguments (notice the lack of "s"), I get
Error: Unknown command of the non proof-editing mode.
Did you spell it correctly?
EDIT: sorry, I misread your version. It seems that the Arguments command was added in 8.4 (it does not appear in the 8.3 documentation but does in the 8.4 documentation). I advise you to update your Coq version if possible, or restrict yourself to the 8.3 Implicit-related commands (wild guess: Implicit Arguments foo.).