This is closely related to (and stems from the same issue as)
What is the Debian equivalent of urw-fonts (needed for utf-8 in wkhtmltopdf)?
But I think this is a valid question on its own. As described in the link, I'm trying to convert a multi-language utf-8 html document to pdf using wkhtmltopdf (via command line in Debian). Several of the languages are not being rendered correctly and show up as white or black rectangles, presumably because wkhtmltopdf cannot find or access the necessary fonts.
Question: where on the system (Debian) does wkhtmltopdf look for fonts, and how can I check which font(s) it's looking for (if possible) given a particular command?
The fonts are under /usr/share/fonts/.
This command shows which fonts a command foo accesses:
strace -e open -o >(sed -n '/^open("\/usr\/share\/fonts\//p') foo
strace shows syscalls. -e open means to only show calls to open(2). -o >(sed -n '/^open("\/usr\/share\/fonts\//p') sends strace's output to sed, which prints only the calls that open files under /usr/share/fonts/.
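As a concrete, hypothetical invocation against wkhtmltopdf (input.html and output.pdf are placeholder names), adding -f so that any child processes are traced too; on newer systems the C library usually goes through openat(2), so tracing both open and openat and matching on the path alone catches more:
strace -f -e trace=open,openat -o >(sed -n '/\/usr\/share\/fonts\//p') wkhtmltopdf input.html output.pdf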
For some programs it is useful to turn on verbose output and check stderr to see whether they report which fonts are being used.
For your specific problem, also check that the encoding of the HTML files is specified correctly.
Take for example the output from strace
open("/usr/share/fonts/X11/Type1/n019004l.pfb", O_RDONLY) = 8
It's in the same format as you would use open() in a C program. /usr/share/fonts/X11/Type1/n019004l.pfb is the path to the file, O_RDONLY means it was opened read-only, and 8 means the call succeeded and returned file descriptor 8. Refer to the open(2) man page.
I'm looking for a simple explanation. I've already scoured the internet but found nothing satisfactory. I know I use -p to open multiple files in Vim in tabs, so I'm assuming it pipes everything into an array of some sort and Vim takes that argument and opens everything up in tabs.
Open all files in a folder
From the Vim docs:
-p[N] Open N tab pages. When N is omitted, open one tab page for each file.
It's nothing to do with Bash and everything to do with Vim. Whatever arguments you pass to Vim are filenames, with or without the -p option; -p only controls whether all the files appear in tabs or whether you have to switch between them in a single window.
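A minimal sketch of the difference (the file names here are just placeholders):
vim -p foo.txt bar.txt   # one tab page per file; switch tabs with gt or :tabnext
vim foo.txt bar.txt      # one window; switch files with :next and :prev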
I tried this just now:
grep -RlI "id=\"kw\"" * | xargs vim
That gave me 16 results. It opened the first result in Vim. I made my very first edit and hit :q since I didn't know the shortcut to jump to the next file.
It threw me back to the console (I am SSHed into a server). My console is messed up now: anything I type I can't see, and any time I hit enter it seems to process the command, but the display/view is screwed up, so
[meder#linode] is indented at least halfway across my console. reset does nothing, since the session seems to have messed up my real console.
Can anyone offer a solution that doesn't have this same downside? Or can anyone provide an explanation for why :qing out of the very first file messed up my console?
Background information: my PC runs Debian/Ubuntu, and I am SSHed into a RHEL box. The files I opened were plain-text phtml/php files, not some weird binary files with crazy characters in them.
Here's a screenshot of what happened
EDIT #1: I just typed reset again and it seemed to work. The first reset did not work, I think, because the console had somehow inserted some whitespace-ish character into it. Anyway, I would still like an explanation for this weird behaviour.
Try:
vim -o `grep -RlI "id=\"kw\"" * `
From the man page for xargs:
Undefined behavior may occur if utility reads from the standard input.
That line isn't in the Linux man page, but it is present on my Mac. If you want to run a program that itself reads standard input, the usual Linux version of xargs needs to be told to read its items from a file instead (see the sketch after this excerpt):
OPTIONS
--arg-file=file, -a file
Read items from file instead of standard input.
If you use this option, stdin remains unchanged
when commands are run. Otherwise, stdin is
redirected from /dev/null.
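For instance (a sketch; the grep pattern is the one from the question, and /tmp/matches is just a made-up temp-file name):
grep -RlI "id=\"kw\"" * > /tmp/matches
xargs -a /tmp/matches vim -p
Because xargs takes its arguments from /tmp/matches, vim's standard input is still the terminal.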
Vim is intended to run with both standard input and standard output connected to a real terminal (a very rare case these days) or a pseudo-tty device. Weird things will happen if you upset this arrangement.
The fundamental problem with your command was that, with standard input redirected to the pipe, xargs had no way to run a vim with a "normal" standard input. So the vim mode changes and command input were not what you expected.
You can probably fix this by typing a return, a tilde, and a period. This will force your ssh session to close from your end; you can then ssh in again and run ps to check for anything left hanging in the background that you should kill(1).
You can use :next or :n to get to the next file to edit. You can also use vim -o to open up all the matching files in different windows in Vim.
Not sure why your console is messed up though. I tried using your command and my console was fine.
Console options are set by stty, so you may want to save its options to a bash variable and restore them after vim exits, like this:
function vim()
{
    STTYOPTS="$(stty --save)"
    # "command" prevents the wrapper from calling itself recursively
    command vim "$@"
    stty "${STTYOPTS}"
}
But it is probably better to use zsh for this task: if you put the single line ttyctl -f into your ~/.zshrc, zsh will automatically restore the terminal options after a program exits. ttyctl is a zsh builtin, so you cannot use it from bash.
Other folks covered what happened and what to do about it. As to why, the answer to that probably lies in what input Vim received from the xargs command and tried to execute as if that input came from a terminal. I don't know how to talk terminal, but you can imagine that Vim got some strange commands that crashed it or told it to quit. Similarly unpredictable things happen when you cat a binary file.
Anyway, I have another idea. Have you tried using vimgrep to browse a list of files matching a pattern?
:vimgrep /id="kw"/ *
:copen
This greps for id="kw" in all files in the current directory. :copen opens a window with the list of matches; you can browse that list and press Enter to jump to a match.
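If you would rather launch that search straight from the shell, something like this should work (a sketch using the pattern from the question):
vim -c 'vimgrep /id="kw"/ *' -c 'copen'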
For more information, see
:help grep
:help :vimgrep
:help :copen
:help quickfix
If you really need that -I option, see
:help :grep
:help 'grepprg'
See also: Vim: Warning: Input is not from a terminal
Try to use ... | xargs sh -c '...' and then read from the controlling terminal device /dev/tty.
echo ~/.profile ~/.bashrc | xargs sh -c 'vim "$#" </dev/tty' dummy_script_name
# based on a tip by Laszlo Ersek on http://unix.derkeiler.com/Newsgroups/comp.unix.programmer/2010-03/msg00051.html
#find . -maxdepth 1 -type f | xargs sh -c 'rm -i "$#" </dev/tty' dummy_script_name
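A variant of the same trick that copes with file names containing spaces, assuming GNU grep's -Z/--null option (the pattern is the one from the question):
grep -RlIZ "id=\"kw\"" . | xargs -0 sh -c 'vim -p "$@" </dev/tty' dummy_script_name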
I have written an External Tool that uses plink.exe to execute gcc on a Linux system and then capture the output back on VS's output window (there is a checkmark in Tools/External Tools/Use Output Window). But Linux outputs with UTF-8 and so I get some garbage. Is there any way to get VS to translate that UTF-8 output to readable output?
For example, Linux is trying to output this:
test.c:214: warning: conflicting types for ‘test_zero_read’
but it shows up in VS's output window like this:
test.c:214: warning: conflicting types for â€˜test_zero_readâ€™
Changing the font of the output window can also affect how the characters are displayed.
In VS go to Tools -> Options -> Environment -> Fonts and Colors ->Show Settings for: Output Window
Pipe the output on the Linux box through unix2dos before it is sent back.
Edit: another go:
iconv -f utf8 -t iso-8859-1 oldfile > newfile
(from here)
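If the goal is specifically to feed Visual Studio's output window, converting to the Windows ANSI code page may work better. A rough sketch of the remote command, assuming a Western code page (cp1252) and that //TRANSLIT is acceptable for characters with no exact equivalent:
gcc -c test.c 2>&1 | iconv -f utf-8 -t cp1252//TRANSLIT
Run that as the command plink executes, so the conversion happens before the text reaches the output window.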
This is related to this question : How to get coloured file listing in windows cmd shell ?
I'm trying to get, wouldn't you believe it, coloured file listing in the Windows cmd shell. Windows is XP SP2, if that matters.
In the old DOS days there used to be little programs like hdir, adir and such which displayed that nicely. Nowadays, such programs are no more.
There is, however, ls from unixkit-tiny or unixtools. Unfortunately, it uses ANSI escape codes for displaying colours, and cmd doesn't handle those too well.
There are several solutions which include loading ansi.sys and command.com, but command.com doesn't handle long filenames that well, and is awfully slow. Even then sometimes it has problems displaying colours.
So what I'm asking, is there a way to get coloured file listing in windows cmd shell, apart from using cygwin ? Or is there a way to get ANSI escape codes to work with cmd.exe in a way so that native ls will play nicely ?
I ran across ANSICON at http://adoxa.110mb.com/ansicon/index.html (there is also an ansicon GitHub repo).
Using it to colorize NAnt output. ls --color is being processed correctly.
Source code is provided, but I haven't examined it.
Actually, I reckon A+ for ansicon. Use
ansicon.exe -I
Installs it as a filter on your CMD.exe sessions. Works a treat with HTTY (ruby gem).
:-)
You could start the builtin Telnet server, firewall it to only allow localhost access, and use a telnet client that understands such escapes - even the native one. (I know, an ugly hack.)
It's possible to patch cmd.exe....
http://gynvael.coldwind.pl/?id=130&lang=en
I would like to compare all GNU Unix manuals with Mac's Unix manuals using sdiff.
I do not know how to go through, for instance, all of Mac's Unix manuals and then save them to a file.
When the manuals are in two files, the comparison can be done with
sdiff file1 file2
Perhaps, there is some index of Unix command names such that we can do the following
sdiff <(man *[in the index]) <(man *[in the index])
How can you compare all the GNU Unix manuals with all the Unix manuals on a Mac?
[edit]
Mac's manuals are at /usr/share/man/man[1-9]/*.
I have an encoding problem with them when I try to cat them.
Another problem is to find the location of Coreutils' manuals.
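The files under /usr/share/man are roff source (often gzipped), which is why catting them looks garbled; the usual approach is to let man or nroff render them and strip the overstriking with col -b. A rough sketch for a single page, where gnu-ls.1 stands for a GNU man page fetched separately and saved uncompressed:
sdiff <(man 1 ls | col -b) <(nroff -man gnu-ls.1 | col -b)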
Your goal, to identify the differing parameters between the BSD and GNU/Linux versions of the various programs, is going to be somewhat tedious. It's useful to note that there are other variants of all the commands as well: there are System V versions, BSD versions and GNU versions, and the Mac uses a mish-mash of all three. In any event, as a starting point, the files themselves are filled with formatting macros that you have no interest in, so pipe the output of man through col -b to get data you can diff. To generate the list of commands, you could just ls -1 /bin /usr/bin. Then something like this would get you most of the way:
while read command ; do
    # render both pages as plain text
    man "$command" | col -b > output1
    man ./path/to/GNU/"$command" | col -b > output2
    # keep only differing lines that look like option descriptions (start with a dash)
    diff output1 output2 | grep '^[<>] *-' > "$command.diffs"
done <<EOF
diff
grep
sort
...
...
EOF
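To drive the loop from everything installed instead of a hand-written list, the same body works unchanged; listing each directory separately avoids the headers ls prints when given more than one argument (a sketch, with the GNU path still a placeholder):
{ ls /bin ; ls /usr/bin ; } | sort -u |
while read command ; do
    man "$command" | col -b > output1
    man ./path/to/GNU/"$command" | col -b > output2
    diff output1 output2 | grep '^[<>] *-' > "$command.diffs"
done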
GNU stands for "GNU's Not Unix". GNU is not based in any way on UNIX; it could not be, due to copyright and licensing issues.
Most GNU documentation was written in Texinfo format (which Debian later converted to roff (man) format because users wanted man pages). The documentation is in no way based upon the BSD documentation; everything in GNU was written from scratch.
Trying to diff between the two is like diffing a dictionary against a thesaurus. You will find that they both contain many of the same words, but are entirely different books written by entirely different people.
The documentation in no way adequately explains the differences between GNU and BSD (and by extension MacOS).
All man pages live under /usr/share/man/*; I'm not sure what you are attempting to accomplish here. Mac runs on BSD, so most applications are going to be the same as the ones you would find on a BSD machine. If you still wanted to do it, you would need to grab the manual pages for the same applications, and the same versions, on both systems (since man pages change between versions). That said, I would say do a diff between /usr/share/man/man[0-9]/* and the expanded tarball of all the man pages from the Linux box.
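A minimal sketch of that last suggestion, assuming the Linux pages were unpacked into a local directory called linux-man (a made-up name):
# recursively compare the Mac man tree against the unpacked Linux man tree
diff -r /usr/share/man ./linux-man > man.diffs
In practice the pages may be gzipped on one side and not the other, so they might need to be decompressed first for the diff to be readable.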