What is the `#Name#` in command line? - bash

I'm looking at the Tsung source code. There is a line like the following in the file tsung.sh.in:
ERL_OPTS=" $ERL_DIST_PORTS -smp auto +P $MAX_PROCESS +A 16 +K true #ERL_OPTS# "
What does the #ERL_OPTS# mean?

This seems to be something that gets substituted by autoconf during the build process.
Generally, a .in file gets preprocessed by some build script. Autoconf uses #IDENTIFIER# to indicate the place where the actual value has to be put in. The preprocessed version loses the .in extension, thus generating tsung.sh in this particular case.
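For illustration only, a build step could perform that kind of substitution with sed; this is a minimal sketch (the ERL_OPTS value is a made-up placeholder, and the real Tsung build may use a different mechanism):
ERL_OPTS="+pc unicode"   # hypothetical value determined at build time
sed -e "s|#ERL_OPTS#|$ERL_OPTS|g" tsung.sh.in > tsung.sh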

Related

autoconf: how do I substitute the library prefix?

CLISP's interface to PARI is configured with the configure.in containing AC_LIB_LINKFLAGS([pari]) from lib-link.m4.
The build process also requires the Makefile to know where the datadir of PARI is located. To this end, Makefile.in has
prefix = #LIBPARI_PREFIX#
DATADIR = #datadir#
and expects to find $(DATADIR)/pari/pari.desc (normally
/usr/share/pari/pari.desc or /usr/local/share/pari/pari.desc).
This seems to work on Mac OS X where PARI is installed by homebrew in /usr/local (and LIBPARI_PREFIX=/usr/local), but not on Ubuntu, where PARI is in /usr, and LIBPARI_PREFIX is empty.
How do I insert the location of PARI's datadir into the Makefile?
PS. I also asked this on the autoconf mailing list.
PPS. In response to @BrunoHaible's suggestion, here is a meager attempt at debugging on Linux (where LIBPARI_PREFIX is empty).
$ bash -x configure 2>&1 | grep found_dir
+ found_dir=
+ eval ac_val=$found_dir
+ eval ac_val=$found_dir
You are trying to use $(prefix) in an unintended way. In an Autotools-based build system, $(prefix) represents the prefix of the target installation location of the software you're building. By setting it in your Makefile.in, you are overriding the prefix that configure will try to assign. However, since you appear not to have any installation targets at that level anyway, that's probably more an issue of poor form than a cause of malfunction.
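For context, the prefix is normally supplied (or defaulted) when configure is run, and the standard installation directories hang off it; a small illustrative invocation (the path is just an example):
./configure --prefix=/usr/local
With the default directory layout, datadir then resolves to ${prefix}/share, i.e. /usr/local/share here.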
How do I insert the location of PARI's datadir into the Makefile?
I'd recommend computing or discovering the needed directory in your configure script and exporting it to the generated Makefile via its own output variable. Let's take the second part first, since it's simple. In configure.in, having in some manner located the wanted data directory and assigned it to a variable
DATADIR=...
you would make an output variable of it via the AC_SUBST macro:
AC_SUBST([DATADIR])
Since you are using only Autoconf, not Automake, you would then manually receive that into your Makefile by changing the assignment in your Makefile.in:
DATADIR = #DATADIR#
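One caveat, offered as an assumption about a standard Autoconf setup rather than your particular build: the substitution only happens for files that configure is told to generate, so configure.in needs to list the Makefile somewhere, for example:
AC_CONFIG_FILES([Makefile])
AC_OUTPUT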
Now, as for locating the data directory in the first place, you have to know what you're trying to implement before you can implement it. From your question and followup comments, it seems to me that you want this:
Use a data directory explicitly specified by the user if there is one. Otherwise,
look for a data directory relative to the location of the shared library. If it's not found there then
(optional) look under the prefix specified to configure, or specifically in the specified datadir (both of which may come from the top-level configure). Finally, if it still has not been found then
look in some standard locations.
To create a configure option by which the user can specify a custom data directory, you would probably use the AC_ARG_WITH macro, maybe like this:
AC_ARG_WITH([pari-datadir], [AS_HELP_STRING([--with-pari-datadir],
[explicitly specifies the PARI data directory])],
[], [with_pari_datadir=''])
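With such an option in place, a user could then point configure at a custom location explicitly (the path below is only a placeholder):
./configure --with-pari-datadir=/opt/pari/share/pari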
Thanks to @BrunoHaible, we see that although the Gnulib manual does not document it, the macro's internal documentation specifies that if AC_LIB_LINKFLAGS locates libpari then it will set LIBPARI_PREFIX to the library directory prefix. You find that that does work when the --with-libpari option is used to give it an alternative location to search, so I suggest working with that. You certainly can try to debug AC_LIB_LINKFLAGS to make it set LIBPARI_PREFIX in all cases in which the lib is found, but if you don't want to go to that effort then you can work around it (see below).
Although the default or specified installation prefix is accessible in configure as $prefix, I would suggest instead going to the specified $datadir. That is slightly tricky, however, because by default it refers to the prefix indirectly. Thus, you might do this:
eval "datadir_expanded=${datadir}"
Finally, you might hardcode a set of prefixes such as /usr and /usr/local.
Following on from all the foregoing, then, your configure.in might do something like this:
DATADIR=
for d in \
${with_pari_datadir} \
${LIBPARI_PREFIX:+${LIBPARI_PREFIX}/share/pari} \
${datadir_expanded}/pari \
/usr/local/share/pari \
/usr/share/pari
do
AS_IF([test -r "$[]d/pari.desc"], [DATADIR="$[]d"; break])
done
AS_IF([test x = "x$DATADIR"], [AC_MSG_ERROR(["Could not identify PARI data directory"])])
AC_SUBST([DATADIR])
Instead of guessing the location of datadir, why don't you ask PARI/GP where its datadir is located? Namely,
$ echo "default(datadir)" | gp -qf
"/usr/share/pari"
does the trick.
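If gp can be assumed to be installed at configure time, that same trick could feed the DATADIR output variable discussed above; a sketch (assumes gp is on PATH and strips the surrounding quotes from its output):
DATADIR=`echo "default(datadir)" | gp -qf | tr -d '"'`
AC_SUBST([DATADIR])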

CMake override cached variable using command line

As I understand it, when you provide a variable via the command line with cmake (e.g. -DMy_Var=ON), that variable is stored inside the cache. When that variable is then accessed on future runs of the CMake script, it will always get the value stored inside the cache, ignoring any subsequent -DMy_Var=OFF parameters on the command line.
I understand that you can force the cache variable to be overwritten inside the CMakeLists.txt file using FORCE, or by deleting the cache file; however, I would like to know whether there is a nice way for -DMy_Var=XXX to be effective every time it is specified.
I have a suspicion that the answer is not to change these variables within a single build but rather have separate build sub-dirs for the different configs. Could someone clarify?
I found two methods for changing CMake variables.
The first one is suggested in the previous answer:
cmake -U My_Var -D My_Var=new_value
The second approach (I like it a bit more) is using CMake internal variables. In that case your variables will still be in the CMake cache, but they will be changed on each cmake invocation if they are specified with -D My_Var=.... The drawback is that these variables are not visible in the GUI or in the list of the user's cache variables. I use the following approach for internal variables:
if (NOT DEFINED BUILD_NUMBER)
set(BUILD_NUMBER "unknown")
endif()
It allows me to set the BUILD_NUMBER from the command line (which is especially useful on the CI server):
cmake -D BUILD_NUMBER=4242 <source_dir>
With that approach, if you don't specify BUILD_NUMBER (but it was specified in a previous invocation), the cached value will be used.
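For example, on a CI server the sequence behaves like this (a sketch; <source_dir> is a placeholder):
cmake -D BUILD_NUMBER=4242 <source_dir>   # sets BUILD_NUMBER and stores it in the cache
cmake <source_dir>                        # no -D given: the cached 4242 is reused
cmake -D BUILD_NUMBER=4243 <source_dir>   # a new -D value replaces the cached one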
You could use
cmake -UMy_Var -DMy_Var=new_value
see the documentation: https://cmake.org/cmake/help/v3.9/manual/cmake.1.html
I hope this helps.
Found this post by coincidence.
It seems the behavior described by the OP isn't the case for CMake 3.12 onwards; I haven't tested previous releases, so I cannot confirm for those.
Variables provided with -D on the command line are stored in CMakeCache.txt. They can be overridden; even if the same variable is provided repeatedly, the last occurrence sets the value.
For example, with a very simple CMake script
message(STATUS "FOO = " ${FOO})
$ cmake -DFOO=123 -DFOO=321 .. # the last one takes effect
-- FOO = 321
-- Configuring done
-- Generating done
-- Build files have been written to: xxx
$ cmake .. # cache is remembered
-- FOO = 321
-- Configuring done
-- Generating done
-- Build files have been written to: xxx
$ cmake -DFOO=changed .. # override it
-- FOO = changed
-- Configuring done
-- Generating done
-- Build files have been written to: xxx

How to use Linqpad to run a command with path and confirmations

I wanted to make a quick LINQPad script to run a tfpt command that undoes unmodified files.
Syntax is like this:
"c:\myProject> tfpt uu . /noget /recursive"
So first I need to change the path to c:\myProject.
Secondly I need to run the command "tfpt uu . /noget /recursive".
And finally I need to confirm the undo.
Can this be done with LINQPad's Util.Cmd... if so, how?
Managed to do it like so (made it a one-liner):
Util.Cmd("echo y |tfpt uu C:\\myProject /noget /recursive");
Yes you can! With the latest bits from the beta version http://www.linqpad.net/Beta.aspx you get a utility called lprun.exe. The syntax is straightforward:
Usage: lprun [<options>] <scriptfile> [<script-args>]
options: (all case-insensitive)
-format={text|html|htmlfrag|csv|csvi} Output format. csvi=invariant CSV.
-cxname=<connection-name> Sets/overrides a script's connection.
-lang=<language> Sets/overrides a script's language.
-warn Writes compiler warnings (to stderr).
-optimize Enables compiler optimizations.
-nunuget Freshens NuGet references to latest.
scriptfile: Path to script. If it's a .linq file, -lang & -cxname are optional.
script-args: Args following <script-filepath> are passed to the script itself.
Examples:
lprun TestScript.linq
lprun TestScript.linq > results.txt
lprun script1.linq | lprun script2.linq
lprun -format=csv script.linq HelloWorld
Obviously you need to create a proper LINQPad script in the language of your choice, and yes, Util.Cmd() is the way to go.
HTH

Programming a Filter/Backend to 'Print to PDF' with CUPS from any Mac OS X application

Okay so here is what I want to do. I want to add a print option that prints whatever the user's document is to a PDF and adds some headers before sending it off to a device.
I guess my questions are: how do I add a virtual "printer" driver for the user that will launch the application I've been developing that will make the PDF (or make the PDF and launch my application with references to the newly generated PDF)? How do I interface with CUPS to generate the PDF? I'm not sure I'm being clear, so let me know if more information would be helpful.
I've worked through this printing with CUPS tutorial and seem to get everything set up okay, but the file never seems to appear in the appropriate temporary location. And if anyone is looking for a user-end PDF printer, this cups-pdf-for-mac-os-x is one that works through the installer; however, I have the same issue of no file appearing in the indicated directory when I download the source and follow the instructions in the readme. If anyone can get either of these to work on a Mac through the terminal, please let me know step-by-step how you did it.
The way to go is this:
Set up a print queue with any driver you like, but I recommend using a PostScript driver/PPD. (A PostScript PPD is one that does not contain any *cupsFilter: ... line.)
Initially, use the (educational) CUPS backend named 2dir. It can be copied from this website: KDE Printing Developer Tools Wiki. When copying, make sure you get the line endings right (Unix-style).
Commandline to set up the initial queue:
lpadmin \
-p pdfqueue \
-v 2dir:/tmp/pdfqueue \
-E \
-P /path/to/postscript-printer.ppd
The 2dir backend will now write all output to the directory /tmp/pdfqueue/, using a unique name for each job. For now, each result should be a PostScript file (with none of the modifications you want yet).
Locate the PPD used by this queue in /etc/cups/ppd/ (its name should be pdfqueue.ppd).
Add the following line (ideally near the top of the PPD):
*cupsFilter: "application/pdf 0 -"
(Make sure *cupsFilter starts at the very beginning of the line.) This line tells cupsd to auto-set-up a filtering chain that produces PDF and then call the last filter, named '-', before it sends the file via a backend to the printer. That '-' filter is a special one: it does nothing; it is a pass-through filter.
Re-start the CUPS scheduler:
sudo launchctl unload /System/Library/LaunchDaemons/org.cups.cupsd.plist
sudo launchctl load /System/Library/LaunchDaemons/org.cups.cupsd.plist
From now on your pdfqueue will cause each job printed to it to end up as PDF in /tmp/pdfqueue/*.pdf.
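To check that the queue behaves as described, you could send a small test job from Terminal (a sketch; any printable file will do in place of /etc/motd):
lp -d pdfqueue /etc/motd
ls /tmp/pdfqueue/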
Study the 2dir backend script. It's simple Bash, and reasonably well commented.
Modify 2dir so that it applies your desired modifications to the PDF before saving the result in /tmp/pdfqueue/*.pdf...
Update: Looks like I forgot 2 quotes in my originally prescribed *cupsFilter: ... line above. Sorry!
I really wish I could accept two answers, because I don't think I could have done this without all of @Kurt Pfeifle's help on Mac specifics and on understanding printer drivers and file locations. But here's what I did:
Download the source code from codepoet cups-pdf-for-mac-os-x. (For non-Macs, you can look at http://www.cups-pdf.de/.) The readme is very detailed, and if you read all of the instructions carefully it will work; however, I had a little trouble getting all the pieces, so I will outline exactly what I did in the hope of saving someone else some trouble. Below, the directory with the source code is called "cups-pdfdownloaddir".
Compile cups-pdf.c contained in the src folder as the readme specifies:
gcc -O9 -s -lcups -o cups-pdf cups-pdf.c
There may be a warning: ld: warning: option -s is obsolete and being ignored, but this posed no issue for me. Copy the binary into /usr/libexec/cups/backend. You will likely have to use the sudo command, which will prompt you for your password. For example:
sudo cp /cups-pdfdownloaddir/src/cups-pdf /usr/libexec/cups/backend
Also, don't forget to change the permissions on this file: it needs root permissions (700), which can be set with the following after moving cups-pdf into the backend directory:
sudo chmod 700 /usr/libexec/cups/backend/cups-pdf
Edit the file /cups-pdfdownloaddir/extra/cups-pdf.conf. Under the "PDF Conversion Settings" header, find the line under GhostScript that reads #GhostScript /usr/bin/gs. I did not uncomment it, in case I needed it later, but simply added beneath it the line Ghostscript /usr/bin/pstopdf. (There should be no preceding # for any of these modifications.)
Find the line under GSCall that reads
#GSCall %s -q -dCompatibilityLevel=%s -dNOPAUSE -dBATCH -dSAFER -sDEVICE=pdfwrite -sOutputFile="%s" -dAutoRotatePages=/PageByPage -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode -dPDFSETTINGS=/prepress -c .setpdfwrite -f %s
Again without uncommenting this, I added beneath it the line
GSCall %s %s -o %s %s
Find the line under PDFVer that reads #PDFVer 1.4 and change it to PDFVer, with no spaces or characters following it.
Now save and exit editing before copying this file to /etc/cups with the following command
sudo cp cups-pdfdownloaddir/extra/cups-pdf.conf /etc/cups
Be careful when editing in a text editor, because newlines in UNIX and Mac environments differ and can potentially ruin scripts. You can always use a perl command to fix them, but I'm paranoid and prefer not to deal with it in the first place.
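For reference, one such perl one-liner converts Mac/Windows line endings to UNIX ones in place (a sketch; the .bak suffix keeps a backup copy):
perl -pi.bak -e 's/\r\n?/\n/g' cups-pdf.conf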
You should now be able to open a program (e.g. Word, Excel, ...), select File >> Print, and find an available printer called CUPS-PDF. Print to this printer, and by default you should find your PDFs in /var/spool/cups-pdf/yourusername/.
Also, I figured this might be helpful because it helped me: if something gets screwed up while following these directions and you need to start over or get rid of it, to remove the driver you need to (1) remove the cups-pdf backend from /usr/libexec/cups/backend, (2) remove cups-pdf.conf from /etc/cups/, and (3) go into System Preferences >> Print & Fax and delete the CUPS-PDF printer.
This is how I successfully set up a pdf backend/filter for myself, however there are more details, and other information on customization contained in the readme file. Hope this helps someone else!

Including a postscript file into another one?

I wonder if there is a standard way to include a PostScript file in another.
For example, say I have got one file of data generated by a 3rd party program:
%!PS
/mydata [ 1 2 3 4 5 6
(...)
1098098
1098099
] def
and I would like to include it into a main PS document
%!PS
/processData
{
mydata { (..) } foreach
}
(...)
(data.ps) include %<=== ???
Thanks
The operator you want is run.
string run -
execute contents of named file
Unfortunately, run is not allowed if the interpreter has the SAFER option set.
Edit: Bill Casselman, author of "Mathematical Illustrations", has a Perl script called psinc that you can use to "preprocess" your PostScript files, inlining all (...) run files.
The standard way to include PostScript is to make the code to be included an EPS (Encapsulated PostScript) file. There are rules on how encapsulated PostScript must be created, and how to include it. See Adobe Tech Note 5002 'Encapsulated PostScript File Format Specification'
Simply executing 'run' on a PostScript file may well work, but it might also cause problems. Many PostScript files (especially those produced by 3rd parties) will include procedure definitions which may clash with your own names, and also the included program may leave the interpreter in a state different from the one it was in when the included file was executed. At the very least you should execute a save/restore pair around the code included via 'run'.
I would suggest a meta-solution: use the C preprocessor or the M4 preprocessor. They are powerful tools, and their power may find use in other ways as well, not only file inclusion. Though this was not asked, using a Makefile to automate the whole workflow would also be wise (a sketch follows the examples below). By combining a preprocessor with a Makefile you can elegantly automate complex inclusion processing and beyond.
C Preprocessor
Including a file:
#include "other.ps"
Commandline for preprocessing:
cpp -P main.pps main.ps
M4 Preprocessor
Including a file:
include(other.ps)
Commandline for preprocessing:
m4 main.pps > main.ps
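And to tie in the Makefile idea mentioned above, a minimal rule for the cpp variant might look like this (a sketch; file names are the ones used in the examples, and the recipe line must be indented with a tab):
main.ps: main.pps data.ps
	cpp -P main.pps main.ps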
