I wonder if there is a standard way to include a PostScript file in another one.
For example, say I have one file of data generated by a 3rd-party program:
%!PS
/mydata [ 1 2 3 4 5 6
(...)
1098098
1098099
] def
and I would like to include it in a main PS document:
%!PS
/processData
{
mydata { (..) } foreach
} def
(...)
(data.ps) include %<=== ???
Thanks
The operator you want is run.
string run -
execute contents of named file
Unfortunately, run is not allowed if the interpreter has the SAFER option set.
Edit: Bill Casselman, author of *Mathematical Illustrations*, has a Perl script called psinc you can use to "preprocess" your PostScript files, inlining all (...) run files.
The standard way to include PostScript is to make the code to be included an EPS (Encapsulated PostScript) file. There are rules on how Encapsulated PostScript must be created and how to include it; see Adobe Tech Note 5002, 'Encapsulated PostScript File Format Specification'.
Simply executing 'run' on a PostScript file may well work, but it might also cause problems. Many PostScript files (especially those produced by 3rd parties) will include procedure definitions which may clash with your own names, and the included program may also leave the interpreter in a state different from the one it was in before the included file was executed. At the very least you should execute a save/restore pair around the code included via 'run'.
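For instance, a minimal sketch of that advice (note that restore also discards any definitions the included file makes in local VM, so this suits included files that only paint; anything you need to keep, like the asker's /mydata, must be defined outside the pair):
save                 % snapshot VM and the graphics state
(data.ps) run        % execute the included file
restore              % undo its side effects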
I would suggest a meta-solution: use the C preprocessor or the M4 preprocessor. They are powerful tools, and their power may find use in other ways as well, not only file inclusion. Though this was not asked, using a Makefile to automate the whole workflow would also be wise; by combining a preprocessor with a Makefile you can elegantly automate complex inclusion processing and beyond (see the sketch at the end of this answer).
C Preprocessor
Including a file:
#include "other.ps"
Commandline for preprocessing:
cpp -P main.pps main.ps
M4 Preprocessor
Including a file:
include(other.ps)
Commandline for preprocessing:
m4 main.pps > main.ps
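As mentioned above, a Makefile can automate the preprocessing step. A minimal sketch for the cpp variant (file names are illustrative; the recipe line must start with a tab):
main.ps: main.pps other.ps
	cpp -P main.pps main.ps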
Related
I need to make a shell script to check my algorithms against loads of data (test packages saved in .in files; every package contains a folder with .in files and another with the .out files that hold the expected results).
Sometimes there are about 1000 files in one package, so there's no point doing it manually. I need some kind of loop which opens each .in file, redirects it to the input of my C++ program, and also redirects the program's output (saving the result to .out files). But the point is I can't pick this language up as quickly as I need.
And I would like this script to compare the results of my algorithm to the .out files from the packages.
for f in ExternalIn/*.in; do  # part of the code which runs my algorithm and compares its .out file to the .out file from the package
Skipping checks for missing files, whitespace-safety, etc., you probably need something like:
for f in ExternalIn/*.in; do
# diff the result of my_cpp_app eating file.in with file.out
# and store the comparison result in file.diff
diff ${f/.in/.out} <(my_cpp_app <$f 2>/dev/null) > ${f/.in/.diff}
done
Although I would probably do it with a find/xargs pipeline, which is not only safer but also allows parallel execution.
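A minimal sketch of that variant (it assumes GNU xargs for -P and bash for the process substitution; my_cpp_app is the hypothetical program name from above):
find ExternalIn -name '*.in' -print0 |
  xargs -0 -n1 -P4 bash -c 'diff "${1%.in}.out" <(my_cpp_app <"$1" 2>/dev/null) > "${1%.in}.diff"' _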
Or even write a Makefile for this and use make, which is after all a tool for exactly this kind of work.
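A hedged sketch of that make route (untested; one .diff per test case, parallelizable with make -j; bash is needed for the process substitution):
SHELL := /bin/bash
INS   := $(wildcard ExternalIn/*.in)

all: $(INS:.in=.diff)

# '|| true' keeps make going even when diff finds differences
%.diff: %.in %.out
	diff $*.out <(my_cpp_app <$<) > $@ || true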
Is it possible to tell ASDF that it should produce only one fas(l) file for an entire system? This file should be a concatenation (in the right order) of all the compiled files of the system, including all files of the systems on which the target system depends.
Yes, with compile-bundle-op (ASDF 3.1): http://common-lisp.net/project/asdf/asdf/Predefined-operations-of-ASDF.html
Edit: Actually, monolithic-compile-bundle-op seems to be what was asked for (as shown in other answers).
If you have to predict the extension, use uiop:compile-file-type.
And/or you can just call (asdf:output-files 'asdf:monolithic-compile-bundle-op :my-system) to figure out what is actually used.
The monolithic-compile-bundle-op operation will create a single compiled file which includes all dependencies, while compile-bundle-op creates a file for every system.
Example of use:
(asdf:operate 'asdf:monolithic-compile-bundle-op :my-system)
This command will create file my-system--all-systems.fas(l) in output directory of target project, as well as "bundle" files for every system, named like my-system--system.fas(l).
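Once the operation has run, the bundle can presumably be loaded like any other fasl; a minimal sketch that reuses the output-files call from above instead of guessing the file name:
(load (first (asdf:output-files 'asdf:monolithic-compile-bundle-op :my-system)))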
I have a project that contains a folder to manage file templates, but it doesn't look like Go provides any support for non-Go-code project files. The project itself compiles to an executable, but it needs to know where this template folder is in order to operate correctly. Right now I do a search for $GOPATH/src/<templates>/templates, but this feels like kind of a hack to me because it would break if I decided to rename the package or host it somewhere else.
I've done some searching and it looks like a number of people are interested in being able to "compile" the asset files by embedding them in the final binary, but I'm not sure how I feel about this approach.
Any ideas?
Either pick a path (or a list of paths) that users are expected to put the supporting data in (/usr/local/share/myapp, ...) or just compile it into the binary.
It depends on how you are planning to distribute the program. As a package? With an installer?
With most of my programs I enjoy having just a single file to deploy, and I only have a few templates to include, so I do that.
I have an example using go-bindata where I build the HTML template with a Makefile, but if I build with the 'devel' flag it will read the file at runtime instead, to make development easier.
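A minimal sketch of that idea (not the answerer's exact code: Asset is the accessor function go-bindata generates, and this flag-based switch stands in for their 'devel' build setup):
package main

import (
	"flag"
	"os"
)

var devel = flag.Bool("devel", false, "read templates from disk instead of the binary")

// loadTemplate returns the named template, either from disk (development)
// or from the copy go-bindata embedded in the binary (deployment).
func loadTemplate(name string) ([]byte, error) {
	if *devel {
		return os.ReadFile("templates/" + name)
	}
	return Asset("templates/" + name)
}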
I can think of two options: use a -cwd flag, or infer from the cwd and arg 0.
-cwd path/to/assets
path/to/exe -cwd=path/to/exe/assets
Internally, the executable would chdir to wherever -cwd points, and then it can use relative paths throughout the application. This has the added benefit that the user can change the assets without having to recompile the program.
I do this for config files. Basically the order goes:
process cmd arguments, looking for a -cwd variable (it defaults to empty)
chdir to -cwd
parse config file
reparse cmd arguments, overwriting the settings in the config file
I'm not sure how many arguments your app has, but I've found this to be very useful, especially since Go doesn't have a standard packaging tool that will compile these assets in.
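A minimal sketch of steps 1 and 2 above (the flag name is from the answer; everything else is illustrative):
package main

import (
	"flag"
	"log"
	"os"
)

func main() {
	cwd := flag.String("cwd", "", "directory containing the app's assets")
	flag.Parse()
	if *cwd != "" {
		// make relative paths resolve against the assets directory
		if err := os.Chdir(*cwd); err != nil {
			log.Fatal(err)
		}
	}
	// ... parse the config file, then re-parse the flags so they win ...
}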
infer from arg 0
Another option is to use the first argument and get the path to the executable. Something like this:
here := filepath.Dir(os.Args[0])
if !filepath.IsAbs(here) {
	wd, err := os.Getwd() // os.Getwd returns (string, error)
	if err != nil {
		log.Fatal(err)
	}
	here = filepath.Join(wd, here)
}
This will get you the path to where the executable is. If you're guaranteed the user won't move this without moving the rest of your assets, you can use this, but I find it much more flexible to use the above -cwd idea, because then the user can place the executable anywhere on their system and just point it to the assets.
The best option would probably be a mixture of the two: if the user doesn't supply a -cwd flag, they probably haven't moved anything, so infer from arg 0 and the cwd; the -cwd flag overrides this.
I have a conf file that is of the format:
name=value
What I want to do is, using a template, generate a result based on some values in another file.
So for example, say I have a file called PATHS that contains
CONF_DIR=/etc
BIN_DIR=/usr/sbin
LOG_DIR=/var/log
CACHE_DIR=/home/cache
This PATHS file gets included into a Makefile so that when I call make install the paths are created, and the built applications and conf files are copied appropriately.
Now I also have a conf file which I want to use as a template.
Say the template contains lines like
LogFile=$(LOG_DIR)/myapp.log
...
Then generate a destination conf that would have
LogFile=/var/log/myapp.log
...
etc
I think this can be done with a sed script, but I'm not very familiar with sed and regular-expression syntax. I will accept a shell-script version too.
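For reference, a minimal sketch of the sed route the asker has in mind (conf.template is a hypothetical file name; it assumes no value in PATHS contains a '|'):
# turn each NAME=value line in PATHS into a substitution command,
# e.g. LOG_DIR=/var/log becomes s|$(LOG_DIR)|/var/log|g
script=$(sed 's/^\([A-Za-z_]*\)=\(.*\)$/s|$(\1)|\2|g/' PATHS)
sed -e "$script" conf.template > conf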
You should definitely go with autoconf here, whose very job is to do this. You'll have to write a conf.in file, wherein all substitutions are marked with @'s, e.g.
prefix=@prefix@
bindir=@bindir@
and write up a configure.ac, from which autoconf generates a configure shell script that will perform these substitutions for you and create conf. conf is subsequently included in the Makefile. I'd even recommend using a Makefile.in file, i.e. including your snippet in the Makefile itself.
If you keep to the standard path names, your configure.ac is a four-liner and has the added advantage of being GNU compatible (easy to understand & use).
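A hedged sketch of what such a configure.ac might look like (untested; the project name is illustrative):
AC_INIT([myapp], [1.0])
AC_CONFIG_FILES([conf Makefile])
AC_OUTPUT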
You may want to consider using m4 as a simple template language instead.
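A minimal sketch of that m4 route (untested; conf.m4 is a hypothetical template that uses the bare names LOG_DIR etc.): turn each NAME=value line of PATHS into a -DNAME=value definition and let m4 expand the template:
m4 $(sed 's/^/-D/' PATHS) conf.m4 > conf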
I've got a bunch of .coffee files that I need to join into one file.
I have folders set up like a rails app:
/src/controller/log_controller.coffee
/src/model/log.coffee
/src/views/logs/new.coffee
CoffeeScript has a command that lets you join multiple CoffeeScript files into one file, but it only seems to work with one directory. For example, this works fine:
coffee --output app/controllers.js --join --compile src/controllers/*.coffee
But I need to be able to include a bunch of subdirectories kind of like this non-working command:
coffee --output app/all.js --join --compile src/*/*.coffee
Is there a way to do this? Is there a UNIXy way to pass in a list of all the files in the subdirectories?
I'm using terminal in OSX.
They all have to be joined in one file because otherwise each separate file gets compiled & wrapped with this:
(function() { }).call(this);
Which breaks the scope of some function calls.
From the CoffeeScript documentation:
-j, --join [FILE] : Before compiling, concatenate all scripts together in the order they were passed, and write them into the specified file. Useful for building large projects.
So, you can achieve your goal at the command line (I use bash) like this:
coffee -cj path/to/compiled/file.js file1 file2 file3 file4
where file1 - fileN are the paths to the coffeescript files you want to compile.
You could write a shell script or Rake task to combine them together first, then compile. Something like:
find . -type f -name '*.coffee' -print0 | xargs -0 cat > output.coffee
Then compile output.coffee
Adjust the paths to your needs. Also make sure that the output.coffee file is not in the same path you're searching with find or you will get into an infinite loop.
http://man.cx/find | http://www.rubyrake.org/tutorial/index.html
Additionally, you may be interested in these other posts on Stack Overflow concerning searching across directories:
How to count lines of code including sub-directories
Bash script to find a file in directory tree and append it to another file
Unix script to find all folders in the directory
I've just released an alpha of CoffeeToaster; I think it may help you.
http://github.com/serpentem/coffee-toaster
The easiest way is to use the coffee command-line tool.
coffee --output public --join --compile app
app is my working directory holding multiple subdirectories, and public is where the output .js file will be placed. It's easy to automate this process if you're writing your app in Node.js.
This helped me (-o output directory, -j join into project.js, -cw compile and watch the coffeescript directory in full depth):
coffee -o web/js -j project.js -cw coffeescript
Use cake to compile them all into one (or more) resulting .js file(s). The Cakefile is used as configuration which controls the order in which your coffee scripts are compiled - quite handy with bigger projects.
Cake is quite easy to install and set up; invoking cake from vim while you are editing your project is then simply
:!cake build
and you can refresh your browser and see results.
As I'm also busy learning the best way of structuring the files and using CoffeeScript in combination with Backbone and Cake, I have created a small project on GitHub to keep as a reference for myself; maybe it will help you too around Cake and some basic things. All compiled files are in the www folder so that you can open them in your browser, and all source files (except for the cake configuration) are in the src folder. In this example, all .coffee files are compiled and combined into one output .js file which is then included in the HTML.
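A minimal Cakefile sketch of that setup (the file list, its order, and the output path are illustrative):
{exec} = require 'child_process'

files = [
  'src/model/log.coffee'
  'src/controller/log_controller.coffee'
  'src/views/logs/new.coffee'
]

task 'build', 'join and compile all .coffee files in order', ->
  exec "coffee --join www/js/app.js --compile #{files.join ' '}", (err) ->
    throw err if err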
Alternatively, you could use the --bare flag, compile to JavaScript, and then perhaps wrap the JS if necessary. But this would likely create problems; for instance, if you have one file with the code
i = 0
foo = -> i++
...
foo()
then there's only one var i declaration in the resulting JavaScript, and i will be incremented. But if you moved the foo function declaration to another CoffeeScript file, then its i would live in the foo scope, and the outer i would be unaffected.
So concatenating the CoffeeScript is a wiser solution, but there's still potential for confusion there; the order in which you concatenate your code is almost certainly going to matter. I strongly recommend modularizing your code instead.