I use both Windows and Ubuntu on the same computer, and I'd like to be able to create documents from either system. I keep my logos in one directory and want to use them in any document, anywhere.
I solved the problem of the differing file paths with these commands:
\newcommand{\winlogo}{D:/logo/}
\newcommand{\linlogo}{/media/DATA/logo/}
\includegraphics{\winlogo logo_bw}
How can I implement the following behavior:
if(parameter==windows){address:=D:/logo/}
elseif(parameter==linux){address:=/media/DATA/logo/}
else{error}
I've run into this problem as well, and I found that hard-coding the paths is an absolutely terrible idea. Also, keeping these directories in sync will eventually be a problem once your projects begin to grow.
The way I solved this was to put everything in version control (I like git, your mileage may vary).
Then I created an images folder, so my folder hierarchy looks like this:
Working-Dir
|-- images/
|-- myfile.tex
|-- nextfile.tex
Then, in the preamble of my documents, I add \usepackage{graphicx} and \graphicspath{{images/}}, which tells LaTeX to look for a folder called images and then look for the graphics inside that folder.
Then I do my work on one computer, push my finished work back to the repo, and when I switch computers I just pull from my repo. This way, everything stays in sync, no matter which computer I'm working on.
Treating TeX source like source code has greatly improved my workflow and efficiency. I'd suggest similar measures for anyone dealing with a lot of LaTeX source.
EDIT:
From: http://en.wikibooks.org/wiki/LaTeX/Importing_Graphics
Graphics storage
There is a way to tell LaTeX where to look for images: for example, it can be useful if you store images centrally for use in many different documents. The answer is in the command \graphicspath, which you supply with an argument giving the name of an additional directory path you want searched when a file uses the \includegraphics command. Here are some examples:
\graphicspath{{c:\mypict~1\camera}}
\graphicspath{{/var/lib/images/}}
\graphicspath{{./images/}}
\graphicspath{{images_folder/}{other_folder/}{third_folder/}}
Please see http://www.ctan.org/tex-archive/macros/latex/required/graphics/grfguide.pdf
As you may have noticed, in the first example I've used the "safe" (MS-DOS) form of the Windows MyPictures folder because it's a bad idea to use directory names containing spaces. Using absolute paths, \graphicspath does make your file less portable, while using relative paths (like the last example) you shouldn't have any problem with portability, but remember not to use spaces in file names. Alternatively, if you are using PDFLaTeX, you can use the package grffile, which will then allow you to use spaces in file names.
The third option should do you well: just specify multiple paths for \graphicspath. I wonder whether LaTeX will fail gracefully if you just include all of your paths in there (one for images, one for your logos on Linux, one for your logos on Windows)?
Mica, thank you once more; your advice works properly!
I've tested this code in the preamble; in a .sty file it doesn't work:
\usepackage{graphicx}
\graphicspath{{/media/DATA/logo/}{d:/logo/}{img/}}
where
/media/DATA/logo/ is the path to the directory with logos on a mounted partition in Linux,
d:/logo/ is the path to the same directory in Windows, and
img/ is the path to the images for the current document in the actual working directory,
and this code in the document:
\includegraphics{logo_zcu_c} from the logo directory
\includegraphics{hvof} from the img/ directory
I am new to the Go programming language. I am hoping to integrate Go code, if possible, into an existing code base that contains code in several languages. My present organization of code is:
<reverse-TLD>/<component-path>/<code><extension>
where:
<reverse-TLD> is the domain with parts reversed. For example, com.mydomain.mysubdomain.
<component-path> is 1 or more subdirectories under which code lives. For example, image/jpeg.
<code> is the part of a code filename before the extension. For example, jpeg2000.
<extension> is the extension. For example, .sh, .py, etc. Taken together with the elements above, this example would have the path: com.mydomain.mysubdomain/image/jpeg/jpeg2000.go.
Note that code files other than Go files are in the same directory as Go files.
My issues are:
My existing structure above doesn't include src, pkg, or bin directories. Are there environment variables or Go settings that allow me to specify these directories?
The directory <reverse-TLD> and all files under it are read-only. I need the output of the compilation to be based under another directory, given as $BUILD_DIR. That directory can have whatever directories are needed under it.
I am thinking that as a convention, I could use lowercase filenames for Go code that will become an executable command and leading-uppercase filenames for Go code that will become package objects. Is there a best practice naming convention for making this distinction in the Go community?
Is there any problem with my using reverse TLDs? For example, com.mydomain.mysubdomain vs. mysubdomain.mydomain.com.
If the src, pkg, and bin directories are hard requirements, then I think I'll have to write a script that finds Go files, copies them to a temporary directory that meets the requirements, compiles them, and then moves the built artifacts to $BUILD_DIR. But I'm hoping that Go is flexible enough to allow me to do this.
If it is possible, could you show me the commands or environment variables that are needed to compile given the constraints above? And, comments on items 1-4 above are appreciated. Thank you!
That is against Go's conventions and is not a recommended practice. Import paths are conventionally written in plain domain order (for example, mysubdomain.mydomain.com/image/jpeg), not as reverse TLDs, and whether a file builds into a command or a package object is determined by its package clause (package main with a func main), not by filename casing; Go filenames are conventionally all lowercase.
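If the read-only tree really cannot change, the closest workable approach is the one you describe in item 5: mirror the sources into a throwaway workspace and build there. Here is a minimal sketch, assuming the classic GOPATH toolchain; the import path mysubdomain.mydomain.com/image/jpeg and the directory names are illustrative placeholders, not required values:
# Mirror read-only sources into a temporary GOPATH workspace so that
# all build artifacts land under $BUILD_DIR. Paths are illustrative.
export GOPATH="$BUILD_DIR/gopath"   # the toolchain creates pkg/ in here
export GOBIN="$BUILD_DIR/bin"       # 'go install' puts executables here
mkdir -p "$GOPATH/src/mysubdomain.mydomain.com/image/jpeg"
cp com.mydomain.mysubdomain/image/jpeg/*.go \
   "$GOPATH/src/mysubdomain.mydomain.com/image/jpeg/"
go install mysubdomain.mydomain.com/image/jpeg
Only the .go files need to appear under src/; the files in other languages can stay where they are.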
I have a project in which I have lots of different images. Once in a while we add more images to it, but first we need to check whether an image already exists (because we may have added it previously).
Right now we do this manually, looking for the image in the folders, but as the project has grown this has become pretty time consuming.
So, I would like to create a script that, given an image, looks in a directory to check whether it already exists.
Do you know if there is any command-line tool or something I can use to build a script to do this?
There is the fdupes utility, which does byte-by-byte comparison. It has a -d or --delete option that will prompt you to choose which files to keep when it finds duplicates. If you don't care about the filename, you can ask it to keep only the first one:
fdupes --delete --noprompt /path/to/images
If you want to delete images that look the same but are slightly different, it's an image recognition problem which I guess does not have such a straightforward solution.
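If you only need to check a single candidate image against the existing collection before adding it, a plain checksum comparison is enough, since byte-identical files have identical checksums. A minimal sketch, in which new.jpg and /path/to/images are placeholders:
# Report whether new.jpg is byte-identical to any file in the collection.
target=$(md5sum "new.jpg" | cut -d' ' -f1)
if find /path/to/images -type f -exec md5sum {} + | grep -q "^$target "; then
    echo "already in the collection"
else
    echo "new image"
fi
Like fdupes, this only catches exact duplicates; a resized or recompressed copy gets a different checksum, which brings back the image-recognition problem mentioned above.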
I am rebuilding a site with DocPad, and it's very liberating to form a folder structure that makes sense with my workflow of content creation, but I'm running into a problem with DocPad's hard division of content-to-be-rendered vs. 'static' content.
Docpad recommends that you put things like images in /files instead of /documents, and the documentation makes it sound as if otherwise there will be some processing overhead incurred.
First, I'd like an explanation, if anyone has one, of why a file with a single extension (therefore no rendering) and no YAML front matter, such as a .jpg, would impact site-regeneration time when placed within /documents.
Second, the real issue: if it does indeed create a performance hit, is there a way to mitigate it? For example, by specifying an 'ignore' list with regexes, etc.
My use case
I would like to do this for posts and their associated images to make authoring a post more natural. I can easily see the images I have to work with and all the related files are in one place.
I am also doing this for an artwork I am displaying. In this case it's an even stronger use case, as the only data in my html.eco file is YAML front matter with various metadata; my layout automatically generates the gallery from all the attached images located in a folder with the same name as the post. I can match the relative output path folder in my /files directory, but it's error prone, because you're in one folder (src/files/artworks/) when creating the folder of images and another (src/documents/artworks/) when creating the html file; typos are far more likely (as you can't ever see the folder and the html file side by side)...
Even without justifying a use case I can't see why docpad should be putting forth such a hard division. A performance consideration should not be passed on to the end user like that if it can be avoided in any way; since with docpad I am likely to be managing my blog through the file system I ought to have full control over that structure and certainly don't want my content divided up based on some framework limitation or performance concern instead of based on logical content divisions.
I think the key is the line about "metadata". Even though a file does NOT have a double extension, it can still have metadata at the top of the file, which needs to be scanned and read. The double extension really just tells DocPad to convert the file from one format and output it as another. If I create a straight html file in the documents folder, I can still include the metadata header in the form:
---
tags: ['tag1','tag2','tag3']
title: 'Some title'
---
When the file is copied to the out directory, this metadata will be removed. If I do the same thing to a html file in the files directory, the file will be copied to the out directory with the metadata header intact. So, the answer to your question is that even though your file has a single extension and is not "rendered" as such, it still needs to be opened and processed.
The point you make, however, is a good one: keeping images and documents together. I can see a good argument for excluding certain file extensions (like image files) from being processed, or perhaps for only including certain file extensions.
On TextMate 2, when opening two files in two different locations such as /path/1/file.txt and /path/2/file.txt, I no longer see a way to perform diffs as before, since one cannot select files in the project "drawer." We now have a file browser that seems to have taken its place, and thus there is no way to pick the two opposing files. This also precludes any other command that requires selecting multiple files that are not within the same file structure.
Am I missing something that would allow this to work properly when dealing with files in two different paths?
This isn't a new trick. It's one we learned when grep in project would go insane when you had a project with files whose common ancestor was root or some directory far above the files. Instead of opening your files like:
mate /foo/bar/baz /quix/quacks/quux
You do the following, assuming you're in an empty directory or don't care that its files will be included in the project as well:
ln /foo/bar/baz /quix/quacks/quux . && mate .
That can obviously be wrapped up in a function to reduce the syntactical difference. In fact, at one point I actually wrote a wrapper script around mate to do that transparently when needed AND clean up the hard-linked files after I closed the project or quit TextMate. That went away with a bad hard drive, though; a sketch of the idea is below.
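From memory, the wrapper was something along these lines (a rough reconstruction, not the original script; the matediff name is invented):
# 'matediff' is an invented name. Hard links require all files to be on
# the same filesystem; substitute ln -s if yours are not.
matediff() {
    local dir
    dir=$(mktemp -d) || return 1
    ln "$@" "$dir" && mate "$dir"
    # cleaning up "$dir" after TextMate closes is left to the caller here
}
You'd invoke it as matediff /foo/bar/baz /quix/quacks/quux.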
Anyhow, I hope that helps.
I have a website that runs off an OpenWRT router. I'd like to optimize the site by removing any files that aren't being used. Here is my directory structure...
/www/images
/www/js
/www/styles
/www/otherSubDirectories <--- not really named that
I'm mostly concerned about identifying images that are not used, because those take the most space. But it would also be nice to identify stylesheets and JavaScript files that are not being used. So, is there a way I can search /www and all subdirectories and files and print a list of files in /www/images, /www/js, and /www/styles that are not referenced by any other files?
When I'm looking for files that contain a specific string I use this:
find . | xargs grep -Hn 'myImage.jpg'
That would tell me all files that reference the image. Maybe some variation of that, like the sketch below?
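Here's a rough sketch of the variation I have in mind (it assumes filenames without spaces and that no file mentions its own name in its contents; I'd run it against a copy of /www on a desktop machine rather than on the router):
# For each asset, check whether its filename appears anywhere under /www;
# a file's own binary contents normally don't contain its own name.
for f in /www/images/* /www/js/* /www/styles/*; do
    name=$(basename "$f")
    grep -rFq "$name" /www || echo "unreferenced: $f"
done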
Any help would be appreciated!
EV
Swiss File Knife is a very nice tool. Among its features:
Find out which files are used (referenced) by other files through fuzzy content analysis
Consider using a cross-reference program (for example, lxr) for this problem. (I haven't verified if lxr can do the job, but believe it can.) If an off-the-shelf cross-reference program doesn't work, look for an open source cross-reference program in a language you know, and adapt it.