My task is to take a directory of website files, including HTML, CSS, and non-text image files, and change all image paths from relative paths to a universal CDN path. This is on a Mac.
My starting point is something like this:
sed -i '' 's/old_link/new_link/g' *
but I want it to act only on CSS and HTML files, and to recurse through any subdirectories.
thanks
Try using find. Something like:
find . -name '*.css' -exec sed -i '' 's/old_link/new_link/g' {} ';'
will find all the CSS files in your current directory and below it and pass each one to sed. The pattern needs quoting so the shell doesn't expand it before find sees it. The {} stands in for the name (and location) of each file that find finds. Don't omit the quote marks around the final ;
Then repeat for html files.
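If you'd rather handle both file types in one pass, find's -o (or) operator can combine the name tests. A sketch using the same substitution:

find . \( -name '*.css' -o -name '*.html' \) -exec sed -i '' 's/old_link/new_link/g' {} +

Ending the -exec with {} + instead of {} ';' batches many files into each sed invocation rather than spawning one sed per file.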
As ever, for the finer points of syntax consult man, or google for the find documentation.
I have a lot of markdown files in various directories each with the same format (# title, then ## sub-title).
Can I make the --toc respect the folder layout, so that each folder name becomes a chapter and each markdown file becomes that chapter's content?
So far pandoc totally ignores my folder names; it behaves the same as if all the markdown files were in a single folder.
My approach to this is to create an index file in each folder with a first-level heading, and to downgrade the headings in all other files by one level.
I use Git, and by default the files keep their first-level headings; when I want to generate the ebook with pandoc, I modify the files via an automated Linux shell script, and afterwards revert the changes via Git.
Here's the script:
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e 's/^(#)+\s/#$&/g' {} \;
./docs/*/ means I'm looking only at files inside subfolders of the docs directory, like docs/foo/file1.md or docs/bar/file2.md.
I'm also interested only in *.md files, excluding *index.md files.
In the index.md files (which I usually name 00-index.md so they sort first), I put a first-level heading #; because those files are excluded by the find portion of the script, their headings aren't downgraded.
Next comes Perl's search-and-replace command with the regular expression s/^(#)+\s/#$&/g, which finds every line starting with one or more # and prepends another # to it.
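You can see the effect by piping a sample heading through the same substitution:

echo '## Section' | perl -pe 's/^(#)+\s/#$&/g'

This prints ### Section: the whole match (the ## plus the trailing space) is available in $&, and the replacement puts one more # in front of it.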
In the end, I run pandoc with --toc-depth=2 so the table of contents contains only first- and second-level headings.
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub
To revert all the changes made to the files, I restore them in the Git repo:
git restore .
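Put together, the whole build fits in one small script. A sketch, using the same paths as above; note that bash needs globstar enabled for ** to match recursively (zsh does it by default):

#!/usr/bin/env bash
shopt -s globstar

# 1. Downgrade headings everywhere except the index files
find ./docs/*/ -name "*.md" ! -name "*index.md" -exec perl -pi -e 's/^(#)+\s/#$&/g' {} \;

# 2. Build the epub with a two-level table of contents
pandoc ./docs/**/*.md --verbose --fail-if-warnings --toc-depth=2 --table-of-contents -o ./ebook.epub

# 3. Revert the heading changes whether or not pandoc succeeded
git restore .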
I was wondering how to move a number of folders into another one, according to the contents of a file inside each folder.
I mean, let's assume I have a large number of folders, each with a name starting with 'folder', and each containing 3 files. Specifically, one of the files contains a string which might be '-100', '-200' or '-300', for example.
I want to move each folder according to this string, putting it into a folder named after the string. For example, to put every folder containing a file with the string '-100' into the folder 'FOLDER1', I'm trying something like:
find folder* -name '100' -exec mv {} folder* FOLDER1
but it returns -bash: /usr/bin/find: Argument list too long.
How can I pass fewer arguments to find at each step so I don't get this error?
Thanks in advance.
Best.
Using your example, and running in the topmost folder containing all the folders, I believe that what you need is this:
grep -rlw -e "-100" folder* | xargs -n 1 dirname | sort -u | xargs -I % mv % FOLDER1
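grep -rlw prints the path of each file containing the string as a whole word; dirname turns each path back into its containing folder, and sort -u ensures each folder is moved only once. To handle all three strings in one go, a sketch (FOLDER2 and FOLDER3 are hypothetical destinations following the FOLDER1 pattern):

while read -r marker dest; do
    mkdir -p "$dest"
    grep -rlw -e "$marker" folder* | xargs -n 1 dirname | sort -u | xargs -I % mv % "$dest"
done <<'EOF'
-100 FOLDER1
-200 FOLDER2
-300 FOLDER3
EOF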
Preface: I’m not much of a shell-scripter, in fact not a shell-scripter at all.
I have a folder (folder/files/) with many thousand files in it, with varying extensions and random names. None of the file names have spaces in them. There are no subfolders.
I have a plain text file (filelist.txt) with a few hundred file names, all of them without extensions. All the file names have corresponding files in folder/files/, but with varying extensions. Some may have more than one corresponding file in folder/files/ with different extensions.
An example from filelist.txt:
WP_20160115_15_11_20_Pro
P1192685
100-1373
HPIM2836
These might, for example, correspond to the following files in folder/files/:
WP_20160115_15_11_20_Pro.xml
P1192685.jpeg
100-1373.php
100-1373.docx
HPIM2836.avi
(Note the two files named 100-1373 with different extensions.)
I am working on an OS X (10.11) machine. What I need to do is copy all the files in folder/files/ that match a file name in filelist.txt into folder/copiedfiles/. [1]
I’ve been searching and Googling like mad for a bit now, and I’ve found bucketloads of people explaining how to extract file names without extensions, find and copy all files that have no extension, and various tangentially related issues—but I can’t find anything that really helps me figure out how to do this in particular. Doing a cp `cat filelist.txt` folder/copiedfiles/ would work (as far as I can tell) if the file names in the text file included extensions; but they don’t, so it doesn’t.
What is the simplest (and preferably fastest) way to do this?
[1] What I need to do is exactly the same as in this question, but that one is specifically asking about batch-file, which is a very different kettle of sea-dwellers.
This should do it (the "$filename.*" pattern requires a dot after the name, so each entry in the list matches only its own extensions):
while IFS= read -r filename
do
    find /path/to/folder/files/ -maxdepth 1 -type f \
        -name "$filename.*" -exec cp {} /path/to/folder/copiedfiles/ \;
done < /path/to/filelist.txt
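Because the loop runs one find per line of filelist.txt, it rescans the folder a few hundred times. If that turns out to be slow, here is a sketch of a single-scan alternative (bash, same hypothetical paths), which builds one combined -name test per list entry:

args=()
while IFS= read -r name; do
    args+=( -o -name "$name.*" )
done < /path/to/filelist.txt
find /path/to/folder/files/ -maxdepth 1 -type f \
    \( -false "${args[@]}" \) -exec cp {} /path/to/folder/copiedfiles/ \;

The -false is just a seed so that every real test can be prefixed with -o.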
I'd like to list all image file names in a directory and its subdirectories using Terminal on a Mac. I tried the command below; it lists everything, including folder names, so it doesn't solve my problem.
ls -R /Users/samuel/Apps/assets/images > file_names.txt
Thanks for your time and help.
If you know the exact extension of what constitutes an "image" file, then use this example below for jpg files:
ls -R /Users/samuel/Apps/assets/images | grep "\.jpg$" > file_names.txt
For a broader definition of "image" file try this:
mdfind -onlyin /Users/samuel/Apps/assets/images image > file_names.txt
The first has to be run for each known image type. The second one may include any file with "image" in its metadata.
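A middle ground, if you can enumerate the extensions you care about, is find, which keeps the full path of each match and can test several patterns case-insensitively in one pass (the extension list here is just an example):

find /Users/samuel/Apps/assets/images -type f \
    \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' -o -iname '*.gif' \) > file_names.txt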
I need to get the contents of a folder via the Mac Terminal
and put it into a text file via > output.txt:
existing structure looks like:
folder/index.html
folder/images/backpack.png
folder/shared/bootstrap/fonts/helvertica.eot
folder/css/fonts/helverticabold.eot
folder/shared/css/astyle.css
folder/js/libs/jquery-ui-1.10.4/jquery-ui.min.js
folder/js/libs/jquery.tipsy.js
folder/js/libs/raphael.js
what I want would look like this (the leading folder is removed):
index.html
images/backpack.png
shared/bootstrap/fonts/helvertica.eot
css/fonts/helverticabold.eot
shared/css/astyle.css
js/libs/jquery-ui-1.10.4/jquery-ui.min.js
js/libs/jquery.tipsy.js
js/libs/raphael.js
No css/fonts or js/libs or css folders listed,
i.e. no folders, and no prefixes like
/folder/shared/css/astyle.css or
./folder/shared/css/astyle.css
Even better would be with quotes and commas:
"index.html",
"images/backpack.png",
"shared/bootstrap/fonts/helvertica.eot",
"css/fonts/helverticabold.eot",
"shared/css/astyle.css",
"js/libs/jquery-ui-1.10.4/jquery-ui.min.js",
"js/libs/jquery.tipsy.js",
"js/libs/raphael.js"
I want to make a JSON document out of it. Is this possible?
Thanks.
This is the sort of task that find is good at:
% find folder -type f | sed -e 's,folder/,",' -e 's/$/",/'
If you'd rather not hard-code the folder name in the substitution, you can cd into the folder first, say:
(cd folder; find . -type f) | sed -e 's,^\./,,' -e 's/\(.*\)/"\1",/'
Further refinements are an exercise for the reader!
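One caveat: both versions leave a trailing comma after the last entry, which strict JSON parsers reject. A sketch that builds the whole array with awk instead (assuming no quotes or newlines in the file names):

(cd folder; find . -type f) | sed 's,^\./,,' | awk 'BEGIN { printf "[" } { printf "%s\"%s\"", (NR > 1 ? ", " : ""), $0 } END { print "]" }'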