I'm using the commandline to convert PNG images to a Base64-encoded string. What I'd like to do is to use find to do this on an entire directory.
find ./ -name "*.png" -exec base64 > out.txt {} \;
Rather than storing all the results in one text file, I'd like to preserve the relation between each source file and its result. I'm clueless about how to implement either of these solutions:
Store matched file-name and the Base64-encoded result in one text-file for all matches (e.g. my_file.png = <base64-string>).
Create a text-file for each result, with the filename matching the base-name of the source PNG.
Does the find command offer a way to reference its matched filename, through a variable perhaps? Can this be done?
If I understand your problem correctly, you want to convert each *.png file into a Base64-encoded text file, preserving its name.
Now, this should do the trick:
find . -name "*png" -exec bash -c "base64 {} > {}.txt" \;
Now, let's say you have the files a.png, b.png and c.png in your directory. This command will produce:
a.png.txt
b.png.txt
c.png.txt
These are the text files you need.
The problem you were experiencing was actually how to redirect the output within -exec in find, which was solved here: https://superuser.com/questions/231495/how-can-i-use-to-redirect-the-output-of-a-command-run-through-finds-exec
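For the first option in the question (one combined file with `name = <base64>` pairs), a sketch along these lines should work; `tr -d '\n'` is used because some base64 implementations wrap their output across lines:

```shell
# write "<file> = <base64>" lines for every PNG into out.txt
find . -name '*.png' -exec sh -c '
  for f in "$@"; do
    printf "%s = %s\n" "$f" "$(base64 < "$f" | tr -d "\n")"
  done
' _ {} + > out.txt
```

Passing the matched paths as positional parameters to `sh -c` (rather than substituting `{}` into the script string) also avoids problems with unusual filenames.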
I need to convert a lot of XLSX files to CSV, but each file has tabs, which must be converted to individual files if they have data in them.
In addition, I need to convert only those tabs that follow a pattern in their name, for example, "Tab1".
So far, I have managed to batch convert the files, although it only prints the first tab, using the ssconvert utility:
find . -name '*.xlsx' -exec ssconvert -T Gnumeric_stf:stf_csv {} \;
But when I try to add the tabs flag (-S) or the flag to include those tabs with a pattern (-I), I only get the following message:
find . -name '*.xlsx' -exec ssconvert -S -I tab1 -T Gnumeric_stf:stf_csv {} \;
An output file name or an explicit export type is required.
How do I do this correctly?
I just found the solution, I leave it here in case someone finds it useful. In the end I created a small script that performs the actions:
#!/bin/bash
for i in *.xlsx; do
    nombre="${i%.xlsx}";               # base name without the .xlsx extension
    name="$nombre-%n-%s.csv";          # %n and %s are expanded per sheet by ssconvert
    ssconvert --export-type=Gnumeric_stf:stf_csv --export-file-per-sheet "$i" "$name";
done
echo "Finished!"
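If the workbooks live in nested directories rather than a single folder, the same per-sheet export can be driven by find; a sketch, assuming ssconvert is on the PATH:

```shell
# export every sheet of every .xlsx under the current directory
find . -name '*.xlsx' -exec sh -c '
  for f in "$@"; do
    ssconvert --export-type=Gnumeric_stf:stf_csv --export-file-per-sheet \
      "$f" "${f%.xlsx}-%n-%s.csv"
  done
' _ {} +
```

The `${f%.xlsx}` expansion strips the extension so the per-sheet CSVs land next to each source workbook.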
I would like to search for a simple text query (inside of a directory named "textfiles") and based on the matches, assign the results to a variable in bash (as an array or list). This query should be case-insensitive, and the context is inside of a bash script (.sh file). The names I'd hope to see in the array are simply the filenames, not the full paths.
What I am trying:
myfiles=./textfiles/*text*.txt
This matches all files that have the word text in them, but not the word TEXT.
I've also tried
myfiles=(find textfiles -iname *text*)
...and...
myfiles=find textfiles -iname *text*
Is there a solution to this?
myfiles=$(find textfiles -iname '*text*' -exec basename '{}' \; 2>/dev/null)
Note how -exec allows you to perform powerful operations on the files find finds. Maybe you do not even need the array after all, and can do what you need to do right there in the -exec argument.
And be aware that the -exec argument may be a script or other executable of your own making...
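If you do want a genuine bash array, and file names might contain spaces, a null-delimited loop is a more robust sketch (bash-specific; assumes your find supports -print0; the demo setup is illustrative):

```shell
# demo setup (use your own textfiles/ directory instead)
mkdir -p textfiles
touch "textfiles/my text.txt" textfiles/TEXT2.txt textfiles/notes.dat

myfiles=()
while IFS= read -r -d '' path; do
    myfiles+=("$(basename "$path")")   # keep only the file name
done < <(find textfiles -type f -iname '*text*' -print0)

printf '%s\n' "${myfiles[@]}"   # "my text.txt" and "TEXT2.txt", order may vary
```

`-type f` keeps the textfiles directory itself (whose name also matches `*text*`) out of the results.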
# plain
myfiles=($(find textfiles -iname '*text*'))
# if you write it like below, you get the result in myfiles as a single string
myfiles=$(find textfiles -iname '*text*')
# to assign a literal list of words as an array, write it this way
myfiles=(abc def ijk)
But this poses a problem: if there is a space in a file name or directory name, it will give you an incorrect result. A better solution would be
myfiles=()
while read -r fname; do
    myfiles+=("$fname");
done < <(find . -type f)
As @Roadowl suggested, xargs can be a better alternative.
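A minimal sketch of the xargs route, using null delimiters so names with spaces survive (the demo setup is illustrative):

```shell
# demo setup (use your own textfiles/ directory instead)
mkdir -p textfiles
touch "textfiles/my text.txt" textfiles/TEXT2.txt

# print just the base names of the matches, space-safe
find textfiles -type f -iname '*text*' -print0 | xargs -0 -n1 basename
```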
There are more than one way to solve a problem.
Since you said explicitly in your posting that you want files containing text but not TEXT, you cannot do a case-insensitive search; it has to be case-sensitive:
myfiles=($(find . -name '*text*' 2>/dev/null))
However, this would also return a file named x.text.y.TEXT.z. If you want to exclude this file (since you consider exclusion of TEXT more important than inclusion of text), you can do a
myfiles=($(find . -name '*text*' '!' -name '*TEXT*' 2>/dev/null))
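A quick demonstration of the difference between the two commands, with hypothetical file names:

```shell
mkdir -p demo
touch demo/a_text_1 demo/b_TEXT_2 demo/x.text.y.TEXT.z

# matches anything containing "text" (case-sensitive)
find demo -name '*text*'
# excludes names that also contain "TEXT"
find demo -name '*text*' '!' -name '*TEXT*'   # only demo/a_text_1
```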
Terminal noob need a little help :)
I have a 98 row long filename list in a .csv file. For example:
name01; name03, etc.
I have an external hard drive with a lot of files in a chaotic file structure. BUT the file names are consistent, something like:
name01_xy; name01_zq; name02_xyz etc.
I would like to copy every file and directory from the external hard drive which begins with a filename stored in the .csv file to my computer.
So basically it's a search and copy based on a text file from an eHDD to my computer. I guess the easiest way to do is a Terminal command. Do you have any advice? Thanks in advance!
The task can be split into three: read search criteria from file; find files by criteria; copy found files. We discuss each one separately and combine them in a one-liner step-by-step:
Read search criteria from .csv file
Since your .csv file is pretty much just a text file with one criterion per line, it's pretty easy: just cat the file.
$ cat file.csv
bea001f001
bea003n001
bea007f005
bea008f006
bea009n003
Find files
We will use find. Example: you have a directory /Users/me/where/to/search and want to find all files in there whose names start with bea001f001:
$ find /Users/me/where/to/search -type f -name "bea001f001*"
If you want to find all files that end with bea001f001, move the star wildcard (zero-or-more) to the beginning of the search criterion:
$ find /Users/me/where/to/search -type f -name "*bea001f001"
Now you can already guess what the search criterion for all files containing the name bea001f001 would look like: "*bea001f001*".
We use -type f to tell find that we are interested only in finding files and not directories.
Combine reading and finding
We use xargs to pass each line of the file to find as a -name argument:
$ cat file.csv | xargs -I [] find /Users/me/where/to/search -type f -name "[]*"
/Users/me/where/to/search/bea001f001_xy
/Users/me/where/to/search/bea001f001_xyz
/Users/me/where/to/search/bea009n003_zq
Copy files
We use cp. It is pretty straightforward: cp file target copies file into target if target is a directory, or else to a (possibly overwritten) file named target.
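For example (hypothetical names):

```shell
mkdir -p target
printf 'data\n' > file.txt
cp file.txt target/        # now target/file.txt exists
```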
Complete one-liner
We pass results from find to cp not by piping, but by using the -exec argument passed to find:
$ cat file.csv | xargs -I [] find /Users/me/where/to/search -type f -name "[]*" -exec cp {} /Users/me/where/to/copy \;
Sorry, this is my first post here. In response to the comments above, only the last file is selected, most likely because the other lines end with a carriage return (\r). If you first append the directory to each filename in the csv, you can perform the copy with the following command, which strips the \r:
cp `tr -d '\r' < file.csv` /your/target/directory
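Putting the carriage-return fix together with the earlier find one-liner (the directories and CSV contents here are placeholders for your own):

```shell
# demo setup (replace with your CSV and real paths)
mkdir -p search copydir
printf 'bea001f001\r\n' > file.csv          # note the Windows line ending
touch search/bea001f001_xy

# strip \r from each CSV line, then find and copy every match
tr -d '\r' < file.csv \
  | xargs -I [] find search -type f -name "[]*" -exec cp {} copydir \;

ls copydir   # bea001f001_xy
```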
I need to create a script that will go through and add underscores to all files in multiple directories, ignoring the files that already have prefixes. For example, _file1, _file2, file3, file4 needs to look like _file1, _file2, _file3, _file4
I've got little to no knowledge of Unix scripting, so a simple explanation would be greatly appreciated!
You could use one liner like this:
find dir_with_files -regextype posix-extended -type f -regex '^.*\/[^_][^\/]*$' -exec rename -v 's/^(.*\/)([^_][^\/]*)$/$1_$2/' '{}' \;
where dir_with_files is the top-level directory in which to search for your files. find then matches files whose names do not start with _, and each of them is renamed.
Before making any changes, you can run rename with the -n -v parameters to show what operations would take place, without actually executing them:
find dir_with_files -regextype posix-extended -type f -regex '^.*\/[^_][^\/]*$' -exec rename -v -n 's/^(.*\/)([^_][^\/]*)$/$1_$2/' '{}' \;
From the best Bash resource out there:
Create a glob which matches all of the relevant files.
Loop through all of the matching files.
Remove the underscore from the file name and save the result to a variable.
Prepend an underscore to the variable.
echo the original file name followed by the changed file name using proper quotes to check that they look sane (the quotes will not be printed by echo since they are syntax).
Use mv instead of echo to actually rename the files.
In addition:
If your mv supports -n/--no-clobber, use it to avoid the possibility of data loss in case you mess up
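The steps above might be sketched like this (demo files are created in ./demo; point the loop at your own directory instead):

```shell
#!/bin/bash
# demo setup
mkdir -p demo
touch demo/_file1 demo/_file2 demo/file3 demo/file4

for f in demo/*; do
    name=${f##*/}                     # strip the directory part
    base=${name#_}                    # remove a leading underscore, if any
    new="demo/_$base"                 # prepend exactly one underscore
    [ "$f" = "$new" ] && continue     # already prefixed: nothing to do
    echo "renaming $f -> $new"        # sanity check first
    mv -n -- "$f" "$new"              # -n: don't clobber an existing file
done
```

Afterwards the directory contains _file1, _file2, _file3 and _file4, and the already-prefixed files are untouched.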
I have a bunch of files (with same name, say abc.txt) strewn all over the network filesystem.
I need to recursively search for each of those files and once I find each one, do a content search and replace on them.
After some research, I see that I can find the files using the find command (with -r to recurse right?). Something like:
find . -r -type f abc.txt
And use sed to do find and replace on each one:
sed -ie 's/search/replace/g' abc.txt
But I'm not sure how to glue the two together so that I find all occurrences of abc.txt and then do a search/replace on each one.
My directory tree is really large so I know a recursive search through all the directories will take a long time but that's better than manually trying to find each file.
I'm on OSX 10.6 with Bash.
Thanks all!
Update: I think the answer posted below may work for other OSes (linux perhaps) but for OSX, I had to tweak it a bit.
find . -name abc.txt -exec sed -i '' 's/search/replace/g' {} +
The empty quotes seem to be required after -i in sed to indicate that we don't want to produce a backup file. The man page for this reads:
-i extension:
Edit files in-place, saving backups with the specified extension. If a zero-length extension is given, no backup will be saved.
find . -type f -name abc.txt -exec sed -i -e 's/search/replace/g' {} +
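A safe way to check either variant is to try it on a disposable copy of a few files first; a sketch with GNU sed (on macOS, substitute sed -i '' as noted above):

```shell
# demo tree with two copies of abc.txt
mkdir -p tree/a tree/b
printf 'please search here\n' > tree/a/abc.txt
printf 'search again\n' > tree/b/abc.txt

# replace "search" with "replace" in every abc.txt under tree/
find tree -type f -name abc.txt -exec sed -i -e 's/search/replace/g' {} +

grep -r 'replace' tree   # both files now contain "replace"
```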