How to search sub-directories using GCC

I have a top-level folder abc which contains multiple folders. I don't want to provide every single path to the headers in my different folders using the -I gcc option. Is there a way through which gcc can search for header files in all subfolders of abc?
Eg.
abc
|----abcd
|    |----header1.h
|----abce
|    |----header2.h
|----abcf
|    |----header3.h
I want to include header1.h, header2.h and header3.h, but I don't want to use the following approach:
-I abc/abcd/
-I abc/abce/
-I abc/abcf/
Is there a way through which I can just do -<someflag> abc and have it search all subdirectories?

AFAIK, gcc does not have anything like that, and for good reason IMHO: what if there are two files with the same name but in different directories?
If using clang is an option, you can use -Iabc/**
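If you are stuck with gcc, you can let the shell generate the flags instead. A minimal sketch, assuming bash with the globstar option enabled and no spaces in the directory names (main.c stands in for your source file):
# globstar makes ** match directories recursively, so abc/**/
# expands to abc/ and every directory below it
shopt -s globstar
gcc $(printf -- '-I%s ' abc/**/) -c main.c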
Depending on what shell you're using, you could create a short shell script. Don't know if this fits your needs exactly, but it should probably be easy enough to modify:
dirs=""
for d in $(find -maxdepth 1 -mindepth 1 -type d -print0)
do
dirs=$dirs"-I $d "
done
echo $dirs
You could use that in combination with assigning a variable in a Makefile. Let's call the above script dirs.sh and then use this in the Makefile:
INCLUDE_DIRS := $(shell ./dirs.sh)
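Either way the flags end up on the compile line. For instance, straight from the shell, a hypothetical compile of main.c would be:
gcc $(./dirs.sh) -c main.c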

Related

Copying a file into multiple directories in bash

I have a file I would like to copy into about 300,000 different directories; these are themselves split between two directories, e.g.
DirA/Dir001/
...
DirB/Dir149000/
However when I try:
cp file.txt */*
It returns:
bash: /bin/cp: Argument list too long
What is the best way of copying a file into multiple directories, when you have too many to use cp?
The answer to the question as asked is find.
find . -mindepth 2 -maxdepth 2 -type d -exec cp script.py {} \;
But of course @triplee is right... why make so many copies of a file?
You could, of course, instead create links to the file...
find . -mindepth 2 -maxdepth 2 -type d -exec ln script.py {} \;
The options -mindepth 2 -maxdepth 2 limit the recursive search of find to elements exactly two levels deep from the current directory (.). The -type d matches all directories. -exec then executes the command (up to the closing \;), for each element found, replacing the {} with the name of the element (the two-levels-deep subdirectory).
The links created are hard links. That means if you edit the script in one place, the change shows up in all the places. The script is, for all intents and purposes, in all the places, with none of them being any less "real" than the others. (This concept can be surprising to those not used to it.) Use ln -s if you instead want to create "soft" links, which are mere references to "the one, true" script.py in the original location.
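One caveat if you take the soft-link route with the find command above: a bare ln -s script.py {} would create links pointing at a (non-existent) script.py inside each target directory. A sketch using an absolute path instead:
find . -mindepth 2 -maxdepth 2 -type d -exec ln -s "$(pwd)/script.py" {} \;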
The beauty of find ... -exec ... {}, as opposed to many other ways to do it, is that it will work correctly even for filenames with "funny" characters in them, including but not limited to spaces or newlines.
But still, you should really only need one script. You should fix the part of your project where you need that script in every directory; that is the broken part...
Extrapolating from the answer to your other question, you seem to have code which looks something like
for TGZ in $(find . -name "file.tar.gz")
do
    mkdir -p work
    cd work
    tar xzf $TGZ
    python script.py
    cd ..
    rm -rf work
done
Of course, the trivial fix is to replace
python script.py
with
python ../script.py
and voilà, you no longer need a copy of the script in each directory at all.
I would further advise refactoring out the cd and changing script.py so you can pass it the directory to operate on as a command-line argument. (Briefly, import sys and examine the value of sys.argv[1], though you'll often want option parsing and support for multiple arguments; argparse from the Python standard library is slightly intimidating, but there are friendly third-party wrappers like click.)
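On the shell side, that refactoring might look like the following sketch (assuming script.py has been changed to accept the directory as its first argument):
for tgz in $(find . -name "file.tar.gz")
do
    mkdir -p work
    tar -C work -xzf "$tgz"
    python script.py work   # hypothetical: the script now reads the directory from sys.argv[1]
    rm -rf work
done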
As an aside, many beginners seem to think the location of your executable is going to be the working directory when it executes. This is obviously not the case; or /bin/ls would only list files in /bin.
To get rid of the cd problem mentioned in a comment, a minimal fix is
for tgz in $(find . -name "file.tar.gz")
do
    mkdir -p work
    tar -C work -x -z -f "$tgz"
    (cd work; python ../script.py)
    rm -rf work
done
Again, if you can change the Python script so it doesn't need its input files in the current directory, this can be simplified further. Notice also the preference for lower case for your variables, and the use of quoting around variables which contain file names. The use of find in a command substitution is still slightly broken (it can't work for file names which contain whitespace or shell metacharacters) but maybe that's a topic for a separate question.
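For the record, a more robust sketch, assuming GNU find and bash; NUL-delimited names survive whitespace and shell metacharacters:
find . -name 'file.tar.gz' -print0 | while IFS= read -r -d '' tgz
do
    mkdir -p work
    tar -C work -xzf "$tgz"
    (cd work && python ../script.py)
    rm -rf work
done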

shell-script - cd into all subdirectories of a directory, execute command on their files

I am new to bash and I am trying to cd into all subdirectories of a parent directory and execute a command on all files these subdirectories contain. But it's not working.
for subdir in $parentdirectory
do
    for file in $subdir
    do
        ngram -lm somefilename.lm -ppl $file
    done
done
There are many ways to do this, but one would require you to explicitly change to that directory. Assuming $parentdirectory is correctly initialized, you could look into something like:
for subdir in "${parentdirectory}"/*/   # glob: one path per subdirectory
do
    cd "${subdir}" || continue   # go into the subdir
    for file in *                # glob expansion
    do
        ngram -lm somefilename.lm -ppl "${file}"
    done
    cd - > /dev/null             # go back to where we started
done
Also have a look at the excellent Advanced Bash-Scripting Guide: http://tldp.org/LDP/abs/html/loops1.html
If you want to do this in a small amount of code, you could do something using find -exec.
Such as:
# add a file called foo into every subdirectory
find . -type d -exec sh -c 'touch "$0/foo"' {} \;
Or, if you wanted to echo a string into each of those files you just created:
# find all files and append 'ABC' into them
find . -type f -exec sh -c 'echo "ABC" >> "$0"' {} \;
The find -exec combo is an extremely powerful tool that can save you on a bit of directory / file navigation, and allows you to achieve what it sounds like is the desired functionality without having to play descend/ascend through the directory structure.
Also, as you can probably guess, this kind of thing can go horribly wrong if you're not careful, so use with great caution.
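One small hardening of the same idea: pass the name as a real positional parameter instead of smuggling it in as $0, and quote it:
find . -type f -exec sh -c 'echo "ABC" >> "$1"' sh {} \;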

Loop to create copies

I'm making a bash script on an OSX computer at work to create a kit of prints.
The script works fine, as it makes the correct number of copies of the files to the correct destination.
However the destination, a 'print rip', serves the files on a 'first in - first out' principle. My script seems to be making the copies file by file. I want it to make a copy of the entire 'kit' of files before proceeding to the next copy...
The order the script makes the copies now are:
File1-Copy1.jpg
File1-Copy2.jpg
File2-Copy1.jpg
File2-Copy2.jpg
The order I want the script to make the copies are:
File1-Copy1.jpg
File2-Copy1.jpg
File1-Copy2.jpg
File2-Copy2.jpg
My current script looks like this:
for filename in $(find $scriptdir -iname '*-F_*' -o -name '*.jpg' -print)
do
    for f in $(eval echo "{1..$copies}"); do
        cp /$scriptdir/FILER/${filename##*/} $printdir"/"$PrintColor"_"$f"_"${filename##*/}
    done
done
So instead of copying each file X amount of times, before proceeding to the next file, I need the script to copy all the files, before proceeding to the next 'set' of files...
Does anyone have an idea of how to make it work?
Use sort, for example:
for filename in $(find $scriptdir -name 'F*.jpg' -print | sort -t'/' -k2.11n)
do
    echo $filename
    base=${filename##*/}
    cp "$filename" "$printdir/$PrintColor/$base"
done
Note that the sort uses field 2, character 11 (-k2.11n) to sort on numerically. This assumes that there is just one leading directory name. You might have to adjust this if your path is more complex.
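To illustrate with the names from the question (character 11 of File1-Copy1.jpg is the copy number):
printf '%s\n' ./File1-Copy2.jpg ./File2-Copy1.jpg ./File1-Copy1.jpg ./File2-Copy2.jpg | sort -t'/' -k2.11n
prints both Copy1 files before the Copy2 files, which is exactly the desired order.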
I got it working!
Thanks for your input, guys...
I changed the code to the following:
for setnumber in $(seq $copies); do
    for filename in $(find $filesdir -iname '*-F_*' -o -name '*.jpg' -print)
    do
        cp $filesdir/${filename##*/} $printdir"/"$PrintColor"_SET"$setnumber"_"${filename##*/}
    done
    sleep 5
done
The sleep command is added just to make sure all the copies of the previous set have been created/timestamped before proceeding to the next set...

How do I get the files in a directory and all of its subdirectories in bash?

I'm working on a C kernel and I want to make it easier to compile all of the sources by using a bash script file. I need to know how to do a foreach loop and only get the files with a .c extension, and then get the filename of each file I find so I can make gcc compile each one.
Use find to walk through your tree and then read the list it generates using while read:
find . -name \*.c | while read -r file
do
    echo "process $file"
done
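Applied to your question, a minimal sketch that compiles each source, using suffix replacement to derive the object file name:
find . -name \*.c | while read -r file
do
    gcc -c "$file" -o "${file%.c}.o"
done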
If the action that you want to do with the file is not so complex and can be expressed in one or two commands, you can avoid while and do everything with find itself. For that you use -exec:
find . -name \*.c -exec command {} \;
Here you write your command instead of command.
You can also use -execdir:
find . -name \*.c -execdir command {} \;
In this case command will be executed in the directory of the found file (for each file that was found).
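For example, assuming your find supports -execdir (GNU and BSD find do), this compiles every source file in its own directory:
find . -name \*.c -execdir gcc -c {} \;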
If you're using GNU make, you can do this using only make's built-in functions, which has the advantage of making it independent of the shell (but the disadvantage of being slower than find for large trees):
# Usage: $(call find-recursive,DIRECTORY,PATTERN)
find-recursive = \
    $(foreach f,$(wildcard $(1)/*),\
        $(if $(wildcard $(f)/.),\
            $(call find-recursive,$(f),$(2)),\
            $(filter $(2),$(f))))

all:
	@echo $(call find-recursive,.,%.c)

Copying files with specific size to other directory

It's an interview question. The interviewer asked this "basic" shell script question when he understood I don't have experience in shell scripting. Here is the question:
Copy the files from one directory which have a size greater than 500 K to another directory.
I can do it immediately in C, but it seems difficult in a shell script, as I have never tried one. I am familiar with basic Unix commands so I tried it, but I was only able to extract those file names, using the command below.
du -sk * | awk '{ if ($1>500) print $2 }'
Also, let me know a good book of shell script examples.
It can be done in several ways. I'd try and use find:
find "$FIRSTDIRECTORY" -size +500k -exec cp {} "$SECONDDIRECTORY" \;
To limit the search to the current directory, use the -maxdepth option.
du recurses into subdirectories, which is probably not desired (you could have asked for clarification if that point was ambiguous). More likely you were expected to use ls -l or ls -s to get the sizes.
But what you did works to select some files and print their names, so let's build on it. You have a command that outputs a list of names. You need to put the output of that command into the command line of a cp. If your du|awk outputs this:
Makefile
foo.c
bar.h
you want to run this:
cp Makefile foo.c bar.h otherdirectory
So how you do that is with COMMAND SUBSTITUTION which is written as $(...) like this:
cd firstdirectory
cp $(du -sk * | awk '{ if ($1>500) print $2 }') otherdirectory
And that's a functioning script. The du|awk command runs first, and its output is used to build the cp command. There are a lot of subtle drawbacks that would make it unsuitable for general use, but that's how beginner-level shell scripts usually are.
find . -mindepth 1 -maxdepth 1 -type f -size +BYTESc -exec cp -t DESTDIR {} +
The c suffix on the size is essential; the size is in bytes. Otherwise, you get probably-unexpected rounding behaviour in determining the result of the -size check. If the copying is meant to be recursive, you will need to take care of creating any destination directory also.
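A concrete instance, assuming "500 K" means 500 * 1024 = 512000 bytes, with srcdir and destdir as hypothetical stand-ins for the two directories:
find srcdir -mindepth 1 -maxdepth 1 -type f -size +512000c -exec cp -t destdir {} +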
