I have a directory tree with several java files. Example:
top
|-- src1
|   |-- folder A
|   |-- folder B
|-- src2
|   |-- folder A
|   |-- folder B
...
I want to compile all the files in those folders and move the compiled files to folder A-bin or folder B-bin accordingly in the respective src folder. I have read that I can do this with the xargs utility, but I can't make heads or tails of the manual entry.
Can someone point me in the right direction?
Are you obliged to use xargs to compile these?
Why not take a look at Java Makefiles?
They will make your life easier when building a project.
One more piece of advice: I recommend that you take a look at Apache Maven. It is easy to use and very handy as your Java project grows over time.
Here is a quick guide to Maven.
Basic Makefile:
JC=javac
JR=java

build: ref.java
	$(JC) ref.java

run: ref.class
	$(JR) ref

clean:
	rm -f *.class
Another example: (taken from the guide above)
JFLAGS = -g
JC = javac

.SUFFIXES: .java .class
.java.class:
	$(JC) $(JFLAGS) $*.java

CLASSES = \
	Foo.java \
	Blah.java \
	Library.java \
	Main.java

default: classes

classes: $(CLASSES:.java=.class)

clean:
	$(RM) *.class
Another option, if you want to stick with bash + javac, is to use find to identify the .java files, store the results in a variable, and then check that the variable is not empty.
SRC=$(find src -name "*.java")
if [ -n "$SRC" ]; then
    # $SRC is deliberately left unquoted so it expands to the list of files
    javac -classpath "$CLASSPATH" -d obj $SRC
    # stop if compilation fails
    if [ $? -ne 0 ]; then exit 1; fi
fi
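If you do want the per-folder output layout from the original question, here is an untested sketch that compiles each folder into a sibling "-bin" directory, using find and xargs as originally asked. The top/src*/ glob and the use of GNU xargs' -r flag are assumptions based on the example tree:
#!/bin/bash
# Sketch: compile every top/src*/<folder> into a sibling <folder>-bin directory.
# Adjust the glob below to match your real folder names.
for dir in top/src*/*/; do
    dir=${dir%/}              # e.g. top/src1/folder A
    out="${dir}-bin"          # e.g. top/src1/folder A-bin
    mkdir -p "$out"
    # -r (GNU xargs) skips the javac call when no .java files are found
    find "$dir" -maxdepth 1 -name '*.java' -print0 | xargs -0 -r javac -d "$out"
done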
Related
I currently have a couple of applications with different folder structures, and I am using Maven to build them. I am running into an issue: sometimes the main pom.xml file is in the root directory and other times it is in a sub-directory. I only want to run a Maven build with the main pom.xml, i.e. the one found in the highest directory. I am running the find command from workspace (see below for more details). How could I achieve this without depending on the folder structure?
Shell Command:
find * -maxdepth 1 -name pom.xml -execdir mvn clean package -Dmaven.test.error.ignore=true -Dmaven.test.failure.ignore=true \;
This works with this folder structure:
workspace
|-- app1
|   |-- pom.xml
|   |-- abrt.conf
|   |-- gpg_keys
|   `-- web
|       |-- pom.xml
|       `-- test.conf
returns:
app1/pom.xml
The command, however, returns a couple of pom.xml files with this folder structure:
workspace
|-- pom.xml
|-- abrt.conf
|-- gpg_keys
`-- web
    |-- pom.xml
    `-- test.conf
returns:
web/pom.xml
pom.xml
find reports matches in the order it walks the tree, so in a layout like yours the pom.xml nearest the top is typically the first one reported (it is not strictly guaranteed to be, since find descends depth-first). Thus, you just need to tell it to quit after finding a single POM file.
In GNU find, there's a -quit action for exactly this purpose:
find . -name pom.xml \
-execdir mvn clean package -Dmaven.test.error.ignore=true -Dmaven.test.failure.ignore=true \; \
-quit
If you aren't guaranteed to have GNU find, but are guaranteed to have bash, you can read just the first result and operate on that:
if IFS= read -r -d '' pom_filename < <(find . -name pom.xml -print0); then
(cd -- "${pom_filename%/*}" && exec mvn clean package -Dmaven.test.error.ignore=true -Dmaven.test.failure.ignore=true)
fi
Note the use of -print0 -- that way we avoid maliciously-named directories (with newlines) being able to influence behavior by injecting extra names into the list.
Now, let's consider a trickier case:
A/pom.xml
A/sub/pom.xml
B/pom.xml
B/sub/pom.xml
If you want to run for both A and B, but not A/sub or B/sub, then things get a bit more interesting:
find . \
    -type d -exec test -e '{}/pom.xml' ';' \
    -execdir mvn clean package ... ';' \
    -prune
Note that we're using -prune to tell find to stop recursing into any directory where a pom.xml exists, but only after we already ran a build there.
This line finds the pom.xml with the shortest path (i.e. the pom nearest the top of the directory tree) and executes mvn from its directory:
find . -name pom.xml | awk '{print length($0) " " $0}' | sort -n | cut -d ' ' -f 2- | head -n 1 | xargs -I {} bash -c 'cd "$(dirname "{}")" && mvn clean package -Dmaven.test.error.ignore=true -Dmaven.test.failure.ignore=true'
Pipe chain breakdown:
finds all pom.xml files under the current directory
adds the length of each line to the start of the line
sorts lines numerically by that length, lowest first
removes the length
keeps only the first (i.e. shortest) line
runs mvn from the directory of that pom.xml file
I'm not sure, but if the main pom.xml is in a directory a/b/c and no other pom.xml below a sits higher in the tree than a/b/c, then the main pom.xml must have the shortest path string. So in that case I guess this command brings up the pom.xml closest to the root, and it can be run from a, a/b, or a/b/c:
find -name pom.xml | awk 'NR==1 || length($0) < length(s) { s = $0 } END { print s }'
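To actually run the build from that directory rather than just print the path, a small sketch combining the awk one-liner with a cd (the pom variable name is mine):
pom=$(find . -name pom.xml | awk 'NR==1 || length($0) < length(s) { s = $0 } END { print s }')
if [ -n "$pom" ]; then
    (cd "$(dirname "$pom")" && mvn clean package -Dmaven.test.error.ignore=true -Dmaven.test.failure.ignore=true)
fi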
Just do a find without the * (replace it with . as suggested in the comments) to list from the highest directory down to the lowest. Pipe to head -1 to get the first entry of the list. Put all of this into a command substitution ($(...)) to assign a variable, then run the same find on that variable instead. Since the stored path may now contain /, we no longer need -name, -maxdepth or *.
pomxmlfile="$(find . -name pom.xml | head -1)"
find "$pomxmlfile" -execdir mvn clean package -Dmaven.test.error.ignore=true -Dmaven.test.failure.ignore=true \;
I have the following folder structure:
.
`-- top_level/
    |-- sub-01_ses-01/
    |   `-- filtered_data.tar.gz*
    |-- sub-01_ses-02/
    |   `-- filtered_data.tar.gz*
    |-- sub-02_ses-01/
    |   `-- filtered_data.tar.gz*
    `-- sub-02_ses-02/
        `-- filtered_data.tar.gz*
I wanted to create symbolic links to these files preserving the parent structure (since they all have the same filenames).
Here's what I tried:
find -name "filtered_data.tar.gz" \
-exec cp -s --parents --no-clobber -t /home/data/filtered {} \;
Now, I notice that cp does create the parent structure, but the symbolic links fail and I get the following notice:
cp: '/home/data/filtered/./sub-01_ses-01/filtered_data.tar.gz': can make relative symbolic links only in current directory
I'd like to understand why this is happening and what the cp warning is trying to tell me. Also, any pointers on how to fix the issue would be greatly appreciated.
Found the solution here: symlink-copying a directory hierarchy
The source path given to cp must be absolute, not ./something: with -s, cp writes the given path into the link as-is, and a path relative to your current directory would not resolve from inside the destination directory, which is what the warning is telling you. So this should work for you:
find "$(pwd)" -name "filtered_data.tar.gz" \
    -exec cp -s --parents --no-clobber -t /home/data/filtered {} \;
Per your comment about what you're really trying to do, here's a Python script that does it. You should be able to tweak it.
#!/usr/bin/env python3
import os

target_filename = 'filtered_data.tar.gz'
top_src_dir = '.'
top_dest_dir = 'dest'

# Walk the source directory recursively looking for
# target_filename
for parent, dirs, files in os.walk(top_src_dir):
    # debugging
    # print(parent, dirs, files)

    # Skip this directory if target_filename not found
    if target_filename not in files:
        continue

    # Strip off all path parts except the immediate parent
    local_parent = os.path.split(parent)[-1]

    # Compute the full, relative path to the symlink
    dest_file = os.path.join(top_dest_dir, local_parent, target_filename)

    # debugging
    # print('{} {}'.format(dest_file, os.path.exists(dest_file)))

    # Nothing to do if it already exists
    if os.path.exists(dest_file):
        print('{} already exists'.format(dest_file))
        continue

    # Make sure the destination path exists
    dest_dir = os.path.dirname(dest_file)
    os.makedirs(dest_dir, exist_ok=True)

    # Translate the relative path to target_filename
    # to be relative based on the new destination dir
    src_file = os.path.join(parent, target_filename)
    src_file = os.path.relpath(src_file, start=dest_dir)

    os.symlink(src_file, dest_file)
    print('{} --> {}'.format(dest_file, src_file))
I am starting to work on a project that I would like to grow fairly large. I want to create a makefile that will grow with the project without much maintenance. Here is the directory structure that I have right now.
.
+--src
| +--part1
| | +--part1.c
| | +--part1.h
| +--part2
| | +--part2.c
| | +--part2.h
. .
. .
. .
| +--partN
| | +--partN.c
| | +--partN.h
+--test
| +--part1_tests
| | +--part1_testX.c
| | +--part1_testY.c
. .
. .
. .
+--obj
| +--part1.o
| +--part2.o
. .
. .
. .
| +--partN.o
+--a.out
I have never had a project of this scale and have never needed to write a makefile for one. How would I design a makefile for this? Thanks!
There are different ways to do it, but I would start by not letting the makefile grow with the project. Instead, I would use the regular directory structure to define rules that handle everything from a project with one code file in one directory to a big project with thousands of files.
For example, let's play a little with your structure (not tested, might have some typos; I assume you start make from the project directory):
# we need the header directory
INCDIRS = src
# collect every cpp file
CXXSRCS = $(shell find src/ -type f -name '*.cpp' 2>/dev/null)
# now generate the object file names
OBJS = $(CXXSRCS:src/%.cpp=obj/%.o)
# compile a source file
obj/%.o: src/%.cpp
	@mkdir -p $(dir $@)
	$(CXX) $(CXXFLAGS) $(foreach bin,$(INCDIRS),-I$(bin)) -c -o "$@" "$<"
Now let's say your main is in part1.cpp:
ifneq (,$(wildcard src/part1/part1.cpp))
# I just don't like a.out...
executable = ${APPLICATION_NAME}
# now let's build the exe:
${executable}: ${OBJS}
	$(LD) $(LDFLAGS) -o $@ $^
endif
Now the last thing is a little cosmetic:
.PHONY: clean all

all: install

compile: ${OBJS}

package: compile ${executable}
	# now we can move the object files to where they should be
	mv -f obj/part*/* obj/

install: package
No matter how big your project is going to be, this makefile will handle some of your steps. It's just to give you an idea. I ignored the test files, but thinking ahead you can collect the test sources in the same way as I did with the normal sources; see the sketch below.
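For instance, an untested sketch following the same pattern (TESTSRCS, TESTOBJS and the obj/test layout are my own names, and I reuse the .cpp convention from above):
# collect every test source file
TESTSRCS = $(shell find test/ -type f -name '*.cpp' 2>/dev/null)
TESTOBJS = $(TESTSRCS:test/%.cpp=obj/test/%.o)

# compile a test source file
obj/test/%.o: test/%.cpp
	@mkdir -p $(dir $@)
	$(CXX) $(CXXFLAGS) $(foreach bin,$(INCDIRS),-I$(bin)) -c -o "$@" "$<"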
So my point is: there is absolutely no reason why your makefile has to grow with the size of your project, only with its complexity.
For more information, look at the documentation here on Stack Overflow; there is plenty of information on makefiles...
Hope that helps,
Kai
I have a build system set up for a library using the GNU Autotools. The library has subdirectories for source and header files. It also contains a script in the top directory which auto-generates a source file and a header file (not the same as the config file) in the appropriate subdirectories. It is necessary that these files are generated before make is performed in the subdirectories.
What is the simplest way to have the script run before the subdirectories are traversed (i.e. when the user calls make, the script is run before make descends into SUBDIRS)? I have tried adding rules like all-local with no success.
EDIT:
Top-level directory:
configure.ac Makefile.am src include myscript.sh
Makefile.am:
EXTRA_DIST = myscript.sh
ACLOCAL_AMFLAGS = ${ACLOCAL_FLAGS} -I m4
SUBDIRS = src include

.PHONY: gen-script
gen-script:
	./myscript.sh
src/Makefile.am:
AM_CPPFLAGS = -I$(top_srcdir)/include
lib_LTLIBRARIES = libmylib.la
libmylib_la_SOURCES = \
	file1.cxx \
	file2.cxx \
	autogen-file.cxx

clean-local:
	rm -f autogen-file.cxx
include/Makefile.am:
nobase_include_HEADERS = \
	file1.h \
	file2.h \
	autogen-file.h

clean-local:
	rm -f autogen-file.h
I think that the best solution would be to get rid of the recursive make and only have the top-level Makefile.am. Then you'd simply add a rule
include/autogen-file.h src/autogen-file.cxx: myscript.sh
	${SHELL} $<
and list include/autogen-file.h in BUILT_SOURCES.
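For context, a minimal sketch of how those pieces might sit together in a non-recursive top-level Makefile.am, using the library and file names from the question (flags and install details omitted):
# generated before anything else is compiled
BUILT_SOURCES = include/autogen-file.h src/autogen-file.cxx
CLEANFILES = include/autogen-file.h src/autogen-file.cxx

AM_CPPFLAGS = -I$(top_srcdir)/include

lib_LTLIBRARIES = libmylib.la
libmylib_la_SOURCES = \
	src/file1.cxx \
	src/file2.cxx \
	src/autogen-file.cxx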
If you want to keep your recursive structure (even if this is considered harmful), you could place the rule to generate the files into the Makefile.ams in the sub-directories.
In src/Makefile.am:
autogen-file.cxx: ../myscript.sh
	cd ${top_srcdir} && ${SHELL} myscript.sh
And in include/Makefile.am:
autogen-file.h: ../myscript.sh
	cd ${top_srcdir} && ${SHELL} myscript.sh
By the way, how often do you need to re-build those? If the generated files only depend on the project configuration, you could simply run myscript.sh at the end of your configure.ac and delete the generated files only on make distclean.
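A rough sketch of that alternative, assuming an in-tree build so the script's relative paths still resolve (a VPATH build would need $srcdir handling):
dnl configure.ac, at the very end (after AC_OUTPUT):
${SHELL} myscript.sh

## src/Makefile.am (and similarly include/Makefile.am): drop the
## clean-local rule and keep the generated file until "make distclean"
DISTCLEANFILES = autogen-file.cxx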
What you can do is force the current directory to run before the rest of the subdirectories with
SUBDIRS = . src include
More seriously, though, you should not use recursive automake, particularly if your structure is relatively simple, as it seems to be. If you are interested in how to do that, see my documentation, Autotools Mythbuster.
I have a project which is in parent directory A. The script builds the executables in 3 different subdirectories. Example below:
A
/ | \
B C D
Now, I would like to compile the cpp files in B, C, and D using a script from A.
So far, in my script, I remove all the old CMakeCache.txt and Makefile files and the CMakeFiles directory to make sure there is no overlap.
Then I run cmake B/ followed by make -C B/, and do the same for each subdirectory. But I get an error saying CMake Error: The source "/home/ybouvron/Documents/A/B/CMakeLists.txt" does not match the source "/home/ybouvron/Documents/A/C/CMakeLists.txt" used to generate cache. Re-run cmake with a different source directory.
Why am I getting this and how do I fix it? It seems like it's trying to configure the two as the same project, even though the CMakeLists.txt files in the subdirectories have different project names and executable names.
Thanks in advance.
#! /bin/bash
echo Deleting old make files
rm B/CMakeCache.txt
rm -r B/CMakeFiles/
rm B/Makefile
rm C/CMakeCache.txt
rm -r C/CMakeFiles/
rm C/Makefile
rm D/CMakeCache.txt
rm -r D/CMakeFiles/
rm D/Makefile
set -e
echo Compiling subsystems...
cmake B
make -C B
cmake C/
make -C C/
cmake D/
make -C D/
cmake B configures the project in subdirectory B into the current directory, while make -C B builds a project that has been configured into subdirectory B. For an in-source build of project B you need cd B && cmake ., so that make -C B will then build that project.
– Tsyvarev
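Putting that advice into the script, a possible rewrite of the build loop (untested sketch using the directory names from the question):
#!/bin/bash
set -e
echo Deleting old make files and compiling subsystems...
for dir in B C D; do
    rm -f "$dir/CMakeCache.txt" "$dir/Makefile"
    rm -rf "$dir/CMakeFiles"
    # configure in-source so each project gets its own cache...
    (cd "$dir" && cmake .)
    # ...then build it in its own directory
    make -C "$dir"
done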