How to generate an indeterminate number of WAR files using Maven?

I would like to generate multiple WAR files during the build cycle of my project. I already know how to add multiple destination files and other configuration with the maven-war-plugin. But I want to know if there is a way to generate an indeterminate number of WARs during the build cycle, without writing configuration for each WAR.
I want to generate a build for each client. I have the following directory structure in my project:
| pom.xml
+ src
  + main
    + clients
      + client1
      + client2
      + client3
      + ...
      + clientn
I would like to know how to generate a WAR for each client directory. I just want to create the Maven configuration once, then only add a new folder, run mvn package, and get n WAR packages.
Is it possible?

It seems Maven cannot do that by itself, so I ended up creating a script that invokes Maven for each folder found in src/main/clients.
In Unix
#!/bin/bash
clear
###
# Check for Apache Maven home environment variable.
###
if [ -z "$M2_HOME" ]; then
    echo "Error: M2_HOME variable not set. Please set Apache Maven home."
    exit 1
fi
###
# Build all client packages.
###
echo "Build start."
for f in "$(dirname "$0")"/src/main/clients/*
do
    echo "Building: $f"
    "$M2_HOME/bin/mvn" package -Dclient.id="$(basename "$f")" -Dclient.path="$f" -DskipTests
done
echo "Build end."
In Windows
@ECHO OFF
IF "%M2_HOME%"=="" (
    ECHO Error: M2_HOME variable not set. Please set Apache Maven home.
    EXIT /B 1
)
CD /D %~dp0
ECHO Build start.
FOR /D %%G IN (src\main\clients\*) DO (
    ECHO Building: %%~nG
    %M2_HOME%\bin\mvn package -Dclient.id=%%~nG -Dclient.path=%%G -DskipTests
)
ECHO Build end.
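The loop-and-basename pattern the scripts rely on can be sanity-checked without Maven installed. This is a minimal sketch, in which the temporary directory layout and client names are illustrative and a plain echo stands in for the mvn invocation:

```shell
#!/bin/bash
# Sketch: iterate over client folders and derive the value that would be
# passed as -Dclient.id, using the same basename logic as the scripts above.
set -euo pipefail

root=$(mktemp -d)                       # stand-in for the project root
mkdir -p "$root/src/main/clients/client1" "$root/src/main/clients/client2"

for f in "$root"/src/main/clients/*; do
    # The real script would call: mvn package -Dclient.id=$(basename "$f") ...
    echo "client.id=$(basename "$f")"
done
```

Adding a new client is then just `mkdir src/main/clients/clientN` followed by rerunning the build script.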

Related

Integrating Go and Bazel tests

In my CI system, I have various Go scripts that I run to analyze my Go code. For example, I have a script that validates whether various main files can start a long-running app successfully. For this I run the script via go run startupvalidator -pkgs=pkg1,pkg2,pkg3. I am interested in using Bazel so I can take advantage of its cache: if pkg1 has not changed, startupvalidator could hit the cache for pkg1 and only do fresh runs for pkg2 and pkg3.
I thought about a couple of different ways to do this, but none of them feel correct. Is there a "best" way to accomplish this? Is this a reasonable use case for Bazel?
I thought about creating a bash script where I run something like:
go run startupvalidator $1
With a BUILD.bazel file containing
sh_binary(
    name = "startupvalidator-sh",
    srcs = [":startupvalidator.sh"],
    deps = [
        "//go/path/to/startupvalidator",
    ],
)
I also thought about placing a similar sh_test in the BUILD.bazel file for each pkg1, pkg2, and pkg3 so that I could run bazel run //go/pkg1:startupvalidator.
However, this doesn't actually work. Does anyone have feedback on how I should go about this? Any directions or pointers are appreciated.
To take advantage of caching for test results, you need a *_test target which you run with bazel test. Maybe the only part you're missing is that bazel run simply runs a binary (even if it's a test binary), while bazel test looks for an up-to-date test result, which means it uses the cache.
You also need to split up the binary so changing the code in pkg2 doesn't affect the test action in pkg1. The action's key in the cache includes the contents of all its input files, the command being run, etc. I'm not sure if your startupvalidator has the various main functions compiled into it, or if it looks for the binaries at runtime. If it compiles them in, you'll need to build separate ones. If it's loading the files at runtime, put the files it looks for in data for your test rule so they're part of the inputs to the test action.
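The cache-key behavior described above can be sketched in isolation. This is only a conceptual model, not Bazel's real key format: the key hashes the command line plus the contents of the action's input files, so editing pkg2's inputs leaves pkg1's key, and therefore its cached result, untouched.

```shell
#!/bin/bash
# Conceptual sketch of an action cache key: hash of the command line plus
# the contents of every declared input file.
set -euo pipefail

action_key() {
    local cmd=$1; shift
    { printf '%s\n' "$cmd"; cat "$@"; } | cksum
}

dir=$(mktemp -d)
echo "package pkg1" > "$dir/pkg1.go"
echo "package pkg2" > "$dir/pkg2.go"

key_before=$(action_key "startupvalidator -pkgs=pkg1" "$dir/pkg1.go")
echo "// pkg2 edited" >> "$dir/pkg2.go"          # change only pkg2's input
key_after=$(action_key "startupvalidator -pkgs=pkg1" "$dir/pkg1.go")

[ "$key_before" = "$key_after" ] && echo "pkg1 action key unchanged"
```

This is why splitting the inputs per package matters: if all main files feed one action, every key changes whenever any of them does.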
I'd do something like this in pkg1 (assuming it's loading files at runtime; if they're compiled in then you can just make separate go_test targets):
sh_test(
    name = 'startupvalidator_test',
    srcs = ['startupvalidator_test.sh'],
    deps = ['@bazel_tools//tools/bash/runfiles'],
    data = ['//go/path/to/startupvalidator', ':package_main'],
)
with a startupvalidator_test.sh which looks like:
# --- begin runfiles.bash initialization v2 ---
# Copy-pasted from the Bazel Bash runfiles library v2.
set -uo pipefail; f=bazel_tools/tools/bash/runfiles/runfiles.bash
source "${RUNFILES_DIR:-/dev/null}/$f" 2>/dev/null || \
  source "$(grep -sm1 "^$f " "${RUNFILES_MANIFEST_FILE:-/dev/null}" | cut -f2- -d' ')" 2>/dev/null || \
  source "$0.runfiles/$f" 2>/dev/null || \
  source "$(grep -sm1 "^$f " "$0.runfiles_manifest" | cut -f2- -d' ')" 2>/dev/null || \
  source "$(grep -sm1 "^$f " "$0.exe.runfiles_manifest" | cut -f2- -d' ')" 2>/dev/null || \
  { echo>&2 "ERROR: cannot find $f"; exit 1; }; f=; set -e
# --- end runfiles.bash initialization v2 ---

exec "$(rlocation workspace/go/path/to/startupvalidator)" \
  -main="$(rlocation workspace/pkg1/package_main)"
I'm assuming that package_main is the thing loaded by startupvalidator. Bazel is set up to pass full paths to dependencies like that to other binaries, so I'm pretending that there's a new flag that takes the full path instead of just the package name. The shell script uses runfiles.bash to locate the various files.
If you want to deduplicate this between the packages, I would write a macro that uses a genrule to generate the shell script.

Change maven settings.xml location and pass -s automatically

Intro
I moved my settings.xml file to a secured network share which only I can access.
The next step is to encrypt various credentials inside the file as well.
Unfortunately, now when I run mvn I need to specify the location every time, e.g.:
mvn -s Z:\CONFIG\settings.xml
Solution Try - Aliases
I tried making an alias in Cmder, but I always get a "goal not specified" error.
E.g. in user_aliases.cmd I added the following attempts:
mvn1=echo "Using custom cmder alias (cmder user_aliases.cmd) : mvn -s Z:\CONFIG\settings.xml " & mvn -s Z:\CONFIG\settings.xml
mvn2=mvn -s Z:\CONFIG\settings.xml
They both fail with an error about goals not being passed, so this is an issue with the arguments not being passed through.
Does anyone have a solution for hardcoding this location permanently?
Update
My current solution has been to edit the mvn.cmd file itself.
I added something like the following, and it works, though it breaks mvn for anyone else wanting to use it:
echo "Modified mvn.cmd to add custom path mvn -s Z:\CONFIG\settings.xml "
"%JAVACMD%" ^
%JVM_CONFIG_MAVEN_PROPS% ^
%MAVEN_OPTS% ^
%MAVEN_DEBUG_OPTS% ^
-classpath %CLASSWORLDS_JAR% ^
"-Dclassworlds.conf=%MAVEN_HOME%\bin\m2.conf" ^
"-Dmaven.home=%MAVEN_HOME%" ^
"-Dlibrary.jansi.path=%MAVEN_HOME%\lib\jansi-native" ^
"-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" ^
%CLASSWORLDS_LAUNCHER% -s Z:\CONFIG\settings.xml %MAVEN_CMD_LINE_ARGS%
if ERRORLEVEL 1 goto error
goto end
I could possibly make a copy of maven to my Z drive (secured) and call that to avoid all of this.
I tried two solutions which both worked:
Move maven to folder with permissions
One solution was to move Maven directly to a protected, isolated folder such as a shared network folder (with appropriate permissions) or a user folder.
Edit the mvn cmd file
I edited the mvn.cmd file itself, adding the -s Z:\CONFIG\settings.xml flag to the launcher invocation exactly as shown in the Update above.
WARNING: it breaks mvn for anyone else wanting to use that installation, so use this only if that is not a consideration.
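A less invasive alternative to editing mvn.cmd is a small wrapper that injects the flag and then forwards its remaining arguments; the aliases above failed precisely because the goals were not passed through. A bash sketch of the idea follows (a .cmd equivalent would forward %*; the mvn function here is only a stub standing in for the real launcher):

```shell
#!/bin/bash
set -euo pipefail

# Stub standing in for the real mvn, just to demonstrate forwarding.
mvn() { printf 'mvn %s\n' "$*"; }

# Wrapper behaviour: always inject -s <settings>, then forward the goals
# and options the caller supplied ("$@" -- the part the aliases dropped).
mvn_wrapped() { mvn -s "Z:/CONFIG/settings.xml" "$@"; }

mvn_wrapped clean package -DskipTests
```

Putting such a wrapper earlier on PATH than the real mvn keeps the stock installation untouched for other users.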

GoogleService-Info.plist : Copy bundle resources or not

I use Flutter flavors (dev and prod), and the respective GoogleService-Info.plist files live in the Firebase/dev and Firebase/prod folders.
I use a "Build Phases" script to copy the right file into the Runner/ directory at build time:
if [ "${CONFIGURATION}" == "Debug-prod" ] || [ "${CONFIGURATION}" == "Release-prod" ] || [ "${CONFIGURATION}" == "Release" ]; then
    cp -r "${PROJECT_DIR}/Firebase/prod/GoogleService-Info.plist" "${PROJECT_DIR}/Runner/GoogleService-Info.plist"
    echo "Production plist copied"
elif [ "${CONFIGURATION}" == "Debug-dev" ] || [ "${CONFIGURATION}" == "Release-dev" ] || [ "${CONFIGURATION}" == "Debug" ]; then
    cp -r "${PROJECT_DIR}/Firebase/dev/GoogleService-Info.plist" "${PROJECT_DIR}/Runner/GoogleService-Info.plist"
    echo "Development plist copied"
fi
This was all working OK until I tried to use CI/CD with Codemagic.
(1) First I got this error from the Codemagic build (it worked fine locally):
error: Build input file cannot be found: '/Users/builder/clone/ios/Runner/GoogleService-Info.plist'
(2) I then removed the file from "Copy Bundle Resources" in my Xcode target. Now I get the error below (both locally and with Codemagic):
error: Could not get GOOGLE_APP_ID in Google Services file from build environment
What is the correct setting I should keep to get the build working both locally as well as in codemagic?
After a bit of pondering, it looks like (1) is the correct setting: GoogleService-Info.plist should be part of Copy Bundle Resources.
Additionally, Runner/GoogleService-Info.plist MUST exist before the build starts. So I placed a default plist file in the directory and it worked. My Build Phases script then overwrites this default plist with the appropriate file for the flavor.
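The working arrangement can be sketched end to end: a committed placeholder satisfies Xcode's input-file check, and the flavor script overwrites it before compilation. The paths and flavor names below mirror the question, but this is only a model of the copy step, not an actual Xcode build:

```shell
#!/bin/bash
set -euo pipefail

proj=$(mktemp -d)
mkdir -p "$proj/Runner" "$proj/Firebase/dev" "$proj/Firebase/prod"

# Committed default plist: must exist before the build starts so that
# "Copy Bundle Resources" can find its declared input file.
echo "placeholder" > "$proj/Runner/GoogleService-Info.plist"
echo "dev-config"  > "$proj/Firebase/dev/GoogleService-Info.plist"
echo "prod-config" > "$proj/Firebase/prod/GoogleService-Info.plist"

# The Build Phases script then overwrites the default per flavor.
CONFIGURATION="Debug-dev"
case "$CONFIGURATION" in
    Debug-prod|Release-prod|Release) flavor=prod ;;
    *)                               flavor=dev  ;;
esac
cp "$proj/Firebase/$flavor/GoogleService-Info.plist" "$proj/Runner/GoogleService-Info.plist"

cat "$proj/Runner/GoogleService-Info.plist"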

Update Info.plist values in archive

I need to be able to set a couple of custom values in the Info.plist during the Xcode ARCHIVE process. This is for Xcode 6 and Xcode 7.
I already have a script in place that successfully updates these values as a post-action on the BUILD process. It works great when deploying to the simulator or to a phone from Xcode 6.
However, the Info.plist doesn't seem to be available from within the directory structures during the ARCHIVE process. After a BUILD, I can find the results under .../Build/Products in $CONFIGURATION-iphoneos and $CONFIGURATION-iphonesimulator. But after the ARCHIVE, there isn't anything there and I only find the compiled binaries under .../Build/Intermediates.
Certainly, I can see the Info.plist in the IPA itself. Yet any attempts to update and replace this file after the fact are unsuccessful; the IPA is no longer valid, I assume due to checksum changes or something.
I don't want to update these values in the source Info.plist (e.g., using a pre-action) as it will always make the source dirty every time I archive.
Figured this out. The process is nearly identical to the build: just as you use a post-action for the build, use a post-action for the archive; only the path to the Info.plist differs (all paths are listed below).
Below is my build script where I've used tokens for the "name" and the "value" to be updated in Info.plist. I just copied this script and renamed it for use with the archive post-action. Note that this script also has an example of extracting a value from Info.plist as I am deriving the web services version from the client version.
The path to the build Info.plist is either of:
"$BUILD_DIR/$CONFIGURATION-iphoneos/$PRODUCT_NAME.app/Info.plist"
"$BUILD_DIR/$CONFIGURATION-iphonesimulator/$PRODUCT_NAME.app/Info.plist"
NOTE: Both targets are updated during a build since I've not figured out a way to identify which one is being built.
The path to the archive Info.plist is:
"$ARCHIVE_PRODUCTS_PATH/Applications/$PRODUCT_NAME.app/Info.plist"
Build post-action:
$SRCROOT/post_build.sh <value> ~/xcode_build_$PRODUCT_NAME.out
Build script:
#!/bin/bash
# post_build.sh
#
# This script is intended for use by Xcode build process as a post-action.
# It expects the only argument is the value to be updated in Info.plist. It
# derives the WS version for the URL from the version found in Info.plist.
printf "Running $0 using scheme '$SCHEME_NAME' as '$USER'\n"
# If this is a clean operation, just leave
if [ "$COPY_PHASE_STRIP" == "YES" ]
then
    printf "Doing a clean; exiting.\n"
    exit 0
fi
# Confirm that PlistBuddy is available
PLIST_BUDDY=/usr/libexec/PlistBuddy
if [ ! -f "$PLIST_BUDDY" ]
then
    printf "Unable to access $PLIST_BUDDY\n"
    exit 1
else
    printf "PLIST_BUDDY=$PLIST_BUDDY\n"
fi
# Function to perform the changes
updatePlist()
{
    PLIST_FILE=$1
    if [ -f "$PLIST_FILE" ]
    then
        printf "Determining WS version...\n"
        if [[ $SCHEME_NAME == *"Local"* ]]
        then
            WS_VER=""
        else
            # Determine the services version
            BUILD_VER=$(${PLIST_BUDDY} -c "Print CFBundleShortVersionString" "$PLIST_FILE")
            WS_VER=$(printf "$BUILD_VER" | sed 's/\(.*\)\..*/\1/' | sed 's/\./_/g')
        fi
        # Update the plist
        ${PLIST_BUDDY} -c "Set <name> <value>" "$PLIST_FILE"
        printf "Updated plist $PLIST_FILE\n"
    else
        printf "Skipping -- no plist: $PLIST_FILE\n"
    fi
}
# Retrieve the supplied URL
BASE_URL=$1
printf "BASE_URL=$BASE_URL\n\n"
# Record the environment settings
printenv | sort > ~/xcode_build_$PRODUCT_NAME.env
# Locate the plist in the device build
printf "Checking device build...\n"
updatePlist "$BUILD_DIR/$CONFIGURATION-iphoneos/$PRODUCT_NAME.app/Info.plist"
printf "\n"
# Locate the plist in the simulator build
printf "Checking simulator build...\n"
updatePlist "$BUILD_DIR/$CONFIGURATION-iphonesimulator/$PRODUCT_NAME.app/Info.plist"
printf "\n"
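The version-mangling step inside updatePlist can be checked on its own: the sed pipeline drops the last dotted component of the version and swaps the remaining dots for underscores. A sketch with illustrative sample versions:

```shell
#!/bin/bash
set -euo pipefail

# Same transformation the script applies to CFBundleShortVersionString:
# strip the final version component, then replace dots with underscores.
ws_ver() { printf '%s' "$1" | sed 's/\(.*\)\..*/\1/' | sed 's/\./_/g'; }

echo "$(ws_ver 2.3.1)"     # 2.3.1   -> 2_3
echo "$(ws_ver 10.4.12)"   # 10.4.12 -> 10_4
```

Note the first sed expression is greedy, so only the text after the last dot is discarded.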

Split large repo into multiple subrepos and preserve history (Mercurial)

We have a large base of code that contains several shared projects, solution files, etc in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history?
Here is an example of what our single repository (OurPlatform) currently looks like:
/OurPlatform
---- Core
---- Core.Tests
---- Database
---- Database.Tests
---- CMS
---- CMS.Tests
---- Product1.Domain
---- Product1.Stresstester
---- Product1.Web
---- Product1.Web.Tests
---- Product2.Domain
---- Product2.Stresstester
---- Product2.Web
---- Product2.Web.Tests
==== Product1.sln
==== Product2.sln
All of those are folders containing VS Projects except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders, and turn them into separate Hg repos, and also add new repos for each project (they would act as parent repos). Then, If someone was going to work on Product1, they would clone the Product1 repo, which contained Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests.
So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?
EDIT::::
Thanks to Ry4an's answer, I was able to accomplish my goal. I wanted to share how I did it here for others.
Since we had a lot of separate projects, I wrote a small bash script to automate creating the filemaps and to generate the final bat script that actually does the conversion. What wasn't completely apparent from the answer is that the convert command needs to be run once for each filemap, producing a separate repository per project. This script should be placed in the directory above an SVN working copy that you have previously converted. I used the working copy since its file structure best matched what I wanted the final new hg repos to be.
#!/bin/bash
# this requires you to be in: /path/to/svn/working/copy/, and issue: ../filemaplister.sh ./
for filename in *
do
    extension=${filename##*.}
    if [ "$extension" == "sln" -o "$extension" == "suo" -o "$extension" == "vsmdi" ]; then
        base=${filename%.*}
        echo "#$base.filemap" >> "$base.filemap"
        echo "include $filename" >> "$base.filemap"
        echo "C:\Applications\TortoiseHgPortable\hg.exe convert --filemap $base.filemap ../hg-datesort-converted ../hg-separated/$base > $base.convert.output.txt" >> "MASTERGO.convert.bat"
    else
        echo "#$filename.filemap" >> "$filename.filemap"
        echo "include $filename" >> "$filename.filemap"
        echo "rename $filename ." >> "$filename.filemap"
        echo "C:\Applications\TortoiseHgPortable\hg.exe convert --filemap $filename.filemap ../hg-datesort-converted ../hg-separated/$filename > $filename.convert.output.txt" >> "MASTERGO.convert.bat"
    fi
done
mv *.filemap ../hg-conversion-filemaps/
mv *.convert.bat ../hg-conversion-filemaps/
This script looks at every file in an SVN working copy and, depending on the type, either creates a new filemap file or appends to an existing one. The if is really just to catch miscellaneous Visual Studio files and place them in a separate repo. The script is meant to be run under bash (Cygwin in my case), but the actual convert command is run through the version of hg shipped with TortoiseHg due to forking/process issues on Windows (gah, I know...).
So you run the MASTERGO.convert.bat file, which looks at your converted hg repo, and creates separate repos using the supplied filemap. After it is complete, there is a folder called hg-separated that contains a folder/repo for each project, as well as a folder/repo for each solution. You then have to manually clone all the projects into a solution repo, and add the clones to the .hgsub file. After committing, an .hgsubstate file is created and you're set to go!
With the example given above, my .hgsub file looks like this for "Product1":
Product1.Domain = /absolute/path/to/Product1.Domain
Product1.Stresstester = /absolute/path/to/Product1.Stresstester
Product1.Web = /absolute/path/to/Product1.Web
Product1.Web.Tests = /absolute/path/to/Product1.Web.Tests
Once I transfer these repos to a central server, I'll be manually changing the paths to be urls.
Also, there is no analog to the initial OurPlatform svn repo, since everything is separated now.
Thanks again!
This can absolutely be done. You'll want to use the hg convert command. Here's the process I'd use:

1. Convert everything to a single hg repository using hg convert with a source type of svn and a dest type of hg (it sounds like you've already done this step).
2. Create a collection of filemap files for use with hg convert's --filemap option.
3. Run hg convert with source type hg and dest type hg, with the source being the Mercurial repo created in step one; do this once for each of the filemaps you created in step two.
The filemap syntax is shown in the hg help convert output, but here's the gist:
The filemap is a file that allows filtering and remapping of files and
directories. Comment lines start with '#'. Each line can contain one of
the following directives:
include path/to/file
exclude path/to/file
rename from/file to/file
So in your example your filemaps would look like this:
# this is Core.filemap
include Core
rename Core .
Note that if you have an include, the exclusion of everything else is implied. Also, that rename line ends in a dot and moves everything up one level.
# this is Core.Tests.filemap
include Core.Tests
rename Core.Tests .
and so on.
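Putting the pieces together, each filemap drives one hg convert run. This sketch only generates the Core filemap and prints the command that would be fed to hg; the repository paths are illustrative and hg itself is not invoked here:

```shell
#!/bin/bash
set -euo pipefail

work=$(mktemp -d)

# One filemap per future repository, exactly as described above.
cat > "$work/Core.filemap" <<'EOF'
# this is Core.filemap
include Core
rename Core .
EOF

# One hg convert invocation per filemap (source repo -> per-project repo):
echo "hg convert --filemap $work/Core.filemap hg-all-in-one hg-separated/Core"
```

Repeating this per project yields the hg-separated layout described in the edit above.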
Once you've created the broken-out repositories for each of the new repos, you can delete the has-everything initial repo created in step one and start setting up your subrepo configuration in .hgsub files.
