maven release plugin and command line arguments

I am trying to do a non-interactive maven release:
mvn clean install \
-PmyAssembly,attach-installer \
-DcustomerFlag=simple \
release:clean \
release:prepare \
release:perform \
-DreleaseVersion=1.0.1 \
-DdevelopmentVersion=1.0.2-SNAPSHOT \
-Dtag=my-project-1.0.1
But the property -DcustomerFlag=simple is only set when running prepare, not when running perform.
Do I need to specify all command-line arguments and the profiles twice, once for prepare and once for perform?
Alternatively I guess I can just skip the perform step and do a regular build/deploy afterwards with the parameters I need from the generated tag.

None of the -D arguments from the command line are passed by the maven-release-plugin to the inner Maven calls. You should use -Darguments="-DcustomerFlag=simple" to get the expected result; see arguments.
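Applied to the command from the question, that would look roughly like the sketch below. Forwarding the profiles through -Darguments as well is an assumption on my part; whether you need that depends on what the forked build actually requires.
mvn clean install \
-PmyAssembly,attach-installer \
-DcustomerFlag=simple \
release:clean \
release:prepare \
release:perform \
-DreleaseVersion=1.0.1 \
-DdevelopmentVersion=1.0.2-SNAPSHOT \
-Dtag=my-project-1.0.1 \
-Darguments="-DcustomerFlag=simple -PmyAssembly,attach-installer"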

Related

Metaplex uploading error. "path" argument must be string

I'm trying to use Metaplex to upload NFTs and I'm having some issues with the uploading.
I'm running this command:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload \ -e devnet \ -k C:\server3\NFT\keypair.json \ -cp config.json \ -c example \ c:/server3/NFT/assets
and getting this error
Now I know why I'm getting the error: it says it is skipping the unsupported file "/server3", which is where the files are located. How do I make it not skip that folder? I believe that's why path is returning undefined.
Windows has an issue with multi-line commands. The new lines are indicated with the \ after every parameter. If you remove the extra \ characters and leave everything on one line, it should resolve your issue.
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload -e devnet -k C:\server3\NFT\keypair.json -cp config.json -c example c:/server3/NFT/assets
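If you still want to split the command over multiple lines on Windows, cmd.exe uses ^ as its line-continuation character (PowerShell uses a backtick) rather than \. A sketch of the same command for cmd.exe:
ts-node c:/server3/NFT/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload ^
-e devnet ^
-k C:\server3\NFT\keypair.json ^
-cp config.json ^
-c example ^
c:/server3/NFT/assets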

Assigning output from az artifacts universal download to variable

I have a question. Is it possible to assign the output from az artifacts universal download to a variable?
I have a Jenkins job with a shell step like this:
az artifacts universal download \
--organization "sampleorganization" \
--project "sampleproject" \
--scope project \
--feed "sample-artifacts" \
--name $PACKAGE \
--version $VERSION \
--debug \
--path .
Then I would like to transport the file to Artifactory with this:
curl -v -u $ARTIFACTORY_USER:$ARTIFACTORY_PASS -X PUT https://artifactory.com/my/test/repo/my_test_file_{VERSION}
I ran the job but noticed that I passed an empty file to Artifactory. It created my_test_file_{VERSION}, but it was 0 MB. As far as I understand, I just created an empty file with curl. So I would like to pass the output from the az download to the Artifactory repo. Is that possible? How can I do it?
I understand that I need to assign the file output to a variable and pass it to curl, like:
$MyVariableToPass = az artifacts universal download output
And then pass this variable to curl.
Is that possible? How can I pass files from the Jenkins shell job to Artifactory?
Also, I am not using any plugin right now.
Please help.
A possible solution is to use a VM as the Jenkins agent and install the Azure CLI on that VM, then run the task on that node. To set a variable from the output of a CLI command: for example, if the output looks like this:
{
"name": "test_name",
....
}
Then you can set the variable like this:
name=$(az ...... --query name -o tsv)
This is in the Linux system. If it's in the Windows, you can set it like this:
$name = $(az ...... --query name -o tsv)
As far as I know, the download command does not output the content of the file, so this approach is not suitable if you want the file's content in a variable.
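That said, since az artifacts universal download writes the package to the directory given by --path, you don't need to capture any output at all; you can upload the downloaded file directly with curl. A minimal sketch, assuming the package contains a single file named my_test_file (that name is a placeholder, use whatever is actually in your package):
az artifacts universal download \
--organization "sampleorganization" \
--project "sampleproject" \
--scope project \
--feed "sample-artifacts" \
--name "$PACKAGE" \
--version "$VERSION" \
--path .
# -T streams the downloaded file as the body of the PUT request
curl -v -u "$ARTIFACTORY_USER:$ARTIFACTORY_PASS" \
-T "my_test_file" \
"https://artifactory.com/my/test/repo/my_test_file_${VERSION}"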

Dockerfile with parametrization: fails with XXX

I have the following Dockerfile in a simple Spring Boot app:
FROM maven:3.6-jdk-8-alpine as build
WORKDIR /app
COPY ./pom.xml ./pom.xml
RUN mvn dependency:go-offline -B
# copy your other files
COPY ./src ./src
# build for release
RUN mvn package -DskipTests
FROM openjdk:8-jre-alpine
ARG artifactid
ARG version
ENV artifact ${artifactid}-${version}.jar
WORKDIR /app
COPY --from=build /app/target/${artifact} /app
EXPOSE 8080
ENTRYPOINT ["sh", "-c"]
CMD ["java","-jar ${artifact}"]
When I build it with the required arguments:
docker build --build-arg artifactid=spring-demo --build-arg version=0.0.1 -t spring-demo .
it builds with no errors.
When I try to run the image with:
docker container run -it spring-demo
it fails with the following error:
Usage: java [-options] class [args...]
(to execute a class)
or java [-options] -jar jarfile [args...]
(to execute a jar file)
where options include:
-d32 use a 32-bit data model if available
-d64 use a 64-bit data model if available
-server to select the "server" VM
The default VM is server,
because you are running on a server-class machine.
-cp <class search path of directories and zip/jar files>
-classpath <class search path of directories and zip/jar files>
A : separated list of directories, JAR archives,
and ZIP archives to search for class files.
-D<name>=<value>
set a system property
-verbose:[class|gc|jni]
enable verbose output
-version print product version and exit
-version:<value>
Warning: this feature is deprecated and will be removed
in a future release.
require the specified version to run
-showversion print product version and continue
-jre-restrict-search | -no-jre-restrict-search
Warning: this feature is deprecated and will be removed
in a future release.
include/exclude user private JREs in the version search
-? -help print this help message
-X print help on non-standard options
-ea[:<packagename>...|:<classname>]
-enableassertions[:<packagename>...|:<classname>]
enable assertions with specified granularity
-da[:<packagename>...|:<classname>]
-disableassertions[:<packagename>...|:<classname>]
disable assertions with specified granularity
-esa | -enablesystemassertions
enable system assertions
-dsa | -disablesystemassertions
disable system assertions
-agentlib:<libname>[=<options>]
load native agent library <libname>, e.g. -agentlib:hprof
see also, -agentlib:jdwp=help and -agentlib:hprof=help
-agentpath:<pathname>[=<options>]
load native agent library by full pathname
-javaagent:<jarpath>[=<options>]
load Java programming language agent, see java.lang.instrument
-splash:<imagepath>
show splash screen with specified image
See http://www.oracle.com/technetwork/java/javase/documentation/index.html for more details.
What's wrong with the above settings, please?
The app example code can be found here.
You should delete that ENTRYPOINT line and use the shell form of CMD.
# No ENTRYPOINT
CMD java -jar ${artifact}
The Dockerfile ENTRYPOINT and CMD lines get combined into a single command line. In your Dockerfile, that gets interpreted as
sh -c java '-jar ${artifact}'
But the sh -c option only actually takes the next single word and interprets it as the command to run; so that really gets processed as
sh -c 'java' # '-jar ${artifact}'
ignoring the -jar option.
There are two ways to "spell" CMD (and ENTRYPOINT and RUN). As you've done it with JSON arrays, you specify exactly the "words" that go into the command line, so for example, -jar ${artifact} would be passed as a single argument including the embedded space. If you just pass a command line, Docker will insert a sh -c wrapper for you, and the shell will handle word parsing and variable interpolation. You shouldn't ever need to manually include sh -c in a Dockerfile.
It looks to me like you have an error with the sh -c: the arguments are not read correctly. You can check this by running docker inspect on the exited container and searching the output for "Cmd".
ENTRYPOINT ["sh", "-c"]
CMD ["java","-jar ${artifact}"]
If you would like to run it with sh -c, you have to quote the arguments as a single string, like:
CMD ["java -jar ${artifact}"]
Can you give it a try?
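For reference, a way to see what ENTRYPOINT and CMD a build actually produced, without starting a container, is to inspect the image (using the spring-demo image name from the question):
docker image inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' spring-demo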
ENV variables are only available during the build. To get env variables into the container at runtime, you have to use --env or -e or --env-file. It is best to use --env-file.
See this for same problem answered already: How do I pass environment variables to Docker containers?
Also look at this: Use environment variables in CMD
Here is one possible solution:
Keep your CMD instruction as this:
CMD ["java","-jar ${artifact}"]
Use this docker run command:
docker container run -it -e artifact=spring-demo-0.0.1.jar spring-demo

Shell : Retrieve a string after a character and replace it with another

Within my Jenkins job, I'm using an environment variable $SVN_URL, where:
$SVN_URL=http://svn.local:8080/svn/project1/trunk
In a later step I'm executing this shell command:
svn copy http://svn.local:8080/svn/project1/trunk http://svn.local:8080/svn/project1/tags/v$RELEASE_VERSION \
-m "Tagging the v$RELEASE_VERSION release"
This command can be replaced with the following (using my $SVN_URL):
svn copy $SVN_URL http://svn.local:8080/svn/project1/tags/v$RELEASE_VERSION \
-m "Tagging the v$RELEASE_VERSION release"
But I'm still not able to optimise the second part, which is:
http://svn.local:8080/svn/project1/tags/v$RELEASE_VERSION
I want to use my $SVN_URL for that as well.
To summarise:
My $SVN_URL contains: http://svn.local:8080/svn/project1/trunk/
I want to take that URL and replace the /trunk/ part with /tags/.
Any suggestions?
I simply used sed and that worked fine:
svn copy $SVN_URL "$(echo $SVN_URL| sed 's/trunk/tags/')"/v$RELEASE_VERSION \
-m "Tagging the v$RELEASE_VERSION release"

xcodebuild corrupts test result output when output redirected to file

I have Jenkins with the Xcode plugin configured to run unit tests by adding the test build action to the Custom xcodebuild arguments setting. For more information on getting Jenkins to run the unit tests at all with Xcode 5, see this question.
Now that I have it running, it seems to mix console output from NSLog statements or the final ** TEST SUCCEEDED ** message with the test results, thus occasionally tripping up the parser that converts unit test results to the JUnit format required for Jenkins.
For example, the Jenkins log shows output like this:
Test Case '-[Redacted_Conversion_Tests testConvertTo_ShouldSetamount_WhenamountIsNotZero]' passed (** TEST SUCCEEDED **
0.000 seconds).
Test Case '-[Redacted_Conversion_Tests testConvertTo_ShouldSetamount_WhenamountIsZero]' started.
when it should actually be:
Test Case '-[Redacted_Conversion_Tests testConvertTo_ShouldSetamount_WhenamountIsNotZero]' passed (0.000 seconds).
Test Case '-[Redacted_Conversion_Tests testConvertTo_ShouldSetamount_WhenamountIsZero]' started.
** TEST SUCCEEDED **
I have looked into this further and pulled Jenkins out of the picture. If I run the xcodebuild command directly at the command prompt:
xcodebuild \
-workspace project.xcworkspace \
-scheme Tests \
-configuration Release \
-sdk iphonesimulator7.0 \
-destination "platform=iOS Simulator,name=iPhone Retina (4-inch),OS=latest" \
test
the output always comes out fine, in-order.
If, however, I pipe the output to another program or redirect to a file:
xcodebuild \
-workspace project.xcworkspace \
-scheme Tests \
-configuration Release \
-sdk iphonesimulator7.0 \
-destination "platform=iOS Simulator,name=iPhone Retina (4-inch),OS=latest" \
test > xcodebuild.out
cat xcodebuild.out
The output is out-of-order as described above.
Could this be due to buffering or lack of buffering when not directly writing to stdout? Does anyone know why this is happening and any workaround I might be able to perform to fix it?
As noted by Malte in a comment above, a cleaner solution might be
env NSUnbufferedIO=YES xcodebuild ...
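Applied to the command from the question, that would look something like this sketch (same arguments as above, with the output still redirected to a file):
env NSUnbufferedIO=YES xcodebuild \
-workspace project.xcworkspace \
-scheme Tests \
-configuration Release \
-sdk iphonesimulator7.0 \
-destination "platform=iOS Simulator,name=iPhone Retina (4-inch),OS=latest" \
test > xcodebuild.out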
Thanks to this answer, I discovered a way to essentially disable buffering using the script command.
script -q -t 0 xcodebuild.out \
xcodebuild \
-workspace project.xcworkspace \
-scheme Tests \
-configuration Release \
-sdk iphonesimulator7.0 \
-destination "platform=iOS Simulator,name=iPhone Retina (4-inch),OS=latest" \
test
cat xcodebuild.out
