Testing Oracle OutsideIn ImageExport on Solaris with Xvfb - X11

Has anyone had success testing Oracle Outside In on Solaris using Xvfb? The error messages are precious few, and I'm not sure which part of the configuration is the problem.
Process:
1. Edit /usr/openwin/server/etc/OWconfig:
   class="XDISPLAY" name="99"
   coreKeyboard="IKBD"
   corePointer="ps22b"
   listOfScreens="stvga"; et...
2. Start Xvfb: Xvfb :99 -ac
3. Start the window manager: metacity --display :99 --sm-disable --replace
4. Set up the font path to include all font directories
5. Run the test
6. Get the error: SCCERR_DISPLAYOPENFAILED 0x087 /* Failed to open display (XOpenDisplay failed) */

I found I had to pass DISPLAY down to surefire. I thought it would be picked up from the parent shell, but I was mistaken: the new shell spawned by surefire runs the standard shell init scripts, and on my build machine those don't set the variable.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <environmentVariables>
      <GDFONTPATH>${localFontDir}</GDFONTPATH>
      <DISPLAY>${env.DISPLAY}</DISPLAY>
    </environmentVariables>
  </configuration>
</plugin>
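To fail fast when the variable still doesn't make it through, a tiny guard can be run at the start of the test suite. This is a sketch of my own devising (the DisplayGuard class and its message are hypothetical, not part of Outside In or surefire); it turns the opaque SCCERR_DISPLAYOPENFAILED into an obvious configuration error:

```java
public class DisplayGuard {

    // Throws with a clear message when the display string is unusable.
    static String requireDisplay(String display) {
        if (display == null || display.isEmpty()) {
            throw new IllegalStateException(
                "DISPLAY is not set in the test JVM; check the surefire "
                    + "<environmentVariables> configuration");
        }
        return display;
    }

    public static void main(String[] args) {
        // In a real test you would pass System.getenv("DISPLAY") here;
        // ":99" stands in for the Xvfb display started above.
        System.out.println(requireDisplay(":99"));
    }
}
```

Calling requireDisplay(System.getenv("DISPLAY")) from a @BeforeClass method makes the failure show up before any native export code runs.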

Related

File.exists() sometimes wrong on Windows 10 with Java 8.191

I have a bunch of unit tests which contain code like:
File file = new File("src/main/java/com/pany/Foo.java");
assertTrue("Missing file: " + file.getAbsolutePath(), file.exists());
This test is suddenly failing when running it with Maven Surefire and -DforkCount=0. With -DforkCount=1, it works.
Things I tried so far:
The file does exist. Windows Explorer, command line (copy & paste), text editors, Cygwin can all find it and show the contents. That's why I think it's not a permission problem.
It's not modified by the unit tests or anything else. Git shows no modifications for the last two months.
I've checked the file system, it's clean.
I've tried other versions of Java 8, namely 8u171 and 8u181. Same problem.
I've run Maven from within Cygwin and the command prompt. Same result.
Reboot :-) No effect :-(
More details:
When I see this problem, I also start to see "The forked VM terminated without properly saying goodbye. VM crash or System.exit called?" in other projects. That's why I tried forkCount=0, which often helps in this case to find out why the forked VM crashed.
This has started recently, maybe around the October 2018 update of Windows 10. Before that, the builds were rock solid for about three years. My machine was switched to Windows 10 late 2017, I think.
I'm using Maven 3.6 and can't easily try an older version because of an important bug that was fixed with it. I did see the VM crash above with Maven 3.5.2 as well.
It's always the same files which fail (so it's stable).
ulimit (from Cygwin) says:
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 2032
cpu time (seconds, -t) unlimited
max user processes (-u) 256
virtual memory (kbytes, -v) unlimited
I'm wondering if the "open files" limit of 256 applies only to Cygwin processes or whether it's something Cygwin reads from Windows.
Let me know if you need anything else. I'm running out of ideas what I could try.
Update 1
Bernhard asked me to print absolute names. My answer was that I was already using absolute names but I was wrong. The actual code was:
File file = new File("src/main/java/com/pany/Foo.java");
if (!file.exists()) {
    log.debug("Missing file {}", file.getAbsolutePath());
    ... fail ...
}
... do something with file ...
I have now changed this to:
File file = new File("src/main/java/com/pany/Foo.java").getAbsoluteFile();
if (!file.exists()) {
    log.debug("Missing file {}", file);
}
and that fixed the problem. I just can't understand why.
When Maven creates a forked VM to run the tests with Surefire, then it can change the current directory. So in this case, it would make sense that the tests work when forked but fail when running in the same VM (since the VM was created in the root folder of the multi-module build). But why is making the path absolute before the call to exists() fixing the issue?
Some background. Each process has a notion of "current directory". When started from the command line, then it's the directory in which the command was executed. When started from the UI, it's usually the folder in which the program (the .exe file) is.
In the command prompt or BASH, you can change this folder with cd for the process which runs the command prompt.
When Maven builds a multi-module project, it has to change this for each module (so that the relative path src/main/java/ always points to the right place). Unfortunately, Java doesn't have a "set current directory" method anywhere. You can only specify a working directory when creating a new process; modifying the system property user.dir does not change the real current directory of the running process.
That's why new File("a").exists() and new File("a").getAbsoluteFile().exists() work differently.
The latter will use new File(System.getProperty("user.dir"), "a") to determine the path, while the former will use the Windows API function _wgetdcwd (docs), which in turn reads a field of the Windows process to get the current directory. In our case, that is always the folder in which Maven was originally started, because Java doesn't update that field in the process when someone changes user.dir, and changing that property is the only way Maven can "simulate" changing folders.
WinNTFileSystem_md.c calls fileToNTPath(). That's defined in io_util_md.c and calls pathToNTPath(). For relative paths, it will call currentDirLength() which calls currentDir() which calls _wgetdcwd().
See also:
https://github.com/openjdk-mirror/jdk7u-jdk/blob/jdk7u6-b08/src/windows/native/java/io/WinNTFileSystem_md.c
https://github.com/openjdk-mirror/jdk7u-jdk/blob/jdk7u6-b08/src/windows/native/java/io/io_util_md.c
and here is the place where the Surefire plugin modifies the Property user.dir: https://github.com/apache/maven-surefire/blob/56d41b4c903b6c134c5e1a2891f9f08be7e5039f/maven-surefire-common/src/main/java/org/apache/maven/plugin/surefire/AbstractSurefireMojo.java#L1060
When not forking, it's copied into the current VM's System properties: https://github.com/apache/maven-surefire/blob/56d41b4c903b6c134c5e1a2891f9f08be7e5039f/maven-surefire-common/src/main/java/org/apache/maven/plugin/surefire/AbstractSurefireMojo.java#L1133
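The difference can be made visible with a few lines (a sketch of mine, not code from the Surefire sources): getAbsoluteFile() resolves a relative path against the user.dir property, while exists() lets the native layer resolve it against the real process working directory.

```java
import java.io.File;

public class CurrentDirDemo {
    public static void main(String[] args) {
        File rel = new File("a.txt");

        // getAbsoluteFile()/getAbsolutePath() resolve a relative path
        // against the user.dir system property...
        String viaProperty =
                new File(System.getProperty("user.dir"), "a.txt").getPath();
        System.out.println(rel.getAbsolutePath().equals(viaProperty));

        // ...whereas exists() hands the raw relative path to the native
        // layer, which resolves it against the real working directory of
        // the process (via _wgetdcwd on Windows). The two views only
        // agree as long as nobody has reassigned user.dir, which is
        // exactly what Surefire does with forkCount=0.
    }
}
```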
So I checked this by printing out the system properties in some simple tests.
During tests run via maven-surefire-plugin, user.dir is changed to the root of the appropriate module in a multi-module build.
But as I already mentioned, there is a system property basedir which can be used to correctly resolve the location of files that tests need to access via File. The basedir property points to the location of the pom.xml of the corresponding module.
But unfortunately, the basedir property is not set by IntelliJ IDEA during the test run.
But this can be solved by a setup like this:
private String basedir;

@Before
public void before() {
    this.basedir = System.getProperty("basedir", System.getProperty("user.dir", "Need to think about the default value"));
}

@Test
public void testX() {
    File file = new File(this.basedir, "src/main/java/com/pany/Foo.java");
    assertTrue("Missing file: " + file.getAbsolutePath(), file.exists());
}
This will work in Maven Surefire with -DforkCount=0 as well as -DforkCount=1, and in IDEs (checked only with IntelliJ IDEA).
And yes, that is an issue in the Maven Surefire plugin: it changes user.dir.
We might convince the IDEs to support the basedir property as well?
Aaron, we develop Surefire. We can help you if you provide the paths for this:
assertTrue("Missing file: " + file.getAbsolutePath(), file.exists());
Please post the actual path, the expected path, and the basedir where your POM resides.
Theory alone will not help here. We test the whole spectrum of JDKs 7-12, but we do not have the Cygwin+Windows combination, which must be considered.
The code in Surefire that sets user.dir, which you mentioned, has existed for a decade.

How to run arquillian-jms-mdb?

I am pretty green at Arquillian and have some problems with it.
Could you please try out this (probably great) Arquillian example for MDBs?
https://github.com/mcs/arquillian-jms-mdb
I also downloaded the JBoss 7.2.0 from:
https://www.redpill-linpro.com/products/jboss/downloads-jboss-and-wildfly
I do not think you need any more setup, actually; I bet you have a JDK installed already.
However, when I build it with mvn clean install, the container seems to start but the test is never executed. I just get:
Running com.github.mcs.arquillian.mdb.example.ExampleMDBBadTest
apr 17, 2018 3:20:37 EM org.jboss.as.arquillian.container.managed.ManagedDeployableContainer startInternal
INFO: Starting container with: ["C:\Program Files (x86)\Java\jdk1.8.0_92\bin\java", -Xmx768m, -XX:MaxPermSize=384m, -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=y, -ea, -Djboss.home.dir=C:\Fredrik\Applications\jboss-as-7.2.0.Final, -Dorg.jboss.boot.log.file=C:\Fredrik\Applications\jboss-as-7.2.0.Final\standalone\log\boot.log, -Dlogging.configuration=file:/C:/Fredrik/Applications/jboss-as-7.2.0.Final/standalone/configuration/logging.properties, -Djboss.bundles.dir=C:\Fredrik\Applications\jboss-as-7.2.0.Final\bundles, -jar, C:\Fredrik\Applications\jboss-as-7.2.0.Final\jboss-modules.jar, -mp, C:\Fredrik\Applications\jboss-as-7.2.0.Final\modules, -jaxpmodule, javax.xml.jaxp-provider, org.jboss.as.standalone, -server-config, standalone-full.xml]
Listening for transport dt_socket at address: 8787
...nothing more happens.
Second, if I stop it and run it again, port 8787 still seems to be in use; I need to kill the process from the Task Manager after finding its PID.
I bet the example is great.
What do you think, guys: what am I doing wrong?
Best regards
Fredrik
The message Listening for transport dt_socket at address: 8787 means that the JVM is suspended, waiting for a debugger to connect to port 8787. If you take a look at the command which starts the JVM, you'll see this JVM argument:
-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=y
This is what tells the JVM to suspend and wait for a connection on 8787.
This argument is configured in the project's arquillian.xml.
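For reference, the relevant part of such an arquillian.xml might look roughly like this (a sketch; the qualifier name and exact property values are assumptions based on the container command line shown above). Changing suspend=y to suspend=n makes the container start without waiting for a debugger:

```xml
<arquillian xmlns="http://jboss.org/schema/arquillian">
  <container qualifier="jboss" default="true">
    <configuration>
      <property name="jbossHome">C:\Fredrik\Applications\jboss-as-7.2.0.Final</property>
      <property name="javaVmArguments">-Xmx768m -XX:MaxPermSize=384m
        -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n</property>
    </configuration>
  </container>
</arquillian>
```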
I got in touch with the author himself, and he pointed out that I need to run with Java 7, so in my cmd I set:
set MAVEN_OPTS=-Xms512m -Xmx1024m
set PATH=%PATH%;C:\Fredrik\Applications\Maven\apache-maven-3.3.9\bin
set M2_HOME=C:\Fredrik\Applications\Maven\apache-maven-3.3.9
set JAVA_HOME=C:\Program Files\Java\jdk1.7.0_75
cd C:\dev\git\test\arquillian-jms-mdb-master
I also noticed that I needed to add this property in the arquillian.xml
<property name="jbossHome">C:\Fredrik\Applications\jboss-as-7.2.0.Final</property>
Second, I noticed that it seems I have to set "suspend=n", or else it just behaves like before: it stops at "Listening for transport dt_socket at address: 8787".
Third, I noticed that I needed to add these lines to the POM, or else I got:
"Error assembling EJB: META-INF/ejb-jar.xml is required for ejbVersion 2.x"
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-ejb-plugin</artifactId>
  <configuration>
    <ejbVersion>3.0</ejbVersion>
  </configuration>
</plugin>
Please comment if you disagree with my "workarounds" above.
However, I think this Arquillian example is great and helped me a lot!
Best regards
Fredrik

HelloWorld console application fails to publish with ClickOnce

Thanks in advance. When publishing with Visual Studio/ClickOnce, I always get the following error.
ERROR SUMMARY
Below is a summary of the errors, details of these errors are listed later in the log.
* Activation of C:\Users\carlos\Documents\visual studio 2012\Projects\ConsoleApplication1\ConsoleApplication1\publish\ConsoleApplication1.application resulted in exception. Following failure messages were detected:
+ Configuration system failed to initialize
+ Unrecognized configuration section startup. (C:\Windows\Microsoft.NET\Framework64\v4.0.30319\dfsvc.exe.Config line 2)
The dfsvc.exe.Config is:
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="false">
    <supportedRuntime version="v4.0" sku="client" />
  </startup>
</configuration>
Is there something that I should install on my PC? Thanks again.
I found a solution.
I don't know exactly which step was the fix; I did three things:
1 - I repaired the .NET Framework installation
2 - Ran mage -cc to clear the ClickOnce store
3 - Rebooted
thanks

Pig without Hadoop on Windows 7

I am trying to run PigUnit tests on a Windows 7 machine before running the actual Pig script on an Ubuntu cluster, and I am starting to think that my understanding of "withouthadoop" is not correct.
Do I need to install Hadoop to locally run a PigUnit test on a Windows 7 machine?
I installed:
Eclipse Juno & Ant
Cygwin
I set up:
JAVA_HOME=C:\Program Files\Java\jdk1.6.0_39
PIG_HOME=C:\Users\john.doe\Java\eclipse\pig
PIG_CLASSPATH=%PIG_HOME%\bin
Using Eclipse's Ant builder targets jar-all and pigunit-jar, I created:
pig.jar
pig-withouthadoop.jar
pigunit.jar
Still when I type pig -x local in cygwin I get:
$./pig -x local
cygpath: can't convert empty path
Exception in thread "main" java.io.IOException: Error opening job jar: /usr/lib/pig/pig-withouthadoop.jar
at org.apache.hadoop.util.RunJar.main(RunJar.java:135)
Caused by: java.io.FileNotFoundException: \usr\lib\pig\pig-withouthadoop.jar (the system cannot find the given path)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:127)
at java.util.jar.JarFile.<init>(JarFile.java:136)
at java.util.jar.JarFile.<init>(JarFile.java:73)
at org.apache.hadoop.util.RunJar.main(RunJar.java:133)
When I try to run the test from http://pig.apache.org/docs/r0.10.0/test.html#pigunit from within Eclipse using the option "Run as JUnit", I get:
java.io.IOException
at org.apache.pig.pigunit.pig.PigServer.registerScript(PigServer.java:62)
at org.apache.pig.pigunit.PigTest.registerScript(PigTest.java:171)
at org.apache.pig.pigunit.PigTest.assertOutput(PigTest.java:267)
at org.apache.pig.pigunit.PigTest.assertOutput(PigTest.java:262)
at da.utils.pigunit.PigUnitExample.testTop2Queries(PigUnitExample.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
I am starting to think that I missed some crucial basic information for running Pig on Windows. I should also say that I am not an experienced user of Windows 7 and Cygwin; I come from the Unix world.
Don't fight it. Install the Hadoop HDInsight server on Windows from the Web Platform Installer:
http://www.microsoft.com/web/downloads/platform.aspx
It doesn't take long or take up that much space, and the whole shebang is just set up and running for you. I can't get Pig scripts to take parameters, and there's no HBase, but you get HDFS, Pig, and Hive. You can even get a whole local cluster going if you just follow: http://social.msdn.microsoft.com/Forums/en-US/hdinsight/thread/885efc22-fb67-4df8-8648-4ff38098dac6/
I have installed Pig 0.12 in Cygwin (I run Windows 7 64-bit) without installing Hadoop. As far as I can see, the steps I followed were:
Install Cygwin64 (with Perl package)
Download pig-0.12.1.tar.gz, copy to home folder
Extract to home folder in cygwin:
$ tar xzf pig-0.12.1.tar.gz
Export JAVA_HOME:
$ export JAVA_HOME=/cygdrive/c/Program\ Files/Java/jre6/
Add pig to path:
$ export PATH=~/pig-0.12.1/bin/:$PATH
Rename pig-0.12.1.jar to pig.jar:
$ cp pig-0.12.1.jar pig.jar
Export PIG_CLASSPATH:
$ export PIG_CLASSPATH=~/pig-0.12.1/pig.jar
Run pig in local mode (start Grunt):
$ pig -x local
There will be a warning:
"cygpath: cannot create short name of C:\cygwin64\home\xxx\pig-0.12.1\logs"
We can remove it simply by running:
$ mkdir logs
Like you, I'm trying at the moment to get a functioning Pig installation on a Windows PC using Cygwin, in order to learn Pig Latin using small datasets on a single JVM. Not a huge ask, you would have thought, but the pain is almost unbearable. I come from a Windows background, and the UNIX part is the steep learning curve for me.
The pig-withouthadoop jar doesn't contain Hadoop, so Hadoop needs to be already installed on your machine to use it; pig.jar contains Pig's own version of Hadoop and so is the one to use if Hadoop is not already installed on your machine. This is the way I understand it, and it seems to be borne out by dumping a list of the contents of each .jar to a text file and viewing the results in Notepad++.
When you type pig -x local at Cygwin's dollar prompt, the bash command script 'pig' is invoked and run. Have a look at it (from your PIG_HOME) with $ cd bin; $ cat pig. I've been right through it these last few days with vim (!), and near the end of the code is a little fork for Cygwin users that casts the environment variables, which up until that point have been in Unix format, into a form that the Windows version of java.exe will understand when 'exec java ...' is called right at the end of the script. Without this conversion, the Windows java.exe won't understand its parameters:
# cygwin path translation
if $cygwin; then
    CLASSPATH=`cygpath -p -w "$CLASSPATH"`
    PIG_HOME=`cygpath -d "$PIG_HOME"`
    PIG_LOG_DIR=`cygpath -d "$PIG_LOG_DIR"`
fi
Cygpath is a cygwin utility that converts UNIX-style file paths into Windows-style file paths, and vice versa. The error message: "cygpath: can't convert empty path" must come from here, I think. Check that CLASSPATH, PIG_HOME and PIG_LOG_DIR aren't empty, perhaps by placing your own echo commands in the script.
On my machine and installation, an error was generated here, but not the same as yours. I found that replacing -w and -d with -m, which makes cygpath use the C:/Program Files/Java... syntax conversion, worked. But then other problems appeared, which I shall leave for my own question.
According to this note [1], it is not possible to use the Hadoop native libraries on Windows 7 directly using Cygwin:
Hadoop native library is supported only on *nix platforms only. Unfortunately it is known not to work on Cygwin [...]
I have traced the error message cygpath: can't convert empty path down to the line JAVA_LIBRARY_PATH=`cygpath -w "$JAVA_LIBRARY_PATH"` in hadoop-config.sh, which I commented out following the advice from [2]:
Comment out some of the translation in hadoop-config.sh.
#if $cygwin; then
#HADOOP_PREFIX=`cygpath -w "$HADOOP_PREFIX"`
#HADOOP_LOG_DIR=`cygpath -w "$HADOOP_LOG_DIR"`
#JAVA_LIBRARY_PATH=`cygpath -w "$JAVA_LIBRARY_PATH"`
#fi
Now I get the following error:
Error before Pig is launched -- ERROR 2999: Unexpected internal error.
java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
So the conclusion I draw from this is that Pig, even in local mode, requires HDFS, HDFS requires the Hadoop native libraries, and the native libraries are known not to work on Cygwin. Hence, IMHO, Pig cannot run under Cygwin as it is.
To run PigUnit on Windows 7, I don't install Cygwin.
Thanks to Konstantin Kudryavtsev, I use his FixHadoopOnWindows.runFix():
http://simpletoad.blogspot.com/2013/05/pigunit-issue-on-windows.html
I call runFix in my setUp, for example:
private static PigTest test;

@BeforeClass
public static void setUp() throws IOException, ParseException {
    try {
        FixHadoopOnWindows.runFix();
        // TODO: load pig script properly
        test = new PigTest("src/pig/example.pig");
        //test.override(...);
    }
    catch (Exception e) {
        throw new RuntimeException(e); // don't silently swallow setup failures
    }
}
Using Maven, you need the following dependencies:
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.apache.pig</groupId>
  <artifactId>pig</artifactId>
  <version>0.15.0</version>
</dependency>
<dependency>
  <groupId>org.jboss.forge</groupId>
  <artifactId>forge-javassist</artifactId>
  <version>2</version>
</dependency>
<dependency>
  <groupId>org.apache.pig</groupId>
  <artifactId>pigunit</artifactId>
  <version>0.15.0</version>
  <scope>test</scope>
</dependency>

Task throws error=7: Argument list too long

I have an Ant build.xml file which executes with no problems on my machine (Ubuntu), but throws the following error on the Hudson build server:
/var/lib/hudson/workspace/myproject/build.xml:254: Error running /var/lib/hudson/tools/java_6/bin/javac compiler
at org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.executeExternalCompile(DefaultCompilerAdapter.java:525)
(...)
Caused by: java.io.IOException: Cannot run program "/var/lib/hudson/tools/java_6/bin/javac": java.io.IOException: error=7, Argument list too long
at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
at java.lang.Runtime.exec(Runtime.java:593)
at org.apache.tools.ant.taskdefs.Execute$Java13CommandLauncher.exec(Execute.java:862)
at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:481)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:495)
at org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.executeExternalCompile(DefaultCompilerAdapter.java:522)
... 19 more
Caused by: java.io.IOException: java.io.IOException: error=7, Argument list too long
at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
at java.lang.ProcessImpl.start(ProcessImpl.java:65)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
... 24 more
The argument list is indeed quite big: it contains all the jar files from WEB-INF/lib and is 231,650 characters long!
Any suggestions on how to fix it?
With a command that long, you are likely running into ARG_MAX at the shell.
This will report a good estimate of the available length:
expr `getconf ARG_MAX` - `env|wc -c` - `env|wc -l` \* 4 - 2048
A nice article about command arg lists and length can be found here
Run ant -d. This will produce copious amounts of output; however, it will also show your entire compile line, which may help you understand why it is so long.
Are you using Jenkins/Hudson, and is that where the error occurs?
Try the following:
Disable the build.
Log into your build server, AS YOUR JENKINS USER, and find the workdir directory where Jenkins/Hudson is attempting the build.
You may have to change $PATH or set $JAVA_HOME to point to the JDK that Hudson/Jenkins is using.
Now run ant -d <target> just as Jenkins/Hudson would. Pipe the output through tee into a file. Then take a look and see what Hudson/Jenkins is doing and why javac ends up with too many arguments.
Use apply for your fileset in your build.xml, e.g.
<?xml version="1.0" encoding="UTF-8"?>
<project default="build">
  <fileset id="myfiles" dir="${basedir}">
    <include name="**/*.java"/>
    <exclude name="**/Resources/**"/>
    <modified>
      <param name="cache.cachefile" value="${basedir}/cache.${project}.fileset.myfiles.properties"/>
    </modified>
  </fileset>
  <target name="execute-some-command">
    <apply executable="javac" dir="${basedir}" failonerror="true">
      <fileset refid="myfiles"/>
    </apply>
  </target>
</project>
By default, the command will be executed once for every file.
If you want the command run as few times as possible, set parallel="true", and use maxparallel to limit the amount of parallelism by passing at most this many source files per invocation (e.g. set it to 1000 to pass a thousand files per run). For example:
<apply executable="javac" parallel="true" maxparallel="1000" dir="${basedir}">
  <fileset refid="myfiles"/>
</apply>
To see how many files you've got in total, check the contents of the cache file (look for cache.cachefile in the example above).
