Adding more memory to h2o.ai Steam Prediction Service Builder

I am trying to build a .war file in h2o Steam's Prediction Service Builder using a (~800MB) pojo file (a similar pojo of size ~200MB also produced the same problems). However, when I try this, an error appears after clicking 'build':
Problem accessing /makewar. Reason:
Compilation of pojo failed exit value 3 warning: [options] bootstrap class path not set in conjunction with -source 1.6
The system is out of resources.
Consult the following stack trace for details.
java.lang.OutOfMemoryError
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:255)
at com.sun.tools.javac.util.BaseFileManager.makeByteBuffer(BaseFileManager.java:302)
at com.sun.tools.javac.file.RegularFileObject.getCharContent(RegularFileObject.java:114)
at com.sun.tools.javac.file.RegularFileObject.getCharContent(RegularFileObject.java:53)
at com.sun.tools.javac.main.JavaCompiler.readSource(JavaCompiler.java:602)
at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:665)
at com.sun.tools.javac.main.JavaCompiler.parseFiles(JavaCompiler.java:950)
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:857)
at com.sun.tools.javac.main.Main.compile(Main.java:523)
at com.sun.tools.javac.main.Main.compile(Main.java:381)
at com.sun.tools.javac.main.Main.compile(Main.java:370)
at com.sun.tools.javac.main.Main.compile(Main.java:361)
at com.sun.tools.javac.Main.compile(Main.java:56)
at com.sun.tools.javac.Main.main(Main.java:42)
I am launching the Prediction Service Builder from the command line, following the instructions in this documentation. Is there a way to launch the service builder with more memory?
UPDATE
Using the command:
$ GRADLE_OPTS=-Xmx4g ./gradlew jettyRunWar
Trying to build a .war from the pojo returns the following CLI error:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/reedv/.gradle/wrapper/dists/gradle-2.7-all/2glqtbnmvcq45bfjvhghri39p6/gradle-2.7/lib/gradle-core-2.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/reedv/Documents/h2o_production/h2o-steam/steam/prediction-service-builder/build/tmp/jettyRunWar/webapp/WEB-INF/lib/slf4j-simple-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
2017-09-21 15:22:48.084 -1000 [1222676357#qtp-1014435252-3] INFO MakeWarServlet - servletPath = /home/reedv/Documents/h2o_production/h2o-steam/steam/prediction-service-builder/build/tmp/jettyRunWar/webapp
2017-09-21 15:22:48.086 -1000 [1222676357#qtp-1014435252-3] INFO MakeWarServlet - tmpDir /tmp/makeWar316567921053262563022859149567148
2017-09-21 15:22:57.175 -1000 [1222676357#qtp-1014435252-3] INFO MakeWarServlet - added pojo model drf_denials_v4_v3-10-5-2.java
2017-09-21 15:22:57.190 -1000 [1222676357#qtp-1014435252-3] INFO MakeWarServlet - prejar null preclass null
2017-09-21 15:22:58.047 -1000 [1222676357#qtp-1014435252-3] INFO Util - warning: [options] bootstrap class path not set in conjunction with -source 1.6
2017-09-21 15:23:25.017 -1000 [1190941229#qtp-1014435252-0] INFO MakeWarServlet - tmpDir /tmp/makeWar432278342000106527922896081353600
2017-09-21 15:23:39.448 -1000 [1190941229#qtp-1014435252-0] INFO MakeWarServlet - added pojo model drf_denials_v4_v3-10-5-2.java
2017-09-21 15:23:39.569 -1000 [1190941229#qtp-1014435252-0] INFO MakeWarServlet - prejar null preclass null
2017-09-21 15:23:40.651 -1000 [1190941229#qtp-1014435252-0] INFO Util - warning: [options] bootstrap class path not set in conjunction with -source 1.6
2017-09-21 15:23:57.124 -1000 [1190941229#qtp-1014435252-0] INFO Util - OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006efd00000, 1592786944, 0) failed; error='Cannot allocate memory' (errno=12)
2017-09-21 15:23:57.604 -1000 [1190941229#qtp-1014435252-0] INFO Util - #
2017-09-21 15:23:57.605 -1000 [1190941229#qtp-1014435252-0] INFO Util - # There is insufficient memory for the Java Runtime Environment to continue.
2017-09-21 15:23:57.616 -1000 [1190941229#qtp-1014435252-0] INFO Util - # Native memory allocation (mmap) failed to map 1592786944 bytes for committing reserved memory.
2017-09-21 15:23:57.619 -1000 [1190941229#qtp-1014435252-0] INFO Util - # An error report file with more information is saved as:
2017-09-21 15:23:57.622 -1000 [1190941229#qtp-1014435252-0] INFO Util - # /tmp/makeWar432278342000106527922896081353600/hs_err_pid32313.log
2017-09-21 15:23:57.747 -1000 [1190941229#qtp-1014435252-0] ERROR MakeWarServlet - doPost failed
java.lang.Exception: Compilation of pojo failed exit value 1 warning: [options] bootstrap class path not set in conjunction with -source 1.6
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006efd00000, 1592786944, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1592786944 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/makeWar432278342000106527922896081353600/hs_err_pid32313.log
at ai.h2o.servicebuilder.Util.runCmd(Util.java:162)
at ai.h2o.servicebuilder.MakeWarServlet.doPost(MakeWarServlet.java:151)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:390)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:440)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:943)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
2017-09-21 15:23:58.039 -1000 [1190941229#qtp-1014435252-0] ERROR MakeWarServlet - Compilation of pojo failed exit value 1 warning: [options] bootstrap class path not set in conjunction with -source 1.6
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006efd00000, 1592786944, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1592786944 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/makeWar432278342000106527922896081353600/hs_err_pid32313.log
Increasing the memory allocation to -Xmx6g or -Xmx7g still gives this same error.
Furthermore, looking for the file /tmp/makeWar432278342000106527922896081353600/hs_err_pid32313.log that was supposedly created by this error, there is no directory named "makeWar432278342000106527922896081353600" in my /tmp directory, so I'm not sure where to look for it.
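Since errno=12 means the operating system itself refused the allocation (not that the JVM heap limit was hit), it is worth checking whether the host actually has the ~1.5 GB free that the failed mmap asked for; for example:
# show free physical memory and swap on the host at build time
free -h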

I'm guessing you started it with Gradle. In that case you can run GRADLE_OPTS=-Xmx4g ./gradlew jettyRunWar to start it with 4 GB of memory.
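If the environment variable is inconvenient, the same heap setting can also go in the project's gradle.properties (org.gradle.jvmargs is the standard Gradle property for the build JVM; the 4g value is just a starting point to tune):
# persist the heap setting for every Gradle invocation in this project
echo "org.gradle.jvmargs=-Xmx4g" >> gradle.properties
./gradlew jettyRunWar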

Related

Install Hive on Windows

I am trying to install Hive 3.1.2 on Windows 10 with Hadoop 3.2.2.
I can start the Hadoop server and start the Hive shell by running "hive".
The first problem is that it shows a lot of warnings:
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2021-09-09T21:01:22,001 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Found configuration file file:/C:/my_programs/hive_3.1.2/conf/hive-site.xml
2021-09-09T21:01:22,303 WARN [main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
2021-09-09T21:01:23,557 WARN [main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
Hive Session ID = f879881f-c49b-449b-b8cf-81302c585358
Logging initialized using configuration in jar:file:/C:/my_programs/hive_3.1.2/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
2021-09-09T21:01:25,309 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Created HDFS directory: /tmp/hive/admin/f879881f-c49b-449b-b8cf-81302c585358
2021-09-09T21:01:25,317 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Created local directory: C:/Users/admin/AppData/Local/Temp/admin/f879881f-c49b-449b-b8cf-81302c585358
2021-09-09T21:01:25,325 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Created HDFS directory: /tmp/hive/admin/f879881f-c49b-449b-b8cf-81302c585358/_tmp_space.db
2021-09-09T21:01:25,345 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: f879881f-c49b-449b-b8cf-81302c585358
2021-09-09T21:01:25,345 INFO [main] org.apache.hadoop.hive.ql.session.SessionState - Updating thread name to f879881f-c49b-449b-b8cf-81302c585358 main
2021-09-09T21:01:25,383 WARN [f879881f-c49b-449b-b8cf-81302c585358 main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2021-09-09T21:01:53,237 INFO [f879881f-c49b-449b-b8cf-81302c585358 main] CliDriver - Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
The Hive shell still starts, but when I run
hive> show databases;
it comes back with this error:
hive> show databases;
2021-09-09T21:05:01,341 INFO [f879881f-c49b-449b-b8cf-81302c585358 main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: f879881f-c49b-449b-b8cf-81302c585358
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
2021-09-09T21:05:43,092 INFO [f879881f-c49b-449b-b8cf-81302c585358 main] org.apache.hadoop.hive.conf.HiveConf - Using the default value passed in for log id: f879881f-c49b-449b-b8cf-81302c585358
2021-09-09T21:05:43,093 INFO [f879881f-c49b-449b-b8cf-81302c585358 main] org.apache.hadoop.hive.ql.session.SessionState - Resetting thread name to main
hive>
I have read some solutions, and I think the problem comes from the Hive metastore.
I followed a tutorial that connects a Derby metastore with Hive.
But when I try to run
schematool -dbType derby -initSchema
Windows cannot run schematool as a command.
So I am confused: how can I initialize the metastore database for Hive, or is there another way to do it?
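One workaround worth trying, assuming a bash-capable shell is available on Windows (Git Bash, Cygwin, or WSL): Hive 3.x ships schematool only as a bash script with no .cmd wrapper, so the bundled script can be invoked directly.
# run from the Hive install directory; this initializes an embedded Derby metastore
./bin/schematool -dbType derby -initSchema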
Update 2021/9/20:
I have fixed all the variables in my paths, and right now I am stuck on a new problem. The error is quite clear, but my research turned up no solution:
PS C:\my_programs\hive_3.1.2> .\bin\hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/my_programs/hive_3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/my_programs/hadoop-3.2.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2021-09-20T20:22:34,274 INFO [main] org.apache.hadoop.hive.conf.HiveConf - Found configuration file file:/C:/my_programs/hive_3.1.2/conf/hive-site.xml
2021-09-20T20:22:34,694 WARN [main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
2021-09-20T20:22:37,728 WARN [main] org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.server2.enable.impersonation does not exist
Hive Session ID = 89fc5e06-2a55-496c-aea0-ab5512839ac3
Logging initialized using configuration in jar:file:/C:/my_programs/hive_3.1.2/lib/hive-exec-3.1.2.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.getTimeDuration(Ljava/lang/String;JLjava/util/concurrent/TimeUnit;Ljava/util/concurrent/TimeUnit;)J
at org.apache.hadoop.hdfs.client.impl.DfsClientConf.<init>(DfsClientConf.java:248)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:307)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:291)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:173)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:226)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:624)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:591)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:747)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
PS C:\my_programs\hive_3.1.2>
It seems like an error in the HDFS configuration, but I don't know what I need to change in the Hadoop config: core-site.xml, hdfs-site.xml, or something else. Do we need some configuration on the Hadoop side to connect with Hive?
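A NoSuchMethodError like this often means an older hadoop-common jar on Hive's classpath is shadowing the Hadoop 3.2.2 one, rather than anything in core-site.xml or hdfs-site.xml. A hedged first check (paths taken from the logs above; shell syntax assumes Git Bash) is to look for conflicting Hadoop jars:
# list Hadoop jars bundled under Hive's lib that could shadow
# the ones shipped with the Hadoop 3.2.2 installation
ls /c/my_programs/hive_3.1.2/lib | grep -i hadoop
ls /c/my_programs/hadoop-3.2.2/share/hadoop/common | grep -i hadoop-common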

While installing Cassandra, getting an error

While trying to install Cassandra on my local system, I am getting the error below. Could you please help with this error?
INFO [main] 2021-04-22 19:23:16,662 CassandraDaemon.java:507 - JVM Arguments: [-ea, -javaagent:C:\Cassandra\apache-cassandra-3.11.10\lib\jamm-0.3.0.jar, -Xms2G, -Xmx2G, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled, -XX:SurvivorRatio=8, -XX:MaxTenuringThreshold=1, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Dlogback.configurationFile=logback.xml, -Djava.library.path=C:\Cassandra\apache-cassandra-3.11.10\lib\sigar-bin, -Dcassandra.jmx.local.port=7199, -Dcassandra, -Dcassandra-foreground=yes, -Dcassandra.logdir=C:\Cassandra\apache-cassandra-3.11.10\logs, -Dcassandra.storagedir=C:\Cassandra\apache-cassandra-3.11.10\data]
WARN [main] 2021-04-22 19:23:16,682 StartupChecks.java:169 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
WARN [main] 2021-04-22 19:23:16,688 StartupChecks.java:220 - The JVM is not configured to stop on OutOfMemoryError which can cause data corruption. Use one of the following JVM options to configure the behavior on OutOfMemoryError: -XX:+ExitOnOutOfMemoryError, -XX:+CrashOnOutOfMemoryError, or -XX:OnOutOfMemoryError="<cmd args>;<cmd args>"
These aren't errors; they are just warnings that some parameters aren't configured, suggesting options you can set. Specifically, you can ignore the second line (about JMX), but the last line is more important: open conf/jvm.options and add a line with the -XX:+ExitOnOutOfMemoryError option.
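Concretely, conf/jvm.options should gain a line like this (jvm.options uses # for comments):
# exit the JVM on OutOfMemoryError instead of continuing in a corrupted state
-XX:+ExitOnOutOfMemoryError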

Kettle - pan.sh "No repository provided, can't load transformation"

I've created a Kettle transformation and tested it on my PC, where it works. However, I've deployed it on the server, starting it as a bash script via pan.sh. It was working, but after a few runs it started to give this problem.
server$ bash pan.sh file="API_Mining_LatestVersion.ktr"
#######################################################################
WARNING: no libwebkitgtk-1.0 detected, some features will be unavailable
Consider installing the package with apt-get or yum.
e.g. 'sudo apt-get install libwebkitgtk-1.0-0'
#######################################################################
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
14:56:00,682 INFO [KarafBoot] Checking to see if org.pentaho.clean.karaf.cache is enabled
14:56:00,803 INFO [KarafInstance]
*******************************************************************************
*** Karaf Instance Number: 2 at /data/Fernando/data-integration_updated/./s ***
*** ystem/karaf/caches/pan/data-1 ***
*** FastBin Provider Port:52902 ***
*** Karaf Port:8803 ***
*** OSGI Service Port:9052 ***
*******************************************************************************
Nov 20, 2018 2:56:01 PM org.apache.karaf.main.Main$KarafLockCallback lockAquired
INFO: Lock acquired. Setting startlevel to 100
*ERROR* [org.osgi.service.cm.ManagedService, id=255, bundle=53/mvn:org.apache.aries.transaction/org.apache.aries.transaction.manager/1.1.1]: Updating configuration org.apache.aries.transaction caused a problem: null
org.osgi.service.cm.ConfigurationException: null : null
at org.apache.aries.transaction.internal.TransactionManagerService.<init>(TransactionManagerService.java:136)
at org.apache.aries.transaction.internal.Activator.updated(Activator.java:63)
at org.apache.felix.cm.impl.helper.ManagedServiceTracker.updateService(ManagedServiceTracker.java:148)
at org.apache.felix.cm.impl.helper.ManagedServiceTracker.provideConfiguration(ManagedServiceTracker.java:81)
at org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.provide(ConfigurationManager.java:1448)
at org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceUpdate.run(ConfigurationManager.java:1404)
at org.apache.felix.cm.impl.UpdateThread.run(UpdateThread.java:103)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.objectweb.howl.log.LogConfigurationException: Unable to obtain lock on /data/Fernando/data-integration/system/karaf/caches/pan/data-1/txlog/transaction_1.log
at org.objectweb.howl.log.LogFile.open(LogFile.java:191)
at org.objectweb.howl.log.LogFileManager.open(LogFileManager.java:784)
at org.objectweb.howl.log.Logger.open(Logger.java:304)
at org.objectweb.howl.log.xa.XALogger.open(XALogger.java:893)
at org.apache.aries.transaction.internal.HOWLLog.doStart(HOWLLog.java:233)
at org.apache.aries.transaction.internal.TransactionManagerService.<init>(TransactionManagerService.java:133)
... 7 more
2018-11-20 14:56:04.508:INFO:oejs.Server:jetty-8.1.15.v20140411
2018-11-20 14:56:04.544:INFO:oejs.AbstractConnector:Started NIOSocketConnectorWrapper#0.0.0.0:9052
[...]
INFO: New Caching Service registered
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/Fernando/data-integration_updated/launcher/../lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/Fernando/data-integration_updated/plugins/pentaho-big-data-plugin/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018/11/20 14:56:09 - Pan - Start of run.
ERROR: No repository provided, can't load transformation.
I don't understand where the problem is. The transformation file hasn't been changed, and it also contains the repo, user, and pass parameters.
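One thing worth double-checking, assuming standard Pan option syntax: options are passed with a leading dash, and a bare file= argument is not parsed as an option, which makes Pan fall back to repository loading and can produce exactly this message.
# pass the transformation with -file= so Pan parses it as an option
./pan.sh -file="API_Mining_LatestVersion.ktr"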

Using MOJOs in H2O Steam Prediction Service Builder

In h2o's documentation for the Steam Prediction Service Builder, here, it says that the service builder can compile both h2o pojos (.java files) and mojos (downloaded from h2o flow, in my case as a .zip, version 3.10.5.2, which I have been using in the manner shown here). However, uploading the mojo zip through the builder gives this error:
Problem accessing /makewar. Reason:
Compilation of pojo failed exit value 1 warning: [options] bootstrap class path not set in conjunction with -source 1.6
error: Class names, 'drf_denials_v4.zip', are only accepted if annotation processing is explicitly requested
1 error
1 warning
So how can I use mojo files in the service builder? Do I need to use the "exported" model file from h2o flow rather than the "downloaded" zip file? The reason I need to use mojos rather than .java pojos is that my model is too large to fit in the pojo downloadable from h2o flow.
UPDATE:
Trying to use the CLI with the command:
$ curl -X POST --form mojo=@drf_denials_v4.zip --form jar=@h2o-genmodel.jar localhost:55000/makewar > drf_denials_v4.war
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 106M 100 53.6M 100 52.7M 6748k 6632k 0:00:08 0:00:08 --:--:-- 229k
in the dir. containing the relevant files, then using the command:
prediction-service-builder git:(master)$ java -jar jetty-runner-9.3.9.M1.jar --port 55001 ~/Documents/h2o_production/mojos/drf_denials_v4/drf_denials_v4.war
gives the output:
2017-09-21 12:33:58.226:INFO::main: Logging initialized #232ms
2017-09-21 12:33:58.234:INFO:oejr.Runner:main: Runner
2017-09-21 12:33:58.558:INFO:oejs.Server:main: jetty-9.3.9.M1
2017-09-21 12:33:59.557:WARN:oeja.AnnotationConfiguration:main: ServletContainerInitializers: detected. Class hierarchy: empty
2017-09-21 12:34:00.068 -1000 [main] INFO ServletUtil - modelNames size 1
2017-09-21 12:34:01.285 -1000 [main] INFO ServletUtil - added model drf_denials_v4 new size 1
2017-09-21 12:34:01.290 -1000 [main] INFO ServletUtil - added 1 models
2017-09-21 12:34:01.291:INFO:oejsh.ContextHandler:main: Started o.e.j.w.WebAppContext#4c75cab9{/,file:///tmp/jetty-0.0.0.0-55001-drf_denials_v4.war-_-any-39945022624149883.dir/webapp/,AVAILABLE}{file:///home/reedv/Documents/h2o_production/mojos/drf_denials_v4/drf_denials_v4.war}
2017-09-21 12:34:01.321:INFO:oejs.AbstractConnector:main: Started ServerConnector#176c9571{HTTP/1.1,[http/1.1]}{0.0.0.0:55001}
2017-09-21 12:34:01.322:INFO:oejs.Server:main: Started #3329ms
Going to localhost:55001 and trying to make a prediction, a prediction is given with a label, but there are no parameter input fields present, and I get this CLI error message:
2017-09-21 12:35:11.270:WARN:oejs.ServletHandler:qtp1531448569-12: Error for /info
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
at java.lang.StringBuffer.append(StringBuffer.java:270)
at java.io.StringWriter.write(StringWriter.java:101)
at java.io.StringWriter.append(StringWriter.java:143)
at java.io.StringWriter.append(StringWriter.java:41)
at com.google.gson.stream.JsonWriter.value(JsonWriter.java:519)
at com.google.gson.internal.bind.TypeAdapters$5.write(TypeAdapters.java:210)
at com.google.gson.internal.bind.TypeAdapters$5.write(TypeAdapters.java:194)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:68)
at com.google.gson.internal.bind.ArrayTypeAdapter.write(ArrayTypeAdapter.java:93)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:68)
at com.google.gson.internal.bind.ArrayTypeAdapter.write(ArrayTypeAdapter.java:93)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:68)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.write(ReflectiveTypeAdapterFactory.java:112)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.write(ReflectiveTypeAdapterFactory.java:239)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:68)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.write(ReflectiveTypeAdapterFactory.java:112)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.write(ReflectiveTypeAdapterFactory.java:239)
at com.google.gson.Gson.toJson(Gson.java:661)
at com.google.gson.Gson.toJson(Gson.java:640)
at com.google.gson.Gson.toJson(Gson.java:595)
at com.google.gson.Gson.toJson(Gson.java:575)
at InfoServlet.doGet(InfoServlet.java:59)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:837)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
The CLI pojo example works, but trying to use my mojo zip does not.
Unfortunately, the UI has not been updated for mojo functionality yet. You can, however, use the command line to build war files with mojos.
Run this from your command line:
curl -X POST --form mojo=@drf_denials_v4.zip --form jar=@h2o-genmodel.jar localhost:55000/makewar > example.war
Then run the war file in the normal way.
For more information see: https://github.com/h2oai/steam/tree/master/prediction-service-builder
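For example, reusing the jetty-runner invocation from the question:
# serve the freshly built war on port 55001
java -jar jetty-runner-9.3.9.M1.jar --port 55001 example.war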

Getting Stack trace: ExitCodeException exitCode=255 during MapReduce

I am running some MapReduce tasks on a huge dataset on a four-node cluster, but I am getting an exception with exit code 255:
16/08/04 08:07:19 INFO mapreduce.Job: map 0% reduce 0%
16/08/04 08:07:27 INFO mapreduce.Job: Task Id : attempt_1470297644642_0001_m_000000_0, Status : FAILED
Exception from container-launch.
Container id: container_1470297644642_0001_01_000003
Exit code: 255
Stack trace: ExitCodeException exitCode=255:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:543)
at org.apache.hadoop.util.Shell.run(Shell.java:460)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:720)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 255
In the log file:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
(binary container metadata for container_1470297644642_0007_01_000007 omitted)
2016-08-04 10:39:56,439 FATAL [main] org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[main,5,main] threw an Error. Shutting down now...
java.lang.NoSuchMethodError: org.apache.hadoop.mapred.JVMId.<init>(Lorg/apache/hadoop/mapred/JobID;ZJ)V
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:86)
2016-08-04 10:39:56,444 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status -1
The same FATAL error and NoSuchMethodError appear again for container_1470297644642_0007_01_000013.
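A NoSuchMethodError on org.apache.hadoop.mapred.JVMId inside the task JVM usually points at mismatched Hadoop/MapReduce jar versions between nodes rather than at the job code. A quick hedged check (the hostnames below are placeholders for your four nodes) is to compare the Hadoop build everywhere:
# all four nodes should report the same Hadoop version
for host in node1 node2 node3 node4; do
  ssh "$host" hadoop version | head -n 1
done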
