Unable to init h2o. Can somebody help me with it? - h2o

Checking whether there is an H2O instance running at http://localhost:54321..... not found.
Attempting to start a local H2O server...
  Java HotSpot(TM) 64-Bit Server VM (build 9.0.1+11, mixed mode)
  Starting server from C:\Users\Ramakanth\Anaconda2\lib\site-packages\h2o\backend\bin\h2o.jar
  Ice root: c:\users\ramaka~1\appdata\local\temp\tmpeaff8n
  JVM stdout: c:\users\ramaka~1\appdata\local\temp\tmpeaff8n\h2o_Ramakanth_started_from_python.out
  JVM stderr: c:\users\ramaka~1\appdata\local\temp\tmpeaff8n\h2o_Ramakanth_started_from_python.err
Traceback (most recent call last):
  File "", line 1, in
  File "C:\Users\Ramakanth\Anaconda2\lib\site-packages\h2o\h2o.py", line 262, in init
    min_mem_size=mmin, ice_root=ice_root, port=port, extra_classpath=extra_classpath)
  File "C:\Users\Ramakanth\Anaconda2\lib\site-packages\h2o\backend\server.py", line 121, in start
    mmax=max_mem_size, mmin=min_mem_size)
  File "C:\Users\Ramakanth\Anaconda2\lib\site-packages\h2o\backend\server.py", line 317, in _launch_server
    raise H2OServerError("Server process terminated with error code %d" % proc.returncode)
h2o.exceptions.H2OServerError: Server process terminated with error code 1

Assuming "build 9.0.1+11" means Java 9, that is your problem: H2O currently only supports Java 7 or Java 8. This is the ticket to follow for adding Java 9 support. In the meantime uninstall your current Java, then install Java 8.
UPDATE: It seems Java 9 is now supported, so upgrade to h2o 3.20 or later.
BTW, normally you should give a lot more information: which language you are using, what code you used to try to start H2O (or the command line if you started it that way), what OS, what versions of Java, R, Python, etc., number of cores, amount of memory, and so on.
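For completeness, here is a minimal sketch of a pre-flight check you can run from Python before starting H2O. h2o.init() is the standard H2O API call; the Java-version parsing is illustrative and assumes java is on the PATH.

# Minimal sketch: detect the Java major version, then start H2O only if it is supported.
import re
import subprocess
import h2o

def java_major_version():
    # `java -version` prints to stderr, e.g. java version "1.8.0_191" or "9.0.1"
    out = subprocess.check_output(["java", "-version"], stderr=subprocess.STDOUT)
    match = re.search(r'version "(\d+)(?:\.(\d+))?', out.decode("utf-8", "replace"))
    if not match:
        return None
    major = int(match.group(1))
    # Pre-Java-9 runtimes report "1.x", so the real major version is the second field.
    return int(match.group(2)) if major == 1 and match.group(2) else major

version = java_major_version()
print("Detected Java major version:", version)
if version in (7, 8):
    h2o.init()   # older h2o releases (before 3.20) only support Java 7/8
else:
    print("Install Java 8 or upgrade h2o to 3.20+ before calling h2o.init()")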

Related

What does "Failed to execute /init (error -7)" mean?

Linux kernel version: 4.18.0-17
I am porting some 4.15 kernel customizations to 4.18, but my 4.18 kernel does not boot. A stock 4.18 kernel (i.e. the starting point before merging the 4.15 modifications) boots and runs.
The error message is:
Failed to execute /init (error -7)
Starting init: /bin/sh exists but couldn't execute it (error -7)
"errno 7" is "E2BIG 7 Argument list too long"
What does that mean in the context of the kernel starting the init process?
If the kernel command line and root file system are exactly the same as the ones you are giving to the kernel version that does boot, then the most likely cause is that get_user_pages_remote() is failing here: https://elixir.bootlin.com/linux/v4.18/source/fs/exec.c#L194
That would imply one of your changes broke memory management.
To get there, just track from try_to_run_init_process(), which runs init, through all the functions called from it that can return E2BIG. This is the only call site that does not depend on the init argument list or environment size: https://elixir.bootlin.com/linux/v4.18/source/init/main.c#L1001
Having said that, I would first make VERY sure that the kernel command line and root file system are the same.
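A quick way to do that check is to dump the command line and root mount on the kernel that does boot and compare them against the boot loader entry used for the 4.18 kernel. This is just an illustrative procfs read, nothing specific to 4.18:

# Illustrative check: print the booted kernel's command line and root filesystem,
# to compare with the bootloader entry used for the non-booting 4.18 kernel.
with open("/proc/cmdline") as f:
    print("cmdline:", f.read().strip())
with open("/proc/mounts") as f:
    for line in f:
        fields = line.split()
        if fields[1] == "/":
            # device, mount point, filesystem type
            print("root fs:", fields[0], fields[2])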

Starting Payara 5 has encountered a problem

I have built a very simple hello world project with
Payara 5 (5.181)
JSF 2.3
JDK 1.8
CDI 2.0
Maven
and encountered a problem:
Unable to start server due following issues: Launch process failed with exit code 1
In the console it throws an error:
Error: Could not find or load main class server\payara5\glassfish.lib.grizzly-npn-bootstrap.jar
[PIC] Payara 5 Error
It seems that the Payara Tools for Eclipse suffer from several bugs that may cause this. In my case, the following workarounds helped:
The Payara installation path should not contain spaces (e.g. Program Files\Payara)
It seems that only Java 8 is supported at the time
Open the domain.xml configuration file for the domain you are trying to start (typically payara_install_path/glassfish/domains/domain1/config/domain.xml) and search for "Xbootclasspath". You should find a couple of lines like
<jvm-options>[1.8.0|1.8.0u120]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.6.jar</jvm-options>
<jvm-options>[1.8.0u121|1.8.0u160]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.7.jar</jvm-options>
<jvm-options>[1.8.0u161|1.8.0u190]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.jar</jvm-options>
<jvm-options>[1.8.0u191|1.8.0u500]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.1.jar</jvm-options>
Depending on your installed Java version (try running java -version), choose the appropriate line (most likely the last one). Remove the remaining lines and remove the [...] part at the beginning of the chosen line, so you will get something like
<jvm-options>-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.1.jar</jvm-options>
After this, the tools seem to start normally.
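As a rough illustration of how those version ranges map to the installed JDK (the ranges are copied from the domain.xml lines above; the java -version parsing is an assumption, not part of the Payara tooling):

# Illustrative sketch: pick the grizzly-npn-bootstrap jar matching the installed
# Java 8 update, mirroring the [lower|upper] ranges from domain.xml above.
import re
import subprocess

RANGES = [
    (0,   120, "grizzly-npn-bootstrap-1.6.jar"),
    (121, 160, "grizzly-npn-bootstrap-1.7.jar"),
    (161, 190, "grizzly-npn-bootstrap-1.8.jar"),
    (191, 500, "grizzly-npn-bootstrap-1.8.1.jar"),
]

out = subprocess.check_output(["java", "-version"], stderr=subprocess.STDOUT)
match = re.search(r'"1\.8\.0_(\d+)"', out.decode("utf-8", "replace"))  # e.g. "1.8.0_191"
if match:
    update = int(match.group(1))
    for lo, hi, jar in RANGES:
        if lo <= update <= hi:
            print("Keep only: <jvm-options>-Xbootclasspath/p:"
                  "${com.sun.aas.installRoot}/lib/%s</jvm-options>" % jar)
            break
else:
    print("Not a Java 8 runtime - Payara 5.181 expects Java 8")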
The problem is with the Java version. The grizzly-npn-bootstrap jar is placed on the bootclasspath, which is why the matching Java version is required to start the Payara server. So remove the unnecessary bootstrap jar entries from domain.xml.
In Windows:
1) Go to C:\Users\xxxx\payara5\glassfish\domains\domain1\config\domain.xml
2) According to my Java version (java version "1.8.0_191") I deleted the following lines from domain.xml. Delete them according to your Java version.
<jvm-options>[1.8.0|1.8.0u120]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.6.jar</jvm-options>
<jvm-options>[1.8.0u121|1.8.0u160]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.7.jar</jvm-options>
<jvm-options>[1.8.0u161|1.8.0u190]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.jar</jvm-options>
3) Remove the [1.8.0u191|1.8.0u500] part from the remaining jvm-options entry and edit the line in your domain.xml (w.r.t. java -version) as shown below:
<jvm-options>-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.1.jar</jvm-options>
4) Restart your server.
As Radkovo said, "The Payara installation path should not contain spaces (e.g. Program Files\Payara)", so I moved the Payara to the Documents folder.
Problem solved!

Ambari-Update failed (Ambari 2.4 to 2.6) - Hadoop services won't start anymore

I just worked through this Upgrade guide for the Hortonworks Data Platform:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-upgrade/bk_ambari-upgrade.pdf
I did all the steps as described in sections 1 - 4 (Ambari upgrade). But now I have the problem that my services won't start anymore!
Ambari can find all hosts, but they won't start!
E.g. when starting HDFS I got the following error message:
2017-11-13 19:41:11,427 - Unable to load available packages
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 771, in load_available_packages
self.available_packages_in_repos = pkg_provider.get_available_packages_in_repos(repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 85, in get_available_packages_in_repos
available_packages.extend(self._get_available_packages(repo))
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 146, in _get_available_packages
return self._lookup_packages(cmd, 'Available Packages')
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 191, in _lookup_packages
if items[i + 2].find('#') == 0:
IndexError: list index out of range
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 73, in <module>
HdfsClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 930, in restart
self.install(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 35, in install
import params
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params.py", line 25, in <module>
from params_linux import *
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py", line 391, in <module>
lzo_packages = get_lzo_packages(stack_version_unformatted)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_lzo_packages.py", line 45, in get_lzo_packages
lzo_packages += [script_instance.format_package_name("hadooplzo_${stack_version}"),
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 538, in format_package_name
raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hadooplzo_${stack_version}. Available packages: []
I think the most important part is the message resource_management.core.exceptions.Fail: Cannot match package for regexp name hadooplzo_${stack_version}. Available packages: [], which looks like there is no matching package (version) available.
I just saw that I also upgraded the Ambari Metrics Monitor, Ambari Metrics Hadoop Sink and the Metrics Collector before starting the services once (the manual is a little bit confusing here, see step 4.3.3). Was this a mistake?
I tried to upgrade from Ambari 2.4 to Ambari 2.6 (HDP 2.5 installed). The operating system is CentOS 7.
However, I need to reset/downgrade Ambari or upgrade the services to be able to start them again. Can someone help? Any help would be appreciated! Thank you!
Finally I was able to downgrade my Ambari installation back to version 2.4.2, as it was before starting the upgrade process.
To perform the downgrade, you have to do the following steps on the appropriate nodes (a quick verification check follows the commands):
# delete the new ambari repo file
rm /etc/yum.repos.d/ambari.repo
# download the old ambari repo file (for me version 2.4.2), as described in the Ambari installation guide (here for CentOS 7)
wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
yum clean all
yum repolist
# check for the correct version (e.g. 2.4.2) of the Ambari repo
# Downgrade all components to this version
yum downgrade ambari-metrics-monitor
yum downgrade ambari-metrics-hadoop-sink
yum downgrade ambari-agent
...
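As an optional sanity check once those downgrade commands have run (an illustrative rpm query, not part of the official procedure), you can confirm the installed ambari-* packages are back on 2.4.2:

# Illustrative check: list installed ambari-* packages and their versions.
import subprocess

out = subprocess.check_output(
    ["rpm", "-qa", "--queryformat", "%{NAME} %{VERSION}\n", "ambari-*"]
)
for line in sorted(out.decode().splitlines()):
    print(line)   # e.g. "ambari-agent 2.4.2.0"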
Afterwards I upgraded again, but to Ambari version 2.5.2.0, which worked without a problem. I was also able to upgrade my HDP installation to version 2.6.3.0 via this Ambari version.
I will skip Ambari 2.6.0 and will try to upgrade Ambari with a later release.

Execution error : file 'rm1p0018' error code: 114, pc=0, call=1, seg=0 114 Attempt to access item beyond bounds of memory (Signal 11)

When I run a script on an HP-UX server, I get the error below. The script calls an executable file (rm1p0018) which is built from a COBOL source file.
Here is the error message from the log:
Execution error : file 'rm1p0018'
error code: 114, pc=0, call=1, seg=0
114 Attempt to access item beyond bounds of memory (Signal 11)
HP/MF COBOL Version: B.13.50
HP-UX df2hp405 B.11.11 U 9000/800
pid: 12766 gid: 20 uid: 9831
Wed Aug 8 08:52:19 2012
8:52am up 2 days, 11:04, 4 users, load average: 0.01, 0.01, 0.01
Thread mode: No Threads
RTS Error: COBOL
Sync Signals: COBOL
ASync Signals: COBOL
cobtidy on exception: False
Recently the Oracle database was migrated from an HP-UX to an AIX server and upgraded from 9i to 10g.
Initially the application and the DB both resided on the HP-UX server, but now the application resides on HP-UX and the DB resides on the AIX server.
Can someone help me out with this issue?
It is a little bit hard to guess the cause of this memory violation.
If you are able to recompile the COBOL program, I advise you to trace it using the "ready trace." statement along with the $set trace directive.
You will then get a trace of all COBOL paragraphs executed and can deduce the one where the program stopped and raised the error.

Help deciphering this Fatal Error (Java)

A fatal error has been detected by the Java Runtime Environment:
EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x762a76d0, pid=4072, tid=2984
JRE version: 6.0_26-b03
Java VM: Java HotSpot(TM) Client VM (20.1-b02 mixed mode, sharing windows-x86 )
Problematic frame:
C [ole32.dll+0x376d0]
Error Log: http://pastebin.com/zpBst6W1
Line 3 says your program failed in the AWT-EventQueue-0 thread, also called the event dispatch thread (EDT). The execution stack trace starts in line 111, and builds up. At line 47, in the package sun.awt.windows and class WComponentPeer, the method addNativeDropTarget() attempted to call into the shared library, ole32.dll, failing at entry CoUnmarshalInterface. The rest describes the processor state at the time.
This can happen if the shared library is not the one expected by the Java Runtime Environment (JRE). You may need to check your installation.
This is a known bug in Java 6 that is still open:
http://bugs.java.com/bugdatabase/view_bug.do;jsessionid=e5f1f1011daf96ffffffffdd154dd2e731150?bug_id=6967456
https://bugs.openjdk.java.net/browse/JDK-6967456?page=com.atlassian.jira.plugin.system.issuetabpanels:changehistory-tabpanel

Resources