spark-jobserver: Missing settings.sh, exiting

I am trying to run ./server_start.sh with spark-jobserver, but it fails with:
"Missing /home/spark/spark-jobserver1.5.1/bin/settings.sh, exiting"
I also checked ./server_start.sh on GitHub, where I found the check that prints this message (screenshot omitted).
It seems that settings.sh should exist, but it does not.

You need the Spark binaries to be installed on your machine, and you need to export SPARK_HOME. See local.sh.template for an example.
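A minimal sketch of what that looks like, assuming a standard spark-jobserver checkout (the Spark path and the copy target are examples, not verified defaults):
export SPARK_HOME=/usr/local/spark            # wherever your Spark binaries live
cp config/local.sh.template config/local.sh   # then set SPARK_HOME inside the copy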

Server agent not opening in JMeter

I installed ServerAgent 2.2.3.
When I run it in cmd I get an error (screenshot omitted; it says the slf4j library cannot be found).
I tried starting startAgent.bat, but it closes automatically.
Thanks for looking.
That's very weird, because the error states that the slf4j library cannot be found on the CLASSPATH, and ServerAgent doesn't use that library at all.
Try downloading it from GitHub and unpacking it somewhere else. Also, if you have a CLASSPATH environment variable set, try clearing/unsetting it:
set CLASSPATH= && startAgent.bat
More information: How to Monitor Your Server Health & Performance During a JMeter Load Test
Alternatively, you can try downloading slf4j.jar and dropping it next to ServerAgent.jar, but that is not part of the normal ServerAgent installation procedure.

Can't pass path of my ES config file from the command line

Maybe I am thick, but I can't seem to find a way to pass ES a config file path from the command line. I have been searching and reading for 45 mins now (including several posts on Stack Overflow), and none of the proposed solutions works.
Here are the ones I tried:
elasticsearch -Des.config=/path/to/my/elasticsearch.yml
==> ERROR: D is not a recognized option
elasticsearch -Ees.config=/path/to/my/elasticsearch.yml
==> org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [es.config] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
elasticsearch -Econfig=/path/to/my/config.yml
==> org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [config] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
elasticsearch -Epath.conf=/path/to/config/dir/with/elasticsearch.yml
==> No exception, but the program terminates without any output whatsoever (no error message). Since I didn't specify the -d option, I assume it isn't running as a daemon, and that therefore the ES server isn't running at the end of all this.
Can anyone pull me out of the mud here?
Thx.
I too struggled with this issue and tried the same sort of commands you did. The problem is caused by the version of Elasticsearch.
If your version is 5.0.0 or above, then as per this, none of the above commands will work; it also looks like they have limited which parameters can be passed from the command line.
The easiest way is to just cd to the directory where you installed Elasticsearch and run ./bin/elasticsearch (make sure you don't execute it as root; it refuses to run as root).
The issue here is that with every new version of ES, some older functionality gets removed or updated, which is frustrating. I'm currently working with Elasticsearch v6.4.0, and as of now this works.
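If you do need a custom config location on a recent version: 6.x reads the config directory from an environment variable rather than a -E flag. A minimal sketch (the path is an example):
ES_PATH_CONF=/path/to/config/dir ./bin/elasticsearch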

Install WebLogic in console mode without Xming

I'm trying to install WebLogic Server on CentOS 7, following Oracle's instructions for console mode. Everything goes fine until the WebLogic file is extracted on my machine, at which point I get a message about:
display environment variable failed
I googled it and found Xming as a solution, but is there any way to install WebLogic without Xming?
You need to do a silent install, as mentioned. You can find the documentation here.
Basically, you need two files (see the example files sketched below):
A response file
Here you will set some parameters such as ORACLE_HOME, proxy information if needed, the installation type, etc.
An oraInst.loc file
In this file, you need to do the following (from the documentation):
Replace oui_inventory_directory with the full path to the directory where you want the installer to create the inventory directory. Then, replace oui_install_group with the name of the group whose members have write permissions to this directory.
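For illustration, a minimal pair of files might look like this (all paths, group names, and the install type are example values, not defaults):
## oraInst.loc -- example values, adjust to your environment
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
## response file (e.g. wls.rsp) -- a minimal sketch of the generic format
[ENGINE]
Response File Version=1.0.0.0.0
[GENERIC]
ORACLE_HOME=/u01/app/oracle/middleware
INSTALL_TYPE=WebLogic Server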
After doing all of this, you can run the command as follows:
java -jar distribution_name.jar -silent -responseFile file [-options]
I uploaded my own oraInst.loc and response files here for demonstration. I strongly suggest you read the documentation, though. Good luck.

Starting Cassandra on Windows 7: "The system cannot find the path specified"

I have tried to get Cassandra working on Windows 7. I followed the instructions from:
http://php-cms-job.blogspot.de/2012/09/how-to-install-cassandra-and-configure.html
I have double-checked the steps of creating the folders and changing the yaml file, but I always get this message after running cassandra.bat:
Starting Cassandra Server
The system cannot find the path specified
I cannot figure out which path exactly; any tips?
Cassandra will fail to start, with no logs, if the values in
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Apache Software Foundation\Procrun 2.0\cassandra\Parameters\Java are not correct.
For example, if the Jvm REG_SZ value does not point to the correct path for jvm.dll, you will get a System event log error with
"The cassandra service terminated with the following service-specific error: Incorrect function".
You can find where Java is with: echo %JAVA_HOME%
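As a quick sanity check from a cmd prompt (a sketch; the key and value names follow the default Procrun layout described above):
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Apache Software Foundation\Procrun 2.0\cassandra\Parameters\Java" /v Jvm
echo %JAVA_HOME%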
To any future Googlers: in my case the issue was that Cassandra had created the service using the 32-bit version of prunsrv.exe. To fix it, I manually replaced it with the 64-bit version (found inside the amd64 folder).

Hadoop on OSX "Unable to load realm info from SCDynamicStore"

I am getting this error on startup of Hadoop on OSX 10.7:
Unable to load realm info from SCDynamicStore
put: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/travis/input/conf. Name node is in safe mode.
It doesn't appear to be causing any issues with the functionality of Hadoop.
Matthew Buckett's suggestion in HADOOP-7489 worked for me. Add the following to your hadoop-env.sh file:
export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"
As an update to this (and to address David Williams' point about Java 1.7): I found that setting only the .realm and .kdc properties was insufficient to stop the offending message.
However, by examining the source file that emits the message, I was able to determine that setting the .krb5.conf property to /dev/null was enough to suppress it. Obviously, if you actually have a krb5 configuration, it's better to specify the actual path to it.
In total, my hadoop-env.sh snippet is as follows:
HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf=/dev/null"
I was having the same issue on OS X 10.8.2, Java version 1.7.0_21. Unfortunately, the above solution does not fix the problem with this version :(
Edit: I found the solution to this, based on a hint I saw here. In the hadoop-env.sh file, change the JAVA_HOME setting to:
export JAVA_HOME=`/usr/libexec/java_home -v 1.6`
(Note the backticks here.)
FYI, you can simplify this further by only specifying the following:
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
This is mentioned in HADOOP-7489 as well.
I had a similar problem on macOS, and after trying different combinations this is what worked for me universally (on both Hadoop 1.2 and 2.2):
In $HADOOP_HOME/conf/hadoop-env.sh, set the following lines:
# Set Hadoop-specific environment variables here.
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
# The java implementation to use.
export JAVA_HOME=`/usr/libexec/java_home -v 1.6`
Hope this helps.
Also add
YARN_OPTS="$YARN_OPTS -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"
before executing start-yarn.sh (or start-all.sh) on CDH 4.1.3.
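The answer doesn't say where that line should live; presumably (an assumption on my part) the natural home is yarn-env.sh:
# In $HADOOP_HOME/etc/hadoop/yarn-env.sh (path assumed from the Hadoop 2.x layout)
YARN_OPTS="$YARN_OPTS -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"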
I hit this error when debugging MapReduce from Eclipse, but it was a red herring. The real problem was that I should have been remote debugging, by adding debugging parameters to JAVA_OPTS:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1044
and then creating a new "Remote Java Application" profile in the debug configuration, pointed at port 1044.
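For context, a sketch of how those flags might be wired in before launching the job (treating JAVA_OPTS as a plain environment variable; the port is just the example above):
export JAVA_OPTS="$JAVA_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1044"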
This article has some more in-depth information on the debugging side of things. It's about Solr, but it works much the same with Hadoop. If you have trouble, leave a message below and I'll try to help.
