ImportError: No module named apache when using Grinder Data Analyzer

I'm using Grinder 3.11 and tried Grinder Data Analyzer for better graph output. My Grinder Data Analyzer version is GrinderAnalyzer.V2.b19.
When I try to execute this:
jython ./analyzer.py "<grinder data file(s)>" <grinder mapping file> [number of agents]
I get this error:
File "./analyzer.py", line 34, in <module>
from org.apache.log4j import *
ImportError: No module named apache
I have no idea what is really happening. Can somebody help me?
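In Jython, from org.apache.log4j import * only resolves when the log4j jar is on the classpath, so this error usually means Jython cannot see log4j. Below is a rough sketch of one way to check for and add the jar before the failing import, assuming GrinderAnalyzer ships log4j in a lib directory next to analyzer.py (the directory and jar name are guesses for your install):

# Hypothetical check placed near the top of analyzer.py; the lib path and jar name are assumptions.
import os
import sys

log4j_jar = os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib", "log4j.jar")
if os.path.exists(log4j_jar):
    sys.path.append(log4j_jar)  # Jython can import Java packages from jars on sys.path

from org.apache.log4j import Logger  # should now resolve if the jar was found

Alternatively, exporting CLASSPATH so it includes GrinderAnalyzer's lib/*.jar files before running jython should have the same effect.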

Related

Is it possible to read a file using a SparkSession object in Scala on Windows?

I've been trying to read from a .csv file in many ways, using a SparkContext object. I found it possible through the scala.io.Source.fromFile function, but I want to use the spark object. Every time I run the textFile function of org.apache.spark.SparkContext I get the same error:
scala> sparkSession.read.csv("file://C:\\Users\\184229\\Desktop\\bigdata.csv")
21/12/29 16:47:32 WARN streaming.FileStreamSink: Error while looking for metadata directory.
java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
.....
As mentioned in the title, I run the code on Windows in IntelliJ.
[Edit]
My build.sbt has no redundant or overlapping dependencies. I use hadoop-tools, spark-sql and hadoop-xz.
Have you tried running your spark-shell in local mode?
spark-shell --master=local
Also be careful not to use both hadoop-core and hadoop-common as dependencies, since you may run into conflicting-jar issues.
I've found the solution; to be precise, one of my colleagues did.
In the build.sbt dependencies I changed hadoop-tools to hadoop-common and it worked out.
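For what it's worth, here is a minimal local-mode sketch of the same read, written in PySpark rather than the Scala shell from the question; note the forward-slash file URI, which tends to be the safer form on Windows (the path is reused from the question and may differ on your machine):

# Minimal PySpark sketch (assumed equivalent of the Scala read); path reused from the question.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")          # local mode, as suggested above
         .appName("csv-read-sketch")
         .getOrCreate())

# Forward-slash file URI instead of file://C:\...
df = spark.read.option("header", "true").csv("file:///C:/Users/184229/Desktop/bigdata.csv")
df.show(5)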

Data Explorer: ImportError No module named Kqlmagic

I'm following this tutorial:
https://learn.microsoft.com/en-us/azure/data-explorer/kqlmagic
I have a Databricks cluster, so I decided to use the notebook available there.
When I get to step 2 and run:
reload_ext Kqlmagic
I get the error message:
ImportError: No module named Kqlmagic
Kqlmagic doesn't work with Databricks notebooks. It might be supported in a future version.
Instead of steps 1 and 2, please try running:
%load_ext Kqlmagic
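For comparison, in a plain Jupyter notebook (not Databricks) the tutorial's flow looks roughly like the sketch below; the 'help' cluster and 'Samples' database are the public samples used in the tutorial, so adjust them for your own environment:

# Rough sketch of the linked tutorial's steps in a standard Jupyter notebook.
!pip install Kqlmagic --no-cache-dir --upgrade
%reload_ext Kqlmagic
# Connect to the public 'help' cluster / 'Samples' database from the tutorial.
%kql azureDataExplorer://code;cluster='help';database='Samples'
%kql StormEvents | take 5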

How do I connect to a Netcool / Omnibus “Object Server” using JayDeBeApi module along with SAP Sybase JDBC drivers (jconn4.jar) in Python3?

I am new to Python programming. I'm trying to connect to a Netcool Object Server using Python 3, with the JayDeBeApi module and the SAP Sybase JDBC driver (jconn4.jar).
The following is the sample script:
import jaydebeapi
server = "xxx"
database = "xx"
user = "xx"
password = "xx"
jclassname = 'com.sybase.jdbc4.jdbc.SybDriver'
url = 'jdbc:sybase:Tds://' + server + '/' + database
driver_args = [url, user, password]
jars = "path/jconn4.jar"
conn = jaydebeapi.connect(jclassname, driver_args, jars)
curs = conn.cursor()
curs.execute("select * from status")
curs.fetchall()
When I execute the script it shows the following error:
File "sample.py", line 12, in <module>
conn=jaydebeapi.connect(jclassname,driver_args,jars)
File "/usr/local/lib/python3.5/site-packages/jaydebeapi/__init__.py", line 381, in connect
jconn = _jdbc_connect(jclassname, url, driver_args, jars, libs)
File "/usr/local/lib/python3.5/site-packages/jaydebeapi/__init__.py", line 199, in _jdbc_connect_jpype
return jpype.java.sql.DriverManager.getConnection(url, *dargs)
RuntimeError: No matching overloads found. at native/common/jp_method.cpp:117
Has anyone successfully connected to a Netcool Object Server using the JayDeBeApi module in Python 3? Please share a sample script.
Thanks.
The URL format you specified is not correct. The format below works for me:
url = jdbc:sybase:Tds:<hostname>:<dbport>/<dbname>
e.g.
conn = jaydebeapi.connect('com.sybase.jdbc4.jdbc.SybDriver', ['jdbc:sybase:Tds:hostA:8888/db1','root',''],['path/jconn4.jar'])
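Putting that together with the script from the question, a minimal sketch using the host:port URL form might look like the following; the hostname, port, database and credentials are placeholders taken from the example above:

# Sketch only: hostname, port, database and credentials below are placeholders.
import jaydebeapi

server = "hostA"
port = "8888"
database = "db1"
user = "root"
password = ""

# host:port form, with no "//" after Tds:
url = "jdbc:sybase:Tds:" + server + ":" + port + "/" + database

# Some JayDeBeApi versions expect the URL as a separate second argument instead of
# inside the driver-args list; this matches the call form shown in the answer above.
conn = jaydebeapi.connect("com.sybase.jdbc4.jdbc.SybDriver",
                          [url, user, password],
                          ["path/jconn4.jar"])
curs = conn.cursor()
curs.execute("select * from status")
print(curs.fetchall())
curs.close()
conn.close()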

Elastalert creating index not working

I'm installing ElastAlert in my local installation of ELK. When I run the command 'elastalert-create-index' I get this error message:
Traceback (most recent call last):
File "C:\Python27\Scripts\elastalert-create-index-script.py", line 11, in <module>
load_entry_point('elastalert==0.1.8', 'console_scripts', 'elastalert-create-index')()
File "C:\Python27\Scripts\elastalert\create_index.py", line 83, in main
profile_name=args.profile)
File "C:\Python27\Scripts\elastalert\auth.py", line 24, in __call__
aws_access_key=credentials.access_key,
AttributeError: 'NoneType' object has no attribute 'access_key'
Any idea?
I had the same issue; I fixed it by editing config.yaml, setting the host/port there and uncommenting es_username & es_password.
Our ES instance is local and not password protected; it still worked with the default username & password (it ignores them, I guess).
Not a fix, but a workaround.
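For reference, the relevant fragment of config.yaml along those lines might look like the sketch below; the values are placeholders for a local, unsecured Elasticsearch on port 9200:

# Fragment of config.yaml (placeholder values)
es_host: localhost
es_port: 9200
# Uncommented even though ES has no auth; the defaults appeared to be ignored.
es_username: someusername
es_password: somepassword
writeback_index: elastalert_status

Presumably providing es_username and es_password keeps elastalert on the basic-auth path instead of the AWS credential lookup that returned None in the traceback above.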
This could occur for a number of possible reasons.
Elasticsearch might not be running. If not, start Elasticsearch.
The elastalert version might not be compatible with Elasticsearch. The versions of elastalert and Elasticsearch should be the same, or at least close. For example, if the Elasticsearch version is 5.0.0, then the elastalert version should also be 5.0.0 or something close to it.
Ensure that the host and port specified in config.yaml are correct.
Check that the access key is specified properly.

Pig UDF running on AWS EMR with java.lang.NoClassDefFoundError: org/apache/pig/LoadFunc

I am developing an application that tries to read log files stored in S3 buckets and parse them using Elastic MapReduce. Currently the log file has the following format:
-------------------------------
COLOR=Black
Date=1349719200
PID=23898
Program=Java
EOE
-------------------------------
COLOR=White
Date=1349719234
PID=23828
Program=Python
EOE
So I try to load the file in my Pig script, but the built-in Pig loaders don't seem able to load my data, so I have to create my own UDF. Since I am pretty new to Pig and Hadoop, I want to try a script written by others before writing my own, just to get a taste of how UDFs work. I found one here, http://pig.apache.org/docs/r0.10.0/udf.html, which has a SimpleTextLoader. In order to compile this SimpleTextLoader, I have to add a few imports:
import java.io.IOException;
import java.util.ArrayList;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit;
import org.apache.pig.backend.executionengine.ExecException;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import org.apache.pig.data.DataByteArray;
import org.apache.pig.PigException;
import org.apache.pig.LoadFunc;
Then I found out I need to compile this file. I had to install Subversion, check out Pig, and build it by running:
sudo apt-get install subversion
svn co http://svn.apache.org/repos/asf/pig/trunk
ant
Now I have a pig.jar file, and I try to compile and package the loader:
javac -cp ./trunk/pig.jar SimpleTextLoader.java
jar -cf SimpleTextLoader.jar SimpleTextLoader.class
It compiles successfully, and I start Pig to get to the grunt shell; in grunt I try to load the file using:
grunt> register file:/home/hadoop/myudfs.jar
grunt> raw = LOAD 's3://mys3bucket/samplelogs/applog.log' USING myudfs.SimpleTextLoader('=') AS (key:chararray, value:chararray);
2012-12-05 00:08:26,737 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. org/apache/pig/LoadFunc Details at logfile: /home/hadoop/pig_1354666051892.log
Inside pig_1354666051892.log, it has:
Pig Stack Trace
---------------
ERROR 2998: Unhandled internal error. org/apache/pig/LoadFunc
java.lang.NoClassDefFoundError: org/apache/pig/LoadFunc
I also tried another UDF (UPPER.java) from http://wiki.apache.org/pig/UDFManual, and I still get the same error when trying to use the UPPER method. Can you please help me out with what the problem is here? Many thanks!
UPDATE: I did try EMR's built-in pig.jar at /home/hadoop/lib/pig/pig.jar and got the same problem.
Put the UDF jar in the /home/hadoop/lib/pig directory or copy the pig-*-amzn.jar file to /home/hadoop/lib and it will work.
You would probably use a bootstrap action to do either of these.
Most Hadoop ecosystem tools, like Pig and Hive, look in $HADOOP_HOME/conf/hadoop-env.sh for environment variables.
I was able to resolve this issue by adding pig-0.13.0-h1.jar (it contains all the classes required by the UDF) to the HADOOP_CLASSPATH:
export HADOOP_CLASSPATH=/home/hadoop/pig-0.13.0/pig-0.13.0-h1.jar:$HADOOP_CLASSPATH
pig-0.13.0-h1.jar is available in the Pig home directory.
