Elastalert creating index not working - elasticsearch

I'm installing elastalert in my local installation of ELK. When I run the command 'elastalert-create-index' I get this error message:
Traceback (most recent call last):
File "C:\Python27\Scripts\elastalert-create-index-script.py", line 11, in <module>
load_entry_point('elastalert==0.1.8', 'console_scripts', 'elastalert-create-index')()
File "C:\Python27\Scripts\elastalert\create_index.py", line 83, in main
profile_name=args.profile)
File "C:\Python27\Scripts\elastalert\auth.py", line 24, in __call__
aws_access_key=credentials.access_key,
AttributeError: 'NoneType' object has no attribute 'access_key'
Any idea?

Had the same issue; fixed it by editing config.yaml, setting the host/port there and uncommenting es_username & es_password.
Our ES instance is local and not password protected, yet it still worked with the default username & password (I guess they are ignored).
Not a fix, but a workaround. The relevant part of the config looks roughly like the fragment below.
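A hypothetical config.yaml fragment (host, port, and the credential values are placeholders; an unsecured local cluster simply ignores the username/password):
# config.yaml (fragment, placeholder values)
es_host: localhost
es_port: 9200
es_username: elastic
es_password: changeme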

This could occur for a number of reasons:
Elasticsearch might not be running. If not, start Elasticsearch.
The elastalert version might not be compatible with Elasticsearch. Use an elastalert release that supports your Elasticsearch major version; for example, an Elasticsearch 5.x cluster needs an elastalert release with 5.x support.
Ensure that the host and port specified in config.yaml are correct.
Check that the access key is specified properly (see the sketch below for why a missing key fails this way).
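The AttributeError in the question comes from the AWS-auth code path. A minimal sketch of that failure mode, assuming boto3 resolves the credentials (elastalert's auth.py does something similar):
# Sketch of the failure mode, assuming boto3 resolves AWS credentials.
import boto3

session = boto3.Session(profile_name=None)
credentials = session.get_credentials()
# With no AWS credentials configured, get_credentials() returns None,
# so credentials.access_key raises:
# AttributeError: 'NoneType' object has no attribute 'access_key'
if credentials is None:
    print('No AWS credentials found; point es_host at your cluster instead')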

Related

Does elastalert work with ElasticSearch 6

I have an elastalert Docker image (https://hub.docker.com/r/ivankrizsan/elastalert/) that worked with Elasticsearch 5.6. After switching to a test environment with Elasticsearch 6.1 (no existing index), I now get:
Creating Elastalert index in Elasticsearch...
Traceback (most recent call last):
File "/usr/bin/elastalert-create-index", line 11, in <module>
load_entry_point('elastalert', 'console_scripts', 'elastalert-create-index')()
File "/opt/elastalert/elastalert/create_index.py", line 153, in main
es.indices.put_mapping(index=index, doc_type='elastalert', body=es_mapping)
File "build/bdist.linux-x86_64/egg/elasticsearch/client/utils.py", line 73, in _wrapped
File "build/bdist.linux-x86_64/egg/elasticsearch/client/indices.py", line 282, in put_mapping
File "build/bdist.linux-x86_64/egg/elasticsearch/transport.py", line 312, in perform_request
File "build/bdist.linux-x86_64/egg/elasticsearch/connection/http_requests.py", line 90, in perform_request
File "build/bdist.linux-x86_64/egg/elasticsearch/connection/base.py", line 125, in _raise_error
elasticsearch.exceptions.RequestError: TransportError(400, u'mapper_parsing_exception', u'No handler for type [string] declared on field [aggregate_id]')
As of now, elastalert does not support Elasticsearch 6.0 out of the box. The open GitHub issue tracking this is https://github.com/Yelp/elastalert/issues/1399, and a workaround is mentioned in https://github.com/Yelp/elastalert/pull/1426.
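The root cause is that Elasticsearch 6 removed the legacy string mapping type, which is why put_mapping fails on aggregate_id. A simplified, hypothetical sketch of the kind of mapping change the workaround makes (index name and host are assumptions):
# Hypothetical sketch: map fields as 'keyword'/'text' instead of the
# removed 'string' type before calling put_mapping on ES 6.
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://localhost:9200'])  # assumed host
es_mapping = {
    'elastalert': {
        'properties': {
            'aggregate_id': {'type': 'keyword'},  # was {'type': 'string'}
        }
    }
}
es.indices.put_mapping(index='elastalert_status', doc_type='elastalert',
                       body=es_mapping)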
Please upgrade to the latest version of elastalert.
I am using Elasticsearch 6.2 with ElastAlert 0.1.29 and they work properly.
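If you installed via pip, upgrading is one command (assuming a pip-based install):
pip install --upgrade elastalert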

sctp_core_destroy(): SCTP API not initialized in kamailio start

Hi, I have installed Kamailio. It starts the first time, but when I stop and start it again it gives sctp_core_destroy(): SCTP API not initialized. I have already installed the SCTP module.
yyerror_at(): parse error in config file /etc/kamailio/kamailio.cfg
load_module(): could not find module <db_mysql> in </usr/lib/kamailio/modules>
[sctp_core.c:53]: sctp_core_destroy(): SCTP API not initialized
From the log it is clear that you have successfully compiled and installed the SCTP module; however, it could NOT be initialized.
Note that this error is, more often than not, the result of other errors in your cfg file.
A few tips:
Run kamailio -c to be sure there are NO errors in your cfg.
Found an error? Use this to pin down the exact issue: from a different terminal, run tail -fn200 /var/log/syslog
In the second terminal, try restarting your Kamailio server: sudo service kamailio restart
Go back to terminal 1 and look for the first line with CRITICAL output, like the one below: CRITICAL: <core> [core/cfg.y:3413]: yyerror_at(): parse error in config file /usr/local/etc/kamailio/kamailio.cfg, line 366, column 41: syntax error
Line 366 is most likely the issue, so open the file at that line (366) to fix the problem:
sudo nano +366 /usr/local/etc/kamailio/kamailio.cfg
Let me know if this helps. A minimal cfg fragment is sketched after this answer for reference.
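For reference, a minimal kamailio.cfg fragment (paths are hypothetical) addressing the two load errors in the log: mpath must point at the directory that actually contains the compiled modules, and the sctp module must be loaded before the core can initialize the SCTP API.
# Hypothetical kamailio.cfg fragment
mpath="/usr/lib/kamailio/modules/"   # directory that really contains the .so files
loadmodule "db_mysql.so"             # fixes: could not find module <db_mysql>
loadmodule "sctp.so"                 # provides the SCTP API the core initializes
enable_sctp=1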

graphlab create: unable to start cluster in aws

At the moment I'm trying to create a cluster in AWS EC2 with GraphLab Create. The code is as follows:
import graphlab as gl

ec2config = gl.deploy.Ec2Config(region='us-west-2', instance_type='m3.large',
                                aws_access_key_id='secret-acces-key-id',
                                aws_secret_access_key='secret-access-key')
ec2 = gl.deploy.ec2_cluster.create(name='Test Cluster',
                                   s3_path='s3://test-big-data-2016',
                                   ec2_config=ec2config,
                                   idle_shutdown_timeout=3600,
                                   num_hosts=1)
When the above code is executed I get the following error:
Traceback (most recent call last):
File "test.py", line 59, in
ec2 = gl.deploy.ec2_cluster.create(name='Test Cluster', s3_path='s3://test-big-data-2016', ec2_config=ec2config, idle_shutdown_timeout=36000, num_hosts=1)
File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/ec2_cluster.py", line 83, in create
cluster.start()
File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/ec2_cluster.py", line 233, in start
self.idle_shutdown_timeout
File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/_executionenvironment.py", line 372, in _start_commander_host
raise RuntimeError('Unable to start host(s). Please terminate '
RuntimeError: Unable to start host(s). Please terminate manually from the AWS console.
When I look in the EC2 Management Console, a new instance is launched and running, but I still get the error in the terminal.
I really don't know what I'm doing wrong here. I followed the exact instructions from https://turi.com/learn/userguide/deployment/pipeline-example.html

HAWQ stop cluster failed

I installed HAWQ from source code. After initializing and starting HAWQ cluster, I tried to stop it with "hawq stop cluster". However, it failed.
The error shows:
[hadoop@Master ~]$ hawq stop cluster
20161217:19:59:31:004594 hawq_stop:Master:hadoop-[INFO]:-Prepare to do 'hawq stop'
20161217:19:59:31:004594 hawq_stop:Master:hadoop-[INFO]:-You can check log in /home/hadoop/hawqAdminLogs/hawq_stop_20161217.log
20161217:19:59:31:004594 hawq_stop:Master:hadoop-[INFO]:-Stop hawq with args: ['stop', 'cluster']
Continue with HAWQ service stop Yy|Nn (default=N):
20161217:19:59:38:004594 hawq_stop:Master:hadoop-[INFO]:-No standby host configured
20161217:19:59:38:004594 hawq_stop:Master:hadoop-[INFO]:-Stop hawq cluster
Traceback (most recent call last):
File "/home/hadoop/hawq/bin/hawq_ctl", line 1276, in <module>
stop_hawq(opts, hawq_dict)
File "/home/hadoop/hawq/bin/hawq_ctl", line 1043, in stop_hawq
instance.run()
File "/home/hadoop/hawq/bin/hawq_ctl", line 891, in run
check_return_code(self._stopAll())
File "/home/hadoop/hawq/bin/hawq_ctl", line 816, in _stopAll
master_result = self._stop_master()
File "/home/hadoop/hawq/bin/hawq_ctl", line 760, in _stop_master
self._stop_master_checks()
File "/home/hadoop/hawq/bin/hawq_ctl", line 712, in _stop_master_checks
self.conn = dbconn.connect(self.dburl, utility=True)
File "/home/hadoop/hawq/lib/python/gppylib/db/dbconn.py", line 211, in connect
cnx = pgdb._connect_(cstr, dbhost, dbport, dbopt, dbtty, dbuser, dbpasswd)
AttributeError: 'module' object has no attribute '_connect_'
At present, I use an alternative way to stop the cluster: stopping the master and segments separately with pg_ctl.
pg_ctl stop -D <master_data_dir>/<segment_data_dir>
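For example (data directories are hypothetical; -m fast aborts active connections instead of waiting for them):
pg_ctl stop -D /data/hawq/master -m fast
pg_ctl stop -D /data/hawq/segment -m fast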
Anything about this error is helpful. Thanks!
This happens because a plain 'pip install pygresql' installs the latest version (5.0.3) of pygresql. In the traceback above, pgdb._connect_() is the old 4.2.2 routine; in 5.0.3 it is pgdb._connect().
The solution is to pin the old version:
pip install pygresql==4.2.2
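A quick way to confirm which API your installed PyGreSQL exposes (a hypothetical check built from the names in the traceback):
import pgdb

# hawq_ctl calls the private pgdb._connect_(), which exists in
# PyGreSQL 4.x but was renamed in 5.x, hence the AttributeError.
print(hasattr(pgdb, '_connect_'))  # expect True once 4.2.2 is installed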
Before stopping the cluster, unless it is a '-M immediate' stop, hawq connects to the database to check for running connections.
From your log, the connection to the master node fails due to a Python module issue: it seems the pygresql module is not installed properly. Please try reinstalling it.

nginx + python app -- how to enable error logging/stack trace

I have a Flask app running on nginx + uWSGI.
On my local server (non-nginx), I get a nice stack trace + error reporting for exceptions.
Like this:
$ python run.py
Traceback (most recent call last):
File "run.py", line 1, in <module>
from myappname import app
File "/home/me/myappname/myappname/__init__.py", line 27, in <module>
file_handler.setLevel(logging.debug)
File "/usr/lib/python2.7/logging/__init__.py", line 710, in setLevel
self.level = _checkLevel(level)
File "/usr/lib/python2.7/logging/__init__.py", line 190, in _checkLevel
raise TypeError("Level not an integer or a valid string: %r" % level)
On nginx, there is next to no logging whatsoever (in /var/log/nginx/error.log).
This post suggests adding app.logger.exception('Failed') to my script, which didn't help.
How do I enable this sort of logging for debugging purposes?
Nginx will capture your app's console output, but you must make the app recover from exceptions; otherwise you'll only get 500 or 400 errors from Nginx.
Try running the app outside Nginx until it seems stable.
Use the logging module to capture app status information in your own log file. This strategy will be useful in the long run; a minimal sketch follows.
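A minimal sketch of file-based logging inside the Flask app (the log path is a placeholder; note that setLevel takes the constant logging.DEBUG, not the function logging.debug, which is exactly what the traceback above complains about):
import logging
from logging.handlers import RotatingFileHandler

from flask import Flask

app = Flask(__name__)

# Log to our own file so errors survive even when uWSGI/nginx logs are quiet.
file_handler = RotatingFileHandler('/tmp/myappname.log',
                                   maxBytes=1000000, backupCount=3)
file_handler.setLevel(logging.DEBUG)  # the constant, not logging.debug
file_handler.setFormatter(logging.Formatter(
    '%(asctime)s %(levelname)s %(message)s'))
app.logger.addHandler(file_handler)
app.logger.setLevel(logging.DEBUG)

@app.route('/boom')
def boom():
    try:
        1 / 0  # simulate a failure
    except ZeroDivisionError:
        # logger.exception records the full stack trace in the log file
        app.logger.exception('Unhandled error in /boom')
        raise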
