Connect SonarQube 6.7 to an external Elasticsearch

I've been using SonarQube with its embedded database for demos. Now I need to connect it to an external Elasticsearch instance to meet the requirements of a production environment.
Which configurations do I have to add to elasticsearch.yml and sonar.properties?

For the move to production, you don't need to, and shouldn't try to, connect to an external Elasticsearch instance. SonarQube starts up and manages its own instance internally.
What you do need to do is connect to an external database, and that's easily done by setting the correct properties in $SONARQUBE_HOME/conf/sonar.properties.
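As an example, a minimal sketch of those properties assuming an external PostgreSQL database (the host, database name, and credentials below are placeholders, not defaults):
# $SONARQUBE_HOME/conf/sonar.properties
sonar.jdbc.url=jdbc:postgresql://db.example.com:5432/sonarqube
sonar.jdbc.username=sonar
sonar.jdbc.password=secret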

I succeeded in using an external Elasticsearch with the latest SonarQube 8.9, but it's just a hack; use it at your own risk.
Steps
1. Create an Elasticsearch server
First, start an Elasticsearch instance anywhere.
2. Modify the config files
Point SonarQube at the external instance:
cat >> conf/sonar.properties << EOF
# your external host and port
sonar.search.port=9200
sonar.search.host=192.168.xx.xx
EOF
Then create a dummy run script so SonarQube doesn't launch its embedded Elasticsearch:
cat > elasticsearch/bin/elasticsearch << EOF
#!/bin/bash
# an indefinite sleep: block on stdin so SonarQube believes its embedded Elasticsearch is running
cat
EOF
3. Run SonarQube
Just start SonarQube and view the indexes in your new Elasticsearch.
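To confirm it worked, you can list the indexes on the external instance (reusing the example host from above):
curl 'http://192.168.xx.xx:9200/_cat/indices?v'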

Related

Path settings configuration for Logstash as a Service

I want to process my logs from my database to Kibana via Logstash. Presently I am able to update the logs manually by running:
sudo /usr/share/logstash/bin/logstash -f /data/Logstash_Config/Logstash_Config_File_for_TestReport.conf --pipeline.workers 1 --path.settings "/etc/logstash"
Now I want to automate the process by running Logstash as a service. I understand that placing the path.settings parameter in the config file or another corresponding file should solve the issue, but I am not able to proceed further.
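One common approach, sketched under the assumption of a package install where the service reads its settings from /etc/logstash: set path.config (and any other flags you currently pass manually) in /etc/logstash/logstash.yml and let systemd manage the service.
# /etc/logstash/logstash.yml -- read automatically when Logstash runs as a service
path.config: /data/Logstash_Config/Logstash_Config_File_for_TestReport.conf
pipeline.workers: 1
# then enable and start the service
sudo systemctl enable logstash
sudo systemctl start logstash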

Install Filebeat locally or in the VM?

I recently started learning ELK and I succeeded in parsing my XML files locally. But now I would like to access my server to get all of my XML files (updated every 30 seconds).
I have the IP address of my server, and my question is: should I install Filebeat locally and configure my filebeat.yml to access the server, or should I install Filebeat on the server and point it at my local address?
Filebeat is a shipper that collects, aggregates, and forwards logs to your desired output (Logstash, Elasticsearch, etc.).
It works as an agent, so you need to install it on every node you want to collect logs from. For instance, if you want to collect logs from your local machine, install Filebeat there; if you want to collect logs from the Logstash server itself, install Filebeat there. If you want to collect logs from both, Filebeat needs to be installed on both machines, with Logstash configured as the output.
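As an illustration, a minimal filebeat.yml sketch for the 6.x series, assuming the XML files live under /var/data/xml on the server and Logstash listens on the default Beats port 5044 (both are assumptions to adapt):
filebeat.inputs:
  - type: log
    paths:
      - /var/data/xml/*.xml   # assumed location of the XML files
    scan_frequency: 30s       # re-check for new files every 30 seconds
output.logstash:
  hosts: ["logstash-host:5044"]  # assumed Logstash host and Beats port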
But when I tried to install Filebeat on my server using
curl -L -O https://www.elastic.co/downloads/beats/filebeat/filebeat-6.3.1-amd64.deb
I got this message:
Could not resolve host: www.elastic.co; Name or service not known
The OS version of the server is: Linux version 3.10.0-693.17.1.el7.x86_64

Error at prompt: setting up the Greenplum Command Center web application on CentOS 6.5

We have a small GPDB cluster in which we are trying to set up the Greenplum Command Center web portal.
The environment is:
Product | Version
Pivotal Greenplum (GPDB) | 4.3.x
Pivotal Greenplum Command Center (GPCC) | 2.2
The error occurs at the "Set up the Greenplum Command Center Console" stage. We launched the setup utility:
$ gpcmdr --setup
and get the following error at the prompt:
What is the hostname of the standby master host? [smdw]:sbhostname
standby is sbhostname
Done writing lighttpd configuration to /usr/local/greenplum-cc-web/./instances/gpcc/conf/lighttpd.conf
Done writing web UI configuration to /usr/local/greenplum-cc-web/./instances/gpcc/conf/gpperfmonui.conf
Done writing web UI clustrs configuration to /usr/local/greenplum-cc-web/./instances/gpcc/conf/clusters.conf
Copying instance 'gpcc' to host 'sbhostname'...
ERROR: the instance directory was not successfuly copied to the remote host: '/usr/local/greenplum-cc-web/./instances/gpcc'
You have to reload the configuration with gpstop -u, or restart the database, after the GPCC setup, because the setup adds some entries to pg_hba.conf for gpperfmon.
Also check that you have the correct entries in the .pgpass file in /home/gpadmin.
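As a rough sketch of those checks (the gpmon entries and the password below are illustrative, not your actual values):
# reload the master configuration without a restart
gpstop -u
# pg_hba.conf should now contain gpperfmon entries similar to:
#   local  gpperfmon  gpmon                 md5
#   host   all        gpmon  127.0.0.1/28   md5
# .pgpass in /home/gpadmin uses the standard PostgreSQL format:
#   hostname:port:database:username:password
#   *:5432:gpperfmon:gpmon:changeme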

Elasticsearch is working at port 9200 but Kibana is not working

Hello, I am starting to work with Kibana and Elasticsearch. I am able to run Elasticsearch on port 9200, but Kibana is not running on port 5601; it shows that the page is not available.
Kibana doesn't support spaces in the folder name, and your folder name is
GA Works
Rename the folder to
GA_Works
and Kibana will then run without errors; you will be able to access it at
http://localhost:5601
Have you
a) set elasticsearch_url to point at your Elasticsearch instance in config/kibana.yml?
b) run ./bin/kibana (or bin\kibana.bat on Windows) after setting the above config?
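For reference, a minimal sketch of those settings for Kibana 4.x (the Elasticsearch address is an assumption; adjust it to your instance):
# config/kibana.yml
port: 5601
host: "0.0.0.0"
elasticsearch_url: "http://localhost:9200"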
If you tried all of the above and it still doesn't work, make sure that the Kibana process is actually running. I found that /etc/init.d/kibana4_init doesn't start the process; if that is the case, try /opt/kibana/bin/kibana.
I also made kibana:kibana the owner of the folder and files.
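A sketch of that ownership change, assuming Kibana is installed under /opt/kibana and a kibana user and group exist:
sudo chown -R kibana:kibana /opt/kibana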

ELK - Shield auth problems

I'm trying to set up Shield for Elasticsearch, but have run into some trouble.
When I try to start Elasticsearch like:
/usr/share/elasticsearch/bin/elasticsearch
everything works as expected, but when I try to start/restart Elasticsearch like:
/etc/init.d/elasticsearch start
I get the error described below:
[2015-02-17 21:44:09,662][ERROR][shield.audit.logfile ] [Tusk] [rest] [authentication_failed] origin_address=[/192.168.88.17:58291], principal=[es_admin], uri=[/_aliases?pretty=true]
OS: Ubuntu 12.04
Elasticsearch: 1.4.3
Shield: 1.0.1
Elasticsearch and Shield were running with default settings
If your Elasticsearch configs are not in /usr/share/elasticsearch but, let's say, in /etc/elasticsearch,
then just move /usr/share/elasticsearch/config/shield to /etc/elasticsearch.
If you start Elasticsearch as the user elasticsearch, take care that the new /etc/elasticsearch/shield folder belongs to that user.
If that doesn't fix it, also see
http://www.elasticsearch.org/guide/en/shield/current/getting-started.html#_configuring_your_environment
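A sketch of that move, using the paths from the answer and assuming an elasticsearch user and group:
sudo mv /usr/share/elasticsearch/config/shield /etc/elasticsearch/
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch/shield
sudo /etc/init.d/elasticsearch restart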
The same thing happened to me when I tried to add Shield to our Elasticsearch cluster for auth-based access to the Elasticsearch data.
I was on an Ubuntu 14.04 machine and Elasticsearch was installed using a .deb package from elastic-download-link.
Elasticsearch was using a service startup script from
/etc/init.d/elasticsearch
in which the configuration directory was set as:
# Elasticsearch configuration directory
CONF_DIR=/etc/$NAME
But when I installed the Shield plugin on Elasticsearch from this-link
and tried to add a user to Shield by following the ES docs with this command:
sudo bin/shield/esusers useradd es_admin -r admin
the Shield configuration was being updated in
/usr/share/elasticsearch/config/shield/
but the Elasticsearch server expected the configuration files to be in
/etc/elasticsearch/shield/
Because of this mismatch, the newly added users never reach the configuration Elasticsearch actually reads, which causes the authentication failure.
This can be solved either by moving
/usr/share/elasticsearch/config/shield/
to
/etc/elasticsearch/shield/
or by changing the configuration directory in
/etc/init.d/elasticsearch
to
# Elasticsearch configuration directory
CONF_DIR=/usr/share/elasticsearch/config/
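Either way, once the users file is in the directory Elasticsearch actually reads, you can verify authentication against the same endpoint from the error log (curl prompts for the password you set with esusers; localhost:9200 is assumed):
curl -u es_admin 'http://localhost:9200/_aliases?pretty=true'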
