Running Netflix Conductor with standalone Elasticsearch?

How do you configure Netflix Conductor to use a standalone Elasticsearch rather than the embedded Elasticsearch?

If you have a conductor-config.properties, just make sure these properties point to the Elasticsearch instance you have up and running:
workflow.elasticsearch.instanceType=EXTERNAL
workflow.elasticsearch.url=http://elasticsearch:9200
Then you should be able to bring Conductor up with that config:
java -jar conductor-server-2.15.0-SNAPSHOT-all.jar conductor-config.properties
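For context, a fuller conductor-config.properties for a quick local setup might look like this (a minimal sketch; db=memory and the index name are assumptions, adjust them to your environment):
db=memory
workflow.elasticsearch.instanceType=EXTERNAL
workflow.elasticsearch.url=http://elasticsearch:9200
workflow.elasticsearch.index.name=conductor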
You can inspect this example, swapping the Elasticsearch container for your own and modifying the conductor-config.properties, which is copied into the container when you run the stack:
check out https://github.com/s50600822/conductor-cheat
inside the repo, just do
docker-compose up
See https://github.com/Netflix/conductor/blob/master/es5-persistence/src/main/java/com/netflix/conductor/dao/es5/index/ElasticSearchRestDAOV5.java for other options.
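If you would rather wire this up yourself, a minimal docker-compose.yml sketch could look like the following (the image tag and the server build are assumptions; note that the elasticsearch service name matches the URL in the config above):
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
    environment:
      # single-node dev setup, security off
      - discovery.type=single-node
      - xpack.security.enabled=false
  conductor-server:
    build: .  # any image that bundles conductor-server-*-all.jar plus the config
    depends_on:
      - elasticsearch
    ports:
      - "8080:8080"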

To add external Elasticsearch, we need to follow the code changes mentioned at the link below:
https://github.com/Netflix/conductor/tree/master/es5-persistence
Then rebuild the jar and run the Conductor server again with the updated properties.
If you still get errors, I suggest following this issue:
https://github.com/Netflix/conductor/issues/489

You can use a standalone installation of Elasticsearch 2 or Elasticsearch 5, because the associated support classes are already shipped with the Netflix Conductor binary.
To configure an external instance you have to do the following:
Install and configure standalone Elasticsearch. By default the installation exposes two ports, 9200 (HTTP) and 9300 (TCP).
Update the server.properties file with the host and port so that communication happens with the standalone Elasticsearch instance, as sketched below.
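For example, the relevant entries would look something like this (a sketch; the host is a placeholder, and whether you target 9200 or 9300 depends on whether your build uses the REST client or the native transport client):
workflow.elasticsearch.instanceType=EXTERNAL
workflow.elasticsearch.url=http://your-es-host:9200
# transport-client builds typically use host:9300 without a scheme instead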
Hope this helps.

Related

Elasticsearch local instance

I was wondering if it is possible to distribute an elasticsearch instance packed within some sort of installer to run locally on a client's machine.
I know that elastic is ok to distribute, but I'm looking for some way to pack it up as a dependency in a larger project.
Thanks!
I guess a Docker container is the best solution for your use case.
You can look at this project:
https://github.com/deviantony/docker-elk
One requirement is that the targeted host/workstation must have Docker (and optionally Docker Compose) installed.
docker-compose lets you package several containers together; for example, you could run a multi-node cluster plus Kibana, as in the sketch below.
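If docker-elk is more than you need, a single-node Elasticsearch plus Kibana can be described in a very small docker-compose.yml like this (a sketch; pin whatever version you actually depend on; the Kibana image looks for a host named elasticsearch by default, which matches the service name here):
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    environment:
      # single-node dev mode, skips production bootstrap checks
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.2
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch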

How to monitor an ElasticSearch Cluster on the Elastic Cloud with Datadog?

We have an elasticsearch cluster deployed to the Elastic Cloud and would like to send monitoring/health metrics to Datadog. What is the best way to do that?
It seems like our options are:
Installing the datadog agent binary via the plugins upload
Using Metricbeat -> Logstash -> the datadog_metrics output
You can deploy the Datadog agent in a container or instance that you manage and then configure it according to the integration instructions to gather metrics from the remote Elasticsearch cluster hosted on Elastic Cloud. You need to create a conf.yaml file in the elastic.d/ directory and provide the required information (Elasticsearch endpoint/URL, username, password, port, etc.) for the agent to be able to connect to the cluster. You can find a sample configuration file in the Datadog Elasticsearch integration docs.
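A minimal elastic.d/conf.yaml along those lines might look like this (a sketch; the URL and credentials are placeholders for your own Elastic Cloud deployment):
init_config:

instances:
    # endpoint of the Elastic Cloud cluster to monitor
  - url: https://your-deployment.es.us-east-1.aws.found.io:9243
    username: datadog-readonly
    password: "<PASSWORD>"
    cluster_stats: true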
As George Tseres mentioned above, the way I had to get this working was to set up collection on a separate instance (through docker) and then to configure it to read the specific Elastic Cloud instances.
I ended up making this: https://github.com/crwang/datadog-elasticsearch, building that docker image, and then pushing it up to AWS ECR.
Then, I spun up a Fargate service / task to run the container.
I also set it to run locally with docker-compose as a test.

Sending log files/data from one EC2 instance to another

So I have one EC2 instance with Logstash, Elasticsearch, and Kibana installed on it, and I have another EC2 instance that's running a dummy Apache server. Now I know that I should install Filebeat on the Apache server instance to send the log files to the Logstash instance, but I'm not sure how to configure the files.
My main goal is to send the log files from one instance basically to another for processing and viewing aka ES and Kibana. Any help or advice is greatly appreciated.
Thanks in advance!
Cheers!
So as you have already stated, the easiest way to send log events from one machine to an Elastic instance is to install the Filebeat agent on the machine where Apache is running.
Filebeat has its own Apache module that makes the configuration even easier! In the module you specify the paths of the desired log files.
Then you also need to configure Filebeat itself. In filebeat.yml you need to define the Logstash destination under
output.logstash
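Putting the module and the output together, a filebeat.yml sketch could look like this (the paths and the Logstash address are assumptions for a typical Apache-on-EC2 setup; the module is named apache2 in Filebeat 6.x and apache in 7.x):
filebeat.modules:
  - module: apache2
    access:
      # where Apache writes its access logs on this instance
      var.paths: ["/var/log/httpd/access_log*"]
    error:
      var.paths: ["/var/log/httpd/error_log*"]

output.logstash:
  # private address of the EC2 instance running Logstash
  hosts: ["<logstash-instance-private-ip>:5044"]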
This configuration guide goes into more detail.
Take a look at the filebeat.yml reference for all configuration settings.
If you are familiar with docker, there is also a guide on how to run filebeat on docker.
Have fun! :-)

Docker AspNet Core + Couchbase

I have a .NET Core API which includes the required Couchbase SDK installed.
The API has Docker support through Visual Studio. I added a Couchbase section to the docker-compose file; when I run the docker-compose up command, both the API and Couchbase are running, and I am able to view the Couchbase UI.
Question: what will my connection string be in the appsettings.json file to connect to this Couchbase cluster? Because it's running inside Docker, which will have its own IP addresses, I cannot simply go to localhost:8091.
Also, in the compose file, how do I set the username, password, and default bucket to use in Couchbase? I had a look at the docs on Docker/Couchbase and couldn't find anything, and couldn't find much on Google about this either.
You should use the hostname, not the IP, and that hostname will be the service name in the docker-compose.yml file. In other words, use db.
All the documentation about Couchbase on Docker can be found here: https://docs.couchbase.com/server/6.0/install/getting-started-docker.html and there's also some quick-start information here: https://hub.docker.com/_/couchbase
When you say the "connection string" for Couchbase, this is normally the IP address or network address of one or more Couchbase nodes. Since you are using docker-compose, I think "db" might be what you use instead.
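For example, appsettings.json could point at the compose service name like this (a sketch; the section shape depends on your SDK version and how you bind configuration, and the credentials are placeholders):
{
  "Couchbase": {
    "Servers": [ "http://db:8091" ],
    "Username": "Administrator",
    "Password": "password"
  }
}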
As for your other questions: "how do I set the username and password, default bucket to use in couchbase" -> there is no way that I know of currently to do this with docker or docker compose using the off-the-shelf docker images supplied by Couchbase. If you want to automate this, you could create your own docker image (based on the Couchbase image) that runs a script (details too long for this SO answer, but you could check out this blog post, for instance).
Alternatively, there is a Kubernetes operator that is capable of doing exactly this (for Couchbase Enterprise only), which I guess isn't too helpful if you're set on using docker-compose.

How to set Elasticsearch 6.x password without using X-Pack

We are using Elasticsearch in a Kubernetes cluster (not exposed publicly) without X-Pack security, and had it working in 5.x with elastic/changeme, but after trying to get it set up with 6.x, it's now requiring a password, and the default of elastic/changeme no longer works.
We didn't explicitly configure it to require authentication, since it's not publicly exposed and only accessible internally, so not sure why it's requiring the password, or more importantly, how we can find out what it is or how to set/change it without using X-Pack security.
Will we end up needing to subscribe to X-Pack since we're trying to use it within a Kubernetes cluster?
Not sure how you are deploying Elasticsearch in Kubernetes, but we had a similar issue and ended up passing this:
xpack.security.enabled=false
through the environment to the container.
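In a Kubernetes manifest that would be an env entry on the Elasticsearch container, something like the snippet below; the official image turns such environment variables into elasticsearch.yml settings:
env:
  # disable X-Pack security on the embedded default distribution
  - name: xpack.security.enabled
    value: "false"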
If you don't use X-Pack at all, you should use the OSS flavor of Elasticsearch. It includes only the open-source components of Elasticsearch:
docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.2
The interesting thing is, Elastic has removed any mention of it from the documentation since 6.3.
See:
Docker 6.2
Docker current
