Installing Elasticsearch over Cassandra on a live server - elasticsearch

I need to install Elasticsearch over an existing Cassandra database.
I have over 50 customers using the server live right now.
There is an option to install Elassandra [Elasticsearch + Cassandra], but my customers are using the database live, and in order to install Elassandra I would have to uninstall Cassandra.
Is there any solution to this problem?
Thanks in advance.

As the cluster is running right now, you can update node by node to an Elassandra installation with Elasticsearch deactivated.
Once all nodes are updated, restart them one by one and activate Elasticsearch.
If you have only one server running, you should rethink your architecture.
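A rough per-node sketch of that sequence, assuming a tarball-style Elassandra install (the -e switch comes from the Elassandra quickstart; the service name and paths here are placeholders, and package-based installs expose the same toggle differently):
nodetool drain                 # flush memtables and stop accepting writes on this node
sudo systemctl stop cassandra  # or however you stop Cassandra on this host
# install the Elassandra release matching your Cassandra data format, then:
bin/cassandra                  # phase 1: start as plain Cassandra, Elasticsearch stays off
# once every node runs Elassandra, restart them one at a time with:
bin/cassandra -e               # phase 2: -e activates the embedded Elasticsearch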

Related

How to set up 2 nodes on Elasticsearch?

Hello enthusiastic people.
I am a student trying to learn the Elastic Stack.
I have one node installed on my local machine. I have also successfully installed Beats on my other local machine to collect data and deliver it to my Logstash.
My question is: if I add another node, do I still need to install Kibana and Elasticsearch on it, and then connect it to my first node?
I have read in many places that a single node is prone to data loss.
Sorry for my noob question.
Any answer is very much appreciated.
Thanks in advance.
A cluster can have one or more nodes, but having at least three nodes is good for data security and integrity.
During learning and development it will be easier for you to install with Docker. I recommend you follow the link below, which explains how to set up a three-node Elasticsearch cluster on Docker:
Start a multi-node cluster with Docker Compose
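A trimmed-down sketch of such a Compose file for local learning only (the image tag, heap sizes, and disabled security are assumptions, not production settings; the linked guide is the authoritative version):
version: "3"
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - node.name=es01
      - cluster.name=learning-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - xpack.security.enabled=false   # local learning only
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ports:
      - "9200:9200"
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - node.name=es02
      - cluster.name=learning-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - node.name=es03
      - cluster.name=learning-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
Bring it up with docker compose up and check http://localhost:9200/_cluster/health to see the three nodes form one cluster. Kibana only needs to be installed once and pointed at the cluster, not on every node.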

Is it possible to install only the profiling feature of Elasticsearch 6.8 X-Pack?

My installation of Elasticsearch 6.8.22 does not include X-Pack, because so far we have not needed any of its features.
I am generating profiling data on queries using the Profile API, but I want to use the Search Profiler UI available in Kibana. The documentation says that this is part of X-Pack.
My questions are:
1. Do I have to install all of X-Pack in order to use the Search Profiler UI, or is it possible to install only certain features?
2. Do I have to install X-Pack on the entire Elasticsearch cluster where I am running the query in order to profile it?
3. Can I isolate the X-Pack installation by creating a new, separate Kibana installation on its own server and connecting it to my cluster, or does the X-Pack installation need to be on one or more Elasticsearch servers in my cluster?
The answer to 1 is that X-Pack needs to be installed as an entire package.
The answer to 2 is yes, on the entire cluster.
The answer to 3 is no; see the answer to 2.
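For reference, the Profile API itself needs no X-Pack at all; you get the raw profiling data by adding "profile": true to an ordinary search request, which is what the Search Profiler UI then visualises. A minimal example (index name and query are placeholders):
curl -X GET 'http://localhost:9200/my-index/_search' -H 'Content-Type: application/json' -d '
{
  "profile": true,
  "query": { "match": { "message": "error" } }
}'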

How to define datacenter.group in conf/elasticsearch.yml in order to run Elassandra multi data center?

I have 2 DCs:
DC1
x.x.x.1 running Elassandra 6.2.3 (seed)
x.x.x.2 running Elassandra 5.5.0 (seed)
DC2
x.x.x.3 running Elassandra 6.2.3 (seed)
Actually, I didn't want to create a multi-data-center setup. At first I had only the two nodes in DC1, but they were unable to connect with each other because the minimum version that allows connectivity among Elassandra nodes is 5.6.
What stops me from reinstalling Elassandra 5.5 as 6.2 is that I have important data on that node, so I came up with the multi-data-center solution.
The solution I previously got from the Strapdata folks is:
1. Create a new Cassandra datacenter DC2 running version 6.2.3, with a dedicated datacenter group (see https://elassandra.readthedocs.io/en/latest/configuration.html#multi-datacenter-configuration).
2. Re-create your indices in DC2; there are a few differences in the Elasticsearch mapping between versions 5.5 and 6.2, so you have to deal with that manually. If you have a lot of data to re-index, you can stop the single-threaded index build with nodetool stop -id <compaction_id> and restart it with multiple threads (a rough command sketch follows this list); see https://elassandra.readthedocs.io/en/latest/operations.html?highlight=--num-threads#create-delete-and-rebuild-index.
3. Test your application on DC2 (warning: there are breaking changes in the Elasticsearch API when upgrading).
4. Remove the old DC running version 5.5 when everything is OK on DC2.
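A rough sketch of that stop-and-rebuild step (keyspace, table, index, and thread count are placeholders; check the linked operations page for the exact rebuild_index options in your Elassandra version):
nodetool stop -id <compaction_id>                                      # cancel the running single-threaded build
nodetool rebuild_index --num-threads 4 my_keyspace my_table my_index   # restart the build with several threads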
I've searched all over the internet and there is no mention of datacenter.group in elasticsearch.yml (http://doc.elassandra.io/en/latest/configuration.html#multi-datacenter-configuration).
Now I have no idea what to do with the datacenter.group setting.
Please help.
Thanks
For anyone who comes across this issue:
after a couple of hours I figured out how to define datacenter.group.
Simply add datacenter.group: <your desired name> at the bottom of the elasticsearch.yml file,
then restart the Cassandra service:
systemctl restart cassandra
You're good to go! All the data will automatically be transferred to the new node.
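For example, on each node of the new datacenter the tail of conf/elasticsearch.yml would look something like this (the group name dc2_group is only an illustration), followed by the restart shown above:
# tail of conf/elasticsearch.yml
datacenter.group: dc2_group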

Elasticsearch upgrade order based on node roles

I looked for this on many websites, including the official Elastic documentation, without success.
I have one Elasticsearch cluster with:
3 Master nodes
4 Data nodes
4 Ingest nodes
2 Client nodes
I must perform a rolling upgrade (from 5.x to 5.x), but the official docs do not explain the order based on node roles.
Should I upgrade the master nodes first? What next, the data nodes?
In short, I need to know the best way to get the whole cluster upgraded.
Thanks,
Best regards
We had a similar situation, and Elastic recommended upgrading the master nodes first, followed by the data nodes and then the client nodes.
It's worth checking whether your 5.x version has an Upgrade Assistant available (you can see that in Kibana).
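Whatever order you settle on, the per-node mechanics are the usual rolling-upgrade steps; a hedged sketch against a 5.x cluster (host and timeout values are examples):
# before stopping each node, pause shard allocation
curl -X PUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{ "transient": { "cluster.routing.allocation.enable": "none" } }'
# stop the node, upgrade the package, start it again, then re-enable allocation
curl -X PUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{ "transient": { "cluster.routing.allocation.enable": null } }'
# wait for green before moving on to the next node
curl 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=60s'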

Elasticsearch Migration Plugin installation steps

In order to upgrade Elasticsearch 2.3.z to 5.x, I have successfully installed the Elasticsearch Migration Plugin, but when browsing to http://localhost:9200/_plugin/elasticsearch-migration/ I get a blank page.
Can someone let me know the following:
Do I have to install the plugin on all the machines in the cluster?
Do I have to restart all the machines in the cluster?
I have already tried the steps mentioned here, but I am still getting a blank page.
As discussed in the Elastic community:
The plugin does not need to be installed on all the nodes; one node is sufficient.
No restart of the node or the cluster is needed.
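A quick way to double-check on the node where you installed it (assuming a 2.x tarball layout; the URL is the one from the question):
bin/plugin list                                                  # the migration plugin should be listed on this node
curl -i 'http://localhost:9200/_plugin/elasticsearch-migration/' # query the node that actually has the plugin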
