Elasticsearch NEST client compatibility

According to the NEST compatibility matrix (https://github.com/elastic/elasticsearch-net/blob/master/readme.md#compatibility-matrix), I'm having some trouble understanding version support. We worked for a long time with the NEST 2.5 client and an Elasticsearch 5.4 server without any problems, and now in a local test (still NEST 2.5, this time against Elasticsearch 7.5) everything seems to work fine (index creation, indexing, searching, ...).
Can you please help me understand?

The compatibility matrix lists the supported, tested version combinations. It may well be that, for the particular APIs you use, NEST 2.5 happens to work against Elasticsearch 7.5, but this is coincidental and not something that is supported or tested to be compatible. In addition, if you do run into an issue, the first suggestion will be to switch to a compatible version.

Related

Elasticsearch-dsl python migration - upgrading major versions

Based on Elasticsearch DSL docs (https://elasticsearch-dsl.readthedocs.io/en/latest/)
"you have to use a matching major version" of the library for compatibility.
Specifically:
For Elasticsearch 7.0 and later, use the major version 7 (7.x.y) of the library.
For Elasticsearch 6.0 and later, use the major version 6 (6.x.y) of the library.
What's the best practice then for upgrading from ES 6 to ES 7?
This seems to imply that you can't make your code forward compatible with ES 7 server without making it backwards incompatible with an ES 6 server at the same time.
I'm trying to avoid having two different versions of the code having to exist at the same time by making it forwards compatible in-place first, before upgrading the server. Has anyone done this?
(We have lots of analyzers, tokenizers, multiple Documents, etc that we really don't want to have to duplicate in the code in the middle of the migration.)
There's an upgrade path that you need to follow, and no need to maintain two different code bases. You should first upgrade to the latest minor/patch version of the ES 6 series (i.e. 6.7 or 6.8) and make sure your indexes are compatible with that version.
You should also migrate your clients to that same latest ES 6 minor/patch version, since Elastic makes sure that version is forward-compatible with the next major version (i.e. ES 7).
Once you've tested everything on ES 6.7/6.8 (and properly backed up your data), you can safely upgrade to ES 7 and your clients will keep working. Once ES itself is upgraded, you can upgrade your clients to ES 7 as well.
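The "matching major version" rule can also be made explicit in code: a small startup check that compares the major version reported by the server with the major version of the installed library, so a mismatch fails fast instead of surfacing as odd API errors mid-migration. A minimal sketch; the helper names (`parse_major`, `check_client_compat`) are hypothetical, not a real elasticsearch-dsl API.

```python
# Sketch: fail fast when the client library's major version does not
# match the Elasticsearch server's major version (the documented rule).
# The helper names here are illustrative, not part of any library.

def parse_major(version: str) -> int:
    """Extract the major version from a dotted version string, e.g. '6.8.23' -> 6."""
    return int(version.split(".")[0])

def check_client_compat(server_version: str, client_version: str) -> None:
    """Raise if the client and server major versions differ."""
    server_major = parse_major(server_version)
    client_major = parse_major(client_version)
    if server_major != client_major:
        raise RuntimeError(
            f"client {client_version} is not supported against Elasticsearch "
            f"{server_version}: use a {server_major}.x client"
        )

# In a real application the two versions would come from the library and
# the cluster, e.g. elasticsearch_dsl.__version__ and the version number
# returned by the cluster info endpoint.
check_client_compat("6.8.23", "6.4.0")  # same major: no error
```

During the migration described above, this check would pass on ES 6.8 with a 6.x client, and again after you flip both the server and the client to their 7.x versions.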

Elasticsearch - Post Data using Java 1.4

We are using Java 1.4 and we would like to push data to the ELK stack.
I checked their site and googled, but the artifacts/articles I found mostly require more than Java 1.5. Are there any options, since we can't change the currently installed Java version?
Regards
Java SE 6 was released in 2006 and, if I remember correctly, it has been the minimum version for Elasticsearch (first public release in 2010) even in the early days.
The oldest docs available on the Elastic website are for 0.90, and that is ancient. Even if you could run an older version, there are no docs for it, so you really don't want to go there.
While upgrading existing applications can be a challenge, could you run new services on a newer Java version? In any case, you need to get to Java 6 at the very least, or rather Java 8 for current versions.
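For what it's worth, Elasticsearch itself only speaks HTTP and JSON, so any environment that can make an HTTP request can index documents without an official client library. As a minimal sketch (shown here in Python rather than Java 1.4, purely to illustrate the wire format), this builds a body for the `_bulk` endpoint: newline-delimited JSON, with an action line before each document and a trailing newline. The index name and documents are made up for illustration.

```python
import json

def build_bulk_body(index: str, docs: list) -> str:
    """Build an NDJSON body for Elasticsearch's _bulk endpoint:
    one 'index' action line followed by the document source,
    with the trailing newline the API requires."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# POST this body to http://<host>:9200/_bulk with the header
# Content-Type: application/x-ndjson, using any HTTP client.
body = build_bulk_body("logs", [{"msg": "hello"}, {"msg": "world"}])
```

The same payload could be assembled and POSTed from an old Java runtime with nothing more than string handling and `HttpURLConnection`, which is the escape hatch when no supported client exists.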

MongoDB C# 2.0 upgrade

We are currently in the process of upgrading the MongoDB C# driver. There used to be "GridFS" functionality to save large BSON documents in chunks; the 2.0 driver doesn't seem to have that feature.
We would like to know whether it is still in scope, and when we can expect this feature to be available.
Your response would be much appreciated.
You can track the feature here: https://jira.mongodb.org/browse/CSHARP-1191.
It is largely implemented, but we are waiting for the specification to be finalized. It will ship with the 2.1 version of the driver.

What is the effort required for migrating from Hadoop 0.20.2 to 0.20.205 and from 0.20.2 to 1.0.1?

I was looking to migrate my EMR implementation from an older version to the latest versions because I am facing a lot of issues with the current one.
My current implementation uses Hadoop 0.20.2.
I wanted to understand how much effort in terms of code change would be required for migrating from 0.20.2 to -
0.20.205
1.0.1
Are the APIs very different and require a lot of recoding? Any basic idea would be highly helpful.
0.20.205 was just renamed to 1.0, so it is essentially the same release. The APIs have hardly any differences. 1.0 is similar to 0.20.2 plus the append and security features, which basically means it supports HBase integration and can be used in enterprises.
We ported our jobs running on EMR from 0.20.2 directly to 1.0. All our jobs, whether they used the new or the old API, ran correctly without a single issue and without us having to change anything. So I believe you should not face any problems.

What options are available for mapping a database schema?

What are some programs that people use to map out a database schema with several tables and inter-connected keys? Preferably for OS X.
MySQL Workbench is a semi-decent tool (especially if you're using MySQL), although I haven't tried the OS X version. They're busy upgrading to version 5.2 at the moment, which looks like it will be a lot better than the current stable 5.1. It's still kind of buggy though, so 5.1 is the way to go unless you're brave.