After upgrading from 6.x to 7.x, I get the following error when creating an index:
RequestError(400, 'mapper_parsing_exception',
'Root mapping definition has unsupported parameters:
[speechanalytics-transcript : {
properties={
transcript_operator={similarity=scripted_tfidf, type=text}}}]')
The request body is:
{
'settings': {
'similarity': {
'scripted_tfidf': {
'type': 'scripted',
'script': {'source': 'double tf = doc.freq; return query.boost * tf;'},
},
},
},
'mappings': {
'speechanalytics-transcript': {
'properties': {
'transcript_operator':{
'type': 'text',
'analyzer': 'standard',
'similarity': 'scripted_tfidf',
}
}
}
}
}
Mapping types were removed in the new version:
https://www.elastic.co/guide/en/elasticsearch/reference/6.7/removal-of-types.html
The mapping needs to change from
'mappings': {
'speechanalytics-transcript': {
'properties': {
'transcript_operator':{
'type': 'text',
'analyzer': 'standard',
'similarity': 'scripted_tfidf',
}
}
}
}
to
'mappings': {
'properties': {
'transcript_operator':{
'type': 'text',
'analyzer': 'standard',
'similarity': 'scripted_tfidf',
}
}
}
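Putting it together, the full create-index body after the change (settings unchanged, the mapping no longer wrapped in a type name) should look like this:
{
    'settings': {
        'similarity': {
            'scripted_tfidf': {
                'type': 'scripted',
                'script': {'source': 'double tf = doc.freq; return query.boost * tf;'},
            },
        },
    },
    'mappings': {
        'properties': {
            'transcript_operator': {
                'type': 'text',
                'analyzer': 'standard',
                'similarity': 'scripted_tfidf',
            }
        }
    }
}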
Related
I'm trying to use analyzers to return alphabetically sorted data, but my results always come back in lexicographical order. I've tried multiple implementations from here and other sources to no avail. Is the issue in my tokenizer? Or is my use of custom analyzers wrong? Thanks in advance.
await client.indices.create({
index: esIndexReport,
body: {
settings: {
analysis: {
filter: {
min_term_length: {
type: 'length',
min: 2,
},
},
analyzer: {
name_analyzer: {
tokenizer: 'whitespace',
filter: [
'lowercase',
'min_term_length',
],
},
min_term_analyzer: {
tokenizer: 'standard',
filter: [
'lowercase',
'min_term_length',
],
},
},
},
},
mappings: {
report: {
properties: {
reportId: {
type: 'text',
analyzer: 'min_term_analyzer',
},
reportName: {
type: 'text',
analyzer: 'name_analyzer',
},
description: {
type: 'text',
analyzer: 'name_analyzer',
},
author: {
type: 'text',
analyzer: 'min_term_analyzer',
},
icType: {
type: 'text',
analyzer: 'min_term_analyzer',
},
status: {
type: 'text',
analyzer: 'min_term_analyzer',
},
lastUpdatedAt: {
type: 'text',
analyzer: 'min_term_analyzer',
},
'sort.reportName': {
type: 'text',
fielddata: true,
},
'sort.description': {
type: 'text',
fielddata: true,
},
'sort.author': {
type: 'text',
fielddata: true,
},
'sort.status': {
type: 'text',
fielddata: true,
},
'sort.lastUpdatedAt': {
type: 'text',
fielddata: true,
},
},
},
},
},
});
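If the unexpected order is just uppercase values sorting before lowercase ones (which is what sorting on a fielddata-backed text field gives you), one common fix is to sort on a keyword sub-field with a lowercase normalizer instead of the analyzed field; analyzers only control how terms are tokenized for matching, not the order whole values sort in. A minimal sketch, with placeholder names (sortable, lowercase_sort) and only reportName shown:
settings: {
  analysis: {
    normalizer: {
      lowercase_sort: {              // hypothetical normalizer name
        type: 'custom',
        filter: ['lowercase'],
      },
    },
    // ...existing filter and analyzer definitions...
  },
},
mappings: {
  report: {
    properties: {
      reportName: {
        type: 'text',
        analyzer: 'name_analyzer',
        fields: {
          sortable: {                // sort requests would target 'reportName.sortable'
            type: 'keyword',
            normalizer: 'lowercase_sort',
          },
        },
      },
      // ...other fields as before...
    },
  },
},
The same pattern can replace each of the fielddata: true text fields used for sorting, assuming the cluster is on a version (5.2 or later) that supports normalizers.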
This is the mutation I want to perform:
const GraphQLAddPlayerResponseMutation = mutationWithClientMutationId({
name: 'AddPlayerResponse',
inputFields: {
cdx: { type: new GraphQLNonNull(GraphQLInt) },
},
mutateAndGetPayload: ({cdx}) => {
var cdxAdded = addplayerResponse(cdx);
console.log("cdxAdded = ",cdxAdded)
return cdxAdded;
}, // what you return from mutateAndGetPayload is available in outputFields
outputFields: {
playerResponse: {
type: GraphQLInt,
resolve: ({cdxAdded}) => {
console.log("outputFields cdxAdded = ",cdxAdded)
return cdxAdded
},
},
viewer: {
type: GraphQLUser,
resolve: () => getViewer(),
},
},
});
Can't figure out what's wrong with the code; it logs inside mutateAndGetPayload:
mutateAndGetPayload: ({cdx}) => {
var cdxAdded = addplayerResponse(cdx);
console.log("cdxAdded = ",cdxAdded)
return cdxAdded;
},
but I think outputFields is not evaluated, since nothing is logged to the console, and I get this error:
{
"data": {
"addPlayerResponse": null
},
"errors": [
{
"message": "Cannot create property 'clientMutationId' on number '3'",
"locations": [
{
"line": 4,
"column": 3
}
],
"path": [
"addPlayerResponse"
]
}
]
}
Help?
Replace return cdxAdded; with return { cdxAdded }; (wild guess)
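That matches the error: mutationWithClientMutationId attaches clientMutationId to whatever mutateAndGetPayload returns, so the payload has to be an object rather than a bare number, and the destructuring in outputFields then lines up with it. The relevant part of the mutation would become:
mutateAndGetPayload: ({ cdx }) => {
  var cdxAdded = addplayerResponse(cdx);
  console.log("cdxAdded = ", cdxAdded);
  return { cdxAdded };                    // return an object so clientMutationId can be attached to it
},
outputFields: {
  playerResponse: {
    type: GraphQLInt,
    resolve: ({ cdxAdded }) => cdxAdded,  // destructuring now matches the returned payload
  },
  viewer: {
    type: GraphQLUser,
    resolve: () => getViewer(),
  },
},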
I am getting the following Elasticsearch error when I try to sort search results by distance with Mongoosastic:
{ message: 'SearchPhaseExecutionException[Failed to execute phase
[query_fetch], all shards failed; shardFailures
{[rQFD7Be9QbWIfTqTkrTL7A][users][0]: SearchParseException[[users][0]:
query[filtered(+keywords:cafe)->GeoDistanceFilter(location,
SLOPPY_ARC, 25000.0, -70.0264952, 41.2708115)],from[-1],size[-1]:
Parse Failure [Failed to parse source
[{"timeout":60000,"sort":[{"[object Object]":{}}]}]]]; nested:
SearchParseException[[users][0]:
query[filtered(+keywords:cafe)->GeoDistanceFilter(location,
SLOPPY_ARC, 25000.0, -70.0264952, 41.2708115)],from[-1],size[-1]:
Parse Failure [No mapping found for [[object Object]] in order to sort
on]]; }]' }
See below for a code sample:
var query = {
"filtered": {
"query": {
"bool": {
"must": [
{
"term": {
"keywords": "cafe"
}
}
]
}
},
"filter": {
"geo_distance": {
"distance": "25km",
"location": [
41.2708115,
-70.0264952
]
}
}
}
};
var opts = {
"sort": [
{
"_geo_distance": {
"location": [
41.2708115,
-70.0264952
],
"order": "asc",
"unit": "km",
"distance_type": "plane"
}
}
],
"script_fields": {
"distance": "doc[\u0027location\u0027].distanceInMiles(41.2708115, -70.0264952)"
}
};
User.search(query, opts, function (err, data) {
if (err || !data || !data.hits || !data.hits.hits || !data.hits.hits.length) {
return callback(err);
}
var total = data.hits.total,
//page = params.page || 1,
per_page = query.size,
from = query.from,
//to = from +
page = query.from / query.size,
rows = data.hits.hits || [];
for (var i = 0; i < rows.length; i++) {
rows[i].rowsTotal = total;
}
callback(err, toUser(rows, params));
});
Here is the User schema:
var schema = new Schema({
name: {type: String, default: '', index: true, es_type: 'string', es_indexed: true},
location: {type: [Number], es_type: 'geo_point', es_indexed: true, index: true},
shareLocation: {type: Boolean, default: false, es_type: 'boolean', es_indexed: true},
lastLocationSharedAt: {type: Date},
email: {type: String, default: '', index: true, es_type: 'string', es_indexed: true},
birthday: {type: String, default: ''},
first_name: {type: String, default: ''},
last_name: {type: String, default: ''},
gender: {type: String, default: ''},
website: {type: String, default: '', index: true, es_indexed: true},
verified: {type: Boolean, default: false},
});
I am also getting an error; I think the Mongoosastic upgrade is double-wrapping the code. It definitely seems to be related to 'sort' rather than to the search itself, but I'm still reviewing. Val seems to have a better idea of what is going on, as it may have more to do with the user schema than with the function. I am using a similar schema, just upgraded, and ran into the same issues.
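For what it's worth, the parse failure shows the sort arriving as "sort":[{"[object Object]":{}}], i.e. the sort object is being stringified before it reaches Elasticsearch, rather than the _geo_distance sort itself being rejected. One way to confirm the query and sort are fine is to send them in a single body through the official elasticsearch JS client, bypassing Mongoosastic's option handling. A rough sketch, assuming a locally reachable cluster and the users index from the error message (note that geo arrays are [lon, lat], so the coordinates below are swapped relative to the original query):
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({ host: 'localhost:9200' }); // adjust host as needed

client.search({
  index: 'users',
  body: {
    query: {
      filtered: {
        query: { bool: { must: [{ term: { keywords: 'cafe' } }] } },
        filter: {
          geo_distance: {
            distance: '25km',
            location: [-70.0264952, 41.2708115]   // [lon, lat]
          }
        }
      }
    },
    sort: [{
      _geo_distance: {
        location: [-70.0264952, 41.2708115],      // [lon, lat] here as well
        order: 'asc',
        unit: 'km',
        distance_type: 'plane'
      }
    }]
  }
}, function (err, resp) {
  if (err) return console.error(err);
  console.log(resp.hits.hits);
});
If that returns sorted hits, the problem is in how the sort option is handed to Mongoosastic rather than in the query or the schema.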
I'm getting an error when I run this script to concatenate longitude and latitude doubles into a geo_point.
ElasticsearchIllegalArgumentException[the character \'.\' is not a valid geohash character]
Here's my script for reference:
mappings: {
'index': {
'transform': {
'lang': 'groovy',
'script': "ctx._source['coords'] = [ctx._source['lon'],ctx._source['lat']]"
},
'properties': {
'lon': {
'type': 'double',
},
'lat': {
'type': 'string',
},
'coords': {
'type': 'geo_point',
}
}
}
}
I'd appreciate any help, thanks!
Since you are extracting data from the source, you need to convert the strings into doubles in your Groovy script:
new Double(ctx._source['lon']);
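Applied to the transform above, that would look something like this (a sketch, assuming both values can arrive in _source as strings):
'transform': {
    'lang': 'groovy',
    'script': "ctx._source['coords'] = [new Double(ctx._source['lon']), new Double(ctx._source['lat'])]"
},
The geohash error suggests the geo_point is receiving a plain string: a string without a comma is treated as a geohash, and '.' is not a valid geohash character, which is exactly what a stringified coordinate such as "41.2708115" would trigger.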
I am getting QueryParsingException[[listings] failed to find geo_point field [location.coords]]; }] and can't quite figure out why.
My Query
esClient.search({
index: 'listings',
body: {
query: {
filtered: {
query: {
match_all: {}
},
filter: {
geo_distance: {
distance: '10km',
'location.coords': {
lat: parseFloat(query.point.lat),
lon: parseFloat(query.point.lon)
}
}
}
}
}
}
}, function(err, response) {
if(err) return console.log(err);
console.log(response);
});
My mapping (as you can see, yes, I did use the geo_point type):
body: {
properties: {
location: {
type: 'object',
properties: {
country: {
type: 'integer'
},
address: {
type: 'string'
},
suburb: {
type: 'string'
},
coords: {
type: 'geo_point',
precision: '3m'
}
}
}
}
}
EDIT:
After looking up http://localhost:9200/listings/_mapping/my-mapping, I noticed the coords field has lat and lon set to double. Could this have something to do with it?
OK, so it turns out this was happening because of how I was defining the geo_point's precision (it needs to be wrapped in a fielddata property); oddly, the JS API I'm using didn't throw any kind of error that I recall:
Correct:
coords: {
type: 'geo_point',
fielddata: {
format: 'compressed',
precision: '3m'
}
}
Incorrect:
coords: {
type: 'geo_point',
precision: '3m'
}
And voila...
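For completeness, here is how that fix slots into the location mapping from the question (same fields, only the coords definition changed), so the geo_distance filter on 'location.coords' has an actual geo_point to resolve against:
body: {
  properties: {
    location: {
      type: 'object',
      properties: {
        country: { type: 'integer' },
        address: { type: 'string' },
        suburb: { type: 'string' },
        coords: {
          type: 'geo_point',
          fielddata: {
            format: 'compressed',
            precision: '3m'
          }
        }
      }
    }
  }
}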