Logstash csv import - mutate add_field if not empty - elasticsearch

I'm using Logstash to import data from CSV files into our Elasticsearch. During the import I want to create a new field that combines the values of other fields. Here's a snippet of my import:
input {
file {
path => "/data/xyz/*.csv"
start_position => "beginning"
ignore_older => 0
sincedb_path => "/dev/null"
}
}
filter {
if [path] =~ "csv1" {
csv {
separator => ";"
columns =>
[
"name1",
"name2",
"name3",
"ID"
]
}
mutate {
add_field => {
"searchfield" => "%{name1} %{name2} %{name3}"
}
}
}
}
output {
if [path] =~ "csv1" {
elasticsearch {
hosts => "localhost"
index => "my_index"
document_id => "%{ID}"
}
}
}
This works as desired, but on rows where, for example, name3 is empty, Logstash writes the literal %{name3} into the new field. Is there a way to only add the value if it's not empty?

I think there's no way other than checking whether name3 is present and, based on that, building your search field:
if [name3] {
mutate {
id => "with-name3"
add_field => { "searchfield" => "%{name1} %{name2} %{name3}" }
}
} else {
mutate {
id => "without-name3"
add_field => { "searchfield" => "%{name1} %{name2}" }
}
}
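If the number of optional columns grows, the if/else branches multiply quickly. A sketch of an alternative (same field names as above) is a ruby filter that joins only the fields that are present and non-empty:

```
ruby {
  code => '
    # collect the candidate fields, drop missing (nil) and empty ones
    parts = ["name1", "name2", "name3"].map { |f| event.get(f) }
    event.set("searchfield", parts.compact.reject(&:empty?).join(" "))
  '
}
```

This keeps the conditional logic in one place instead of one mutate block per combination.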
Alternatively, if I understand your issue correctly: you want to ship this data to Elasticsearch and have a single searchable field. To avoid duplicating data in your source documents, you can build a search field by using the copy_to mapping parameter. Your mappings would look as follows:
{
"mappings": {
"doc": {
"properties": {
"name1": {
"type": "text",
"copy_to": "searchfield"
},
"name2": {
"type": "text",
"copy_to": "searchfield"
},
"name3": {
"type": "text",
"copy_to": "searchfield"
},
"searchfield": {
"type": "text"
}
}
}
}
}
and then you can run your queries against that field without having duplicates in the source.
Update: basically, your logstash.conf would look as follows:
input {
file {
path => "/data/xyz/*.csv"
start_position => "beginning"
ignore_older => 0
sincedb_path => "/dev/null"
}
}
filter {
if [path] =~ "csv1" {
csv {
separator => ";"
columns => ["name1", "name2", "name3", "ID"]
}
}
}
output {
if [path] =~ "csv1" {
elasticsearch {
hosts => "localhost"
index => "my_index"
document_id => "%{ID}"
}
}
}
Then create the Elasticsearch index using the following:
PUT /my_index/
{
"mappings": {
"doc": {
"properties": {
"name1": {
"type": "text",
"copy_to": "searchfield"
},
"name2": {
"type": "text",
"copy_to": "searchfield"
},
"name3": {
"type": "text",
"copy_to": "searchfield"
},
"searchfield": {
"type": "text"
}
}
}
}
}
And then you can run a search as follows:
GET /my_index/_search
{
"query": {
"match": {
"searchfield": {
"query": "your text"
}
}
}
}


Elastic Search Sorting on a number field that matches phrase field

I want to sort on the nested field quality score where the trade name matches the search term.
My code is below; it works fine when the search term is one word, but it fails to sort when the search term is a phrase. How can I solve this?
sortDesc = _sortTerm switch
{
"Quality" => new SortDescriptor<Provider>().Field(so => so
.Field(f => f.Metrics.First().Data.Trades.First().QualityScore)
.Order(SortOrder.Descending)
.Nested(n => n
.Path(p => p.Metrics)
.Filter(q => q.Match(m => m
.Field(f => f.Metrics.First().Data.Trades.First().Name.Suffix("keyword"))
.Query(_searchTerm?.ToLower()))))
)
};
Thank you so much in advance.
If you want to sort on the nested numeric field based on the match_phrase results, you need to use a query like the one shown below.
Here is a working example that replicates your requirement.
Index Mapping:
{
"mappings": {
"properties": {
"Metrics": {
"type": "nested"
}
}
}
}
Index API
{
"Metrics": {
"FIELDNAME": "hello worlds",
"age": 2
}
}
{
"Metrics": {
"FIELDNAME": "hello worlds",
"age": 3
}
}
{
"Metrics": {
"FIELDNAME": "hello world",
"age": 1
}
}
Search Query:
{
"sort": [
{
"Metrics.age": {
"order": "desc",
"nested": {
"path": "Metrics"
}
}
}
],
"query": {
"nested": {
"path": "Metrics",
"query": {
"match_phrase": {
"Metrics.FIELDNAME": "SEARCH PHRASE"
}
}
}
}
}

Not Able to generate geo_point field with geo_point data type Logstash

I'm trying to load data from a MySQL table into an ES index using Logstash. I'm able to load the data into ES, but the location field mapping is not coming through as geo_point; it shows up as keyword, so I'm not able to query on the geo_point field.
Any idea what the issue is? ES version: 6.7.0
This is my template.json file:
{
"settings" :
{
"number_of_shards" : 1,
"codec": "best_compression",
"number_of_replicas" : 0,
"index.translog.flush_threshold_size": "2g",
"bootstrap.mlockall": true,
"indices.fielddata.cache.size": "25%"
},
"mappings":
{
"_doc" :
{
"dynamic_templates":
[
{
"message_field": {
"path_match": "message",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false
}
}
},
{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false,
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
],
"properties" : {
"#timestamp": {
"type": "date"
},
"#version": {
"type": "keyword"
},
"location": {
"type": "geo_point"
},
"lat" : { "type" : "keyword", "index" : "not_analyzed","index_options" : "docs" },
"lon" : { "type" : "keyword", "index" : "not_analyzed","index_options" : "docs" }
}
}
}
}
logstash.conf file:
input
{
jdbc {
..........
}
}
filter
{
mutate {
convert => { "lon" => "float" }
convert => { "lat" => "float" }
rename => {
"lon" => "[location][lon]"
"lat" => "[location][lat]"
}
}
}
output {
elasticsearch {
hosts => "host:80"
index => "my_index"
manage_template => "false"
document_type => "_doc"
template_name=>"template.json"
document_id => "%{id}"
}
}
I think you're just missing manage_template => true, and you can also add template_overwrite => true just to make sure the template gets overridden:
elasticsearch {
hosts => "host:80"
index => "my_index"
manage_template => "true" <---- change this
template_overwrite => true <---- also add this
document_type => "_doc"
template_name=>"template.json"
document_id => "%{id}"
}
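One more thing worth checking, based on the config above: template_name is the name under which the template is stored in Elasticsearch, while the path to the JSON file goes in the separate template setting. A sketch (path and template name here are illustrative, not from your setup):

```
elasticsearch {
  hosts => "host:80"
  index => "my_index"
  manage_template => true
  template_overwrite => true
  template => "/path/to/template.json"   # file path goes in 'template' (illustrative path)
  template_name => "my_template"         # a name, not a file path (hypothetical)
  document_type => "_doc"
  document_id => "%{id}"
}
```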

elasticsearch 7 nest aggregation text keyword error

I have an index with the following mappings:
{
"winnings": {
"mappings": {
"properties": {
"handId": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"id": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"playerId": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"value": {
"type": "float"
}
}
}
}
}
generated from the class:
public class ElasticWinnings
{
public Guid Id { get; set; }
public Guid HandId { get; set; }
public Guid PlayerId { get; set; }
public decimal Value { get; set; }
}
I created that in nest with the ConnectionSettings:
.DefaultMappingFor<ElasticWinnings>(u =>
u.IndexName("winnings")
.IdProperty(x => x.Id)
);
When I try to run the following query:
var result = _client.Search<ElasticWinnings>(s =>
s.Aggregations(a =>
a.Terms("term_Agg", t =>
t.Field(f => f.PlayerId)
.Aggregations(aa =>
aa.Sum("sum", sum =>
sum.Field(f => f.Value))
)
))
);
I get a 400 back, with the error:
type: illegal_argument_exception Reason: "Fielddata is disabled on text fields by default
It creates this query:
{
"aggs":{
"term_Agg":{
"aggs":{
"sum":{
"sum":{
"field":"value"
}
}
},
"terms":{
"field":"playerId"
}
}
}
}
If I changed that query to:
{
"aggs":{
"term_Agg":{
"aggs":{
"sum":{
"sum":{
"field":"value"
}
}
},
"terms":{
"field":"playerId.keyword"
}
}
}
}
and used that in Postman, it works.
I am not sure why it is not putting the .keyword into the query. Is it the way the NEST client is configured, the indices, or the query?
You need to change your query a little to tell NEST to use the keyword sub-field instead of the text field; you can do this with the .Suffix extension method (see the NEST docs).
var result = _client.Search<ElasticWinnings>(s =>
s.Aggregations(a =>
a.Terms("term_Agg", t =>
t.Field(f => f.PlayerId.Suffix("keyword"))
.Aggregations(aa =>
aa.Sum("sum", sum =>
sum.Field(f => f.Value))
)
))
);
Hope that helps.
The solution I found was to add the [Keyword] attribute to the PlayerId property in the ElasticWinnings class.
I kept the .DefaultMappingFor<ElasticWinnings>(u => u.IndexName("winnings")) in the creation of the ConnectionSettings, but added this before the Elastic client is returned:
var client = new ElasticClient(settings);
client.Indices.Create("winnings", c =>
c.Map<ElasticWinnings>(m => m.AutoMap())
);
Without the section above, the attributes were not applied. This changed my mappings (http://localhost:9200/winnings/_mappings) to:
{
"winnings": {
"mappings": {
"properties": {
"handId": {
"type": "keyword"
},
"id": {
"type": "keyword"
},
"playerId": {
"type": "keyword"
},
"value": {
"type": "double"
}
}
}
}
}
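For reference, the attribute placement described above would look something like this sketch (only the [Keyword] on PlayerId is taken from the description; the rest is the class from the question, unchanged):

```
public class ElasticWinnings
{
    public Guid Id { get; set; }
    public Guid HandId { get; set; }

    [Keyword]   // maps PlayerId as keyword instead of text
    public Guid PlayerId { get; set; }

    public decimal Value { get; set; }
}
```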
The docs on setting up mappings: https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/fluent-mapping.html

How to setup location as a geo_point in elasticsearch?

I've been running into an issue where I get: failed to find geo_point field [location].
Here is my flow.
Import csv
input {
file {
path => "test.csv"
start_position => "beginning"
sincedb_path => "/dev/null"
}
}
filter {
csv {
separator => ","
#zip,lat, lon
columns => [ "zip" , "lat", "lon"]
}
mutate {
convert => { "zip" => "integer" }
convert => { "lon" => "float" }
convert => { "lat" => "float" }
}
mutate {
rename => {
"lon" => "[location][lon]"
"lat" => "[location][lat]"
}
}
mutate { convert => { "[location]" => "float" } }
}
output {
elasticsearch {
hosts => "cluster:80"
index => "data"
}
stdout {}
}
Test records
GET data
"hits": [
{
"_index": "data",
"_type": "logs",
"_id": "AVvQcOfXUojnX",
"_score": 1,
"_source": {
"zip": 164283216,
"location": {
"lon": 71.34,
"lat": 40.12
}
}
},
...
If I try to run a geo_distance query I get failed to find geo_point field [location]
Then I try to run
PUT data
{
"mappings": {
"location": {
"properties": {
"pin": {
"properties": {
"location": {
"type": "geo_point"
}
}
}
}
}
}
}
but I get index [data/3uxAJ4ISKy_NyVDNC] already exists
How do I convert location into a geo_point so I can run the query on it?
Edit:
I tried putting a template in place before indexing anything, but I still get the same errors:
PUT _template/template
{
"template": "base_map_template",
"order": 1,
"settings": {
"number_of_shards": 1
},
"mappings": {
"node_points": {
"properties": {
"location": {
"type": "geo_point"
}
}
}
}
}
You need to name your template data instead of base_map_template, since that is how your index is named. Also, the type name needs to be logs instead of node_points:
PUT _template/template
{
"template": "data", <--- change this
"order": 1,
"settings": {
"number_of_shards": 1
},
"mappings": {
"logs": { <--- and this
"properties": {
"location": {
"type": "geo_point"
}
}
}
}
}
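Also note that index templates are only applied when an index is created. Since the data index already exists, delete it first and let Logstash re-create it so the corrected template takes effect:

```
DELETE /data
```

Then re-run the Logstash pipeline against an empty cluster state for that index.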

How do I give whole words priority in elasticsearch?

Elasticsearch is running well for me at the moment; however, I want to give whole words priority over n-grams.
I've tried the following:
client.indices.create index: index,
body: {
mappings: {
search_variable: {
properties: {
"name" => {
"type" => "string",
"index" => "not_analyzed"
},
"label" => {
"type" => "string",
"index" => "not_analyzed"
},
"value_labels" => {
"type" => "string",
"index" => "not_analyzed"
},
"value_label_search_string" => {
"type" => "string",
"index" => "not_analyzed"
},
"search_text" => {
"type" => "multi_field",
"fields" => {
"whole_words" => {"type" => "string", "analyzer" => "simple"},
"ngram" => {"type" => "string", "analyzer" => "ngram", "search_analyzer" => "ngram_search"}
}
}
}
},
settings: {
analysis: {
filter: {
ngram: {
type: 'nGram',
min_gram: 3,
max_gram: 25
}
},
analyzer: {
ngram: {
tokenizer: 'whitespace',
filter: ['lowercase', 'stop', 'ngram'],
type: 'custom'
},
ngram_search: {
tokenizer: 'whitespace',
filter: ['lowercase', 'stop'],
type: 'custom'
}
}
}
}
}
}
This is the part relevant to my full-text search field, search_text:
"search_text" => {
"type" => "multi_field",
"fields" => {
"whole_words" => {"type" => "string", "analyzer" => "simple"},
"ngram" => {"type" => "string", "analyzer" => "ngram", "search_analyzer" => "ngram_search"}
}
}
I want to give higher scores to items that match whole words in the search text, but index creation fails with:
[400] {"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"analyzer [ngram_search] not found for field [ngram]"}],"type":"mapper_parsing_exception","reason":"Failed to parse mapping [search_variable]: analyzer [ngram_search] not found for field [ngram]","caused_by":{"type":"mapper_parsing_exception","reason":"analyzer [ngram_search] not found for field [ngram]"}},"status":400}
The key part of the error is:
"reason":"analyzer [ngram_search] not found for field [ngram]"
What am I doing wrong?
Edit:
Here is my query, where I tried to match on whole words only for now, and I only get 0 results every time.
search_query = {
index: index,
body: {
_source: {
exclude: ["search_text", "survey_id"]
},
query: {
:bool => {
:filter => {
:term => {"survey_id" => 12}
},
:must => {
:match => {
"search_text.whole_words" => {"query" => "BMW", "operator" => "and"}
}
}
}
}
}
}
result = client.search(search_query)
Here is the output of:
curl -XGET localhost:9200/yg_search_variables
{"yg_search_variables":{"aliases":{},"mappings":{"search_variable":{"properties":{"label":{"type":"string","index":"not_analyzed"},"name":{"type":"string","index":"not_analyzed"},"search_text":{"type":"string","index":"no","fields":{"ngram":{"type":"string","analyzer":"ngram","search_analyzer":"ngram_search"},"whole_words":{"type":"string","analyzer":"simple"}}},"value_label_search_string":{"type":"string","index":"not_analyzed"},"value_labels":{"type":"string","index":"not_analyzed"}}},"variables":{"properties":{"category":{"type":"string"},"label":{"type":"string","index":"not_analyzed"},"name":{"type":"string","index":"not_analyzed"},"search_text":{"type":"string","index":"no"},"survey_id":{"type":"long"},"value_label_search_text":{"type":"string"},"value_labels":{"properties":{"0":{"type":"string"},"1":{"type":"string"},"10":{"type":"string"},"100":{"type":"string"},"101":{"type":"string"},"102":{"type":"string"},"103":{"type":"string"},"104":{"type":"string"},"105":{"type":"string"},"106":{"type":"string"},"107":{"type":"string"},"108":{"type":"string"},"109":{"type":"string"},"11":{"type":"string"},"110":{"type":"string"},"1100":{"type":"string"},"1101":{"type":"string"},"1102":{"type":"string"},"1103":{"type":"string"},"1104":{"type":"string"},"1105":{"type":"string"},"1106":{"type":"string"},"1107":{"type":"string"},"1108":{"type":"string"},"1109":{"type":"string"},"111":{"type":"string"},"1110":{"type":"string"},"1111":{"type":"string"},"1112":{"type":"string"},"1113":{"type":"string"},"1114":{"type":"string"},"112":{"type":"string"},"113":{"type":"string"},"114":{"type":"string"},"115":{"type":"string"},"116":{"type":"string"},"117":{"type":"string"},"118":{"type":"string"},"119":{"type":"string"},"12":{"type":"string"},"120":{"type":"string"},"121":{"type":"string"},"122":{"type":"string"},"123":{"type":"string"},"124":{"type":"string"},"125":{"type":"string"},"126":{"type":"string"},"127":{"type":"string"},"128":{"type":"string"},"129":{"type":"string"},
"13":{"type":"string"},"130":{"type":"string"},"131":{"type":"string"},"132":{"type":"string"},"133":{"type":"string"},"134":{"type":"string"},"135":{"type":"string"},"136":{"type":"string"},"137":{"type":"string"},"138":{"type":"string"},"139":{"type":"string"},"14":{"type":"string"},"140":{"type":"string"},"141":{"type":"string"},"142":{"type":"string"},"143":{"type":"string"},"144":{"type":"string"},"145":{"type":"string"},"146":{"type":"string"},"147":{"type":"string"},"148":{"type":"string"},"149":{"type":"string"},"15":{"type":"string"},"150":{"type":"string"},"151":{"type":"string"},"152":{"type":"string"},"153":{"type":"string"},"154":{"type":"string"},"155":{"type":"string"},"156":{"type":"string"},"157":{"type":"string"},"158":{"type":"string"},"159":{"type":"string"},"16":{"type":"string"},"160":{"type":"string"},"161":{"type":"string"},"162":{"type":"string"},"163":{"type":"string"},"164":{"type":"string"},"165":{"type":"string"},"166":{"type":"string"},"167":{"type":"string"},"168":{"type":"string"},"169":{"type":"string"},"17":{"type":"string"},"170":{"type":"string"},"171":{"type":"string"},"172":{"type":"string"},"173":{"type":"string"},"174":{"type":"string"},"175":{"type":"string"},"176":{"type":"string"},"177":{"type":"string"},"178":{"type":"string"},"179":{"type":"string"},"18":{"type":"string"},"180":{"type":"string"},"181":{"type":"string"},"182":{"type":"string"},"183":{"type":"string"},"184":{"type":"string"},"185":{"type":"string"},"186":{"type":"string"},"187":{"type":"string"},"188":{"type":"string"},"189":{"type":"string"},"19":{"type":"string"},"190":{"type":"string"},"191":{"type":"string"},"192":{"type":"string"},"193":{"type":"string"},"194":{"type":"string"},"195":{"type":"string"},"196":{"type":"string"},"197":{"type":"string"},"198":{"type":"string"},"199":{"type":"string"},"2":{"type":"string"},"20":{"type":"string"},"200":{"type":"string"},"201":{"type":"string"},"202":{"type":"string"},"203":{"type":"string"},"204":{"type":"str
ing"},"205":{"type":"string"},"206":{"type":"string"},"207":{"type":"string"},"208":{"type":"string"},"209":{"type":"string"},"21":{"type":"string"},"210":{"type":"string"},"211":{"type":"string"},"22":{"type":"string"},"23":{"type":"string"},"24":{"type":"string"},"25":{"type":"string"},"26":{"type":"string"},"27":{"type":"string"},"28":{"type":"string"},"29":{"type":"string"},"3":{"type":"string"},"30":{"type":"string"},"301":{"type":"string"},"302":{"type":"string"},"303":{"type":"string"},"304":{"type":"string"},"305":{"type":"string"},"306":{"type":"string"},"307":{"type":"string"},"308":{"type":"string"},"309":{"type":"string"},"31":{"type":"string"},"310":{"type":"string"},"311":{"type":"string"},"312":{"type":"string"},"313":{"type":"string"},"314":{"type":"string"},"315":{"type":"string"},"316":{"type":"string"},"317":{"type":"string"},"32":{"type":"string"},"33":{"type":"string"},"34":{"type":"string"},"35":{"type":"string"},"36":{"type":"string"},"37":{"type":"string"},"38":{"type":"string"},"39":{"type":"string"},"4":{"type":"string"},"40":{"type":"string"},"41":{"type":"string"},"42":{"type":"string"},"43":{"type":"string"},"44":{"type":"string"},"45":{"type":"string"},"46":{"type":"string"},"47":{"type":"string"},"48":{"type":"string"},"49":{"type":"string"},"5":{"type":"string"},"50":{"type":"string"},"51":{"type":"string"},"52":{"type":"string"},"53":{"type":"string"},"54":{"type":"string"},"55":{"type":"string"},"554":{"type":"string"},"555":{"type":"string"},"556":{"type":"string"},"56":{"type":"string"},"57":{"type":"string"},"58":{"type":"string"},"59":{"type":"string"},"6":{"type":"string"},"60":{"type":"string"},"601":{"type":"string"},"602":{"type":"string"},"603":{"type":"string"},"604":{"type":"string"},"61":{"type":"string"},"62":{"type":"string"},"63":{"type":"string"},"64":{"type":"string"},"65":{"type":"string"},"66":{"type":"string"},"666":{"type":"string"},"667":{"type":"string"},"67":{"type":"string"},"68":{"type":"string"},"69":{"typ
e":"string"},"7":{"type":"string"},"70":{"type":"string"},"71":{"type":"string"},"72":{"type":"string"},"73":{"type":"string"},"74":{"type":"string"},"75":{"type":"string"},"76":{"type":"string"},"77":{"type":"string"},"777":{"type":"string"},"78":{"type":"string"},"79":{"type":"string"},"8":{"type":"string"},"80":{"type":"string"},"801":{"type":"string"},"802":{"type":"string"},"803":{"type":"string"},"804":{"type":"string"},"805":{"type":"string"},"806":{"type":"string"},"807":{"type":"string"},"808":{"type":"string"},"809":{"type":"string"},"81":{"type":"string"},"810":{"type":"string"},"811":{"type":"string"},"812":{"type":"string"},"813":{"type":"string"},"814":{"type":"string"},"815":{"type":"string"},"816":{"type":"string"},"817":{"type":"string"},"818":{"type":"string"},"819":{"type":"string"},"82":{"type":"string"},"820":{"type":"string"},"821":{"type":"string"},"822":{"type":"string"},"83":{"type":"string"},"84":{"type":"string"},"85":{"type":"string"},"86":{"type":"string"},"87":{"type":"string"},"88":{"type":"string"},"888":{"type":"string"},"89":{"type":"string"},"9":{"type":"string"},"90":{"type":"string"},"901":{"type":"string"},"902":{"type":"string"},"903":{"type":"string"},"904":{"type":"string"},"905":{"type":"string"},"906":{"type":"string"},"907":{"type":"string"},"908":{"type":"string"},"909":{"type":"string"},"91":{"type":"string"},"910":{"type":"string"},"911":{"type":"string"},"912":{"type":"string"},"913":{"type":"string"},"914":{"type":"string"},"915":{"type":"string"},"916":{"type":"string"},"917":{"type":"string"},"918":{"type":"string"},"919":{"type":"string"},"92":{"type":"string"},"920":{"type":"string"},"921":{"type":"string"},"922":{"type":"string"},"923":{"type":"string"},"924":{"type":"string"},"925":{"type":"string"},"926":{"type":"string"},"927":{"type":"string"},"928":{"type":"string"},"93":{"type":"string"},"94":{"type":"string"},"95":{"type":"string"},"96":{"type":"string"},"97":{"type":"string"},"98":{"type":"string"},"99":{
"type":"string"},"997":{"type":"string"},"998":{"type":"string"},"999":{"type":"string"}}}}}},"settings":{"index":{"creation_date":"1457103857764","analysis":{"filter":{"ngram":{"type":"nGram","min_gram":"3","max_gram":"25"}},"analyzer":{"ngram":{"filter":["lowercase","stop","ngram"],"type":"custom","tokenizer":"whitespace"},"ngram_search":{"filter":["lowercase","stop"],"type":"custom","tokenizer":"whitespace"}}},"number_of_shards":"5","number_of_replicas":"1","uuid":"zPN2LDfCTFqPleW7d5nkwA","version":{"created":"2020099"}}},"warmers":{}}}%
It seems strange that index is "no" here:
"search_text": {
"type": "string",
"index": "no",
"fields": {
"ngram": {
"type": "string",
"analyzer": "ngram",
"search_analyzer": "ngram_search"
},
"whole_words": {
"type": "string",
"analyzer": "simple"
}
}
}
Edit: Here is a sample matching document for the term "Ford":
{
"name"=>"car_ownership",
"label"=>"Customer: Ford",
"category"=>["Vehicles", "Passenger Vehicles"], "value"=>nil,
"value_labels"=>{"1"=>"Yes", "2"=>"No"},
"node_id"=>14813,
"survey_id" => 12,
"search_text" => "Customer Ford Vehicles Passenger Vehicles Yes No"
}
Edit: I have added a smaller beginning-to-end test case that replicates the error; it can be found here:
https://www.dropbox.com/s/wwxm3qe0oxc2z5y/Slimmed%20ElasticSearch%20Text%20%281%29.html?dl=0
The first issue is that settings is not properly nested when you create your index: settings and mappings should be at the same level, both directly under body.
Then, looking at your Dropbox file, I think the remaining issue is that the mapping type is called search_variable while your bulk requests use the mapping type test_type, so the mapping is never applied.
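The first fix can be sketched as follows (same Ruby client as the question; the body reuses the analyzer settings and the search_text mapping from above, trimmed to the relevant parts):

```ruby
# Corrected skeleton of the create-index body: settings and mappings are
# siblings at the top level of the body, not nested inside each other,
# and the mapping type must match the type used when bulk-indexing.
index_body = {
  settings: {
    analysis: {
      filter: {
        ngram: { type: 'nGram', min_gram: 3, max_gram: 25 }
      },
      analyzer: {
        ngram: { tokenizer: 'whitespace', filter: %w[lowercase stop ngram], type: 'custom' },
        ngram_search: { tokenizer: 'whitespace', filter: %w[lowercase stop], type: 'custom' }
      }
    }
  },
  mappings: {
    search_variable: {  # must match the type name used in the bulk requests
      properties: {
        'search_text' => {
          'type' => 'multi_field',
          'fields' => {
            'whole_words' => { 'type' => 'string', 'analyzer' => 'simple' },
            'ngram' => { 'type' => 'string', 'analyzer' => 'ngram', 'search_analyzer' => 'ngram_search' }
          }
        }
      }
    }
  }
}
# client.indices.create index: index, body: index_body
```

With this structure the analyzers are actually registered at index creation, so the "analyzer [ngram_search] not found" error goes away.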
