I'm trying to create a unique field for embedded documents:
class Chapter
  include Mongoid::Document
  field :title
  embedded_in :book # inverse side of the embedded relation
end

class Book
  include Mongoid::Document
  field :name
  embeds_many :chapters

  index({ 'name' => 1 }, { unique: true })
  index({ 'name' => 1, 'chapters.title' => 1 }, { unique: true, sparse: true })
  # index({ 'name' => 1, 'chapters.title' => 1 }, { unique: true })
end
I run the task: rake db:mongoid:create_indexes
I, [2017-02-22T08:56:47.087414 #94935] INFO -- : MONGOID: Created indexes on Book:
I, [2017-02-22T08:56:47.087582 #94935] INFO -- : MONGOID: Index: {:name=>1}, Options: {:unique=>true}
I, [2017-02-22T08:56:47.087633 #94935] INFO -- : MONGOID: Index: {:name=>1, :"chapters.title"=>1}, Options: {:unique=>true, :sparse=>true}
But it doesn't work as I would expect...
Book.new(name: 'A book', chapters: [{ title: 'title1' }, { title: 'title1' }, { title: 'title2' }]).save # no errors
Book.new(name: 'Another book', chapters: [{ title: 'title2' }]).save
b = Book.last
b.chapters.push(Chapter.new(title: 'title2'))
b.save # no errors
Any idea?
UPDATE: Ruby 2.4.0, Mongo 3.2.10, Mongoid 5.2.0 and 6.0.3 (tried both)
UPDATE 2: I am also adding the tests I ran directly with the mongo client:
use books
db.books.ensureIndex({ title: 1 }, { unique: true })
db.books.ensureIndex({ "title": 1, "chapters.title": 1 }, { unique: true, sparse: true, dropDups: true })
db.books.insert({ title: "Book1", chapters: [ { title: "Ch1" }, { title: "Ch1" } ] }) // allowed?!
db.books.insert({ title: "Book1", chapters: [ { title: "Ch1" } ] })
b = db.books.findOne({ title: 'Book1' })
b.chapters.push({ "title": "Ch1" })
db.books.save(b) // allowed?!
db.books.findOne({ title: 'Book1' })
db.books.insert({ title: "Book2", chapters: [ { title: "Ch1" } ] })
UPDATE 3: I ran more tests but didn't succeed; this link helped, but the problem remains.
You should use the drop_dups index option:
class Category
  include Mongoid::Document
  field :title, type: String
  embeds_many :posts

  index({ "posts.title" => 1 }, { unique: true, drop_dups: true, name: 'unique_drop_dulp_idx' })
end

class Post
  include Mongoid::Document
  field :title, type: String
  embedded_in :category # inverse side of the embedded relation
end
Rails console:
irb(main):032:0> Category.first.posts.create(title: 'Honda S2000')
=> #<Post _id: 58adb923cacaa6f778215a26, title: "Honda S2000">
irb(main):033:0> Category.first.posts.create(title: 'Honda S2000')
Mongo::Error::OperationFailure: E11000 duplicate key error collection: mo_development.posts index: title_1 dup key: { : "Honda S2000" } (11000)
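To verify what was actually created, you can list the collection's index specifications from the Rails console (a minimal sketch using the standard mongo driver API on the model above):

Category.collection.indexes.each { |spec| puts spec }
# prints each index specification as stored by the server,
# so you can confirm the unique option is really in place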
I have an array of hashes like this:
[
{ name: 'Pratha', email: 'c#f.com' },
{ name: 'John', email: 'j#g.com' },
{ name: 'Clark', email: 'x#z.com' },
]
And this is the second array of hashes:
[
{ name: 'AnotherNameSameEmail', email: 'c#f.com' },
{ name: 'JohnAnotherName', email: 'j#g.com' },
{ name: 'Mark', email: 'd#o.com' },
]
What I want is to merge these two arrays into one, merging based on :email and keeping the latest (or first) :name.
Expected result (latest name wins):
[
{ name: 'AnotherNameSameEmail', email: 'c#f.com' },
{ name: 'JohnAnotherName', email: 'j#g.com' },
{ name: 'Mark', email: 'd#o.com' },
{ name: 'Clark', email: 'x#z.com' },
]
or (first name preserved):
[
{ name: 'Pratha', email: 'c#f.com' },
{ name: 'John', email: 'j#g.com' },
{ name: 'Mark', email: 'd#o.com' },
{ name: 'Clark', email: 'x#z.com' },
]
So, basically, I want to group by :email, retain one :name, and drop duplicate emails.
The examples I found on SO create an array of values for :name.
Ruby 2.6.3
Maybe you could just call Array#uniq with a block on the :email key of the concatenation (Array#+) of the two arrays:
(ary1 + ary2).uniq { |h| h[:email] }
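Array#uniq keeps the first element for each block value, so with the arrays from the question (calling them ary1 and ary2 here, a minimal sketch) this gives you the "first name preserved" variant; reversing the concatenation gives you the "latest name" variant:

ary1 = [
  { name: 'Pratha', email: 'c#f.com' },
  { name: 'John',   email: 'j#g.com' },
  { name: 'Clark',  email: 'x#z.com' }
]
ary2 = [
  { name: 'AnotherNameSameEmail', email: 'c#f.com' },
  { name: 'JohnAnotherName',      email: 'j#g.com' },
  { name: 'Mark',                 email: 'd#o.com' }
]

(ary1 + ary2).uniq { |h| h[:email] }
# first occurrence wins: Pratha, John, Clark, Mark

(ary2 + ary1).uniq { |h| h[:email] }
# latest names win: AnotherNameSameEmail, JohnAnotherName, Mark, Clark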
a1 = [
{ name: 'Pratha', email: 'c#f.com' },
{ name: 'John', email: 'j#g.com' },
{ name: 'Clark', email: 'x#z.com' },
]
a2 = [
{ name: 'AnotherNameSameEmail', email: 'c#f.com' },
{ name: 'JohnAnotherName', email: 'j#g.com' },
{ name: 'Mark', email: 'd#o.com' },
]
Let's first keep the last:
(a1+a2).each_with_object({}) { |g,h| h.update(g[:email]=>g) }.values
#=> [{:name=>"AnotherNameSameEmail", :email=>"c#f.com"},
# {:name=>"JohnAnotherName", :email=>"j#g.com"},
# {:name=>"Clark", :email=>"x#z.com"},
# {:name=>"Mark", :email=>"d#o.com"}]
To keep the first, do the same with (a1+a2) replaced with (a2+a1), to obtain:
#=> [{:name=>"Pratha", :email=>"c#f.com"},
# {:name=>"John", :email=>"j#g.com"},
# {:name=>"Mark", :email=>"d#o.com"},
# {:name=>"Clark", :email=>"x#z.com"}]
I have a database of movies that I would like to nest by genre. The problem is that each movie can have multiple genres. So if I have several movies formatted like so
[
{
title : 'foo',
genres : ['Action', 'Comedy', 'Thriller']
},{
title : 'bar',
genres : ['Action']
}
]
I'd like to nest them by each individual genre so that the result would be
[
{
key: 'Action',
values: [ { title: 'foo' }, { title: 'bar'} ]
},{
key: 'Comedy',
values: [ { title: 'foo' } ]
},{
key: 'Thriller',
values: [ { title: 'foo' } ]
}
]
Not directly, but you can expand your array first.
For example:
jj = [{ genre: ['thriller', 'comedy'], title: 'foo'}, { genre: ['thriller', 'action'], title: 'papa'}]
To expand the array:
jj2 = [];
jj.forEach(function(movie) {
  movie.genre.forEach(function(single_genre) {
    jj2.push({ genre: single_genre, title: movie.title }); // one entry per genre
  });
});
Then you can perform your nesting as normal:
d3.nest().key(function(d) { return d.genre; }).entries(jj2)
I am using the elasticsearch and globalize gems for full-text searching, and I expect to be able to search for a supplier name and a localised description using the czech/english analyzers.
Example:
Supplier Name: "Bonami.cz"
Supplier Description_CZ: "Test description in czech."
It works when I search for "Bonami.cz", but it does not work (0 results) when I search for:
"Bonami" (part of the word)
"test" (description)
Based on the documentation, the approach below should work, but apparently I have missed something. I verified the indexes, and the data is in Elasticsearch.
Also, do I need to somehow install the czech/english analyzers before using them in the model?
require 'elasticsearch/model'
require 'activerecord-import'

class Supplier < ActiveRecord::Base
  after_commit lambda { __elasticsearch__.index_document }, on: :create
  after_commit lambda { __elasticsearch__.update_document }, on: :update

  translates :description, :fallbacks_for_empty_translations => true
  accepts_nested_attributes_for :translations

  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks
  include Elasticsearch::Model::Globalize::MultipleFields

  mapping do
    indexes :id, type: 'integer'
    indexes :name, analyzer: 'czech'
    indexes :description_ma, analyzer: 'czech'
    indexes :description_cs, analyzer: 'czech'
    indexes :description_en, analyzer: 'english'
  end

  def as_indexed_json(options = {})
    {
      id: id,
      name: name,
      description_ma: description_ma,
      description_cs: description_cs,
      description_en: description_en
    }
  end

  def self.search(query)
    __elasticsearch__.search(
      {
        query: {
          multi_match: {
            query: query,
            fields: ['name^10', 'description_ma', 'description_cs', 'description_en']
          }
        }
      }
    )
  end
end
Any idea what is wrong?
Thanks, Miroslav
UPDATE 1
I took inspiration from the solution in Rails 4, elasticsearch-rails, but now, for any word I search, I always get zero results.
settings index: {
  number_of_shards: 1,
  analysis: {
    filter: {
      trigrams_filter: {
        type: 'ngram',
        min_gram: 2,
        max_gram: 10
      },
      content_filter: {
        type: 'ngram',
        min_gram: 4,
        max_gram: 20
      }
    },
    analyzer: {
      index_trigrams_analyzer: {
        type: 'custom',
        tokenizer: 'standard',
        filter: ['lowercase', 'trigrams_filter']
      },
      search_trigrams_analyzer: {
        type: 'custom',
        tokenizer: 'whitespace',
        filter: ['lowercase']
      },
      english: {
        tokenizer: 'standard',
        filter: ['standard', 'lowercase', 'content_filter']
      },
      czech: {
        tokenizer: 'standard',
        filter: ['standard', 'lowercase', 'content_filter']
      }
    }
  }
} do
  mappings dynamic: 'false' do
    indexes :name, index_analyzer: 'index_trigrams_analyzer', search_analyzer: 'search_trigrams_analyzer'
    indexes :description_en, index_analyzer: 'english', search_analyzer: 'english'
    indexes :description_ma, index_analyzer: 'czech', search_analyzer: 'czech'
    indexes :description_cs, index_analyzer: 'czech', search_analyzer: 'czech'
  end
end

def as_indexed_json(options = {})
  {
    id: id,
    name: name,
    description_ma: description_ma,
    description_cs: description_cs,
    description_en: description_en
  }
end

def self.search(query)
  __elasticsearch__.search(
    {
      query: {
        multi_match: {
          query: query,
          fields: ['name^10', 'description_ma', 'description_cs', 'description_en']
        }
      },
      highlight: {
        pre_tags: ['<em>'],
        post_tags: ['</em>'],
        fields: {
          name: {},
          description_ma: {},
          description_cs: {},
          description_en: {}
        }
      }
    }
  )
end
This is what I see when I open the Elasticsearch URL for the given model:
{"suppliers":{"aliases":{},"mappings":{"supplier":
{"dynamic":"false","properties":{"description_cs":
{"type":"string","analyzer":"czech"},"description_en":
{"type":"string","analyzer":"english"},"description_ma":
{"type":"string","analyzer":"czech"},"name":
{"type":"string","index_analyzer":"index_trigrams_analyzer","search_analyzer":"search_trigrams_analyzer"}}}},"settings":{"index":
{"creation_date":"1445797508427","analysis":{"filter":
{"trigrams_filter":
{"type":"ngram","min_gram":"2","max_gram":"10"},"content_filter":
{"type":"ngram","min_gram":"4","max_gram":"20"}},"analyzer":{"english":
{"filter":["standard","lowercase","content_filter"],"tokenizer":"standard"},"index_trigrams_analyzer":{"type":"custom","filter":["lowercase","trigrams_filter"],"tokenizer":"standard"},"search_trigrams_analyzer":{"type":"custom","filter":["lowercase"],"tokenizer":"whitespace"},"czech":{"filter":["standard","lowercase","content_filter"],"tokenizer":"standard"}}},"number_of_shards":"1","number_of_replicas":"1","version":
{"created":"1060099"},"uuid":"wX9kf3OQSva24Iwk7sZ8AQ"}},"warmers":{}}}
UPDATE 2
Two steps were missing to make it work as expected:
1. Re-import the model data (see the sketch below).
2. Fix the typo in the names of the description fields (instead of description_ma/en/cs, I had to use ma/cs/en_description).
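For step 1, re-creating the index and re-importing looked roughly like this (a minimal sketch using the standard elasticsearch-model helpers; force: true drops and re-creates the index with the new settings):

Supplier.__elasticsearch__.create_index!(force: true) # rebuild the index with the new analysis settings
Supplier.import                                       # re-index all existing records

The corrected model code: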
settings index: {
  number_of_shards: 1,
  analysis: {
    filter: {
      trigrams_filter: {
        type: 'ngram',
        min_gram: 2,
        max_gram: 10
      },
      content_filter: {
        type: 'ngram',
        min_gram: 4,
        max_gram: 20
      }
    },
    analyzer: {
      index_trigrams_analyzer: {
        type: 'custom',
        tokenizer: 'standard',
        filter: ['lowercase', 'trigrams_filter']
      },
      search_trigrams_analyzer: {
        type: 'custom',
        tokenizer: 'whitespace',
        filter: ['lowercase']
      },
      english: {
        tokenizer: 'standard',
        filter: ['standard', 'lowercase', 'content_filter']
      },
      czech: {
        tokenizer: 'standard',
        filter: ['standard', 'lowercase', 'content_filter']
      }
    }
  }
} do
  mappings dynamic: 'false' do
    indexes :name, index_analyzer: 'index_trigrams_analyzer', search_analyzer: 'search_trigrams_analyzer'
    indexes :en_description, index_analyzer: 'english', search_analyzer: 'english'
    indexes :ma_description, index_analyzer: 'czech', search_analyzer: 'czech'
    indexes :cs_description, index_analyzer: 'czech', search_analyzer: 'czech'
  end
end

def as_indexed_json(options = {})
  {
    id: id,
    name: name,
    ma_description: ma_description,
    cs_description: cs_description,
    en_description: en_description
  }
end

def self.search(query)
  __elasticsearch__.search(
    {
      query: {
        multi_match: {
          query: query,
          fields: ['name^10', 'ma_description', 'cs_description', 'en_description']
        }
      },
      highlight: {
        pre_tags: ['<em>'],
        post_tags: ['</em>'],
        fields: {
          name: {},
          ma_description: {},
          cs_description: {},
          en_description: {}
        }
      }
    }
  )
end
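With the corrected field names, a console search now matches partial words through the ngram filters (a sketch; 'Bonami' is the term from the original example):

Supplier.search('Bonami').records.to_a
# matches suppliers whose name contains 'Bonami' as a fragment,
# because the trigrams filter indexes substrings of 2-10 characters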
In order to perform the search you are trying to do, you'll need to use the ngram analyzer (as discussed in the comments).
I am getting the following Elasticsearch error when I try to sort search results by distance with Mongoosastic:
{ message: 'SearchPhaseExecutionException[Failed to execute phase
[query_fetch], all shards failed; shardFailures
{[rQFD7Be9QbWIfTqTkrTL7A][users][0]: SearchParseException[[users][0]:
query[filtered(+keywords:cafe)->GeoDistanceFilter(location,
SLOPPY_ARC, 25000.0, -70.0264952, 41.2708115)],from[-1],size[-1]:
Parse Failure [Failed to parse source
[{"timeout":60000,"sort":[{"[object Object]":{}}]}]]]; nested:
SearchParseException[[users][0]:
query[filtered(+keywords:cafe)->GeoDistanceFilter(location,
SLOPPY_ARC, 25000.0, -70.0264952, 41.2708115)],from[-1],size[-1]:
Parse Failure [No mapping found for [[object Object]] in order to sort
on]]; }]' }
See below for a code sample:
var query = {
  "filtered": {
    "query": {
      "bool": {
        "must": [
          {
            "term": {
              "keywords": "cafe"
            }
          }
        ]
      }
    },
    "filter": {
      "geo_distance": {
        "distance": "25km",
        "location": [
          41.2708115,
          -70.0264952
        ]
      }
    }
  }
};
var opts = {
  "sort": [
    {
      "_geo_distance": {
        "location": [
          41.2708115,
          -70.0264952
        ],
        "order": "asc",
        "unit": "km",
        "distance_type": "plane"
      }
    }
  ],
  "script_fields": {
    "distance": "doc['location'].distanceInMiles(41.2708115, -70.0264952)"
  }
};
User.search(query, opts, function (err, data) {
  if (err || !data || !data.hits || !data.hits.hits || !data.hits.hits.length) {
    return callback(err);
  }
  var total = data.hits.total,
      //page = params.page || 1,
      per_page = query.size,
      from = query.from,
      //to = from +
      page = query.from / query.size,
      rows = data.hits.hits || [];
  for (var i = 0; i < rows.length; i++) {
    rows[i].rowsTotal = total;
  }
  callback(err, toUser(rows, params));
});
Here is the User schema:
var schema = new Schema({
  name: { type: String, default: '', index: true, es_type: 'string', es_indexed: true },
  location: { type: [Number], es_type: 'geo_point', es_indexed: true, index: true },
  shareLocation: { type: Boolean, default: false, es_type: 'boolean', es_indexed: true },
  lastLocationSharedAt: { type: Date },
  email: { type: String, default: '', index: true, es_type: 'string', es_indexed: true },
  birthday: { type: String, default: '' },
  first_name: { type: String, default: '' },
  last_name: { type: String, default: '' },
  gender: { type: String, default: '' },
  website: { type: String, default: '', index: true, es_indexed: true },
  verified: { type: Boolean, default: false }
});
I am also getting an error; I think the Mongoosastic upgrade is double-wrapping the options. It definitely seems to be related to 'sort' rather than to the search itself, but I am still reviewing. Val seems to have a better idea of what is going on, as it may have something to do with the user schema rather than the function. I am using a similar schema and encountered the same issues right after upgrading.
I want to enable editing only for the Address field in the code below:
$("#grid").kendoGrid({
columns: [{
field: "name",// create a column bound to the "name" field
title: "Name",// set its title to "Name"
},
{
field: "age",// create a column bound to the "age" field
title: "Age" ,// set its title to "Age"
},
{
field: "doj",
title: "DOJ",
},
{
field: "address",
title: "ADDRESS",
},
{ command: [{ name: "destroy", text: "Remove" }, { name: "edit", text: "edit" }] }],
editable: "popup",
sortable:true,
dataSource: [{ name: "Jane", age: 30, address: "Bangalore", }, { name: "John", age: 33, address: "Hyderabad" }]
});
Define editable: false and nullable: true for the column in the data source schema:
dataSource = new kendo.data.DataSource({
  ..
  schema: {
    model: {
      id: "YourID",
      fields: {
        YourID: { editable: false, nullable: true },
        address: { editable: false, nullable: true },
        ..
        ..
      }
    }
  }
  ..
})
You have to set editable: false on the column's field in the model:
model: {
  fields: {
    ProductID: {
      editable: false
    }
  }
}