How to sort a fetch request by related properties?

I have two Core Data entities with a one-to-many relation: Contact (main) and PhoneNumber (related).
A Contact can have several phone numbers, and I need to sort Contacts by a PhoneNumber property. Specifically, I want to sort by whether any related phone number has registered = true, e.g.:
Contact1
  prop1 = "value1"
  prop2 = "value2"
  PhoneNumbers = [
    {prop1 = "value1", prop2 = "value2", registered = false},
    {prop1 = "value1", prop2 = "value2", registered = false}
  ]
Contact2
  prop1 = "value1"
  prop2 = "value2"
  PhoneNumbers = [
    {prop1 = "value1", prop2 = "value2", registered = false},
    {prop1 = "value1", prop2 = "value2", registered = false},
    {prop1 = "value1", prop2 = "value2", registered = true}
  ]
Contact3
  prop1 = "value1"
  prop2 = "value2"
  PhoneNumbers = [
    {prop1 = "value1", prop2 = "value2", registered = false},
    {prop1 = "value1", prop2 = "value2", registered = false}
  ]
Contact2 has a PhoneNumber with registered = true, so I want it moved to the top of the fetch results.
My sort descriptor:
[NSSortDescriptor sortDescriptorWithKey:@"phoneNumbers" ascending:YES selector:@selector(registeredCompare:)];
When I specify the phoneNumbers key (a relationship in the data model), I receive the error Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'to-many key not allowed here'.
When I change the key to something else, the error changes to unsupported NSSortDescriptor selector: registeredCompare:, even though the argument type is correct.
How can I sort data by related properties?
Comparator-based sort descriptors (sortDescriptorWithKey:ascending:comparator:) are not supported by NSFetchRequest.
Fetching everything into an array and sorting in memory is not a solution either; with too many contacts that could freeze the application.

You can't sort by a to-many relationship. There are potentially several PhoneNumbers for each Contact; to decide which Contact comes first, Core Data needs a single value for each Contact.
Your second error occurs because registeredCompare: is not a recognised sort selector. From the Core Data Programming Guide:
The supported sort selectors for SQLite are compare: and caseInsensitiveCompare:, localizedCompare:, localizedCaseInsensitiveCompare:, and localizedStandardCompare:.
One solution to your problem would be to add a Boolean attribute to the Contact entity, say hasRegisteredPhoneNumber, which you update whenever you add, change, or remove related PhoneNumbers. Then use that new attribute as the sort key.
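A minimal sketch of that approach (assuming the hypothetical hasRegisteredPhoneNumber attribute has been added to Contact and is kept up to date whenever the relationship changes; context is your NSManagedObjectContext):

  // Sort contacts with a registered phone number to the top of the results.
  NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Contact"];
  request.sortDescriptors = @[
      // hasRegisteredPhoneNumber is the denormalised Boolean maintained by your own code
      [NSSortDescriptor sortDescriptorWithKey:@"hasRegisteredPhoneNumber" ascending:NO],
      // optional secondary ordering, e.g. by prop1
      [NSSortDescriptor sortDescriptorWithKey:@"prop1" ascending:YES]
  ];
  NSError *error = nil;
  NSArray *contacts = [context executeFetchRequest:request error:&error];

Because the sort runs on a plain attribute, it stays inside the SQLite store and remains fast even with many contacts.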

Related

Iterating through a map of objects in Terraform with Route 53 records

I am trying to figure out how to read additional values in Terraform using for / for_each (Terraform 0.12.26).
dns.tfvars
mx = {
  "mywebsite.org." = {
    ttl = "3600"
    records = [
      "home.mywebsite.org.",
      "faq.mywebsite.org."
    ]
  }
  "myotherwebsite.org." = {
    ttl = "3600"
    records = [
      "home.myotherwebsite.org."
    ]
  }
}
variables.tf
variable "mx" {
type = map(object({
ttl = string
records = set(string)
}))
}
mx.tf
locals {
  mx_records = flatten([
    for mx_key, mx in var.mx : [
      for record in mx.records : {
        mx_key = mx_key
        record = record
        ttl    = mx.ttl
      }
    ]
  ])
}

resource "aws_route53_record" "mx_records" {
  for_each = { for mx in local.mx_records : mx.mx_key => mx... }

  zone_id = aws_route53_zone.zone.zone_id
  name    = each.key
  type    = "MX"
  ttl     = each.value.ttl
  records = [
    each.value.record
  ]
}
In mx.tf, I can comment out the second value, faq.mywebsite.org, and the code works perfectly. I cannot figure out how to set up my for loop and for_each statements so that it also "loops" through the second value. The first error I received is shown below:
Error: Duplicate object key
on mx.tf line 13, in resource "aws_route53_record" "mx_records":
13: for_each = { for mx in local.mx_records : mx.mx_key => mx }
|----------------
| mx.mx_key is "mywebsite.org."
Two different items produced the key "mywebsite.org." in this 'for'
expression. If duplicates are expected, use the ellipsis (...) after the value
expression to enable grouping by key.
To my understanding, I do not have two duplicate values forming the key, so I should not have to use the ellipsis, but I tried it anyway to see if it would apply properly. After adding the ellipsis after the value expression, I got this error:
Error: Unsupported attribute
on mx.tf line 20, in resource "aws_route53_record" "mx_records":
20: each.value.record
|----------------
| each.value is tuple with 2 elements
This value does not have any attributes.
Any advice on this issue would be appreciated.
UPDATE
Error: [ERR]: Error building changeset: InvalidChangeBatch: [Tried to create resource record set [name='mywebsiteorg.', type='MX'] but it already exists]
status code: 400, request id: dadd6490-efac-47ac-be5d-ab8dad0f4a6c
It's trying to create the record, but it was already created by the first record in the list.
I think you could just construct a map of your objects with the key being the index of the mx_records list (note the idx below being that index):
resource "aws_route53_record" "mx_records" {
for_each = { for idx, mx in local.mx_records : idx => mx }
zone_id = aws_route53_zone.zone.zone_id
name = each.value.mx_key
type = "MX"
ttl = each.value.ttl
records = [
each.value.record
]
}
The above for_each expression changes your local.mx_records from a list of objects to a map of objects, where the map key is idx and the value is the original object.
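For illustration only (the exact iteration order may differ), with the tfvars from the question the constructed map would look roughly like:

  {
    0 = { mx_key = "myotherwebsite.org.", record = "home.myotherwebsite.org.", ttl = "3600" }
    1 = { mx_key = "mywebsite.org.",      record = "home.mywebsite.org.",      ttl = "3600" }
    2 = { mx_key = "mywebsite.org.",      record = "faq.mywebsite.org.",       ttl = "3600" }
  }

Every key is now unique, so the duplicate-key error goes away, and each.value is a single object rather than a grouped tuple, so each.value.record resolves again.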
Update:
I verified in Route 53 that you can't create duplicate record sets for the same name and type. Thus you may try using the original mx variable instead:
resource "aws_route53_record" "mx_records" {
for_each = { for idx, mx in var.mx : idx => mx }
zone_id = aws_route53_zone.zone.zone_id
name = each.key
type = "MX"
ttl = each.value.ttl
records = each.value.records
}
Moreover, if you want to avoid the flatten function and the local for expression entirely, you can access the objects in the map directly:
resource "aws_route53_record" "mx_records" {
for_each = var.mx
zone_id = aws_route53_zone.zone.zone_id
name = each.key
type = "MX"
ttl = each.value["ttl"]
records = each.value["records"]
}

RavenDB 4: Check if a string of an array of strings exists in a different array of strings

I am trying to filter my activities based on user roles. A manager is only allowed to see the activities that contain the role Manager. See below for more information.
Data:
var user = new User
{
    Roles = new List<string> { "Manager" }
};

var activities = new List<Activity>
{
    new Activity
    {
        Description = "My First Activity",
        Roles = new List<string> { "Admin", "Manager" }
    },
    new Activity
    {
        Description = "My Second Activity",
        Roles = new List<string> { "Admin" }
    },
    new Activity
    {
        Description = "My Third Activity",
        Roles = new List<string> { "Manager", "Client" }
    }
};
The query that filters the activities:
var filtered = await context.Query<Activity>()
    .Where(x => x.Roles.Any(y => user.Roles.Contains(y)))
    .ToListAsync();
Expected result of filtered:
[
    new Activity
    {
        Description = "My First Activity",
        Roles = new List<string> { "Admin", "Manager" }
    },
    new Activity
    {
        Description = "My Third Activity",
        Roles = new List<string> { "Manager", "Client" }
    }
]
Error from query
System.InvalidOperationException : Can't extract value from expression of type: Parameter
I am getting an error, so obviously I am doing something wrong. Is this the correct way or is there something better?
This requires RavenDB to do computation during the query. Try this instead:
.Where(x => x.Roles.In(user.Roles)) (you might need to add a using statement for the In extension method; Raven.Client.Linq, IIRC).
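A minimal sketch of the corrected query (reusing the context and user objects from the question; the namespace is an assumption — in the RavenDB 4 client the In extension appears to live in Raven.Client.Documents.Linq, while older clients used Raven.Client.Linq):

using Raven.Client.Documents.Linq; // brings the In() extension into scope (assumed RavenDB 4 client)

var filtered = await context.Query<Activity>()
    .Where(x => x.Roles.In(user.Roles)) // translated server-side, unlike Roles.Any(... Contains ...)
    .ToListAsync();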

GraphQL - Get all fields from nested JSON object

I'm putting a GraphQL wrapper over an existing REST API as described in Zero to GraphQL in 30 Minutes. I've got an API endpoint for a product with one property that points to a nested object:
// API Response
{
  entity_id: 1,
  nested_object: {
    key1: val1,
    key2: val2,
    ...
  }
}
Is it possible to define the schema so that I can get this entire nested object back without explicitly defining the nested object and all of its properties? I want my query to just ask for the nested object, not to spell out every property I want from it:
// What I want
{
  product(id: "1") {
    entityId
    nestedObject
  }
}

// What I don't want
{
  product(id: "1") {
    entityId
    nestedObject {
      key1
      key2
      ...
    }
  }
}
I can do the second version, but it requires lots of extra code, including creating a NestedObjectType and specifying all the nested properties. I've also figured out how to automatically get a list of all the keys, like so:
const ProductType = new GraphQLObjectType({
  ...
  fields: () => ({
    nestedObject: {
      type: new GraphQLList(GraphQLString),
      resolve: product => Object.keys(product.nested_object)
    }
  })
})
I haven't figured out a way to automatically return the entire object, though.
You may try to use a scalar JSON type. You can find more here (based on apollographql):
add scalar JSON to the schema definition;
add { JSON: GraphQLJSON } to the resolver map;
use the JSON type in the schema:
scalar JSON

type Query {
  getObject: JSON
}
an example of a query:
query {
  getObject
}
a result:
{
  "data": {
    "getObject": {
      "key1": "value1",
      "key2": "value2",
      "key3": "value3"
    }
  }
}
Basic code:
const express = require("express");
const graphqlHTTP = require("express-graphql");
const { buildSchema } = require("graphql");
const GraphQLJSON = require("graphql-type-json");

const schema = buildSchema(`
  scalar JSON

  type Query {
    getObject: JSON
  }
`);

const root = {
  JSON: GraphQLJSON,
  getObject: () => {
    return {
      key1: "value1",
      key2: "value2",
      key3: "value3"
    };
  }
};

const app = express();
app.use(
  "/graphql",
  graphqlHTTP({
    schema: schema,
    rootValue: root,
    graphiql: true
  })
);
app.listen(4000);
console.log("Running a GraphQL API server at localhost:4000/graphql");
I can do the second version, but it requires lots of extra code, including creating a NestedObjectType and specifying all the nested properties.
Do it! It will be great. That's the way to go in order to use GraphQL to its full potential.
Aside from preventing over-fetching, it also gives you a lot of other benefits like type validation, and more readable and maintainable code since your schema gives a fuller description of your data. You'll thank yourself later for doing the extra work up front.
If for some reason you really don't want to go that route, and you fully understand the consequences, you could encode the nested objects as strings using JSON.stringify.
But like I said, I recommend you don't!
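For completeness, a minimal sketch of that fallback, reusing the ProductType from the question (the client then has to JSON.parse the string itself):

const ProductType = new GraphQLObjectType({
  ...
  fields: () => ({
    nestedObject: {
      // Opaque JSON string instead of a typed object: no type checking, no sub-selection
      type: GraphQLString,
      resolve: product => JSON.stringify(product.nested_object)
    }
  })
})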

Using a Regex against a simple list with `ElemMatch` in MongoDB .NET

Given a document with a list of strings, how would you assemble a C# MongoDB query to regex against each list item?
For example, here's some data.
{
    "_id": {
        "$oid": "4ded270ab29e220de8935c7b"
    },
    // ... some other stuff ...
    "categories": [
        "Some Category",
        "Another Category",
        "Yet Another Category",
        "One Last Category"
    ]
}
Ultimately, how would I structure a query like this so that it could be strongly typed through the MongoDB LINQ provider?
{ "categories": { $elemMatch: { $regex: someSearch, $options: "i" } } }
I'm trying to make this work with ElemMatch, but I can't seem to structure a BsonRegularExpression to work with that method. If the data were a list of keyed elements, it looks like I could make this work for some key, say itemName.
// Doesn't translate to a list of raw strings.
Query.ElemMatch ("categories", Query.Matches ("itemName", new BsonRegularExpression (new Regex (someSearch, RegexOptions.IgnoreCase))));
As soon as I try to make that regex work directly on ElemMatch, though, I can't match the overloads.
// Doesn't compile: cannot convert BsonRegularExpress to IMongoQuery.
Query.ElemMatch ("categories", new BsonRegularExpression (new Regex (someSearch, RegexOptions.IgnoreCase)));
Is there some method for converting a BsonRegularExpression into an IMongoQuery object directly?
Or is there some Matches syntax for applying the current iterated element of the list that would allow for a hybrid of the two? Something like this made-up code:
// Doesn't work: totally making this up.
Query.ElemMatch ("categories", Query.Matches (iteratorElement, new BsonRegularExpression (new Regex (someSearch, RegexOptions.IgnoreCase))));
I was hoping to avoid sending a raw string into the MongoDB driver, both so I can escape the search string against injection and so the code isn't littered with magic string field names (instead limiting them to the BsonElement attributes on the DB model fields).
This might not be 100% what you are after (as it's not IQueryable), but it does achieve what you want (strongly typed regex filtering of a collection, using LINQ):
var videosMongo = DbManager.Db.GetCollection<Video> ("videos");
var videosCollection = videosMongo.AsQueryable<Video> ();

videosMongo.Insert (new Video () {
    Id = new ObjectId (),
    Tags = new string[]{ "one", "two", "test three", "four test", "five" },
    Name = "video one"
});
videosMongo.Insert (new Video () {
    Id = new ObjectId (),
    Tags = new string[]{ "one", "two", "test three", "four test", "five" },
    Name = "video two"
});
videosMongo.Insert (new Video () {
    Id = new ObjectId (),
    Tags = new string[]{ "one", "two" },
    Name = "video three"
});
videosMongo.Insert (new Video () {
    Id = new ObjectId (),
    Tags = new string[]{ "a test" },
    Name = "video four"
});

var videos = videosCollection.Where (v => v.Name == "video four").ToList ();

var collection = DbManager.Db.GetCollection<Video> ("videos");
var regex = new BsonRegularExpression ("test");
var query = Query<Video>.Matches (p => p.Tags, regex);
var results = collection.Find (query).ToList ();
I worked this out by using the excellent resource here: http://www.layerworks.com/blog/2014/11/11/mongodb-shell-csharp-driver-comparison-cheet-cheet
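For reference, a minimal Video class matching the fields used above (a guess at the model, since the original post doesn't show it):

public class Video
{
    public ObjectId Id { get; set; }
    public string[] Tags { get; set; }
    public string Name { get; set; }
}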

NEST Elasticsearch - searching the front of some fields and the back of others

I have a case where I need a partial match on the first part of some properties (last name and first name) and a partial match on the end of some other properties, and I'm wondering how to add both analyzers.
For example, if I have the first name "elastic", I can currently search for "elas" and find it. But if I have an account number of abc12345678, I need to search for "5678" and find all account numbers ending in that, while a first-name search for "stic" must not find "elastic".
Here's a simplified example of my Person class:
public class Person
{
    public string AccountNumber { get; set; }

    [ElasticProperty(IndexAnalyzer = "partial_name", SearchAnalyzer = "full_name")]
    public string LastName { get; set; }

    [ElasticProperty(IndexAnalyzer = "partial_name", SearchAnalyzer = "full_name")]
    public string FirstName { get; set; }
}
Here's the relevant existing code where I create the index, that currently works great for searching the beginning of a word:
// Set up analyzers on some fields to allow partial, case-insensitive searches.
var partialName = new CustomAnalyzer
{
    Filter = new List<string> { "lowercase", "name_ngrams", "standard", "asciifolding" },
    Tokenizer = "standard"
};
var fullName = new CustomAnalyzer
{
    Filter = new List<string> { "standard", "lowercase", "asciifolding" },
    Tokenizer = "standard"
};

var result = client.CreateIndex("persons", c => c
    .Analysis(descriptor => descriptor
        .TokenFilters(bases => bases.Add("name_ngrams", new EdgeNGramTokenFilter
        {
            MaxGram = 15, // Allow partial match up to 15 characters.
            MinGram = 2,  // Allow no smaller than 2 characters to match.
            Side = "front"
        }))
        .Analyzers(bases => bases
            .Add("partial_name", partialName)
            .Add("full_name", fullName))
    )
    .AddMapping<Person>(m => m.MapFromAttributes())
);
It seems like I could add another EdgeNGramTokenFilter and set Side = "back", but I don't want the first- and last-name searches to match from the back. Can someone provide a way to do that?
Thanks,
Adrian
Edit
For completeness, this is the new decorator on the property that goes with the code in the accepted answer:
[ElasticProperty(IndexAnalyzer = "partial_back", SearchAnalyzer = "full_name")]
public string AccountNumber { get; set; }
You need to declare another analyzer (let's call it partialBack) specifically for matching from the back, but you can definitely reuse the existing edgeNGram token filter, like this:
var partialBack = new CustomAnalyzer
{
    Filter = new List<string> { "lowercase", "reverse", "name_ngrams", "reverse" },
    Tokenizer = "keyword"
};
...
    .Analyzers(bases => bases
        .Add("partial_name", partialName)
        .Add("partial_back", partialBack)
        .Add("full_name", fullName))
)
The key here is the double use of the reverse token filter.
The string (abc12345678) is
first lowercased (abc12345678),
then reversed (87654321cba),
then edge-ngramed (87, 876, 8765, 87654, 876543, ...)
and finally the tokens are reversed again (78, 678, 5678, 45678, 345678, ...).
As you can see, the result is that the string is tokenized "from the back", so that a search for 5678 would match abc12345678.
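As a quick way to check the behaviour, here is a sketch of a search against AccountNumber (assuming the index above; this follows the NEST 1.x-era syntax used in the question, where newer NEST versions would use .Field instead of .OnField):

var searchResult = client.Search<Person>(s => s
    .Query(q => q
        .Match(m => m
            .OnField(p => p.AccountNumber)
            .Query("5678")))); // matches abc12345678 because the index analyzer ngrams from the back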
