Method naming convention for sortBy/orderBy with multiple arguments, best practices - coding-style

For example, I have a table (e.g. an in-memory collection) with name and lastname fields, among others.
Example JSON:
[
  {"name": "arina", "lastname": "anyone"},
  {"name": "bob", "lastname": "zorro"},
  {"name": "bob", "lastname": "black"}
]
Now I want a sorting method that sorts the data by name and then by lastname.
That means I want the result:
arina - anyone
bob - black
bob - zorro
But if I write something like
function sortByNameAndLastname(data) {
  // To get the result I want, I have to sort by lastname first,
  // then by name (relying on the sort being stable).
  sortByLastName(data);
  sortByName(data);
}
the order of the calls is somewhat confusing to read.
What is the naming convention / best practice for naming such a method:
sortByNameAndLastname or sortByLastnameAndName?
In SQL it matches the first case: ORDER BY name, lastname returns the result I want.

You could be consistent with SQL syntax and use the naming convention:
orderBy<FirstField>Then<SecondField>
For example, in your case:
orderByNameThenLastname
Your "name" field really should be renamed "firstname", especially as you have a field called "lastname" for the surname.
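For illustration, a single comparator avoids the confusing two-pass stable-sort trick entirely. A sketch in JavaScript, with the function named per the convention above:

```javascript
// orderByNameThenLastname: sort by name first, break ties by lastname.
// localeCompare returns <0, 0, or >0, and || falls through to the next
// field only when the previous comparison returned 0 (equal).
function orderByNameThenLastname(data) {
  return [...data].sort(
    (a, b) =>
      a.name.localeCompare(b.name) || a.lastname.localeCompare(b.lastname)
  );
}

const people = [
  { name: 'arina', lastname: 'anyone' },
  { name: 'bob', lastname: 'zorro' },
  { name: 'bob', lastname: 'black' },
];
const sorted = orderByNameThenLastname(people);
// "bob black" now precedes "bob zorro"
```

The comparator reads in the same order as the method name, so there is no mismatch between the name and the implementation.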


Assign value to hash argument on condition

I have an ActiveRecord-backed model where I am inserting records. However, when assigning values to certain attributes, I would like to assign them only if a certain condition is satisfied. How can I go about doing it? I've attached an example below for the sake of clarity.
@user = User.find_by_name("John")
Store.create(
  name: "Some Store",
  email: "store@example.com",
  user_id: @user.id if @user.applicant?
)
Right, you can't apply an if modifier to keyword arguments / hash elements this way. I normally do something along these lines:
store_params = {
  name: "Some Store",
  email: "store@example.com",
}
store_params[:user_id] = @user.id if @user.applicant?
Store.create(store_params)
This also works well if there's an existing user_id value that needs to be preserved when @user is not an applicant, for example when you're updating records. For creation, simple parenthesizing should work, as pointed out by others:
user_id: (@user.id if @user.applicant?)
Caveat: this assumes that the default value for user_id is nil, so the nil produced by the expression when the user is not an applicant equals the value set when user_id was not supplied at all.
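A one-expression variant of the same idea uses Hash#compact to drop the key entirely when the condition yields nil. A sketch, where the Struct is a hypothetical stand-in for the @user record from the question:

```ruby
# Hypothetical stand-in for the ActiveRecord @user from the question.
user = Struct.new(:id, :applicant) do
  def applicant?
    applicant
  end
end.new(42, true)

store_params = {
  name: "Some Store",
  email: "store@example.com",
  user_id: (user.id if user.applicant?),
}.compact # removes :user_id entirely when the condition yields nil

# Store.create(store_params)
```

Unlike plain parenthesizing, compact never writes a nil user_id, so it also sidesteps the caveat above about the column's default value.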
You can use the safe navigation operator here. Just use parentheses to avoid the syntax error:
Store.create(
  name: "Some Store",
  email: "store@example.com",
  user_id: (@user.id if @user&.applicant?)
)

Adding records with many to many back-references in Go Gorm

What is the approach in Go Gorm for writing and updating records of structs that back-reference each other through a many-to-many relation?
For example, I have a Contact struct with a self-referenced Related property:
type Contact struct {
gorm.Model
FirstName string `json:"first_name"`
LastName string `json:"last_name"`
MiddleName string `json:"middle_name,omitempty"`
Related []Contact `json:"related,omitempty" gorm:"many2many:related_contacts"`
}
Populating two contacts where each references the other:
fred := model.Contact{
FirstName: "Fred",
LastName: "Jones",
MiddleName: "B",
}
bill := model.Contact{
FirstName: "Bill",
LastName: "Brown",
MiddleName: "R",
}
fred.Related = []model.Contact{bill}
bill.Related = []model.Contact{fred}
Performing db.Save(&fred) will write two records to the database: one for bill and one for fred. However, the bill record doesn't have a back-reference to fred (as fred hadn't been saved yet and had no ID).
If I then also perform db.Save(&bill), I get four records, because saving bill didn't know that fred was already saved.
While I do realize that I can save one, then find the second and update it, I am wondering if there is a better "Gorm Way" of doing it where I don't have to manually keep the back references in sync when adding records?
Also, what is the approach if fred is to be removed? Do I need to take care of the related_contacts table manually?
The problem I think you're having is that the fred that you have in bill.Related is a copy, and thus does not get its ID updated by the call to db.Save(&fred).
When you call Save gorm checks the ID field of any models to know whether it should create or update that model.
What I would try is:
fred := model.Contact{
FirstName: "Fred",
LastName: "Jones",
MiddleName: "B",
}
db.Save(&fred)
bill := model.Contact{
FirstName: "Bill",
LastName: "Brown",
MiddleName: "R",
Related: []model.Contact{fred},
}
db.Save(&bill)
// Now both bill and fred have IDs
fred.Related = []model.Contact{bill}
db.Save(&fred)
One thing to watch out for here will be whether gorm tries to save fred.Related[0] and goes into an infinite loop. You can manage this by skipping the upserting of associations. So that means you would change the last two lines to:
fred.Related = []model.Contact{bill}
db.Omit("Related.*").Save(&fred)
// or using association mode
db.Model(&fred).Association("Related").Append(&bill)
EDIT: With regard to deletion: knowing nothing about which database you're using, you would normally have to manage related-entity deletion yourself. If your DB supports foreign key constraints, you could declare the join table's foreign keys as REFERENCES contacts(id) ON DELETE CASCADE. Then, if you delete any Contact, the rows in the join table containing that ID are automatically deleted as well.
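Under that approach, the join table could be declared along these lines. This is a sketch: the column names gorm generates for a self-referential many2many may differ from the ones assumed here, so check the generated schema first:

```sql
CREATE TABLE related_contacts (
    contact_id BIGINT NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
    related_id BIGINT NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
    PRIMARY KEY (contact_id, related_id)
);
```

With ON DELETE CASCADE in place, deleting fred removes his join-table rows automatically; only the contacts row itself needs an explicit delete.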

GraphQL mutation with subquery for parameter - update/convert DB on write?

Is it possible to add a lookup to a mutation in GraphQL? Let's say something like an input type where one property is the result of another query.
createPerson(name: "Steve", gender: genders ( name: { eq: "male" } ) { id } )
where genders ( name: { eq: "male" } ) { id } would return exactly one result value.
GraphQL allows you to:
"get what you want";
manipulate data on write;
customize response on read;
You can of course write a createPerson mutation resolver to:
ask another table for additional data (if missing arg);
write into DB;
But it doesn't affect (isn't suitable for) existing records.
You can also use read resolvers to backfill missing fields in records, a kind of 'eventual consistency'. Write a person resolver that reads a person record, plus an additional person.gender field resolver that supplies the missing gender value. The field resolver can also update the 'main' person [DB table] record:
read a missing value from other table;
write a value into person record on the 'main' table;
return value.
The next time, the person resolver will read the gender value directly (from the 'main' person table), and the [additional] field resolver won't be called at all.
This technique can be used to convert DB data without writing SQL/DB-specific code (avoiding syntax/type differences, and safer for more complex logic): it's enough to read every person's gender field once. The field resolver (and the other/old table) can be removed after that.
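A minimal sketch of that backfill pattern in plain JavaScript. The in-memory tables and resolver functions here are hypothetical stand-ins for your schema's resolvers and DB calls, not a real GraphQL server:

```javascript
// 'Main' person table and the old table holding gender, as in-memory stand-ins.
const personTable = { 1: { id: 1, name: "Steve", gender: null } };
const genderTable = { 1: "male" };

const resolvers = {
  // Parent resolver: just reads the main record.
  person: (id) => personTable[id],

  // Field resolver: only does work when the parent's value is missing.
  gender: (person) => {
    if (person.gender != null) return person.gender; // already converted
    const value = genderTable[person.id];  // read from the other table
    personTable[person.id].gender = value; // write back to the main record
    return value;
  },
};
```

After one read, the main record is populated, so subsequent reads never touch the old table; once every record has been read, the field resolver and the old table can be dropped.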

How can I use hashmaps to query for multiple parameters?

How can I use a hash table (is it possible?) to query for multiple parameters belonging to the same object?
Let me explain.
If I have the following array of objects
persons = [{name: "AA"}, {name: "BB"}, {name: "CA"}]
And I want to store them in a hash table, using the name as the key, then
hashtable.put(persons[0]); // will compute hash("AA")
hashtable.put(persons[1]); // will compute hash("BB")
hashtable.put(persons[2]); // will compute hash("CA")
This would allow me to query my hashtable by name very fast.
My question is: is there any implementation of a hash table that would allow me to query for multiple parameters on more complex objects like this?
persons = [{name: "AA", city: "XX"}, {name: "BB", city: "YY"}, {name: "CA", city: "ZZ"}]
For example: look for name = "AA" and city = "ZZ".
If hash tables are not meant for this type of operation, which algorithms or data structures are best suited for it?
In Python you are able to use tuples as keys in your hashmap (dict):
persons = [{"name": "AA", "city": "XX"}, {"name": "BB", "city": "YY"}, {"name": "CA", "city": "ZZ"}]
hashmap = {}
for d in persons:
    hashmap[(d["name"], d["city"])] = ...
Then you can query your hashmap like so:
hashmap[(name, city)]
In other languages you should be able to implement something similar (using groups of elements as keys). This may require implementing a custom hash, but hash map is still the correct data structure for this.
Hash tables require the complete key to be known. If a key consists of several fields and even one of them is unknown, the lookup doesn't work.
Maybe you are looking for something like Partitioned Hash Functions:
http://mlwiki.org/index.php/Partitioned_Hash_Function_Index
Which are used in relational databases that support multidimensional indices.
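When only some of the key fields are known, one common alternative (a sketch of my own, not from the link above) is to keep one hash map per field and intersect the candidate sets:

```python
persons = [
    {"name": "AA", "city": "XX"},
    {"name": "BB", "city": "YY"},
    {"name": "CA", "city": "ZZ"},
]

# One secondary index per field: field value -> set of record positions.
by_name, by_city = {}, {}
for i, p in enumerate(persons):
    by_name.setdefault(p["name"], set()).add(i)
    by_city.setdefault(p["city"], set()).add(i)

def query(name=None, city=None):
    """Intersect the per-field indexes for whichever fields are given."""
    candidate_sets = []
    if name is not None:
        candidate_sets.append(by_name.get(name, set()))
    if city is not None:
        candidate_sets.append(by_city.get(city, set()))
    if not candidate_sets:          # no filters: return everything
        return list(persons)
    hits = set.intersection(*candidate_sets)
    return [persons[i] for i in sorted(hits)]
```

Each single-field lookup stays O(1); the intersection costs extra, which is the trade-off for supporting partial keys that a single composite-key hash table cannot.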

ActiveRecord query interface and map

I'm trying to map an AR relation into a specific data structure (to be rendered as JSON), and I can't make it work: for some reason the associations are always nil.
Client.includes(:fixed_odds_account, person: [:phones, :emails]).map do |client|
  {
    id: client.id,
    uri: client.uri,
    updated_at: client.updated_at,
    balance: client.fixed_odds_account.current_balance,
    email: client.person.emails.pluck(:address),
    first_name: client.person.first_name,
    last_name: client.person.last_name,
    number: client.person.phones.pluck(:number)
  }
end
I'd expect this to return an array of hashes, but it always fails on the person association, which is apparently nil (even though it is not).
What's weird is that if I remove the hash and just puts client.person, I can see my data.
Any ideas?
Use #joins. With #includes you might be hitting a Client without a Person, since it uses a LEFT OUTER JOIN. You might also want to add #uniq to remove duplicates.
Client.joins(:fixed_odds_account, person: [:phones, :emails]).uniq.map # code omitted
