How can I use a hash table (is it possible?) to query for multiple parameters belonging to the same object?
Let me explain.
If I have the following array of objects
persons = [{name: "AA"}, {name: "BB"}, {name: "CA"}]
And I want to store them in a hash table, using the name as the key:
hashtable.put(persons[0]); // will compute hash("AA")
hashtable.put(persons[1]); // will compute hash("BB")
hashtable.put(persons[2]); // will compute hash("CA")
This would allow me to query my hash table by name very fast.
My question is: is there any implementation of a hash table that would allow me to query by multiple parameters for more complex objects like this:
persons = [{name: "AA", city: "XX"}, {name: "BB", city: "YY"}, {name: "CA", city: "ZZ"}]
For example: look for name = "AA" and city = "ZZ".
If hash tables are not suited to this type of operation, which algorithms or data structures are best for it?
In Python you can use tuples as keys in your hash map:
persons = [{name: "AA", city: "XX"}, {name: "BB", city: "YY"}, {name: "CA", city: "ZZ"}]
hashmap = {}
for d in persons:
    hashmap[(d["name"], d["city"])] = ...
Then you can query your hashmap like so:
hashmap[(name, city)]
In other languages you should be able to implement something similar (using groups of elements as keys). This may require implementing a custom hash function, but a hash map is still the correct data structure for this.
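The loop above, completed into a runnable sketch with a lookup. Storing the whole record as the value is an arbitrary choice for illustration:

```python
# Completed version of the tuple-key loop: the stored value is
# the whole record (an arbitrary choice for illustration).
persons = [
    {"name": "AA", "city": "XX"},
    {"name": "BB", "city": "YY"},
    {"name": "CA", "city": "ZZ"},
]

hashmap = {}
for d in persons:
    hashmap[(d["name"], d["city"])] = d

# O(1) lookup, but only with the full composite key:
print(hashmap[("AA", "XX")])  # {'name': 'AA', 'city': 'XX'}
```

Note that this answers queries only when every field of the key is known, which is exactly the limitation the next answer addresses.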
Hash tables require the complete key to be known. If a key consists of several fields and even one of them is unknown, the lookup doesn't work.
Maybe you are looking for something like Partitioned Hash Functions:
http://mlwiki.org/index.php/Partitioned_Hash_Function_Index
which are used in relational databases that support multidimensional indices.
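A simpler relative of that idea is to keep one hash index per field and intersect the candidate sets, which supports partial-key queries at the cost of extra indexes. This is not a true partitioned hash function, just a minimal Python sketch of the same kind of query (field names from the question, everything else made up):

```python
from collections import defaultdict

persons = [
    {"name": "AA", "city": "XX"},
    {"name": "BB", "city": "YY"},
    {"name": "CA", "city": "ZZ"},
]

# One hash index per field: field value -> set of record positions.
indexes = {"name": defaultdict(set), "city": defaultdict(set)}
for i, p in enumerate(persons):
    for field, idx in indexes.items():
        idx[p[field]].add(i)

def query(**criteria):
    """Return records matching all given field=value criteria.

    Any subset of the indexed fields may be supplied; each lookup
    is a hash probe, and the candidate sets are intersected.
    """
    sets = [indexes[f][v] for f, v in criteria.items()]
    hits = set.intersection(*sets) if sets else set()
    return [persons[i] for i in sorted(hits)]

print(query(name="AA"))             # records with name AA only
print(query(name="CA", city="ZZ"))  # both fields constrained
```

Each index costs extra memory and write time, which is the usual trade-off a database makes for secondary indexes.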
The docs for MultiEntry indexes say any Indexable Type can be used.
Arrays are indexable types. Therefore it should be possible to index arrays of arrays and so on, right?
However, I couldn't find any examples or figure out how to query these.
Basically, what I want is something like:
var db = new Dexie('dbname');
db.version(1).stores({
  books: 'id, author, name, *properties'
});

db.books.put({
  id: 1,
  name: 'Human Action',
  author: 'Ludwig von Mises',
  properties: [['language', 'english'], ['subject', 'philosophy'], ['subject', 'economics']]
});
Then be able to find a book that has an economics subject.
That's right. Each entry in the properties array is an array of two strings, and that inner array is itself indexable, so it can act as an index entry.
So, to find all books that have the subject economics, do:
db.books.where({properties: ['subject', 'economics']}).toArray()
or the equivalent form:
db.books
  .where('properties')
  .equals(['subject', 'economics'])
  .toArray();
What is the approach in Go's GORM for writing and updating records of many-to-many structs that back-reference each other?
For example, I have a Contact struct with a self-referenced Related property:
type Contact struct {
    gorm.Model
    FirstName  string    `json:"first_name"`
    LastName   string    `json:"last_name"`
    MiddleName string    `json:"middle_name,omitempty"`
    Related    []Contact `json:"related,omitempty" gorm:"many2many:related_contacts"`
}
Populating two contacts where each references the other:
fred := model.Contact{
    FirstName:  "Fred",
    LastName:   "Jones",
    MiddleName: "B",
}
bill := model.Contact{
    FirstName:  "Bill",
    LastName:   "Brown",
    MiddleName: "R",
}
fred.Related = []model.Contact{bill}
bill.Related = []model.Contact{fred}
Performing db.Save(&fred) will write two records to the database, one for bill and one for fred; however, the bill record doesn't have a back reference to fred (as fred hadn't been saved yet and had no ID).
If I also perform db.Save(&bill), I now get 4 records, because saving bill didn't know that fred was already saved.
While I do realize that I can save one, then find the second and update it, I am wondering if there is a better "Gorm Way" of doing it where I don't have to manually keep the back references in sync when adding records?
Also, what is the approach if fred is to be removed? Do I need to take care of the related_contacts table manually?
The problem, I think, is that the fred you have in bill.Related is a copy, and thus its ID does not get updated by the call to db.Save(&fred).
When you call Save, GORM checks the ID field of each model to decide whether it should create or update that model.
What I would try is:
fred := model.Contact{
    FirstName:  "Fred",
    LastName:   "Jones",
    MiddleName: "B",
}
db.Save(&fred)

bill := model.Contact{
    FirstName:  "Bill",
    LastName:   "Brown",
    MiddleName: "R",
    Related:    []model.Contact{fred},
}
db.Save(&bill)

// Now both bill and fred have IDs
fred.Related = []model.Contact{bill}
db.Save(&fred)
One thing to watch out for here is whether GORM tries to save fred.Related[0] and goes into an infinite loop. You can manage this by skipping the upsert of associations, which means changing the last two lines to:
fred.Related = []model.Contact{bill}
db.Omit("Related.*").Save(&fred)
// or using association mode
db.Model(&fred).Association("Related").Append(&bill)
EDIT: Regarding deletion: knowing nothing about which database you're using, you would normally have to manage related-entity deletion yourself. If your database supports foreign key constraints, you could declare the join table's foreign keys as REFERENCES contacts(id) ON DELETE CASCADE. Then, whenever you delete a Contact, the join-table rows containing its ID are automatically deleted as well.
I'm using Amplify from AWS to build a small ecommerce project using React as frontend.
I'd like to know how I should write the "Product" and "Order" types in the schema so that I can write productIds to a products array in the Order table when users complete a purchase.
My schema.graphql file:
type Product @model {
  id: ID!
  name: String!
  price: Int!
  category: String!
  images: [String]!
}

type Order @model {
  id: ID!
  products: [Product] @connection
}
My question is about the last line: do I need to define that [Product] connection there, or can I use [String] to store product ids in a simple string array?
Point 1: In DynamoDB, you only need to define the data types of your partition key and sort key (string, number, etc.). For all other attributes, you don't need to define anything.
Point 2: The DynamoDB designers recommend a single table per application, unless it's impossible to manage the data without multiple tables. Keeping this in mind, your table can be something like this.
Please observe: only the Id (partition key) and Sk (sort key) columns are fixed; all other columns can vary per item. This is the beauty of DynamoDB. Refer to the documentation for DynamoDB's supported data types.
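The "only the keys are fixed" idea can be sketched with a toy in-memory table. This is plain Python, not the DynamoDB API, and the item shapes and key naming are hypothetical:

```python
# Toy single-table layout: every item has Id (partition key) and Sk
# (sort key); all other attributes are free-form and differ per item.
table = {}

def put_item(item):
    # The composite (Id, Sk) pair uniquely identifies an item.
    table[(item["Id"], item["Sk"])] = item

put_item({"Id": "ORDER#1", "Sk": "METADATA", "status": "paid"})
put_item({"Id": "ORDER#1", "Sk": "PRODUCT#42", "name": "Book", "price": 25})
put_item({"Id": "ORDER#1", "Sk": "PRODUCT#43", "name": "Pen", "price": 3})

# All product lines of one order, via a prefix match on the sort key:
products = [v for (pk, sk), v in table.items()
            if pk == "ORDER#1" and sk.startswith("PRODUCT#")]
```

A real DynamoDB query would express the same thing as a partition-key equality plus a begins_with condition on the sort key.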
For example, I have a table (e.g. an in-memory collection) with name and lastname fields, among others.
Example JSON:
[
  {name: 'arina', lastname: 'anyone'},
  {name: 'bob', lastname: 'zorro'},
  {name: 'bob', lastname: 'black'},
]
Now I want a sorting method that sorts the data by name and lastname.
That means I want this result:
arina - anyone
bob - black
bob - zorro
But if I write something like:
function sortByNameAndLastname(data) {
  // here I should first sort by lastname, then by name, to get the result I want
  sortByLastName(data);
  sortByName(data);
}
That reads confusingly, because the calls appear in reverse order.
What is the naming convention / best practice for naming such a method?
sortByNameAndLastname or sortByLastnameAndName?
If you look at SQL, it's the first case: ORDER BY name, lastname returns the result I want.
You could be consistent with SQL syntax and use the naming convention of:
OrderBy<firstField>Then<SecondField>
For example in your case:
OrderByNameThenLastName
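Whichever name you pick, the implementation reads more clearly as one sort with a composite key than as two chained sorts. A small Python sketch of the ORDER BY name, lastname behavior:

```python
data = [
    {"name": "arina", "lastname": "anyone"},
    {"name": "bob", "lastname": "zorro"},
    {"name": "bob", "lastname": "black"},
]

# One pass, primary key first, exactly like ORDER BY name, lastname.
ordered = sorted(data, key=lambda d: (d["name"], d["lastname"]))

for d in ordered:
    print(d["name"], "-", d["lastname"])
# arina - anyone
# bob - black
# bob - zorro
```

The two-call version in the question only works because each sort is stable (the lastname order survives among equal names); a composite key states the intent directly and avoids the confusing call order.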
Your "name" field really should be renamed "firstname", especially as you have a field called "lastname" for the surname.
Is there a Ruby, or Activerecord method that can write and read a hash to and from a database field?
I need to write a web utility to accept POST data and save it to a database, then later on pull it from the database in its original hash form. But ideally without 'knowing' what the structure is. In other words, my data store needs to be independent of any particular set of hash keys.
For example, one time the external app might POST to my app:
"user" => "Bill",
"city" => "New York"
But another time the external app might POST to my app:
"company" => "Foo Inc",
"telephone" => "555-5555"
So my utility needs to save an arbitrary hash to a text field in the database, then, later, recreate the hash from what was saved.
Rails 4 adds support for the Postgres hstore data type which will let you add hashes directly into your (postgres) database.
If you are using Rails 4 and Postgres, you can use hstore in your migration:
def up
  execute "create extension hstore"
  add_column :table, :column, :hstore
end

def down
  remove_column :table, :column
end
That execute command will enable hstore in Postgres, so you only have to do that once.
This will enable you to store a hash in :column just like you would any other data type.
There are two ways to do this:
1. Serialize your hash and store it in a text field.
2. Split the hash and store each key in a separate row.
The problem with the first approach is that finding and manipulating the data is difficult and expensive. For example, prefixing a "0" to the telephone number of all employees working at Foo Inc. would be a nightmare compared to storing the data in a regular tabular format.
Your schema would be:
employees (id, created_at, updated_at)
employee_details (id, employee_id, key, value)
So, to store
"company" => "Foo Inc",
"telephone" => "555-5555"
you would do:
employees: 1, 2012-01-01, 2012-01-01
employee_details (1, 1, "company", "Foo Inc"), (2, 1, "telephone", "555-5555")
Drawback of this approach: Rails does not natively support this kind of schema.
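The row-splitting approach can be sketched outside of Rails as well; here is a minimal Python illustration (the helper names are made up):

```python
def to_rows(employee_id, attrs):
    """Flatten a hash into (employee_id, key, value) rows."""
    return [(employee_id, k, v) for k, v in attrs.items()]

def from_rows(rows):
    """Rebuild the original hash from its rows."""
    return {k: v for _, k, v in rows}

rows = to_rows(1, {"company": "Foo Inc", "telephone": "555-5555"})
# Each attribute becomes one row, as in the employee_details table above.
restored = from_rows(rows)
```

The round trip is lossless for flat string-valued hashes, which is the case in the question; nested values would need either recursion or serialization of the value column.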
You can use serialization, with three options: Marshal (a binary format), and YAML and JSON (human-readable formats).
Whichever method you try, don't forget to measure serialization and deserialization time as well. If you need to pull the data back in its original format, JSON is a good choice, because you can also use it directly as a string without deserializing.
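As an illustration of that measure-first advice, here is a Python sketch that round-trips a hash through two standard-library serializers and times them (pickle standing in for Ruby's Marshal; the timings will vary by machine):

```python
import json
import pickle
import timeit

data = {"user": "Bill", "city": "New York"}

# Round-trip through each format and check the hash survives intact.
assert json.loads(json.dumps(data)) == data
assert pickle.loads(pickle.dumps(data)) == data

# Time serialize + deserialize for each option before committing to one.
for name, dump, load in [("json", json.dumps, json.loads),
                         ("pickle", pickle.dumps, pickle.loads)]:
    t = timeit.timeit(lambda: load(dump(data)), number=10_000)
    print(f"{name}: {t:.3f}s for 10k round trips")
```

The same experiment in Ruby would compare JSON.generate/JSON.parse, YAML, and Marshal with Benchmark.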
You're looking for serialization. It will help you to do exactly what you want.
Rails 4 has a feature called Store that solves your problem neatly. You can define accessors for it, and it is recommended to declare the database column used for the serialized store as text, so there's plenty of room. The original example:
class User < ActiveRecord::Base
  store :settings, accessors: [ :color, :homepage ], coder: JSON
end
u = User.new(color: 'black', homepage: '37signals.com')
u.color # Accessor stored attribute
u.settings[:country] = 'Denmark' # Any attribute, even if not specified with an accessor
# There is no difference between strings and symbols for accessing custom attributes
u.settings[:country] # => 'Denmark'
u.settings['country'] # => 'Denmark'