What is the point of GraphQL's ID Scalar? - graphql

https://graphql.org/learn/schema/#scalar-types
I have read the description
ID: The ID scalar type represents a unique identifier, often used to refetch an object or as the key for a cache. The ID type is serialized in the same way as a String; however, defining it as an ID signifies that it is not intended to be human‐readable.
But in practice, what actually changes if I use ID instead of String?

Nothing changes, just as the description says. ID is serialized exactly the same way as the String type; therefore, if you were to replace ID with String, the computer would see no difference at all.
However, what is important is that YOU as a programmer know for sure that the field is a unique identifier, which is useful when you paginate records, modify the Apollo cache, or build lists of JSX components that need stable keys.
So it does have a practical implication: it conveys the important information, to the programmer and to tooling, that the field is unique for the type.

Related

Is it possible to overwrite scalar types in GraphQL?

I want an ID scalar to serialize data not as a string but rather as a Mongo Object id. The default behaviour is serializing data as a string. I don't want to create a custom scalar with another name as it may create some difficulties down the road.
The ID type is required to serialize as a string, but absolutely nothing is specified about its format and it's intended to be opaque. You could use a Mongo object ID hex string as a GraphQL ID and (at a GraphQL level) there wouldn't be any problems with this.
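For illustration, here is a minimal Go sketch of that idea, assuming the official go.mongodb.org/mongo-driver package (the Person type and the function names are invented for this example): the ObjectID's hex string is handed out as the opaque GraphQL ID and parsed back when a query comes in.

package main

import (
    "fmt"
    "log"

    "go.mongodb.org/mongo-driver/bson/primitive"
)

// Person is a hypothetical application type whose primary key is a Mongo ObjectID.
type Person struct {
    MongoID primitive.ObjectID
    Name    string
}

// GraphQLID is what a resolver would return for the ID field:
// the ObjectID's hex string, which stays opaque to clients.
func (p Person) GraphQLID() string {
    return p.MongoID.Hex()
}

// parseGraphQLID turns an incoming GraphQL ID back into an ObjectID
// so the record can be looked up in MongoDB.
func parseGraphQLID(id string) (primitive.ObjectID, error) {
    return primitive.ObjectIDFromHex(id)
}

func main() {
    p := Person{MongoID: primitive.NewObjectID(), Name: "john"}
    id := p.GraphQLID() // e.g. "507f1f77bcf86cd799439011"
    oid, err := parseGraphQLID(id)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(id, oid)
}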

using two separate keys in BoltDB

I have a User struct with ID and LoginName fields, and I want this struct to be accessible by either of these fields with a single call to the DB. I know BoltDB is not supposed to handle arbitrary field indexing etc. (unlike SQL), but this case is a little different, as I happen to know in advance the additional field to be used as an index.
So is there some kind of secondary key or multiple-key indexing? Or maybe some strategy that I fail to see? If not, I'll just implement it with two calls; I just prefer a "cleaner" solution...
Thanks!
No, it's not there. BoltDB is a lot like Go: clean and simple, and building a layer on top is easy. BoltDB even allows update transactions to be implemented trivially, so two or more buckets can be updated (or not) atomically. That means creating an update transaction that keeps two or more buckets in sync is easy. But it sounds like you know that and just wanted to check that you aren't missing something.
There is no secondary key indexing in BoltDB, but you can implement it.
You can store a LoginName-to-ID mapping in another bucket, and that bucket will technically be the "secondary key" for your struct. That is, first obtain the primary key value (the ID) from the secondary key (the LoginName), and then fetch the User struct by its primary key.
If most of your calls are by LoginName, flip it around: store the User struct under the LoginName key and keep an ID-to-LoginName mapping in the other bucket.
Be careful: you have to maintain consistency on your own, so remember to update both buckets inside the same transaction.
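As a rough sketch of this two-bucket approach in Go, assuming the go.etcd.io/bbolt package (the API matches the original boltdb/bolt; the bucket names and the JSON encoding are just choices made for the example):

package userstore

import (
    "encoding/json"
    "fmt"

    bolt "go.etcd.io/bbolt"
)

type User struct {
    ID        string `json:"id"`
    LoginName string `json:"login_name"`
}

// PutUser writes the User under its ID and writes the LoginName->ID mapping
// in a second bucket, inside one transaction so the two buckets stay in sync.
func PutUser(db *bolt.DB, u User) error {
    return db.Update(func(tx *bolt.Tx) error {
        users, err := tx.CreateBucketIfNotExists([]byte("users"))
        if err != nil {
            return err
        }
        byLogin, err := tx.CreateBucketIfNotExists([]byte("users_by_login"))
        if err != nil {
            return err
        }
        data, err := json.Marshal(u)
        if err != nil {
            return err
        }
        if err := users.Put([]byte(u.ID), data); err != nil {
            return err
        }
        return byLogin.Put([]byte(u.LoginName), []byte(u.ID))
    })
}

// UserByLogin resolves the secondary key (LoginName) to the primary key (ID),
// then loads the struct; both reads happen inside a single read transaction.
func UserByLogin(db *bolt.DB, login string) (*User, error) {
    var u User
    err := db.View(func(tx *bolt.Tx) error {
        byLogin := tx.Bucket([]byte("users_by_login"))
        users := tx.Bucket([]byte("users"))
        if byLogin == nil || users == nil {
            return fmt.Errorf("buckets not initialised")
        }
        id := byLogin.Get([]byte(login))
        if id == nil {
            return fmt.Errorf("login %q not found", login)
        }
        data := users.Get(id)
        if data == nil {
            return fmt.Errorf("user %q not found", id)
        }
        return json.Unmarshal(data, &u)
    })
    if err != nil {
        return nil, err
    }
    return &u, nil
}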

Getting started with Bleve using BoltDB

I am trying to wrap my head around Bleve, and I understand everything that is going on in the tutorials, videos and documentation. However, I get very confused when using it with BoltDB and don't know where to start.
Say I have an existing BoltDB database called data.db populated with values of struct type Person
type Person struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
    Age  int    `json:"age"`
    Sex  string `json:"sex"`
}
How do I index this data so that I can do a search? How do I handle the indexing of data that will be stored in the database in the future?
Any help will be highly appreciated.
Bleve uses BoltDB as one of several backend stores for its own index files, so the index is separate from where you store your application data. To index your data, simply add each object to the index (Bleve document IDs are strings, so convert the int ID):
index.Index(strconv.Itoa(person.ID), person)
That index exists separately from your application data (whether it's in Bolt, Postgres, etc).
To retrieve your data, you'll need to construct a search request using bleve.NewSearchRequest(), then call Index.Search(). This will return a SearchResult which includes a Hits field where you can retrieve the ID for your object. You can use this to look up the object in your application data store.
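Roughly, the index-then-look-up flow looks like this in Go, assuming the github.com/blevesearch/bleve package (the index path, the query string and the strconv conversion are choices made for the example):

package main

import (
    "fmt"
    "log"
    "strconv"

    "github.com/blevesearch/bleve"
)

type Person struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
    Age  int    `json:"age"`
    Sex  string `json:"sex"`
}

func main() {
    // The Bleve index lives in its own directory, separate from data.db.
    index, err := bleve.New("people.bleve", bleve.NewIndexMapping())
    if err != nil {
        log.Fatal(err)
    }
    defer index.Close()

    p := Person{ID: 1, Name: "John", Age: 18, Sex: "male"}

    // Bleve document IDs are strings, so convert the int ID.
    if err := index.Index(strconv.Itoa(p.ID), p); err != nil {
        log.Fatal(err)
    }

    // Search, then use each hit's ID as the key back into your own data store.
    req := bleve.NewSearchRequest(bleve.NewMatchQuery("John"))
    res, err := index.Search(req)
    if err != nil {
        log.Fatal(err)
    }
    for _, hit := range res.Hits {
        fmt.Println("matched document ID:", hit.ID) // e.g. "1" -> look up in BoltDB
    }
}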
Disclaimer: I am the author of BoltDB.
How you index your data depends on how you want to query for it.
If you want to query by any arbitrary fields, like {Age: 15, Name: "Bob"}, then BoltDB isn't an awesome fit for your problem.
BoltDB is simply a key/value store with fast access to sequential keys and efficient prefix seeking. It's not really a replacement for general-purpose databases.
You likely want something more like a document store (e.g. MongoDB) or an RDBMS (e.g. PostgreSQL).
If you just want something embedded that uses simple files, you could also use SQLite via one of the Go drivers.
If you want to search by only a single field, like ID or Name, then use that as the key.
If lookup speed doesn't matter at all, I guess you can use Bolt to just iterate over the entire DB, parse the JSON and check the fields. But that's probably the worst approach you could take.
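For the single-field case, here is a hedged Go sketch of the key-as-index idea, again assuming go.etcd.io/bbolt (the bucket name and the JSON encoding are illustrative); it also shows the cursor prefix seek mentioned above:

package personstore

import (
    "bytes"
    "encoding/json"

    bolt "go.etcd.io/bbolt"
)

type Person struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
    Age  int    `json:"age"`
    Sex  string `json:"sex"`
}

// PutByName stores each person under their Name: the field you query by is the key.
func PutByName(db *bolt.DB, p Person) error {
    return db.Update(func(tx *bolt.Tx) error {
        b, err := tx.CreateBucketIfNotExists([]byte("people_by_name"))
        if err != nil {
            return err
        }
        data, err := json.Marshal(p)
        if err != nil {
            return err
        }
        return b.Put([]byte(p.Name), data)
    })
}

// FindByNamePrefix uses Bolt's cursor seek to scan only the keys sharing a prefix,
// e.g. prefix "Bo" matches "Bob", "Boris", and so on.
func FindByNamePrefix(db *bolt.DB, prefix string) ([]Person, error) {
    var out []Person
    err := db.View(func(tx *bolt.Tx) error {
        b := tx.Bucket([]byte("people_by_name"))
        if b == nil {
            return nil // nothing stored yet
        }
        c := b.Cursor()
        pfx := []byte(prefix)
        for k, v := c.Seek(pfx); k != nil && bytes.HasPrefix(k, pfx); k, v = c.Next() {
            var p Person
            if err := json.Unmarshal(v, &p); err != nil {
                return err
            }
            out = append(out, p)
        }
        return nil
    })
    return out, err
}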

FetchXML to filter by name of E-mail's RegardingObjectId

I have a query that starts at the QueueItem and, if the entity that the QueueItem references is an E-mail, adds some additional filter conditions. One of those conditions is the primary field value of whatever that e-mail, in turn, is "Regarding". I don't really care what type of entity the E-mail references; I just need to allow the user to filter by the "Name" of that entity. Is this possible, and if so, how?
This depends somewhat on where the query is performed from.
With a basic FetchXML query this is possible in a roundabout sort of way: you join on the relationship to each entity type you care about and check whether a matching record is there. It's not perfect, but it might work.
If you are running the query from code, then it's a bit easier, as you can examine the entity type in code. For example:
EntityReference e = entity.GetAttributeValue<EntityReference>("regardingobjectid");
string entityName = e.LogicalName;

How would you represent a relational entity as a single unit of retrievable data in BerkeleyDB?

BerkeleyDB is the database equivalent of a Ruby hashtable or a Python dictionary except that you can store multiple values for a single key.
My question is: If you wanted to store a complex datatype in a storage structure like this, how could you go about it?
In a normal relational table, if you want to represent a Person, you create a table with columns of particular data types:
Person
-id:integer
-name:string
-age:integer
-gender:string
When it's written out like this, you can see how a person might be understood as a set of key/value pairs:
id=1;
name="john";
age=18;
gender="male";
Decomposing the person into individual key/value pairs (name="john") is easy.
But in order to use the BerkeleyDB format to represent a Person, you would need some way of recomposing the person from its constituent key/value pairs.
For that, you would need to impose some artificial encapsulating structure to hold a Person together as a unit.
Is there a way to do this?
EDIT: As Robert Harvey's answer indicates, there is an entity persistence feature in the Java edition of BerkeleyDB. Unfortunately, because I will be connecting to BerkeleyDB from a Ruby application using Moneta, I will be using the standard edition, which I believe requires me to create a custom solution in the absence of this support.
You can always serialize (called marshalling in Ruby) the data as a string and store that instead. The serialization can be done in several ways.
With YAML (advantage: human-readable, with multiple implementations in different languages):
require 'yaml'; str = person.to_yaml
With Marshal (Ruby-only, and even Ruby-version specific):
Marshal.dump(person)
This will only work if the person's class does not refer to other objects that you don't want included. For example, references to other persons would need to be handled separately.
If your datastore is able to do so (and BerkeleyDB is, AFAICT), I'd just store one representation of the object's attributes keyed by the object ID, without splitting the attributes across different keys.
E.g. given:
Person
-id:1
-name:"john"
-age:18
-gender:"male"
I'd store the YAML representation in BerkeleyDB under the key person_1:
--- !ruby/object:Person
attributes:
  id: 1
  name: john
  age: 18
  gender: male
Instead, if you need to store each attribute as a separate key in the datastore (why?), you should make sure the keys for the person record are linked to its identifying attribute, which for an ActiveRecord is the id.
In this case you'd store these keys in BerkeleyDB:
person_1_name="john";
person_1_age=18;
person_1_gender="male";
Have a look at this documentation for an Annotation Type Entity:
http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/persist/model/Entity.html
