Data structure for implementing an Address Book in Java

Which data structure from the Concurrent collections package will be suitable for implementing an address book?
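One plausible candidate is `ConcurrentSkipListMap`: it keeps entries sorted by key (useful for an alphabetized address book), supports thread-safe concurrent reads and writes, and its range views make prefix lookups cheap. A minimal sketch, assuming a hypothetical `Contact` record keyed by name:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Sketch only: ConcurrentSkipListMap gives a sorted, thread-safe map with
// range views (subMap/headMap/tailMap) that suit name-prefix searches.
public class AddressBook {
    // Hypothetical contact type for illustration.
    public record Contact(String name, String phone, String email) {}

    private final ConcurrentSkipListMap<String, Contact> contacts =
            new ConcurrentSkipListMap<>();

    public void add(Contact c)         { contacts.put(c.name(), c); }
    public Contact lookup(String name) { return contacts.get(name); }

    // All contacts whose names start with the given prefix, in sorted order.
    public Map<String, Contact> byPrefix(String prefix) {
        return contacts.subMap(prefix, prefix + Character.MAX_VALUE);
    }

    public static void main(String[] args) {
        AddressBook book = new AddressBook();
        book.add(new Contact("Alice", "555-0100", "alice@example.com"));
        book.add(new Contact("Bob",   "555-0101", "bob@example.com"));
        System.out.println(book.byPrefix("Al").keySet()); // [Alice]
    }
}
```

If sorted order and prefix queries are not needed, a plain `ConcurrentHashMap` is simpler and faster for point lookups.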


How to implement GraphQL with Java and NebulaGraph, like Dgraph

I have a Java project, and I use NebulaGraph to store all my information. I need to offer GraphQL queries over it.
I have implemented this, but we need a dynamic schema, so we save a schema for each of our dynamic entity types.
For example, our users want dynamic data types, which we implement and save in NebulaGraph, and they can save vertices as items.
After that, we need an endpoint to query across all the vertices saved under those data types.
https://dgraph.io/docs/graphql/queries/search-filtering/
We need to post our requests like this. How can we implement it better?
We have a good cache of our data types in the Java layer.
Is there any suggestion for us?
For now, there is no GraphQL support/parsing-layer implementation for NebulaGraph.
For the Java connector, there are some abstraction layers; could you take a look at them?
https://github.com/nebula-contrib/graph-ocean
https://github.com/nebula-contrib/ngbatis
https://github.com/nebula-contrib/nebula-jdbc
Also, equivalent queries (mirroring the GraphQL ones) could be composed in OpenCypher to return complex results.
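As a rough illustration of composing such queries from the cached data types, here is a minimal sketch in plain Java. Everything here is an assumption for illustration: `DataTypeDef`, `buildLookup`, and the tag/property names are not part of any Nebula connector API, and a real implementation should use parameterized queries rather than string concatenation of user input.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: translate a GraphQL-style filter request into an
// nGQL LOOKUP statement, driven by the data-type definitions cached in
// the Java layer.
public class NgqlQueryBuilder {

    // A cached dynamic entity type: its tag name and declared properties.
    public record DataTypeDef(String tagName, List<String> properties) {}

    // Builds e.g.: LOOKUP ON item WHERE item.name == "book" YIELD item.name AS name
    public static String buildLookup(DataTypeDef type, Map<String, String> filters) {
        String where = filters.entrySet().stream()
                .map(e -> type.tagName() + "." + e.getKey() + " == \"" + e.getValue() + "\"")
                .collect(Collectors.joining(" AND "));
        String yield = type.properties().stream()
                .map(p -> type.tagName() + "." + p + " AS " + p)
                .collect(Collectors.joining(", "));
        return "LOOKUP ON " + type.tagName()
                + (where.isEmpty() ? "" : " WHERE " + where)
                + " YIELD " + yield + ";";
    }

    public static void main(String[] args) {
        DataTypeDef item = new DataTypeDef("item", List.of("name", "price"));
        System.out.println(buildLookup(item, Map.of("name", "book")));
    }
}
```

The generated statement string could then be executed through any of the connectors linked above.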

Should you have different protos for write and list, or reuse?

When designing CRUD protos, the write service has to write to multiple relational tables, so you would design the protos to match the table schema, which gives them some kind of nested structure. This is not the same for the list operation, where we abstract the data into a flat structure in a NoSQL store. What is the recommendation in this case? Do I reuse the nested protos and transform the NoSQL response to unify the proto structure between read and write? Or should I write a custom, flat proto for the list service?
Is this actually a case by case and a design decision? Or is there some enforced opinion somewhere that I have to follow?
There are many schools of thought on API design. Given that you're talking about Google products (protobuf and gRPC), you may find Google's API Design Guide helpful. https://cloud.google.com/apis/design/.
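One common resolution is to keep the nested write model and expose a separate flat list view, converting at the service boundary so callers of the list endpoint never depend on the write-side nesting. A minimal sketch of the two shapes in plain Java (all record names are hypothetical; in practice these would be proto messages):

```java
// Hypothetical sketch of the two shapes under discussion: a nested write
// model mirroring the relational tables, and a flat list view mirroring
// the NoSQL read store, with an explicit mapping between them.
public class InvoiceModels {

    // Nested "write" shape (a nested proto message in practice).
    public record Address(String city, String country) {}
    public record Customer(String name, Address address) {}
    public record WriteInvoice(String id, Customer customer) {}

    // Flat "list" shape (a separate flat proto message in practice).
    public record ListInvoice(String id, String customerName,
                              String city, String country) {}

    // The read service flattens once, at the boundary.
    public static ListInvoice toListView(WriteInvoice w) {
        return new ListInvoice(w.id(), w.customer().name(),
                w.customer().address().city(), w.customer().address().country());
    }

    public static void main(String[] args) {
        WriteInvoice w = new WriteInvoice("inv-1",
                new Customer("Initech", new Address("Austin", "US")));
        System.out.println(toListView(w));
    }
}
```

The cost is maintaining the mapping; the benefit is that the read contract can evolve with the NoSQL view without breaking writers, which is broadly in line with the guide's advice to model resources for the consumer rather than the storage.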

Embedding data from another document in Cosmos DB

When embedding data from another document in another collection, what is the best practice as to who should be responsible for populating that data in a microservices architecture?
As an example, let's say I have basic information about an organization:
{
  "id": 1,
  "legalName": "Initech"
}
which I want to embed in an invoice like this to avoid doing two service requests to show the invoice:
{
  "type": "Payable",
  "invoiceStatus": "Preparing Preliminary Version",
  "applicablePeriod": {
    "startDateTime": "2020-07-08T00:10:59.618Z",
    "endDateTime": "2020-07-08T00:10:59.618Z"
  },
  "issuedDateTime": "2020-07-08T00:10:59.618Z",
  "issuingOrganization": {
    "id": 1,
    "legalName": "Initech"
  }
}
Would it be the caller's responsibility to supply the data while creating/updating the invoice or would it be the invoice service that would retrieve the external data using the organization id and then embed the data as necessary?
I feel like I should avoid cross-service dependencies in the backend as much as possible. I understand that maintenance of the embedded data could be handled through the change feed, but I was wondering about the initial population of the embedded data.
Did you get an answer back on this? I wanted, at least, to provide an answer to serve as general guidance. It comes down to the state of the data. Please see the following document, which covers this specific topic in greater detail: Data modeling in Azure Cosmos DB.
In general, use embedded data models when (link):
There are contained relationships between entities.
There are one-to-few relationships between entities.
There is embedded data that changes infrequently.
There is embedded data that will not grow without bound.
There is embedded data that is queried frequently together.
Embedding data works nicely for many cases but there are scenarios when denormalizing your data will cause more problems than it is worth. So what do we do now?
When to reference (link):
In general, use normalized data models when:
Representing one-to-many relationships.
Representing many-to-many relationships.
Related data changes frequently.
Referenced data could be unbounded.
Hybrid data models (link).
We've now looked at embedding (or denormalizing) and referencing (or normalizing) data; each has its upsides and its compromises, as we have seen.
It doesn't always have to be either/or; don't be scared to mix things up a little.
Based on your application's specific usage patterns and workloads there may be cases where mixing embedded and referenced data makes sense and could lead to simpler application logic with fewer server round trips while still maintaining a good level of performance.
So, with the above data-model information, the other half of the equation is identifying microservice boundaries and designing a microservices architecture. But in a simpler scenario, the invoice service would perform the update to the root document, either by embedding the organization data or by linking to it.
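Applied to the original question, here is a hedged sketch of the second option: the invoice service resolves the organization itself at creation time, so the caller supplies only the id. All names below are hypothetical, and the in-memory map stands in for a call to the organization service or a local replica fed by the change feed; this is not a Cosmos DB API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the invoice service owns the embedding. It looks up
// the organization summary and denormalizes it into the invoice document
// it stores, so the caller never has to supply the embedded copy.
public class InvoiceService {

    public record OrgSummary(int id, String legalName) {}
    public record Invoice(String id, int orgId, OrgSummary issuingOrganization) {}

    // Stand-in for the organization service / change-feed-fed replica.
    private final Map<Integer, OrgSummary> orgReplica = new ConcurrentHashMap<>(
            Map.of(1, new OrgSummary(1, "Initech")));

    public Invoice createInvoice(String invoiceId, int orgId) {
        OrgSummary org = orgReplica.get(orgId);
        if (org == null) throw new IllegalArgumentException("unknown org " + orgId);
        return new Invoice(invoiceId, orgId, org); // embedded copy at creation time
    }

    public static void main(String[] args) {
        Invoice inv = new InvoiceService().createInvoice("inv-42", 1);
        System.out.println(inv.issuingOrganization().legalName()); // Initech
    }
}
```

Keeping the lookup inside the invoice service avoids trusting callers to supply consistent embedded data, at the cost of the service needing read access to organization summaries.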

Kafka Connect Jdbc Source Data Schema

Does the connector generate a generic schema, such as a record with a map of column→value in it, or does each table get its own schema that maps to a different class through binding? In the first scenario, through binding, all records across all tables would bind to one record class containing a map.
I implemented some similar functionality in the past, and what I did was create a generic record class containing a map of field→value, which is consistent with what the underlying JDBC API returns.
Although this seems to defeat the purpose of a schema, hence I wonder how it works.
If anyone could give me a hint and documentation to investigate, that would be much appreciated.
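To make the two shapes in the question concrete, here is a small illustrative contrast in plain Java (the class names are hypothetical, and this says nothing about what any particular connector actually emits; check its documentation for that):

```java
import java.util.Map;

// Sketch of the two row shapes discussed: a single generic record carrying
// a column -> value map, versus a per-table typed record.
public class RowShapes {

    // Option 1: one generic class for every table; the "schema" lives only
    // in the map keys and is checked at runtime.
    public record GenericRow(String table, Map<String, Object> columns) {}

    // Option 2: one typed class per table; the schema lives in the fields
    // and is checked at compile time.
    public record UserRow(int id, String email) {}

    public static void main(String[] args) {
        GenericRow g = new GenericRow("users", Map.of("id", 1, "email", "a@b.c"));
        UserRow t = new UserRow(1, "a@b.c");

        Object rawId = g.columns().get("id"); // Object: caller must cast
        int typedId  = t.id();                // type known statically
        System.out.println(rawId + " " + typedId);
    }
}
```

The generic shape mirrors what `ResultSet` hands you and needs no code generation, while per-table types move errors from runtime to compile time; which one a connector uses is exactly the schema question being asked.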

What is the difference between DAO and DAL?

Having studied Java at school, I am quite familiar with the DAO pattern (Data Access Object). However, at work I use .NET, where there is often talk about the DAL (Data Access Layer). To me their purposes seem quite similar. So the question is: are DAO and DAL basically the same thing? Is the term DAL only made up so it wouldn't be mixed up with Data Access Objects?
The Data Access Layer (DAL) is the layer of a system that exists between the business logic layer and the persistence / storage layer. A DAL might be a single class, or it might be composed of multiple Data Access Objects (DAOs). It may have a facade over the top for the business layer to talk to, hiding the complexity of the data access logic. It might be a third-party object-relational mapping tool (ORM) such as Hibernate.
DAL is an architectural term, DAOs are a design detail.
A data access layer will contain many data access objects.
Its primary role is to decouple the business logic from the database logic and implementation.
For example, the DAL may have a single method that retrieves data from several tables, queries, or stored procedures via one or more data access objects.
Changes to the database structure, DAOs, stored procedures, or even the database type should not force changes to the business logic; this is down to the decoupling the DAL provides.
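The distinction can be sketched in a few lines of Java. All class names here are hypothetical, and the in-memory lambdas stand in for real JDBC or ORM-backed DAOs:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch: each DAO handles one entity's persistence; the DAL
// facade composes them and is the only thing the business layer sees.
public class DaoVsDal {

    public record Customer(int id, String name) {}
    public record Order(int id, int customerId, double total) {}

    // DAOs: one object per entity's data access.
    interface CustomerDao { Customer findById(int id); }
    interface OrderDao    { List<Order> findByCustomer(int customerId); }

    // The DAL: a facade over the DAOs.
    static class DataAccessLayer {
        private final CustomerDao customers;
        private final OrderDao orders;

        DataAccessLayer(CustomerDao customers, OrderDao orders) {
            this.customers = customers;
            this.orders = orders;
        }

        // One DAL method may span several DAOs/tables.
        Map<Customer, List<Order>> customerWithOrders(int customerId) {
            Customer c = customers.findById(customerId);
            return Map.of(c, orders.findByCustomer(customerId));
        }
    }

    public static void main(String[] args) {
        // In-memory DAO implementations stand in for real JDBC/ORM ones.
        CustomerDao customerDao = id -> new Customer(id, "Initech");
        OrderDao orderDao = cid -> List.of(new Order(1, cid, 99.0));

        DataAccessLayer dal = new DataAccessLayer(customerDao, orderDao);
        System.out.println(dal.customerWithOrders(7).size()); // 1
    }
}
```

Swapping a DAO's implementation (say, from JDBC to an ORM) changes nothing above the `DataAccessLayer` facade, which is exactly the decoupling described above.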
