We want to use FHIR for our patient registration system, which runs as a standalone web service.
Applications that request data from the web service sometimes need to retrieve information about multiple patients at once, for example a list of recently seen patients.
It would be really inefficient to fetch each patient by ID individually, because that causes a lot of networking and search overhead.
Is it possible to search for multiple patients with a set of IDs?
HTTP itself should be able to handle this; I wonder whether the FHIR standard supports it.
There are two choices. The first is:
GET [base]/Patient?_id=1,2,3,4,5
Using commas like this is documented here: http://hl7.org/fhir/search.html#combining
An alternative is to use a batch. This is a much more flexible arrangement - see http://hl7.org/fhir/http.html#transaction
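For illustration, a batch could be assembled like this minimal Python sketch (the base URL is a placeholder, and error handling is omitted):

import requests

patient_ids = ["1", "2", "3", "4", "5"]

# A batch Bundle: one GET entry per patient, executed in a single HTTP call.
batch = {
    "resourceType": "Bundle",
    "type": "batch",
    "entry": [
        {"request": {"method": "GET", "url": f"Patient/{pid}"}}
        for pid in patient_ids
    ],
}

# Batches are POSTed to the server base URL (placeholder here).
response = requests.post(
    "https://fhir.example.org/fhir",
    json=batch,
    headers={"Accept": "application/fhir+json",
             "Content-Type": "application/fhir+json"},
)

# The server replies with a Bundle of type "batch-response",
# containing one entry per requested patient.
for entry in response.json().get("entry", []):
    print(entry.get("resource", {}).get("id"))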
I'm designing a solution and want to leverage some of Elasticsearch's query capabilities (version 7.x). We expect to have around 10M documents per index.
Documents might have different 'associations' to what we call 'users' (not necessarily the same meaning as in ES):
Associated to all users: queryable in any context.
Associated to a single user: should appear only in that user's searches.
Associated to a 'group' of users (of size up to 1000K): should appear in queries for users of this group.
We expect to have a lot of users, in the 100Ks or so, which also means we might have a lot of different groups; any 2 users might form a custom group.
I've been investigating ES's capabilities, and it looks like each solution I came up with has disadvantages:
RBAC - will require creating a lot of roles (one per user plus one per group; can ES even handle that many?)
ABAC - will require creating a lot of users (can ES even handle that many?)
Simple AND clauses on dedicated properties (a complex query template, as explained here) - sketched below
It is important to note that I have a single user that I will use to query on behalf of the users I create, in case I choose to go down this path.
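To make option 3 concrete, this is roughly the query template I have in mind (the field names visibility, allowed_users and allowed_groups are placeholders of mine, nothing built into ES):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def search_as_user(user_id, group_ids, query_text):
    # A document matches if it is public, owned by this user,
    # or shared with one of the user's groups.
    acl_filter = {
        "bool": {
            "should": [
                {"term": {"visibility": "all"}},
                {"term": {"allowed_users": user_id}},
                {"terms": {"allowed_groups": group_ids}},
            ],
            "minimum_should_match": 1,
        }
    }
    body = {
        "query": {
            "bool": {
                "must": [{"match": {"content": query_text}}],
                "filter": [acl_filter],
            }
        }
    }
    return es.search(index="documents", body=body)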
I came across this question, but I figured things might have evolved since it was answered: Document access control in ElasticSearch
Any other suggestions I should check out? Maybe even custom 3rd-party solutions?
Let's say we want to create an app with microservices.
We have a page where we display some items (products).
These products have multiple joins (categories, tags, users, and so on).
If the users and categories data live in other services, how can we manage and filter the results?
For example, in SQL you create 3 or 4 joins and get the result with one query.
With microservices I have to filter the categories, then the tags, and then the products; this could be 10 times slower than the SQL query.
Also, if I have a "products_categories" table that assigns categories to each product, which service is responsible for it? The Product service or the Category service?
Thank you
In a microservices architecture there are two ways to deal with this.
The API composition pattern: this is the simplest approach and should be used whenever possible. It works by making clients of the services that own the data responsible for invoking those services and combining the results.
The Command Query Responsibility Segregation (CQRS) pattern: this is more powerful than API composition, but it's also more complex. It maintains one or more view databases whose sole purpose is to support queries.
I would prefer CQRS: define a view database, a read-only replica built specifically to support that query. The service owning the replica keeps it up to date by subscribing to the (create, update, delete) events published by the data-owner services.
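As a rough illustration, the API composition variant could look like the sketch below; the service URLs, endpoints, and response shapes are assumptions of mine, not a prescribed API:

import requests

PRODUCT_SVC = "http://product-service"    # placeholder URLs
CATEGORY_SVC = "http://category-service"

def products_page(page=1):
    # 1. Query the service that owns products.
    products = requests.get(f"{PRODUCT_SVC}/products",
                            params={"page": page}).json()

    # 2. Batch-fetch the categories referenced by this page (one call, not N).
    category_ids = {cid for p in products for cid in p["category_ids"]}
    categories = requests.get(
        f"{CATEGORY_SVC}/categories",
        params={"ids": ",".join(str(cid) for cid in category_ids)},
    ).json()
    by_id = {c["id"]: c for c in categories}

    # 3. Combine the results in the composer, replacing the SQL join.
    for p in products:
        p["categories"] = [by_id[cid] for cid in p["category_ids"]]
    return products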
This is a very standard problem whenever a microservice is built. People often feel microservices are the solution for everything, which is not true.
The solution to this problem is better design: design so that there is a balance between performance and data redundancy. Higher performance (lower latency) means more duplication of data across the different microservices' databases. You should not aim for performance as good as SQL joins, but you should also not duplicate data too much. A balance is needed.
Most importantly, the requirements need to be divided into the right set of microservices.
I assume you created a "microservice" per database table. Those are not microservices, those are just HTTP-based CRUD interfaces to your database.
First, know why you need microservices. (Is there an actual reason?) Second, you have to create microservices that each encompass at least one full (business) functionality of your software, meaning a service doesn't need other services to do its job.
If you need a table that needs data from multiple microservices, you have by definition drawn your microservice boundaries wrongly. If a microservice can't provide its own UI without the help of other services, it doesn't fully contain its own functionality.
What's stopping you from having multiple services for reading / writing to the same database / table? For example:
One service to write to categories
One service to write to tags
One service to write to products
You could then write another service to read from all three of these; however, this doesn't have to happen at the HTTP level. Instead, the read service could query the same database directly and leverage the power of SQL.
The reading service could encompass your join logic, which means you wouldn't need to consume the other services around it.
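As a sketch of that idea (table and column names are assumed, and sqlite3 merely stands in for whatever database driver you actually use):

import sqlite3

def list_products_with_metadata(conn: sqlite3.Connection):
    # One SQL query in the read service instead of three HTTP calls.
    return conn.execute(
        """
        SELECT p.id, p.name, c.name AS category, t.name AS tag
        FROM products p
        JOIN products_categories pc ON pc.product_id = p.id
        JOIN categories c           ON c.id = pc.category_id
        LEFT JOIN products_tags pt  ON pt.product_id = p.id
        LEFT JOIN tags t            ON t.id = pt.tag_id
        """
    ).fetchall()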
Person
NativeCountry
SpokenLanguages
I have a question about microservice granularity, which I will try to explain with an example.
Assume I have the above 3 tables in a database, with a many-to-one relationship between Person and NativeCountry and a one-to-many relationship between Person and SpokenLanguages.
The front-end application is supposed to do CRUD operations on the Person entity and will also be able to retrieve people based on native country or spoken language.
Does it make sense to develop 3 independent microservices, one per entity, and then use an aggregator microservice at an upper layer to build the combined data for the UX layer, or should I think about combining them into a single microservice?
From your description of the problem, it sounds like "people" are at the center of the functionality and the use cases of the service, if I understand this correctly:
Search for people by native country
Search for people by language
Add a person with both their native country and the languages spoken
List all the languages
Since three of the required features are around people and one feature requires just listing the languages, I would argue that this should be one microservice (again, without knowing whether there are external services that depend on the other possible entity services). My argument here is that, in order to serve requests, people is the entity of interest, with native country and language being just dimensions used to retrieve people.
If you break each of the entities (people, language, and country) into different microservices, the services will be too small and the complexity will increase; e.g., you might need to make multiple requests to multiple services to generate a single response when there is no need to. As for the one feature that doesn't quite revolve around people, I would say it is too small a feature to be its own microservice. Until there is a need for it to be a standalone service, I would advise putting it into the "people" microservice.
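To illustrate, the surface of that single "people" microservice could be as small as this sketch (Flask and the in-memory list are just stand-ins for your real framework and storage):

from flask import Flask, jsonify, request

app = Flask(__name__)
people = []  # each entry: {"name": ..., "nativeCountry": ..., "spokenLanguages": [...]}

@app.route("/people", methods=["GET"])
def search_people():
    # Country and language are just filter dimensions on the one entity.
    country = request.args.get("nativeCountry")
    language = request.args.get("spokenLanguage")
    result = [p for p in people
              if (country is None or p["nativeCountry"] == country)
              and (language is None or language in p["spokenLanguages"])]
    return jsonify(result)

@app.route("/people", methods=["POST"])
def create_person():
    # A person is created together with their country and languages.
    people.append(request.get_json())
    return jsonify(people[-1]), 201

@app.route("/languages", methods=["GET"])
def list_languages():
    # Small enough to live inside the people service for now.
    return jsonify(sorted({l for p in people for l in p["spokenLanguages"]}))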
We are trying to implement a FHIR REST server for our application. In our current data model (and thus our live data), several FHIR resources are represented by multiple tables; e.g., what would all be Observations are stored in separate tables for vital values, laboratory values, and diagnoses. Each table has an independent, auto-incrementing primary ID, so there are entries with the same ID in different tables. But GET or DELETE calls to the FHIR server need a unique ID. What would be the most sensible way to handle this?
Searching didn't reveal an inherent way of doing this, so I'm considering these two options:
Add a prefix to all (or just the problematic) table IDs, e.g. lab-123 and vit-123
Add a UUID column to every table and use that as the logical identifier
Both have drawbacks: the first requires an ID parser, and the second requires multiple database calls to identify the correct record.
Is there a FHIR way to split a resource into several sub-resources, even in the REST URL? Ideally I'd get something like GET server:port/Observation/laboratory/123
Server systems will have all sorts of different divisions of data in terms of how data is stored internally. What FHIR does is provide an interface that tries to hide those variations. So Observation/laboratory/123 would be going against what we're trying to do - because every system would have different divisions and it would be very difficult to get interoperability happening.
Either of the options you've proposed could work. I have a slight leaning towards the first, because it doesn't involve changing your persistence layer and it's a relatively straightforward transformation between the external (FHIR) IDs and the internal ones.
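The ID parser that first option needs can be quite small; here is a sketch, where the prefix-to-table mapping is assumed from your examples:

# The prefixes and table names below are examples, not anything FHIR-defined.
PREFIX_TO_TABLE = {
    "lab": "laboratory_values",
    "vit": "vital_values",
    "dia": "diagnoses",
}

def parse_fhir_id(logical_id):
    """Split an external ID like 'lab-123' into (table, internal primary key)."""
    prefix, _, raw = logical_id.partition("-")
    if prefix not in PREFIX_TO_TABLE or not raw.isdigit():
        raise ValueError(f"Unknown or malformed resource id: {logical_id!r}")
    return PREFIX_TO_TABLE[prefix], int(raw)

def to_fhir_id(prefix, pk):
    """Inverse mapping, used when rendering resources."""
    return f"{prefix}-{pk}"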
Is there a FHIR way that allows to split a resource into several sub-resources, even in the REST URL? Ideally I'd get something like GET server:port/Observation/laboratory/123
What would this mean for search? What would /Observation?code=xxx search through? Would that search labs, vitals, etc. combined, or would you only allow access on /Observation/laboratory?
If these are truly "silos", maybe you could use http://servername/lab/Observation (i.e., swap the last two path parts), which suggests your server has multiple "endpoints" for the different kinds of observations. I think more clients will be able to handle that URL than the one you suggested.
Still, I think the best approach is one of your two original options, of which the first is indeed the easiest to implement.
What is the best way to implement visitor-tracking logic?
Create a visitors table: |ip|resource_type|resource_id|
Create a serialized field in the records (Post, Pet, Event, Ad, etc.)
Use a NoSQL solution
Any other ideas?
In the 1st case, the table grows with every visit.
In the 2nd, we end up with a very long field.
In the 3rd, I have trouble with Mongoid in production (CentOS).
Not sure I'm answering directly, but I would not implement that myself; rather, take a look at existing solutions. For basic counting:
Vanity
Google Analytics
For more detailed metrics about what each user does, I would go toward cohort analysis.
A totally different option could be to use just the logs, with something like lograge to log each request. It is very easy to add fields (such as the IP), and you can then extract all the information from your logs.