I'm working on extracting patient info from a FHIR server; however, I've come across two types of search methods that are somewhat different. What is the difference between the search method of
Bundle bundle = client.search().forResource(DiagnosticReport.class)
.
.
and
GET [base]/DiagnosticReport?result.code-value-quantity=http://loinc.org|2823-3$gt5.4|http://unitsofmeasure.org|mmol/L
It's very confusing, as there doesn't seem to be much written about the difference between these two search methods. Can I achieve the same level of filtering with the first method as with the URL method?
The first is how you perform a search using the Java reference implementation. The second shows what the actual HTTP query that hits the server looks like (and also specifies some additional search criteria). Behind the scenes, the Java code in the first example makes an HTTP call that looks similar to the second example. The primary documentation in the FHIR specification deals with the HTTP call. The reference implementations work differently depending on their language and are documented outside the FHIR specification, on a per-reference-implementation basis.
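For illustration, here is a rough sketch of how the fluent Java (HAPI FHIR) form relates to the raw URL form. The server base URL, the use of R4, and the plain code search are assumptions made for the example; note that the fluent client also has a byUrl escape hatch that accepts a ready-made search URL such as the composite one in the question:

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import ca.uhn.fhir.rest.gclient.TokenClientParam;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.DiagnosticReport;

public class FhirSearchSketch {
    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forR4();
        IGenericClient client = ctx.newRestfulGenericClient("http://hapi.fhir.org/baseR4"); // assumed test server

        // Fluent form: the client assembles the HTTP query string for you
        Bundle byCode = client.search()
                .forResource(DiagnosticReport.class)
                .where(new TokenClientParam("code").exactly()
                        .systemAndCode("http://loinc.org", "2823-3"))
                .returnBundle(Bundle.class)
                .execute();

        // Escape hatch: hand the client a raw search URL, useful for advanced
        // parameters such as the composite one in the question
        // (the value may need URL-encoding depending on the server)
        Bundle byUrl = client.search()
                .byUrl("DiagnosticReport?result.code-value-quantity="
                        + "http://loinc.org|2823-3$gt5.4|http://unitsofmeasure.org|mmol/L")
                .returnBundle(Bundle.class)
                .execute();

        System.out.println(byCode.getEntry().size() + " / " + byUrl.getEntry().size());
    }
}

Whether the fluent API can express a particular advanced parameter varies, which is why the raw-URL form is handy when you want to match a query from the specification exactly.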
As far as I know, POST is usually used for changing the state of the server, and PUT usually for updating information. If I am creating a new index, should it not be POST instead of PUT? PUT does make sense when creating a document, as it changes the state of the data.
Your statement
As far as I know, POST is usually used for changing the state of the server, and PUT usually for updating the information.
does conform to the conventional HTTP vs CRUD semantics:
HTTP method | CRUD equivalent | Description
POST | Create | Let the target resource process the representation enclosed in the request.
PUT | Update | Set the target resource's state to the state defined by the representation enclosed in the request.
However, the PUT spec also stipulates that:
The PUT method requests that the state of the target resource be
created or replaced with the state defined by the representation
enclosed in the request message payload
As such, PUT can be (and is) used in Elasticsearch both to create an index and to update its settings and mappings.
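As a small illustration (using the Elasticsearch low-level Java REST client; the local host/port and the index name "products" are assumptions for the example), the same PUT verb covers both operations:

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class PutIndexSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // PUT creates the index...
            Request create = new Request("PUT", "/products");
            create.setJsonEntity("{\"settings\":{\"number_of_replicas\":1}}");
            Response created = client.performRequest(create);

            // ...and PUT also updates its settings afterwards
            Request update = new Request("PUT", "/products/_settings");
            update.setJsonEntity("{\"index\":{\"number_of_replicas\":2}}");
            Response updated = client.performRequest(update);

            System.out.println(created.getStatusLine() + " / " + updated.getStatusLine());
        }
    }
}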
Also, keep in mind that it's rarely just a matter of strict adherence to the semantics. One of the creators of ES put it this way:
It's all about REST semantics.
And our understanding of the semantics at the time when we made the APIs. And backwards compatibility constraints. And whatever "feels" natural to the person who implemented the API.
Where it makes a lot of sense Elasticsearch maps the HTTP verbs to
useful things. But when it doesn't make a ton of sense we just go with
whatever verb feels good rather than trying to be super strict about
REST. Also, we don't do linked data, instead relying on you to build
links from context. I'm told that is particularly non-REST. But it is
what we do.
Is there a way for us to define the policies of a GraphQL API in a form that is both machine-readable and human-readable, containing a set of rules (in other words, a specification) that describes the API? I'm not talking about the schema, but about a spec where we can add security-related details (for example, the complexity value to be assigned per field and depth-limitation values) or any other related details. Any thoughts or ideas? Or can we send all of this within the SDL itself?
For example, for REST APIs we use Swagger to describe paths, parameters, responses, models, security and more. Is there a need for a similar approach for GraphQL APIs? Your response is highly appreciated.
We are working on an approach to add policies to your GraphQL API and allow you to manage it better, especially as you expose the interface externally.
Part of the challenge is that as opposed to a REST call that can easily be differentiated from others, all GraphQL requests look the same, unless a deeper analysis is performed on the incoming query.
This blog post describes how we perform this analysis: https://www.ibm.com/blogs/research/2019/02/graphql-api-management/
If this is of interest, let's connect!
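For the two concrete examples in the question, per-field complexity values and depth limits, I'm not aware of a standard machine-readable policy format, but as a rough sketch graphql-java can enforce such limits programmatically through instrumentations (here schema is assumed to be an already-built GraphQLSchema and the limits 10 and 200 are arbitrary):

import graphql.GraphQL;
import graphql.analysis.MaxQueryComplexityInstrumentation;
import graphql.analysis.MaxQueryDepthInstrumentation;
import graphql.execution.instrumentation.ChainedInstrumentation;
import graphql.schema.GraphQLSchema;
import java.util.Arrays;

public class PolicyEnforcementSketch {
    public static GraphQL withLimits(GraphQLSchema schema) {
        return GraphQL.newGraphQL(schema)
                .instrumentation(new ChainedInstrumentation(Arrays.asList(
                        new MaxQueryDepthInstrumentation(10),          // reject queries nested deeper than 10 levels
                        new MaxQueryComplexityInstrumentation(200))))  // reject queries scoring above 200
                .build();
    }
}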
As per my understanding, you need a tool to generate documentation for the APIs you have built, covering parameters and so on.
If that's what you are searching for, there is something like Swagger for GraphQL: Swagger-to-GraphQL.
Hope that helps!
This question is very similar to my question; however, due to the way SO works, I think it is better to ask a new question rather than just continue a thread.
CoreNLP has the Simple API which allows for quicker access to various components of the NLP pipeline. The way to get named entities appears to be:
Form a document annotation from the text
Get the sentences from the document object
Use nerTags() on the sentence objects to get the token-by-token NER labeling.
Via other mechanisms, as discussed in the question linked above, one can retrieve full multi-token entity mentions such as George Washington, which is an entity mention composed of two tokens. Is there a way, using the Simple API, to get these multi-token entity mentions?
Yes, though it gives you less information than the full API, returning only the String spans of the mention. See Sentence#mentions(String) and Sentence#mentions().
If you want to get more information about a mention, you'll have to either use the regular API, or re-implement the logic in these functions. You can also try mucking around in the raw Proto, which will certainly have all the information you could possibly want, but in a less-than-pleasant proto interface. The proto definition is here.
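A small sketch putting the above together, based on the methods named in this answer (the example sentence and the PERSON filter are just for illustration):

import edu.stanford.nlp.simple.Document;
import edu.stanford.nlp.simple.Sentence;
import java.util.List;

public class SimpleMentionsSketch {
    public static void main(String[] args) {
        Document doc = new Document("George Washington was the first president of the United States.");
        for (Sentence sent : doc.sentences()) {
            List<String> tags = sent.nerTags();            // token-by-token labels, e.g. PERSON, PERSON, O, ...
            List<String> people = sent.mentions("PERSON"); // multi-token spans of one type, e.g. "George Washington"
            List<String> all = sent.mentions();            // all entity mention spans as Strings
            System.out.println(tags);
            System.out.println(people);
            System.out.println(all);
        }
    }
}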
I am designing a Spring Boot RESTful API for product search with various attributes (the search can use one or more of them). A few of the criteria are "greater than" a certain amount and a few are "less than". With @RequestParam we can take a String or similar values, but not a criterion like that.
My question is: what's the best way to accept the user's data for these criteria in a GET search API call?
@GetMapping("/search")
public ResponseEntity<List<OrderView>> searchOrders(...)
{
    ...
    // call to service implementation
    ...
}
Hmm... https://spring.io/guides/tutorials/bookmarks/ has a good description of REST services with Spring. It also describes the different maturity levels when it comes to RESTful principles (and you can do HATEOAS very simply and cleverly with Spring Boot via HAL).
When you are doing a search you do not have the resource (level 1) URL yet, but you want to obtain it... So it's okay (imho) to do a simple query-parameter call. For example, when looking at amazon.com and typing some search parameters there, you will see that they use a simple approach:
https://www.amazon.de/s/ref=nb_sb_noss?....&url=search-alias%3Daps&field-keywords=criteria1+criteria2+criteria3
They just add the keywords as a concatenated string.
There is also an interesting blog entry from Apigee available:
https://apigee.com/about/blog/technology/restful-api-design-tips-search
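For illustration, a minimal sketch of such a query-parameter search endpoint; the parameter names name/minPrice/maxPrice and the ProductService/ProductView types are assumptions made up for the example:

import java.math.BigDecimal;
import java.util.List;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/products")
public class ProductSearchController {

    private final ProductService productService; // assumed service type

    public ProductSearchController(ProductService productService) {
        this.productService = productService;
    }

    // e.g. GET /api/products/search?name=phone&minPrice=100&maxPrice=500
    @GetMapping("/search")
    public ResponseEntity<List<ProductView>> search(
            @RequestParam(required = false) String name,
            @RequestParam(required = false) BigDecimal minPrice,
            @RequestParam(required = false) BigDecimal maxPrice) {
        return ResponseEntity.ok(productService.search(name, minPrice, maxPrice));
    }
}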
In the article Spring Boot: How to design efficient REST API?, I explained how to develop a REST API for search. As an example, you will find the code and Postman screenshots in that article.
As an optimization, with one endpoint I can get several kinds of results: resources that are sorted, filtered, and paginated.
You do not have to deal with a lot of code (query-parameter checks and all the control flow in your controller): the specification-arg-resolver library handles that with no problem.
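As a rough sketch of how that library is typically wired (the annotation and class names are as I recall them from the library's documentation, so treat them as assumptions and verify against the version you use; Product and ProductRepository are made-up types):

import net.kaczmarzyk.spring.data.jpa.domain.GreaterThanOrEqual;
import net.kaczmarzyk.spring.data.jpa.domain.LessThanOrEqual;
import net.kaczmarzyk.spring.data.jpa.domain.LikeIgnoreCase;
import net.kaczmarzyk.spring.data.jpa.web.annotation.And;
import net.kaczmarzyk.spring.data.jpa.web.annotation.Spec;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;

@RestController
public class ProductFilterController {

    private final ProductRepository productRepository; // assumed to extend JpaSpecificationExecutor<Product>

    public ProductFilterController(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    // e.g. GET /products?name=phone&minPrice=100&maxPrice=500
    @GetMapping("/products")
    public List<Product> search(
            @And({
                @Spec(path = "name", params = "name", spec = LikeIgnoreCase.class),
                @Spec(path = "price", params = "minPrice", spec = GreaterThanOrEqual.class),
                @Spec(path = "price", params = "maxPrice", spec = LessThanOrEqual.class)
            }) Specification<Product> spec) {
        return productRepository.findAll(spec);
    }
}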
So this may be a more general question, but I feel it needs to be asked.
Time and time again I come across examples on Camel's documentation pages where I think, "That's exactly what I want! ...but it's in Java, not Spring. How the heck do I convert it properly?"
So my question is: What is the rule of thumb for converting things?
Is there some conversion guide out there?
For example, I wanted to append a \n to the end of each line as the data comes through a socket into a file using the Netty4 component.
I see an example such as .transform().body(append("\n"))
How would I translate that into Spring XML, to put in my Spring-based route?
Maybe this is just something a person new to Camel struggles with, and once you get the hang of it you can see the obvious answer. But I feel like I can't be the only one thinking this about the examples out there.
It seems like a lot of Java-to-Spring conversion can be done one-to-one, but not all the time.
Well, the mapping isn't straightforward and there isn't a 1-to-1 mapping available. Generally, a Java DSL method invocation translates to a tag in the Spring XML DSL, but the position of that tag is not always the same: in some cases a chain of Java DSL method invocations translates to tags placed at the same level, and sometimes (e.g. the idempotent consumer) the chain translates to child tags of the first invocation.
I guess the mapping was done this way because XML and Java are two very different languages, and making the mapping 1-to-1 would have crippled the expressiveness of at least one, if not both, DSLs.
My advice would be to always import the XML schema and rely on your IDE's auto-completion, the documentation from the schema itself, and Camel's online documentation.
You can run your Camel context via the mvn camel:run goal and then use a JMX client to connect to that process. There is an MBean in Camel which provides a method called dumpRoutesAsXml or similar. Invoking it will give you the XML equivalent of your context. But keep in mind that it only prints the routes; anything outside of the routes is discarded.
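For illustration, a rough sketch of invoking that operation from a standalone JMX client; the JMX service URL and the exact ObjectName of the CamelContext MBean are assumptions (look them up with a tool like JConsole first):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import java.util.Set;

public class DumpCamelRoutes {
    public static void main(String[] args) throws Exception {
        // Assumed connector address; adjust to wherever your camel:run JVM exposes JMX
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Camel registers its context MBean under the org.apache.camel domain by default
            Set<ObjectName> names = mbsc.queryNames(new ObjectName("org.apache.camel:type=context,*"), null);
            for (ObjectName name : names) {
                String xml = (String) mbsc.invoke(name, "dumpRoutesAsXml", new Object[0], new String[0]);
                System.out.println(xml);
            }
        } finally {
            connector.close();
        }
    }
}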
Hope that helps,
Lars