How to access one element of a REST collection through HATEOAS links? - spring

I'm trying to build an architecture of RESTful services with Java Spring, and to put a gateway service in front of them. To build the gateway, I need to implement a client for the other services, which my colleagues and I tried to design around the HATEOAS principle, providing links to related resources through the spring-hateoas module.
Let's say I have a service running on localhost, listening on port 8080, which returns a collection of resources for a GET on /resources. For example:
{
  "_embedded" : {
    "resources" : [ {
      "label" : "My first resource!",
      "resourceId" : 3,
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/resources/3"
        },
        "meals" : {
          "href" : "http://localhost:8080/resources",
          "templated" : true
        }
      }
    }, {
      "label" : "Another resource!",
      "resourceId" : 4,
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/resources/4"
        },
        "meals" : {
          "href" : "http://localhost:8080/resources",
          "templated" : true
        }
      }
    } ]
  },
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/resources",
      "templated" : true
    }
  }
}
I'm trying to use a HATEOAS client such as Traverson. How could I reach a single resource element simply by following HATEOAS links? My solution so far has been to add an item link to my collection, as follows:
"_links" : {
"self" : {
"href" : "http://localhost:8080/resources",
"templated" : true
},
"item" : {
"href" : "http://localhost:8080/resources/{id}",
"templated" : true
}
}
That way I can expand the id directly in the template with Traverson and follow the result. But is this good practice? Should I proceed differently?

Simply put, Traverson is meant to find a link.
In the simplest cases, each link has a unique name (rel). If you simply provide the name of the rel to Traverson's follow(...) method, it will use the proper LinkDiscoverer and navigate to the corresponding URI for that rel.
This is a Hop.
Since the goal is to navigate the API kind of like following links on a webpage, you must define a chain of hops.
In your case, it's a little more complicated, since you have an _embedded collection with multiple items. Asking for the self link isn't straightforward, since you can easily see three of them in the root document.
Hence Traverson's support for JSON-Path. If you check the reference documentation, it's easy to see that a JSON-Path expression can be supplied to help pick which link you want.
As long as the attribute being selected is the URI, then Traverson will "hop" to it.
NOTE: When simply using rels, you can supply multiple rels as strings in follow(...). When using anything else, like a JSON-Path expression or rel(...), use one follow(...) per hop. Thankfully, this isn't hard to read if you put each hop on a separate line (again, see the reference docs for examples).
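A minimal sketch of both options, assuming a hypothetical ResourceDto class for the response payload and a Spring HATEOAS version whose follow(...) accepts JSON-Path expressions starting with $ (adapt names and URIs to your API):

import java.net.URI;
import java.util.Map;

import org.springframework.hateoas.MediaTypes;
import org.springframework.hateoas.client.Traverson;

// Entry point of the API (same base URI as in the question).
Traverson traverson = new Traverson(URI.create("http://localhost:8080/"), MediaTypes.HAL_JSON);

// Option 1: one hop via a JSON-Path expression that resolves to a single href.
ResourceDto first = traverson
        .follow("$._embedded.resources[0]._links.self.href")
        .toObject(ResourceDto.class);

// Option 2: follow the templated "item" link you added and expand the {id} variable.
Map<String, Object> parameters = Map.of("id", 3);
ResourceDto byId = traverson
        .follow("item")
        .withTemplateParameters(parameters)
        .toObject(ResourceDto.class);

Either way the hop ends on a concrete URI, so Traverson can dereference it and bind the body to your DTO.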

Related

Elasticsearch - Point alias from OLD_INDEX to NEW_INDEX

I have an alias pointing to my OLD_INDEX. I am creating a new index, and after creation I need to point my alias A to the NEW_INDEX. I need to do this in Java.
I have looked almost everywhere, but I cannot find any Java implementation for this.
Would really appreciate some help. If possible, it would be great to have a sample code as well.
Thanks.
You can add or remove an alias.
To remove,
POST /_aliases { "actions" : [ { "remove" : { "index" : "test1", "alias" : "alias1" } } ] }
To add an alias,
POST /_aliases { "actions" : [ { "add" : { "index" : "test1", "alias" : "alias1" } } ] }
The aliases API documentation lists the supported actions.
You can use the Java low-level or high-level REST client to do this.
Initialize the REST client and make the call using the JSON requests and endpoints above.
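For illustration, a sketch with the low-level Java REST client (this assumes a client version that has the Request/setJsonEntity API; the index and alias names are the placeholders from the question):

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();

// Remove the alias from OLD_INDEX and add it to NEW_INDEX in a single atomic call.
Request request = new Request("POST", "/_aliases");
request.setJsonEntity(
        "{ \"actions\" : [ "
      + "{ \"remove\" : { \"index\" : \"OLD_INDEX\", \"alias\" : \"A\" } }, "
      + "{ \"add\" : { \"index\" : \"NEW_INDEX\", \"alias\" : \"A\" } } ] }");
Response response = client.performRequest(request);

client.close();

Because both actions run in one _aliases call, the alias switches from the old index to the new one atomically.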

document field returns null when querying groups of Prismic Content-Relationship fields in GraphQL

Issue:
I am using Prismic to send data through to my website.
In Prismic I have a Type (testimonial_list) that consists of a group of content-relation fields (Prismic Type testimonials).
To query the data on the inner Types, I need to access them via the document field in GraphQL and use inline fragments.
I have followed as instructed here:
https://github.com/angeloashmore/gatsby-source-prismic#Query-Content-Relation-fields
Inside GraphQL I have managed to navigate to the testimonial data fields (on the document field), but the document field returns null, and this is where I'm stuck. I can't work out why it would return null, as the content exists and the fields are clearly being found in GraphQL.
Info:
My project is built using Gatsby and I'm using the plugin gatsby-source-prismic v3.1.1
I can access the correct field data and I am getting the right number of nodes returned, but document is empty.
This is the JSON for the testimonial_list Type on Prismic:
{
  "Main" : {
    "prismic_title" : {
      "type" : "StructuredText",
      "config" : {
        "single" : "heading6",
        "label" : "Title (only used to name entry in Prismic list)",
        "placeholder" : "Prismic list title (otherwise \"undefined\")"
      }
    },
    "page" : {
      "type" : "Select",
      "config" : {
        "options" : [ "Homepage", "Option 2", "Option 3" ],
        "label" : "Website page to appear on:"
      }
    },
    "testimonial_list" : {
      "type" : "Group",
      "config" : {
        "fields" : {
          "testimonial" : {
            "type" : "Link",
            "config" : {
              "select" : "document",
              "customtypes" : [ "testimonial" ],
              "label" : "testimonial"
            }
          }
        },
        "label" : "Testimonial List"
      }
    }
  }
}
Thank you for any help, if there is any more info I can supply to help deduce the issue please let me know.
In the end, the issue turned out to be a typo in my gatsby-config where I was requiring the schema.
It was a daft mistake, but stare at something for too long and these things happen, I guess.
In case anybody else has a similar issue: make sure the property names of your Prismic schemas inside your gatsby-config are exactly the same as in Prismic.
For example, if your Type in Prismic is called "my_type", then you must use that exact name; don't use "myType".
Hey, it might be something related to the gatsby-source-prismic plugin.
I would directly open an issue for it here if I were you: https://github.com/angeloashmore/gatsby-source-prismic/issues

Google places API - Can I separate out the output?

Hitting the endpoint:
https://maps.googleapis.com/maps/api/place/findplacefromtext/json?input=Hoboken%20NJ&fields=formatted_address,name&inputtype=textquery&key=xxxxxxxxxxxxxxxxxxx
Getting the result:
{
  "candidates" : [
    {
      "formatted_address" : "New Jersey, USA",
      "name" : "Hoboken"
    }
  ],
  "debug_log" : {
    "line" : []
  },
  "status" : "OK"
}
What bugs me is that I can't find a way to separate out the region and country. Yes, I know I can parse the result myself, but is there an option I can pass to the Google Places API to have the response separate out city/state (or region)/country in the returned JSON?
Something like:
{
  "candidates" : [
    {
      "state" : "New Jersey",
      "country" : "USA",
      "name" : "Hoboken"
    }
  ],
  "debug_log" : {
    "line" : []
  },
  "status" : "OK"
}
As far as I know, it isn't possible; you'll have to parse it yourself. The Places API is designed to search for businesses and POIs in the first place.
Google does, however, have a Geocoding API which returns postal code, country, state and address as separate components.
There are also some free alternatives.
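For example, a minimal sketch that calls the Geocoding API for the same query (YOUR_KEY is a placeholder; you still have to parse the JSON, but the address_components array already splits the result into typed parts such as locality, administrative_area_level_1 and country):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

HttpClient http = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder(URI.create(
        "https://maps.googleapis.com/maps/api/geocode/json?address=Hoboken%20NJ&key=YOUR_KEY"))
        .build();
HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());

// Each result carries an "address_components" array; every component has a
// "types" list (e.g. "locality", "administrative_area_level_1", "country"),
// so state and country come back as separate entries you can pick out.
System.out.println(response.body());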

Accessing and updating association resources with Spring Data REST and MongoDB

I don't fully understand how creating and accessing association resources works with Spring Data REST and MongoDB. I have noticed a few strange behaviours, and I am hoping to get some clarification here.
I have a simple object model where one resource type (Request) contains a list of objects of another resource type (Point). Both resources are top-level resources, i.e. have their own Repository interfaces.
I would like to be able to add a new Point to a Request by POSTing to /requests/{id}/points with a link to an existing point (as I know from this question that I can't just POST a point as a JSON payload).
Furthermore, I would like to be able to GET /requests/{id}/points to retrieve all the points associated with a request.
Here is my object model (getters/setters omitted):
Request.java
public class Request {

    @Id
    private String id;

    private String name;

    @DBRef
    private List<Point> points = new ArrayList<>();
}
RequestRepository.java
public interface RequestRepository extends MongoRepository<Request, String> {
}
Point.java
@Data
public class Point {

    @Id
    private String id;

    private String name;
}
PointRepository.java
public interface PointRepository extends MongoRepository<Point, String> {
}
Let's say that I POST a new Request. Then doing a GET on /requests gives me this:
{
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/requests",
      "templated" : false
    }
  },
  "_embedded" : {
    "requests" : [ {
      "name" : "request 1",
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/requests/55e5dc47b7605d55c7c361ba",
          "templated" : false
        },
        "request" : {
          "href" : "http://localhost:8080/requests/55e5dc47b7605d55c7c361ba",
          "templated" : false
        },
        "points" : {
          "href" : "http://localhost:8080/requests/55e5dc47b7605d55c7c361ba/points",
          "templated" : false
        }
      }
    } ]
  }
}
So I have a points link in there. Good. Now, if I do a GET on /requests/55e5dc47b7605d55c7c361ba, I notice strange thing #1. There is no points link anymore:
{
  "name" : "request 1",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/requests/55e5dc47b7605d55c7c361ba",
      "templated" : false
    },
    "request" : {
      "href" : "http://localhost:8080/requests/55e5dc47b7605d55c7c361ba",
      "templated" : false
    }
  }
}
But anyway, forget about that for now; let's POST a point (which gets ID 55e5e225b76012c2e08f1e3e) and link it to the request:
curl -X PUT -H "ContentType: text/uri-list" http://localhost:8080/requests/55e5dc47b7605d55c7c361ba/points
-d "http://localhost:8080/points/55e5e225b76012c2e08f1e3e"
204 No Content
That seemed to work. Ok, so now if I do a GET on http://localhost:8080/requests/55e5dc47b7605d55c7c361ba/points, I expect to see the point I just added, right? Enter strange thing #2, I get the following response:
{
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/requests/55e5dc47b7605d55c7c361ba/points",
      "templated" : false
    }
  }
}
So at this point I realise I've done something wrong. Could somebody please explain to me what's happening here? Do I have a fundamental misunderstanding of the way SDR is working? Should I even be using DBRefs? Or is this simply not possible with MongoDB?
Thanks in advance, and sorry for the long question!
Edit
The Points are in fact being added as DBRefs (verified by looking at the db manually), but it appears that SDR is not giving them back. If I explicitly ask for http://localhost:8080/requests/55e5dc47b7605d55c7c361ba/points/55e5e225b76012c2e08f1e3e, then I get the following response:
{
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/points/55e5e225b76012c2e08f1e3e",
      "templated" : false
    },
    "point" : {
      "href" : "http://localhost:8080/points/55e5e225b76012c2e08f1e3e",
      "templated" : false
    }
  }
}
Which is also strange... where is my Point.name?
Both aspects are caused by a glitch in Spring Framework 4.2.0 (which Spring Boot 1.3 M4 upgraded to) that causes Spring Data REST's custom serializers not to be applied in some cases. This is already fixed in Spring 4.2.1. To upgrade your Gradle project to that version, upgrade the Boot plugin to 1.3 and then pin the Spring version to 4.2.1 using the following declaration:
ext['spring.version'] = '4.2.1.RELEASE'
Make sure you properly initialize the collection with an empty one, as a null value will cause a 404 Not Found to be returned. An empty list will produce correct HAL output.
Generally speaking, association resources should be used with already existing resources; especially with Spring Data MongoDB there's no other choice, as it doesn't have persistence by reachability. That is, you'd need to persist the related instance first (i.e. POST to its collection resource and get a Location header back) and then assign the just-created resource (i.e. PUT or POST the just-obtained Location to the association resource), which seems to be what you're doing.
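A minimal sketch of that two-step flow with RestTemplate (the URIs mirror the question's example; the payload and variable names are just placeholders):

import java.net.URI;
import java.util.Map;

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

RestTemplate rest = new RestTemplate();

// Step 1: create the Point and capture the Location header of the new resource.
URI pointLocation = rest.postForLocation("http://localhost:8080/points", Map.of("name", "point 1"));

// Step 2: PUT the Location as text/uri-list to the association resource of the Request.
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.parseMediaType("text/uri-list"));
rest.exchange("http://localhost:8080/requests/55e5dc47b7605d55c7c361ba/points",
        HttpMethod.PUT, new HttpEntity<>(pointLocation.toString(), headers), Void.class);

Note that PUT replaces the whole association with the supplied URI list, while POST adds to it.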

ElasticSearch Filtered Aliases Creation - Best Practice

We are planning to use Filtered Aliases as mentioned here - https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html
Our input data is going to be a stream with each line of the stream corresponding to an object we would like to store in ES.
Each object contains an 'id', which we are using for routing and filtering.
QUESTION -
How do we create the aliases and index the data in a performant way?
-- Do we index all the data, keep track of all the unique 'id's, and at the very end create the filtered aliases? OR
-- For each object, do we check whether an alias for that 'id' exists, and create one if it doesn't?
I'm leaning towards the first approach. Is it advisable and performant compared to the second approach?
TIA.
Based on our discussion above, and after having glanced over the blog article you posted, I'm pretty positive that in your case you don't need aliases at all and the routing key would suffice. Again, that's only because you have a single index; if you had many indices this would not be true anymore!
You simply need to specify the routing key to use when indexing your document. Until ES 2.0 you can use the _routing field for that purpose; even though it's been deprecated in ES 1.5, in your case it serves the purpose.
{
  "customer" : {
    "_routing" : {
      "required" : true,
      "path" : "customer_id"      <----- the field you use as the routing key
    },
    "properties" : { ... }
  }
}
Then when searching you simply need to specify &routing=<customer_id> in your search URL in addition to your customer id filter (since a given shard can host documents for different customers). Your search will go directly to the shard identified by the given routing key, and thus, only retrieve data from the specified customer.
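As an illustration, such a search could look like this (a sketch with the low-level Java REST client, which is newer than the ES versions discussed here; the index and field names come from the mapping above):

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();

// Route the search to the shard holding this customer's documents, and still
// filter by customer_id because that shard also hosts other customers.
Request search = new Request("GET", "/customers/_search");
search.addParameter("routing", "cid1");
search.setJsonEntity("{ \"query\" : { \"term\" : { \"customer_id\" : \"cid1\" } } }");
Response response = client.performRequest(search);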
Using a filtered alias here brings nothing, as the filter and routing key you'd include in the alias definition would not contribute anything additional, since the retrieved documents are already "filtered" (kind of) by the routing key. This approach is also way easier than trying to detect (on each new document to index) whether an alias exists and creating it if it doesn't.
UPDATE:
Now if you absolutely have/want to create filtered aliases, the more performant way would be the first one you mentioned:
First index your daily data
Then run a terms aggregation on your customer_id field with a size high enough (i.e. higher than the cardinality of the field, which was ~100 in your case) to make sure you capture all unique customer ids for your aliases (see the aggregation sketch at the end of this answer)
Loop over all the buckets to retrieve all unique customer ids
Create all aliases in one shot using one action for each customer_id
curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions" : [
    {
      "add" : {
        "index" : "customers",
        "alias" : "alias_cid1",
        "routing" : "cid1",
        "filter" : { "term" : { "customer_id" : "cid1" } }
      }
    },
    {
      "add" : {
        "index" : "customers",
        "alias" : "alias_cid2",
        "routing" : "cid2",
        "filter" : { "term" : { "customer_id" : "cid2" } }
      }
    },
    {
      "add" : {
        "index" : "customers",
        "alias" : "alias_cid3",
        "routing" : "cid3",
        "filter" : { "term" : { "customer_id" : "cid3" } }
      }
    },
    ...
  ]
}'
Note that you don't have to worry about whether an alias already exists: the whole command won't fail, it will simply ignore the existing alias.
When this command has run, you'll have all your aliases on your unique index, properly configured with a filter and a routing key.
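For step 2 above, the terms aggregation that collects the unique customer ids could look like this (a sketch with the low-level Java REST client; the size of 500 is just an arbitrary value above the ~100 cardinality mentioned earlier):

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();

// size 0: we only want the aggregation buckets, not the hits themselves.
Request agg = new Request("POST", "/customers/_search");
agg.setJsonEntity(
        "{ \"size\" : 0, \"aggs\" : { \"unique_customers\" : "
      + "{ \"terms\" : { \"field\" : \"customer_id\", \"size\" : 500 } } } }");
Response response = client.performRequest(agg);

// Parse the response body and read aggregations.unique_customers.buckets[*].key
// to obtain the customer ids, then build one "add" action per id as shown above.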
