I wonder how I should test a Rails API with Dredd, especially the show and index actions (/post/{id} and /post).
Should I fill my database with records before running Dredd, i.e. create a post record with id: 1 and so on?
Does Dredd always try to get the object with id: 1 (/post/1)?
I found an example project https://github.com/theodorton/dredd-test-rails but there is only one method (POST) described in the apib file https://github.com/theodorton/dredd-test-rails/blob/master/apiary.apib
Ad 1: Yes, you pretty much want to fill it with some data prior to running Dredd (and clean it up later). Or you can rely on the order of operations using the --sorted flag when launching Dredd (so a POST called before a GET will create the data).
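For example, a minimal seeding sketch (assuming a standard Rails Post model; the fixed id must match the example value your blueprint uses):

# db/seeds.rb -- seed a record with a known id so Dredd's GET calls succeed
Post.find_or_create_by!(id: 1) do |post|
  post.title = 'Example post'
  post.body  = 'Seeded so Dredd can GET /post/1'
end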
Ad 2: Your findings are indeed correct. What Dredd uses when calling a URI with a parameter in it is the example value as stated in the blueprint. For example, it will use 0 in the call to /folders/{id} as defined here: https://github.com/zdne/todoapi/blob/master/apiary.apib#L41
Edit:
All Dredd does at the moment is really take your endpoints as specified in the blueprint and call them with the example values stated there.
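For illustration, a minimal blueprint sketch (resource and parameter names are made up) where Dredd would call GET /post/1, because 1 is the example value of the id parameter:

## Post [/post/{id}]
+ Parameters
    + id: 1 (number) - ID of the post

### Retrieve a Post [GET]
+ Response 200 (application/json)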
I am trying to patch a calendar event with the Microsoft Graph API (from a Node express app).
I create a new event with client.api('/me/events').post(myEvent) and it works just fine (I see it appear in my calendar). The return value has an ID which is:
AAMkADc0Yjg2ODdmLTlkNDQtNGQ0Yi1iNjBmLTE1MDdmYzI4MGJkOABGAAAAAADt0ZJy6xMCRq23C8icFGeqBwAOM3XMH4d2SYMQ5psbvFytAAAAAAENAAAOM3XMH4d2SYMQ5psbvFytAAJ_B-B7AAA=
I then use client.api('/search/query').post(myQuery) to find the event based on some criteria, and this works fine. I receive an array of hits, with only one hit (which actually is the freshly created event, looking at the subject and body), and with a hitId equal to:
AAMkADc0Yjg2ODdmLTlkNDQtNGQ0Yi1iNjBmLTE1MDdmYzI4MGJkOABGAAAAAADt0ZJy6xMCRq23C8icFGeqBwAOM3XMH4d2SYMQ5psbvFytAAAAAAENAAAOM3XMH4d2SYMQ5psbvFytAAJ+B/B7AAA=
For some reason I don't understand, the two IDs are not fully identical: _ is changed to + and - is changed to /.
I now want to modify the event, and try to update it with
const hitId = hits[0].hitId
let newVal = hits[0].resource // hits comes from the result returned by the search query
newVal.id = hitId // needed because the 'resource' does not contain the id
client.api('/me/events/' + hitId).patch(newVal)
But I get an error: Resource not found for the segment 'B7AAA='.
Could you please tell me how to make the patch work (and explain why the ID from the search is not strictly like the one created). I have read several examples in the documentation (such as https://learn.microsoft.com/en-us/graph/search-concept-events) but I could not find a solution.
Many thanks!
So what is happening here is: PATCH /me/events/{hitId} is being resolved by the Graph API such that the forward slash in the hitId denotes a path segment, and Graph ends up using B7AAA= as a resource id, hence the error Resource not found for the segment 'B7AAA='.
A workaround that might work is to replace / in the hitId with %252F (a double-encoded slash). Note the global regex: a plain string pattern would only replace the first occurrence.
client.api(`/me/events/${hitId.replace(/\//g, '%252F')}`).patch(patch)
There is already an issue on GitHub asking for documentation on how to handle these base64-encoded resource ids containing /.
As for the two IDs being non-identical, the Graph API will accept both of them and resolve them to the same resource. I have no idea why they are different, though.
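Incidentally, +// and -/_ are exactly the characters that differ between the standard and URL-safe base64 alphabets, so the search API appears to return the id in standard base64 while the events API uses the URL-safe form. An untested alternative sketch is to map the hitId back to the URL-safe alphabet before patching:

// Map the standard-base64 hitId back to the URL-safe form Graph uses for event ids
const urlSafeId = hits[0].hitId.replace(/\+/g, '-').replace(/\//g, '_')
client.api(`/me/events/${urlSafeId}`).patch(newVal)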
I am just starting out with Kubernetes and the Operator SDK, trying to build my first operator, and I probably have a simple question.
Question
How to detect a configuration change in the custom resource yaml in the reconcile loop and take an action according to the change?
I have some config properties specified in my CR spec:
apiVersion: my.example.com/v1alpha1
kind: StoreApp
metadata:
  name: mystoreapp
spec:
  username: technicalUser
  password: abcd1234
  catalogs:
    - name: Bikes
      description: Bikes_description
    - name: Cars
      description: Cars_description
When I add a new custom resource of this kind, I want my controller to create a new pod with my app image running inside (in a web server). When my app is up and running for the first time, I want to configure it (add the catalogs from the spec) via an HTTP request from the operator.
So far so good, but I also want to be able to change these catalogs while my app is up and running.
For example, I add a new catalog to the spec (through kubectl patch). My operator's reconcile method will be called, but how can I tell that the spec has changed? I am not sure it's a good idea to make HTTP calls to my app to fetch all its catalogs and compare them with the catalogs from the spec. Is this the correct way to detect a change?
I am thinking about two other ways to detect that something was updated, but I am not sure whether they would work properly or are the best way to do this.
The first idea is to fetch the StoreApp instance with client.Get(...), but as far as I understand this calls the API server and returns the updated version of mystoreapp. I read about a local index that acts as a cache for these objects, and thought I could check for a difference between the cached object and the object returned from the API server. But I did not find how to get the object from this local index, so I was not able to compare the two.
The second idea is to keep a map storing the hash of the whole spec object and to compare that hash each time against the hash of the object fetched with client.Get(...). I think this would work, but there should be a better way to do it.
I have read some Java operators for K8s that had methods like onAdd, onUpdate, and onDelete, but I couldn't find anything similar in the Operator SDK. Is there something like this?
Every answer will be helpful. Thank you in advance!
Best Regards,
Hristiyan
The recommended practice is to look at the spec you received, and compare it to the state of the world/cluster, so retrieving the catalogs and comparing them to the spec is indeed the proper way to do it.
The reasoning for this recommendation is that the order of the events you get from Kubernetes is not guaranteed to be consistent, it's not guaranteed that you'll receive every event in a reasonable amount of time, and it's not even guaranteed that you'll receive each event only once. So it's best to base your decisions on the requested state (the spec) compared to the actual state, rather than on which specific event triggered the reconciliation.
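As a rough sketch with the Go Operator SDK / controller-runtime (getCatalogs, createCatalog, the module path, and the Catalog type are hypothetical placeholders, not SDK API):

package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	myv1alpha1 "my.example.com/storeapp/api/v1alpha1" // hypothetical module path
)

type StoreAppReconciler struct {
	client.Client
}

// Reconcile compares the desired catalogs from the spec against the actual
// catalogs reported by the running app and creates whatever is missing.
func (r *StoreAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var app myv1alpha1.StoreApp
	if err := r.Get(ctx, req.NamespacedName, &app); err != nil {
		// The CR may have been deleted in the meantime; nothing to do then.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Actual state: ask the running app which catalogs it currently has.
	actual, err := getCatalogs(ctx, &app)
	if err != nil {
		return ctrl.Result{}, err
	}

	// Desired state: every catalog listed in the spec must exist in the app.
	for _, want := range app.Spec.Catalogs {
		if !actual[want.Name] {
			if err := createCatalog(ctx, &app, want); err != nil {
				return ctrl.Result{}, err
			}
		}
	}
	return ctrl.Result{}, nil
}

// Hypothetical helpers wrapping the app's own HTTP API.
func getCatalogs(ctx context.Context, app *myv1alpha1.StoreApp) (map[string]bool, error) {
	return map[string]bool{}, nil // placeholder: GET the catalog list from the app
}

func createCatalog(ctx context.Context, app *myv1alpha1.StoreApp, c myv1alpha1.Catalog) error {
	return nil // placeholder: POST the new catalog to the app
}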
I have recently upgraded an application from Elasticsearch 5.3.1 to 6.0.
My requirement is to get all the indices associated with a specific alias.
I used the snippet below to fetch them. It works fine in 5.3.1, returning only the indices associated with that specific alias.
GetAliasesResponse r = client.admin().indices()
        .getAliases(new GetAliasesRequest("givenalias")).actionGet();
But after upgrading to ES 6.0, the same snippet returns all the indices in the system.
Ideally it should only return the indices associated with the given alias, not the others. This worked in Elasticsearch 5.3.1.
TL;DR: It is an intended breaking change in the Java API of Elasticsearch (though it is not explicitly mentioned in the "Breaking changes in 6.0 » Java API changes" page).
The following is the story of discovering this fact. (Note: the original answer was heavily edited, hence comments might be out of date.)
Breaking changes in the REST API in 6.0
First I noticed that this part of the REST API changed in Elasticsearch 6.0. There are two reported breaking changes concerning aliases:
GET /_aliases,_mappings syntax was removed in favor of GET /_aliases and GET /_mappings
Indices aliases api resolves indices expressions only against indices
However, nothing is mentioned there concerning the OP's case.
From what I have seen doing queries, this query works in Elasticsearch 5:
GET /alias1/_aliases
And does not work in Elasticsearch 6, giving the following error:
{
"error": "Incorrect HTTP method for uri [/alias1/_aliases] and method [GET], allowed: [PUT]",
"status": 405
}
Interestingly, GET /alias1/_alias works in both versions and returns the same result.
Moreover, I didn't manage to find an example of GET /alias1/_aliases in the documentation of either 5.6 or 6.0.
Reproducing the bug
After realising that the OP is actually using the Java API, I managed to reproduce the exact same behavior.
The following code:
GetAliasesResponse alias1 = client.admin().indices()
.getAliases(new GetAliasesRequest("alias1")).actionGet();
In ES 5 the IntelliJ debugger showed just the single expected entry for alias1. For ES 6 there were extra keys in the output, with empty values.
Diving into the source code
A quick search over the Elasticsearch codebase gave me the final explanation. In ES 5 there was a test, testIndicesGetAliases, checking that the alias map returned for a test alias has exactly one entry (IndexAliasesIT.java#L554):
logger.info("--> getting alias1");
GetAliasesResponse getResponse = admin().indices().prepareGetAliases("alias1").get();
assertThat(getResponse, notNullValue());
assertThat(getResponse.getAliases().size(), equalTo(1));
And in 6.0 it checks that the size is 5! (IndexAliasesIT.java#L573)
logger.info("--> getting alias1");
GetAliasesResponse getResponse = admin().indices().prepareGetAliases("alias1").get();
assertThat(getResponse, notNullValue());
assertThat(getResponse.getAliases().size(), equalTo(5));
This change was introduced in this commit, which is related to these issues:
Remove comma-separated feature parsing for GetIndicesAction #24723
_alias API no longer accepts index wildcards #25090
This is actually interesting, because one of the reported REST API breaking changes that we saw above also broke the compatibility of some Java API calls.
What you can do
In the short term, you just need to filter out the keys with empty values.
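For example, a rough sketch against the 6.x TransportClient (iterating the map returned by getAliases; adjust the names to your code):

import java.util.ArrayList;
import java.util.List;

import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;
import org.elasticsearch.cluster.metadata.AliasMetaData;

// ...

GetAliasesResponse response = client.admin().indices()
        .prepareGetAliases("givenalias").get();

// ES 6 also returns entries for indices that do not carry the alias,
// with empty alias lists; keep only the indices that actually have it.
List<String> indices = new ArrayList<>();
for (ObjectObjectCursor<String, List<AliasMetaData>> entry : response.getAliases()) {
    if (entry.value != null && !entry.value.isEmpty()) {
        indices.add(entry.key);
    }
}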
In the longer term, I think it makes sense to migrate to the Java High Level REST Client, since Elastic plans to deprecate the TransportClient in version 7.0:
We plan on deprecating the TransportClient in Elasticsearch 7.0 and removing it completely in 8.0. Instead, you should be using the Java High Level REST Client, which executes HTTP requests rather than serialized Java requests.
In general, Elasticsearch breaks compatibility quite often, so it's better to stay away from its dark corners, like the Java API.
Thanks for reading.
Hope that helps!
I am trying to get started using Viewpoint against EWS in Ruby, and it's not making a lot of sense at the moment. I am wondering where I can find some good example code, or get some pointers. I am using 1.0.0-beta.
For example: I know the name of the calendar folder I want to use, so I could search for it, but how do I access methods on that folder once I find it? What are the appropriate parameters, etc.?
Any advice?
If you haven't read it yet I would recommend the README file in the repository. It has a couple of examples that should put you on the right path. Also, the generated API documentation should give you enough to work with.
http://rubydoc.info/github/WinRb/Viewpoint/frames
At a very basic level you can get all of your calendar events with the following code:
calendar = client.get_folder :calendar
events = calendar.items
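For the find-by-name case from your question, something like this should work (endpoint and credentials are placeholders; get_folder_by_name is one of the folder accessors in 1.0.0-beta, so treat the exact method name as an assumption if your version differs):

require 'viewpoint'

endpoint = 'https://example.com/ews/Exchange.asmx' # placeholder
client   = Viewpoint::EWSClient.new endpoint, 'user', 'pass'

# Look the folder up by its display name instead of the :calendar shortcut
calendar = client.get_folder_by_name 'My Calendar'
calendar.items.each { |event| puts event.subject }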
I hope that gives you a little more to get started with.
Follow-up:
Again, I would point you to the API docs for concrete methods like #items. There are, however, dynamically added methods depending on the type, which you can list with obj.ews_methods. In the case of CalendarItem, one of those methods is #name, so you can call obj.name to get the folder name. The dynamic methods are all backed by a formatted Hash based on the returned SOAP packet. You can see it in its raw format by issuing obj.ews_item.
Cheers,
Dan
I'm writing a web app using Ruby/Sinatra and DataMapper. I have some tables that I want to preload data into. Is there a best practice for that? For example, I have a table called references:
id  name  url
--  ----  -------------------
1   AOL   http://www.aol.com
2   Dell  http://www.dell.com
I want to ensure that this data is always loaded when someone starts up the app for the first time, and that the IDs do not change.
Thoughts?
There may be a gem out there that does this for you which I'm not aware of, and if there is, I'd love to hear about it. But for my last project I simply had my import data in a YAML file and wrote a Ruby class that parsed the file and used DataMapper to insert the data for me.
I then created a Rake task that allowed me to run this Ruby class.
Is this the sort of solution you might be after? If so, I can give you more info on it.
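In rough outline it looked something like this (the file names, the Reference model, and the YAML layout are assumptions based on your references table):

# db/seeds/references.yml (assumed layout):
#   - { id: 1, name: AOL,  url: "http://www.aol.com" }
#   - { id: 2, name: Dell, url: "http://www.dell.com" }

# seeder.rb
require 'yaml'

class Seeder
  def self.run
    YAML.load_file('db/seeds/references.yml').each do |row|
      # first_or_create keeps the fixed ids stable across repeated runs
      Reference.first_or_create({ id: row['id'] },
                                name: row['name'], url: row['url'])
    end
  end
end

# Rakefile
task :seed do
  require_relative 'seeder'
  Seeder.run
end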