The docs don't explain any difference. I could imagine that calling the methods on a contract restricts them to events that contract emits, but the filter object exists for exactly these restrictions, doesn't it?
So is it just a convenience, so that you can call the methods either way?
Docs for the Contract.Methods
Docs for the Provider.Methods
As a newcomer to Spring, I would like to know the actual difference between:
@PostMapping
@PutMapping
@PatchMapping
My understanding is that PUT is for updates, but then we have to fetch the element by its id and then save() it. Similarly, the save() method is also used by POST, which automatically replaces the entity by its identifier (primary key). In my application I am able to use all three of these methods interchangeably.
What is the point of having the PATCH, POST, and PUT types when we use the repository's save method for all of them?
HTTP method tokens are used to define request semantics in such a way that general-purpose components (browsers, reverse proxies, etc.) can exploit the information to do intelligent things.
The easiest of these is that PUT has idempotent semantics; if an HTTP response is lost, a general-purpose component knows that it may autonomously retry the request. This in turn gives you a bit of extra reliability over an unreliable network, "for free".
The fact that your origin server uses the same persistence mechanism for each is an implementation detail, something deliberately hidden behind the "uniform interface".
The difference between PATCH and POST is subtle; PATCH gives you an unambiguous way to designate that the enclosed entity is a patch document, and offers a mechanism for discovering which patch document formats are understood by the origin server, neither of which you get from POST alone.
What's less clear, at least to me, is whether PATCH semantics allow an intermediate component to do something intelligent with a request - in other words, do the additional constraints (relative to POST) allow intermediaries to do anything interesting?
As best I can tell, the semantics of a PATCH request are more specific, but not actionably more specific -- certainly not as obviously as we have in the case of safe or idempotent request semantics.
POST is for creating a brand new object.
PUT will replace all of an object's properties in one go.
Leaving a property empty will empty the value in the datastore.
PATCH does a partial update of an object.
You can send it just the properties which should be updated.
A PATCH request that includes all of an object's properties will have the same effect as a PUT request, but the two are still not the same.
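To make the distinction concrete, here is a minimal sketch of how the three mappings might look in a Spring controller. UserController, UserRepository, User, and its fields are hypothetical names, not taken from the question:

import java.util.Map;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/users")
public class UserController {

    private final UserRepository repository;

    public UserController(UserRepository repository) {
        this.repository = repository;
    }

    // POST: create a brand new resource; the server assigns the identifier.
    @PostMapping
    public User create(@RequestBody User user) {
        return repository.save(user);
    }

    // PUT: replace the entire resource at this id; omitted fields end up empty.
    @PutMapping("/{id}")
    public User replace(@PathVariable Long id, @RequestBody User user) {
        user.setId(id);
        return repository.save(user);
    }

    // PATCH: apply only the fields the client actually sent.
    @PatchMapping("/{id}")
    public User update(@PathVariable Long id, @RequestBody Map<String, Object> changes) {
        User existing = repository.findById(id).orElseThrow();
        if (changes.containsKey("name")) {
            existing.setName((String) changes.get("name"));
        }
        if (changes.containsKey("email")) {
            existing.setEmail((String) changes.get("email"));
        }
        return repository.save(existing);
    }
}

Note that all three handlers end in the same repository.save() call, which is exactly the questioner's observation: the different methods exist so the client and any intermediary know what the request means, not to change how it is persisted.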
The HTTP method is a convention not specific to Spring but is a main pillar of the REST API specification.
They make sure the intent of a request is clear and both the provider and consumer are in agreement of the end result.
Kind of like the pedals or gear shift in our cars. It's a lot easier when they all work the same.
Switching them up could lead to a lot of accidents.
For us as developers, it means we can expect most REST APIs to behave in a similar way, assuming an API is implemented according to or reasonably close to the specification.
POST/PUT/PATCH may look alike but there are subtle differences.
As you mentioned, the PUT and PATCH methods require some kind of ID of the object to be updated.
Imagine a combined POST/PUT/PATCH endpoint that receives a request with an object, omitting some of its properties. How does the API react? Does it:
Update only the received properties.
Update the entire object, emptying the omitted properties.
Attempt to create a new object.
How is the consumer of the endpoint to know which of the three actions the server took?
This is where the HTTP method and specification/convention help determine the appropriate course of action.
Spring may provide a save method that can handle creation, full updates, and partial updates alike. But this is not necessarily the case for other frameworks, whether in Java or other languages.
Also, your application may be simple enough to handle POST/PUT/PATCH in the same controller method right now.
But over time, as your application grows more complex, the separation of concerns makes your code a lot cleaner, more readable, and more maintainable.
Question
Is there an officially recommended way to create a custom RxJS Subject?
Use Case
I have a need for a QueueSubject, i.e. a Subject that queues all values passed to its next method until there is a subscriber. This is different from the built-in ReplaySubject because the ReplaySubject does not clear its buffer upon a subscription.
What I have learned so far
An exact implementation of what I need is available in this GitHub project by James Pike. The reason for my question, despite this readily available solution, is that the _subscribe method is an internal method. It is even marked as @deprecated, so if a linter is used, a linter-rule exception needs to be added to the class to suppress the deprecation warning.
I did not find anything in the documentation about how to create a custom Subject.
You can use any Subject implementation as a reference for your own custom one, for example this one on GitHub.
Concerning _subscribe: you can override it in your custom class, but never call it directly from an outside consumer class (this is why it is annotated with @deprecated). The function is called by the Subject class internally, following the Template Method pattern.
In summary: Your linked implementation looks valid to me.
In the past, I set up two separate AWS Lambdas written in Java: one for use with Alexa and one for use with Api.ai. They simply return "Hello world" to each assistant API. So although they are simple, they work. As I wrote more and more code for each one, I started to see how similar my Java code was, and that I was just repeating myself by having two separate Lambdas.
Fast forward to today.
What I'm working on now is having a single AWS Lambda that can handle input from both Alexa and Api.ai, but I'm having some trouble. Currently, my thought is that when the Lambda is run, there would be a simple if statement like so:
(The following is not real code, just what I think I can do in my head.)
if (figureOutIfInputType.equals("alexa")) {
    runAlexaCode();
} else if (figureOutIfInputType.equals("api.ai")) {
    runApiAiCode();
}
The thing is, now I need to somehow tell whether the function is being called by Alexa or by Api.ai.
This is my actual java right now:
public class App implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object input, Context context) {
        System.out.println("myLog: " + input.toString());
        return "Hello from AWS";
    }
}
I then ran the Lambda from Alexa and Api.ai to see what Object input would get generated in Java.
API.ai
{id=asdf-6801-4a9b-a7cd-asdffdsa, timestamp=2017-07-28T02:21:15.337Z, lang=en,
result={source=agent, resolvedQuery=hi how are you, action=, actionIncomplete=false,
parameters={}, contexts=[], metadata={intentId=asdf-3a2a-49b6-8a45-97e97243b1d7,
webhookUsed=true, webhookForSlotFillingUsed=false, webhookResponseTime=182,
intentName=myIntent}, fulfillment={messages=[{type=0, speech=I have failed}]},
score=1}, status={code=200, errorType=success}, sessionId=asdf-a7ac-43c8-8ae8-bc1bf5ecaad0}
Alexa
{version=1.0, session={new=true,
sessionId=amzn1.echo-api.session.asdf-7e03-4c35-9d98-d416eefc5b23,
application={applicationId=amzn1.ask.skill.asdf-a02e-4938-a747-109ea09539aa},
user={userId=amzn1.ask.account.asdf}},
context={AudioPlayer={playerActivity=IDLE},
System={application={applicationId=amzn1.ask.skill.07c854eb-a02e-4938-a747-109ea09539aa},
user={userId=amzn1.ask.account.asdf},
device={deviceId=amzn1.ask.device.asdf, supportedInterfaces={AudioPlayer={}}},
apiEndpoint=https://api.amazonalexa.com}},
request={type=IntentRequest,
requestId=amzn1.echo-api.request.asdf-5de5-4930-8f04-9acf2130e6b8,
timestamp=2017-07-28T05:07:30Z, locale=en-US,
intent={name=HelloWorldIntent, confirmationStatus=NONE}}}
So now I have both my Alexa and my Api.ai output, and they're different. That's good: I'll be able to tell which one is which. But I'm stuck. I'm not really sure whether I should try to create an AlexaInput object and an ApiAIinput object.
Am I doing this all wrong? Am I wrong in trying to have one Lambda fulfill my "assistant" requests from more than one service (Alexa and ApiAI)?
Any help would be appreciated. Surely someone else must be writing their assistant functionality in AWS and wanting to reuse their code for both "assistant" platforms.
I had the same question and the same thought, but as I got further and further into implementing it, I realized that it wasn't quite practical, for one big reason:
While a lot of my logic needed to be the same, the format of the results was different. Sometimes even the details or formatting of the results would be different.
What I did was go back to some concepts that were familiar in web programming by dividing it into two parts:
A back-end system that was responsible for taking parameters and applying the business logic to produce results. These results would be fairly low-level: not entire phrases, but more a set of key/value pairs that indicated what kind of result to give and what values would be needed in that result.
A front-end system that was responsible for handling things that were Alexa/Assistant specific. So it would take the request, extract parameters and state, call the back-end system with this information, get a result back which included what kind of reply to send and the values needed, and then format the exact phrase (and any other supporting info, such as a card or whatever) and put it into a properly formatted response.
The front-end components would be a different lambda function for each agent type, mostly to make the logic a little cleaner. The back-end components can either be a library function or another lambda function, whatever makes the most sense for the task, but is independent of the front-end implementation.
I suppose one could also do this by having an abstract parent class that implements the back-end logic, and having the front-end logic be subclasses of this. I wouldn't do it this way because it doesn't provide as clear an interface boundary between the two, but it's not unreasonable.
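For illustration, a rough sketch of that parent-class approach (every name here is hypothetical):

import java.util.Map;

// Sketch of the abstract-parent idea; all class and method names are invented.
public abstract class AssistantHandler {

    // The shared back-end logic lives in the parent, identical for every platform.
    public final String handle(Map<String, Object> rawRequest) {
        String query = extractQuery(rawRequest);
        String resultKey = runBusinessLogic(query);
        return formatResponse(resultKey);
    }

    private String runBusinessLogic(String query) {
        // Placeholder for the real shared logic: map a query to a result key.
        return query.isEmpty() ? "fallback" : "greeting";
    }

    // Platform-specific parsing and response formatting are left to subclasses,
    // e.g. an AlexaHandler and an ApiAiHandler.
    protected abstract String extractQuery(Map<String, Object> rawRequest);
    protected abstract String formatResponse(String resultKey);
}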
You can achieve the result (code reuse) a different way.
Firstly, create a handler method for each type of event (Alexa, API Gateway, etc.) using the aws-lambda-java-events library. Some information here:
http://docs.aws.amazon.com/lambda/latest/dg/java-programming-model-handler-types.html
Each entry point method should deal with the semantics of the event triggering it (API Gateway) and call into common code to give you code reuse.
Secondly, upload your JAR/ZIP to an S3 bucket.
Thirdly, for each event you want to handle - create a Lambda function, referencing the same ZIP/JAR in the S3 bucket and specifying the relevant entry point.
This way, you'll get code reuse without having to juggle multiple copies of the code on AWS, albeit at the cost of having multiple Lambdas defined.
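A sketch of what that looks like in code, with hypothetical class and method names; each Lambda function's handler setting points at a different method of the same class in the same JAR:

import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;

// One JAR, two entry points, each registered as the handler of a separate Lambda.
public class Handlers {

    // Configure one Lambda with the handler string "com.example.Handlers::handleAlexa"
    public String handleAlexa(Map<String, Object> input, Context context) {
        return commonLogic("alexa", input);
    }

    // Configure the other Lambda with "com.example.Handlers::handleApiAi"
    public String handleApiAi(Map<String, Object> input, Context context) {
        return commonLogic("api.ai", input);
    }

    // The shared business logic both entry points reuse.
    private String commonLogic(String source, Map<String, Object> input) {
        return "Hello from AWS via " + source;
    }
}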
There's a great tool that supports working this way, called the Serverless Framework, which I'd highly recommend looking at:
https://serverless.com/framework/docs/providers/aws/
I've been using a single Lambda to handle Alexa ASK and Microsoft Luis.ai responses. I'm using Python instead of Java, but the idea is the same, and I believe that using an AlexaInput and an ApiAIinput object, both extending the same interface, should be the way to go.
I first use the context information to identify where the request is coming from and parse it into the appropriate object (I use a simple nested dictionary). Then I pass this to my main processing function and, finally, pass the output to a formatter, again based on the context. The formatter will be aware of what you need to return. The only caveat is handling session information, which in my case I serialize to my own DynamoDB table anyway.
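In Java, that first dispatch step might look like the sketch below. It keys off fields visible in the dumps above (Alexa payloads carry version and session, API.ai payloads carry result); runAlexaCode and runApiAiCode stand in for the questioner's hypothetical methods:

import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class App implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> input, Context context) {
        // Alexa requests carry "version" and "session"; API.ai requests carry "result".
        if (input.containsKey("session") && input.containsKey("version")) {
            return runAlexaCode(input);
        } else if (input.containsKey("result")) {
            return runApiAiCode(input);
        }
        return "Unknown caller";
    }

    private String runAlexaCode(Map<String, Object> input) {
        return "Hello, Alexa";
    }

    private String runApiAiCode(Map<String, Object> input) {
        return "Hello, API.ai";
    }
}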
Maybe this is counterproductive, I don't know, but right now I am in need of a debugger in IntelliJ that is aware of EasyMock mocks, and especially of what the mocked methods will actually return.
For example, I have a transport interface, ITransport, which has some methods that have to be mocked, and where I only want some of the methods to return something. E.g.
ITransport myTransport = createMock(ITransport.class);
I want myTransport.getID() to return a mocked ID 10.
expect(myTransport.getID()).andReturn(10);
With ID 10, I want a method to be invoked once:
myTransport.publish(anyObject());
expectLastCall().once();
Something in the transport class breaks, myTransport isn't called, and my test fails. Now I just want to step through the code with the debugger to check why my test fails. So I add a breakpoint to inspect the values of the mocked myTransport object. But they all say "null", even the ID. After some brief investigation, I assume the cause of this is the EasyMock mock class: it doesn't actually store the values on the object (which sounds reasonable), and instead returns the mocked value at runtime when the method is called.
So, are there any mock-aware debuggers for IntelliJ that let me see which value a method will eventually return?
Yes, and before I receive responses saying that "the debugger is not required if you write unit tests for everything", I just want to state that I know about that. And this is legacy code, or at least code that wasn't written with testing in mind.
This may not be what you're looking for... but it feels like the problem is more on the debugging approach.
A mock object is really just that - a mock - meaning it's a fake, empty object that doesn't do anything unless you specifically tell it to. When your debugger inspects the mock object, it won't find any values that you did not specifically program it to return. It's not meant to hold values.
EasyMock has an argument capture feature, but since you just want it for debugging, this is probably the wrong approach. Mockito has a spying feature that could be suitable for what you want, but it would involve additional mock-programming statements.
I would say the easiest approach would be to implement your own ITransport just for use in your test class. That way you can implement getID() to always return 10 and put an assert statement inside your publish(). And you can implement whatever other methods you need in order to capture additional data for debugging purposes. And you get to keep this test-only ITransport for either shared use or future debugging needs.
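A sketch of such a test-only implementation, assuming ITransport declares roughly the two methods used above:

// Hand-rolled test double; assumes ITransport declares getID() and
// publish(Object) roughly as used in the question.
public class FakeTransport implements ITransport {

    private int publishCount = 0;

    @Override
    public int getID() {
        return 10; // a debugger sees a real value here, unlike on a mock
    }

    @Override
    public void publish(Object message) {
        publishCount++; // set a breakpoint here, or assert on the argument
    }

    public int getPublishCount() {
        return publishCount;
    }
}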
Indeed, the methods are mocked, but the internal implementation of the class is left to itself.
Usually, you don't need to know what is returned, since you're the one who recorded it in the first place.
You can also evaluate myTransport.getID() in your debugger, but doing this will consume the expectation.
However, it seems like a good idea to be able to list all the currently pending expectations on a mock, and maybe to have a peek function. You can request such features on the EasyMock bug tracker: http://jira.codehaus.org/browse/EASYMOCK
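In the meantime, one workaround that makes evaluating in the debugger safer is to relax the expected call count when recording (this uses EasyMock's usual static imports), so an extra call from the Evaluate window doesn't consume a one-shot expectation:

// Relaxed recording: getID() may be called any number of times,
// so a debugger evaluation doesn't starve the test.
expect(myTransport.getID()).andReturn(10).anyTimes();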
I am currently implementing a repository in my MVC 3 application. All the repository methods that I am implementing that change the data in some way (Add* and Delete*, primarily) currently DO NOT call the SaveChanges method. I explicitly require the user of my repository to do this.
The other option, of course, is that I always call SaveChanges in my mutation methods.
What tends to be the best practice here and why? I've been doing it the first way long enough that I have become used to it, but I'm curious if there is a reason the second would be better?
Normally, a unit of work from a business or use-case perspective involves modifications to many data entities.
You really want to commit all the modifications in one transaction, or none of them if something fails while submitting.
So it's a good idea to call SaveChanges only once, at the end of your unit of work, and not inside your Add, Update, and Delete methods.
I think there are good reasons for doing it either way. However, it's important that you make it known if your mutation methods do not persist changes to the database. What I'll do is provide overloads that allow the callers to specify. Also, providing the overloads makes it very clear via IntelliSense (if we can assume everybody's IDE has that feature in 2012) that there is an overload for each of those methods. Otherwise, you'll be depending on the users of your repository to read your documentation (and nobody does that anymore, right?).