Anonymous function on visitObject? - apollo-server

Right now I use visitFieldDefinition and do a bit of computation in each field's field.resolve function. Is there a way to do this at the visitObject level?
e.g. I update a time value in my database whenever a user visits a field. One query at the object level might trigger that processing several hundred times, which is completely redundant. Is there some analogous field.resolve function at the visitObject level?

You can have a schema directive with OBJECT as a target that iterates through that object's fields and modifies each field's resolver. However, the resolver logic is always at the field level because only fields are resolved. If the logic you're repeating is independent of the arguments passed to the resolver, then it can reside inside the visitObject, otherwise it needs to be inside the resolver function itself.
If the work you're doing inside the resolver is redundant, then you can probably cache whatever value you're repeatedly calculating. The cache can be a variable inside your directive class (in which case it will only be cleared when the process is restarted) or it can be injected into the context (in which case it will be request-specific and be destroyed after your request completes).
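As an illustration of both points, here is a minimal sketch of an OBJECT-targeted directive that wraps every field's resolver and uses a per-request flag on the context, so the database update runs at most once per request. It assumes graphql-tools' SchemaDirectiveVisitor (re-exported by apollo-server); TouchDirective, updateLastVisited, and the _touched context key are hypothetical names.

const { SchemaDirectiveVisitor } = require('graphql-tools');
const { defaultFieldResolver } = require('graphql');

class TouchDirective extends SchemaDirectiveVisitor {
  visitObject(type) {
    const fields = type.getFields();
    Object.keys(fields).forEach((fieldName) => {
      const field = fields[fieldName];
      const { resolve = defaultFieldResolver } = field;
      field.resolve = async (source, args, context, info) => {
        // Request-scoped cache: the expensive update runs once per
        // request for this object type, not once per resolved field.
        context._touched = context._touched || {};
        if (!context._touched[type.name]) {
          context._touched[type.name] = true;
          await updateLastVisited(type.name); // hypothetical DB call
        }
        return resolve(source, args, context, info);
      };
    });
  }
}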

Related

How can warm start a method in TwinCAT?

One of the features of methods, according to the definition on the Beckhoff site, is that:
All data of a method are temporary and are only valid while the method is executed (stack variables). This means that TwinCAT re-initializes all variables and function blocks, which you have declared in a method, with each call of the method.
Is there any way to use a method in the PLC loop with a warm start? That is, can the method's declared variables be initialized just once, on the first call, and retain their values on all subsequent calls?
Yes, this is possible through VAR_INST or VAR_STAT.
Just declare your variables as VAR_INST/VAR_STAT instead, and they will retain their values between calls.
VAR_INST means the variable is unique for every instance of the function block where the method resides, while VAR_STAT acts as a static/global (all instances point to the same memory location).
https://infosys.beckhoff.com/english.php?content=../content/1033/tc3_plc_intro/2528798091.html&id=
https://infosys.beckhoff.com/english.php?content=../content/1033/tc3_plc_intro/2528787339.html&id=
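A minimal Structured Text sketch, assuming the method lives in a function block (VAR_INST is only allowed in FB methods); CountCalls and nCalls are hypothetical names:

METHOD CountCalls : INT
VAR_INST
    nCalls : INT := 0; // initialized once per FB instance, retained between calls
END_VAR

nCalls := nCalls + 1;
CountCalls := nCalls;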

Parameter validation vs property validation

Most (almost all?) validation frameworks are based on reading an object's property values and checking whether they obey the validation rules.
Do we really need it?
If we pass valid parameters into the object's constructor, property setters, and other methods, the object seems to be perfectly valid, and property value checks are not needed!
Isn't it better to validate parameters instead of properties?
What validation frameworks can be used to validate parameters before passing them into an object?
Update
I'm considering the situation where a client invokes a service method and passes some data. The service method must check the data, create or load domain objects, execute the business logic, and persist changes.
It seems that most of the time the data is passed by means of data transfer objects, and property validation is used because a DTO can be validated only after it has been created by the network infrastructure.
This question can spread out into a wider topic. First, let's see what Martin Fowler has said:
One copy of data lies in the database itself. This copy is the lasting record of the data, so I call it the record state.
A further copy lies inside in-memory Record Sets within the application. This data was only relevant for one particular session between the application and the database, so I call it session state.
The final copy lies inside the GUI components themselves. This, strictly, is the data they see on the screen, hence I call it the screen state.
Now I assume you are talking about validation at the session state: whether it is better to validate the object's properties or to validate the parameters. It depends. First, it depends on whether you use an Anemic or a Rich Domain Model. If you use an anemic domain model, it is clear that the validation logic will reside in another class.
Second, it depends on what type of object you are building. A framework / operation / utility object needs validation against its object properties, e.g. C#'s FileStream, where the stream needs valid properties such as a file path, memory pointer, file access mode, etc. You wouldn't want every developer who uses the utility to have to validate the input beforehand, or it will crash in some later operation and give the wrong error message instead of failing fast.
Third, you need to consider that parameters can come from many sources and in many forms, while a class / object property has only one definition. You would need to place parameter validation at every input source, while object property validation only needs to be defined once. But you also need to understand the object's state: an object can be valid in one state (draft mode) and invalid in another (submission mode).
Of course you can also add validation at other state levels, such as the database (record state) or the UI (screen state), but each has different pros and cons.
What validation frameworks can be used to validate parameters before passing them into an object?
C#'s ASP.NET MVC can do one kind of parameter validation (for data types) at the controller level, before the data reaches your domain objects.
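A minimal sketch of that controller-level validation using data annotations; CustomerDto and CustomerController are hypothetical names:

using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

public class CustomerDto
{
    [Required, StringLength(100)]
    public string Name { get; set; }

    [Range(0, 150)]
    public int Age { get; set; }
}

public class CustomerController : Controller
{
    [HttpPost]
    public ActionResult Create(CustomerDto dto)
    {
        // Model binding has already applied the annotation rules
        // before any domain object is constructed.
        if (!ModelState.IsValid)
            return View(dto);

        // Safe to build the domain object from dto here.
        return RedirectToAction("Index");
    }
}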
Conclusion
It depends entirely on what architecture and kind of object you want to make.
In my experience such validations were done when dealing with complex validation rules and a Parameter Object. Since we need to keep the separation of concerns, the validation logic is not in the object itself. So yes, we really need it.
What is more interesting: why construct expensive objects only to validate them afterwards?

Hibernate : em.getReference().getList().add(record) performance?

When using Hibernate (JPA), if I do the following call :
MyParent parent = em.getReference(MyParent.class, "myId");
parent.getAListMappedAsOneToMany().add(record);
record.setParent(parent);
Is there any performance problem?
My thought is that getReference does not load the entity, and getAListMappedAsOneToMany().add does not need to load the list, as it is defined as lazy fetch.
getAListMappedAsOneToMany could return a very big list if it were actually accessed (by calling a get or size method).
Could you confirm that there is no performance problem with such code?
getReference() doesn't go to the database, and returns a proxy. But if you call a method on the proxy, it initializes the proxy and gets the entity data from the database. So since you call getAListMappedAsOneToMany() on your entity, you don't gain anything by calling getReference() instead of find().
Similarly, the list is loaded lazily. This means it will only be loaded when you call a method on it. And you do call a method on it: add(). So the data of the elements in the list is also loaded from the database.
Turn on SQL logging in development to see and understand all the queries executed by Hibernate.
If you want to avoid loading the list, replace your code by
MyParent parent = em.getReference(MyParent.class, "myId");
record.setParent(parent);
This won't load anything from the database, and it will make the association persistent because Record.parent is the owner side of the association. But beware that this will also make your in-memory object graph inconsistent if the parent has already been loaded before.
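Putting that together, a minimal sketch (entity names follow the question; the JPA signature is getReference(Class, id)):

MyParent parent = em.getReference(MyParent.class, myId); // proxy only, no SELECT
Record record = new Record();
record.setParent(parent); // owning side carries the foreign key
em.persist(record);
// On flush, Hibernate issues a single INSERT for record;
// the parent's collection is never initialized.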
getReference() is useful when you don't want to use any members of the object, but want to hand a reference to it to another object. For example, when entity A references entity B and you want to set your b as the B of A, then getReference() is what you need.
But in your case, when you get the proxy object, you immediately try to reach one of its members (aListMappedAsOneToMany). As a result, the whole parent object will be loaded from the database.
It is true that getAListMappedAsOneToMany().add(record) will not load the collection from the database, but only if you map it with inverse="true".
You can learn more information about performance from:
http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/performance.html#performance-collections-mostefficentinverse
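For reference, the inverse side in JPA annotations is expressed with mappedBy; a minimal sketch, assuming the entity names from the question:

@Entity
public class MyParent {
    @Id
    private Long id;

    // mappedBy makes this the inverse side (the annotation equivalent
    // of inverse="true" in hbm.xml): changes to the collection itself
    // do not generate SQL; the Record.parent field does.
    @OneToMany(mappedBy = "parent")
    private List<Record> aListMappedAsOneToMany = new ArrayList<>();
}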

Returning both computation result and status. Best practices

I was thinking about patterns which allow me to return both a computation result and a status.
There are a few approaches I can think of:
the function returns the computation result, and the status is returned via an out parameter (not all languages support out parameters, and this seems wrong, since in general you don't expect parameters to be modified).
the function returns an object/pair containing both values (the downside is that you have to create an artificial class just to return the function result, or use a pair which has no semantic meaning: you know which value is which only by its order).
if the status is just success/failure, you can return the computation value and throw an exception in case of error (this looks like the best approach, but it works only for the success/failure scenario and shouldn't be abused for controlling normal program flow).
the function returns a value, and the function arguments are delegates to onSuccess/onFailure procedures.
there is a (stateful) method class which has a status field and a method returning the computation result (I prefer stateless/immutable objects).
Please give me some hints on the pros, cons, and preconditions of the aforementioned approaches, or show me other patterns I could use (preferably with hints on when to use them).
EDIT:
Real-world example:
I am developing a Java EE internet application, and I have a class that resolves request parameters, converting them from strings to business logic objects. The resolver checks in the DB whether the object is being created or edited, and then returns to the controller either a new object or an object fetched from the DB. The controller takes action based on the object's status (new/editing) read from the resolver. I know this is bad, and I would like to improve the code design here.
the function returns the computation result, and the status is returned via an out parameter (not all languages support out parameters, and this seems wrong, since in general you don't expect parameters to be modified).
If the language supports multiple output values, then the language was clearly made to support them. It would be a shame not to use them (unless there are strong opinions against them in that particular community; this could be the case for languages that try to do everything).
the function returns an object/pair containing both values (the downside is that you have to create an artificial class just to return the function result, or use a pair which has no semantic meaning: you know which value is which only by its order).
I don't know about that downside. It seems to me that a record or class called MyMethodResult has enough semantics by itself. You can always use such a class in an exception as well, if you are in an exceptional condition, of course. Creating some kind of array/union/pair would be less acceptable in my opinion: you would inevitably lose information somewhere.
if the status is just success/failure, you can return the computation value and throw an exception in case of error (this looks like the best approach, but it works only for the success/failure scenario and shouldn't be abused for controlling normal program flow).
No! This is the worst approach. Exceptions should be used for exactly that: exceptional circumstances. If not, they will halt debuggers, put colleagues on the wrong foot, harm performance, fill your logging system, and bugger up your unit tests. If you create a method to test something, the test should return a status, not throw an exception: to the implementation, a negative result is not exceptional.
Of course, if you run out of bytes while parsing a file, sure, throw the exception, but don't throw if the input is merely incorrect and your method is called checkFile.
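A minimal Java sketch of that distinction (the class and method names are hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileChecks {
    // A negative answer is an expected outcome, so return a status.
    public static boolean checkFile(Path path) {
        return Files.isReadable(path);
    }

    // Running out of bytes mid-parse is exceptional, so throw.
    public static String parseHeader(Path path) throws IOException {
        byte[] bytes = Files.readAllBytes(path);
        if (bytes.length < 4) {
            throw new IOException("Truncated file: " + path);
        }
        return new String(bytes, 0, 4);
    }
}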
the function returns a value, and the function arguments are delegates to onSuccess/onFailure procedures.
I would only use those if you have multiple results to share. It's far more complex than the class/record approach, and more difficult to maintain. I've used this approach to return multiple results when I didn't know whether the results would be ignored or whether the user would want to continue. In Java you would use a listener. This kind of operation is probably more accepted in functional languages.
there is a (stateful) method class which has a status field and a method returning the computation result (I prefer stateless/immutable objects).
Yes, I prefer those too. There are producers of results and the results themselves. There is little need to combine the two and create a stateful object.
In the end, you want to arrive at producer.produceFrom(x): Result, in my opinion. This is either option 1 or 2a, if I'm counting correctly. And yes, for 2a, this means writing some extra code.
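Applied to the Java EE example from the question, a minimal sketch of such a result type (ResolutionResult and BusinessObject are hypothetical names):

public final class ResolutionResult {
    public enum Status { NEW, EDITED }

    private final Status status;
    private final BusinessObject object; // the resolved domain object

    public ResolutionResult(Status status, BusinessObject object) {
        this.status = status;
        this.object = object;
    }

    public Status status() { return status; }
    public BusinessObject object() { return object; }
    public boolean isNew() { return status == Status.NEW; }
}

The controller then branches on result.status() instead of probing the object itself.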
My inclination would be to either use out parameters or else use an "open-field" struct, which simply contains public fields and specifies that its purpose is simply to carry the values of those fields. While some people suggest that everything should be "encapsulated", I would suggest that if a computation naturally yields two double values called the Moe and Larry coefficients, specifying that the function should return "a plain-old-data struct with fields of type double called MoeCoefficient and LarryCoefficient" would serve to completely define the behavior of the struct. Although the struct would have to be declared as a data type outside the method that performs the computation, having its contents exposed as public fields would make clear that none of the semantics associated with those values are contained in the struct--they're all contained in the method that returns it.
Some people would argue that the struct should be immutable, or that it should include validation logic in its constructor, etc. I would suggest the opposite. If the purpose of the structure is to allow a method to return a group of values, it should be the responsibility of that method to ensure that it puts the proper values into the structure. Further, while there's nothing wrong with a structure exposing a constructor as a "convenience member", simply having the code that will return the struct fill in the fields individually may be faster and clearer than calling a constructor, especially if the value to be stored in one field depends upon the value stored to the other.
If a struct simply exposes its fields publicly, then the semantics are very clear: MoeCoefficient contains the last value that was written to MoeCoefficient, and LarryCoefficient contains the last value written to LarryCoefficient. The meaning of those values would be entirely up to whatever code writes them. Hiding the fields behind properties obscures that relationship, and may impede performance as well.
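A minimal C# sketch of such an open-field struct (all names are illustrative, including the placeholder math):

public struct StoogeCoefficients
{
    public double MoeCoefficient;
    public double LarryCoefficient;
}

public static class Analysis
{
    public static StoogeCoefficients ComputeCoefficients(double input)
    {
        StoogeCoefficients result;
        result.MoeCoefficient = input * 2.0; // placeholder computation
        result.LarryCoefficient = result.MoeCoefficient + 1.0; // one field derived from the other
        return result;
    }
}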

Mimicking SQL Insert Trigger with LINQ-to-SQL

Using LINQ-to-SQL, I would like to automatically create child records when inserting the parent entity. Basically, mimicking how an SQL Insert trigger would work, but in-code so that some additional processing can be done.
The parent has an association to the child, but it seems that I cannot simply add new child records during the DataContext's SubmitChanges().
For example,
public partial class Parent
{
    partial void OnValidate(System.Data.Linq.ChangeAction action)
    {
        if (action == System.Data.Linq.ChangeAction.Insert)
        {
            Child c = new Child();
            // ... set properties ...
            this.Childs.Add(c);
        }
    }
}
This would be ideal, but unfortunately the newly created Child record is not inserted to the database. Makes sense, since the DataContext has a list of objects/statements and probably doesn't like new items being added in the middle of it.
Similarly, intercepting the partial void InsertParent(Parent instance) function in the DataContext and attempting to add the Child record yields the same result - no errors, but nothing added to the database.
Is there any way to get this sort of behaviour without adding code to the presentation layer?
Update:
Both the OnValidate() and InsertParent() functions are called from the DataContext's SubmitChanges() function. I suspect this is the inherent difficulty with what I'm trying to do - the DataContext will not allow additional objects to be inserted (e.g. through InsertOnSubmit()) while it is in the process of submitting the existing changes to the database.
Ideally I would like to keep everything under one Transaction so that, if any errors occur during the insert/update, nothing is actually changed in the database. Hence my attempts to mimic the SQL Trigger functionality, allowing the child records to be automatically inserted through a single call to the DataContext's SubmitChanges() function.
If you want it to happen just before the data is saved, you can override SubmitChanges and call GetChangeSet() to get the pending changes. Look for the things you are interested in (for example, delta.Inserts.OfType<Customer>()) and make your required changes.
Then call base.SubmitChanges(...).
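A minimal sketch of that override, using the Parent/Child types from the question (MyDataContext is a placeholder for your generated context class):

using System.Data.Linq;
using System.Linq;

public partial class MyDataContext
{
    public override void SubmitChanges(ConflictMode failureMode)
    {
        ChangeSet delta = GetChangeSet();

        // For every Parent about to be inserted, attach its Child now,
        // so both are saved in the same transaction.
        foreach (Parent parent in delta.Inserts.OfType<Parent>())
        {
            Child c = new Child();
            // ... set properties ...
            parent.Childs.Add(c);
        }

        base.SubmitChanges(failureMode);
    }
}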
The Add method only sets up a link between the two objects: it doesn't mark the added item for insertion into the database. For that, you need to call InsertOnSubmit on the Table<Child> instance contained within your DataContext. The trouble, of course, is that there's no innate way to access your DataContext from the method you describe.
You do have access to it by implementing InsertParent in your DataContext, so I'd go that route (and use InsertOnSubmit instead of Add, of course).
EDITED: I assumed that the partial method InsertParent would be called by the DataContext at some point, but looking at my own code, that method appears to be defined but never referenced by the generated class. So what's the use, I wonder?
In LINQ to SQL you create a "trigger" by adding a partial class for the dbml file and then implementing a partial method. Here is an example that wouldn't change any behavior, because it only calls the built-in deletion:
partial void DeleteMyTable(MyTable instance)
{
    // custom code here
    ExecuteDynamicDelete(instance);
    // or here :-)
}
