In a Grails service, I create/update domain entities based on an external list. Updates take place in a loop. If the line in the input file is valid, entities are updated. Otherwise, I store the faulty line number and process the next one.
Problem: when I call validate() on an entity and the result is false, any work done in the service is rolled back, even if it does not pertain to the validation, and although no exception is visible. For instance:
assert contact.firstName == 'Bruce'
contact.firstName = 'John'
assert contact.save()
...
company.vat = 'bogus'
if (!company.validate()) {
    log.error "bogus"
    company.discard()
} else {
    assert company.save()
}
log.debug "done"
If company does not validate, my log will show "bogus" and "done", but the contact's first name will be 'Bruce' in the database, not 'John'.
Alternate version: if I do not call company.validate(), the contact's first name is updated.
At this stage, I suspect Grails attaches my company instance upon the validate() call, dooming the transaction when it realizes the validations do not pass, despite the discard() call.
From the semantics of validate() and its javadoc ("Validates an instance"), I didn't expect any side effect, whether the validation fails or not, or whether I even invoke validate() or not.
What do you think? Is this "normal" behavior, or a bug? How can I work around this?
I have a simple reproduction case with two entities if needed.
I might be missing something here, but isn't the whole goal of the discard() method not to apply the change, so that what you're seeing here (the change not being applied in case of a validation error) would be the logical outcome?
I'm not able to test it on this setup, but could you set transactional = false in your service, process all the objects (adding the correctly validated objects to one list and saving/logging the failed ones elsewhere), and then finally save the "good" list of objects using withTransaction? Something like:
static transactional = false

def sessionFactory

// method wrapper and parameter name are illustrative
def processCompanies(List<Company> listOfCompaniesToValidate) {
    List<Company> validCompanies = []
    for (Company company : listOfCompaniesToValidate) {
        if (!company.validate()) {
            log.error "bogus"
        } else {
            validCompanies.add(company)
        }
    }
    Company.withTransaction {
        for (Company validCompany : validCompanies) {
            validCompany.save()
        }
    }
    validCompanies.clear()
    sessionFactory.getCurrentSession().clear()
}
Just an idea!
I have a function that takes a number of values, creates a new model, and then saves this in storage via a PersistentMap.
I would like to test that the item is successfully saved in storage. This is how I'm going about it:
it("saves a new item to storage", () => {
VMContext.setCurrent_account_id("test_account_id");
contract.createMyPersistantMapItem(
"value 1",
"value 2",
"value 3"
);
expect(storage.hasKey("myPersistantMap::test_account_id")).toBeTruthy();
});
However the test fails with the following:
[Actual]: false
[Expected]: Truthy
[Stack]: RuntimeError: unreachable
at node_modules/@as-pect/assembly/assembly/internal/assert/assert (wasm-function[53]:0xd71)
at start:src/simple/__tests__/index.unit.spec~anonymous|0~anonymous|0~anonymous|0 (wasm-function[130]:0x2a84)
at node_modules/@as-pect/assembly/assembly/internal/call/__call (wasm-function[134]:0x2aac)
Am I going about this the wrong way?
Also, I would like to test that the values in the item are correct. In the todos example the created todo is returned from the function that creates it, and then this returned value is tested. Is this the best way of doing things to make testing easier?
EDIT
This is the createMyPersistantMapItem function - edited a bit to make things clearer:
export function createMyPersistantMapItem(
blah1: string,
blah2: string,
blah3: string,
): void {
const accountId = context.sender;
assert(
!storage.hasKey("myPersistantMap::" + accountId),
"Item already exists"
);
const newPersistantMapItem = new PersistantMapItem(
blah1,
blah2,
blah3
);
myPersistantMap.set(accountId, newPersistantMapItem);
}
About the first question:
Does myPersistantMap use the "myPersistantMap" prefix when initialized? Does PersistantMapItem have the @nearBindgen decorator on the class?
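For reference, a setup where that key would exist might look like this (an illustrative sketch using your names and near-sdk-as; your actual declarations may differ):
import { PersistentMap } from "near-sdk-as";

// the stored class needs @nearBindgen so it can be serialized
@nearBindgen
export class PersistantMapItem {
    blah1: string;
    blah2: string;
    blah3: string;
    constructor(blah1: string, blah2: string, blah3: string) {
        this.blah1 = blah1;
        this.blah2 = blah2;
        this.blah3 = blah3;
    }
}

// the prefix passed here is what produces the "myPersistantMap::<key>" storage keys your test checks for
export const myPersistantMap = new PersistentMap<string, PersistantMapItem>("myPersistantMap");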
Also, in your test, I think you should use
VMContext.setSigner_account_id("test_account_id")
//instead of
VMContext.setCurrent_account_id("test_account_id")
Because you are using context.sender when you call createMyPersistantMapItem
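So with the signer set instead, the test from the question would read (unchanged apart from that one line):
it("saves a new item to storage", () => {
    // context.sender inside the contract resolves to the signer account id
    VMContext.setSigner_account_id("test_account_id");
    contract.createMyPersistantMapItem(
        "value 1",
        "value 2",
        "value 3"
    );
    expect(storage.hasKey("myPersistantMap::test_account_id")).toBeTruthy();
});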
About the second question:
In the todos example the created todo is returned from the function that creates it, and then this returned value is tested. Is this the best way of doing things to make testing easier?
This question is primarily opinion based, so I can only answer for myself. Testing the returned value is completely fine. In a smart contract however, I would probably test if the value is actually stored on the contract. And I think they are doing that in the TODO example. They just use the ID of the generated TODO to do a query on the smart contract.
const a = Todo.insert("Drink water");
// They use the id of the todo (a) to check if it was actually stored, and is in fact the same object. I think this is fine.
expect(getById(a.id)).toStrictEqual(a);
See https://github.com/grpc/grpc-node/issues/1202.
Usually in CRUD operations, a value that is not provided means "do not change that field", while an empty array [] means "clear all items inside that field".
But if you try to implement CRUD operations and expose them as services via gRPC, the above distinction is hard to implement.
service CRUD {
    rpc updateTable(updateRequest) returns (updateResponse) {}
}

message updateRequest {
    repeated string a = 1;
    string b = 2;
}

message updateResponse {
    bool success = 1;
}
If you load the package with the default options, then the client can't delete the items of a by calling
client.CRUD.updateTable({a: []})
because the argument {a: []} becomes {} by the time it arrives at the server side.
If you load the package with the option {arrays: true}, then the field a will be cleared unintentionally when the client side only tries to update other fields:
client.CRUD.updateTable({b: 'updated value'})
because the argument {b: 'updated value'} becomes {a: [], b: 'updated value'} by the time it arrives at the server side.
Can anyone share some better ideas regarding how to handle these two scenarios with grpc-node and proto3?
The protobuf encoding doesn't distinguish between these two cases. Since protobuf is language-agnostic, it doesn't understand the conceptual nuance of "undefined" versus "[]" in JavaScript.
You would need to pass additional information inside the proto message in order to distinguish between the two cases.
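One common way to pass that information (just a sketch, not something prescribed by the linked issue) is to wrap the repeated field in its own message; a singular message field has presence on the wire, so the server can tell "not sent" apart from "sent but empty":
message StringList {
    repeated string items = 1;
}

message updateRequest {
    StringList a_list = 1; // unset: leave "a" unchanged; set with zero items: clear "a"
    string b = 2;
}
Another option along the same lines is google.protobuf.FieldMask, where the client explicitly lists which fields it intends to change.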
I would highly suggest reading the design documentation here: https://developers.google.com/protocol-buffers
I apologize at the outset as this is more of a design question than a specific problem, so it may not have a simple answer. I am developing a fairly complex piece of code to process transactions in a warehouse system. The transactions themselves are user defined, with switches that determine which fields to enter on a screen. I use a command object to process and validate the form.

Many of the validation steps are straightforward, but some are a little tricky and have cross dependencies between the screen fields. For instance, I may prompt the user to enter a serial reference, but if that doesn't uniquely identify a carton I need to ask for a part to go with it. I then hit the database to validate these combinations. The screen may also have a document reference for the user to enter. This in turn may be cross referenced with the part/serial on the screen (by hitting the database) to ensure that the part/serial is for the document. This continues, depending on what's been defined on the transaction, with more cross references and validations.

What I end up with is a lot of 'if this entered and this entered .. validate .. else .. validate' type logic in the command object, and it looks really ugly. So my questions are: Should I be putting ALL my validation in the command object (all the database checking etc.), or is there somewhere better to put it? And is there anything I can do better than all my complicated if/else combinations, given that this is a command object? I had thought of creating additional classes which the current command object could spawn, making those validateable and spreading the logic out amongst them. If anyone would like to chip in, I'd appreciate the discussion.
After Joshua's comments I've started refactoring the code into a service. It's not complete, but it's taking shape.
@Transactional
class TransactionValidationService {
static enum state {
RECEIPT,
RECEIPT_SERIAL,
RECEIPT_DOCUMENT,
RECEIPT_PART,
RECEIPT_SERIAL_PART,
RECEIPT_SERIAL_DOCUMENT,
RECEIPT_PART_DOCUMENT,
ISSUE,
TRANSFER
}
def validateTransaction(TransactionDetailCommand transaction) {
// Set initial state ..
def currentState
switch (transaction.transactionType.processType) {
case ProcessType.ISSUE:
currentState = state.ISSUE
break
case ProcessType.RECEIPT:
currentState = state.RECEIPT
break
case ProcessType.TRANSFER:
currentState = state.TRANSFER
}
switch (currentState) {
case state.RECEIPT:
if (transaction.serialReference) {
// validateSerialReference
currentState = state.RECEIPT_SERIAL
} else if (transaction.documentHeader) {
// validateReceiptDocument
currentState = state.RECEIPT_DOCUMENT
}
break
case state.RECEIPT_SERIAL:
if (transaction.part) {
// validatePartSerial
currentState = state.RECEIPT_SERIAL_PART
}
if (transaction.documentHeader) {
// validateDocumentPart
currentState = state.RECEIPT_SERIAL_DOCUMENT
}
break
case state.ISSUE:
break
case state.TRANSFER:
break
}
}
}
First off, using command objects to gather this information is the correct choice.
However, implementing complex validation logic within the constraints as custom validators can be a bit overwhelming. You may want to consider injecting a service into your command object then delegating the validation to the service from within the custom validator.
For example
@Validateable
@ToString(includeNames=true)
class MyExampleCommand {
def myValidationService = Holders.grailsApplication.mainContext.myValidationService
String someThing
Long someValue
..
static constraints = {
someThing(
nullable: false,
blank: false,
size:1..20,
validator: { val, obj ->
obj.myValidationService.validateSomeThing(obj)
}
)
...
}
...
}
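The service side could then look something like this (a rough sketch; validateSomeThing and its rule are made up for illustration):
class MyValidationService {
    // returning true means valid; returning false or an error-code string
    // makes the custom validator in the command object fail
    def validateSomeThing(MyExampleCommand cmd) {
        if (cmd.someThing?.startsWith('WH-')) {   // e.g. a cross-field or database check would go here
            return true
        }
        return 'myExampleCommand.someThing.invalid'
    }
}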
I've been trying to figure out how to do this without manually defining a validation but without any success so far.
I have a StringField
class Foo private() extends MongoRecord[Foo] with ObjectIdKey[Foo] {
...
object externalId extends StringField(this, 255) {
// none of these seem to have any effect on validation whatsoever:
override def optional_? = false
override def required_? = true
override def defaultValueBox = Empty
}
...
}
Now when I call .validate on a Foo, it returns no errors:
val foo = Foo.createRecord
foo.validate match {
case Nil => foo.save
...
}
...and the document is saved into the (mongo) DB with no externalId.
So the question is: is there any way at all to have Lift automatically validate missing fields without me having to manually add stuff to validations?
EDIT: am I thinking too much in terms of the type of productivity that frameworks like Django and Rails provide out of the box? i.e. things like basic and very frequent validation without having to write anything but a few declarative attributes/flags. If yes, why has Lift opted to not provide this sort of stuff out of the box? Why would anybody not want .validate to automatically take into consideration all the def required_? = true/def optional_? = false fields?
As far as I'm aware, there isn't a way for you to validate a field without explicitly defining validations. The reason that optional_? and required_? don't provide validation is that it isn't always clear what logic to use, especially for non-String fields. The required_? value itself is used by Crudify to determine whether to mark the field as required in the produced UI, but it's up to you to provide the proper logic to determine that the requirement is satisfied.
Validating the field can be as easy as
override def validations = valMinLen(1, "Required!") _ :: super.validations
Or see the answer to your other question here for how to create a generic Required trait.
Grails has a bug with regards to databinding in that it throws a cast exception when you're dealing with bad numerical input. JIRA: http://jira.grails.org/browse/GRAILS-6766
To fix this I've written the following code to manually handle the numerical input on the POGO class Foo located in src/groovy
void setPrice(String priceStr)
{
this.priceString = priceStr
// Remove $ and ,
priceStr = priceStr.trim().replaceAll(java.util.regex.Matcher.quoteReplacement('$'),'').replaceAll(',','')
if (!priceStr.isDouble()) {
errors.reject(
'trade.price.invalidformat',
[priceString] as Object[],
'Price:[{0}] is an invalid price.')
errors.rejectValue(
'price',
'trade.price.invalidformat')
} else {
this.price = priceStr.toDouble();
}
}
The following throws a null reference exception on the errors.reject() line.
foo.price = "asdf" // throws null reference on errors.reject()
foo.validate()
However, I can say:
foo.validate()
foo.price = "asdf" // no Null exception
foo.hasErrors() // false
foo.validate()
foo.hasErrors() // true
Where does errors come from when validate() is called?
Is there a way to add the errors property without calling validate() first?
I can't exactly tell you why, but you need to call getErrors() explicitly instead of accessing it as errors like a property. For some reason, Groovy isn't calling the method for it. So change the reject lines in setPrice() to
getErrors().reject(
'trade.price.invalidformat',
[priceString] as Object[],
'Price:[{0}] is an invalid price.')
getErrors().rejectValue(
'price',
'trade.price.invalidformat')
That is the easiest way to make sure the Errors object exists in your method. You can check out the code that adds the validation related methods to your domain class.
The AST transformation handling @Validateable augments the class with, among other things, a field named errors and public methods getErrors, setErrors, clearErrors and hasErrors.
The getErrors method lazily sets the errors field if it hasn't yet been set. So it looks like what's happening is that accesses to errors within the same class are treated as field accesses rather than JavaBean property accesses, bypassing the lazy initialization.
So the fix appears to be to use getErrors() instead of just errors.
The errors property is added to your validateable classes (domain classes and classes annotated with @Validateable) dynamically.
Allowing the developer to set a String instead of a number doesn't seem like a good way to go. Also, your validation will work only for that particular class.
I think that a better approach is to register a custom property editor for numbers. Here's an example with dates that enables transforming a String (coming from the form) into a Date with a format like dd/MM/yyyy. The idea is the same, as you will enforce that your number is parseable (e.g. Integer.parseInt() will throw an exception).
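For numbers the same pattern would look roughly like this (a sketch only; the class name, bean name and number format are mine, not taken from the dates example):
// src/groovy/CustomEditorRegistrar.groovy
import org.springframework.beans.PropertyEditorRegistrar
import org.springframework.beans.PropertyEditorRegistry
import org.springframework.beans.propertyeditors.CustomNumberEditor
import java.text.DecimalFormat

class CustomEditorRegistrar implements PropertyEditorRegistrar {
    void registerCustomEditors(PropertyEditorRegistry registry) {
        // bind strings like "$1,234.56" to Double, so unparseable input fails during binding
        registry.registerCustomEditor(Double, new CustomNumberEditor(Double, new DecimalFormat('$#,##0.00'), true))
    }
}
Then register it as a bean in grails-app/conf/spring/resources.groovy:
beans = {
    customEditorRegistrar(CustomEditorRegistrar)
}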
In your domain class, use the numeric type instead of String, so that developers will not be able to store non-numeric values from code.