Validate request input in Phoenix (Elixir) - validation

I'm struggling to find something in the documentation that seems like it should be there...
In Phoenix I see validation at the point of creating an Ecto changeset, but I'm not seeing much before that, for validating the actual user input.
I'm not really a fan of exposing my data models across API boundaries, and I would rather just have structs representing the requests and responses, as they are likely very different shapes to my actual data models.
I'd like a way of converting user input to a struct and using some kind of validation framework to determine if the input is valid before I even think about hitting a database.
I've found https://github.com/CargoSense/vex and have gone down the route of converting the input to a struct, and using their validation, but there are a few things that worry me about this approach, namely:
I hear that there are issues with atoms in Elixir (they are never garbage-collected), and as structs are basically atom-keyed maps, am I going to run into this atom exhaustion issue by converting user input to these?
I also have some structs that contain nested structs. I'm currently checking each field's default value and, if it's a struct, doing some magic (based on the answers to "In Elixir how do you initialize a struct with a map variable") to automatically convert a nested map to my nested struct. But again, I'm not sure if this is sensible.
The validations I'm defining in Vex's DSL will be very similar to the ones in my Ecto models, and I would rather define them once and use them for both.
Basically, how would you go about validating user input correctly in a Phoenix app? Am I on the right lines, or way off?
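For illustration, a minimal sketch of the struct-plus-Vex approach in question (the Registration module and its fields are hypothetical). The atom-exhaustion worry can be sidestepped with String.to_existing_atom/1, which never creates new atoms: the struct's keys already exist as atoms at compile time, so user input can only reference them, never mint new ones.

defmodule Registration do
  defstruct [:email, :age]

  use Vex.Struct

  validates :email, presence: true, format: [with: ~r/@/]
  validates :age, presence: true

  # Build the struct from untrusted params. Unknown keys are dropped,
  # and String.to_existing_atom/1 raises (rather than creating an atom)
  # for strings that don't correspond to an existing atom.
  def from_params(params) when is_map(params) do
    allowed = %__MODULE__{} |> Map.from_struct() |> Map.keys() |> MapSet.new()

    attrs =
      for {key, value} <- params,
          atom = safe_existing_atom(key),
          MapSet.member?(allowed, atom),
          into: %{},
          do: {atom, value}

    struct(__MODULE__, attrs)
  end

  defp safe_existing_atom(key) when is_atom(key), do: key

  defp safe_existing_atom(key) when is_binary(key) do
    String.to_existing_atom(key)
  rescue
    ArgumentError -> nil
  end
end

# Usage:
#   reg = Registration.from_params(%{"email" => "me@example.com", "age" => 30})
#   Vex.valid?(reg)  #=> true

Nested structs could be handled the same way, by delegating the relevant keys to the child module's own from_params/1.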

Related

GORM relationships and issues

I was creating my first-ever REST API in Golang with Fiber and GORM. I wanted to create a model that had a slice of strings, but GORM does not allow me to do so. So, the next thing I tried was to use a map, hoping that it would be easily converted to JSON and saved to my Postgres instance, but GORM does not support maps either. So I created another struct into which I put all the data in a not-so-elegant way, with a single string field for each possible string I might save, and then I embedded this struct into the other. But now the compiler complains that I have to save a primary key into it, and not the raw JSON given from the request. I am a bit overwhelmed by now.
If someone knows a way that I can use to save all the data I need in a way that respects my requirements (a slice of strings, easy to parse when I read it back from the database), and to finish this CRUD app, I would really be thankful. Thank you a lot.
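One common workaround is to wrap the slice in a custom type that implements driver.Valuer and sql.Scanner, so it round-trips through a single JSON column. A minimal sketch, assuming Postgres and GORM; the Post model and the jsonb column type are made up for illustration:

package model

import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
)

// StringSlice persists a []string as one JSON column, which GORM can
// store even though it has no native support for slices or maps.
type StringSlice []string

// Value implements driver.Valuer: marshal to JSON when writing.
func (s StringSlice) Value() (driver.Value, error) {
	return json.Marshal(s)
}

// Scan implements sql.Scanner: unmarshal from JSON when reading.
func (s *StringSlice) Scan(src interface{}) error {
	switch v := src.(type) {
	case []byte:
		return json.Unmarshal(v, s)
	case string:
		return json.Unmarshal([]byte(v), s)
	default:
		return fmt.Errorf("StringSlice: unsupported type %T", src)
	}
}

// Post is a hypothetical model using the custom type.
type Post struct {
	ID   uint        `gorm:"primaryKey"`
	Tags StringSlice `gorm:"type:jsonb"`
}

This keeps the field a plain slice of strings in Go while storing something Postgres can hold in one column, and it parses back automatically on read.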

Protocol buffers: read-only fields?

Is it possible to mark fields as read only in a .proto file such that when the code is generated, these fields do not have setters?
Ultimately, I think the answer here will be "no". There's a good basic guidance rule that applies to DTOs:
DTOs should generally be as simple as possible to convey the data for serialization in a manner well-suited to the specific serializer.
if that basic model is sufficient for you to work with above that layer, then fine
but if not: do not fight the serializer; instead, create a separate domain model above the DTO layer, and simply map between the two models before serialization or after deserialization
Or put another way: the fact that the generator doesn't want to expose read-only members is irrelevant, because if you need something exotic, you shouldn't be using the generated type outside of the code that directly touches serialization. So, in the domain type that mirrors the DTO, make it read-only there.
As for why read-only fields aren't usually a thing in serialization tools: you presumably want to be able to give each field a value at some point. Serialization tools usually want to be able to write everything they can read, and read everything they can write.
Minor note for completeness since you mention C#: if you are using a code-first approach with protobuf-net, it'll work fine with {get;}-only auto-props, and with {get;}-only manual props if all public members trivially map to an obvious constructor.
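To make the DTO/domain split concrete, a minimal C# sketch; ProductDto and Product are hypothetical names, with the generated-style DTO kept mutable for the serializer and the read-only constraint living in the domain type:

// Generated-style DTO: mutable, shaped for the serializer.
public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Domain model: read-only, because that's what the domain wants.
public class Product
{
    public int Id { get; }
    public string Name { get; }

    public Product(int id, string name)
    {
        Id = id;
        Name = name;
    }

    // Map at the serialization boundary, in each direction.
    public static Product FromDto(ProductDto dto) => new Product(dto.Id, dto.Name);
    public ProductDto ToDto() => new ProductDto { Id = Id, Name = Name };
}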

Validate a GraphQL schema against another reference schema

I'm not quite sure what wording I should be searching for on this.
I have a GraphQL schema which wraps a group of services using graphql-link-schema to perform the data resolution on the client side. The schema is intended to be built against a separate reference schema. How can I programmatically validate that my implementation matches the reference?
For bonus points- is it possible to determine whether a schema is a superset of another?
Thanks in advance (:
It's an interesting use case, but it's a bit unclear how validation like that would work. What causes validation to fail? Any differences between the two schemas? Extra types? Extra fields on existing types? Differences in return types? Differences in arguments or argument types?
Depending on your answer to the above questions, though, you may be able to cobble together your own validation function using the utility functions graphql-js uses internally for schema comparison. Outside the main findBreakingChanges function, some of the utility functions available in that module:
findRemovedTypes
findTypesThatChangedKind
findFieldsThatChangedTypeOnObjectOrInterfaceTypes
findFieldsThatChangedTypeOnInputObjectTypes
findTypesRemovedFromUnions
findValuesRemovedFromEnums
findArgChanges
findInterfacesRemovedFromObjectTypes
If you have a reference or base schema available, though, rather than validating against it, you might also consider extending it when building the second schema. In doing so, you would effectively guarantee that the second schema matches the first except in whatever ways you intentionally deviate from it (by extending existing types, etc.). You could use extendSchema for relatively simple changes, or something like graphql-tools' mergeSchemas for more complicated changes.
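As a concrete starting point, a minimal sketch using findBreakingChanges from graphql-js; the SDL is made up, and "validation" here is defined as "no change that would break a client written against the reference":

import { buildSchema, findBreakingChanges } from 'graphql';

// Hypothetical reference schema.
const referenceSchema = buildSchema(`
  type Query {
    user(id: ID!): User
  }
  type User {
    id: ID!
    name: String
  }
`);

// Hypothetical implementation under test (User.name is missing).
const implementedSchema = buildSchema(`
  type Query {
    user(id: ID!): User
  }
  type User {
    id: ID!
  }
`);

// Read (old, new) as (reference, implementation): any breaking change
// means the implementation does not cover the reference's contract.
const problems = findBreakingChanges(referenceSchema, implementedSchema);

if (problems.length > 0) {
  for (const change of problems) {
    console.error(`${change.type}: ${change.description}`);
  }
  throw new Error('Implementation does not satisfy the reference schema');
}

Note that this only catches breaking differences; purely additive changes in the implementation (extra types or fields, i.e. the superset case) pass silently, which may be exactly what you want for the bonus question.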

mvc3 - implementing a variable-type recursive model with editors

I'm quite new to MVC3, just learning and am looking for some guidance.
For the sake of simplicity, I have a model that represents three types of elements: questions, answers, and containers.
All 3 inherit from a common base type, which I'll call baseElement.
When the model is delivered to the view, it is a single object of type 'baseElement'.
The container elements have an internal list of baseElements.
Those baseElements can be of any of the three types. So - containers can contain questions, or containers (which could also contain questions, containers, etc..)
Each question can contain answers of different types.
I'm trying to figure out how to use mvc3 to best implement a system to display this container/question structure to the user - permitting them to answer questions with various answer types while respecting the nested structure of the incoming model.
Alright, despite the dynamic nature of my model, after poking around for a while longer I've been able to render my model object structure without too much complication.
I did it by using strongly typed Editor Templates (one for each type), and the following code in the View:
@Html.EditorFor(x => Model, Model.GetType().Name)
This automatically chooses the proper editor template to use based on the actual type.
In each of the type specific Editor Templates I make the same call for each of the children.
It actually ends up being pretty simple.
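As a sketch of what one of those templates might look like (the Container type and its Children property are hypothetical stand-ins for the question's types):

@* EditorTemplates/Container.cshtml *@
@model Container

<fieldset>
    @for (int i = 0; i < Model.Children.Count; i++)
    {
        @* Recurse: each child selects its template by its runtime type. *@
        @Html.EditorFor(x => Model.Children[i], Model.Children[i].GetType().Name)
    }
</fieldset>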
The big problem I'm running into now is how to model bind (or otherwise retrieve) the form values back into something usable once it's posted by the user. The dynamic nature of the structure causes the default model binder to cross its arms and give up.
At this point I think retrieving/remapping the form data might be a much bigger issue, but that will certainly require more tinkering, and perhaps a separate question.
Thanks for the help.
:-)

Is there any data typing for the parameters in HTTP POST?

I am building a RESTful API using a Ruby server and a MongoDB database. The database stores objects as they are, preserving their natural data types (at least those that it supports).
At the moment I am using HTTP GET to pass params to the API, and understandably everything in my database gets stored as strings (because that's what the Ruby code sees when it accesses the params[] hash). After deployment, the API will use exclusively HTTP POST, so my question is whether it's possible to specify the data types that get sent via POST individually for each parameter (say I have a "uid" which is an integer and a "name" which is a string), or do I need to cast them within Ruby before passing them on to my database?
If I need to cast them, are there any issues related to it?
No, it's not possible.
POST variables are just string key/value pairs.
You could, however, implement your own higher-level logic.
For example, a common practice is to add a suffix to the names: everything that ends with _i gets parsed as an integer, and so on.
However, what benefit would preserving the types bring? Or, better asked: how do you output them? Is it only for storage?
Then it should not be a problem to convert the strings to proper types where that benefits your application, and to cast them back to strings before delivering.
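For example, a minimal Ruby sketch of that suffix convention (the suffixes and parameter names are made up for illustration):

# Cast string params based on a naming convention: _i => Integer,
# _f => Float, _b => Boolean; anything else stays a string.
def cast_params(params)
  params.each_with_object({}) do |(key, value), out|
    case key
    when /\A(.+)_i\z/ then out[Regexp.last_match(1)] = Integer(value)
    when /\A(.+)_f\z/ then out[Regexp.last_match(1)] = Float(value)
    when /\A(.+)_b\z/ then out[Regexp.last_match(1)] = (value == "true")
    else out[key] = value
    end
  end
end

cast_params({ "uid_i" => "42", "name" => "Ada" })
#=> { "uid" => 42, "name" => "Ada" }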
