Is there any data typing for the parameters in HTTP POST? - ruby

I am building a RESTful api using a Ruby server and a MongoDB database. The database stores objects as they are, preserving their natural data types (at least those that it supports).
At the moment I am using HTTP GET to pass params to the API, and understandably everything in my database gets stored as a string (because that's what the Ruby code sees when it accesses the params[] hash). After deployment the API will use HTTP POST exclusively, so my question is whether it's possible to specify the data type of each parameter sent via POST individually (say I have a "uid" which is an integer and a "name" which is a string), or whether I need to cast them in Ruby before passing them on to my database.
If I need to cast them, are there any issues related to it?

No, it's not possible.
POST variables are just string key/value pairs.
You could, however, implement your own higher-level logic.
For example, a common practice is to add a suffix to the parameter names: everything that ends with _i gets parsed as an integer, and so on.
However, what benefit would preserving the types bring? Or, better asked: how do you output them? Is it only for storage?
Then it should not be a problem to convert the strings to the proper types if that benefits your application, and to cast them back to strings before delivering them.
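As an illustration, here is a minimal Ruby sketch of that suffix convention; the cast_params helper and the _i/_f suffixes are invented for the example, not an established API:

    # Cast params by naming convention: "_i" => Integer, "_f" => Float,
    # everything else stays a String. All names here are illustrative.
    def cast_params(params)
      params.each_with_object({}) do |(key, value), typed|
        typed[key] = case key
                     when /_i\z/ then Integer(value)
                     when /_f\z/ then Float(value)
                     else value
                     end
      end
    end

    cast_params("uid_i" => "42", "name" => "Alice")
    # => {"uid_i"=>42, "name"=>"Alice"}

Integer() and Float() raise ArgumentError on malformed input, which doubles as cheap validation before anything reaches the database.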

Related

Gorm relationship and issues

I was creating my first-ever REST API in Golang with Fiber and GORM. I wanted to create a model that had a slice of strings, but GORM does not allow me to do so. So the next thing I tried was to use a map, hoping that it would be easily converted to JSON and saved to my Postgres instance. But again, GORM does not support maps. So I created another struct into which I put all the data in a not-so-elegant way, where I made a single string field for each possible string I can save, and then I embedded this struct into the other. But now the compiler complains that I have to save a primary key into it, and not the raw JSON given from the request. I am a bit overwhelmed by now.
If someone knows a way I could save all the data I need in a form that respects my requirements (a slice of strings, easy to parse when I read it back from the database), and finish this CRUD app, I would be really thankful. Thank you a lot.

Why Elasticsearch uses PUT instead of POST for creating index?

As far as I know, POST is usually used for changing the state of the server, and PUT usually for updating the information. If I am creating a new index, should it not be POST instead of PUT? PUT does make sense when creating a document as it changes the state of data.
Your statement
As far as I know, POST is usually used for changing the state of the server, and PUT usually for updating the information.
does conform to the conventional HTTP vs CRUD semantics:
HTTP method | CRUD equivalent | Description
----------- | --------------- | -----------
POST        | Create          | Let the target resource process the representation enclosed in the request.
PUT         | Update          | Set the target resource's state to the state defined by the representation enclosed in the request.
However, the PUT spec also stipulates that:
The PUT method requests that the state of the target resource be
created or replaced with the state defined by the representation
enclosed in the request message payload
As such, PUT can be (and is) used in Elasticsearch both to create an index AND to update its settings and mappings.
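To make this concrete, a hedged sketch using Ruby's Net::HTTP (the index name, the settings, and the localhost URL are assumptions for illustration): the same PUT verb first creates the index and later updates its settings.

    require "net/http"
    require "json"
    require "uri"

    # Send a JSON body to a local Elasticsearch node with HTTP PUT.
    def es_put(path, body)
      uri = URI("http://localhost:9200#{path}")
      req = Net::HTTP::Put.new(uri, "Content-Type" => "application/json")
      req.body = body.to_json
      Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
    end

    # PUT creates the index...
    es_put("/my-index", { settings: { number_of_replicas: 1 } })
    # ...and the same verb updates its settings afterwards.
    es_put("/my-index/_settings", { index: { number_of_replicas: 2 } })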
Also, keep in mind that it's rarely just a matter of strict adherence to the semantics. One of the creators of ES put it this way:
It's all about REST semantics.
And our understanding of the semantics at the time when we made the APIs. And backwards compatibility constraints. And whatever "feels" natural to the person who implemented the API.
Where it makes a lot of sense Elasticsearch maps the HTTP verbs to
useful things. But when it doesn't make a ton of sense we just go with
whatever verb feels good rather than trying to be super strict about
REST. Also, we don't do linked data, instead relying on you to build
links from context. I'm told that is particularly non-REST. But it is
what we do.

validate request input phoenix elixir

I'm struggling to find something in the documentation that seems like it should be there...
In Phoenix I see validation at the point of trying to create an Ecto change set, but I'm not seeing much prior to that, upon validating the actual user input.
I'm not really a fan of exposing my data models across API boundaries, and I would rather just have structs representing the requests and responses, as they are likely very different shapes to my actual data models.
I'd like a way of converting user input to a struct and using some kind of validation framework to determine if the input is valid before I even think about hitting a database.
I've found https://github.com/CargoSense/vex and have gone down the route of converting the input to a struct, and using their validation, but there are a few things that worry me about this approach, namely:
I hear that there are issues with atoms in Elixir, and as structs are basically atom-keyed maps, am I going to run into the atom-exhaustion issue by converting user input to these?
I also have some structs that would have nested structs. I'm currently checking the default value provided and, if it's a struct, doing some magic based on the answers in "In Elixir how do you initialize a struct with a map variable" to automatically convert a nested map to my nested struct. But again, I'm not sure if this is sensible.
The validations I'm defining in one DSL will be very similar to those in my Ecto models, and I would rather use one mechanism for both.
Basically, how would you go about validating user input correctly in a Phoenix app? Am I on the right lines, or way off?

Nested Hash from JSON structure - Critical look

I'm new to Ruby; I come from C/C++.
I'm currently working on a data integration between a partner and me.
I get the API response with HTTParty and then parse it with JSON.parse.
The resulting hash is nested multiple levels deep (around 5-6 levels).
Initially, since I'm new to Ruby, I wanted to develop naturally, without worrying about the number of methods or the number of lines per method; the only goal was to clearly separate each extraction from the others in distinct methods.
The extraction from this nested hash is conditional; what I mean is that there are multiple objects of the same structure inside the hash.
And my extraction looks something like this:
if get_flight(json_response) == blabla_id
  stored_blabla_id = blabla_id
end
then later
get_departure_place_from_flight(json_response, stored_blabla_id)
I read many articles about objectifying hashes, like this good one, or about building extractor engines that fetch values based on keys passed as arguments.
Since I'm getting a really huge JSON response, and since I'm not extracting all the values but only specific ones, I'm wondering whether this approach is bad for usage/performance.
My point: the class works properly, BUT I have 25 methods in one class, and the content of these methods is little more than direct access into the nested hash. I find it very ugly.
Since I have 2 request methods for the API, 1 method dedicated to constructing the URL, and the others dedicated to extraction from the JSON response, I was wondering: is it appropriate to split the class into modules?
Or is this kind of ugly class common when parsing JSON and extracting values from an API?
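For illustration, a minimal sketch (with invented key names) of the kind of direct nested access those extraction methods boil down to; Hash#dig at least collapses each lookup into one expression:

    # Hypothetical shape of the parsed API response; all keys are invented.
    json_response = {
      "data" => {
        "flights" => [
          { "id" => 1, "departure" => { "place" => "CDG" } },
          { "id" => 2, "departure" => { "place" => "JFK" } }
        ]
      }
    }

    # One extraction method per value, each just digging into the hash.
    def get_departure_place_from_flight(response, flight_id)
      flight = response.dig("data", "flights")
                       &.find { |f| f["id"] == flight_id }
      flight&.dig("departure", "place")
    end

    get_departure_place_from_flight(json_response, 2) # => "JFK"

Many methods of this shape are not unusual in API-integration code; grouping them into modules by concern (requesting vs. extracting) is one reasonable split.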

Which is the most efficient way to access the value of a control?

Of the two choices I have to access the value of a control which is the most efficient?
getComponent("ControlName").getValue();
or
dataSource.getItemValue("FieldName");
I find that on occasion the getComponent does not seem to return the current value, but accessing the dataSource seems to be more reliable. So does it make much difference from a performance perspective which one is used?
The dataSource.getValue seems to work everywhere that I have tried it. However, when working with rowData I still seem to need to do a rowData.getColumnValue("Something"). rowData.getValue("Something") fails.
Neither. The fastest syntax is dataSource.getValue("FieldName"). The getItemValue method is only reliable on the document data source, whereas the getValue method is not only also available on view entries accessed via a view data source (although in that context you would pass it the programmatic name of a view column, which is not necessarily the same name as a field), but will also be available on any custom data sources that you develop or install (e.g. third-party extension libraries). Furthermore, it does automatic type conversion that you'd have to do yourself if you used getItemValue instead.
Even on very simple pages, dataSource.getValue("FieldName") is 5 times as fast as getComponent("id").getValue(), because, as Fredrik mentions, it first has to find the component and then ask it what the value is... which, behind the scenes, just asks the data source anyway. So it will always be faster to ask the data source yourself.
NOTE: the corresponding write method is dataSource.setValue("FieldName", "NewValue"), not dataSource.replaceItemValue("FieldName", "NewValue"). Both will work, but setValue also does the same type conversion that getValue does, so you can pass it data that doesn't strictly conform to the old Domino Java API and it usually just figures out what the value needs to be converted to in order to be "safe" for Domino to store.
I would say that the most efficient way is to get the value directly from the data source.
If you use getComponent("ControlName").getValue(), you do a get on the component first and then a getValue from that, so a single get from the data source is more efficient, if you ask me.
