GORM relationships and issues - Go

I was creating my first-ever REST API in Go with Fiber and GORM. I wanted to create a model that had a slice of strings, but GORM does not allow me to do so. So, the next thing I tried was a map, hoping it would be easily converted to JSON and saved to my Postgres instance, but GORM does not support maps either. I then created another struct into which I put all the data in a not-so-elegant way, with a separate string field for each possible string I might save, and embedded this struct into the other. But now the compiler complains that I have to give it a primary key, not the raw JSON from the request. I am a bit overwhelmed by now.
If someone knows a way to save all the data I need while respecting my requirements (a slice of strings, easy to parse when I read it back from the database), so I can finish this CRUD app, I would really be thankful. Thank you a lot.

Related

Spring Data MongoDB - Encrypting a single field using a converter

I have a collection which has several arrays of objects. In one of the sub-objects there is a field called secret which has to be stored in encrypted format; the field is of type String.
What is the best way of achieving this?
I don't think writing a custom writer for the entire document is feasible.
How do I write a String converter that is applied only to this single field?
There are many answers to this question and different approaches that depend on your actual requirements.
The first question you want to ask is whether MongoDB is a good place to store encrypted values at all, or whether there is a better option that gives you features like rewrapping (re-encryption), key rotation, audit logging, access control, key management, and so on.
Another thing that comes into play is decryption: every time you retrieve data from MongoDB, the secret is decrypted. Also, encrypting a lot of entries with the same key facilitates cryptanalysis, so you need to ensure regular key rotation. Last but not least, you're in charge of storing the crypto keys securely and making sure it's hard to get hold of them.
Having a dedicated data type makes it very convenient to write a converter with a signature of e.g. Converter&lt;Secret, String&gt; or Converter&lt;Secret, Binary&gt;, as you get full control over serialization.
Alternatively, have a look at https://github.com/bolcom/spring-data-mongodb-encrypt or external crypto tools like HashiCorp Vault.
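To make the dedicated-type idea concrete, here is a minimal sketch of what such a wrapper type and writing converter could look like. The Secret class, the plain "AES" cipher choice and the key handling are illustrative assumptions, not anything prescribed by Spring Data MongoDB; a real implementation should use an authenticated mode such as AES/GCM with a fresh IV per value and a properly managed key.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;

// Hypothetical wrapper type: using Secret instead of String means the
// converter only ever applies to this one field.
final class Secret {
    private final String value;
    Secret(String value) { this.value = value; }
    String getValue() { return value; }
}

@WritingConverter
final class SecretWritingConverter implements Converter<Secret, String> {
    private final SecretKeySpec key; // supplied by your key management, never hard-coded

    SecretWritingConverter(SecretKeySpec key) { this.key = key; }

    @Override
    public String convert(Secret source) {
        try {
            // Illustrative only: prefer AES/GCM with a random IV per value in real code.
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] encrypted = cipher.doFinal(source.getValue().getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(encrypted);
        } catch (Exception e) {
            throw new IllegalStateException("Could not encrypt secret", e);
        }
    }
}

A matching @ReadingConverter implementing Converter&lt;String, Secret&gt; would reverse the process, and both converters would then be registered with your Mongo configuration (for example via MongoCustomConversions) so that Spring Data applies them whenever it maps the Secret field.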

Validate request input - Phoenix/Elixir

I'm struggling to find something in the documentation that seems like it should be there...
In Phoenix I see validation at the point of creating an Ecto changeset, but I'm not seeing much prior to that, for validating the actual user input itself.
I'm not really a fan of exposing my data models across API boundaries, and I would rather just have structs representing the requests and responses, as they are likely very different shapes to my actual data models.
I'd like a way of converting user input to a struct and using some kind of validation framework to determine if the input is valid before I even think about hitting a database.
I've found https://github.com/CargoSense/vex and have gone down the route of converting the input to a struct and using its validations, but there are a few things that worry me about this approach, namely:
I hear that there are issues with atoms in Elixir, and since structs are basically atom-keyed maps, am I going to run into the atom-exhaustion issue by converting user input to these?
I also have some structs that contain nested structs. I'm currently checking the default value provided and, if it's a struct, doing some magic based on the answers in "In Elixir how do you initialize a struct with a map variable" to automatically convert a nested map into my nested struct. But again, I'm not sure this is sensible.
The validations I'm defining in one DSL will be very similar to those in my Ecto models, and I would rather use a single mechanism for both.
Basically, how would you go about validating user input correctly in a Phoenix app? Am I on the right lines, or way off?

Connection is based on `array` - is this a design style guide for designing a Relay server?

In connection/arrayconnection.js, it seems all the functions tend to work with arrays.
For example, offsetToCursor is the only way to generate a cursor. Does this mean it's a design pattern I must follow, or does it imply that I should generate cursors myself when using something other than an array? If I'm planning to use MongoDB, should I make the database interface behave like a static array?
BTW:
As a newbie to web development, I'm a bit confused about how to implement a proper Relay server.
Is there a guide for designing a graphql-relay server? Should I follow everything in graphql-relay-js? Which database does Facebook use with its Relay server - MySQL or something else?
I'm not sure whether asking this here is appropriate, but the topic of graphql-relay-js is rarely covered on the web.
Thanks a lot, and forgive my impoliteness.
var PREFIX = 'arrayconnection:';

/**
 * Creates the cursor string from an offset.
 */
export function offsetToCursor(offset: number): ConnectionCursor {
  return base64(PREFIX + offset);
}
Additional question:
Maybe I can get some ideas from developers.facebook.com/docs/graph-api.
It seems I should do an array-style cache for pagination lookups (not sure about this).
But the Graph API looks a bit different from graphql-relay-js (is the Graph API still partly in the old RESTful style?).
What is the relationship between the Graph API and graphql-relay-js? Is graphql-relay-js a common design guide for designing a GraphQL server at Facebook?
Thanks a lot! Please give me some hints.
Connection is a design pattern that your schema may implement if you want Relay to perform efficient pagination. How it gets implemented on the backend is an implementation detail. It may be backed by something array-like, or it may not (think about something like the infinite scrolling news feed on Facebook, which is ranked by a terribly sophisticated backend service: this is clearly not backed by an array). We provide the arrayconnection.js module as a way of demonstrating how this can be done if your data source has that array-like nature. If it does not, or cannot be efficiently converted to it, you are better off implementing something from scratch.
Cursors are opaque identifiers. You could use an array index or some kind of primary key if you are using an array source or a typical database backend (like MySQL), but again the details are implementation-specific and should be chosen to suit your back end. The only requirement is that the cursor should encode whatever information you need on the server to be able to return the next page of results after (or before) that point.
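To make that concrete, here is a rough, backend-agnostic sketch of the same idea as offsetToCursor, written in Java purely for illustration. It wraps an arbitrary stable identifier (for example a MongoDB _id rendered as a string) rather than an array offset; the class name and prefix are made up, not part of any library.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

final class Cursors {
    // Arbitrary prefix, analogous to 'arrayconnection:' above.
    private static final String PREFIX = "mongoconnection:";

    // Turn a stable identifier into an opaque cursor string.
    static String toCursor(String id) {
        return Base64.getEncoder()
                .encodeToString((PREFIX + id).getBytes(StandardCharsets.UTF_8));
    }

    // Recover the identifier so the server can resume pagination after (or before) it.
    static String fromCursor(String cursor) {
        String decoded = new String(Base64.getDecoder().decode(cursor), StandardCharsets.UTF_8);
        return decoded.substring(PREFIX.length());
    }
}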
graphql-relay-js is just a collection of modules that provide some helpers for building Relay-compatible GraphQL schemas in JavaScript. The schema provides a uniform interface to your data, but the actual underlying storage can be anything you want to plug into it (a MySQL database, an object in memory, some REST service). For simple examples, look in the examples directory in the Relay repo. As an illustration of how you can put a schema in front of something that is not a traditional database, this is an example of a schema that reads its data out of a Git repo, with the help of indices in Redis and cached data in memcached.
Stay away from developers.facebook.com/docs/graph-api; despite the "graph" in the name this is an entirely different thing and has nothing to do with the GraphQL hierarchical query language that Relay uses.

Is there any data typing for the parameters in HTTP POST?

I am building a RESTful API using a Ruby server and a MongoDB database. The database stores objects as they are, preserving their natural data types (at least those that it supports).
At the moment I am using HTTP GET to pass params to the API, and understandably everything in my database gets stored as strings (because that's what the Ruby code sees when it accesses the params[] hash). After deployment, the API will use exclusively HTTP POST, so my question is whether it's possible to specify the data types that get sent via POST individually for each parameter (say I have a "uid" which is an integer and a "name" which is a string), or do I need to cast them within Ruby before passing them on to my database?
If I need to cast them, are there any issues related to it?
No, it's not possible.
POST variables are just string key/value pairs.
You could, however, implement your own higher-level logic.
For example, a common practice is to add a suffix to the names: everything that ends with _i gets parsed as an integer, and so on.
However, what benefit would preserving the types bring? Or, better asked: how do you output them? Is it only for storage?
Then it should not be a problem to convert the strings to proper types if that benefits your application, and to cast them back to strings before delivering them.
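As a rough illustration of the suffix convention mentioned above (sketched here in Java rather than Ruby, with made-up names), the server-side coercion could look something like this:

import java.util.HashMap;
import java.util.Map;

final class ParamCoercion {
    // Coerce string form parameters based on a naming convention:
    // keys ending in "_i" are parsed as integers, everything else stays a string.
    static Map<String, Object> coerce(Map<String, String> params) {
        Map<String, Object> typed = new HashMap<>();
        for (Map.Entry<String, String> entry : params.entrySet()) {
            String key = entry.getKey();
            if (key.endsWith("_i")) {
                typed.put(key.substring(0, key.length() - 2), Integer.parseInt(entry.getValue()));
            } else {
                typed.put(key, entry.getValue());
            }
        }
        return typed;
    }
}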

Appropriate data structure for flat file processing?

Essentially, I have to get a flat file into a database. The flat files come in with the first two characters on each line indicating which type of record it is.
Do I create a class for each record type with properties matching the fields in the record? Should I just use arrays?
I want to load the data into some sort of data structure before saving it in the database so that I can use unit tests to verify that the data was loaded correctly.
Here's a sample of what I have to work with (BAI2 bank statements):
01,121000358,CLIENT,050312,0213,1,80,1,2/
02,CLIENT-STANDARD,BOFAGB22,1,050311,2359,,/
03,600812345678,GBP,fab1,111319005,,V,050314,0000/
88,fab2,113781251,,V,050315,0000,fab3,113781251,,V,050316,0000/
88,fab4,113781251,,V,050317,0000,fab5,113781251,,V,050318,0000/
88,010,0,,,015,0,,,045,0,,,100,302982205,,,400,302982205,,/
16,169,57626223,V,050311,0000,102 0101857345,/
88,LLOYDS TSB BANK PL 779300 99129797
88,TRF/REF 6008ABS12300015439
88,102 0101857345 K BANK GIRO CREDIT
88,/IVD-11 MAR
49,1778372829,90/
98,1778372839,1,91/
99,1778372839,1,92
I'd recommend creating classes (or structs, or whatever value type your language supports), as
record.ClientReference
is so much more descriptive than
record[0]
and, if you're using the (wonderful!) FileHelpers Library, then your terms are pretty much dictated for you.
Validation logic usually has at least 2 levels, the grosser level being "well-formatted" and the finer level being "correct data".
There are a few separate problems here. One issue is that of simply verifying the data, or writing tests to make sure that your parsing is accurate. A simple way to do this is to parse into a class that accepts a given range of values, and throws the appropriate error if not,
e.g.
public void setField1(int i)
{
    // Reject values outside the range this record type allows.
    if (i > 100) throw new InvalidDataException("field1 out of range: " + i);
}
Creating different classes for each record type is something you might want to do if the parsing logic is significantly different for different codes, so you don't have conditional logic like
public void setField2(String s)
{
    if (field1 == 88 && s.equals ...
    else if (field2 == 22 && s
}
yechh.
When I have had to load this kind of data in the past, I have put it all into a work table with the first two characters in one field and the rest in another. Then I have parsed it out to the appropriate other work tables based on the first two characters. Then I have done any cleanup and validation before inserting the data from the second set of work tables into the database.
In SQL Server you can do this through a DTS package (2000) or an SSIS package, and with SSIS you may be able to process the data on the fly without storing it in work tables first. The process is similar, though: use the first two characters to determine the data flow branch to use, then parse the rest of the record into some type of holding mechanism, and then clean up and validate before inserting. I'm sure other databases also have some type of mechanism for importing data and would use a similar process.
I agree that if your data format has any sort of complexity you should create a set of custom classes to parse and hold the data, perform validation, and do any other appropriate model tasks (for instance, return a human readable description, although some would argue this would be better to put into a separate view class). This would probably be a good situation to use inheritance, where you have a parent class (possibly abstract) define the properties and methods common to all types of records, and each child class can override these methods to provide their own parsing and validation if necessary, or add their own properties and methods.
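As a rough sketch of that inheritance approach (the field names are guesses based on the sample above, not taken from the BAI2 specification):

// Abstract parent holding behaviour common to every record type.
abstract class Bai2Record {
    protected final String[] fields;

    protected Bai2Record(String line) {
        // Drop the trailing '/' continuation marker, then split on commas.
        this.fields = line.replaceAll("/$", "").split(",", -1);
    }

    abstract String recordCode();

    void validate() {
        if (!recordCode().equals(fields[0])) {
            throw new IllegalArgumentException(
                "Expected record code " + recordCode() + " but got " + fields[0]);
        }
    }
}

// One subclass per record code; each exposes and validates its own fields.
class FileHeaderRecord extends Bai2Record {
    FileHeaderRecord(String line) { super(line); }

    String senderId()   { return fields[1]; } // e.g. 121000358 in the sample
    String receiverId() { return fields[2]; } // e.g. CLIENT in the sample

    @Override
    String recordCode() { return "01"; }
}

A small factory that switches on the first two characters of each line could then hand the line to the right subclass, so every record is parsed and validated before anything touches the database.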
Creating a class for each type of row would be a better solution than using Arrays.
That said, however, in the past I have used Arraylists of Hashtables to accomplish the same thing. Each item in the arraylist is a row, and each entry in the hashtable is a key/value pair representing column name and cell value.
Why not start by designing the database that will hold the data? Then you can use Entity Framework to generate the classes for you.
Here's a wacky idea:
If you were working in Perl, you could use DBD::CSV to read data from your flat file, provided you gave it the correct values for the separator and EOL characters. You'd then read rows from the flat file by means of SQL statements; DBI will turn them into standard Perl data structures for you, and you can run whatever validation logic you like. Once each row passes all the validation tests, you'd be able to write it into the destination database using DBD::whatever.
-steve
