Practical approach to pretty print vs. string conversion - go

I'd like a simple way to log data using pretty prints rather than plain type->string conversions, without interfering with data transfer through strings.
I can add a String() method to the type, which the fmt package will then pick up automatically, and fmt is generally what is used for logging output.
However, this is likely to interfere with other domains that rely on type->string conversion and default to the .String() mechanism (maybe there is a better standard interface to use when "give me this thing as a scannable string" is what's wanted?).
What is the "right Go way", or a practical approach, for writing type->string converters intended for data I/O - such as HTTP URI params or database I/O - versus pretty printing to logs?

Related

How to customize a serialization

I'm a newbie with GraphQL and SPQR. I would like to serialize my dates with a custom format. How can I do it?
The best answer I'd offer is: don't! SPQR serializes all temporal scalars as ISO 8601 strings in the UTC zone for a reason. It is the most portable format, one that any client can easily parse and understand, and any conversion and display logic is better left to the client itself.
If this is for some reason impossible (e.g. backwards compatibility with a legacy client), your best bet is to provide your own scalar implementations. In the future there might be a feature to avoid this, but currently you have to implement your own scalars and a TypeMapper that will map the desired Java types to those scalars. See the existing ScalarMapper for inspiration. Once you have the mapper, register it via generator.withTypeMappers.

Using the unicode package for validation in Go?

The unicode package in Go provides functions such as unicode.IsDigit and unicode.IsUpper. Are these safe to use for form input validation, password validation in particular?
How would you approach form validation in Go without using a third-party package? From what I can gather, regex is expensive and should be avoided if possible; is this a viable solution?
Here's an example using the unicode package:
https://play.golang.org/p/6XLSqPim54E
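(The linked playground isn't reproduced here; a minimal check in that spirit, with made-up requirements of at least 8 runes, one upper-case letter and one digit, might look like the following sketch.)

```go
package main

import (
	"fmt"
	"unicode"
)

// validPassword is a sketch with made-up requirements: at least 8 runes,
// containing at least one upper-case letter and one digit.
func validPassword(s string) bool {
	runes := []rune(s)
	if len(runes) < 8 {
		return false
	}
	var hasUpper, hasDigit bool
	for _, r := range runes {
		switch {
		case unicode.IsUpper(r):
			hasUpper = true
		case unicode.IsDigit(r):
			hasDigit = true
		}
	}
	return hasUpper && hasDigit
}

func main() {
	fmt.Println(validPassword("Sup3rsecret")) // true
	fmt.Println(validPassword("too short"))   // false: no upper case, no digit
}
```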
I wouldn't say that regex is expensive and should be avoided if possible; it is the best tool for your case. Although there is nothing wrong with using the unicode package for password validation, it might be more readable for other programmers to use a single regexp, where you can express all your password requirements in one regular expression.
Regarding regex performance:
You can read about the Go regex implementation's performance here. I wouldn't worry about it unless you're implementing a very performance-critical tool where you can actually measure that regex slows your program down.

When to use dump vs. generate vs. to_json and load vs. parse in Ruby's JSON lib?

david4dev's answer to this question claims that there are three equivalent ways to convert an object to a JSON string using the json library:
JSON.dump(object)
JSON.generate(object)
object.to_json
and two equivalent ways to convert a JSON string to an object:
JSON.load(string)
JSON.parse(string)
But looking at the source code, each of them seems to be implemented quite differently, and there are some differences in behavior between them (e.g., 1).
What are the differences among them? When to use which?
TL;DR:
In general:
Use to_json (or the equivalent JSON::generate).
Use JSON::parse.
For some special use cases, you may want dump or load, but it's unsafe to use load on data you didn't create yourself.
Extended Explanation:
JSON::dump vs JSON::generate
As part of its argument signature, JSON::generate allows you to set options such as indent levels and whitespace particulars. JSON::dump, on the other hand, calls ::generate within itself, with specific pre-set options, so you lose the ability to set those yourself.
According to the docs, JSON::dump is meant to be part of the Marshal::dump implementation scheme. The main reason you'd want to explicitly use ::dump yourself would be that you are about to stream your JSON data (over a socket for instance), since ::dump allows you to pass an IO-like object as the second argument. Unfortunately, the JSON data being produced is not really streamed as it is produced; it is created en masse and only sent once the JSON is fully created. This makes having an IO argument useful only in trivial cases.
The final difference between the two is that ::dump can also take a limit argument that causes it to raise an ArgumentError when a certain nesting depth is exceeded.
Comparison to #to_json
#to_json accepts options as arguments, so internal implementation aside, JSON::generate(foo, opts) and foo.to_json(opts) are equivalent.
JSON::load vs JSON::parse
Similar to ::dump calling ::generate internally, ::load calls ::parse internally. ::load, like ::dump, may also take an IO object, but again, the source is read all at once, so streaming is limited to trivial cases. However, unlike the ::dump/::generate duality, both ::load and ::parse accept options as part of their argument signatures.
::load can also be passed a proc, which will be called on every Ruby object parsed from the data; it also comes with a warning that ::load should only be used with trusted data. ::parse has no such restriction, and therefore JSON::parse is the correct choice for parsing untrusted data sources like user inputs and files or streams with unknown contents.

How to Work with Ruby Duck Typing

I am learning Ruby and I'm having a major conceptual problem concerning typing. Allow me to detail why I don't understand this paradigm.
Say I am method chaining for concise code, as you do in Ruby. I have to know precisely what the return type of each method call in the chain is, otherwise I can't know what methods are available on the next link. Do I have to check the method documentation every time?? I'm running into this constantly while working through tutorial exercises. It seems I'm stuck with a process of reference, infer, run, fail, fix, repeat to get code running, rather than knowing precisely what I'm working with while coding. This flies in the face of Ruby's promise of intuitiveness.
Say I am using a third-party library; once again I need to know what types the parameters are allowed to be, otherwise I get a failure. I can look at the code, but there may or may not be any comments or declaration of what type the method is expecting. I understand you code based on what methods are available on an object, not on its type. But then I have to be sure whatever I pass as a parameter has all the methods the library expects, so I still have to do type checking. Do I have to hope and pray everything is documented properly on an interface so I know whether I'm expected to give a string, a hash, a class, etc.?
If I look at the source of a method I can get a list of the methods being called and infer the type expected, but I have to perform that analysis myself.
Ruby and duck typing: design by contract impossible?
The discussions in the preceding Stack Overflow question don't really answer anything other than "there are processes you have to follow", and those processes don't seem to be standard: everyone has a different opinion on which process to follow, and the language has zero enforcement. Method Validation? Test-Driven Design? Documented API? Strict Method Naming Conventions? What's the standard and who dictates it? What do I follow? Would these guidelines solve this concern: https://stackoverflow.com/questions/616037/ruby-coding-style-guidelines? Are there editors that help?
Conceptually I don't get the advantage either. You need to know what methods are needed by anything you call, so you are doing typing regardless whenever you code. You just aren't informing the language or anyone else explicitly, unless you decide to document it. And then you are stuck doing all type checking at runtime instead of while coding. I've done PHP and Python programming and I don't understand it there either.
What am I missing or not understanding? Please help me understand this paradigm.
This is not a Ruby-specific problem; it's the same for all dynamically typed languages.
Usually there are no guidelines for how to document this either (and most of the time it's not really possible). See for instance map in the Ruby documentation:
map { |item| block } → new_ary
map → Enumerator
What are item, block and new_ary here, and how are they related? There's no way to tell unless you know the implementation or can somehow infer it from the name of the function. Specifying the type is also hard, since new_ary depends on what block returns, which in turn depends on the type of item, which could be different for each element in the Array.
A lot of the time you also stumble across documentation that says an argument is of type Object, which again tells you nothing, since everything is an Object.
OCaml has a solution for this: it supports structural typing, so a function that needs an object with a property foo that's a String will be inferred to be { foo : String } instead of a concrete type. But OCaml is still statically typed.
Worth noting is that this can be a problem in statically typed languages too. Scala has very generic methods on collections, which leads to type signatures like ++[B >: A, That](that: GenTraversableOnce[B])(implicit bf: CanBuildFrom[Array[T], B, That]): That for appending two collections.
So most of the time, you will just have to learn this by heart in dynamically typed languages, and perhaps help improve the documentation of libraries you are using.
And this is why I prefer static typing ;)
Edit: One thing that might make sense is to do what Scala also does. It doesn't actually show you that type signature for ++ by default; instead it shows ++[B](that: GenTraversableOnce[B]): Array[B], which is not as generic but probably covers most of the use cases. So Ruby's map could have a monomorphic type signature like Array<a> -> (a -> b) -> Array<b>. It's only correct for the cases where the list contains values of only one type and the block only returns elements of one other type, but it's much easier to understand and gives a good overview of what the function does.
Yes, you seem to misunderstand the concept. It's not a replacement for static type checking; it's just different. For example, if you convert objects to JSON (for rendering them to the client), you don't care about the actual type of the object, as long as it has a #to_json method. In Java, you'd have to create an IJsonable interface. In Ruby no such overhead is needed.
As for knowing what to pass where and what returns what: memorize it or consult the docs each time. We all do that.
Just the other day I saw a Rails programmer with 6+ years of experience complain on Twitter that he couldn't remember the order of parameters to alias_method: does the new name go first or last?
"This flies in the face of Ruby's promise of intuitiveness."
Not really. Maybe it's just a badly written library. In core Ruby everything is quite intuitive, I dare say.
Statically typed languages with their powerful IDEs have a small advantage here, because they can show you documentation right here, very quickly. This is still accessing documentation, though. Only quicker.
Consider that the design choices of strongly typed languages (C++, Java, C#, et al.) enforce strict declarations of the types passed to and returned by methods. This is because these languages were designed to validate that arguments are correct (and since these languages are compiled, this work can be done at compile time). But some questions can only be answered at run time, and C++ for example has RTTI (Run-Time Type Information) to examine and enforce type guarantees. But as the developer, you are guided by syntax, semantics and the compiler to produce code that follows these type constraints.
Ruby gives you the flexibility to take dynamic argument types and return dynamic types. This freedom enables you to write more generic code (read Stepanov on the STL and generic programming), and gives you a rich set of introspection methods (is_a?, instance_of?, respond_to?, kind_of?, et al.) which you can use dynamically. Ruby enables you to write generic methods, but you can also explicitly enforce design by contract, and handle contract failures by whatever means you choose.
Yes, you will need to use care when chaining methods together, but learning Ruby is not just a matter of a few new keywords. Ruby supports multiple paradigms; you can write procedural, object-oriented, generic, and functional programs. The cycle you are in right now will improve quickly as you learn more about Ruby.
Perhaps your concern stems from a bias towards strongly typed languages (C++, Java, C#, et al.). Duck typing is a different approach: you think differently. Duck typing means that if an object looks like a duck and behaves like a duck, then it is a duck. Everything (almost) is an Object in Ruby, so everything is polymorphic.
Consider templates and generics (C++ has templates, C# and Java have generics, C has macros). You build an algorithm, and then have the compiler generate instances for your chosen types. You aren't doing design by contract with generics, but when you recognize their power, you write less code and produce more.
Some of your other concerns:
third party libraries (gems) are not as hard to use as you fear
Documented API? See Rdoc and http://www.ruby-doc.org/
Rdoc documentation is (usually) provided for libraries
coding guidelines - look at the source for a couple of simple gems for starters
naming conventions - snake case and camel case are both popular
Suggestion - approach an online tutorial with an open mind, do the tutorial (http://rubymonk.com/learning/books/ is good), and you will have more focused questions.

Is accessing Generic Objects bad compared to Strict Data-Type classes in AS3?

I'm having a debate with a friend regarding Generic Objects vs. Strict Data-Type instances access.
If I have a fairly large JSON file to convert to objects and arrays of data in Flash, is it best that I then convert those objects to strict AS3 classes dedicated to each object?
Is there a significant loss of performance depending on the quantity of objects?
What's the technical reason behind this? Does a Generic Object leave a bigger footprint in memory than Strict Data-Type instances of a custom class?
It's hard to answer this question on a generic scale since in the end "it all depends". What it depends on is what type of objects you are working with, how you expose those objects to the rest of the program and what type of requirements you have on your runtime environment.
Generally speaking, generic objects are bad since you no longer have "type security".
Generally speaking, converting objects to typed objects leaves a bigger memory footprint, since the class has to be present at runtime, and it also forces you to convert an untyped object "again" into another type of object, costing some extra CPU cycles.
In the end it kind of boils down to this: if the data that you receive is exposed to the rest of the system, it's generally a good idea to convert it into some kind of typed object.
Converting it to a typed object and then working on that object improves code readability, since you don't have to remember whether the data/key table used "image" or "Image" or "MapImage" as the accessor to retrieve the image info of something.
Also, if you ever change the backend system to provide other/renamed keys, you only have to make the change in one place, instead of changes scattered all over the system.
Hope this answer helps :)
