I understand the other Reader subclasses in java.io, but I can't think of a use case where I'd need a CharArrayReader or a StringReader when I already have data available as String or char[].
Is it because of compatibility? To "feed" a String or char[] into something that expects a Reader as a parameter?
I would not call it compatibility; it's flexibility.
You are right: some libraries that deal with character-based data provide a method accepting a Reader, so the user of that library can choose any mechanism to feed data into it.
If you have a file on the hard disk, use a FileReader. If you have an arbitrary InputStream, use an InputStreamReader (with an appropriate encoding). If you already have a simple String in your code, use a StringReader. And so on ...
In addition to what has been answered already, these classes are very handy when you write unit tests for a method that expects a Reader.
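For instance, a minimal sketch, assuming an invented countLines helper that only cares about receiving a Reader:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ReaderDemo {
    // A library-style method that only cares about getting a Reader.
    static long countLines(Reader source) throws IOException {
        try (BufferedReader br = new BufferedReader(source)) {
            return br.lines().count();
        }
    }

    public static void main(String[] args) throws IOException {
        // In a test (or anywhere the data is already in memory),
        // feed the method a StringReader instead of a FileReader.
        System.out.println(countLines(new StringReader("first\nsecond\nthird"))); // 3
    }
}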
I'm using protobuf (and protoc) for the first time with Go.
message MyProtoStruct {
string description = 1;
}
I'm a little bit confused:
should I use methods to get values (like MyProtoStruct.GetDescription()) or
should I use fields directly (like MyProtoStruct.Description)?
You can use either. Note that in proto2 generated code (as opposed to proto3; proto2 is the default), fields in protocol buffer messages are always pointers. In that case, the getters return a zero value if the field is nil. That's very convenient, since it's quite difficult to write code that uses fields directly without causing nil pointer dereferences when a field is missing.
In proto3 generated code (which I'd suggest you use, for more than one reason), I'd suggest you use fields directly. In proto2 generated code, I'd suggest using the get methods.
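To make the difference concrete, here is a hypothetical, simplified stand-in for what proto2-style generated code for MyProtoStruct roughly looks like (the real generated type has additional fields and methods):

package main

import "fmt"

// Simplified stand-in for a proto2-generated message: scalar fields are pointers.
type MyProtoStruct struct {
	Description *string
}

// Proto2-generated getters are nil-safe.
func (m *MyProtoStruct) GetDescription() string {
	if m != nil && m.Description != nil {
		return *m.Description
	}
	return ""
}

func main() {
	var msg *MyProtoStruct // message is missing entirely

	fmt.Println(msg.GetDescription()) // prints an empty string; the getter handles nil
	// fmt.Println(*msg.Description)  // would panic with a nil pointer dereference
}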
As the title says, could anybody explain the use of parse_transform with ms_transform?
What is the difference between using it and not using it?
The -compile({parse_transform, ms_transform}). syntax invokes a parse transform.
A parse transform is a module which the compiler calls after the file or input has been parsed. The module is called with the full abstract syntax of the whole module and must return new abstract syntax for the whole module. The parse transform is allowed to do whatever it wants as long as the result is legal Erlang syntax. It is like a super macro facility which works on the whole module, not just single function calls. The resulting module is then compiled. You can have many parse transforms.
Parse transforms are typically used to do compile-time evaluation and code transformations. The ets:fun2ms call mentioned by #P_A is a typical example of this as it takes a fun and at compile-time transforms this into a match specification, see Matchspecs and ets:fun2ms. But parse transforms allow you to do much more, for example add and remove functions. An example of this is a parse transform which generates access functions for all the fields in a record.
It is a very powerful tool, but unfortunately easy to get wrong and so create a real mess. There are, however, some 3rd party support tools which can be very helpful.
The ms_transform module implements a parse transform that translates fun syntax into match specifications. For example, ets:fun2ms uses it.
You can also use
-include_lib("stdlib/include/ms_transform.hrl").
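For illustration, a small sketch (module name and table layout are invented) of what the transform enables:

-module(ms_demo).
-export([adult_names/1]).

-include_lib("stdlib/include/ms_transform.hrl").

%% With the include (or the parse_transform compile option), the compiler
%% rewrites the fun below into a match specification at compile time,
%% roughly: [{{'$1','$2'},[{'>=','$2',18}],['$1']}]
adult_names(Tab) ->
    MS = ets:fun2ms(fun({Name, Age}) when Age >= 18 -> Name end),
    ets:select(Tab, MS).

Without the transform, ets:fun2ms would only see an ordinary fun at runtime and could not turn it into a match specification.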
david4dev's answer to this question claims that there are three equivalent ways to convert an object to a JSON string using the json library:
JSON.dump(object)
JSON.generate(object)
object.to_json
and two equivalent ways to convert a JSON string to an object:
JSON.load(string)
JSON.parse(string)
But looking at the source code, each of them seems to be implemented quite differently, and there are some behavioral differences between them (e.g., 1).
What are the differences among them? When to use which?
TL;DR:
In general:
To generate JSON, use to_json (or the equivalent JSON::generate).
To parse JSON, use JSON::parse.
For some special use cases, you may want dump or load, but it's unsafe to use load on data you didn't create yourself.
Extended Explanation:
JSON::dump vs JSON::generate
As part of its argument signature, JSON::generate allows you to set options such as indent levels and whitespace particulars. JSON::dump, on the other hand, calls ::generate within itself, with specific pre-set options, so you lose the ability to set those yourself.
According to the docs, JSON::dump is meant to be part of the Marshal::dump implementation scheme. The main reason you'd want to explicitly use ::dump yourself would be that you are about to stream your JSON data (over a socket for instance), since ::dump allows you to pass an IO-like object as the second argument. Unfortunately, the JSON data being produced is not really streamed as it is produced; it is created en masse and only sent once the JSON is fully created. This makes having an IO argument useful only in trivial cases.
The final difference between the two is that ::dump can also take a limit argument that causes it to raise an ArgumentError when a certain nesting depth is exceeded.
Comparison to #to_json
#to_json accepts options as arguments, so internal implementation aside, JSON::generate(foo, opts) and foo.to_json(opts) are equivalent.
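A quick sketch of the generation side (output shown in comments):

require 'json'

data = { "name" => "Ada", "langs" => ["Ruby", "C"] }

# ::generate and #to_json accept formatting options; these two are equivalent:
JSON.generate(data)       # => '{"name":"Ada","langs":["Ruby","C"]}'
data.to_json              # => '{"name":"Ada","langs":["Ruby","C"]}'

# ::dump uses fixed options, but can write straight to an IO-like object
# and accepts a nesting-depth limit as its last argument:
JSON.dump(data)           # => '{"name":"Ada","langs":["Ruby","C"]}'
JSON.dump(data, $stdout)  # writes the JSON to $stdout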
JSON::load vs JSON::parse
Similar to ::dump calling ::generate internally, ::load calls ::parse internally. ::load, like ::dump, may also take an IO object, but again, the source is read all at once, so streaming is limited to trivial cases. However, unlike the ::dump/::generate duality, both ::load and ::parse accept options as part of their argument signatures.
::load can also be passed a proc, which will be called on every Ruby object parsed from the data; it also comes with a warning that ::load should only be used with trusted data. ::parse has no such restriction, and therefore JSON::parse is the correct choice for parsing untrusted data sources like user inputs and files or streams with unknown contents.
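And a corresponding sketch of the parsing side:

require 'json'

json = '{"name":"Ada","langs":["Ruby","C"]}'

# ::parse is the safe default, especially for untrusted input:
JSON.parse(json)                         # => {"name"=>"Ada", "langs"=>["Ruby", "C"]}
JSON.parse(json, symbolize_names: true)  # => {:name=>"Ada", :langs=>["Ruby", "C"]}

# ::load also accepts an IO source and an optional proc, and by default may
# instantiate arbitrary classes via create_additions, so reserve it for data
# you generated yourself:
JSON.load(json)                          # => {"name"=>"Ada", "langs"=>["Ruby", "C"]}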
I need to generate a private key in Go. I am using the rsa package (http://golang.org/pkg/crypto/rsa/). In particular, it seems that I would like to use the GenerateKey method. One of the parameters for this method is of type io.Reader (http://golang.org/pkg/io/#Reader), but it seems like there are many different types of readers. Is there any advantage to using one type of Reader over another? Thanks!
I believe that in this particular case the suitable io.Reader would be, for example, crypto/rand.Reader.
var Reader io.Reader
Reader is a global, shared instance of a cryptographically strong pseudo-random generator. On Unix-like systems, Reader reads from /dev/urandom. On Windows systems, Reader uses the CryptGenRandom API.
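A minimal sketch of generating a key with it (2048 bits is just a commonly used size, not something the rsa package mandates):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"log"
)

func main() {
	// crypto/rand.Reader is the cryptographically secure randomness source
	// the rsa package expects.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(key.N.BitLen()) // 2048
}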
I think I understand StringIO somewhat as being similar to Java's StringBuffer class, but I don't really understand it fully. How would you define it and its purpose/possible uses in Ruby? Just hoping to clear up my confusion.
No, StringIO is more similar to StringReader/StringWriter than to StringBuffer.
In Java StringBuffer is the mutable version of String (since String is immutable).
StringReader/StringWriter are handy classes meant to be used when you want to fake file access. You can read from or write to a String with the same stream-oriented interface as Reader/Writer; it is immensely useful in unit testing.
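A small sketch of the same idea in Ruby:

require 'stringio'

# Fake an IO object: code written against the IO interface can read from
# and write to an in-memory string instead of a real file.
input = StringIO.new("line 1\nline 2\n")
input.gets        # => "line 1\n"

output = StringIO.new
output.puts "hello"
output.string     # => "hello\n"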