I am getting accustomed to the Ruby RDF JSON-LD toolset and currently I'm trying to extract knowledge from plain JSON files which do not carry a JSON-LD context. Therefore I need a way to set a locally-provided context when loading them.
So what I am doing is using JSON::LD::API.toRdf. But it seems I can't set a local context directly in its interface. How can it be done?
The JSON-LD API (Spec and Ruby) describes an expandContext option. In Ruby, this can take a string (interpreted as a URL), something that responds to #read (such as a StringIO or File), a Hash, an Array, or an instance of JSON::LD::Context.
Typically, if you want to pass a locally-provided context, you can do so as a Hash. A common pattern I use is the following:
context = JSON.parse(%({
  "@context": {...}
}))
graph = RDF::Graph.new
JSON::LD::API.toRdf(input, expandContext: context) { |statement| graph << statement }
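Since the option also accepts anything that responds to #read, you could equally keep the context in a local file and hand the gem an IO instead of parsing it yourself. A minimal sketch; the context.jsonld filename is just a placeholder, and input is your plain JSON document as above:

require 'json/ld'

graph = RDF::Graph.new
File.open("context.jsonld") do |context_io|
  # The gem reads and parses the context itself when given an IO.
  JSON::LD::API.toRdf(input, expandContext: context_io) do |statement|
    graph << statement
  end
end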
Have a look at the documentation here, or feel free to find me on Gitter.
This is working, but it feels sloppy to me. I'm wondering if it is a code smell or if there is a better way to accomplish this result. The basic question is how to stub some arbitrary object in Ruby.
I'm testing an edge case: that the final value of a parsing helper method correctly formats the result of a Google Analytics query (hence the odd assert statement). The incoming data is a Google Analytics object whose data is nested inside; essentially we have to call result.data["rows"]. The whole purpose of the Struct here is to give my method's internals the ability to send that #data message. The test passes/fails appropriately, but like I said, I'm wondering if this was the best way to go about it, or whether, for example, I should get my data out of the GA result object before sending it to be parsed.
My approach from the test; effectively it calls parse_monthly_chart_data(@ga_result):
def test_parse_monthly_chart_data_with_good_values
  typical_data = {"rows" => [["0000", "194346"]...more arrays...]}
  typical_vals = typical_data["rows"].to_h.values.map(&:to_i)
  expected_result = typical_vals[-30..-1].inject(&:+)
  Struct.new("GaResult") { def data; end }
  @ga_result = Struct::GaResult.new
  @ga_result.stub :data, typical_data do
    assert_equal(ga.send(:parse_monthly_chart_data, @ga_result).flatten.last, expected_result)
  end
end
Edit: I've solved part of this issue by replacing stub with Mocha's implementation. I'm still wondering if this is a code smell.
Not at all. I use this type of thing all the time. What you're doing is called stubbing, and using a Struct to accomplish it is no different from using a testing framework's implementation of a stub.
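For comparison, here is roughly what the Mocha version you mention in your edit might look like (a sketch, reusing typical_data and expected_result from your test):

# Anonymous Mocha stub that responds to #data and returns typical_data.
ga_result = stub(data: typical_data)
assert_equal(ga.send(:parse_monthly_chart_data, ga_result).flatten.last, expected_result)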
For further reading on mocks, stubbing, faking, etc. see this SO Question.
If I want to handle many parameters from, for example, a web request and pass them between classes (layers), what is the preferred way?
I know it is easy to pass an arbitrary number of optional parameters through the constructor as a map.
I can also pass a map directly, and if the keys match the receiving object's property names it should work in a similar way.
Or I could just pass the map and then instantiate, for example, domain classes from it.
I could use a special class as a data carrier with a fixed number of properties.
I have a domain class (not database domain but business domain) that needs data from the user interface.
What is the best way to pass data through the layers, and how do I know that all required data is being passed if I use a data structure with key/value pairs, like a map? If I had a more static constructor with a fixed number of parameters, then I would know that the parameters are being passed. But how do I ensure this when using a more dynamic approach? With unit tests?
Well, in Grails, command objects are an excellent choice. You can pass them up through the various layers without issues. They are pretty analogous to domain classes, only without the persistence functionality.
Otherwise I would recommend using plain old Groovy classes (POGOs). Groovy allows you to keep your code very short (compared to Java and many other languages) and offers very handy AST transforms for common design patterns you might need (e.g. @Canonical, @Immutable, @IndexedProperty, @DelegatesTo...).
Compared to command objects, POGOs do require you to write things like validation code yourself, but this can be as simple as:
boolean isValid() {
    name && lastName && countryCode in ['US', 'CA']
}
You can keep static factory methods in a POGO to help you construct instances in various circumstances. Plus, you can define more than one class per file, so you can keep the POGO code wherever it makes the most sense. I would definitely prefer this approach to simple maps because the code is better encapsulated, and POGOs can be unit tested and documented.
I'm new to Ruby; I come from C/C++.
I'm currently working on a data integration between a partner and me.
I get the API response with HTTParty and then parse it with JSON.parse.
The resulting hash is nested several levels deep (around 5-6 levels).
Initially, since I'm new to Ruby, I wanted to develop naturally, without worrying about the number of methods or the number of lines per method; the only goal was to clearly separate each extraction from the others in distinct methods.
The extraction from this nested hash is conditional; what I mean is that there are multiple objects of the same structure inside the hash.
My extraction looks something like this:
if get_flight(json_response) == blabla_id
  stored_blabla_id = blabla_id
end
and then, later:
get_departure_place_from_flight(json_response, stored_blabla_id)
I read many articles about objectifying hashes, like this good one, or about building extractor engines that fetch values based on keys passed as arguments.
Since I'm getting a really huge JSON response, and since I'm not extracting all the values but only specific ones, I'm wondering whether that wouldn't be bad for usage/performance.
My point: the class works properly, BUT I have 25 methods in one class, and the bodies of these methods are little more than direct accesses into the nested hash. I find it very ugly.
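To give an idea, one of these methods looks roughly like this (the key names here are made up; only the shape is representative):

# Hypothetical example; the real keys differ, but each method is basically
# a chain of hash/array lookups like this.
def get_departure_place_from_flight(json_response, flight_id)
  flight = json_response["data"]["flights"].find { |f| f["id"] == flight_id }
  flight && flight["departure"]["airport"]["code"]
end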
I was wondering, since I have 2 request methods for the API, 1 method dedicated to constructing the URL, and the others dedicated to extraction from the JSON response, whether it is appropriate to split the class into modules?
Or is this kind of ugly class just common when parsing JSON and extracting values from an arbitrary API?
OK, Ruby gurus, this is a hard one to describe in the title, so bear with me for this explanation:
I'm looking to pass a string that represents a variable: not an instance, not the collection of properties that make up an object, but the actual variable: the handle to the object.
The reason for this is that I am dealing with resources that can be located on the filesystem, on the network, or in memory. I want to create a URI handler that can handle each of these in a consistent manner, so I can have schemes like, e.g.:
file://
http://
ftp://
inmemory://
you get the idea. It's the last one that I'm trying to figure out: is there some way to get a string representation of a reference to an object in Ruby, and then use that string to create a new reference? I'm truly interested in marshalling the reference, not the object. Ideally there would be something like taking Object#object_id, which is easy enough to get, and using it to create a new variable elsewhere that refers to the same object. I'm aware that this could be really fragile and so is an unusual use case: it only works within one Ruby process for as long as there is an existing variable to keep the object from being garbage collected, but those are both true for the inmemory scheme I'm developing.
The only alternatives I can think of are:
marshal the whole object and cram it into the URI, but that won't work because the data in the object is an image buffer - very large
Create a global or singleton purgatory area to store a variable for retrieval later using e.g. a hash of object_id:variable pairs. This is a bit smelly, but would work.
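A rough sketch of what I have in mind for that purgatory area (module and method names invented for illustration):

# Hypothetical "purgatory" registry: holding the object in the hash also
# keeps it from being garbage collected while the URI is in circulation.
module InMemoryRegistry
  @objects = {}

  def self.store(obj)
    @objects[obj.object_id] = obj
    "inmemory://#{obj.object_id}"
  end

  def self.fetch(uri)
    @objects[uri[%r{\Ainmemory://(\d+)\z}, 1].to_i]
  end
end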
Any other thoughts, StackOverflowers?
There's ObjectSpace._id2ref:
f = Foo.new #=> #<Foo:0x10036c9b8>
f.object_id #=> 2149278940
ObjectSpace._id2ref(2149278940) #=> #<Foo:0x10036c9b8>
In addition to the caveats about garbage collection, ObjectSpace carries a large performance penalty in JRuby (so much so that it's disabled there by default).
Variables aren't objects in Ruby. You not only cannot marshal/unmarshal them, you can't do anything with them. You can only do something with objects, which variables aren't.
(It would be really nice if they were objects, though!)
You could look into MagLev, which is an alternative Ruby implementation built on top of VMware's GemStone. It has a distributed object model which might suit your use case.
Objects are saved in the central GemStone instance (with some nifty caching) and can be accessed by any number of remote worker instances. All of the workers therefore act on the same object space and can access the very same objects simultaneously. You can even do things like having the global garbage collector run on a single Ruby instance, or seamlessly moving execution at any point to different nodes (while preserving all the stack frames) using continuations.
I was curious whether anyone had insight into the best way for an object to load data from a file in Ruby. Is there a convention? There are two ways I can think of accomplishing this:
Have the initialize method accept a path or file and parse the data within the initialize method, setting the object variables as well.
Have the main "runner" code open the file and parse it, then pass the correct arguments to your constructor.
I am also aware that I could support both approaches through an options hash or *args and checking its size, but I do not have any need to implement both.
I would use the second option, combined with providing the path info as an argument to the main code. This makes it more portable and keeps the object decoupled from the source of the data.
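A minimal sketch of that arrangement (class, field, and file-format details are made up for illustration):

# Hypothetical domain object: it never touches the filesystem itself.
class Measurement
  attr_reader :label, :value

  def initialize(label, value)
    @label = label
    @value = value
  end
end

# "Runner" code: opens and parses the file, then passes plain arguments in.
path = ARGV.first
label, raw_value = File.read(path).split(",")
measurement = Measurement.new(label.strip, raw_value.to_f)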