Which is faster to create and/or apply?
set.Bind(...).For(...).To(vm => vm.PropertyName)
or
set.Bind(...).For(...).To(nameof(TViewModel.PropertyName))
?
I'm typically using:
@coll.find({"lang" => @language, "description" => @description, "location" => @location}, {:limit => @results_needed}).to_a
But there are times when I have an array of "_id"s that I don't want included in the results. Is there a native way to do that? I've been doing a hack with .delete_if, but I would like to keep the database doing as much work as possible.
What about
@coll.find(:_id.ne => array_of_ids)
or
@coll.find(:_id => {:$ne => array_of_ids})
From Not equals in mongo mapper.
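Worth noting: $ne tests against a single value, so to exclude every id in an array you want the $nin operator instead. A minimal sketch against the plain Ruby driver, reusing the query from the question:

@coll.find(
  {"lang" => @language,
   "description" => @description,
   "location" => @location,
   "_id" => {"$nin" => array_of_ids}},
  {:limit => @results_needed}
).to_a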
I am trying to check if a field is Null or Empty.
I have the following script:
return db.Clients.Where(a => string.IsNullOrEmpty(a.ClientName) == false)
.OrderBy(a => a.ClientName);
It seems to work as expected. I was wondering whether the above is the most efficient approach, or whether there are gotchas I may not be aware of that could lead to issues.
AFAIK, String.IsNullOrEmpty cannot be translated, therefore it can be faster to check for null and empty manually and separately, e.g. a => a.ClientName != null && a.ClientName != "".
I am new to programming. I know what XML is. Can anyone please explain in simple terms what XPath and XQuery do? Where are they used?
XPath is a way of locating specific elements in an XML tree.
For instance, given the following structure:
<myfarm>
    <animal type="dog">
        <name>Fido</name>
        <color>Black</color>
    </animal>
    <animal type="cat">
        <name>Mitsy</name>
        <color>Orange</color>
    </animal>
</myfarm>
XPath allows you to traverse the structure, such as:
/myfarm/animal[@type="dog"]/name/text()
which would give you "Fido".
XQuery is an XML query language that makes use of XPath to query XML structures. However, it also allows functions to be defined and called, as well as complex querying of data structures using FLWOR expressions. FLWOR allows for join functionality between data sets defined in XML.
FLWOR article from Wikipedia
Sample XQuery (using some XPath) is:
declare function local:toggle-boolean($b as xs:string)
as xs:string
{
    if ($b = "Yes") then "true"
    else if ($b = "No") then "false"
    else if ($b = "true") then "Yes"
    else if ($b = "false") then "No"
    else "[ERROR] @ local:toggle-boolean"
};
<ResultXML>
    <ChangeTrue>{ local:toggle-boolean(doc("file.xml")/article[@id="1"]/text()) }</ChangeTrue>
    <ChangeNo>{ local:toggle-boolean(doc("file.xml")/article[@id="2"]/text()) }</ChangeNo>
</ResultXML>
XPath is a simple query language for searching an XML DOM; you can compare it to SQL SELECT statements for databases. Many programs that work with XML can evaluate XPath, and it is very widely used, so I recommend learning it.
XQuery is much more powerful and more complicated. It also offers many options for transforming results, it has loops, and so on, but it is still a query language. It is also used as the query language for XML databases. It has a fairly specific usage, so it is probably not necessary to know it at first; in the beginning it is enough to know that it exists and what it can do.
That is a simple explanation; I hope it is sufficient and understandable.
There are some collections - let's say each collection is set of programming languages a developer knows.
{"Alice" => Set["Java", "Python", "C++"], "Bob" => Set["Ruby"], "Charlie" => Set["Ruby", "C++"]}.
I want to group these objects by the collections they belong to - in this case, getting a mapping from sets of developers sharing knowledge of certain languages to the sets of such languages. Every language present in the input will occur exactly once here:
{Set["Alice"] => Set["Java", "Python"], Set["Alice", "Charlie"] => Set["C++"], Set["Bob", "Charlie"] => Set["Ruby"]}
The type of this operation would be Hash[A, Set[B]] => Hash[Set[A], Set[B]]. (In practice plain arrays would most likely be used instead of sets; I'm using sets here to say that order doesn't matter and there are no duplicates.)
I'm not asking how to code this operation (of course if you know a particularly elegant way, feel free to share) - I'm wondering if it has a name. It seems common enough that it should, but I cannot think of anything.
"Reverse Mapping"?
I'd like to read a large XML file that contains over a million small bibliographic records (like <article>...</article>) using libxml in Ruby. I have tried the Reader class in combination with the expand method to read record by record, but I am not sure this is the right approach, since my code eats up memory. Hence, I'm looking for a recipe for conveniently processing the file record by record with constant memory usage. Below is my main loop:
File.open('dblp.xml') do |io|
  dblp = XML::Reader.io(io, :options => XML::Reader::SUBST_ENTITIES)
  pubFactory = PubFactory.new
  i = 0
  while dblp.read do
    case dblp.name
    when 'article', 'inproceedings', 'book' then
      pub = pubFactory.create(dblp.expand)
      i += 1
      puts pub
      pub = nil
      $stderr.puts i if i % 10000 == 0
      dblp.next
    when 'proceedings', 'incollection', 'phdthesis', 'mastersthesis' then
      # ignore for now
      dblp.next
    else
      # nothing
    end
  end
end
The key here is that dblp.expand reads an entire subtree (like an <article> record) and passes it as an argument to a factory for further processing. Is this the right approach?
Within the factory method I then use high-level XPath-like expressions to extract the content of elements, like below. Again, is this viable?
def first(root, node)
  x = root.find(node).first
  x ? x.content : nil
end

pub.pages = first(node, 'pages') # node contains the expanded node from dblp.expand
When processing big XML files, you should use a stream parser to avoid loading everything in memory. There are two common approaches:
Push parsers like SAX, where you react to encountered tags as you get them (see tadman's answer).
Pull parsers, where you control a "cursor" in the XML file that you can move with simple primitives like go up/go down, etc.
I think that push parsers are nice to use if you want to retrieve just some fields, but they are generally messy to use for complex data extraction and often end up as big case ... when ... constructs.
Pull parsers are, in my opinion, a good middle ground between a tree-based model and a push parser. You can find a nice article in Dr. Dobb's Journal about pull parsers with REXML; a sketch of the idea follows below.
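To make the pull model concrete, here is a minimal REXML pull-parser sketch that walks the question's dblp.xml one event at a time; counting <article> records is just a stand-in for real record handling:

require 'rexml/parsers/pullparser'

File.open('dblp.xml') do |io|
  parser = REXML::Parsers::PullParser.new(io)
  count = 0
  while parser.has_next?
    event = parser.pull  # advance the cursor by one event
    # for a start_element event, event[0] is the tag name
    count += 1 if event.start_element? && event[0] == 'article'
  end
  puts "#{count} articles"
end

Only the current event is held in memory, so usage stays constant regardless of file size.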
When processing XML, two common options are tree-based and event-based parsing. The tree-based approach typically reads in the entire XML document and can consume a large amount of memory. The event-based approach uses very little additional memory but doesn't do anything unless you write your own handler logic.
The event-based model is employed by the SAX-style parser, and derivative implementations.
Example with REXML: http://www.iro.umontreal.ca/~lapalme/ForestInsteadOfTheTrees/HTML/ch08s01.html
REXML: http://ruby-doc.org/stdlib/libdoc/rexml/rdoc/index.html
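For comparison, a minimal SAX-style sketch using REXML's stream parser on the same dblp.xml (the ArticleCounter listener is made up for illustration and only counts <article> elements; real handling would go in the callbacks):

require 'rexml/document'
require 'rexml/streamlistener'

class ArticleCounter
  include REXML::StreamListener
  attr_reader :count

  def initialize
    @count = 0
  end

  # called for every opening tag; attrs is a hash of attributes
  def tag_start(name, attrs)
    @count += 1 if name == 'article'
  end
end

listener = ArticleCounter.new
File.open('dblp.xml') { |io| REXML::Document.parse_stream(io, listener) }
puts "#{listener.count} articles"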
I had the same problem, but I think I solved it by calling Node#remove! on the expanded node. In your case, I think you should do something like
my_node = dblp.expand
# do what you have to do with my_node
dblp.next
my_node.remove!
Not really sure why this works, but if you look at the source for LibXML::XML::Reader#expand, there's a comment about freeing the node. I am guessing that Reader#expand associates the node to the Reader, and you have to call Node#remove! to free it.
Memory usage wasn't great, even with this hack, but at least it didn't keep on growing.
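Folded back into the question's main loop, the article branch would then look something like this (same names as in the question; an untested sketch):

when 'article', 'inproceedings', 'book' then
  node = dblp.expand
  pub = pubFactory.create(node)
  puts pub
  dblp.next
  node.remove!  # free the expanded subtree so memory stays bounded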