Why are listFindNoCase() and listFind() "much faster" than a simple OR / IS in CF? - performance

I fail to understand why using listFindNoCase() and ListFind() is the preferred way of doing a series of OR and IS/EQ comparisons. Wouldn't the JVM be able to optimize it and produce efficient code, rather than making a function call that has to deal with tokenizing a string? Or is CF doing something much more inefficient?
Use listFindNoCase() or listFind() instead of the is and or operators
to compare one item to multiple items. They are much faster.
http://www.adobe.com/devnet/coldfusion/articles/coldfusion_performance.html

The answer is simple: type conversion. You can compare 2 EQ "2", or now() EQ "2011-01-01", or true EQ "YES". The cost of converting (to multiple types) and comparing is quite high.
ListFind() does not need to try multiple conversions, so it is much faster.
This is the price of dynamic typing.

I find this odd too. The only thing I can think of is that the list elements are added to a fast collection that checks whether an element exists based on some hash of the elements it contains. This would indeed be faster for large or very large lists; smaller lists should show little or no speed boost.

Related

MarkLogic: Xpath vs searches

Consider the following Xpath expression:
/book/metadata/title[. = "Good Will Hunting"]
And the following search expression:
cts:search(/book/metadata, cts:element-value-query(xs:QName("title"), "Good Will Hunting"), "unfiltered")
XPath will make use of the relationship indexes and the value indexes.
Does search make use of both term-list indexes and value indexes? Which of the above queries is more efficient and scalable?
I'd suggest looking at xdmp:plan for each of these; it will show you exactly what questions we are sending to the index, given your particular index settings. The two would usually be fairly comparable, except that your cts:search is missing the first argument; I'm assuming it would be /book/metadata, so that you pick up those constraints in the search as well. A key difference is that XPaths will always be filtered. On the other hand, the main cost of filtering is pulling all the fragments off disk, so if you are doing that anyway when consuming the results, it won't make a huge difference unless there are a lot of false positives or you only consume the top N results.

Selection Sort vs. Insertion Sort

I'm just wondering: would the speed of selection sort change depending on the TYPE of data? So, would the speed change depending on whether it were words or numbers? Why or why not? Also, would the speed of insertion sort change depending on the TYPE of data?
I personally don't think the data type changes the speed of the sorting, but I'm not 100% sure.
A comparison-based sort algorithm needs to know, among other things:
how to tell if element a is "smaller" than element b or not
how to swap two elements in the array,
i.e. moving one element to the position of another and vice versa.
Other than these two things, no part of the algorithm depends on the actual type and/or content of the data elements. The only possible speed difference is in these parts.
Yes, comparing e.g. strings is usually slower than comparing integers, and the longer the strings get, the more noticeable the difference will be. The same holds for swapping, although there are some C++ techniques to remedy that (depending on the implementation of the data type, swapping could be done by copying everything (slow) or just by swapping e.g. a pointer (faster)).
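To make that concrete, here is a minimal selection sort sketch in Ruby (illustrative only, not from the original answer). The element type shows up solely in the comparison and the swap on the marked lines; nothing else in the algorithm changes whether you sort integers or strings.
def selection_sort!(arr)
  (0...arr.length - 1).each do |i|
    min = i
    ((i + 1)...arr.length).each do |j|
      min = j if arr[j] < arr[min]        # the only type-dependent cost: the comparison
    end
    arr[i], arr[min] = arr[min], arr[i]   # the swap: moves references, so type barely matters
  end
  arr
end

selection_sort!([3, 1, 2])            # integer comparisons are cheap
selection_sort!(%w[pear apple fig])   # string comparisons grow with the length of the common prefix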

I'm looking for an algorithm or function that can take a text string and convert it to a number

I'm looking for an algorithm, function, or technique that can take a string and convert it to a number. I would like the algorithm or function to have the following properties:
Identical string yields the same calculated value
Similar strings would yield similar values (similar can be defined as similar in meaning or similar in composition)
Capable of handling strings of variable length
I read an article several years ago that gives me hope that this can be achieved. Unfortunately, I have been unable to recall the source of the article.
Similar in composition is pretty easy, I'll let somebody else tackle that.
Similar in meaning is a lot harder, but fun :). I remember reading an article about how a neural network was trained to construct a 2D "semantic meaning graph" of a whole bunch of English words, where the distance between two words represented how "similar" they are in meaning, just by training it on Wikipedia articles.
You could do the same thing but make it one-dimensional; that will give you a single continuous number, where similar words will be close to each other.
Non-serious answer: Map everything to 0
Property 1: check. Property 2: check. Property 3: check.
But I figure you want dissimilar strings to get different values, too. The question then is, what is similar and what is not.
Essentially, you are looking for a hash function.
There are a lot of hash functions designed with different objectives. Cryptographic hashes, for example, are pretty expensive to compute, because you want to make it really hard to go backwards or even to predict how a change to the input affects the output. So they try really hard to violate your condition 2. There are also simpler hash functions that mostly try to spread the data; they try to ensure that close input values are not close to each other afterwards (but it is okay if that is predictable).
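As a tiny illustration of that contrast (a sketch using Ruby's standard Digest library, not code from the answer): a cryptographic hash maps two nearly identical inputs to unrelated digests, which is the opposite of the questioner's property 2.
require "digest"

a = Digest::SHA256.hexdigest("almost the same string A")
b = Digest::SHA256.hexdigest("almost the same string B")
puts a        # the two hex digests share no obvious structure,
puts b        # even though the inputs differ by a single character
puts a == b   # => false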
You may want to read up on Wikipedia:
https://en.wikipedia.org/wiki/Hash_function#Finding_similar_substrings
(Yes, it has a section on "Finding similar substrings" via Hashing)
Wikipedia also has a list of hash functions:
https://en.wikipedia.org/wiki/List_of_hash_functions
There are a couple of related techniques for you. For example, minhash could be used. Here is a minhash-inspired approach: define a few random orderings of all the letters in your alphabet. Say I have only the letters "abcde" for this example, and I'll use only two orderings. Then my lists are:
p1 = "abcde"
p2 = "edcba"
Let f1(str) be the index of the first letter of p1 that occurs anywhere in my test word (treating the word as a set of letters), and f2(str) the same over p2. So the word "bababa" would map to 0,3; the word "ababab" as well. The word "dada" would map to 0,1, while "ce" maps to 2,0. Note that this map is invariant to permutations of the word (because it treats the word as a set), and for long texts it will converge to "0,0". Yet with some fine-tuning it can give you a pretty fast way of finding candidates for closer inspection.
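A small Ruby sketch of that scheme (my own illustration; the two fixed orderings stand in for the random permutations a real minhash would use):
P1 = "abcde"
P2 = "edcba"

# Index of the first letter of the ordering that occurs anywhere in the word,
# i.e. the word is treated as a set of letters.
def first_hit(word, ordering)
  ordering.each_char.with_index { |ch, i| return i if word.include?(ch) }
  ordering.length   # none of the alphabet's letters occur in the word
end

def fingerprint(word)
  [first_hit(word, P1), first_hit(word, P2)]
end

fingerprint("bababa")   # => [0, 3]
fingerprint("ababab")   # => [0, 3] (invariant to permuting the word)
fingerprint("dada")     # => [0, 1]
fingerprint("ce")       # => [2, 0]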
Fuzzy hashing (context triggered piecewise hashing) may be what you are looking for.
Implementation: ssdeep
Explanation of the algorithm: Identifying almost identical files using context triggered piecewise hashing
I think you're probably after a hash function, as numerous posters have said. However, similar in meaning is also possible, after a fashion: use something like Latent Dirichlet Allocation or Latent Semantic Analysis to map your word into multidimensional space, relative to a model trained on a large collection of text (pre-trained models can be downloaded if you don't have access to a representative sample of the kind of text you're interested in). If you need a scalar value rather than a multi-dimensional vector (it's hard to tell, since you don't say what you want it for), you could try a number of things, like the probability of the most probable topic, the mean across the dimensions, or the index of the most probable topic.
num = 0
str.bytes.each do |byte|   # each byte of the input string, as an unsigned 0-255 integer
  num += byte
end
This would meet all 3 properties (for #2, "similar" here means similar in the string's binary composition).

Do I need to implement a b-tree search for this?

I have an array of integers, which could run into the hundreds of thousands (or more), sorted numerically ascending since that's how they were originally stacked.
I need to be able to query the array to get the index of its first occurrence of a number >= some input, as efficiently as possible. The only way I would know how to do this without even thinking about it would be to iterate through the array testing the condition until it returns true, at which point I'd stop iterating. However, this is the most expensive solution to this problem and I'm looking for the best algorithm to solve it.
I'm coding in Objective-C, but I'll give an example in JavaScript to broaden the audience of people who are able to respond.
// Sample set
var numbers = [1, 7, 23, 23, 23, 89, 1002, 1003];
var indexAfter100 = getIndexOfValueGreaterThan(100);
var indexAfter7 = getIndexOfValueGreaterThan(7);
// (indexAfter100 == 6) == true
// (indexAfter7 == 2) == true
Putting this data into a DB in order to perform this search will only be a last-resort solution since I'm keen to see some sort of algorithm to tackle this quickly in memory.
I do have the ability to change the data structure, or to store an additional data structure as I'm building the array, since my program has already pushed each number one by one onto this stack, so I'd just modify the code that's adding them to the stack. Searching for the index as they're being added to the stack isn't possible since the search operation will be repeated frequently with different values after the fact.
Right now I'm thinking "B-Tree" but to be honest, I would have no idea how to implement one and before I go off and start figuring that out, I wonder if there's a nice algorithm that fits this single use-case better?
You should use binary search. Objective-C may even have a built-in method for that (many languages I know do). A B-tree probably won't help much unless you want to store the data on disk.
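For illustration, here is a sketch of that binary search in Ruby (matching the other sketches in this thread rather than Objective-C or JavaScript). Array#bsearch_index in find-minimum mode (Ruby 2.3+) returns the index of the first element for which the block is true, or nil if there is none, which is exactly the question's hypothetical getIndexOfValueGreaterThan.
numbers = [1, 7, 23, 23, 23, 89, 1002, 1003]   # must already be sorted, as in the question

def index_of_first_value_greater_than(sorted, value)
  sorted.bsearch_index { |x| x > value }   # O(log n) instead of a linear scan
end

index_of_first_value_greater_than(numbers, 100)   # => 6
index_of_first_value_greater_than(numbers, 7)     # => 2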
I don't know about Objective-C, but C (plain ol' C) comes with a function called bsearch (besides, AFAIK, Obj-C can call C functions just fine):
http://www.cplusplus.com/reference/clibrary/cstdlib/bsearch/
That basically does a binary search which sounds like it's what you need.
A fast search algorithm should be able to handle an array of ints of that size without taking too long, I should think (and the array is sorted, so a binary search would probably be the way to go).
I think a btree is probably overkill...
Since they are sorted in ascending order and you only need the bigger ones, I would serialize that array, explode it by the INT, keep the part of the serialized string that holds the bigger INTs, then unserialize it and voilà.
Linear search, also referred to as sequential search, looks at each element in sequence from the start to see if the desired element is present in the data structure. When the amount of data is small, this search is fast. It's easy, but the work needed is in proportion to the amount of data to be searched. Doubling the number of elements will double the time to search if the desired element is not present.
Binary search is efficient for larger arrays. In this we check the middle element. If the value is bigger than what we are looking for, then look in the first half; otherwise, look in the second half. Repeat this until the desired item is found. The table must be sorted for binary search. It eliminates half the data at each iteration; its running time is logarithmic.

Optimized "Multidimensional" Arrays in Ruby

From birth I've always been taught to avoid nested arrays like the plague for performance and internal data structure reasons. So I'm trying to find a good solution for optimized multidimensional data structures in Ruby.
The typical solution would involve maybe using a 1D array and accessing each one by x*width + y.
Ruby has the ability to overload the [] operator, so perhaps a good solution would involve using multi_dimensional_array[2,4], or even using a splat to support an arbitrary number of dimensions. (But really, I only need two dimensions.)
Is there a library/gem already out there for this? If not, what would the best way to go about writing this be?
My nested-array-lookups are the bottleneck right now of my rather computationally-intensive script, so this is something that is important and not a case of premature optimization.
If it helps, my script uses mostly random lookups and less traversals.
narray
NArray is a Numerical N-dimensional Array class. Supported element types are 1/2/4-byte Integer, single/double-precision Real/Complex, and Ruby Object. This extension library incorporates fast calculation and easy manipulation of large numerical arrays into the Ruby language. NArray has features similar to NumPy, but NArray has vector and matrix subclasses.
You could inherit from Array and create your own class that emulates a multi-dimensional array (but is internally a simple 1-dimensional array). You may see some speedup from it, but it's hard to say without writing the code both ways and profiling it.
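A minimal sketch of that idea (names invented for illustration, and composing a flat Array rather than inheriting from it, though the principle is the same): a two-dimensional grid stored in one flat array, with [] and []= overloaded so callers write grid[row, col], using the row * width + col trick mentioned in the question.
class Grid2D
  def initialize(width, height, default = nil)
    @width = width
    @cells = Array.new(width * height, default)   # one flat backing array
  end

  def [](row, col)
    @cells[row * @width + col]
  end

  def []=(row, col, value)
    @cells[row * @width + col] = value
  end
end

grid = Grid2D.new(3, 2, 0)   # 3 columns wide, 2 rows tall
grid[1, 2] = 42
grid[1, 2]   # => 42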
You may also want to experiment with the NArray class.
All that aside, your nested array lookups might not be the real bottleneck that they appear to be. On several occasions, I have had the same problem and then later found out that re-writing some of my logic cleared up the bottleneck. It's more than just speeding up the nested lookups, it's about minimizing the number of lookups needed. Each "random access" in an n-dimensional array takes n lookups (one per nested array level). You can reduce this by iterating through the dimensions using code like:
array.each { |x|
  x.each { |y|
    y.each { |z|
      ...
    }
  }
}
This allows you to do a single lookup in the first dimension and then access everything "behind" it, then a single lookup in the second dimension, etc etc. This will result in significantly fewer lookups than randomly accessing elements.
If you need random element access, you may want to try using a hash instead. You can take the array indices, concatenate them together as a string, and use that as the hash key (for example, array[12][0][3] becomes hash['0012_0000_0003']). This may result in faster "random access" times, but you'd want to profile it to know for certain.
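A quick sketch of that hash-keyed variant (and note that in Ruby an array of indices can also serve directly as the hash key, which skips building the string):
sparse = Hash.new(0)   # 0 as the default for cells that were never written

def key_for(i, j, k)
  format("%04d_%04d_%04d", i, j, k)   # e.g. "0012_0000_0003"
end

sparse[key_for(12, 0, 3)] = 42
sparse[key_for(12, 0, 3)]   # => 42
sparse[[5, 5, 5]] = 7       # or simply use the index array itself as the key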
Any chance you can post some of your problematic code? Knowing the problem code will make it easier for us to recommend a solution.
Nested arrays aren't that bad if you traverse them properly, which means traversing rows first and then columns. This should be quite fast. If you need a certain element often, you should store its value in a variable. Otherwise you're jumping around in memory, and this leads to bad performance.
Big rule: don't jump around in your nested array; try to traverse it linearly from row to row.
