Suppose I want to update some existing value in a map, or do something else if the key is not found. How do I do this, without performing 2 lookups? What's the golang equivalent of the following C++ code:
auto it = m.find(key);
if (it != m.end()) {
    // update the value, without performing a second lookup
    it->second = calc_new_value(it->second);
} else {
    // do something else
    m.insert(make_pair(key, 42));
}
Go does not expose the map's internal (key,value) pair data structure like C++ does, so you can't replicate this exactly.
One possible workaround would be to make the values of your map pointers, so you can keep the same values in the map but update what they point to. For example, if m is a map[int]*int, you could change a value with:
v := m[10] // v is a *int; it will be nil if key 10 is absent
*v = 42    // update through the pointer; no second map write needed
With that said, I wouldn't be surprised if the savings from reducing the number of hash lookups will be eaten by the additional memory management overhead. So it would be worth benchmarking whatever solution you settle on.
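As a starting point, here is a rough benchmark sketch (my illustration; calcNewValue is a placeholder for your real update logic). Save it in a _test.go file and run go test -bench=.:

package mapbench

import "testing"

// calcNewValue stands in for whatever update you actually perform.
func calcNewValue(v int) int { return v + 1 }

// BenchmarkValueMap performs a lookup followed by a store: two hash operations.
func BenchmarkValueMap(b *testing.B) {
    m := map[int]int{10: 0}
    for i := 0; i < b.N; i++ {
        m[10] = calcNewValue(m[10])
    }
}

// BenchmarkPointerMap performs one lookup and updates through the pointer.
func BenchmarkPointerMap(b *testing.B) {
    v := 0
    m := map[int]*int{10: &v}
    for i := 0; i < b.N; i++ {
        p := m[10]
        *p = calcNewValue(*p)
    }
}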
You cannot. The situation is actually the same with Python dicts. However it shouldn't matter. Both lookup and assignment to a Go map are amortized O(1). Combining the two operations has the same time complexity.
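For completeness, the idiomatic Go counterpart of the C++ snippet simply accepts the second map operation (a sketch; calcNewValue stands in for the question's calc_new_value):

package main

import "fmt"

func calcNewValue(v int) int { return v + 1 } // placeholder update

func main() {
    m := map[string]int{}
    key := "answer"
    if v, ok := m[key]; ok {
        // update the value; a second hash operation, but still O(1)
        m[key] = calcNewValue(v)
    } else {
        // do something else
        m[key] = 42
    }
    fmt.Println(m)
}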
Can someone explain the differences in performance when using the reflect package to access a struct field, like so:
v := reflect.ValueOf(TargetStruct)
f := reflect.Indirect(v).FieldByName("Field")
vs. using the normal way:
f := TargetStruct.Field
I'm asking because I haven't been able to find any resources on the actual performance. I mean, if direct access (example 2) is O(1), then what is the speed of indirect access (example 1)? And is there another factor to consider, except for having the code a little less clean and the compiler missing some information, like the type of the field, and so on?
Reflection is much slower, even if both operations are O(1), because big-O notation deliberately doesn't capture the constant, and reflection has a large constant (its c is very roughly about 100, or 2 decimal orders of magnitude, here).
I would quibble slightly (but only slightly) with Volker's comment that reflection is O(1) as this particular reflection has to look up the name at runtime, and this may or may not involve using a Go map,¹ which itself is unspecified: see What is the Big O performance of maps in golang? Moreover, as noted in the accepted answer to that question, the hash lookup isn't quite O(1) for strings anyway. But again, this is all swamped by the constant factor for reflection.
An operation of the form:
f := TargetStruct.Field
would often compile to a single machine instruction, which would operate in anywhere from some fraction of one clock cycle to several cycles or more depending on cache hits. One of the form:
v := reflect.ValueOf(TargetStruct)
f := reflect.Indirect(v).FieldByName("Field")
turns into calls into the runtime to:
allocate a new reflection object to store into v;
inspect v (in Indirect(), to see if Elem() is necessary), then check that the result of Indirect() is a struct and has a field whose name is the one given, and obtain that field;
and at this point you still have just a reflect.Value object in f, so you still have to find the actual value, if you want the integer:
fv := int(f.Int())
for instance. This might be anywhere from a few dozen instructions to a few hundred. This is where I got my c ≈ 100 guess.
¹ The current implementation has a linear scan with string equality testing in it. We must test every string at least once, and for strings whose lengths match, we must do the extra testing of the individual string bytes as well, at least up until they don't match.
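If you want to see the constant factor for yourself, a micro-benchmark along these lines makes the gap visible (a sketch; the struct and field names are stand-ins for the question's TargetStruct.Field). Run it with go test -bench=.:

package reflectbench

import (
    "reflect"
    "testing"
)

type Target struct {
    Field int
}

var sink int // prevents the compiler from optimizing the field loads away

func BenchmarkDirect(b *testing.B) {
    t := Target{Field: 7}
    for i := 0; i < b.N; i++ {
        sink = t.Field
    }
}

func BenchmarkReflect(b *testing.B) {
    t := Target{Field: 7}
    for i := 0; i < b.N; i++ {
        v := reflect.ValueOf(&t)
        f := reflect.Indirect(v).FieldByName("Field")
        sink = int(f.Int())
    }
}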
The problem here is: I have to add two numbers and then do 3 operations with that sum. So either I add them once and put the sum in a variable before computing the operations, or I re-add them, like (a+b < c), inside each operation. Which way is more memory-efficient and fast?
val sum = k + d
if (sum <= b && sum > spend) {
    spend = sum
}
or,
if (k + d <= b && k + d > spend) {
    spend = k + d
}
It depends on the context. If k and d happen to be compile-time constants then the compiler can just replace k+d with the sum so it won't matter how many times you write k+d. Also if the second form would turn out to be faster and the sum variable would not escape the function (not returned or used as parameter to other functions) then the compiler could replace sum with k+d and again it would make no difference. Compiler optimization is usually pretty good so I don't think you should worry about it.
Since the first is clearer you should go for that, but you can always benchmark (make sure not to use compile-time constants then, as the sums will get optimized away) and tune if that piece of code turns out to be a bottleneck.
In go, is it more of a convention to modify maps by reassigning values, or using pointer values?
type Foo struct {
    Bar int
}
Reassignment:
foos := map[string]Foo{"a": Foo{1}}
v := foos["a"]
v.Bar = 2
foos["a"] = v
vs Pointers
foos := map[string]*Foo{"a": &Foo{1}}
foos["a"].Bar = 2
You may be (inadvertently) conflating the matters here.
The reason to store pointers in a map is not to make "dot-field" modifications work—it is rather to preserve the exact placements of the values "kept" by a map.
One of the crucial properties of Go maps is that the values bound to their keys are not addressable. In other words, you cannot legally do something like
m := {"foo": 42}
p := &m["foo"] // this won't compile
The reason is that particular implementations of the Go language¹ are free to implement maps in a way that allows them to move around the values they hold. This is needed because Go maps are implemented as hash tables, and a hash table may have to grow and rehash its contents as entries are removed and/or added, relocating the stored values in the process.
Hence if the language specification were to allow taking an address of a value kept in a map, that would forbid the map to move its values around.
This is precisely the reason why you cannot do "in place" modification of map values if they have struct types, and you have to replace them "wholesale".
By extension, when you add an element to a map, the value is copied into the map, and it is also copied (moved) when the map shuffles its entries around.
Hence, the chief reason to store pointers into a map is to preserve "identities" of the values to be "indexed" by a map—having them exist in only a single place in memory—and/or to prevent excessive memory operations.
Some types cannot even be sensibly copied without introducing a bug—sync.Mutex or a struct type containing one is a good example.
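As a small illustration (my example, not part of the original point): a counter guarded by a mutex must live in exactly one place, so such values have to go into the map as pointers:

package main

import "sync"

// Counter must not be copied once in use, because it contains a sync.Mutex.
type Counter struct {
    mu sync.Mutex
    n  int
}

func main() {
    // Store *Counter, not Counter: a value map would hand out copies,
    // and a copied mutex no longer synchronizes with the original.
    counters := map[string]*Counter{"requests": {}}

    c := counters["requests"]
    c.mu.Lock()
    c.n++
    c.mu.Unlock()
}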
Getting back to your question, using pointers with the map for the purpose you propose might be a nice hack, but be aware that this is a code smell: when deciding between values and pointers for a map, you should rather be concerned with the considerations outlined above.
¹ There are at least two of them which are actively maintained: the "stock" one, dubbed "gc", and a part of GCC.
Basically I need to keep track of a large number of counters. I can increment or decrement each counter by name. The simplest way to do so is to use a hash table, using counter_name as key and its corresponding count as the value for that key.
The counters don't need to be 100% accurate, approximate values for count are fine. So I'm wondering if there is any probabilistic data structure that can reduce the space complexity of N counters to lower than O(N), kinda similar to how HyperLogLog reduces the memory requirement of counting N items by giving only an approximate result. Any ideas?
In my opinion, the thing you are looking for is Count-min sketch.
Reading a stream of elements a_1, a_2, a_3, ..., a_n, where there can be a lot of repeated elements, at any time it will give you the answer to the following question: how many a_i elements have you seen so far?
Basically, your unique elements can be bijected onto your counters. A count-min sketch allows you to adjust its parameters to trade memory for accuracy.
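A bare-bones sketch of the idea in Go (my illustration; salting a single FNV-1a hash per row is a simplification, and a real implementation would use pairwise-independent hash functions):

package main

import (
    "fmt"
    "hash/fnv"
)

// CountMin is a depth x width matrix of counters. Width trades accuracy
// for memory; depth trades error probability for memory.
type CountMin struct {
    depth, width uint32
    counts       [][]uint32
}

func NewCountMin(depth, width uint32) *CountMin {
    c := &CountMin{depth: depth, width: width, counts: make([][]uint32, depth)}
    for i := range c.counts {
        c.counts[i] = make([]uint32, width)
    }
    return c
}

// index derives row's column for key by salting an FNV-1a hash with the row number.
func (c *CountMin) index(key string, row uint32) uint32 {
    h := fnv.New32a()
    h.Write([]byte{byte(row)})
    h.Write([]byte(key))
    return h.Sum32() % c.width
}

// Add increments every row's counter for key.
func (c *CountMin) Add(key string, delta uint32) {
    for row := uint32(0); row < c.depth; row++ {
        c.counts[row][c.index(key, row)] += delta
    }
}

// Estimate returns the minimum across rows: collisions only inflate
// counters, so the estimate never under-counts.
func (c *CountMin) Estimate(key string) uint32 {
    min := c.counts[0][c.index(key, 0)]
    for row := uint32(1); row < c.depth; row++ {
        if v := c.counts[row][c.index(key, row)]; v < min {
            min = v
        }
    }
    return min
}

func main() {
    cm := NewCountMin(4, 1<<16)
    cm.Add("pageviews", 3)
    cm.Add("pageviews", 1)
    fmt.Println(cm.Estimate("pageviews")) // 4 (possibly slightly higher)
}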
P.S. I described some other popular probabilistic data structures here.
Stefan Haustein's correct that the names are likely to take more space than the counters, and you may be able to prioritise certain names as he suggests, but failing that you can consider how best to store the names. If they're fairly short (e.g. 8 characters or less), you might consider using a closed hashing table that stores them directly in the buckets. If they're long, you could store them contiguously (NUL terminated) in a block of memory, and in the hash table store the offset into that block of their first character.
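To make the offsets idea concrete, here is a minimal sketch (my illustration, in Go, with linear probing; a real table would also grow before filling up and handle deletions):

package main

import (
    "fmt"
    "hash/fnv"
)

// nameCounters is a closed-hashing (open addressing) table whose buckets
// hold a 4-byte offset into one contiguous names block plus the counter,
// rather than a separately allocated string per entry.
type nameCounters struct {
    names   []byte   // all names, each terminated by a 0 byte
    offsets []uint32 // bucket -> offset+1 into names; 0 means empty
    counts  []uint32 // bucket -> counter, stored right next to the offset
}

func newNameCounters(buckets int) *nameCounters {
    return &nameCounters{
        offsets: make([]uint32, buckets),
        counts:  make([]uint32, buckets),
    }
}

// nameAt reads the 0-terminated name starting at off.
func (t *nameCounters) nameAt(off uint32) string {
    end := off
    for t.names[end] != 0 {
        end++
    }
    return string(t.names[off:end])
}

// incr adds delta to name's counter, inserting the name on first use.
func (t *nameCounters) incr(name string, delta uint32) {
    h := fnv.New32a()
    h.Write([]byte(name))
    i := h.Sum32() % uint32(len(t.offsets))
    for t.offsets[i] != 0 { // linear probing
        if t.nameAt(t.offsets[i]-1) == name {
            t.counts[i] += delta
            return
        }
        i = (i + 1) % uint32(len(t.offsets))
    }
    t.offsets[i] = uint32(len(t.names)) + 1
    t.names = append(t.names, name...)
    t.names = append(t.names, 0)
    t.counts[i] = delta
}

func main() {
    t := newNameCounters(1 << 10)
    t.incr("some.counter.name", 1)
    t.incr("some.counter.name", 2)
    // The name's bytes exist exactly once, in t.names.
    fmt.Println(len(t.names), "name bytes stored")
}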
For the counter itself, you can save space by using a probabilistic approach as follows:
#include <cstdlib>  // rand()

template <typename T, typename Q = unsigned>
class Approx_Counter
{
  public:
    Approx_Counter() : n_(0) { }
    Approx_Counter& operator++()
    {
        // past 2, increment the stored exponent with probability 1 / 2^n_
        if (n_ < 2 || rand() % (operator Q()) == 0)
            ++n_;
        return *this;
    }
    // report 2^n_ as the estimated count once n_ >= 2
    // (Q(1) << n_ rather than 1 << n_, to avoid int overflow for large n_)
    operator Q() const { return n_ < 2 ? n_ : Q(1) << n_; }
  private:
    T n_;  // stores the exponent, not the count itself
};
Then you can use e.g. Approx_Counter<unsigned char, unsigned long>. Swap out rand() for a C++11 generator if you care.
The idea's simple:
when n_ is 0, ++ has definitely not been invoked
when n_ is 1, ++ has definitely been invoked exactly once
when n_ >= 2, it indicates ++ has probably been invoked about 2^n_ times
To keep that last implication in line with the number of ++ invocations actually made, each invocation has a 1 in 2^n_ chance of actually incrementing n_ again.
Just make sure your rand() or substitute returns values much larger than the largest counter value you want to track, otherwise you'll get rand() % (operator Q()) == 0 too often and increment inappropriately.
That said, having a smaller counter doesn't help much if you have pointers or offsets to it, so you'll want to squeeze the counter into the bucket too, another reason to prefer your own closed hashing implementation if you genuinely need to tighten up memory usage but want to stick with a hash table (a trie is another possibility).
The above is still O(N) in counter space, just with a smaller constant. For genuinely < O(N) options, you need to consider whether/how keys are related, such that incrementing a counter might reasonably impact multiple keys. You've given us no insights in your question to date.
The names probably take up more space than the counters.
How about having a fixed number of counters and only keep the ones with the highest counts, plus some kind of LRU mechanism to allow new counters to rise to the top? I guess it really depends on your use case...
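One way to sketch that idea in Go (my illustration; this is essentially the Space-Saving algorithm, where an evicted minimum donates its count to the newcomer):

package main

import "fmt"

// TopK keeps at most k counters; evicted names donate their counts to
// newcomers, so estimates can over-count but are bounded by the smallest
// tracked count.
type TopK struct {
    k      int
    counts map[string]int
}

func NewTopK(k int) *TopK {
    return &TopK{k: k, counts: make(map[string]int, k)}
}

func (t *TopK) Incr(name string) {
    if _, ok := t.counts[name]; ok {
        t.counts[name]++
        return
    }
    if len(t.counts) < t.k {
        t.counts[name] = 1
        return
    }
    // Table full: evict the current minimum and inherit its count.
    minName, minCount := "", int(^uint(0)>>1)
    for n, c := range t.counts {
        if c < minCount {
            minName, minCount = n, c
        }
    }
    delete(t.counts, minName)
    t.counts[name] = minCount + 1
}

func main() {
    t := NewTopK(2)
    for _, n := range []string{"a", "a", "b", "c", "a"} {
        t.Incr(n)
    }
    fmt.Println(t.counts) // approximate counts of the heaviest names
}

The linear scan for the minimum is O(k) per eviction; a real implementation would track the minimum with a heap or similar so eviction is cheap.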
In Matters Computational I found this interesting linear search implementation (it's actually my Java implementation ;-)):
public static int linearSearch(int[] a, int key) {
    int high = a.length - 1;
    int tmp = a[high];
    // put a sentinel at the end of the array
    a[high] = key;
    int i = 0;
    while (a[i] != key) {
        i++;
    }
    // restore original value
    a[high] = tmp;
    if (i == high && key != tmp) {
        return NOT_CONTAINED;
    }
    return i;
}
It basically uses a sentinel, which is the searched-for value, so that you always find the value and don't have to check for array boundaries. The last element is stored in a temp variable, and then the sentinel is placed at the last position. When the value is found (remember, it is always found due to the sentinel), the original element is restored and the index is checked: if it is the last index and the original last element is unequal to the searched-for value, NOT_CONTAINED (-1) is returned; otherwise, the index is returned.
While I found this implementation really clever, I wonder if it is actually useful. For small arrays, it seems to be always slower, and for large arrays it only seems to be faster when the value is not found. Any ideas?
EDIT
The original implementation was written in C++, so that could make a difference.
It's not thread-safe. For example, you can lose your a[high] value: a second thread could start after the first thread has changed a[high] to key, and so record key into tmp, then finish after the first thread has restored a[high] to its original value; the second thread would then restore a[high] to what it first saw, which was the first thread's key.
It's also not useful in Java, since the JVM will include bounds checks on your array, so your while loop is checking that you're not going past the end of your array anyway.
Will you ever notice any speed increase from this? No
Will you notice a lack of readability? Yes
Will you notice an unnecessary array mutation that might cause concurrency issues? Yes
Premature optimization is the root of all evil.
Doesn't seem particularly useful. The "innovation" here is just to get rid of the iteration test by combining it with the match test. Modern processors spend 0 time on iteration checks these days (all the computation and branching gets done in parallel with the match test code).
In any case, binary search kicks the ass of this code on large arrays, and is comparable on small arrays. Linear search is so 1960s.
See also the 'finding a tiger in Africa' joke. Punchline: an experienced programmer places a tiger in Cairo so that the search terminates.
A sentinel search goes back to Knuth. Its value is that it reduces the number of tests in the loop from two ("does the key match? Am I at the end?") to just one.
Yes, it's useful, in the sense that it should significantly reduce search times for modest-size unordered arrays, by virtue of eliminating conditional branch mispredictions. This also reduces insertion times (code not shown by the OP) for such arrays, because you don't have to order the items.
If you have larger arrays of ordered items, a binary search will be faster, at the cost of larger insertion time to ensure the array is ordered.
For even larger sets, a hash table will be the fastest.
The real question is what is the distribution of sizes of your arrays?
Yes, it does, because the while loop performs only one comparison per iteration instead of the two in a standard search. It is twice as fast. It is given as an optimization in Knuth, Vol. 3.