I have a list view inside an ActiveX control that contains approximately 700 items. When a filter event occurs, items are removed from the list view using the code below, leaving only a few filtered items. I have noticed that the first 300 of the 700 items are deleted successfully, but the rest then fail to delete (ListView_DeleteItem returns FALSE). On each subsequent call to the code, half of the remaining items that should be removed are deleted, then half again, and so on. Eventually all of the items that should be deleted are gone, but it takes five or six calls to the loop below.
for (size_t rowNum = 0; rowNum < toDelete.size(); rowNum++)
{
    bool result = ListView_DeleteItem(listCtrl, rowNum);
}
Try this:
for (size_t rowNum = 0; rowNum < toDelete.size(); rowNum++)
{
    // Always delete at index 0: after each deletion the remaining
    // items shift up, so the next item to delete is again at the front.
    bool result = ListView_DeleteItem(listCtrl, 0);
}
This is what happens with your code:
Initial list:
Item 1
Item 2
Item 3
Item 4
First pass of the loop: you remove the item at index 0 (Item 1), and the list becomes:
Item 2
Item 3
Item 4
Second pass of the loop: you remove the item at index 1 (which is now Item 3), and the list becomes:
Item 2
Item 4
and so on.
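The same index-shifting pitfall applies to any container you delete from by ascending index. If toDelete holds arbitrary row indices rather than a prefix of the list, another common fix is to delete from the highest index downwards, so earlier deletions never shift the indices you have yet to visit. A minimal, language-agnostic sketch in Python (a plain list stands in for the list view):

items = ['a', 'b', 'c', 'd']
to_delete = [0, 1, 2]                 # row indices we want to remove

# Buggy: deleting by ascending index shifts the later indices
broken = list(items)
for i in to_delete:
    if i < len(broken):               # ListView_DeleteItem would return FALSE here
        del broken[i]
# broken == ['b', 'd'] -- 'b' survived and 'd' was removed by mistake

# Correct: delete from the highest index down, so nothing shifts
fixed = list(items)
for i in sorted(to_delete, reverse=True):
    del fixed[i]
# fixed == ['d']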
I need to delete relationships of a particular type from nodes that I am iterating over with FOREACH.
In detail:
PROFILE MATCH (n:Label1)-[r1:REL1]-(a:Label2)
WHERE a.prop1 = 2
WITH n
WITH COLLECT(n) AS rows
WITH [a IN rows WHERE a.prop2 < 1484764200] AS less_than_rows,
[b IN rows WHERE b.prop2 = 1484764200 AND b.prop3 < 2] AS other_rows
WITH size(less_than_rows) + size(other_rows) AS count, less_than_rows, other_rows
FOREACH (sub IN less_than_rows |
MERGE (sub)-[r:REL2]-(:Label2)
DELETE r
MERGE(l2:Label2{id:540})
MERGE (sub)-[:APPEND_TO {s:0}]->(l2)
SET sub.prop3=1, sub.prop2=1484764200)
WITH DISTINCT other_rows, count
FOREACH (sub IN other_rows |
MERGE(l2:Label2{id:540})
MERGE (sub)-[:APPEND_TO {s:0}]->(l2)
SET sub.prop3=sub.prop3+1)
RETURN count
Since FOREACH does not support MATCH, I used MERGE to achieve the same effect. But the query is very slow when I execute it (it takes around 1 minute).
If I execute it without the FOREACH clauses (skipping the updates), it takes around 1 second.
Problem: the slowdown is clearly caused by FOREACH, or by the operations inside FOREACH.
I want to delete a particular relationship, create another relationship, and set some properties on the node.
Note: I showed the whole query in case there is another way to achieve the same requirement without this FOREACH (I also tried CASE WHEN).
I noticed a few things about your original query:
MERGE(l2:Label2 {id:540}) should be moved out of both FOREACH clauses, since it only needs to be done once. This is slowing down the query. In fact, if you expect the node to already exist, you can use a MATCH instead.
MERGE (sub)-[:APPEND_TO {s:0}]->(l2) may not do what you intended, since it will only match existing relationships in which the s property is still 0. If s is not 0, you will end up creating an additional relationship. To ensure that there is a single relationship and that its s value is (reset to) 0, you should remove the {s:0} test from the pattern and use SET to set the s value; this should also speed up the MERGE, since it will not need to do a property value test.
This version of your query should fix the above issues, and be faster (but you will have to try it out to see how much faster):
PROFILE
MATCH (n:Label1)-[:REL1]-(a:Label2)
WHERE a.prop1 = 2
WITH COLLECT(n) AS rows
WITH
[a IN rows WHERE a.prop2 < 1484764200] AS less_than_rows,
[b IN rows WHERE b.prop2 = 1484764200 AND b.prop3 < 2] AS other_rows
WITH size(less_than_rows) + size(other_rows) AS count, less_than_rows, other_rows
MERGE(l2:Label2 {id:540})
FOREACH (sub IN less_than_rows |
MERGE (sub)-[r:REL2]-(:Label2)
DELETE r
MERGE (sub)-[r2:APPEND_TO]->(l2)
SET r2.s = 0, sub.prop3 = 1, sub.prop2 = 1484764200)
WITH DISTINCT l2, other_rows, count
FOREACH (sub IN other_rows |
MERGE (sub)-[r3:APPEND_TO]->(l2)
SET r3.s = 0, sub.prop3 = sub.prop3+1)
RETURN count;
If you only intend to set the s value to 0 when the APPEND_TO relationship is being created, then use ON CREATE SET instead of a bare SET:
PROFILE
MATCH (n:Label1)-[:REL1]-(a:Label2)
WHERE a.prop1 = 2
WITH COLLECT(n) AS rows
WITH
[a IN rows WHERE a.prop2 < 1484764200] AS less_than_rows,
[b IN rows WHERE b.prop2 = 1484764200 AND b.prop3 < 2] AS other_rows
WITH size(less_than_rows) + size(other_rows) AS count, less_than_rows, other_rows
MERGE(l2:Label2 {id:540})
FOREACH (sub IN less_than_rows |
MERGE (sub)-[r:REL2]-(:Label2)
DELETE r
MERGE (sub)-[r2:APPEND_TO]->(l2)
ON CREATE SET r2.s = 0
SET sub.prop3 = 1, sub.prop2 = 1484764200)
WITH DISTINCT l2, other_rows, count
FOREACH (sub IN other_rows |
MERGE (sub)-[r3:APPEND_TO]->(l2)
ON CREATE SET r3.s = 0
SET sub.prop3 = sub.prop3+1)
RETURN count;
Instead of FOREACH, you can UNWIND the collection of rows and process those. You can also use OPTIONAL MATCH instead of MERGE, so you avoid the fallback creation behavior of MERGE when a match isn't found. See how this compares:
PROFILE
MATCH (n:Label1)-[:REL1]-(a:Label2)
WHERE a.prop1 = 2
WITH COLLECT(n) AS rows
WITH [a IN rows WHERE a.prop2 < 1484764200] AS less_than_rows,
[b IN rows WHERE b.prop2 = 1484764200 AND b.prop3 < 2] AS other_rows
WITH size(less_than_rows) + size(other_rows) AS count, less_than_rows, other_rows
// faster to do it here, only 1 row so it executes once
MERGE(l2:Label2{id:540})
UNWIND less_than_rows as sub
OPTIONAL MATCH (sub)-[r:REL2]-(:Label2)
DELETE r
MERGE (sub)-[:APPEND_TO {s:0}]->(l2)
SET sub.prop3=1, sub.prop2=1484764200
WITH DISTINCT other_rows, count, l2
UNWIND other_rows as sub
MERGE (sub)-[:APPEND_TO {s:0}]->(l2)
SET sub.prop3=sub.prop3+1
RETURN count
Let's say we have this information:
Group A - Item 1, Item 2, Item 3
Group B - Item 1, Item 3
Group C - Item 3, Item 4
I'd like to know which groups contain the most items in common:
Output:
Group A - (Item 1 and Item 3)
Group B - (Item 1 and Item 3)
What algorithm would you use?
First of all you have to represent the dataset:
data[A] = {1,2,3}
data[B] = {1,3}
data[C] = {3,4}
It is better to use numbers so you can use for loops, counters, etc., so:
data[0] = {1,2,3}
data[1] = {1,3}
data[2] = {3,4}
Then I would build another data structure that counts the matches between each pair of groups, so that, for example, matches[A][B] = 2, matches[A][C] = 1, and so on. That is the data structure you need to compute. Once you have it, the problem reduces to finding the maximum value in it. In runnable Python form:
data = [{1, 2, 3}, {1, 3}, {3, 4}]
matches = [[0] * 3 for _ in range(3)]

for i in range(3):
    for item in data[i]:
        for j in range(3):
            # optimize a little bit (matches[A][A] doesn't make sense)
            if j == i:
                continue
            if item in data[j]:
                matches[i][j] += 1
Of course you can optimize this some more. For example, we know that matches[A][B] is going to be equal to matches[B][A], so you can skip those iterations, as in the sketch below.
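A compact version of that symmetric optimization, using the same hypothetical data and matches structures as above:

n = 3
data = [{1, 2, 3}, {1, 3}, {3, 4}]
matches = [[0] * n for _ in range(n)]

for i in range(n):
    for j in range(i + 1, n):                   # j > i: visit each pair once
        common = len(data[i] & data[j])         # set intersection size
        matches[i][j] = matches[j][i] = common  # fill both symmetric entries

# matches[0][1] == 2, matches[0][2] == 1, matches[1][2] == 1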
So, given a list of groups and their contained items, you want to output the identities of all the groups that share the maximum number of items in common with some other group.
Let's get a list of groups and items:
group_items = (
('Group A', ('Item 1', 'Item 2', 'Item 3')),
('Group B', ('Item 1', 'Item 3')),
('Group C', ('Item 3', 'Item 4')),
)
Then let's store the maximum number of shared items for each group, so we can collect all the matching groups at the end. We'll also track the max of those maxes as we go (rather than going back and re-computing it).
max_shared = {item[0]:0 for item in group_items}
num_groups = len(group_items)
group_sets = {}
max_max = 0
Now we're going to have to compare every group with every other group, but we can ignore certain comparisons. As @Perroloco mentions, comparing Group A with Group A isn't useful, and computing intersect(A,B) is symmetric with computing intersect(B,A), so we can range from 0 to N and then from i+1 to N, instead of doing 0..N cross 0..N.
I'm using the set data type, which costs something to construct, so I cache the sets: we never modify their membership, we only count the membership of each intersection.
It's worth pointing out that while intersection(A,B) == intersection(B,A), it is not the case that the MAX for A is the same as the MAX for B. Thus, there are separate comparisons for the inner max and the outer max.
for i in range(num_groups):
outer_name, outer_mem = group_items[i]
if outer_name not in group_sets:
group_sets[outer_name] = set(outer_mem)
outer_set = group_sets[outer_name]
outer_max = max_shared[outer_name]
for j in range(i+1, num_groups):
inner_name, inner_mem = group_items[j]
if inner_name not in group_sets:
group_sets[inner_name] = set(inner_mem)
inner_set = group_sets[inner_name]
ni = len(outer_set.intersection(inner_set))
if ni > outer_max:
outer_max = max_shared[outer_name] = ni
if ni > max_max:
max_max = ni
if ni > max_shared[inner_name]:
max_shared[inner_name] = ni
print("Overall max # of shared items:", max_max)
results = [grp for grp,mx in max_shared.items() if mx == max_max]
print("Groups with that many shared items:", results)
I have a list of numbers. Instead of painting them all in one row I am painting the list in rows of 5.
Now I can select one number and from there move left, right, up or down.
In this list of 15 numbers (indexed 0 to 14) I have selected index 11, coloured red.
If I move left I have to subtract 1 from the index of my selection. If I move right I add 1. Down means I add 5 and up means I subtract 5.
However, if I go down when I am in the bottom-most row, I want to end up in the first row, as such:
The math / algorithm for that is simple:
index += 5;
if (index >= list.size()) index = index % 5; // % is modulo
// So, since I start with index 11: (11 + 5) % 5 = 1, which is the index of 01.
However, I cannot seem to figure out what to do when I am going from the top-most row up, which takes me to the bottom-most row. (From 01 I would end at 11)
If I have a list of exactly 15 items, then I could simply do:
index -= 5;
if (index < 0) index += list.size();
// So: 1 - 5 = -4
// -4 + 15 = 11.
But if my list is not divisible by 5, then this does not work.
So I am looking for an algorithm that solves this problem in all cases, including when the size of the list is not divisible by the length of its rows.
This can probably be optimized further, but here's one approach:
var fullRows = list.Length / NUM_COLUMNS; //using integer division
var maxPos = fullRows * NUM_COLUMNS + currentIndex;
return maxPos < list.Length ? maxPos : maxPos - NUM_COLUMNS;
This gets the number of full rows, then starts by assuming there is another (possibly partial) row after them. It then checks whether that position really exists; if not, it backs off one row, landing inside the last full row.
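For instance, a small Python translation of that idea, with a hypothetical move_up helper and an added guard for the case where you aren't in the top row:

NUM_COLUMNS = 5

def move_up(current_index, length):
    if current_index >= NUM_COLUMNS:       # not in the top row: just move up
        return current_index - NUM_COLUMNS
    full_rows = length // NUM_COLUMNS      # integer division
    max_pos = full_rows * NUM_COLUMNS + current_index
    # if that slot doesn't exist, back off one row into the last full row
    return max_pos if max_pos < length else max_pos - NUM_COLUMNS

print(move_up(1, 15))   # 11: wraps from the top row to the bottom row
print(move_up(3, 17))   # 13: slot 18 doesn't exist, so back off one row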
I am attempting to build a random walk simulation that counts how many times a given level is reached within a given number of steps. That count is then appended to a list. My issue is that I would like to run multiple samples in a row, adding each individual result to the list. Right now the code produces a list with ten items, but they are all the same.
I.e., with:
sample = 10
steps = 1000
the code should run ten separate 1000-step random walks (one per sample) and produce a list with 10 distinct counts of how many times the level of 100 was reached during each run.
Thanks in advance.
import random
sample = input('Samples : ')
steps = input('Steps : ')
s = 0
a = 0
x = 0
list1 = []
list2 = []
while s < int(sample):
s = s + 1
while a < int(steps):
a = a + 1
r = random.randint(-1,1)
x = x + r
if x == 100:
list1.append(x)
y = len(list1)
list2.append(y)
print(list2)
I think you'd want to reset a and list1 at the start of each sample, no? That is, move their initializations to just before the start of the inner while loop. Since you don't reset a, the inner loop never gets entered after the first sample is done, so y never changes any more, so you keep getting the count from the first sample.
And the only time you append to list1 is when x reaches 100, which is unlikely to happen at all given a limit of 1000 steps. (Why do you need a list, since you only care about the count, and know you'll be appending 100 each time?)
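A minimal corrected sketch along those lines (resets moved inside the outer loop, and a plain counter instead of list1):

import random

sample = int(input('Samples : '))
steps = int(input('Steps : '))
list2 = []

for s in range(sample):
    x = 0
    hits = 0                       # times this walk touched the level
    for a in range(steps):
        x += random.randint(-1, 1)
        if x == 100:
            hits += 1
    list2.append(hits)

print(list2)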
Saw this question recently:
Given 2 arrays, the 2nd array containing some of the elements of the 1st array, return the minimum window in the 1st array which contains all the elements of the 2nd array.
E.g.:
Given A={1,3,5,2,3,1} and B={1,3,2}
Output: 3, 5 (where 3 and 5 are indices in the array A)
Even though the range 1 to 4 also contains all the elements of B, the range 3 to 5 is returned since its length is smaller than that of the previous range ((5 - 3) < (4 - 1)).
I devised a solution, but I am not sure that it works correctly, and it is not efficient. What would be an efficient solution for this problem? Thanks in advance.
A simple solution: iterate through the list with two pointers.
Have a left and right pointer, initially both at zero
Move the right pointer forwards until [L..R] contains all the elements (or quit if right reaches the end).
Move the left pointer forwards until [L..R] doesn't contain all the elements. See if [L-1..R] is shorter than the current best.
This is obviously linear time. You'll simply need to keep track of how many of each element of B is in the subarray for checking whether the subarray is a potential solution.
A runnable Python version of this algorithm:

from collections import Counter

def min_window(A, B):
    need = Counter(B)       # how many of each element the window still needs
    missing = len(B)        # total count of needed elements not yet in the window
    best = None             # (left, right) of the best window found, inclusive
    left = 0
    for right, value in enumerate(A):
        if need[value] > 0:
            missing -= 1
        need[value] -= 1
        if missing == 0:                      # [left..right] now contains all of B
            while need[A[left]] < 0:          # shrink: drop surplus elements
                need[A[left]] += 1
                left += 1
            if best is None or right - left < best[1] - best[0]:
                best = (left, right)
            need[A[left]] += 1                # give up one needed element and
            missing += 1                      # keep scanning for a shorter window
            left += 1
    return best

print(min_window([1, 3, 5, 2, 3, 1], [1, 3, 2]))   # (3, 5)