Is there an intelligent way to identify the order of a Bitwise Enum?
Take this enum for example:
[System.FlagsAttribute()]
internal enum AnswersSort : int
{
None = 0,
Bounty = 1,
MarkedAnswer = 2,
MostVotes = 4
}
If I combine these in different ways:
var result = AnswersSort.MarkedAnswer | AnswersSort.MostVotes | AnswersSort.Bounty;
result = AnswersSort.MostVotes | AnswersSort.Bounty | AnswersSort.MarkedAnswer;
Both results are 7 and the order is lost. Is there a way to do this without using an array or a list? Ideally I'm looking for a solution using an enum but I'm not sure how or if it's possible.
If you have 10 values, you need 4 bits per item, so you can treat the combination as a single 40-bit value encoded with 4 bits per slot. Given your two examples:
var result = AnswersSort.MarkedAnswer | AnswersSort.MostVotes | AnswersSort.Bounty;
result = AnswersSort.MostVotes | AnswersSort.Bounty | AnswersSort.MarkedAnswer;
The first would be encoded as
0010 0100 0001
---- ---- ----
  |    |    |
  |    |    +-- Bounty       (0001)
  |    +------- MostVotes    (0100)
  +------------ MarkedAnswer (0010)
You could build that in a 64-bit integer:
long first = BuildValue(AnswersSort.MarkedAnswer, AnswersSort.MostVotes, AnswersSort.Bounty);
long BuildValue(params AnswersSort[] values)
{
    long result = 0;
    foreach (var val in values)
    {
        result = result << 4;  // make room for the next 4-bit slot
        result |= (int)val;    // store this flag in the low slot
    }
    return result;
}
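To get the order back out, one possibility (my sketch, not part of the answer above; it assumes the AnswersSort enum and BuildValue shown earlier, plus a using for System.Collections.Generic) is to peel off 4 bits at a time from the low end, which yields the flags in reverse of the order they were combined:

// Sketch: decode a value packed by BuildValue (assumes None/0 is never encoded).
IEnumerable<AnswersSort> DecodeValue(long packed)
{
    var reversed = new List<AnswersSort>();
    while (packed != 0)
    {
        reversed.Add((AnswersSort)(packed & 0xF)); // low nibble = most recently added flag
        packed >>= 4;
    }
    reversed.Reverse();                            // restore the original order
    return reversed;
}

// DecodeValue(first) yields MarkedAnswer, MostVotes, Bounty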
Related
What is the meaning of (number) & (-number)? I have searched for it but was unable to find an explanation.
I want to use i & (-i) in a for loop like:
for (i = 0; i <= n; i += i & (-i))
Assuming 2's complement (or that i is unsigned), -i is equal to ~i+1.
i & (~i + 1) is a trick to extract the lowest set bit of i.
It works because what +1 actually does is to set the lowest clear bit, and clear all bits lower than that. So the only bit that is set in both i and ~i+1 is the lowest set bit from i (that is, the lowest clear bit in ~i). The bits lower than that are clear in ~i+1, and the bits higher than that are non-equal between i and ~i.
Using it in a loop seems odd unless the loop body modifies i, because i = i & (-i) is an idempotent operation: doing it twice gives the same result again.
[Edit: in a comment elsewhere you point out that the code is actually i += i & (-i). So what that does for non-zero i is to clear the lowest group of set bits of i, and set the next clear bit above that, for example 101100 -> 110000. For i with no clear bit higher than the lowest set bit (including i = 0), it sets i to 0. So if it weren't for the fact that i starts at 0, each loop would increase i by at least twice as much as the previous loop, sometimes more, until eventually it exceeds n and breaks or goes to 0 and loops forever.
It would normally be inexcusable to write code like this without a comment, but depending on the domain of the problem maybe this is an "obvious" sequence of values to loop over.]
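A quick way to see the edit's claim in action (a small C# snippet of my own; the starting value 0b101100 is just an arbitrary non-zero seed):

int i = 0b101100;                        // 44
for (int step = 0; step < 5; step++)
{
    Console.WriteLine(Convert.ToString(i, 2));
    i += i & (-i);                       // clear the lowest group of set bits, set the next bit above
}
// prints: 101100, 110000, 1000000, 10000000, 100000000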
I thought I'd just take a moment to show how this works. This code gives you the lowest set bit's value:
int i = 0xFFFFFFFF; // low byte is 1111 1111 (base 2); the value is -1 (base 10)
int j = -i;         // -(-1) == 1
int k = i & j;      //   ...1111 1111 (2) = -1 (10)
                    // & ...0000 0001 (2) =  1 (10)
                    // ----------------------------
                    //   ...0000 0001 (2) =  1 (10). The lowest set bit here is the 1's bit

int i = 0x80;       // low byte is 1000 0000 (base 2); the value is 128 (base 10)
int j = -i;         // -(128) == -128
int k = i & j;      //   ...0000 0000 1000 0000 (2) =  128 (10)
                    // & ...1111 1111 1000 0000 (2) = -128 (10)
                    // ---------------------------------------
                    //   ...0000 0000 1000 0000 (2) =  128 (10). The lowest set bit here is the 128's bit

int i = 0xFFFFFFC0; // low byte is 1100 0000 (base 2); the value is -64 (base 10)
int j = -i;         // -(-64) == 64
int k = i & j;      //   ...1100 0000 (2) = -64 (10)
                    // & ...0100 0000 (2) =  64 (10)
                    // ----------------------------
                    //   ...0100 0000 (2) =  64 (10). The lowest set bit here is the 64's bit
It works the same for unsigned values, the result is always the lowest set bit's value.
Given your loop:
for(i=0;i<=n;i=i&(-i))
Since i starts at 0, no bits are set, so i & (-i) gives back 0 for the increment step. The loop will therefore run forever (for any n >= 0) unless i is modified inside the body.
Assuming that negative values use two's complement, -number can be calculated as (~number) + 1: flip the bits and add 1.
For example if number = 92. Then this is what it would look like in binary:
number                0000 0000 0000 0000 0000 0000 0101 1100
~number               1111 1111 1111 1111 1111 1111 1010 0011
(~number) + 1         1111 1111 1111 1111 1111 1111 1010 0100
-number               1111 1111 1111 1111 1111 1111 1010 0100
(number) & (-number)  0000 0000 0000 0000 0000 0000 0000 0100
You can see from the example above that (number) & (-number) gives you the lowest set bit.
You can see the code run online on IDE One: http://ideone.com/WzpxSD
Here is some C++ code:
#include <iostream>
#include <bitset>
#include <stdio.h>

using namespace std;

void printIntBits(int num);
void printExpression(const char *text, int value);

int main() {
    int number = 92;
    printExpression("number", number);
    printExpression("~number", ~number);
    printExpression("(~number) + 1", (~number) + 1);
    printExpression("-number", -number);
    printExpression("(number) & (-number)", (number) & (-number));
    return 0;
}

void printExpression(const char *text, int value) {
    printf("%-20s", text);
    printIntBits(value);
    printf("\n");
}

void printIntBits(int num) {
    // print the 32-bit value as eight groups of 4 bits, most significant first
    for (int i = 0; i < 8; i++) {
        int mask = (0xF0000000 >> (i * 4));
        int portion = (num & mask) >> ((7 - i) * 4);
        cout << " " << std::bitset<4>(portion);
    }
}
Also here is a version in C# .NET: https://dotnetfiddle.net/ai7Eq6
The operation i & -i is used for isolating the least significant non-zero bit of the corresponding integer.
In binary notation num can be represented as a1b, where a stands for the bits above the lowest set bit and b for the run of zeroes below it.
-num equals (~num) + 1. Complementing gives ~(a1b) = ~a 0 ~b, and since b is all zeroes, ~b is all ones, so adding 1 carries through those low bits:
-num = ~(a1b) + 1 = ~a 0 (1…1) + 1 = ~a 1 (0…0) = ~a 1 b
Now, num & -num = (a1b) & (~a 1 b) = (0…0) 1 (0…0), i.e. only the lowest set bit survives.
For example, if i = 5 and each iteration does i += i & (-i):
+-----------+--------------+---------------------+-----------+
| iteration | i            | lowest set bit pos. | i & -i    |
+-----------+--------------+---------------------+-----------+
| 1         | 5  = 101     | 0                   | 1  (2^0)  |
| 2         | 6  = 110     | 1                   | 2  (2^1)  |
| 3         | 8  = 1000    | 3                   | 8  (2^3)  |
| 4         | 16 = 10000   | 4                   | 16 (2^4)  |
| 5         | 32 = 100000  | 5                   | 32 (2^5)  |
+-----------+--------------+---------------------+-----------+
This operation is mainly used in Binary Indexed Trees (Fenwick trees) to move up and down the tree; a small sketch follows.
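A minimal Binary Indexed Tree sketch in C# (my own illustration, not from the answer above), just to show where i & (-i) appears:

class FenwickTree
{
    private readonly long[] tree;              // 1-based

    public FenwickTree(int n) { tree = new long[n + 1]; }

    public void Add(int i, long delta)         // point update at index i
    {
        for (; i < tree.Length; i += i & (-i)) // step to the next node that covers i
            tree[i] += delta;
    }

    public long PrefixSum(int i)               // sum of elements 1..i
    {
        long sum = 0;
        for (; i > 0; i -= i & (-i))           // strip the lowest set bit each step
            sum += tree[i];
        return sum;
    }
}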
Here's the problem:
We have n tasks to complete in n days, and we can complete one task per day. Each task has a start date and an end date: a task can't be started before its start date and must be completed no later than its end date. Given a vector s of start dates and a vector e of end dates, produce a vector d, if one exists, where d[i] is the day on which task i is done. For example:
s = {1, 0, 1, 2, 0}
e = {2, 4, 2, 3, 1}
+--------+------------+----------+
| Task # | Start Date | End Date |
+--------+------------+----------+
| 0 | 1 | 2 |
| 1 | 0 | 4 |
| 2 | 1 | 2 |
| 3 | 2 | 3 |
| 4 | 0 | 1 |
+--------+------------+----------+
We have as a possible solution:
d = {1, 4, 2, 3, 0}
+--------+----------------+
| Task # | Date Completed |
+--------+----------------+
| 0 | 1 |
| 1 | 4 |
| 2 | 2 |
| 3 | 3 |
| 4 | 0 |
+--------+----------------+
It is trivial to create an algorithm with O(n^2). It is not too bad either to create an algorithm with O(n log n). There is supposedly an algorithm that gives a solution in O(n). What would that be?
When you can't use time, use space! You can represent the tasks open on any day as a bit vector. In O(n), build a "starting today" array, and likewise in O(n) a bit vector of the tasks ending soonest. Then, again in O(n), scan the days in order, adding in any tasks starting that day and picking the lowest-numbered open task, giving priority to the ones ending soonest.
using System;
class Program
{
    static void Main()
    {
        int n = 5;
        var s = new int[] { 1, 0, 1, 2, 0 };
        var e = new int[] { 2, 4, 2, 3, 1 };

        var sd = new int[n];
        var ed = new int[n];
        for (int task = 0; task < n; task++)
        {
            sd[s[task]] += (1 << task); // mark task as starting on day s[task]
            ed[e[task]] += (1 << task); // mark task as ending on day e[task]
        }

        int bits = 0;
        // Track the earliest ending tasks, scanning the days backwards
        var ends = new int[n];
        for (int day = n - 1; day >= 0; day--)
        {
            if (ed[day] != 0) // task(s) ending today
            {
                // replace any tasks that have later end dates
                bits = ed[day];
            }
            ends[day] = bits;
            bits = bits ^ sd[day]; // remove any starting
        }

        var d = new int[n];
        bits = 0;
        for (int day = 0; day < n; day++)
        {
            bits |= sd[day]; // add any starting
            int lowestBit;
            if ((ends[day] != 0) && ((bits & ends[day]) != 0))
            {
                // Have early ending tasks to deal with
                // and have not dealt with them yet
                int tb = bits & ends[day];
                lowestBit = tb & (-tb);
                if (lowestBit == 0) throw new Exception("Fail");
            }
            else
            {
                lowestBit = bits & (-bits);
            }
            int task = (int)Math.Log(lowestBit, 2);
            d[task] = day;
            bits = bits - lowestBit; // remove task
        }
        Console.WriteLine(string.Join(", ", d));
    }
}
Result in this case is: 1, 4, 2, 3, 0 as expected.
Correct me if I'm wrong, but I believe that the name for such problems would be Interval Scheduling.
From your post, I assume that you are not looking for the Optimal Schedule and that you're simply looking to find any solution within O(n).
The problem here is that sorting will take O(n log n) and computing will also take O(n log n). You can try to do it in one step:
Define V - a vector of 'time taken' for each task.
Define P - a vector of tasks which finish before a given task.
i.e. if Task 2 finishes before Task 3, P[3] = 2.
Of course, as you can see, the main computation is involved in finding P[j]. You always take the latest-ending non-overlapping task.
Thus, to find P[i], find a task which ends before the i-th task. If more than one exists, pick the one which ends last.
Define M - a vector containing the resultant slots.
The algorithm:
WIS(n):
    M[0] <- 0
    for j <- 1 to n:
        M[j] <- max(w[j] + M[P[j]], M[j-1])
    return M[n]
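For reference, a rough C# rendering of that recurrence (my sketch; it assumes the intervals are 1-indexed and already sorted by end time, and that the weights w[] and predecessors P[] have been precomputed as described above):

static long WIS(int n, long[] w, int[] P)
{
    var M = new long[n + 1];                        // M[j] = best value using intervals 1..j
    M[0] = 0;
    for (int j = 1; j <= n; j++)
        M[j] = Math.Max(w[j] + M[P[j]], M[j - 1]);  // take interval j, or skip it
    return M[n];
}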
Sort by end date, then by start date. Then process the tasks in that sorted order, executing each one on the earliest free day that is not before its start date; a sketch of this follows.
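A small C# sketch of that greedy idea (my reading of it, using the example data from the question; the SortedSet of free days is my own choice, not something the answer specifies):

using System;
using System.Collections.Generic;
using System.Linq;

class GreedySchedule
{
    static void Main()
    {
        int n = 5;
        var s = new[] { 1, 0, 1, 2, 0 };
        var e = new[] { 2, 4, 2, 3, 1 };

        // tasks ordered by end date, then start date
        var order = Enumerable.Range(0, n).OrderBy(t => e[t]).ThenBy(t => s[t]);
        var freeDays = new SortedSet<int>(Enumerable.Range(0, n));
        var d = new int[n];

        foreach (var task in order)
        {
            // earliest still-free day on or after the task's start date
            var candidates = freeDays.GetViewBetween(s[task], int.MaxValue);
            if (candidates.Count == 0) throw new Exception("No feasible schedule");
            int day = candidates.Min;
            if (day > e[task]) throw new Exception("No feasible schedule");
            d[task] = day;
            freeDays.Remove(day);
        }
        Console.WriteLine(string.Join(", ", d)); // 1, 4, 2, 3, 0 for the example above
    }
}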
There are many sequences of start-end pairs. How do I find all the ranges that are contained in all of the sequences? Starts and ends are integers and they may be far apart, so making bit-fields of the sequences and &-ing them is not feasible. The ranges (i.e. start-end pairs) in one "row" (i.e. one sequence) do not overlap, if that helps. There are lower and upper bounds for start and end; 32-bit integers would be enough, I think (i.e., 0 <= values <= 65535).
Let me draw an example:
|----------| |---------------------| |----------|
|----------------------| |-------|
|---------------------| |--|
The result should be:
|--------| |--|
The example above would read, approximately:
row1 = (100, 200), (300, 600), (800, 900)
row2 = (140, 450), (780, 860)
row3 = (280, 580), (820, 860)
result = (300, 450), (820, 860)
Also, is there any known algorithm for this? I mean, does this problem have a name?
That should not be too hard, assuming the ranges in each sequence do not overlap. In this case it is just a matter of iterating over all points and tracking when you enter or leave a range.
Throw all your points from all your sequences into one list, sort it and remember for each point if it is a start or end point.
100 S  ---
140 S   |    ---
200 E  ---    |
280 S         |    ---
300 S  ---    |     |
450 E   |    ---    |
580 E   |          ---
600 E  ---
780 S        ---
800 S  ---    |
820 S   |     |    ---
860 E   |    ---    |
860 E   |          ---
900 E  ---
Now you iterate over this list and every time you encounter a start point you increment a counter, every time you encounter an end point you decrement the counter.
0
100 S 1
140 S 2
200 E 1
280 S 2
300 S 3 <--
450 E 2 <--
580 E 1
600 E 0
780 S 1
800 S 2
820 S 3 <--
860 E 2 <--
860 E 1
900 E 0
When the counter is equal to the number of sequences - three in your example - you have found the start of one range and the next point is the end of this range.
Note that it is not even required to build the list explicitly if the ranges in each sequence are sorted by start or can be sorted by start. In this case you can just iterate over all sequences in parallel by keeping a pointer to the current range in each sequence.
Here is the whole thing in C#. First, the class for ranges.
internal sealed class Range
{
    private readonly Int32 start = 0;
    private readonly Int32 end = 0;

    public Range(Int32 start, Int32 end)
    {
        this.start = start;
        this.end = end;
    }

    internal Int32 Start
    {
        get { return this.start; }
    }

    internal Int32 End
    {
        get { return this.end; }
    }
}
The class for points with a flag to discriminate between start and end points.
internal sealed class Point
{
    private readonly Int32 position = 0;
    private readonly Boolean isStartPoint = false;

    public Point(Int32 position, Boolean isStartPoint)
    {
        this.position = position;
        this.isStartPoint = isStartPoint;
    }

    internal Int32 Position
    {
        get { return this.position; }
    }

    internal Boolean IsStartPoint
    {
        get { return this.isStartPoint; }
    }
}
And finally the algorithm and test program.
using System;
using System.Collections.Generic;
using System.Linq;

internal static class Program
{
    private static void Main()
    {
        var s1 = new List<Range> { new Range(100, 200), new Range(300, 600), new Range(800, 900) };
        var s2 = new List<Range> { new Range(140, 450), new Range(780, 860) };
        var s3 = new List<Range> { new Range(280, 580), new Range(820, 860) };
        var sequences = new List<List<Range>> { s1, s2, s3 };

        var startPoints = sequences.SelectMany(sequence => sequence)
                                   .Select(range => new Point(range.Start, true));
        var endPoints = sequences.SelectMany(sequence => sequence)
                                 .Select(range => new Point(range.End, false));
        var points = startPoints.Concat(endPoints).OrderBy(point => point.Position);

        var counter = 0;
        foreach (var point in points)
        {
            if (point.IsStartPoint)
            {
                counter++;
                if (counter == sequences.Count)
                {
                    Console.WriteLine("Start {0}", point.Position);
                }
            }
            else
            {
                if (counter == sequences.Count)
                {
                    Console.WriteLine("End {0}", point.Position);
                    Console.WriteLine();
                }
                counter--;
            }
        }

        Console.ReadLine();
    }
}
The output is, as desired, the following:
Start 300
End 450
Start 820
End 860
I think you can do this rather simply by fusing the sequences two by two.
Each fusion is doable in time linear in the number of intervals in the two sequences being fused (if the sequences are sorted), and M-1 fusions are required (with M the number of sequences).
Taking your example and adding an extra sequence:
|----------| |---------------------| |----------|
|----------------------| |-------|
|---------------------| |--|
|-----------------------------------| |-----|
Fuse by pair of sequences:
|-----| |--------| |-----|
|---------------------| |--|
Fuse again:
|--------| |--|
But you may be able to find a faster way to do it. Fusing the sequences pairwise, tournament style as above, gives a worst case of O(N log M) runtime (N being the total number of intervals).
Edit: Pseudocode for fusion

Take s1 and s2, an iterator on each sequence
While there are still intervals in both sequences:
    Compare the current intervals:
    If s1.begin < s2.begin:
        If s2.begin < s1.end:
            If s2.end > s1.end:
                Add [s2.begin, s1.end] to the fused sequence
                Increment s1
            Else:
                Add [s2.begin, s2.end] to the fused sequence
                Increment s2
        Else:
            Increment s1
    Else:
        Same thing with s1 and s2 reversed
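Here is the same fusion step as a C# sketch (my own rendering of the pseudocode, assuming each sequence is a list of (start, end) tuples sorted by start; touching endpoints are not counted as an overlap, as in the drawings above):

static List<(int Start, int End)> Fuse(List<(int Start, int End)> a, List<(int Start, int End)> b)
{
    var fused = new List<(int Start, int End)>();
    int i = 0, j = 0;
    while (i < a.Count && j < b.Count)
    {
        // the overlap of the two current intervals, if any, is [max(starts), min(ends)]
        int start = Math.Max(a[i].Start, b[j].Start);
        int end = Math.Min(a[i].End, b[j].End);
        if (start < end) fused.Add((start, end));

        // advance whichever interval ends first; it cannot overlap anything later
        if (a[i].End < b[j].End) i++; else j++;
    }
    return fused;
}

Applying Fuse to rows 1 and 2 of the original example and then fusing the result with row 3 gives (300, 450) and (820, 860), as expected.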
It's called the longest common substring problem. It can be solved using suffix trees. There is a pretty clean Java implementation of it available on this blog, and it works with more than just two source strings.
I don't know if you are dealing with characters but I'm sure you can probably adapt it if you don't.
I'm looking for an algorithm that, for a given record with n properties with possible values of any type (int, string, etc.), searches a number of existing records and gives back the one that matches the most properties.
Example:
A = 1
B = 1
C = 1
D = f
A | B | C | D
----+-----+-----+----
1 | 1 | 9 | f <
2 | 3 | 1 | g
3 | 4 | 2 | h
2 | 5 | 8 | j
3 | 6 | 5 | h
The first row would be the one I'm looking for, as it has the most matching values. I don't think it needs to calculate any closeness of values, because then row 2 might look like a better match.
Loop through each row and add one to that row's score for every field that matches (the first row in the example would score 3, matching A, B and D); when that's done, you have a result set of scores which you can sort.
The basic algorithm could look like (in java pseudo code):
int bestMatchIdx = -1;
int currMatches = 0;
int bestMatches = 0;

for (int row = 0; row < numRows; row++) {
    currMatches = 0;
    for (int col = 0; col < numCols; col++) {
        if (search[col].equals(rows[row][col]))
            currMatches++;
    }
    if (currMatches > bestMatches) {
        bestMatchIdx = row;
        bestMatches = currMatches;
    }
}
This assumes that you have an equals function for comparison and that the data is stored in a 2D array; 'search' is the reference row that all other rows are compared against.
I have two sets of ranges. Each range is a pair of integers (start and end) representing some sub-range of a single larger range. The two sets of ranges are in a structure similar to this (of course the ...s would be replaced with actual numbers).
$a_ranges =
{
a_1 =>
{
start => ...,
end => ...,
},
a_2 =>
{
start => ...,
end => ...,
},
a_3 =>
{
start => ...,
end => ...,
},
# and so on
};
$b_ranges =
{
b_1 =>
{
start => ...,
end => ...,
},
b_2 =>
{
start => ...,
end => ...,
},
b_3 =>
{
start => ...,
end => ...,
},
# and so on
};
I need to determine which ranges from set A overlap with which ranges from set B. Given two ranges, it's easy to determine whether they overlap. I've simply been using a double loop to do this--loop through all elements in set A in the outer loop, loop through all elements of set B in the inner loop, and keep track of which ones overlap.
I'm having two problems with this approach. First is that the overlap space is extremely sparse--even if there are thousands of ranges in each set, I expect each range from set A to overlap with maybe 1 or 2 ranges from set B. My approach enumerates every single possibility, which is overkill. This leads to my second problem--the fact that it scales very poorly. The code finishes very quickly (sub-minute) when there are hundreds of ranges in each set, but takes a very long time (+/- 30 minutes) when there are thousands of ranges in each set.
Is there a better way I can index these ranges so that I'm not doing so many unnecessary checks for overlap?
Update: The output I'm looking for is two hashes (one for each set of ranges) where the keys are range IDs and the values are the IDs of the ranges from the other set that overlap with the given range in this set.
This sounds like the perfect use case for an interval tree, which is a data structure specifically designed to support this operation. If you have two sets of intervals of sizes m and n, then you can build one of them into an interval tree in time O(m lg m) and then do n intersection queries in time O(n lg m + k), where k is the total number of intersections you find. This gives a net runtime of O((m + n) lg m + k). Remember that in the worst case k = O(nm), so this isn't any better than what you have, but for cases where the number of intersections is sparse this can be substantially better than the O(mn) runtime you have right now.
I don't have much experience working with interval trees (and zero experience in Perl, sorry!), but from the description it seems like they shouldn't be that hard to build. I'd be pretty surprised if one didn't exist already.
Hope this helps!
In case you are not exclusively tied to Perl: the IRanges package in R deals with interval arithmetic. It has very powerful primitives, and it would probably be easy to code a solution with them.
A second remark is that the problem could become very easy if the intervals have additional structure; for example, if within each set of ranges there is no overlap, a linear approach that sifts through the two ordered sets simultaneously is possible (see the sketch below). Even in the absence of such structure, the least you can do is sort one set of ranges by start point and the other set by end point, then break out of the inner loop once a match is no longer possible. Of course, existing general algorithms and data structures such as the interval tree mentioned earlier are the most powerful.
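To illustrate the "sift through the two ordered sets simultaneously" idea for that special case (no overlaps within each set, both sets sorted by start), here is a sketch in C# rather than Perl; the tuple shape and names are mine. It builds the A-side hash (range ID to overlapping B-range IDs); the B-side hash is symmetric:

static Dictionary<string, List<string>> OverlapsOfAWithB(
    List<(string Id, int Start, int End)> a,
    List<(string Id, int Start, int End)> b)
{
    var result = new Dictionary<string, List<string>>();
    int i = 0, j = 0;
    while (i < a.Count && j < b.Count)
    {
        // two ranges overlap iff each one starts no later than the other ends
        if (a[i].Start <= b[j].End && b[j].Start <= a[i].End)
        {
            if (!result.TryGetValue(a[i].Id, out var list))
                result[a[i].Id] = list = new List<string>();
            list.Add(b[j].Id);
        }
        // advance whichever range ends first; it cannot overlap anything later
        if (a[i].End < b[j].End) i++; else j++;
    }
    return result;
}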
There are several existing CPAN modules that solve this issue; I have developed two of them: Data::Range::Compare and Data::Range::Compare::Stream.
Data::Range::Compare only works with arrays in memory, but supports generic range types.
Data::Range::Compare::Stream works with streams of data via iterators, but it requires understanding OO Perl to extend it to generic data types. Data::Range::Compare::Stream is recommended if you are processing very, very large sets of data.
Here is an excerpt from the Examples folder of Data::Range::Compare::Stream.
Given these 3 sets of data:
Numeric Range set: A contained in file: source_a.src
+----------+
| 1 - 11 |
| 13 - 44 |
| 17 - 23 |
| 55 - 66 |
+----------+
Numeric Range set: B contained in file: source_b.src
+----------+
| 0 - 1 |
| 2 - 29 |
| 88 - 133 |
+----------+
Numeric Range set: C contained in file: source_c.src
+-----------+
| 17 - 29 |
| 220 - 240 |
| 241 - 250 |
+-----------+
The expected output would be:
+--------------------------------------------------------------------+
| Common Range | Numeric Range A | Numeric Range B | Numeric Range C |
+--------------------------------------------------------------------+
| 0 - 0 | No Data | 0 - 1 | No Data |
| 1 - 1 | 1 - 11 | 0 - 1 | No Data |
| 2 - 11 | 1 - 11 | 2 - 29 | No Data |
| 12 - 12 | No Data | 2 - 29 | No Data |
| 13 - 16 | 13 - 44 | 2 - 29 | No Data |
| 17 - 29 | 13 - 44 | 2 - 29 | 17 - 29 |
| 30 - 44 | 13 - 44 | No Data | No Data |
| 55 - 66 | 55 - 66 | No Data | No Data |
| 88 - 133 | No Data | 88 - 133 | No Data |
| 220 - 240 | No Data | No Data | 220 - 240 |
| 241 - 250 | No Data | No Data | 241 - 250 |
+--------------------------------------------------------------------+
The source code is below.
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use lib qw(./ ../lib);
# custom package from FILE_EXAMPLE.pl
use Data::Range::Compare::Stream::Iterator::File;
use Data::Range::Compare::Stream;
use Data::Range::Compare::Stream::Iterator::Consolidate;
use Data::Range::Compare::Stream::Iterator::Compare::Asc;
my $source_a=Data::Range::Compare::Stream::Iterator::File->new(filename=>'source_a.src');
my $source_b=Data::Range::Compare::Stream::Iterator::File->new(filename=>'source_b.src');
my $source_c=Data::Range::Compare::Stream::Iterator::File->new(filename=>'source_c.src');
my $consolidator_a=new Data::Range::Compare::Stream::Iterator::Consolidate($source_a);
my $consolidator_b=new Data::Range::Compare::Stream::Iterator::Consolidate($source_b);
my $consolidator_c=new Data::Range::Compare::Stream::Iterator::Consolidate($source_c);
my $compare=new Data::Range::Compare::Stream::Iterator::Compare::Asc();
my $src_id_a=$compare->add_consolidator($consolidator_a);
my $src_id_b=$compare->add_consolidator($consolidator_b);
my $src_id_c=$compare->add_consolidator($consolidator_c);
print " +--------------------------------------------------------------------+
| Common Range | Numeric Range A | Numeric Range B | Numeric Range C |
+--------------------------------------------------------------------+\n";
my $format=' | %-12s | %-13s | %-13s | %-13s |'."\n";
while($compare->has_next) {
    my $result=$compare->get_next;
    my $string=$result->to_string;
    my @data=($result->get_common);
    next if $result->is_empty;
    for(0 .. 2) {
        my $column=$result->get_column_by_id($_);
        unless(defined($column)) {
            $column="No Data";
        } else {
            $column=$column->get_common->to_string;
        }
        push @data,$column;
    }
    printf $format,@data;
}
print " +--------------------------------------------------------------------+\n";
Try Tree::RB, but for mutually exclusive ranges with no overlaps.
The performance is not bad: I had about 10000 segments and had to find the containing segment for each discrete number. My input had 300 million records that I needed to put into separate buckets, like partitioning the data, and Tree::RB worked out great.
$var = [
[0,90],
[91,2930],
[2950,8293]
.
.
.
]
My lookup values were 10, 99, 991, ...
and basically I needed the position of the range for each given number.
My code uses a comparison function like the one below:
my $cmp = sub
{
    my ($a1, $b1) = @_;
    if(ref($b1) && ref($a1))
    {
        # two ranges: compare the end of the first against the start of the second
        return ($$a1[1]) <=> ($$b1[0]);
    }
    my $ret = 0;
    if(ref($a1) eq 'ARRAY')
    {
        # $a1 is a range, $b1 is the lookup number
        if($$a1[0] <= $b1 && $b1 <= $$a1[1])
        {
            $ret = 0;       # number falls inside the range
        }
        elsif($$a1[1] < $b1)
        {
            $ret = -1;      # range ends before the number
        }
        elsif($$a1[0] > $b1)
        {
            $ret = 1;       # range starts after the number
        }
    }
    else
    {
        # $b1 is a range, $a1 is the lookup number
        if($$b1[0] <= $a1 && $a1 <= $$b1[1])
        {
            $ret = 0;
        }
        elsif($$b1[0] > $a1)
        {
            $ret = -1;
        }
        elsif($$b1[1] < $a1)
        {
            $ret = 1;
        }
    }
    return $ret;
};
I should measure the time to know whether it's the fastest way, but given the structure of your data you could try this:
use strict;
my $fromA = 12;
my $toA = 15;
my $fromB = 7;
my $toB = 35;
my @common_range = get_common_range($fromA, $toA, $fromB, $toB);
my $common_range = $common_range[0]."-".$common_range[-1];

sub get_common_range {
    my @A = $_[0]..$_[1];
    my %B = map {$_ => 1} $_[2]..$_[3];
    my @common = ();
    foreach my $i (@A) {
        if (defined $B{$i}) {
            push(@common, $i);
        }
    }
    return sort {$a <=> $b} @common;
}
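As a side note, for two single ranges the common part can also be computed directly, without enumerating every integer in between; a tiny sketch (in C# here, reusing the numbers from the example above):

int fromA = 12, toA = 15, fromB = 7, toB = 35;
int from = Math.Max(fromA, fromB);                               // 12
int to = Math.Min(toA, toB);                                     // 15
string commonRange = from <= to ? $"{from}-{to}" : "no overlap"; // "12-15"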