Doubles Round Robin Sorting Algorithm - sorting

Background: I have a group of 6 guys that get together and play pickleball every week. We play 2v2 and 2 players sit each game. We play 2 sets of 6 games each week (12 total games) - in each set, each player plays 4 games, and sits for 2 games. At the end of each set, each player totals up the amount of points he scores, and we have a winner.
Anyway, I worked out all the combinations of games for 6 people for playing doubles. There are a total of 45 unique games that could be played, where every combination of partners would play every other combination of partners. I would like to come up with a schedule to play all 45 of these games across 4 sessions of 2 sets each for a total of 8 sets (the final 3 games in the last set can be arbitrary since 6x8=48, and there are only 45 unique games).
The problem comes in with determining the order of the games, such that the following criteria are met for each set of 6 games:
Each player plays 4 games, and sits for 2 games.
No player sits for 2 consecutive games.
Each player partners with a different partner every game. (i.e. no 2 players partner more than once per set).
Where I could use help is figuring out the algorithm to come up with the optimal order of games. With 45 factorial possible orders ~1e56...it's too many to brute force. Any advice would be appreciated.
All the combinations for players A, B, C, D, E, and F:
T1 T2 Bye
AB CD EF
AB CE DF
AB CF DE
AB DE CF
AB DF CE
AB EF CD
AC BD EF
AC BE DF
AC BF DE
AC DE BF
AC DF BE
AC EF BD
AD BC EF
AD BE CF
AD BF CE
AD CE BF
AD CF BE
AD EF BC
AE BC DF
AE BD CF
AE BF CD
AE CD BF
AE CF BD
AE DF BC
AF BC DE
AF BD CE
AF BE CD
AF CD BE
AF CE BD
AF DE BC
BC DE AF
BC DF AE
BC EF AD
BD CE AF
BD CF AE
BD EF AC
BE CD AF
BE CF AD
BE DF AC
BF CD AE
BF CE AD
BF DE AC
CD EF AB
CE DF AB
CF DE AB
Tried writing a brute force algorithm in Excel VBA...seems like it was going to take forever to run.
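For concreteness, here is a rough backtracking sketch in Python (an illustration only, not the VBA attempt): it enumerates the 45 games and then searches for one set of 6 that satisfies the three per-set rules. It only builds one set at a time; filling all 8 sets greedily this way can dead-end, so a complete answer would still need backtracking or restarts across sets.

from itertools import combinations

PLAYERS = "ABCDEF"

def all_games():
    # All 45 games: two teams of two plus a sitting ("bye") pair.
    games = []
    for active in combinations(PLAYERS, 4):
        bye = tuple(p for p in PLAYERS if p not in active)
        for partner in active[1:]:
            t1 = (active[0], partner)
            t2 = tuple(p for p in active if p not in t1)
            games.append((t1, t2, bye))
    return games

def legal(seq, game):
    # Check the three per-set rules for appending `game` to the partial set `seq`.
    t1, t2, bye = game
    # No two players may partner more than once within the set.
    if any(g[0] in (t1, t2) or g[1] in (t1, t2) for g in seq):
        return False
    # Nobody sits two consecutive games.
    if seq and set(seq[-1][2]) & set(bye):
        return False
    # Nobody sits more than 2 of the 6 games (which forces exactly 4 games played).
    sits = {p: 0 for p in PLAYERS}
    for g in (*seq, game):
        for p in g[2]:
            sits[p] += 1
    return all(v <= 2 for v in sits.values())

def build_set(pool, seq=()):
    # Backtracking search for 6 games from `pool` satisfying the rules.
    if len(seq) == 6:
        return list(seq)
    for i, game in enumerate(pool):
        if legal(seq, game):
            found = build_set(pool[:i] + pool[i + 1:], seq + (game,))
            if found:
                return found
    return None

pool = all_games()           # the 45 unique games
print(build_set(pool))       # one legal set of 6 games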

Related

forvalues and xtile in Stata

What do the last two lines do? As far as I understand, these lines loop through the list h_nwave and calculate the weighted quantiles if syear2digit == 'nwave', i.e. calculate 5 quantiles for each year. But I'm not sure if my understanding is correct. Also, is this equivalent to using the group() function?
h_nwave "91 92 93 94 95 96 97 98 99 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15"
generate quantile_ip = .
forvalues number = 1(1)15 {
local nwave : word `number' of `h_nwave'
xtile quantile_ip_`nwave' = a_ip if syear2digit == `nwave' [ w = weight ], nq(5)
replace quantile_ip = quantile_ip_`nwave' if syear2digit == `nwave'
}
I am trying to convert this into R with a for loop, mutate, xtile (from the statar package), and case_when. However, so far I cannot find a suitable way to get a similar result.
There is no source or context for this code.
Detail: The first command is truncated and presumably should have been
local h_nwave 91 92 93 94 95 96 97 98 99 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15
Detail: The first list contains 25 values, presumably corresponding to years 1991 to 2015. But the loop only runs over 15 of them, so we are only looking at 91 to 05.
Main idea: xtile bins to quintile bins on variable a_ip, with weights. So the lowest 20% of observations (taking weighting into account) should be in bin 1, and so on. In practice observations with the same value must be assigned to the same bin, so 20-20-20-20-20 splits are not guaranteed, quite apart from the small print of whether the sample size is a multiple of 5. So, the result is assignment to bins 1 to 5, and not quintiles themselves, or any other kind of quantiles.
This is done separately for each survey wave.
The xtile command is documented for everyone at https://www.stata.com/manuals/dpctile.pdf regardless of personal or workplace access to Stata.
In R, you may well be able to produce quintile bins for all survey years at once. I have no idea how to do that.
Otherwise put, the loop arises because xtile doesn't work on separate subsets in one command call. There are community-contributed Stata commands that allow that. This kind of topic is much discussed on Statalist.
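The question asks for R, which I can't help with, but as a rough language-neutral illustration of the per-wave logic, here is a Python sketch. It is only an approximation of what xtile does (tie and boundary handling differ), the data are toy values, and the column names simply mirror the Stata variables.

import numpy as np
import pandas as pd

def weighted_bins(values, weights, nq=5):
    # Rank by value, accumulate weights, and cut the cumulative weight into
    # nq roughly equal parts. Ties and boundaries will not match Stata's
    # xtile exactly; this only shows the idea.
    order = np.argsort(values, kind="mergesort")
    frac = np.cumsum(weights[order]) / np.sum(weights)
    bins = np.minimum(np.ceil(frac * nq), nq).astype(int)
    out = np.empty(len(values), dtype=int)
    out[order] = bins
    return out

# Toy data; column names mirror the Stata variables.
df = pd.DataFrame({"syear2digit": ["91", "91", "91", "92", "92", "92"],
                   "a_ip":        [10, 20, 30, 5, 15, 25],
                   "weight":      [1.0, 1.0, 1.0, 2.0, 1.0, 1.0]})

# One pass per survey wave, mirroring the forvalues loop.
df["quantile_ip"] = np.nan
for wave, g in df.groupby("syear2digit"):
    df.loc[g.index, "quantile_ip"] = weighted_bins(g["a_ip"].to_numpy(),
                                                   g["weight"].to_numpy())
print(df)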

Uniqueness in Permutation and Combination

I am trying to create some pseudocode to generate possible outcomes for this scenario:
There is a tournament taking place where, in each round, every player is placed in a group with players from other teams.
Given x teams, each with exactly n players, what are the possible outcomes for groups of size r where each group may contain only one player from each team AND no player may be grouped with another player they have already played with in a previous round?
Example: 4 teams (A-D), 4 players each team, 4 players each grouping.
Possible groupings are: (correct team constraint)
A1, B1, C1, D1
A1, B3, C1, D2
But not: (violates same team constraint)
A1, A3, C2, D2
B3, C2, D4, B1
However, the uniqueness constraint comes into play in this grouping
A1, B1, C1, D1
A1, B3, C1, D2
While it does follow the constraint of playing with different teams, it breaks the uniqueness rule of playing with different players: in this case A1 is grouped with C1 twice.
At the end of the day the pseudocode should be able to create something like the following
Round 1 Round 2 Round 3 Round 4
a1 b1 a1 d4 a1 c2 a1 c4
c1 d1 b2 c3 b4 d3 d2 b3
a2 b2 a2 d1 a2 c3 a2 c1
c2 d2 b3 c4 b1 d4 d3 b4
a3 b3 a3 d2 a3 c4 a3 c2
c3 d3 b4 c1 b2 d1 d4 b1
a4 b4 a4 d3 a4 c1 a4 c3
c4 d4 b1 c2 b3 d2 d1 b2
In the example you can see that in each round no player is grouped with a player they were already grouped with in a previous round.
If the number of players on a team is a prime power (2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, etc.), then here's an algorithm that creates a schedule with the maximum number of rounds, based on a finite affine plane.
We work in the finite field GF(n), where n is the number of players on a team. GF(n) has its own notion of multiplication; when n is a prime, it's multiplication mod n, and when n is a higher power of some prime, it's multiplication of univariate polynomials mod some irreducible polynomial of the appropriate degree. Each team is identified by a nonzero element of GF(n); let the set of team identifiers be T. Each team member is identified by a pair in T×GF(n). For each nonzero element r of GF(n), the groups for round r are
{{(t, r*t + c) | t in T} | c in GF(n)},
where * and + denote multiplication and addition respectively in GF(n).
Implementation in Python 3
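Since the linked implementation isn't reproduced here, a minimal sketch of the construction for the prime case (plain multiplication mod n) follows; a genuine GF(n) implementation would be needed for prime powers that are not prime, such as 4, 8 or 9.

def schedule(teams, n):
    # teams: up to n-1 team labels (identified with the nonzero field elements 1..len(teams));
    # n: prime number of players per team.
    # Returns rounds -> groups -> (team, player_index) pairs.
    rounds = []
    for r in range(1, n):                 # one round per nonzero r in GF(n)
        groups = []
        for c in range(n):                # one group per field element c
            groups.append([(team, (r * t + c) % n)
                           for t, team in enumerate(teams, start=1)])
        rounds.append(groups)
    return rounds

# Example: 3 teams of 5 players each -> 4 rounds in which no two players
# are ever grouped together twice.
for rnd, groups in enumerate(schedule(["A", "B", "C"], 5), start=1):
    print("Round", rnd, groups)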
This problem is very closely related to the Social Golfer Problem. The Social Golfer Problem asks, given n players who each play once a day in g groups of size s (n = g×s), how many days can they be scheduled such that no player plays with any other player more than once?
The algorithms for finding solutions to instances of Social Golfer problems are a patchwork of constraint solvers and mathematical constructions, which together don't address very many cases satisfactorily. If the number of players on a team is equal to the group size, then solutions to this problem can be derived by interpreting the first day's schedule as the team assignments and then using the rest of the schedule. There may be other constructions.

Identifying DEFLATE Algorithm Variant Being Used in Proprietary File Format

Disclaimer: This problem requires a very good knowledge of the DEFLATE algorithm.
I am hoping I could solicit some ideas identifying the compression algorithm being used in a particular file format. This is a legacy proprietary format that my application needs to support, so we are trying to reverse engineer it. (Going to the original creator is not an option, for reasons I won't get into).
I'm extremely close to cracking it, but I feel like I'm living Zeno's paradox because every day I seem to get halfway closer to the finish line but never there!
Here's what I know so far:
It is definitely using something extremely similar to the DEFLATE algorithm. Similarities -
The compressed data is represented by canonical Huffman codes (usually starting with 000, but I'm not sure that is always the case).
The data is preceded (I believe immediately) by a header table which identifies the bit lengths of each of the actual codes. Like DEFLATE, this table ALSO comprises canonical Huffman codes (starting either at 0 or 00). These codes provide the bit lengths of each character in the 0-255+ alphabet plus whatever distance codes might be used.
Finally, again like DEFLATE, the header table with the bit lengths for the main codes is also preceded (I think immediately) by a series of 3-bit codes used to derive the header table codes (I'll call this the "pre-header").
At this point the similarities seem to end though.
The 3-bit codes in the pre-header do not appear to go in the 16, 17, 18, 0, 8 ... optimal order specified by DEFLATE, but rather seem to go sequentially, like 6 7 8 9....
Another difference is that each 3-bit code is not necessarily a literal bit length. For example, here's a header that I've mostly deciphered (I'm 99.99% confident it is correct):
00000001011 100 010 110 010 010 011 010 110 101 100 011 010 010 011 100 010 111
*0* skA *3* *4* *5* *6* *7* *8* *9* skB
Ignoring the unmarked bits, this results in the following code table:
00 7-bits
01 8-bits
100 6-bits
101 9-bits
1100 0-bits (skip code)
1101 skA = skip 3 + value of next 4 bits
1110 5-bits
11110 4-bits
111110 skB = skip 11? + value of next 9 bits
111111 3-bits
The most glaring problem is that there are additional bit-lengths in the header table that are unused. And, in fact, they would not be usable at all, as there cannot be any additional 2-bit or 3-bit codes, for example, for the codes to be canonical (right?).
The author is also using non-standard codes for 16+. They don't seem to use the copy code (16 in DEFLATE) at all; the main headers all have huge strings of identical length codes (terribly inefficient...), and the skip codes use the next 4 and 9 bits to determine the number of skips, respectively, rather than 3 and 7 as in DEFLATE.
Yet another key difference is in the very first bits of the header. In DEFLATE the first bits are HLIT(5), HDIST(5), and HCLEN(4). If I interpreted the above header that way using LSB packing, I'd get HLIT = 257 (correct), HDIST = 21 (unsure if correct) and HCLEN = 7 (definitely not correct). If I use MSB packing instead, I'd get HLIT=257, HDIST = 6 (more likely correct) and HCLEN = 16 (appears correct). BUT, I don't think there are actually intended to be 14 bits in the prefix because I appear to need the "100" (see above) for the bit count of the 0-bit (skip) code. And in other examples, bits 10-13 don't appear to correlate to the length of the pre-header at all.
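For reference, here is a small sketch of how those two readings arise, taking the first 14 bits exactly as printed above and the field widths from RFC 1951.

# First 14 bits of the example header, exactly as printed above.
bits = "00000001011100"

def read_lsb(s, pos, n):
    # DEFLATE packs these header fields least-significant-bit first.
    return sum(int(s[pos + i]) << i for i in range(n)), pos + n

def read_msb(s, pos, n):
    return int(s[pos:pos + n], 2), pos + n

for label, read in (("LSB", read_lsb), ("MSB", read_msb)):
    pos = 0
    hlit, pos = read(bits, pos, 5)
    hdist, pos = read(bits, pos, 5)
    hclen, pos = read(bits, pos, 4)
    print(label, "HLIT =", hlit + 257, "HDIST =", hdist + 1, "HCLEN =", hclen + 4)
# LSB -> HLIT 257, HDIST 21, HCLEN 7;  MSB -> HLIT 257, HDIST 6, HCLEN 16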
Speaking of other examples, not every file appears to follow the same header format. Here's another header:
00000001000 100 100 110 011 010 111 010 111 011 101 010 110 100 011 101 000 100 011
In this second example, I again happen to know that the code table for the header is:
0 8-bits
10 7-bits
110 6-bits
11100 skA
11101 5-bits
111100 0-bits (skip)
111101 skB
111110 9-bits
1111110 3-bits
1111111 4-bits
However, as you can see, many of the required code lengths are not in the header at all. For example there's no "001" to represent the 8-bit code, and they are not even close to being in sequence (neither consecutively nor in the optimal 16, 17, 18... order).
And yet, if we shift the bits left by 1:
skA *0* *5* *6* *7* *8* *9*
0000000100 010 010 011 001 101 011 101 011 101 110 101 011 010 001 110 100 010 001 1
This is much better, but we still can't correctly derive the code for skB (110), or 3 or 4 (111). Shifting by another bit does not improve the situation.
Incidentally, if you're wondering how I am confident that I know the code tables in these two examples, the answer is A LOT of painstaking reverse engineering, i.e., looking at the bits in files that differ slightly or have discernable patterns, and deriving the canonical code table being used. These code tables are 99+% certainly correct.
To summarize, then, we appear to have an extremely close variant of DEFLATE, but for inexplicable reasons one that uses some kind of non-standard pre-header. Where I am getting tripped up, of course, is identifying which pre-header bits correspond to the code bit-lengths for the main header. If I had that, everything would fall into place.
I have a couple of other examples I could post, but rather than ask people to do pattern matching for me, what I'm really praying for is that someone will recognize the algorithm being used and be able to point me to it. I find it unlikely that the author, rather than using an existing standard, would have gone to the trouble of coding his own algorithm from scratch that was 99% like DEFLATE but then changed only the pre-header structure slightly. It makes no sense; if they simply wanted to obfuscate the data to prevent what I'm trying to do, there are much easier and more effective ways.
The software dates back to the late 90s, early 2000s, by the way, so consider what was being done back then. This is not "middle out" or anything new and crazy. It's something old and probably obscure. I'm guessing some variant of DEFLATE that was in use in some semi-popular library around that time, but I've not been having much luck finding information on anything that isn't actually DEFLATE.
Many, many thanks for any input.
Peter
PS - As requested, here is the complete data block from the first example in the post. I don't know if it'll be of much use, but here goes. BTW, the first four bytes are the uncompressed output size. The fifth byte begins the pre-header.
B0 01 00 00 01 71 64 9A D6 34 9C 5F C0 A8 B6 D4 D0 76 6E 7A 57 92 80 00 54 51 16 A1 68 AA AA EC B9 8E 22 B6 42 48 48 10 9C 11 FE 10 84 A1 7E 36 74 73 7E D4 90 06 94 73 CA 61 7C C8 E6 4D D8 D9 DA 9D B7 B8 65 35 50 3E 85 B0 46 46 B7 DB 7D 1C 14 3E F4 69 53 A9 56 B5 7B 1F 8E 1B 3C 5C 76 B9 2D F2 F3 7E 79 EE 5D FD 7E CB 64 B7 8A F7 47 4F 57 5F 67 6F 77 7F 87 8F 97 9D FF 4F 5F 62 DA 51 AF E2 EC 60 65 A6 F0 B8 EE 2C 6F 64 7D 39 73 41 EE 21 CF 16 88 F4 C9 FD D5 AF FC 53 89 62 0E 34 79 A1 77 06 3A A6 C4 06 98 9F 36 D3 A0 F1 43 93 2B 4C 9A 73 B5 01 6D 97 07 C0 57 97 D3 19 C9 23 29 C3 A8 E8 1C 4D 3E 0C 24 E5 93 7C D8 5C 39 58 B7 14 9F 02 53 93 9C D8 84 1E B7 5B 3B 47 72 E9 D1 B6 75 0E CD 23 5D F6 4D 65 8B E4 5F 59 53 DF 38 D3 09 C4 EB CF 57 52 61 C4 BA 93 DE 48 F7 34 B7 2D 0B 20 B8 60 60 0C 86 83 63 08 70 3A 31 0C 61 E1 90 3E 12 32 AA 8F A8 26 61 00 57 D4 19 C4 43 40 8C 69 1C 22 C8 E2 1C 62 D0 E4 16 CB 76 50 8B 04 0D F1 44 52 14 C5 41 54 56 15 C5 81 CA 39 91 EC 8B C8 F5 29 EA 70 45 84 48 8D 48 A2 85 8A 5C 9A AE CC FF E8
Edit 7/11/2015
I've managed to decipher quite a bit of additional information. The algorithm is definitely using LZ77 and Huffman coding. The length codes and extra bits seem to all match those used in DEFLATE.
I was able to learn a lot more detail about the pre-header as well. It has the following structure:
HLEN 0 SkS SkL ?? 3 4 5 6 7 8 9 HLIT
00000 00101110 001 0 1100 100 100 110 10 110 101 100 011 010 010 011 100010111
HLEN = the last bit-length in the pre-header - 3 (e.g. 1100 (12) means 9 is the last bit-length code)
HLIT = the number of literal codes in the main dictionary
SkS = "skip short" - skips a # of codes determined by the next 4-bits
SkL = "skip long" - skips a # of codes determined by the next 9-bits
0 - 9 = the number of bits in the dictionary codes for the respective bit lengths
The unmarked bits I'm still unable to decipher. Also, what I'm now seeing is that the pre-header codes themselves appear to have some extra bits thrown in (note the ?? between SkL and 3, above). They're not all straight 3-bit codes.
So, the only essential information that's now missing is:
How to parse the pre-header for extra bits and whatnot; and
How many distance codes follow the literal codes
If I had that information, I could actually feed the remaining data to zlib by manually supplying the code length dictionary along with the correct number of literal vs. distance codes. Everything after this header follows DEFLATE to the letter.
Here are some more example headers, with the bit-length codes indicated along with the number of literal and length codes. Note that in each one I was able to reverse engineer the answers, but I remain unable to match the undeciphered bits to those statistics.
Sample 1
(273 literals, 35 length, 308 total)
????? ???????? ??? ? HLEN 0 SkS SkL ?? 3 ? 4 ? 5 6 7 8 9 HLIT
00000 00100010 010 0 1100 110 101 110 10 111 0 111 0 101 011 010 001 110 100010001
Sample 2
(325 literal, 23 length, 348 total)
????? ???????? ??? ? HLEN 0 SkS SkL ?? 3 4 5 6 7 8 9 HLIT
00000 00110110 001 0 1100 101 101 110 10 110 000 101 000 011 010 001 101000101
Sample 3
(317 literal, 23 length, 340 total)
????? ???????? ??? ? HLEN 0 SkS SkL ??? 4 5 ? 6 7 8 9 HLIT
00000 01000100 111 0 1100 000 101 111 011 110 111 0 100 011 010 001 100111101
Sample 4
(279 literals, 18 length, 297 total)
????? ???????? ??? ? HLEN 0 SkS SkL ?? 3 4 5 6 7 8 9 HLIT
00000 00101110 001 0 1100 100 100 110 10 110 101 100 011 010 010 011 100010111
Sample 5
(258 literals, 12 length, 270 total)
????? ???????? ??? ? HLEN 0 SkS SkL ?? 2 3 4 HLIT
00000 00000010 000 0 0111 011 000 011 01 010 000 001 100000010
I'm still hoping someone has seen a non-standard DEFLATE-style header like this before. Or maybe you'll see a pattern I'm failing to see... Many thanks for any further input.
Well I finally managed to fully crack it. It was indeed using an implementation of LZ77 and Huffman coding, but very much a non-standard DEFLATE-like method for storing and deriving the codes.
As it turns out the pre-header codes were themselves fixed-dictionary Huffman codes and not literal bit lengths. Figuring out the distance codes was similarly tricky because unlike DEFLATE, they were not using the same bit-length codes as the literals, but rather were using yet another fixed-Huffman dictionary.
The takeaway for anyone interested is that apparently, there are old file formats out there using DEFLATE-derivatives. They CAN be reverse engineered with determination. In this case, I probably spent about 100 hours total, most of which was manually reconstructing compressed data from the known decompressed samples in order to find the code patterns. Once I knew enough about what they were doing to automate that process, I was able to make a few dozen example headers and thereby find the patterns.
I still fail to understand why they did this rather than use a standard format. It must have been a fair amount of work deriving a new compression format versus just using ZLib. If they were trying to obfuscate the data, they could have done so much more effectively by encrypting it, xor'ing with other values, etc. Nope, none of that. They just decided to show off their genius to their bosses, I suppose, by coming up with something "new" even if the differences from the standard were trivial and added no value other than to make MY life difficult. :)
Thanks to those who offered their input.

How to write a compacted card deck generator?

I am writing a card deck generator. Here is the solution I came up with, but it spans multiple lines and doesn't look good. Is there any other way to build this deck?
deck = []
'23456789TJQKA'.each_char do |rank|
  'SHDC'.each_char do |suit|
    deck << rank + suit
  end
end
You can use Array#product to produce the Cartesian product of two arrays.
# ruby 2.0
deck = '23456789TJQKA'.chars.product('SHDC'.chars).map{|a| a.join}
# ruby 1.9
deck = '23456789TJQKA'.split(//).product('SHDC'.split(//)).map{|a| a.join}
As DNNX commented, you can use .map(&:join) to get a shorter one.
However, I think it's better to write a clear program than a compact one.
As an example of using the block variant product(other_ary, ...) { |p| block }:
deck = []
'23456789TJQKA'.split('').product('SHDC'.split('')){|el| deck << el.join}
As Arie suggested, it's better to write a clear program than a shorter one. I think this one is dead simple:
deck = %w(2S 2H 2D 2C
3S 3H 3D 3C
4S 4H 4D 4C
5S 5H 5D 5C
6S 6H 6D 6C
7S 7H 7D 7C
8S 8H 8D 8C
9S 9H 9D 9C
TS TH TD TC
JS JH JD JC
QS QH QD QC
KS KH KD KC
AS AH AD AC)
Just kidding.

Figuring out how to decode obfuscated URL parameters

I have a web-based system that uses encrypted GET parameters. I need to figure out what encryption is used and create a PHP function to recreate it. Any ideas?
Example URL:
...&watermark=ISpQICAK&width=IypcOysK&height=IypcLykK&...
You haven't provided nearly enough sample data for us to reliably guess even the alphabet used to encode it, much less what structure it might have.
What I can tell, from the three sample values you've provided, is:
There is quite a lot of redundancy in the data — compare e.g. width=IypcOysK and height=IypcLykK (and even watermark=ISpQICAK, though that might be just coincidence). This suggests that the data is neither random nor securely encrypted (which would make it look random).
The alphabet contains a fairly broad range of upper- and lowercase letters, from A to S and from c to y. Assuming that the alphabet consists of contiguous letter ranges, that means a palette of between 42 and 52 possible letters. Of course, we can't tell with any certainty from the samples whether other characters might also be used, so we can't even entirely rule out Base64.
This is not the output of PHP's base_convert function, as I first guessed it might be: that function only handles bases up to 36, and doesn't output uppercase letters.
That, however, is just about all. It would help to see some more data samples, ideally with the plaintext values they correspond to.
Edit: The id parameters you give in the comments are definitely in Base64. Besides the distinctive trailing = signs, they both decode to simple strings of nine printable ASCII characters followed by a line feed (hex 0A):
_Base64___________Hex____________________________ASCII_____
JiJQPjNfT0MtCg== 26 22 50 3e 33 5f 4f 43 2d 0a &"P>3_OC-.
JikwPClUPENICg== 26 29 30 3c 29 54 3c 43 48 0a &)0<)T<CH.
(I've replaced non-printable characters with a . in the ASCII column above.) On the assumption that all the other parameters are Base64 too, let's see what they decode to:
_Base64___Hex________________ASCII_
ISpQICAK 21 2a 50 20 20 0a !*P .
IypcOysK 23 2a 5c 3b 2b 0a #*\;+.
IypcLykK 23 2a 5c 2f 29 0a #*\/).
ISNAICAK 21 23 40 20 20 0a !#@ .
IyNAPjIK 23 23 40 3e 32 0a ##@>2.
IyNAKjAK 23 23 40 2a 30 0a ##@*0.
ISggICAK 21 28 20 20 20 0a !( .
IikwICAK 22 29 30 20 20 0a ")0 .
IilAPCAK 22 29 40 3c 20 0a ")@< .
So there's definitely another encoding layer involved, but we can already see some patterns:
All decoded values consist of a constant number of printable ASCII characters followed by a trailing line feed character. This cannot be a coincidence.
Most of the characters are on the low end of the printable ASCII range (hex 20 – 7E). In particular, the lowest printable ASCII character, space = hex 20, is particularly common, especially in the watermark strings.
The strings in each URL resemble each other more than they resemble the corresponding strings from other URLs. (But there are resemblances between URLs too: for example, all the decoded watermark values begin with ! = hex 21.)
In fact, the highest numbered character that occurs in any of the strings is _ = hex 5F, while the lowest (excluding the line feeds) is space = hex 20. Their difference is hex 3F = decimal 63. Coincidence? I think not. I'll guess that the second encoding layer is similar to uuencoding: the data is split into 6-bit groups (as in Base64), and each group is mapped to an ASCII character simply by adding hex 20 to it.
In fact, it looks like the second layer might be uuencoding: the first bytes of each string have the right values to be uuencode length indicators. Let's see what we get if we try to decode them:
_Base64___________UUEnc______Hex________________ASCII___re-UUE____
JiJQPjNfT0MtCg== &"P>3_OC- 0b 07 93 fe f8 cd ...... &"P>3_OC-
JikwPClUPENICg== &)0<)T<CH 25 07 09 d1 c8 e8 %..... &)0<)T<CH
_Base64___UUEnc__Hex_______ASC__re-UUE____
ISpQICAK !*P 2b + !*P``
IypcOysK #*\;+ 2b c6 cb +.. #*\;+
IypcLykK #*\/) 2b c3 c9 +.. #*\/)
ISNAICAK !#@ 0e . !#@``
IyNAPjIK ##@>2 0e 07 92 ... ##@>2
IyNAKjAK ##@*0 0e 02 90 ... ##@*0
ISggICAK !( 20 !(```
IikwICAK ")0 25 00 %. ")0``
IilAPCAK ")@< 26 07 &. ")@<`
This is looking good:
Uudecoding and re-encoding the data (using Perl's unpack "u" and pack "u") produces the original string, except that trailing spaces are replaced with ` characters (which falls within acceptable variation between encoders).
The decoded strings are no longer printable ASCII, which suggests that we might be closer to the real data.
The watermark strings are now single characters. In two cases out of three, they're prefixes of the corresponding width and height strings. (In the third case, which looks a bit different, the watermark might perhaps have been added to the other values.)
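If you want to reproduce the two decoding layers yourself, here is a minimal Python sketch (binascii.a2b_uu decodes a single uuencoded line, which is exactly what these short strings are):

import base64, binascii

# First layer: plain Base64; second layer: one uuencoded line per value
# (the leading byte is the uuencode length character).
samples = ["ISpQICAK", "IypcOysK", "IypcLykK", "JikwPClUPENICg=="]
for s in samples:
    layer1 = base64.b64decode(s)       # e.g. b'!*P  \n' -- printable ASCII + LF
    layer2 = binascii.a2b_uu(layer1)   # e.g. b'+' (hex 2b)
    print(s, "->", layer1.rstrip(b"\n"), "->", layer2.hex())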
One more piece of the puzzle — comparing the ID strings and corresponding numeric values you give in the comments, we see that:
The numbers all have six digits. The first two digits of each number are the same.
The uudecoded strings all have six bytes. The first two bytes of each string are the same.
Coincidence? Again, I think not. Let's see what we get if we write the numbers out as ASCII strings, and XOR them with the uudecoded strings:
_Num_____ASCII_hex___________UUDecoded_ID________XOR______________
406747 34 30 36 37 34 37 25 07 09 d1 c8 e8 11 37 3f e6 fc df
405174 34 30 35 31 37 34 25 07 0a d7 cb eb 11 37 3f e6 fc df
405273 34 30 35 32 37 33 25 07 0a d4 cb ec 11 37 3f e6 fc df
What is this 11 37 3f e6 fc df string? I have no idea — it's mostly not printable ASCII — but XORing the uudecoded ID with it yields the corresponding ID number in three cases out of three.
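A quick check of that claim in Python, using the 406747 sample whose Base64 string and uudecoded bytes appear in the tables above:

import base64, binascii

# Per the tables above, the uudecoded ID for 406747 is 25 07 09 d1 c8 e8,
# coming from the Base64 string JikwPClUPENICg==. XORing it with the
# constant 11 37 3f e6 fc df should give back the ASCII digits "406747".
mask = bytes.fromhex("11373fe6fcdf")
decoded = binascii.a2b_uu(base64.b64decode("JikwPClUPENICg=="))
print(bytes(x ^ y for x, y in zip(decoded, mask)))   # b'406747'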
More to think about: you've provided two different ID strings for the value 405174: JiJQPjNfT0MtCg== and JikwPCpVXE9LCg==. These decode to 0b 07 93 fe f8 cd and 25 07 0a d7 cb eb respectively, and their XOR is 2e 00 99 29 33 26. The two URLs these ID strings came from have decoded watermarks of 0e and 20 respectively, which accounts for the first byte (and the second byte is the same in both, anyway). Where the differences in the remaining four bytes come from is still a mystery to me.
That's going to be difficult. Even if you find the encryption method and keys, the original data is likely salted and the salt is probably varied with each record.
That's the point of encryption.
