How to Randomly Assign to Groups of Different Sizes

Say I have a dataset and I want to assign observations to different groups, with the size of each group determined by the data. For example, suppose that this is the data:
sysuse census, clear
keep state region pop
order state pop region
decode region, gen(reg)
replace reg="NCntrl" if reg=="N Cntrl"
drop region
*Create global with regions
global region NE NCntrl South West
*Count the number in each region
bys reg (pop): gen reg_N=_N
tab reg
There are four reg groups, all of different sizes. Now, I want to randomly assign observations to the four groups. This is accomplished below by generating a random number and then assigning observations to one of the groups based on the random number.
*Generate random number
set seed 1
gen random = runiform()
sort random
*Assign observations to number based on random sorting
egen reg_rand = seq(), from(1) to(4)
*Map number to region
gen reg_new = ""
global count 1
foreach i in $region {
    replace reg_new = "`i'" if reg_rand==$count
    global count = $count + 1
}
bys reg_new: gen reg_new_N = _N
tab reg_new
This is not what I want, though. Instead of using the seq() egen function, which creates groups of equal size (assuming N is divisible by the number of groups), I would like to randomly assign based on the sizes of the original groups. In this case, that is equivalent to reg_N. For example, there would be 12 observations with a reg_new value of NCntrl.
I might have one solution similar to https://stats.idre.ucla.edu/stata/faq/how-can-i-randomly-assign-observations-to-groups-in-stata/. The idea would be to save the results of tab reg into a macro or matrix, and then use a loop and replace to cycle through the observations, which are sorted by a random number. Assume that there are many, many more groups than the four in this toy example. Is there a more reasonable way to accomplish this?

It looks like you want to shuffle the values stored in a group variable across observations. You can do this by reducing the data to the group variable, sorting on a variable that contains random values, and then using an unmatched merge to associate the shuffled group identifiers with the original observations.
Assuming that the data example is stored in a file called "data_example.dta" and is currently loaded into memory, this would look like:
set seed 234
keep reg
rename reg reg_new
gen double u = runiform()
sort u reg_new
merge 1:1 _n using "data_example.dta", nogen
tab reg reg_new
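For comparison outside Stata, the same label-shuffling idea is a short pandas sketch (assuming a data frame df holding the example data with its reg column; names are illustrative). Permuting the existing labels preserves each group's size by construction:

import pandas as pd

# df is assumed to hold the example data with its reg column.
# Permuting the existing labels keeps every group's size unchanged.
df['reg_new'] = df['reg'].sample(frac=1, random_state=234).to_numpy()
print(pd.crosstab(df['reg'], df['reg_new']))  # analogue of: tab reg reg_new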

Related

How do I add noise/variability to a dataset in Python, given the CV?

Given a dataset of blood results, say cholesterol level, and knowing that the instrument that produced those results is subject to a known degree of variability, how would I add that variability back into the dataset? i.e. I want to assume the result in the original dataset is the true/mean value, and then produce new results that are subject to the known variability of the instrument.
In Excel you would use =NORM.INV(RAND(), mean, std_dev), where RAND() provides a random value between 0 and 1, "mean" is the original value, and since I have the CV I can calculate the SD. NORM.INV then provides the inverse of the cumulative normal distribution function.
I've done the following to create a new column with my new values, but would like to know if it is valid (i.e., will each row get a different random number between 0 and 1 as the probability, and is this formula equivalent to NORM.INV?):
df8000['HDL_1'] = norm.ppf(random(), loc = df8000['HDL_0'], scale = TAE_df.loc[0,'HDL'])
Thanks in advance!
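For what it's worth, a minimal per-row version of that Excel formula in Python might look like the sketch below (df8000, HDL_0, and TAE_df are taken from the question; nothing else is assumed). Note that a bare random() returns a single float, so as written every row would share the same quantile; drawing one uniform per row avoids that:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)  # seed only for reproducibility
u = rng.uniform(size=len(df8000))  # one independent uniform per row
df8000['HDL_1'] = norm.ppf(u, loc=df8000['HDL_0'], scale=TAE_df.loc[0, 'HDL'])
# equivalent direct draw:
# df8000['HDL_1'] = rng.normal(loc=df8000['HDL_0'], scale=TAE_df.loc[0, 'HDL'])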

Shuffle One Variable Within Group

This question is an extension of the excellent answer provided by Robert Picard here: How to Randomly Assign to Groups of Different Sizes
We have this dataset, which is the same as in the previous question, but adds the year variable:
sysuse census, clear
keep state region pop
order state pop region
decode region, gen(reg)
replace reg="NCntrl" if reg=="N Cntrl"
drop region
gen year=20
replace year=30 if _n>15
replace year=40 if _n>35
If I just wanted to randomly reassign reg values across all observations (without regard to group), I could implement the answer to the previous post:
tempfile orig
save `orig'
keep reg
rename reg reg_new
set seed 234
gen double u = runiform()
sort u reg_new
merge 1:1 _n using `orig', nogen
How would the code be modified so that reg is shuffled, but only within year? For example, there are 15 observations where year==20. These observations should be shuffled separately than the other years.
Shuffling one variable doesn't require any file choreography. This can probably be shortened:
sysuse auto, clear
set seed 2803
gen double shuffle = runiform()
* example 1
sort shuffle
gen long which = _n
sort mpg
gen mpg_new = mpg[which]
list which mpg*
* example 2
bysort foreign (shuffle) : gen long which2 = _n
bysort foreign (mpg) : gen mpg2 = mpg[which2]
list which2 mpg mpg2, sepby(foreign)
All that said, I think Stata's sample command does this, so long as you specify a sample size equal to the number of observations in the dataset. It's overkill because you get all the variables.
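For comparison, the within-group shuffle is also short in pandas (a sketch assuming a data frame df with the reg and year columns from the question):

import pandas as pd

# Shuffle reg independently within each year: sample each group's values
# without replacement and write them back positionally.
df['reg_new'] = (
    df.groupby('year')['reg']
      .transform(lambda s: s.sample(frac=1, random_state=234).to_numpy())
)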

Split test groups based on GUID

Users in the system are identified by a GUID, and for a new feature I want to divide users into two groups: test and control.
Is there an easy way to split users into one of the two groups with a 50/50 chance, based on their GUID?
E.g., if the nth character's ASCII code is odd -> test group, otherwise control group.
What about 70/30, or some other ratio?
The reason I want to classify users based on their GUID is that later I can easily tell which users are in which group and compare performance between the two groups, without having to keep track of the assignment: I simply calculate it again.
As Derek Li notes, the GUID's bits might be based on a timestamp, so you shouldn't use them directly.
The safest solution is to hash the GUID using a hash function like MurmurHash. This will produce a random number (but the same random number every time for any given GUID) which you can then use to do the split.
For example, you could do a 30/70 split like this:
function isInTestGroup(user) {
    var hash = murmurHash(user.guid);
    return (hash % 100) < 30;
}
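The same idea in Python, as a sketch: murmurhash is not in the standard library, so this uses hashlib's MD5 purely as a stable, well-mixed hash (not for security); percent is the test-group share.

import hashlib

def is_in_test_group(guid: str, percent: int = 30) -> bool:
    # The same GUID always hashes to the same bucket,
    # so no assignment table needs to be stored.
    digest = hashlib.md5(guid.encode('utf-8')).hexdigest()
    return int(digest, 16) % 100 < percent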
If some character in the GUID has a 1-in-16 chance of being each of the hex digits "0123456789ABCDEF", then you could determine placement by that character.
Say the last character of the GUID, call it c, has a 1/16 chance of being any hex digit:
for a 50/50 distribution -> c <= '7' for group 1, c > '7' for group 2
for roughly 70/30 (11/16 vs. 5/16) -> c <= 'A' for group 1, c > 'A' for group 2
etc.
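As a sketch of that character-based scheme (it assumes the chosen hex digit really is uniform, which, per the caveat above, timestamp-derived GUID bits may violate):

def group_from_guid(guid: str, threshold: str = '7') -> int:
    # Last hex digit <= threshold -> group 1, else group 2.
    # '7' gives 8/16 vs 8/16 (50/50); 'A' gives 11/16 vs 5/16 (~70/30).
    return 1 if guid[-1].upper() <= threshold else 2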

Merge sort for GPU

I'm trying to implement a merge sort using an OpenCL wrapper.
The problem is that each pass needs a different indexing algorithm for the threads' memory accesses.
Some info about this:
First pass (numbers indicate elements, arrows indicate compare-and-swap):
0<-->1 2<-->3 4<-->5 6<-->7
group0 group1 group2 group3 ===> 1 thread per group, N/2 groups total
Second pass (all parallel):
0<------>2 4<------>6
1<------>3 5<------>7
group0 group1 ===> 2 threads per group, N/4 groups
Next pass:
0<--------------->4 8<--------------->12
1<--------------->5 9<--------------->13
2<--------------->6 10<-------------->14
3<--------------->7 11<-------------->15
group0 group1 ===> 4 threads per group, N/8 groups
So an element of one sub-group must never be compared against an element of another group.
I cannot simply do
A[i]<-->A[i+1] or A[i]<-->A[i+4]
because those patterns also generate pairs such as
A[1]<-->A[2] and A[4]<-->A[8]
which cross group boundaries and are wrong.
I need a more complex indexing algorithm, one that can use the same number of threads for all passes.
Pass n: global ids i = 0,1, 2,3, 4,5, ... should map to compare ids 0,1, 4,5, 8,9, ...
which looks like compareId1 = (i/2)*4 + i%2
Pass n+1: global ids i = 0,1,2,3, 4,5,6,7, ... should map to compare ids 0,1,2,3, 8,9,10,11, ...
which looks like compareId1 = (i/4)*8 + i%4
Pass n+2: global ids i = 0,1,...,7, 8,9,... should map to compare ids 0,1,...,7, 16,17,...
which looks like compareId1 = (i/8)*16 + i%8
In general:
compareId1 = (i / 2^passN) * 2^(passN+1) + i % 2^passN
compareId2 = compareId1 + 2^passN
so, in the kernel string, can it be
int i = get_global_id(0);
// powers of two as integer bit shifts; OpenCL's pow() operates on
// floats and can lose precision when truncated back to int
int stride = 1 << passN;
int compareId1 = (i / stride) * (stride << 1) + (i % stride);
int compareId2 = compareId1 + stride;
if (compareId1 != compareId2)
{
    if (A[compareId1] > A[compareId2])
    {
        xorSwapIdiom(A, compareId1, compareId2, B);
    }
    else
    {
        streamThrough(A, compareId1, compareId2, B);
    }
}
else
{
    // this can happen only for the first pass
    // needs a different kernel structure for that
}
but I'm not sure.
Question: can you give any directions on a memory access pattern that would not leak (go out of bounds) while satisfying the "no compares between different groups" condition?
I have already had to hard-reset my computer many times trying different algorithms (memory leaks, black screen, crash, restart); this one is the last, and I fear it could crash the entire OS.
I already tried a much simpler version with a decreasing number of threads per pass, but it had poor performance.
Edit: I tried the code above; it sorts reverse-ordered arrays, but not randomized ones.
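As a sanity check of the indexing formula, independent of OpenCL, a small host-side sketch (in Python, names illustrative) can enumerate the compare pairs per pass and confirm that no pair crosses a group boundary:

def compare_pairs(n, pass_n):
    # Pairs touched by the n/2 threads in one pass of the formula above.
    stride = 1 << pass_n
    pairs = []
    for i in range(n // 2):
        a = (i // stride) * (stride * 2) + (i % stride)
        b = a + stride
        pairs.append((a, b))
    return pairs

for p in range(3):
    print(p, compare_pairs(16, p))
# pass 0: (0,1)(2,3)(4,5)...  pass 1: (0,2)(1,3)(4,6)(5,7)(8,10)...
# pass 2: (0,4)(1,5)(2,6)(3,7)(8,12)...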

Include only complete groups in panel regression using Stata

I have a panel dataset, but not all individuals are present for all periods. When I run my xtreg I see that there are between 1 and 4 observations per group, with a mean of 1.9. I'd like to include only those with 4 observations. Is there any way I can do this easily?
I understand that you want to include in your regression only those groups for which there are exactly 4 observations. If this is the case, then one solution is to count the number of observations per group and condition the regression using if:
clear all
set more off
webuse nlswork
xtset idcode
list idcode year in 1/50, sepby(idcode)
bysort idcode: gen counter = _N
xtreg ln_w grade age c.age#c.age ttl_exp c.ttl_exp#c.ttl_exp tenure ///
c.tenure#c.tenure 2.race not_smsa south if counter == 12, be
In this example the regression is restricted to groups with 12 observations. The xtreg command gives (among other things):
Number of obs = 1881
Number of groups = 158
which you can compare with the result of running the regression without the if:
Number of obs = 28091
Number of groups = 4697
As commented by Nick Cox, if you don't mind losing observations you can drop or keep (un)desired groups:
bysort idcode: drop if _N != 4
or
bysort idcode: keep if _N == 4
followed by an unconditional xtreg (i.e. with no if).
Notice that both approaches count missings, so you may need to account for that.
On the other hand, you might want to think about why you want to discard that data in your analysis.
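For reference, the same complete-groups filter is a one-liner in pandas (a sketch assuming a data frame df with an idcode column):

import pandas as pd

# Keep only panels that contribute exactly 4 rows;
# mirrors: bysort idcode: keep if _N == 4
complete = df.groupby('idcode').filter(lambda g: len(g) == 4)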
