I have a decent-sized dataset (about 18,000 rows). I have two variables that I want to tabulate, one taking on many string values, and the second taking on just 4 values. I want to tabulate the string values by the 4 categories. I need these sorted. I have tried several commands, including tabsort, which works, but only if I restrict the number of rows it uses to the first 603 (at least with the way it is currently sorted). If the number of rows is greater than this, then I get the r(134) error that there are too many values. Is there anything to be done? My goal is to create a table with the most common words and export it to LaTeX. Would it be a lot easier to try and do this in something like R?
Here's one way, via contract and texsave from SSC:
/* Fake Data */
set more off
clear
set matsize 5000
set seed 12345
set obs 1000
gen x = string(rnormal())
expand mod(_n,10)
gen y = mod(_n,4)
/* Collapse Data to Get Frequencies for Each x-y Cell */
preserve
contract x y, freq(N)
reshape wide N, i(x) j(y)
forvalues v=0/3 {
    lab var N`v' "`v'"                 // need this for labeling
    replace N`v' = 0 if missing(N`v')
}
egen T = rowtotal(N*)
gsort -T x // sort by occurrence
keep if T > 0 // set occurrence threshold
capture ssc install texsave
texsave x N0 N1 N2 N3 using "tab_x_y.tex", varlabel replace title("tab x y")
restore
/* Check Calculations */
type "tab_x_y.tex"
tab x y, rowsort
I have a problem in which I have 4 objects (1s) on a 100x100 grid of zeros that is split up into 16 even squares of 25x25.
I need to create a (16^4 * 4) table whose entries list all the possible positions of each of these 4 objects across the 16 submatrices. The objects can be anywhere within the submatrices so long as they aren't overlapping one another. This is clearly a permutation problem, but there is added complexity because of the indexing and the fact that the positions need to be random but not overlapping within a 16th square. Would love any pointers!
What I tried to do was create a function called "top_left_corner(position)" that returns the subscript of the top left corner of the sub-matrix you are in. E.g. top_left_corner(1) = (1,1), top_left_corner(2) = (26,1), etc. Then I have:
pos = randsample(24,2);
I = pos(1)+top_left_corner(position,1);
J = pos(2)+top_left_corner(position,2);
The problem is how to generate and store permutations of this in a table as linear indices.
First, ndgrid generates the Cartesian product of block indices in the form of a [4, 16^4] matrix perm. Then, in the while loop, random within-block offsets are generated and added to 625*perm to form perm_rnd. If any column of perm_rnd contains duplicated numbers, the random generation is repeated for those columns until no column has duplicated elements; normally no more than 2-3 iterations are needed. Since the [100, 100] array is divided into 16 blocks, kron generates an index pattern matching the 16 blocks, and the sort function extracts the indices of its sorted elements. The generated random numbers are then used as indices into that sorted pattern, which maps them to linear indices of the 100x100 grid.
C = cell(1,4);
[C{:}] = ndgrid(0:15,0:15,0:15,0:15);           % block index (0-15) for each of the 4 objects
perm = [C{1}(:), C{2}(:), C{3}(:), C{4}(:)].';  % [4 x 16^4]: every combination of block assignments
perm_rnd = zeros(size(perm));
c = 1:size(perm,2);
while true
    % block index * 625 plus a random within-block offset in 1..625
    perm_rnd(:,c) = perm(:,c) * 625 + randi(625,4,numel(c));
    % columns that still contain duplicated (overlapping) positions
    [~, c0] = find(diff(sort(perm_rnd(:,c),1),1,1) == 0);
    if isempty(c0)
        break;
    end
    %c = c(unique(c0));
    c = c([true ; diff(c0)~=0]);  % keep only the offending columns for the next pass
end
pattern = kron(reshape(1:16,4,4),ones(25));  % 100x100 map of which of the 16 blocks each cell belongs to
[~,idx] = sort(pattern(:));                  % linear indices of the grid, grouped block by block
result = idx(perm_rnd).';                    % [16^4 x 4] table of linear indices into the 100x100 grid
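A quick sanity check (my own addition, not part of the original answer) to confirm that no row of result contains overlapping positions and that every position falls in the block recorded in perm:
% each row should hold 4 distinct linear indices (no overlaps)
assert(all(all(diff(sort(result,2),1,2) ~= 0)));
% each position should lie in the block that perm assigned to that object
assert(isequal(pattern(result), perm.' + 1));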
I have a vector which contains several different values, all of them between 0 and 1.
I also have two values, called min and max, that represent the minimum and maximum bounds; these two values may change over time.
I would like to dynamically reduce the vector so that it only keeps the values that fall within the range described by min and max.
For example,
at time t=1 I have that vector:
a=[0.5,0.2,0.6,0.3,0.2187,0.8798,0.5432,0.3563,0.3981,0.7845];
min=0.3;
max=0.7;
given vector a and the two values (min and max), the new vector a_new should be:
a_new=[0.5,0.6,0.3,0.5432,0.3563,0.3981];
This is because min and max define the bounds that determine which elements of the original vector make it into the new one.
Code solution
If you just want to generate a new vector given the old one, use the following syntax:
a_new = a(a>=min & a<=max);
If you also want to calculate the positions of the deleted and non-deleted values, use MATLAB's find function:
nonDeletedIndices = find(a>=min & a<=max);
deletedIndices = find(a<min | a>max);
Result
a_new =
0.5000 0.6000 0.3000 0.5432 0.3563 0.3981
nonDeletedIndices=
1 3 4 7 8 9
deletedIndices=
2 5 6 10
Suggestion
I suggest using variable names other than min and max - such as minVal and maxVal. There are already built-in MATLAB functions with these names, and you don't want to shadow them.
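For example, a minimal rewrite of the line above with non-shadowing names (minVal and maxVal are just illustrative):
minVal = 0.3;
maxVal = 0.7;
a_new = a(a >= minVal & a <= maxVal);   % same logic, without shadowing min/max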
So I have the following constraints: for every n in B, the sum over m in U of x_mn must be at most 1, and for every m in U, the sum over n in B of x_mn must be at most 1.
How can I write this in MATLAB in an efficient way? The inputs are x_mn, M, and N. The set B={1,...,N} and the set U={1,...,M}.
I did it like this (because I write x as the following vector)
x = [x_11, x_12, ..., x_1N, x_21, x_22, ..., x_M1, x_M2, ..., x_MN]:
%# first constraint
function R1 = constraint_1(M, N)
ee = eye(N);
R1 = zeros(N, N*M);
for m = 1:M
R1(:, (m-1)*N+1:m*N) = ee;
end
end
%# second constraint
function R2 = constraint_2(M, N)
ee = ones(1, N);
R2 = zeros(M, N*M);
for m = 1:M
R2(m, (m-1)*N+1:m*N) = ee;
end
end
By the above code I will get a 0-1 matrix A = [R1; R2], and I will have A*x <= 1.
For example, with M = N = 2 I will have something like this:
A = [1 0 1 0
     0 1 0 1
     1 1 0 0
     0 0 1 1]
And I will create a function test(x) which returns true or false according to whether x satisfies the constraints.
I would like to get some help from you to optimize my code.
You should place your x_mn values in a matrix. After that, you can sum along each dimension to get what you want. Looking at your constraints, you will place these values in an M x N matrix, where M is the number of rows and N is the number of columns.
You can certainly place your values in a vector and construct your summations in the way you intended earlier, but you would have to write for loops to pick out the proper elements at each iteration, which is very inefficient. Instead, use a matrix, and use sum to sum over the dimensions you want.
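For instance, if your values already sit in the vector x ordered as in your question, a reshape along these lines should arrange them into the M x N matrix (a sketch that assumes x, M and N already exist in the workspace):
X = reshape(x, N, M).';   % X(m,n) = x_mn, because x lists x_11..x_1N, then x_21..x_2N, and so on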
For example, let's say your values of x_mn ranged from 1 to 20. B is in the set from 1 to 5 and U is in the set from 1 to 4. As such:
X = vec2mat(1:20, 5)
X =
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
16 17 18 19 20
vec2mat takes a vector and reshapes it into a matrix. You specify the number of columns you want as the second argument, and it will create the right number of rows to ensure that a proper matrix is built. In this case, I want 5 columns, so this creates a 4 x 5 matrix.
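Note that vec2mat comes with the Communications Toolbox, as far as I know; if you don't have it, a plain reshape should build the same matrix:
X = reshape(1:20, 5, []).';   % reshape column-wise into 5 x 4, then transpose to get the 4 x 5 matrix above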
The first constraint can be achieved by doing:
first = sum(X,1)
first =
34 38 42 46 50
sum works for vectors as well as matrices. If you supply a matrix to sum, you can specify a second parameter that tells it in what direction you wish to sum. In this case, specifying 1 sums over all of the rows for each column; it works along the first dimension, which is the rows.
What this does is, for each value n in B, sum over all values of U, which is exactly what the first constraint requires. You are simply summing every single column individually.
The second constraint can be achieved by doing:
second = sum(X,2)
second =
15
40
65
90
Here we specify 2 as the second parameter so that we sum over all of the columns for each row; the second dimension goes over the columns. What this does is, for each value m in U, sum over all values of B. Basically, you are simply summing every single row individually.
By the way, your code is not achieving what you think it's achieving. All you're doing is replicating the identity matrix (and rows of ones) over groups of columns; you are not actually performing any summations as per the constraints. What you are doing is building the coefficient matrices that enforce the conditions you specified at the beginning of your post. These are the matrices required to express the constraints.
Now, if you want to check to see if the first condition or second condition is satisfied, you can do:
%// First condition satisfied?
firstSatisfied = all(first <= 1);
%// Second condition satisfied?
secondSatisfied = all(second <= 1);
This will check every element of first or second and verify that the sums produced by the code above are all <= 1. If they all satisfy this constraint, the result is true; otherwise, it is false.
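As a small illustration (my own example, not from the original post), a 0-1 matrix with at most one 1 in every row and every column passes both checks:
X = eye(4,5);                          % at most a single 1 in every row and every column
firstSatisfied  = all(sum(X,1) <= 1)   % returns true (logical 1)
secondSatisfied = all(sum(X,2) <= 1)   % returns true (logical 1)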
Please let me know if you need anything further.
Let's say I have a matrix x = [1 2 1 2 1 2 1 2 3 4 5]. To look at its histogram, I can do h = hist(x).
Now, h will contain only the number of occurrences; it does not store the original value each count corresponds to.
What I want is something like a function which takes a value from x and returns the number of occurrences of that value. Having said that, one thing histeq does that we should admire is that it automatically scales to the nearest values accordingly!
How should I solve this issue? How exactly do people do it?
My reason for interest is images:
Let's say I have an image. I want to find the number of occurrences of each chrominance value in the image.
I'm not really sure what you are looking for, but if you want to use hist to count the number of occurrences, use:
[h,c]=hist(x,sort(unique(x)))
Otherwise hist uses bin ranges defined by their centers. The second output argument returns the corresponding bin centers.
hist has a second return value that will be the bin centers xc corresponding to the counts n returned as the first return value: [n, xc] = hist(x). You should have a careful look at the reference, which describes a large number of optional arguments that control the behavior of hist. However, hist is far more powerful than you need for this specific problem.
To simply count the number of occurrences of a specific value, you could use something like sum(x(:) == 42). The colon operator linearizes your image matrix, the equality comparison yields a logical array with 1 for each element of x that equals 42, and sum then gives the total number of these occurrences.
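For example, with the vector from the question, counting how often the values 1 and 2 occur looks like this:
x = [1 2 1 2 1 2 1 2 3 4 5];
count1 = sum(x(:) == 1)   % returns 4
count2 = sum(x(:) == 2)   % returns 4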
An alternative to hist / histc is to use bsxfun:
n = unique(x(:)).';            %// values contained in x; x can have any number of dims
y = sum(bsxfun(@eq, x(:), n)); %// count for each value
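Applied to the x from the question, this gives one count per unique value (a quick illustration):
x = [1 2 1 2 1 2 1 2 3 4 5];
n = unique(x(:)).';              % 1  2  3  4  5
y = sum(bsxfun(@eq, x(:), n))    % 4  4  1  1  1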
I need to find out a method to determine how many items should appear per column in a multiple column list to achieve the most visual balance. Here are my criteria:
The list should only be split into multiple columns if the item count is greater than 10.
If multiple columns are required, they should contain no less than 5 (except for the last column in case of a remainder) and no more than 10 items.
If all columns cannot contain an equal number of items:
All but the last column should be equal in number.
The number of items in each column should be optimized to achieve the smallest difference between the last column and the other column(s).
Well, your requirements and your examples appear a bit contradictory. For instance, your second example could be divided into two columns with 11 items in each, and satisfy your criteria. Let's assume that for rule #2 you meant that there should be <= 10 items / column.
In addition, I think you need to add another rule to make the requirements sensible:
The number of columns must not be greater than what is required to accommodate overflow.
Otherwise, you will often end up with degenerate solutions where you have far more columns than you need. For example, in the case of 26 items you probably don't want 13 columns of 2 items each.
If that's the case, here's a simple calculation that should work well and is easy to understand:
int numberOfColumns = CEILING(numberOfItems / 10);
int numberOfItemsPerColumn = CEILING(numberOfItems / numberOfColumns);
Now you'll create numberOfColumns - 1 full columns (holding numberOfItemsPerColumn items each) and the overflow will go in the last column. By this definition, the overflow in the last column should be minimized.
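For example, here is a minimal MATLAB sketch of that calculation, using 26 items as an illustrative count:
numberOfItems = 26;
numberOfColumns = ceil(numberOfItems / 10);                      % 3 columns
numberOfItemsPerColumn = ceil(numberOfItems / numberOfColumns);  % 9 items per full column
lastColumn = numberOfItems - (numberOfColumns - 1) * numberOfItemsPerColumn;  % 8 items overflow into the last column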
If you want to automatically determine the appropriate number of columns, and have no restrictions on its limits, I would suggest the following (a rough code sketch follows below):
Calculate the square root of the total number of items. That would give a square layout.
Divide that number by 1.618, and assign the result to the total number of rows.
Multiply that same number by 1.618, and assign the result to the total number of columns.
All columns but the rightmost one will have the same number of items.
By the way, the constant 1.618 is the Golden Ratio. That will achieve a more pleasant layout than a square one.
Divide and multiply the other way round for vertical displays.
Hope this algorithm helps anyone with a similar problem.
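A rough MATLAB sketch of this recipe (the rounding, and deriving the column count from the row count so every item fits, are my own choices; the steps above don't pin those details down):
numberOfItems = 60;                                    % illustrative count
phi = 1.618;                                           % the Golden Ratio
base = sqrt(numberOfItems);                            % side length of a square layout
numberOfRows = max(1, round(base / phi));              % fewer rows than the square layout
numberOfColumns = ceil(numberOfItems / numberOfRows);  % roughly base * phi columns, enough to hold every item
% every column except the rightmost one holds numberOfRows items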
Here's what you're trying to solve:
minimize y - z where n = xy + z and 5 <= y <= 10 and 0 <= z <= y
where you have n items split into x full columns of y items and one remainder column of z items.
There is almost certainly a smarter way of doing this, but given these constraints a brute force implementation exploring all 6 + 7 + 8 + 9 + 10 + 11 = 51 possible combinations for y and z would take no time at all (only assignments where (n - z) mod y = 0 are solutions).
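A brute-force MATLAB sketch of that search (my own illustration; n = 26 is just an example, and it assumes the list actually needs splitting, i.e. n > 10):
n = 26;
best = [];                                   % will hold [y z] of the best split found
for y = 5:10
    for z = 0:y
        if mod(n - z, y) == 0 && (isempty(best) || y - z < best(1) - best(2))
            best = [y z];                    % (n - z) / y full columns of y items, plus a column of z
        end
    end
end
% for n = 26 this ends with best = [9 8]: two columns of 9 and a last column of 8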
I think a brute force solution is easy, given the constraint on the number of items per column: let v be the number of items per column (except the last one), then v belongs to [5,10] and can thus take a whopping 6 different values.
Evaluating 6 values is easy enough. A Python one-liner (or nearly one) to prove it:
# compute the difference between the number of items for the normal columns
# and for the last column, lesser is better
def helper(n,v):
    modulo = n % v
    if modulo == 0: return 0
    else: return v - modulo
# values can only be in [5,10]
# we compute the difference with the last column for each
# build a list of tuples (difference, - number of items)
# (because the greater the value the better, it means less columns)
# extract the min automatically (in case of equality, less is privileged)
# and then pick the number of items from the tuple and re-inverse it
def compute(n): return - min([(helper(n,v), -v) for v in [5,6,7,8,9,10]])[1]
For 77 this yields 7, meaning 7 items per column.
For 22 this yields 8, meaning 8 items per column.