I have the following data with all categorical variables:
class  education  income  social_standing
1      basic      low     good
0      low        high    v_good
1      high       low     not_good
0      v_high     high    good
Here education has four levels (basic, low, high and v_high); income has two levels (low and high); and social_standing has three levels (good, v_good and not_good).
In so far as my understanding of converting the above data to VW format is concerned, it will be something like this:
1 |person education_basic income_low social_standing_good
0 |person education_low income_high social_standing_v_good
1 |person education_high income_low social_standing_not_good
0 |person education_v_high income_high social_standing_good
Here, 'person' is the namespace, and all the others are feature values, each prefixed by its respective feature name. Am I correct? Somehow this representation of feature values is quite perplexing to me. Is there any other way to represent features? I shall be grateful for help.
Yes, you are correct.
This representation would definitely work with vowpal wabbit, but depending on the data it may not be optimal.
To represent non-ordered, categorical variables (with discrete values), the standard vowpal wabbit trick is to use a logical/boolean feature for each possible (name, value) combination (e.g. person_is_good, color_blue, color_red). The reason this works is that vw implicitly assumes a value of 1 wherever a value is missing. There's no practical difference between color_red, color=red, color_is_red, or even (color,red) and color_red:1, except hash locations in memory. The only characters you cannot use in a variable name are the special separators (: and |) and white-space.
Terminology note: this trick of converting each (feature + value) pair into a separate feature is sometimes called "One Hot Encoding".
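To make the conversion concrete, here is a minimal Python sketch of that one-hot trick (the column names and the person namespace come from the question; the helper name and the row literals are just for illustration):

def to_vw_line(label, row, namespace="person"):
    # Each (column, value) pair becomes one boolean feature "column_value";
    # vw implicitly reads it as column_value:1.
    feats = " ".join(f"{col}_{val}" for col, val in row.items())
    return f"{label} |{namespace} {feats}"

rows = [
    (1, {"education": "basic", "income": "low",  "social_standing": "good"}),
    (0, {"education": "low",   "income": "high", "social_standing": "v_good"}),
]
for label, row in rows:
    print(to_vw_line(label, row))
# 1 |person education_basic income_low social_standing_good
# 0 |person education_low income_high social_standing_v_good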
But in this case the variable-values may not be "strictly categorical". They may be:
Strictly ordered, e.g. (low < basic < high < v_high)
Presumably have a monotonic relation with the label you're trying to predict
so by making them "strictly categorical" (my term for a variable with a discrete range that doesn't have the two properties above), you may be losing some information that could help learning.
In your particular case, you may get better results by converting the values to numeric, e.g. (1, 2, 3, 4) for education, i.e. you could use something like:
1 |person education:2 income:1 social_standing:2
0 |person education:1 income:2 social_standing:3
1 |person education:3 income:1 social_standing:1
0 |person education:4 income:2 social_standing:2
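Here is the same conversion as a small Python sketch with ordinal (numeric) encoding; the value maps are my reading of the intended orderings (low < basic < high < v_high for education, and so on), so adjust them to whatever ordering actually holds:

# Assumed orderings; change them if the real ordering differs.
ORDER = {
    "education":       {"low": 1, "basic": 2, "high": 3, "v_high": 4},
    "income":          {"low": 1, "high": 2},
    "social_standing": {"not_good": 1, "good": 2, "v_good": 3},
}

def to_vw_numeric(label, row, namespace="person"):
    feats = " ".join(f"{col}:{ORDER[col][val]}" for col, val in row.items())
    return f"{label} |{namespace} {feats}"

print(to_vw_numeric(1, {"education": "basic", "income": "low",
                        "social_standing": "good"}))
# 1 |person education:2 income:1 social_standing:2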
The training set in the question should still work fine: even when you convert all your discrete variables into boolean variables as you did, vw should self-discover both the ordering and the monotonicity with the label from the data itself, as long as the two properties above hold and there's enough data to deduce them.
Here's the short cheat-sheet for encoding variables in vowpal wabbit:
Variable type       How to encode                Readable example
-------------       -------------                ----------------
boolean             only encode the true case    is_alive
categorical         append value to name         color=green
ordinal+monotonic   :approx_value                education:2
numeric             :actual_value                height:1.85
Final notes:
In vw all variables are numeric. The encoding tricks are just practical ways to make things appear as categorical or boolean. Boolean variables are simply numeric 0 or 1; Categorical variables can be encoded as boolean: name+value:1.
Any variable whose value is not monotonic with the label, may be less useful when numerically encoded.
Any variable that is not linearly related to the label may benefit from a non-linear transformation before training.
Any variable with a zero value will not make a difference to the model (exception: when the --initial_weight <value> option is used), so it can be dropped from the training set.
When parsing a feature, only : is considered a special separator (between the variable name and its numeric value); anything else is considered part of the name, and the whole name string is hashed to a location in memory. A missing :<value> part implies :1.
Edit: what about name-spaces?
Name spaces are prepended to feature names with a special-char separator so they map identical features to different hash locations. Example:
|E low |I low
is essentially equivalent to the following flat (no name-space) example:
| E^low:1 I^low:1
The main use of name-spaces is to make it easy to redefine all members of a name-space to something else, ignore a full name-space of features, cross the features of one name-space with those of another, etc. (see the -q, --cubic, --redefine, --ignore, --keep options).
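For example (a sketch only; train.vw is a placeholder file name, and E and I are the namespaces from the example above):

vw -d train.vw -q EI        # cross every feature in namespace E with every feature in I
vw -d train.vw --ignore E   # drop the whole E namespace from training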
"The lasso method requires initial standardization of the regressors,
so that the penalization scheme is fair to all regressors. For
categorical regressors, one codes the regressor with dummy variables
and then standardizes the dummy variables" (p. 394).
Tibshirani, R. (1997). The lasso method for variable selection in the Cox model. Statistics in Medicine, 16(4), 385-395. http://statweb.stanford.edu/~tibs/lasso/fulltext.pdf
H2O:
Similar to package ‘glmnet,’ the h2o.glm function includes a ‘standardize’ parameter that is true by default. However, if predictors are stored as factors within the input H2OFrame, H2O does not appear to standardize the automatically encoded factor variables (i.e., the resultant dummy or one-hot vectors). I've confirmed this experimentally, but references to this decision also show up in the source code:
For instance, method denormalizeBeta (https://github.com/h2oai/h2o-3/blob/553321ad5c061f4831c2c603c828a45303e63d2e/h2o-algos/src/main/java/hex/DataInfo.java#L359) includes the comment "denormalize only the numeric coefs (categoricals are not normalized)." It also looks like means (variable _normSub) and standard deviations (inverse of variable _normMul) are only calculated for the numerical variables, and not the categorical variables, in the setTransform method (https://github.com/h2oai/h2o-3/blob/553321ad5c061f4831c2c603c828a45303e63d2e/h2o-algos/src/main/java/hex/DataInfo.java#L599).
GLMnet:
In contrast, package 'glmnet' seems to expect categorical variables to be dummy-coded prior to fitting a model, using a function like model.matrix. The dummy variables are then standardized along with the continuous variables. It seems like the only way to avoid this would be to pre-standardize the continuous predictors only, concatenate them with the dummy variables, and then run glmnet with standardize=FALSE.
Statistical Considerations:
For a dummy variable or one-hot vector, the mean is the proportion p of TRUE values, and the SD is a direct function of that mean: σ = √(p(1 − p)). The SD reaches its maximum of 0.5 when the proportions of TRUE and FALSE values are equal (p = 0.5), and the sample SD (s) approaches 0.5 as n → ∞. Thus, if continuous predictors are standardized to have SD = 1 but dummy variables are left unstandardized, the continuous predictors will have at least twice the SD of the dummy predictors, and more than twice the SD for imbalanced dummy variables.
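A quick numeric check of that claim (throwaway Python; the proportions are arbitrary):

from math import sqrt

# Population SD of a dummy with proportion p of TRUE values: sqrt(p * (1 - p)),
# maximized at 0.5 when p = 0.5.
for p in (0.5, 0.2, 0.05):
    print(f"p = {p:.2f}  sd = {sqrt(p * (1 - p)):.3f}")
# p = 0.50  sd = 0.500
# p = 0.20  sd = 0.400
# p = 0.05  sd = 0.218

Against continuous predictors standardized to SD = 1, a balanced dummy therefore sits at half the scale and a rare dummy at a small fraction of it.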
It seems like this could be a problem for regularization (LASSO, ridge, elastic net), because the scale/variance of predictors is expected to be equal so that the regularization penalty (λ) applies evenly across predictors. If two predictors A and B have the same standardized effect size, but A has a smaller SD than B, A will necessarily have a larger unstandardized coefficient than B. This means that, if left unstandardized, the regularization penalty will erroneously be more severe to A than B. In a regularized regression with a mixture of standardized continuous predictors and unstandardized categorical predictors, it seems like this could lead to systematic over-penalization of categorical predictors.
A commonly expressed concern is that standardizing dummy variables removes their normal interpretation. To avoid this issue, while still placing continuous and categorical predictors on an equal footing, Gelman (2008) suggested standardizing continuous predictors by dividing by 2 SD, rather than 1, resulting in standardized predictors with SD = 0.5. However, it seems like this would still be biased for class-imbalanced dummy variables, for which the SD might be substantially less than 0.5.
Gelman, A. (2008). Scaling regression inputs by dividing by two standard deviations. Statistics in Medicine, 27(15), 2865-2873. http://www.stat.columbia.edu/~gelman/research/published/standardizing7.pdf
Question:
Is H2O's approach of not standardizing one-hot vectors for regularized regression correct? Could this lead to a bias toward over-penalizing dummy variables / one-hot vectors? Or has Tibshirani (1997)'s recommendation since been revised for some reason?
Personally, I would rather keep the binary features untouched and apply a MinMaxScaler (to the range 0-1) to the numeric features instead of standardization. This puts the numeric features on a standard-deviation scale similar to that of the binary features.
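As a rough sketch of that idea with scikit-learn (the DataFrame, column names, and the numeric/binary split are invented for illustration):

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "age":     [23, 45, 31, 62],
    "income":  [30_000, 82_000, 51_000, 40_000],
    "is_male": [1, 0, 0, 1],   # dummy, already in [0, 1]
})

numeric_cols = ["age", "income"]
# Scale only the numeric columns into [0, 1]; leave the 0/1 dummies untouched.
df[numeric_cols] = MinMaxScaler().fit_transform(df[numeric_cols])
print(df)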
Is the following a common data type (i.e. does it have a name)?
Its unique characteristic, unlike a regular Set, is that it contains the "universe" on initialisation with O(C) (constant) memory overhead, and a maximum memory overhead of O(N/2) (which only occurs when you remove every other element):
> s = new Structure(701)
s = Structure(0-700)
> s.remove(100)
s = Structure(0-99, 101-700)
> s.add(100)
s = Structure(0-700)
> s.remove(200)
s = Structure(0-199, 201-700)
> s.remove(202)
s = Structure(0-199, 201, 203-700)
> s.removeAll()
s = Structure()
Does something like this have a standard name?
I've used this many times in the past and seen it used in things like plane-sweep algorithms for polygon clipping.
Sometimes the abstract data type it represents is just a set, and the data structure is an optimization. I use this for representing the set of characters matched by a regex character class like [^a-zA-Z0-9.-], for example, and to perform intersection, union, and other operations on those sets.
This sort of data structure is implemented on top of some other ordered set or map structure, by simply storing the keys where membership in the set changes instead of the keys in the set itself. In all the other cases where I've seen this sort of thing done, the authors refer to that underlying structure instead of giving a name to the concept itself.
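Here is a minimal Python sketch of that boundary-key idea (the class and method names are my own, and a production version would sit on a balanced tree or sorted map rather than a plain sorted list):

import bisect

class IntervalSet:
    # bounds = [s0, e0, s1, e1, ...] means the set is [s0, e0) | [s1, e1) | ...
    def __init__(self, lo, hi):
        # Start "full": the whole universe [lo, hi] costs two boundary points.
        self.bounds = [lo, hi + 1]

    def __contains__(self, x):
        # x is a member iff an odd number of boundaries lie at or below it.
        return bisect.bisect_right(self.bounds, x) % 2 == 1

    def _toggle(self, b):
        # Insert boundary b, or cancel it out if it is already present.
        i = bisect.bisect_left(self.bounds, b)
        if i < len(self.bounds) and self.bounds[i] == b:
            self.bounds.pop(i)
        else:
            self.bounds.insert(i, b)

    def remove(self, x):
        if x in self:
            self._toggle(x)      # toggling x and x + 1 together flips
            self._toggle(x + 1)  # membership of the single point x

    def add(self, x):
        if x not in self:
            self._toggle(x)
            self._toggle(x + 1)

s = IntervalSet(0, 700)   # Structure(0-700), bounds == [0, 701]
s.remove(100)             # Structure(0-99, 101-700), bounds == [0, 100, 101, 701]
s.add(100)                # Structure(0-700) again, bounds == [0, 701]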
I like the idea of having a name for it, though, since as I said I've used it myself many times. Maybe I would call it an "in & out set" in honor of the hamburger chain I liked the best back when I ate hamburgers.
It's a Compressed Bit Set or Compressed Bitmap.
A Bit Set or Bitmap is a set specifically designed for storing Integers. Most languages offer standard implementations of these. They typically work by assigning a 1 to the Nth bit in an internal array of Integers where N is the number you're adding to the set. 0 indicates the value is not present. The memory usage for these types of Bit Sets is dictated by the largest number you store.
A Compressed Bit Set is one that compacts ranges of 0s and 1s.
In this case, the question demonstrates a type of compression called "run-length encoding" (thank you @Ralf Kleberhoff), so it is specifically a Run-Length Encoded Bitmap.
Common implementations of Compressed Bitmaps (from newest-to-oldest) are:
Roaring Bitmaps (the only one to provide "good random access")
EWAH
WAH
Oracle BBC
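For intuition only, here is a throwaway Python sketch of run-length encoding a plain bit array; it is nothing like a real Roaring/EWAH implementation, just the underlying idea:

from itertools import groupby

def rle(bits):
    # Collapse runs of identical bits into (bit, run_length) pairs.
    return [(b, len(list(g))) for b, g in groupby(bits)]

# Universe 0-700 with 100 removed: two long runs of 1s around a single 0.
bits = [1] * 100 + [0] + [1] * 600
print(rle(bits))   # [(1, 100), (0, 1), (1, 600)]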
I have around 200 dummies, and wish to run a constrained OLS regression where I impose that the sum of all coefficients on the dummies is equal to 1.
One option is to type:
constraint define 1 dummy_1+dummy_2+...+dummy_200=1
cnsreg y x_1 x_2 dummy_1-dummy_200, c(1)
...but typing the constraint out would obviously be very painful.
Is there a way to quickly define such a large constraint? The matrix form would be very quick and straightforward, but after much reading online and in Stata guide, it is not clear to me how to do constraints in matrix form, and if they are even possible.
There are at least two sides to this: how to do it, and whether it will work in any statistical sense.
How to do it is easier than you fear, as the difficult bit is just inserting "+" signs between the variable names, and that's string manipulation. Something like
unab myvars : dummy_*
local myvars : subinstr local myvars " " "+", all
mac li
constraint 1 `myvars' = 1
should get you started. The macro list (mac li) is there so you can see what you did, especially if it is not what you want.
Whether it will work for you statistically is outside the scope of this forum, but if that's the only constraint note that it's consistent with all kinds of negative and positive coefficients. Perhaps there are special features of your problem that make it a natural constraint, but my intuition is that such a model will be hard to estimate.
I would take a completely different approach. Such constraints typically occur when trying out a different coding scheme for a set of indicator variables. If that is the case, then I would use Stata's factor variables, combined with margins and its contrast operators.
I'm looking for an algorithm that is a one-way function, like a hash function, but which accepts a small input (several bits, fewer than 512 bits) and maps it to a long output (1 KB or more). Do you know an algorithm or a function like this?
By Shannon's theorem, you don't gain any security by producing a ciphertext larger than your plaintext, unless the key (or the procedure used to create the ciphertext) is different for every input. Even in that case, you can assign only one key (or mechanism) to each input x, otherwise you violate the definition of a function. So if you apply an encryption mechanism f: X (set of inputs) -> Y (set of outputs), the number of distinct outputs is at most |X|.
All this is to say that if your input is less than 512 bits, you gain nothing by producing a 1 KB output. That said, I recommend using one of the functions listed on the one-way function wiki page.
Keccak has variable-length output (although this was not evaluated as part of SHA-3); its "security claim is disentangled from the output length. There is a minimum output length..." The Skein hash function also has variable-length output, up to 16 exabytes.
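If variable-length output is genuinely what's needed, Python's hashlib exposes SHAKE-256, the extendable-output mode built on Keccak, so you can request as many bytes as you like (a sketch; the input is a placeholder):

import hashlib

small_input = b"fewer than 512 bits of input"
out = hashlib.shake_256(small_input).digest(1024)   # request 1 KB of output
print(len(out))   # 1024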
Whatever your reasons are, you can calculate hashes of the same small data using different algorithms, then concatenate those hashes. If the output is not large enough, calculate hashes of hashes and append them.
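A sketch of that concatenation idea with Python's hashlib (the algorithm choice and the 1 KB target are arbitrary):

import hashlib

data = b"small input"
digest = b"".join(hashlib.new(name, data).digest()
                  for name in ("sha256", "sha512", "blake2b"))
# If that is still too short, keep hashing the running digest and appending.
while len(digest) < 1024:
    digest += hashlib.sha512(digest).digest()
print(len(digest))   # >= 1024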
As pointed out in other answers, this doesn't make much sense from a security perspective.
I'm looking for something like a hash function, but one whose outputs are closer together the closer two different inputs are.
Something like:
f(1010101) = 0 #original hash
f(1010111) = 1 #very close to the original hash as they differ by one bit
f(0101010) = 9999 #not at all close to the original hash, as all bits are different
(example outputs for demonstration purposes only)
All of the input data will be of the same length.
I want to compare one file against lots of other files and be able to determine which other file has the fewest differences from it.
You could try the Levenshtein distance:
http://en.wikipedia.org/wiki/Levenshtein_distance
Since it works on strings only, you can convert your binary data to strings first, for example:
0 -> "00000000"
1 -> "00000001"
You might be interested in either simhashing or shingling.
If you are only trying to detect similarity between documents, there are other techniques that may suit you better (like TF-IDF). The second link is part of a good book whose other chapters delve into general information-retrieval topics, including these other techniques.
You should not use a hash for this.
Instead, compute signatures containing several characteristic values, such as:
file name
file size
is binary / is ASCII only
date (if needed)
and some other, more complex ones, like:
variance of the values of bytes
average value of bytes
average length of runs of identical bits (in compressed files there are no long runs of identical bits)
...
Then you can compare signatures.
But the most important thing is to know what kind of data is in these files. If they are images, the size and main color are more important. If they are sound, you could analyse only certain frequencies...
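A rough Python sketch of such a signature (the field choice mirrors the lists above; everything here is standard library, and the exact fields are just an illustration):

import os
from statistics import mean, pvariance

def signature(path):
    # A few cheap characteristic values along the lines listed above.
    with open(path, "rb") as f:
        data = f.read()
    return {
        "name": os.path.basename(path),
        "size": len(data),
        "is_ascii": all(b < 128 for b in data),
        "byte_mean": mean(data) if data else 0,
        "byte_variance": pvariance(data) if data else 0,
    }

# Signatures of two files can then be compared field by field
# instead of byte by byte.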
You might want to look at the source code of Unix utilities like cmp, or at the filecmp module in Python, and use that to try to determine a reasonable algorithm.
In my uninformed opinion, calculating a hash is not likely to work well. First, it can be expensive to calculate a hash. Second, what you're trying to do sounds more like a job for encoding than a hash; once you start thinking of it that way, it's not clear that it's even worth transforming the file that way.
If you have some constraints, specifying them might be useful. For example, if all the files are the exact same length, that may simplify things. Or if you are only interested in differences between bits in the same position, and not in things that are similar only when you compare bits in different positions (e.g., two files are identical except that one has everything shifted by three bits: should those be considered similar or not?).
You could calculate the population count of the XOR of the two files, which is exactly the number of bits that differ between them. This does precisely what you asked for, with no approximation.
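For same-length files, that is only a few lines of Python (a sketch that reads both files fully into memory for simplicity):

def bit_difference(path_a, path_b):
    # Population count of the XOR: the number of bits that differ.
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# The candidate file with the smallest bit_difference from the reference
# file is the closest match.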
You can represent your data as a binary vector of features and then use dimensionality reduction either with SVD or with random indexing.
What you're looking for is a file fingerprint of sorts. For plain text, something like Nilsimsa (http://ixazon.dynip.com/~cmeclax/nilsimsa.html) works reasonably well.
There are a variety of names for this type of technique: fuzzy hashing, locality-sensitive hashing, distance-based hashing, dimensionality reduction, and a few others. Tools can generate fixed-length or variable-length output, but the outputs are generally comparable (e.g., by Levenshtein distance), and similar inputs yield similar outputs.
The nilsimsa link above gives two similar spam messages, and here are the example outputs:
773e2df0a02a319ec34a0b71d54029111da90838cbc20ecd3d2d4e18c25a3025 spam1
47182cf0802a11dec24a3b75d5042d310ca90838c9d20ecc3d610e98560a3645 spam2
* * ** *** * ** ** ** ** * ******* **** ** * * *
Spamsum and sdhash are more useful for arbitrary binary data. There are also algorithms specifically for images that will work regardless of whether the image is a JPEG or a PNG. Identical images in different formats wouldn't be matched by, e.g., spamsum.