I have 2 data sets for train and test with Weka. They both have the same number of attributes and the same data type for each variable (numeric or nominal), but they are not compatible with each other because the order of the nominal values is different.
Example - Training set:

Occupation
1 Doctor    40%
2 Engineer  40%
3 Teacher   20%

Test set:

Occupation
1 Engineer  40%
2 doctor    40%
3 Teacher   20%
So both sets are incompatible. My question is: how do I change the order of these distinct values to make the sets compatible?
It looks a bit like a data pre-processing issue. I am quite curious as to how the training and testing data ended up looking like this!
If you would like to change the nominal values, you could use RenameNominalValues to rename the labels of your data. One possible method is to apply this to your testing data:
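As a rough sketch of how this might look when scripted with python-weka-wrapper3 (the file name and the doctor:Doctor mapping are assumptions for illustration; the -N option corresponds to the valueReplacements field):

import weka.core.jvm as jvm
from weka.core.converters import Loader
from weka.filters import Filter

jvm.start()
test = Loader(classname="weka.core.converters.ArffLoader").load_file("test.arff")

# Rename labels on the last attribute; -N takes old:new pairs
rename = Filter(classname="weka.filters.unsupervised.attribute.RenameNominalValues",
                options=["-R", "last", "-N", "doctor:Doctor"])
rename.inputformat(test)
test_fixed = rename.filter(test)
jvm.stop()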
This solution assumes that you are dealing with a nominal attribute, that it is your last attribute, and that the values are labelled as shown in the valueReplacements field.
Failing this, depending on the number of cases, you could edit the values manually or use your favourite spreadsheet to replace the values.
Hope this helps!
Use "SwapValues" under unsupervised > attribute
I have a dataset in which there are features of both float and object type. I want to apply feature selection on this dataset in such a way that I first find the mutual information score of all the features with the target, then choose the 20 highest-scoring features and do SFFS on them. So I use mutual_info_classif in my code, but I get this error: could not convert string to float. This is because one of my features (name=Grade) is categorical; its unique values are A, B, C, D. I have searched for a solution, and everybody says that in this situation you should use one-hot encoding, but I can't understand how to use one-hot encoding here, because I want to score each feature, not each category of a feature. And if, for example, category A gets a high score and category D gets a low score, how can I decide whether or not to select the Grade feature?
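For concreteness, here is a minimal sketch of the scoring step being described, assuming a pandas DataFrame with a column named Target (all names here are placeholders): one-hot encode Grade, score every column with mutual_info_classif, then collapse the Grade_* dummy scores back into a single score for the original feature (e.g. their max) before picking the top 20.

import pandas as pd
from sklearn.feature_selection import mutual_info_classif

df = pd.read_csv("data.csv")              # assumed input file
y = df.pop("Target")                      # assumed target column

# Grade (A/B/C/D) becomes Grade_A ... Grade_D
X = pd.get_dummies(df, columns=["Grade"])

scores = pd.Series(mutual_info_classif(X, y), index=X.columns)

# One score per original feature: dummies are collapsed via max
grade_score = scores.filter(like="Grade_").max()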
I am using the h2o Python API to train some models and am a bit confused about how to correctly use some parts of the API. Specifically: which columns should be ignored in a training dataset, how models look up the actual predictor features in a data set when the model's predict() method is actually used, and how weight columns should be handled (when the actual prediction datasets don't really have weights).
The details of the code here (I think) are not majorly important, but the basic training logic looks something like
drf_dx = h2o.h2o.H2ORandomForestEstimator(
    # denoting update version name by epoch timestamp
    model_id='drf_dx_v' + str(version) + 't' + str(int(time.time())),
    response_column='dx_outcome',
    ignored_columns=[
        'ucl_id', 'patient_id', 'account_id', 'tar_id', 'charge_line', 'ML_data_begin',
        'procedure_outcome', 'provider_outcome',
        'weight'
    ],
    weights_column='weight',
    ntrees=64,
    nbins=32,
    balance_classes=True,
    binomial_double_trees=True)
.
.
.
drf_dx.train(x=X_train, y=Y_train,
             training_frame=train_u, validation_frame=val_u,
             max_runtime_secs=max_train_time_hrs*60*60)
(note the ignored columns) and the prediction logic just looks like
preds = model.predict(X)
where X is some (h2o) dataframe with more (or fewer) columns than the X_train used to train the model (it includes some columns for post-processing exploration in a Jupyter notebook). E.g. the X_train columns may look like
<columns to ignore (as seen in the code)> <columns to use as features for training> <outcome label>
and X columns may look like
<columns to ignore (as seen in the code)> <EVEN MORE COLUMNS TO IGNORE> <columns to use as features for training>
My question is: is this going to confuse the model when making predictions? I.e. is the model getting the columns to use as features by column name (in which case, I don't think the different dataframe width would be a problem), or by column position (in which case adding more data columns to each sample would shift the positions and become a problem), or something else? What happens given that these extra columns were not listed in the ignored_columns arg of the model constructor?
A slight aside: should the weights_column name be in the ignored_columns list or not? The example in the docs (http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/algo-params/weights_column.html#weights-column) seems to use it as a predictor feature, and also seems to recommend it:
For scoring, all computed metrics will take the observation weights into account (for Gains/Lift, AUC, confusion matrices, logloss, etc.), so it’s important to also provide the weights column for validation or test sets if you want to up/down-weight certain observations (ideally consistently between training and testing).
but these weight values are not something that comes with the data used in actual predictions.
I've summarized your question into a few distinct parts, so the answers will be in a Q/A type fashion.
1) When I use my_model.predict(X), how does H2O-3 know which columns to predict with?
H2O-3 will use the columns you passed as predictors when you built your model: whatever you passed to the x argument, or else all the columns in your training_frame that were not ignored via ignored_columns, passed as the target to the y argument, or dropped because they held a constant value. My recommendation would be to use the x argument to specify your predictors and to leave the ignored_columns parameter alone. If X, the new dataframe you are predicting on, includes columns that were not used when you built the model, those columns will simply be ignored - so matching is by column name, not column position.
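A small hedged sketch of that name-based matching (file and column names are made up):

import h2o
from h2o.estimators import H2ORandomForestEstimator

h2o.init()
train = h2o.import_file("train.csv")         # assumed file

model = H2ORandomForestEstimator(ntrees=10)
model.train(x=["age", "bmi"], y="outcome", training_frame=train)

# The scoring frame may carry extra columns, in any order; only
# "age" and "bmi" are looked up, by name
new = h2o.import_file("new_patients.csv")    # assumed file
preds = model.predict(new)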
2) Should the weights column name be in the ignored column list?
No, if you pass the weights column to the ignored column list, that column will not be considered in any fashion during the model building phase. In fact, if you test this out, you should get a null pointer error or something similar.
3) Why is the "weights" column specified as a predictor and as the weights_column in the following code example?
This is a great question! I've created two Jira tickets: one to update the documentation to clear up the confusion, and another to potentially add a user warning.
The short answer is: if you pass the same column to the predictors argument x and to the weights_column argument, that column will only be used as a weight - it will not be used as a feature.
4) Does the user guide recommend using the weights as a feature and as a weight?
No, in the paragraph you are pointing to, the recommendation is to ensure that the column you pass as your weights_column exists in your training frame and validation frame - not that it should also be included as a feature.
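Putting points 2-4 together, here is a hedged sketch of the recommended setup (the predictor names are made up; train_u and val_u are the frames from your code, and both contain the "weight" column):

from h2o.estimators import H2ORandomForestEstimator

model = H2ORandomForestEstimator(ntrees=64, weights_column="weight")
model.train(x=["age", "bmi"],           # predictors only; no "weight" here
            y="dx_outcome",
            training_frame=train_u,     # both frames carry "weight"
            validation_frame=val_u)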
In a GBM model I have nearly 150 columns used to train and create the model. I have a case where, for some records, I won't be getting all of the columns. Will the model still work in that case? I don't want to set the missing values to 0.
Your question title and description are talking about two different things, and the title is not clear about what you are asking. The following answer is based on the question in your description field:
If you use H2O to build your GBM model, H2O replaces missing numerical, categorical and unseen values with NA. Please look at the following documentation regarding handling missing values in GBM, which will help you understand more about your case:
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/gbm-faq/missing_values.html?highlight=missing%20values
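As a hedged illustration (the column names and the trained gbm_model handle are assumptions), a scoring row that is missing some of the training columns can be built with explicit NAs and still be predicted on:

import h2o

# "bmi" was a training column but is unavailable for this record
row = h2o.H2OFrame({"age": [57.0], "bmi": [float("nan")], "smoker": ["yes"]})
preds = gbm_model.predict(row)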
I know there must be a simple answer to this, but I can't find it.
I have added a couple of textboxes to a Matrix in a BIDS/SSRS report. I've given these textboxes values such as:
=Fields!WEEK1USAGE.Value
It works (after a fashion); when I run the report (either on the Preview tab, or on the Report Server site) I see the first corresponding data value on the report - but only one.
I would think that once a value has been assigned via expressions such as "=Fields!WEEK1USAGE.Value", each value would display (rows would automatically be added).
There must be some property on the Matrix or the textbox that specifies this, but I can't see what it might be.
Here is how my report looks (very minimalistic, so far) in the Layout pane, and again after running, on the Preview tab.
Obviously, I want the report to display as many rows as necessary, not just one. The textboxes do have a "RepeatWith" property, but its description doesn't sound interesting/useful/promising.
I don't see any property on the Matrix control that looks right, either.
I thought maybe the designer was only showing one row of values, so I ran the report on the server too, but there, too, it just shows the two values.
So what do I need to do to get all the data for a provided field?
Matrices are for display of grouped data and summary information, usually in a horizontally expanding pivot table type of format. Is a matrix really what you are after? Looking at your expression, you have =Fields!Week1Usage.Value, but in a matrix I would expect to see at least =Sum(Fields!Week1Usage.Value), or even better just =Sum(Fields!Usage.Value). Then you would have ProactDescription as your row group and the week as your column group, and the matrix would work everything out for you, grouping and summing by Proact vertically and expanding the weeks out horizontally.
What seems to be happening is that you have no grouping on rows or columns and no aggregation so it is falling back to the default display which is effectively the First function - it displays the first row of data and as far as the matrix is concerned it has done its job because there is no grouping.
Without knowing your problem or data, I'll make up a scenario that might be what you are doing and discuss how the matrix does the heavy lifting to solve that problem. Let's say you have usage data for multiple Proacts. Each time one is used you record the usage amount and the date and time it is used. It could be used multiple times per day but certainly multiple times in a week. So you might be able to get the times each Proact is used from a table like so:
SELECT ProactDescription, TimeUsed, Usage
FROM ProactUsage
ORDER BY ProactDescription, TimeUsed
In your report you want to show the total weekly usage for each Proact over multiple weeks. Something like this:
Proact              Week1   Week2   Week3 ...
Description         Usage   Usage   Usage ...
---------------------------------------------
Anise, Fennel 1 CT  20.00   22.50   16.35 ...
St John's Wort      15.20   33.90   28.25 ...
...
and so on. Using a dataset based on the SQL above, we create a matrix: in the row group properties we group on =Fields!ProactDescription.Value, in the column group properties we group on a week expression like =DateDiff(DateInterval.Week, Fields!TimeUsed.Value, Today), and in the intersection of the row and column we put =Sum(Fields!Usage.Value). To display the column header nicely, use an expression like
="Week " & DateDiff(DateInterval.Week, Fields!TimeUsed.Value, Today)
The matrix automatically does all the summing by week and product and expands the weeks horizontally for as many as you are reporting. For bonus points you can also add totals at the end of the columns and the rows to show the total use of that Proact for the period (row total) and the total use of all Proacts in that week (column total).
I have inherited a mapreduce codebase which mainly calculates the number of unique user IDs seen over time for different ads. To me it doesn't look like it is being done very efficiently, and I would like to know if anyone has any tips or suggestions on how to do this kind of calculation as efficiently as possible in mapreduce.
We use Hadoop, but I'll give an example in pseudocode, without all the cruft:
map(key, value):
    ad_id = ...    // extract from value
    user_id = ...  // extract from value
    collect(ad_id, user_id)

reduce(ad_id, user_ids):
    unique_user_ids = new Set()
    foreach (user_id in user_ids):
        unique_user_ids.add(user_id)
    collect(ad_id, unique_user_ids.size)
It's not much code, and it's not very hard to understand, but it's not very efficient. Every day we get more data, and so every day we need to look at all the ad impressions from the beginning to calculate the number of unique user IDs for that ad, so each day it takes longer, and uses more memory. Moreover, without having actually profiled the code (not sure how to do that in Hadoop) I'm pretty certain that almost all of the work is in creating the set of unique IDs. It eats enormous amounts of memory too.
I've experimented with non-mapreduce solutions, and have gotten much better performance (but the question there is how to scale them in the same way that I can scale with Hadoop), but it feels like there should be a better way of doing it in mapreduce than the code I have. It must be a common enough problem for others to have solved.
How do you implement the counting of unique IDs in an efficient manner using mapreduce?
The problem is that the code you inherited was written with the mindset "I'll determine the unique set myself" instead of the "let's leverage the framework to do it for me".
I would do something like this (pseudocode) instead, split across two MapReduce jobs:
map(key, value):
    ad_id = ...    // extract from value
    user_id = ...  // extract from value
    collect(ad_id & user_id, unused dummy value)

reduce(ad_id & user_id, unused dummy values):
    output(ad_id, 1)   // one unique user id

map(ad_id, 1):         // identity mapper
    collect(ad_id, 1)

reduce(ad_id, set of a lot of '1's):
    summarize
    output(ad_id, unique_user_ids)
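To make the flow concrete, here is a hedged in-memory illustration in plain Python (standing in for the two Hadoop jobs; the sample data is made up):

from collections import defaultdict

impressions = [("ad1", "u1"), ("ad1", "u1"), ("ad1", "u2"), ("ad2", "u1")]

# Phase 1: shuffling on the composite (ad_id, user_id) key collapses
# duplicates; each reduce group emits a single (ad_id, 1) record
phase1 = {(ad_id, user_id) for ad_id, user_id in impressions}

# Phase 2: sum the 1s per ad_id
counts = defaultdict(int)
for ad_id, _user_id in phase1:
    counts[ad_id] += 1

print(dict(counts))   # {'ad1': 2, 'ad2': 1}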
Niels' solution is good, but for an approximate alternative that is closer to the original code and uses only one MapReduce phase, just replace the set with a bloom filter. The membership queries in a bloom filter have a small probability of error, but the size estimates are very accurate.
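A hedged sketch of that variant (the filter size and hash scheme are illustrative choices, not tuned values): inside the original reducer, replace the set with a bloom filter and count a user only when the filter says it has not been seen before.

import hashlib

class BloomFilter:
    def __init__(self, m=1 << 20, k=4):
        self.m, self.k = m, k           # bit-array size, number of hashes
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        # Returns True if item was (probably) already present
        seen = True
        for pos in self._positions(item):
            byte, bit = divmod(pos, 8)
            if not (self.bits[byte] >> bit) & 1:
                seen = False
                self.bits[byte] |= 1 << bit
        return seen

def approx_unique_count(user_ids):
    bf, count = BloomFilter(), 0
    for uid in user_ids:
        if not bf.add(uid):
            count += 1
    return count   # slight undercount when false positives occur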