Is there any way to convert numeric data to a binary equivalent for use as input to a neural network?
Checked in Weka: "Converts all numeric attributes into binary attributes (apart from the class attribute, if set): if the value of the numeric attribute is exactly zero, the value of the new attribute will be zero. If the value of the numeric attribute is missing, the value of the new attribute will be missing. Otherwise, the value of the new attribute will be one. The new attributes will be nominal."
Checked in RapidMiner: it uses two parameters for the conversion, min and max values. "If the value of an attribute is between the specified minimal and maximal value, it becomes 'false', otherwise 'true'."
Can anyone tell me a better way to convert numeric data to binary?
Sample data:
10,50,35,15
15,20,70,25
25,10,55,10
55,10,35,15
35,15,10,50
20,25,15,20
7,55,25,30
8,35,25,30
9,70,55,10
Or are neural networks capable of taking numeric data as input directly?
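For illustration, here is a minimal sketch of the two rules quoted above applied to a few of the sample rows, written in TypeScript. The function names, the use of null as a missing-value marker, and the [10, 50] range are my own choices for the example, not taken from Weka or RapidMiner.

// A few of the sample rows from the question.
const rows: number[][] = [
  [10, 50, 35, 15],
  [15, 20, 70, 25],
  [25, 10, 55, 10],
];

// Weka-style rule: exactly 0 stays 0, missing (null) stays missing, anything else becomes 1.
const wekaStyleBinarize = (v: number | null): number | null =>
  v === null ? null : v === 0 ? 0 : 1;

// RapidMiner-style rule: false if the value lies inside [min, max], true otherwise.
const rangeBinarize = (v: number, min: number, max: number): boolean =>
  v < min || v > max;

console.log(rows.map(row => row.map(wekaStyleBinarize)));
// -> all ones, since none of the sample values are 0 or missing

console.log(rows.map(row => row.map(v => rangeBinarize(v, 10, 50))));
// -> true only for values outside the assumed [10, 50] range, e.g. 70 and 55

As the first output shows, the Weka rule collapses this particular sample to all ones, so a per-attribute range or threshold (as RapidMiner uses) preserves more information here.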
The current redux-form documentation (version 6.5.0 at the time of this writing) mentions 2 callbacks for the Field object: normalize and parse.
Both descriptions sound pretty similar: They take the value entered by the user in an input field and transform it to a value stored in redux.
What's the difference between these 2 callbacks?
Essentially the two functions do exactly the same thing, i.e. take the value a user has input to the Field and transform it before it's stored in the redux store.
The differences lie in the flavor of these functions and the order in which they are called:
parse takes the string input value and converts it to the type you want stored in the redux store; for example, you might parse a date string from a datepicker into a Date object.
normalize is meant to enforce certain formatting of input values in the redux store; for example, ensuring that phone numbers are stored in a consistent format.
When it comes to the order in which these methods are called in the redux-form value lifecycle: parse is called before normalize, which means normalize is called with the parsed input value.
So in short, use parse to convert user input (usually in string form) to a type that suits your needs. Use normalize to enforce a specific input format on the user.
This is what the Value Lifecycle Hooks page tries to explain.
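As a rough sketch of how the two callbacks might be wired up (the field names and helper functions are made up for illustration, not taken from the redux-form docs):

import * as React from 'react';
import { Field, reduxForm } from 'redux-form';

// parse: convert the raw string coming out of the input into the type
// you actually want in the redux store (here, a Date object).
const parseDate = (value: string) => (value ? new Date(value) : null);

// normalize: enforce a consistent format on the stored value
// (here, strip everything except digits from a phone number).
const normalizePhone = (value: string) => (value ? value.replace(/[^\d]/g, '') : value);

const ExampleForm = () => (
  <form>
    <Field name="birthDate" component="input" type="date" parse={parseDate} />
    <Field name="phone" component="input" type="tel" normalize={normalizePhone} />
  </form>
);

export default reduxForm({ form: 'example' })(ExampleForm);

If both callbacks are supplied on the same Field, normalize receives the value that parse produced, per the ordering described above.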
I am attempting to allow a dynamic sort on a text box in an SSRS report. The field I am trying to sort on will have either an "A" or a decimal number. I want to sort the decimal numbers in descending order. The expression I am using is:
=iif(isnumeric(Fields!CommScore.Value), (cdbl(Fields!CommScore.Value)*-1),6)
The decimal number will never be larger than 5. The error I get is:
The sortexpression for the text box 'textbox74' contains an error. Input string was not in a correct format. (rsRuntimeErrorInExpression)
I imagine this is something simple. What am I doing wrong?
The error relates to the CDbl function throwing an exception when trying to convert "A" to a number. Yes, I know you're checking whether it is numeric first, but IIF is not a language construct; it is a function, and as a function it evaluates all of its arguments before the call is made. This means that both the true and false parameters get evaluated even though one will be discarded.
Try the Val function. It has the benefit of not erroring when it gets passed non-numeric data - it just does the best it can to convert it.
=IIF(IsNumeric(Fields!CommScore.Value), (Val(Fields!CommScore.Value)*-1), 6)
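To make the eager-evaluation point concrete, here is a rough sketch in TypeScript rather than SSRS expression syntax; cdbl, isNumeric, and val are stand-ins I wrote to mimic the behaviour of CDbl, IsNumeric, and Val.

// Stand-in for CDbl: throws on non-numeric input, like the SSRS function.
const cdbl = (s: string): number => {
  const n = Number(s);
  if (Number.isNaN(n)) throw new Error(`Input string was not in a correct format: "${s}"`);
  return n;
};

// Stand-in for IsNumeric.
const isNumeric = (s: string): boolean => !Number.isNaN(Number(s));

// IIF is an ordinary function: both branch arguments are evaluated
// before the call, so cdbl("A") throws even though the condition is false.
const iif = (cond: boolean, whenTrue: number, whenFalse: number): number =>
  cond ? whenTrue : whenFalse;

// Throws, mirroring the rsRuntimeErrorInExpression error:
// iif(isNumeric("A"), cdbl("A") * -1, 6);

// Stand-in for Val: never throws, converts what it can and falls back to 0.
const val = (s: string): number => parseFloat(s) || 0;

console.log(iif(isNumeric("A"), val("A") * -1, 6));     // 6
console.log(iif(isNumeric("4.5"), val("4.5") * -1, 6)); // -4.5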
In CoreImage a CIFilter has both a set of Max/Min values and a set of SliderMax/Min values.
The documentation for the Max/Min says "The maximum/minimum value for a filter parameter" and the SliderMax/Min says "The maximum/minimum value, specified as a floating-point value, to use for a slider that controls input values for a filter parameter."
I'm wondering why these might be different values, as they are, for example, for the inputAngle parameter of CIHueAdjust, where max/min are 0/0 but sliderMax/Min is 3.14/-3.14?
And also what is the use of having the max/min values at 0/0 like they are for most of the filters?
I would wager that a value of 0 means there is no max/min, that any value representable by the datatype is valid for the filter.
As for why there's a separate slider value, it's because what you present to the user is often different than what's accepted. For example, the CIHueAdjust may accept any value for the actual adjustment, but a slider presented to the user has no reason to go outside the range of -3.14..3.14 (because anything outside this range is equivalent to a value inside the range).
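A quick numeric illustration of that last point (plain TypeScript, nothing CoreImage-specific): any angle outside [-π, π] folds back to an equivalent angle inside it, so a wider slider would give the user nothing new.

// Fold an arbitrary angle (in radians) into the equivalent value in [-π, π].
const foldAngle = (radians: number): number => {
  const twoPi = 2 * Math.PI;
  let a = radians % twoPi;
  if (a > Math.PI) a -= twoPi;
  if (a < -Math.PI) a += twoPi;
  return a;
};

console.log(foldAngle(4));  // ≈ -2.283, the same hue rotation as 4 rad
console.log(foldAngle(-7)); // ≈ -0.717, the same hue rotation as -7 rad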
I'm using SMOTE to oversample my dataset (affected by class imbalance). Some of my attributes have integer values, others have only two decimals, but SMOTE creates new instances with many decimals. To solve this problem I thought I would use the NumericCleaner filter and set the number of decimals I want. This seems to work, but I've got a problem with missing values: each missing value is replaced with 0.0, and I need to evaluate my model with the missing values kept in the dataset. So how can I use NumericCleaner (or another filter that rounds values) and keep my missing values?
Very interesting question. Okay, here is the solution:
Use SMOTE to oversample the minority group (this produces decimal values, but the missing values remain missing values).
Then select the Weka filter unsupervised -> attribute -> NumericTransform.
Then click on this filter, set attributeIndices to the attributes that have the decimal values, and in methodName put "ceil" instead of the default "abs".
I hope that solves the problem.
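If you would rather do the rounding outside of Weka, the underlying idea is simply to round the values that are present and pass the missing ones through untouched. A minimal sketch, using NaN as the missing-value marker and ceiling to two decimals (since the question mentions two-decimal attributes):

// NaN stands in for Weka's missing-value marker ("?").
const values: number[] = [1.6666667, 2.0, NaN, 3.3333333];

// Ceil the present values to the given number of decimals; leave missing ones alone.
const ceilKeepMissing = (v: number, decimals = 2): number =>
  Number.isNaN(v) ? NaN : Math.ceil(v * 10 ** decimals) / 10 ** decimals;

console.log(values.map(v => ceilKeepMissing(v))); // [1.67, 2, NaN, 3.34]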
I'm trying to determine the relationship between default values and the has_foo() methods that are declared in various programmatic interfaces. In particular, I'm trying to determine under what circumstances (if any) you can "tell the difference" between a field explicitly set to the default value, and an unset value.
If I explicitly set a field (e.g. "Bar.foo") to its default value (e.g., zero), then is Bar::has_foo() guaranteed return true for that data structure? (This appears to be true for the C++ generated code, from a quick inspection, but that doesn't mean it's guaranteed.) If this is true, then it's possible to distinguish between an explicitly set default value and an unset prior to serialization.
If I explicitly set a field to its default value (e.g., zero), and then serialize that object and send it over the wire, will the value be sent or not? If it is not, then clearly any code that receives this object can't distinguish between an explicitly set default value and an unset value. I.e., it won't be possible to distinguish these two cases after serialization -- Bar::has_foo() will return false in both cases.
If it's not possible to tell the difference, what is the recommended technique for encoding a protobuf field if I want to encode a "nullable" optional value? A couple of options come to mind, but neither seems great: (a) add an extra boolean field that records whether the field is set or not, or (b) use a "repeated" field even though I semantically want an optional field -- this way I can tell the difference between no value (length-zero list) and a set value (length-one list).
The following applies to 'proto2' syntax, not 'proto3':
The notion of a field being set or not is a core feature of Protobuf. If you set a field to a value (any value), then the corresponding has_xxx method must return true, otherwise you have a bug in the API.
If you do not set a field and then serialize the message, no value is sent for that field. The receiving side will parse the message, discover which values were included, and set the corresponding "has_xxx" values.
Exactly how this is implemented in the wire format is documented here: http://code.google.com/apis/protocolbuffers/docs/encoding.html. The short version is that messages are encoded as a sequence of key-value pairs, and only fields which are explicitly set are included in the encoded message.
Default values only come into play when you attempt to read an unset field.
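This is not the actual generated code, but a rough sketch of the bookkeeping described above: the generated class keeps a per-field "has" bit, setting a field (even to its default) flips that bit, serialization only emits fields whose bit is set, and the default only surfaces when you read an unset field.

// Illustrative only -- not real protobuf-generated code.
class Bar {
  private fooValue = 0;      // proto2 default for a numeric field
  private fooIsSet = false;  // the "has" bit tracked per optional field

  setFoo(value: number): void {
    this.fooValue = value;   // setting to 0 still flips the has-bit
    this.fooIsSet = true;
  }

  hasFoo(): boolean {
    return this.fooIsSet;
  }

  getFoo(): number {
    // The default value only matters when reading an unset field.
    return this.fooIsSet ? this.fooValue : 0;
  }

  // Wire format sketch: a sequence of key/value pairs containing only the set fields.
  serialize(): Array<[fieldNumber: number, value: number]> {
    return this.fooIsSet ? [[1, this.fooValue]] : [];
  }
}

const a = new Bar();
a.setFoo(0);                 // explicitly set to the default value
console.log(a.hasFoo());     // true
console.log(a.serialize());  // [[1, 0]] -- the field is sent over the wire

const b = new Bar();         // field never set
console.log(b.hasFoo());     // false
console.log(b.serialize());  // [] -- nothing sent; the receiver's has_foo() is false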