There is a PVO that has incorrect values when extracted using BICC. The values of this PVO in OTBI have no issue. However, when I extract that PVO using BICC, I get values like "1.234567E218, (null), 1.234567E-68".
The column is only for numeric values (NUMERIC 50). This column should contain values like "3.00E14" or 15-digit values. The values we are getting are larger or unusual.
I've compared this PVO with the same PVO in other environments and they are identical (the other environments don't have this issue). I have no idea where to check, since OTBI has the correct values; I think the cause of this error is in BICC.
You can't do anything about Oracle BICC extracts yourself.
I would also suggest raising this question in the Oracle community.
I am using Automated ML to run a time series forecasting pipeline.
When the AutoMLStep gets triggered, I get this error: Non numeric value(s) were encountered in the target column.
The data for this step is passed through an OutputTabularDatasetConfig, after applying read_delimited_files() to an OutputFileDatasetConfig. I've inspected the prior step, and the data comprises a 'Date' column and a numeric column called 'Place' with 80+ observations at a monthly frequency.
Nothing seems to be wrong with the column type or the data. I've also applied a number of techniques on the data prep side, e.g. pd.to_numeric() and astype(float), to ensure it is numeric.
I've also tried forcing this through FeaturizationConfig()'s add_column_purpose('Place', 'Numeric'), but in this case I get another error: Expected column(s) Place in featurization config's column purpose not found in X.
Any thoughts on how to solve this?
So, a few learnings from working through this with the stellar Azure Machine Learning engineering team:
When calling the read_delimited_files() method, ensure that the output folder does not contain many unrelated files. If all intermediate outputs are saved to a common folder, the method may read all of the earlier files in that folder and, depending on the shape of the data, borrow the schema from the first file or mix them all together, which leads to inconsistencies and errors. In my case I was dumping many files to the same location, and this was confusing the method. The fix is either to distinctly mark the output folder (e.g. with a UUID) or to give each output a different path, as in the sketch below.
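In case it helps, a minimal sketch of that option (workspace, datastore, and path names are hypothetical):

from uuid import uuid4
from azureml.core import Workspace
from azureml.data import OutputFileDatasetConfig

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Give this step's output its own folder so read_delimited_files()
# never picks up files written by other steps or earlier runs.
prepped_data = OutputFileDatasetConfig(
    name="prepped_data",
    destination=(datastore, "intermediate/prepped_" + uuid4().hex)
)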
The dataframe from read_delimited_files() may treat all columns as object type, which can lead to a data type check failure (i.e. the label_column needs to be numeric). To mitigate this, explicitly state the type. For example:
from azureml.data import DataType

# Explicitly type the 'Place' column as float so the label column passes the numeric check
prepped_data = prepped_data.read_delimited_files(set_column_types={"Place": DataType.to_float()})
I have a data set which contains missing values, as shown in the image. I would like to fill in the missing values with the minimum value of the column. Which methods in Mathematica can be used to do this, and how?
Without seeing your code it's hard to say anything specific, but in general you just need to put your column into a list l, take Min[l], and then substitute that value for the missing entries.
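For example, assuming the data is a list of rows with the values of interest in the second column (a minimal sketch, not tested against your actual data):

col = data[[All, 2]];                      (* hypothetical: the column containing Missing[] entries *)
minVal = Min[DeleteMissing[col]];          (* minimum over the non-missing values *)
filledData = data /. _Missing -> minVal    (* substitute that minimum for every Missing[...] *)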
I have a pesky SSRS report problem: the main query of my report has a condition that can take more than 1,000 choices, and when the user selects all of them the query fails because my backend database is Oracle. I have done some research and found a solution that should work.
The solution is rewriting the IN clause to something like this:
(1,ColumnName) in ((1,Searchitem1),(1,SearchItem2))
This will work. However, when I do this
(1,ColumnName) in ((1,:assignedValue))
and pass just one value, it works; but when I pass more than one value, it fails with an ORA-01722: invalid number error.
I have tried multiple combinations of the same IN clause, but nothing is working.
Any help is appreciated...
Wild guess: your :assignedValue is a comma-separated list of numbers, and Oracle tries to parse it as a single number.
Passing multiple values as a single value for an IN query is (almost) never a good idea - either you have to use string concatenation (prone to SQL injection and terrible performance), or you have to have a fixed number of arguments to IN (which generally is not what you want).
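A quick way to see what happens (illustration only; the string literal stands in for whatever ends up in the single bind variable):

SELECT * FROM dual WHERE (1, 42) IN ((1, '4711,4712'));
-- ORA-01722: invalid number, because '4711,4712' cannot be converted to a single number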
I'd suggest that you:
INSERT your search items into a temporary table, and
use a JOIN with this search table in your SELECT, as in the sketch below.
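A rough sketch of that approach (table and column names are hypothetical, and wiring the inserts into your SSRS data source is up to you):

-- One-time DDL: a global temporary table to hold the user's selections
CREATE GLOBAL TEMPORARY TABLE report_search_items (
    item_value NUMBER
) ON COMMIT PRESERVE ROWS;

-- At report run time: one INSERT per selected value (no 1,000-element IN limit)
INSERT INTO report_search_items (item_value) VALUES (:searchItem);

-- Main report query: join against the search table instead of a huge IN list
SELECT t.*
FROM   my_report_table t
JOIN   report_search_items s ON s.item_value = t.ColumnName;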
In a DB which I do not have the privilege to alter, a column is defined as NUMBER(13,4). How is it possible to insert 999999999999999999, whose length is more than 13? It is throwing an exception. Is it possible to convert the value into a format like 1.23e3, and would the DB save it in this format?
No, it is not possible, because of the rules and limitations you mentioned yourself. The column has that definition; you cannot change it, so you cannot make the value fit. Period.
No, it is not possible to insert a number that is greater than the specified precision and scale of the column.
You have to change the database.
If you don't have permission to alter the table, then simply ask someone who does; you have a valid "business" need to do so.
I would highly recommend not working out some way to "hack" around this limitation. Constraints such as this exist to enforce data quality. Though it may be misapplied in this situation, putting data in two different formats in the same column makes it immeasurably more difficult to retrieve data from the database; hence why you should always store numbers as numbers, etc.
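For reference, if someone with the necessary privileges does widen the column, the change itself is small (hypothetical table and column names; 999999999999999999 has 18 integer digits, so the precision must allow at least 18 plus the 4 decimal places):

ALTER TABLE my_table MODIFY (my_column NUMBER(22,4));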
No, unfortunately not. There is no way to achieve this.
While executing a DB2 (V8) stored procedure, I get the following error:
SQL0304N A value cannot be assigned to a host variable because the value is
not within the range of the host variable's data type. SQLSTATE=22003
I did not set up any kind of tracing or specific error handling, and since the error only occurs in our client's validation environment, which I'm not allowed to play with, I do not have many options other than analyzing my code again.
Here is the result of my current analysis. Google is not much help...
My "10-page" procedure creates a CURSOR over a set of data, goes through it, and computes values for each element to be inserted into a table.
I have checked (hopefully) all my variable types against the data types used to fill them and against the data types of the target table, and I do not see any conflict there.
Since there are a lot of decimal numbers, multiplications, and additions, my only hypothesis is that a computed value becomes too large for a declared variable. Could anyone confirm that this would produce this error? And would it also apply if the number of digits after the decimal point produced by the computation is greater than the target variable type allows (e.g. 100000.123 into a DECIMAL(6,2))?
I also tried to find a way to debug DB2 SQL PL through a client, but I did not find any solution. If you have any suggestions...
Many thanks in advance for any clue :)
Answering my own question...
First, regarding my last question: I did not find any way to debug DB2 SQL PL through a client (with DB2 V8, at least).
After I was authorized to work on our client's integration environment, I could confirm my hypothesis was right: the variable receiving the result of the multiplication was sometimes declared too small (DECIMAL(10,2)) for the computed value.
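A minimal illustration of that failure mode (hypothetical variable and values), matching the behaviour described above: the multiplication itself is fine, but assigning the result to the undersized variable raises SQL0304N.

-- Inside the procedure body (illustration only)
DECLARE v_amount DECIMAL(10,2);
SET v_amount = 1234567.89 * 100;  -- 123456789.00 needs 9 integer digits; DECIMAL(10,2) allows only 8 -> SQL0304N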
The solution adopted was to change the variable to DECIMAL(15,2), and since the final value to insert still had to be DECIMAL(10,2) per the client's requirements, we validated the following with our client:
1 - Check the variable value:
if (myval > 9999999.99)
then
  set myval = 9999999.99;
end if;
=> "back to decimal(10,2) requirement"
2 - Get back to decimal(10,2) at insert:
This last bit of code also solves the issue of too many digits after the decimal point, which was causing an error at insert time as well:
insert into mytable values (
  ...,
  CAST(myval AS DECIMAL(12,2)),
  ...
);