QGIS 3.14 : Inserting into Oracle database does not work when metadata is set with three dimensions - oracle

I'm trying to set up an Oracle connection from QGIS. The tables are set up in Oracle with metadata for three dimensions. When I try to add objects to the database, however, I get the following error:
Oracle error while adding features: Oracle error: Could not insert feature -27
SQL: ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
ORA-13364: layer dimensionality does not match geometry dimensions
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 976
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 488
Unable to execute statement
When I remove the Z dimension from the metadata it seems to work. Any help on what can resolve the problem is appreciated.
The Z-value is usually added by organizational convention and not always used, i.e. it is set to 0.

Ironically, I was having a similar error today when attempting to save data into a GeoPackage. (That is, I don't get the error on insert; I get it when hitting save.)
I got a slightly different error:
QGIS error creating feature -16: failed to prepare SQL: INSERT INTO... etc.
However, my error was related to the fact that the geometry attribute had somehow been created with the name 'geom', while the GDAL code was looking for a geometry attribute called 'geometry'.
I recreated my GeoPackage table with the attribute named 'geometry' and Z values, and had no more problems (even though the source data has no Z).
However, I don't think this is your problem. It could be that the source data doesn't have a Z value. You could try setting up a procedure or derived field somewhere, such that the Z value is populated upon insert.
The error makes it sound like either the source has no Z value (i.e. "geometry dimensions") or the layer indexing is not set up to handle Z (i.e. the ODCIINDEXINSERT routine's "layer dimensionality").
That may be a starting point for you.
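If the 3D metadata has to stay, one way to populate the Z value on insert, as suggested above, is a database-side trigger that promotes incoming 2D geometries. This is only a sketch under assumptions: the table and column names (my_table, geom) are hypothetical, and it covers only point geometries stored in SDO_POINT.

```sql
-- Hypothetical names; sketch for point layers only. Promotes a 2D point to 3D
-- and applies the organizational convention of Z = 0 when no Z is supplied.
CREATE OR REPLACE TRIGGER my_table_force_z
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW
BEGIN
  IF :NEW.geom IS NOT NULL
     AND :NEW.geom.sdo_point IS NOT NULL
     AND :NEW.geom.sdo_point.z IS NULL THEN
    :NEW.geom.sdo_gtype   := 3001;  -- 3D point (was 2001 for 2D)
    :NEW.geom.sdo_point.z := 0;     -- default Z by convention
  END IF;
END;
/
```

Note that line and polygon geometries store their coordinates in SDO_ORDINATES, so a general solution would need to rebuild the ordinate array; alternatively, keeping the metadata 2D (as the asker found) avoids the mismatch entirely.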

Related

Error in azureml "Non numeric value(s) were encountered in the target column."

I am using Automated ML to run a time series forecasting pipeline.
When the AutoMLStep gets triggered, I get this error: Non numeric value(s) were encountered in the target column.
The data for this step is passed through an OutputTabularDatasetConfig, after applying read_delimited_files() to an OutputFileDatasetConfig. I've inspected the prior step, and the data comprises a 'Date' column and a numeric column called 'Place' with 80+ observations at monthly frequency.
Nothing seems to be wrong with the column type or the data. I've also applied a number of techniques on the data prep side, e.g. pd.to_numeric() and astype(float), to ensure it is numeric.
I've also tried forcing this through FeaturizationConfig's add_column_purpose('Place', 'Numeric'), but in this case I get another error: Expected column(s) Place in featurization config's column purpose not found in X.
Any thoughts on how to solve this?
A few learnings on this from interacting with the stellar Azure Machine Learning engineering team:
When calling the read_delimited_files() method, ensure that the output folder does not contain many inputs or files. If all intermediate outputs are saved to a common folder, the method may read all of the prior files in that folder and, depending on the shape of the data, borrow the schema from the first file or conflate them all, leading to inconsistencies and errors. In my case I was dumping many files to the same location, which confused this method. The fix is either to mark the output folder distinctly (e.g. with a UUID) or to use different paths.
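The UUID approach can be sketched as a small helper; nothing here is azureml-specific, and the base folder name is just an illustration:

```python
import uuid
from pathlib import PurePosixPath

def unique_output_path(base="intermediate"):
    """Return a run-specific output folder so each pipeline run writes to a
    fresh location and read_delimited_files() never mixes runs together."""
    return str(PurePosixPath(base) / uuid.uuid4().hex)

print(unique_output_path())  # a fresh path such as intermediate/<32 hex chars>
```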
The dataframe from read_delimited_files() may treat all columns as object type, which can lead to a data type check failure (i.e. the label column needs to be numeric). To mitigate, explicitly state the type. For example:
from azureml.data import DataType
prepped_data = prepped_data.read_delimited_files(set_column_types={"Place":DataType.to_float()})
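Outside azureml, the same failure mode can be reproduced with plain pandas; this is a minimal sketch (the column names mirror the question, but the data is made up):

```python
import io
import pandas as pd

# A delimited file read without type hints: here every cell arrives as a
# string, so the 'Place' column is object dtype and a numeric label check
# on it would fail.
raw = io.StringIO("Date,Place\n2021-01,10\n2021-02,x\n")
df = pd.read_csv(raw, dtype=str)
print(df["Place"].dtype)  # object

# Coerce explicitly; errors="coerce" turns non-numeric cells into NaN so
# they can be inspected or dropped before training.
df["Place"] = pd.to_numeric(df["Place"], errors="coerce")
print(df["Place"].dtype)  # float64
```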

Tableau Error "ORA-00972: identifier is too long"

I'm attempting to add data to Tableau from Oracle but am getting the following error: "ORA-00972: identifier is too long". This happens because I'm attempting to use a column name that is longer than 30 characters. I know that one fix for the issue is to change the name in Oracle to something shorter than 30 characters. Unfortunately, I can't do this myself, and it will take the team that can do it longer than I'd like to wait, so I was wondering if there are any workarounds for this issue.
I have already read the following page from Tableau: https://kb.tableau.com/articles/issue/error-ora-00972?signin=0f7d7b7f02b5d408316cdf9e3b03eef
An ORA-00972 means you are using a database column or object identifier name that is too long, as #tejash said in the comments. When you see this error in Tableau specifically, it only happens because the dimension or measure in Tableau has been renamed to be longer than 30 characters. It's an easy mistake to make because the Tableau UI doesn't prohibit the longer name.
I know that one fix for the issue to change the name in Oracle to
shorter than 30 characters. Unfortunately, I can't do this myself.
Because Oracle would never have let the name be longer than 30 characters when the column was created, there is nothing to fix in Oracle. You must fix it in Tableau, and this is something you can fix yourself: rename the dimension or measure. There are a number of places where this is possible, all following the same principle (just different screen layouts), so I will describe the one I'm most familiar with: on any worksheet, in the left-hand pane where you see the dimensions and measures, right-click the dimension or measure you need to rename and choose the "Rename" menu option.
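As a quick way to spot offending names before the generated SQL hits Oracle, here is a small illustrative helper (the field names are made up; the 30-byte limit applies to Oracle releases before 12.2, which raised it to 128 bytes):

```python
ORACLE_MAX_IDENT = 30  # bytes, for Oracle < 12.2 (128 from 12.2 onward)

def oversized_identifiers(names, limit=ORACLE_MAX_IDENT):
    """Return the names whose UTF-8 byte length exceeds Oracle's limit."""
    return [n for n in names if len(n.encode("utf-8")) > limit]

fields = ["SALES_AMOUNT", "a_dimension_renamed_to_something_far_too_long"]
print(oversized_identifiers(fields))
# ['a_dimension_renamed_to_something_far_too_long']
```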

VBA ACE issue with Oracle Decimal

We use VBA to retrieve data from an Oracle database using the Microsoft.ACE.OLEDB.12.0 provider.
We have used this method without issue for a long time, but we have encountered a problem with a specific query of data from a specific table.
When running it under VBA, we get "Run-Time error '1004': Application-defined or object-defined error". However, investigating further, we find the following:
The queries we run are dynamically generated, and we handle them by reading the results into a Variant array, then outputting that array into Excel. When we step through our particular query, we find that one specific database field is "blank": the Locals window shows the value to be completely blank. It is not an empty string, it is not Null, it is not zero. VarType() shows it to be a Decimal data type, and yet it is empty.
I can't seem to trap this error:
On Error Resume Next
...still breaks.
If IsEmpty(theValue) Then
...doesn't catch it, because the value is not empty.
If theValue Is Nothing Then
...doesn't catch it, because the value is not an object.
etc.
We used the SQL in a .NET application and got a more informative error:
Decimal's scale value must be between 0 and 28, inclusive. Parameter name: Scale
So: I see this as two issues:
1.) In VBA, how do I trap the variant datatype value-that-is-not-empty-or-null, and;
2.) How do I resolve the Oracle Decimal problem?
For #2: the Oracle NUMBER data type can support a precision of up to 38 digits, and I assume the provider I am using can only support a scale of up to 28. I guess I need to CAST it to a smaller type in the query.
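For issue #2, here is a hedged sketch of that cast approach; the table and column names are hypothetical. Constraining the scale in the query keeps the value within what .NET's Decimal (scale 0 to 28) can represent:

```sql
-- Hypothetical names; pick a scale of 28 or less so the consumer can map it.
SELECT CAST(amount AS NUMBER(28, 10)) AS amount
FROM   my_table;

-- Or simply round away the excess scale:
SELECT ROUND(amount, 10) AS amount
FROM   my_table;
```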
Any other thoughts?

Error in pulling data from RPD - OBIEE 11g

I created 2 hierarchies in RPD:
1.) Time-Qtr
2.) Time-Month
When I tried pulling it out in OBI 11g, it returned the following error:
"Type Error: Unable to get property 'getAllLevelInfos' of undefined or null reference".
What can be done to rectify this error?
It looks like your dimensional hierarchies are messed up. They must be constructed either on separate logical dimension tables, in order to end on different leaf levels, or on one logical dimension table if they are alternate hierarchies inside one dimensional hierarchy, thus ending on the same leaf level.

ATG-Endeca indexing error for autogen dimension

We have an ATG-Endeca CAS deployment in which dimension value ids are migrated between environments using the cas-cmd API. If the ids haven't been imported, indexing completes OK. Otherwise, the following error occurs:
Caused by:
com.endeca.soleng.eac.toolkit.exception.CasControlException: Crawl
'app-last-mile-crawl' failed with error: Dimension value
records cannot be specified for autogen dimensions. Received dimension
value record with spec 'r8-16' for dimension 'product.sizeRange'
Per my understanding, the ids exported from environment A include dimension value ids for an autogen dimension; when the A-ids are imported into environment B and indexing is triggered in B, the error occurs.
The error also seems more frequent for dimensions that have configuration in index_config.json, i.e. range dimensions.
Any ideas on how this is resolved, or a confirmation of the cause, would be appreciated.
Thanks.
A range dimension cannot be an autogen dimension; it needs to be declared in index-config.json or in the dimension mapping CSV file. Migrate the index-config from one environment to the other with all range dimensions configured with the same ids.
I hope this helps.
Thanks,
Ajay Agrawal
