Obiee measures show null values only - oracle

I have the following problem: in OBIEE, when I create an analysis from table A, its measure columns show null, even though there are no null values in this table in the database.
I used only this table and did not select any columns from other tables. Non-measure columns like client_id and client_name are fine, but any measure of that table calculated in the RPD shows null.
What can be the reason?
Edit: the log contains the following:
The expression "measure_column" is converted to null because
None of the fact tables are compatible with the query request

If you really have measure columns in your dimensions, it's time to completely go over what was done in the RPD. It seems to have severe issues built in.

Related

When filtering a single table on only the primary key, why is the optimizer doing a full table scan?

I have a monster-sized non-partitioned table. I recently updated the statistics on it as well.
The primary key is on a char field called "ID".
SELECT * FROM "MYDATA" WHERE "ID" = '0000492319'
The plan shows TABLE ACCESS (FULL) and has a filter predicate on the ID. This results in a query that takes 8 seconds to run.
If I give the optimizer a hint to use the primary key, the query takes 1.6 seconds to run.
It's bizarre to me that I should need to provide this hint. The indexed plan estimates a lower cost, and the optimizer should be aware of this.
Here is the filter predicate:
NLSSORT(INTERNAL_FUNCTION(ID),'nls_sort="JAPANESE_M"')=HEXTORAW('017...')
The database NLS_SORT is set to JAPANESE_M and the NLS_CHARACTERSET is JA16SJIS.
So nothing seems to be mismatched that would cause a special sort function to be called. It's a bit odd, though.
One more piece of information: if I select only the "ID" column in my query, then the planner chooses INDEX (FAST FULL SCAN) automatically.
The problem only arises when I use select *.
Oracle Database version: 10.2.0.5.0.
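The NLSSORT() wrapper in the filter predicate usually shows up when string comparison is linguistic (NLS_COMP set to LINGUISTIC) rather than binary; a plain B-tree index stores binary values, so the optimizer cannot use it for that predicate. A minimal sketch of how you might check and work around this; the index name and session-level change below are assumptions, not taken from the question:
-- See how comparisons are done for this session
SELECT parameter, value
  FROM nls_session_parameters
 WHERE parameter IN ('NLS_SORT', 'NLS_COMP');
-- Option 1: a function-based (linguistic) index matching the generated predicate
CREATE INDEX mydata_id_ling
    ON "MYDATA" (NLSSORT("ID", 'NLS_SORT=JAPANESE_M'));
-- Option 2: compare in binary for this session and re-check the plan
ALTER SESSION SET NLS_COMP = BINARY;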

Populating Tables into Oracle in-memory segments

I am trying to load tables into the Oracle In-Memory column store. I have enabled the tables for INMEMORY using the SQL*Plus command ALTER TABLE table_name INMEMORY. The table also contains data, i.e. the table is populated. But when I run the command SELECT v.owner, v.segment_name name, v.populate_status status from v$im_segments v;, it shows no rows selected.
What can be the problem?
Have you considered this?
https://docs.oracle.com/database/121/CNCPT/memory.htm#GUID-DF723C06-62FE-4E5A-8BE0-0703695A7886
Population of the IM Column Store in Response to Queries
Setting the INMEMORY attribute on an object means that this object is a candidate for population in the IM column store, not that the database immediately populates the object in memory.
By default (INMEMORY PRIORITY is set to NONE), the database delays population of a table in the IM column store until the database considers it useful. When the INMEMORY attribute is set for an object, the database may choose not to materialize all columns when the database determines that the memory is better used elsewhere. Also, the IM column store may populate a subset of columns from a table.
You probably need to run a SELECT against the data first.
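A minimal sketch of both ways to get the segment populated, assuming a table named my_table (the name and priority value are illustrative, not from the question):
-- Option 1: touch the data so the default on-demand population starts
SELECT /*+ FULL(t) */ COUNT(*) FROM my_table t;
-- Option 2: raise the priority so the database populates it eagerly instead of waiting
ALTER TABLE my_table INMEMORY PRIORITY HIGH;
-- Population runs in the background, so re-check after a moment
SELECT owner, segment_name, populate_status FROM v$im_segments;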

DB Index not being called

I know this question has been asked more than once here, but I am not able to resolve my issue, so I am posting it again for help.
I have a table called Transaction in an Oracle database (11g) with 2.7 million records. There is a not-null varchar2(20) column (txn_id) which contains numeric values. This is not the primary key of the table, and most of the values are unique. By "most of the values" I mean there are cases where one value can appear 3-4 times in the table.
If I perform a simple select based on TXN_ID, it takes about 5 seconds or more to return the result.
Select * from Transaction t where t.txn_id = 245643
I have an index created on this column, but when I check the explain plan for the above query, it is using a full table scan. This query is used many times in the application, which is making the application slow.
Can you please provide some help with what might be causing this issue?
You are comparing a varchar column with a numeric literal (245643). This forces Oracle to convert one side of the equality, and it converts the column side (effectively TO_NUMBER(txn_id) = 245643), which means the index on txn_id cannot be used. Instead of having to guess how Oracle will handle this conversion, use a character literal:
SELECT * FROM Transaction t WHERE t.txn_id = '245643'
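To confirm the difference, a quick before-and-after check with the standard EXPLAIN PLAN / DBMS_XPLAN tools (only the literal changes between the two statements):
EXPLAIN PLAN FOR SELECT * FROM Transaction t WHERE t.txn_id = 245643;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);   -- expect TABLE ACCESS FULL with TO_NUMBER("TXN_ID") in the predicate
EXPLAIN PLAN FOR SELECT * FROM Transaction t WHERE t.txn_id = '245643';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);   -- expect an INDEX RANGE SCAN on the txn_id index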

Identical Oracle db setups: exception on just one of them

edit: Look to the end of this question for what caused the error and how I found out.
I have a very strange exception thrown at me from Hibernate when I run an app that does batch inserts of data into an Oracle database. The error comes from the Oracle database, ORA-00001, which "means that an attempt has been made to insert a record with a duplicate (unique) key. This error will also be generated if an existing record is updated to generate a duplicate (unique) key."
The error is weird because I have created the same table (exactly the same definition) on another machine, where I do NOT get the error when I use it through my app. AND all the data gets inserted into the database, so nothing is really rejected.
There has to be something different between the two setups, but the only thing I can see that is different is the banner output that I get when issuing
select * from v$version where banner like 'Oracle%';
The database that gives me trouble:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
The one that works:
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production
Table definitions, input, and the app I wrote are the same for both. The table involved is basically a four-column table with a composite id (serviceid, date, value1, value2) - nothing fancy.
Any ideas on what can be wrong? I have started out clean several times, dropping both tables to start on equal grounds, but I still get the error from the database.
Some more of the output:
Caused by: java.sql.BatchUpdateException: ORA-00001: unique constraint (STATISTICS.PRIMARY_KEY_CONSTRAINT) violated
at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:367)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:8728)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
How I found out what caused the problem
Thanks to APC and ik_zelf I was able to pinpoint the root cause of this error. It turns out the Quartz scheduler was wrongly configured for the production database (where the error turned up).
For the job running against the non-failing Oracle server I had <cronTriggerExpression>0/5 * * * * ?</cronTriggerExpression>, which ran the batch job every five seconds. I figured that once a minute was sufficient for the other Oracle server, and set the Quartz scheduler up with * */1 * * * ?. This turns out to be wrong: instead of running every minute, it ran every second!
Each job took approximately 1.5-2 seconds, so two or more jobs were running concurrently, causing simultaneous inserts on the server. So instead of inserting 529 elements, I was getting anywhere from 1000 to 2000 inserts. Changing the cron trigger expression to match the other one, running every five seconds, fixed the problem.
To find out what was wrong I had to set the relevant property to true in hibernate.cfg.xml and disable the primary key constraint on the table.
-- To catch exceptions
-- to find the offending rows run the following query
-- SELECT * FROM MY_TABLE, EXCEPTIONS WHERE MY_TABLE.rowid = EXCEPTIONS.row_id;
create table exceptions(row_id rowid,
owner varchar2(30),
table_name varchar2(30),
constraint varchar2(30));
-- This table was set up
CREATE TABLE MY_TABLE
(
LOGDATE DATE NOT NULL,
SERVICEID VARCHAR2(255 CHAR) NOT NULL,
PROP_A NUMBER(10,0),
PROP_B NUMBER(10,0),
CONSTRAINT PK_CONSTRAINT PRIMARY KEY (LOGDATE, SERVICEID)
);
-- Removed the constraint to see what was inserted twice or more
alter table my_table
disable constraint PK_CONSTRAINT;
-- Enable this later on to find rows that offend the constraints
alter table my_table
enable constraint PK_CONSTRAINT
exceptions into exceptions;
You have a unique compound constraint. ORA-00001 means that you have two or more rows which have duplicate values in ServiceID, Date, Value1 and/or Value2. You say the input is the same for both databases. So either:
you are imagining that your program is hurling ORA-00001
you are mistaken that the input is the same in both runs.
The more likely explanation is the second one: one or more of your key columns is populated by an external source or default value (e.g. a code table for ServiceId or SYSDATE for the date column). In your failing database this automatic population is failing to provide a unique value. There can be any number of reasons why this might be so, depending on what mechanism(s) you're using. Remember that in a unique compound key NULL entries count. That is, you can have any number of records (NULL, NULL, NULL, NULL) but only one for (42, NULL, NULL, NULL).
It is hard for us to guess what the actual problem might be, and almost as hard for you (although you do have the advantage of being the code's author, which ought to grant you some insight). What you need is some trace statements. My preferred solution would be to use Bulk DML Exception Handling but then I am a PL/SQL fan. Hibernate allows you to hook in some logging to your programs: I suggest you switch it on. Code is a heck of a lot easier to debug when it has decent instrumentation.
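As a rough sketch of what that Bulk DML Exception Handling could look like in PL/SQL; the staging table and collection here are hypothetical, used only to show the SAVE EXCEPTIONS pattern:
DECLARE
  TYPE t_rows IS TABLE OF my_table%ROWTYPE;
  l_rows      t_rows;
  dml_errors  EXCEPTION;
  PRAGMA EXCEPTION_INIT(dml_errors, -24381);   -- raised by FORALL ... SAVE EXCEPTIONS
BEGIN
  SELECT * BULK COLLECT INTO l_rows FROM staging_table;   -- hypothetical source of the batch
  FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
    INSERT INTO my_table VALUES l_rows(i);
EXCEPTION
  WHEN dml_errors THEN
    FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                           ' failed: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
    END LOOP;
END;
/
The point is that the good rows go in while the offending rows are reported with their position in the batch, which is exactly the trace information missing here.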
As a last resort, disable the constraint before running the batch insert. Afterwards re-enable it like this:
alter table t42
enable constraint t42_uk
exceptions into my_exceptions
/
This will fail if you have duplicate rows, but crucially the MY_EXCEPTIONS table will list all the rows which clash. That at least will give you some clue as to the source of the duplication. If you don't already have an exceptions table you will have to run a script: $ORACLE_HOME/rdbms/admin/utlexcptn.sql (you may need a DBA to gain access to this directory).
tl;dr
insight requires information: instrument your code.
The one that has problems is an EE database and the other looks like an SE database. I expect that the first is on quicker hardware. If that is the case, and your date column is filled using SYSDATE, it could very well be that the time resolution is not enough and you get duplicate date values. If the other columns of your data are also not unique, you get ORA-00001.
It's a long shot, but at first sight I would look in this direction.
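With the constraint disabled as described above, a simple grouping query (column names taken from the table definition earlier in the question) would show whether SYSDATE resolution really produced clashing keys:
SELECT logdate, serviceid, COUNT(*)
  FROM my_table
 GROUP BY logdate, serviceid
HAVING COUNT(*) > 1;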
Can you use an exception table to identify the data? See Reporting Constraint Exceptions
My guess would be the service id. Whatever service_id Hibernate is using for the 'fresh' insert has already been used.
Possibly the table is empty in one database but populated in the other.
I'm betting, though, that the service_id is sequence-generated and the sequence number is out of sync with the data content. So you have the same 1000 rows in the table, but doing
SELECT service_id_seq.nextval FROM DUAL
in one database gives a lower number than in the other. I see this a lot where the sequence has been created (e.g. out of source control) and data has been imported into the table from another database.
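If you want to check that theory, a quick comparison along these lines would do; the sequence name follows the guess above and the TO_NUMBER assumes the ids are numeric, so adjust to your schema:
-- highest id already stored vs. what the sequence will hand out next
SELECT MAX(TO_NUMBER(serviceid)) AS max_in_table FROM my_table;
SELECT service_id_seq.NEXTVAL    AS next_from_seq FROM dual;
-- if next_from_seq is lower, the sequence is behind the imported data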

Maximum number of columns in a LINQ to SQL object?

I have 62 columns in a table under SQL Server 2005, and LINQ to SQL doesn't handle the updates, though reading works just fine. I tried re-adding the table to the model and created a new data model, but nothing worked. I'm guessing I've hit a maximum number of columns limit on an object; can anyone explain that?
I suspect there is some issue with an identity or timestamp column (something autogenerated on the SQL server). Make sure that any column that is autogenerated is marked that way in the model. You might also want to look at how it is handling concurrency. If you have triggers that update any values on the row after it is updated (changing values) and it is checking all columns on updates, this would cause the update to fail. Typically I create my tables with a timestamp column -- LINQ2SQL picks this up when I generate the model and uses it alone for concurrency.
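As a sketch of that last point (the table and column names are made up; on SQL Server 2005 the type is called timestamp, with rowversion as the later synonym), adding a version column gives LINQ to SQL a single value to check for optimistic concurrency instead of comparing every column:
-- hypothetical table name; one timestamp column per table is allowed
ALTER TABLE dbo.MyWideTable ADD RowVer timestamp;
-- after refreshing the model, LINQ to SQL marks this column as the version column
-- and uses it alone in the WHERE clause of the UPDATEs it generates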
Solved; it was one of the following two:
-I was using a UniqueIdentifier column that was not set as the primary key.
-I set the unique ID as the primary key, checked the properties of the same column in Server Explorer and it was still not showing as the primary key, refreshed the connection, dropped the same table onto the model, and voila.
So I assume I made a change to my model some time before, deleted the table from the model, and added the same table from Server Explorer without refreshing the connection, and it never worked.
Question is: does VS Server Explorer maintain its own table schema and require a connection refresh every time a change is made in the database?
There is no limit to the number of columns LINQ to SQL will handle.
Have you got other tables updating successfully?
What else is different about how you are accessing the table content?
