ORA-01747: invalid user.table.column on Nifi PutDatabaseRecord - oracle

I am trying to update some columns in an Oracle database with NiFi.
This is the relevant part of the flow:
The problem is with the last PutDatabaseRecord:
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | 2021-07-12 17:34:20,919 ERROR [Timer-Driven Process Thread-2] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=017a10b4-fe2c-1b89-f752-67545ebb8406] Failed to put Records to database for StandardFlowFileRecord[uuid=db4a0554-6ea8-4cca-b1da-11bdacef1ccc,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1625839380170-6543, container=default, section=399], offset=292967, length=68],offset=0,name=f4d1875f-bec4-4944-ad8e-7955048148f3,size=68]. Routing to failure.: java.sql.BatchUpdateException: ORA-01747: invalid user.table.column, table.column, or column specification
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain |
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | java.sql.BatchUpdateException: ORA-01747: invalid user.table.column, table.column, or column specification
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain |
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at oracle.jdbc.driver.OraclePreparedStatement.executeLargeBatch(OraclePreparedStatement.java:10032)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at oracle.jdbc.driver.T4CPreparedStatement.executeLargeBatch(T4CPreparedStatement.java:1364)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:9839)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:234)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.commons.dbcp2.DelegatingStatement.executeBatch(DelegatingStatement.java:242)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.commons.dbcp2.DelegatingStatement.executeBatch(DelegatingStatement.java:242)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at sun.reflect.GeneratedMethodAccessor156.invoke(Unknown Source)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at java.lang.reflect.Method.invoke(Method.java:498)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:254)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.access$100(StandardControllerServiceInvocationHandler.java:38)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler$ProxiedReturnObjectInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:240)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at com.sun.proxy.$Proxy149.executeBatch(Unknown Source)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.processors.standard.PutDatabaseRecord.executeDML(PutDatabaseRecord.java:754)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.processors.standard.PutDatabaseRecord.putToDatabase(PutDatabaseRecord.java:841)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.processors.standard.PutDatabaseRecord.onTrigger(PutDatabaseRecord.java:487)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1173)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
nifi_ml_nifi.1.rb4f8g690fro#KoshDomain | at java.lang.Thread.run(Thread.java:748)
This is the configuration of the problematic processor:
This is the schema of the RecordReader:
{
  "name": "load_date",
  "type": "record",
  "namespace": "maxi",
  "fields": [
    {
      "name": "doc_id",
      "type": "int"
    },
    {
      "name": "line_id",
      "type": "int"
    },
    {
      "name": "load_date",
      "type": "string"
    }
  ]
}
And this is a sample of the JSON data coming into the processor:
[{"doc_id":1795576199,"line_id":689617855,"load_date":"2021-34-12"}]
UPDATE
OK, I set PutDatabaseRecord to DEBUG level and processed a single record to capture the full debug output from the processor. This is the head of the log, exactly as the processor starts to handle the record:
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | 2021-07-13 14:15:23,951 INFO [NiFi Web Server-456] o.a.n.c.s.StandardProcessScheduler Starting SplitJson[id=9f810155-017a-1000-890c-4b1e382a161e]
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | 2021-07-13 14:15:23,951 INFO [NiFi Web Server-456] o.a.n.controller.StandardProcessorNode Starting SplitJson[id=9f810155-017a-1000-890c-4b1e382a161e]
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | 2021-07-13 14:15:23,964 INFO [Timer-Driven Process Thread-1] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled SplitJson[id=9f810155-017a-1000-890c-4b1e382a161e] to run with 1 threads
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | 2021-07-13 14:15:23,976 ERROR [Timer-Driven Process Thread-1] o.a.n.p.standard.PutDatabaseRecord PutDatabaseRecord[id=017a10b4-fe2c-1b89-f752-67545ebb8406] Failed to put Records to database for StandardFlowFileRecord[uuid=9c53fd8b-775b-42a6-bcae-fb4208a40985,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1626174735665-61, container=default, section=61], offset=776075, length=330],offset=0,name=51510f97-7a54-4394-966a-12775653acb0,size=66]. Routing to failure.: java.sql.SQLDataException: Cannot map field 'doc_id' to any column in the database
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | Columns:
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | java.sql.SQLDataException: Cannot map field 'doc_id' to any column in the database
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | Columns:
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | at org.apache.nifi.processors.standard.PutDatabaseRecord.generateUpdate(PutDatabaseRecord.java:1073)
(I added one more processor there: SplitJson.)
I also created a table in my own schema in the DB.
This is the DDL:
CREATE TABLE psheom.ml_task (
    doc_id    NUMBER,
    line_id   NUMBER,
    load_date DATE,
    CONSTRAINT pk_ml_task PRIMARY KEY (doc_id, line_id)
)
And this is the result of DESCRIBE psheom.ml_task:
Name      Null?    Type
--------- -------- ------
DOC_ID    NOT NULL NUMBER
LINE_ID   NOT NULL NUMBER
LOAD_DATE          DATE
UPDATE
I tried to use an ExecuteSQL processor as @pmdba suggested, but I get an error. Here is the configuration:
And here is the error:
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | 2021-07-13 14:57:19,137 ERROR [Timer-Driven Process Thread-6] o.a.nifi.processors.standard.ExecuteSQL ExecuteSQL[id=9fb97e49-017a-1000-bae9-256f569c239b] Unable to execute SQL select query describe psheom.ml_task due to java.sql.SQLSyntaxErrorException: ORA-00900: invalid SQL statement
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | . No FlowFile to route to failure: java.sql.SQLSyntaxErrorException: ORA-00900: invalid SQL statement
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain |
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | java.sql.SQLSyntaxErrorException: ORA-00900: invalid SQL statement
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain |
nifi_ml_nifi.1.zutify8jh9sv#KoshDomain | at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:494)
UPDATE
I set Translate Field Names to false, but the result is the same.

OK, so there are quite a "few things" we need to do to UPDATE a record in Oracle.
First of all, DO NOT use DBeaver! It is very glitchy! Use SQL Developer for working with Oracle.
Second, open a session as the same user that is configured for the connection pool on the NiFi side, and run DESCRIBE for the table you need to update:
DESCRIBE GOODS.ML_TASK
Make sure it succeeds!
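By the way, DESCRIBE is a SQL*Plus / SQL Developer command rather than plain SQL, which is why the ExecuteSQL attempt above failed with ORA-00900. If you want to check the columns from inside NiFi, a data-dictionary query works instead; a minimal sketch, using the owner and table name from the question:
select column_name, data_type, nullable
from all_tab_columns
where owner = 'PSHEOM' and table_name = 'ML_TASK'
order by column_id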
Third, we need to upper-case all entity names, and it is good to set Translate Field Names to false, because the translation also removes underscores, i.e. doc_id -> docid.
Fourth, you need JSON with upper-case field names in the processor's input:
[
{"DOC_ID":1799041400,"LINE_ID":694098344,"LOAD_DATE":"14-Jul-21"},
{"DOC_ID":1802019315,"LINE_ID":697885808,"LOAD_DATE":"14-Jul-21"}
]
If you have lower-case fields somewhere, use an UpdateRecord processor with a reader and writer, where the reader will upper-case the fields:
The schema defined for the PutDatabaseRecord reader MUST be upper-cased:
{
  "name": "load_date",
  "type": "record",
  "namespace": "maxi",
  "fields": [
    {
      "name": "DOC_ID",
      "type": "int"
    },
    {
      "name": "LINE_ID",
      "type": "int"
    },
    {
      "name": "LOAD_DATE",
      "type": "string"
    }
  ]
}
So here you have:
UpdateRecord:
IncomingJsonReader, to accept the record in lower case and turn it into upper case
IncomingJsonWriter
PutDatabaseRecord:
OutgoingJsonReader
All of them use THE SAME JSON SCHEMA.
Fifth, as you have seen, use the string type for the date.
You can send the date as a number of milliseconds since the epoch, but, despite the fact that it worked in my own schema, it didn't work in the production schema. So you need to query NLS_DATE_FORMAT.
Put the appropriate query in an ExecuteSQL processor:
select * from nls_session_parameters where parameter = 'NLS_DATE_FORMAT'
You will get the format in the result queue:
[{"PARAMETER": "NLS_DATE_FORMAT", "VALUE": "DD-MON-RR"}]
Use it to format the date the appropriate way.
Sixth, as we have seen, put the right date format into the Expression Language expression in the right place; in my case, in UpdateRecord:
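For instance, a minimal sketch (not my exact configuration, and assuming the incoming strings look like 2021-07-14): in UpdateRecord, with Replacement Value Strategy set to Literal Value, a /LOAD_DATE property could use
${field.value:toDate('yyyy-MM-dd'):format('dd-MMM-yy')}
to turn the string into the DD-MON-RR style reported by NLS_DATE_FORMAT.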
Seventh, make sure you have just a single record, or only a few records, passing through the queues, so you are able to follow the log. Also put PutDatabaseRecord in DEBUG mode:
That way it will tell you whether it succeeded in fetching the schema from the DB.
So... I wish you happy Oracle table updates!

Related

Is it possible to change an index's sorting type from ascending to descending without dropping the index?

So I have created an index named IDX_TEST on table TEST,
CREATE UNIQUE INDEX "IDX_TEST" ON TEST ("ID")
then I want to change the sorting type of the index from ASC to DESC.
Can we achieve that without dropping the current index?
You cannot alter an index to change the sort order. For that you have to create a new index. If you cannot afford to be without an index on that column at any time, and you would like to keep the same name for the descending index, you could (see the sketch below the list):
create the new index with a temporary name,
drop the old index,
and then alter your new descending index to change its name.
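A minimal sketch of that sequence for the index from the question (IDX_TEST_DESC is just a made-up temporary name):
CREATE UNIQUE INDEX IDX_TEST_DESC ON TEST (ID DESC);
DROP INDEX IDX_TEST;
ALTER INDEX IDX_TEST_DESC RENAME TO IDX_TEST;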
Below are all the possible options for ALTER INDEX (as of Oracle 21c):
ALTER INDEX [ schema. ]index
{ { deallocate_unused_clause
| allocate_extent_clause
| shrink_clause
| parallel_clause
| physical_attributes_clause
| logging_clause
| partial_index_clause
} ...
| rebuild_clause
| PARAMETERS ( 'ODCI_parameters' )
| COMPILE
| { ENABLE | DISABLE }
| UNUSABLE [ ONLINE ] [ { DEFERRED | IMMEDIATE } INVALIDATION ]
| VISIBLE | INVISIBLE
| RENAME TO new_name
| COALESCE [ CLEANUP ] [ ONLY ] [ parallel_clause ]
| { MONITORING | NOMONITORING } USAGE
| UPDATE BLOCK REFERENCES
| alter_index_partitioning
}
;
It's worth mentioning that an ascending index can also be read in descending order, so overall a descending index only brings a benefit in a few cases.
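For example, a query such as
select id from TEST order by id desc;
can typically be satisfied by reading the ascending IDX_TEST backwards (an INDEX FULL/RANGE SCAN DESCENDING in the plan); the classic case where a real descending index helps is a multi-column ORDER BY with mixed sort directions.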

Can varchar datatype be a timestamp in Confluent?

I'm using Confluent to implement real-time ETL.
My data source is Oracle; every table has a column named ts. Its data type is VARCHAR, but the data in this column is in YYYY-MM-DD HH24:MI:SS format.
Can I use this column as the timestamp column in the Confluent Kafka JDBC connector?
How should I configure the xxxxx.properties file?
mode=timestamp
query= select to_date(a.ts,'yyyy-mm-dd hh24:mi:ss') tsinc,a.* from TEST_CORP a
poll.interval.ms=1000
timestamp.column.name=tsinc
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
query=select * from NFSN.BD_CORP
mode=timestamp
poll.interval.ms=3000
timestamp.column.name=TS
topic.prefix=t_
validate.non.null=false
then I get this error:
[2018-12-25 14:39:59,756] INFO After filtering the tables are:
(io.confluent.connect.jdbc.source.TableMonitorThread:175) [2018-12-25
14:40:01,383] DEBUG Checking for next block of results from
TimestampIncrementingTableQuerier{table=null, query='select * from
NFSN.BD_CORP', topicPrefix='t_', incrementingColumn='',
timestampColumns=[TS]}
(io.confluent.connect.jdbc.source.JdbcSourceTask:291) [2018-12-25
14:40:01,386] DEBUG TimestampIncrementingTableQuerier{table=null,
query='select * from NFSN.BD_CORP', topicPrefix='t_',
incrementingColumn='', timestampColumns=[TS]} prepared SQL query:
select * from NFSN.BD_CORP WHERE "TS" > ? AND "TS" < ? ORDER BY "TS"
ASC
(io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier:161)
[2018-12-25 14:40:01,386] DEBUG executing query select
CURRENT_TIMESTAMP from dual to get current time from database
(io.confluent.connect.jdbc.dialect.OracleDatabaseDialect:462)
[2018-12-25 14:40:01,388] DEBUG Executing prepared statement with
timestamp value = 1970-01-01 00:00:00.000 end time = 2018-12-25
06:40:43.828
(io.confluent.connect.jdbc.source.TimestampIncrementingCriteria:162)
[2018-12-25 14:40:01,389] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{table=null, query='select * from
NFSN.BD_CORP', topicPrefix='t_', incrementingColumn='',
timestampColumns=[TS]}: {}
(io.confluent.connect.jdbc.source.JdbcSourceTask:314)
java.sql.SQLDataException: ORA-01843: not a valid month
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:447)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:951)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:513)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:208)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:886)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1175)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1296)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3613)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3657)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1495)
at io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier.executeQuery(TimestampIncrementingTableQuerier.java:168)
at io.confluent.connect.jdbc.source.TableQuerier.maybeStartQuery(TableQuerier.java:88)
at io.confluent.connect.jdbc.source.TimestampIncrementingTableQuerier.maybeStartQuery(TimestampIncrementingTableQuerier.java:60)
at io.confluent.connect.jdbc.source.JdbcSourceTask.poll(JdbcSourceTask.java:292)
at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:244)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:220)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748) [2018-12-25 14:40:01,390] DEBUG Resetting querier
TimestampIncrementingTableQuerier{table=null, query='select * from
NFSN.BD_CORP', topicPrefix='t_', incrementingColumn='',
timestampColumns=[TS]}
(io.confluent.connect.jdbc.source.JdbcSourceTask:332) ^C[2018-12-25
14:40:03,826] INFO Kafka Connect stopping
(org.apache.kafka.connect.runtime.Connect:65) [2018-12-25
14:40:03,827] INFO Stopping REST server
(org.apache.kafka.connect.runtime.rest.RestServer:223)
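Reading the log: the generated WHERE clause compares the VARCHAR column "TS" directly against timestamp bind values, which is the likely source of ORA-01843. A hedged sketch of the query-mode conversion the first config above was aiming at, using to_timestamp instead of the question's to_date so the derived column is a real timestamp (format mask assumed from the question):
query=select to_timestamp(a.ts,'yyyy-mm-dd hh24:mi:ss') tsinc, a.* from NFSN.BD_CORP a
timestamp.column.name=tsinc
mode=timestamp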

drill not showing hive or hbase tables

I've created both an HBase and a Hive table to store some data-logging information. I can query both HBase and Hive from the command line, no problem.
hbase: scan MVLogger; // comes back with 9k plus records
hive: select * from MVLogger; // comes back with 9k plus records
My HBase table definition (from describe; the table is ENABLED) is:
'MVLogger', {NAME => 'dbLogData', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
My hive (external) table definition is:
CREATE EXTERNAL TABLE `MVLogger`(
`rowid` int,
`ID` int,
`TableName` string,
`CreatedDate` string,
`RowData` string,
`ClientDB` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.hbase.HBaseSerDe'
STORED BY
'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
'serialization.format'='1',
'hbase.columns.mapping'=':key,dbLogData:ID,dbLogData:TableName,dbLogData:CreatedDate,dbLogData:RowData,dbLogData:ClientDB')
TBLPROPERTIES (
'hbase.table.name'='MVLogger')
When I use sqlline and look at the Drill schemas, this is what I see:
0: jdbc:drill:zk=ip-*.compu> show schemas;
+-------------+
| SCHEMA_NAME |
+-------------+
| hive.default |
| dfs.default |
| dfs.root |
| dfs.tmp |
| cp.default |
| hbase |
| sys |
| INFORMATION_SCHEMA |
+-------------+
and when I do a use [schema] (any of them except sys) and then do a show tables, I get nothing... For example:
0: jdbc:drill:zk=ip-*.compu> use hbase;
+------------+------------+
| ok | summary |
+------------+------------+
| true | Default schema changed to 'hbase' |
+------------+------------+
1 row selected (0.071 seconds)
0: jdbc:drill:zk=ip-*.compu> show tables;
+--------------+------------+
| TABLE_SCHEMA | TABLE_NAME |
+--------------+------------+
+--------------+------------+
No rows selected (0.37 seconds)
In the Drill Web UI (Ambari), under storage options for Drill, I see hbase and hive both enabled. The configuration for the hive storage plugin is the following:
{
"type": "hive",
"enabled": true,
"configProps": {
"hive.metastore.uris": "thrift://ip-*.compute.internal:9083",
"hive.metastore.warehouse.dir": "/apps/hive/warehouse/",
"fs.default.name": "hdfs://ip-*.compute.internal:8020/",
"hive.metastore.sasl.enabled": "false"
}
}
Any ideas why I'm not able to query hive/hbase?
Update: The table is showing up in the hive schema now, but when I try to query it with a simple select * from ... it just hangs, and I can't find anything in any of the log files. The hive table's actual data store is HBase, BTW.
Found out HBase 0.98 is not yet compatible with the Drill HBase plugin... http://mail-archives.apache.org/mod_mbox/incubator-drill-user/201410.mbox/%3CCAKa9qDmN_fZ8V8W1JKW8HVX%3DNJNae7gR-UMcZC9QwKVNynQJkA%40mail.gmail.com%3E
It's maybe too late, but for others who may see this post and have the same issue:
0: jdbc:drill:zk=ip-*.compu> use hbase;
+------------+------------+
| ok | summary |
+------------+------------+
| true | Default schema changed to 'hbase' |
+------------+------------+
1 row selected (0.071 seconds)
0: jdbc:drill:zk=ip-*.compu> show tables;
+--------------+------------+
| TABLE_SCHEMA | TABLE_NAME |
+--------------+------------+
+--------------+------------+
No rows selected (0.37 seconds)
The user that is running Drill has no access permission on HBase. Grant the Drill user access on HBase and you will see the tables.
Try going into the hbase shell as the Drill user and running "list"; it will also be empty until you grant permission, then you will see the tables.
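A minimal sketch of that grant from the hbase shell (assuming the Drillbit runs as a user named drill; adjust the user name to your setup):
grant 'drill', 'R', 'MVLogger'
list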

how to compare values from "User defined variables" with values from "JDBC Request" in jMeter

I want to compare (assert, maybe) some values from "User Defined Variables" with values obtained from a DB query using a "JDBC Request" in JMeter. The thing is, after I do the SELECT query I get only the column names and not the values. How can I do this comparison, step by step?
Thank you!
For instance, a MySQL server has a "mysql" database. In this database there is a "help_keyword" table which looks as follows:
MariaDB [mysql]> describe help_keyword;
+-----------------+------------------+------+-----+---------+-------+
| Field           | Type             | Null | Key | Default | Extra |
+-----------------+------------------+------+-----+---------+-------+
| help_keyword_id | int(10) unsigned | NO   | PRI | NULL    |       |
| name            | char(64)         | NO   | UNI | NULL    |       |
+-----------------+------------------+------+-----+---------+-------+
2 rows in set (0.00 sec)
So if you configure your JDBC Request to select the first row as
select * from help_keyword limit 1;
It'll return the following:
help_keyword_id name
0 JOIN
Suppose you need to assert this JOIN keyword. To do so:
Add User Defined Variables configuration element and define KEYWORD variable with the value of JOIN
Add JDBC Request configured as follows:
Query Type
Select Statement
Query
select * from help_keyword limit 1;
Variable names
id,name
Add Response Assertion as a child of JDBC Request configured as follows:
Apply to
JMeter Variable: name_1
Patterns to Test
${KEYWORD}
The Test Plan above will check whether the value in the 1st row of the "name" column equals JOIN.
See How to Use JMeter Assertions in 3 Easy Steps guide for more information on how JMeter Assertions can be used.
Use "Response Assertion" for your JDBC request.
Select the below mentioned properties of "Response Assertion":
Apply to: "Jmeter Variable" e.g. ${value}, where value is User Defined variable.
Response field to test: "Text Response"
Pattern Matching Rules: As per your requirement.
Hope this will help

optimizing simple select query in oracle

I am trying to optimize the following query:
SELECT tickstime AS time,
quantity1 AS turnover
FROM cockpit_test.ticks
WHERE date_id BETWEEN 20111104 AND 20111109
AND mdc_id IN (297613)
ORDER BY time;
It is pretty simple, but it takes about 60-90 seconds to run. The cockpit_test.TICKS table contains more than 100M rows. It also has an index on the MDC_ID and DATE_ID columns.
EXPLAIN PLAN gives the following output:
"-------------------------------------------------------------------------------------------------------"
"| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |"
"-------------------------------------------------------------------------------------------------------"
"| 0 | SELECT STATEMENT | | 26905 | 604K| | 11783 (1)| 00:02:22 |"
"| 1 | SORT ORDER BY | | 26905 | 604K| 968K| 11783 (1)| 00:02:22 |"
"| 2 | TABLE ACCESS BY INDEX ROWID| TICKS | 26905 | 604K| | 11596 (1)| 00:02:20 |"
"|* 3 | INDEX RANGE SCAN | TICKS_MDC_DATE | 26905 | | | 89 (0)| 00:00:02 |"
"-------------------------------------------------------------------------------------------------------"
" "
"Predicate Information (identified by operation id):"
"---------------------------------------------------"
" "
" 3 - access(""MDC_ID""=297613 AND ""DATE_ID"">=20111104 AND ""DATE_ID""<=20111109)"
So I am not completely sure what all of that means, but it seems the index is being hit and most of the time is being consumed by accessing rows by index rowid.
Are there any ways to make this query run faster?
UPD
Here is the table definition:
Name       Null?    Type
---------- -------- ----------
DATE_ID    NOT NULL NUMBER(38)
MDC_ID     NOT NULL NUMBER(38)
TICKSTIME  NOT NULL DATE
STATE      NOT NULL NUMBER(38)
VALUE1     NOT NULL FLOAT(126)
VALUE2              FLOAT(126)
VOLUME1             FLOAT(126)
VOLUME2             FLOAT(126)
QUANTITY1           NUMBER(38)
QUANTITY2           NUMBER(38)
There are 3 indexes on the table:
Index on MDC_ID
Compound index on DATE_ID, MDC_ID, TICKSTIME
Compound index on DATE_ID, MDC_ID
I would check that this explain plan has accurate estimations of the cardinalities. It's quite typical for the cardinality to be poorly estimated when multiple predicates are supplied, and the execution time seems very high for such a small query and estimated sort size (unless you have grossly underpowered storage infrastructure, which again is pretty typical).
Given the duration of the query I'd make sure that the estimate is accurate by invoking dynamic sampling ...
SELECT
/*+ dynamic_sampling(4) */
tickstime AS time,
quantity1 AS turnover
FROM
cockpit_test.ticks
WHERE
date_id BETWEEN 20111104 AND 20111109 and
mdc_id IN (297613)
ORDER BY
tickstime;
If it turns out that the estimated temp space is smaller than reality (and you can check that by querying V$SQL_WORKAREA_ACTIVE), then you might have to tweak the memory settings for the session to switch to automatic memory management and increase the sort area size.
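For example, a quick look at the active sort work areas while the query is running (a sketch, trimmed to the relevant columns):
select operation_type, expected_size, actual_mem_used, number_passes, tempseg_size
from v$sql_workarea_active;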
In general, Oracle can't combine two separate indexes (unless they're bitmap indexes and not "ordinary" btree indexes).
What is the mdc_id column? If there are many distinct values for it, you could create a compound index on mdc_id, date_id.
In theory, Oracle can use an index to return sorted data. In this case your index should be on mdc_id, date_id, time.
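A sketch of that index (the index name is made up; the column order follows the suggestion above):
create index ticks_mdc_date_time on cockpit_test.ticks (mdc_id, date_id, tickstime);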
Why aren't you using DATE datatypes for your date columns? For this particular query it probably won't make much difference, but in general Oracle will be much better able to determine the distribution of the data if you use the correct datatypes.

Resources