I am having trouble setting up the Oracle ODBC driver from Instantclient_21_7. I tried to add it as a user DSN and test the connection, but upon clicking Test Connection it just crashes, with no error message whatsoever. I enabled logging and it always shows the following:
odbcad32 2da8-b40 ENTER SQLAllocEnv
HENV * 0x000D9C94
odbcad32 2da8-b40 EXIT SQLAllocEnv with return code 0 (SQL_SUCCESS)
HENV * 0x000D9C94 ( 0x03FBDEB8)
odbcad32 2da8-b40 ENTER SQLAllocConnect
HENV 0x03FBDEB8
HDBC * 0x000D9C98
odbcad32 2da8-b40 EXIT SQLAllocConnect with return code 0 (SQL_SUCCESS)
HENV 0x03FBDEB8
HDBC * 0x000D9C98 ( 0x06A53198)
odbcad32 2da8-b40 ENTER SQLDriverConnectW
HDBC 0x06A53198
HWND 0x00120D9C
WCHAR * 0x58BB2430 [ -3] "******\ 0"
SWORD -3
WCHAR * 0x58BB2430
SWORD -3
SWORD * 0x00000000
UWORD 1 <SQL_DRIVER_COMPLETE>
I hope someone can help me; I have been going round and round on this issue for over a month now. I need to have it set up because I will be using the driver from an Excel macro to execute queries against an Oracle DB.
Thank You.
In an Excel sheet, for the input below, I have to filter on NET first where NET = APB, filter the CODE values to WDL and LRTF, and then use a pivot table to get the output with counts.
But I need code in Oracle to produce the following output:
INPUT:
STTID        AMOUNT  NET  CODE
SVPC12309A     5000  NFS  SOP
SVPC12309A    10000  NFS  WDL
000DHP11291    2500  APB  WDL
SVPC12309A     3000  CMV  LRTF
SVPC12309A     3000  CMV  WDL
DHP12341       4500  APB  LRTF
DHP23451       9500  APB  LRTF
DHP12341       5500  APB  LRTF
OUTPUT:
STTID        LRTF  WDL  TOTAL
000DHP11291     0    1      1
DHP12341        2    0      2
DHP23451        1    0      1
It appears you want something like
select sttid,
       sum( case when code = 'LRTF' then 1 else 0 end ) lrtf,
       sum( case when code = 'WDL'  then 1 else 0 end ) wdl,
       sum( case when code in ('WDL', 'LRTF') then 1 else 0 end ) total
  from your_table_name
 where net = 'APB'
 group by sttid
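If you prefer the pivot form, the same result can be written with Oracle's PIVOT clause (available since 11g). This is only a sketch under the same assumptions as above: a hypothetical table called your_table_name with the STTID, NET and CODE columns shown in the input.
select sttid, lrtf, wdl, lrtf + wdl as total
  from (
        select sttid, code
          from your_table_name
         where net = 'APB'                 -- same NET filter used in Excel
           and code in ('LRTF', 'WDL')     -- only the codes being counted
       )
 pivot (
        count(*) for code in ('LRTF' as lrtf, 'WDL' as wdl)
       )
 order by sttid;
Both versions should return the three rows shown in the expected output.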
I'm using mrskew by Method R to analyze Oracle SQL trace files.
I want to list all database calls, similar to the output of calls.rc, but instead of the value of $tim I'd like to print a human-readable date format.
Raw data (minimally obfuscated):
*** 2020-11-26 10:06:01.867
*** SESSION ID:(1391.49878) 2020-11-26 10:06:01.867
*** CLIENT ID:() 2020-11-26 10:06:01.867
*** SERVICE NAME:(SYS$USERS) 2020-11-26 10:06:01.867
*** MODULE NAME:(JDBC Thin Client) 2020-11-26 10:06:01.867
*** CLIENT DRIVER:(jdbcthin : 12.2.0.1.0) 2020-11-26 10:06:01.867
*** ACTION NAME:() 2020-11-26 10:06:01.867
...
WAIT #0: nam='SQL*Net message from client' ela= 491 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=12568091328841
=====================
PARSING IN CURSOR #18446744071522016088 len=71 dep=0 uid=88 oct=7 lid=88 tim=12568091329190 hv=2304270232 ad='61e4d11e0' sqlid='5kpbj024phrws'
/*Begin.Work*/
SELECT ...
END OF STMT
PARSE #18446744071522016088:c=147,e=148,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=957996380,tim=12568091329190
...
EXEC #18446744071522016088:c=683,e=11406,p=0,cr=2,cu=11,mis=0,r=1,dep=0,og=1,plh=957996380,tim=12568091341788
CLOSE #18446744071522016088:c=27,e=27,dep=0,type=1,tim=12568091343665
XCTEND rlbk=0, rd_only=0, tim=12568091343769
Current output (compacted for readability):
END-TIM LINE SQL_ID CALL-NAME STATEMENT-TEXT
-----------------------------------------------------------------------------
12568091.341788 36 5kpbj024phrws EXEC /*Begin.Work*/ SELECT ...
12568091.343769 42 XCTEND
Expected output (please don't criticise my incorrect subsecond calculation):
END-TIME LINE SQL_ID CALL-NAME STATEMENT-TEXT
-----------------------------------------------------------------------
2020-11-26 10:06:01.341788 36 5kpbj024phrws EXEC /*Begin.Work*/ SELECT ...
2020-11-26 10:06:01.343769 42 XCTEND
I assume I can use POSIX::strftime to format the timestamp properly, but I need a way to generate an epoch timestamp from the timestamp at the beginning of the trace file
*** 2020-11-26 10:06:01.867
and then an offset for each $tim relative to this beginning of the trace file.
I hope the Method R toolset can provide this. It would make it easier for me to explain when (in human-readable form) each activity started.
Generating an epoch seconds number from a string date is simple enough. Time::Local has functions to do it.
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
use Time::Local 'timelocal_posix';
my $date = '2020-11-26 10:06:01.867';
# Split the date apart
my ($yr, $mon, $day, $hr, $min, $sec, $micro) = split /[- :.]/, $date;
# Note necessary adjustments to month and year
say timelocal_posix($sec, $min, $hr, $day, $mon - 1, $yr - 1900);
With the help of comments by @DaveCross and @CaryMillsap, my processing is now:
From the trace file, create an intermediate file similar to
*** 2020-11-26 10:06:01.867
END-TIM LINE SQL_ID CALL-NAME STATEMENT-TEXT
-----------------------------------------------------------------------------
XCTEND tim=12568091341788 e=2 dep=0 36 5kpbj024phrws EXEC /*Begin.Work*/ SELECT ..
XCTEND tim=12568091343769 e=1 dep=0 42 XCTEND
using, in calls.rc,
sprintf("XCTEND tim=%-20d e=%-5d dep=0 %10d %10d %10d %13s %-40.40s %-.46s", $tim*1000000, $line, ($e+$ela)*1000000, $parse_id, $exec_id, $sqlid, "· "x$dep.$name.(scalar(@bind)?"(".join(",",@bind).")":""), "· "x$dep.$sql)
then modify the result so that somewhere near the top it contains
*** 2020-11-26 10:06:01.867
process this file with mrwhen
and get rid of the unwanted parts with
sed -E 's/XCTEND t.{37}//'
In more detail it's documented here.
I have this code.
Does this cron expression mean "run this method every Sunday at 01:00 a.m.", or am I making a mistake translating it?
#Scheduled(cron = "0 1 0 * * ?")
private void notificationsScheduler() {
//implementation
}
You are wrong; it means every day.
Your expression
"0 1 0 * * ?"
means: at 00:01:00 a.m. every day.
As per your requirement: at 01:00:00 a.m., every Sunday, every month.
Use:
0 0 1 ? * SUN *
See https://www.freeformatter.com/cron-expression-generator-quartz.html for more detail.
Can anyone suggest a way to set a cron trigger to run every hour?
I tried something like this:
#Scheduled(cron = "0 0 0/1 1/1 * ? *")
but when I run the server I get an error:
Error 404--Not Found
What is wrong with this cron expression?
Try any one of these:
0 0 0/1 * * ?
0 0 * * * ?
This means: at 0 seconds and 0 minutes of every hour.
I've used the following cron statement:
#Scheduled(cron = "59 * * * * *")
We are in the process of upgrading to Oracle 12c and I need to track the queries being executed by the application. In other words, if the application executes a query like select 'foobar' from dual; I would like to see the text "select 'foobar' from dual" in the output file.
If I follow the instructions here: https://docs.oracle.com/database/121/TGSQL/tgsql_trace.htm#TGSQL809 I get files that contain statistics like the following, but not the actual SQL queries.
WAIT #0: nam='rdbms ipc message' ela= 2999770 timeout=300 p2=0 p3=0 obj#=-1 tim=1103506389
WAIT #0: nam='rdbms ipc message' ela= 9854 timeout=1 p2=0 p3=0 obj#=-1 tim=1103522400
*** 2016-04-07 15:07:20.715
WAIT #0: nam='rdbms ipc message' ela= 2999585 timeout=300 p2=0 p3=0 obj#=-1 tim=1106522506
WAIT #0: nam='rdbms ipc message' ela= 9690 timeout=1 p2=0 p3=0 obj#=-1 tim=1106532500
If I look for the query like this I get 0 results: grep -rnw "foobar" --include=*.trc ./
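For context, the session-level tracing approach in that guide typically boils down to DBMS_MONITOR calls like the sketch below; the username and the SID/SERIAL# values (1234/56789) are placeholders, not what I actually used.
-- find the application's session; 'MYAPPUSER' is a placeholder
select sid, serial#, username, program
  from v$session
 where username = 'MYAPPUSER';
-- enable tracing for that specific session, then disable it when done
exec DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 1234, serial_num => 56789, waits => TRUE, binds => FALSE);
exec DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 1234, serial_num => 56789);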
One option is looking up the AWR repository for it. It keeps a few days' worth of SQL. There's plenty of additional information in the system views; this query returns strictly the text, but feel free to explore.
SELECT DISTINCT u.username, to_char(substr(h.sql_text, 1, 4000)) sqltxt
FROM dba_hist_sqltext h
JOIN dba_hist_active_sess_history a ON a.sql_id = h.sql_id
JOIN dba_users u ON u.user_id = a.user_id
WHERE username = 'SYS';
I filtered the results for SYS just as an example, but you can change it as you wish.
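If you also want to see when each statement was active, here is a sketch of the same join extended with the ASH sample times (still only the history views used above; grouping by sql_id to get one row per statement is my own addition):
SELECT u.username,
       h.sql_id,
       MIN(a.sample_time) first_seen,
       MAX(a.sample_time) last_seen,
       MIN(to_char(substr(h.sql_text, 1, 4000))) sqltxt
  FROM dba_hist_sqltext h
  JOIN dba_hist_active_sess_history a ON a.sql_id = h.sql_id
  JOIN dba_users u ON u.user_id = a.user_id
 WHERE u.username = 'SYS'
 GROUP BY u.username, h.sql_id;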
If you would like to see all the activity, the best thing to do is have EM (Enterprise Manager) set up for you.
If you don't, gv$active_session_history would be a good call; it's better viewed with grouping functions, since simply selecting from it would be a mess depending on the number of calls your application is pushing.
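As a rough sketch of what such a grouping query could look like (the view and its sql_id, sql_opname and sample_time columns are standard; the one-hour window and the ordering are just assumptions):
select sql_id,
       sql_opname,
       count(*) samples
  from gv$active_session_history
 where sample_time > sysdate - 1/24   -- last hour of samples
 group by sql_id, sql_opname
 order by samples desc;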
Another way: you could look at it in terms of averages:
select s.parsing_schema_name,
       inst_id,
       sql_id,
       plan_hash_value,
       child_number,
       round(nullif(s.ELAPSED_TIME, 0) / nullif(s.EXECUTIONS, 0) / 1000000, 4) elap_per_exec,
       round(s.USER_IO_WAIT_TIME / nullif(s.ELAPSED_TIME, 0) * 100, 2) io_wait_pct,
       round(s.CLUSTER_WAIT_TIME / nullif(s.ELAPSED_TIME, 0) * 100, 2) cluster_wait_pct,
       round(s.application_wait_time / nullif(s.ELAPSED_TIME, 0) * 100, 2) app_wait_pct,
       round(s.CPU_TIME / nullif(s.ELAPSED_TIME, 0) * 100, 2) cpu_time_pct,
       round(s.PHYSICAL_READ_BYTES / nullif(s.EXECUTIONS, 0) / 1024 / 1024, 2) pio_per_exec_mb,
       round(s.PHYSICAL_READ_BYTES / nullif(s.PHYSICAL_READ_REQUESTS, 0), 2) / 1024 read_per_request_kbytes,
       round(s.buffer_gets / nullif(s.executions, 0), 4) BufferGets_per_Exec,
       s.executions,
       to_char(s.last_active_time, 'dd/mm/yyyy hh24:mi:ss') last_act_time,
       s.first_load_time,
       s.sql_fulltext,
       s.sql_profile,
       s.sql_patch,
       s.sql_plan_baseline
  FROM gv$sql s
 WHERE 1=1
   and s.parsing_schema_name in ('LIST OF DATABASE USERS YOU WANT TO MONITOR')
 order by s.last_active_time desc;
It would give a good perspective of how well you're doing based on your average thresholds.
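And for the specific 'foobar' example in the question, you can check directly whether the statement text was captured in the shared pool at all; a minimal sketch (the FOOBAR literal is just that example):
select inst_id, sql_id, parsing_schema_name, last_active_time
  from gv$sql
 where upper(sql_fulltext) like '%FOOBAR%';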