I have a sql_id. The corresponding SELECT query has 4 bind variables.
A program I created tells me that it ran 1000 times in the last month.
So basically I want to know whether the same bind variable values were used all 1000 times.
For the latest execution, I got the bind variable values from v$sql_bind_capture.
So, is whatever is the latest value in v$sql_bind_capture the same value that was used all 1000 times?
Does SQL_ID generation take the bind values into account, or is it generated from the query text without the bind values?
Thanks
Tarun
No, a different bind value passed each time will not cause the SQL_ID to change. A different bind value may cause the plan hash value (PHV) to change, but not the SQL_ID: the SQL_ID is derived from the query text, in which only the bind placeholders appear, not the bind values.
About your main question:
so basically I want to know that all 1000 times the same bind variable was used or not.
There are 2 standard ways to do that:

1. Add the "monitor" hint to the query and check the bind variable values in v$sql_monitor. I have my own script for that: https://github.com/xtender/xt_scripts/blob/master/rtsm/binds.sql
2. Enable tracing for your sql_id:

alter system set events 'sql_trace [sql:&sqlid] bind=true, wait=false';

&sqlid is a substitution variable which you can set to your needed sql_id. Then you can periodically check the bind variable values in the trace files, for example using grep.
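As a quick sanity check on the most recent values (not the full history), a sketch of a query against the view the question already mentions; note that v$sql_bind_capture keeps only the most recently captured value per child cursor and bind position, so it cannot prove anything about all 1000 executions:

```sql
-- Sketch: latest captured bind values for one SQL_ID.
-- Only the most recent capture per child cursor/position is kept,
-- so this does NOT cover every historical execution.
SELECT child_number,
       name,
       position,
       datatype_string,
       value_string,
       last_captured
FROM   v$sql_bind_capture
WHERE  sql_id = '&sqlid'
ORDER  BY child_number, position;
```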
Related
I use Oracle Apex, and I want my 'display only' field to be updated automatically. When I use a Dynamic Action like select 5 * price from ... or, for instance, random values, it works absolutely correctly: the field is filled with the value 5 * price (or the new random value). But when I use select :P4_COUNT * price from ..., the field is empty. I think the problem is in :P4_COUNT (it is a number field), but I do not know what to do.
In the Dynamic Action, look for "Items to Submit" (usually under the SQL or PL/SQL code). Put the names of items that need to be submitted to session state prior to running the code. Also, note that currently, all values in session state are strings. So it's probably best to use to_number if you need a number.
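A hedged sketch of what the Dynamic Action's SQL source might look like with the conversion applied; only :P4_COUNT comes from the question, while the table name and the :P4_PRODUCT_ID item are hypothetical placeholders:

```sql
-- Sketch: session state holds strings, so convert explicitly.
-- "products" and :P4_PRODUCT_ID are hypothetical names for illustration.
SELECT TO_NUMBER(:P4_COUNT) * price
FROM   products
WHERE  product_id = :P4_PRODUCT_ID
```

Remember to list P4_COUNT (and any other referenced items) under "Items to Submit" so their session state is current when the query runs.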
In my report, I have created an xxx parameter which takes its values from the report dataset.
The xxx parameter is not being passed to the stored procedure which is used to show the data for the report.
Now, when the report does not have any data for the other parameters, I get an error saying the xxx parameter is missing a value.
I tried allowing blank values in the parameter properties.
Check that parameter's Available Values by going to the report parameter's properties.
It must not specify any values, so we should set it to None.
Another way is,
Just add a blank space at Specify values - in Default values inside report parameters properties.
Third way
I do an "if exists" statement to make this go away. It worked for me because it makes the query always return a value, even if that value is not needed:

if exists (my select query)
    my select query
else
    select '2'
    -- '2' would never be used, but it made SSRS stop giving me
    -- the error and execute
This should help.
Is there any way to use a variable in the FROM part (for example SELECT myColumn1 FROM ?) in a data flow source without having to give the variable a valid default value first?
To be more exact about my situation: I'm getting the table names out of a table, then using a control flow to foreach over the list of table names and call a workflow for each, which then gets the data from that table. In this workflow I have the before-mentioned SELECT statement.
To get it to work properly I had to set the variable to a valid default value (on package level), as otherwise I could not create the workflow itself (the data source couldn't be created because the SELECT was invalid without the default value).
So my question here is: is there any workaround possible in this case where I don't need a valid default value for the variable?
The datatables:
The different tables selected in the data flow all have exactly the same structure (the same columns, column names, and data types). Only the data inside them differs (data for customer A, customer B, ...).
You're in luck as this is a trivial thing to implement with SSIS.
The base problem for most people is that they come at SSIS like it's still DTS where you could do whatever you want inside a data flow. They threw out the extreme flexibility with DTS in favor of raw processing performance.
You cannot parameterize the table in a SQL statement. It's simply not allowed.
Instead, the approach people take is to use Expressions. In your case, assume you have two Variables of type String created: @[User::QualifiedTableName] and @[User::QuerySource].
Assume that [dbo].[spt_values] is assigned to QualifiedTableName. As you loop through the table names, you will assign the value into this variable.
The "trick" is to apply an expression to @[User::QuerySource]. Make the expression

"SELECT T.* FROM " + @[User::QualifiedTableName] + " AS T;"
This allows you to change out your table name whenever the value of the other variable changes.
In your data flow, you will change your OLE DB Source to be driven by a query contained in a variable instead of the traditional table selection.
If you want an example of where I use QuerySource to drive a data flow, there's an example on mixing an integer and string in an ssis derived column
1. Create a second variable. Set its Expression to create the full SELECT statement, using the value of the first variable.
2. In the Data Source, use the "SQL command from variable" option for the Data Access Mode property.
3. If you can, set a default value for the variable you created in step 1. That will make filling out the columns from your data source much easier.
4. If you can't use a default value for the variable, set the Data Source's ValidateExternalMetadata property to False. You may have to open the data source with the Advanced Editor and create Output columns manually.
I have a query that looks like this:
select * from foo where id in (:ids)
where the id column is a number.
When running this in TOAD version 11.0.0.116, I want to supply a list of ids so that the resulting query is:
select * from foo where id in (1,2,3)
The simple minded approach below gives an error that 1,2,3 is not a valid floating point value. Is there a type/value combination that will let me run the desired query?
CLARIFICATION: the query as shown is how it appears in my code, and I am pasting it into TOAD for testing the results of the query with various values. To date I have simply done a text replacement of the bind variable in TOAD with the comma separated list, and this works fine but is a bit annoying for trying different lists of values. Additionally, I have several queries of this form that I test in this way, so I was looking for a less pedestrian way to enter a list of values in TOAD without modifying the query. If this is not possible, I will continue with the pedestrian approach.
As indicated by OldProgrammer, Gerrat's answer in the linked thread ("You can't use comma-separated values in one bind variable") correctly answers this question as well.
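One common Oracle-side workaround (not TOAD-specific, and it requires changing the query) is to bind the whole list as a single string and split it inside the query, so one bind variable can carry '1,2,3'. A sketch using the well-known REGEXP_SUBSTR/CONNECT BY idiom:

```sql
-- Sketch: split a comma-separated string bind into rows, then use IN.
-- :ids is bound once as a single VARCHAR2 value such as '1,2,3'.
SELECT *
FROM   foo
WHERE  id IN (
         SELECT TO_NUMBER(REGEXP_SUBSTR(:ids, '[^,]+', 1, LEVEL))
         FROM   dual
         CONNECT BY REGEXP_SUBSTR(:ids, '[^,]+', 1, LEVEL) IS NOT NULL
       )
```

With this form, TOAD prompts for :ids as an ordinary string, so trying different lists no longer requires editing the query text.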
Can someone please identify the functional/performance differences, if any, between SET and SELECT in T-SQL? Under what conditions should I choose one over the other?
UPDATE:
Thanks to all who responded. As a few people pointed out, this article by Narayana Vyas Kondreddi has lots of good info. I also perused the net after reading the article and found this condensed version by Ryan Farley which offers the highlights and thought I would share them:
- SET is the ANSI standard for variable assignment; SELECT is not.
- SET can only assign one variable at a time; SELECT can make multiple assignments at once.
- If assigning from a query, SET can only assign a scalar value. If the query returns multiple values/rows, SET will raise an error. SELECT will assign one of the values to the variable and hide the fact that multiple values were returned (so you'd likely never know why something was going wrong elsewhere - have fun troubleshooting that one).
- When assigning from a query, if no value is returned then SET will assign NULL, whereas SELECT will not make the assignment at all (so the variable will keep its previous value).
- As far as speed differences: there are no direct differences between SET and SELECT. However, SELECT's ability to make multiple assignments in one shot does give it a slight speed advantage over SET.
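A minimal T-SQL sketch of the no-row behavior described in the highlights above:

```sql
-- Sketch: when the query returns no rows, SET assigns NULL while
-- SELECT leaves the variable at its previous value.
DECLARE @v_set    varchar(10) = 'old';
DECLARE @v_select varchar(10) = 'old';

SET @v_set = (SELECT 'new' WHERE 1 = 0);   -- no row: @v_set becomes NULL
SELECT @v_select = 'new' WHERE 1 = 0;      -- no row: @v_select stays 'old'

SELECT @v_set AS v_set, @v_select AS v_select;
```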
SET is the ANSI standard way of assigning values to variables, and SELECT is not. But you can use SELECT to assign values to more than one variable at a time, while SET allows you to assign data to only one variable at a time. That is where SELECT can be a winner in performance.
For more detail and examples refer to: Difference between SET and SELECT when assigning values to variables
SQL Server: one situation where you have to use SELECT is when assigning @@ERROR and @@ROWCOUNT, as these have to be captured in the same statement (otherwise they get reset):

SELECT @error = @@ERROR, @rowcount = @@ROWCOUNT

(SET only works with one value at a time.)
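A hedged sketch of why both must be captured in one statement; dbo.SomeTable is a hypothetical table used only for illustration:

```sql
-- Sketch: @@ERROR and @@ROWCOUNT reflect only the immediately
-- preceding statement, so capture both in a single SELECT.
DECLARE @error int, @rowcount int;

UPDATE dbo.SomeTable SET col = 1 WHERE id = 42;  -- hypothetical update

SELECT @error = @@ERROR, @rowcount = @@ROWCOUNT;
-- Capturing them in two separate statements would not work: the first
-- assignment is itself a statement and resets both values.
```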
SET is the ANSI standard for assigning values to variables.
SELECT can be used when assigning values to multiple variables.
For more details please read this detailed post by Narayana Vyas.
SET and SELECT both assign values to variables. Using SELECT you can assign values to more than one variable, something like

select @var1 = 1, @var2 = 2

whereas using SET you have to use separate SET statements (SET is the ANSI way of assigning values), i.e.

set @var1 = 1
set @var2 = 2

I hope this helps
cheers
Common question, canned answer:
http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/01/25/defensive-database-programming-set-vs-select.aspx