Is there any way to use a variable in the FROM part of a query (for example SELECT myColumn1 FROM ?) in a data flow source without having to give the variable a valid default value first?
To be more exact: in my situation I'm getting the table names out of a table, using a foreach loop in the control flow to iterate over the list of table names, and from within that calling a data flow that gets the data from each of these tables. In this data flow I have the aforementioned SELECT statement.
To get it to work properly I had to set the variable to a valid default value (at package level), as otherwise I could not create the data flow itself (the data source couldn't be created because the SELECT was invalid without the default value).
So my question here is: Is there any workaround possible in this case where I don't need a valid default value for the variable?
The data tables:
The different tables selected in the data flow all have exactly the same structure in terms of columns (which columns exist, how they are named, and their data types). Only the data inside them is different (i.e. data for customer A, customer B, ...).
You're in luck as this is a trivial thing to implement with SSIS.
The base problem for most people is that they come at SSIS like it's still DTS where you could do whatever you want inside a data flow. They threw out the extreme flexibility with DTS in favor of raw processing performance.
You cannot parameterize the table in a SQL statement. It's simply not allowed.
Instead, the approach that people take is to use Expressions. In your case, assume you have two Variables of type String created: @[User::QualifiedTableName] and @[User::QuerySource].
Assume that [dbo].[spt_values] is assigned to QualifiedTableName. As you loop through the table names, you will assign the value into this variable.
The "trick" is to apply an expression to the #[User::QuerySource]. Make the expression
"SELECT T.* FROM " + #[User::QualifiedTableName] + " AS T;"
This allows you to change out your table name whenever the value of the other variable changes.
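For example, with [dbo].[spt_values] assigned to QualifiedTableName as above, the QuerySource expression evaluates to the query the data flow will actually run:
SELECT T.* FROM [dbo].[spt_values] AS T;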
In your data flow, you will change your OLE DB Source to be driven by a query contained in a variable instead of the traditional table selection.
If you want an example of where I use QuerySource to drive a data flow, there's an example on mixing an integer and string in an SSIS derived column.
1. Create a second variable. Set its Expression to create the full Select statement, using the value of the first variable.
2. In the Data Source, use the "SQL command from variable" option for the Data Access Mode property.
3. If you can, set a default value for the variable you created in step 1. That will make filling out the columns from your data source much easier.
4. If you can't use a default value for the variable, set the Data Source's ValidateExternalMetadata property to False. You may have to open the data source with the Advanced Editor and create Output columns manually.
I need to search an Oracle table column for multiple word strings in a Cognos Oracle query.
For example:
If the Focus parameter returns multiple values as below:
TRAINING
OMNIA
COUNTER
PROGRAM
Then I need to search the project.proj_name column like '%TRAINING%' or '%OMNIA%' or '%COUNTER%' or '%PROGRAM%'.
I am trying the below, but I know it only does a single-value match, not multiple. I want to know how to achieve a multiple-value match here.
'-99' in (#promptmany('Focus', 'string','-99')#) OR REGEXP_LIKE(proj_name, #promptmany('Focus', 'string','-99')#))
Working from Cognos Paul's solution to use output from promptmany as a table:
Assuming your query is named Q1...
Add a query. (Q2)
Add a SQL object to that query.
Set the Data source property for the SQL object.
Change the SQL Syntax property to IBM Cognos.
Define the query as
SELECT
parameterValue
FROM (VALUES
(#join('),(',split(',',promptmany('Scenarios','string',sq('N/A'))))#)
) query(parameterValue)
(change the names for your own use case)
Add a query. (Q3)
Add a join to the new query.
Add Q1 and Q2 to the empty boxes for the join leading to Q3.
Set the join as
[Q1].[proj_name] like '%' || [Q2].[parameterValue] || '%'
Add the required data items to Q3.
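For illustration only: if the four Focus values from the question were selected, the SQL object in Q2 should generate something roughly like the following (exactly how the values are delimited and quoted depends on your environment — see the notes below):
SELECT
parameterValue
FROM (VALUES
('TRAINING'),('OMNIA'),('COUNTER'),('PROGRAM')
) query(parameterValue)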
Since two keywords (from your parameter -> Q2) could be found in a single value (in Q1), you'll likely end up with duplicate rows. Cognos will probably handle this with its default aggregations, but keep a lookout.
Be careful with this. The new query (Q2) will probably be joined on the Cognos server, not on the database server. Be sure you have sufficient filters leading into this structure so Cognos is not trying to process your entire database.
This worked for me with SQL Server. I don't have an Oracle database to test against, but using IBM Cognos as the SQL Syntax should handle that.
To use REGEXP_LIKE to solve this problem, you'll need to get the second argument correct. I can't see any reason why you would get the error ORA-00996 ("the concatenate operator is ||, not |"), but I'm not working with your code on your system.
You don't specify which version of Cognos, or even which Cognos product, you are using. I'll assume Cognos Analytics 11.1.7.
To determine what Cognos Analytics is doing with your macro, create a very simple query with one item from the database (preferably from a very small table) and another data item that contains the macro. So the data item expression is:
#sq(join('|',split(',',promptmany('Focus','string','-99'))))#
When you run this, you may not be prompted. You'll see the value is -99. So to test this we'll need to remove the default so that the prompt becomes required.
#sq(join('|',split(',',promptmany('Focus','string'))))#
Be sure to enter more than one value when you test.
In my environment, the parameter returns a value that is my values surrounded by quotes (') and delimited by semicolons (;). So my tests produced the following:
expression: #sq(promptmany('Focus','string'))#
value: 'PROGRAM';'COUNTER';'TRAINING'

expression: #sq(join('|',split(',',promptmany('Focus','string'))))#
value: 'PROGRAM';'COUNTER';'TRAINING'

expression: #sq(join('|',split(';',promptmany('Focus','string'))))#
value: 'PROGRAM'|'COUNTER'|'TRAINING'

expression: replace(#sq(join('|',split(';',promptmany('Focus','string'))))#, '''', '')
value: PROGRAM|COUNTER|TRAINING
Your mileage may vary.
At this point, you know which macro to use in the REGEXP_LIKE function.
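As a sketch only (verify against your own environment, since the delimiter and quoting can differ), slotting that last expression into your original filter might look like:
'-99' in (#promptmany('Focus', 'string', '-99')#) OR REGEXP_LIKE(proj_name, replace(#sq(join('|',split(';',promptmany('Focus','string','-99'))))#, '''', ''))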
I am using software called pc/mrp, which appears to have a built-in Visual FoxPro editor for FRX files. It also makes use of an external EF file. Based on some Googling, the report designer seems to be the standard one, not custom; the EF file usage may be a custom thing. Now, I need to find a way to get access to a value from a SQL statement inside the report. The statement needs to run per line in the report.
EF:
This file has sections:
~in~
~out~
In these sections, I can run code, but if there is a ~perline~ type section, I don't know how to access it. I can use the ~in~ to try to create a relationship between the databases, as shown in the following example:
~IN~
THISAREA = SELECT()                             && remember the current work area
USE PARTMAST ORDER BYPARTNO IN 0                && open PARTMAST in a free work area, ordered by the BYPARTNO index tag
SELECT (THISAREA)                               && switch back to the original work area
SET RELATION TO PARTNO INTO PARTMAST ADDITIVE   && relate the report table to PARTMAST on PARTNO
GO TOP
~OUT~
USE IN SELECT("SALES")                          && close the table open under the SALES alias
But for this I don't know how to join the tables. I have two tables (A, B) that I need to connect based on two fields (pono, line): if (A.pono and A.line) = (B.pono and B.line), then they would be linked. Is this possible?
Report Designer:
The other way I see this working is to do the query inside the report designer. Inside the report properties there is a Variables tab. I can use this to assign values to variables using expressions. I need:
SELECT field from B where B.pono = pono and B.line = line; INTO ARRAY varArray;
But it gives me an error, likely because this is trying to create a new variable as opposed to actually assigning to the variable in the report. I tried editing a field inside the designer to use the preceding code as well, but that also failed.
Is there a way using the report designer or the ef file to grab the data I need per line?
The sample code you show is doing something like a join with the SET RELATION command. To use SET RELATION, there has to be an index on the relevant field (expression) in the child table. So, if your table B has an index on PONO + LINE (or, if those are numeric, STR(PONO, length) + STR(LINE, length)), you can SET RELATION TO PONO + LINE INTO B, again, using the more complicated expression if necessary.
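A minimal sketch of that in the EF file, assuming PONO and LINE are numeric, that B already has an index tag on STR(PONO, 6) + STR(LINE, 3) (POLINE is a made-up tag name, and the widths 6 and 3 are placeholders for your actual field widths), and that the relation expression matches the index key expression exactly. If the fields are character, use PONO + LINE directly.
~IN~
THISAREA = SELECT()
USE B ORDER POLINE IN 0                                        && open B on its POLINE tag in a free work area
SELECT (THISAREA)
SET RELATION TO STR(PONO, 6) + STR(LINE, 3) INTO B ADDITIVE    && link each report line to its matching row in B
GO TOP
~OUT~
USE IN SELECT("B")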
I am trying to write the SQL task results to a flat file. I have a SQL task, followed by a foreach loop that parses the object results into variables. Inside the foreach I have a data flow.
Inside the data flow I have a Derived Column transformation, where I am trying to use the variables as columns, because I want to write the columns to a flat file. However, the Derived Column keeps complaining about not having any input columns (and writes 0 rows to the flat file) and I do not know why.
These are the instructions I am trying to follow: Using Variable as expression in Derived column transformation SSIS
The Derived Column transformation is part of a Data Flow. A Data Flow means that you have a set of rows with columns originating from some Data Flow Source, undergoing transformations like Derived Column, and then passing the rows to a Data Flow Destination. A Data Flow Transformation needs to have an input and an output.
In your case, create an OLE DB Source with some dummy query like select 0 as dummy and direct this data flow to your Derived Column. Later you can drop this dummy column.
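In the Derived Column itself, each new column's expression then just references one of your variables, for example (MyVariable here is a placeholder for whatever your foreach loop populates):
@[User::MyVariable]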
I have the following scenario: a single .rdl file with a stored procedure as data source. This stored procedure accepts two parameters: @ProcedureName nvarchar(max) and @Parameters xml. The functionality of the stored procedure is to call another stored procedure (most probably on a different database) with the given XML parameters. So, in essence, each of the stored procs that gets executed will return its own dataset.
How would I go about creating a tablix/matrix that consumes the dataset without specifying the columns as the columns need to get generated at runtime?
Unfortunately, SSRS doesn't have "AutoGenerateColumns"-style functionality and resolves a number of things at design time. So the short answer is that you cannot.
The designer checks field references when saving, and will not save with a reference to a field that isn't in a dataset's field list. If a field ceases to exist after the report definition is generated, it will show up as a static blank value on the report. Expressions will do so as well, even if the field is in an unevaluated portion. So if field B is removed, this expression would still be affected:
=IIF(1=1,Fields!A.Value,Fields!B.Value)
Which means that you can't use conditional grouping expressions as a workaround, even if you had an exhaustive list of the columns that might be returned.
I am loading data into a Spotfire report from Oracle using a view. In the Spotfire prompt, when a text value is entered, whether it is upper case, lower case, or a mixture of both, the relevant column values should be displayed. Since this is done at the database level, I need some way to make the column values case insensitive.
I know that column values can be converted to upper/lower case with the upper()/lower() functions, but I need a way so that even a mixture of upper/lower case will match the column values.
I found one more solution at the session level:
alter session set NLS_COMP=ANSI;
alter session set NLS_SORT=BINARY_CI;
After using the above code, any SQL search on the column will be case insensitive, but I need a solution that I can use in the view of the column element.
Does the input value need to be a prompt? It would be possible to do this using a parameter (which could be set by a document property / input field / configuration block).
Within the SQL for your query, you could add a where clause something like
upper(A1."mycolumn")=upper(?myParameter)
After refreshing the parameters section of the information link, 'myParameter' will appear. The information link can then be added as an on-demand table, with myParameter being set via a document property.
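A minimal sketch of the information link SQL with that clause in place (the schema, view, and second column names here are placeholders for your own):
SELECT
    A1."mycolumn",
    A1."other_column"
FROM "MYSCHEMA"."MYVIEW" A1
WHERE upper(A1."mycolumn") = upper(?myParameter)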