Is there a way to create an object whose format is defined in a message model?
I have created a message model in which some fields have default values and some restrictions. With the following ESQL I managed to create a message, but the other fields (the ones with default values) do not appear:
-- Create the DFDL folder and point the Properties folder at the model
CREATE LASTCHILD OF OutputRoot DOMAIN('DFDL');
-- SET OutputRoot.Properties = InputRoot.Properties;
SET OutputRoot.Properties.MessageSet = '{ObjectsDefinitionLibrary}';
SET OutputRoot.Properties.MessageType = '{}:Example1MsgModel';
-- Only the field set explicitly here appears in the tree
SET OutputRoot.DFDL.Example1MsgModel.record[1].FieldOne = 'Value1';
Is this possible with ESQL?
Is there a way to create an object whose format is defined in a message model?
You need to define what you mean by 'object'. Do you want to create a message tree based on the model? Or do you want to generate a valid BLOB from the model?
As has been said by others, if you want to generate a BLOB from a DFDL model, you must ensure that everything in the message model (including complex elements) has minOccurs>=1, and you must provide a default value for every field.
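In the model, that means every element is declared along these lines (a sketch; the element name and default value here are invented):

<!-- Required and defaulted, so the DFDL writer can emit it with no input -->
<xs:element name="FieldTwo" type="xs:string"
            minOccurs="1" maxOccurs="1" default="DefaultValue2"/>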
If you want a message tree then you will need to parse that BLOB using the DFDL parser. Which leads nicely into the answer to your other question...
Is this possible with ESQL?
ESQL does not offer a special statement to create a message tree from a model. However, it does offer two functions for writing and parsing any message tree or BLOB: look up the ASBITSTREAM function and the CREATE function (with its PARSE clause).
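For example, a minimal sketch of the round trip (whether DFDL takes the model via the TYPE clause or via the Properties folder varies by broker version, so treat this as a sketch and check the ASBITSTREAM and CREATE documentation for your release):

-- Serialize the message tree to a bit stream; missing required fields take their model defaults
DECLARE msgBlob BLOB;
SET msgBlob = ASBITSTREAM(OutputRoot.DFDL
                          CCSID InputRoot.Properties.CodedCharSetId
                          ENCODING InputRoot.Properties.Encoding);
-- Re-parse the bit stream into a message tree with the DFDL parser
CREATE LASTCHILD OF Environment.Variables DOMAIN('DFDL')
    PARSE(msgBlob
          CCSID InputRoot.Properties.CodedCharSetId
          ENCODING InputRoot.Properties.Encoding
          TYPE '{}:Example1MsgModel');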
Related
I am building a generic CSV output module with a variable number of columns. The Data Format resource in BW (5.14) lets you define a repeating item, and thus offers a list of items that I could use to map data to in the RenderCSV step.
But when I run this with data for more than one column (and loops), only one column is generated.
Is the feature broken, or am I using it wrongly?
Alternatively, I defined "enough" optional columns in the data format and mapped each field separately, which is not a really generic solution.
It looks like in BW 5, repeating elements are not supported when using Data Format and Parse Data to parse text.
Please see https://support.tibco.com/s/article/Tibco-KnowledgeArticle-Article-27133
The workaround is to use the Data Format resource, Parse Data and Mapper activities together. First use Data Format and Parse Data to parse the text into XML where every element represents one line of the text. Then use the Mapper activity and the tib:tokenize-allow-empty XSLT function to tokenize every line and get sub-elements for each field in the lines.
The linked article also has a workaround implementation attached.
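As a rough sketch of that Mapper step (the Rows/Row/Line element names here are invented; only the tib:tokenize-allow-empty function comes from the article):

<!-- For each parsed line, split on the delimiter and emit one element per field -->
<Rows>
    <xsl:for-each select="$Parse-Data/Rows/Row">
        <Row>
            <xsl:for-each select="tib:tokenize-allow-empty(Line, ',')">
                <Field><xsl:value-of select="."/></Field>
            </xsl:for-each>
        </Row>
    </xsl:for-each>
</Rows>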
Say, for instance, I had a data set that was self-describing. The first few well-structured records define data type IDs, including the name and length of each record type; these are followed by content records, which start with one of those data IDs and contain a variable amount of data, depending on the ID.
It would be easy enough to describe the definition records using BNF, EBNF, or ABNF, but how would one concisely describe the content records, whose length is defined in the definition records?
Here is an example of describing the classic NetCDF data format with a BNF-like notation, though not concisely, because the lengths of the data records are not specified as a function of data in the earlier dim and var definitions.
Are you asking how to define the content of the content records? You made it clear that they're already defined in terms of the amount of data. If each data type ID implies not only a data length but also a data structure, it's straightforward, even in BNF, with one set of productions for each data type ID. Is that what you mean? (It's even likely to be LR(1).)
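For instance, a sketch of that idea in EBNF (the IDs and lengths are invented; the point is that each ID's production carries its own fixed length, which keeps the grammar context-free):

content-record = record-01 | record-02 ;
record-01      = "01", 8 * octet ;     (* ID 01 was defined with length 8 *)
record-02      = "02", 24 * octet ;    (* ID 02 was defined with length 24 *)
octet          = ? any 8-bit byte ? ;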
I am the creator of an Expert System, named XTRAN, that manipulates over 30 computer languages, as well as data and text. I got tired of writing parsers, so I created a parsing engine that executes EBNF at parse time, and I feed it the EBNF via the Expert System's rules language. Since EBNF itself is meta, the schema I use to parse and store it for execution at parse time is meta-meta.
XTRAN's rules language also provides a database capability in which a database is in-memory, content-addressable, and stored as a sparse matrix. It's effectively an n-space, with each cell addressed via a list of subscripts, each subscript being either elided, an integer, or a text string. So I can construct the scenario you describe quickly, by storing the data descriptions in the same database that contains the content records. It's loosely analogous to a relational database describing its schema via its own contents.
FWIW, we call XTRAN's rules language meta-code, because it's a language that can manipulate other languages (as well as itself).
I have a metadata name, CONTACTS(SOURCE.CSV|TARGET.CSV). I read this file using a reader and populate the values into a table that I created as CONTACT_TABLE(PK NUMBER, SOURCE_NAME VARCHAR2(500), TARGET_NAME VARCHAR2(500)). After that, I want to read the source.csv and target.csv files stored in CONTACT_TABLE and populate the values into another table called SOURCE_COLUMN_TARGET_COLUMN_TABLE(PK, FK referencing the PK of CONTACT_TABLE, SOURCE_COLUMN, TARGET_COLUMN). This table should contain all the columns of source and target, with a one-to-one relationship between them; for example, source.csv(fn) maps to target.csv(firstName).
My objective is that whenever we add another attribute to source or target, I should not have to change the entire mapping; for example, if we add source.csv(email) and target.csv(email), it should map directly.
Thanks!
Please help!
I need to have this task completed before Friday. I have searched every source I could find about dynamic mappings and parameters, but they were not very helpful; I want to do it this way.
It's not clear what you are actually asking. The Source Analyzer uses the source files (.csv) at import time, and the Source Qualifier therefore contains the same format.
So, if any values get added to your existing files (source.csv, target.csv), they become new files as far as your existing mapping is concerned. Hence, you don't need to change the whole mapping; you just need to import the files again.
I have the following scenario: a single .rdl file with a stored procedure as its data source. This stored procedure accepts two parameters: @ProcedureName nvarchar(max) and @Parameters xml. The stored procedure's job is to call another stored procedure (most probably on a different database) with the given XML parameters. So, in essence, each of the stored procedures that gets executed returns its own dataset.
How would I go about creating a tablix/matrix that consumes the dataset without specifying the columns, since the columns need to be generated at runtime?
Unfortunately, SSRS doesn't have "AutoGenerateColumns"-style functionality and resolves a number of things at design time. So the short answer is that you cannot.
The designer checks field references when saving, and will not save with a reference to a field that isn't in a dataset's field list. If a field ceases to exist after the report definition is generated, it shows up as a static blank value on the report. Expressions that reference the missing field break as well, even if the reference is in an unevaluated branch. So if field B were removed, this expression would still be affected:
=IIF(1=1,Fields!A.Value,Fields!B.Value)
Which means that you can't use conditional grouping expressions as a workaround, even if you had an exhaustive list of the columns that might be returned.
Is there any way to use a variable in the FROM part (for example, SELECT myColumn1 FROM ?) of a data flow source without having to give the variable a valid default value first?
To be more exact: in my situation, I get the table names out of a table, use a Foreach loop in the control flow to iterate over the list of table names, and from within the loop call a data flow that gets the data from each of these tables. In this data flow I have the aforementioned SELECT statement.
To get it to work properly, I had to set the variable to a valid default value (at package level), as otherwise I could not create the data flow itself (the data source couldn't be created because the SELECT was invalid without the default value).
So my question is: is there any workaround so that I don't need a valid default value for the variable?
The data tables:
The different tables selected in the data flow have exactly the same structure in terms of columns (which columns exist, how they are named, and their data types). Only the data inside them differs (it is data for customer A, customer B, ...).
You're in luck, as this is a trivial thing to implement with SSIS.
The base problem for most people is that they come at SSIS as if it were still DTS, where you could do whatever you wanted inside a data flow. That extreme flexibility was thrown out in favor of raw processing performance.
You cannot parameterize the table name in a SQL statement. It's simply not allowed.
Instead, the approach people take is to use Expressions. In your case, assume you have two Variables of type String: @[User::QualifiedTableName] and @[User::QuerySource].
Assume that [dbo].[spt_values] is assigned to QualifiedTableName. As you loop through the table names, you will assign the value into this variable.
The "trick" is to apply an expression to the #[User::QuerySource]. Make the expression
"SELECT T.* FROM " + #[User::QualifiedTableName] + " AS T;"
This allows you to change out your table name whenever the value of the other variable changes.
In your data flow, you will change your OLE DB Source to be driven by a query contained in a variable instead of the traditional table selection.
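For completeness, the list of table names feeding the Foreach loop can come from an Execute SQL Task; here is a sketch against the system catalog (the LIKE filter is a placeholder for however your customer tables are named):

-- One row per customer table, schema-qualified to slot into the expression above
SELECT QUOTENAME(s.name) + '.' + QUOTENAME(t.name) AS QualifiedTableName
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name LIKE 'Customer%';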
If you want an example of where I use QuerySource to drive a data flow, there's an example on mixing an integer and a string in an SSIS derived column.
1. Create a second variable. Set its Expression to create the full SELECT statement, using the value of the first variable.
2. In the Data Source, use the "SQL command from variable" option for the Data Access Mode property.
3. If you can, set a default value for the variable you created in step 1. That will make filling out the columns from your data source much easier.
4. If you can't use a default value for the variable, set the Data Source's ValidateExternalMetadata property to False.
5. You may have to open the data source with the Advanced Editor and create Output columns manually.