Failing to understand Octave structure declaration

There is a new feature in Octave: structures. I got my information about it from Octave Structure, and I also came across some code that creates a structure like this:
data = struct;
data.timestep.sensor = struct;
But I never saw this type of declaration in Octave Structure, so I'm confused about these two lines of code.
Can anyone please help me understand these two lines?

First of all, structures aren't that new in Octave (the linked documentation page already exists for Octave 4.0.0, with a last-modified date of March 2016).
Have you tried just playing around a bit with creating structures? The first line simply generates an empty structure:
data = struct
data =
  scalar structure containing the fields:
As you can see, there are no fields yet.
The second line (implicitly) does three things (an explicit equivalent is sketched below):
- adds a field timestep to the structure data,
- adds a field sensor to timestep, (implicitly) making timestep a (sub)structure,
- makes the field sensor an empty structure itself.
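A minimal sketch of that explicit equivalent (the comments state the effect of each line):
data = struct;                  % only needed if data doesn't exist yet
data.timestep = struct;         % timestep becomes a (sub)structure
data.timestep.sensor = struct;  % sensor is an empty structure itself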
If there's no data variable in your workspace beforehand (or data is already a proper structure), the second line alone is sufficient; data is then also implicitly created as a structure.
clear data;
data.timestep.sensor = struct
data =
  scalar structure containing the fields:
    timestep =
      scalar structure containing the fields:
        sensor =
          scalar structure containing the fields:
If there's already a data variable, e.g. holding some scalar, that won't work, and you'd need both lines:
data = 42;
data.timestep.sensor = struct
error: scalar cannot be indexed with .
data = struct
data =
  scalar structure containing the fields:
data.timestep.sensor = struct
data =
  scalar structure containing the fields:
    timestep =
      scalar structure containing the fields:
        sensor =
          scalar structure containing the fields:
Instead of data = struct, you could also have used clear data, for example.
Hope that helps! If not, maybe provide some more details in your question about what exactly confuses you.

Related

Trouble converting yaml file to toml file

I am trying to convert a YAML file into a TOML file in Python 3.
My plan is to use toml.dumps(), which expects a dictionary, and then write the result to an output file.
I need to be able to match the input TOML requirements of a tool that I have to plug into.
This tool expects inline tables in certain places, like this:
[states.Started]
outputs = { on = true, running = false }
[[states.Started.transitions]]
inputs = { go = true }
target = "Running"
I understand how to generate the tables ([]) and arrays of tables ([[]]), but I am having a hard time figuring out how to create the inline tables.
The TOML documentation says that inline tables are the same as tables. So, for example, with the array of tables above (states.Started.transitions), I figured inputs would be a table within the overall list; however, the TOML encoder breaks it into separate tables in the output.
Can anyone help me figure out how to configure my dictionary to output the inline table?
EDIT: I am not sure I fully understand what I am doing wrong. Here is my code:
table = {'a': 5, 'b': 3}
inline_table = toml.TomlDecoder().get_empty_inline_table()
inline_table['Values'] = table
encoder = toml.TomlPreserveInlineDictEncoder()
toml_config = toml.dumps(inline_table, encoder=encoder)
However, this does not create an inline table, but a regular table in the output:
[Values]
a = 5
b = 3
"This tool expects inline tables at certain instances like this"
Then it is not a TOML tool. As you discovered yourself, inline tables are semantically equivalent to normal tables, so if the tool conforms to the TOML specification, it must not care whether the tables are inline or not. If it does care, this is not a TOML question.
That being said, you can create the dicts that should become inline tables via
TomlDecoder().get_empty_inline_table()
The returned object is a dict subclass and can be used as such.
When you have finished creating your structure, use TomlPreserveInlineDictEncoder for dumping, and don't ask why this is suddenly called InlineDict instead of InlineTable.
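To illustrate, here is a minimal, untested sketch of that approach (the Values key is taken from the edit above; note that only the inner dict is created via the decoder helper, while the outer mapping stays a plain dict):
import toml

# Build the dict that should become the inline table via the decoder helper,
# so the preserving encoder can recognise it later.
inline = toml.TomlDecoder().get_empty_inline_table()
inline['a'] = 5
inline['b'] = 3

doc = {'Values': inline}  # the outer mapping stays a plain dict

print(toml.dumps(doc, encoder=toml.TomlPreserveInlineDictEncoder()))
# expected to print something like: Values = { a = 5, b = 3 }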

Create new Object from XSD in ESQL

Is there a way to create an object whose format is defined in a message model?
Actually, I have created a message model with some fields containing default values and some restrictions. With the following code I managed to create a message in ESQL, but the other fields (which contain default values) do not appear:
CREATE LASTCHILD OF OutputRoot DOMAIN('DFDL');
-- SET OutputRoot.Properties = InputRoot.Properties;
SET OutputRoot.Properties.MessageSet = '{ObjectsDefinitionLibrary}';
SET OutputRoot.Properties.MessageType = '{}:Example1MsgModel';
SET OutputRoot.DFDL.Example1MsgModel.record[1].FieldOne = 'Value1';
Will this be possible with ESQL?
Is there a way to create an object whose format is defined in a message model
You need to define what you mean by 'object'. Do you want to create a message tree based on the model? Or do you want to generate a valid BLOB from the model?
As has been said by others, if you want to generate a BLOB from a DFDL model, you must ensure that everything in the message model (including complex elements) has minOccurs>=1, and you must provide a default value for every field.
If you want a message tree then you will need to parse that BLOB using the DFDL parser. Which leads nicely into the answer to your other question...
Will this be possible with ESQL?
ESQL does not offer a special statement to create a message tree from a model. However, it does offer two constructs for parsing and writing any BLOB/message tree. Look up the ASBITSTREAM function and the CREATE statement (with the PARSE clause).
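As a rough, untested sketch of those two constructs (the message type is copied from the question; check the ESQL reference for the exact clauses your broker version supports):
-- Serialise an existing DFDL message tree into a BLOB
DECLARE outBlob BLOB;
SET outBlob = ASBITSTREAM(OutputRoot.DFDL CCSID InputRoot.Properties.CodedCharSetId);

-- Parse a BLOB back into a message tree using the DFDL parser
CREATE LASTCHILD OF Environment.Variables DOMAIN('DFDL')
    PARSE(outBlob CCSID InputRoot.Properties.CodedCharSetId TYPE '{}:Example1MsgModel');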

How to use the field cardinality repeating in Render-CSV BW step?

I am building a generic CSV output module with a variable number of columns. The Data Format resource in BW (5.14) lets you define a repeating item and thus offers a list of items that I could use to map data to in the Render CSV step.
But when I run this with data for more than one column (and loops), only one column is generated.
Is the feature broken, or am I using it wrongly?
Alternatively, I defined "enough" optional columns in the data format and mapped each field separately, which is not a really generic solution.
It looks like in BW 5, when using Data Format and Parse Data to parse text, repeating elements aren't supported.
Please see https://support.tibco.com/s/article/Tibco-KnowledgeArticle-Article-27133
The workaround is to use the Data Format resource and the Parse Data and Mapper activities together. First use Data Format and Parse Data to parse the text into XML where every element represents one line of the text. Then use the Mapper activity and the tib:tokenize-allow-empty XSLT function to tokenize every line and get sub-elements for each field in the lines.
The linked article also has an attached workaround implementation.
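Purely as an illustration of that Mapper step (the element names here are invented; see the attached implementation and the TIBCO docs for the real mapping and the exact signature of tib:tokenize-allow-empty):
<!-- for each parsed line, split on ',' and emit one field element per token -->
<row>
  <xsl:for-each select="tib:tokenize-allow-empty(string(line), ',')">
    <field><xsl:value-of select="."/></field>
  </xsl:for-each>
</row>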

Is there a metalanguage, similar to BNF that can concisely describe self-describing data?

Say, for instance, I had a data set that was self-describing. The first few well-structured records define data type IDs, which include the name and length of records, followed by content records, which start with the data IDs and contain a variable amount of data, depending on the ID.
It would be easy enough to describe the definition records using BNF, EBNF, or ABNF... but how would one concisely describe the content records, whose length is defined in the definition records?
Here is an example of describing the classic NetCDF data format with a BNF-like notation, but not concisely, because the lengths of the data records are not specified as a function of data in the earlier dim and var definitions.
Are you asking how to define the content of the content records? You made it clear that they're already defined in terms of the amount of data. If each data type ID implies not only a data length but also a data structure, it's straightforward, even in BNF, with one set of productions for each data type ID. Is that what you mean? (It's even likely to be LR(1).)
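For instance, a hypothetical EBNF fragment along those lines, with one set of productions per data type ID (the record names and layouts are invented for illustration):
(* each content record starts with its type ID, which fixes the layout *)
content_record = temperature_record | label_record ;
temperature_record = temp_id , int32 , int32 ;   (* this ID implies two 4-byte ints *)
label_record = label_id , counted_string ;       (* this ID implies a counted string *)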
I am the creator of an Expert System, named XTRAN, that manipulates over 30 computer languages, as well as data and text. I got tired of writing parsers, so I created a parsing engine that executes EBNF at parse time, and I feed it the EBNF via the Expert System's rules language. Since EBNF itself is meta, the schema I use to parse and store it for execution at parse time is meta-meta.
XTRAN's rules language also provides a data base capability in which a data base is in-memory, content-addressable, and stored as a sparse matrix. It's effectively an n-space, with each cell addressed via a list of subscripts, with each subscript being either elided, an integer, or a text string. So I can construct the scenario you describe quickly, by storing the data descriptions in the same data base that contains the content records. It's loosely analogous to a relational data base describing its schema via its own contents.
FWIW, we call XTRAN's rules language meta-code, because it's a language that can manipulate other languages (as well as itself).

How do I split in Pig a tuple of many maps into different rows

I have a relation in Pig that looks like this:
([account_id#100,
timestamp#1434,
id#900],
[account_id#100,
timestamp#1434,
id#901],
[account_id#100,
timestamp#1434,
id#902])
As you can see, I have three map objects within a tuple. All of the data above is within the $0'th field in the relation. So the data above is in a relation with a single bytearray column.
The data is loaded as follows:
data = load 's3://data/data' using com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');
DESCRIBE data;
data: {bytearray}
How do I split this data structure into three rows so that the output is as follows?
data: {account_id:chararray, timestamp:chararray, id:int}
(100, 1434,900)
(100, 1434,901)
(100, 1434,902)
It is very difficult to guess your problem without having sample input data. If this is an intermediate result, then write it out using STORE and post the output file as something that we can use as input to try it out. I was able to solve this using STRSPLIT, but I am not sure whether you meant that the input is a single column and a single row, or whether these are three different rows with the same column.
In either case, flattening out the data using the FLATTEN operator and using STRSPLIT later should help. If I get more information and input data for the problem, I can give a working example.
Data -> FLATTEN to get out of the bag -> STRSPLIT over "," in a FOREACH ... GENERATE
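If the tuple really does hold three Pig maps (rather than one chararray), a rough, untested sketch that flattens them into rows without STRSPLIT could look like this (field names assumed from the sample above):
-- turn the tuple of three maps into a bag, then flatten the bag into rows
exploded = FOREACH data GENERATE FLATTEN(TOBAG($0.$0, $0.$1, $0.$2)) AS m;
result   = FOREACH exploded GENERATE
               (chararray) m#'account_id' AS account_id,
               (chararray) m#'timestamp'  AS timestamp,
               (int)       m#'id'         AS id;
DUMP result;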
