What is explicit generation in IBM ESQL?

IBM documentation of the LIST constructor function contains the following sentence: "The LIST constructor complex function is used to explicitly generate lists of values that can be assigned to fields in an output message".
What does the term "explicitly generate" mean?

I assume you are referring to this topic in the IBM docs for IBM Integration Bus:
https://www.ibm.com/docs/en/integration-bus/10.0?topic=functions-list-constructor-function.
You may find it useful to read this topic as well: https://www.ibm.com/docs/en/integration-bus/10.0?topic=functions-row-list-comparisons
In previous versions of IIB/WMB a 'list' was an internal data type (some built-in functions would return a list), but the ESQL programmer could not declare a list or construct a list using their own code.
So, to answer your question: I think the phrase 'explicitly generate' is highlighting the fact that it is now possible for a programmer to declare and initialise a literal list of values, and work with that list in their ESQL code.
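For example, a minimal sketch of what is now possible (the output tree and field names here are illustrative, not taken from the IBM docs):

-- Explicitly construct a literal list of values in ESQL code
-- and assign it to a repeating field in the output message:
SET OutputRoot.XMLNSC.Order.Item[] = LIST{'apple', 'pear', 'plum'};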

Related

What is the name of the variable naming convention used in VBS that includes the data type in the name? [duplicate]

This question already has an answer here:
What is Hungarian Notation?
I have recently started working in a system that includes the data type in the field names for every record. I'm writing up the documentation for this system (in particular the coding conventions), and as a history lesson I wanted to include a reference to this style of naming convention.
In the past, I know it was very standard to use names like
dim strName
dim intAge
dim fltIncome
to help keep track of data types in dynamically typed languages (VBS in the case above). I also know that this convention was actually named after somebody who wrote a lengthy description of why this is a good idea.
Does anyone know the name of this convention, or have good references they could share?
COM doesn't use Hungarian notation at all. The Windows API does, and it's useful enough there. The part of the quote below that refutes Hungarian notation in COM is the "Use the Same Word Used in the Interface" guideline.
This is from https://learn.microsoft.com/ms-my/previous-versions/windows/desktop/automat/naming-conventions
Choose names for exposed objects, properties, and methods that can be
easily understood by users of the application. The guidelines in this
section apply to all of the following exposed items:
Objects — implemented as classes in an application.
Properties and methods — implemented as members of a class.
Named arguments — implemented as named parameters in a member function.
Constants and enumerations — implemented as settings for properties and methods.
Use Entire Words or Syllables
It is easier for users to remember complete words than to remember
whether you abbreviated Window as Wind, Wn, or Wnd.
When you need to abbreviate because an identifier would be too long,
try to use complete initial syllables. For example, use AltExpEval
instead of AlternateExpressionEvaluation.
Use | Don't use
Application | App
Window | Wnd
Use Mixed Case
All identifiers should use mixed case, rather than underscores, to
separate words.
Use | Don't use
ShortcutMenus | Shortcut_Menus, Shortcutmenus, SHORTCUTMENUS, SHORTCUT_MENUS
BasedOn | basedOn
Use the Same Word Used in the Interface
Use consistent terminology. Do not use names like HWND that are
based on Hungarian notation. Try to use the same word users would
use to describe a concept.
Use | Don't use
Name | Lbl
Use the Correct Plural for the Class Name
Collection classes should use the correct plural for the class name.
For example, if you have a class named Axis, store the collection of
Axis objects in an Axes class. Similarly, a collection of Vertex
objects should be stored in a Vertices class. In cases where English
uses the same word for the plural, append the word "Collection."
Use | Don't use
Axes | Axiss
SeriesCollection | CollectionSeries
Windows | ColWindow
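To make the contrast concrete, a short VBScript sketch (the identifiers are invented for illustration):

' Hungarian notation, as in the question above:
Dim strCustomerName
Dim intRetryCount

' Whole-word, mixed-case naming per the COM guidelines just quoted:
Dim CustomerName
Dim RetryCount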

Two JSON Schema questions: is the type keyword required, and what is the difference between Core and Validation?

Okay, I have been UP and DOWN the internet and I cannot find an answer that DEFINITIVELY answers the following question.
"Is the type keyword required?" If it is not, then can someone, for all that is holy, please, in EXCRUCIATING detail, describe what should happen, validation-wise, when it is not provided.
I have found this...
http://json-schema.org/draft/2020-12/json-schema-validation.html#rfc.section.6.1.1
But I have found so many other examples where a schema object can be defined and not have this keyword.
For example I have found this repo with testing examples.
https://github.com/json-schema-org/JSON-Schema-Test-Suite/blob/master/tests/draft7/additionalProperties.json
Here they have a schema at line 5. It does not have a type, but it does look like they are talking about an object. Also, on lines 21-25 they describe a test where an array is valid.
Can someone please clarify this for me?
Also, for the second one: what is the difference between the Core and the Validation specs, as defined here?
https://json-schema.org/specification.html
Thank you in advance.
1. Is the type keyword required?
No. Keywords respond to instances of the types they're designed for; otherwise they are ignored (they silently pass validation). So
{ "minimum": 5 }
will pass anything as long as it's not a number less than 5. Objects, strings, arrays, etc., all pass. But as soon as you introduce a number, this keyword becomes interested and it'll do its thing.
Every keyword has a type or set of types that it responds to. type is one of the ones that responds to all of them.
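A quick illustration, assuming a draft 2020-12 validator (the instances are invented):

Schema:  { "minimum": 5 }

"hello"  -> valid   (minimum ignores non-numbers)
{}       -> valid
7        -> valid
3        -> invalid (a number below the minimum)

If you actually want to require a number, say so with type:

{ "type": "number", "minimum": 5 }

Now "hello" is invalid as well.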
2. What are the different specs for?
We (the spec authors) thought it would make things a little simpler if we split the specification into two parts: one for the schema construction keywords (e.g. $id, $schema, allOf, properties, etc.), and one for value validation and annotation (e.g. minimum, minLength, etc.). It does mean that you have to look into several documents in order to create a validator, however.
It also allows us to revise one of them without the other, though we've never done that.
This split was done several iterations ago, and we've just kept it as it seems to work well.

How do I reinstate constraints collected with copy_term/3 in SICStus Prolog?

The documentation says that
copy_term(+Term, -Copy, -Body) makes a copy of Term in which all
variables have been replaced by new variables that occur nowhere
outside the newly created term. If Term contains attributed
variables, Body is unified with a term such that executing Body
will reinstate equivalent attributes on the variables in Copy.
I'm previously affirming numerical CLP(R) constraints over some variables, and at some point I collect these constraints using copy_term/3. Later, when I try to reinstate the constraints using call(Body), I get an instantiation error in arguments of the form [nfr:resubmit_eq(...)].
Here's a simplified example that demonstrates the problem:
:- use_module(library(clpr)).
{Old >= 0, A >= 0, A =< 10, NR = Old + Z, Z = Old*(A/D)},
copy_term(Old, New, CTR),
call(CTR).
Results in:
Instantiation error in argument 1 of '.'/2
! goal: [nfr:resubmit_eq([v(-1.0,[_90^ -1,_95^1,_100^1]),v(1.0,[_113^1])])]
My question is: how do I reinstate the constraints in Body over New? I haven't been able to find concrete examples.
copy_term/3 is a relatively new built-in predicate, first introduced in SICStus around 2006. Its motivation was to replace the semantically cumbersome call_residue/2, which originated in SICStus 0.6 of 1987, with a cleaner and more efficient interface that splits the functionality in two:
call_residue_vars(Goal, Vars), which is like call(Goal) and upon success unifies Vars with a list of variables (in unspecified order) that are attached to constraints and have been created or affected in Goal.
copy_term(Term, Copy, Body), which is like copy_term/2 and upon success unifies Body with a term that reinstates the actual constraints involved. Originally, Body was a goal that could be executed directly. However, many systems that adopted this interface (like SWI, YAP) switched to using a list of goals instead. This simplifies frequent operations since you have less defaultyness, but at the expense of making reinstatement more complex: you need to use maplist(call, Goals).
Most of the time, these two built-in predicates will be used together. You are using only one, which makes me a bit suspicious. You first need to figure out which variables are involved, and only then can you copy them. Typically you will use call_residue_vars/2 for that. If you are copying only a couple of variables (as in your example), you are effectively projecting the constraints onto these variables. This may or may not be your intention.
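A sketch of the intended combination, assuming SICStus (where Body is a goal; the constraint is invented for illustration):

| ?- use_module(library(clpr)).
| ?- call_residue_vars({X >= 2, X =< 5}, Vars),
     copy_term(Vars, Copy, Body),
     call(Body).

% call_residue_vars/2 collects the constrained variables created by the goal;
% copy_term/3 copies them, and call(Body) reinstates the constraints on Copy.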
This is simply a bug in CLPR, which is unsupported. We lost touch with the CLPR supplier a long time ago.

Source code for specific stored procedure or function

I can use all_arguments and all_procedures to list the procedures and functions inside any given package, and with DBMS_METADATA I can extract the DDL for that package. Is there an easy way (other than lots of instring and substring calls) to obtain the procedure or function source code separately for each block of code in a package?
Something like this:
Owner | Package Name | Object Name | Overload | Arguments | Source
Obviously using substring and instring will present issues with overloaded functions.
all_arguments has a subprogram_id field which, according to the very sparse documentation on it, looks like it uniquely identifies which procedure in the package a row relates to, but there doesn't appear to be anything that uses it.
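For reference, a sketch of the dictionary query being described (the package name is hypothetical; verify the columns against your Oracle version):

SELECT owner,
       object_name AS package_name,
       procedure_name,
       overload,
       subprogram_id
FROM   all_procedures
WHERE  object_name = 'MY_PACKAGE'
AND    object_type = 'PACKAGE';

This lists the subprograms, but nothing obvious maps subprogram_id back to a range of lines in all_source, which is exactly the gap described above.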
Cheers in advance
IIRC, PL/SQL allows nested procedures and functions. In this case, you'll find that "instring" and "substring" may not be adequate to extract the source code, as you're facing recursion, and string functions typically only handle a smaller class of computations (typically regular expressions). This is a classic problem people run into when trying to parse languages with simple string manipulation. You can get around the limits of string functions by essentially hacking together a poor man's parser, but this can be a surprising amount of work if you want it to be dead right, because you have to handle at least the recursive grammar rules that matter for your extraction.
Another way to get reliable access to the elements of a PLSQL package is to use a language parser. The DMS Software Reengineering Toolkit has a full PLSQL parser.
You'd have to extract the package text to a file first, and then apply the PLSQL parser to it; that produces an abstract syntax tree (AST) internally in the parser. Given the name of a function, it is rather easy to search the AST for the function with a matching name. You'd end up with more than one hit if you have overloaded functions; you might qualify the function by the hierarchy in which it is embedded, or by whatever information about the arguments you have. Having identified a specific function in the AST, one can ask DMS to pretty-print that tree, and it will regenerate the text of that function (complete with comments).

How does Linq work (behind the scenes)?

I was thinking about making something like LINQ for Lua. I have a general idea of how LINQ works, but I was wondering if there is a good article, or if someone could explain, how C# makes LINQ possible.
Note: I mean behind the scenes, like how it generates code bindings and all that, not end user syntax.
It's hard to answer the question because LINQ is so many different things. For instance, sticking to C#, the following things are involved:
Query expressions are "pre-processed" into "C# without query expressions", which is then compiled normally. The query expression part of the spec is really short - it's basically a mechanical translation which doesn't assume anything about the real meaning of the query, beyond "order by is translated into OrderBy/ThenBy/etc" (see the sketch after this list).
Delegates are used to represent arbitrary actions with a particular signature, as executable code.
Expression trees are used to represent the same thing, but as data (which can be examined and translated into a different form, e.g. SQL)
Lambda expressions are used to convert source code into either delegates or expression trees.
Extension methods are used by most LINQ providers to chain together static method calls. This allows a simple interface (e.g. IEnumerable<T>) to effectively gain a lot more power.
Anonymous types are used for projections - where you have some disparate collection of data, and you want bits of each of the aspects of that data, an anonymous type allows you to gather them together.
Implicitly typed local variables (var) are used primarily when working with anonymous types, to maintain a statically typed language where you may not be able to "speak" the name of the type explicitly.
Iterator blocks are usually used to implement in-process querying, e.g. for LINQ to Objects.
Type inference is used to make the whole thing a lot smoother - there are a lot of generic methods in LINQ, and without type inference it would be really painful.
Code generation is used to turn a model (e.g. DBML) into code
Partial types are used to provide extensibility to generated code
Attributes are used to provide metadata to LINQ providers
Obviously a lot of these aren't only used by LINQ, but different LINQ technologies will depend on them.
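To make the first point concrete, here is a sketch of the mechanical translation (the types and data are invented for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

class Person { public string Name; public int Age; }

class Demo
{
    static void Main()
    {
        var people = new List<Person>
        {
            new Person { Name = "Ann", Age = 34 },
            new Person { Name = "Bob", Age = 12 }
        };

        // Query expression form:
        var adults = from p in people
                     where p.Age >= 18
                     orderby p.Name
                     select p.Name;

        // What the compiler mechanically translates it into
        // before normal compilation:
        var adults2 = people.Where(p => p.Age >= 18)
                            .OrderBy(p => p.Name)
                            .Select(p => p.Name);

        Console.WriteLine(string.Join(", ", adults));  // Ann
    }
}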
If you can give more indication of what aspects you're interested in, we may be able to provide more detail.
If you're interested in effectively implementing LINQ to Objects, you might be interested in a talk I gave at DDD in Reading a couple of weeks ago - basically implementing as much of LINQ to Objects as possible in an hour. We were far from complete by the end of it, but it should give a pretty good idea of the kind of thing you need to do (and buffering/streaming, iterator blocks, query expression translation etc). The videos aren't up yet (and I haven't put the code up for download yet) but if you're interested, drop me a mail at skeet#pobox.com and I'll let you know when they're up. (I'll probably blog about it too.)
Mono (partially?) implements LINQ, and is open source. Maybe you could look into their implementation?
Read this article:
Learn how to create custom LINQ providers
Perhaps my LINQ for R6RS Scheme will provide some insights.
It is 100% semantically the same as LINQ, and almost 100% syntactically the same, with the noted exception of additional sort parameters using 'then' instead of ','.
Some rules/assumptions:
Only dealing with lists, no query providers.
Not lazy, but eager comprehension.
No static types, as Scheme does not use them.
My implementation depends on a few core procedures:
map - used for 'Select'
filter - used for 'Where'
flatten - used for 'SelectMany'
sort - a multi-key sorting procedure
groupby - for grouping constructs
The rest of the structure is all built up using a macro.
Bindings are stored in a list that is tagged with bound identifiers to ensure hygiene. The bindings are extracted and rebound locally wherever an expression occurs.
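As a rough flavour of the mapping, using the core procedures directly (plain R6RS; illustrative, not the actual macro):

(import (rnrs))

;; 'where' is filter, 'select' is map:
(define people '((ann . 34) (bob . 12)))

(map car
     (filter (lambda (p) (>= (cdr p) 18))
             people))
;; => (ann)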
I did track the progress on my blog; that may provide some insight into possible issues.
For design ideas, take a look at Cω (C omega), the research project that birthed LINQ. LINQ is a more pragmatic or watered-down version of Cω, depending on your perspective.
Matt Warren's blog has all the answers (and a sample IQueryable provider implementation to give you a head start):
http://blogs.msdn.com/mattwar/
