Is it possible to restrict values for a column in informatica - informatica-powercenter

Is it possible to restrict what values are allowed for a column in Informatica? For example, there is a column called Item_number for which I want the values to be in the range 1 to 10. Is it possible to implement that, and if so, which transformation can I use?

You can use a Filter transformation. In the filter condition property you can specify:
Item_number >= 1 AND Item_number <= 10

You can also try using a Lookup transformation in your mapping; the lookup can be a flat file or a table depending on your requirement.
Keep the valid values in that lookup file/table and verify the input data for that column against it.
If the lookup returns a value, the data is valid; if it returns NULL, the data is invalid.
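As a minimal sketch, assuming the lookup return port is named lkp_Item_number (a hypothetical name), a downstream Expression port could flag each row:
o_Valid_Flag = IIF(ISNULL(lkp_Item_number), 'INVALID', 'VALID')
Rows flagged 'INVALID' can then be routed to an error target or dropped with a Filter.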

If you want to restrict the data in your column without removing any rows, use a simple function combination:
IIF(in_val>10, 10, IIF(in_val<1, 1, in_val))
Check if that works fine with your scenario - come back if you need further help.

Related

Informatica - Concatenate Max value from each column present in multiple rows for same Primary Key

I have tried the traditional approach of using an Aggregator (Group By: ID, Store Name) with MAX(each object) as separate columns.
Then, in the next Expression, Concat(Val1 Val2 Val3 || Val4).
However, I'm getting the output as '0100'.
But the required output is: 1100
Please let me know how this can be done in IICS.
IICS is similar to on-prem PowerCenter.
First use an Aggregator:
In the Group By tab, add ID and Store Name.
In the Aggregate tab, add MAX(object1), MAX(object2), etc. Please note to set the data type and length correctly.
Then use an Expression transformation:
Link ID and Store Name through first.
Then concatenate the max_* columns using the pipe operator:
out_max = max_col1 || max_col2 || ... (again, set the data type and length correctly)
This should generate the correct output. I think you are getting the wrong output because of the data length or data type of the object fields. Make sure you trim spaces from the object data before the Aggregator.
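As a rough sketch, assuming four string object ports coming out of the Aggregator (the port names are illustrative), the Expression could look like:
out_max = LTRIM(RTRIM(max_object1)) || LTRIM(RTRIM(max_object2)) || LTRIM(RTRIM(max_object3)) || LTRIM(RTRIM(max_object4))
If the object ports are numeric, wrap them in TO_CHAR() first so the concatenation does not silently convert or truncate.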

Fetching only distinct data from table using hasura-graphql data provider in react-admin

I am working on a project where I have to fetch only distinct data from a table and display it in a dropdown. How can I do that? I am using the hasura-graphql data provider for it. So how can I get only distinct data for a particular column?
Thanks in advance.
I think passing a default "distinct_on" filter with the column name as its value will do the job. Also, Hasura recommends sorting by this column in the first position:
It is typically recommended to use order_by along with distinct_on to ensure we get predictable results (otherwise any arbitrary row with a distinct value of the column may be returned). Note that the distinct_on column needs to be the first column in the order_by expression.
So I set a default sort:
<ReferenceInput
    reference="yourTable"
    source="yourDistinctColumn"
    sort={{ field: "yourDistinctColumn", order: "ASC" }} // or DESC, your choice
    filter={{ distinct_on: "yourDistinctColumn" }}
>
    <SelectInput optionText="yourDistinctColumn" />
</ReferenceInput>
https://hasura.io/docs/1.0/graphql/manual/queries/distinct-queries.html
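Under the hood, the data provider should end up sending a Hasura query roughly like this (a sketch only; the table and column names come from the example above):
query {
  yourTable(distinct_on: yourDistinctColumn, order_by: { yourDistinctColumn: asc }) {
    yourDistinctColumn
  }
}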

Sorting Issue After Table Render in Laravel DataTables as a Service Implementation

I have implemented Laravel DataTables as a service.
The first two columns are the actual id and name, so I am able to sort them asc/desc after the table renders.
But the next few columns render after performing a few calculations, i.e. these values are not fetched directly from any column; they are computed.
I am unable to sort these columns where calculations were performed, and I get an error. I know it is looking for that particular column, e.g. outstanding_amount, which I don't have in the DB; rather it is an amount calculated from two or more columns that live in other tables.
Any Suggestions on how to overcome this issue?
It looks like you're trying to sort by values that aren't columns but calculated values.
So the main issue here is to give Eloquent/MySQL the data it needs to perform the sorting.
// You might need to do some joins first
->addSelect(DB::raw('your_calc as outstanding_amount'))
->orderBy('outstanding_amount') // 'asc' can be omitted as this is the default

// Alternative: if you don't need the calculated value in the select, sort by the raw expression
// (don't forget any joins you might need)
->orderByRaw('your_calc_for_outstanding_amount ASC')

For SQL functions it works as follows:
->addSelect(DB::raw('COUNT(products.id) as product_count'))
->orderByRaw('COUNT(products.id) DESC');
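As a minimal sketch, assuming the outstanding amount comes from amount_due and amount_paid columns on a joined invoice_details table (all hypothetical names, adjust to your schema):
// requires: use Illuminate\Support\Facades\DB;
$query = Invoice::query()
    ->join('invoice_details', 'invoice_details.invoice_id', '=', 'invoices.id')
    ->select('invoices.*')
    ->addSelect(DB::raw('(invoice_details.amount_due - invoice_details.amount_paid) as outstanding_amount'))
    ->orderBy('outstanding_amount', 'desc');
The explicit select('invoices.*') keeps the model's own columns in the result once addSelect() is used.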

Is there any way to generate an ID without a sequence?

The current application uses JPA to auto-generate the table/entity id. Now a requirement asks for a way to manually insert data into the database using SQL queries.
So the questions are:
Is it worth creating a sequence in this schema just for this little requirement?
If the answer to 1 is no, then what could be a plan B?
Yes. A sequence is trivial - why would you not do it?
N/A
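For reference, creating and using a sequence is a one-liner each; a sketch, assuming an Oracle database (the ROWNUM answer below suggests one) and a hypothetical table mytable:
CREATE SEQUENCE mytable_seq START WITH 1 INCREMENT BY 1;

INSERT INTO mytable (id, some_column)
VALUES (mytable_seq.NEXTVAL, 'some value');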
A few ways:
Use a UUID. UUIDs are pseudo-random, large alphanumeric strings that are, for all practical purposes, unique once generated (a sketch follows below).
Does the data have something unique, like a timestamp or IP address? If so, use that.
A combination of the current timestamp + some less unique value in the data.
A combination of the current timestamp + some integer i that you keep incrementing.
There are others (including generating a checksum, custom random numbers instead of UUIDs, etc.), but those have the possibility of overlaps, so I am not mentioning them.
Edit: Minor clarifications
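For the UUID option, still assuming Oracle, SYS_GUID() can generate the value inline (table and column names are hypothetical):
INSERT INTO mytable (id, some_column)
VALUES (SYS_GUID(), 'some value');
If the id column is character-typed rather than RAW, use RAWTOHEX(SYS_GUID()) instead.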
Are you just doing a single data load into an empty table, and there are no other users concurrently inserting data? If so, you can just use ROWNUM to generate the IDs starting from 1, e.g.
INSERT INTO mytable
SELECT ROWNUM AS ID
,etc AS etc
FROM ...

Is there any way to limit the number of columns in Hbase

Is there any way to limit the number of columns under a particular row in HBase? I have seen methods to limit rows. I wonder if there is any way I can limit the column family values.
Like,
row columnfamily(page) value
1 page:1 1
1 page:2 2
1 page:3 3
I need to retrieve row1 values for column families page:1 and page:2
Is it possible?
There are a number of different ways that you can go with this problem. Basically, you want a server-side filter to limit your return data in a Get/Scan. Normally, this would be done with a co-processor, but that is still under development, so you really want to apply a filter to your query.
Example Filters: http://svn.apache.org/repos/asf/hbase/branches/0.90/src/main/java/org/apache/hadoop/hbase/filter/
The easiest example would be a prefix filter (although it looks like you want some sort of range filter). Just to give you a rough idea of how this would work, here's how you apply a ColumnPrefixFilter to a Scan:
HTable myTable; // predefined
Scan scan; // predefined
scan.setFilter(new ColumnPrefixFilter(Bytes.toBytes("myprefix")));
return myTable.getScanner(scan);
It is possible.
When scanning, use http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#addColumn(byte[], byte[])
When getting, use http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#addColumn(byte[], byte[])
If the column key (qualifier) is predictable, for example if the key is an index, then the columns can be added by iterating over the values you need. Besides that, you can use filters if the condition is more arbitrary or complicated, for example > 1 and < 3, or key in (3, 10, 11), etc. There is a host of pre-implemented filters; you would probably be interested in the QualifierFilter.
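As a minimal sketch using the same old-style client API as the answer above (the table name is hypothetical; the row key, family and qualifiers match the example in the question):
// imports: org.apache.hadoop.conf.Configuration, org.apache.hadoop.hbase.HBaseConfiguration,
//          org.apache.hadoop.hbase.client.*, org.apache.hadoop.hbase.util.Bytes
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "mytable");               // hypothetical table name
Get get = new Get(Bytes.toBytes("1"));                    // row key "1"
get.addColumn(Bytes.toBytes("page"), Bytes.toBytes("1")); // page:1
get.addColumn(Bytes.toBytes("page"), Bytes.toBytes("2")); // page:2
Result result = table.get(get);
byte[] page1 = result.getValue(Bytes.toBytes("page"), Bytes.toBytes("1"));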
Hope this helps.
