Using a LIKE clause within string.Concat() - visual-studio

I am trying to display image files pulled from a server. In SSRS I have managed to do this before but recently to add security, the location was changed so each file is in its own folder which has a random string for a name.
Before I used this:
=string.Concat("/files/reports/images/",Fields!FormTutorInitials.Value, ".jpg")
This pulled the file from its location. Each file is named after the tutor's initials, so a tutor whose initials are ABC would pull their picture from /files/reports/images/ABC.jpeg. Now, however, that file sits in a location like this:
/files/reports/images/rgdg5w-gtreh65-hts-56hehj-5rs333/ABC.jpeg
Is there a way to insert a LIKE clause into the code? I have tried =string.Concat("/files/reports/images/",LIKE *Fields!FormTutorInitials.Value, ".jpg") but it does not accept it.

I'm assuming you are using the expression as the image source here...
This answer assumes that the filename only exists once in the folder/subfolders.
It uses xp_cmdshell so you must be aware of the security implications of enabling this if it is not already enabled. (https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/xp-cmdshell-server-configuration-option?view=sql-server-ver15)
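If it is not already enabled, xp_cmdshell can be switched on with sp_configure (standard SQL Server configuration, shown here only as a reminder; weigh the security implications in the link above first):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;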
The way I would do this is to get a list of the files into a table variable, then append this to your dataset query.
As an example:
DECLARE @t TABLE(TutorName varchar(50), TutorInitials varchar(20))
INSERT INTO @t VALUES ('Dave Smith', 'DS'), ('Sally James', 'SJ'), ('Rob Jones', 'RJ'), ('Jane Bloggs', 'JB')

DECLARE @dir TABLE(filename varchar(1024))
INSERT INTO @dir
exec xp_cmdshell 'dir "\\myFileServer\ImgesRootFolder\*.png" /b /s'

SELECT DISTINCT
    t.*,
    MIN(d.[filename]) OVER(PARTITION BY t.TutorName, t.TutorInitials) AS ImageFileName
FROM @t t
JOIN @dir d ON d.[filename] LIKE '%' + t.TutorInitials + '.png'
The first table is the 'tutors' table with names and initials; the second table contains the results of a recursive DIR command against \\myFileServer\ImgesRootFolder\*.png
Then, finally, all we do is join them together.
In this example I take the MIN() filename value, so be aware that if, for example, DS.PNG appeared in more than one folder, only the first (alphabetically) would be returned.
The result of the query is something like this.
TutorName TutorInitials ImageFileName
Dave Smith DS \\myFileServer\ImgesRootFolder\randomfolder12345\ds.png
Jane Bloggs JB \\myFileServer\ImgesRootFolder\randomfolder68548\jb.png
Rob Jones RJ \\myFileServer\ImgesRootFolder\randomfolder96325\rj.png
Sally James SJ \\myFileServer\ImgesRootFolder\randomfolder74125\sj.png
You can now just use the ImageFileName column as the image source value.
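For an external image, the source expression then becomes as simple as =Fields!ImageFileName.Value, assuming you fold the query above into your report dataset and the column comes through under that name.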


How to omit aliases used in the XMLFOREST function in Oracle XML

PL/SQL code segment:
SELECT Xmlserialize(DOCUMENT
         XMLELEMENT("intrastat",
           XMLAGG(
             Xmlforest(ENVELOPE_ID AS "envID",
                       XMLFOREST(DATE_ AS "date", TIME_ AS "Time") AS "Date time",
                       PARTY_ID AS "pid", PARTY_NAME AS "pname",
                       XMLFOREST(Xmlelement("RC", REGION_CODE) AS RC,
                                 Xmlelement("TCPCODE", MODE_OF_TRANSPORT_CODE) AS TCPCODE) AS "item")
           )))
FROM INTRASTAT_XML_TEMPLATE_LINE_TMP
Part of the actual output that causes the trouble:
<item><RC><RC>as</RC></RC><TCPCODE><TCPCODE>22</TCPCODE></TCPCODE></item>
What I want to get:
<item><RC>ads</RC><TCPCODE>22</TCPCODE></item>
Your current PL/SQL code segment:
XMLFOREST(Xmlelement("RC",REGION_CODE) AS RC,Xmlelement("TCPCODE",MODE_OF_TRANSPORT_CODE) AS TCPCODE) AS "item")
Because aliases are used here, we get duplicated tags: one for the alias and one for the first parameter of the XMLELEMENT function.
Now, since you just want an element, item, with two tags, RC (holding the REGION_CODE field) and TCPCODE (holding the MODE_OF_TRANSPORT_CODE field),
in my opinion this should satisfy your requirement:
Xmlelement("item", XMLFOREST(REGION_CODE "RC", MODE_OF_TRANSPORT_CODE "TCPCODE")
~Kuntal
with data(rc, tcpcode) as (select 'ads', 22 from dual)
select xmlelement("item", xmlforest(rc, tcpcode)) from data;
XMLELEMENT("ITEM",XMLFOREST(RC,TCPCODE))
--------------------------------------------------------------------------------
<item><RC>ads</RC><TCPCODE>22</TCPCODE></item>
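For contrast, a quick sketch (same dual-based test data, shown purely for illustration) reproducing the doubled tags you get when the nested XMLELEMENT is kept inside an aliased XMLFOREST:
with data(rc) as (select 'ads' from dual)
select xmlforest(xmlelement("RC", rc) as "RC") from data;
-- returns <RC><RC>ads</RC></RC>: one tag from the alias, one from XMLELEMENT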

Duplicate removal from a table using Informatica

I have a scenario to be implemented in Informatica where I need to remove duplicate records from a table based on the PK, but I need to keep the first occurrence of each PK value and remove the others (in case of duplicate PKs).
For example, if my source has 1,1,1,2,3,3,4,5,4, I want to see my target data as 1,2,3,4,5. I have to read data from the same table and load it back into the same table; no new table can be introduced. Please help me with your inputs.
Thanks in Advance!
I suppose you want the first occurrence because there are other (data) columns in addition to the key you entered. Therefore you want
1,b
1,c
1,a
2,d
3,c
3,d
4,e
5,f
4,b
Turned into
1,b
2,d
3,c
4,e
5,f
??
In that case try this mapping layout:
SRC -> SQ -> SRT -> AGG -> TGT
SEQ /
Where the sorter is set to sort on the KEY,sequence_port (desc)
And the aggregator is set to group by the KEY, and the sequence_port should not go out of the sorter
Hope you can follow me :)
There are multiple ways to do this; the simplest would be to do it in the SQL override.
Assuming the example quoted above, the SQL would be like this. The general idea is to assign a row number within each primary key (so if you have 3 rows with the same PK they get row numbers 1, 2, 3 before the numbering resets for the next PK).
SQL:
select * from (
    select primary_key, column2,
           row_number() over (partition by primary_key order by primary_key) as distinct_key
    from   source_table  -- your source table here
) t
where distinct_key = 1
Before:
1,b
1,c
1,a
2,d
3,c
3,d
Intermediate query:
1,c,1
1,a,2
1,b,3
2,d,1
3,c,1
3,d,2
Output:
1,c
2,d
3,c
I am able to achieve this by following the below steps.
1. Passing sorted data (keys are EMP_ID, MOBILE, DEPTID) to an Expression transformation.
2. Creating the following variable ports in the expression and getting the counts.
V_CURR_EMP_ID = EMP_ID
V_CURR_MOBILE = MOBILE
V_CURR_DEPTID = DEPTID
V_COUNT =
IIF(V_CURR_EMP_ID=V_PREV_EMP_ID AND V_CURR_MOBILE=V_PREV_MOBILE AND V_CURR_DEPTID=V_PREV_DEPTID ,V_COUNT+1,1)
V_PREV_EMP_ID = EMP_ID
V_PREV_MOBILE = MOBILE
V_PREV_DEPTID = DEPTID
O_COUNT =V_COUNT
3. In the next transformation, which is a Filter, I take only the records whose count is greater than 1 and delete them using an Update Strategy (DD_DELETE).
Here is the mapping flow.
SQ->SRTR->EXP->FIL->UPD->TGT
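For reference, a hedged SQL analogue of the same idea, assuming an Oracle source table (called EMP_SRC here purely for illustration) where duplicates are defined by EMP_ID, MOBILE and DEPTID; it keeps one row per key and deletes the rest:
DELETE FROM emp_src
WHERE ROWID NOT IN (
    SELECT MIN(ROWID)   -- keep one row (the lowest ROWID) per duplicate group
    FROM   emp_src
    GROUP BY emp_id, mobile, deptid
);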
Also, when I tried to delete them using an Aggregator, it deleted only the first occurrence of the duplicates, not all of them.
Thanks again for your inputs!

PL/SQL Stored proc that uses a comma separated parameter to drive a dynamic LIKE clause?

Is there a fairly simple way to take an input parameter containing a comma-separated list of prefixes and return a cursor based on a select statement that uses these?
i.e. (Pseudocode)
PROCEDURE get_by_prefix(p_list_of_prefixes IN varchar2, r_csr OUT SYS_REFCURSOR)
IS
BEGIN
OPEN r_csr FOR
SELECT * FROM my_table where some_column LIKE (the_individual_fields_from p_list_of_prefixes ||'%')
END
I've tried various combinations, and now have two problems - coercing the input into a suitable table (I think it needs to go into a table type rather than a VARCHAR2_TABLE), and secondly getting the like clause to be effectively a SELECT from an internal 'pseudotable'...
EDIT: It seems that people are suggesting ways to use 'IN' with a set of potential values, whereas I'm looking at using LIKE. I could use a similar technique - building up dynamic SQL - but I was wondering if there isn't a more elegant way...
PL/SQL has no concept of a comma-separated list and no built-in splitter as in Perl etc, so you'll have to use one of the hand-rolled methods such as this one:
https://stewashton.wordpress.com/2016/08/01/splitting-strings-surprise
(Other methods are available.) Then it's just a matter of either populating a collection in one step and using it in the next, or else combining the two as something like this:
declare
p_list_of_prefixes varchar2(100) := 'John,Jim,Jules,Janice,Jenny';
begin
open :refcur for
with params as
( select x.firstname
from xmltable(
'ora:tokenize($X, "\,")'
passing p_list_of_prefixes as x
columns firstname varchar2(4000) path '.'
) x
)
, people as
( select 'Dave Clark' as fullname from dual union all
select 'Jim Potter' from dual union all
select 'Jenny Jones' from dual
)
select x.firstname, p.fullname
from params x
left join people p on p.fullname like x.firstname || '%';
end;
Output:
FIRSTNAME FULLNAME
-------------- -----------
John
Jim Jim Potter
Jules
Janice
Jenny Jenny Jones
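If you do want this wrapped in the procedure signature from the question, a sketch along these lines should work (my_table and some_column are the placeholders from the original pseudocode; treat it as untested against your schema):
create or replace procedure get_by_prefix(
    p_list_of_prefixes in  varchar2,
    r_csr              out sys_refcursor
) is
begin
    open r_csr for
        select t.*
        from   my_table t
        where  exists (
                   select 1
                   from   xmltable(
                              'ora:tokenize($X, "\,")'
                              passing p_list_of_prefixes as x
                              columns prefix varchar2(4000) path '.'
                          ) p
                   where  t.some_column like p.prefix || '%'
               );
end get_by_prefix;
/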
Using LIKE the way you want is easy, but it is the wrong solution. (See my Comment under the original post).
Anyway - if by order of your superiors, or some other semi-legitimate reason, you must use a LIKE condition, it should look something like this:
... where ',' || p_list_of_whatever || ',' like '%,' || some_column || ',%'
Concatenating commas at both ends of both sides of the comparison is needed, because you don't want Jo in the column to match John in the input list. Start from there and you will see why you need the commas on the right-hand side, and then follow from there and you will see why you need them on the left also.
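A tiny dual-based illustration of that point (test data made up for the example): only a full list entry matches, not a prefix of one.
with t as (select 'Jo' as some_column from dual union all select 'John' from dual)
select some_column
from   t
where  ',' || 'John,Jim' || ',' like '%,' || some_column || ',%';
-- returns only 'John'; without the wrapping commas, 'Jo' would match as well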

Oracle SQL: Select rows from table A with fallback to joined table A and B. (union, group by,...)

The requirement may seem a bit odd, but bear with me: let's say I have a list of my employees like this:
pid name
-------------------------
1 Smith-Gordon
2 Hansen
3 Simpson
And a table, employeehist, of previous names (e.g. if Mrs Smith-Gordon and Mr Hansen had one or more different names before they were married, respectively):
pid oldname
-------------------------
1 Smith
2 Taylor
2 Baker
What I want now is to be able to search for names and get results from both tables like this:
a) Search for "Simpson%" -> Get a result like "3, Simpson"
b) Search for "Hansen%" -> Get a result like "2, Hansen"
c) Search for "Taylor%" -> Get a result like "2, Hansen, matched on previous Taylor"
d) Search for "Smith%" -> Get a result like "1, Smith-Gordon"
In other words, I want the current record, plus the old name if that was where the pertinent match occurred.
What I tried so far:
1) Naively join the history to the current employees: The searches b), c) and d) will always contain something in the oldname column, so I can't tell where the match occurred. I also get duplicate hits for Mr Hansen.
2) I tried to UNION a first select on employees (containing a dummy NULL AS oldname) with a second select joining employeehist with employees which will return me a nice hit for search b) without an oldname and one with an oldname for c), but now I predictably get duplicates in d).
Any thoughts?
You can use the following query with a parameter:
SELECT e.pid,
CASE
WHEN e.name LIKE :search_key THEN e.name
WHEN eh.oldname LIKE :search_key THEN e.name || ' matched on previous ' || eh.oldname
END
FROM employees e
LEFT JOIN employeehist eh on (e.pid = eh.pid)
WHERE e.name LIKE :search_key OR eh.oldname LIKE :search_key
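A quick self-contained check of that query using WITH-clause test data taken from the question (only a sketch; :search_key would be e.g. 'Taylor%'):
with employees as (
       select 1 as pid, 'Smith-Gordon' as name from dual union all
       select 2, 'Hansen' from dual union all
       select 3, 'Simpson' from dual
     ),
     employeehist as (
       select 1 as pid, 'Smith' as oldname from dual union all
       select 2, 'Taylor' from dual union all
       select 2, 'Baker' from dual
     )
select e.pid,
       case
         when e.name     like :search_key then e.name
         when eh.oldname like :search_key then e.name || ' matched on previous ' || eh.oldname
       end as result
from   employees e
left join employeehist eh on eh.pid = e.pid
where  e.name like :search_key or eh.oldname like :search_key;
-- with :search_key = 'Taylor%' this returns: 2, 'Hansen matched on previous Taylor'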
I have come up with this solution:
SELECT * FROM ( /* (3) outer filter query */
    SELECT e.pid, e.name, /* (1) query combining current and matching old names */
        CASE
            WHEN e.name LIKE :search_key THEN 'Y'
            ELSE 'N'
        END AS primary_match,
        (
            SELECT oldname /* (2) subquery that gives me one or no matching old name */
            FROM employeehist eh
            WHERE eh.pid = e.pid
            AND eh.oldname LIKE :search_key
            AND ROWNUM = 1
        ) AS oldname
    FROM employees e
) combined
WHERE combined.primary_match = 'Y' OR combined.oldname IS NOT NULL;
There's one primary select (1) that gets me all current ids and names, and adds a CASE column indicating whether the name matched. Additionally, it runs a subquery (2) that gets me one matching old name (even if there are several; none if there is no match). With that in hand I can use an outer select (3) that filters away rows with no matches.
This would return e.g. for search key "Smith%"
pid | name | primary_match | oldname
1 | Smith-Gordon | Y | Smith
or for "Taylor%"
pid | name | primary_match | oldname
2 | Hansen | N | Taylor
I'm not sure how elegant it is, but it works as I want:
I get one result per matching current pid, no matter how many old names that pid has, matching or not. No duplicates.
I can distinguish between results that matched on the current name and those that ("only" or "also") matched on old names.
I don't need to define my matching condition twice because it gets rolled into that CASE column and I can filter on that.
There's obviously room for improvement: The subquery (2) could be made to return an aggregate of all matching old names (or the newest or oldest, I have a column for that).
But this works for me.
I have found a better solution than my previous one. My problem was that I couldn't GROUP BY pid and "squash" the differing oldname rows. I'm quite sure I remember that this was possible in MySQL, but Oracle only ever gave me "ORA-00979: not a GROUP BY expression". Strict but fair.
The solution is apparently to provide Oracle with a strategy how to deal with those rows:
SELECT pid, name,
MIN(oldname) KEEP (DENSE_RANK FIRST ORDER BY oldname NULLS FIRST) as oldname
/*(3) outer select combines current and old hits, and "squashes" duplicates, preferring current hits where available*/
FROM (
SELECT e.pid, e.name, null AS oldname /*(1) hits in current names*/
FROM employees e
WHERE e.name LIKE :search_key
UNION ALL
SELECT e.pid, e.name, eh.oldname /* (2) hits in old names*/
FROM employeehist eh
JOIN employees e ON e.pid = eh.pid
WHERE eh.oldname LIKE :search_key
) combined
GROUP BY pid, name;
The idea is simple: run a query (1) that gives all matches in current names (plus a dummy oldname column with NULLs), then a query (2) that gives all matches in old names (complete with their joined current names to display). Then simply combine those and remove the duplicates by pid (and name, which Oracle requires in the GROUP BY, but that's identical by definition), giving preference to rows where oldname is NULL.
This would return e.g. for search key "Smith%"
pid | name | oldname
1 | Smith-Gordon | NULL
which is exactly what I want. If there's a pid with a current and an old match, I don't care about the old one. Or for "Taylor%":
pid | name | oldname
2 | Hansen | Taylor
This query also appears to be roughly 10 times faster than my other solution - I guess because it avoids subqueries that depend on the current pid.
So the only odd thing is that I need to use MIN(oldname) instead of some form of identity. I get that Oracle needs an aggregate function here, but the whole point of the KEEP ... FIRST exercise is to only have one row anyway, no?
But it works, and it's fast, so I won't complain.

Oracle - dynamic column name in select statement

Question:
Is it possible to have a column name in a select statement changed based on a value in its result set?
For example, if a year value in a result set is less than 1950, name the column OldYear, otherwise name the column NewYear. The year value in the result set is guaranteed to be the same for all records.
I'm thinking this is impossible, but here was my failed attempt to test the idea:
select 1 as
(case
when 2 = 1 then "name1";
when 1 = 1 then "name2")
from dual;
You can't vary a column name per row of a result set. This is basic to relational databases. The names of columns are part of the table "header" and a name applies to the column under it for all rows.
Re comment: OK, maybe the OP Americus means that the result is known to be exactly one row. But regardless, SQL has no syntax to support a dynamic column alias. Column aliases must be constant in a query.
Even dynamic SQL doesn't help, because you'd have to run the query twice. Once to get the value, and a second time to re-run the query with a different column alias.
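Just to spell out what that two-pass approach would look like (purely a sketch, reusing the some_table_with_years placeholder from the query below, with a single-valued year column as the question states):
declare
    l_year  some_table_with_years.year%type;
    l_alias varchar2(30);
    l_csr   sys_refcursor;
begin
    -- pass 1: look at the data to decide on the alias
    select year into l_year from some_table_with_years where rownum = 1;
    l_alias := case when l_year < 1950 then 'OldYear' else 'NewYear' end;

    -- pass 2: re-run the query with the chosen alias baked into the SQL text
    open l_csr for 'select year as "' || l_alias || '" from some_table_with_years';
end;
/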
The "correct" way to do this in SQL is to have both columns, and have the column that is inappropriate be NULL, such as:
SELECT
CASE WHEN year < 1950 THEN year ELSE NULL END AS OldYear,
CASE WHEN year >= 1950 THEN year ELSE NULL END AS NewYear
FROM some_table_with_years;
There is no good reason to change the column name dynamically - it's analogous to the name of a variable in procedural code - it's just a label that you might refer to later in your code, so you don't want it to change at runtime.
I'm guessing what you're really after is a way to format the output (e.g. for printing in a report) differently depending on the data. In that case I would generate the heading text as a separate column in the query, e.g.:
SELECT 1 AS mydata
,case
when 2 = 1 then 'name1'
when 1 = 1 then 'name2'
end AS myheader
FROM dual;
Then the calling procedure would take the values returned for mydata and myheader and format them for output as required.
You will need something similar to this:
select 'select ' || CASE WHEN YEAR<1950 THEN 'OLDYEAR' ELSE 'NEWYEAR' END || ' FROM TABLE 1' from TABLE_WITH_DATA
This solution requires that you launch SQLPLUS and a .sql file from a .bat file or using some other method with the appropriate Oracle credentials. The .bat file can be kicked off manually, from a server scheduled task, Control-M job, etc...
Output is a .csv file. This also requires that you replace all commas in the output with some other character or risk column/data mismatch in the output.
The trick is that your column headers and data are selected in two different SELECT statements.
It isn't perfect, but it does work, and it's the closest to standard Oracle SQL that I've found for a dynamic column header outside of a development environment. We use this extensively to generate recurring daily/weekly/monthly reports to users without resorting to a GUI. Output is saved to a shared network drive directory/Sharepoint.
REM BEGIN runExtract1.bat file -----------------------------------------
sqlplus username/password@database @C:\DailyExtracts\Extract1.sql > C:\DailyExtracts\Extract1.log
exit
REM END runExtract1.bat file -------------------------------------------
REM BEGIN Extract1.sql file --------------------------------------------
set colsep ,
set pagesize 0
set trimspool on
set linesize 4000
column dt new_val X
select to_char(sysdate,'MON-YYYY') dt from dual;
spool c:\DailyExtracts\&X._Extract1.csv
select '&X-Project_id', 'datacolumn2-Project_Name', 'datacolumn3-Plant_id' from dual;
select
PROJ_ID
||','||
replace(PROJ_NAME,',',';')-- "Project Name"
||','||
PLANT_ID-- "Plant ID"
from PROJECTS
where ADDED_DATE >= TO_DATE('01-'||(select to_char(sysdate,'MON-YYYY') from dual));
spool off
exit
/
REM ------------------------------------------------------------------
CSV OUTPUT (opened in Excel and copy/pasted):
old 1: select '&X-Project_id' 'datacolumn2-Project_Name' 'datacolumn3-Plant_id' from dual
new 1: select 'MAR-2018-Project_id' 'datacolumn2-Project_Name' 'datacolumn3-Plant_id' from dual
MAR-2018-Project_id datacolumn2-Project_Name datacolumn3-Plant_id
31415 name1 1007
31415 name1 2032
32123 name2 3302
32123 name2 3384
32963 name3 2530
33629 name4 1161
34180 name5 1173
34180 name5 1205
...
...
etc...
135 rows selected.
