How can I add a custom variable to Sqitch, to be used in Snowflake change scripts? - sqitch

I'd like to define a custom variable called task_warehouse to be used in a Snowflake change script, like this:
CREATE OR REPLACE TASK dummy_task
WAREHOUSE = &task_warehouse
SCHEDULE = '1 minute'
AS
SELECT 1;
The task warehouse is different from &warehouse. I want the task warehouse to vary depending on the DATABASE being deployed to. I don't want to change the warehouse that's running the deploy script.
I tried adding ;task_warehouse=<WAREHOUSE_NAME> to the Snowflake connection string, but that didn't seem to do the trick.
I get this error when trying to deploy:
Variable task_warehouse is not defined
Does anyone know how to define a custom variable to be used by Sqitch, similar to how &warehouse is used in the following?
USE WAREHOUSE &warehouse;

One of my colleagues was able to figure it out. It's covered under the "variables" bullet towards the bottom of https://sqitch.org/docs/manual/sqitch-configuration/.
Example from that page:
sqitch deploy --set $key=$val -s $key2=$val2
Doing the following worked for me:
sqitch deploy -s task_warehouse=PROD_TASK_WAREHOUSE
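If I'm reading that configuration page right, you can also persist the variable in sqitch.conf instead of passing -s on every deploy, so that each target (database) gets its own value. A rough sketch, assuming targets named dev and prod already exist in your project (the target names and warehouse names here are just examples):
sqitch config target.dev.variables.task_warehouse DEV_TASK_WAREHOUSE
sqitch config target.prod.variables.task_warehouse PROD_TASK_WAREHOUSE
Then sqitch deploy prod should pick up PROD_TASK_WAREHOUSE without the -s flag, and --set is still available to override it per run.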

Related

Something about the optimizer

I created a database and connected to it. But when I execute
select optimizer;
it returns
SELECT: identifier 'optimizer' unknown
What's the problem? I also can't find the sys tables in the database using \d.
If I want to add an optimizer myopt, are the steps below enough?
Write opt_myopt.h and opt_myopt.c in /monetdb5/optimizer/
Add the code into codes in /monetdb5/optimizer/opt_wrapper.c
Add the function into optimizer_init_funcs in /monetdb5/optimizer/optimizer.c
Add a new pipe in /monetdb5/optimizer/opt_pipes.c
Since Oct2020, variables have a schema (to keep them consistent with other SQL objects). In your session, 'sys' is not the current schema, which is why it cannot find the 'optimizer' variable; the same goes for the tables.
In the default branch (which will be available in the next release) I added a "schema path" property on the user, used to look up SQL objects outside the current session's schema. By default it includes the 'sys' schema.
For your first question: if your current_schema is not sys, you need to use select sys.optimizer;.
For your second question: the best existing example is probably in monetdb5/extras/mal_optimizer_template. Beyond that, it's basically a matter of checking the source code to see how other optimizers have been implemented. NB: although it doesn't happen often, the internals of MonetDB can change between (major) versions. I'd recommend using Oct2020 or newer.
Concerning your second question:
You also have to create and add an optimizer pipeline to opt_pipes.c. Look for the default_pipe, copy/paste it into a new pipeline, and add your optimizer to it.
There are some more places where you might need to add your optimizer, like the codes[] array in opt_wrapper.c. Just mimic one of the standard optimizers like "reorder".
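Once the pipe is registered you should be able to check it from plain SQL. A minimal sketch (the pipe name myopt_pipe is made up here, and remember the schema note above):
SELECT sys.optimizer;            -- shows which pipeline the current session is using
SET optimizer = 'myopt_pipe';    -- switch the session to your new pipeline (schema-qualify if your setup requires it)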

Informatica Cloud: use a field in pre/post SQL commands

I am trying to delete a set of data in the target table based on a column (year) from the lookup in IICS (Informatica Cloud).
I want to solve this problem using pre/post SQL commands, but the constraint is that I can't pass the year column to my query.
I tried this:
delete from sample_db.tbl_emp where emp_year = {year}
I want to delete all the employees for a specific year that I get from the lookup return.
For example:
If I get the year '2019', all the records in table sample_db.tbl_emp containing emp_year=2019 must be deleted.
I am not sure how this works in informatica cloud.
Any leads would be helpful.
How are you getting the year value? A pre/post SQL command may not be the way to go unless you need to do this as part of another transformation, i.e., before or after the transformation runs. Also, does your org only have ICDI, or also ICAI? ICAI may be a better solution depending on how the value is being provided.
The following steps would help you achieve this.
Create an input-output parameter in your mapping.
Assign the result of your lookup in an expression transformation to the parameter using SetMaxVariable
Use the parameter in your target pre SQL as
delete from sample_db.tbl_emp where emp_year = $$parameter
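For example (the parameter and field names below are made up; adjust them to your mapping): in the Expression transformation, add an output field whose expression pushes the lookup value into the in-out parameter,
SETMAXVARIABLE($$DeleteYear, lkp_emp_year)
and then reference the same parameter in the target pre SQL:
delete from sample_db.tbl_emp where emp_year = $$DeleteYear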
Let me know if you have any further questions

BIRT: how to create multiple line charts dynamically in BIRT report

I have a BIRT report where I need to show test case id and name in two columns (which I did with a dataset). Now I need to create a line chart for each test case dynamically, and the number of test cases is not fixed. So I want to write a script which will create multiple line charts, one per test case, plotted against the date of execution. First question: is this feasible? If yes, how do I do it?
If I understand you correctly, you want to add test cases as new "series" in the chart?
You can add new series inside a script, but I'm not sure which phase (event) will be best for it; you'll need to experiment. BUT this script will be part of your chart, so you won't be able to just access data from the table easily. To make that possible, you will need to expose the data as a global variable BEFORE the chart starts rendering.
Sorry, I cannot give you a sample solution; I do not have the IDE on this computer, so I cannot check which events and input parameters are available to choose from.

PostgreSQL: update a bytea field with an image from the desktop

I'm trying to write an UPDATE SQL statement in PostgreSQL (PG Commander) that will update a user profile image column.
I've tried this:
update mytable set avatarImg = pg_read_file('/Users/myUser/profile.png')::bytea where userid=5;
got ERROR: absolute path not allowed
Read the file in the client.
Escape the contents as bytea.
Insert into database as normal.
(Elaborating on Richard's correct but terse answer; his should be marked as correct):
pg_read_file is really only intended as an administrative tool, and per the manual:
The functions shown in Table 9-72 provide native access to files on the machine hosting the server. Only files within the database cluster directory and the log_directory can be accessed.
Even if that restriction didn't apply, using pg_read_file would be incorrect; you'd have to use pg_read_binary_file. You can't just read text and cast to bytea like that.
The path restrictions mean that you must read the file using the client application, as Richard says. Read the file from the client, set it as a bytea placeholder parameter in your SQL, and send the query.
Alternately, you could use lo_import to read the server-side file in as a binary large object, then read that as bytea and delete the binary large object.
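If you do take the lo_import route, a rough sketch of how it could look (the path and OID are illustrative; lo_import reads from the server's file system and needs the appropriate privileges, and lo_get requires PostgreSQL 9.4 or newer):
SELECT lo_import('/path/on/the/server/profile.png');  -- returns the new large object's OID, say 16404
UPDATE mytable SET avatarImg = lo_get(16404) WHERE userid = 5;  -- lo_get returns bytea
SELECT lo_unlink(16404);  -- clean up the temporary large object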
pg_read_file can only read files from the data directory path. If you would like to know your data directory path, use:
SHOW data_directory;
For example, it will show:
/var/lib/postgresql/data
Copy your file to the directory mentioned.
After that you can use just the file name in your query.
UPDATE student_card SET student_image = pg_read_file('up.jpg')::bytea;
or can use pg_read_binary_file function.
UPDATE student_card SET student_image = pg_read_binary_file('up.jpg')::bytea;

SSIS - Join tables from different servers whose names are based on a variable

I have a simple query based on tables from two different linked servers. I need both servers to be changeable, since we're moving from DEV to UAT to Production. I'm using an expression to set the Connection String and Password for server A. So, using that as a base, I set up a Data Flow Task and an 'OLE DB Source' to extract the data I need. Ultimately I'd like my query to look like this:
Select * from A.Payments p1
Full Outer Join ?.Payments p2 on p1.Id = p2.Id
where p1.OrderDesc is null or p2.OrderDesc is null
Is there a way around it? Can I use a variable or some kind of dynamic query? I haven't managed to pass a project parameter and run one. Thank you very much for your help.
This is done by making the Data Flow Source's SQL command an expression.
Right-click the Data Flow Task and then click the ellipsis [...] beside "Expressions". In there you will find that one of the available properties you can set is the SqlCommand for your Data Flow Source.
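For example, the expression you would put on the source's SqlCommand property might look roughly like this, where @[$Project::LinkedServerB] is a made-up project parameter holding the name of the second linked server:
"Select * from A.Payments p1 Full Outer Join [" + @[$Project::LinkedServerB] + "].Payments p2 on p1.Id = p2.Id where p1.OrderDesc is null or p2.OrderDesc is null"
The whole thing is one string expression, so the parameter just gets concatenated into the SQL text when the package runs.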
It's not the most intuitive thing, to be fair.
