ScadaLts : Where to get the sample data sql scripts - sample-data

After executing "createTables-mysql.sql" and inserting data into the 'systemSettings' and 'users' tables:
INSERT INTO `users` VALUES (1,'admin','0DPiKuNIrrVmD8IUCuw1hQxNqZc=','admin#yourMangoDomain.com','','Y','N',1275399205446,1,NULL,0,'N');
INSERT INTO `systemSettings` VALUES ('databaseSchemaVersion','1.12.4');
I finally succeeded in running Scada-LTS.
But where can I find some sample data for Scada-LTS?

Related

load NULL data into different table with the help of SQL*Loader

I have a CSV which contains 8 columns. A few of the rows contain NULL data.
To load the CSV data, I have created 2 tables with the same definition:
1) TABLE_NOT_NULL, to load the not-NULL data
2) TABLE_NULL, to load the NULL data
I am able to load data into TABLE_NOT_NULL successfully with the WHEN condition below:
insert into table '<TABLE_NAME>' when '<COLUMN_NAME>'!=' '
Now I want to load the NULL data into TABLE_NULL, but I am not able to filter out only the NULL values with a WHEN condition.
I tried many things, but none of them worked, for example:
a) insert into table '<TABLE_NAME>' WHEN '<COLUMN_NAME>'=BLANKS
b) insert into table '<TABLE_NAME>' WHEN '<COLUMN_NAME>'=' '
Can anyone suggest a workaround or solution?
Workaround?
1
Load everything into TABLE_NULL, then move the non-NULL rows across:
insert into TABLE_NOT_NULL select * from TABLE_NULL where column is not null;
delete from TABLE_NULL where column is not null;
2
Load everything into TABLE_NOT_NULL; rows that contain NULL values won't be loaded but will end up in the BAD file.
Then, using another control file, load the BAD file into TABLE_NULL.
3 (EDIT)
Instead of SQL*Loader, create an external table - it acts as if it were an ordinary Oracle table, but is really just a pointer to the file.
You'd then write 2 INSERT statements:
insert into table_not_null
select * from external_table where column is not null;
insert into table_null
select * from external_table where column is null;
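For option 3, an external table definition could look roughly like the sketch below. The directory name, file name, and column names are assumptions; adjust them to match your 8-column CSV:

```sql
-- Hypothetical sketch of an Oracle external table over the CSV.
-- data_dir must be an Oracle DIRECTORY object pointing at the file's folder.
CREATE TABLE external_table (
  col1 VARCHAR2(100),
  col2 VARCHAR2(100)
  -- ... remaining columns of the 8-column CSV
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('data.csv')
);
```

Once the external table exists, the two plain INSERT ... SELECT statements above split the rows by NULL/not-NULL with no WHEN-clause gymnastics.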

add partition in hive table based on a sub query

I am trying to add a partition to a Hive table (partitioned by date).
My problem is that the date needs to be fetched from another table.
My query looks like:
ALTER TABLE my_table ADD IF NOT EXISTS PARTITION(server_date = (SELECT max(server_date) FROM processed_table));
When I run the query, Hive throws the following error:
Error: Error while compiling statement: FAILED: ParseException line 1:84 cannot recognize input near '(' 'SELECT' 'max' in constant (state=42000,code=40000)
Hive does not allow functions, UDFs, or subqueries in the partition specification; the partition value must be a constant.
Approach 1:
Run the first query, store the result in a shell variable, and then execute the ALTER TABLE statement with that variable:
server_date=$(hive -e "set hive.cli.print.header=false; select max(server_date) from processed_table;")
hive -hiveconf "server_date"="$server_date" -f your_hive_script.hql
Inside your script you can use the following statement:
ALTER TABLE my_table ADD IF NOT EXISTS PARTITION(server_date=${hiveconf:server_date});
For more information, see the Hive documentation on variable substitution.
Approach 2:
In this approach, you will need to create a temporary table if the partition data you are expecting is not already loaded into another partitioned table.
Assuming your data doesn't have the server_date column:
Load the data into a temporary table and enable dynamic partitioning:
set hive.exec.dynamic.partition=true;
Then execute the query below:
INSERT OVERWRITE TABLE my_table PARTITION (server_date)
SELECT b.column1, b.column2, ..., a.server_date AS server_date FROM (SELECT max(server_date) AS server_date FROM processed_table) a, my_table b;

How to use one SQL insert to put data into two tables?

I have two tables, and they are connected by one field: B_ID of table A and id of table B.
I want to use SQL to insert data into these two tables.
How do I write the insert SQL?
1. id in table B is auto-incremented.
2. In a clumsy way, I can insert data into table B first, then select the id from table B, and then add that id to table A as message_id.
You cannot insert data into multiple tables in one SQL statement. Just insert the data into table B first and then into table A. You can use the RETURNING clause to get the ID value and get rid of the additional select statement between the inserts.
See: https://oracle-base.com/articles/misc/dml-returning-into-clause
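A minimal PL/SQL sketch of the RETURNING approach; only B_ID and id come from the question, the other column names are assumptions:

```sql
DECLARE
  v_id TableB.id%TYPE;
BEGIN
  -- Insert the parent row; RETURNING captures the auto-generated id
  INSERT INTO TableB (some_col) VALUES ('example')
  RETURNING id INTO v_id;

  -- Use the captured id as the foreign key in the child row
  INSERT INTO TableA (B_ID, message_id) VALUES (v_id, v_id);
  COMMIT;
END;
/
```

This keeps the two inserts in one atomic block and avoids the extra SELECT between them.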
Have you heard about the AFTER INSERT trigger? I think it is what you are looking for.
Something like this might do what you want:
CREATE OR REPLACE TRIGGER TableB_after_insert
AFTER INSERT
ON TableB
FOR EACH ROW
BEGIN
  -- :NEW.id holds the id of the row just inserted into TableB;
  -- use it to insert the matching row into TableA.
  INSERT INTO TableA (B_ID) VALUES (:NEW.id);
END;
/
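With such a trigger in place, a single insert into TableB is enough; the TableA row is created automatically. A hypothetical usage sketch (some_col is an assumed column name):

```sql
-- The application only inserts into TableB...
INSERT INTO TableB (some_col) VALUES ('example');
-- ...and the AFTER INSERT trigger creates the matching TableA row.
```

Note the trade-off versus the RETURNING approach: the trigger hides the second insert from callers, which is convenient but can surprise anyone who later inserts into TableB without expecting side effects.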

Insert data to table column from column of different table

I am using Oracle Database.
I have a view called VW_MREQ.
It has the following columns:
M_Product_ID
AD_Client_ID
AD_ORG_ID
It has records in it.
I also have an empty table called M_Requisition.
It has the following columns:
M_Product_ID
AD_Client_ID
AD_ORG_ID
DESCRIPTION
CREATEDBY
I am writing a procedure and would like to insert data manually into M_Requisition. The foreign key is M_Product_ID, and I want AD_Client_ID and AD_ORG_ID to be the same as in VW_MREQ as I insert M_Product_ID into M_Requisition manually.
INSERT INTO M_Requisition(M_Product_ID, AD_Client_ID, AD_ORG_ID, DESCRIPTION, CREATEDBY) VALUES(123, ?? , ??, 'Insert Data', 'Me')
I plan to use SELECT INTO but am still confused about how to arrange it, as I am a newbie in Oracle.
Your help will be useful.
You could use the insert-select syntax, and just query the hardcoded values as literals:
INSERT INTO M_Requisition
(M_Product_ID, AD_Client_ID, AD_ORG_ID, DESCRIPTION, CREATEDBY)
(SELECT 123, AD_Client_ID, AD_ORG_ID, 'Insert Data', 'Me'
FROM VW_MREQ)
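Note that an insert-select like the one above inserts one row per row of VW_MREQ. If the view holds many products, you would typically filter it down to the relevant row; the WHERE clause below is an assumption about how the view is keyed:

```sql
-- Sketch: copy client/org values only from the row matching the product
INSERT INTO M_Requisition
  (M_Product_ID, AD_Client_ID, AD_ORG_ID, DESCRIPTION, CREATEDBY)
SELECT 123, AD_Client_ID, AD_ORG_ID, 'Insert Data', 'Me'
FROM VW_MREQ
WHERE M_Product_ID = 123;
```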

How to insert init-data into a table in Hive?

I wanted to insert some initial data into a table in Hive, so I created the HQL below:
INSERT OVERWRITE TABLE table PARTITION(dt='2014-06-26') SELECT 'key_sum' as key, '0' as value;
but it does not work.
There is another query like the one above:
INSERT OVERWRITE TABLE table PARTITION(dt='2014-06-26') SELECT 'key_sum' as key, '0' as value FROM table limit 1;
But it also didn't work; I see that the tables are empty.
How can I set the initial data into the table?
(There is a reason why I have to do a self-join.)
The first HQL fails because it is missing a FROM clause:
INSERT OVERWRITE TABLE table PARTITION(dt='2014-06-26') SELECT 'key_sum' as key, '0' as value;
Regarding the second HQL, the table in the FROM clause must have at least one row, so that the constant init values can be written into your newly created table:
INSERT OVERWRITE TABLE table PARTITION(dt='2014-06-26') SELECT 'key_sum', '0' FROM table limit 1;
You can use any old Hive table that has data in it and give it a try.
The following query works fine if the test table has already been created in Hive:
INSERT OVERWRITE TABLE test PARTITION(dt='2014-06-26') SELECT 'key_sum' as key, '0' as value FROM test;
I think the table we select FROM should be created (and populated) first.
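If you are on Hive 0.14 or later, you can also insert literal rows directly, with no source table at all; a sketch, reusing the table and partition from the question:

```sql
-- Hive 0.14+ supports INSERT ... VALUES, so no FROM-table workaround is needed
INSERT INTO TABLE test PARTITION (dt='2014-06-26')
VALUES ('key_sum', '0');
```

On older Hive versions, the FROM-a-populated-table trick above remains the usual workaround.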
