How do I delete all rows of a database table using NiFi Processor? [duplicate] - apache-nifi

I have a table with two columns:
cattlegendernm | price
---------------+-------
Female         | 10094
Female         | 12001
Male           | 12704
I would like to add another column, filename, to this table using a NiFi processor.
Which processor and SQL query should I use for this?

A PutSQL processor is suitable for issuing a DDL statement such as
ALTER TABLE T ADD COLUMN filename VARCHAR(150)
Put the statement in the SQL Statement attribute on the processor's Properties tab, as illustrated below,
where PostGRES_DB represents a pre-configured controller service for interacting with the database.
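As a minimal sketch of what the statement does once PutSQL issues it, here is the same DDL plus a follow-up UPDATE run against an in-memory SQLite database (the question targets Postgres; the table contents are from the question, and 'cattle.csv' is a hypothetical filename):

```python
import sqlite3

# Recreate the question's table in SQLite for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (cattlegendernm VARCHAR(50), price INT)")
conn.executemany("INSERT INTO T VALUES (?, ?)",
                 [("Female", 10094), ("Female", 12001), ("Male", 12704)])

# The DDL from the answer: add the new column ...
conn.execute("ALTER TABLE T ADD COLUMN filename VARCHAR(150)")
# ... and a follow-up UPDATE to populate it for the existing rows.
conn.execute("UPDATE T SET filename = ?", ("cattle.csv",))

# Every existing row now carries the filename value.
print(conn.execute("SELECT cattlegendernm, price, filename FROM T").fetchall())
```

In NiFi the UPDATE would typically be a second statement handed to PutSQL, with the filename taken from a FlowFile attribute.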

Related

Laravel - Show update message after update using query builder [duplicate]

When updating a table in MySQL, for example:
Table user
user_id | user_name
--------+----------
1       | John
2       | Joseph
3       | Juan
If I run the query
UPDATE `user` SET user_name = 'John' WHERE user_id = 1
Will MySQL write the same value again or ignore it since it's the same content?
As the MySQL manual for the UPDATE statement states,
If you set a column to the value it currently has, MySQL notices this
and does not update it.
So, if you run this query, MySQL will understand that the value you're trying to apply is the same as the current one for the specified column, and it won't write anything to the database.
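This is visible in the standard mysql client's statement feedback, which distinguishes rows matched from rows changed. A sketch of the session output for the query above (assuming user_id 1 already holds 'John'):

```sql
UPDATE `user` SET user_name = 'John' WHERE user_id = 1;
-- Query OK, 0 rows affected
-- Rows matched: 1  Changed: 0  Warnings: 0
```

The row matched the WHERE clause, but since the value was identical, nothing was changed.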

How to create a table in HBase shell without passing values & row key?

Hello, I'm new to HBase. My question is: how do I create a table in HBase with a column family and column names inside the column family, without passing values and a row key? Is it possible to create such a table in the HBase shell?
In SQL we create a table first and add data later; how can we do the same thing in HBase?
HBase is a NoSQL key-value database. A table is created by specifying just a table name and a column family, for example create 'sampletable', 'm', where sampletable is the table name and m is the column family. Column names (qualifiers) are not part of the table schema; they come into existence when values are written. If you want to use SQL queries on HBase, try Apache Phoenix.
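A minimal HBase shell session for the example above (the # comments follow the shell's JRuby syntax; 'row1' and the qualifier m:name are hypothetical):

```
create 'sampletable', 'm'        # table name + column family only
describe 'sampletable'           # shows the family 'm', no columns
put 'sampletable', 'row1', 'm:name', 'abc'   # the column m:name appears only now
```

So the "create first, add data later" workflow from SQL maps to: create the table with its families, then let columns materialize with each put.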

incremental import in sqoop on a table with jumbled data and no modified time column

Suppose I have a table Customer:
CustomerID | CustomerName | CustomerBill
-----------+--------------+-------------
7          | John         | 100
2          | Bill         | 500
4          | Mark         | 200
Here CustomerID is the primary key but the records are in no particular order. There is no modified time column in the corresponding table in the database. The previous entries can change as well. How do I do incremental imports on the data?
The database I am using is Sybase, and I am importing into Hive.
Records are in no particular order, so append mode cannot be used.
There is no modified-time column in the corresponding table, so lastmodified mode cannot be used.
Sqoop does nothing special here: it needs an incrementing ID or an updated timestamp to build a SQL query that fetches only the inserted/updated records.
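With neither incremental mode available, the usual fallback is a periodic full re-import that overwrites the Hive table. A sketch of such a job (the jConnect URL, credentials, and Hive table name are placeholders):

```
sqoop import \
  --connect 'jdbc:sybase:Tds:dbhost:5000/mydb' \
  --username user -P \
  --table Customer \
  --hive-import --hive-overwrite \
  --hive-table customer \
  -m 1
```

This costs a full table scan per run, but it is the only way to pick up updates to old rows when the source offers no change-tracking column.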

How to make CREATE TABLE...AS SELECT in Hive not populate data?

When I run a CTAS statement in Hive, the data is populated at the same time. I just want to create the table without populating it. What should I do? Thanks.
You can do that by using the LIKE keyword.
CREATE TABLE new_table_name LIKE old_table_name;
This will create the table structure without the data.
Alternatively, use CREATE EXTERNAL TABLE instead of CREATE TABLE; note the EXTERNAL keyword.
Or use a WHERE condition in the SELECT statement with a predicate that fetches no records from Hive.
Example: table demo1
id | name | country
---+------+--------
1  | abc  | India
2  | xyz  | Germany
3  | pqr  | France
In CREATE TABLE...AS SELECT in Hive:
CREATE TABLE demo2 AS SELECT id, name, country FROM demo1 WHERE id = 0;
Here the WHERE condition id = 0 matches nothing in the data above, so the SELECT fetches no records. Similarly, choose any value in the WHERE condition that returns no records, and no data will be inserted into the newly created table.
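The same trick, sketched against SQLite rather than Hive (any predicate that can never match works; WHERE 0 = 1 makes the intent explicit without depending on the data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo1 (id INT, name TEXT, country TEXT)")
conn.executemany("INSERT INTO demo1 VALUES (?, ?, ?)",
                 [(1, "abc", "India"), (2, "xyz", "Germany"), (3, "pqr", "France")])

# CTAS with a predicate that matches no rows: the column layout is
# copied from the SELECT, but no rows are inserted.
conn.execute("CREATE TABLE demo2 AS SELECT id, name, country FROM demo1 WHERE 0 = 1")

cols = [c[1] for c in conn.execute("PRAGMA table_info(demo2)")]
rows = conn.execute("SELECT * FROM demo2").fetchall()
print(cols, rows)   # same three columns as demo1, zero rows
```

Unlike the id = 0 version, this stays empty even if a row with id 0 is ever added to the source table.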
@Sunil's answer helped me as well; I am just posting an addition that was necessary in my case.
The source table was in Avro format but the new one I wanted in ORC, hence,
CREATE TABLE dataaggregate_orc_empty LIKE dataaggregate_avro_compressed
  ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
  STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
  TBLPROPERTIES ('orc.compress'='ZLIB');
The above step can be split in two steps, if required :
CREATE TABLE dataaggregate_orc_empty LIKE dataaggregate_avro_compressed;
ALTER TABLE dataaggregate_orc_empty SET FILEFORMAT ORC;
I would be glad if someone could provide input on the data-format changes that occur in this process and any related problems.

dependent Lov in oracle forms 10g

I have a database block based on a table with two columns, order_no and order_name, which currently has no values inserted. The block's table is called xx_customer_details. I need to create an LOV on the order_no column using the Oracle table customer_details. My requirement: when I select an order_no value from the LOV, the order_name column should be populated automatically.
Thanks
For this, you simply create a record group with the required columns:
select order_no, order_name from xx_customer_details;
Now create an LOV and assign this record group to it.
In the LOV properties you will find the Column Mapping Properties; map the columns accordingly.
Finally, attach the LOV to your text item, and you are done.
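If you prefer to build the record group at runtime rather than at design time, the same query can be issued from a trigger such as WHEN-NEW-FORM-INSTANCE using the Forms built-ins. A sketch (the names rg_orders and lov_orders are hypothetical; the LOV itself must already exist in the form):

```
DECLARE
  rg_id  RecordGroup;
  status NUMBER;
BEGIN
  -- Build the record group from the same query and populate it
  rg_id  := Create_Group_From_Query('rg_orders',
              'select order_no, order_name from xx_customer_details');
  status := Populate_Group(rg_id);
  -- Attach the group to the LOV defined in the form
  Set_LOV_Property('lov_orders', GROUP_NAME, 'rg_orders');
END;
```

The design-time route described above is simpler for a static query; the runtime route is useful when the query text must change per session.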
