Update only one column in a MySQL table using Sqoop

I have a MySQL table with 5 columns (a, b, c, d, e), with "a" being the primary key. I have a CSV file containing values for only the "a" and "d" columns, and I want to update the values of column "d" based on the values of "a" present in the file. Can this be achieved using Sqoop? If my CSV file has data for all the columns, I am able to export the data from the file to the table and update all the rows using "--update-key" as "a". Is it possible to update only one column's data?

As far as I know, there is no direct command to do that. However, a possible solution is to create a MySQL staging table with just the two columns "a" and "d", load the data into it using Sqoop, and then run an UPDATE with a join between the staging table and the final table to update the specific column.
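A minimal sketch of that approach, assuming a staging table named d_staging in a database mydb, the CSV sitting in HDFS at /user/me/d_updates, and placeholder connection details (none of these names are from the original question):

sqoop export \
  --connect jdbc:mysql://dbhost/mydb \
  --username myuser -P \
  --table d_staging \
  --export-dir /user/me/d_updates \
  --input-fields-terminated-by ',' \
  --columns "a,d"

Then, in MySQL, update the real table from the staging table:

UPDATE final_table f
JOIN d_staging s ON f.a = s.a
SET f.d = s.d;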

Related

How to create a table in HBase shell without passing values & rowid?

hello "I m new to Hbase.. My question is How to create a table in hbase with column family & column names inside the columnfamily without passing values and row key?Is it possible to create that table in hbase shell?
In Sql we create a table first and later we add data ..same thing how can we do it in hbase?
HBase is a NoSQL key-value database. Tables can be created just by specifying a table name and a column family, for example create "sampletable","m", where sampletable is the table name and m is the column family. You only declare column families at creation time; individual column qualifiers come into existence when you insert data, so you cannot pre-declare column names. If you want to use SQL queries on HBase, try Apache Phoenix.
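A short shell session illustrating this (table, row, and qualifier names are placeholders):

hbase> create 'sampletable', 'm'
hbase> put 'sampletable', 'row1', 'm:name', 'some value'
hbase> scan 'sampletable'

The column m:name is created implicitly by the put; nothing about it was declared in the create statement.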

How to drop a Hive column?

I have two columns, Id and Name, in a Hive table, and I want to delete the Name column. I used the following command:
ALTER TABLE TableName REPLACE COLUMNS(id string);
The result was that the Name column's values were assigned to the Id column.
How can I drop a specific column of the table, and is there any other command in Hive to achieve my goal?
In addition to the existing answers to the question: Alter hive table add or drop column.
As per the Hive documentation,
REPLACE COLUMNS removes all existing columns and adds the new set of columns.
REPLACE COLUMNS can also be used to drop columns. For example, ALTER TABLE test_change REPLACE COLUMNS (a int, b int); will remove column c from test_change's schema.
The query you are using is right, but it modifies only the schema, i.e. the metastore; it does not modify anything on the data side.
So, before you drop the column, you should make sure that you have the correct data file.
In your case the data file should not contain Name values.
If you don't want to modify the file, then create another table with only the specific column you need:
CREATE TABLE tablename AS SELECT id FROM already_existing_table;
Let me know if this helps.
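For completeness, a sketch of the full CTAS workaround that ends up with the original table name (tablename and tablename_new are placeholders):

CREATE TABLE tablename_new AS SELECT id FROM tablename;
DROP TABLE tablename;
ALTER TABLE tablename_new RENAME TO tablename;

This rewrites the data, so the remaining file contains only the Id values and the metastore and the data stay in sync.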

Hive: first column to consider in a partitioned table

When creating a partitioned table in Hive, is it mandatory to always choose the last column as the partition column?
If I choose the 1st column as the partition column, I can't filter the data; is there any way to choose the first column for partitioning?
In Hive, if you want to partition a table, you have to define the partition column at table creation time, and while populating data into the table you need to specify it as follows:
INSERT INTO partitioned_table PARTITION(status) SELECT id, name, status FROM temp_tbl;
With dynamic partitioning the partition column must come last in the SELECT list, so in this way you can partition based on the last column only. If you want to partition on the basis of the first column, you have to write a MapReduce job for that; that is the only option available.
I guess the problem you are facing is that you already have a source file on your local system or HDFS and you want to load it into a partitioned table, with the first column of the source data as the partition column in Hive. As the source file does not have headers, I guess we cannot do anything here if we try to upload the file directly into the Hive destination folder. The only alternative way I know is to create a non-partitioned table in Hive whose structure exactly matches the source file, load the source data into the non-partitioned table first, and then copy the data from the non-partitioned table into the partitioned table.
Suppose the partitioned target table is:
create table source(eid int, ename string, esal int) partitioned by (dept string)
Your non-partitioned table, where you upload the data, is:
create table nopart(dept string, esal int, ename string, eid int)
Then you use the dynamic partition insert:
insert overwrite table source partition(dept) select eid, ename, esal, dept from nopart;
The order of the columns in the SELECT is the only point here: the partition column must come last.
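A sketch of the full sequence, assuming a comma-delimited file at the hypothetical HDFS path /data/emp.csv; note that dynamic partitioning must be enabled before the insert:

create table nopart(dept string, esal int, ename string, eid int)
  row format delimited fields terminated by ',';
load data inpath '/data/emp.csv' into table nopart;
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
insert overwrite table source partition(dept)
  select eid, ename, esal, dept from nopart;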

Load multiple files' content into a table using SQL*Loader

How do you insert data from multiple files having different columns into a table in an Oracle database using SQL*Loader with a single control file?
Basically,
we have 3 CSV files:
file 1 having columns a, b, c
file 2 having columns d, e, f
file 3 having columns g, h, i
We need to insert the above attributes into a table named "TableTest"
having columns a, b, c, d, e, f, g, h, i,
using a single control file.
Thanks in advance.
You really can't. You can either splice the .csv files together (a lot of nasty work) or create 3 staging tables to load into and then use PL/SQL or SQL to join them together into your target table.
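A sketch of the second option, assuming each file (and thus each staging table) also carries a shared key column id to join on; the files as described share no common column, so that key is an assumption:

-- load each file into its own staging table with its own control file, then:
INSERT INTO TableTest (a, b, c, d, e, f, g, h, i)
SELECT s1.a, s1.b, s1.c, s2.d, s2.e, s2.f, s3.g, s3.h, s3.i
FROM stage1 s1
JOIN stage2 s2 ON s2.id = s1.id
JOIN stage3 s3 ON s3.id = s1.id;

Without such a key there is no reliable way to line the three files' rows up.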

Loading multiple concatenated CSV files into Oracle with SQLLDR

I have a dump of several PostgreSQL tables in one self-contained CSV file which I want to import into an Oracle database with a matching schema. I found several posts on how to distribute data from one CSV "table" to multiple Oracle tables, but my problem is several different CSV "tables" in the same file.
Is it possible to specify table separators or somehow mark new tables in an SQL*Loader control file, or do I have to split up the file manually before feeding it to SQL*Loader?
That depends on your data. How do you determine which table a row is destined for? If you can determine the table based on data in the row, then it is fairly easy to do with a WHEN clause:
LOAD DATA
INFILE bunchotables.dat
INTO TABLE foo WHEN somecol = 'pick me, pick me' (
...column defs...
)
INTO TABLE bar WHEN somecol = 'leave me alone' (
... column defs
)
If you've got some sort of header row that determines the target table, then you are going to have to split the file beforehand with another utility.
