I am using the following code to query a row from two joined tables:
s.database.GetDatabase().Model(&dbUser).Related(&result, "Registrations")
However, when I use Select as follows:
s.database.GetDatabase().Model(&dbUser).Select("name").Related(&result, "Registrations")
it shows the following error and returns zero records:
(pq: zero-length delimited identifier at or near """")
I am trying to execute a SQL query on an Oracle database and insert the result into another table. For my trial I am just performing a simple query:
SELECT 1 AS count
FROM dual
and trying to insert the result into a single-column table whose column is named COUNT.
The content of the record in NiFi seems to be as follows:
[
{
"COUNT" : "1"
}
]
but the logs keep throwing the error:
due to java.sql.SQLDataException:
None of the fields in the record map to the columns defined by
the schema_name.table_name table:
Any ideas?
I believe you get that same error message if your table name doesn't match. The Translate Field Names property only translates the fields (columns), not the table name. Try specifying the schema/table in uppercase to match what Oracle is expecting.
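As a quick illustration of that behaviour (my sketch, not from the thread; schema_name and table_name are placeholders): Oracle folds unquoted identifiers to uppercase in its catalog, so a quoted lowercase name refers to a different identifier.
-- Unquoted identifiers are folded to uppercase, so this table is
-- recorded in the catalog as SCHEMA_NAME.TABLE_NAME:
CREATE TABLE schema_name.table_name (count_col NUMBER);
-- Unquoted or uppercase references resolve fine:
SELECT * FROM SCHEMA_NAME.TABLE_NAME;
-- A quoted lowercase reference is a different identifier and fails:
SELECT * FROM "schema_name"."table_name";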
$data['establishments2'] = Establishments::join("establishment_categories", 'establishment_categories.establishment_id', '=', 'establishments.id')
    ->where('establishments.city', 'LIKE', $location)
    ->where('establishments.status', 0)
    ->whereIn('establishment_id', array($est_data))
    ->get(array('establishments.*'));
This is the condition in my controller.
I have two tables. In table1 I match id with table2 and then fetch data from table1, but table2 has multiple rows for the same table1 id, so the same table1 data is repeated multiple times. I want each table1 row returned only once, whether table2 has one matching row or many. Can anyone please tell me how to do that? Thank you.
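(As context, not from the original thread:) the repetition comes from the join itself; each matching establishment_categories row produces one output row. In plain SQL terms, a minimal sketch of the problem, and of collapsing the duplicates with DISTINCT, looks like this:
-- Each establishment appears once per matching category row:
SELECT establishments.*
FROM establishments
JOIN establishment_categories
  ON establishment_categories.establishment_id = establishments.id;
-- DISTINCT over only the establishments columns collapses the repeats:
SELECT DISTINCT establishments.*
FROM establishments
JOIN establishment_categories
  ON establishment_categories.establishment_id = establishments.id;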
You can do it by selecting only the field names you need.
Instead of getting all fields from the establishments table:
$data['establishments2'] = Establishments::join("establishment_categories", 'establishment_categories.establishment_id', '=', 'establishments.id')
    ->where('establishments.city', 'LIKE', $location)
    ->where('establishments.status', 0)
    ->whereIn('establishment_id', array($est_data))
    ->get(array('establishments.*'));
you can select a specific field from the establishments table, like:
$data['establishments2'] = Establishments::join("establishment_categories", 'establishment_categories.establishment_id', '=', 'establishments.id')
    ->where('establishments.city', 'LIKE', $location)
    ->where('establishments.status', 0)
    ->whereIn('establishment_id', array($est_data))
    ->get(array('establishments.fieldName'));
Or you can also do:
$data['establishments2'] = Establishments::join("establishment_categories", 'establishment_categories.establishment_id', '=', 'establishments.id')
    ->where('establishments.city', 'LIKE', $location)
    ->where('establishments.status', 0)
    ->whereIn('establishment_id', array($est_data))
    ->select('establishments.fieldName')
    ->get();
I have the following data:
hive> select * from authors;
author1 ["book1,book2,book3"]
hive> describe authors;
author string
books array<string>
hive> select explode(books) as mycol from authors;
book1,book2,book3
When I use the explode function, the data is not split into rows.
This is probably because you have not declared the collection items termination clause when creating the table. Here is the syntax for creating this table:
CREATE TABLE IF NOT EXISTS authors (author string, books array<string>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY "|"
COLLECTION ITEMS TERMINATED BY ","
STORED AS TEXTFILE;
Then load the data:
LOAD DATA LOCAL INPATH '/home/cloudera/Desktop/hadoop/dummy' into table authors;
Also, please note that the collection item terminator and the field terminator must be different from each other: if you declare collection items to be separated by a comma, then the field terminator must be something other than a comma. Here I have declared the collection terminator as a comma and the field terminator as | (pipe). Below is the sample data:
author1|book1,book2,book3
author2|book4,book5,book6
Now run the select query; you can use a simple explode as well, and there is no need for split here:
hive> select * from authors;
authors.author authors.books
author1 ["book1","book2","book3"]
author2 ["book4","book5","book6"]
hive> select explode(books) as mycol from authors;
col
book1
book2
book3
book4
book5
book6
The output shows that the books array contains only a single element, the string "book1,book2,book3".
It should look like this:
["book1","book2","book3"] not ["book1,book2,book3"]
That is why explode generated a single row.
If you still want to explode it, use explode(split(books[0],',')).
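Written out as a full query against the authors table shown above (split's second argument is a regular expression, but a plain comma is fine here), that would be:
hive> select explode(split(books[0], ',')) as mycol from authors;
book1
book2
book3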
I am unable to append data to tables that contain an array column using INSERT INTO statements; the data type is array<varchar(200)>.
Using JDBC, I am unable to insert values into an array column with a VALUES clause like:
INSERT INTO demo.table (codes) VALUES (['a','b']);
It does not recognize the "[" or "{" signs.
Using the array function, like:
INSERT INTO demo.table (codes) VALUES (array('a','b'));
I get the following error when using the array function:
Unable to create temp file for insert values Expression of type TOK_FUNCTION not supported in insert/values
I tried the workaround:
INSERT into demo.table (codes) select array('a','b');
but it failed with:
Failed to recognize predicate '<EOF>'. Failed rule: 'regularBody' in statement
How can I load array data into columns using JDBC?
My table has two columns: a STRING and b ARRAY<STRING>.
When I used @Kishore Kumar Suthar's method, I got this:
FAILED: ParseException line 1:33 cannot recognize input near '(' 'a' ',' in statement
But I find another way, and it works for me:
INSERT INTO test.table
SELECT "test1", ARRAY("123", "456", "789")
FROM dummy LIMIT 1;
dummy is any table which has at least one row.
Make a dummy table which has at least one row.
INSERT INTO demo.table (codes) VALUES (array('a','b')) from dummy limit 1;
hive> select codes from demo.table;
OK
["a","b"]
Time taken: 0.088 seconds, Fetched: 1 row(s)
Suppose I have a table employee containing the fields ID and Name.
I create another table employee_address with fields ID and Address, where Address is a complex type, array<string>.
Here is how I can insert values into it:
insert into table employee_address
select 1, ARRAY('NewYork','11th avenue') from employee limit 1;
Here the table employee just acts as a dummy table. No data is copied from it. Its schema may not match employee_address. It doesn't matter.
I am new to Hadoop and Hive and have just started doing basic querying in Hive.
My situation: I have an input text file with a large number of fields per line. The format of the file is something like this:
1;23;0;;;;1;3;2;1;1;4;5;6;;;;
1;43;6;;;;1;3;2;1;1;4;5;5;;;;
1;53;7;;;;1;3;2;1;1;4;5;2;;;;
(Each integer before a ";" has a meaning, which I intend to map to a column name in the Hive table; each line contains about 400 fields.)
So to insert this data I created a table "test" using the following query:
CREATE TABLE test (field1 INT, field2 INT, field3 INT, field4 INT, ... field390 INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY "\073";
And I load my text file with the records using the LOAD query below ("\073" is the octal escape for the semicolon delimiter):
LOAD DATA LOCAL INPATH '/tmp/test.txt'
OVERWRITE INTO TABLE test;
For now, the first 50 fields are inserted into the table accurately; beyond that I get mismatches.
The reason lies in my input format: the 50th field in test.txt is an INT that decides how many of the following fields to take.
Example:
50th field is 2 -> Hive has to take the next 2*10 INT field values and insert them into the table.
50th field is 1 -> Hive has to take the next 1*10 INT field values and insert them into the table; the remaining 10 fields can be set to NULL.
(The maximum value of the 50th field is 2, so I have reserved 2*10 fields for this in the table.)
After the 50th + (2*10) fields, the data should be read normally, in the same sequence as before the 50th field.
Is there a way to put a condition on the input so that the data gets inserted into Hive accordingly?
Any help would be appreciated. I need a solution that does not require pre-processing test.txt before loading it into the table.
I have tried to answer it at http://www.knowbigdata.com/page/hive-hadoop-need-load-data-table-based-conditions-input-file#comment-85
Does it make sense?
You can use a where clause in Hive.
First load the data into a raw Hive table or into HDFS, then create the final table and load it based on a where clause.
That is:
SELECT City FROM table_reference
WHERE name LIKE "%venu%"
GROUP BY City;
Resource: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select
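For the layout in the question above, that two-step approach could look roughly like the sketch below. This is only an outline under assumptions: the staging table raw_lines and its line column are invented names, the selector is the 50th field (0-based index 49), and only a few representative columns are shown since the real table has about 400.
-- Stage each line whole, then split on ';' and index conditionally.
-- (If the CLI complains about the ';' inside the string literal,
-- use the octal escape '\073' instead, as in the question's DDL.)
CREATE TABLE raw_lines (line STRING);
LOAD DATA LOCAL INPATH '/tmp/test.txt' OVERWRITE INTO TABLE raw_lines;
-- When the selector is 1, the second group of 10 fields is absent,
-- so every later field shifts left by 10 positions.
SELECT
  CAST(fields[0]  AS INT) AS field1,
  CAST(fields[49] AS INT) AS field50,                 -- the selector
  CAST(fields[50] AS INT) AS field51,                 -- first optional group
  CASE WHEN CAST(fields[49] AS INT) = 2
       THEN CAST(fields[60] AS INT) END AS field61,   -- NULL when selector = 1
  CASE WHEN CAST(fields[49] AS INT) = 2
       THEN CAST(fields[70] AS INT)
       ELSE CAST(fields[60] AS INT) END AS field71    -- shifted when selector = 1
FROM (SELECT split(line, ';') AS fields FROM raw_lines) t;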