dhtmlx column is not sorting

mygrid = new dhtmlXGridObject('mygrid_container');
mygrid.setImagePath("codebase/imgs/");
mygrid.setHeader("Col1, Col2, Col3, Col4, Col5, Col6, Col7");
mygrid.setInitWidths("*,*,*,*,*,*,*");
mygrid.setColAlign("left,left,left,left,left,left,left");
mygrid.setSkin("light");
mygrid.setColSorting("str, int, int, int, int, int, int");
mygrid.init();
I have this working: sorting works on the first column, and the sorting caret appears on all of the other columns, but clicking them doesn't sort the rows. I am watching the console with Firebug and I don't see any errors.

Found it. The spaces after the commas in the setColSorting list break the sort-type parsing.
WRONG
mygrid.setColSorting("str, int, int, int, int, int, int");
Remove the spaces.
RIGHT
mygrid.setColSorting("str,int,int,int,int,int,int");

Related

Hive insert query failing with error return code -101

I am trying to run a simple insert statement as below:
insert into table `bwc_test` partition(call_date)
select * from
`bwc_master`;
Then it fails with the below error:
INFO : Loading data to table dtc.bwc_test partition (call_date=null) from /apps/hive/warehouse/dtc.db/bwc_test/.hive-staging_hive_2018-11-13_19-10-37_084_8697431764330812894-1/-ext-10000
Error: Error while processing statement: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.MoveTask. HIVE_LOAD_DYNAMIC_PARTITIONS_THREAD_COUNT (state=08S01,code=-101)
Table definition for bwc_master:
CREATE TABLE `bwc_master`(
unique_id bigint,
customer_id string,
direction string,
call_date_time timestamp,
duration int,
billed_duration int,
retail_rate decimal(9,7),
retail_cost decimal(19,7),
billed_tier smallint,
call_type tinyint,
record_status tinyint,
aggregate_id bigint,
originating_ipaddress string,
originating_number string,
destination_number string,
lrn string,
ocn string,
destination_rate_center string,
destination_lata int,
billed_prefix string,
rate_id string,
wholesale_rate decimal(9,7),
wholesale_cost decimal(19,7),
cnam_dipped boolean,
billed_number_type tinyint,
source_lata int,
source_ocn string,
location_id string,
sippeer_id int,
rate_attempts tinyint,
source_state string,
source_rc string,
destination_country string,
destination_state string,
destination_ip string,
carrier_id string,
rated_date_time timestamp,
partition_id smallint,
encryption_rate decimal(9,7),
encryption_cost decimal(19,7),
trans_coding_rate decimal(9,7),
trans_coding_cost decimal(19,7),
file_name string,
call_id string,
from_tag string,
to_tag string,
unique_record_id string)
PARTITIONED BY (
`call_date` date)
CLUSTERED BY (
customer_id)
INTO 10 BUCKETS
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://*****/apps/hive/warehouse/dtc.db/bwc_master'
Can someone help me debug this? I didn't find anything in the logs.
You're missing the "table" keyword before `bwc_test`; the insert should be:
insert into table `bwc_test` partition(call_date)
select * from
`bwc_master`;
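If the keyword is already there, note that partition(call_date) with no value is a dynamic-partition insert, and Hive only allows those when dynamic partitioning is enabled for the session. A minimal sketch of the usual settings (these are standard Hive properties, not something from the original post):
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;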

Parse Exception EOF Hive

Query:
hive> CREATE TABLE GREENTAXI(VendorID INT, pick_up_date DATE,drop_date DATE,Flag CHAR(1),rate_code INT, pick_up_long STRING,pick_up_lat STRING,drop_off_long STRING,drop_off_lat STRING,passenger_count INT,trip_distance DECIMAL,fare_amount DECIMAL,Extra DECIMAL,Tax DECIMAL,Tip DECIMAL,Tolls INT,Fee INT,Surcharge DECIMAL,total_amount DECIMAL,payment_type INT,trip_type INT)COMMENT 'Data about Green NYC Taxi for the year 2016-Jan’ ROW FORMAT DELIMITED FIELDS TERMINATED BY ','STORED AS TEXTFILE;
I get a ParseException (EOF). Please advise.
This looks like a character-encoding problem: the comment string ends with a curly apostrophe (’) instead of a straight quote ('), so Hive never sees the string terminated and reads on to the end of the file. Retype the statement in a simple plain-text editor. I tried this and it worked:
CREATE TABLE greentaxi
(
vendorid INT,
pick_up_date DATE,
drop_date DATE,
flag CHAR(1),
rate_code INT,
pick_up_long STRING,
pick_up_lat STRING,
drop_off_long STRING,
drop_off_lat STRING,
passenger_count INT,
trip_distance DECIMAL,
fare_amount DECIMAL,
extra DECIMAL,
tax DECIMAL,
tip DECIMAL,
tolls INT,
fee INT,
surcharge DECIMAL,
total_amount DECIMAL,
payment_type INT,
trip_type INT
)
comment 'Data about Green NYC Taxi for the year 2016-Jan'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

Unable to load text data into Hive table as ORC through temporary Hive table

I want to load a .csv file into a Hive table stored as ORC. I came across one post that suggested a workaround, so I executed the queries below:
1) Creating and loading data as a text file into a temporary table:
CREATE TABLE IF NOT EXISTS CrimesData(
  ID int, Case_Number int, CrimeDate string, Block string, IUCR string,
  Primary_Type string, Description string, Location_Description string,
  Arrest string, Domestic string, Beat int, District int, Ward int,
  Community_Area int, FBI_Code string, X_Coordinate int, Y_Coordinate int,
  Year int, Updated_On string, Latitude decimal(10,10), Longitude decimal(10,10),
  CrimeLocation string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n'
tblproperties("skip.header.line.count"="1");
LOAD DATA LOCAL INPATH '/home/cloudera/Documents/CrimesData.csv' INTO TABLE CrimesData;
2) Creating a new table and specifying ORC data as the source:
CREATE TABLE IF NOT EXISTS CrimesDataORC(
  ID int, Case_Number int, CrimeDate string, Block string, IUCR string,
  Primary_Type string, Description string, Location_Description string,
  Arrest string, Domestic string, Beat int, District int, Ward int,
  Community_Area int, FBI_Code string, X_Coordinate int, Y_Coordinate int,
  Year int, Updated_On string, Latitude decimal(10,10), Longitude decimal(10,10),
  CrimeLocation string)
STORED AS ORC;
3) Insert data into the new table from temporary table:
INSERT INTO TABLE CrimesDataORC SELECT * FROM CrimesData;
The first two steps execute without any error, but step 3 throws the following error:
Error while processing statement: FAILED: Execution Error, return code
2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
I am running the above queries on Cloudera Manager Quickstart VM 5.8.
Not sure where I am going wrong, as the same steps for another table in the same database work as expected.
It might be data that doesn't comply with the table structure. Try adding a WHERE condition to the SELECT so you insert a subset of rows and can narrow down which data is failing, rather than inserting all of the data at once.
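For example, a sketch of that narrowing approach (the filter value and row count below are illustrative, not from the original post):
-- Insert one slice at a time to find the rows that make the MapRedTask fail
INSERT INTO TABLE CrimesDataORC
SELECT * FROM CrimesData
WHERE Year = 2016;
-- Or start with a small sample
INSERT INTO TABLE CrimesDataORC
SELECT * FROM CrimesData
LIMIT 100;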

Insert data of 2 Hive external tables in new External table with additional column

I have two external Hive tables, shown below. I populated them from Oracle using Sqoop.
create external table transaction_usa
(
tran_id int,
acct_id int,
tran_date string,
amount double,
description string,
branch_code string,
tran_state string,
tran_city string,
speendby string,
tran_zip int
)
row format delimited
stored as textfile
location '/user/stg/bank_stg/tran_usa';
create external table transaction_canada
(
tran_id int,
acct_id int,
tran_date string,
amount double,
description string,
branch_code string,
tran_state string,
tran_city string,
speendby string,
tran_zip int
)
row format delimited
stored as textfile
location '/user/stg/bank_stg/tran_canada';
Now I want to merge the data of the above two tables, as is, into one external Hive table with all the same fields, plus one extra column, source_table, that identifies which table each row came from. The new external table is as follows.
create external table transaction_usa_canada
(
tran_id int,
acct_id int,
tran_date string,
amount double,
description string,
branch_code string,
tran_state string,
tran_city string,
speendby string,
tran_zip int,
source_table string
)
row format delimited
stored as textfile
location '/user/gds/bank_ds/tran_usa_canada';
How can I do it?
You SELECT from each table, combine the results with a UNION ALL, and finally insert the combined result into your third table.
Below is the final Hive query:
INSERT INTO TABLE transaction_usa_canada
SELECT tran_id, acct_id, tran_date, amount, description, branch_code, tran_state, tran_city, speendby, tran_zip, 'transaction_usa' AS source_table FROM transaction_usa
UNION ALL
SELECT tran_id, acct_id, tran_date, amount, description, branch_code, tran_state, tran_city, speendby, tran_zip, 'transaction_canada' AS source_table FROM transaction_canada;
Hope this helps!
You can also do it with manual partitioning.
CREATE TABLE transaction_new_table (
tran_id int,
acct_id int,
tran_date string,
amount double,
description string,
branch_code string,
tran_state string,
tran_city string,
speendby string,
tran_zip int
)
PARTITIONED BY (sourcetablename String);
Then run the load command below, once per source ('hdfspath' is a placeholder for the HDFS path of that source's data):
load data inpath 'hdfspath' into table transaction_new_table partition(sourcetablename='1');
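For the second source, a sketch of the matching load (the path and partition label here are illustrative):
load data inpath 'hdfspath_canada' into table transaction_new_table partition(sourcetablename='2');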
You could use the INSERT INTO clause of Hive:
INSERT INTO TABLE transaction_usa_canada
SELECT tran_id, acct_id, tran_date, ...'transaction_usa' FROM transaction_usa;
INSERT INTO TABLE transaction_usa_canada
SELECT tran_id, acct_id, tran_date, ...'transaction_canada' FROM transaction_canada;

Why is the part file empty in the Hive output?

My issue: I have tried this on my local machine with Hadoop and also on AWS EC2 to check, and the query below returns no records. As far as I can tell the script itself is correct.
My question is: why don't we see any results in the part file after the job is complete?
DROP TABLE IF EXISTS batting;
CREATE EXTERNAL TABLE IF NOT EXISTS batting(
  id STRING, year INT, team STRING, league STRING, games INT, ab INT,
  runs INT, hits INT, doubles INT, triples INT, homeruns INT, rbi INT,
  sb INT, cs INT, walks INT, strikeouts INT, ibb INT, hbp INT, sh INT,
  sf INT, gidp INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://hive-test1/batting';
DROP TABLE IF EXISTS master;
CREATE EXTERNAL TABLE IF NOT EXISTS master(
  id STRING, byear INT, bmonth INT, bday INT, bcountry STRING, bstate STRING,
  bcity STRING, dyear INT, dmonth INT, dday INT, dcountry STRING,
  dstate STRING, dcity STRING, fname STRING, lname STRING, name STRING,
  weight INT, height INT, bats STRING, throws STRING, debut STRING,
  finalgame STRING, retro STRING, bbref STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://hive-test1/master';
INSERT OVERWRITE DIRECTORY 's3://hive-test1/output'
SELECT n.fname, n.lname, x.year, x.runs
FROM master n
JOIN (
  SELECT b.id AS id, b.year AS year, b.runs AS runs
  FROM batting b
  JOIN (SELECT year, max(runs) AS best FROM batting GROUP BY year) o
  WHERE b.runs = o.best AND b.year = o.year
) x ON x.id = n.id
ORDER BY x.runs DESC;
When you use Hive to create the two tables, all you're doing is creating a definition: the table name, the fields and their types, the location, and so on. CREATE does nothing with the data.
Based on your similar question earlier, I think you have some existing HDFS files in CSV format that contain the data you want to query, right?
Before doing that, I suggest you manually insert a record into each table, like INSERT INTO batting (id, year, team, league) VALUES ('1', 2016, 'Red Sox', 'AL East');. Then query the table with SELECT * FROM batting; to confirm you have one record with some values in it.
Now you have the next problem to solve: how do I import an HDFS file to a Hive table? You can do this using Hue, if you have it installed. If not, I suggest you use Google to find an answer to this question.
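For example, if a CSV file already sits on HDFS, a LOAD DATA statement is one common way to attach it to a table (a sketch; the path and file name here are hypothetical):
LOAD DATA INPATH '/user/hive/data/Batting.csv' INTO TABLE batting;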
In general, you have three problems to solve:
Create tables in Hive so the Hive metastore knows about their structure. This is called data definition language, or DDL, in SQL.
Import or link your existing CSV data sets, sitting as files on HDFS, to their corresponding Hive tables.
Query the tables using SQL, most likely with SELECT and JOIN; this is called data manipulation language, or DML, in SQL.
Each is a different step. Make them work one by one, and you'll break a complex problem down into smaller problems that are easier to understand.
