Sort DataFrame Columns individually

I have seen many examples of how to sort a DataFrame based on specific columns.
What I want is to sort each column of the DataFrame individually, independently of the others. See the example below.
Input
+---------+---------+---------+
| Column1 | Column2 | Column3 |
+---------+---------+---------+
|      61 |       5 |       9 |
|      14 |      16 |       8 |
|      26 |      27 |       7 |
+---------+---------+---------+
Output
+---------+---------+---------+
| Column1 | Column2 | Column3 |
+---------+---------+---------+
|      14 |       5 |       7 |
|      26 |      16 |       8 |
|      61 |      27 |       9 |
+---------+---------+---------+
Any clue how I can achieve this?
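The question doesn't name a library; assuming pandas, a minimal sketch is to sort the underlying values column-wise with NumPy and rebuild the frame, since np.sort with axis=0 orders each column independently of the others:
import numpy as np
import pandas as pd

df = pd.DataFrame({"Column1": [61, 14, 26],
                   "Column2": [5, 16, 27],
                   "Column3": [9, 8, 7]})

# axis=0 sorts every column on its own, deliberately discarding row alignment.
result = pd.DataFrame(np.sort(df.values, axis=0),
                      index=df.index, columns=df.columns)
print(result)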

Related

Converting Column headings into Row data

I have a table in an Access database that has columns that I would like to convert to row data.
I found some code here:
converting database columns into associated row data
I am new to VBA and I just don't know how to use this code.
I have attached some sample data.
As the table is currently set up, it is 14 columns wide.
+-----+--------+------------+------------+------------+-------------+
| ID  | Name   | 2019-10-31 | 2019-11-30 | 2019-12-31 | ... etc ... |
+-----+--------+------------+------------+------------+-------------+
| 555 | Fred   |          1 |          4 |         12 |             |
| 556 | Barney |          5 |         33 |         24 |             |
| 557 | Betty  |          4 |         11 |         76 |             |
+-----+--------+------------+------------+------------+-------------+
I would like the output to be
+-----+------------+------+
| ID  | Date       | HOLB |
+-----+------------+------+
| 555 | 2019-10-31 |    1 |
| 555 | 2019-11-30 |    4 |
| 555 | 2019-12-31 |   12 |
| 556 | 2019-10-31 |    5 |
| 556 | 2019-11-30 |   33 |
| 556 | 2019-12-31 |   24 |
+-----+------------+------+
How can I turn this code into a module and call the module from a query?
Or is there any other approach you may suggest?
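If exporting the table from Access (e.g. to CSV) is an option, the reshape itself is a one-call unpivot in pandas. This is only a sketch of the transformation, with column names taken from the sample data above, not a replacement for the Access module:
import pandas as pd

wide = pd.DataFrame({
    "ID": [555, 556, 557],
    "Name": ["Fred", "Barney", "Betty"],
    "2019-10-31": [1, 5, 4],
    "2019-11-30": [4, 33, 11],
    "2019-12-31": [12, 24, 76],
})

# melt turns every date column into one (Date, HOLB) row per ID.
long = wide.melt(id_vars=["ID", "Name"], var_name="Date", value_name="HOLB")
print(long.sort_values(["ID", "Date"]).drop(columns="Name"))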

Pivot Table in Hive and Create Multiple Columns for Unique Combinations

I want to pivot the following table
| ID | Code | date   | qty |
|  1 | A    | 1/1/19 |  11 |
|  1 | A    | 2/1/19 |  12 |
|  2 | B    | 1/1/19 |  13 |
|  2 | B    | 2/1/19 |  14 |
|  3 | C    | 1/1/19 |  15 |
|  3 | C    | 3/1/19 |  16 |
into
| ID | Code | mth_1 (1/1/19) | mth_2 (2/1/19) | mth_3 (3/1/19) |
|  1 | A    |             11 |             12 |              0 |
|  2 | B    |             13 |             14 |              0 |
|  3 | C    |             15 |              0 |             16 |
I am new to Hive and I am not sure how to implement this.
NOTE: I don't want to hard-code a mapping because my month values change over time.
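Given that constraint, one approach is to generate the pivot statement dynamically: query the distinct dates first, then build one conditional-aggregation column per date. A sketch using the pyhive library, where the host name and the source table name src are placeholders:
from pyhive import hive

cursor = hive.connect(host="hive-server").cursor()  # hypothetical host

# 1. Discover whichever months currently exist.
cursor.execute("SELECT DISTINCT `date` FROM src ORDER BY `date`")
dates = [row[0] for row in cursor.fetchall()]

# 2. Build one MAX(CASE ...) column per month; missing cells default to 0.
cols = ", ".join(
    "MAX(CASE WHEN `date` = '%s' THEN qty ELSE 0 END) AS mth_%d" % (d, i + 1)
    for i, d in enumerate(dates))

# 3. Run the generated pivot, grouped to one row per ID/Code pair.
cursor.execute("SELECT ID, Code, %s FROM src GROUP BY ID, Code" % cols)
for row in cursor.fetchall():
    print(row)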

create a hive table without specifying the column names and column types

I have a huge dataset with 1000 columns stored on HDFS. I want to create a Hive table to filter and work on the data.
CREATE EXTERNAL TABLE IF NOT EXISTS tablename (
  var1 INT, var2 STRING, var3 STRING)
COMMENT 'testbykasa'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/folder1/';
For a smaller number of columns (~5-10), I manually specify the column names and types. Is there a way to get Hive to create the table by inferring the column names and data types, without specifying them manually?
Demo
mydata.csv
2,2,8,1,5,1,8,1,4,1,3,4,9,2,8,2,6,5,3,1,5,5,8,0,1,6,0,7,1,4
2,6,8,7,7,9,9,3,8,7,3,1,9,1,7,5,9,7,1,2,5,7,0,5,1,2,6,4,0,4
0,0,1,3,6,5,6,2,4,2,4,9,0,4,9,8,1,0,2,8,4,7,8,3,9,7,8,9,5,5
3,4,9,1,8,7,4,2,1,0,4,3,1,4,6,6,7,4,9,9,6,7,9,5,2,2,8,0,2,9
3,4,8,9,9,1,5,2,7,4,7,1,4,9,8,9,3,3,2,3,3,5,4,8,6,5,8,8,6,4
4,0,6,9,3,2,4,2,9,4,6,8,8,2,6,7,1,7,3,1,6,6,5,2,9,9,4,6,9,7
7,0,9,3,7,6,5,5,7,2,4,2,7,4,6,1,0,9,8,2,5,7,1,4,0,4,3,9,4,3
2,8,3,7,7,3,3,6,9,3,5,5,0,7,5,3,6,2,9,0,8,2,3,0,6,2,4,3,2,6
3,2,0,8,8,8,1,8,4,0,5,2,5,0,2,0,4,1,2,2,1,0,2,8,6,7,2,2,7,0
0,5,9,1,0,3,1,9,3,6,2,1,5,0,6,6,3,8,2,8,0,0,1,9,1,5,5,2,4,8
create external table mycsv (rec string)
row format delimited
stored as textfile
tblproperties ('serialization.last.column.takes.rest'='true')
;
select pe.pos + 1 as col
,count(distinct pe.val) as count_distinct_val
from mycsv
lateral view posexplode(split(rec,',')) pe
group by pe.pos
;
+------+---------------------+
| col | count_distinct_val |
+------+---------------------+
| 1 | 5 |
| 2 | 6 |
| 3 | 6 |
| 4 | 5 |
| 5 | 7 |
| 6 | 8 |
| 7 | 7 |
| 8 | 7 |
| 9 | 6 |
| 10 | 7 |
| 11 | 6 |
| 12 | 7 |
| 13 | 7 |
| 14 | 6 |
| 15 | 6 |
| 16 | 9 |
| 17 | 7 |
| 18 | 9 |
| 19 | 5 |
| 20 | 6 |
| 21 | 7 |
| 22 | 5 |
| 23 | 8 |
| 24 | 7 |
| 25 | 5 |
| 26 | 6 |
| 27 | 7 |
| 28 | 8 |
| 29 | 8 |
| 30 | 8 |
+------+---------------------+
Yes, it is possible, but not with an SQL script. To do this I use a Python script that reads the first line of the CSV file and creates the script dynamically, sending it to Hive using the pyhive library (and erasing the first line of the CSV). To identify the types, just use Python functions to discover whether each value is a string, a number, etc.
The problem with this approach is that it only works on Python 2.7, so I recommend you consider writing the same code in Scala.
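For illustration, a minimal sketch of that approach: read the header and first data row, guess a Hive type per column, and assemble the DDL. The file name (and its header row) and the connection host are hypothetical:
import csv

def infer_type(value):
    # Guess a Hive type from a single sample value.
    try:
        int(value)
        return "INT"
    except ValueError:
        pass
    try:
        float(value)
        return "DOUBLE"
    except ValueError:
        return "STRING"

with open("mydata_with_header.csv") as f:  # hypothetical CSV with a header row
    reader = csv.reader(f)
    header = next(reader)
    sample = next(reader)

columns = ", ".join("`%s` %s" % (name, infer_type(value))
                    for name, value in zip(header, sample))

ddl = ("CREATE EXTERNAL TABLE IF NOT EXISTS tablename (%s) "
       "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' "
       "STORED AS TEXTFILE LOCATION '/folder1/'" % columns)

# Send the statement to Hive, e.g. with pyhive:
# from pyhive import hive
# hive.connect(host="hive-server").cursor().execute(ddl)
print(ddl)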

Oracle "Total" plan cost is really less than some of its elements

I cannot figure out why the total cost of a plan can sometimes be a very small number, even though looking inside the plan we can find huge costs (and indeed the query is very slow).
Can somebody explain that to me?
Here is an example.
Apparently the costly part comes from a field in the main SELECT that does a LISTAGG on a subview, and the join condition with this subview is complex (we can join on one field or another).
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 875 | 20 |
| 1 | SORT GROUP BY | | 1 | 544 | |
| 2 | VIEW | | 1 | 544 | 3 |
| 3 | SORT UNIQUE | | 1 | 481 | 3 |
| 4 | NESTED LOOPS | | | | |
| 5 | NESTED LOOPS | | 3 | 1443 | 2 |
| 6 | TABLE ACCESS BY INDEX ROWID | | 7 | 140 | 1 |
| 7 | INDEX RANGE SCAN | | 7 | | 1 |
| 8 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 9 | TABLE ACCESS BY INDEX ROWID | | 1 | 461 | 1 |
| 10 | SORT GROUP BY | | 1 | 182 | |
| 11 | NESTED LOOPS | | | | |
| 12 | NESTED LOOPS | | 8 | 1456 | 3 |
| 13 | NESTED LOOPS | | 8 | 304 | 2 |
| 14 | TABLE ACCESS BY INDEX ROWID | | 7 | 154 | 1 |
| 15 | INDEX RANGE SCAN | | 7 | | 1 |
| 16 | INDEX RANGE SCAN | | 1 | 16 | 1 |
| 17 | INDEX RANGE SCAN | | 1 | | 1 |
| 18 | TABLE ACCESS BY INDEX ROWID | | 1 | 144 | 1 |
| 19 | SORT GROUP BY | | 1 | 268 | |
| 20 | VIEW | | 1 | 268 | 9 |
| 21 | SORT UNIQUE | | 1 | 108 | 9 |
| 22 | CONCATENATION | | | | |
| 23 | NESTED LOOPS | | | | |
| 24 | NESTED LOOPS | | 1 | 108 | 4 |
| 25 | NESTED LOOPS | | 1 | 79 | 3 |
| 26 | NESTED LOOPS | | 1 | 59 | 2 |
| 27 | TABLE ACCESS BY INDEX ROWID | | 1 | 16 | 1 |
| 28 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 29 | TABLE ACCESS BY INDEX ROWID | | 1 | 43 | 1 |
| 30 | INDEX RANGE SCAN | | 1 | | 1 |
| 31 | TABLE ACCESS BY INDEX ROWID | | 1 | 20 | 1 |
| 32 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 33 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 34 | TABLE ACCESS BY INDEX ROWID | | 1 | 29 | 1 |
| 35 | NESTED LOOPS | | | | |
| 36 | NESTED LOOPS | | 1 | 108 | 4 |
| 37 | NESTED LOOPS | | 1 | 79 | 3 |
| 38 | NESTED LOOPS | | 1 | 59 | 2 |
| 39 | TABLE ACCESS BY INDEX ROWID | | 4 | 64 | 1 |
| 40 | INDEX RANGE SCAN | | 2 | | 1 |
| 41 | TABLE ACCESS BY INDEX ROWID | | 1 | 43 | 1 |
| 42 | INDEX RANGE SCAN | | 1 | | 1 |
| 43 | TABLE ACCESS BY INDEX ROWID | | 1 | 20 | 1 |
| 44 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 45 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 46 | TABLE ACCESS BY INDEX ROWID | | 1 | 29 | 1 |
| 47 | SORT GROUP BY | | 1 | 330 | |
| 48 | VIEW | | 1 | 330 | 26695 |
| 49 | SORT UNIQUE | | 1 | 130 | 26695 |
| 50 | CONCATENATION | | | | |
| 51 | HASH JOIN ANTI | | 1 | 130 | 13347 |
| 52 | NESTED LOOPS | | | | |
| 53 | NESTED LOOPS | | 1 | 110 | 4 |
| 54 | NESTED LOOPS | | 1 | 81 | 3 |
| 55 | NESTED LOOPS | | 1 | 61 | 2 |
| 56 | TABLE ACCESS BY INDEX ROWID | | 1 | 16 | 1 |
| 57 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 58 | TABLE ACCESS BY INDEX ROWID | | 1 | 45 | 1 |
| 59 | INDEX RANGE SCAN | | 1 | | 1 |
| 60 | TABLE ACCESS BY INDEX ROWID | | 1 | 20 | 1 |
| 61 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 62 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 63 | TABLE ACCESS BY INDEX ROWID | | 1 | 29 | 1 |
| 64 | VIEW | | 164K| 3220K| 13341 |
| 65 | NESTED LOOPS | | | | |
| 66 | NESTED LOOPS | | 164K| 11M| 13341 |
| 67 | NESTED LOOPS | | 164K| 8535K| 10041 |
| 68 | TABLE ACCESS BY INDEX ROWID | | 164K| 6924K| 8391 |
| 69 | INDEX SKIP SCAN | | 2131K| | 163 |
| 70 | INDEX UNIQUE SCAN | | 1 | 10 | 1 |
| 71 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 72 | TABLE ACCESS BY INDEX ROWID | | 1 | 20 | 1 |
| 73 | HASH JOIN ANTI | | 2 | 260 | 13347 |
| 74 | NESTED LOOPS | | | | |
| 75 | NESTED LOOPS | | 2 | 220 | 4 |
| 76 | NESTED LOOPS | | 2 | 162 | 3 |
| 77 | NESTED LOOPS | | 2 | 122 | 2 |
| 78 | TABLE ACCESS BY INDEX ROWID | | 4 | 64 | 1 |
| 79 | INDEX RANGE SCAN | | 2 | | 1 |
| 80 | TABLE ACCESS BY INDEX ROWID | | 1 | 45 | 1 |
| 81 | INDEX RANGE SCAN | | 1 | | 1 |
| 82 | TABLE ACCESS BY INDEX ROWID | | 1 | 20 | 1 |
| 83 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 84 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 85 | TABLE ACCESS BY INDEX ROWID | | 1 | 29 | 1 |
| 86 | VIEW | | 164K| 3220K| 13341 |
| 87 | NESTED LOOPS | | | | |
| 88 | NESTED LOOPS | | 164K| 11M| 13341 |
| 89 | NESTED LOOPS | | 164K| 8535K| 10041 |
| 90 | TABLE ACCESS BY INDEX ROWID | | 164K| 6924K| 8391 |
| 91 | INDEX SKIP SCAN | | 2131K| | 163 |
| 92 | INDEX UNIQUE SCAN | | 1 | 10 | 1 |
| 93 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 94 | TABLE ACCESS BY INDEX ROWID | | 1 | 20 | 1 |
| 95 | NESTED LOOPS OUTER | | 1 | 875 | 20 |
| 96 | NESTED LOOPS OUTER | | 1 | 846 | 19 |
| 97 | NESTED LOOPS OUTER | | 1 | 800 | 18 |
| 98 | NESTED LOOPS OUTER | | 1 | 776 | 17 |
| 99 | NESTED LOOPS OUTER | | 1 | 752 | 16 |
| 100 | NESTED LOOPS OUTER | | 1 | 641 | 15 |
| 101 | NESTED LOOPS OUTER | | 1 | 576 | 14 |
| 102 | NESTED LOOPS OUTER | | 1 | 554 | 13 |
| 103 | NESTED LOOPS OUTER | | 1 | 487 | 12 |
| 104 | NESTED LOOPS OUTER | | 1 | 434 | 11 |
| 105 | NESTED LOOPS | | 1 | 368 | 10 |
| 106 | NESTED LOOPS | | 1 | 102 | 9 |
| 107 | NESTED LOOPS OUTER | | 1 | 85 | 8 |
| 108 | NESTED LOOPS | | 1 | 68 | 7 |
| 109 | NESTED LOOPS | | 50 | 2700 | 6 |
| 110 | HASH JOIN | | 53 | 1696 | 5 |
| 111 | INLIST ITERATOR | | | | |
| 112 | TABLE ACCESS BY INDEX ROWID| | 520 | 10400 | 3 |
| 113 | INDEX RANGE SCAN | | 520 | | 1 |
| 114 | INLIST ITERATOR | | | | |
| 115 | TABLE ACCESS BY INDEX ROWID| | 91457 | 1071K| 1 |
| 116 | INDEX UNIQUE SCAN | | 2 | | 1 |
| 117 | TABLE ACCESS BY INDEX ROWID | | 1 | 22 | 1 |
| 118 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 119 | TABLE ACCESS BY INDEX ROWID | | 1 | 14 | 1 |
| 120 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 121 | TABLE ACCESS BY INDEX ROWID | | 1 | 17 | 1 |
| 122 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 123 | TABLE ACCESS BY INDEX ROWID | | 1 | 17 | 1 |
| 124 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 125 | TABLE ACCESS BY INDEX ROWID | | 1 | 266 | 1 |
| 126 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 127 | TABLE ACCESS BY INDEX ROWID | | 1 | 66 | 1 |
| 128 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 129 | TABLE ACCESS BY INDEX ROWID | | 1 | 53 | 1 |
| 130 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 131 | TABLE ACCESS BY INDEX ROWID | | 1 | 67 | 1 |
| 132 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 133 | INDEX RANGE SCAN | | 1 | 22 | 1 |
| 134 | TABLE ACCESS BY INDEX ROWID | | 1 | 65 | 1 |
| 135 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 136 | TABLE ACCESS BY INDEX ROWID | | 1 | 111 | 1 |
| 137 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 138 | TABLE ACCESS BY INDEX ROWID | | 1 | 24 | 1 |
| 139 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 140 | TABLE ACCESS BY INDEX ROWID | | 1 | 24 | 1 |
| 141 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 142 | TABLE ACCESS BY INDEX ROWID | | 1 | 46 | 1 |
| 143 | INDEX UNIQUE SCAN | | 1 | | 1 |
| 144 | TABLE ACCESS BY INDEX ROWID | | 1 | 29 | 1 |
| 145 | INDEX UNIQUE SCAN | | 1 | | 1 |
----------------------------------------------------------------------------------------------------------
The total cost of a statement is usually equal to or greater than the cost of any of its child operations. There are at least 4 exceptions to this rule.
Your plan looks like #3, but we can't be sure without seeing the code.
1. FILTER
Execution plans may depend on conditions at run-time. These conditions cause FILTER operations that will dynamically decide which query block to execute. The example below uses a static condition but still demonstrates the concept. Part of the subquery is very expensive but the condition negates the whole thing.
explain plan for select * from dba_objects cross join dba_objects where 1 = 2;
select * from table(dbms_xplan.display(format => 'basic +cost'));
Plan hash value: 3258663795
--------------------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)|
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 0 (0)|
| 1 | FILTER | | |
| 2 | MERGE JOIN CARTESIAN | | 11M (3)|
...
2. COUNT STOPKEY
Execution plans sum child operations up into the final cost, but child operations will not always run to completion. In the example below it may be correct to say that part of the plan costs 214, but because of the condition where rownum <= 1 only part of that child operation may run.
explain plan for
select /*+ no_query_transformation */ *
from (select * from dba_objects join dba_objects using (owner))
where rownum <= 1;
select * from table(dbms_xplan.display(format => 'basic +cost'));
Plan hash value: 2132093199
-------------------------------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)|
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 4 (0)|
| 1 | COUNT STOPKEY | | |
| 2 | VIEW | | 4 (0)|
| 3 | VIEW | | 4 (0)|
| 4 | NESTED LOOPS | | 4 (0)|
| 5 | VIEW | DBA_OBJECTS | 2 (0)|
| 6 | UNION-ALL | | |
| 7 | HASH JOIN | | 3 (34)|
| 8 | INDEX FULL SCAN | I_USER2 | 1 (0)|
| 9 | VIEW | _CURRENT_EDITION_OBJ | 1 (0)|
| 10 | FILTER | | |
| 11 | HASH JOIN | | 214 (3)|
...
3. Subqueries in the SELECT column list
Cost aggregation does not include subqueries in the SELECT column list. A query like select ([expensive query]) from dual; will have a very small total cost. I don't understand the reason for this; Oracle estimates the cost of the subquery and the number of rows in the FROM clause, so surely it could multiply them together for a total cost.
explain plan for
select dummy,(select count(*) from dba_objects cross join dba_objects) from dual;
select * from table(dbms_xplan.display(format => 'basic +cost'));
Plan hash value: 3705842531
---------------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)|
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 (0)|
| 1 | SORT AGGREGATE | | |
| 2 | MERGE JOIN CARTESIAN | | 11M (3)|
...
4. Other - rounding? bugs?
About 0.01% of plans still have unexplainable cost issues. I can't find any pattern among them. Perhaps it's just a rounding issue or some rare optimizer bugs. There will always be some weird cases with any model as complicated as the optimizer.
Check for more exceptions
This query can find other exceptions; it returns all plans where the first cost is less than the maximum cost.
select *
from
(
--First and Max cost per plan.
select
sql_id, plan_hash_value, id, cost
,max(cost) keep (dense_rank first order by id)
over (partition by sql_id, plan_hash_value) first_cost
,max(cost)
over (partition by sql_id, plan_hash_value) max_cost
,max(case when operation = 'COUNT' and options = 'STOPKEY' then 1 else 0 end)
over (partition by sql_id, plan_hash_value) has_count_stopkey
,max(case when operation = 'FILTER' and options is null then 1 else 0 end)
over (partition by sql_id, plan_hash_value) has_filter
,count(distinct(plan_hash_value))
over () total_plans
from v$sql_plan
--where sql_id = '61a161nm1ttjj'
order by 1,2,3
)
where first_cost < max_cost
--It's easy to exclude FILTER and COUNT STOPKEY.
and has_filter = 0
and has_count_stopkey = 0
order by 1,2,3;

SphinxSE distinct empty result

I run this query in the SphinxSE console:
SELECT #distinct FROM all_ips GROUP BY ip1;
I get this result:
+------+--------+
| id | weight |
+------+--------+
| 1 | 1 |
| 2 | 1 |
| 3 | 1 |
| 9 | 1 |
| 15 | 1 |
| 16 | 1 |
| 17 | 1 |
| 20 | 1 |
| 21 | 1 |
| 25 | 1 |
| 26 | 1 |
| 27 | 1 |
| 31 | 1 |
| 32 | 1 |
| 38 | 1 |
| 39 | 1 |
| 40 | 1 |
| 46 | 1 |
| 50 | 1 |
| 51 | 1 |
+------+--------+
20 rows in set (0.57 sec)
How can I get the number of unique values? Why doesn't the #distinct column show up in the results?
1) I don't think that is SphinxSE - do you really mean SphinxQL? That looks more like SphinxQL.
2) Distinct of what column? You need to tell Sphinx which attribute you want to count the distinct values in. In SphinxQL, use COUNT(DISTINCT column_name).
You will require a simple SQL statement to get the count. Something like this:
SELECT count(ip1),ip1
FROM all_ips
GROUP BY ip1;
