Can I create a header row in a LibreOffice Calc ODS document that is excluded from sorting when I sort the table's data by a selected column?
For example:
I have following table:
----------------
|Col1|Col2|Col3|
----------------
|A |. |0 |
----------------
|C |, |2 |
----------------
|B |! |1 |
----------------
When I sort this table's data (A to Z) by column number 3, the header row should stay in place rather than being sorted down to the bottom.
I have a one-to-many relationship between a parent and a child table. I always want to insert into the parent table, but insert into the child table only when a condition is met.
I think it would be helpful if you could provide the data structures with example records, like:
table 1
column1 | column 2 | column 3
value | value | value
table 2
column1 | column 2 | column 3
value | value | value
I have some topic data with the fields stringA and stringB, and I want to use their combination as the key when creating a KSQL table from the topic.
Here's an example. First, I'll create and populate a test stream
ksql> CREATE STREAM TEST (STRINGA VARCHAR,
STRINGB VARCHAR,
COL3 INT)
WITH (KAFKA_TOPIC='TEST',
PARTITIONS=1,
VALUE_FORMAT='JSON');
Message
----------------
Stream created
----------------
ksql> INSERT INTO TEST (STRINGA, STRINGB, COL3) VALUES ('A','B',1);
ksql> INSERT INTO TEST (STRINGA, STRINGB, COL3) VALUES ('A','B',2);
ksql> INSERT INTO TEST (STRINGA, STRINGB, COL3) VALUES ('C','D',3);
ksql>
ksql> SET 'auto.offset.reset' = 'earliest';
Successfully changed local property 'auto.offset.reset' to 'earliest'. Use the UNSET command to revert your change.
ksql> SELECT * FROM TEST EMIT CHANGES LIMIT 3;
+--------------+--------+---------+----------+------+
|ROWTIME |ROWKEY |STRINGA |STRINGB |COL3 |
+--------------+--------+---------+----------+------+
|1578569329184 |null |A |B |1 |
|1578569331653 |null |A |B |2 |
|1578569339177 |null |C |D |3 |
Note that ROWKEY is null.
Now I'll create a new stream, populated from the first, build the composite column, and set it as the key. I'm also including the original fields themselves, but this is optional if you don't need them:
ksql> CREATE STREAM TEST_REKEY AS
SELECT STRINGA + STRINGB AS MY_COMPOSITE_KEY,
STRINGA,
STRINGB,
COL3
FROM TEST
PARTITION BY MY_COMPOSITE_KEY ;
Message
------------------------------------------------------------------------------------------
Stream TEST_REKEY created and running. Created by query with query ID: CSAS_TEST_REKEY_9
------------------------------------------------------------------------------------------
Now you have a stream of data with the key set to your composite key:
ksql> SELECT ROWKEY , COL3 FROM TEST_REKEY EMIT CHANGES LIMIT 3;
+---------+-------+
|ROWKEY |COL3 |
+---------+-------+
|AB |1 |
|AB |2 |
|CD |3 |
Limit Reached
Query terminated
You can also inspect the underlying Kafka topic to verify the key:
ksql> PRINT TEST_REKEY LIMIT 3;
Format:JSON
{"ROWTIME":1578569329184,"ROWKEY":"AB","MY_COMPOSITE_KEY":"AB","STRINGA":"A","STRINGB":"B","COL3":1}
{"ROWTIME":1578569331653,"ROWKEY":"AB","MY_COMPOSITE_KEY":"AB","STRINGA":"A","STRINGB":"B","COL3":2}
{"ROWTIME":1578569339177,"ROWKEY":"CD","MY_COMPOSITE_KEY":"CD","STRINGA":"C","STRINGB":"D","COL3":3}
ksql>
With this done, we can now declare a table on top of the re-keyed topic:
CREATE TABLE TEST_TABLE (ROWKEY VARCHAR KEY,
COL3 INT)
WITH (KAFKA_TOPIC='TEST_REKEY', VALUE_FORMAT='JSON');
From this table we can query the state. Note that the composite key AB only shows the latest value, which is part of the semantics of a table (compare with the stream above, in which you see both values; the stream and the table are backed by the same Kafka topic):
ksql> SELECT * FROM TEST_TABLE EMIT CHANGES;
+----------------+---------+------+
|ROWTIME |ROWKEY |COL3 |
+----------------+---------+------+
|1578569331653 |AB |2 |
|1578569339177 |CD |3 |
Just an update to Robin Moffatt's answer.
Use the following
CREATE STREAM TEST_REKEY AS
SELECT STRINGA + STRINGB AS MY_COMPOSITE_KEY,
STRINGA,
STRINGB,
COL3
FROM TEST
PARTITION BY STRINGA + STRINGB ;
instead of
CREATE STREAM TEST_REKEY AS
SELECT STRINGA + STRINGB AS MY_COMPOSITE_KEY,
STRINGA,
STRINGB,
COL3
FROM TEST
PARTITION BY MY_COMPOSITE_KEY ;
NOTE: column ordering matters
Worked for me! (CLI v0.10.1, Server v0.10.1)
I have a scenario where I need to search and display records from huge tables with lots of rows. I have pre-defined search criteria for my tables; the user provides the filter values and clicks Search.
Considering a sample table :
CREATE TABLE suppliers
( supplier_name varchar2(50) NOT NULL,
address varchar2(50),
city varchar2(50) NOT NULL,
state varchar2(25),
zip_code varchar2(10),
CONSTRAINT "suppliers_pk" PRIMARY KEY (supplier_name, city)
);
INSERT INTO suppliers VALUES ('ABCD','XXXX','YYYY','ZZZZ','95012');
INSERT INTO suppliers VALUES ('EFGH','MMMM','NNNN','OOOO','95010');
INSERT INTO suppliers VALUES ('IJKL','EEEE','FFFF','GGGG','95009');
I have provided the user with search fields for the primary key columns: supplier_name and city.
If the user enters both fields, query performance is good, since the query uses an index scan:
SELECT supplier_name, address, city, state, zip_code FROM suppliers WHERE supplier_name = 'ABCD' AND city = 'YYYY';
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 102 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| SUPPLIERS | 1 | 102 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | suppliers_pk | 1 | | 1 (0)| 00:00:01 |
However, if the user enters only one of the search fields, query performance suffers, since the query does a full table scan:
SELECT supplier_name, address, city, state, zip_code FROM suppliers where supplier_name = 'ABCD' ;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 102 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| SUPPLIERS | 1 | 102 | 3 (0)| 00:00:01 |
Is there a way to force Oracle to treat this as a primary-key search when the user does not supply all of the key fields, something like the following (which obviously does not work)?
SELECT supplier_name, address, city, state, zip_code FROM suppliers where supplier_name = 'ABCD' and city = city;
Thanks.
You are thinking about this in the wrong way.
The query optimiser will choose what it thinks is the best execution plan for the query, based on the information available at the time the query is parsed (or sometimes when the parameters change). Generally, if you give it the right information in terms of statistics etc., it will usually do a good job.
You might think that you know better than it, but remember that you won't be monitoring this for the life of the database. As the data changes, you want the database to be able to react and change the execution plan when it needs to.
That said, if you are set on forcing it to use the index, you can use a hint:
SELECT /*+ INDEX(suppliers suppliers_pk) */
       supplier_name, address, city, state, zip_code
FROM   suppliers
WHERE  supplier_name = 'ABCD';
A full table scan is not necessarily bad. You have only a few rows in your table, so the optimizer thinks it is better to do a FTS than an index range scan. It will start using the PK index as soon as the RDBMS thinks it is better, i.e. when you have lots of rows and the restriction on a certain supplier reduces the result significantly. If you want to search on city only instead of supplier, you need another index on city only (or at least starting with city). Keep in mind that you might have to update the table statistics after you have loaded your table with bulk data. It is always important to test query performance with somewhat realistic amounts of data.
The index is organised first on supplier_name and second on city, so it is not possible to use that index for a query based on city only.
Please create a second index based only on city; this will help such a query.
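The leading-column rule is easy to reproduce in any SQL database. Here is a minimal sketch using Python's sqlite3 as a stand-in for Oracle (the plan format differs from Oracle's, and the index name suppliers_city_idx is made up for the example): with only the composite primary key in place, a city-only predicate forces a scan; once a single-column index on city exists, the same query can use it.

```python
import sqlite3

# sqlite3 stand-in for the Oracle example; suppliers_city_idx is a made-up name.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE suppliers (
        supplier_name TEXT NOT NULL,
        address       TEXT,
        city          TEXT NOT NULL,
        state         TEXT,
        zip_code      TEXT,
        PRIMARY KEY (supplier_name, city)
    )""")
con.executemany(
    "INSERT INTO suppliers VALUES (?, ?, ?, ?, ?)",
    [("ABCD", "XXXX", "YYYY", "ZZZZ", "95012"),
     ("EFGH", "MMMM", "NNNN", "OOOO", "95010"),
     ("IJKL", "EEEE", "FFFF", "GGGG", "95009")])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM suppliers WHERE city = 'YYYY'"
before = plan(query)  # composite PK index leads with supplier_name: full scan
con.execute("CREATE INDEX suppliers_city_idx ON suppliers(city)")
after = plan(query)   # the city-only predicate can now use the new index
print(before)
print(after)
```

The same idea carries over to Oracle: the optimizer can only seek into an index whose leading column appears in the predicate.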
Much like these questions:
MSSQL Create Temporary Table whose columns names are obtained from row values of another table
Create a table with column names derived from row values of another table
I need to do the same thing with Oracle, but I also need to fill this new table with data from another table which is organized in a particular way. An example:
Table Users
|id|name |
----------
|1 |admin|
|2 |user |
Table user_data_cols
|id|field_name|field_description|
---------------------------------
|1 |age |Your age |
|2 |children |Your children |
Table user_data_rows
|id|user_id|col_id|field_value|
-------------------------------
|1 |1 |1 |32 |
|2 |1 |2 |1 |
|3 |2 |1 |19 |
|4 |2 |2 |0 |
What I want is to create, using only SQL, a table like this:
|user_id|age|children|
----------------------
|1 |32 |1 |
|2 |19 |0 |
Starting from the data in the other tables (which might change over time, so I'll need to create a new table whenever a new field is added).
Is such a thing even possible? I feel this might go against a lot of good practices, but it can't be helped...
For comparison purposes here is the answer from your link:
Create a table with column names derived from row values of another table
SELECT
CONCAT(
'CREATE TABLE Table_2 (',
GROUP_CONCAT(DISTINCT
CONCAT(nameCol, ' VARCHAR(50)')
SEPARATOR ','),
');')
FROM
Table_1
INTO @sql;
PREPARE stmt FROM @sql;
EXECUTE stmt;
Now let's do the same in Oracle 11g:
DECLARE
stmt varchar2(8000);
BEGIN
SELECT 'CREATE TABLE Table_2 ('||
(SELECT
LISTAGG(nameCol, ', ') WITHIN GROUP (ORDER BY nameCol) "cols"
FROM
(SELECT DISTINCT nameCol||' VARCHAR2(50)' nameCol FROM table_1) table_x)||
')'
INTO stmt
FROM DUAL;
EXECUTE IMMEDIATE stmt;
END;
/
You asked if you can do this 'using only SQL'; my answer uses a PL/SQL block.
You can fill such a table with data using a similar strategy. As others have noted, you must know the columns at parse time. To get around that restriction, you can follow a strategy such as the one used here:
Dynamic Pivot
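To sketch both steps end to end (build the DDL from the rows of user_data_cols, then pivot user_data_rows into the new table), here is a minimal version using Python's sqlite3 in place of Oracle; in Oracle you would run the generated statements via EXECUTE IMMEDIATE inside PL/SQL, and the target table name user_data_pivot is invented for the example.

```python
import sqlite3

# sqlite3 stand-in; Oracle would run the generated SQL via EXECUTE IMMEDIATE.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE user_data_cols (id INTEGER PRIMARY KEY, field_name TEXT, field_description TEXT);
CREATE TABLE user_data_rows (id INTEGER PRIMARY KEY, user_id INTEGER, col_id INTEGER, field_value TEXT);
INSERT INTO user_data_cols VALUES (1, 'age', 'Your age'), (2, 'children', 'Your children');
INSERT INTO user_data_rows VALUES (1, 1, 1, '32'), (2, 1, 2, '1'), (3, 2, 1, '19'), (4, 2, 2, '0');
""")

# Build the CREATE TABLE from the rows of user_data_cols (what LISTAGG does above).
cols = con.execute("SELECT id, field_name FROM user_data_cols ORDER BY id").fetchall()
ddl = ('CREATE TABLE user_data_pivot (user_id INTEGER, '
       + ", ".join('"%s" TEXT' % name for _id, name in cols) + ")")
con.execute(ddl)

# Fill it with one MAX(CASE ...) per derived column -- a classic dynamic pivot.
exprs = ", ".join("MAX(CASE WHEN col_id = %d THEN field_value END)" % cid
                  for cid, _name in cols)
con.execute("INSERT INTO user_data_pivot "
            "SELECT user_id, %s FROM user_data_rows GROUP BY user_id" % exprs)

result = con.execute("SELECT * FROM user_data_pivot ORDER BY user_id").fetchall()
print(result)  # → [(1, '32', '1'), (2, '19', '0')]
```

The two generated statements are exactly the pieces the PL/SQL approach concatenates: a CREATE TABLE whose column list comes from data, and an INSERT ... SELECT that pivots the row-per-field layout into one row per user.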
I have a table with 70 columns, where the primary key is a combination of 15 columns (a mix of NUMBER and VARCHAR2 columns). Please see the query below:
select * from tab1 where k1=1234567889;
Plan hash value: 1179808636
---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 6044 | 2201K| 4585K (1)| 15:17:04 |
|* 1 | TABLE ACCESS FULL| tab1 | 6044 | 2201K| 4585K (1)| 15:17:04 |
---------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("K1"=30064825087)
Here tab1 is the table mentioned above and k1 is a column that is part of the primary key. The table is not partitioned, and it was analyzed (table, indexes, and columns) after the data was inserted. The query above returns 100,000+ records. The problem is that even though k1 is part of the primary key, the query does a full table scan, which is not acceptable. On the other hand, using index hints does not really speed up the process.
Please advise what would be the possible solution.
For this query:
select *
from tab1
where k1 = 1234567889;
The best index is one that has k1 as the first key in the index. It can be a composite index, but k1 has to be the first key. It sounds like you have a composite primary key and k1 is not the first key.
I would recommend that you simply define another index:
create index idx_tab1_k1 on tab1(k1);
There are several ways to avoid a full-table scan:
Indexes: ensure that an index exists on the key column and that it has been analyzed with dbms_stats.
USE_NL hint: you can direct the optimizer to use a nested-loops join (which requires indexes).
INDEX hint: you can specify the indexes that you want to use.