I have to add the ability to store Nullable data in a column of some existing tables, but these tables use the ReplicatedMergeTree engine. So my question is: does a command like ALTER TABLE ... ON CLUSTER ... MODIFY COLUMN ... exist?
And how much time will it cost?
Thanks!
This is the syntax of the MODIFY COLUMN query on cluster; it will also update the replicas (but that can take a while).
ALTER TABLE example_table ON CLUSTER example_cluster MODIFY COLUMN IF EXISTS `example_column` Nullable(UInt8)
As for the time required for the operation, it depends on a lot of parameters (the amount of data, the hardware, ...), so we can't estimate it without more information.
Usually, when we have such a modification and want to make sure it works and won't take too long, we run some benchmarks to estimate how long it will take.
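Depending on the ClickHouse version, a type change that rewrites data is executed as a background mutation. If that is the case on your cluster, a hedged example of watching its progress on each replica (assuming the table name from above) is:

SELECT database, table, command, parts_to_do, is_done
FROM system.mutations
WHERE table = 'example_table' AND is_done = 0;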
Related
I needed to truncate and reload a table.
I learned that a truncate should be followed by gathering stats on the table so that the database has accurate statistics; the previous stats are not cleared by the truncate statement itself.
After doing these two operations (truncate and stats gathering on the empty table), I ran the insert... but I don't see new statistics for my table in ALL_TAB_STATISTICS. SAMPLE_SIZE is still 0.
Why is that? Shouldn't Oracle have done the automatic stats gathering after the insert?
Do I need to re-run the stats gathering, or is it fine as it is, given how this table is used (note that it is going to be truncated and reloaded each time)?
Consider the following approach. It has the advantage of the table always being present.
Create an empty new table like the old one.
Load the data into the new table. This is the slowest step.
Do whatever cleanup you might need, such as refreshing the statistics.
RENAME tables to swap the new table into place. This step is fast enough that you won't notice it.
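A minimal sketch of that approach, assuming Oracle; the table names (my_table, my_table_new) and the hinted direct-path load are placeholders, and any indexes, constraints and grants on the old table would need to be recreated on the new one before the swap:

CREATE TABLE my_table_new AS SELECT * FROM my_table WHERE 1 = 0;  -- empty copy of the old table

INSERT /*+ APPEND */ INTO my_table_new SELECT ... FROM ...;       -- the slow load
COMMIT;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_TABLE_NEW');            -- refresh statistics on the new table
END;
/

ALTER TABLE my_table     RENAME TO my_table_old;                  -- fast swap
ALTER TABLE my_table_new RENAME TO my_table;
DROP TABLE my_table_old;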
I know it's been a long time since I posted my question above. But recently we faced a similar situation again, and this time the steps below (sketched in SQL after the list) gave much better performance on a table with 800 million rows.
Take a backup of the original table.
Truncate the original table.
Gather stats on the truncated table, so that the statistics show 0 in the DB. Use CASCADE=>TRUE in the command to also include the indexes in the process.
Drop the indexes on the truncated table and Insert the required data from the backup table.
Recreate the indexes and gather stats again (of course with CASCADE=>TRUE; although recreating the indexes should ideally have computed appropriate index stats already).
Drop the backup table if not needed.
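A rough SQL sketch of these steps; the table, index and column names are placeholders, not the actual schema:

CREATE TABLE big_table_bkp AS SELECT * FROM big_table;              -- 1. back up the original table

TRUNCATE TABLE big_table;                                           -- 2. truncate it

BEGIN                                                               -- 3. stats now show 0, indexes included
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'BIG_TABLE', cascade => TRUE);
END;
/

DROP INDEX big_table_ix1;                                           -- 4. drop indexes and reload
INSERT /*+ APPEND */ INTO big_table SELECT * FROM big_table_bkp;
COMMIT;

CREATE INDEX big_table_ix1 ON big_table (some_col);                 -- 5. recreate indexes and re-gather stats
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'BIG_TABLE', cascade => TRUE);
END;
/

DROP TABLE big_table_bkp;                                           -- 6. drop the backup if no longer needed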
I have created some tables in Greenplum and perform insert, update and delete operations on them. I also run VACUUM regularly, but I still found bloat in the tables. I found a solution to remove the bloat here: https://discuss.pivotal.io/hc/en-us/articles/206578327-What-are-the-different-option-to-remove-bloat-from-a-table
However, if I truncate the table and reinsert the data, it removes bloat. Is it good practice to truncate the data from the table?
If you are performing UPDATE and DELETE statements on a heap table (default storage) and running VACUUM regularly, you will get some bloat by design. Heap storage, which is similar to the default PostgreSQL storage mechanism, provides read consistency using Multi-Version Concurrency Control (MVCC).
When you UPDATE or DELETE a record, the old value is still in the table and is able to be read by transactions that are still inflight and started before you issued the UPDATE or DELETE command. This provides the read consistency to the table.
When you execute a VACUUM statement, the database will mark the stale rows as available to be overwritten. It doesn't shrink the files. It just marks rows so they can be overwritten. The next time you execute an INSERT or UPDATE, the stale rows are now able to be used for the new data.
So if you UPDATE or DELETE 10% of a table between running VACUUM, you will probably have about 10% bloat.
Greenplum also has Append-Optimized (AO) storage, which doesn't use MVCC and uses a visibility map instead. The files are a bit smaller too, so you should get better performance. The stale rows are hidden with the visibility map, and VACUUM won't do anything until you hit the gp_appendonly_compaction_threshold percentage, which defaults to 10%. When an AO table has 10% bloat and you execute VACUUM, the table will automatically be rebuilt for you.
Append-Optimized is called "appendonly" for backwards compatibility reasons but it does allow UPDATE and DELETE. Here is an example of an AO table:
CREATE TABLE sales
(txn_id int, qty int, date date)
WITH (appendonly=true)
DISTRIBUTED BY (txn_id);
Instead of TRUNCATE, it is better to drop the table, recreate it, and then insert the data.
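For example, building on the sales table above, one way to do that rebuild without TRUNCATE (a sketch; the sales_rebuilt name is made up):

CREATE TABLE sales_rebuilt
WITH (appendonly=true)
AS SELECT * FROM sales
DISTRIBUTED BY (txn_id);     -- compact copy of the current data

DROP TABLE sales;
ALTER TABLE sales_rebuilt RENAME TO sales;
ANALYZE sales;               -- refresh planner statistics on the rebuilt table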
I have to perform an incremental load from an external table into an internal table in Hive; the source data file is appended with new records on a daily basis. The new records can be filtered out based on the timestamp at which they were loaded (the load_ts column in the table). I am trying to achieve this by selecting the records from the source table whose load_ts is greater than the current max(load_ts) in the target table, as shown below:
INSERT INTO TABLE target_temp PARTITION (DATA_DT)
SELECT ms.* FROM temp_db.source_temp ms
JOIN (select max(load_ts) max_load_ts from target_temp) mt
ON 1=1
WHERE
ms.load_ts > mt.max_load_ts;
But the above query does not give the desired output and takes a very long time to execute (which should not be the case with the Map-Reduce paradigm).
I have also tried other approaches, such as passing max(load_ts) as a variable instead of joining, but still with no improvement in performance. It would be very helpful if anyone could point out what is incorrect in this approach, or suggest alternate solutions.
First of all, the map/reduce model does not guarantee that your queries will take less time. The main idea is that performance scales roughly linearly with the number of nodes, but you still have to think about how you are doing things, even more so than in normal SQL.
The first thing to check is whether the source table is partitioned by time. If it is not, it should be, because otherwise you are reading the whole table every single time.
Second, you are also calculating the max every time, again over the whole destination table. You could make this a lot faster by calculating the max only on the last partition, so change this
JOIN (select max(load_ts) max_load_ts from target_temp) mt
to this (you didn't mention the partition column, so I am going to assume it's called 'dt'):
JOIN (select max(load_ts) max_load_ts from target_temp WHERE dt=PREVIOUS_DATA_DT) mt
since we know the max load_ts is going to be in the last partition.
Otherwise, it's hard to help without knowing the structure of the source table, and, like somebody else commented, the sizes of the two tables.
A JOIN is slower than a variable in the WHERE clause, but the main performance problem here is that your query performs a full scan of both the target table and the source table. I would recommend:
Query only the latest partition for max(load_ts).
Enable statistics gathering and usage
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;
set hive.stats.autogather=true;
Compute column statistics on both tables.
Statistics will make queries such as selecting max(partition) or max(load_ts) execute faster.
Try to put the source partition files into the target partition folder instead of using INSERT, if applicable (the partitioning and storage format of the target and source tables must allow this). It works fine, for example, with textfile storage when a source partition contains only rows greater than the target's max(load_ts). You can combine both methods: copying files for source partitions that contain only rows to be inserted without filtering, and INSERT for partitions containing mixed data that needs filtering.
Hive may be merging your files during INSERT. This merge phase takes additional time and adds an additional stage to the job. Check the hive.merge.mapredfiles option and try switching it off.
And of course, use a pre-calculated variable instead of the join.
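For example, a sketch assuming the load is driven from a shell script; the hivevar name and the previous-partition value are placeholders:

-- 1. Compute the watermark once, reading only the latest target partition:
--      max_ts=$(hive -S -e "SELECT max(load_ts) FROM target_temp WHERE data_dt='${PREV_DT}'")
-- 2. Pass it in with --hivevar max_load_ts="${max_ts}" and filter directly, with no join:
INSERT INTO TABLE target_temp PARTITION (DATA_DT)
SELECT ms.*
FROM temp_db.source_temp ms
WHERE ms.load_ts > '${hivevar:max_load_ts}';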
Use the cost-based optimization technique by enabling the properties below:
set hive.cbo.enable=true;
set hive.stats.autogather=true;
set hive.stats.fetch.column.stats=true;
set hive.compute.query.using.stats=true;
set hive.vectorized.execution.enabled=true;
set hive.exec.parallel=true;
Also analyze the tables:
ANALYZE TABLE temp_db.source_temp COMPUTE STATISTICS FOR COLUMNS [comma_separated_column_list];
ANALYZE TABLE target_temp PARTITION(DATA_DT) COMPUTE STATISTICS;
Our application manages a table containing a per-user set of rows that is the
result of a computationally-intensive query. Storing this result in a table
seems a good way of speeding up further calculations.
The structure of that table is basically the following:
CREATE TABLE per_user_result_set
( user_login VARCHAR2(N)
, result_set_item_id VARCHAR2(M)
, CONSTRAINT result_set_pk PRIMARY KEY(user_login, result_set_item_id)
)
;
A typical user of our application will have this result set computed 30 times
a day, with a result set consisting of anywhere between a single item and
500,000 items.
A typical customer will declare about 500 users in the production database.
So this table will typically consist of about 5 million rows.
The typical query that we use to update this table is:
BEGIN
DELETE FROM per_user_result_set WHERE user_login = :x;
INSERT INTO per_user_result_set(...) SELECT :x, ... FROM ...;
END;
/
After running into performance issues (the DELETE part would take a long time),
we decided to use a GLOBAL TEMPORARY TABLE (on commit delete rows) to hold a
“delta” of rows to delete from the table and rows to insert into it:
BEGIN
INSERT INTO _tmp
SELECT ... FROM ...
MINUS SELECT result_set_item_id
FROM per_user_result_set
WHERE user_login = :x;
DELETE FROM per_user_result_set
WHERE user_login = :x
AND result_set_item_id NOT IN (SELECT result_set_item_id
FROM _tmp
);
INSERT INTO per_user_result_set
SELECT :x, result_set_item_id
FROM _tmp;
COMMIT;
END;
/
This has improved performance a bit, but still this is not satisfactory. So
we're exploring ways to speed up that process and here are the issues that
we experience:
We would have loved to use table partitioning (partitioning by user_login).
But partitioning is not always available (on our test databases we hit
ORA-00439). Our customers cannot all afford Oracle Enterprise Edition with
paid additional features.
We could make the per_user_result_set table GLOBAL TEMPORARY, so that it
is isolated and we can TRUNCATE it for example… but our application
sometimes loses connection to Oracle due to network problems, and will
automatically reconnect. By that time we lose the contents of our
computation.
We could split that table into a fixed number of buckets, create a view that
UNION ALLs those buckets, add INSTEAD OF UPDATE and DELETE triggers on that
view, and distribute rows among the buckets according to
ORA_HASH(user_login) % num_buckets. But we are afraid this could make SELECT
operations much slower.
This would give a constant number of tables, with smaller indexes affected by
DELETE and INSERT operations. In short, “table partitioning for the poor”.
We've tried to ALTER TABLE per_user_result_set NOLOGGING. This does not
improve things much.
We've tried to CREATE TABLE ... ORGANIZATION INDEX COMPRESS 1. This speeds
things up by a ratio of 1:5.
We've tried to have one table per user_login. That's exactly what we could
have by partitioning with a number of partitions equal to the number of
distinct user_logins and a well-chosen hash function. The performance factor
is 1:10. But I would really like to avoid this solution: we would have to
maintain a huge number of indexes, tables and views on a per-user basis. This
would be an interesting performance gain for the users, but not for us, the
maintainers of the system.
Since the users work at the same time, there is no way for us to create a new
table and swap it with the old one.
What could you please suggest in complement to these approaches?
Note. Our customers run Oracle Databases from 9i to 11g, and XE editions to
Enterprise edition. That's a wide variety of versions that we need to be
compatible with.
Thanks.
We've tried to have one table per user_login. That's exactly what we
could have by partitioning with a number of partitions equal to the
number of distinct user_logins and a well-chosen hash function. The
performance factor is 1:10. But I would really like to avoid this
solution: we would have to maintain a huge number of indexes, tables
and views on a per-user basis. This would be an interesting performance
gain for the users, but not for us, the maintainers of the system.
Could you then have a stored procedure generate these tables on a per-user basis? Or, better yet, have that stored procedure do the most appropriate thing depending on which Oracle edition and options each customer has licensed?
If the Partitioning option is available then
  create or truncate the user-specific list partition
Else
  drop the user-specific result table
  create the user-specific result table
    as a SELECT from the template result table
  create indexes
  create constraints
  perform grants
End if
Perform the insert
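A hedged PL/SQL sketch of that branching logic; the v$option check (which needs SELECT access on that view), the template table and the per-user naming convention are my assumptions, not something from the question:

DECLARE
  v_user         VARCHAR2(30) := 'SOME_USER';   -- the user_login being refreshed
  v_partitioning VARCHAR2(64);
BEGIN
  -- Is the Partitioning option available on this database?
  SELECT value INTO v_partitioning
  FROM   v$option
  WHERE  parameter = 'Partitioning';

  IF v_partitioning = 'TRUE' THEN
    -- Wipe only this user's list partition (partition naming is assumed).
    EXECUTE IMMEDIATE
      'ALTER TABLE per_user_result_set TRUNCATE PARTITION p_' || v_user;
  ELSE
    -- Rebuild a per-user result table from a template
    -- (error handling for a missing table is omitted for brevity).
    EXECUTE IMMEDIATE 'DROP TABLE result_set_' || v_user;
    EXECUTE IMMEDIATE
      'CREATE TABLE result_set_' || v_user ||
      ' AS SELECT * FROM result_set_template WHERE 1 = 0';
    -- recreate indexes, constraints and grants here as needed
  END IF;

  -- then perform the insert into the user's partition or table
END;
/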
If all your users were on 11g Enterprise Edition I would recommend you to use Oracle's built-in result-set caching rather than trying to roll your own. But that is not the case, so let's move on.
Another attractive option might be to use PL/SQL collections rather than tables. Being in memory these are faster to retrieve and require less maintenance. They are also supported in all the versions you need. However, they are session variables, so if you have lots of users with big result sets that would put stress on your PGA allocations. Also their data would be lost when the network connection drops. So that's probably not the solution you're looking for.
The core of your problem is this statement:
DELETE FROM per_user_result_set WHERE user_login = :x;
It's not a problem in itself but you have extreme variations in data distribution. Bluntly, the deletion of a single row is going to have a very different performance profile from the deletion of half a million rows. And because your users are constantly refreshing their data there is no way you can handle that, except by giving your users their own tables.
You say you don't want to have a table per user because
"[it] would be an interesting performance gain for the users, but not
for us maintainers of the systems,"
Systems exist for the benefit of our users. Convenience for us is great as long as it helps us to provide better service to them. But their need for a good working experience trumps ours: they pay the bills.
But I question whether having individual tables for each user really increases the work load. I presume each user has their own account, and hence schema.
I suggest you stick with index-organized tables. You only need columns which are in the primary key and maintaining a separate index is unnecessary overhead (for both inserting and deleting). The big advantage of having a table per user is that you can use TRUNCATE TABLE in the refresh process, which is a lot faster than deletion.
So your refresh procedure will look like this:
BEGIN
  -- TRUNCATE is DDL, so inside PL/SQL it has to go through dynamic SQL
  EXECUTE IMMEDIATE 'TRUNCATE TABLE per_user_result_set REUSE STORAGE';
  INSERT INTO per_user_result_set(...)
  SELECT ... FROM ...;
  DBMS_STATS.GATHER_TABLE_STATS(user
                               , 'PER_USER_RESULT_SET'
                               , estimate_percent => 10);
  COMMIT;
END;
/
Note that you don't need to include the USER column any more, so your table will just have the single column result_set_item_id (another indication of the suitability of an IOT).
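So the per-user table could be as simple as this (a sketch; the column length is a placeholder):

CREATE TABLE per_user_result_set
( result_set_item_id VARCHAR2(100)
, CONSTRAINT per_user_result_set_pk PRIMARY KEY (result_set_item_id)
)
ORGANIZATION INDEX;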
Gathering the table stats isn't mandatory but it is advisable. You have a wide variability in the size of result sets, and you don't want to be using an execution plan devised for 500000 rows when the table has only one row, or vice versa.
The only overhead is the need to create the table in the user's schema. But presumably you already have some set-up for a new user - creating the account, granting privileges, etc - so this shouldn't be a big hardship.
I currently have several audit tables that audit specific tables data.
e.g. ATAB_AUDIT, BTAB_AUDIT and CTAB_AUDIT auditing inserts, updates and deletes from ATAB, BTAB and CTAB respectively.
These audit tables are partitioned by year.
As the columns in these audit tables are identical (change_date, old_value, new_value, etc.), would it be beneficial to use one large audit table, add a column holding the name of the table that generated the audit record (table_name), partition it by table_name and then subpartition by year?
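Something along these lines, just as a sketch; the column list is abbreviated and the subpartition bounds are only examples:

CREATE TABLE all_audit
( table_name  VARCHAR2(30)
, change_date DATE
, old_value   VARCHAR2(4000)
, new_value   VARCHAR2(4000)
)
PARTITION BY LIST (table_name)
SUBPARTITION BY RANGE (change_date)
  SUBPARTITION TEMPLATE
  ( SUBPARTITION y2013 VALUES LESS THAN (DATE '2014-01-01')
  , SUBPARTITION y2014 VALUES LESS THAN (DATE '2015-01-01')
  , SUBPARTITION ymax  VALUES LESS THAN (MAXVALUE)
  )
( PARTITION p_atab VALUES ('ATAB')
, PARTITION p_btab VALUES ('BTAB')
, PARTITION p_ctab VALUES ('CTAB')
);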
The database is Oracle 11g on Solaris.
Why or why not do this?
Many thanks in advance.
I would guess that performance characteristics would be quite similar with either approach. I would make this decision based solely on how you decide to model your data; that is how your application(s) wish to interact with the database. I don't think your partitioning strategy would affect this decision (at least in this example).
Both approaches are valid, but sometimes people get carried away with the single-table approach and end up putting all data in one big table. There's a name for this (anti)pattern but it slips my mind.