Is there a way to attach a materialized view in ClickHouse? - clickhouse

Something failed while we were swapping disks for our ClickHouse database. When ClickHouse started, the tables were not there, so I had to attach them all with ATTACH TABLE IF NOT EXISTS ....
Is there a way to do the same for materialized views? I couldn't find a way to do that, and when I try to create one from scratch (CREATE MATERIALIZED VIEW IF NOT EXISTS ...), ClickHouse says:
Data directory for table already containing data parts - probably it
was unclean DROP table or manual intervention. You must either clear
directory by hand or use ATTACH TABLE instead of CREATE TABLE if you
need to use that parts.
So the files are still there, but I don't know how to attach the view.

You need to attach the ".inner." table first.
Materialized views do not store data themselves; they create a special table with the engine you choose when you create the view. The name of that table is ".inner.the_name_of_the_view".
So you need to attach that table first, and then attach the materialized view.
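As a rough sketch (the database and view names below are made up, and the exact form depends on your ClickHouse version and whether the .sql metadata files survived), the order would be:
ATTACH TABLE IF NOT EXISTS db.`.inner.my_view`;   -- the hidden storage table that holds the view's data
ATTACH TABLE IF NOT EXISTS db.my_view;            -- then the materialized view itself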

Once the other tables were attached and the ClickHouse server was restarted, the views were attached automatically. I had tried to attach the .inner tables as well, but it wouldn't let me.

Related

Creating Hive View - Turn off metadata lookup from Hive Metastore

Is it possible to create a Hive view on top of a nonexistent Hive table or view? This ability would help us deploy Hive DDL without having to worry about ordering at refresh time (when migrating tables or views from one environment to another). In our environment, we have views built on top of other views. If we deploy them in an arbitrary order, with the default setup some of the views may fail, saying the underlying table/view doesn't exist. We are looking to turn off the metadata lookup from the Hive metastore so that these checks are not done at view creation time. They can be enforced after the deployment, or when the view is queried for data, because by then all the views/tables will be completely deployed and there won't be any such errors.
I searched the internet for pointers but couldn't find any. Any suggestions in this regard would be helpful.
Thanks in advance.
Add IF NOT EXISTS to all CREATE statements and run the whole script several times until the errors disappear.
If the statements below are executed twice in the wrong order, the second run will succeed without any error:
drop view if exists my_view;
create view if not exists my_view as select * from table1; -- fails the first run (table1 does not exist yet), succeeds on the second
drop table if exists table1;
create table if not exists table1(id int);

Auto-synchronize a table based on a view - ORACLE DATABASES

I want to ask if there is a way to auto-synchronize a table, e.g. every minute, based on a view created in Oracle.
The view uses data from another table. I've created a trigger, but I noticed a big slowdown in the database whenever a user updates a column or inserts a row.
I've also tried to create a scheduled job on the table in question (the one I want to keep synchronized with the view), but we don't have the privilege to do that.
Is there any other way to keep the data in sync between the table and the view?
PS: I'm using Toad for Oracle v12.9.0.71.
A materialized view in Oracle is a database object that contains the results of a query. They are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables are also known as snapshots.
Example:
SQL> CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST START WITH SYSDATE
NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;
You can also use a cron job or dbms_job to schedule the snapshot refresh.
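For example, a minimal DBMS_SCHEDULER sketch (the job name, materialized view name, and one-minute interval below are illustrative, matching the asker's requirement; DBMS_JOB could be used similarly on older releases):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'refresh_mv_emp_pk',                        -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV_EMP_PK''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',                  -- every minute
    enabled         => TRUE);
END;
/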

Oracle 11g materialized views against another view

Good day,
Is it technically possible to create a materialized view against another view in the master database (as opposed to a solid table)?
My DBA advises that Oracle does not allow the creation of materialized view logs against a view. IMO a view is pretty much the same as a table, so it ought to be possible.
Has anyone ever done this successfully (Oracle 11g)?
The documentation is pretty clear about this:
Restrictions on Master Tables of Materialized View Logs
The following restrictions apply to master tables of materialized view logs:
You cannot create a materialized view log for a temporary table or for a view.
You cannot create a materialized view log for a master table with a virtual column.
A view is not pretty much the same as a table. It's a stored query, so it has no storage and DML is only possible through instead-of triggers. It doesn't have the basis for noticing an underlying data change.
Just thinking about it, in order to support a view log - even for a simple single-table view - an update to the view's base table would have to check if there were any views; check if any of those had materialized view logs; and then work out if the change needed to be logged - which would mean executing each view's query and looking at the entire result set. Imagine doing that for every change to the underlying table. And then imagine a more complicated view query, with aggregates or multiple tables, etc.
You'd effectively be materialising the view query just to decide whether there was a change that needed to be logged, in order to update an actual materialized view. This is partly what actual materialized views do - they stop you having to repeatedly re-execute an expensive view query by tracking changes on the master table(s).
You have to create the materialized view logs on the (normal) view's base tables.
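For illustration only (base_table, its columns, and mv_from_base are hypothetical names): create the log on the underlying table and point the materialized view at the base table rather than at the view:
CREATE MATERIALIZED VIEW LOG ON base_table WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW mv_from_base
  REFRESH FAST ON DEMAND
AS SELECT id, col1 FROM base_table;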

Oracle materialized view log

If I have a table T with 10 columns and create a materialized view log with only 3 of them, as well as a materialized view with those 3, why is a record inserted into MLOG$_T when I update ANY column in the table (not just those in the log)? Is there a way to avoid this?
Thanks
No, you can't avoid it. The primary purpose of the log is to identify the rows that have changed, not the changed data itself (although you can include that in the log as well). The process for maintaining the log doesn't know anything about the view(s) that might be created that will use the logs, so it has to maintain the log for any change in the table.
Think of it this way: suppose you later create another materialized view on the same table that uses other columns. If the log is to be useful, it must record changes for those rows as well. After all, you can only create one log per table.
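A small experiment (table and column names are made up) that shows this behaviour: even though column c is not among the logged filter columns, updating it still produces a row in MLOG$_T:
CREATE TABLE t (id NUMBER PRIMARY KEY, a NUMBER, b NUMBER, c NUMBER);
CREATE MATERIALIZED VIEW LOG ON t WITH PRIMARY KEY (a, b) INCLUDING NEW VALUES;

INSERT INTO t VALUES (1, 10, 20, 30);
COMMIT;

UPDATE t SET c = 99 WHERE id = 1;   -- c is not a logged column...
COMMIT;

SELECT * FROM mlog$_t;              -- ...yet the change is still recorded for id = 1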

Changes in a materialized view

I have a materialized view on one server which is created over a DB link.
There is a job running on that MView (created with dbms_refresh.make earlier).
Now I have added 3 new fields to the original table.
My questions are:
1) Do I need to drop and create the MView again? If yes, do I need to create the MView log on the main server again?
2) What happens to the job running on the MView? Do I need to create it again?
Also, there are views created on top of the MView, so if I run a CREATE OR REPLACE VIEW query, will it cause any problem?
Please guide.
Thanks!
If you need to include the new columns in your materialized view, then yes, you need to re-create the materialized view. You must explicitly drop it, as there is no "create or replace materialized view" statement.
DROP MATERIALIZED VIEW blah;
CREATE MATERIALIZED VIEW blah...
Dropping/recreating the materialized view should re-create the refresh job. Not 100% certain, but you should probably recreate the log as well.
And, if you don't need to include the new columns in your view, you really don't need to do anything...
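If you do need the new columns and the log on the master site has to cover them, a minimal sketch (the object name and log options are hypothetical; pick the options your refresh method needs) would be:
DROP MATERIALIZED VIEW LOG ON source_table;
CREATE MATERIALIZED VIEW LOG ON source_table WITH PRIMARY KEY;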
After dropping and re-creating the materialized view, you should recompile the other views, because they may have become invalid.
You can check if that happened with
select *
from user_objects
where status = 'INVALID';
Recompiling a view can be done with
alter view the_view compile;
or
exec dbms_utility.compile_schema(user);
The latter simply recompiles everything in your schema. Be sure to have no running jobs while doing this!
