Can I truncate the Magento catalog_product_entity_int table? - magento

In the Magento database, the catalog_product_entity_int table has grown very large (about 500 MB), and performance is suffering.
How can we reduce its size, or can this table be truncated the way the log tables can?

You cannot truncate the catalog_product_entity_int table.
In Magento database, an entity can have several tables that share the same prefix.
For example, the product entity has the catalog_product_entity table for its main data and several other tables prefixed with “catalog_product_” such as catalog_product_entity_int, catalog_product_entity_media_gallery, catalog_product_entity_text and so on.
To store the data more efficiently, product details are stored separately depending on their data types.
When the value of the data is an integer type, it’s saved in the catalog_product_entity_int table, and when its type is an image, it’s saved in the catalog_product_entity_media_gallery table.
If you want to truncate tables for performance, you can truncate the tables below; they likely hold a lot of accumulated data:
log_customer
log_visitor
log_visitor_info
log_url_info
log_quote
report_viewed_product_index
report_compared_product_index
catalog_compare_item
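As a minimal sketch, clearing those tables is one TRUNCATE per table, assuming the standard Magento table names listed above (Magento repopulates the log tables as visitors return, but take a backup first):

```sql
-- Safe-to-clear Magento log/report tables (standard names assumed).
TRUNCATE TABLE log_customer;
TRUNCATE TABLE log_visitor;
TRUNCATE TABLE log_visitor_info;
TRUNCATE TABLE log_url_info;
TRUNCATE TABLE log_quote;
TRUNCATE TABLE report_viewed_product_index;
TRUNCATE TABLE report_compared_product_index;
TRUNCATE TABLE catalog_compare_item;
```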
Let me know if you have any questions.

Related

Many-to-many from relational design to dimensional design

This is my database design:
I am building a data warehouse with a two-fact-table design, as shown.
I want to add an Order dimension, but the problem is that there is a bridge table between the product table and the order table; the order table contains 830 rows and the order details table contains 2155 rows.
How can I create the order dimension?

Generate Alter statements of partition of all existing tables from Oracle views in 12c

I want to dynamically generate the ALTER statement below (the one shown is an example; it differs from table to table) for all partitioned tables in a 12c DB.
Some tables may be partitioned by RANGE, LIST, etc.
The column name and partition type will also vary per table.
ALTER TABLE EMP
MODIFY PARTITION BY RANGE (START_DATE)
( PARTITION P1 VALUES LESS THAN (date'2021-1-1') ) ONLINE;
I have already created the tables without partitions in another DB, and now I want to partition those tables that were partitioned in the source DB. So I want a simple script that can generate the code to partition the tables in the target DB. Note: all tables have different partitioning, and my goal is to keep them in sync with the source. Only the data differs between the two DBs.
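One way to approach this (a sketch, not a complete script) is Oracle's DBMS_METADATA package, which emits each table's DDL including its PARTITION BY clause; you would still need to extract that clause and wrap it in the ALTER ... MODIFY syntax yourself. The schema name below is an assumption:

```sql
-- Suppress storage/segment clauses so the output stays close to a
-- plain partition definition.
BEGIN
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
END;
/

-- Full DDL (including PARTITION BY ...) for every partitioned table
-- in the source schema ('SCOTT' is an example).
SELECT table_name,
       DBMS_METADATA.GET_DDL('TABLE', table_name, owner) AS full_ddl
FROM   dba_part_tables
WHERE  owner = 'SCOTT';
```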

How can I add column from one table to another in Clickhouse?

I have a table with the columns (id, col1, col2). I need to add col3 from a temporary table with columns (id, col3), so that after the operation table 1 has (id, col1, col2, col3). After this, I drop the temporary table. How can this be done in ClickHouse?
I know of an approach that uses the Join table engine. However, Join table data is stored in memory and I have memory restrictions. How can I achieve the same result without creating an in-memory table?
There is no magic in this realm, and the spoon does exist: joinGet needs an in-memory Join table.
What you can do is apply that approach in slices, running many small updates, for example 10% of the rows at a time, so the Join table only ever holds a fraction of the mapping.
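A minimal sketch of that sliced approach, assuming the question's names (table1, tmp_table), an integer id, and col3 as a String: load a tenth of the mapping into the Join table, apply it with a mutation, clear it, and repeat.

```sql
-- Hypothetical names based on the question; col3 type is assumed.
ALTER TABLE table1 ADD COLUMN col3 String DEFAULT '';

-- Small in-memory Join table, holding only one slice at a time.
CREATE TABLE tmp_join (id UInt64, col3 String) ENGINE = Join(ANY, LEFT, id);

-- One slice (~10% of ids); repeat for id % 10 = 1 ... 9,
-- waiting for each mutation to finish before the next slice.
INSERT INTO tmp_join SELECT id, col3 FROM tmp_table WHERE id % 10 = 0;
ALTER TABLE table1
    UPDATE col3 = joinGet('tmp_join', 'col3', id)
    WHERE id % 10 = 0;
TRUNCATE TABLE tmp_join;

-- When all slices are done:
-- DROP TABLE tmp_table; DROP TABLE tmp_join;
```

Note that ALTER TABLE ... UPDATE is an asynchronous mutation; check system.mutations before starting the next slice.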

Are constraints important for select?

I have a table on database1 with some constraints, such as a primary key and a check constraint. All my logic for inserting data into this table lives on this database; I then need this table on database2 just to select data from.
So I set up replication for this table using database links (select data from database1, insert it into database2). I created the table on database2 without constraints; I just created an index on the fields I need in my select's WHERE clause.
Is there any reason I would need to create the same constraints (primary key and checks) on database2, when I need this table only for selects? Would it make a performance difference?
There is no need to have the same constraints on your table in database2, as long as you are not populating that table from anywhere other than the source table in database1, which has the constraints. All validation has already been done while inserting records into the table in database1. Adding the constraints brings no performance improvement here; indexing the table in database2 is what will help your selects.
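For illustration (the table and column names below are assumptions, not from the question), the database2 side only needs indexes matching the query predicates:

```sql
-- On database2: no PK/check constraints, just an index on the
-- columns that appear in the SELECT's WHERE clause.
CREATE INDEX orders_copy_status_idx ON orders_copy (status, order_date);
```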

Populating Tables into Oracle in-memory segments

I am trying to load tables into the Oracle In-Memory column store. I have enabled the tables for INMEMORY using the SQL*Plus command ALTER TABLE table_name INMEMORY. The table also contains data, i.e. the table is populated. But when I run the command SELECT v.owner, v.segment_name name, v.populate_status status FROM v$im_segments v;, it returns no rows selected.
What can be the problem?
Have you considered this?
https://docs.oracle.com/database/121/CNCPT/memory.htm#GUID-DF723C06-62FE-4E5A-8BE0-0703695A7886
Population of the IM Column Store in Response to Queries
Setting the INMEMORY attribute on an object means that this object is a candidate for population in the IM column store, not that the database immediately populates the object in memory.
By default (INMEMORY PRIORITY is set to NONE), the database delays population of a table in the IM column store until the database considers it useful. When the INMEMORY attribute is set for an object, the database may choose not to materialize all columns when the database determines that the memory is better used elsewhere. Also, the IM column store may populate a subset of columns from a table.
You probably need to run a SELECT against the data first.
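As a quick check (the table name is an example), force a full scan to trigger on-demand population and then query v$im_segments again; alternatively, raise the priority so the database populates the segment without waiting for a query:

```sql
-- Force a full table scan to trigger on-demand population.
SELECT /*+ FULL(t) */ COUNT(*) FROM sales t;

-- Or ask the database to populate the segment eagerly.
ALTER TABLE sales INMEMORY PRIORITY CRITICAL;

-- Then re-check the population status.
SELECT v.owner, v.segment_name, v.populate_status FROM v$im_segments v;
```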
