Global memory problems when the MOT feature is used in the openGauss database

In the openGauss database, data is inserted into an MOT table but the transaction is not committed. When mot_global_memory_detail() is used to check the global memory, the global memory keeps increasing. Why?

Inserts use global memory for rows and index keys. If you roll back the transaction, that memory is released back. That is why you see global memory increasing even though the transaction has not been committed yet.
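A minimal sketch of how this can be observed, assuming a hypothetical MOT table mot_test (MOT tables are created with the foreign-table syntax; the exact columns returned by mot_global_memory_detail() vary between openGauss versions):

    -- Hypothetical MOT table (MOT tables are created as foreign tables)
    CREATE FOREIGN TABLE mot_test (id INT PRIMARY KEY, note VARCHAR(100));

    -- Baseline global memory
    SELECT * FROM mot_global_memory_detail();

    BEGIN;
    INSERT INTO mot_test VALUES (1, 'uncommitted row');

    -- Global memory has grown even though nothing is committed yet
    SELECT * FROM mot_global_memory_detail();

    ROLLBACK;

    -- After the rollback, the memory used for rows and index keys is released back
    SELECT * FROM mot_global_memory_detail();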

Related

Is the openGauss MOT a purely in-memory table?

Is the Memory-Optimized Table (MOT) for openGauss a purely in-memory table? Or can it persist data when the data is larger than the server memory, or at least persist it when the server shuts down?
The MOT storage engine is a pure in-memory engine, so yes, the whole dataset must fit into memory. But MOT tables are strictly durable, with persistence capabilities (redo log and checkpoint).
The Memory-Optimized Table for openGauss is an in-memory table. When total memory usage gets close to the configured memory limit, MOT no longer allows additional data to be inserted. But it can still persist data across a server shutdown thanks to its persistence capabilities.

Redis cache tags flush memory leak

We are using Laravel's Redis tagged cache to cache query results for models in this way:
cache()->tags([model1, model2...])->remember(sql_query_hash, result_callback)
and if, for example, after some time and a user peak, one of the tags (say model1) has 500k unique cached queries, and an update comes in that requires:
cache()->tags([model1])->flush();
my job hits the allowed-memory-exhausted error; our workers have 128 MB of memory. Yes, I know that if I increased the workers' memory I could flush 1 million keys and so on, but that is not the right way, because our user base is growing exponentially and the project will keep growing, so some tags may end up with 10 million keys at a user peak. How, then, am I supposed to flush the cache for a tag?
https://github.com/laravel/framework/blob/5.7/src/Illuminate/Cache/RedisTaggedCache.php#L174
This is how Laravel flushes a tag's keys: it retrieves all of them into memory, then chunks them in memory again, so array_chunk doubles the memory usage after all keys have been fetched, and then it runs a Redis::del operation to remove the cached keys for the tag.
I don't know whether to call this a bug or not, but I need some options. Is anyone else dealing with this problem, and does anyone have a solution?

SGA and PGA in Oracle Database Configuration

I know that SGA (which contains data and control information for one Oracle Database instance) stands for System Global Area, and PGA (which contains data and control information exclusively for use by an Oracle process) stands for Program Global Area, but I don't really understand what these settings do for the database. How would it help when retrieving data if the SGA were configured to be, say, 10 times larger than the PGA?
The SGA is a memory structure on the server that contains pools to hold code, SQL, classes, cursors, etc., and caches to hold data. So when a client sends a query to the server, the code and data sit in the SGA to be processed by the RDBMS on the server.
The PGA is a non-shared memory area for a user server process and is used for temporary storage and work areas. Oracle uses the PGA and temp tablespaces to produce a result set, which is passed back to the client; then the PGA for the session is freed.
There is no ratio between the two. The SGA is sized according to how much code and data is getting sent to the server, and the PGA is dynamic according to how many processes are active. If there are thousands of processes, the PGA can easily be double the SGA. The SGA is sized VERY carefully though; making it bigger does not necessarily make it better for performance reasons.
There is also a UGA (User Global Area), which is the memory area associated with each user session.
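As a quick way to see both areas on a live instance, you can query the dynamic performance views; a minimal sketch (the exact statistic names can differ slightly between Oracle versions):

    -- Current SGA components and their sizes
    SELECT name, bytes / 1024 / 1024 AS mb
      FROM v$sgainfo;

    -- Aggregate PGA target versus what is currently allocated
    SELECT name, value / 1024 / 1024 AS mb
      FROM v$pgastat
     WHERE name IN ('aggregate PGA target parameter', 'total PGA allocated');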

Informix - how to release memory?

When I delete rows, those rows are only marked as deleted and new data can be saved there. But is it possible to force the space to be released? How?
First, it is really disk space rather than memory that you're worried about, I think. The memory used by a deleted row is only part of a page image (or several page images if the row size is big enough).
Second, there isn't really a way to release the disk space associated with the row. All the disk space allocated to the chunk remains in use. It was in use before the row was created and remains in use after the row is deleted. Informix handles the allocation.
What is the concern? You won't run out of space either within Informix or at the o/s level because of a deleted row.
Depending on your Informix version, you can use SQL administration API commands for storage optimization and to consolidate free space in tables. Use the repack operation (online or offline), shrink the table, or defragment partition extents (a sketch of the API calls follows the references below).
V11.50 or above
• Repack moves rows from the end of the partition into the empty page space at the front of the partition.
• Shrink releases extents that have been emptied back to the dbspace.
V12.10
• Defragment tables or indexes to merge non-contiguous extents.
See:
• Administrator's Guide, "Manage disk space" chapter
• Administrator's Reference, "SQL Administration API Functions"
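A minimal sketch of those SQL administration API calls, run against the sysadmin database (the database, table, and owner names are placeholders; check the exact argument lists for your Informix version):

    -- V11.50+: repack the rows, then shrink the table's empty extents back to the dbspace
    EXECUTE FUNCTION task("table repack", "my_table", "my_database", "informix");
    EXECUTE FUNCTION task("table shrink", "my_table", "my_database", "informix");

    -- V12.10+: merge non-contiguous extents of the table
    EXECUTE FUNCTION task("defragment", "my_database:informix.my_table");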
If you want to reclaim the storage space occupied by deleted rows, you can unload the table, drop the table, re-create the table, load the unloaded rows into the empty new table you just re-created, re-create the indexes, and update the table's statistics. Note that this will also optimize the table for improved performance.
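A rough sketch of that manual rebuild from dbaccess, with a made-up table and schema standing in for the real one:

    UNLOAD TO 'my_table.unl' SELECT * FROM my_table;

    DROP TABLE my_table;
    CREATE TABLE my_table (
        id   INTEGER,
        note VARCHAR(100)
    );

    LOAD FROM 'my_table.unl' INSERT INTO my_table;

    CREATE INDEX ix_my_table_id ON my_table (id);
    UPDATE STATISTICS FOR TABLE my_table;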

Does using NOLOGGING in Oracle break ACID? Specifically during a power outage

When using NOLOGGING in Oracle, say for inserting new records, will my database be able to gracefully recover from a power outage if it randomly went down during the insert?
Am I correct in stating that the UNDO logs will be used for such recoveries ... as opposed to the REDO logs, which would be used for recovery if the main datafiles were physically corrupted?
It seems to me, you're muddling some concepts together here.
First, let's talk about instance recovery. Instance recovery is what happens following a database crash, whether it is killed, server goes down, etc. On instance startup, Oracle will read data from the redo logs and roll forward, writing all pending changes to the datafiles. Next, it will read undo, determine which transactions were not committed, and use the data in undo to rollback any changes that had not committed up to the time of the crash. In this way, Oracle guarantees to have recovered up to the last committed transaction.
Now, as to direct loads and NOLOGGING. It's important to note that NOLOGGING is only valid for direct loads. This means that updates and deletes are never NOLOGGING, and that an INSERT is only NOLOGGING if you specify the APPEND hint.
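For example (the table names here are made up), a direct-path insert into a table with the NOLOGGING attribute generates minimal redo for the data blocks, while a conventional insert into the same table is still fully logged:

    -- Set the NOLOGGING attribute on the target table
    ALTER TABLE sales_staging NOLOGGING;

    -- Direct-path load: data goes above the high water mark, data blocks can skip redo
    INSERT /*+ APPEND */ INTO sales_staging
    SELECT * FROM sales_source;
    COMMIT;

    -- Conventional insert: always fully logged, regardless of the NOLOGGING attribute
    INSERT INTO sales_staging
    SELECT * FROM sales_source;
    COMMIT;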
It's important to understand that when you do a direct load, you are literally "directly loading" data into the datafiles. So, no need to worry about issues around instance recovery, etc. When you do a NOLOGGING direct load, data is still written directly to the datafiles.
It goes something like this. You do a direct load (for now, let's set aside the issue of NOLOGGING), and data is loaded directly into the datafiles. The way that happens is that Oracle will allocate storage from above the high water mark (HWM), and format and load those brand new blocks directly. When that block allocation is made, the data dictionary updates that describe the space allocation are written to and protected by redo. Then, when your transaction commits, the changes become permanent.
Now, in the event of an instance crash, either the transaction was committed (in which case the data is in the datafiles and the data dictionary reflects those new extents have been allocated), or it was not committed, and the table looks exactly like it did before the direct load began. So, again, data up to and including the last committed transaction is recovered.
Now, NOLOGGING. Whether a direct load is logged or not, is irrelevant for the purposes of instance recovery. It will only come into play in the event of media failure and media recovery.
If you have a media failure, you'll need to recover from backup. So, you'll restore the corrupted datafile and then apply redo, from archived redo logs, to "play back" the transactions that occurred from the time of the backup to the current point in time. As long as all the changes were logged, this is not a problem, as all the data is there in the redo logs. However, what will happen in the event of a media failure subsequent to a NOLOGGING direct load?
Well, when the redo is applied to your segments that were loaded with NOLOGGING, the required data is not in the redo. So, those data dictionary transactions that I mentioned that created the new extents where data was loaded, those are in the redo, but nothing to populate those blocks. So, the extents are allocated to the segment, but then are also marked as invalid. So, if/when you attempt to select from the table, and hit those invalid blocks, you'll get ORA-26040 "data was loaded using the NOLOGGING option". This is Oracle letting you know you have a data corruption caused by recovery through a NOLOGGING operation.
So, what to do? Well, first off, any time you load data with NOLOGGING, make sure you can re-run the load if necessary. So, if you suffer an instance failure during the load, you can restart the load, or if you suffer a media failure between the time of the NOLOGGING load and the next backup, you can re-run the load.
Note that, in the event of a NOLOGGING direct load, you're only exposed to data loss until your next backup of the datafiles/tablespaces containing the segments that had the direct load. Once it's protected by backup, you're safe.
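One hedged way to see whether any datafiles are still exposed in that window is to check the unrecoverable-change information Oracle records per datafile and compare it against the time of your last backup:

    -- Datafiles that recorded NOLOGGING (unrecoverable) changes, and when
    SELECT file#, unrecoverable_change#, unrecoverable_time
      FROM v$datafile
     WHERE unrecoverable_change# > 0;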
Hope this helps clarify the ideas around direct loads, NOLOGGING, instance recovery, and media recovery.
If you use NOLOGGING, you are saying you don't care about the data: NOLOGGING operations should be recoverable with procedures other than the regular database recovery procedures. Many times the recovery will happen without problems. The problem is when you have a power failure on the storage. In that case you might end up corrupting the online redo log that was active, and because of that also have problems with corrupt undo segments.
So, specifically in your case: I would not bet on it.
Yes, much of the recovery would be done by reading undo, but that might get stuck because of exactly the situation you described. That is one of the nastiest problems to recover from.
To be 100% ACID compliant, a DBMS needs to be serializable, and this is very rare even amongst major vendors. To be serializable, read, write, and range locks need to be held until the end of a transaction. There are no read locks in Oracle, so Oracle is not 100% ACID compliant.
