snmp shared tables after reconnect to master agent - snmp

I am using agentx++ to create shared tables.
After a snmpd restart, the subagent reconnects to the master agent but all rows in the shared tables are deleted.
Any idea why the rows are deleted upon snmpd restart?

What kind of mechanism do you have for saving the data to a database? All row data may be kept only in RAM during the agent's lifetime...

Related

Does Oracle Database clear all the memory on restart?

When we restart an Oracle database instance, does it clear all the data in memory and start fresh, or does it keep some information in memory?
All memory that the instance had is cleared. In a RAC database that has multiple instances running, restarting one instance does not clear the memory from other instances.
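As a rough illustration (a sketch, not part of the original answer): the standard dynamic performance view GV$INSTANCE makes this visible, since a restarted instance gets a fresh STARTUP_TIME while any surviving RAC instances keep theirs.
-- Each row is one RAC instance; a restarted instance shows a new
-- STARTUP_TIME, while instances whose memory was never cleared keep
-- their original one.
SELECT inst_id, instance_name, startup_time, status
FROM gv$instance;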

How to perform Greenplum 6.x Backup & Recovery

I am using Greenplum 6.x and facing issues while performing backup and recovery. Do we have any tool to take a physical backup of the whole cluster, like pgBackRest for Postgres? Also, how can we purge the WAL of the master and each segment, since we can't take a pg_basebackup of the whole cluster?
Are you using open source Greenplum 6 or a paid version? If paid, you can download the gpbackup/gprestore parallel backup utility (separate from the database software itself), which will back up the whole cluster with a wide variety of options. If using open source, your options are pretty much limited to pg_dump/pg_dumpall.
There is no way to purge the WAL logs that I am aware of. In Greenplum 6, the WAL logs are used to keep all the individual postgres engines in sync throughout the cluster. You would not want to purge these individually.
Jim McCann
VMware Tanzu Data Engineer
I would like to better understand the issues you are facing when you are performing your backup and recovery.
For open source users of the Greenplum Database, the gpbackup/gprestore utilities can be downloaded from the Releases page of the GitHub repo:
https://github.com/greenplum-db/gpbackup/releases
v1.19.0 is the latest.
There currently isn't a pg_basebackup / WAL-based backup/restore solution for Greenplum Database 6.x.
WAL logs are purged periodically from the master and each segment individually, as they get replicated to the mirror and flushed, so no manual purging is required. Have you looked into why the WAL logs are not getting purged? One possible reason is that a mirror in the cluster is down; if that happens, WAL will keep accumulating on the primary and won't get purged. Run select * from pg_replication_slots; on the master or the segment where WAL is building up to find out more.
If the WAL build-up is caused by a replication slot because a mirror is down, you can use the GUC max_slot_wal_keep_size to configure the maximum size the WAL should be allowed to consume; beyond that, the replication slot is invalidated and no longer consumes additional disk space for WAL.
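As a small sketch of that check (pg_replication_slots is the standard catalog view; the exact columns may vary slightly by version):
-- Run on the master, or on the segment where WAL is accumulating.
-- An inactive slot (active = false) with an old restart_lsn is what
-- typically pins WAL on the primary while its mirror is down.
SELECT slot_name, slot_type, active, restart_lsn
FROM pg_replication_slots;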

Do I need to install an Oracle database for the standby, or only the Oracle software?

I want to clarify what I need to install for the standby; I am confused about this. On the primary everything is fine, but for the standby I don't know what I need to install first. Please explain it simply: for the primary I installed the database without DBCA, but for the standby I don't know.
Assuming you have a database (primary) already configured, running, etc., the steps are:
On the primary:
1. Create a Backup Copy of the Primary Database Data Files
2. Create a Control File for the Standby Database
3. Create a Parameter File for the Standby Database
4. Copy Files from the Primary System to the Standby System
5. Set Up the Environment to Support the Standby Database
On the standby:
6. Start the Physical Standby Database
7. Verify the Physical Standby Database Is Performing Properly
In effect, the software plus the instance (i.e., the parameters etc. needed to start an instance in NOMOUNT mode) are what is required on the standby node. Then you will copy the datafiles from the primary to "flesh out" the database.
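As an illustrative sketch of the SQL behind a few of those steps (file paths here are placeholders; the guide linked below has the complete sequence and the required parameter changes):
-- On the primary: create a control file and a parameter file for the standby.
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';
CREATE PFILE='/tmp/initstandby.ora' FROM SPFILE;
-- On the standby, after copying the datafiles, control file, and parameter
-- file: start the instance and begin applying redo.
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- Verify that archived redo is being received and applied.
SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED
FROM V$ARCHIVED_LOG
ORDER BY SEQUENCE#;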
But all of this is documented really well in a step-by-step guide
https://docs.oracle.com/en/database/oracle/oracle-database/19/sbydb/creating-oracle-data-guard-physical-standby.html

How to use the Db2 read on standby feature

IBM Db2 has a feature for HADR databases called read on standby. This allows the standby database to be connected to for read-only queries (with certain restrictions on data types and isolation levels).
I am trying to configure this as a datasource in an application which runs on websphere liberty profile.
Previously, this application was using Automatic Client Reroute (which ensures that all connections are directed to the current primary).
However, I would like to configure it so that SELECTs / read-only flows run on the standby database and everything else runs on the primary. This should also keep working after a takeover has been performed on the database (that is, the standby becoming the primary and vice versa). The purpose of doing this is to divide the number of connections created between all available databases.
What is the correct way to do this?
Things I have attempted (assume my servers are dbserver1 and dbserver2):
Create 2 datasources, one with the db url of dbserver1 and the other with dbserver2.
This works until a takeover is performed and the roles of the servers are switched.
Create 2 datasources, one with the db url of dbserver1 (with the Automatic Client Re-route parameters) and the other with dbserver2 only.
With this configuration, the application works fine, but if dbserver2 becomes the primary then all queries are executed on it.
Set up HAProxy and use it to identify which server is the primary and which is the standby, then create 2 datasources pointing to HAProxy.
When a takeover is carried out on the database, connection exceptions start to occur (not just at the time of the takeover, but for some time following it).
The appropriate way is described in the whitepaper "Enabling continuous access to read on standby databases using Virtual IP addresses", which is linked from the Db2 documentation for read on standby.
Virtual IP addresses are assigned to both roles, primary and standby, and are cataloged as database aliases. WebSphere or other clients connect to either the primary or the standby datasource. When there is a takeover or failover, the virtual IP addresses are reassigned to the appropriate server, so the client continues to be routed to the desired server, e.g. the standby.

How will data be synchronized to the primary after a crash in HBase?

I'm a newbie to HBase. Assume that we have primary and secondary regions.
Assume that our primary region goes down for a few hours due to some external factor, and is then brought back to normal status.
It may have missed some of the data loaded while the primary region was offline. So how will the primary server be synchronized with the data it missed?
Thanks in advance!
If the primary region server crashes or becomes unavailable, the secondary region server will provide read-only access to the data. The primary region server provides both read and write access, but the secondary region server provides only read access.
See this
Regarding data recovery: data is written to the WAL (Write-Ahead Log) before the actual write, so when the region server recovers, all pending log entries will be replayed and the node will be back in sync.
