Does my Vertica work in-memory? - vertica

I am using HP Vertica 7.0.
It sometimes slows down. (select count(*) from sessions; returns 250.)
When I check the system monitor on the CentOS machine that Vertica is installed on,
there is no heavy load. I want to know whether the database works in-memory.
Does that come by default, or should I set it in the parameters table?
Thanks in advance

This means that you have 250 programs currently connected to your Vertica cluster.
Every connection uses memory and other resources, even if it is idle.
To see how many requests are actually executing, you can run:
SELECT
COUNT(*) as active_request_count
FROM query_requests
WHERE is_executing;
And, by all means, verify how many of the 250 connections/sessions are really needed, and close any sessions that are not.
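For example, here is a sketch of how you might spot idle sessions and close them, using Vertica's built-in CLOSE_SESSION and CLOSE_ALL_SESSIONS functions (the session ID below is purely illustrative):
-- List sessions; an idle session shows NULL in current_statement
SELECT session_id, user_name, client_hostname, current_statement
FROM sessions;
-- Close one specific session by its ID (hypothetical value)
SELECT CLOSE_SESSION('myhost-32029:0x106b2');
-- Or, more drastically, close every session except your own
SELECT CLOSE_ALL_SESSIONS();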
Maybe, if you can, it's easiest to shut down and restart the database.
Good luck
Marco

Related

DB2 database activation is very slow

We have a DB2 11.1.4.4 database on CentOS 7; the file system is XFS and the database is 50 TB in size. Sometimes it takes a very long time (1-2 hours) to activate the database. While the activate command is running, instance memory usage is less than 10 percent of its configured value, disk I/O is fine, and there are no messages in db2diag.log or the server log. What causes this problem?
Edited:
Our database uses HADR, and sometimes when we have stopped the database and want to activate it again, we encounter this problem on both the primary and the standby.
Thanks
Too many primary log files can slow down your DB activation as well, since the primary logs are allocated when the database activates.
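As a sketch, you could check the log configuration through the SYSIBMADM.DBCFG administrative view (assuming it is available on your instance); the CLP command in the comment and the database name MYDB are illustrative:
-- Inspect how many primary/secondary logs are configured, and their size
SELECT NAME, VALUE
FROM SYSIBMADM.DBCFG
WHERE NAME IN ('logprimary', 'logsecond', 'logfilsiz');
-- Reducing LOGPRIMARY is done from the CLP, e.g.:
--   db2 update db cfg for MYDB using LOGPRIMARY 20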

Stored Procedures Overwhelming Oracle.EXE On Oracle 11g On Windows

Until very recently we ran a 3rd party HR database on an Oracle Unix environment. I have additionally set up various web services that hit stored procedures to carry out a few bespoke processes for our users, and all ran well for years.
However, now that we have moved to Oracle on a Windows environment there is suddenly a big problem.
The best example I have is a VB.Net solution that reads in a 2000 row CSV of employees into a datatable, runs a couple of stored procedures to bring back Post Id etc, populates a database table with the results, then feeds it all back out into a new CSV. This process used to take 1-2 minutes to complete on Unix. It now takes well over 2 hours and kills the server!
The problem manifests by overwhelming the CPU on the database server. Any stored procedure call sends Oracle.EXE into overdrive, completely maxing out the CPU core it's using, such that no other stored procedures can run and everything grinds to a halt.
We have run Oracle Enterprise Manager, which suggested the creation of some indexes etc., but nothing has improved the issue. Like I say, the SQL ran fine and swiftly for years, and it hasn't changed at all.
Does anybody know what could be causing this? I am completely at a loss.
The way I see it, it must either be:
1. A CPU/hardware issue (but we have investigated, added extra cores etc to no avail)
2. An Oracle configuration issue; or
3. An issue with the 3rd party database (which is supposedly identical to what it was on Unix).
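One check that might help narrow it down (a sketch, assuming I can query v$sql on the 11g instance) is to see which statements are actually burning the CPU:
-- Top 10 statements by cumulative CPU time (11g-compatible syntax)
SELECT *
FROM (
  SELECT sql_id,
         executions,
         ROUND(cpu_time / 1e6) AS cpu_seconds,
         SUBSTR(sql_text, 1, 80) AS snippet
  FROM v$sql
  ORDER BY cpu_time DESC
)
WHERE ROWNUM <= 10;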
Thanks to anyone who read this far.
P.S. I've had a Stack Overflow user account for years but can't get logged into it any more. Back to noobie status for me!

Database stopped when running 500 queries per second

I built a chat application in which the chat page is reloaded every 1 second through AJAX,
and I used a DB2 Express-C database for storing messages.
One day 500 users used this app at the same time, and the database stopped working.
Is there any effect on the database from running 500 queries at a time in one second?
Please tell me how to run queries every second without affecting the database's functionality.
The red mark on the DB2 icon means that the instance stopped working. This issue could be related to a memory problem or something else.
You have to check the db2diag.log file for messages. It is highly probable that you have information from the time the instance stopped. The first failure data capture (FFDC) feature collects all that information in the diag directory when a crash occurs.
In order to fix the problem, you just need to restart DB2. You can create a task that checks whether the instance is up and, if not, tries to restart it. However, this is the wrong way to keep DB2 up.
You should look at what happened at the time DB2 crashed. Probably the memory for the 500 agents was too high, and DB2 could not reserve more memory.
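As a sketch of how to check that theory (assuming the SYSIBMADM administrative views that ship with Express-C), compare the number of connected applications with the configured limit:
-- How many applications are currently connected?
SELECT COUNT(*) AS connected_apps FROM SYSIBMADM.APPLICATIONS;
-- What connection limit is configured?
SELECT NAME, VALUE FROM SYSIBMADM.DBCFG WHERE NAME = 'maxappls';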
Are you running other processes on the same DB2 server? Perhaps one of them corrupted the DB2 memory.

Selective tables/objects Oracle Backup

I need to automate a selective table / user object backup that I currently do via PL/SQL Developer.
The way I currently do it is via Tools/Export Tables and Tools/Export User Objects: manually select the tables/objects, set the options, choose a destination, and export. I do this from a Windows laptop, and the database is located on a SUSE Linux server; both are in the same LAN. The DB runs 24/7 and cannot be shut down. Also, my Oracle programming skills are currently very basic, as I only do maintenance on this solution. I would like to keep doing the backup process on the Windows laptop, but I would also consider a server-side script solution and then retrieving the .sql files from the server.
Thanks in advance
I wouldn't really call it a backup, but look at exp/imp and expdp/impdp (Data Pump) in the Utilities manual.
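Since a server-side, scriptable option was mentioned, here is a minimal sketch of a table-mode Data Pump export driven from PL/SQL via DBMS_DATAPUMP; the directory object BACKUP_DIR and the table names are illustrative assumptions:
DECLARE
  h NUMBER;
BEGIN
  -- Open a table-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE');
  -- Write the dump file to a pre-created server-side directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'selective.dmp',
                         directory => 'BACKUP_DIR');
  -- Export only the tables of interest (names are hypothetical)
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'NAME_EXPR',
                                value => q'[IN ('EMPLOYEES', 'DEPARTMENTS')]');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/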
As Gary implies, exp/imp really isn't a backup solution. If this database is important to you or others, figure out how to use RMAN, which is usually configured to run in a mode that doesn't require the database to be shut down. Although it executes on the database host and, for non-tape destinations, must write its files to a filesystem attached to the host, it can be launched remotely.
RMAN is aimed at restoring/recovering the entire database, so if what you're looking for is only the ability to recover isolated objects, it may not be for you.

Oracle datafiles on a Network Share

I have an Oracle 8.1.7 server running on Windows 2000 Advanced Server in a virtual machine. We are currently using MS Virtual Server to host this. (The allocated hardware is powerful enough: we have 3.5GB RAM assigned and a single 2GHz processor core, more than most servers had in 1999.)
One of the limitations of Virtual Server is the maximum size of a Virtual Hard Disk (127GB), and the database I'm trying to import is 143GB.
To get around this problem, I'm trying to create the DB datafiles on the physical HDD, which has sufficient space.
My problem is that I'm having difficulty creating a database instance on a network share.
Does anyone know how I can do this while retaining my youthful good looks (and hair!)?
Cheers,
Brian
You need the account your Oracle service is started under to have access to the network share.
Can't say it's a good idea to create an Oracle datafile on a network share, but it's a viable solution if you don't mess much with your datafiles and share accessibility.
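A minimal sketch of what that looks like, assuming the service account can read and write the share (the server name, path, and size below are illustrative):
-- Place a new tablespace's datafile on the share via its UNC path
CREATE TABLESPACE user_data
  DATAFILE '\\fileserver\oradata\user_data01.dbf' SIZE 4000M;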
You say 'import'. If you are using exp/imp, one option may be to import only individual users or tables, and slim them down individually.
Also, the size of an IMP file doesn't correlate with the size of the database. A 140GB exp/imp file may result in a much smaller database (or, conversely, a larger one, since the exp/imp file holds only the index metadata rather than the built indexes). Even a database with datafiles totalling 140GB could be smaller if those datafiles contain a lot of unused space.
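As a rough sketch of that last point (assuming query access to the usual DBA views), you can compare the space allocated to datafiles with the space actually used by segments:
-- Total space allocated to datafiles
SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS allocated_gb
FROM dba_data_files;
-- Space actually occupied by segments
SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS used_gb
FROM dba_segments;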
