How do we optimize the ever-growing VCDB? Can we keep tabs on the database size? What measures can we take to clean up the database and reduce its size?
The main things that fill up the database are the vCenter logs. As a start, we tend to set the following:
To purge the data in the VPX_EVENT table:
1. Connect to Servername\SQL Database and log in with the appropriate credentials.
2. Click databases to expand and select VIM_VCDB > Tables.
3. Right-click the dbo.VPX_PARAMETER table and select Open.
Note: If you are using SQL Server 2008, right-click the dbo.VPX_PARAMETER table and click Edit Top 200 Rows.
4. Modify event.maxAge to 30, and modify the event.maxAgeEnabled value to true.
5. Modify task.maxAge to 30, and modify the task.maxAgeEnabled value to true.
Note: To speed up the data cleanup, run the preceding steps in several intervals. To do this, first keep the default values of event.maxAge and task.maxAge and perform step 6 to run the cleanup. Then reduce the event.maxAge and task.maxAge values by 60 and run the cleanup again. Repeat these steps until the values reach 30 for the final cleanup.
6. Run the built-in stored procedure:
a. Go to VIM_VCDB > Programmability > Stored Procedures.
b. Right-click dbo.cleanup_events_tasks_proc and select Execute Stored Procedure.
This purges the data from the vpx_event, vpx_event_arg, and vpx_task tables based on the date specified for maxAge.
c. When this has successfully completed, close SQL Management Studio and start the VMware Virtual Center Server service.
That should keep the database at a reasonable size. You can also set up maintenance plans to keep the DB running well if required.
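For reference, the same parameter changes and cleanup can also be scripted. This is a minimal T-SQL sketch, assuming the default VIM_VCDB database name and that VPX_PARAMETER stores its settings in NAME/VALUE columns; verify both against your instance and run it with the vCenter Server service stopped, as in the steps above.
USE VIM_VCDB;

-- Age events and tasks out after 30 days (column names assumed to be NAME/VALUE).
UPDATE dbo.VPX_PARAMETER SET VALUE = '30'   WHERE NAME = 'event.maxAge';
UPDATE dbo.VPX_PARAMETER SET VALUE = 'true' WHERE NAME = 'event.maxAgeEnabled';
UPDATE dbo.VPX_PARAMETER SET VALUE = '30'   WHERE NAME = 'task.maxAge';
UPDATE dbo.VPX_PARAMETER SET VALUE = 'true' WHERE NAME = 'task.maxAgeEnabled';

-- Purge VPX_EVENT, VPX_EVENT_ARG and VPX_TASK rows older than the maxAge values.
EXEC dbo.cleanup_events_tasks_proc;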
I thought for sure this would be an easy issue, but I haven't been able to find anything. In SQL Server SSMS, if I run a SQL statement, I get back all the records of that query, but in Oracle SQL Developer I can apparently get back at most 200 records, so I cannot really test the speed or look at the data. How can I increase this limit to be as much as I need, to match how SSMS works in that regard?
I thought this would be a quick Google search, but it seems very difficult to find, if it is even possible. I found one article on Stack Overflow that states:
You can also edit the preferences file by hand to set the Array Fetch Size to any value.
Mine is found at C:\Users\<user>\AppData\Roaming\SQL Developer\system4.0.2.15.21\o.sqldeveloper.12.2.0.15.21\product-preferences.xml on Win 7 (x64).
The value is on line 372 for me. I have changed it to 2000 and it works for me.
But I cannot find that location. I can find the SQL Developer folder, but my system is 19.xxxx and there is no corresponding file in that location. I did a search for "product-preferences.xml" and couldn't find it in the SQL Developer folder. Not sure if Windows 10 has a different location.
As such, is there any way I can edit a config file of some sort to change this setting, or any other way to do it?
If you're testing execution times you're already good. Adding more rows to the result screen is just adding fetch time.
If you want to add fetch time to your testing, execute the query as a script (F5). However, this still has a max number of rows you can print to the screen, also set in preferences.
Your best bet, I think, is the Autotrace feature. You can tell it to fetch all the rows, and you'll also get a ton of performance metrics and the actual execution plan. Check the box that tells Autotrace to fetch all rows in the Autotrace preferences, then use the Autotrace button in the toolbar to run the scenario.
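As a supplement, if you also have SQL*Plus or SQLcl available, a similar end-to-end timing can be scripted there. A minimal sketch follows; substitute your own query, and note that the AUTOTRACE statistics may require the PLUSTRACE role.
-- TRACEONLY fetches every row but suppresses the output, so the elapsed time
-- includes the full fetch without the cost of rendering rows on screen.
SET TIMING ON
SET AUTOTRACE TRACEONLY
SELECT * FROM your_big_table;   -- your_big_table is a placeholder for your query
SET AUTOTRACE OFF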
I'm using Visual Studio 2015 Load Test and running a Web Performance test that has a data source connected. The data source contains user login information for 250 users.
Running this in sequential order on a single agent works fine. However, I'm attempting to add in 10 test agents to share out the load. By design, the Load Test copies the data source to each agent and runs the test. What ends up happening is that all 10 agents start the test using the row 1 user from the data source. I'm hoping there's a way to set up the Load Test to run sequentially across all agents (e.g. Agent 1 uses row 1, Agent 2 uses row 2, Agent 3 uses row 3, and so on).
I suspect there's not an option to set this up, but wondered if anyone ran into this and had workarounds to offer. I did find this info via http://vsptqrg.codeplex.com
Multiple machines running as a rig
Sequential – This works the same as if you are on one machine. Each agent receives a full copy of the data and each starts with row 1 in the data source. Then each agent will run through each row in the data source and continue looping until the load test completes.
Random – This also works the same as if you run the test on one machine. Each agent will receive a full copy of the data source and randomly select rows.
Unique – This one works a little differently. Each row in the data source will be used once. So if you have 3 agents, the data will be spread across the 3 agents and no row will be used more than once. As with one machine, once every row is used, the web test will stop executing.
You can split the data set/CSV and distribute it to the agents, i.e. in your case 25 rows per agent, and execute the test.
Each agent can then use its own data set/CSV.
CSV Split: http://monchito.com/blog/autosplit-csv
The nearest you can get to what you want is to use the unique setting. However, each data source row will only be used once, then the test will stop. With a data source containing 250 lines, only 250 test executions will take place. I do not know the exact distribution of data source rows to agents when unique is specified.
If more than one execution per data source row is wanted then another approach is to have one data source column per agent. Use the agent_id to select the column. Use the sequential data source access. A variation is to just have one set of data in the data source but append the agent_id to some of the values in the data sources. This answer has some variations on these ideas and some code.
Another possibility is to use the MoveDataTableCursor method to set a specific row for each test execution. This could be called in a PreWebTest method of a WebTestPlugin. The code would use the context parameters $AgentId and $WebTestIteration. The call would be based on the following:
MoveDataTableCursor(..., ..., $AgentId * NumberOfAgents + $WebTestIteration);
Notes:
The values of $AgentId and $WebTestIteration from the context are strings; they would need to be converted to numbers to do the multiply and add.
Would also need to check whether the two values are zero-based or one-based.
The documentation for MoveDataTableCursor is not very informative
The situation is simple: there is a table in Oracle used as a "shared table" for data exchange. The table structure and the number of records remain unchanged. In the normal case, I continuously update data in this table and other processes read the table for the current data.
The strange thing is, when my process starts, the time consumption of each update statement execution is approximately 2 ms. After a certain period of time (like 8 hours), the time consumption increases to 10-20 ms per statement. It makes the procedure quite slow.
The structure of the table (image not reproduced; the relevant columns are YCID, MEAVAL, QUALITY and LASTUPDATE):
and the update statement is like:
anaNum = anaList.size();

// Prepare the statement once, then bind and execute it for every record.
// Each exec() is auto-committed because no transaction is started explicitly.
qry.prepare(tr("update YC set MEAVAL=:MEAVAL, QUALITY=:QUALITY, LASTUPDATE=:LASTUPDATE where YCID=:YCID"));
foreach (STbl_ANA ana, anaList)
{
    qry.bindValue(":MEAVAL", ana.meaVal);
    qry.bindValue(":QUALITY", ana.quality);
    qry.bindValue(":LASTUPDATE", QDateTime::fromTime_t(ana.lastUpdate));
    qry.bindValue(":YCID", ana.ycId);
    if (!qry.exec())
    {
        // Keep the failed record so it can be retried later.
        qWarning() << QObject::tr("update yc failed, ")
                   << qry.lastError().databaseText() << qry.lastError().driverText();
        failedAnaList.append(ana);
    }
}
The update statement above is issued through the Qt SQL interface.
There are many reasons that can cause Oracle operations to slow down, but I cannot find a clue to explain this.
I never start a transaction manually in the Qt code, which means a commit is executed after every update statement.
The update frequency is about 200 records per second, but the number changes dynamically over time. It may increase to 1000 at one point and drop to 10 the next.
Once the time consumption goes up to 10-20 ms per statement, it never drops back down. It can only be restored to 2 ms by restarting the Oracle service (it is useless to shut down or restart any user process that accesses Oracle).
Please tell me how to solve this, or at least what should be examined.
A good starting point is to check the AWR and ASH reports.
Comparing the reports from the "good" and "bad" periods, you can spot the cause of the change. It can be, for example, a change of execution plan or an increase in wait events. One possible outcome is that the only change you see is that the database is spending more time waiting on the client (i.e. the problem is not in the DB).
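As a rough sketch, both reports can be generated from SQL*Plus on the database server (note that AWR and ASH require the Diagnostics Pack license):
-- Generate an AWR report: pick begin/end snapshots covering a "fast" window,
-- then repeat for a "slow" window and compare the two.
@?/rdbms/admin/awrrpt.sql

-- Generate an ASH report for a finer-grained look at what sessions waited on.
@?/rdbms/admin/ashrpt.sql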
Anyway, as diagnosed in the other answer, the root cause of the problem seems to be the update in a loop. If your update lists are long (say, more than 10-100 entries) you can profit from updating the whole list in a single statement using MERGE (a rough sketch follows the steps below):
build a collection from your list
cast the collection as TABLE
use this table in a MERGE statement to update the rows.
See here for details.
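A rough sketch of that approach, assuming the YC table from the question, illustrative column types, and a client that can bind a collection; names like yc_row, yc_tab and :yc_list are made up for the example.
-- One-time setup: a SQL object type and a nested table type for the rows.
CREATE TYPE yc_row AS OBJECT (ycid NUMBER, meaval NUMBER, quality NUMBER, lastupdate DATE);
/
CREATE TYPE yc_tab AS TABLE OF yc_row;
/

-- One MERGE per batch instead of one UPDATE (and one commit) per record.
MERGE INTO yc t
USING (SELECT ycid, meaval, quality, lastupdate
         FROM TABLE(CAST(:yc_list AS yc_tab))) s
   ON (t.ycid = s.ycid)
 WHEN MATCHED THEN
   UPDATE SET t.meaval     = s.meaval,
              t.quality    = s.quality,
              t.lastupdate = s.lastupdate;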
You can trace the session while it is running quickly and again later when it is running slowly. Use the sql trace functionality and tkprof to get a breakdown of where the update is spending its time in each case and see what has changed.
https://docs.oracle.com/cd/E25178_01/server.1111/e16638/sqltrace.htm#i4640
If you need help interpreting the results you can update your question or ask a new one.
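A minimal sketch of that tracing, assuming DBMS_MONITOR is available and using placeholder SID/SERIAL# values (look them up in V$SESSION for the session doing the updates):
-- Enable extended SQL trace (waits and binds) for the session doing the updates.
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);

-- ...let it run for a while during the "fast" period, and again when it is "slow"...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);

-- Then format the trace file on the database server, for example:
--   tkprof ORCL_ora_12345.trc update_trace.txt sys=no sort=exeela,fchela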
Secondly, as a rule, single-record updates are not the best way to do updates in Oracle. Since you already have many records prepared before you prepare the query, look at execBatch.
https://doc.qt.io/qt-4.8/qsqlquery.html#execBatch
This will both execute the update faster and only issue a single commit.
I am the sysadmin for a school, so I'm an IT generalist, a jack of all trades, master of none, right? Our student information system runs on top of Oracle 11g. I would like to know how to use LogMiner to find out, at the very least, when something was changed in the database that shouldn't have been changed.
I have configured a test server to play with, so rest your mind, our production system isn't at risk while I play here.
The server is Windows. I go to a command prompt, type sqlplus / as sysdba.
Execute DBMS_LOGMNR.ADD_LOGFILE blah, blah multiple times to add the log files.
alter session set NLS_DATE_FORMAT = 'mm-dd-yyyy HH24:mi:ss'; so the time stamps tell me more than just the date.
Then I go to the application on my test server and make a change to a student demographic record. I want to find this change using logminer.
I do a select timestamp,sql_undo from V$LOGMNR_CONTENTS WHERE TIMESTAMP > TO_DATE('04-11-2013 11:59:00'); (I made the change just now, around 3 pm)
I get no rows.
If I do the same thing, but with a time just after midnight, I get thousands of rows, as the app has routines that kick off at midnight doing maintenance, like recalculating students' class ranks, for instance.
So why am I not finding the change I made logged? I believe I'm looking in the right log files, or I wouldn't see the activity at midnight.
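For reference, a minimal sketch of a typical LogMiner session along these lines; the log file names and paths are placeholders, and DICT_FROM_ONLINE_CATALOG is one common choice of dictionary.
-- Add the redo/archive log files of interest (repeat per file).
EXEC DBMS_LOGMNR.ADD_LOGFILE(LogFileName => 'C:\oracle\archive\ARC0001.LOG', Options => DBMS_LOGMNR.NEW);
EXEC DBMS_LOGMNR.ADD_LOGFILE(LogFileName => 'C:\oracle\archive\ARC0002.LOG', Options => DBMS_LOGMNR.ADDFILE);

-- Start LogMiner, using the online catalog as the dictionary so table and
-- column names are resolved instead of showing up as object numbers.
EXEC DBMS_LOGMNR.START_LOGMNR(Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

ALTER SESSION SET NLS_DATE_FORMAT = 'mm-dd-yyyy HH24:mi:ss';

SELECT timestamp, seg_name, operation, sql_undo
  FROM V$LOGMNR_CONTENTS
 WHERE timestamp > TO_DATE('04-11-2013 11:59:00');

-- When finished:
EXEC DBMS_LOGMNR.END_LOGMNR;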
Though your latest entry is recorded, it won't appear in V$LOGMNR_CONTENTS until a sufficient number of updates have been recorded. For example, if you do 100 updates, you may only get 80. To flush out the remaining 20, you need to have some more updates done so that you can see them. We had a similar problem where LogMiner was not showing the latest updates, especially when there were very few of them. We had to create a dummy table and generate some updates on it regularly so that LogMiner was always actively showing the updates and nothing was held back in the buffer. In our use case creating the dummy table was OK; I am not sure if it is OK in your case.
I have a stored procedure that returns about 50000 records in 10 seconds, using at most 2 cores in SSMS. The SSRS report using the stored procedure was taking 20 minutes and would max out the processor on an 8-core server for the entire time. The report was relatively simple (i.e. no graphs or calculations). The report did not appear to be the issue, as I wrote the 50K rows to a temp table and the report could display the data in a few seconds. I tried many different ideas for testing, altering the stored procedure each time but keeping the original code in a separate window to revert back to. After one ALTER of the stored procedure, going back to the original code, the report and server utilization started running fast, comparable to the performance of the stored procedure alone. Everything is fine for now, but I would like to get to the bottom of what caused this in case it happens again. Any ideas?
I'd start with a SQL Profiler trace of both the stored procedure when you execute it normally, and then the same SP when it's called by SSRS. Make sure you include the execution plans involved, so you can see if it's making some bad decisions (though that seems unlikely - the SQL Server should execute an optimal - or at least consistent - plan regardless of the query's source).
We used to have cases where Business Objects would execute stored procs dozens of times for no apparent reason, and it led to occasionally horrible performance, though I've never seen that same behavior with SSRS. It may be somewhere to start, though. You'll also see the execution begin/end times; that will make it clear whether it's the database layer that's hanging up, or whether the SQL Server hands back the data in 10 seconds and it's the SSRS service that's choking somewhere.
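If it is easier than a full Profiler trace, here is a hedged sketch of pulling the cached plans and runtime stats straight from the DMVs; YourProcName is a placeholder.
-- Compare cached plans and runtime stats for the stored procedure,
-- e.g. the entry created by an SSMS call vs. the one created by SSRS.
SELECT  st.text,
        qs.execution_count,
        qs.total_worker_time,
        qs.total_elapsed_time,
        qp.query_plan
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE   st.text LIKE '%YourProcName%';   -- placeholder procedure name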
The primary solution to speeding up SSRS reports is to cache the reports. If one does this (either by preloading the cache at 7:30 am, for instance, or by caching the reports on hit), one will find massive gains in load speed.
You may also find that monthly restarts of the SSRS application domain resolve your issue.
Please note that I do this daily and professionally and am not simply waxing poetic on SSRS
Caching in SSRS
http://msdn.microsoft.com/en-us/library/ms155927.aspx
Pre-loading the Cache
http://msdn.microsoft.com/en-us/library/ms155876.aspx
If you do not like initial reports taking long and your data is relatively static over the day (a daily general ledger or the like), you may increase the cache life-span.
Finally, you may also opt for business managers to instead receive these reports via email subscriptions, which will send them a point in time Excel report which they may find easier and more systematic.
You can also use parameters in SSRS to allow for easy filtering by the user and faster queries. In the query builder, type IN(@SSN) in the Filter column for the field you wish to parameterize; you will then find the parameter created in the Parameters folder just above Data Sources in the upper left of your BIDS GUI.
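As a minimal sketch of such a parameterized dataset query (the table and column names here are made up for illustration):
-- @SSN becomes a report parameter in SSRS; mark it multi-value to allow a list.
SELECT  s.ssn,
        s.last_name,
        s.first_name
FROM    dbo.student AS s        -- hypothetical table
WHERE   s.ssn IN (@SSN);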
If you do not see the data source section in SSRS, hit CTRL+ALT+D.
See a nearly identical question here: Performance Issues with SSRS