Reduce Memory Usage from 16GB to 8GB - Oracle

I created an Oracle instance using the Database Configuration Assistant. My system has 64GB of RAM, and I gave 16GB to the Oracle instance in the Initialization Parameters wizard.
Now I want to reduce that 16GB to 8GB, so that Oracle occupies only 8GB of RAM. I tried this in SQL Developer:
ALTER SYSTEM SET pga_aggregate_target = 8289M;
ALTER SYSTEM SET sga_target = 1536M;
I restarted the Oracle service, but the change was not reflected; Oracle is still using 16GB.
I don't know whether this approach is correct, whether a system reboot is needed, or how else to reduce the memory usage.

There are various ways to define the amount of memory Oracle uses. Historically, you needed to change a lot of settings to affect the total memory footprint. Nowadays, the default is often to set only one parameter and start tweaking later (when the Oracle installer does not screw up; it often sets things wrongly).
I would check the following:
select *
from v$parameter
where name like '%size%'
   or name like '%target%';
Check which ones have been set and need changing. Relevant settings include shared_pool_size, memory_target, sga_target, and others.
Some settings (depending on version and edition) can be changed while the instance is open and running, while others require a restart. Also, sometimes you are using a text parameter file (pfile), and in other instances a binary one (spfile). A binary spfile is a precondition for changing parameters online without restarting.
You will probably succeed using something like:
alter system set NAME = VALUE scope=[spfile|both]
as the SYS user. scope=spfile only changes the spfile; scope=both changes the runtime value and the spfile. When using a pfile like init*.ora, you just edit the text file and restart your instance.
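For example, to land at roughly 8GB in total, a minimal sketch, assuming an spfile is in use (the 6GB SGA / 2GB PGA split is an assumption, and the commented lines apply only if the installer enabled Automatic Memory Management via memory_target):
ALTER SYSTEM SET sga_target = 6G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE=SPFILE;
-- only if memory_target/memory_max_target were set; they cap the values above:
-- ALTER SYSTEM RESET memory_target SCOPE=SPFILE SID='*';
-- ALTER SYSTEM RESET memory_max_target SCOPE=SPFILE SID='*';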
To quickly restart, the best way is IMHO:
startup force
When decreasing a size you will generally not have a problem, assuming the new size is still sufficient to handle the load; do it in a test environment first. When increasing, and depending on the platform, first make sure that your new settings can be handled. For instance, increasing the memory allocated on Linux may require you to change kernel settings; otherwise, your Oracle instance will not start until those corrections are made.
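For example, on Linux you can inspect the System V shared memory limits that an SGA increase can run into; a quick sketch (parameter names are the standard kernel ones, and the command only prints the current values):
# print the current shared memory limits the SGA must fit within
sysctl kernel.shmmax kernel.shmall kernel.shmmni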

Related

How to increase max_locks_per_transaction

I've been performing fairly intensive schema dropping and creating on a PostgreSQL server, and I get:
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
I need to increase max_locks_per_transaction, but how can I increase it on Mac OS X?
Find the ../data/postgresql.conf file, then edit it with Notepad, setting max_locks_per_transaction = 1024.
If the line looks like # max_locks_per_transaction..., you must remove the #, so that it looks like this:
max_locks_per_transaction = 1024 # min 10
Then save the file and restart PostgreSQL.
It is a setting in your postgresql.conf. If you do not know where that file is, run SHOW config_file; at an SQL prompt. Then, once you have modified the file, restart PostgreSQL. I don't know how you do that on macOS; a reboot will work, of course.
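If you would rather not hand-edit the file, PostgreSQL 9.4 and later can write the override for you from an SQL prompt (run as a superuser; a restart is still required for this particular setting to take effect); a sketch:
-- locate the active configuration file
SHOW config_file;
-- writes the override to postgresql.auto.conf; restart afterwards
ALTER SYSTEM SET max_locks_per_transaction = 1024;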

How to enable GC logging for Hadoop MapReduce2 History Server, while preventing log file overwrites and capping disk space usage

We recently decided to enable GC logging for the Hadoop MapReduce2 History Server on a number of clusters (the exact version varies) as an aid to investigating history-server-related memory and garbage collection problems. While doing this, we want to avoid two problems we know might happen:
overwriting of the log file when the MR2 History server restarts for any reason
the logs using too much disk space, leading to disks getting filled
When Java GC logging starts for a process, it seems to replace the content of any existing file with the same name. This means that unless you are careful, you will lose the GC logging, perhaps just when you need it most.
If you keep the cluster running long enough, log files will fill up the disk unless they are managed. Even if GC logging is not currently voluminous, we want to manage the risk of an unusual situation arising that causes the logging rate to suddenly spike.
You will need to set some JVM parameters when starting the MapReduce2 History Server, meaning you need to make some changes to mapred-env.sh. You could set the parameters in HADOOP_OPTS, but that would have a broader impact than just the History server, so instead you will probably want to set them in HADOOP_JOB_HISTORYSERVER_OPTS.
Now let's discuss the JVM parameters to include.
To enable GC logging to a file, you will need to add -verbose:gc -Xloggc:<log-file-location>.
You need to give the log file name special consideration to prevent overwrites whenever the server is restarted. Each invocation apparently needs a unique name, so appending a timestamp seems like the best option; you can include something like `date +'%Y%m%d%H%M'` to add one, which in this example takes the form YYYYMMDDHHMM. In some versions of Java you can put "%t" in your log file location and it will be replaced by the server startup timestamp formatted as YYYY-MM-DD_HH-MM-SS.
Now onto managing use of disk space. I'll be happy if there is a simpler way than what I have.
First, take advantage of Java's built-in GC log file rotation. -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M is an example of enabling this rotation, having up to 10 GC log files from the JVM, each of which is no more than approx 10MB in size. 10 x 10MB is 100MB max usage.
With the GC log file rotation in place with up to 10 files, '.0', '.1', ... '.9' will be appended to the file name you gave in -Xloggc. '.0' is used first, and after '.9' is reached it replaces '.0' and continues in a round-robin manner. In some versions of Java, '.current' is additionally put at the end of the name of the log file currently being written to.
Because of the unique file naming we apparently need in order to avoid overwrites, you can have 100MB per History Server invocation, so this is not a total solution to managing the disk space used by the server's GC logs. You will end up with a set of up to 10 GC log files from each server invocation, and this can add up over time. The best solution (under *nix) would seem to be to use the logrotate utility (or some other utility) to periodically clean up GC logs that have not been modified in the last N days, as sketched below.
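For example, a cron-driven cleanup along these lines (the directory and the 30-day retention are assumptions; point it at wherever your -Xloggc files land):
# delete GC logs that have not been modified in the last 30 days
find /var/log/hadoop-mapreduce/mapred -name 'mapred-jobhistory-gc.log-*' -mtime +30 -delete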
Be sure to do the math and make sure you will have enough disk space.
People frequently want more details and context in their GC logs than the default, so consider adding in -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps.
Putting this together, you might add something like this to mapred-env.sh:
## enable GC logging for MR2 History Server:
TIMESTAMP=`date +'%Y%m%d%H%M'`
# GC log location/name prior to .n addition by log rotation
JOB_HISTORYSERVER_GC_LOG_NAME="{{mapred_log_dir_prefix}}/$USER/mapred-jobhistory-gc.log-$TIMESTAMP"
JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS="-verbose:gc -Xloggc:$JOB_HISTORYSERVER_GC_LOG_NAME"
JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS="-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M"
JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
JOB_HISTORYSERVER_GC_LOG_OPTS="$JOB_HISTORYSERVER_GC_LOG_ENABLE_OPTS $JOB_HISTORYSERVER_GC_LOG_ROTATION_OPTS $JOB_HISTORYSERVER_GC_LOG_FORMAT_OPTS"
export HADOOP_JOB_HISTORYSERVER_OPTS="$HADOOP_JOB_HISTORYSERVER_OPTS $JOB_HISTORYSERVER_GC_LOG_OPTS"
You may find that you already have a reference to HADOOP_JOB_HISTORYSERVER_OPTS, in which case you should replace it or add onto it.
In the above, you can change {{mapred_log_dir_prefix}}/$USER to wherever you want the GC logs to go (you probably want them to go to the same place as the MapReduce History Server logs). You can change the log file naming too.
If you are managing your Hadoop cluster with Apache Ambari, then these changes would be in MapReduce2 service > Configs > Advanced > Advanced mapred-env > mapred-env template. With Ambari, {{mapred_log_dir_prefix}} will be automatically replaced with the Mapreduce Log Dir Prefix defined a few rows above the field.
GC logging will start upon restarting the server, so you may need a short outage to enable this.

PVCS service goes down when the server's physical memory usage gets high. What's the issue and how do I resolve it?

Our PVCS service goes down once the physical memory usage of the server gets high. Once the server restarts (not recommended), the service comes back up. Is there a permanent fix for this?
I resolved this issue by increasing the heap size parameters. :-)
1. On the server system, open the following file in a text editor:
Windows as of VM 8.4.6: VM_Install\vm\common\bin\pvcsrunner.bat
Windows prior to VM 8.4.6: VM_Install\vm\common\bin\pvcsstart.bat
UNIX/Linux: VM_Install/vm/common/bin/pvcsstart.sh
2. Find the following line:
set JAVA_OPTS=
and set the values of the following parameters as needed:
-Xms<value>m -Xmx<value>m
3. If you are running a VM release prior to 8.4.3, make sure -Dpvcs.mx= is followed by the same value shown after -Xmx.
4. Save the file and restart the server.
The following is a rule of thumb when increasing the values for -Xmx:
•256m -> 512m
•512m -> 1024m
•1024m -> 1280m
As Riant points out above, adjusting the heap size is your best course of action here. I actually supported PVCS for nine years, until this time in 2014 when I jumped ship. Riant's numbers are exactly what I would recommend.
I would actually counsel a lot of customers to set -Xms and -Xmx to the same value (basically, start it at 1024m), because if your PDBs and/or your user community are large, you're going to hit the ceiling quicker than you might realize.
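In pvcsstart.bat that amounts to a one-line change; a sketch (1024m is the starting point suggested above, so tune it for your own load):
rem start the heap at its ceiling so it never has to grow
rem (on releases prior to 8.4.3, also append -Dpvcs.mx=1024)
set JAVA_OPTS=-Xms1024m -Xmx1024m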

Windows paging file size

I am trying to understand how to set the paging file size appropriately on Vista. For example, under System Properties > Advanced > Performance Options, "Total paging file size for all drives" shows a recommended size of about 8 GB and a currently allocated size of about 4 GB. I've been trying everything possible (including unchecking the box to automatically manage paging file size for all drives) to get the value to the recommended size, in order to run some larger problems with my code.
But the new value only shows briefly (when I use a custom size setting on one of my other hard drives) after I hit Set and OK; when I restart, it goes back to the default settings. What am I doing wrong? I'd appreciate it if somebody could point me to some help with this or share their experience.
You can alternatively make the change in the registry.
Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory management
Value: PagingFiles
This value has an entry for each drive, with its associated page file location and its minimum and maximum sizes.
It might look something like this:
C:\pagefile.sys 250 500
where 250 is the minimum and 500 is the maximum, in megabytes. Try changing it here and see what happens.
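If the GUI keeps reverting your changes, you can also try setting it from an elevated command prompt via WMI; a sketch (sizes are in MB, and while these wmic classes are commonly documented, treat the exact invocation as a starting point to verify):
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=4096,MaximumSize=8192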

I/O performance of multiple JVMs (Windows 7 affected, Linux works)

I have a program that creates a file of about 50MB size. During the process the program frequently rewrites sections of the file and forces the changes to disk (in the order of 100 times). It uses a FileChannel and direct ByteBuffers via fc.read(...), fc.write(...) and fc.force(...).
New text:
I have a better view on the problem now.
The problem appears to be that I use three different JVMs to modify a file (one creates it, two others (launched from the first) write to it). Every JVM closes the file properly before the next JVM is started.
The problem is that the cost of fc.write() to that file occasionally goes through the roof for the third JVM (on the order of 100 times the normal cost). That is, all write operations are equally slow; it is not just one that hangs for a very long time.
Interestingly, one way to mitigate this is to insert delays (2 seconds) between launching the JVMs. Without the delay, writing is always slow; with it, the writing is slow only about every second time or so.
I also found this Stack Overflow question: How to unmap a file from memory mapped using FileChannel in java?, which describes a problem with memory-mapped files, which I'm not using.
What I suspect might be going on:
Java does not completely release the file handle when I call close(). When the next JVM is started, Java (or Windows) recognizes concurrent access to that file and installs some expensive concurrency handling for it, which makes writing expensive.
Would that make sense?
The problem occurs on Windows 7 (Java 6 and 7, tested on two machines), but not under Linux (SuSE 11.3 64).
Old text:
The problem:
Starting the program as a JUnit test harness from Eclipse or from the console works fine; it takes around 3 seconds.
Starting the program through an Ant task (or through JUnit by kicking off a separate JVM using a ProcessBuilder) slows it down to 70-80 seconds for the same task (a factor of 20-30).
Using -Xprof reveals that the share of 'force0' and 'pwrite' goes through the roof, from 34.1% (76+20 ticks) to 97.3% (3587+2913+751 ticks):
Fast run:
27.0% 0 + 76 sun.nio.ch.FileChannelImpl.force0
7.1% 0 + 20 sun.nio.ch.FileDispatcher.pwrite0
[..]
Slow run:
Interpreted + native Method
48.1% 0 + 3587 sun.nio.ch.FileDispatcher.pwrite0
39.1% 0 + 2913 sun.nio.ch.FileChannelImpl.force0
[..]
Stub + native Method
10.1% 0 + 751 sun.nio.ch.FileDispatcher.pwrite0
[..]
GC and compilation are negligible.
More facts:
No other methods show a significant change in the -Xprof output.
It's either fast or very slow, never something in-between.
Memory is not a problem; all test machines have at least 8GB, and the process uses <200MB
rebooting the machine does not help
switching off virus scanners and similar tools has no effect
When the process is slow, there is virtually no CPU usage
It is never slow when running it from a normal JVM
It is pretty consistently slow when running it in a JVM that was started from the first JVM (via ProcessBuilder or as ant-task)
All JVMs are exactly the same. I print System.getProperty("java.home") and the JVM options via RuntimeMXBean runtimeMxBean = ManagementFactory.getRuntimeMXBean(); List<String> arguments = runtimeMxBean.getInputArguments();
I tested it on two machines with Windows 7 64-bit, Java 7u2, Java 6u26, and JRockit. The hardware of the machines differs, but the results are very similar.
I tested it also from outside Eclipse (command-line ant) but no difference there.
The whole program is written by myself; all it does is read from and write to this file, and no other libraries are used, especially no native libraries.
And some scary facts that I just refuse to believe make any sense:
Removing all class files and rebuilding the project sometimes (rarely) helps. The program (nested version) runs fast one or two times before becoming extremely slow again.
Installing a new JVM always helps (every single time!): the (nested) program runs fast at least once. Installing a JDK counts as two, because both the JDK's JRE and the standalone JRE work fine at least once. Installing over an existing JVM does not help, and neither does rebooting. I haven't tried deleting/rebooting/reinstalling yet...
These are the only two ways I ever managed to get fast program runtimes for the nested program.
Questions:
What may cause this performance drop for nested JVMs?
What exactly do these methods (pwrite0/force0) do?
Are you using local disks for all testing (as opposed to any network share)?
Can you set up Windows with a RAM drive to store the data? When a JVM terminates, by default its file handles will have been closed, but what you might be seeing is the flushing of the data to disk. When you overwrite lots of data, the previous version of the data is discarded and may not cause disk I/O. The act of closing the file might make the Windows kernel implicitly flush the data to disk. Using a RAM drive would let you confirm this, since disk I/O time would be removed from your stats.
Find a tool for Windows that allows you to force the kernel to flush all buffers to disk, use it between JVM runs, and see how long that takes.
But I would guess you are hitting some interaction between the demands of the process and the demands of the kernel in attempting to manage the disk block buffer cache. On Linux there is a tool like /sbin/blockdev --flushbufs that can do this; a sketch follows.
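For example, flushing between runs on Linux might look like this (the device name is an example; on Windows, the Sysinternals Sync utility plays a similar role):
# write dirty buffers out, then drop the block-device buffers
sync
/sbin/blockdev --flushbufs /dev/sda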
FWIW
"pwrite" is a Linux/Unix API for allowing concurrent writing to a file descriptor (which would be the best kernel syscall API to use for the JVM, I think Win32 API already has provision for the same kinds of usage to share a file handle between threads in a process, but since Sun have Unix heritige things get named after the Unix way). Google "pwrite(2)" for more info on this API.
"force" I would guess that is a file system sync, meaning the process is requesting the kernel to flush unwritten data (that is currently in disk block buffer cache) into the file on the disk (such as would be needed before you turned your computer off). This action will happen automatically over time, but transactional systems require to know when the data previously written (with pwrite) has actually hit the physical disk and is stored. Because some other disk IO is dependant on knowing that, such as with transactional checkpointing.
One thing that could help is making sure you explicitly set the FileChannel reference to null, then call System.runFinalization() and maybe System.gc() at the end of the program; you may need more than one call. A sketch follows.
System.runFinalizersOnExit(true) may also help, but it is deprecated, so you will have to deal with the compiler warnings.
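A minimal sketch of that cleanup, reusing the fc channel from the question (if fc is a field, null the field rather than a local variable):
// flush pending writes, release the OS handle, then drop the reference
fc.force(true);
fc.close();
fc = null;
// encourage the JVM to finalize anything still holding native handles
System.runFinalization();
System.gc(); // may need more than one call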
