Where can I change the allocated memory of JDeveloper?

Where can I change the allocated memory of JDeveloper? Also, what are the maximum and optimum memory settings for this IDE? My machine has 4 GB of RAM, and I use the SOA and BPEL frameworks in it.

Where can I change the allocated memory of JDeveloper?
For JDeveloper 11.1.2.x, you can configure VM options in the JDeveloper configuration file at jdeveloper/jdev/bin/jdev.conf. For example, to set the maximum heap size to 2GB, add the following line to this file:
AddVMOption -Xmx2048M
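If you also want to pin the initial heap size, the same mechanism applies. A sketch with illustrative values (your jdev.conf will already contain similar AddVMOption lines):
AddVMOption -Xms512M
AddVMOption -Xmx2048M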
Note that this only affects the IDE. To set memory limits for the integrated WebLogic Server, set the USER_MEM_ARGS environment variable before launching the IDE, for example:
$ export USER_MEM_ARGS="-Xms256m -Xmx1024m"
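On Windows, the equivalent before launching the IDE from a command prompt would be (cmd.exe syntax, no quotes around the value):
set USER_MEM_ARGS=-Xms256m -Xmx1024m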
Also, what are the maximum and optimum memory settings for this IDE?
The more, the better. There is no limit besides those imposed by the operating system or the Java virtual machine you are using (32 bit vs. 64 bit).
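If you want to see the maximum heap size your particular JVM resolves to by default, HotSpot can dump its final flag values (the grep is for a Unix-like shell; pipe to findstr on Windows instead):
$ java -XX:+PrintFlagsFinal -version | grep -i maxheapsize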
My machine has 4 GB of RAM; I use the SOA and BPEL frameworks in it.
I really suggest that you add more memory: use a 64-bit operating system and upgrade to at least 8 GB. Consider that it is not only the IDE that consumes memory, but also the integrated WebLogic Server, where you run your application during development. The integrated WLS is launched as a separate Java process and also consumes a considerable amount of memory.
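As a quick sanity check, the JDK's jps tool lists every running JVM along with the arguments it was started with, so you can see the IDE and the integrated WebLogic side by side:
$ jps -lvm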

Related

Can Redis 64-bit be used in a production environment under Windows?

I use this version in my dev environment: Redis-64.
I want to know whether this version is suitable for a production environment.
If it can be used, what do I need to pay attention to compared with running it under Linux?
Since version 3.0.3, the Windows port developers abandoned dlmalloc and began to use jemalloc as the memory allocator, and the port actually came to be considered for production use. The 3.0.500 build is approved for production by the Microsoft developers (see here).
There is also some trickery in how they bypassed the Unix fork() used to save data to disk; the Microsoft port developers call it a point-in-time heap snapshot. This is the most controversial part when used in production:
Redis under Windows may need up to 3 times more memory than the Linux version. This behaviour is considered normal, because the swap file on Windows can easily be up to 3 times larger than the actual amount of RAM.
I think this is acceptable only if you use Redis as an LRU cache or do not save data to disk at all.
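For that cache-only setup, a minimal redis.conf sketch would be (standard Redis directives; the 2gb cap is illustrative):
# cap memory, evict least-recently-used keys, and disable RDB snapshots
maxmemory 2gb
maxmemory-policy allkeys-lru
save ""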
At the very least, Redis under Windows is fragile if your Redis node uses a lot of memory. For example, we tried Redis for Windows (v2.8, v3.0.3, v3.0.5) on a server with 512 GB of memory and 2 SSD drives (each 256 GB, in RAID 0) used as the system disk, with no limits on the Windows swap file. Our test emulated our production workload: lots of writes and RDB saves, with memory utilisation around 60-70%. We saw a lot of hangs whenever the node tried to save snapshots: memory consumption jumped and connections froze during saving. Such behaviour never happened under Linux on the same hardware.

Can I use Oracle JDeveloper 12c with 4 GB of RAM?

When I am working with 4 GB of RAM, the JDK consumes all of it, processing is very slow, and I am unable to proceed with my work. Even saving a simple JSF page takes 5-8 minutes.
So is it mandatory to use more than 4 GB of RAM, or is there an optimisation technique that lets me continue with 4 GB and obtain better performance?
4 GB is enough to work with, but you should tune the memory settings.
For example, Java parameters like:
set JAVA_OPTS=%JAVA_OPTS% -Xms256m -Xmx512m -XX:+UseParallelGC
set JAVA_OPTS=%JAVA_OPTS% -XX:PermSize=256m -XX:MaxPermSize=512m
These particular lines are for Tomcat (typically placed in setenv.bat); for JDeveloper, find its configuration file (jdev.conf, as described in the first answer above) and adjust the memory settings there. Note that PermSize/MaxPermSize apply to Java 7 and earlier; JDK 8 replaced the permanent generation with Metaspace.
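For JDeveloper on a 4 GB machine, a conservative jdev.conf sketch (illustrative values; see the file path given in the first answer above) might be:
AddVMOption -Xms256M
AddVMOption -Xmx768M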
4 GB should be enough. The slow performance you are seeing doesn't seem to be related to memory but rather to something else; try disabling the antivirus on your machine.
Some antivirus products scan JAR files as they are loaded, slowing down Java software such as JDeveloper.

Does JVM memory management work the same on Windows and Linux?

My original question is: is it technically sound to measure the required heap size of my Java program on Windows 7 via VisualVM, and conclude that the program will require the same amount of heap on Linux (Red Hat) as well?
I don't know how the system (the OS, or even the CPU and RAM) affects the JVM's memory management.
Windows is my development system, with 4 GB of RAM and a Core 2 Duo CPU, while Linux is the production system, with 32 GB of RAM and multiple powerful processors.
My concern is that the program on Linux might need more memory; needing less is fine.
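One way to make the comparison concrete is to have the program report its own heap figures on both machines, so the numbers come from the same code path rather than two different tools. A minimal sketch (the class name HeapProbe is mine):
public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() reflects -Xmx; totalMemory()/freeMemory() describe the heap committed right now
        System.out.println("max heap   (MB): " + rt.maxMemory() / mb);
        System.out.println("total heap (MB): " + rt.totalMemory() / mb);
        System.out.println("used heap  (MB): " + (rt.totalMemory() - rt.freeMemory()) / mb);
    }
}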

Automatic Recovery of Virtual Memory Allocation

My system uses a third-party kernel built as native libraries (C++) with a J2EE upper layer running on Tomcat 6. The vendor stipulates a 32-bit JDK, and overall the application is very memory-hungry. We are presently running on 64-bit Windows with a 32-bit JVM. Essentially, the JVM will hang once the Virtual Size gets close to the 2 GB 32-bit addressing limit.
Question: From time to time, the third-party frameworks make large requests for memory, and this pushes up the Virtual Size allocated to the process. The Virtual Size never recovers, even though the kernel appears to reduce its memory needs afterwards. In a typical Tomcat deployment, does the Virtual Size ever recover automatically, or does it always act as a high-water mark that keeps on rising? Is there a way to tell the JVM to try to lower the Virtual Size dynamically?
I suspect that the third-party native kernel is to blame here, but I need to investigate all our options.
FYI: AWE on Windows is not a clear option, as the vendor does not officially support any JVMs that have AWE support. Migration to Linux is also not an easy path, but it is being considered.
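On the narrow question of shrinking: HotSpot can be encouraged to return heap memory after GC via the free-ratio flags, though this only affects the Java heap, not native allocations made by the third-party kernel, which are the likelier culprit here. A sketch for a Tomcat setup (illustrative values; CATALINA_OPTS is Tomcat's standard hook for JVM options):
set CATALINA_OPTS=-Xms256m -Xmx1024m -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30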

Process sizes and differences in behaviour on 32bit vs. 64bit Windows versions

I am investigating a strange problem with my application, where the behaviour is different on 2 versions of Windows:
Windows XP (32-bit)
Windows Server 2008 (64-bit)
My findings are as follows.
Windows XP (32-bit)
When running my test scenario, the XML parser fails at a certain point during the parsing of a very large configuration file (see this question for more information).
At the time of failure, the process size is approximately 2.3GB. Note that a registry key has been set to allow the process to exceed the default maximum process size of 2GB (on 32-bit operating systems).
The symptom of the failure is a call to IXMLDOMDocument::load() failing, as described in the question linked above.
Windows Server 2008 (64-bit)
I run exactly the same test scenario in Windows Server 2008 -- the only variable is the operating system. When I look at my process under Task Manager, it has a * 32 next to it, which I am assuming means it is running in 32-bit compatibility mode.
What I am noticing is that at the point where the XML parsing fails on Windows XP, the process size on Windows Server 2008 is only about 1GB (IOW, approximately half the process size as on Windows XP).
The XML parsing does not fail on Windows Server 2008, it all works as it should.
My questions are:
Why would a 32-bit application (running in 32-bit mode) consume half the amount of memory on a 64-bit operating system? Is it really using half the memory, is it using virtual memory differently, or is it something else?
Acknowledging that my application seems to be using half the amount of memory on Windows Server 2008, does anyone have any idea why the XML parsing would fail on Windows XP? Every time I run the test case, the error reported via IXMLDOMParseError (see this answer) is different. Because this appears to be non-deterministic, it suggests to me that I am running into a memory usage problem rather than dealing with malformed XML.
You didn't say how you observed the process; I'll assume you used Taskmgr.exe. Beware that its default view gives very misleading values in the Memory column: it shows the working set size, the amount of RAM being used by the process. That has nothing to do with the source of your problem, which is running out of virtual memory address space. There is also not much reason to assume that Windows Server 2008 would show the same value as XP; it has a significantly different memory manager.
You can see the virtual memory size as well; use View + Columns.
The reason your program doesn't bomb on a 64-bit operating system is that a 32-bit process there can address close to 4 gigabytes of virtual memory, provided the executable is marked large-address-aware. On a 32-bit operating system, it has to share the address space with the operating system and gets only 2 gigabytes; more if you use the /3GB boot option.
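For reference, marking a 32-bit executable large-address-aware is done at link time or afterwards with Microsoft's editbin tool (the executable name is illustrative):
editbin /LARGEADDRESSAWARE myapp.exe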
Use the SAX parser to avoid consuming so much memory.
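The asker is using MSXML, which exposes SAX through ISAXXMLReader; as a language-neutral illustration of why a streaming parse keeps memory flat regardless of document size, here is a minimal sketch using the JDK's built-in SAX API:
import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxCount {
    public static void main(String[] args) throws Exception {
        final long[] elements = {0};
        // Only the current element is held in memory; the document is never fully materialized.
        SAXParserFactory.newInstance().newSAXParser().parse(
            new File(args[0]),
            new DefaultHandler() {
                @Override
                public void startElement(String uri, String local, String qName, Attributes attrs) {
                    elements[0]++;
                }
            });
        System.out.println("elements: " + elements[0]);
    }
}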
Not only are there differences in available memory between 32-bit and 64-bit (as discussed in the previous answers), but it's the availability of contiguous memory that may be killing your app on 32-bit.
On a 32-bit machine, your app's DLLs will be littering the memory landscape in the first 2 GB of address space (the app at 0x00400000, OS DLLs up at 0x7xxx0000, other DLLs elsewhere). Most likely the largest contiguous block you have available is about 1.1 GB.
On a 64-bit machine (which gives you the 4 GB address space with /LARGEADDRESSAWARE), you'll have at least one block in that 4 GB space that is 2 GB or more in size.
So there is your difference. If your XML parser relies on one large blob of memory rather than many small blobs, it may be running out of contiguous usable space on 32-bit but not on 64-bit.
If you want to visualize this on the 32 bit OS, grab a copy of VMValidator (free) and look at the Virtual view for a visualization of your memory and the Pages and Paragraphs views to see the data for each memory page/paragraph.
