Postgres on Windows: efficient memory usage

I'm using Postgres 9.6 (64-bit) on Windows 10 on a laptop with 8 GB RAM for dev purposes. The application does batch mass-data processing, with the largest table having 10 million records.
I've read various Postgres tuning guides, and also previous questions/answers raised here, and tried several of the suggestions, but without great success.
I know my laptop is not a big machine, but watching Performance Monitor during a query, I see Postgres mostly writing (to disk), with a tiny bit of reading, and one of the cores mostly utilized. What I'm interested in is memory. I'm wondering why Postgres doesn't make use of it; it stays at 5.7 GB "used" while 8 GB are available. My conclusion is that Postgres decides to write temp data to a file (a memory-mapped file) rather than use the memory. If that is true, maybe I can tune Windows to allow more (file) pages in memory. Anyhow, my gut feeling is that this has something to do with Postgres on Windows, rather than being a generic Postgres question.
Does anybody know how I can configure Postgres and/or Windows so that Postgres makes better use of the (free) memory available?
Thanks a lot for your help
Juergen

If it is a dedicated database machine, set shared_buffers to, say, 2GB and increase work_mem so that most sorts and hashes can be performed in memory.
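As a rough sketch, assuming a dedicated 8 GB dev machine, the relevant postgresql.conf entries might look something like this (illustrative values, not a recommendation for every workload):

    # postgresql.conf - illustrative starting values for an 8 GB dev machine
    shared_buffers = 2GB            # Postgres' own buffer cache (needs a restart)
    work_mem = 256MB                # per sort/hash operation, so mind concurrent queries
    maintenance_work_mem = 512MB    # used by VACUUM, CREATE INDEX, etc.
    effective_cache_size = 6GB      # planner hint about the OS file cache; allocates nothing

Note that work_mem applies per sort or hash operation, so a single batch query with several such steps can use a multiple of it.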
For more specific questions, ask with more detail.

Related

How many nodes/shards for elasticsearch, and how much RAM

So I've really got two questions here.
If I have about 100 GB of documents that I want to make searchable with Elasticsearch, is it bad to just stick it all in a single node/shard? (I can figure out replicas later when we start looking at production.)
Also, how much RAM do I need? Is it possible to run this ES instance on a machine with only 8 GB of RAM or so (just during development) and simply have it run slower, or do I need to shell out now for a system with more memory?
My use case is that I am prototyping a system and need to get our full document set indexed so we can compare it apples to apples in usability testing against the existing system. Performance isn't a huge concern right now. My dev machine is just an i7 ultrabook with 8 GB of RAM, and for the first, smaller version of the prototype, which only had about 30 MB of documents, my machine was just fine. Is it even possible for me to use this machine for dev with the next version of the prototype, or do I need to shell out now for a more powerful machine?

Redis Windows, Performance Issues

I am running Redis on Windows and I am having some performance issues. The machine is a Xeon E5 with 32 GB RAM and an SSD with hardware RAID, running Windows Server 2012. There are some other processes running, but they are not critical and are idle most of the time.
I noticed performance problems and operations time out very often, so I ran "redis-cli --intrinsic-latency 100". The output shows that the max latency goes up to 15000 microseconds, which I think is very slow.
I also ran a memory profiler: the r/w performance is not so good (5 GB/sec), but I don't think this should be the bottleneck. At the moment I have absolutely no idea what to try.
Can you give me some tips on how to find the performance problem?
Windows has no "fork" as on Linux, so when Redis dumps its DB it could just "stop the world" in order to write dump.rdb to disk. However, they did implement a copy-on-write strategy that doesn't stop Redis; it just copies values while dumping (Redis clients will still be able to get responses). It is in their version log: https://github.com/MSOpenTech/Redis
There is a replacement for the UNIX fork() API that simulates the copy-on-write behavior using a memory mapped file.
This is the real bottleneck of Redis on Windows, as it is an overhead and is more complex (bugs?). It is explained here: http://blogs.msdn.com/b/interoperability/archive/2012/04/26/here-s-to-the-first-release-from-ms-open-tech-redis-on-windows.aspx
As a result, you could try running Redis on Linux to test whether this is a performance issue of the Windows port. Also, the more often you write dump.rdb, the bigger the overhead (you can change the frequency or try disabling it completely for testing).
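For example, purely as a test, you could lower the snapshot frequency or switch background RDB dumps off in redis.conf (assuming the stock save thresholds; your file may differ):

    # redis.conf - RDB snapshot thresholds (defaults shown for reference)
    # save 900 1          # dump after 900 s if at least 1 key changed
    # save 300 10
    # save 60 10000
    save ""               # disable RDB snapshots entirely (testing only)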
Finally, it could also be a network problem, and you should check that it is not a network rule or hardware issue (not enough throughput, a bad cable, firewalls...). Are your Redis clients on the same physical machine?
I have been using a Windows port of Redis called "Memurai". They have a developer edition free of charge.
Now, in one of their blog posts they claim to have solved the fork() problem. See the excerpt below.
Memurai performance seems good to me, even with persistence enabled (both RDB and AOF), although I have not run any specific tests myself. There's another blog post about Memurai performance here.
It's worth giving it a try.
"Internally, Redis uses the fork() system call to perform asynchronous writes, but that’s not an option for Memurai because fork() doesn’t exist on Windows. Instead, Memurai uses Windows shared memory to implement a start-of-the-art version of fork() that’s finely tuned for performance and..."

SQL Server 2008 enterprise setup and virtual memory

Hi, we have a server with 32 cores and 256 GB RAM, which we are using with SQL Server 2008 Enterprise on Windows Server 2008 R2 Enterprise.
Currently Windows has automatically allocated a swap file of 256 GB, which seems excessive. Is it advisable to hard-limit the swap file to something smaller, like 32 GB, to force it to use the physical RAM?
Is it the swap file or is it the hibernate file?
The answer depends upon the work the machine is expected to do. You might find that Windows doesn't touch the swap file much because you have adequate physical memory available. One approach would be to cut the swap file allocation in half, then use the built-in performance monitoring tools to make sure it is still running OK, and after a period of stable running look to halve the swap allocation again.
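If you want numbers to go with that, one rough check (a sketch, assuming the sys.dm_os_sys_memory DMV available in SQL Server 2008) is to look at physical memory and page file headroom from inside SQL Server:

    -- Rough look at physical memory and page file headroom (SQL Server 2008+)
    SELECT total_physical_memory_kb,
           available_physical_memory_kb,
           total_page_file_kb,
           available_page_file_kb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;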
But is it really a problem? With a machine like that you probably have a good chunk of hard drive space available, and I doubt they would be slow old 5400 RPM drives :)
An ideally set up OLTP SQL Server should never need to use the swap file. It depends on what you are using this server for.
But unless you are short of disk space, I wouldn't worry too much. 32 GB sounds like a better size, though.

How can I force SQL Server to use more CPU

I have a data transformation query that takes a long time to run on my development machine (a Core i7 920 running at 3.9 GHz, with 12 GB of RAM, under Windows Server 2003 x86, and with two 300 GB VelociRaptors in RAID 0).
When I look at the task manager, the CPU stays around 26%, with the third (out of 4) core being the most active.
As this is not a production environment, is there any way to tell SQL Server 2008 that I am fine with it using more of my CPU, or is it that my query cannot be parallelized for some reason?
If so, shouldn't SQL Server be smart enough to cut the query into smaller chunks and run it across several threads so each core gets a share of the work?
Thanks.
Optimize your query. Chances are that the issue is with it and not SQL Server.
SQL Server already knows it's okay to use more CPU, unless you specifically limited it to a certain number of CPUs, either through the server configuration or by setting the MAXDOP option (see the sketch below).
It sounds like you may be constrained by your hard drives or memory more than anything.
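If you want to rule out a parallelism cap, the instance-wide setting and the per-query hint look roughly like this (a sketch; the table name is just a placeholder for your own):

    -- Check the instance-wide parallelism cap (0 = use all available CPUs)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism';

    -- Or set the degree of parallelism for a single query with a hint
    SELECT COUNT(*)
    FROM dbo.SomeLargeTable        -- placeholder table name
    OPTION (MAXDOP 4);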
Note that because you are running an x86 version of Windows (and, by extension, SQL Server), you may be limited to around 3 GB of usable RAM. And even with PAE (Physical Address Extension) turned on, it's going to be a world slower than if you had an x64 OS and SQL Server to begin with.
In other words, you might consider reinstalling the machine from the ground up to take advantage of all the x64 goodness you have.

Does Windows Server 2003 SP2 tell the truth about Free System Page Table Entries?

We have some Win32 console applications running on Windows Server 2003 Service Pack 2 that regularly fail with this:
Error 1450 (ERROR_NO_SYSTEM_RESOURCES): "Insufficient system resources exist to complete the requested service."
All the documentation we've found suggests it is linked to the number of Free System Page Table Entries running out. We have 16GB RAM in these machines and use the /3GB Operating System switch to squeeze the Windows kernel into 1GB and allow our processes access to 3GB of address space. This drastically reduces the total number of Free System Page Table Entries, so combined with our heavy use of MapViewOfFile() it is perhaps not surprising that the kernel page table entries are running out.
However, when using Performance Monitor to view the Free System Page Table Entries counter, the value is around 36,000 on reboot and doesn't go down when our application starts. I find it hard to believe that our application, which opens many large memory-mapped files, doesn't have any effect on the kernel page table. If we can't believe the counter, it's much more difficult to test the effect of any system changes we make.
There is a promising Knowledge Base article, The Performance tool does not accurately show the available Free System Page Table entries in Windows Server 2003, but it says the problem has been fixed in Service Pack 1, and we are already on Service Pack 2.
Has anyone else struggled with or solved this issue?
Update: I have checked !sysptes in windbg (debugging the kernel) and the value matches the performance counter, around 36,000. I guess this is most likely to mean that there really are that many free page table entries and Windows is telling the truth. It does leave the question of why we're getting 1450 errors though, if the PTEs are not running out.
Further update: We never did get to the bottom of why the 1450 errors were occurring. However, instead we upgraded the OS on these servers to 64-bit Windows. This allows the existing 32-bit applications (without recompilation) to access a full 4GB of virtual address space, and lets the kernel memory area with those pesky Page Table Entries be as big as it likes too. I don't think we've had a 1450 error since.
Can you try the windbg command "!sysptes" to get System PTE information? I'm not sure if you can do this with a live kernel debug; you may have to get a memory dump.
I'm not sure why you assume that ERROR_NO_SYSTEM_RESOURCES is caused only by running out of free System Page Table Entries? As far as I know, such generic error codes are used for more than one resource type. And in fact, the first Google hit suggests that running out of file cache memory may cause it too (a KB article on an XP bug that tripped this error mode).
In your case, I'd be checking the "Handle Count". Another possible problem is address space fragmentation. If you want to create a 1 GB file mapping view, you need 1 GB of free address space, and it has to be contiguous. If you map a 1 GB file, an 800 MB file, and a 1 GB file, then close the 800 MB one and open a 900 MB file, the 900 MB file may not fit in the hole that's left.
MS has two ways to allow their 32-bit OS to "deal" with hardware that has 4 GB or more of RAM.
Option 1: what you did, the /3GB switch in Boot.ini.
Option 1 Pros and Cons:
(CONS) This option sucks 1 GB from the normal 2 GB kernel area, hence making the OS struggle to meet the demands of both paged pool allocations and kernel stack allocations. So a person might think that using the /3GB switch will help their application, but really this option is screwing the 32-bit Windows OS into a slow death.
(CONS) "But this gives my app 3 GB..." WRONG (hence this is a CON). The catch is that ONLY applications that have been recompiled by the vendor to be "/3GB switch aware" can really use the extra 1 GB. Hence the whole use of the /3GB switch is a really BAD J.O.K.E on everyone.
Read this link for a much better write-up:
http://blogs.technet.com/askperf/archive/2007/03/23/memory-management-demystifying-3gb.aspx
Option 2: Use the /PAE switch in the Boot.ini.
Option 2 Pros and Cons:
(PROS) This is really the only option if you have more than 4 GB of RAM. It tricks an application by placing the complete application memory footprint in RAM. Normally, only an application's "working set" memory is in RAM, and the remaining application memory requirements go into the Windows page file. What are an application's total memory requirements? That is its "virtual size".
In my world, I have a big fat Java-based IBM product that I deal with. The server that is running the "application" has 16 GB of RAM. I simply add the /PAE switch and watch (thanks to Sysinternals Process Explorer) application paging requests go from 200 KB per second up to 4 MB per second.
Question: "Why"?
Answer: The whole application is in RAM.
Question: "Does the application know that it is completely running in RAM?
Answer: No - It is running that same old way that it was always run, "THINKING" that it's has part of itself as the "Working Set" memory living in RAM and the remaining application memory requirements go into Windows Pagefile.
Yes, it is that flipping GOOD.
Please note: Microsoft has done a poor job telling anyone about this great Windows OS option. Duh.
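For reference, both switches go on the OS line in Boot.ini. As an illustrative sketch only, where the ARC path and the quoted descriptions are placeholders and you should keep whatever your existing entry already says, the [operating systems] section could carry two entries so you can pick either behaviour at boot:

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Server 2003 with /PAE" /fastdetect /PAE
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Server 2003 with /3GB" /fastdetect /3GB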
Try it and report back to stackoverflow....
