Can Redis 64-bit be used in a production environment under Windows?

I use Redis-64 in my dev environment, and I want to know whether this version is suitable for production.
If it can be used, what do I need to pay attention to compared with running it under Linux?

Since version 3.0.3 the Windows port developers abandoned dlmalloc and switched to jemalloc as the memory allocator, and the port is actually considered production-ready; the 3.0.500 build is approved for production by the MS developers (see here).
There is, however, some kind of hell in how they bypassed the Unix fork() used to save data to disk. The Microsoft port developers call it a point-in-time heap snapshot, and it is the most controversial part of production use:
Redis under Windows may need up to 3 times more memory than the Linux version needs. This behavior is considered normal, because the Windows swap file can easily be up to 3 times larger than the actual amount of RAM.
I think this is acceptable only if you use Redis as an LRU cache, or do not save data to disk at all; a sketch of that configuration follows.
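If you go the cache-only route, the relevant settings can be applied at runtime. Here is a minimal sketch using the Jedis client, assuming a node on localhost:6379 and an arbitrary 4 GB cap (the same values can of course live in redis.conf instead):

```java
import redis.clients.jedis.Jedis;

public class CacheOnlyRedis {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Cap memory and evict the least-recently-used keys at the cap,
            // instead of letting the process grow unbounded.
            jedis.configSet("maxmemory", "4gb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");
            // Disable RDB snapshots entirely, so the Windows port never
            // triggers its memory-hungry point-in-time heap snapshot.
            jedis.configSet("save", "");
        }
    }
}
```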
At the very least, Redis under Windows misbehaves badly if your Redis node uses a lot of memory. For example, we tried Redis for Windows (v2.8, v3.0.3, v3.0.5) on a server with 512 GB of memory and two SSD drives (256 GB each, in RAID 0) used as the system disk, with no limits on the Windows swap file. Our test emulated our production load: lots of writes, with RDB saves, at ~60-70% memory utilization. There was plenty of throw-your-hands-up behaviour whenever the node tried to save a snapshot: memory consumption jumped and connections froze during the save. Such behaviour never happened under Linux on the same hardware.

Related

Low hardware simulation for performance profiling

I need to optimize the app I'm working on and I can't get reliable profiling data on my development machine. The app should run on low-end ARM hardware on QNX, but for logistical reasons I don't have access to the final hardware for profiling.
I've tried to profile on my development machine, but as you can imagine everything runs so fast that I can't pinpoint the slow parts. I've created a Linux virtual machine with reduced memory and CPU core count, but it is still too fast compared to the final hardware.
Is it possible to reduce the CPU clock speed, RAM speed, or disk speed in a virtual machine to simulate low-performance hardware, or is there any other way to get relevant profiling data on my development machine?
Considering the app processes several gigabytes of data, I assume disk access is a major bottleneck, and limiting disk speed might help.
I can use any (open source or commercially available) tool/approach that runs on Windows/Linux/macOS, on a real or virtual machine.
This URL describes how to limit disk bandwidth on VirtualBox images. You could run a Linux VM on VirtualBox and use this method to limit disk access speeds, turn off disk caching using suggestions from this answer, and profile your application; the calls involved are sketched below. Alternatively you can download the QNX SDP, which comes with the option of a prebuilt x86_64 virtual machine image that can be run using VMWare/VirtualBox/qemu.
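For reference, the VBoxManage calls behind that method look roughly like the following, wrapped here in a small Java launcher. The VM name, controller name, port/device numbers, disk image name, and the 10 MB/s cap are all placeholders for illustration; adjust them to your setup:

```java
import java.io.IOException;

public class ThrottleVmDisk {
    // Placeholder VM name: adjust to your actual profiling VM.
    static final String VM = "profiling-vm";

    static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0)
            throw new IOException("command failed: " + String.join(" ", cmd));
    }

    public static void main(String[] args) throws Exception {
        // Create a disk bandwidth group capped at 10 MB/s...
        run("VBoxManage", "bandwidthctl", VM, "add", "SlowDisk",
            "--type", "disk", "--limit", "10M");
        // ...and attach the VM's disk to that group.
        run("VBoxManage", "storageattach", VM,
            "--storagectl", "SATA", "--port", "0", "--device", "0",
            "--type", "hdd", "--medium", "profiling-vm.vdi",
            "--bandwidthgroup", "SlowDisk");
    }
}
```

The limit can also be changed while the VM is running (VBoxManage bandwidthctl <vm> set SlowDisk --limit 5M), which is handy for bisecting the speed at which your bottlenecks start to show up.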
My previous experience with QNX on armv7 and x86_64 suggests that the devb-sdmmc driver can be a bottleneck when reading a lot of big files from flash storage. devb-sdmmc and io-blk often require fine tuning: setting proper cache, block, and read-ahead sizes and other parameters helps improve disk access performance.

Usergrid system requirements?

Question: how many hardware resources (RAM, CPU) do I need to run Apache Usergrid?
I want to deploy Apache Usergrid as the backend for our apps. The apps have low traffic for now; they are custom projects used by small user groups (<10k).
I want to know the minimum requirements so I can tell whether it is viable for us. Thanks.
From what I have seen of Usergrid, the most resource-hungry component will be Elasticsearch, so to have a production environment that works well, I suggest you start from ES's requirements:
At least 8 GB of RAM
At least 4 cores (the more cores you give Elasticsearch the better, as it tends to use a lot of threads; i.e., prefer more cores over more per-core processing power)
Fast HDDs should perform fine
See this article on Elasticsearch. One last thing: depending on your system, you can tune several Elasticsearch settings to achieve better throughput (for instance, see https://www.elastic.co/guide/en/elasticsearch/reference/master/tune-for-indexing-speed.html). One such setting is sketched below.
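As one concrete example of that tuning: indexing throughput usually improves if the index refresh interval is relaxed from the default of 1s. A sketch using only the JDK's HTTP support, assuming a node on localhost:9200 and a hypothetical index name:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RelaxRefresh {
    public static void main(String[] args) throws Exception {
        // Hypothetical index name; refresh every 30s instead of every 1s,
        // so indexing spends less time producing new searchable segments.
        URL url = new URL("http://localhost:9200/my-index/_settings");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        byte[] body = "{\"index\":{\"refresh_interval\":\"30s\"}}".getBytes("UTF-8");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body);
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```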
I have deployed the latest version of Usergrid, i.e. 2.1, which works smoothly with apache-cassandra-2.1.20, apache-tomcat-7.0.85, and elasticsearch-1.7.6 on a single Cassandra node on Ubuntu 16.04 with 8 GB of RAM and a 180 GB SSD. Hope this helps.

Redis versus hardware cache

Do tools like Redis provide control over the hardware cache present in the computer, or do they run in the computer's RAM? If it is the latter, how can they give better performance than the existing hardware cache, which is controlled by the operating system?
After a lot of scattered reading I think I have a better idea about this, so I am answering the question in case someone else has it too.
The cache in a computer is not controlled by the operating system; it is part of the microarchitecture, and no software can 'alter' the cache configuration. On a Linux machine, viewing /proc/cpuinfo (e.g. cat /proc/cpuinfo) shows the cache size and alignment as prescribed by the chip manufacturer.
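To see for yourself that the cache geometry is published read-only by the kernel and is not something Redis (or any user-space program) can touch, a tiny Linux-only sketch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CpuCacheInfo {
    public static void main(String[] args) throws IOException {
        // /proc/cpuinfo is a read-only view published by the kernel; the
        // cache size it reports is fixed by the chip, not by software.
        for (String line : Files.readAllLines(Paths.get("/proc/cpuinfo"))) {
            if (line.startsWith("cache size")) {
                System.out.println(line);   // e.g. "cache size : 8192 KB"
                break;
            }
        }
    }
}
```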
Tools like Redis and memcached 'cache' data by keeping it in the physical memory (RAM) of a machine. It is still called caching because it saves the data from having to be read from disk (or recomputed), and hence gives faster access.

Does JVM memory management work the same on Windows and Linux?

My original question is: is it technically sound to measure the required heap size of my Java program on Windows 7 via VisualVM and conclude that the program will require the same amount of heap on Linux (RedHat) as well?
I don't know how the system (the OS, or even the CPU and RAM) affects the JVM's memory management.
Windows is my development system, with 4 GB of RAM and a Core 2 Duo CPU, whereas Linux is the production system, with 32 GB of RAM and multiple powerful processors.
My actual concern is that the program on Linux might need more memory; less is OK.
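One way to take the guesswork out is to pin the heap explicitly with -Xms/-Xmx, which behave the same on both platforms, and verify what the JVM actually got. A minimal probe:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() reflects -Xmx; totalMemory() is what is currently
        // committed; freeMemory() is the unused part of the committed heap.
        System.out.println("max heap   (MB): " + rt.maxMemory() / mb);
        System.out.println("total heap (MB): " + rt.totalMemory() / mb);
        System.out.println("free heap  (MB): " + rt.freeMemory() / mb);
    }
}
```

Run it as, say, java -Xms512m -Xmx2g HeapInfo on both machines. With the heap pinned like this, any remaining differences come from elsewhere: a JVM left to pick its own defaults sizes the heap from the machine's RAM (so the 32 GB Linux box would get a much larger default than the 4 GB Windows box), a 64-bit JVM uses more memory per object than a 32-bit one, and native allocations outside the heap differ by platform.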

Automatic Recovery of Virtual Memory Allocation

My system uses a third-party kernel built in native libraries (C++) with a J2EE upper layer running on Tomcat 6. The vendor stipulates a 32-bit JDK, and overall the application is very memory-hungry. We are presently running on Windows x64 with a 32-bit JVM. Essentially, the JVM hangs once the Virtual Size gets close to the 2 GB 32-bit addressing limit.
Question: from time to time the third-party framework makes large requests for memory, and this pushes up the Virtual Size allocated to the server process. The Virtual Size never recovers, even though the kernel appears to reduce its memory needs afterwards. In a typical Tomcat deployment, does the Virtual Size ever recover automatically, or does it always act as a high-water mark that keeps on rising? Is there a way to tell the JVM to try to lower the Virtual Size dynamically?
I suspect that the third-party native kernel is to blame here, but I need to investigate all our options.
FYI: AWE on Windows is not a clear option, as the vendor does not officially support any JVMs that have AWE support. Migration to Linux is not an easy path either, but is being considered.
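One way to narrow down whether the JVM heap or the native kernel is holding the address space is to compare the JVM's own committed sizes against the process's Virtual Size in Task Manager or Process Explorer; whatever the JVM does not account for belongs to the native side. A small probe (it must run inside the Tomcat JVM, e.g. from a JSP or a servlet, rather than as a separate process):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class CommittedMemoryProbe {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        // "committed" is what the JVM has actually reserved from the OS.
        // If Virtual Size keeps rising while these stay flat, the growth
        // is coming from native allocations outside the JVM's control.
        System.out.println("heap used/committed (MB):     "
                + (heap.getUsed() >> 20) + " / " + (heap.getCommitted() >> 20));
        System.out.println("non-heap used/committed (MB): "
                + (nonHeap.getUsed() >> 20) + " / " + (nonHeap.getCommitted() >> 20));
    }
}
```

On the JVM side, HotSpot can return heap to the OS after a full GC, steered by -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio (collector-dependent behavior), but nothing in the JVM will shrink memory the native kernel allocated itself with malloc/VirtualAlloc.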
