Is it safe to tune the jemalloc page size to 64k on Linux? - jemalloc

For my team's enterprise product, we have been using jemalloc 3.6. Currently, we are trying to upgrade to version 5.3. With 5.3 we are noticing slightly better performance when running with a 64k page size than with a 4k page size. This is on a Linux 4.4 kernel. My question is: is there any harm in setting the page size to 64k on Linux (i.e., setting --with-lg-page=16 at compile time)?
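For reference, a minimal build sketch of that compile-time option, assuming a jemalloc 5.3 source tree on an x86_64 Linux box (the install prefix is only a placeholder):

# build jemalloc with a 64 KiB page (lg 16) instead of the usual 4 KiB (lg 12)
./configure --with-lg-page=16 --prefix=/usr/local
make -j"$(nproc)"
sudo make install
# kernel page size for comparison (4096 on a stock x86_64 4.4 kernel)
getconf PAGESIZE

Since --with-lg-page is fixed at configure time, the resulting library carries the 64k page size to every machine the product ships to, so the choice cannot be changed per deployment at runtime.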

Related

Regarding hardware requirements for freeswitch

I have the Ubuntu 16.04 desktop version installed on a 64-bit machine with 4 GB RAM and an Intel Core i3 processor at 2.13 GHz.
I need to install FreeSWITCH for a small project; it will take only one call at a time. I tried looking up the hardware requirements for FreeSWITCH on their wiki, but I am not able to find them.
Will FreeSWITCH run fine on my laptop? Is there a page giving details about the minimum hardware requirements for FreeSWITCH? Thanks.
Update: I found some more info on another website, in the section "Hardware and Software Requirements" of a FreeSWITCH versus Asterisk comparison.
Minimum/Recommended System Requirements (from the FreeSWITCH wiki's "Minimum Requirements" and "System Tuning" pages):
32-bit OS (64-bit recommended)
512 MB RAM (1 GB recommended)
50 MB of disk space
Beyond that, system requirements depend on your deployment needs. If you want the group video calling feature, go for a 1.6.x version; otherwise just use 1.4.x.

Golang - why do compilations on similar machines result in significantly different binary file sizes?

I have a gorilla/mux based web service written in Golang.
I've observed that the exact same code produces a binary of more than 10 MB on my Windows 10 Pro machine, while it's about 7 MB on my colleague's Windows 10 Pro machine.
On yet another colleague's MacBook Pro running OS X Yosemite, the binary is just a bit over 11 MB in size.
What does this binary actually contain?!
It may be due to different target architectures (the GOARCH environment variable). Run go env to verify. A binary compiled for 386 and one compiled for amd64 differ significantly in size (the amd64 binary is considerably bigger), but sizes should be close if the architecture is the same, even on different operating systems.
The Go version itself also matters a lot; Go 1.7 reduced compiled binary sizes. See the blog post Smaller Go 1.7 binaries for details.
I assume it is the same in your case, but whether debug info is excluded can also change the compiled binary size significantly.
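As a quick sanity check between the two machines, something along these lines might help (the output name service_stripped is a placeholder): go version and go env GOOS GOARCH confirm that both builds use the same toolchain and target the same platform, and -ldflags="-s -w" drops the symbol table and DWARF debug info so you can compare "bare" binary sizes.

go version
go env GOOS GOARCH
go build -ldflags="-s -w" -o service_stripped .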

Can Redis 64-bit be used in a production environment under Windows?

I use this version in my dev environment: Redis-64.
I want to know whether this version is suitable for a production environment.
If it can be used, what do I need to pay attention to compared with running it under Linux?
Since version 3.0.3 the Windows port developers abandoned dlmalloc and began to use jemalloc as the memory allocator, and the port was actually considered for production usage. The 3.0.500 build is approved for production by the MS developers (see here).
Then there is the rather hairy way they bypassed the Unix fork() used to save data to disk. The Microsoft port developers call it a point-in-time heap snapshot, and this is the most controversial part for production use:
Redis under Windows may need up to 3 times more memory than the Linux version. This behavior is considered normal, because the swap file on Windows can easily be up to 3 times larger than the actual amount of RAM.
I think this is acceptable only if you use Redis as an LRU cache or do not save data to disk at all.
At the very least, Redis under Windows is clearly fragile if your Redis node uses a lot of memory. For example, we tried to use Redis for Windows (v2.8, v3.0.3, v3.0.5) on a server with 512 GB of memory and 2 SSD drives (each 256 GB, in RAID 0) used as the system disk, with no limits on the Windows swap file. Our test emulated our production workload: lots of writes and RDB saves at ~60-70% memory utilization. There was a lot of erratic behaviour whenever the node tried to save snapshots: memory consumption jumped and connections froze during saving. Such behaviour never happened under Linux on the same hardware.
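If you do go the cache-only route mentioned above, a minimal sketch of the relevant settings, treating the 4gb limit as a placeholder (the same three settings can also live in redis.conf):

redis-cli config set maxmemory 4gb
redis-cli config set maxmemory-policy allkeys-lru
redis-cli config set save ""

maxmemory plus allkeys-lru caps the instance and evicts like an LRU cache, and save "" disables RDB snapshots so the point-in-time heap snapshot machinery is never triggered.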

Increase Memory Usage for Python process on Windows

I have a Python script that loads mp3 music files into memory using NumPy, manipulates certain parts of each song, and renders the multiple music files into a single mp3 file. It can be very RAM intensive depending on how many mp3 files the user specifies.
My problem is that the script throws "Memory Error" when I attempt to provide 8 or more mp3 songs (each around 5MB in size).
I am running:
Windows Server 2008 R2, 64-bit, with 64 GB of RAM and 4 processor cores
a 32-bit version of Python
When I run Task Manager to view the python.exe process I notice that it crashes when it exceeds 1GB of RAM.
Is there a way I can increase the 1GB limit so that python.exe can use more RAM and not crash?
There is no way to increase memory usage for a process. The problem was with the Python module I was using; after updating to a newer version of the module, I was no longer limited to 1 GB of RAM.
There's a workaround.
See Increasing the Amount of Memory Available to a 32-bit Windows Application.
Using the Visual Studio command prompt:
editbin /LARGEADDRESSAWARE "C:\path\to\exe\executable.exe"
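For completeness, a small sketch to check what you are actually running and whether the flag took effect, using the same Visual Studio command prompt and the same placeholder path as above:

REM prints 32 for a 32-bit interpreter, 64 for a 64-bit one
python -c "import struct; print(struct.calcsize('P') * 8)"
REM after editbin, the headers should report "Application can handle large (>2GB) addresses"
dumpbin /headers "C:\path\to\exe\executable.exe" | findstr /i "large"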

ARM Cortex-A8: How to measure cache utilization?

I have a Freescale i.MX515 EVK, an ARM Cortex-A8/Ubuntu platform. Unfortunately, the Linux kernel on the board does not support some of the well-known profilers such as OProfile or Zoom Profiler (Zoom supports ARM processors, but internally it uses the OProfile driver), which give very detailed reports about cache utilization.
The Cortex-A8 has 32 KB instruction and data caches and a 256 KB L2 cache. Currently, when my image processing algorithm is running, I'm totally blind to their usage.
Are there any other methods, other than using profilers to find out cache hits and misses?
Install Valgrind (it supports ARM nowadays) and use the cachegrind tool to check cache utilization. If you are running Ubuntu on the device, it should be as simple as sudo apt-get install valgrind. Valgrind can also help you simulate what would happen with different cache sizes.
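As a sketch, assuming the cache parameters below are a reasonable approximation of the Cortex-A8 (32 KB, 4-way L1 caches and a 256 KB, 8-way L2, all with 64-byte lines) and with ./image_algo standing in for your own binary:

sudo apt-get install valgrind
# size,associativity,line-size for the L1 instruction, L1 data and last-level caches
valgrind --tool=cachegrind --I1=32768,4,64 --D1=32768,4,64 --LL=262144,8,64 ./image_algo
# per-function / per-line breakdown of the simulated hits and misses
cg_annotate cachegrind.out.<pid>

Bear in mind that cachegrind simulates the caches rather than reading the A8's hardware counters, so treat the numbers as relative guidance rather than exact measurements.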
