Does Linux Arch 3.1 support Intel Optane? I have booted kernel 3.1 from a SATA drive. Are there Intel Optane drives on SATA? Or does Linux 3.1 support Optane over some other interface?
EDIT
It is Arch based Audiophile Linux 3.1:
uname -a
Linux server1 3.10.14-rt9-1-rt #1 SMP PREEMPT RT Wed Oct 9 ... 2013 x86_64
Version 4.0 had a problem on my system; I did not try 5.0.
That distro snapshot is from 2015. Using it in 2020 (especially on a network) seems like a terrible idea from a security POV! It's not like Red Hat or something, where they backport security fixes to old versions of the kernel and user-space; this snapshot of Arch GNU/Linux simply hasn't been maintained since then.
"Linux 3.1" is highly misleading terminology. You're talking about a distro release version, so you need to say "Audiophile Linux 3.1". If you just say Linux x.y, that's assumed to be a kernel version number. Linux is the name of the kernel itself.
AFAIK, only Optane DC Persistent Memory needs any special support (mmap(MAP_SYNC), available since Linux kernel 4.15, and maybe drivers for talking to the NV-DIMM itself).
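For illustration, a minimal sketch of what a MAP_SYNC mapping looks like (assuming a kernel >= 4.15, a libc new enough to expose MAP_SYNC, and a file on a DAX-mounted filesystem; the path is hypothetical):

/* Sketch only: requires an NV-DIMM exposed through a DAX mount,
 * e.g. ext4/xfs mounted with -o dax. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/data.bin", O_RDWR | O_CREAT, 0644); /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 4096;
    if (ftruncate(fd, len) < 0) { perror("ftruncate"); return 1; }

    /* MAP_SYNC must be combined with MAP_SHARED_VALIDATE; stores to this
     * mapping are persistent once flushed from the CPU caches, with no
     * later msync()/fsync() needed. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) { perror("mmap(MAP_SYNC)"); return 1; }

    ((char *)p)[0] = 42;   /* write goes straight to persistent memory */

    munmap(p, len);
    close(fd);
    return 0;
}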
Other Optane devices (Optane DC and consumer-grade Optane) are just fast SSDs that use standard protocols, typically NVMe.
Some of the stuff that Intel associates with Optane, like using Optane as a caching drive to accelerate a rotational HDD or to "augment your DRAM", is purely (Windows) software that's locked to certain Intel hardware. e.g. the question "Confused about Intel Optane DC SSD usage as extra RAM with IMDT?" explains that IMDT is just Intel software for using an Optane DC SSD as swap space.
SATA is too slow for most of the benefit. A quick google didn't find any Optane SATA devices; not really surprising. It's unlikely that Intel sells any SATA-connected Optane drives based on 3D XPoint memory.
Linux kernel version 3.10 supports NVMe; support was added in Linux 3.3. (Assuming this distro built its kernel with NVMe enabled.)
A kernel as old as 3.10 might have problems with other hardware on a new motherboard. (Including but maybe not limited to integrated graphics.)
If your real-time latency requirements are very tight, you might want to look into an NV-DIMM, or just a RAM disk (which you copy into at startup) for data that needs to be ready with low latency, to make sure reading never has to wait on disk latency at all.
If not, you can probably use a modern distro that's still maintained, with a low-latency kernel.
Or mmap files and pin them into memory with mlock to make sure they stay ready. (Doesn't solve the initial-read latency, but allows guaranteed low latency access for files once you've done that. And doesn't need expensive storage. A high-capacity TLC or QLC NVMe SSD could be fine, especially if you look for one that doesn't ever block for long periods of time under read-only workloads. Use noatime to prevent writes.)
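A minimal sketch of that mmap + mlock idea (the file path is hypothetical and error handling is kept short):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/data/samples.raw", O_RDONLY);   /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* mlock faults the pages in and keeps them resident, so the initial
     * read cost is paid once, up front, instead of at an unpredictable
     * moment later. Needs enough RLIMIT_MEMLOCK headroom or CAP_IPC_LOCK. */
    if (mlock(p, st.st_size) < 0) { perror("mlock"); return 1; }

    /* ... low-latency reads from p ... */

    munlock(p, st.st_size);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}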
Related
I have a 2019 MacBook Pro 16". It has an Intel Core i9, 8-core processor and an AMD Radeon Pro 5500M with 8 GB GPU RAM.
I have the laptop dual booting Mac OS 12.4 and Windows 11.
Running clinfo under Windows tells me essentially that the OpenCL support is version 2.0, and that the addressing is 64-bits, and the max allocatable memory is between 7-8 GB.
Running clinfo under Mac OS tells me that OpenCL support is version 1.2, that addressing is 32-bits little endian, and the max allocatable memory is about 2 GB.
I am guessing this means that any OpenCL code I run is then restricted to using 2GB because of the 32-bit addressing (I thought that limit was 4GB), but I am wondering a) is this true and b) if it is true, is there any way to enable OpenCL under Mac to use the full amount of GPU memory?
OpenCL support on macOS is not great and has not been updated/improved for almost a decade. It always maxes out at version 1.2 regardless of hardware.
I'm not sure how clinfo determines "max allocatable memory," but if this refers to CL_DEVICE_MAX_MEM_ALLOC_SIZE, this is not necessarily a hard limit and can be overly conservative at times. 32-bit addressing may introduce a hard limit though. I'd also experiment with allocating your memory as multiple buffers rather than one giant one.
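If you want to check exactly what the driver reports, a minimal query sketch is below (assuming the stock OpenCL headers; compile with -framework OpenCL on macOS or -lOpenCL elsewhere; error checking is omitted for brevity, and nothing here is specific to that MacBook):

#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_ulong max_alloc = 0, global_mem = 0;
    /* The limit clinfo likely reports, plus the total global memory. */
    clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                    sizeof(max_alloc), &max_alloc, NULL);
    clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                    sizeof(global_mem), &global_mem, NULL);

    printf("max single allocation: %llu MiB\n",
           (unsigned long long)(max_alloc >> 20));
    printf("total global memory:   %llu MiB\n",
           (unsigned long long)(global_mem >> 20));
    return 0;
}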
For serious GPU programming on macOS, it's hard to recommend OpenCL these days - tooling and feature support on Apple's own Metal API is much better, but of course not source compatible with OpenCL and only available on Apple's own platforms. (OpenCL is now also explicitly deprecated on macOS.)
I have Ubuntu 16.04 desktop (64-bit) installed on a machine with 4 GB RAM and an Intel Core i3 processor at 2.13 GHz.
I need to install FreeSWITCH for a small project. It will handle only one call at a time. I tried looking up the hardware requirements for FreeSWITCH on their wiki, but I was not able to find them.
Will FreeSWITCH run fine on my laptop? Is there a page giving details about its minimum hardware requirements? Thanks.
Update: I found some more info on another website, in the section "Hardware and Software Requirements" of a FreeSWITCH versus Asterisk comparison.
Min Requirement
System Tuning
Minimum/Recommended System Requirements:
32-bit OS (64-bit recommended)
512MB RAM (1GB recommended)
50MB of Disk Space
System requirements depend on your deployment needs.
If you just want the group video calling feature, go for a 1.6.x version; otherwise, 1.4.x is enough.
I am curious about the difference between memory management in Windows and Linux. Does Windows support paging or segmentation?
I am trying to understand: if all processes together use up all the RAM on a Windows machine, every user is prevented even from logging in to the system, but that is not the case on Linux systems.
So how is this achieved on Linux?
In addition to the other answers, Windows 10 also supports compressing RAM: before Windows tries to swap memory out to the hard drive, it will try to compress it in RAM.
I am familiar with the Windows kernel but new to the Linux kernel. I just need to know how it's done on Linux, i.e. how kernel program development works.
You can check free-electrons.com; it's a good source of information for kernel development. (It specializes in embedded Linux, but most of the docs apply to standard kernel development as well.)
There is also the classic Linux Device Drivers book, which is very complete and detailed.
And last but not least, the Linux kernel documentation.
Linux does not have a stable in-kernel API. This is by design, so you should generally avoid writing kernel code if you can; it is unlikely to remain source-compatible indefinitely, and will definitely NOT be binary-compatible, even between minor releases.
This is less true for vendor kernels; Red Hat etc. DO maintain source & binary kernel compatibility within a major release.
More work is gradually being done in the kernel to reduce the amount of kernel code required for various tasks, such as driver development (for example, USB drivers can typically be written in userspace with libusb), filesystem development (FUSE) and network filtering (NFQUEUE). However, there are still cases where you need kernel code; in particular, block devices still need to be in the kernel to be usable as boot devices and for swap.
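As a rough illustration of the userspace-driver point, here is a minimal libusb-1.0 sketch; the vendor/product IDs are placeholders, and a real tool would enumerate and match devices instead:

/* Talks to a USB device entirely from userspace: no kernel module needed.
 * Build with: cc demo.c $(pkg-config --cflags --libs libusb-1.0) */
#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_context *ctx;
    if (libusb_init(&ctx) != 0) return 1;

    /* Convenience helper; the IDs below are placeholders. */
    libusb_device_handle *h =
        libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
    if (!h) {
        fprintf(stderr, "device not found (or no permission)\n");
        libusb_exit(ctx);
        return 1;
    }

    struct libusb_device_descriptor desc;
    libusb_get_device_descriptor(libusb_get_device(h), &desc);
    printf("bcdUSB %04x, %d configuration(s)\n",
           desc.bcdUSB, desc.bNumConfigurations);

    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}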
While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here.
It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.
I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (Dual core, HyperThreaded cpu running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on.
Since getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Even giving the virtual machine 4 GB of memory, it was slow enough that I could type complete sentences and then watch the VM "catch up".
My company has training laptops that we provide for our classes. They are Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54 GHz and 8 GB of memory, and the experience on these laptops is night and day compared to the E6510. They are crisp and you are barely aware that you are running in a virtual environment.
Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component by component comparison and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running a nVidia FX 2700m GPU and the E6510 is running a nVidia 3100M GPU. Looking at benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M.
http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html
3100M = 111th (E6510)
FX 2700m = 47th (Precision M6400)
Radeon HD 5870 = 8th (Alienware)
The host OS is Windows 7 64bit as is the guest OS, running in Virtual Box 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium.
So after that long setup, my question is:
Is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments?
Thanks in advance.
Cheers,
Dave
EDIT:
The HDDs in the two systems are 7200 RPM, the E6510 having 500 GB vs. the M6400 having 2x 250 GB in a non-RAID configuration.
Also, when I turn off some of the graphics features of Windows 7 (host and guest) by going to non-Aero themes, VM performance visibly increases.
Just to close off this question with my findings: what we discovered was that driver performance was limiting the perceived performance of the virtual machine. With the default Dell drivers, which are built for "stability", the virtual machines were visibly impacted in "visual" applications like IDEs (Visual Studio 2010), to the point that VS 2010 could not keep up with my typing. Once we installed nVidia reference drivers, the IDEs were crisp and you couldn't really tell you were in a VM anymore, which was my experience with the M6400s.
Thanks to everyone who threw out some ideas on the subject.
I am running two VMs on my development system simultaneously, one for development, and one for TeamCity. My graphics card on my Dell Optiplex is an ATI 2450, which is, quite honestly, complete crap. Personally, I have found RAM and CPU to make the most significant impact on my desktop. But since you are on a laptop, have you thought about the disk? Our M6400 has an SSD, and perhaps that is the biggest difference for your two laptops. I would not expect GPU to affect anything, unless of course you are trying to use the experimental Direct3D features in VirtualBox.
You guys are looking in the wrong places. Go into the BIOS and look for the virtualization extensions (AMD-V or VT-x); they are off by default on most systems. If it doesn't have that option, take a look at Sun VirtualBox, which runs well on my older laptop even without virtualization support.
A GPU can significantly impact the performance of any system. Visual Studio, for example, shows a huge performance difference between onboard video and dedicated graphics.
That said, I would expect there are other differences. First, how do the two hard drives compare? Notebook manufacturers love putting slow disks in machines in order to beef up their battery-longevity numbers; on the other hand, sometimes they put in faster drives to boost performance numbers. It really depends on what the new machine was marketed towards. Along these lines, some hard drives also have configuration settings to determine their power/performance/noise levels. Depending on the drive, you might be able to tweak this.
Another expected difference is the quality of the memory. Nearly every Dell I've used has had second- or third-tier RAM installed. Sure, they might both be DDR3 at a certain clock speed, but the quality of the chips is going to determine how they really perform. Sometimes by 200% or more.
Beyond those you start getting into chipset differences, mainly in the hard drive controllers. You can't do anything about this though.
The next thing I can think of is drivers. Make sure you're up to date on everything you can. Also, test both the Dell-supplied and the nVidia-supplied drivers. Sometimes nVidia has better drivers, sometimes the original ones from Dell are better. That part is a crapshoot.
And, finally, consider blowing away the new machine and doing a complete reinstall from bare metal. Before installing any antivirus or CPU-sucking software, test your VM performance.