I've built an Electron 8 app. It does some computer vision using tensorflow.js; the models are prebuilt by Google, so nothing custom on my end. Typically the GPU process floats around 200-300 MB of RAM. However, somewhat randomly it will climb above 1 GB and stay there, and I can't figure out why. The application runs in the background and only does processing occasionally, so it's painful to watch it consume that much memory while doing nothing.
Any ideas how I could troubleshoot/debug the GPU process memory usage?
Any ideas on ways to force the GPU process to free up memory it's holding on to?
I've tried disabling hardware acceleration, and while that does fix the memory issue, CPU usage goes sky-high when the app does the computer vision work. I'd rather it hog memory than hit the CPU that hard.
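As a starting point, tensorflow.js keeps its own allocation bookkeeping, which can help narrow down whether the growth is leaked tensors (on the webgl backend, tensors live as textures owned by the GPU process) or something else. A minimal sketch, assuming `@tensorflow/tfjs` and a loaded `GraphModel`; `runDetection` is a hypothetical helper, not part of either API:

```ts
import * as tf from "@tensorflow/tfjs";

// Hypothetical wrapper for one inference pass. tf.tidy() disposes every
// intermediate tensor created inside the callback as soon as it returns;
// without it (or manual dispose() calls), intermediates pile up as WebGL
// textures held by the GPU process.
function runDetection(model: tf.GraphModel, input: tf.Tensor): number[] {
  const values = tf.tidy(() => {
    const output = model.predict(input) as tf.Tensor;
    return output.dataSync(); // copy the results out before the tensors are reclaimed
  });
  return Array.from(values);
}

// tf.memory() reports tensor counts and (on the webgl backend) GPU bytes.
// A numTensors count that keeps climbing across idle periods usually means a leak.
console.log("before:", tf.memory());
// ... call runDetection(...) here ...
console.log("after:", tf.memory());
```

If `tf.memory()` stays flat while the OS still shows the GPU process above 1 GB, the held memory is more likely Chromium's GPU caches than your tensors.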
What I Want
I want to simulate the performance of a normal hard drive on my SSD-based development machine.
Background
I'm developing a Mac application on a MacBook with an SSD. It's gloriously fast.
If someone has a standard platter hard drive, my app will be slower for them. My app is also heavy on Core Data, so disk access speed will be a significant factor.
I worry that the performance measurements I take in Instruments will look fine, but that the app will be achingly slow when a customer runs it on a conventional hard drive.
What I've Tried
Before I installed my SSD, I measured the performance of my app in Instruments. After the install, I measured it again, and the two benchmarks were identical.
This doesn't make sense to me, and I'm convinced I was doing something wrong. Instruments is probably measuring CPU time rather than wall-clock time. But still, surely the speed of the hard drive should affect the benchmark? Or does Instruments somehow compensate for this?
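If the benchmark was a CPU-time profile, that would explain it: time the process spends blocked on disk I/O isn't charged to it, so SSD and HDD runs can look identical. A quick Node sketch of the distinction (the file path is a placeholder):

```ts
import { readFileSync } from "fs";

const wallStart = Date.now();
const cpuStart = process.cpuUsage();

// Hypothetical workload: reading a big file is mostly waiting on the disk.
const data = readFileSync("/tmp/big-file.bin"); // placeholder path

const wallMs = Date.now() - wallStart;
const cpu = process.cpuUsage(cpuStart); // user + system CPU time, in microseconds
const cpuMs = (cpu.user + cpu.system) / 1000;

// On a slower disk, wallMs grows while cpuMs stays roughly constant,
// which is why a CPU-time benchmark can look identical on SSD and HDD.
console.log(`read ${data.length} bytes; wall: ${wallMs} ms, cpu: ${cpuMs.toFixed(1)} ms`);
```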
Kudos to @PaulR above, who suggested using an external USB hard drive to test performance. Thanks!
You can use a virtual machine and throttle disk access. That way you get some control over disk speed, though it's still not possible to limit only writes or only reads.
The VirtualBox manual explains how to do it; see "Limiting Bandwidth for Disk Images": https://www.virtualbox.org/manual/ch05.html#storage-bandwidth-limit
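In practice that means creating a bandwidth group with VBoxManage and attaching the VM's disk to it. A sketch that scripts the first step from Node, with the VM name, group name, and 10 MB/s limit all as placeholder values (command syntax per the manual section above):

```ts
import { execFileSync } from "child_process";

const vm = "Dev VM";      // placeholder: your VM's name in VirtualBox
const group = "SlowDisk"; // placeholder: name for the new bandwidth group

// Create a disk bandwidth group capped at 10 MB/s; disks attached to this
// group (via VBoxManage storageattach --bandwidthgroup) share the limit.
execFileSync(
  "VBoxManage",
  ["bandwidthctl", vm, "add", group, "--type", "disk", "--limit", "10M"],
  { stdio: "inherit" }
);
```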
I'm planning on buying a new PC for programming under Visual Studio 2010. My other main uses are:
Programming under Microsoft Visual Studio.
Running VMWare Virtual Machines.
Probably multi-monitor (if my budget lets me buy an extra one)
Here are my questions:
Do I need to buy a high-performance display adapter for the usage described above, or will a mid-range one suffice? In general, how much can the display adapter affect these workloads?
Which CPU would perform better: Core i7, Core 2 Quad, or AMD? I have a limited budget, but I really need good performance, and a good CPU/motherboard/RAM combination is my first priority.
A good video card is not a must-have unless you want to develop advanced 3D with Visual Studio (which is an option, after all). WPF and multi-monitor setups will work on any video card you can buy nowadays.
What is an absolute requirement is 4GB of RAM, just for Visual Studio 2010 alone under Win7 (x64, obviously, since 32-bit Windows cannot make full use of 4GB). Adding virtual machines raises that need, with no real upper limit: it depends on how many VMs you plan to run at the same time and what applications will run on them. Add a minimum of 1GB per VM running Win7, and a lot more if they are supposed to run databases, source control, or any heavy-load application.
Also, if the VMs are going to run simultaneously, it is almost mandatory to put them on separate physical hard drives; if you don't, you will get Stone Age disk performance for both the host and the VMs (unless it's all on an SSD, which I haven't tried).
If I were buying a computer for programming now, I would definitely buy an SSD to host Win7, VS, and the projects; it would be really comfortable (my current desktop takes several minutes to boot and load my projects, and anything that improves loading is good).
On the CPU side, you might want to spend money on the number of cores rather than the actual speed (frequency) of the processor. All CPUs have decent performance, but your computer may slow down a lot if you're running several VMs on a 2-core CPU.
The i7 is a really good chip, but I don't think you would gain a lot by spending big money on high-end Intel parts. Go for a good price/performance ratio with lots of cores, which for your budget means a 4-core i5 or a 6-core Phenom II X6 (I would personally prefer the X6, but I don't want to sound partial).
More generally, if your host or your VMs are meant to run things like databases, continuous-integration builds, or source-control servers that many people depend on, you might want to put those on a machine other than your development box, since availability matters there (that means no reboots, and avoiding hardware and software failures). You might also want a good motherboard, an excellent power supply, and a decent tower with enough fans. And think about what you're going to use for backups.
Edit: that last point almost rules out pre-built computers, since as far as I know computer makers almost always include a cheap power supply and motherboard even in high-end machines, because those components are not advertised.
Another thing to look for is drive speed. Visual Studio does a lot of reading from and writing to disk, so get the fastest drive you can. An SSD is ideal.
With the exception of the high-end graphics card, the same rules that apply to gaming setups apply to development environments. The more RAM the better; move the default Windows page file to a drive other than C:; and use an SSD, or if you can't afford one, try a hybrid 7200rpm / 4GB SSD drive such as the Momentus made by Seagate, which won't break the bank.
A lot of people agree that in the 64-bit era, memory is the new disk. 48GB costs around $700 at the moment, but that will drop rapidly over the coming months as 64-bit machines gain wider acceptance.
Oh, and your graphics card, while it doesn't need to be a monster, should still be a well-made one (from a decent manufacturer) with as much RAM as you can afford. 2GB of graphics RAM means you can drive high resolutions across multiple monitors without eating into the host machine's RAM.
Best thing for a good Visual Studio setup? Money.
i7 or Core 2, whichever. I'd go quad-core if possible, and I'd put as much money as I could into RAM.
The quad-core AMD processors are also quite good now.
Finally, since VS2010's UI is WPF-based, a fast video card would also help; maybe not as much as more RAM, but I'd go with something better than onboard video.
I'm running VS2008/VS2010 on a triple monitor setup with a really awful graphics card -- ATI Radeon HD3450. Graphics performance hasn't affected me one bit since I'm just doing simple WPF applications. Your needs will vary if you're doing game development or something more demanding.
I would spend your money on RAM, especially if you're using VMs. Not only do the VMs need memory to run well, they will also be contending for the same disk, so either put them on a different hard disk or go SSD. VS20xx thrashes the drive during compiles, and a fast disk will help you out a lot.
You can really get a great developer machine if you're willing to build it yourself.
Scott Hanselman says: "Jedis build their own lightsabers, so you should build your own computer at least once!"
He describes how he built GOM (God's Own Machine) for under $3K here, and talks about it in a podcast here.
If building your own is a bit beyond your aspirations, you can get some good ideas there about the most important features for a developer, from a Microsoft guru who really knows.
If you can afford it, go for a solid state drive.
I would consider getting a better-than-average video card, because you'll need some horsepower to run multiple monitors, especially if you want to take advantage of the new tab tear-off ability in VS 2010 to display code files in separate windows.
I would definitely recommend a 10,000 RPM VelociRaptor hard drive, or a pair of them striped, because VS is a bit of a hog on I/O resources.
If it was me, I'd go with a 6-core AMD Phenom processor and 6GB of Triple-channel RAM to maximize performance. If you're an Intel fan, go i7.
A good read on the importance of hard drive speed from ScottGu's Blog.
Tip/Trick: Hard Drive Speed and Visual Studio Performance
When you are doing development with Visual Studio you end up reading/writing a lot of files, and spend a large amount of time doing disk I/O activity.
There are a few things that I don't understand about iOS memory management.
I wanted to know how much memory an iPhone app typically takes while running on the device (is there a fixed number, like 10MB?).
If an app includes a lot of large images, what is the impact on memory? Do they only take up memory once they are loaded?
How does iOS manage the memory when there are multiple apps running?
Please help me understand these concepts.
There isn't a stated or fixed amount of memory available to apps on iOS devices.
That said, there are game apps that are reported to use over 55MB of memory; however, the OS is also reported to kill these games a significant percentage of the time if they're not run right after a device reset.
If you use 22MB of memory or less, the OS could still kill your app for lack of available memory, but it would also have to kill a massive percentage of the other apps in the App Store, so you would be in very good company.
When any app (foreground or background) requests enough memory to start depleting the memory pool sufficiently, memory warnings are sent to other apps. If the memory pool gets small enough, apps are killed, including possibly the foreground app if it's a big memory hog.
Q1) There isn't a fixed value, of course. Every application (and application instance) will use a different amount of memory depending on its tasks. There is a maximum, however; reaching it will trigger a memory warning, and the OS may kill the app.
Q2) Images: it depends on how many you have loaded at once (including through animations). Images bundled with the app only take up disk space; they consume memory once they are loaded.
Q3) The application in the foreground gets the most memory allocated to it. Applications in the background can request memory to perform background tasks.
Good article for best practices:
http://inessential.com/2010/06/28/how_i_manage_memory
I've got an application that does little computational CPU work but a lot of memory access (allocating objects and moving them around; there's little numeric or arithmetic code).
How can I measure the share of time I'm spending on memory access latencies (due to cache misses), with the CPU stalled?
I should note that the app is running on a Hyper-V guest; I'm not sure it will pose any difficulties, but it might.
You could always profile your application to see where it spends most of its time.
You can learn a lot about your application's behaviour and data access patterns this way.
If you are using Linux, you have a wide range of available tools for profiling, like:
OProfile
sysprof
valgrind + kcachegrind
EDIT:
For a more exact measurement of the processor performance as well as memory accesses, you could also try the AMD CodeAnalyst Performance Analyzer. Here are instructions on how to use it with Intel processors, though I haven't tried it myself.
Another tool that you might also find useful is the Intel Performance Tuning Utility.
Unless you have latency built into the system, just run the application for a while on a dedicated machine and check the CPU counters. If the app uses 100% of the CPU cores it has access to, it's CPU-bound. Otherwise, it's spending time on other things, like memory allocation and I/O.
I'm trying to reproduce a bug that seems to appear when a user is using up a lot of RAM. What's the best way to either limit the RAM available to the computer or fill most of it up? I'd prefer to do this without physically removing memory and without running a bunch of arbitrary, memory-intensive programs (e.g., Photoshop, Quake, etc.).
Use a virtual machine and set resource limits on it to emulate the conditions that you want.
VMware is one of the leaders in this area, and their free VMware Player lets you do this.
I'm copying my answer from a similar question:
If you are testing a native/unmanaged/C++ application, you can use AppVerifier and its Low Resource Simulation setting, which uses fault injection to simulate errors in memory allocations (among many other things). It's also really useful for finding a ton of other subtle problems that often lead to application crashes.
You can also use consume.exe, which is part of the Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 Service Pack 1, to easily use up memory, disk space, CPU time, the page file, or kernel pool and see how your application handles the lack of available resources. (Does it crash? How is performance affected? etc.)
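If you'd rather not install the SDK just for that, a few lines of Node can apply similar memory pressure. A rough sketch; the 100MB block size and default target are arbitrary:

```ts
// memory-hog.ts: hold on to ever more RAM until killed.
// Usage (target in MB as an argument): npx ts-node memory-hog.ts 2048
const MB = 1024 * 1024;
const targetMb = Number(process.argv[2] ?? 1024); // roughly how many MB to pin
const blocks: Buffer[] = [];

for (let held = 0; held < targetMb; held += 100) {
  // Buffer.alloc zero-fills, so the OS must actually commit these pages.
  blocks.push(Buffer.alloc(100 * MB));
  console.log(`holding ~${held + 100} MB`);
}

// Keep the process alive (and the buffers referenced) until you kill it.
setInterval(() => {}, 60_000);
```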
Use either a job object (on Windows) or ulimit(1) (on Unix).
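For the ulimit route, a hedged sketch of scripting it from Node; the 512MB cap and the ./myapp path are placeholders:

```ts
import { spawn } from "child_process";

// Launch the program under test with its address space capped via ulimit.
// "ulimit -v" takes kilobytes, so 524288 KB is a 512 MB cap.
const child = spawn("sh", ["-c", "ulimit -v 524288 && exec ./myapp"], {
  stdio: "inherit", // let the child's output flow to this terminal
});

child.on("exit", (code, signal) => {
  console.log(`exited with code=${code} signal=${signal}`);
});
```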
Create a virtual machine and set its RAM to whatever you need.
The one I use is VirtualBox from Sun.
http://www.virtualbox.org/
It is easy to set up.
If you are developing in Java, you can set memory limits for the JVM at startup; for example, java -Xmx256m MyApp caps the heap at 256MB.