What would be the optimal machine configuration (CPU power, memory, and drive technology) for the following usage:
Visual Studio 2010 and Visual Studio 2012, several instances at once
Oracle Navigator and MSSQL Management Studio
plus some other programs (like GIMP, a PDF converter, MS Office, ...)
The main goal is fast build and compile times in Visual Studio.
I have to justify every component, since this is the configuration for a development machine at work.
There are already some threads on Stack Overflow about this, like:
SSD idea
I have not tried the SSD proposal yet...
- OS: Windows 8, 64-bit
A multi-core architecture (with or without Hyper-Threading) will give you a performance gain when you run clock-cycle-intensive operations concurrently: each core has dedicated execution units and pipelining, so more cores mean less chance of applications having to time-share, e.g. while compiling.
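If fast compiles are the main goal, it is also worth checking that the build is actually allowed to use those cores. A minimal sketch from the command line ("MySolution.sln" is just a placeholder for your own solution file):

rem Build up to 4 projects of the solution in parallel; /m on its own uses one worker per core.
msbuild MySolution.sln /m:4

For native C++ projects, the /MP compiler option (Project Properties -> C/C++ -> General -> Multi-processor Compilation) additionally compiles the source files within a single project in parallel, and Tools -> Options -> Projects and Solutions -> Build and Run controls how many projects Visual Studio itself builds in parallel.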
A system with a lot of RAM has an advantage when you switch between different instances of e.g. Visual Studio, because their state won't need to be written out to (slower-than-RAM) disk, SSD or not.
It will also reduce disk I/O when working with GIMP, Photoshop or the like.
The amount of CPU cache can also have a positive influence on your daily work, because more (faster-than-RAM) cache reduces how often the system has to leave the confines of the CPU and go the extra mile to read from or write to RAM.
Finally, the advantages of an SSD over "conventional" disks are mainly noticeable in disk and file access times. Booting the OS will be smoother, and so will starting programs from that SSD. But SSD capacity is still fairly limited within a reasonable price range. Is it worth it? In my opinion, no. Once your tools have been loaded, there is little real need for an SSD in a developer's day.
I have downloaded a medium-sized Go package from GitHub. When compiling it from source, my computer slows down because there is more than one Go compile process running and they take up much of the CPU.
How does Go manage to compile concurrently?
Is there any parameter to tune the number of CPUs it uses when compiling?
Go is using a lot of CPU because it is trying to build as fast as possible, like any other compiler. It may also be because you are using a package that uses cgo, which can drastically increase compile times, since compiling medium to large C libraries is often quite intensive.
You can control the number of processes Go uses by setting the GOMAXPROCS environment variable, e.g. GOMAXPROCS=1 go get ... to limit Go to a single process (and thus only one CPU core). This does not, however, affect the number of processes used by external compilers that cgo may invoke.
If you need further CPU control, on Unix-based systems you can use the nice command to lower the build's priority so that other programs get the CPU first, making your computer less sluggish.
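To make both suggestions concrete, a minimal sketch (the import path github.com/example/pkg is just a placeholder for whatever package you are building):

# Limit the Go toolchain to a single CPU core via GOMAXPROCS.
GOMAXPROCS=1 go get github.com/example/pkg

# Or keep all cores but give the build the lowest scheduling priority (nice 19),
# so interactive programs stay responsive while it runs.
nice -n 19 go build ./...

The two approaches can also be combined if the machine still feels sluggish.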
I recently had to enable VT on Windows 7 because I want to run Ubuntu on VMware, so I was wondering: why isn't it enabled by default? Is it some kind of security issue, or is it just not necessary for the average user?
There are several reasons, including "security" and "performance":
https://superuser.com/questions/291340/why-do-pc-manufacturers-disable-advanced-cpu-features-in-the-bios-by-default
http://www.vmware.com/pdf/asplos235_adams.pdf
Intel Virtualization Technology can be hardware intensive, and although the software requirement is low (Windows Vista or later), it is Intel-only and supported only by CPUs that implement it, such as the Intel Core i7 (AMD CPUs have their own equivalent, AMD-V).
Not all Windows computers have an Intel CPU, though (a good number of them do). The only people who use VT-x are developers and people who want to run a different operating system than the one their computer came with, so not everyone. As for it being a security issue, I'm not sure, but virtualization can get very RAM intensive (e.g. Intel's HAXM accelerator for the Android emulator defaults to 2 GB of RAM, with a minimum of 512 MB!).
If you want to know more, you can check out this article or the website:
http://www.intel.com/content/www/us/en/virtualization/virtualization-technology/intel-virtualization-technology.html
I am learning to customize the Linux kernel to make it portable to embedded systems. To test my customized kernel, I want a completely open source ARM board. I looked into the Raspberry Pi, but some of its firmware (i.e. "start.elf") is not open source. Can anybody name an ARM board which is completely open source?
Also, are there any such boards whose ROM/AVRAM contents can also be replaced?
Thank you!
If by "completely open source" you mean open source bootloader, kernel and OS (correct me if I'm wrong), then I would recommend one of Beagle family boards -- they are inexpensive, user friendly and have a good community support. Their open source stack consists of U-Boot, Linux kernel and one of few available distributions. If you need advanced features, check out EVM's by Texas Instrument, but they cost much more.
The Jetson TK1 from NVIDIA is a developer platform.
It has the U-Boot bootloader, a Linux kernel and a root filesystem,
and the board layout is also shared, so you can recompile things yourself.
It comes with 2 GB of RAM and a 2.3 GHz quad-core processor, with a GPU ready for CUDA-style high-level programming.
http://www.newegg.com/Product/Product.aspx?Item=N82E16813190005
I have developed an app to analyze videos using OpenCV and Visual Studio 2013. I was planning to run this app in Azure, assuming it would run faster in the cloud. To my surprise, however, the app ran slower than on my desktop, taking about twice the time even when I configured the Azure instance with 8 cores. It is a 64-bit app, compiled with the appropriate compiler optimizations. Can someone help me understand why I am losing time in the cloud, and is there a way to improve the timing there?
The app takes as input a video (locally in each case) and outputs a flat file with the analysis data.
I am not sure why people are voting to close this question. This is very much about programming and if possible, please help me in pinpointing the problem.
There are only going to be three reasons for this:
Disk I/O speed
CPU speed
Memory speed
Take a look here, where someone actually compared on-premise performance with the cloud: Azure compute power: Extra Large VM slow
Basically, the clock speed is most likely lower (around 1.6 GHz), and disk I/O, while local, is normally capped at 300 or 500 IOPS, which is only just higher than a 15k RPM drive and nowhere near SSD level.
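If you want to verify the clock speed difference yourself, you can run the same query on your desktop and on the Azure VM and compare the results (this uses the standard WMI processor class, nothing Azure-specific):

rem Show CPU model, rated clock speed (MHz) and core count on the current machine.
wmic cpu get Name, MaxClockSpeed, NumberOfCores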
I am not sure about memory speed. While you can keep adding cores, most programs, even ones optimized for multiple cores, still have significant single-threaded sections, which slow the whole operation down (for example, if half of the work is single-threaded, eight cores give at most about a 1.8x overall speedup). A higher clock speed is what can make a large difference.
I have 15 HP t5500 thin clients and I am looking for a server configuration which can handle up to 20 thin clients.
Usage: Internet (browser), MS Office, simple multimedia (movies, songs).
No games, no CPU-intensive programs (like Photoshop or Visual Studio, etc.).
I need to know a reasonable configuration for the server.
My thoughts:
8 GB RAM, Core i5, 500 GB hard disk, Windows Server 2008.
Does one license of Windows Server 2008 support 15-20 thin clients?
Thank you
This is a slightly complex question, but from my experience, for 20 users on a Windows 2008 system you will need at least 16 GB of RAM. In my environment, when I used physical machines, I used Dell PowerEdge 1950s with dual Xeon E5410s (4 cores @ 2.33 GHz). 500 GB is more than I used, but storage is definitely something not to skimp on if you can afford it.
One Windows Server 2008 license comes with 5 user CALs, but beyond that you will need to purchase either user or device (machine) CALs. The difference: with user CALs each user needs a CAL, which follows them; with device CALs each thin client needs a CAL, and it does not matter how many users log on as long as they are using one of those devices. You will also need to install the TS Licensing Server feature and point your Terminal Services server to that installation; that is where you install the purchased licenses.
Now, a practical matter: multimedia does not work well across an RDP connection, even on the same LAN. You can get it to work OK, but if all 20 clients are streaming video it will become very choppy, almost unwatchable. With the hardware you are proposing it is possible, but I doubt very highly that you will get a usable environment for your users. If you went with Windows Server 2003 x64, you could get away with the hardware you wanted; the reason is that in Windows Server 2008 Microsoft separated the memory for each user session, while in 2003 and 2000 the sessions shared some of the memory.