Laravel PHPUnit testing takes a long time in Windows Docker

I am working on Laravel with Docker.
When I run the PHPUnit tests on macOS, they take a few seconds.
However, on Windows 10 they take a few minutes.
Is there any way to fix this problem?
Thanks.

If you're running on a non-Linux OS, Docker has to virtualise your file system, and this costs a certain amount of time per file. For programs that are compiled into a single executable this matters less at runtime (though it clearly has compile-time implications of its own), but for scripting languages like PHP it can mean that every request runs very slowly, since every file that is used has to be 'translated' each time it is read. This is also a problem on Docker for Mac (so you're actually paying that cost there too, just less of it, since macOS is at least a Unix-like system under the hood). On Windows, Linux is, I believe, completely virtualised, which adds even more time.
This Reddit discusses the problem to an extent:
https://www.reddit.com/r/docker/comments/7xvlye/docker_for_macwindows_performances_vs_linux/
With this being particularly interesting (I have not tried it myself):
https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly
There is also a good community-created solution which we have used to solve our Docker for Mac problem. I don't see why their Windows options wouldn't work similarly well in your case. You can find it here:
https://github.com/EugenMayer/docker-sync/wiki/docker-sync-on-Windows
It basically sets up an intermediate service that copies your files into an intermediate volume (one that uses the 'correct' filesystem) only when a file is updated, which speeds up runtime performance immensely.
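For anyone who wants to try docker-sync, installation and startup are only a couple of commands. This is a minimal sketch assuming the Ruby-gem install path described in their wiki; a docker-sync.yml describing the synced volume still has to be written per their docs:
# install the docker-sync gem (requires Ruby)
gem install docker-sync
# start just the sync service, reading docker-sync.yml from the project directory
docker-sync start
# or start the sync service plus docker-compose together
docker-sync-stack start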
I know it looks like quite an intimidating process, but this problem is fundamental, so you're going to have to do a certain amount of work to fix things!
FWIW I had that working on Docker 4 Mac, but it added a layer of complexity to our dev process that I found annoying, so in the end I've got myself a Linux box for work. To be honest, installing Linux as dual boot on my Windows machine (which has been my at-home solution) was probably easier than tweaking Docker 4 Mac to my satisfaction, so you might want to consider that. I have used this page twice:
https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
And it's worked fine each time. One caveat: it suggests a small amount of disk for your root (/) volume, but Docker stores its data under root, so give it around 100 GB (not the 10-20 GB that page recommends).
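If you want to sanity-check that advice on an existing Linux install, here is a quick sketch (assuming Docker's default data directory; the path can differ if you've changed data-root):
docker info | grep 'Docker Root Dir'    # usually /var/lib/docker
df -h /var/lib/docker                   # free space on the volume Docker lives on
sudo du -sh /var/lib/docker             # space currently used by images and volumes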

Related

docker on OSX slow volumes

I'm trying to use Docker Beta on OS X, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get a 6 s page load time. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours trying to figure it out; once you know how, it's super easy and fixes all the slowdown issues!
The key here is to understand that this solution uses NFS (Network File System) mounts as the means of communication between the Docker containers and your Mac, instead of the standard macOS file sharing, which is currently very slow, either due to bugs or to the way it works.
Follow these steps exactly.
1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up Terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can do it as a one-liner: git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder and create a new file in its etc folder titled d4m-nfs-mounts.txt
3.) Add the following line to it:
/Users/yourusername:/Users/yourusername:0:0
What the above does is let you keep using relative folders with docker-compose; the trailing 0:0 sets the uid and gid the NFS export is mapped to (root in this case).
EDIT
Do not put /Volumes here!!
4.) Go to your Docker preferences and do the following:
Make sure only /tmp is listed under file sharing and NOTHING ELSE. I mean nothing else: it won't work if anything else is there, because it will conflict with the NFS exports the script creates for you later. Restart Docker, and docker-compose down any running containers as well.
5.) Finally navigate to the d4m-nfs directory we created in step 1 and type the following command, /bin/bash d4m-nfs.sh
Edit: as another user from the GitHub repo (if-kenn) pointed out, the correct way to run the command above is ./d4m-nfs.sh, which uses the shebang to pick the shell that runs it.
If done correctly there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; this will create errors and you will have to delete your exports file to start over. In fact, any time you make changes you will have to clear your exports file.
This is what my /etc/exports ends up looking like.
EDIT: IMPORTANT -- remove any /private and /Volumes entries! The file should only contain the /Users/yourusername export now.
If you see anything other than that, you were not running the script with bash. If you make any mistakes, you can quickly get to the exports file on a Mac and just clear it out to start over.
In Finder, just select Go to Folder
and then type /etc/exports
This is a nice shortcut to get to the file quickly and clear it out in your favorite text editor.
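If you prefer the terminal to Finder, the same inspect-and-reset can be done with a couple of commands (a sketch; emptying the file removes all NFS exports, so only do this when you are starting the setup over):
cat /etc/exports                 # inspect the current NFS exports
sudo sh -c '> /etc/exports'      # empty the file to start over
sudo nfsd restart                # reload the macOS NFS daemon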
Also make sure no containers are running, or you will get the "........" loop of death. If this loop of death continues, upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this issue (https://github.com/IFSight/d4m-nfs/issues/3).
A note on the "...." loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be under /Users/username.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there or this won't work either: chmod -R 777 /tmp
6.) If you did it right, the script will run to completion without errors.
Then simply run docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work as before... except with NO MORE slowdowns!
You will need to run this anytime you restart your computer or docker.
Also note: if you get mounting errors, you probably don't have your project stored under your /Users/username directory. Remember, that is what we mounted. If your project lives somewhere else, you will need to modify the d4m-nfs-mounts.txt file accordingly.
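For reference, the whole procedure condenses to a handful of commands (a sketch only; 'yourusername' and the project path are placeholders, and the Docker preferences change from step 4 still has to be made in the GUI):
cd ~ && git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
echo "/Users/yourusername:/Users/yourusername:0:0" > ~/d4m-nfs/etc/d4m-nfs-mounts.txt
chmod -R 777 /tmp                      # the script needs to write here
cd ~/d4m-nfs && ./d4m-nfs.sh           # run via the shebang (bash), never with sh
cd ~/your-symfony-project && docker-compose up -d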
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request has already been accepted to improve performance (https://github.com/docker/docker/pull/31047).
This will be released sometime in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source of alternatives to osxfs can be found at: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credit to Eugen Mayer for setting this up.
EDIT:
First improvement is implemented in the edge release. https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanations from the Docker team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of time searching for a viable solution, and I found one:
d4m-nfs
It allows you to use Docker volumes via NFS.
In my case it increased performance 16x (1.8 s vs ~30 s).
d4m-nfs has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I'll just leave this here for other Googlers.
Normally, volumes should be fast.
But you cannot do much to make them faster without changing the format of your disk.
Maybe the bottleneck is the CPU or RAM instead.
You can check that with the command docker stats. By default the Docker VM is limited to 2 cores and 2 GB of RAM; you can change this in the Docker for Mac GUI.
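For example, to see whether CPU or memory is the bottleneck while your containers are busy (a quick sketch; the 'CPUs' and 'Total Memory' fields in docker info are the limits the GUI sliders control):
docker stats --no-stream                      # one-off snapshot of per-container CPU/memory usage
docker info | grep -E 'CPUs|Total Memory'     # resources currently allocated to the Docker VM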
I had exactly the same thing. For me using docker-bg-sync (see on GitHub) made a dramatic improvement in speed and CPU usage.
Not as nice as just mounting the volume as you have to start a new container for every sync but it does the job.
In the latest Docker (17.06.0-ce-mac18), volumes mounted with :cached seem to perform quite decently.
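For anyone who wants to try that, the flag goes on the bind mount itself; a minimal sketch with placeholder paths and image name:
# docker run form
docker run -v "$(pwd)":/var/www/html:cached my-php-image
# docker-compose form (the volumes entry would read)
#   volumes:
#     - .:/var/www/html:cached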
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMAKE/Ninja/GCC and it's almost twice as fast as the exact same build from Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker/Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and the Docker for Mac filesystems using Syncthing. We built an open-source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd

Not able to install Gentoo Linux

Here is my situation: I downloaded Gentoo, started running it, downloaded the stage III tarball from links, and then tried to extract it. A stream of white sentences flowed down my screen really fast for about a minute, just like in the YouTube tutorial I was watching. However, after that, instead of going to the correct stage, it says it cannot write, not enough space on device. I tried repartitioning, but I'm not sure what device it is talking about. Please help.
Sorry you're having this issue. In general, I truly believe the Gentoo Handbook is quite well written and even a newbie can follow it... Here is some advice that I hope helps (most important: digest the handbook and follow it carefully; I'm not saying "RTFM", it's just that for Gentoo the handbook is essential, and without it you can get lost, especially if you're just starting).
From my experience, the "stream of white sentences" would be from verbosely un-tar'ing your stage3. Usually I only want to see errors, so my suggestion is to remove the "v" (i.e. change "tar xjvpf" to "tar xjpf") so that only errors appear while un-tar'ing. The caveat is that you'll wonder whether it hung or is still busy un-tar'ing. Use Alt-F1 and Alt-F2 (if in console/tty mode) to switch back and forth, log in on another TTY and run 'ps auxf' to see if it's still tar'ing. If you're using a GUI terminal, just open another tab and run 'ps auxf'; you get the picture...
Also, learn the command 'df'; it'll come in handy. If you're running out of disk space, perhaps you're trying to install/untar stage3 onto your ramdisk (grin) rather than your mounted root (i.e. "/mnt/gentoo"). Mount your root '/' device at '/mnt/gentoo', cd into that mounted path, then try again (don't forget to mount your '/boot' as well as proc, dev, sys, etc. before you chroot - again, follow the handbook as carefully as you can. Also, distros such as Debian hybrids, including Ubuntu, use a symlink for shm, so read the part about 'rm /dev/shm' and follow it carefully; if you're using the Gentoo LiveCD, you can ignore that part).
Other useful commands if you're confused by (or new to) mounting devices are 'lsblk' and 'mount' (by itself); experiment with them to inspect the sizes of your partitions (again, 'df' comes in handy as well) and to work out which device is which (i.e. /dev/sda1 versus /dev/sdb1). Hint: when you run 'mkfs', use "-L" (or for some filesystems "-N") to label/name your devices, so they're easier to spot in the output of 'mount' or 'lsblk'. If you're using a GUI/desktop version of some distro, there may be tools such as "gparted" that give you a visual overview of your devices, which can be helpful. One thing I'd advise you to stay away from if you're just starting is RAID (i.e. mdadm), until you're comfortable with how grub/lilo works. Get your kernel (gentoo-sources) compiled and the MBR written (i.e. grub-install), try booting, and have fun first (oh, and if you can avoid installing a GUI like Gnome/KDE from the get-go, avoid that as well - you'll run into questions such as "should I use systemd or OpenRC?", then hit the obstacle that some Gnome parts need systemd when you've chosen OpenRC, and so on).
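To make that concrete, here is a rough sketch of the sequence (device names like /dev/sda3 are examples only; follow the handbook for your actual partition layout):
lsblk                                    # identify your partitions and their sizes
mkfs.ext4 -L gentoo-root /dev/sda3       # label the root filesystem (example device)
mount /dev/sda3 /mnt/gentoo              # mount root where the handbook expects it
df -h /mnt/gentoo                        # confirm there is enough free space
cd /mnt/gentoo
tar xjpf /path/to/stage3-*.tar.bz2       # no 'v', so only errors are printed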
If I may add my opinion: Gentoo (also Arch and FreeBSD) is an excellent place to start if you want to learn the inner workings of Linux applications (library dependencies, why packages are important rather than downloading each library manually and compiling them one by one, etc.). I hope this won't discourage you from Linux altogether, but if the installation frustrates you and all you want to do is test-drive Linux, there are much easier distros where you don't have to understand USE flags and other compilation mechanisms (if you have an old i586, it makes sense to build with hand-picked libraries so that something leaner can run faster, but if you have a fast machine, why compile binaries when somebody who is expert at it has already done it for you?). SUSE and Fedora/RedHat/CentOS used to be the least frustrating, since they could find/detect hardware (legacy and new), but these days I usually tell people, "if you know how to install Windows, you can install Ubuntu", so that too may be a good way to get your feet wet. Good luck!
0_o wow, well... how about some 411, like the size of your HDD and exactly how you partitioned it? Linux will look for specific directories and, if they're missing, will instead start installing into the root dir. How you partition is an important first step. Once you have a generally good partition setup, most Linux installs will go fine. Most basic tables include /, /home, /var and a swap.

Hadoop FileSystem.getFS() pauses for about 2 minutes

I'm having a very strange problem. I'm using dfs-datastores Pail abstraction to write data to HDFS in Java. I don't think the Pail piece is important to the problem though.
When it calls org.apache.hadoop.fs.FileSystem getFS(java.lang.String path) with a path on my local filesystem it pauses for about 2 minutes seemingly doing nothing then returns. This is on my laptop.
The weird thing is that it worked really fast when I was on the network at my office today, but now that I'm home it's doing it again. I'm running Ubuntu 10.10 64-bit with Java 1.7.
Anyone have any ideas what it's doing? What could be different between being at work and being at home?
UPDATE:
I've been stepping through code with the debugger and it seems to be having trouble in Configuration.loadResource(). It's calling that multiple times and it will take 5-10 seconds to return from that function.
UPDATE2:
I've narrowed this down a little further. The biggest hang-up seems to be when it calls KerberosName.setConfiguration(), which would explain why it runs fast at work, since the Active Directory there acts as a Kerberos server. I don't have one here at home, so it can't find one. Now the question is why in the world it's trying to load the Java Kerberos stuff at all.
I found a solution (or at least a work around). I installed the krb5-kdc package and now my little program runs fast without any unexplained pauses. After this I removed krb5-kdc, tested and it was still running fast. I removed /etc/krb5.conf and it started doing the pause again. It looks like using the Hadoop library on Ubuntu (at least) requires a /etc/krb5.conf file.
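If anyone wants to reproduce that workaround without installing and then removing krb5-kdc, creating a minimal /etc/krb5.conf by hand should be enough. This is a sketch, and the realm name is a placeholder, since the point is only that the file exists and names a default realm:
sudo tee /etc/krb5.conf > /dev/null <<'EOF'
[libdefaults]
    default_realm = EXAMPLE.COM
EOF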
Maybe this will help someone else.

How do I get Windows to go as fast as Linux for compiling C++?

I know this is not so much a programming question but it is relevant.
I work on a fairly large cross platform project. On Windows I use VC++ 2008. On Linux I use gcc. There are around 40k files in the project. Windows is 10x to 40x slower than Linux at compiling and linking the same project. How can I fix that?
A single-change incremental build takes 20 seconds on Linux and more than 3 minutes on Windows. Why? I can even install the 'gold' linker on Linux and get that time down to 7 seconds.
Similarly git is 10x to 40x faster on Linux than Windows.
In the git case it's possible git is not using Windows in the optimal way but VC++? You'd think Microsoft would want to make their own developers as productive as possible and faster compilation would go a long way toward that. Maybe they are trying to encourage developers into C#?
As a simple test, find a folder with lots of subfolders and do a simple
dir /s > c:\list.txt
on Windows. Do it twice and time the second run so it runs from the cache. Copy the files to Linux and do the equivalent 2 runs and time the second run.
ls -R > /tmp/list.txt
I have 2 workstations with the exact same specs: HP Z600s with 12 GB of RAM and 8 cores at 3.0 GHz. On a folder with ~400k files, Windows takes 40 seconds; Linux takes < 1 second.
Is there a registry setting I can set to speed up Windows? What gives?
A few slightly related links, relevant to compile times rather than I/O:
Apparently there's an issue in Windows 10 (not in Windows 7) where closing a process holds a global lock. When compiling with multiple cores, and therefore multiple processes, this issue bites.
The /analyze option can adversely affect performance because it loads a web browser. (Not relevant here, but good to know.)
Unless a hardcore Windows systems hacker comes along, you're not going to get more than partisan comments (which I won't do) and speculation (which is what I'm going to try).
File system - you should try the same operations (including the dir) on the same filesystem. I came across this, which benchmarks a few filesystems for various parameters.
Caching. I once tried to run a compilation on Linux on a RAM disk and found that it was slower than running it on disk thanks to the way the kernel takes care of caching. This is a solid selling point for Linux and might be the reason why the performance is so different.
Bad dependency specifications on Windows. Maybe the chromium dependency specifications for Windows are not as correct as for Linux. This might result in unnecessary compilations when you make a small change. You might be able to validate this using the same compiler toolchain on Windows.
A few ideas (the filesystem-level tweaks are collected into a single command sketch after this list):
Disable 8.3 names. This can be a big factor on drives with a large number of files and a relatively small number of folders: fsutil behavior set disable8dot3 1
Use more folders. In my experience, NTFS starts to slow down with more than about 1000 files per folder.
Enable parallel builds with MSBuild; just add the "/m" switch, and it will automatically start one copy of MSBuild per CPU core.
Put your files on an SSD -- helps hugely for random I/O.
If your average file size is much greater than 4KB, consider rebuilding the filesystem with a larger cluster size that corresponds roughly to your average file size.
Make sure the files have been defragmented. Fragmented files cause lots of disk seeks, which can cost you a factor of 40+ in throughput. Use the "contig" utility from sysinternals, or the built-in Windows defragmenter.
If your average file size is small, and the partition you're on is relatively full, it's possible that you are running with a fragmented MFT, which is bad for performance. Also, files smaller than 1K are stored directly in the MFT. The "contig" utility mentioned above can help, or you may need to increase the MFT size. The following command will double it, to 25% of the volume: fsutil behavior set mftzone 2 Change the last number to 3 or 4 to increase the size by additional 12.5% increments. After running the command, reboot and then create the filesystem.
Disable last access time: fsutil behavior set disablelastaccess 1
Disable the indexing service
Disable your anti-virus and anti-spyware software, or at least set the relevant folders to be ignored.
Put your files on a different physical drive from the OS and the paging file. Using a separate physical drive allows Windows to use parallel I/Os to both drives.
Have a look at your compiler flags. The Windows C++ compiler has a ton of options; make sure you're only using the ones you really need.
Try increasing the amount of memory the OS uses for paged-pool buffers (make sure you have enough RAM first): fsutil behavior set memoryusage 2
Check the Windows error log to make sure you aren't experiencing occasional disk errors.
Have a look at Physical Disk related performance counters to see how busy your disks are. High queue lengths or long times per transfer are bad signs.
The first 30% of disk partitions is much faster than the rest of the disk in terms of raw transfer time. Narrower partitions also help minimize seek times.
Are you using RAID? If so, you may need to optimize your choice of RAID type (RAID-5 is bad for write-heavy operations like compiling)
Disable any services that you don't need
Defragment folders: copy all files to another drive (just the files), delete the original files, copy all folders to another drive (just the empty folders), then delete the original folders, defragment the original drive, copy the folder structure back first, then copy the files. When Windows builds large folders one file at a time, the folders end up being fragmented and slow. ("contig" should help here, too)
If you are I/O bound and have CPU cycles to spare, try turning disk compression ON. It can provide some significant speedups for highly compressible files (like source code), with some cost in CPU.
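Collecting the filesystem-level tweaks above in one place (run from an elevated command prompt; a sketch, the solution name is a placeholder, and several of these require a reboot to take effect):
rem stop generating 8.3 short names (big win with many files per folder)
fsutil behavior set disable8dot3 1
rem stop recording last-access timestamps
fsutil behavior set disablelastaccess 1
rem give the cache manager more paged-pool memory (only with plenty of RAM)
fsutil behavior set memoryusage 2
rem reserve a larger MFT zone, 25% of the volume (applies to filesystems created after a reboot)
fsutil behavior set mftzone 2
rem parallel build, one MSBuild node per core
msbuild MySolution.sln /m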
NTFS saves the file access time every time. You can try disabling it:
"fsutil behavior set disablelastaccess 1"
(restart)
The issue with Visual C++ is, as far as I can tell, that optimizing this scenario is not a priority for the compiler team.
Their solution is for you to use their precompiled-header feature. This is what Windows-specific projects have done. It is not portable, but it works.
Furthermore, on Windows you typically have virus scanners, as well as System Restore and search tools, that can ruin your build times completely if they monitor your build folder for you. The Windows 7 Resource Monitor can help you spot this.
I have a reply here with some further tips for optimizing vc++ build times if you're really interested.
The difficulty in doing that is due to the fact that C++ tends to spread itself and the compilation process over many small, individual, files. That's something Linux is good at and Windows is not. If you want to make a really fast C++ compiler for Windows, try to keep everything in RAM and touch the filesystem as little as possible.
That's also how you'll make a faster Linux C++ compile chain, but it is less important in Linux because the file system is already doing a lot of that tuning for you.
The reason for this is due to Unix culture:
Historically file system performance has been a much higher priority in the Unix world than in Windows. Not to say that it hasn't been a priority in Windows, just that in Unix it has been a higher priority.
Access to source code.
You can't change what you can't control. Lack of access to Windows NTFS source code means that most efforts to improve performance have been though hardware improvements. That is, if performance is slow, you work around the problem by improving the hardware: the bus, the storage medium, and so on. You can only do so much if you have to work around the problem, not fix it.
Access to Unix source code (even before open source) was more widespread. Therefore, if you wanted to improve performance you would address it in software first (cheaper and easier) and hardware second.
As a result, there are many people in the world that got their PhDs by studying the Unix file system and finding novel ways to improve performance.
Unix tends towards many small files; Windows tends towards a few (or a single) big file.
Unix applications tend to deal with many small files. Think of a software development environment: many small source files, each with its own purpose. The final stage (linking) does create one big file, but that is a small percentage.
As a result, Unix has highly optimized system calls for opening and closing files, scanning directories, and so on. The history of Unix research papers spans decades of file system optimizations that put a lot of thought into improving directory access (lookups and full-directory scans), initial file opening, and so on.
Windows applications tend to open one big file, hold it open for a long time, and close it when done. Think of MS Word: msword.exe (or whatever) opens the file once and appends to it for hours, updating internal blocks, and so on. Optimizing the opening of the file would be wasted effort.
The history of Windows benchmarking and optimization has been on how fast one can read or write long files. That's what gets optimized.
Sadly software development has trended towards the first situation. Heck, the best word processing system for Unix (TeX/LaTeX) encourages you to put each chapter in a different file and #include them all together.
Unix is focused on high performance; Windows is focused on user experience
Unix started in the server room: no user interface. The only thing users see is speed. Therefore, speed is a priority.
Windows started on the desktop: Users only care about what they see, and they see the UI. Therefore, more energy is spent on improving the UI than performance.
The Windows ecosystem depends on planned obsolescence. Why optimize software when new hardware is just a year or two away?
I don't believe in conspiracy theories, but if I did, I would point out that in the Windows culture there are fewer incentives to improve performance. The Windows business model depends on people buying new machines like clockwork. (That's why the stock price of thousands of companies is affected if MS ships an operating system late or if Intel misses a chip release date.) This means there is an incentive to solve performance problems by telling people to buy new hardware, not by fixing the real problem: slow operating systems. Unix comes from academia, where the budget is tight and you can get your PhD by inventing a new way to make file systems faster; rarely does someone in academia get points for solving a problem by issuing a purchase order. In Windows there is no conspiracy to keep software slow, but the entire ecosystem depends on planned obsolescence.
Also, as Unix is open source (and even when it wasn't, everyone had access to the source), any bored PhD student can read the code and become famous by making it better. That doesn't happen in Windows (MS does have a program that gives academics access to Windows source code, but it is rarely taken advantage of). Look at this selection of Unix-related performance papers: http://www.eecs.harvard.edu/margo/papers/ or look up the history of papers by Ousterhout, Henry Spencer, or others. Heck, one of the biggest (and most enjoyable to watch) debates in Unix history was the back and forth between Ousterhout and Seltzer: http://www.eecs.harvard.edu/margo/papers/usenix95-lfs/supplement/rebuttal.html
You don't see that kind of thing happening in the Windows world. You might see vendors one-upping each other, but that seems to be much rarer lately, since the innovation all seems to be happening at the standards-body level.
That's how I see it.
Update: If you look at the new compiler chains that are coming out of Microsoft, you'll be very optimistic because much of what they are doing makes it easier to keep the entire toolchain in RAM and repeating less work. Very impressive stuff.
I personally found that running a Windows virtual machine on Linux removed a great deal of the I/O slowness in Windows, likely because the Linux host was doing lots of caching that Windows itself was not.
Doing that, I was able to speed up compile times of a large (250 kloc) C++ project I was working on from something like 15 minutes to about 6 minutes.
Incremental linking
If the VC 2008 solution is set up as multiple projects with .lib outputs, you need to set "Use Library Dependency Inputs"; this makes the linker link directly against the .obj files rather than the .lib. (And actually makes it incrementally link.)
Directory traversal performance
It's a bit unfair to compare directory crawling on the original machine with crawling a newly created directory with the same files on another machine. If you want an equivalent test, you should probably make another copy of the directory on the source machine. (It may still be slow, but that could be due to any number of things: disk fragmentation, short file names, background services, etc.) Although I think the perf issues for dir /s have more to do with writing the output than measuring actual file traversal performance. Even dir /s /b > nul is slow on my machine with a huge directory.
I'm pretty sure it's related to the filesystem. I work on a cross-platform project for Linux and Windows where all the code is common except for where platform-dependent code is absolutely necessary. We use Mercurial, not git, so the "Linuxness" of git doesn't apply. Pulling in changes from the central repository takes forever on Windows compared to Linux, but I do have to say that our Windows 7 machines do a lot better than the Windows XP ones. Compiling the code after that is even worse on VS 2008. It's not just hg; CMake runs a lot slower on Windows as well, and both of these tools use the file system more than anything else.
The problem is so bad that most of our developers that work in a Windows environment don't even bother doing incremental builds anymore - they find that doing a unity build instead is faster.
Incidentally, if you want to dramatically decrease compilation times on Windows, I'd suggest the aforementioned unity build. It's a pain to implement correctly in the build system (I did it for our team in CMake), but once done it automagically speeds things up for our continuous-integration servers. Depending on how many binaries your build system spits out, you can get a 1-to-2-order-of-magnitude improvement. Your mileage may vary. In our case I think it sped up the Linux builds threefold and the Windows one by about a factor of 10, but we have a lot of shared libraries and executables (which reduces the advantage of a unity build).
How do you build your large cross platform project?
If you are using common makefiles for Linux and Windows, you can easily degrade Windows performance by a factor of 10 if the makefiles are not designed to be fast on Windows.
I just fixed the makefiles of a cross-platform project that uses common (GNU) makefiles for Linux and Windows. make was starting a sh.exe process for every line of a recipe, and that was causing the performance difference between Windows and Linux!
According to the GNU make documentation
.ONESHELL:
should solve the issue, but this feature is (currently) not supported by Windows make. So rewriting the recipes as single logical lines (e.g. by adding ;\ or \ at the end of the existing lines) worked very well!
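To illustrate, here is a toy rule whose three recipe lines have been joined into one logical line, so Windows make spawns a single sh.exe for the whole recipe instead of one per line (a sketch, not taken from the project in question; the leading whitespace on recipe lines must be a literal tab):
build/foo.o: src/foo.c
	mkdir -p build; \
	gcc -c src/foo.c -o build/foo.o; \
	echo built foo.o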
IMHO this is all about disk I/O performance. The order of magnitude suggests that a lot of the operations go to disk under Windows whereas they're handled in memory under Linux, i.e. Linux is caching better. Your best option under Windows is to move your files onto a fast disk, server or filesystem. Consider buying a solid-state drive, or moving your files to a ramdisk or a fast NFS server.
I ran the directory traversal tests and the results are very close to the compilation times reported, suggesting this has nothing to do with CPU processing times or compiler/linker algorithms at all.
Measured times as suggested above traversing the chromium directory tree:
Windows Home Premium 7 (8GB Ram) on NTFS: 32 seconds
Ubuntu 11.04 Linux (2GB Ram) on NTFS: 10 seconds
Ubuntu 11.04 Linux (2GB Ram) on ext4: 0.6 seconds
For the tests I pulled the chromium sources (both under win/linux)
git clone http://github.com/chromium/chromium.git
cd chromium
git checkout remotes/origin/trunk
To measure the time I ran
ls -lR > ../list.txt ; time ls -lR > ../list.txt # bash
dir -Recurse > ../list.txt ; (measure-command { dir -Recurse > ../list.txt }).TotalSeconds #Powershell
I turned off access timestamps and my virus scanner, and increased the cache-manager settings under Windows (>2 GB RAM) - all without any noticeable improvement. The fact of the matter is that, out of the box, Linux performed 50x better than Windows with a quarter of the RAM.
For anybody who wants to contend that the numbers are wrong - for whatever reason - please give it a try and post your findings.
Try using jom instead of nmake
Get it here:
https://github.com/qt-labs/jom
The fact is that nmake uses only one of your cores; jom is a clone of nmake that makes use of multicore processors.
GNU make does this out of the box thanks to its -j option, which might be one reason for its speed compared to Microsoft's nmake.
jom works by executing different make commands in parallel on different processors/cores.
Try it yourself and feel the difference!
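Usage is essentially drop-in (a sketch; -j is the switch jom adds on top of nmake's options, and %NUMBER_OF_PROCESSORS% is a standard Windows environment variable):
rem before, single core:
nmake
rem after, one job per core:
jom -j %NUMBER_OF_PROCESSORS%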
I want to add just one observation about using GNU make and other MinGW tools on Windows: they seem to resolve hostnames even when the tools cannot communicate via IP at all. I would guess this is caused by some initialisation routine of the MinGW runtime. Running a local DNS proxy helped me improve compilation speed with these tools.
Before that I got a big headache, because build speed dropped by a factor of 10 or so whenever I opened a VPN connection in parallel; in that case all those DNS lookups went through the VPN.
This observation might also apply to other build tools, not only MinGW-based ones, and it may have changed in the latest MinGW version in the meantime.
I recently managed another speed-up of about 10% on Windows with GNU make by replacing the MinGW bash.exe with the version from win-bash.
(The win-bash is not very comfortable regarding interactive editing.)

Autoconf on Windows 7 dreadfully slow

I am working on a project using Google's cmockery unit-testing framework. For a while, I was able to build the cmockery project with no problems, e.g. "./configure", "make && make install", etc., and it took a reasonable amount of time (1-2 minutes or so). After working on other miscellaneous tasks on the computer and going back to rebuild it, it becomes horrendously slow (e.g. after fifteen minutes it is still checking system variables).
I did a system restore to earlier in the day and it goes back to working properly for a time. I have been very careful about monitoring any changes I make to the system, and have not been able to find any direct correlation between something I am changing and the problem. However, the problem inevitably recurs (usually as soon as I assume I must have accidentally avoided the problem and move on). The only way I am able to fix it is to do a system restore to a time when it was working. (Sometimes restarting the machine works as well, sometimes it does not.)
I imagine that the problem is between the environment and autoconf itself rather than something specific in cmockery's configuration. Any ideas?
I am using MinGW under Windows 7 Professional.
Make sure that antivirus software is not interfering. Antivirus programs often monitor every file access; autoconf accesses a great many files during its run and is likely to be slowed down drastically.
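If the antivirus in question happens to be Windows Defender (on newer Windows versions than the asker's), exclusions can be added from an elevated PowerShell prompt; this is a sketch with placeholder paths for wherever MinGW and the project actually live:
Add-MpPreference -ExclusionPath "C:\MinGW"
Add-MpPreference -ExclusionPath "C:\projects\cmockery"
Add-MpPreference -ExclusionProcess "sh.exe","make.exe","gcc.exe"
For other antivirus products, look for an equivalent 'exclusions' or 'trusted folders' setting.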
