MonetDBLite has no trace capabilities. The Release build compiled with Visual Studio is faster than the one compiled with MinGW64, with optimizations (/O2) enabled for both. Even so, the fastest build appears to be 3-4 times slower than the time mclient reports for the same imported data, query and disk storage (same SSD): roughly 6 sec for the DLL vs. 1.8 sec for the MonetDB server. I expected the Lite version to have comparable times (the article describes the Lite library as being mostly faster than MonetDB itself). MonetDBLite is the version downloaded in October 2018, while the MonetDB server is the Aug2018-SP2 release.
In the absence of any trace capability (sys.tracelog() is a dummy function), is there any way I can debug this situation?
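For what it's worth, the ~6 sec figure for the DLL comes from simple wall-clock timing around the query call on the client side. Below is a minimal sketch of that measurement; run_query() is just a placeholder for the embedded MonetDBLite call I actually make (I am not reproducing the exact API signatures here), so the sketch only demonstrates the timing itself:

    #include <stdio.h>
    #include <windows.h>

    /* Placeholder standing in for the MonetDBLite embedded query call;
       Sleep() is only here so the sketch runs on its own. */
    static void run_query(const char *sql)
    {
        (void)sql;
        Sleep(100);
    }

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        QueryPerformanceCounter(&t0);
        run_query("SELECT ...");            /* the benchmarked query */
        QueryPerformanceCounter(&t1);

        printf("query took %.1f ms\n",
               (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart);
        return 0;
    }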
I have recently been developing a Xamarin-based app in Visual Studio 2017 and I am not sure whether the build and debug performance I see is what can be expected or whether something is wrong.
Environment: iMac (late 2015), quad-core i5 @ 3.5 GHz, 24 GB RAM.
I am running Visual Studio (latest) under Parallels 13 on Windows 10 and have assigned all four cores and 20 GB RAM to the VM (it makes no difference if I assign less).
The solution is a standard Xamarin-based solution with 3 projects and about 10 classes totalling roughly 300 LOC (yes, really; there's almost nothing in there yet).
A rebuild takes about 1 minute. Starting the application in debug mode takes about 30 s before the simulator shows up.
Looking at the code size and hardware specs I was expecting build and simulation to be a matter of seconds.
Am I wrong? Even considering the VM I'd not have expected these numbers.
Is anybody able to share experiences/thoughts?
Your problem isn't simply compile time. Every time you build your project, your shared code is compiled into a DLL, code dependencies are checked, and the result is linked into the native project, which is itself compiled; resources are packed, integrity-checked and signed, and everything is finally bundled (not to mention the included NuGet packages and other plugins); then the whole package is packed into an app archive, which also takes time to write.
Your app is also transmitted to your device via USB or the network (the default is USB).
Considering what is happening "under the hood", 30 seconds is quite fast.
However, I have found that performance depends less on CPU and RAM (at least if your dev machine has a decent amount of both) than on the performance of your hard disk.
If you really want to speed things up, you might consider running Visual Studio and doing your compiling on an NVMe drive (an alternative might be an SSD RAID).
For instance, I once had a Xamarin app with a lot of dependencies on various NuGet packages. Compiling the iOS version (full rebuild) took about 25 minutes on a Mac Mini (2011 model upgraded with an aftermarket Samsung 850 Pro); switching to a VM running on a Skull Canyon NUC equipped with a Samsung 950 Pro NVMe drive cut the process to an incredible 2.5 minutes.
I have successfully cross-compiled LTTng for ARM and was able to do a quick test on the board. I recorded LTTng sessions on a build machine and on the board, and was able to interpret both sessions using Babeltrace. That's fine. But after importing into Eclipse, only the session from the build machine was displayable. What gives? Is it not outputting the same format?
A bit more information: both session directories contain channel.idx and metadata files. But only for the build-machine session does Eclipse show it as one icon, with CPU usage, LTTng Kernel Analysis and Tmf Statistics Analysis. For the board session, it just lists the files.
Has anyone run into the same problem, or been able to interpret an embedded LTTng session in Eclipse?
The Eclipse plugin (recently renamed "Trace Compass") uses its own Java implementation of a CTF reader, so it's quite possible for it to have discrepancies with Babeltrace. It also hasn't been tested much with ARM traces, so it could definitely be a bug in the parser.
If you could open a bug on the bug tracker, and attach the problematic trace, it would be extremely helpful!
Well, a strange problem has occurred in my work project. It is written in Delphi. When I try to compile it, it takes 8 hours to compile about 770,000 lines (and that's not the end of it), while my colleague needs only 15-20 seconds.
I've tried everything suggested in "Why does Delphi's compilation speed degrade the longer it's open, and what can I do about it?":
Shorten the path to project
Defragment the disk with MyDefrag
Use Clear Unit Cache (not sure whether it worked at all)
I also turned off optimization and I build in debug mode. My PC is pretty fast (i5-2310 @ 3.1 GHz, 16 GB RAM, ordinary SATA HDDs); the bottleneck could be the HDD, but my colleague has an ordinary one too. So the reason for such slow compilation is a mystery.
Edit: I apologize for the lack of information. Here is some additional info:
I use debug mode; the release build behaves the same.
We use Delphi XE.
I initially copied the project folder from my colleague.
I do not use a network drive, and I have tried moving the project to another HDD.
Additional system info: I use Windows 7 Enterprise N 64-bit, while my colleague uses Windows 7 32-bit. Also, Delphi XE is 32-bit (I don't know whether it can be 64-bit). Maybe that is the reason in some way?
Edit 2: I found a solution! The problem was that I had installed Delphi on my 64-bit Windows system. Installing it in a virtual Windows 7 x86 machine made it work: it compiles in seconds. I don't know why there is such a big performance gap.
Are you sure this is not some hardware problem, e.g. your hard disk having a bad sector? Try to put the source code on a different disk and see if the problem goes away. Or maybe the search path points to a network drive that is very slow or not even available?
I have a laptop with the configuration below:
Core 2 Duo @ 1.4 GHz
4 GB RAM
320 GB hard drive
Windows 7
Is this sufficient for installing VS 2010? The processor speed is 1.4 GHz, but the Microsoft website gives a minimum of 1.6 GHz. Can anyone tell from their experience?
Thanks in advance.
It will most likely install; however, I would expect it to run slowly. It depends on what sort of work you are doing. Small console apps would be OK, but I doubt full-blown WPF/Silverlight apps would be speedy. Also, if you're connecting to a local SQL instance, etc., that could add overhead.
To sum up:
It will install.
Work will be tedious.
Another SO post for reference: VS 2010 Requirements
The main issue is the way that VS2010 uses WPF; you might find that large files behave a little jerkily in the text editor, but I don't think it'll be unusable.
I've not tried VS2010, but I do have VS2008 + SQL Server Express installed on a netbook with a few years old Atom CPU and 2 GB of RAM, and it works fine though it's obviously a bit slow. So I'd assume that you'll have no problems since even if the requirements for VS2010 are higher, your laptop is much higher spec than that netbook.
It will work, but you might have some performance issues in the editor/designer. I had a machine with a very similar configuration and used it for Silverlight development. I always had problems with the design preview of XAML files: it loads more slowly than expected.
I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, EXEs, DLLs) and is being used successfully by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines belonging to people in our group. I was successful on two machines, but the solution failed to build on my machine.
The build on my machine encountered two problems:
During a simple build, creation of the biggest static library (about 522 MB in debug mode) would fail with the message "13>libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".
A full solution rebuild creates this library; however, when it comes to linking the library into the main .exe file, devenv.exe spawns link.exe, which consumes about 80 MB of physical memory and 250 MB of virtual memory and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On my colleagues' PCs, where the build succeeds, there is only one link.exe process, which uses all the memory required for linking (about 500 MB physical).
There is plenty of hard drive space on my machine and the file system is NTFS.
All three of our systems are similar: Core 2 Quad processors, 4 GB of RAM, Windows XP SP3. We are using Visual Studio installed from the same source.
I have tried using different RAM and a different CPU, using a dedicated graphics adapter to rule out video-memory sharing influencing the build, putting the solution files in a different location, using different editions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle).
I have read what MSDN says about LNK1106; none of the cases apply to me except maybe "out of heap space", but I am not sure how to fight that.
The only idea I have left is reinstalling the OS; however, I am not sure it would help, and I am not sure the situation wouldn't repeat itself on a different machine.
Would anyone have any sort of advice for me?
Thanks
Yes, 522 megabytes is about as large a contiguous chunk of virtual memory as can be allocated on the 32-bit version of Windows. That's one hulking large library; you'll have to split it up.
You might be able to postpone the inevitable by building on a 64-bit version of Windows, where 32-bit programs get a much larger virtual memory space, close to 4 GB.
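If you want to see the limit for yourself, here is a rough probe (my own sketch, not anything the linker does internally) that binary-searches for the largest single contiguous VirtualAlloc reservation the process can make. Build it as a 32-bit program, linked with /LARGEADDRESSAWARE so the larger address space on 64-bit Windows is actually usable, and compare the number on your XP machine with the same binary on 64-bit Windows:

    #include <stdio.h>
    #include <windows.h>

    /* Binary-search for the largest contiguous virtual-memory reservation
       this process can make, to roughly 1 MB resolution. */
    int main(void)
    {
        SIZE_T lo = 0, hi = (SIZE_T)0xFFFF0000;   /* just under 4 GB */

        while (hi - lo > ((SIZE_T)1 << 20)) {
            SIZE_T mid = lo + (hi - lo) / 2;
            void *p = VirtualAlloc(NULL, mid, MEM_RESERVE, PAGE_NOACCESS);
            if (p) {
                VirtualFree(p, 0, MEM_RELEASE);
                lo = mid;                         /* mid bytes fit, try more */
            } else {
                hi = mid;                         /* mid bytes do not fit */
            }
        }
        printf("largest contiguous reservation: ~%lu MB\n",
               (unsigned long)(lo >> 20));
        return 0;
    }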