I have a small nextjs application that relies on several internal and external libraries. While a production build has great performance, dev builds are atrociously slow. When I make a change to a page component which triggers a rebuild, I see messages of this sort in the console:
event - compiled client and server successfully in 15.5s (2267 modules)
event - compiled client and server successfully (2279 modules)
I've gathered that this is a very large number of modules. But what can I do about it? How do I find more information as to the source of these modules so that I can eliminate them or optimize further?
What is considered an "acceptable" number of modules that would not result in rebuild times approaching 60s?
I ran into the same problem, and after a long search I found that it was caused by loading @material-ui/icons: in development the icons are imported from the package installed in the node_modules folder, whereas in production mode they are imported directly from the code.
Development environment:
Development bundles can contain the full library which can lead to slower startup times. This is especially noticeable if you import from @material-ui/icons. Startup times can be approximately 6x slower than without named imports from the top-level API.
I followed this guide, which offers two solutions:
Using path imports to avoid pulling in unused modules.
Adding a Babel plugin to automatically rewrite named imports as default imports (the recommended option, since it provides the best user experience and developer experience).
After adding the Babel plugin and rewriting named imports as default imports, my compile time went from 11.3 seconds to 2.9 seconds.
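For illustration, a minimal sketch of the first option, path imports (the icon names here are just examples): each icon is imported from its own file, so the dev build does not have to compile the entire icon package.

// Instead of top-level named imports, which pull every icon module into the dev build:
// import { Delete, Edit } from '@material-ui/icons';

// Path imports only bring in the icons actually used (icon names are examples):
import DeleteIcon from '@material-ui/icons/Delete';
import EditIcon from '@material-ui/icons/Edit';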
I'm trying to set up a Windows Server-based continuous integration server to completely build and package an Unreal Engine 4 project. The vast majority of the process works, but at the content cooking stage I keep running into the following errors:
********** COOK COMMAND STARTED **********
Running UE4Editor Cook for project C:\workspace\CEIT_ingame-native-plugins_PR-44\sampleProjects\unreal\ShooterGame26\ShooterGame.uproject
Commandlet log file is C:\Unreal426\Windows\Engine\Programs\AutomationTool\Saved\Cook-2021.07.05-13.56.23.txt
Running: C:\Unreal426\Windows\Engine\Binaries\Win64\UE4Editor-Cmd.exe C:\workspace\CEIT_ingame-native-plugins_PR-44\sampleProjects\unreal\ShooterGame26\ShooterGame.uproject -run=Cook -TargetPlatform=WindowsClient -fileopenlog -ddc=DerivedDataBackendGraph -unversioned -abslog=C:\Unreal426\Windows\Engine\Programs\AutomationTool\Saved\Cook-2021.07.05-13.56.23.txt -stdout -CrashForUAT -unattended -NoLogTimes -UTF8Output
LogInit: Display: Running engine for game: ShooterGame
LogModuleManager: Warning: ModuleManager: Unable to load module 'C:/Unreal426/Windows/Engine/Binaries/Win64/UE4Editor-OpenGLDrv.dll' because the file couldn't be loaded by the OS.
LogModuleManager: Warning: ModuleManager: Unable to load module 'C:/Unreal426/Windows/Engine/Plugins/Lumin/MagicLeap/Binaries/Win64/UE4Editor-MagicLeap.dll' because the file couldn't be loaded by the OS.
Took 14.257796s to run UE4Editor-Cmd.exe, ExitCode=1
ERROR: Cook failed.
(see C:\Users\jenkins\AppData\Roaming\Unreal Engine\AutomationTool\Logs\C+Unreal426+Windows\Log.txt for full exception trace)
AutomationTool exiting with ExitCode=25 (Error_UnknownCookFailure)
BUILD FAILED
Specifically, UE4Editor-OpenGLDrv.dll and UE4Editor-MagicLeap.dll cannot be loaded, but there's not any clear indication as to why this is, just that "the file couldn't be loaded by the OS". The log files written to disk don't tell me much more than the information above. I've verified that both DLLs are actually present on the CI server, so I suspect that there is some other sub-dependency missing.
I've tried running Dependencies on the Unreal executable and the DLLs mentioned in the logs to work out which DLLs might be missing on the server machine itself, but this takes over three hours to run to completion, so is a bit awkward and time-consuming to do repeatedly. I've followed the advice regarding missing dependencies from this page, and have gone through all of the likely DLLs that were reported as not found by the Dependencies utility (mostly DirectX/OpenGL related ones), but the build still fails and I'm running out of ideas.
Is there any easy way in Windows to work out exactly why a DLL fails to load? I seem to remember that Windows DLL loading error messages are nowhere near as informative as on Linux, but perhaps there's a tool or an easier method to work it out that I'm not familiar with.
EDIT: I've narrowed things down somewhat: if I attempt to load glu32.dll completely dynamically in a program of my own, I get the load error Could not load C:\Windows\System32\glu32.dll: The specified procedure could not be found. As this happens on the load attempt, rather than when looking up a function, it implies that some procedure is missing from a sub-dependency of glu32.dll, but I don't know how I'd go about identifying which one it is.
If you build the engine from source, you can try deleting Engine/Intermediate, running GenerateProjectFiles.bat to regenerate the whole project (see the UE documentation), and rebuilding with Visual Studio.
If you launch through the UE4 Editor and a DLL is missing, add it as a dependency in YourProject.Build.cs, just as you would a third-party library (see the UE documentation).
I'm developing an electron application which depends on a bunch of modules that use native (C/C++) code.
Due to the way electron includes a patched node.js, these native modules have to be recompiled from source (everything is OSS) when we prepare a release.
One problem we have is that for a small number of users those native modules fail to load with pretty unspecific error messages. The modules don't have any further dependencies and we do validate at startup that all files shipped with the application are there.
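For illustration, a minimal sketch of how that startup check could go one step further and record the exact loader error; the addon path and log wording are hypothetical placeholders.

// Load each shipped native addon eagerly and report the precise failure,
// so affected users can send back a concrete OS error message.
// './native/my-addon.node' is a hypothetical placeholder path.
try {
    require('./native/my-addon.node');
} catch (err) {
    console.error(`Native addon failed to load: ${(err as Error).message}`);
    console.error(`Electron ABI version: ${process.versions.modules}`);
}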
The only reason I can think of for the modules failing to load is that some anti-virus software is interfering with them (they are DLLs, just with a different file extension).
Since the modules were recompiled, even if the prebuilt ones had been signed, the ones we ship are not.
So my question is: Does it make sense for us to sign all those dlls we ship with our own certificate, even though they aren't developed by us? Are there potential (legal) ramifications for using a certificate like that?
Is my theory about AV interfering with the loading of DLLs even plausible?
I am working on a few libraries for coding Arduinos in Ada. Each library is its own project, and I have an aggregate project that aggregates the libraries. I need to specify the runtime for each project since they are running on different chips. So for example I have something like this:
aggregate project Agg is
   for Project_Files use ("due/arduino_due.gpr",
                          "uno/arduino_uno.gpr",
                          "nano/arduino_nano.gpr");
   -- ...
end Agg;

library project Arduino_Due is
   -- Library_Dir, _Name, and _Kind attributes ...
   -- Target attribute ...
   for Runtime ("Ada") use "../runtimes/arduino_due_runtime";
   package Compiler is
      -- Driver and Switches attributes ...
   end Compiler;
end Arduino_Due;
And similar projects for the Uno and Nano. Building arduino_due.gpr directly works fine. It finds my runtime in the specified folder as it should. However, when I build agg.gpr, I get
fatal error, run-time library not installed correctly
cannot locate file system.ads
This occurs whether I use an absolute path or a relative path, and also occurs when the relative path is concatenated with Project'Project_Dir. However, if rather than using the Runtime attribute I use the compiler switch --RTS=..., then it works, but only if I use a relative path that is prefixed with Project'Project_Dir. An absolute path or a plain relative path will result in the error gprbuild: invalid runtime directory runtimes/arduino_due_runtime.
So what's going on here? This behavior seems inconsistent and I couldn't find anything in the docs about it so I suspect a bug. But I thought I'd ask here first in case I'm doing something wrong. Maybe I should just be using child projects, or project extension?
This isn’t a bug, it’s a feature :-).
See this rejected issue.
There are two things:
Several options are only recognised in the main project, and when you use an aggregate project, the aggregate is the main project.
Package Builder is ignored in aggregated projects.
My conclusion: aggregate projects don’t suit your use case, or mine. As I said in the issue noted above, back to Makefiles (or scripts).
Part of the design intent is that aggregate projects should share code and compilations; as section 2.8.4 of the manual says:
The loading of aggregate projects is optimized in GPRbuild, so that all files are searched for only once on the disk (thus reducing the number of system calls and yielding faster compilation times, especially on systems with sources on remote servers). As part of the loading, GPRbuild computes how and where a source file should be compiled, and even if it is located several times in the aggregated projects it will be compiled only once.
Since there is no ambiguity as to which switches should be used, files can be compiled in parallel (through the usual -j switch) and this can be done while maximizing the use of CPUs (compared to launching multiple GPRbuild commands in parallel).
I am new to browser development, so I have no prior experience with AMD, CommonJS, UMD, Browserify, RequireJS, etc. I have been reading a lot about them and I believe I generally understand the JavaScript story but I am still very confused as to how to make everything work together.
I have a library written in TypeScript. It is a pure TypeScript library, it doesn't interact with a browser or any browser framework nor any node or NPM things.
I also have a TypeScript client application that leverages this library. The client application may leverage a web framework as well (e.g., jQuery).
Now when I compile my two TypeScript files (which we will assume are in separate projects, isolated from each other and built separately), each will generate a .js file. In Visual Studio I have to choose AMD or Common as my module loader.
This is where things fall apart. My research tells me that if I want to work on the web I need to use either Browserify or RequireJS. Browserify appears to require that I first install Node on my machine and then use a command-line tool as a post-build step to generate a file, and as far as I can tell it isn't available as a NuGet package. Alternatively, I can use RequireJS, but then all of the examples stop working. Something about not doing things on window load and instead doing them somewhere else, but nothing that I have found really explains that well.
So, what is the story here? I want to use TypeScript, but at the moment it really feels like it is just a language; there aren't the compelling usage stories available to me as a developer that I have grown accustomed to in the Microsoft ecosystem.
TypeScript supports AMD and CommonJS just as JavaScript does, but in addition it also supports internal modules. When using internal modules in conjunction with a decent build system like gulp-typescript, you'll find that they can cover a lot of the use cases where one would choose AMD/CommonJS in traditional JavaScript projects.
TypeScript gives you the freedom to decide for yourself. If you need asynchronous module loading you are free to use AMD via external modules. You can also use CommonJS and/or use browserify to link your code together into a single file.
I've found that when you are a library developer, that is, you ship your TypeScript-compiled JS code to other developers, internal modules are a good compromise. You don't force your target audience (developers) to use any particular module system like AMD/CommonJS; instead you ship isomorphic JS that runs in the browser as well as in node. Yet you still have a way of modularizing your code internally, just as AMD/CommonJS would allow you to.
TL;DR: When you use TypeScript you get internal modules for free, and they provide a flexibility that would otherwise only be achieved with AMD/CommonJS. External modules still have their advantages, though; in the end, you should decide what is the best fit for your project.
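For contrast with the internal modules discussed above (and shown further down), here is a minimal sketch of an external module; the file and function names are made up. Any file with a top-level export or import is an external module, and the compiler emits AMD or CommonJS for it depending on the --module flag.

// math.ts -- the top-level export makes this an external module,
// so tsc emits AMD or CommonJS code depending on --module.
export function add(a: number, b: number): number {
    return a + b;
}

A consumer would then write import math = require('./math'); and needs RequireJS or browserify to resolve that call in the browser, which is exactly the extra machinery being weighed against internal modules here.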
TypeScript is a superset of JavaScript so its story is the story of JS, not of .NET or any other Microsoft product.
If you compile your TypeScript modules to AMD, then you load them through an AMD module loader like RequireJS (or Dojo, or curl) in your entrypoint HTML file, which can be as simple as this (using RequireJS):
<!DOCTYPE html>
<title>Application name</title>
<script src="scripts/require.js" data-main="scripts/client"></script>
(Assuming that your built TypeScript module is scripts/client.js.)
The Start page for RequireJS or the Dojo Introduction to AMD modules are both resources that can tell you more about how to load AMD-formatted modules in a browser.
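For illustration, a minimal sketch of what the scripts/client.ts entry module might contain; the greeter module and its function are made up. Compiled with --module amd, RequireJS loads it through the data-main attribute shown above.

// scripts/client.ts -- entry module, compiled with --module amd.
// RequireJS loads it because of data-main="scripts/client" above.
import greeter = require('./greeter'); // hypothetical sibling module

greeter.sayHello(document.body);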
You got a really good technical answer from C Snover, but the answer you're actually looking for is "don't use external modules". By external modules, I mean "AMD" or "CommonJS" modules.
If you actually need what external modules offer, they can be very useful, but they come at a significant cost in terms of build/deployment complexity and concepts that you need to understand.
Just because external modules are way more complicated doesn't mean they're better; the TypeScript compiler itself is written using internal modules.
You can convert an external module back to an internal module by omitting any export statements on the module itself (and by not having an export = statement at the end of the file either). For example, this is an internal module:
module MyLibrary {
    export class MyClass {
        public Foo = 1;
    }
}
If you are using internal modules, all you have to do is reference them in the right order via script tags in your HTML files and they will work without having to deal with any sort of loader system.
<script src="MyLibrary.js"></script>
<script src="MyUICode.js"></script>
I have a custom framework that, following the advice in Apple's Framework Programming Guide >> Installing your framework I install in /Library/Frameworks. I do this by adding a Run Script build phase with the following script:
cp -R build/Debug/MyFramework.framework /Library/Frameworks
In my projects I then link against /Library/Frameworks/MyFramework and import it in my classes like so:
#import <MyFramework/MyFramework.h>
This works very well, except that I always see the following message in my debugger console:
Loading program into debugger…
sharedlibrary apply-load-rules all
warning: Unable to read symbols for "/Users/elisevanlooij/Library/Frameworks/MyFramework.framework/Versions/A/MyFramework" (file not found).
warning: Unable to read symbols from "MyFramework" (not yet mapped into memory).
Program loaded.
Apparently, the compiler first looks in /Users/elisevanlooij/Library/Frameworks, can't find MyFramework, then looks in /Library/Frameworks, does find MyFramework, and continues on its merry way. So far this has been more of an annoyance than a real problem, but when running unit tests, gdb stops on the "(file not found)" and refuses to continue. I have solved the problem by adding an extra line to the Run Script phase:
cp -R build/Debug/MyFramework.framework ~/Library/Frameworks
but it feels like sellotaping over something that shouldn't be broken in the first place. How can I fix this?
In the past months, I've learned a lot more about frameworks, so I'm rewriting this answer. Please note that I'm talking about installing a framework as part of the development workflow.
The preferred location for installing a public framework (i.e. a framework that will be used by more than one of your apps or bundles) is /Library/Frameworks, because "frameworks in this location are discovered automatically by the compiler at compile time and the dynamic linker at runtime" [Framework Programming Guide]. The most elegant way to do this is in the Deployment section of the Build settings.
As you work on your framework, there are times when you do want to update the framework when you do a build, and times when you don't. For that reason, I change the Deployment settings only in the Release Configuration. So:
Double-click on the framework target to bring up the Target info window and switch to the Build tab.
Select Release in the Configuration selectbox.
Scroll down to the Deployment section and enter the following values:
Deployment Location = YES (click the checkbox)
Installation Build Products Location = /
Installation Directory = /Library/Frameworks
The Installation Build Products Location serves as the root of the installation. Its default value is some /tmp directory: if you don't change it to the system root, you'll never see your installed framework because it's hiding in /tmp.
Now you can work on your framework as you like in the Debug configuration without upsetting your other projects and when you are ready to publish all you need to do is switch to Release and do a Build.
Xcode 4 Warning
Since switching to Xcode 4, I've experienced a number of problems with my custom framework. Mostly they are linking warnings in GDB that do not really interfere with the usefulness of the framework, except when running the built-in unit tests. I submitted a technical support ticket to Apple a week ago, and they are still looking into it. When I get a working solution I will update this answer, since the question has proven quite popular (1k views and counting).
There's not much reason to put a framework into Library/Frameworks, and it's a lot of work: You'd need to either do it for the user in an Installer package, which is a tremendous hassle to create and maintain, or have installation code in your app (which could only install to ~/L/F, unless you expend the time and effort necessary to make your app capable of installing to /L/F with root powers).
Much more common is what Apple calls a “private framework”. You'll bundle this into your application bundle.
Even frameworks intended for general use by any applications (e.g., Sparkle, Growl) are, in practice, built to be used as private frameworks, simply because the “right” way of installing a single copy of the framework to Library/Frameworks is such a hassle.
The conventional way to do this is to have your framework project and its client projects share a common build directory. Xcode searches for framework headers and links against framework binaries in the build folder first, before any other location, so an app project that compiles against the framework's headers and links against its binary will pick up the most recently built copy rather than whatever's installed.
You can then remove the cp -R step and instead use the Install Location build setting to place your build product in the final location, by running xcodebuild install DSTROOT=/ at the command line. But you'll only need to do this when you're finished, not every time you rebuild the framework.
Naturally, when you distribute your framework it should be installed in /Library/Frameworks; however it seems odd to me that you're doing that with the test/debug versions of your framework.
My first instinct would be to install test versions under ~/Library, as it just makes setting up your test and debug environment that much simpler. If possible, I would expect the debug/test framework to be located in the build tree of the version I'm testing, in which case it's installed as a Private Framework for testing purposes. That would make your life much simpler when it comes time to deal with multiple versions of your framework.
Ultimately, it doesn't matter where the framework is located as long as your application or test suite loads the correct version. Choose the location that makes testing/debugging/development easiest.