How can one release a Node.js module at runtime in order to save memory or improve overall performance?
My application dynamically loads modules in Node.js at runtime but never unloads any of them. I'm looking for such functionality, especially to update a module that has changed after the code loaded it, and also to unload modules that will not be used again.
Any insights?
Thanks.
To unload a script, you can do this:
delete require.cache[require.resolve('/your/script/absolute/path')] // remove the module from the cache
var yourModule = require('/your/script/absolute/path') // load the module again
So if you have plugin modules, you can watch those files for changes, dynamically unload them (delete the cache entry), then require the script again.
But make sure you are not leaking memory: re-assign the reloaded module to the old variable so the previous exports can be garbage-collected.
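Putting it together, a minimal sketch of that pattern (the plugin path is hypothetical, and note that fs.watch may fire more than once for a single save on some platforms):

var fs = require('fs');
var path = require('path');

// Hypothetical plugin location; adjust to your project layout.
var pluginPath = path.resolve(__dirname, 'plugins', 'my-plugin.js');
var plugin = require(pluginPath);

fs.watch(pluginPath, function () {
  // Drop the cached copy so the next require() re-reads the file.
  delete require.cache[require.resolve(pluginPath)];
  // Re-assign over the old reference so the previous exports can be garbage-collected.
  plugin = require(pluginPath);
  console.log('plugin reloaded');
});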
Here is a handy tool you can use to check memory usage:
node-memwatch.
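For example, a minimal check (assuming the memwatch package from the node-memwatch project is installed) might look like this:

var memwatch = require('memwatch');

// Fires when the heap appears to keep growing across garbage collections.
memwatch.on('leak', function (info) {
  console.error('possible memory leak:', info);
});

// Compare heap usage before and after a reload cycle.
var hd = new memwatch.HeapDiff();
// ... unload and re-require your modules here ...
console.log(hd.end());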
Good luck!
It sounds like you're creating some sort of plugin system. I would have a look at Node's vm module:
http://nodejs.org/docs/latest/api/vm.html
It allows you to load and run code in a sandbox, which means that when it's finished, all of its internal allocations should be freed again.
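A minimal sketch of loading a plugin that way (the file name and sandbox contents are just examples, and the plugin is assumed to assign to module.exports):

var fs = require('fs');
var vm = require('vm');

// Read the plugin source; the path is hypothetical.
var code = fs.readFileSync('./plugins/my-plugin.js', 'utf8');

// The sandbox acts as the plugin's global object; expose only what it needs.
var sandbox = { module: { exports: {} }, console: console };
vm.runInNewContext(code, sandbox, 'my-plugin.js');

var plugin = sandbox.module.exports;
// Once you drop all references to the sandbox and the plugin,
// their allocations become eligible for garbage collection.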
It is marked as unstable, but that doesn't mean it doesn't work. It means the API might change in future versions of Node.
As an example, Haraka, a Node-based SMTP server, uses the vm module to (re)load plugins.
Related
I am a beginner in Go and am having difficulty understanding the go/pkg folder. As suggested by the documentation, it contains pkg/mod and pkg/windows_amd64, where pkg/windows_amd64 stores compiled files. What happens if I have a file importing some external GitHub modules and run go build on it?
Will it first go to pkg/mod (but modules are compiled into pkg/windows_amd64) to search for the external modules?
Will it first go to pkg/windows_amd64 (in which case, what is the use of pkg/mod) to search for the modules?
Will it go to {gopath}/src and do something from there?
pkg/mod is just a folder, so why do we call it a cache? Will it keep filling up, and when does it get populated?
The go command has two different modes of locating packages: module mode (introduced in Go 1.11) and GOPATH mode (much older). Module mode is the default as of Go 1.16, and if you are new to Go you will probably want to work exclusively in that mode. (There isn't much point to working in GOPATH mode unless you have a large legacy codebase that requires it.)
pkg/mod stores cached source code for use in module mode. The source code for a given version of a module is downloaded automatically when you build a package from that module (for example, as a dependency of some other package).
GOPATH/src stores source code for use in GOPATH mode. You can also choose to work in that directory in module mode, but that's a completely optional/aesthetic choice and shouldn't change the behavior of anything in module mode.
pkg/windows_amd64 stores installed packages in GOPATH mode. However, installed packages aren't very useful anyway because Go has a separate build cache (even in GOPATH mode). So you can mostly ignore pkg/windows_amd64 completely.
I just built the Linux kernel for CentOS using the instructions found here: https://wiki.centos.org/HowTos/Custom_Kernel
Now, I made my changes and I would like to rebuild the kernel and test it with my changes. How do I do that but:
1. Without having to recompile everything. The build process should reuse whatever object files were generated by the first build that don't need to be regenerated.
2. Without having to build the other packages that are built alongside the kernel (e.g., debuginfo, tools, debug-devel, etc.).
Thanks.
You cannot. The paradigm of rpmbuild is to always start from a clean slate to ensure reproducibility and predictability. The subpackages would also be invalidated, because they depend on the exact output of your kernel build (e.g., locations within the binary images where certain symbols are defined) that may have changed when you rebuilt it.
Fine Uploader is 400 kB of JavaScript code and 140 kB minified. Since I am not using the UI and only using the API, I would like to build the library without the integrated interface (and hopefully get a smaller library as a result). Is this possible?
I could not find this in the downloads section.
I've also set up the build environment and built the package myself, but all the files in the _dist directory seem to be bundled with the UI.
Fine Uploader is only 40 kB gzipped, and gzip is compression that pretty much every web server already applies. The build is not currently set up to create a bundle without the UI. If you'd like to create such a build, the modules.js file will need to be modified. One place to start would be with a copy of the fuTraditional module sans the #fuSrcUi module. Then, a corresponding entry would need to be added to the concat.js build file. This doesn't seem worth it to save a few kB, in my humble opinion, but it's all very possible.
If you're interested in a much more modular upload library where almost every feature is represented as an optional standalone module, take a look at Modern Uploader, which I am slowly developing as time allows. Feel free to open up issues in the repo if you have any questions regarding the future of that product.
I'm about to start making major modifications to my project, and I just want to clarify something, as I think my design may be somewhat complicated.
I have an executable that loads a DLL, let's call it dll1, which then loads dll2.
The executable also loads dll2.
What I'm asking is: do I end up with two instances of dll2's global and static variables? Does the second load of dll2 actually happen, or can an executable only ever load one instance of dll2, even if dll2 was also loaded by a different DLL?
I know I should only have one copy of dll2's code in memory; that's fine. It's the global and static variables I'm interested in.
You can only have one instance of any particular DLL loaded per process.
You can of course load different instances of the same DLL; this practice is not common, but it is technically possible. Have a try with Process Explorer.
I have a DLL that is loaded and file-locked by a process, and I would like to update it with a newer version. I'm looking for an alternative to terminating the process to release the file lock before updating the DLL. It is okay if the existing live processes still use the old version, as long as newly instantiated ones pick up the new logic.
It seems that I can simply rename/move the DLL, and the live process still seems to work well. Is it safe to do this? If the DLL's code has already been loaded into the process, then why does it still need to lock the file?
It is not always OK to move every DLL used by any random application. Some applications, like ASP.NET, use a shadow-copy concept where they actually copy the DLL and use the copy, leaving you free to modify the original. In the case of ASP.NET, if you modify the original, ASP.NET will automatically spin up a new app domain using the new DLL and gracefully shut down the old one.
If the application you're referring to has a lock on the DLL, then you can't safely change it.
It depends on your DLL/application. For example, the DLL may use shared memory or implement inter-process communication, and the new DLL version may implement it differently. So once a new instance starts, you'll have two conflicting versions in memory.
So it's not safe in the general case, though in your particular case it may be OK.