I am running a NestJS project on a t2.micro instance. Initially it was working fine, but after adding some more modules the build hangs and fails with "JavaScript heap out of memory".
Please help me with the possible reasons why this might happen.
The --max-old-space-size flag can increase the heap memory available to Node.
Connect to the EC2 instance over SSH and open the .bashrc file using nano, like so:
nano ~/.bashrc
The remaining steps are similar to the Mac steps from above, except we would most likely be using ~/.bashrc by default (as opposed to ~/.zshrc), so these values would need to be substituted.
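For example, a minimal sketch of the line to add to ~/.bashrc (the 1024 MB value is only an illustration, sized roughly for a 1 GB t2.micro; adjust it to your instance):
export NODE_OPTIONS=--max-old-space-size=1024
After saving, run source ~/.bashrc (or reconnect) so the variable applies to the shell that runs the build.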
Link to Nodejs Docs
I am new to CloudianOS Aml (I have only used the Glass '18 distribution before) and it seems to be much different from the other distributions.
Here's a line from the changelog on August 25th
My application requires a system restart (after the user agrees to a prompt). However, when executing a general reboot (which is what power reboot does), some of my processes are started all over again, without their cache and data, which prevents the app from knowing where it ended and where to proceed.
I did some research and it seems like application cache and data (I didn't really understand when it applies to data) is saved in "quick cache storage", which isn't cleaned after the reboot.
I tried all these:
I also tried executing those with QuickAccess. None worked.
Any help would be highly appreciated!
Quick storage is defined in two ways: just an uppercase "Q" in ccd, and index.storage.quick when using os.exec(params) and the like.
It is tricky to figure out, but I think the correct command is power restart -Q. Use a single dash to define a param, followed by the name of the param, "Q".
I use Vagrant on macOS with an ubuntu64 16.04 box. Running htop, I can see the vagrant ssh process using virtually 530G (in the VIRT column).
Is this the normal behavior of Vagrant? Should I panic? Is it "normal" to have virtually 530G on a Mac with 120G of disk and 16G of RAM? Or maybe I did not understand the meaning of VIRT?
The Vagrant box runs on VirtualBox and has only 1G of RAM allocated.
Answer by chrisroberts on github:
Hi! I was able to reproduce this behavior, but with any vagrant command executed. The vagrant ssh command is the easiest to see this behavior simply because the process is left running for as long as the ssh session is alive.
The tl;dr version of below is simply: Don't worry about it. VIRT isn't allocated memory. If it were, you would either need massive swap space, or nothing would be working.
So, what's going on here? The vagrant installer includes a small go executable (vagrant) whose job is to setup the current environment with the proper locations of everything it needs. The installers bin directory, the lib directory for ruby and all the other friends, all the gems, and the vagrant gem itself. Once it has all this configured, it spawns off a new process, the actual Ruby vagrant process.
Because your example was referencing vagrant ssh, and as was previously pointed out (#7296 (comment)) a Kernel.exec happens meaning the Ruby process does not persist, I figured it must be the wrapper that was the culprit. After a bit of searching (mostly to find stackoverflow items saying "don't worry about VIRT") I stumbled upon:
keybase/keybase-issues#1908
They refer to the golang FAQ that talks about a bunch of VIRT being claimed up front and it not being a big deal, but never any absolutes about how much was actually being claimed. A link to lwn was dropped in there (keybase/keybase-issues#1908 (comment)) regarding golang's behavior on startup of claiming a huge chunk of VIRT, but still everything referenced a much lower amount than I was seeing locally. So I decided to go dig into the golang runtime code, and within malloc.go we find the answer:
golang src/runtime/malloc.go
The reason it's happening is the Go wrapper used to start Vagrant. Because the VIRT you see is simply a reservation and not actually allocated, it's not a problem and not something that should be worried about.
(There are some interesting conversations on the golang ML around the pros and cons of this approach, all pretty great reads).
It's just a copy/paste (and I bolded the TL;DR); hope it can help someone else.
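If you want to confirm locally that VIRT is only reserved address space, one quick check is to compare the virtual size with the resident set size of the running process (the PID below is hypothetical):
ps -o vsz=,rss=,comm= -p 12345
VSZ will show the huge reservation while RSS stays comparatively tiny, which matches the explanation above.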
I'm trying to use Docker beta on OS X, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get 6s page load time. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours trying to figure it out. Once you know how, it's super easy and fixes all the slowdown issues!
The key here is to understand that this solution creates NFS (Network File System) mounts as the means of communication from the Docker containers to your Mac, instead of the standard OS X file system, which is currently very slow, either due to bugs or the way it works.
Follow these steps exactly.
1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up Terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can also do this as a one-liner: git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder and create a new file in its etc folder titled d4m-nfs-mounts.txt
3.) Add the following line to this file.
/Users/yourusername:/Users/yourusername:0:0
What the above does is allow you to still use relative folders with docker-compose; the trailing 0:0 sets the uid and gid used for the export.
EDIT
Do not put /Volumes here!!
4.) Go to your docker preferences and do the following
Make sure only /tmp is showing and NOTHING ELSE. I mean nothing else; it won't work if there is anything else, since it will create conflicts with the NFS exports that the script will make for you later. Restart Docker and docker-compose down any containers as well.
5.) Finally, navigate to the d4m-nfs directory we cloned in step 1 and type the following command: /bin/bash d4m-nfs.sh
EDIT: As another user from the GitHub repo (if-kenn) pointed out, the correct way to run the command above is ./d4m-nfs.sh, which uses the shebang to determine which shell should run it.
If done correctly, there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; this will create errors and you will have to delete your exports file to start over. In fact, any time you make changes you will have to clear your exports file.
This is what mine looks like.
EDIT: IMPORTANT: Remove the /private and volumes entries! This should only be /Users/username now!
If you see anything other than this, you were not running the script with bash. If you make any errors, you can quickly get to the exports file on the Mac like this and just clear it out to start over.
Just select Go to Folder in Finder
and then type /etc/exports
This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
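If you prefer the terminal, you can also open and clear it from there (this is the standard system exports file, nothing specific to the script):
sudo nano /etc/exports
Delete its contents, save, and re-run the script.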
Also make sure no containers are running, or you will get the ........ loop of death. If this loop of death continues, make sure you upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this (https://github.com/IFSight/d4m-nfs/issues/3)
Note on the .... loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be in /Users/username.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there or this won't work either: chmod -R 777 /tmp
6.) If you did it right when running the script it will look like this.
Then simply run your docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except NO MORE slowdowns!
You will need to run this anytime you restart your computer or docker.
Also note, if you get mounting errors showing up, you probably don't have your project stored in your /Users/username directory. Remember, that is where we mounted it. If your project is somewhere other than there, you will need to modify the d4m-nfs-mounts.txt file accordingly.
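For reference, a minimal docker-compose.yml sketch of a bind mount that lives under the exported /Users path (the project path and image are only placeholders):
version: '2'
services:
  app:
    image: php:7.1-apache
    volumes:
      - /Users/yourusername/myproject:/var/www/html
As long as the host path sits under the directory listed in d4m-nfs-mounts.txt, the bind mount goes over NFS and gets the speedup.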
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request has already been accepted to improve performance (https://github.com/docker/docker/pull/31047).
This will be released sometime in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source for alternatives to osxfs can be found at https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credit to Eugen Mayer for setting this up.
EDIT:
The first improvement is implemented in the edge release. https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanations from the Docker team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of time searching for a viable solution, and I found one:
d4m-nfs
It allows you to use Docker volumes via NFS.
In my case it increased performance roughly 16 times (1.8 sec vs ~30 sec)!
Also, d4m-nfs has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I'll just leave this here for other Googlers.
Normally, volumes should be fast.
But you cannot change anything to make them faster if you don't want to change the format of your disk.
But maybe the bottleneck is the CPU or RAM.
You can check that with the command docker stats. These are by default set to 2 cores and 2 GB RAM. You can change this in the Docker for Mac GUI.
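For example, to take a one-off snapshot of container CPU and memory usage instead of the live-updating view:
docker stats --no-stream
If CPU or memory is pegged at the limit, raising the allocation in the Docker for Mac preferences is the place to start.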
I had exactly the same thing. For me using docker-bg-sync (see on GitHub) made a dramatic improvement in speed and CPU usage.
Not as nice as just mounting the volume, as you have to start a new container for every sync, but it does the job.
In the latest Docker, 17.06.0-ce-mac18, volumes mounted with :cached seem to perform quite decently.
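For example, a hedged sketch of a cached bind mount (the host path and image are only placeholders):
docker run --rm -v "$PWD":/var/www/html:cached php:7.1-apache
The :cached flag relaxes host/container consistency guarantees, which is where much of the osxfs overhead comes from.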
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC and it's almost twice as fast as the exact same build in Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker/Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and Docker for Mac filesystems using Syncthing. We built an open-source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd
I have followed all the suggestions I can find.
I am running the current version of Redis on Windows Server 2008.
I can run it fine from the command line.
I can install the service, but it doesn't run.
I do...
redis-server --service-install redis.windows.conf
and get "redis successfully installed as a service"
Then I try to start the service doing...
redis-server --service-start redis.windows.conf --loglevel verbose
and get Redis service failed to start
I have made sure I have .NET Framework 4.5.2 installed, I have tried with the firewall off, and I have played with the security settings on the folder.
Anyone have any ideas?
(Merry Christmas all)
Start the Redis server from the command line instead of as a service and it will display a more useful error message. If you are just using the default configuration, it is most likely a problem with the maxmemory/maxheap configuration.
C:\redis>redis-server.exe redis.windows.conf
[1576] 04 Feb 10:32:54.172 #
The Windows version of Redis allocates a memory mapped heap for sharing with
the forked process used for persistence operations. In order to share this
memory, Windows allocates from the system paging file a portion equal to the
size of the Redis heap. At this time there is insufficient contiguous free
space available in the system paging file for this operation (Windows error
0x5AF). To work around this you may either increase the size of the system
paging file, or decrease the size of the Redis heap with the --maxheap flag.
Sometimes a reboot will defragment the system paging file sufficiently for
this operation to complete successfully.
Please see the documentation included with the binary distributions for more
details on the --maxheap flag.
Redis can not continue. Exiting.
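If it is indeed the paging file / heap-size problem shown in that output, one sketch of a workaround (the 1gb value is only an illustration; pick something your paging file can back) is to cap the Redis heap when starting from the command line:
redis-server redis.windows.conf --maxheap 1gb
The equivalent for the service is to set maxheap in the conf file the service was installed with and restart it.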
In my case the default command-line config did not have logging enabled and the service one did, and there was no place where it complained about that.
Try creating the directory ./Logs.
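For example, from the folder the service runs in (assuming its config points the logfile at a Logs subfolder):
mkdir Logs
Then try starting the service again.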
Old question, but I came across it while trying to get a Win7 x64 install working using the Redis-x64-2.8.2101 binaries. I couldn't get it to start despite fiddling with various options; there was no meaningful error given when run with the config, and only an apparently spurious disk space error when run natively.
There appears to be a related issue on GitHub, linked here for future benefit: https://github.com/MSOpenTech/redis/issues/267
I'm using Redis server for Windows (2.8.4 - MSOpenTech) on Windows 8 64-bit.
It is working great, but even after I run:
I see this (and here are my questions):
When Redis-server.exe is up, I see 3 large files:
When Redis-server.exe is down, I see 2 large files:
Question:
Didn't I just tell it to erase all DBs? So why are those 2/3 huge files still there?
How can I completely erase those files (without them being re-generated)?
NB
It seems that it is deleting keys without freeing the occupied space. If so, how can I free this unused space?
From https://github.com/MSOpenTech/redis/issues/83
"Redis uses the fork() UNIX system API to create a point-in-time snapshot of the data store for storage to disk. This impacts several features on Redis: AOF/RDB backup, master-slave synchronization, and clustering. Windows does not have a fork-like API available, so we have had to simulate this behavior by placing the Redis heap in a memory mapped file that can be shared with a child(quasi-forked) process. By default we set the size of this file to be equal to the size of physical memory. In order to control the size of this file we have added a maxheap flag. See the Redis.Windows.conf file in msvs\setups\documentation (also included with the NuGet and Chocolatey distributions) for details on the usage of this flag. "
I know this is an old thread, but I am facing the same issues with the file sizes.
In case you have problems with your C: SSD drive (like me), you can make a directory junction:
1) Stop redis service
2) Move C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis folder to another drive / location.
3) Open a command prompt in C:\Windows\ServiceProfiles\NetworkService\AppData\Local then execute:
mklink /J "C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis" "[newpath]"
PS: [newpath] must be absolute, like "D:\directory junctions\Redis"
4) Start redis service. Now the files are in another drive.
Check http://ss64.com/nt/mklink.html if you have doubts regarding this command.
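Putting the steps together, a hedged sketch of the whole sequence from an elevated command prompt (it assumes the service is registered under the default name Redis and that D:\Redis is the hypothetical new location):
net stop Redis
robocopy "C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis" "D:\Redis" /MOVE /E
mklink /J "C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis" "D:\Redis"
net start Redis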
I faced this same issue on my development machine. It was resolved by stopping the Redis service, and I used WinDirStat (which is what I used to detect the issue originally) to permanently delete these files in AppData/Local/Redis.
I then started redis back up and things were working fine.
Before following this same procedure others may want to ensure that this data isn't needed. In my case it wasn't critical since this is my development workstation.
When you flush the DB you only flush the keys from memory. I'm not sure why you've got files with different names; it may be an artifact of the way the Windows port of Redis manages files, but Redis itself doesn't delete files when you remove keys. You will need to manage outdated files outside of Redis.