Minify Laravel views on local machine and push to server

We are using gulp in Laravel to minify our views. The problem we are facing is that the server is unable to run gulp due to its low RAM (512 MB). Is there any way we can minify the HTML on our local machine and then push it to our server?
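For what it's worth, the "build locally and push the output" workflow the question asks about can be sketched roughly like this, assuming gulp is already configured in gulpfile.js, the minified output lands under public/, and the server is reachable over SSH (host and paths are placeholders):
# Run the minified build locally (with Laravel Elixir, --production enables minification).
gulp --production
# Upload only the generated files; adjust paths to wherever your task writes.
rsync -avz public/ user@your-server:/var/www/app/public/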

I think you should solve this by adding swap space on your server.
A swap file increases the amount of virtual memory available for tasks such as running gulp.
"Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space is the amount of virtual memory available."
from: https://wiki.archlinux.org/index.php/swap
Depending on what your server setup looks like you can find many guides on how to enable swap for your particular server.
Assuming you are on Linux, you can check whether your server already has swap space allocated by running:
sudo swapon --show
and also
free -h
To create a swap file, allocate and enable it with:
sudo fallocate -l 1G /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
This gives you a swap file of 1 GB.
You then have to secure the swap file and configure settings such as swappiness for performance; the details depend on your system.
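For example, on a typical Ubuntu/Debian setup, securing the file and tuning swappiness could look roughly like this (a sketch, not a hardened recipe; the swappiness value is illustrative):
# Restrict access so other users cannot read the swap file.
sudo chmod 600 /swapfile
# Make the swap file survive reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Prefer RAM and only swap under memory pressure (value illustrative).
sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf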

Related

docker ubuntu container filesystem

I pulled a standard docker ubuntu image and ran it like this:
docker run -i -t ubuntu bash -l
When I do an ls inside the container I see a proper filesystem and I can create files, etc. How is this different from a VM? Also, what are the limits on how big a file I can create on this container filesystem? And is there a way to create a file inside the container filesystem that persists in the host filesystem after the container is stopped or killed?
How is this different from a VM?
A VM will lock and allocate resources (disk, CPU, memory) for its full stack, even if it does nothing.
A Container isolates resources from the host (disk, CPU, memory), but won't actually use them unless it does something. You can launch many containers; if they are doing nothing, they won't use memory, CPU or disk.
Regarding the disk: containers launched from the same image share the same filesystem and, through a copy-on-write (COW) mechanism and UnionFS, add a layer when you write inside the container.
That layer will be lost when the container exits and is removed.
To persist data written in a container, see "Manage data in a container".
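For example, a bind mount keeps files on the host after the container is gone; a minimal sketch (the host path is a placeholder):
# Files written under /data inside the container live in /tmp/mydata
# on the host and survive the container being stopped or removed.
docker run -i -t -v /tmp/mydata:/data ubuntu bash -l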
For more, read the insightful article by Jessie Frazelle, "Setting the Record Straight: containers vs. Zones vs. Jails vs. VMs".

AWS EMR Hadoop Mapreduce physical memory limit error

I keep getting this error when running some of my steps:
Container [pid=5784,containerID=container_1482150314878_0019_01_000015] is running beyond physical memory limits. Current usage: 5.6 GB of 5.5 GB physical memory used; 10.2 GB of 27.5 GB virtual memory used. Killing container.
I searched over the web and people say to increase the memory limits (the settings in question are sketched below). This error occurs even after I already increased them to the maximum allowed on the instance I'm using, a c4.xlarge. Can I get some assistance with this error and how to solve it?
Also, I don't understand why MapReduce throws this error instead of just swapping, or even running slower but continuing to work ...
NOTE: This error started happening after I changed to a custom output compression, so it should be related to that.
Thanks!
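For reference, the limits usually meant by "increase the memory limits" are the per-task YARN/MapReduce settings, which can be overridden per job; a minimal sketch with illustrative values (jar and class names are placeholders, and this assumes the job's driver uses ToolRunner so -D options are parsed):
# Raise the map container size and keep the JVM heap below it
# (the heap is conventionally set to roughly 80% of the container size).
hadoop jar myjob.jar MyDriver \
  -Dmapreduce.map.memory.mb=6144 \
  -Dmapreduce.map.java.opts=-Xmx4915m \
  input/ output/
Note that on a given instance type the node's total YARN memory is a hard ceiling, which is why the error can persist once the settings are maxed out.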

Hadoop 2.x on amazon ec2 t2.micro

I'm trying to install and configure Hadoop 2.6 on an Amazon EC2 t2.micro instance (the free one, with only 1GB RAM) in pseudo-distributed mode.
I could configure and start all the daemons (i.e. NameNode, DataNode, ResourceManager, NodeManager), but when I try to run a MapReduce wordcount example, it fails.
I don't know if it's failing due to low memory (since t2.micro has only 1GB of memory, some of which is taken up by the host OS, Ubuntu in my case) or for some other reason.
I'm using the default memory settings. If I tweak everything down to the minimum memory settings, will that solve the problem? What is the minimum memory in MB that can be assigned to containers? (A sketch of these settings follows this question.)
Thanks a lot, guys. I'd appreciate it if you could provide me with some information.
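For reference, the per-job task sizes can be scaled down from the command line; the cluster-wide floors (yarn.scheduler.minimum-allocation-mb and yarn.nodemanager.resource.memory-mb in yarn-site.xml) would also need lowering, since the default minimum allocation of 1024 MB rounds small requests up. A rough sketch with illustrative values, not something verified on a t2.micro:
# Run the stock wordcount example with scaled-down task containers
# (heaps set to roughly 80% of each container's size).
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount \
  -Dmapreduce.map.memory.mb=256 \
  -Dmapreduce.map.java.opts=-Xmx204m \
  -Dmapreduce.reduce.memory.mb=256 \
  -Dmapreduce.reduce.java.opts=-Xmx204m \
  input output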
Without tweaking any memory settings, I could sometimes run a pi example with 1 mapper and 1 reducer on the free-tier t2.micro instance, but it fails most of the time.
Using the memory-optimized r3.large instance with 15GB of RAM, everything works perfectly; all jobs complete without failure.

SQLServer using too much memory

I have installed SQL Server 2008 R2 Express on my desktop machine (Windows 7).
I have only one local server running (./SQLEXPRESS), but the SQL Server process is taking all the RAM it can get.
On a machine with 3GB of RAM things start to get slow, so I limited the maximum amount of RAM the server may use, and now SQL Server constantly reports errors that there is not enough memory. It's using 1GB of RAM with only one local server holding 2 completely empty databases; how is 1GB of RAM not enough?
When the process starts it uses a really acceptable amount of memory (around 80MB), but it keeps increasing until it reaches the configured maximum and starts to complain that not enough memory is available. At that point I have to restart the server to use it again.
I have read about a hotfix that solves one of the errors I got from SQL Server:
There is insufficient system memory in resource pool 'internal' to run this query
But it's already installed on my SQL Server.
Why is it using so much memory?
You can try configuring the 'max server memory' configuration option:
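For example, from a command prompt with sqlcmd, assuming the default Express instance name (the 1024 MB cap is illustrative):
REM Cap SQL Server's memory usage at 1 GB (value illustrative).
sqlcmd -S .\SQLEXPRESS -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 1024; RECONFIGURE;"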
For additional details check:
http://technet.microsoft.com/en-us/library/ms178067(v=sql.105).aspx
http://support.microsoft.com/kb/321363
http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/df51cd87-68ce-439a-87fa-d5e5cd93ab31
I had a problem like this.
You can raise the maximum server memory for the instance.
In the SQL Server instance properties, choose Memory; there is a "Maximum server memory (in MB)" field you can increase.
Or do the same thing with a query:
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 3500;
GO
RECONFIGURE;
GO

RED5/Java memory limit

I have RED5 installed on my virtual server (I need it for my chat application), which has 1GB of RAM. When I start RED5 it takes approximately 1GB immediately after start, and that's a problem, because then my whole site is very slow. I'm sure it does not need the whole 1GB, so I need a way to limit it to, let's say, 700MB.
I have tried things like this in red5.sh:
export JAVA_OPTS="-Xms512m -Xmx1024m $LOGGING_OPTS $SECURITY_OPTS $JAVA_OPTS"
But without success.
EDIT - forgot to mention, I use Debian on my VPS.
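A couple of things worth checking: that red5.sh actually passes JAVA_OPTS to the java invocation, and that nothing later in the options string overrides the cap, since the JVM honors the last -Xmx it sees. Also, the JVM's total footprint (heap + permgen + thread stacks) always exceeds -Xmx, so the cap needs to sit well below physical RAM. A sketch with illustrative values:
# Cap the heap well below the 1GB of physical RAM; note that if $JAVA_OPTS
# already contains another -Xmx, the last flag on the command line wins.
export JAVA_OPTS="-Xms256m -Xmx512m $LOGGING_OPTS $SECURITY_OPTS $JAVA_OPTS"
# Confirm the flags reached the running process:
ps aux | grep java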
