I have a pretty "unconventional" setup that looks like this:
I have a VMware virtual machine running Ubuntu Desktop 12.10 with networking set to bridged. Installed inside the VM are nginx, php-fpm, php-cli, and MySQL. PHP is 5.4.10.
I have a folder on the host (Windows 7) containing all the files for the application I am working on. This folder is made available as a network share.
The virtual machine then mounts the Windows share via Samba into a directory, which is then served by nginx and php-fpm. So far, everything has worked fine.
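For context, a Samba/CIFS mount of that kind is roughly of this shape (the host name, share name, mount point, and options here are placeholders, not the exact values used):
sudo mount -t cifs //WIN7-HOST/app /www -o username=devuser,uid=www-data,gid=www-data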
I have a script that I run via the PHP CLI to help me build and process some files. Previously, this script worked fine on my Windows host. However, it seems to throw "Cannot allocate memory" errors when I run it in the Ubuntu VM. The weird thing is that the failure is sporadic and does not happen every time.
user@ubuntu:~$ sudo /usr/local/php/bin/php -f /www/app/process.php
Warning: require_once(/www/app/somecomponent.php): failed to open stream: Cannot allocate memory in /www/app/loader.php on line 130
Fatal error: require_once(): Failed opening required '/www/app/somecomponent.php' (include_path='.:/usr/local/php/lib/php') in /www/app/loader.php on line 130
I have checked and confirmed the following:
/www/app/somecomponent.php definitely exists.
The permissions for /www and all files and subdirectories inside it are set to read, write, and execute for owner, group, and others.
I have turned off APC after reading this question to see if APC was the cause, but the problem persists after doing so.
php-cli is using the same php.ini as php-fpm, which is located in /etc/php/php.ini.
memory_limit in php.ini is set to 128M (the default), which seems like plenty for the app.
Even after increasing the memory limit to 256M, the error still occurs (see the check after this list).
The VM has 2GB of memory.
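To double-check which php.ini the CLI actually loads and what the effective memory_limit is, the standard checks are (generic PHP CLI flags, using the same binary path as above):
/usr/local/php/bin/php --ini
/usr/local/php/bin/php -r 'echo ini_get("memory_limit"), PHP_EOL;'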
I have been Googling to find out what causes "Cannot allocate memory" errors, but have found nothing useful. What could be causing this problem?
It turns out this was a problem with my Windows share. Perhaps because Windows 7 is a client OS, it is not tuned to serve large numbers of files frequently (which is what is happening in my case).
To fix this, set the following keys in the registry:
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache to 1
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size to 3
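If you prefer the command line over regedit, something along these lines (run from an elevated command prompt) should set both values; a reboot, or at least a restart of the Server service, may be needed for them to take effect:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 3 /f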
I have been running my setup with these new settings for about a week and have not encountered any memory allocation errors since making the change.
Related
My VM backup software is aborting the backup job because of CRC errors in the VHDx. If I try to export the VM, I get the error "Failed to copy file during export". The VM is up and running. If I run chkdsk /f, no errors are reported. The storage hosting the VHDx also does not report any errors. How can I correct these errors so that I can create a backup of this VM? I have 5 VMs in this same Windows Server Failover Cluster, and this issue is happening with only one of them.
Thank you.
I copied the Magento source code from the server to my local machine. Running "bin/magento cache:clean" consumes the entire 16 GB of memory and exits with the error "PHP Fatal error: Allowed memory size of 15032385536 bytes exhausted (tried to allocate 262144 bytes) in Unknown on line 0".
Ubuntu 18.04, PHP 7.4, and Magento 2.4. I am not sure how to fix this; I have tried everything but could not get past it. I really appreciate your time and help in advance.
The remote machine may have an app/etc/env.php with customised caches for Redis and other services.
Try installing a demo store with demo products on your local machine and then compare the app/etc/env.php files to see the cache differences.
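Assuming the demo store was installed under ~/demo-store (the path is a placeholder), a plain diff makes the differences easy to spot:
diff app/etc/env.php ~/demo-store/app/etc/env.php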
I'm trying to set up and learn the Wbadmin command-line prompts for making my own backups. I've set up a test on Server 2008 R2 in VMware and created a separate B: drive for backups. I'm trying to target specific files, and I've created six testFile#.txt files on the C: drive under the !Test folder.
The command that I've used is:
wbadmin start backup -backupTarget:\\localhost\NetworkShare -include:C:\!Test\testFile*
The process starts, but ends up crashing. Screenshot attached below. The logs for both the backup and the error are blank. The main error message is:
There was a failure in updating the backup for deleted items.
The requested operation could not be completed due to a file system limitation
What am I doing wrong? B: was formatted to NTFS, and I've followed the instructions exactly.
So after some research, I found the cause of the error message. The problem came from within the virtual machine itself. The VM or the operating system was not configured for this, so Wbadmin would not accept \\localhost\NetworkShare as the destination.
When I tried backing up to a real network drive, everything worked as planned. The * wildcard, meant to grab only the six testFiles numbered 1-6, worked correctly. In real practice, however, listing each individual file name separated by commas will probably be more useful for others (see the sketch after the log below). Here is the command that worked:
wbadmin start backup -backuptarget:\\(IP address of network)\Public -include:C:\!Test\testFile*
Here was the log report:
Backed up C:\
Backed up C:\!Test\
Backed up C:\!Test\testFile1.txt
Backed up C:\!Test\testFile2.txt
Backed up C:\!Test\testFile3.txt
Backed up C:\!Test\testFile4.txt
Backed up C:\!Test\testFile5.txt
Backed up C:\!Test\testFile6.txt
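For reference, the comma-separated form mentioned above would look something like this (the target share is a placeholder):
wbadmin start backup -backupTarget:\\<server>\Public -include:C:\!Test\testFile1.txt,C:\!Test\testFile2.txt,C:\!Test\testFile3.txt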
I hope this helps someone else.
While trying to compile, build, and boot a custom kernel inside VMware Workstation, booting the new kernel fails and drops to a shell with the error "failed to find disk by uuid".
I tried this with both Ubuntu and CentOS.
Things I tried that didn't help:
Checked the mapping by UUID in the boot entry and its existence in the directory.
Ran update-initramfs.
Replaced root=UUID=<> with root=/dev/sda3.
Is this an issue with VMware Workstation?
How can it be rectified?
I had a similar fault with my own attempts to bootstrap Fedora 22 onto a blank partition using a CentOS install on another partition. I never did solve it completely, but I did find that the problem was in my initrd rather than the kernel.
The problem is that the initrd isn't starting LVM because dracut didn't tell it that it needs LVM. Therefore, if you start LVM manually, you should be able to boot into your system to fix it.
I believe this is the sequence of commands I ran from the emergency shell to start LVM:
vgscan        # scan all disks for LVM volume groups
vgchange -ay  # activate every volume group that was found
lvs           # list the logical volumes to confirm they are now visible
This link helped me remember.
Followed by exit to resume normal boot.
You might have to mount your LVM /etc/fstab entries manually; I don't recall whether I did or not.
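If that does turn out to be necessary, once the system is up, mounting whatever is missing from /etc/fstab should just be:
mount -a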
Try this:
sudo update-grub
Then:
mkinitcpio -p linux
It won't hurt to check your fstab file. There, you should find the UUID of your drive. Make sure you have the proper flags set in the fstab.
Also, there's a setting in grub.cfg that has GRUB use the old style of hexadecimal UUIDs. Check that out as well!
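One quick way to cross-check the UUIDs (the grub.cfg path below assumes Ubuntu; on CentOS it is usually /boot/grub2/grub.cfg):
blkid
grep -i uuid /etc/fstab
grep -i uuid /boot/grub/grub.cfg
The UUID in fstab and on the kernel command line inside grub.cfg should match what blkid reports for the root partition.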
The issue is with the creation of the initramfs. After doing a
make oldconfig
and choosing the defaults for the new options, make sure that enough disk space is available for the image to be created.
In my case, the image that was created was not correct, and hence it was failing to mount at boot time.
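Two quick sanity checks before rebooting (the image names under /boot vary by distro: initrd.img-* on Ubuntu, initramfs-* on CentOS):
df -h /boot /
ls -lh /boot/initr*
The first shows whether there is enough free space for the new image; the second lets you compare the new image's size with the one built for the older kernel.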
When compared, the new image was quite a bit smaller than the existing image from the lower kernel version, so I added another disk with more than enough space, and then
make bzImage
make modules
make modules_install
make install
everything started working like a charm.
I wonder why the image creation completed earlier and produced a corrupt (smaller) image without throwing any error, every single time.
I am trying to build and install Membase from a source tarball. The steps I followed are:
Un-archive the tar membase-server_src-1.7.1.1.tar.gz
Issue make (from within the untarred folder)
Once done, I enter the install/bin directory and invoke the membase-server script.
This starts up the server with a message:
The maximum number of open files for the membase user is set too low.
It must be at least 10240. Normally this can be increased by adding
the following lines to /etc/security/limits.conf:
I tried updating limits.conf as suggested, but no luck; it continues to pop up the same message and continues booting.
Given that the server has started, I tried accessing memcached over port 11211, but I get a connection refused message. I then figured out (via netstat) that memcached is listening on 11210 and tried telnetting to port 11210; unfortunately, the connection is closed as soon as I issue the following commands:
stats
set myvar 0 0 5
Note: I am not getting any output from the commands above. (Yes, stats did not show anything, but I still issued set.)
Could somebody help me build and install Membase from source? Also, why is memcached listening on 11210 instead of 11211?
It would be great if somebody could also give me a step-by-step guide that I can follow to build from source from the Git repository (I have not used autoconf before).
P.S.: I have tried installing from binaries (the Debian package) on the same machines and was able to install and telnet successfully, hence I am not sure why building from source is not working.
You can increase the number of file descriptors on your machine by using the ulimit command. Try doing (you might need to use sudo as well):
ulimit -n 10240
I personally have this set in my .bashrc so that it is always set whenever I start my terminal.
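If you want the limit to apply to the membase user regardless of which shell starts the server, the limits.conf entries the startup message refers to would look something like this (the user name membase is an assumption; use whatever user actually runs the server):
membase  soft  nofile  10240
membase  hard  nofile  10240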
Also, memcached listens on port 11210 by default for Membase. This is because Moxi, the memcached proxy server, listens on port 11211. I'm also pretty sure that the memcached version used for Membase only speaks the binary protocol, so you won't be able to telnet to 11210 and have commands work correctly. Telnetting to 11211 (Moxi) should work, though.
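So a plain-text test against Moxi, rather than memcached itself, should behave as expected (assuming Moxi is up on its default port; the line after set is the 5-byte value, and the server should reply STORED):
telnet localhost 11211
stats
set myvar 0 0 5
hello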