Wbadmin backup failed due to a file system limitation - Windows

I'm trying to set up and learn the Wbadmin command-line tool for making my own backups. I've created a test environment on Server 2008 R2 in VMware, with a separate B: drive for backups. I'm trying to target specific files, and I've created six testFile#.txt files on the C: drive under the !Test folder.
The command that I've used is:
wbadmin start backup -backupTarget:\\localhost\NetworkShare -include:C:\!Test\testFile*
The process starts, but ends up crashing. Screenshot attached below. The logs for both the backup and the error are blank. The main error message is:
There was a failure in updating the backup for deleted items.
The requested operation could not be completed due to a file system limitation
What am I doing wrong? B: was formatted to NTFS, and I've followed the instructions exactly.

After some research, I found the cause of the error message. The problem came from within the virtual machine itself. The VM (or the operating system) was not configured to serve a network share, so Wbadmin would not accept \\localhost\NetworkShare as a destination.
When I tried backing up to a real network drive, everything worked as planned. The * wildcard correctly grabbed only the six testFiles numbered 1-6. In real practice, though, listing each individual file name separated by commas will probably be more useful for others (see the example after the log below). Here is the command that worked:
wbadmin start backup -backupTarget:\\(IP address of network)\Public -include:C:\!Test\testFile*
Here was the log report:
Backed up C:\
Backed up C:\!Test\
Backed up C:\!Test\testFile1.txt
Backed up C:\!Test\testFile2.txt
Backed up C:\!Test\testFile3.txt
Backed up C:\!Test\testFile4.txt
Backed up C:\!Test\testFile5.txt
Backed up C:\!Test\testFile6.txt
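For reference, the comma-separated form mentioned above would look like this (using the same test files from this example):
wbadmin start backup -backupTarget:\\(IP address of network)\Public -include:C:\!Test\testFile1.txt,C:\!Test\testFile2.txt,C:\!Test\testFile3.txt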
I hope this helps someone else.

Related

Windows 10 - How can I fix - C:\ Ace entries not in canonical order

I have run into an issue with my Windows backup software that I believe is due to a problem with permissions on my C:\ drive. I am using Cygwin, which can cause issues like this.
The problem manifests itself as the Veritas SRS software being unable to snapshot the C:\ drive due to issues with VSS.
When attempting to run a utility that diagnoses and fixes issues with VSS (vss-doctor by Acronis), it indicates that the SYSTEM account does not have Full Control access to C:\
When I try to run the fix, the utility complains that the access control list is not in canonical order. I can confirm this by running:
C:\>icacls C:\ /verify
C:\: Ace entries not in canonical order.
Successfully processed 0 files; Failed processing 0 files
When I try to reset, I receive this error (I am running the command prompt in Administrator mode):
C:\>icacls C:\ /reset
C:\: Access is denied.
Successfully processed 0 files; Failed processing 1 files
What can I do to correct this problem?
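One direction that is commonly suggested for this situation (a sketch only, not a verified fix for this exact case) is to take ownership of the root first, then retry the reset and verification from the same elevated prompt:
takeown /f C:\ /a
icacls C:\ /reset
icacls C:\ /verify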

Analyze the systemd journal of a crashed / dead system

I recently upgraded a system. After the reboot I was not able to log in again; all users were rejected with Login incorrect. systemd with journaling was running and writing error messages to files in /var/log/journal as usual.
So I booted a system from a recovery USB stick (same distribution), mounted the root device of the failed system at /mnt, and tried to analyze the logs with journalctl --root=/mnt/var/log/journal -xe. journalctl did not find any journal files.
Question: how can I read the systemd journal content of a dead system using a recovery system?
Have fun
I may be a bit late, but I stumbled upon this question and here is what I found:
journalctl logs are located in /var/log/journal/*
The journalctl tool can read foreign journal files with the following switches:
--file= followed by the *.journal file of your choice; a file glob also works, and the option may be used multiple times
--root= followed by the root directory of your choice, typically a mounted partition
--image= followed by a disk image
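Putting that together, with the dead system's root mounted at /mnt, any of the following should work (note that --root= expects the mounted root itself, not the journal directory inside it; -D is the short form of --directory=):
journalctl --root=/mnt -xe
journalctl -D /mnt/var/log/journal -xe
journalctl --file='/mnt/var/log/journal/*/system.journal' -xe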

Installing Oracle Client 12c release 2 Error

I downloaded the Oracle 12c client 64-bit version from the official Oracle site and tried to install it. As soon as I run the setup file, it gives the following error and stops running the setup. Could someone suggest the reason for this?
The installer copies a set of files required to initiate the installation process to a temporary area. This avoids a chicken-and-egg problem when it comes to doing de-installations, i.e., running a de-installer from the media you are trying to de-install.
The error above suggests that you could not write to your standard TEMP area, or it was full. So you could try something like:
create a folder c:\tmp (assuming c:\ has plenty of space)
set TEMP and TMP to c:\tmp using PC => Manage => Advanced => Environment Variables
run setup.exe as before
You should then see something like:
"Preparing to run installer from c:\tmp\OraInstall..."

Obtain a full remote file size from a running remote process using command-line tools

I need to get the file size of a remote executable file whose process is running on a remote XP machine.
It must be done from a Windows system, using only a batch file, entirely from the command line.
sigcheck.exe cannot be used because it does not work on remote files.
I cannot even map the remote disk to do that.
I hope someone has a good solution.
Thanks in advance.
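One avenue worth trying (an assumption on my part: it requires WMI/DCOM access and administrator credentials on the XP machine, and the machine and process names below are hypothetical) is wmic, which can query both the running process and the file size without mapping a drive:
wmic /node:xpmachine /user:Administrator process where name="target.exe" get ExecutablePath
wmic /node:xpmachine /user:Administrator datafile where name="C:\\Program Files\\Target\\target.exe" get FileSize
Note that the datafile class expects doubled backslashes in the path.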

require_once: cannot allocate memory

I have a pretty "unconventional" setup that looks like this:
I have a VMware virtual machine running Ubuntu Desktop 12.10 with networking set to bridge. Installed inside the VM is nginx, php-fpm, php-cli and MySQL. PHP is 5.4.10.
I have a folder containing all the files for the application I am working on in the host (Windows 7). This folder is then made available as a network share.
The virtual machine then mounts the windows share via samba into a directory. This directory is then served by nginx and php-fpm. So far, everything has worked fine.
I have a script that I run via the PHP CLI to help me build and process some files. Previously, this script worked fine on my Windows host. However, it seems to throw cannot allocate memory errors when I run it in the Ubuntu VM. The weird thing is that it is sporadic and does not happen all the time.
user@ubuntu:~$ sudo /usr/local/php/bin/php -f /www/app/process.php
Warning: require_once(/www/app/somecomponent.php): failed to open stream: Cannot allocate memory in /www/app/loader.php on line 130
Fatal error: require_once(): Failed opening required '/www/app/somecomponent.php' (include_path='.:/usr/local/php/lib/php') in /www/app/loader.php on line 130
I have checked and confirmed the following:
/www/app/somecomponent.php definitely exists.
The permissions for /www and all files and sub directories inside are set to execute and read+write for owner, group and others.
I have turned off APC after reading this question to see if APC is the cause, but the problem still persists after doing so.
php-cli is using the same php.ini as php-fpm, which is located in /etc/php/php.ini.
memory_limit in php.ini is set to 128M (default) which seems plenty for the app.
Even after increasing the memory limit to 256M, the error still occurs.
The VM has 2GB of memory.
I have been googling to find out what causes cannot allocate memory errors, but have found nothing useful. What could be causing this problem?
It turns out this was a problem with my Windows share. Perhaps because Windows 7 is a client OS, it is not tuned to serve large numbers of files frequently (which is what is happening in my case).
To fix, set the following keys in the registry:
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache to 1
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size to 3
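For anyone who wants to script this instead of editing the registry by hand, the equivalent reg add commands should be the following (run from an elevated prompt; a reboot is typically needed before the LanmanServer change takes effect):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 3 /f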
I have been running my setup with these new settings for about a week and have not encountered any memory allocation errors since making the change.
