I have a Joomla website hosted on one of my LAMP servers. Somehow Joomla is "eating" up drive space and I am constantly receiving the error: "This domains disk limit has been exceeded!"
I tried to increase the disk limit (I use ZPanel) and it worked, but after a few days the same error screen came back.
I would appreciate any help. Thanks.
Check your files.
See how /images/ has grown over time (it really depends on how much content you upload), along with any other components that create content.
Examine /tmp/ and empty it.
Check the /cache/ folder.
Joomla! core files don't usually grow over time in a significant way.
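If you have shell access to the server, a quick way to see which folder is actually growing is to compare directory sizes from the Joomla root. A minimal sketch (the folder names are just the usual suspects in a default Joomla layout):
du -sh * | sort -h
du -sh images tmp cache
Run it now and again in a few days, and the offending directory should stand out.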
I've noticed that Macs and the Flash plugin conflict when doing multiple uploads.
There's no problem when it's only 3 to 5 files, but when the number of files is higher, Uploadify just stops uploading. The plugin more or less crashes.
Strangely, on Windows there is no problem at all with multiple uploads.
I upgraded Flash on both Mac and Windows. I checked the session IDs, I checked this forum and the forum at Uploadify.
But I can't find anything else to search for...
Can somebody give me some clues about what to do?
Thanks,
Dave
I am also using Uploadify v3.2.1. I came here looking into why it isn't working for some Mac users, but stumbled upon your question, and since it has no feedback yet, I'll try to point you toward something that could be a solution...
Are you using PHP? If so, did you verify that the total POST size is not going above what PHP allows?
The php.ini file has a setting for the POST size limit. If the combined size of the images in one request goes beyond what is set in php.ini, that may be causing this, since you only get the failure when uploading more than 3-5 files (which can add up to a size over that limit) and not when uploading fewer. Try raising the POST size limit and then retrying the file set that fails right now.
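For example, the relevant php.ini directives look like this (the values are only illustrative; pick something comfortably larger than your biggest batch of files):
post_max_size = 32M
upload_max_filesize = 32M
upload_max_filesize caps each individual file, while post_max_size caps the whole request, so the latter is the one most likely to be hit if several files go up in a single POST. Remember to restart the web server (or PHP-FPM) after changing php.ini.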
We're building a Windows-based application that traverses a directory structure recursively, looking for files that meet certain criteria and then doing some processing on them. In order to decide whether or not to process a particular file, we have to open that file and read some of its contents.
This approach seems great in principle, but some customers testing an early version of the application have reported that it's changing the last-accessed time of large numbers of their files (not surprisingly, as it is in fact accessing the files). This is a problem for these customers because they have archive policies based on the last-accessed times of files (e.g. they archive files that have not been accessed in the past 12 months). Because our application is scheduled to run more frequently than the archive "window", we're effectively preventing any of these files from ever being archived.
We tried adding some code to save each file's last-accessed time before reading it, then write it back afterwards (hideous, I know) but that caused problems for another customer who was doing incremental backups based on a file system transaction log. Our explicit setting of the last-accessed time on files was causing those files to be included in every incremental backup, even though they hadn't actually changed.
So here's the question: is there any way whatsoever in a Windows environment that we can read a file without the last-accessed time being updated?
Thanks in advance!
EDIT: Despite the "ntfs" tag, we actually can't rely on the filesystem being NTFS. Many of our customers run our application over a network, so it could be just about anything on the other end.
The documentation indicates you can do this, though I've never tried it myself.
To preserve the existing last access time for a file even after accessing a file, call SetFileTime immediately after opening the file handle with this parameter's FILETIME structure members initialized to 0xFFFFFFFF.
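In code that would look roughly like the sketch below (untested; the function name is just a placeholder, and note that the handle needs FILE_WRITE_ATTRIBUTES access in addition to read access for SetFileTime to succeed):

#include <windows.h>

// Read a file without disturbing its last-access time (sketch).
void ProcessFileWithoutTouchingAccessTime(const wchar_t* path)
{
    // Open for reading, plus FILE_WRITE_ATTRIBUTES so SetFileTime is permitted.
    HANDLE hFile = CreateFileW(path, GENERIC_READ | FILE_WRITE_ATTRIBUTES,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return;

    // 0xFFFFFFFF in both members means "leave the last-access time
    // unchanged for operations performed on this handle".
    FILETIME ftUnchanged = { 0xFFFFFFFF, 0xFFFFFFFF };
    SetFileTime(hFile, NULL, &ftUnchanged, NULL);   // creation, last-access, last-write

    // ... ReadFile() and the usual content checks go here ...

    CloseHandle(hFile);
}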
From Vista onwards, NTFS does not update the last access time by default. To enable it, see http://technet.microsoft.com/en-us/library/cc959914.aspx
Starting an NTFS transaction and rolling it back is very bad, and the performance will be terrible.
You can also control this from the command line:
FSUTIL behavior set disablelastaccess 0
(a value of 0 enables last-access updates; 1 disables them).
I don't know what your client minimum requirements are, but have you tried NTFS transactions? On the desktop, the first OS to support them was Vista, and on the server it was Windows Server 2008. But it may be worth a look.
Start an NTFS transaction, read your file, roll back the transaction. Simple! :-) I actually don't know if it will roll back the last-access date, though. You will have to test it for yourself.
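If you want to experiment, the rough shape would be something like this (an untested sketch using the Kernel Transaction Manager API; the function name is a placeholder, and whether the rollback really restores the last-access time is exactly the open question above):

#include <windows.h>
#include <ktmw32.h>     // CreateTransaction / RollbackTransaction; link with KtmW32.lib

// Open and read a file inside a transaction, then throw the transaction away (sketch).
void ReadFileInsideRolledBackTransaction(const wchar_t* path)
{
    HANDLE hTx = CreateTransaction(NULL, NULL, 0, 0, 0, 0, NULL);
    if (hTx == INVALID_HANDLE_VALUE)
        return;

    HANDLE hFile = CreateFileTransactedW(path, GENERIC_READ, FILE_SHARE_READ,
                                         NULL, OPEN_EXISTING,
                                         FILE_ATTRIBUTE_NORMAL, NULL,
                                         hTx, NULL, NULL);
    if (hFile != INVALID_HANDLE_VALUE)
    {
        // ... ReadFile() and inspect the contents here ...
        CloseHandle(hFile);
    }

    RollbackTransaction(hTx);   // discard everything done inside the transaction
    CloseHandle(hTx);
}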
Here is a link to an MSDN Magazine article on NTFS transactions, which includes further links: http://msdn.microsoft.com/en-us/magazine/cc163388.aspx
Hope it helps.
We have an exe file delivered by an ASP.NET application. This binary is actually modified on the fly in memory. Is there any way to sign the modified exe with Authenticode in memory, without writing it to disk? There's probably no way to sign the original exe and still keep the signature valid after the modification. We thought about using a RAM disk to help with disk I/O if we have to, but I'm wondering if there are any other options.
The problem is really how to get rid of the "Unknown Publisher" warning, so if there is any other way that does not involve signing or changing policy settings on the client's computer, please let me know as well.
I don't know the answer to this offhand, but I've seen it done by Just Great Software. They make customized installers for RegexBuddy, and every time I've downloaded mine it has had a valid signature.
I'm curious though - why don't you want to persist the file to disk? You don't need to leave it there - persist it, sign it, load it back into memory and delete it. Or persist it and have an agent or cron job delete it after a couple of days.
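For the persist-then-sign step, the signing itself would typically be a signtool call along these lines (the certificate file, password, timestamp server and exe name are all placeholders here):
signtool sign /f mycert.pfx /p mypassword /t http://timestamp.example.com/authenticode modified.exe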
I'm experiencing painfully slow operations with one of our SVN repositories/projects.
For example, it's taking 5-10 minutes to revert the changes in one small file (10 KB), or about 40-60 minutes to check out a project of around 100 MB.
There are about 30 other projects on the same server, some vastly bigger than this one, and none of them perform like this.
One thing to note is that this is a Magento project. It's not very large in terms of disk space, but it has 23k files and 11k folders, and I have read that SVN performs badly when there are lots of little files; is this true? And is there anything I can do to speed things up?
The Subversion working copy performs quite badly when there's a huge number of directories, as in your case. For write operations (even purely local ones) in the working copy, the working copy has to be locked, which means a lock file is created in every directory (that's 11k file creates), then the action executes, and then those 11k files are deleted again.
Subversion 1.7 is moving to a different working-copy format that should resolve these problems. Until then, there are a few tricks you might try to speed things up, such as excluding the working copy from your virus scanner, disabling file monitors on the directory (like TortoiseSVN's TSVNCache), and reducing the total number of directories, perhaps by checking out a few separate or sparse working copies, as sketched below.
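For the last point, one way to split things up (assuming a Subversion 1.5+ client with sparse-directories support; the repository URL and paths are only illustrative) is something like:
svn checkout --depth immediates http://svn.example.com/repo/trunk magento-wc
svn update --set-depth infinity magento-wc/app/code
That gives you the top-level layout without pulling every directory down, and you deepen only the parts you actually work on.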
There is a known issue with the use of the recycle bin with revert which causes slow reverting. Emptying your recycle bin and setting TortoiseSVN not to use it during revert operations both speed up this operation (see http://www.nabble.com/Revert-is-too-slow-td18222196.html).
This has definitely sped up my revert operations.
I experienced extreme slowness with Subversion on Windows after changing my password. I had to delete all directories and files from %APPDATA%\Subversion\auth.
Now SVN is fast as a hare. My slowness occurred via both TortoiseSVN and the command line.
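If it helps anyone, clearing those cached credentials is just a matter of deleting that folder, e.g. from a Windows command prompt (same path as above):
rd /s /q "%APPDATA%\Subversion\auth"
SVN will prompt for the password again on the next operation that contacts the server.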
SVN is slow if you use NFS (Network File System) for the working copy. This could be your problem.
We faced a similar issue; the problem was TortoiseSVN (version 1.9.7). For example, the repository browser took about 10 minutes just to initialize.
We turned off the Show Locks feature and everything was fixed!
Right-click on a folder and select TortoiseSVN > Settings, then General > Dialogs 3, and deselect Show Locks.
Some good hints can also be found at http://tigris-scm.10930.n7.nabble.com/Workaround-for-slow-RepositoryBrowser-on-large-repositories-td92324.html
Reverting changes in SVN is a local operation which shouldn't go to the server at all. So it sounds as though the problem is in your working copy of the project.
Try running 'svn cleanup' in the working copy; you may also want to check if you have problems with the hard drive or filesystem.
Our SVN was running painfully slow through TortoiseSVN, Eclipse and the command line. Commits and exports were slow. Our Zend Framework-based PHP projects would take an age to update, and a small commit of about three files would take 5-10 minutes.
Our SVN virtual machine (CentOS) only had 700 MB of RAM, which had seemed reasonable for a command-line-only Linux box running Subversion via Apache, and it had been running fine for about a year. We've only got about 20 projects and only three developers.
I've upped it to 1.5 GB of RAM and things are running much faster now, back to our old speeds.
I also suffered a large slowdown after upgrading to TortoiseSVN 1.7.3.
Then I discovered I had a separate install of SVN 1.6.5. I uninstalled both and reinstalled TortoiseSVN and now things are much better. First update of the day in TortoiseSVN is still slow (1-2 minutes), but fast after that.
I have some projects which use the Eclipse IDE. If you capture the Eclipse project directories, you get hundreds and hundreds of tiny files, which has the same effect on my project as you're seeing on yours.
I think that when you check files out, SVN does so one at a time, which means that projects with huge numbers of files are always going to be slow, and there's not much you can do about it (aside from avoiding frequent whole-repository operations).
Making changes to a single file shouldn't be slow though.
You may try the suggestions in another post on Stack Overflow about slow SVN. It could also be due to using a BDB database.
During the installation of our software, on at least one PC, some files are replaced by HTML and Python content of exactly the same size as the original files; the creation date is exactly the same as on the other files. I suspect the antivirus (AVG Free) might be at fault; it looks like the antivirus allocates memory to analyse the file, fails to read it, and then writes whatever was in memory at that location instead of the file's contents.
I don't own that PC, so I have limited diagnostic possibilities. Has anyone experienced similar issues and found a workaround? Maybe renaming the files to a different extension? I'm open to your suggestions...