clear phpThumb cache regularly with cron job - caching

I am using phpThumb on a client website, and as it is a very image-heavy application the cache gets huge quickly. Today the thumbs stopped working and I had to rename the cache folder, as it was too big to delete via FTP. I renamed it cache_old and am trying to delete it now via SSH. I recreated the cache folder and everything worked fine again.
Since it seems it stops working when the cache folder is too full, plus just to keep the server tidy, I would like to set up a daily cron job to clear files from the cache folder. I have no idea how to do this though and haven't been able to find an answer yet.
The cache folder has a file in it called index.php which I assume needs to stay, plus a subfolder called source, which again has a file called index.php, which again I assume needs to be there. So I need a command that will delete everything BUT those files.
Any guidance on how to set this up would be appreciated!
Thanks,
Christine
P.S. The site is hosted on DreamHost, and I have set other jobs up via their cron job panel, and I do have SSH access if setting it up that way is easier. Cheers!!

It's possible to do this in a single command, but this way it's more obvious.
rm `find /path_to_cache_folder/ -type f | grep -v 'index.php'`
rm `find /path_to_cache_folder/source -type f | grep -v 'index.php'`
or in one cron job
rm `find /path_to_cache_folder/ -type f | grep -v 'index.php'` && rm `find /path_to_cache_folder/source -type f | grep -v 'index.php'`
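If the cache grows to a very large number of files, or contains names with spaces, the backtick expansion above can overflow the shell's argument limit or complain when find returns nothing. A safer variant (just a sketch, assuming the GNU find available on DreamHost's Linux servers) lets find do the deleting itself and skips any file named index.php at any depth, so it also covers the source subfolder:
find /path_to_cache_folder/ -type f ! -name 'index.php' -delete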

Related

"No such file or directory" in bash script

FINAL EDIT: The $DATE variable was what screwed me up. For some reason when I reformatted it, it works fine. Does anyone know why that was an issue?
Here's the final backup script:
#!/bin/bash
#Vars
OUTPATH=/root/Storage/Backups
DATE=$(date +%d-%b)
#Deletes backups that are more than 2 days old
find "$OUTPATH"/* -mtime +2 -type f -delete
#Actual backup operation
dd if=/dev/mmcblk0 | gzip -1 - | dd of="$OUTPATH"/bpi-"$DATE".img.gz bs=512 count=60831745
OLD SCRIPT:
#!/bin/bash
#Vars
OUTPATH=~/Storage/Backups
DATE=$(date +%d-%b_%H:%M)
#Deletes backups that are more than 2 days old
find "$OUTPATH"/* -mtime +2 -type f -delete
#Actual backup operation
dd if=/dev/mmcblk0 | gzip -1 - | dd of="$OUTPATH"/bpi_"$DATE".img.gz bs=512 count=60831745
This is a script to back up my Banana Pi image to an external hard drive. I am new to bash scripting, so I know this is most likely an easy fix, but here is my issue:
I am running the script from ~/scripts
and the output path is ~/Storage/Backups (the mount point for the external HDD, specified in my /etc/fstab).
The commands work fine when OUTPATH=., i.e. it just backs up to the current directory that the script is running from. I know I could just move the script to the backup folder and run it from there, but I am trying to add this to my crontab, so it would be good to keep all my scripts in one directory, just for organizational purposes.
Just wondering how to correctly make the script write my image to the path in that $OUTPATH variable.
EDIT: I tried changing the $OUTPATH variable to a test directory located on /dev/root/ (the same device the script itself is on) and it worked, so I'm thinking it's just an issue with writing the image to a device different from the one the script is located on.
My /etc/fstab line relating to the external HDD I would like to use is as follows:
/dev/sdb1 /root/Storage exfat defaults 0 0
The /root/Storage/Backups folder is where I am trying to write the image to
Populate OUTPATH with the full pathname of your backups directory.
In
OUTPATH=~/Storage/Backups
tilde expansion is not performed later when "$OUTPATH" is used in
find "$OUTPATH"/* ...
You can either replace the ~ with the full path in OUTPATH, or replace $OUTPATH with the actual path in the find command.
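A quick way to confirm what the variable actually holds is to echo it before using it. A minimal sketch, assuming the full mount-point path from the fstab line in the question:
OUTPATH=/root/Storage/Backups
echo "Backing up to: $OUTPATH"   # should print the full path, not a literal ~
find "$OUTPATH" -mtime +2 -type f -delete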

command rename does not work on my bash script

Yesterday I asked a question here: How can I run a bash in every subfolder of a base folder. My main problem was solved, but I have another one: I don't know why, but the rename command does NOTHING if I try to use it recursively. I've tried all the different options they told me and others I found, and if I run the rename on a single directory it works fine (so the line is OK), but I can't make it work recursively.
The question of optimizing images doesn't matter now because I changed the script to do it first. Now I have all the images like this: image.png (which is the original) and image-nq8.png (which is the optimized one).
What I want now is to have the optimized one take the name of the original, and the original deleted. But all my attempts at it fail and I don't know why.
I made a script, scriptloop:
for i in $(find /path/to/start/ -name "*.png"); do
    rename -nq8.png .png *-nq8*
done
and call it this way: ./scriptloop
and tried too using: find . -name '*-nq8.png' -print0 | xargs -0 -P6 -n1 scriptOneLine
with this inside scriptOneLine: rename -nq8.png .png *-nq8*
Note: as I said, if I run rename -nq8.png .png *-nq8* on a single directory it works, but I can't make it work recursively. Any idea why, or what am I doing wrong? (I'm on Fedora)
Thank you so much
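For anyone hitting the same wall: the loop in scriptloop never actually uses the $i it gets from find, and the glob *-nq8* only matches files in the directory the script is started from. One possible recursive approach (an untested sketch, assuming the util-linux rename that ships with Fedora; the path is a placeholder) is to let find run the rename inside each directory that contains an optimized copy:
find /path/to/start -type f -name '*-nq8.png' -execdir rename -- '-nq8.png' '.png' '{}' \;
The -- stops rename from treating -nq8.png as an option, and because the rename overwrites an existing target, image-nq8.png should end up replacing the original image.png in one step.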

Fix permissions for website directory mac osx 10.9

I have apparently messed up the permissions of my development environment and can no longer get the web site I'm working on to come up on localhost. There are a lot of files to fix and I do not want to have to fix them all manually through Finder. Is there a way to fix them all at one time? I'm sure there is a command I could use, but I'm not that familiar with the command line.
I am on a Mac running OSX 10.9
Help please
This is an easy one to fix, especially for WordPress permissions.
Open up a terminal (/Applications/Utilities/Terminal.app). You would then change directory to where you keep your development sites.
cd /path/to/where/you/keep/your/Site
Then issue the following two commands in your site's directory:
find . -type d -print0 | xargs -0 chmod 755
find . -type f -print0 | xargs -0 chmod 644
This will recursively set the permissions to what Apache expects: 755 for directories and 644 for files.

Need to remove *.xml from unknown directories that are older than x days

We have a directory:
/home/httpdocs/
In this directory there may be directories, subdirectories, or subdirectories of subdirectories, and so on, that contain XML files (files that end in .xml). We do not know which directories contain XML files, and these directories contain a massive amount of files.
We want to archive all of those files and remove them from the actual directories, so that we only keep the last 7 days' worth of XML files in the directories mentioned above.
It was mentioned to me that logrotate would be a good option for this. Is that the best way to do it, and if so, how would we set it up?
Also, if not using logrotate, can this be scripted? Can the script be run during production hours or will it bog down the system?
Sas
find -name "*.xml" -mtime +7 -print0 | tar -cvzf yourArchive.tar.gz --remove-files --null --files-from -
This will create a gzip-compressed tar file, 'yourArchive.tar.gz', containing all *.xml files in the current directory and at any depth of subdirectory that were not changed during the last 24*7 hours; after the files have been added to the tar archive they are deleted.
Edit:
Can this script be run during production hours or will it bog down the
system?
Depends on your system, actually. This does create a lot of I/O load. If your production system uses a lot of I/O and you don't happen to have a fantastic I/O subsystem (like a huge RAID system connected over Fibre Channel or the like), then this will have some noticeable impact on your performance. How bad it is depends on further details, though.
If system load is an issue, then you could create a small database that keeps track of the files, maybe using inotify, which can run in the background over a longer period of time, being less noticeable.
You can also try to set the priority of the mentioned processes using renice, but since the problem is I/O and not CPU (unless your CPU is weak and your I/O is really great for some reason), this might not lead to the desired effect. The next best option would then be to write your own script that crawls the file tree and is decorated with sleeps, as sketched below. It will take some time to complete but will have less impact on your production system. I would not recommend any of this unless you are really under pressure to act.
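A rough idea of such a sleep-decorated crawler, purely as a sketch (the archive directory and the one-second pause are arbitrary placeholders, and identically named files from different directories would collide in this flat layout):
#!/bin/bash
# Walk the tree slowly, moving week-old XML files into an archive
# directory, pausing between files to keep the I/O load down.
ARCHIVE=/home/xml-archive          # hypothetical destination
mkdir -p "$ARCHIVE"
find /home/httpdocs -name '*.xml' -mtime +7 -print0 |
while IFS= read -r -d '' file; do
    mv "$file" "$ARCHIVE"/
    sleep 1                        # throttle: one file per second
done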
Use find /home/httpdocs -name "*.xml" -mtime +7 -exec archive {} \; where archive is a program that archives and removes an XML file.
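The archive program is not spelled out above; a minimal sketch of what it could look like (the destination path and naming scheme are just assumptions) is:
#!/bin/bash
# archive: gzip one XML file into a backup location, then remove the original.
# Usage: archive /home/httpdocs/some/dir/file.xml
set -e
DEST=/home/xml-backup              # hypothetical archive location
mkdir -p "$DEST"
file="$1"
gzip -c "$file" > "$DEST/$(basename "$file").$(date +%Y%m%d).gz"
rm "$file"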
It'll probably be easiest to do this with find and a cron job.
The find command:
find /home/httpdocs -name \*.xml -ctime +7 -exec mv -b -t /path/to/backup/folder {} +
This will move any file ending in .xml within the /home/httpdocs tree to the backup folder you provide, making a backup of any file that would be overwritten (-b).
Now, to set this up as a cron job, run crontab -e as a user who has write permissions on both the httpdocs and backup folders (probably root, so sudo crontab -e). Then add a line like the following:
14 3 * * * find /home/httpdocs -name \*.xml -ctime +7 -exec mv -b -t /path/to/backup/folder {} +
This will run the command at 3:14am every day (change the 3 and 14 for different times). You could also put the find command into a script and run that, just to make the line shorter.

Magento - Duplicated live site to a development server but it redirects to live site?

I hope you can help get to the bottom of this. This is what's happened:
We duplicated our live Magento site (for example we'll call it domain1.com) to a development server (for this example I'll call this domain2.com)
Did a find/replace for the domain1.com to domain2.com in both the database and files
Deleted all var/cache and var/session files
Reindexed all indexes via SSH
Emptied browser cache
Checked all file permissions
Disabled the .htaccess in case this was causing a redirect
But it's still redirecting to the live server (domain1.com)??
Any ideas what may be triggering this?
Cheers,
Dave
Solved the issue...hopefully this helps some other people.
When the site was duplicated we edited the "local.xml" file with the new details but kept the old one and renamed it "localnew.xml".
It seems Magento was still picking up these details and so was redirecting to the old site.
Deleting the "localnew.xml" (or whatever you called it eg. localxxx.xml) fixed all our issues!
I'm working on a local CentOS 6.4 running in a VM at the moment to test various things.
My production Magento site is in this folder: /var/www/html/magento
and the staging site is here: /var/www/html/staging/magento
I had the same issue as you, did everything I could come up with, triple-checked the URLs in the database, cleared the cache, checked the .htaccess file, and rebooted multiple times.
The one thing that fixed the issue was to set the permissions back properly:
cd /var/www/html/staging/magento
find . -type f -exec chmod 644 {} \;
find . -type d -exec chmod 755 {} \;
chown -R sysadmin:apache .
chmod -R 777 media app/etc var var/.htaccess
Just open your database and open the table "core_config_data",
then edit the values for the paths web/secure/base_url and web/unsecure/base_url.
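For example, from the shell (a sketch only; the database name, user, and new URL are placeholders you would need to adjust):
mysql -u magento_user -p magento_db -e "
  UPDATE core_config_data
  SET value = 'http://domain2.com/'
  WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');"
Then clear var/cache again so Magento picks up the new values.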
