Delete all product images in Magento

I uploaded a product image in Magento but found out it was no good, so I deleted it and uploaded another image with the same name. Now the new image gets a _1 appended to its name; of course, this happens when an image with that name already exists.
Does anybody know how I can delete all product images?

Well, if you want to get rid of the image files for products, you need to go to your server (via SSH, FTP, or a control panel such as cPanel) and delete all the contents of /media/catalog/product. Using SSH, you could do it this way:
ssh login@host [-p {port}]   # -p is only needed if the port is not the default 22; 2222 and 2223 are also common
cd /path/to/your/magento/root
rm -rf media/catalog/product/*
But bear in mind that this will delete ALL product images in your Magento instance and may cause errors if some of these files are still set as current product images.

You need to remove them in two places to avoid errors. Note: the following will remove ALL images from ALL your products, so please make a backup first.
First, remove them from the database. You can do this by running the following in phpMyAdmin or whatever client you prefer:
TRUNCATE TABLE `catalog_product_entity_media_gallery`
TRUNCATE TABLE `catalog_product_entity_media_gallery_value`
Second, remove your media/catalog folder. With SSH, you would use:
rm -rf media/catalog
rm -rf media/tmp
rm -rf media/import   # only if you used an import process to load your product images

Try this extension:
http://www.magentocommerce.com/magento-connect/image-clean.html
On sites with a large product count it works slowly, but it can delete images from the hard disk (those images that have already been deleted in Magento).

Remove the files from
media/catalog/product/
media/tmp/catalog/product/
then clear the cache.
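For reference, a minimal shell sketch of those steps, assuming a standard Magento 1 directory layout with the default file-based cache (the Magento root path is a placeholder):
cd /path/to/your/magento/root
rm -rf media/catalog/product/*          # product images
rm -rf media/tmp/catalog/product/*      # temporary copies created during upload
rm -rf var/cache/*                      # flush the file-based cache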

Related

Merge fastq.gz files with the same name in different locations in Google Cloud

I would like to merge several fastq.gz files that have the same name but sit in different folders in Google Cloud. I have a total of 15 patients. Each patient has paired-end data, "R1" and "R2", and each R1 and R2 is split into 4 files. The size of each file is approximately 28 GB.
My goal is to merge the 4 files to obtain the complete fastq.gz R1 and R2 files for each patient.
I have never worked with Google Cloud before.
Here is how the folders and files are organized in the bucket (example with 2 patients):
gs://bucketID
/folder1
/folder001
Patient1_R1.fastq.gz
Patient1_R2.fastq.gz
/folder002
Patient2_R1.fastq.gz
Patient2_R2.fastq.gz
etc.
/folder2
/folder003
Patient1_R1.fastq.gz
Patient1_R2.fastq.gz
/folder004
Patient2_R1.fastq.gz
Patient2_R2.fastq.gz
etc.
/folder3
/folder005
Patient1_R1.fastq.gz
Patient1_R2.fastq.gz
/folder006
Patient2_R1.fastq.gz
Patient2_R2.fastq.gz
etc.
/folder4
/folder007
Patient1_R1.fastq.gz
Patient1_R2.fastq.gz
/folder008
Patient2_R1.fastq.gz
Patient2_R2.fastq.gz
etc.
I want to write a script that targets the fastq.gz files with the same name in the different folders and then merges them. However, I have no idea how to do this on Google Cloud.
Here's how I see the bash script:
bucket="bucketID"
dir1=$bucket/"folder1"
dir2=$bucket/"folder2"
dir3=$bucket/"folder3"
dir4=$bucket/"folder4"
destdir=$bucket/"destdir"
participants=(Patient1
Patient2
)
for i in "${participants[@]}";
do
zcat $dir1/.../${i}_R1.fastq.gz $dir2/.../${i}_R1.fastq.gz $dir3/.../${i}_R1.fastq.gz $dir4/.../${i}_R1.fastq.gz | gzip > $destdir/merged_${i}_R1.fastq.gz
zcat $dir1/.../${i}_R2.fastq.gz $dir2/.../${i}_R2.fastq.gz $dir3/.../${i}_R2.fastq.gz $dir4/.../${i}_R2.fastq.gz | gzip > $destdir/merged_${i}_R2.fastq.gz
done
Should I use "gsutil compose" instead to merge?
At the end, I would like to have only two files R1 and R2 for each patient: merged_patient#_R1.fastq.gz and merged_patient#_R2.fastq.gz.
In the example I gave above, it would give 4 files:
merged_Patient1_R1.fastq.gz
merged_Patient1_R2.fastq.gz
merged_Patient2_R1.fastq.gz
merged_Patient2_R2.fastq.gz
Thank you!
I would recommend using the following command to concatenate your files:
gsutil compose gs://bucket/obj1 [gs://bucket/obj2 ...] gs://bucket/composite
You can check the documentation in this link.
I've tried a simple bash script using the "gsutil compose" command with fastq.gz files, and it worked fine for me.
The compose command creates a new object whose content is the concatenation of a given sequence of source objects, all within the same bucket.
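For instance, a minimal sketch reusing the bucket and file names from the question (all sources and the destination must be in the same bucket):
gsutil compose gs://bucketID/folder1/folder001/Patient1_R1.fastq.gz \
    gs://bucketID/folder2/folder003/Patient1_R1.fastq.gz \
    gs://bucketID/destdir/merged_Patient1_R1.fastq.gz
Because gzip allows concatenated members, the composed .fastq.gz still decompresses as one continuous stream.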
Hope this helps!
OK, I found the solution with gsutil compose:
declare -a participantsArray=("Patient1"
"Patient2"
)
bucket="bucketID"
dir1=$bucket/"folder1"
dir2=$bucket/"folder2"
dir3=$bucket/"folder3"
dir4=$bucket/"folder4"
destdir=$bucket/"destdir"
for i in "${participantsArray[@]}";
do
fileR1="${i}_R1.fastq.gz"
fileR2="${i}_R2.fastq.gz"
gsutil compose "${dir1}/*/${fileR1}" "${dir2}/*/${fileR1}" "${dir3}/*/${fileR1}" "${dir4}/*/${fileR1}" "${destdir}/merged_${fileR1}"
gsutil compose "${dir1}/*/${fileR2}" "${dir2}/*/${fileR2}" "${dir3}/*/${fileR2}" "${dir4}/*/${fileR2}" "${destdir}/merged_${fileR2}"
done
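As an optional sanity check (object name taken from the example above), you could stream one of the composed files back and decompress the first few reads to confirm the result is a valid gzip stream:
gsutil cat gs://bucketID/destdir/merged_Patient1_R1.fastq.gz | gunzip -c | head -8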
As you said the solution was not difficult to find.
Thank you again!

How to list the published container images in Google Container Registry from a CLI, ordered by image size

Using a CLI, I want to list the images in each repository in a Google Container Registry project but with the following conditions:
Lists the images with the latest tag only
Lists the human-readable size of the images
Lists the name of the images
The closest I've managed to get is through gsutil:
gsutil du -h gs://eu.artifacts.my-registry.appspot.com/containers/images
Resulting in:
33.77 MiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c1a2387ef6cb30a7428a46821f946d6a2c591a26cb2066891c55b2b6846ae2
1.27 MiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c1e7db6bf0140bd5fa34236a35453cb73cef01f6d89b98bc5995ae8ea07aaf
1.32 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c3c97495d60c68d37d04a7e6c9b3a48bb159ce5dde13d0d81b4e75e2a3f1d4
81.92 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c5483cb8ac9c9ae498507e15d68d909a11859a8e5238556b7188e0af4d9264
457.43 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c7f98faa1cfc05264e743e23ca2e118d24c57bfd67d5cb2e2c7a57e8124b6c
7.88 KiB gs://eu.artifacts.my-registry.appspot.com/containers/images/sha256:03c83b13d044844cd3f6b278382e408541f22029acaf55d9e7e5689b8d51eeea
But obviously this does not meet most of my criteria.
The information is available through the GUI on a per-image basis.
Any ideas?
I'm open to gsutil, gcloud, docker, anything really that can be installed in a docker container.
You can use the Google Cloud UI to accomplish this. There's a column selector right next to the filter bar and it has an option for the image size.
Once the column is displayed, you'll be able to order by size.
After reading your comment on Jason's answer, it seems your only outstanding issue is listing the container image sizes. That is not possible to retrieve directly with a gcloud command. Here are two workarounds I tested:
You can use the gcloud container images describe command to see the size of an image. Make sure you use the "--log-http" flag with it. The command should look like this:
$ gcloud container images describe gcr.io/myproject/myimage:tag --log-http
Another way to get the size of an image is to use the gsutil stat command.
So here's what I did:
a. Running the command below, I listed all my images from the GCS bucket and saved the list to a file called images.txt
$ gsutil ls "BUCKET URL" > images.txt
b. I then ran a loop like the one below, which reads the image names from images.txt and prints the size of each image in turn.
$ for x in $(cat images.txt); do gsutil stat $x | grep Content-Length | awk '{print $2}'; done
You can customize this little script according to your need.
I understand these are not efficient workarounds, but that seems to be all there is for now. However, GCR just implements the Docker registry API, so you may be able to read this document and build something of your own.
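As a rough sketch of that last idea (the project and image names here are placeholders), you could ask the registry for an image's v2 manifest and sum the layer sizes yourself; GCR accepts a gcloud access token as the password for the oauth2accesstoken user:
TOKEN=$(gcloud auth print-access-token)
curl -s -u "oauth2accesstoken:${TOKEN}" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://eu.gcr.io/v2/myproject/myimage/manifests/latest" \
  | grep -oE '"size": ?[0-9]+' | grep -oE '[0-9]+' | awk '{s+=$1} END {print s}'
This adds up the compressed sizes of the config blob and the layers, which should roughly correspond to the image size shown in the console.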
Here is a rudimentary script that takes the first tag of each image, sums the sizes of all its layers, and writes the result to a report. It takes ages on a 3 TB repository, but at least I know which repos are big.
echo "REPO,SIZE" > repository-size-report.csv
for REPO in $(gcloud container images list --repository eu.gcr.io/comerge-comerge01-171833 --format="table[no-heading](NAME)") ; do
  for TAGS in $(gcloud container images list-tags $REPO --format="table[no-heading](TAGS)"); do
    TAG=$(echo $TAGS | cut -d, -f1)
    SUM=0
    for SIZE in $(gcloud container images describe $REPO:$TAG --log-http 2>&1 | grep size | grep -o '[0-9][0-9]*') ; do
      SUM=$((SUM + SIZE))
    done
    HSUM=$(echo $SUM | numfmt --to iec --format "%8f")
    echo "$REPO:$TAG,$HSUM"
    echo "$REPO:$TAG,$HSUM" >> repository-size-report.csv
  done
done
You can use the gcloud container images list command to accomplish this task; however, you will need to set the appropriate flags for your use case. You can read more about the command and its flag options here.

Mage::getStoreConfig not returning updated values

Recently we've noticed that Mage::getStoreConfig is not returning updated values in an app/code/local plugin. Everything was working as of last Friday, so we assume that something has changed on the server.
We can see the values updating correctly in the database table core_config_data.
We have
recompiled
flushed the Magento Cache
flushed the Cache Storage
reset folder and file ownership and permissions
find . -type f -exec chmod 644 {} \;
find . -type d -exec chmod 755 {} \;
For example, we added an extra character to the store phone number and can see that the database value has updated, but the change doesn't show with the following line:
Mage::getStoreConfig('general/store_information/phone')
As a test we duplicated the site and database via Plesk and applied the latest patches to both sites. The duplicated site worked as normal.
I'm intrigued to find out what has happened, so any ideas as to what the issue might be would be welcome.

Laravel 4.1 can't render template when uploaded to server

It's weird: when developing on localhost, everything works fine and the default page shows.
After uploading to the server, it just shows a blank page!
It's driving me crazy!
echo 'outside route';
Route::get('/', function()
{
echo 'inside route';
return View::make('hello');
});
Both echos work, but View::make('hello') just doesn't; views/hello.php is the default file.
You might have to fix your permissions on the remote server, as it might be a cache issue.
1) Run a recursive chmod on your storage path (*assuming you already have proper file ownership)
cd /path/to/laravel
chmod -R 755 app/storage
2) Clear cache with Artisan
php artisan cache:clear
3) Refresh page, should work now.
*If you are running the HTTP server as a different user (for example, you're on Ubuntu and Apache runs as user www-data), you might want to set file ownership for the Laravel app files as well:
chown -R www-data .
EDIT:
Just a remark about your code example: remember that if you want to use the Blade templating engine, you have to name your files accordingly. If you want a Blade template called 'something', place your code in app/views/something.blade.php and then refer to it with, for example, View::make('something').

Uploading media in WordPress

I am trying to upload images or any other media type to my WordPress application, but I get this error:
Unable to create directory /home/admin/video/wp-content/uploads/2012/07. Is its parent directory writable by the server?
even though I am sure that the parent directory is writable. It actually has 777 permissions. What might be the problem?
Thank you.
The question is... writable by whom or what? You probably need to make the entire "uploads" directory writable by PHP (i.e. the web server). Often, Apache and other servers default to the user group www-data, but it could be different. Check your Apache or lighttpd (or whatever) configuration files to see which user and group the server runs as; these files often live in /etc/apache or /etc/lighttpd, et cetera. Then make the uploads directory recursively writable by that group.
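If you are unsure which user the server actually runs as, one quick way to check on a typical Linux box (covering the servers mentioned above) is to look at the running processes:
ps aux | egrep 'apache|httpd|lighttpd' | grep -v grep | awk '{print $1}' | sort -u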
Using 777 permissions is a very bad idea. You always want to give the minimum number of users access to any given directory. So, here's a short discourse on file permissions...
drwxrwxrwx 20 connermcd staff 680 Jul 25 20:38 img
-rw-r--r-- 1 admin www-data 18530 Jul 26 21:46 example
The first character of the permissions string denotes the type. In this case, img is a directory and example is a file. This could also be an l for a symbolic link (among other things). The remaining characters of the string (rwxrwxrwx) define permissions. As you can see, it's a repeating triplet of "read, write, execute". The first triad represents permission for the file or directory's owner. The owner is shown in the third column (connermcd for img and admin for example). The second triad denotes permission for the file or directory's group (staff for img and www-data for example). The last triad denotes permissions for anyone (even someone you gave temporary access to your server or a hacker, hint hint).
Each of the "read, write, execute" triads can be represented by a number. It's easy for me to think about rwxrwxrwx as 421421421. It's the only way multiples of two can add up to 7 if that helps you. So, the 4 stands for read, the 2 stands for write, and the 1 stands for execute. If you add these together then you can denote a triad with three numbers. So what chmod 777 img is really doing is giving "read, write, and execute" permission to everyone. It is also only setting those permissions for that directory and not the directories underneath it. To do this recursively you can use the -R flag -- chmod -R.
In your case, you just want to make the uploads folder and all its subdirectories available to the user group your server runs as. In most cases that's www-data, so I'll use that as an example. You probably want to set your project files as owned by your user to make them easier to move, edit, etc. So let's assume you are the owner of the files (use chown to set) and that they belong to the www-data group (use chgrp to set). In that case we want to give the owner full permissions and the group read and write permissions, and we want to do it recursively. So go to the parent directory of the uploads folder and do chmod -R 760 uploads.
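For completeness, the ownership part mentioned above might look like this (the path comes from the error message in the question; "youruser" is a placeholder for your own account, and the group name depends on your server setup):
chown -R youruser /home/admin/video/wp-content/uploads
chgrp -R www-data /home/admin/video/wp-content/uploads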
You may also check whether your "Settings -> Media" configuration is correct; look at the "Uploading Files" section.
The folder (and all subfolders) indicated in "Store uploads in this folder" must have 755 permissions.
