I am missing some of the product images from sample data on the front end.
I have already successfully run:
php -dmemory_limit=50G bin/magento sampledata:deploy
php -dmemory_limit=50G bin/magento setup:upgrade
php -dmemory_limit=50G bin/magento setup:static-content:deploy -f
When I run:
php -dmemory_limit=50G bin/magento catalog:images:resize
It goes through some files:
 1/801 [>-----------------------]  0%  < 1 sec  82.0 MiB | /m/b/mb01-blue
 2/801 [>-----------------------]  0%   3 secs  82.0 MiB | /m/b/mb04-blac
 3/801 [>-----------------------]  0%   5 secs  84.0 MiB | /m/b/mb04-black-0_alt1
 5/801 [>-----------------------]  0%  10 secs  84.0 MiB | /m/b/mb03-black-0_alt1
15/801 [>-----------------------]  1%  41 secs  84.0 MiB | /w/b/wb06-red-0_alt1.j
21/801 [>-----------------------]  2%   1 min   84.0 MiB | /u/g/ug07-bk-0_alt1.jp
26/801 [>-----------------------]  3%   1 min   84.0 MiB | /l/u/luma-yoga-brick.j
27/801 [>-----------------------]  3%   1 min   84.0 MiB | /l/u/luma-foam-roller.
But eventually I get the following error:
File '/Applications/MAMP/htdocs/Magento_2/pub/media/catalog/product/m/h/mh01-gray_main_1.jpg' does not exist.
It seems I am missing a lot of product images in:
/Applications/MAMP/htdocs/Magento_2/pub/media/catalog/product/
How do I go about fixing this?
Thanks!
The error occurs because an image file is not present in its expected folder. You can put the images there (as they would be on the server), or remove the broken references from the back end (admin panel) by editing the products one by one. You could also remove the references directly from the DB, but that is not the right way.
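If you want to see the full list of broken references before deciding, a rough sketch is to compare the gallery paths stored in the database against the files on disk. The table name is the standard Magento 2 media gallery table; the DB name and user below are placeholders, and this is run from the Magento root:

# list image paths referenced in the DB and report the ones missing on disk
mysql -u magento_user -p magento_db -N -e \
  "SELECT value FROM catalog_product_entity_media_gallery;" |
while read -r img; do
  [ -f "pub/media/catalog/product$img" ] || echo "missing: $img"
done

Another option worth trying before hand-editing products is resetting the sample-data modules and redeploying, which should copy the media files again (sampledata:reset is a standard Magento 2 CLI command; the other two commands are the ones from the question):

php bin/magento sampledata:reset
php -dmemory_limit=50G bin/magento sampledata:deploy
php -dmemory_limit=50G bin/magento setup:upgrade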
The tail of the scanner logging shows the following:
22:09:11.016 DEBUG: GET 200 http://someserversomewhere:9000/api/rules/search.protobuf?f=repo,name,severity,lang,internalKey,templateKey,params,actives,createdAt,updatedAt&activation=true&qprofile=AXaXXXXXXXXXXXXXXXw0&ps=500&p=1 | time=427ms
22:09:11.038 INFO: Load active rules (done) | time=12755ms
I have attached to the running container to see if the scanner process is pegged/running/etc., and top shows the following:
Mem: 2960944K used, 106248K free, 67380K shrd, 5032K buff, 209352K cached
CPU: 0% usr 0% sys 0% nic 99% idle 0% io 0% irq 0% sirq
Load average: 5.01 5.03 4.83 1/752 46
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
1 0 root S 3811m 127% 1 0% /opt/java/openjdk/bin/java -Djava.awt.headless=true -classpath /opt/sonar-scanner/lib/sonar-scann
40 0 root S 2424 0% 0 0% bash
46 40 root R 1584 0% 0 0% top
I was unable to find any logging in the sonar-scanner-cli container to help indicate its state. It appears to just be hung, waiting for something to happen.
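One diagnostic I can still try from outside (my own idea, and it only works if the image ships a JDK so that jstack is present) is a JVM thread dump of the scanner process to see what it is blocked on:

# find the scanner container's ID, then dump the threads of PID 1 (the scanner JVM)
docker ps
docker exec <scanner-container-id> jstack 1 > scanner-threads.txt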
I am running SonarQube locally from Docker at the LTS version 7.9.5.
I am also running the sonarsource/sonar-scanner-cli Docker container, which currently uses the following version in its Dockerfile.
SONAR_SCANNER_VERSION=4.5.0.2216
I am triggering the scan via the following command:
docker run --rm \
-e SONAR_HOST_URL="http://someserversomewhere:9000" \
-e SONAR_LOGIN="nottherealusername" \
-e SONAR_PASSWORD="not12345likeinspaceballs" \
-v "$DOCKER_TEST_DIRECTORY:/usr/src" \
--link "myDockerContainerNameForSonarQube" \
sonarsource/sonar-scanner-cli -X -Dsonar.password=not12345likeinspaceballs -Dsonar.verbose=true \
-Dsonar.sources=app -Dsonar.tests=test -Dsonar.branch=master \
-Dsonar.projectKey="${PROJECT_KEY}" -Dsonar.log.level=TRACE \
-Dsonar.projectBaseDir=/usr/src/$PROJECT_NAME -Dsonar.working.directory=/usr/src/$PROJECT_NAME/$SCANNER_WORK_DIR
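Given the memory numbers in the top output above (only ~100 MB free and a load average of 5), one thing I plan to try is capping the scanner JVM heap via the SONAR_SCANNER_OPTS environment variable that the CLI scanner honors; the 512m value is just a guess on my part:

docker run --rm \
  -e SONAR_SCANNER_OPTS="-Xmx512m" \
  ...   # all the other -e/-v/--link flags and -D properties exactly as above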
I have done a lot of digging to try to find anyone with similar issues, and found the following older issue, which seems similar, but it is unclear how to determine whether I am experiencing something related: "Why does sonar-maven-plugin hang at loading global settings or active rules?"
I am stuck and not sure what to do next; any help or hints would be appreciated.
An additional note is that this process does work for the 8.4.2-developer version of SonarQube that I am planning to migrate to. The purpose of verifying 7.9.5 is to follow SonarQube's recommended upgrade path, which advises first bringing your current version to the latest LTS and running the data migration before jumping to the next major version.
I'm trying to get the automatic tree-shaking functionality provided by the Nuxt.js / Vuetify module working. In my nuxt.config.js I have:
buildModules: [
['@nuxtjs/vuetify', { treeShake: true }]
],
However, I'm only using one or two components at the moment, yet I'm still getting a very large vendors.app chunk (adding the treeShake option had no effect on its size):
Hash: 9ab07d7e13cc875194be
Version: webpack 4.41.2
Time: 18845ms
Built at: 12/10/2019 11:04:48 AM
Asset Size Chunks Chunk Names
../server/client.manifest.json 12.2 KiB [emitted]
5384010d9cdd9c2188ab.js 155 KiB 1 [emitted] [immutable] commons.app
706a50a7b04fc7741c9f.js 2.35 KiB 4 [emitted] [immutable] runtime
8d5a3837a62a2930b94f.js 34.7 KiB 0 [emitted] [immutable] app
9d5a4d22f4d1df95d7a7.js 1.95 KiB 3 [emitted] [immutable] pages/login
LICENSES 389 bytes [emitted]
a0699603e56c5e67b811.js 170 KiB 6 [emitted] [immutable] vendors.pages/login
b1019b7a0578a5af9559.js 265 KiB 5 [emitted] [immutable] [big] vendors.app
b327d22dbda68a34a081.js 3.04 KiB 2 [emitted] [immutable] pages/index
+ 1 hidden asset
Entrypoint app = 706a50a7b04fc7741c9f.js 5384010d9cdd9c2188ab.js b1019b7a0578a5af9559.js 8d5a3837a62a2930b94f.js
WARNING in asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).
This can impact web performance.
Assets:
b1019b7a0578a5af9559.js (265 KiB)
ℹ Generating pages 11:04:48
✔ Generated / 11:04:48
✔ Generated /login
Notice the line indicating the large vendors.app chunk:
b1019b7a0578a5af9559.js 265 KiB 5 [emitted] [immutable] [big] vendors.app
Can you please advise?
My mistake, the above configuration is working correctly. The real issue is the file size of the CSS (for all components) being included in the build.
For people suffering from the same problem: adding build: { analyze: true } to nuxt.config.js shows where the problem files are (a bundle report automatically opens in a browser window when running npm run build).
Clearly main.sass is the issue here. I will ask the question of how to get Nuxt/webpack to only include the CSS for relevant components in another question. The article here only shows how to do it with the CLI, not Nuxt.
Edit: I've now added the extractCSS: true property to my Nuxt config and the CSS file size is reduced to a few KB.
build: {
  analyze: true,
  extractCSS: true
}
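Putting both pieces from this thread together, the relevant parts of nuxt.config.js end up looking like this (just the two snippets above merged, nothing new):

export default {
  buildModules: [
    // Vuetify tree-shaking from the original question
    ['@nuxtjs/vuetify', { treeShake: true }]
  ],
  build: {
    analyze: true,    // opens a bundle report on `npm run build`
    extractCSS: true  // extracts component CSS instead of inlining it in the JS bundle
  }
}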
I'm developing a blog using Jekyll, and up until now I was very happy with it.
But as I make more posts, the regeneration times are getting ridiculous (3-4 minutes). It's just not feasible to wait that long every time you make a change.
Specs:
Ruby 2.2.1
Jekyll 2.5.3
markdown: kramdown
highlighter: pygments
permalink: pretty
Working on a cloud service (Cloud9) with 2 GB of RAM
Not a lot of posts (~10), but I do use a lot of data (10 MB of JSON files in the "_data" folder, 14 MB of images in the "img" folder)
Total size of the "_site" folder is 40 MB
Is it a normal thing with these specifications?
I've updated to Jekyll 3.0 to try incremental regeneration, but it didn't help in my case.
Any ideas?
Thanks!
Willem
Run jekyll serve --profile on your site and check what is taking the most time to render. It should output a table that looks something like this:
Filename | Count | Bytes | Time
----------------------------------------------------------------------+-------+----------+------
_layouts/compress.html | 73 | 1649.86K | 1.526
_layouts/default.html | 72 | 1874.79K | 0.445
_layouts/post.html | 58 | 980.02K | 0.307
_posts/2015-12-10-how-to-create-and-host-a-website-on-github-pages.md | 1 | 9.36K | 0.294
feed.xml | 1 | 34.74K | 0.105
_includes/prev-next.html | 58 | 39.17K | 0.053
sitemap.xml | 1 | 19.90K | 0.035
_pages/archive.md | 1 | 28.98K | 0.035
_posts/2017-02-15-jekyll-sort-filters.md | 1 | 16.09K | 0.019
_includes/ga_data_fetch.html | 58 | 41.77K | 0.018
_includes/disqus-script.html | 58 | 30.89K | 0.018
_pages/tags.html | 1 | 14.97K | 0.015
That should give you a fair idea of where the problem lies.
Now, while making changes to the site, if you want to render only the changed files, use jekyll serve --incremental or jekyll serve -I.
Incremental build still has some issues that the Jekyll team is working on.
A handy option to render only the latest post that you are writing would be jekyll serve --watch --limit_posts 1. This has saved me a lot of time while writing new posts.
There are a few options:
Use --incremental on jekyll build or serve, but use it with caution.
Use --profile on jekyll build to get an output of where time is used up.
You could also have different _config.yml files, including only draft posts for development and not for production (see the sketch after this list).
Consider restructuring your development environment:
A development folder with _posts containing just a sample.
A production folder with your live set of _posts.
Copy your dev content over prior to the production build.
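For the separate-config idea, a minimal sketch (the override file name is just an example): Jekyll merges comma-separated config files left to right, so a small development override file on top of the main _config.yml works well:

# _config_dev.yml -- hypothetical development overrides
show_drafts: true
limit_posts: 1

# development build: values in _config_dev.yml override _config.yml
jekyll serve --config _config.yml,_config_dev.yml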
The profiling was showing nothing to worry about, but I was still getting 2-3 second regeneration times with a simple one-page website.
I used a super simple Gemfile:
source 'https://rubygems.org'
ruby "2.4.2"
gem "jekyll", "~> 3.6.2"
Then I ran bundle install again.
After that, regeneration times were back under 1 second.
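With the Gemfile pinned like that, it's worth running Jekyll through Bundler so the pinned version is the one that actually executes:

bundle install
bundle exec jekyll serve --profile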
After encountering situations where the rethinkdb service went down for unknown reasons, I noticed it uses a lot of memory:
# free -m
total used free shared buffers cached
Mem: 7872 7744 128 0 30 68
-/+ buffers/cache: 7645 226
Swap: 4031 287 3744
# top
top - 23:12:51 up 7 days, 1:16, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 133 total, 1 running, 132 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8061372k total, 7931724k used, 129648k free, 32752k buffers
Swap: 4128760k total, 294732k used, 3834028k free, 71260k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1835 root 20 0 7830m 7.2g 5480 S 1.0 94.1 292:43.38 rethinkdb
29417 root 20 0 15036 1256 944 R 0.3 0.0 0:00.05 top
1 root 20 0 19364 1016 872 S 0.0 0.0 0:00.87 init
# cat log_file | tail -9
2014-09-22T21:56:47.448701122 0.052935s info: Running rethinkdb 1.12.5 (GCC 4.4.7)...
2014-09-22T21:56:47.452809839 0.057044s info: Running on Linux 2.6.32-431.17.1.el6.x86_64 x86_64
2014-09-22T21:56:47.452969820 0.057204s info: Using cache size of 3327 MB
2014-09-22T21:56:47.453169285 0.057404s info: Loading data from directory /rethinkdb_data
2014-09-22T21:56:47.571843375 0.176078s info: Listening for intracluster connections on port 29015
2014-09-22T21:56:47.587691636 0.191926s info: Listening for client driver connections on port 28015
2014-09-22T21:56:47.587912507 0.192147s info: Listening for administrative HTTP connections on port 8080
2014-09-22T21:56:47.595163724 0.199398s info: Listening on addresses
2014-09-22T21:56:47.595167377 0.199401s info: Server ready
That seems like a lot considering the size of the data files:
# du -h
4.0K ./tmp
156M .
Do I need to configure a different cache size? Do you think it has something to do with finding the service unexpectedly down? I'm using v1.12.5.
There were a few leaks in previous versions, the main one being https://github.com/rethinkdb/rethinkdb/issues/2840
You should probably update RethinkDB -- the current version is 1.15.
If you are running 1.12, you will need to export your data, but that should be the last time you need to do so, since 1.14 introduced seamless migrations.
From Understanding RethinkDB memory requirements - RethinkDB:
By default, RethinkDB automatically configures the cache size limit according to the formula (available_mem - 1024 MB) / 2.
You can change this via a config file as they document, or change it with a size (in MB) from the command line:
rethinkdb --cache-size 2048
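For the config-file route, the key uses the same name as the command-line option; a minimal sketch (the path below is where Debian/Ubuntu packages typically keep instance configs, so adjust for your install):

# /etc/rethinkdb/instances.d/default.conf
cache-size=2048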
I'm using Ubuntu 14.04.1 LTS.
atopsar -d 30 shows that one of the hard drives (sda) in the system is heavily used. This hard drive serves only the MySQL database. The most frequently used DBs were relocated to other hard drives (sdb, sdd) via symbolic links. Now atopsar shows nearly the same load for sda and under 5% load for the other HDDs.
Is there a way to know which files are heavily used on the HDD?
Can it be that the MySQL InnoDB log files (ib_logfile) are fragmented, and that is why atopsar shows such a big load (50%-70%)? What can be done in that case?
Here is some output from atopsar -d 30:
08:52:47 disk busy read/s KB/read writ/s KB/writ avque avserv _dsk_
08:53:17 sda 63% 0.0 0.0 50.2 14.6 1.1 12.57 ms
sdb 5% 0.0 0.0 9.4 19.8 4.2 5.81 ms
sdd 2% 0.0 0.0 3.7 18.9 1.4 5.82 ms
08:53:47 sda 60% 0.0 16.0 48.1 15.7 1.0 12.55 ms
sdb 5% 0.0 0.0 6.9 17.5 4.6 7.35 ms
sdd 2% 0.0 0.0 4.7 24.9 1.4 4.06 ms
08:54:17 sda 38% 0.5 16.0 30.6 15.6 1.2 12.25 ms
sdb 3% 0.0 0.0 5.6 18.3 3.3 5.50 ms
sdd 2% 0.0 0.0 3.3 19.2 1.1 4.86 ms
08:54:47 sda 53% 0.0 0.0 42.5 16.5 1.1 12.37 ms
sdb 6% 0.0 0.0 8.7 21.0 5.8 6.37 ms
sdd 2% 0.0 0.0 3.1 23.1 1.3 5.68 ms
08:55:17 sda 51% 0.0 4.0 42.7 16.9 1.1 11.94 ms
sdb 5% 0.0 0.0 9.4 20.5 5.0 5.51 ms
sdd 1% 0.0 0.0 1.5 17.6 1.1 7.73 ms
08:55:47 sda 52% 0.0 0.0 40.6 14.5 1.0 12.85 ms
sdb 5% 0.0 0.0 6.8 19.5 5.4 6.66 ms
sdd 2% 0.0 0.0 4.3 31.3 1.3 4.78 ms
There is the sysdig tool, which allows you to see system-wide activity just like strace does for a single process: http://www.sysdig.org/
There are examples for disk usage info at https://github.com/draios/sysdig/wiki/Sysdig%20Examples#disk-io:
See the top processes in terms of disk bandwidth usage
sysdig -c topprocs_file
See the top files in terms of read+write bytes
sysdig -c topfiles_bytes
Print the top files that apache has been reading from or writing to
sysdig -c topfiles_bytes proc.name=httpd
See the top directories in terms of R+W disk activity
sysdig -c fdbytes_by fd.directory "fd.type=file"
See the top files in terms of R+W disk activity in the /tmp directory
sysdig -c fdbytes_by fd.filename "fd.directory=/tmp/"
Observe the I/O activity on all the files named 'passwd'
sysdig -A -c echo_fds "fd.filename=passwd"
Sysdig is a modern and convenient tool. For older Linux systems it is possible to get similar information using SystemTap: http://lukas.zapletalovi.com/2014/05/systemtap-as-a-system-wide-strace-tool.html
PS: Thanks to habrahabr.ru for this post about Sysdig: http://habrahabr.ru/company/selectel/blog/222839/
PPS: Brendan D. Gregg created the picture "A quick tour of many tools..." for his Linux Performance page.
To find the most heavily used files on the system, use: sudo pt-ioprofile -cell sizes
Example of output:
total pread read pwrite fsync lseek filename
10862592 0 0 10862592 0 0 /var/mysqldata/mysql/ibdata1
827392 0 0 827392 0 0 /var/mysqllog/mysql/ib_logfile0
... (other trivial I/O records truncated)
Got it from https://dba.stackexchange.com/questions/21209/innodb-high-disk-write-i-o-on-ibdata1-file-and-ib-logfile0
Please be aware that by default Percona Toolkit attaches only to mysqld. To find the most heavily used files you would have to run it against every process that might be creating such load. In my case I was sure it was the MySQL server, so that was enough for me.
Please read http://www.percona.com/doc/percona-toolkit/2.0/pt-ioprofile.html before you use it.
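As a note on that caveat, the documentation linked above describes options for pointing the tool at a different target; a sketch (flag names as documented for Percona Toolkit 2.0, so double-check against your version; the PID and process name are placeholders):

# attach to a specific PID instead of the default mysqld process
sudo pt-ioprofile --profile-pid 1234 --cell sizes
# or attach to a process by name
sudo pt-ioprofile --profile-process nginx --cell sizes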
Try investigating with:
dstat --top-bio
It will show you the processes that use the most I/O.
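Along the same lines, pidstat from the sysstat package can break disk I/O down per process; a quick sketch:

# per-process read/write throughput, refreshed every 5 seconds
pidstat -d 5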
In Linux you have /proc/diskstats, but it gives only block-device-level stats.
I have never seen a mechanism to determine which file is busy in Linux.
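For what it's worth, the kernel does expose per-process (though still not per-file) I/O counters; a quick sketch, assuming mysqld is the process of interest (reading another user's counters may require root):

# block-device-level counters
cat /proc/diskstats
# cumulative per-process I/O counters (read_bytes/write_bytes that hit storage)
cat /proc/$(pidof mysqld)/io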