Is there a way of running Prettier faster? - performance

I'm using the latest versions of these packages:
ESLint: 8.11.0
Prettier: 2.6.0
eslint-plugin-prettier: 4.0.0
eslint-config-prettier: 8.5.0
Rule                                     | Time (ms)  | Relative
prettier/prettier                        | 156074.280 | 98.4%
vue/attribute-hyphenation                |    678.026 |  0.4%
@typescript-eslint/no-unused-vars        |    593.419 |  0.4%
no-redeclare                             |    174.305 |  0.1%
padding-line-between-statements          |    145.755 |  0.1%
vue/valid-next-tick                      |     38.034 |  0.0%
no-restricted-imports                    |     37.361 |  0.0%
@typescript-eslint/no-empty-function     |     37.335 |  0.0%
@typescript-eslint/no-loss-of-precision  |     36.425 |  0.0%
vue/no-async-in-computed-properties      |     36.114 |  0.0%
npm run lint 179.28s user 2.19s system 108% cpu 2:47.58 total
The prettier/prettier rule alone is taking 98.4% of the total rule time.
Is there any way to optimize it (by passing a parameter, using some extra package, disabling specific rules, ...)?
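One commonly suggested mitigation (not from the original post) is to drop the prettier/prettier ESLint rule and run Prettier as its own step, keeping eslint-config-prettier so the two tools don't conflict over formatting rules. A minimal sketch, with an assumed src directory and file globs:
npx prettier --check "src/**/*.{js,ts,vue}"   # check formatting only; use --write to fix
npx eslint src                                # lint without the expensive prettier/prettier rule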

Related

Azure DevOps build pipeline self-hosted agent "No space left on device"

I am running a build pipeline on Azure that runs on a private build server (Red Hat Enterprise Linux) with a self-hosted agent. This build pipeline has only 1 job and 2 tasks. The 1st task SSHes into a repo server we have (a different server that just holds big files), generates an ISO image on that repo server, then uses curl to put that ISO back on the build server where the Azure Pipelines agent is running, into the usual $(Build.ArtifactStagingDirectory) that Azure uses for artifacts.
This 1st task succeeds, and the ISO is generated and copied over to the build server, but the "Publish Artifact" stage keeps failing. It's trying to publish to the path $(Build.ArtifactStagingDirectory) but produces an error message, with more logs:
No space left on device
I already went in and cleared all the directories and files that exceeded 1 GB in the working directory /home/azure/vsts/_work.
I'm not an expert with Linux. When I run df -h and view the filesystems, there are a bunch in the list. Is there a way to know which partition the Azure pipeline agent is actually using for the /home/azure/vsts/_work directory?
My df -h list looks like:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root 19G 19G 28K 100% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 8.0K 3.9G 1% /dev/shm
tmpfs 3.9G 138M 3.8G 4% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sdc 100G 152M 100G 1% /glusterfs
/dev/sda1 488M 119M 334M 27% /boot
/dev/mapper/vg_root-lv_var 997M 106M 891M 11% /var
/dev/mapper/vg_docker-lv_docker 50G 3.1G 44G 7% /var/lib/docker/overlay2
/dev/mapper/vg_root-lv_log 997M 46M 952M 5% /var/log
/dev/mapper/vg_root-lv_crash 997M 33M 965M 4% /var/crash
/dev/mapper/vg_root-lv_root_logins 29M 1.8M 27M 6% /var/log/root_logins
/dev/mapper/vg_root-lv_core 125M 6.6M 119M 6% /var/core
/dev/mapper/vg_root-lv_repo 997M 83M 915M 9% /var/cache/yum
/dev/mapper/vg_root-lv_home 997M 33M 965M 4% /export/home
/dev/mapper/vg_root-lv_logins 93M 5.0M 88M 6% /var/log/logins
/dev/mapper/vg_root-lv_audit 725M 71M 655M 10% /var/log/audit
tmpfs 799M 0 799M 0% /run/user/0
walkie1-ap2.nextgen.com:/hdd-volume0 200G 2.3G 198G 2% /gluster-hdd
If anyone could provide some insight I'd greatly appreciate it.
End of error log:
[2020-05-06 05:49:09Z ERR JobRunner] Caught exception from job steps StepsRunner: System.IO.IOException: No space left on device
at Interop.ThrowExceptionForIoErrno(ErrorInfo errorInfo, String path, Boolean isDirectory, Func`2 errorRewriter)
at Microsoft.Win32.SafeHandles.SafeFileHandle.Open(String path, OpenFlags flags, Int32 mode)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
at System.IO.FileStream..ctor(String path, FileMode mode)
at Microsoft.VisualStudio.Services.Agent.PagingLogger.NewPage()
at Microsoft.VisualStudio.Services.Agent.PagingLogger.Write(String message)
at Microsoft.VisualStudio.Services.Agent.Worker.ExecutionContext.Write(String tag, String message)
at Microsoft.VisualStudio.Services.Agent.Worker.StepsRunner.RunStepAsync(IStep step, CancellationToken jobCancellationToken)
at Microsoft.VisualStudio.Services.Agent.Worker.StepsRunner.RunAsync(IExecutionContext jobContext, IList`1 steps)
at Microsoft.VisualStudio.Services.Agent.Worker.JobRunner.RunAsync(AgentJobRequestMessage message, CancellationToken jobRequestCancellationToken)
I can't reproduce the same issue on my side, but I think you can check this article for troubleshooting.
As far as I know, this task itself takes extra space while it executes. You can try a bash command that copies the content under $(Build.ArtifactStagingDirectory), doubling its size, and check whether that action throws the same "No space left on device" error.
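For example, a rough way to test that (using Azure's predefined environment variable for the staging directory; the destination name is just illustrative):
cp -r "$BUILD_ARTIFACTSTAGINGDIRECTORY" "${BUILD_ARTIFACTSTAGINGDIRECTORY}_copy"   # roughly doubles the space used by that content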
The build pipeline also has a clean option that clears the caches before the job executes; enable it to check whether it helps:
If it's a YAML pipeline, try something like:
workspace:
  clean: outputs | resources | all  # what to clean up before the job runs
and
steps:
- checkout: self | none | repository name  # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean  # if true, execute `git clean -ffdx && git reset --hard HEAD` before fetching
See the YAML schema.
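To see which mount actually backs the agent's working directory and what is filling it, a quick check on the build server might look like this (paths taken from the question; the du/sort options assume GNU coreutils):
df -h /home/azure/vsts/_work                                           # the filesystem that holds the agent workspace
sudo du -xh --max-depth=2 /home/azure/vsts/_work | sort -h | tail -20  # the largest items inside it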
So I was a n00b here, and the solution was simply to clean out space from the main directory we used to store our large ISO files:
/dev/mapper/vg_root-lv_root 19G 19G 28K 100% /
This is a custom VM we use to run builds on Azure, and I wasn't accustomed to the error message. But yes, if anyone sees this message and is using a custom build agent, it's definitely a space issue.

golang grpc transport.newBufWriter and bufio.NewReaderSize not releasing memory

I have a simple gRPC server in Go which does CRUD operations on an object. However, when I run it, the memory never goes down, even after requests stop. A pprof heap profile shows the following:
      flat   flat%    sum%         cum   cum%
  932.39MB  62.45%  62.45%    932.39MB  62.45%  google.golang.org/grpc/internal/transport.newBufWriter
  463.13MB  31.02%  93.46%    463.13MB  31.02%  bufio.NewReaderSize
   13.50MB    0.9%  94.37%     13.50MB    0.9%  runtime.malg
      13MB   0.87%  95.24%   1420.52MB  95.14%  google.golang.org/grpc/internal/transport.newHTTP2Server
      11MB   0.74%  95.98%     12.10MB   0.81%  time.NewTimer
    8.50MB   0.57%  96.54%      8.50MB   0.57%  golang.org/x/net/http2/hpack.(*headerFieldTable).addEntry
    5.50MB   0.37%  96.91%     17.60MB   1.18%  google.golang.org/grpc/internal/transport.(*http2Server).keepalive
    3.50MB   0.23%  97.15%      7.50MB    0.5%  google.golang.org/grpc/internal/transport.newLoopyWriter
    1.50MB    0.1%  97.25%     12.50MB   0.84%  google.golang.org/grpc.(*Server).serveStreams
         0      0%  97.25%        10MB   0.67%  golang.org/x/net/http2.(*Framer).ReadFrame
Can anyone guide me on how to fix this memory issue? The server runs with the default options, and I have even called debug.FreeOSMemory() to release memory.
Most likely you need to close the ClientConn, or reuse it.
I encountered the same issue, and the problem was creating a new ClientConn for each RPC call without closing those connections later.
There is a great explanation here: pooling-grpc-connections
Related question: GRPC Connection Management in Golang
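If it helps, here is a minimal client-side sketch of the reuse pattern described above (the server address and generated client are assumptions); the key points are dialing once and deferring Close:
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// Dial once and share the connection; *grpc.ClientConn is safe for concurrent use.
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	// Without Close, every leaked connection keeps its per-connection buffers alive,
	// which is consistent with the newBufWriter / bufio.NewReaderSize entries in the
	// heap profile above.
	defer conn.Close()

	// client := pb.NewYourServiceClient(conn) // hypothetical generated client
	// ...issue all RPCs through this one conn/client instead of dialing per call.
}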

Magento 2: Missing Product Images

I am missing some of the product images from sample data on the front end.
I have already successfully run:
php -dmemory_limit=50G bin/magento sampledata:deploy
php -dmemory_limit=50G bin/magento setup:upgrade
php -dmemory_limit=50G bin/magento setup:static-content:deploy -f
When I run:
php -dmemory_limit=50G bin/magento catalog:images:resize
It goes through some files:
1/801  [>---------------------------]  0%  < 1 sec  82.0 MiB | /m/b/mb01-blue
2/801  [>---------------------------]  0%  3 secs   82.0 MiB | /m/b/mb04-blac
3/801  [>---------------------------]  0%  5 secs   84.0 MiB | /m/b/mb04-black-0_alt1
5/801  [>---------------------------]  0%  10 secs  84.0 MiB | /m/b/mb03-black-0_alt1
15/801 [>---------------------------]  1%  41 secs  84.0 MiB | /w/b/wb06-red-0_alt1.j
21/801 [>---------------------------]  2%  1 min    84.0 MiB | /u/g/ug07-bk-0_alt1.jp
26/801 [>---------------------------]  3%  1 min    84.0 MiB | /l/u/luma-yoga-brick.j
27/801 [>---------------------------]  3%  1 min    84.0 MiB | /l/u/luma-foam-roller.
But eventually I get the following error:
File '/Applications/MAMP/htdocs/Magento_2/pub/media/catalog/product/m/h/mh01-gray_main_1.jpg' does not exist.
It seems I am missing a lot of product images in:
/Applications/MAMP/htdocs/Magento_2/pub/media/catalog/product/
How do I go about fixing this?
Thanks!
The error occurs because the image is not present in its respective folder. You can put the images there (as they would be on a server, I think), or remove them from the back-end (admin panel) by editing the products one by one. You could also remove them from the DB, but that is not the right way.
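Before choosing, it may help to confirm on disk which referenced files are actually missing; for example (paths taken from the question):
ls -l /Applications/MAMP/htdocs/Magento_2/pub/media/catalog/product/m/h/mh01-gray_main_1.jpg
find /Applications/MAMP/htdocs/Magento_2/pub/media/catalog/product -type f -name '*.jpg' | wc -l   # how many product images are actually on disk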

Explore which files are heavily used in the system

I'm using Ubuntu 14.04.1 LTS.
atopsar -d 30 shows that one of the hard drives (sda) in the system is heavily used. This hard drive serves only the MySQL database. The most frequently used DBs were relocated to other hard drives (sdb, sdd) via symbolic links. Now atopsar shows nearly the same load for sda and under 5% load on the other HDDs.
Is there a way to know which files are heavily used on the HDD?
Could it be that the MySQL InnoDB log files (ib_logfile) are fragmented, and that is why atopsar shows such a big load (50%-70%)? What can be done in that case?
Here is some output from atopsar -d 30:
08:52:47 disk busy read/s KB/read writ/s KB/writ avque avserv _dsk_
08:53:17 sda 63% 0.0 0.0 50.2 14.6 1.1 12.57 ms
sdb 5% 0.0 0.0 9.4 19.8 4.2 5.81 ms
sdd 2% 0.0 0.0 3.7 18.9 1.4 5.82 ms
08:53:47 sda 60% 0.0 16.0 48.1 15.7 1.0 12.55 ms
sdb 5% 0.0 0.0 6.9 17.5 4.6 7.35 ms
sdd 2% 0.0 0.0 4.7 24.9 1.4 4.06 ms
08:54:17 sda 38% 0.5 16.0 30.6 15.6 1.2 12.25 ms
sdb 3% 0.0 0.0 5.6 18.3 3.3 5.50 ms
sdd 2% 0.0 0.0 3.3 19.2 1.1 4.86 ms
08:54:47 sda 53% 0.0 0.0 42.5 16.5 1.1 12.37 ms
sdb 6% 0.0 0.0 8.7 21.0 5.8 6.37 ms
sdd 2% 0.0 0.0 3.1 23.1 1.3 5.68 ms
08:55:17 sda 51% 0.0 4.0 42.7 16.9 1.1 11.94 ms
sdb 5% 0.0 0.0 9.4 20.5 5.0 5.51 ms
sdd 1% 0.0 0.0 1.5 17.6 1.1 7.73 ms
08:55:47 sda 52% 0.0 0.0 40.6 14.5 1.0 12.85 ms
sdb 5% 0.0 0.0 6.8 19.5 5.4 6.66 ms
sdd 2% 0.0 0.0 4.3 31.3 1.3 4.78 ms
There is the sysdig tool, which lets you see system-wide activity just like strace does for a single process: http://www.sysdig.org/
There are examples for disk usage info: https://github.com/draios/sysdig/wiki/Sysdig%20Examples#disk-io
See the top processes in terms of disk bandwidth usage
sysdig -c topprocs_file
See the top files in terms of read+write bytes
sysdig -c topfiles_bytes
Print the top files that apache has been reading from or writing to
sysdig -c topfiles_bytes proc.name=httpd
See the top directories in terms of R+W disk activity
sysdig -c fdbytes_by fd.directory "fd.type=file"
See the top files in terms of R+W disk activity in the /tmp directory
sysdig -c fdbytes_by fd.filename "fd.directory=/tmp/"
Observe the I/O activity on all the files named 'passwd'
sysdig -A -c echo_fds "fd.filename=passwd"
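Since the load here comes from MySQL, the same fdbytes_by chisel can be pointed at the database's data directory; a sketch, assuming the default datadir (adjust it to wherever your data and symlinks actually live):
sudo sysdig -c fdbytes_by fd.filename "fd.directory=/var/lib/mysql/"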
Sysdig is a modern and convenient tool. For older Linux systems it is possible to get similar information using SystemTap: http://lukas.zapletalovi.com/2014/05/systemtap-as-a-system-wide-strace-tool.html
PS: Thanks to habrahabr.ru for this post about Sysdig: http://habrahabr.ru/company/selectel/blog/222839/
PPS: Brendan D. Gregg created the picture "A quick tour of many tools..." for his Linux Performance page.
To find out the most heavily used files in the system, use: sudo pt-ioprofile --cell sizes
Example of output:
total pread read pwrite fsync lseek filename
10862592 0 0 10862592 0 0 /var/mysqldata/mysql/ibdata1
827392 0 0 827392 0 0 /var/mysqllog/mysql/ib_logfile0
... (other trivial I/O records truncated)
Got it from https://dba.stackexchange.com/questions/21209/innodb-high-disk-write-i-o-on-ibdata1-file-and-ib-logfile0
Please be aware that by default Percona Toolkit attaches only to mysqld. To find the most heavily used files you have to run it against every process that might create such load. In my case I was sure it was the MySQL server, so that was enough for me.
Please read http://www.percona.com/doc/percona-toolkit/2.0/pt-ioprofile.html before you use it.
Try investigating with
dstat --top-bio
It will show you the processes that use the most I/O.
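A sketch of how that might be run (the 5-second interval is just an example; --top-io adds the heaviest process by total I/O alongside the block-I/O column):
dstat --top-bio --top-io 5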
In Linux you have /proc/diskstats, but it gives only block-device-level stats.
I have never seen a built-in mechanism to determine which file is busy in Linux.

sphinx config || config/sphinx.yml

My Sphinx configuration, in config/sphinx.yml, is:
development:
  bin_path: "/usr/local/bin"
  searchd_binary_name: searchd
  indexer_binary_name: indexer
but every time I run rake ts:index I get:
Sphinx cannot be found on your system. You may need to configure the following
settings in your config/sphinx.yml file:
* bin_path
* searchd_binary_name
* indexer_binary_name
For more information, read the documentation:
http://freelancing-god.github.com/ts/en/advanced_config.html
Generating Configuration to config/development.sphinx.conf
Sphinx 2.0.1-beta (r2792)
Copyright (c) 2001-2011, Andrew Aksyonoff
Copyright (c) 2008-2011, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'config/development.sphinx.conf'...
indexing index 'post_core'...
collected 2 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 2 docs, 675 bytes
total 0.006 sec, 110510 bytes/sec, 327.43 docs/sec
skipping non-plain index 'post'...
total 6 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 12 writes, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg
rotating indices: succesfully sent SIGHUP to searchd (pid=19438).
Generating Configuration to config/development.sphinx.conf
Sphinx 2.0.1-beta (r2792)
Copyright (c) 2001-2011, Andrew Aksyonoff
Copyright (c) 2008-2011, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'config/development.sphinx.conf'...
indexing index 'post_core'...
collected 2 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 2 docs, 675 bytes
total 0.006 sec, 105567 bytes/sec, 312.79 docs/sec
skipping non-plain index 'post'...
total 6 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 12 writes, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg
rotating indices: succesfully sent SIGHUP to searchd (pid=19438).
So what's the problem? Why does the rake task say it can't find Sphinx even though it's installed?
The warning from Thinking Sphinx could definitely be clearer... the problem is very likely to be how old your version of Thinking Sphinx is. Older TS versions don't know about Sphinx 2.0.x - so I'd recommend updating to the latest version of Thinking Sphinx (either 1.4.6 for Rails 1.2 and 2.x, or 2.0.5 for Rails 3).
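For instance, a minimal sketch of what that update might look like (version numbers taken from the recommendation above; adjust for your Rails version):
gem install thinking-sphinx -v 2.0.5    # Rails 3; or pin it in the Gemfile and run: bundle update thinking-sphinx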
There are two things that help to solve this problem. First, as Pat says, it is useful to update the Thinking Sphinx plugin or gem to the latest version (either 1.4.x for Rails 2, or 2.0.x for Rails 3). Second, it sometimes helps to specify the version of Sphinx in the configuration file (you can find it out by calling "indexer"), especially if Sphinx is running on a remote server and Thinking Sphinx does not have access to Sphinx locally:
production:
  ..
  version: 2.0.4  # <------- version of Sphinx on the remote server 192.168.1.10
  port: 9312
  address: 192.168.1.10
  ..
I was facing the same issue and looked everywhere for an answer without any resolution.
The trick that worked for me was to install an older version of Sphinx (0.9.x) instead of the latest beta.
Using the latest Thinking Sphinx with this version of Sphinx resolved the issue.
