Does yarn `--prefer-offline` by default - yarnpkg

yarn has a --prefer-offline flag.
Does it pull from the cache by default, or is that flag needed?
What is the purpose of the local package cache if the flag is not used?

The flag changes behaviour.
If it is not set, the default behaviour seems to be for yarn to always prefer downloading when possible, using the cache only when offline.
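If you want cache-first behavior on every install without passing the flag each time, Yarn 1.x can persist it in .yarnrc. A sketch; check this against your Yarn version, as Yarn 2+ uses a different config format:

```
# .yarnrc (Yarn 1.x): prefer the local cache, fall back to the network
prefer-offline true
```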

Related

Does yarn have an equivalent to unsafe-perm=true?

As far as I understand, I can add a .npmrc file to the root of my directory to prevent node packages from being installed as a non-root user when using npm.
Is there an equivalent flag, option, or config when using yarn?
You can use yarn config set unsafe-perm true to get the equivalent result.
See the Yarn docs for config set and this GitHub answer explaining that unsafe-perm is false by default. Unfortunately, there isn't anything about it in the official docs.
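For comparison, the npm-side setting the question refers to is a one-line .npmrc entry:

```
# .npmrc
unsafe-perm=true
```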

How to compress files in MinIO (Spring, Docker)

I have a Spring project that runs in Docker. The backend and web front end are launched through a docker-compose file, and MinIO runs there as well. I don't know how to set up compression; I have tried all the different options and none of them work. Please help.
This is my Docker-compose file
environment:
MINIO_ACCESS_KEY: minio
MINIO_SECRET_KEY: miniominio
MINIO_COMPRESS: on
MINIO_COMPRESS_EXTENSIONS:
MINIO_COMPRESS_MIME_TYPES:
Or should the compression settings go in application.properties / application.yml instead?
The easiest way is to remove the env vars and enable compression through configuration with mc. If your cluster is set up as myminio in mc:
mc admin config set myminio compression enable=on allow_encryption=off extensions= mime_types=
The configuration will be stored on your cluster. Note that if you have encryption enabled, you need to enable this separately as well.
Compression is fully transparent, so verifying that it is applied requires manually inspecting the backend files.
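If you would rather keep the env-var approach from the question, note that a bare `on` in YAML is parsed as the boolean true, so it should be quoted. A sketch using the variable names from the question (empty filter lists mean no extension or MIME-type restriction, mirroring the mc command above):

```yaml
environment:
  MINIO_ACCESS_KEY: minio
  MINIO_SECRET_KEY: miniominio
  MINIO_COMPRESS: "on"           # quote it: bare `on` is YAML boolean true
  MINIO_COMPRESS_EXTENSIONS: ""  # empty = no extension filter
  MINIO_COMPRESS_MIME_TYPES: ""  # empty = no MIME-type filter
```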

How to have a "cache per package.json" file in GitLab CI?

I have a Vue web application that is built, tested and deployed using GitLab CI.
GitLab CI has a "Cache" feature where specific products of a Job can be cached so that future runs of the Job in the same Pipeline can be avoided and the cached products be used instead.
I'd like to improve my workflow's performance by caching the node_modules directory so it can be shared across Pipelines.
GitLab Docs suggests using ${CI_COMMIT_REF_SLUG} as the cache key to achieve this. However, this means "caching per-branch" and I would like to improve on that.
I would like to have a cache "per package.json". That is, only if there is a change in the contents of package.json will the cache key change and npm install will be run.
I was thinking of using a hash of the contents of the package.json file as the cache key. Is this possible with GitLab CI? If so, how?
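The idea can be sketched independently of CI: compute a digest of the file's contents, and anything keyed on that digest changes exactly when the file does. A minimal sketch; `file_cache_key` is an illustrative name:

```python
import hashlib

def file_cache_key(path):
    """SHA-256 digest of a file's contents; changes only when the file changes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```

This is conceptually what a content-based cache key provides: identical manifests map to the same key, so the cached node_modules is reused.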
This is now possible as of GitLab Runner v12.5:
cache:
  key:
    files:
      - Gemfile.lock
      - package-lock.json # or yarn.lock
  paths:
    - vendor/ruby
    - node_modules
It means cache key will be a SHA checksum computed from the most recent commits (up to two, if two files are listed) that changed the given files. Whenever one of these files changes, a new cache key is computed and a new cache is created. Any future job runs using the same Gemfile.lock and package.json with cache:key:files will use the new cache, instead of rebuilding the dependencies.
More info: https://docs.gitlab.com/ee/ci/yaml/#cachekeyfiles
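Put together, a minimal job using a content-keyed cache might look like this (the job name and image are illustrative):

```yaml
install:
  image: node:18
  script:
    - npm ci
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
```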
Also make sure to always use the --frozen-lockfile flag in your CI jobs (or npm ci). Regular npm install or yarn install / yarn commands generate new lock files, and you usually won't notice until you install packages again, which makes your build artifacts and caches inconsistent.
For that behavior, use the only:changes parameter with a static cache name.
Example:
install:
  image: node:latest
  script:
    - npm install
  cache:
    untracked: true
    key: npm # static name; works for any branch, any commit
    paths:
      - node_modules
  only: # only execute this job when there is a change in package.json
    changes:
      - package.json
If needed, read these to set up caching properly on your runners:
https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching https://docs.gitlab.com/ee/ci/caching/

Yarn - There appears to be trouble with your network connection. Retrying

I have been trying to do the quickstart guide for react native, but kept getting this error
There appears to be trouble with your network connection. Retrying...
My connection works just fine.
This happens when your network is too slow or the package being installed is too large, and Yarn just assumes it's a network problem.
Try increasing Yarn network timeout:
yarn add <yourPackage> --network-timeout 100000
Deleting the yarn.lock file and rerunning "yarn install" worked for me.
I got this issue because I was working within my company's internal network and a proxy needed to be set.
$ yarn config set proxy http://my_company_proxy_url:port
$ yarn config set https-proxy http://localhost:3128
example $ yarn config set https-proxy http://proxy.abc.com:8080
Simple working solution (right way of doing it):
Looks like yarn was trying to connect via a proxy. The following worked for me:
npm config rm proxy
npm config rm https-proxy
Source
Turning off "real time protection" with windows defender fixed it for me.
Sucks but it appears the checks are too much for yarn to handle.
It could be that your network is too slow and the timeout is relatively short; you can set it with yarn install --network-timeout=30000.
If you still get the error, it could be a proxy issue: edit ~/.yarnrc and add the appropriate proxy settings.
yarn config set network-timeout 600000 -g
Often this error is caused by hitting the network connection time limit, and yarn simply reports "trouble with your network connection".
The command above sets the global yarn network timeout to 10 minutes.
Having a long network timeout is probably fine: yarn uses a cache, so if a package is big and not yet cached, you probably want yarn to go ahead and take the time to download it.
Could be a proxy issue. Run the command below to delete the proxy.
yarn config delete proxy
The following helped me
yarn config delete https-proxy
yarn config delete proxy
They reset your https-proxy and proxy values to undefined. My https-proxy was set to localhost. Check that both proxy config values are undefined by using the following:
yarn config get https-proxy
yarn config get proxy
Often the large package involved is Material Design Icons.
Check whether your package.json uses material-design-icons and remove it.
material-design-icons is too big to handle; use material-design-icons-fonts instead if you only need the fonts.
https://medium.com/@henkjan_47362/just-a-short-notice-for-whomever-is-searching-for-hours-like-i-did-a741d0cd167b
Turn off or disable your antivirus before running the command. I was facing the same issue; after I disabled Quick Heal antivirus, it worked.
create-react-app my-app
When I use yarn I get the above error, but there is no error with npm. In this situation you can create the React project with npm:
npx create-react-app app --use-npm
Deleting the yarn.lock file, running yarn cache clean, and then running yarn solved my issue.
npm install
worked for me (but my project was built with yarn)
I got the exact same issue when running yarn install.
yarn install --network-timeout 100000
Using this alone didn't solve my problem: I had to install only ~5 packages at a time, so I ran yarn install multiple times with only a few dependencies in package.json each time.
Hope this is helpful.
In short, this is caused when yarn is having network problems and is unable to reach the registry. This can be for any number of reasons, but in all cases, the error is the same, so you might need to try a bunch of different solutions.
Reason 1: Outdated Proxy Settings
This will throw the "network connection" error if you are connected to a network that uses a proxy and you did not update yarn configs with the correct proxy setting.
You can start running the below commands to check what the current proxy configs are set to:
yarn config get https-proxy
yarn config get proxy
If the proxy URLs returned are not what you expect, you just need to run the following commands to set the correct ones:
yarn config set https-proxy <proxy-url>
yarn config set proxy <proxy-url>
Similarly, you may have previously set up a proxy in yarn but are no longer on a network connection that needs one. In that case, do the opposite and delete the proxy config:
yarn config delete https-proxy
yarn config delete proxy
Reason 2: Incorrect Domain name resolution
This will throw the "network connection" error if, for whatever reason, your machine cannot resolve your yarn registry URL to the correct IP address. This usually only happens if you (or your organization) are using an in-house package registry and the IP address of the registry changes.
In this case, the issue is not with yarn but rather with your machine. You can solve this by updating your hosts file (on macOS and Linux, /etc/hosts) with the correct values, adding a mapping as follows:
<ip-address> <registry-base-url>
example:
10.0.0.1 artifactory.my.fancy.organiza.co.za
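Before touching the hosts file, you can check what your machine currently resolves a hostname to. A quick sketch using Python's resolver, with localhost standing in for your registry hostname:

```python
import socket

def resolves_to(hostname):
    """Return the IPv4 address the OS resolver maps `hostname` to."""
    return socket.gethostbyname(hostname)

# Substitute your registry hostname here
print(resolves_to("localhost"))
```

If the printed address is not what you expect, the hosts-file mapping above (or a DNS fix) is the likely remedy.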
Adding option --network=host was the solution in my case.
docker build --network=host --progress=plain .
I encountered this error while attempting yarn outdated. In my case, a few of the packages in my project were hosted in a private registry within the company network. I didn't realize my VPN was disconnected so it was initially confusing to see the error message whilst I was still able to browse the web.
It becomes quite obvious for those patient enough to wait out all five retry attempts. I, however, ctrl-c'd after three attempts... 😒
In my case I found a reference to a defunct registry in my ~/.yarnrc file
When I removed that the error went away
This happened in my case when trying to run yarn install.
My project is a set of many sub-projects.
After a couple of retries, it showed a socket-timeout error:
error An unexpected error occurred: "https://<myregitry>/directory/-/subProject1-1.0.2.tgz: ESOCKETTIMEDOUT".
I cloned subProject1 separately, ran yarn install on it, and linked it with the main project.
I was then able to continue with my command on the main project.
Once done, I unlinked subProject1 and ran a final yarn install --force, which succeeded.
I got this error while trying to run yarn install. I use WSL with the Ubuntu distro; the following command fixed it:
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf > /dev/null
This may be a late answer but here are some possible reasons:
If you are behind a proxy you may need to configure .npmrc if you are using npm or .yarnrc if you are using yarn
If the proxy is set up correctly, you may need to remove yarn.lock or package-lock.json and re-run npm i or yarn.
If you are working within a Docker environment, or anywhere else where you don't want to modify the installation process, try adding a file named .yarnrc to the root of the project with the problem (where your package.json resides) containing:
network-timeout 600000
Docker will still run without modifying the docker-compose.yml file, and you get the timeout fix.
I faced the same issue; adding VS Code to the firewall exception list solved it for me.
I got the same issue, but my case was totally different. I am on Linux, and I got this error because the nginx service was stopped.

How does Docker know when to use the cache during a build and when not?

I'm amazed at how good Docker's caching of layers works but I'm also wondering how it determines whether it may use a cached layer or not.
Let's take these build steps for example:
Step 4 : RUN npm install -g node-gyp
---> Using cache
---> 3fc59f47f6aa
Step 5 : WORKDIR /src
---> Using cache
---> 5c6956ba5856
Step 6 : COPY package.json .
---> d82099966d6a
Removing intermediate container eb7ecb8d3ec7
Step 7 : RUN npm install
---> Running in b960cf0fdd0a
For example how does it know it can use the cached layer for npm install -g node-gyp but creates a fresh layer for npm install ?
The build cache process is explained fairly thoroughly in the Best practices for writing Dockerfiles: Leverage build cache section.
Starting with a parent image that is already in the cache, the next instruction is compared against all child images derived from that base image to see if one of them was built using the exact same instruction. If not, the cache is invalidated.
In most cases, simply comparing the instruction in the Dockerfile with one of the child images is sufficient. However, certain instructions require more examination and explanation.
For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file. The last-modified and last-accessed times of the file(s) are not considered in these checksums. During the cache lookup, the checksum is compared against the checksum in the existing images. If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
Aside from the ADD and COPY commands, cache checking does not look at the files in the container to determine a cache match. For example, when processing a RUN apt-get -y update command the files updated in the container are not examined to determine if a cache hit exists. In that case just the command string itself is used to find a match.
Once the cache is invalidated, all subsequent Dockerfile commands generate new images and the cache is not used.
You will run into situations where OS packages, NPM packages or a Git repo are updated to newer versions (say a ~2.3 semver in package.json) but as your Dockerfile or package.json hasn't updated, docker will continue using the cache.
It's possible to programmatically generate a Dockerfile that busts the cache by modifying lines based on smarter checks (e.g. retrieving the latest git branch shasum from a repo to use in the clone instruction). You can also periodically run the build with --no-cache=true to force updates.
It's because your package.json file has been modified; note the "Removing intermediate container" line in the build output.
That's also usually the reason why package-manager (vendor/3rd-party) info files are COPY'ed first during docker build. After that you run the package-manager installation, and then you add the rest of your application, i.e. src.
If you've no changes to your libs, these steps are served from the build cache.
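The ordering described above can be sketched as a Dockerfile (the base image and paths are illustrative):

```dockerfile
FROM node:20
WORKDIR /src
# Manifests first: this layer's cache key is the checksum of these files alone
COPY package.json package-lock.json ./
# Re-runs only when the manifests above change
RUN npm ci
# Application source changes often, so it comes last
COPY . .
```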
