How can I verify Yarn's integrity checks actually work? - yarnpkg

I cannot figure out how to determine whether Yarn's integrity checks actually do anything. There is not a lot of documentation.
I've tried modifying integrity hashes in yarn.lock to bogus values and then running yarn install or yarn check --integrity. Neither command fails, so how is Yarn actually checking the integrity of packages?

After some discussion with the Yarn devs on Discord, it turns out Yarn won't check integrity on yarn install unless there is actually something to install, so my test wasn't sufficient to trigger the errors.
You'll also need to be on Yarn 1.19.1 or later because of some caching bugs in earlier versions.
Apparently, yarn check is being removed, so it's not a reliable way to check integrity.
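To see a failure end to end, here's a minimal repro sketch (assuming Yarn 1.19.1+ and a sha512 entry in yarn.lock; the sed pattern is illustrative):
# Corrupt the leading characters of the integrity hashes, then force a
# real install so the check actually has something to verify.
sed -i.bak 's|sha512-........|sha512-AAAAAAAA|' yarn.lock
rm -rf node_modules   # make sure there is something to install
yarn cache clean      # don't let a cached copy short-circuit the check
yarn install          # should now fail with an integrity error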

Related

Yarn not working on windows + react-native-builder-bob, not even showing an error message

I'm developing a lib for common use across a couple of react-native projects, and for that I'm using a scaffolding lib recommended on the official react-native docs page. This is what the company uses, so I have to use it too.
For some reason, running yarn from the command line on my Windows machine does absolutely nothing, not even an error message. The company had to lend me a Mac so I could work on the lib. It's not just project commands like installing packages; even running yarn --version gives no output, it just halts.
That's the strange part: it only happens if I'm running yarn from inside the project's folder. If I run yarn on my Windows machine from some arbitrary folder, or in any project that does not use react-native-builder-bob, yarn works normally. I can install packages, check the version, run commands, everything is fine. That makes me think it's not something wrong with my yarn installation. Both the Mac and my Windows machine have version 1.22.10 installed.
I dug through the issues on bob's GitHub but couldn't find anything regarding Windows and yarn. I also have a spare SSD on which I tried a fresh Windows install, setting up my work environment again, and still got the same issue.
I really want to use my Windows machine to work on this project since it's way more powerful than the Mac they lent me. Any help would be much appreciated.
I had the same problem. Fixed it by changing the options object in scripts/bootstrap.js:
const options = {
  cwd: process.cwd(),
  env: process.env,
  stdio: 'inherit',
  encoding: 'utf-8',
  shell: true // add this property
};
I'm assuming you have already installed yarn and node.
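For what it's worth, here's my reading of why the flag matters (not from the bob docs): on Windows, yarn is installed as yarn.cmd, and Node's child_process can only resolve .cmd files through a shell, so spawning yarn without shell: true fails and bootstrap.js apparently swallows the output. A quick one-liner to see the flag working:
node -e "const {spawnSync} = require('child_process'); console.log(spawnSync('yarn', ['--version'], {encoding: 'utf-8', shell: true}).stdout)"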

Yarn keeps using old registry

I used yarn with a private registry in the past; however, that registry has now shut down and I want to use yarn with the official registry.
Whatever I do, yarn always seems to want to connect to the old registry, and there's simply no way of making it use the new one. I've already tried:
Completely removing and re-installing yarn
Running yarn config set registry https://registry.yarnpkg.com/
Verifying that there is no mention of the old registry in either ~/.npmrc or ~/.yarnrc
Clearing the yarn cache using yarn cache clean
No matter what I do, yarn still tries to connect to the old registry on every install, and I have no idea where it's getting that from...
Any ideas?
Remove your global yarn.lock
rm ~/.config/yarn/global/yarn.lock
and then
yarn config set registry https://registry.yarnpkg.com/
Got it, the culprit was ~/.config/yarn/global/yarn.lock...
Running yarn add with --verbose will tell you which .yarnrc files are being picked up. These shouldn't include the old registry.
So run yarn add <your-package> --verbose and check the .yarnrc files found for any mention of the old registry.
In my case, I fixed it by running rm ~/.npmrc on macOS.
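To sweep the usual config sources in one go, something like this should work (replace old-registry.example.com with your actual old registry host):
grep -ri "old-registry.example.com" ~/.npmrc ~/.yarnrc ~/.config/yarn/ 2>/dev/null
yarn config get registry   # should print https://registry.yarnpkg.com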

very strange behaviour with ruby, openssl, unicorn, systemd (Gcloud)

We started seeing some strange errors in our logs that normally appear when ruby isn't compiled properly against OpenSSL. But it's inconsistent...
We're getting errors like:
RuntimeError: Unsupported digest algorithm (SHA256). (also with other digests, like sha1). example error trace
Faraday::SSLError (SSL_CTX_new: (null)) example error trace
We managed to reproduce it when starting unicorn using service unicorn start or systemctl start unicorn. But only with some requests... Not all of them. Some requests that use OpenSSL under the hood do work. Others don't.
However, when we start unicorn with /etc/init.d/unicorn start, everything works without a hitch. (to clarify, systemd starts the same /etc/init.d script)
We tried debugging ENV vars, user permissions, file/dir ownership, recompile ruby, bootstrap a new server from scratch... Nothing seems to help.
In case this helps:
unicorn init.d script
unicorn.rb
What are we missing? What can we try that we haven't thought of?
UPDATE 1
output of some debug commands, e.g. OpenSSL, ruby etc
PATH is being set inside the init.d script
unicorn is being executed via su into www-data user
The same problem happens when we use this unicorn.service file in /etc/systemd/system
We're running Ubuntu 16.04 on Gcloud
Ruby was not installed via apt (explicitly removed, in case the platform image came with it pre-installed) and was compiled from scratch. We're currently running 2.3.4 and also tried 2.3.6. Compiled either manually or using ruby-build. No rbenv, nor RVM.
We install libssl-dev via apt (we're running apt-get install -y autoconf bison build-essential libssl-dev libyaml-dev libreadline6-dev zlib1g-dev libncurses5-dev libffi-dev libgdbm3 libgdbm-dev before building ruby)
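For completeness, this is how one can cross-check which OpenSSL the compiled ruby was built against versus what the system ships (generic commands, nothing GCloud-specific):
ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION'   # what ruby's openssl extension was built against
openssl version                                     # what the system openssl reports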
UPDATE 2
We're using a scripted/repeatable build process for the VM (using fabric), and this problem is consistent on multiple VMs we bootstrapped on GCloud. We then tried a VM on DigitalOcean with the same bootstrap scripts, and the problem doesn't seem to appear there.
In both cases we picked Ubuntu 16.04 64bit base image, but obviously there are some differences with kernel versions, base installed packages etc...
UPDATE 3
The problem simply vanished. See my answer below.
#gingerlime I had a similar situation with our Jenkins on GCP. We're using ChefDK 3.1.0 (embedded ruby 2.5.1p57); we tried others too. This was on a Jenkins running under systemd (Ubuntu 16.04) and upstart (Ubuntu 14.04); we tried both versions, and right now we're on 16.04 with kernel 4.15.0-1023-gcp, running a few jobs with kitchen-docker, and this problem always emerges in a few situations.
I dug in and found that this only happens when Etc.getlogin gets called (here, in my case). It doesn't return any error; it returns the correct info with the correct type (String), but once it gets called, the Unsupported digest algorithm error gets raised.
If I start the process manually as root or the jenkins user, this problem doesn't happen. I tried replacing Etc.getlogin in several different ways, like using ENV['USER'], a fixed String, or other Etc methods like getpwuid, simulating the return type and values of Etc.getlogin, and the error doesn't get raised.
I'm not sure if this is some bug related to the ruby version and the custom kernel that GCP instances use, but it happens in a situation similar to yours, and for me, Etc.getlogin was the problem. Right now, I've fixed it with a custom configuration that doesn't call this function, and it's working normally.
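If anyone wants to poke at this, a hypothetical one-liner along these lines, run once under systemd and once from a manual shell, should show whether Etc.getlogin is really the trigger:
# Digest call after Etc.getlogin; per the theory above, it should only
# raise Unsupported digest algorithm under systemd.
ruby -retc -ropenssl -e 'Etc.getlogin; puts OpenSSL::Digest::SHA256.hexdigest("test")'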
One option is that this isn't an issue of sysVinit vs. systemd at all; you just haven't triggered the issue with your sysVinit script yet.
When you run your sysVinit script through the systemctl command, it goes through a compatibility layer, and there may be a problem there. Your problem would be simpler, both for yourself and for us, if you reproduced the issue directly with a systemd service file and shared that file.
You mentioned debugging ENV, but didn't mention exactly what you checked in the ENV. This is definitely one place where systemd could make a difference. As seen in man systemd.exec, systemd sets $PATH in the environment to a fixed value:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
If this is not exactly the same as when run directly as a sysVinit script, that could be an issue.
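One way to compare the two environments directly (unit and process names assumed to be unicorn):
# What systemd will hand to the service:
sudo systemctl show unicorn --property=Environment
# What the running master process actually sees:
sudo tr '\0' '\n' < "/proc/$(pgrep -o -f 'unicorn master')/environ" | sort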
I would also check all your copies of SSL on the system. Do you have more than one? Where? Do you have more than one copy of the ruby openssl module loaded?
 locate -r lib/.*libssl.*so
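And a sketch to check which libssl the loaded ruby openssl extension actually links against (the path resolution here is illustrative):
ldd "$(ruby -ropenssl -e 'puts $LOADED_FEATURES.grep(/openssl\.so/).first')" | grep -i ssl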
Also see the answer to the FAQ: Why do things behave differently under systemd?
(also posted on this github issue)
It looks like the problem just vanished. We were testing and reproducing it consistently across several Compute Engine instances on Google Cloud. Under certain conditions (unicorn / puma started by systemd, etc.), it was completely reproducible both with our own rails app and with a plain vanilla rails app we set up for testing purposes. It was reproducible across several ruby versions as well (we tested 2.3.4, 2.3.6 and 2.5.0).
Suddenly, all instances that were consistently failing started working without exhibiting these problems. Like it never existed. We didn't even reboot some of those instances, and we saw no evidence of any unattended upgrades taking place... We also had one snapshot of a system that had this problem and that we could reliably reproduce from; as of a few hours ago, creating an instance from this snapshot stopped exhibiting it as well.
We're totally confused as to what might have caused it, and what might have made it disappear... However, without being able to reproduce it now, I guess there's no point leaving this issue open, so I'll close it. Chalk it up to deus ex machina, I suppose. (Perhaps the Google support gods, but they haven't reported anything back to us yet.)

MODULE_NOT_FOUND in heroku pg:backups

I've got a problem with the heroku pg:backups capture --app myapp command.
Heroku CLI submits usage information back to Heroku. If you would like to disable this, set `skip_analytics: true` in /home/ubuntu/.heroku/config.json
heroku-cli: Updating to 4.99.0-e5f5ef4... done
heroku-cli: Updating CLI...heroku-cli: Updating to 5.11.8-f58f4fa... done
Starting backup of postgresql-spherical-5948... done
Use Ctrl-C at any time to stop monitoring progress; the backup will continue running.
Use heroku pg:backups:info to check progress.
Stop a running backup with heroku pg:backups:cancel.
Backing up DATABASE to b598... pending
Backing up DATABASE to b598... !
▸ MODULE_NOT_FOUND: Cannot find module 'bytes'
Does anybody have a similar problem? This command is launched on deploy from CircleCI.
I started running into the same problem yesterday and was finally able to come up with a solution that works for me.
For starters, it looks like bytes is a dependency of heroku-pg, the part of the Heroku CLI used for the backups command. It seems like the dependency is not being included or installed with the version of heroku-cli used to run the backup command.
I tried CircleCI’s “Rebuild with SSH” to troubleshoot the issue, and encountered similar error messages when attempting the backup command there. While trying to reinstall heroku-cli using npm, I found that the npm and node versions were way behind what heroku-cli wanted, so maybe that's part of the problem? In any case, reinstalling with npm only produced an even more broken Heroku CLI.
Finally, I checked the build environment and it was set to Ubuntu 12.04 (Precise), which probably explains the out-of-date npm/node packages. I changed it to Ubuntu 14.04 (Trusty), pushed a new commit to CircleCI (a rebuild alone is not sufficient to change OS versions), and was able to successfully run the backup command that had been failing!
Solution: Set CircleCI build environment to Ubuntu 14.04
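After switching the image, a quick sanity check early in the build confirms the environment (generic commands, nothing CircleCI-specific):
lsb_release -a                    # should report Ubuntu 14.04 (Trusty)
node --version && npm --version   # confirm the toolchain is no longer ancient
heroku version                    # confirm the CLI still resolves after the change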

oracle-j2sdk1.7 installation failed while installing Cloudera Manager

I'm trying to install Cloudera Manager on Ubuntu 12.04, but it fails with the error below:
oracle-j2sdk1.7 installation failed. See /var/log/cloudera-manager-installer/3.install-oracle-j2sdk1.7.log for details. Click OK to revert this installation.
Any help would be appreciated.
Regards,
BJ
Looks like it's some kind of lock already in place, which might occur at login or because the first installation attempt failed.
Try logging in, maybe even rebooting and trying again.
If this still fails, try the following:
sudo rm /var/lib/apt/lists/* -vf
sudo apt-get update
This should force an update and recreate the directory correctly. There's a thread about this on the Ubuntu forums which might be more useful and provide a lot more information on this problem:
http://ubuntuforums.org/showthread.php?t=1986288
If this is still a problem, provide the full stack trace/log mentioned in the error.
We can't blindly provide advice; we need the actual error from the log in order to help you.
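Since the error message already names the log file, the quickest way to share the real failure is something like:
tail -n 50 /var/log/cloudera-manager-installer/3.install-oracle-j2sdk1.7.log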
