yarn 2 (berry) doesn't use .yarnrc.yml from home directory - yarnpkg

I would like to use an npmAuthToken from the .yarnrc.yml located in my home directory.
~/.yarnrc.yml content:
npmScopes:
  company:
    npmAuthToken: NpmToken.token
~/Project/MyProject/.yarnrc.yml content:
enableGlobalCache: true
logFilters:
  - code: YN0013
    level: discard
nodeLinker: node-modules
npmScopes:
  company:
    npmAlwaysAuth: true
    npmPublishRegistry: "https://company-url"
    npmRegistryServer: "https://company-url"
plugins:
  - path: .yarn/plugins/@yarnpkg/plugin-workspace-tools.cjs
    spec: "@yarnpkg/plugin-workspace-tools"
yarnPath: .yarn/releases/yarn-3.1.0.cjs
However, when I try to run yarn install, it fails during the resolution step with the following message: Invalid authentication (as an anonymous user).
This worked fine with Yarn 1 and a ~/.npmrc file. Now we're migrating to Yarn 2, but I'm stuck on this issue. Any ideas on how to solve this? Thanks!
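For reference, here is a minimal sketch of a home-level file that carries both the registry and the token for the scope. It only mirrors the keys already used above (npmScopes, npmRegistryServer, npmAuthToken) and assumes the scope name and URL match the project's .yarnrc.yml; it is a shape to compare against, not a confirmed fix:
# ~/.yarnrc.yml (sketch only; scope and registry assumed to match the project config)
npmScopes:
  company:
    npmRegistryServer: "https://company-url"
    npmAuthToken: NpmToken.token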

Related

Caching playwright browser binaries in bitbucket pipelines

My goal is to enable sharding for Playwright on Bitbucket Pipelines, so I want to use parallel steps along with caching.
My bitbucket-pipelines.yml script looks like this:
image: mcr.microsoft.com/playwright:v1.25.0-focal
definitions:
  caches:
    npm: $HOME/.npm
    browsers: ~/.cache/ms-playwright # tried $HOME/.cache/ms-playwright ; $HOME/ms-playwright ; ~/ms-playwright
  steps:
    - step: &base
        caches:
          - npm
          - node
          - browsers
    - step: &setup
        script:
          - npm ci
          - npx playwright install --with-deps
    - step: &landing1
        <<: *base
        script:
          - npm run landing1
    - step: &landing2
        <<: *base
        script:
          - npm run landing2
    - step: &landing3
        <<: *base
        script:
          - npm run landing3
pipelines:
  custom:
    landing:
      - step: *setup
      - parallel:
          - step: *landing1
          - step: *landing2
          - step: *landing3
Besides trying various locations for the caches definition, I also tried just setting the repo variable PLAYWRIGHT_BROWSERS_PATH to 0 and hoping the browsers would appear within node_modules.
Caching the browsers in their default location does not work (in all four cases mentioned in the comment in the file), and not caching the browsers separately while using PLAYWRIGHT_BROWSERS_PATH=0 with the node cache also fails: each parallel step throws an error saying the browser binaries weren't installed.
I also tried switching between npm install and npm ci, exhausting all of the solutions listed here.
I hope somebody has been able to resolve this issue specifically for Bitbucket Pipelines, as that is the tool we are currently using at the company.
You can NOT perform the setup instructions in a different step from the ones that need that very setup. Each step runs on a different agent and should be able to complete regardless of the presence of any cache. If the caches are present, the step should only run faster, but with the same result!
If you try to couple steps through the cache, you lose control over what is installed: node_modules will quite often be an arbitrary stale folder that does not honor the package-lock.json for the git ref where the step runs.
Also, your "setup" step does not use the caches from the "base" step definition, so you are not even polluting the listed "base" caches correctly. Do not fix that; it would wreak havoc on your life.
If you need to reuse some setup instructions for similar steps, use yet another YAML anchor.
image: mcr.microsoft.com/playwright:v1.25.0-focal
definitions:
  caches:
    npm: ~/.npm
    browsers: ~/.cache/ms-playwright
  yaml-anchors: # nobody cares what you call this, but note anchors are not necessarily steps
    - &setup-script >-
      npm ci
      && npx playwright install --with-deps
    # also, the "step:" prefixes are dropped by the YAML anchor
    # and obviated by bitbucket-pipelines-the-application
    - &base-step
      caches:
        - npm
        # - node # don't!
        - browsers
    - &landing1-step
      <<: *base-step
      script:
        - *setup-script
        - npm run landing1
    - &landing2-step
      <<: *base-step
      script:
        - *setup-script
        - npm run landing2
    - &landing3-step
      <<: *base-step
      script:
        - *setup-script
        - npm run landing3
pipelines:
  custom:
    landing:
      - parallel:
          - step: *landing1-step
          - step: *landing2-step
          - step: *landing3-step
See https://stackoverflow.com/a/72144721/11715259
Bonus: do not use the default node cache; you are wasting time and resources by uploading, storing, and downloading node_modules folders that will be wiped by the npm ci instructions anyway.

How can I set gradle/test to work on the same docker environment where other CircleCi jobs are running

I have a CircleCI workflow that has 2 jobs. The second job (gradle/test) depends on the first one creating some files for it.
The problem is that the first job runs inside a Docker container, and the second job (gradle/test) does not. Hence, gradle/test fails since it cannot find the files the first job created. How can I set gradle/test to work in the same space?
Here is a code of the workflow:
version: 2.1
orbs:
  gradle: circleci/gradle@2.2.0
executors:
  daml-executor:
    docker:
      - image: cimg/openjdk:11.0-node
...
workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          requires:
            - daml_test
Can anyone help me configure gradle/test correctly?
CircleCI has a mechanism to share artifacts between jobs called "workspace" (well, they have multiple ones, but workspace is what you want here).
Concretely, you would add this at the end of your daml_test job definition, as an additional step:
- persist_to_workspace:
    root: /path/to/folder
    paths:
      - "*"
and that would add all the files from /path/to/folder to the workspace. On the other side, you can "mount" the workspace in your gradle/test job by adding something like this before the step where you need the files:
- attach_workspace:
    at: /whatever/mountpoint
I like to use /tmp/workspace for the path on both sides, but that's just personal preference.
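Put together for this workflow, it could look something like the sketch below. Treat it as a sketch rather than a drop-in config: the daml_test steps are elided in the question, /tmp/workspace is just the personal preference mentioned above, and pre-steps is used so the workspace is attached before the orb's gradle/test job runs its own steps:
jobs:
  daml_test:
    # ... however daml_test is actually defined (its definition is elided in the question),
    # add this as its last step so the generated files end up in the workspace:
    steps:
      - persist_to_workspace:
          root: /tmp/workspace   # assumes the files are generated under /tmp/workspace
          paths:
            - "*"
workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          requires:
            - daml_test
          pre-steps:
            - attach_workspace:
                at: /tmp/workspace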

Relay compiler throws syntax error: Unexpected "$" when running in Github Action

I am using Relay (and Hasura) and hence am required to compile my code ahead of time using the relay-compiler. I can compile the code fine on my local machine; however, it always fails in GitHub Actions.
Here is the section of my yml file where it breaks:
runs-on: ubuntu-latest
# other steps
- name: Download GraphQL Schema
  run: SECRET=$SECRET ENDPOINT=$ENDPOINT yarn run get-schema
  env:
    SECRET: ${{ secrets.hasura_admin_secret }}
    ENDPOINT: ${{ secrets.graphql_endpoint }}
- name: Test Compile Relay
  run: yarn run relay # <-- this fails
- name: Test build
  run: yarn run build
And here are those scripts in my package.json.
"start": "yarn run compile-css && react-scripts start",
"build": "yarn run build-compile-css && react-scripts build",
"get-schema": "yarn run get-graphql-schema -h \"x-hasura-admin-secret=$SECRET\" $ENDPOINT > schema.graphql",
"relay": "yarn run relay-compiler --schema schema.graphql --src src",
It fails with the error:
$ /home/runner/work/<company-name>/<app-name>/node_modules/.bin/relay-compiler --schema schema.graphql --src src
Writing js
ERROR:
Syntax Error: Unexpected "$".
I have verified that the schema is downloaded correctly and that the paths to the schema and src folder are correct.
Is there specific config or arguments I need to pass to get this working in a CI environment?
Update
After more testing, I have found that the file downloaded by get-graphql-schema is somehow not correct. The issue goes away if I commit the schema and use that instead of downloading it.
My understanding is that it is bad practice to commit schema.graphql files; is this the case? If so, are there special arguments or setup required to get schema files working correctly in GitHub Actions?
I have managed to find that when get-graphql-schema runs in GitHub Actions, it adds the following line as the first line of the file. I can remove it via an additional script.
schema.graphql:
$ /home/runner/work/<company-name>/<app-name>/node_modules/.bin/relay-compiler --schema schema.graphql --src src
schema {
  query: query_root
  mutation: mutation_root
  subscription: subscription_root
}
...
I am unsure, though, why running this command in GitHub Actions copies the command into the first line of the file.
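The stray first line looks consistent with yarn echoing the command it runs ("$ ...") to stdout, which the > redirect in the get-schema script then captures into schema.graphql. A minimal sketch of the "additional script" approach mentioned above, stripping that first line in a separate step before compiling (the sed invocation is an assumption, not necessarily what was actually used):
- name: Download GraphQL Schema
  run: SECRET=$SECRET ENDPOINT=$ENDPOINT yarn run get-schema
  env:
    SECRET: ${{ secrets.hasura_admin_secret }}
    ENDPOINT: ${{ secrets.graphql_endpoint }}
- name: Strip yarn's command echo from the schema file
  run: sed -i '1d' schema.graphql   # assumes the stray "$ ..." line is always the first line
- name: Test Compile Relay
  run: yarn run relay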

Serverless - Lambda Layers "Cannot find module 'request'"

When I deploy my serverless api using:
serverless deploy
The Lambda layer gets created, but when I go to run the function it gives me this error:
"Cannot find module 'request'"
But if I upload the .zip file manually through the console (the exact same file that's uploaded when I deploy), it works fine.
Anyone have any idea why this is happening?
environment:
  SLS_DEBUG: "*"
provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:api-type, 'uat'}-${opt:api, 'payment'}
  region: ca-central-1
  timeout: 30
  memorySize: 128
  role: ${file(config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
  apiKeys:
    - ${file(config/${opt:api-type, 'uat'}.env.json):${opt:api, "payment"}-APIKEY}
functions:
  - '${file(src/handlers/${opt:api, "payment"}.serverless.yml)}'
package:
  # individually: true
  exclude:
    - node_modules/**
    - nodejs/**
plugins:
  - serverless-offline
  - serverless-plugin-warmup
  - serverless-content-encoding
custom:
  contentEncoding:
    minimumCompressionSize: 0 # Minimum body size required for compression in bytes
layers:
  nodejs:
    package:
      artifact: nodejs.zip
    compatibleRuntimes:
      - nodejs8.10
    allowedAccounts:
      - "*"
That's what my serverless.yml looks like.
I was having a similar error to yours while using the explicit layers keys that you are using to define a Lambda layer.
My error (for the sake of web searches) was this:
Runtime.ImportModuleError: Error: Cannot find module <package name>
I feel this is a temporary solution because I wanted to explicitly define my layers like you were doing, but it wasn't working, so it seems like a bug.
I created a bug report in Serverless for this issue. If anyone else is having this same issue they can track it there.
SOLUTION
I followed this post in the Serverless forums, based on these docs from AWS.
I zipped up my node_modules under the folder nodejs so it looks like this when it is unzipped: nodejs/node_modules/<various packages>.
Then instead of using the explicit definition of layers I used the package and artifact keys like so:
layers:
  test:
    package:
      artifact: test.zip
In the function definition, the layer is referred to like this:
functions:
  function1:
    handler: index.handler
    layers:
      - { Ref: TestLambdaLayer }
The TestLambdaLayer is a convention of <your name of layer>LambdaLayer as documented here
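Applied to the layer from the question (named nodejs with the artifact nodejs.zip), the same convention would look roughly like the sketch below; function1 and index.handler are placeholders, since the question's real function definitions are included from a separate file:
layers:
  nodejs:
    package:
      artifact: nodejs.zip
functions:
  function1:                        # placeholder, the real functions come from src/handlers
    handler: index.handler
    layers:
      - { Ref: NodejsLambdaLayer }  # layer "nodejs" becomes NodejsLambdaLayer, per the naming convention above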
Make sure you run npm install inside your layers before deploying, i.e.:
cd ~/repos/repo-name/layers/utilityLayer/nodejs && npm install
Otherwise your layers will get deployed without a node_modules folder. You can download the .zip of your layer from the Lambda UI to confirm the contents of that layer.
If anyone faces a similar Runtime.ImportModuleError issue, it is fair to say that another cause could be a package exclude statement in the serverless.yml file.
Be aware that if you have this statement:
package:
  exclude:
    - './**'
    - '!node_modules/**'
    - '!dist/**'
    - '.git/**'
It will cause exactly the same error at runtime once you've deployed your Lambda function (with the Serverless Framework). Just ensure you remove the entries that could create a conflict across your dependencies.
I am using TypeScript with the serverless-plugin-typescript and I was having the same error, too.
When I switched from
const myModule = require('./src/myModule');
to
import myModule from './src/myModule';
the error disappeared. It seems like the files were not included in the zip file by Serverless when I was using require.
PS: Removing the serverless-plugin-typescript and switching back to javascript also solved the problem.

Travis gem deployment failing "Directory nonexistent"

I don't understand why the deployment is not working. I'm getting the following error in the build console:
Preparing deploy
Found gem
/usr/lib/git-core/git-stash: 186: /usr/lib/git-core/git-stash: cannot create /home/travis/build/prismicio/ruby-kit/.git/logs/refs/stash: Directory nonexistent
Build: https://travis-ci.org/prismicio/ruby-kit/jobs/40767391
My .travis.yml:
language: ruby
rvm:
  - 2.1.1
  - 2.1.0
  - 2.0.0
  - 1.9.3
  - 1.9.2
  - jruby-19mode
script: bundle exec rspec spec
notifications:
  email:
    - example@example.com
addons:
  code_climate:
    repo_token: X
deploy:
  provider: rubygems
  api_key:
    secure: XXX
  gemspec: prismic.gemspec
  on:
    tags: true
    all_branches: true
What's wrong with the build?
The error:
/usr/lib/git-core/git-stash: 186: /usr/lib/git-core/git-stash: cannot create /home/travis/build/prismicio/ruby-kit/.git/logs/refs/stash: Directory nonexistent
may be related to how you're deploying your files to your provider; it's triggered by git stash and the DPL::Provider#cleanup process (see: releases.rb). By default, the deployment provider deploys the files from the latest commit. This is not supported for all providers, so the "releases" provider needs to skip the cleanup and deploy from the current file state instead (see @BanzaiMan's comment), which you do by adding this line:
skip_cleanup: true
This is because every provider has slightly different flags; these are documented in the Deployment section (or, for the latest documentation, check GitHub for the supported providers).
Furthermore, the above error is related to a Travis CI bug (GH #1648) where File#basename strips the directory part (as per @BanzaiMan's comment), and it's not clear why this does not manifest in the CLI case. So this is something to be fixed in the near future.
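Applied to the .travis.yml from the question, the deploy section would then look roughly like this (a sketch that only adds the flag discussed above; the secure key stays elided as in the original):
deploy:
  provider: rubygems
  api_key:
    secure: XXX
  gemspec: prismic.gemspec
  skip_cleanup: true
  on:
    tags: true
    all_branches: true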
