Merging yarn.lock into an existing pnpm-lock.yaml - yarnpkg

There are two repos with TS projects: one based on Yarn, the other on pnpm with multiple workspaces.
I need to make the first one part of the second, as one more workspace.
When I put yarn.lock (from the first repo) into the pnpm repo and run pnpm import, the pnpm-lock.yaml gets mangled and the other workspaces stop building.
The other option I am thinking of trying is migrating that repo from Yarn to pnpm first, and then merging (it might be easier with two pnpm-lock.yaml files, I don't know).
So I'm asking the experts out here who have dealt with a similar use case.
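For reference, that second approach would look roughly like this (paths are placeholders; pnpm import reads yarn.lock and writes a pnpm-lock.yaml):

    # In the Yarn repo, before moving it into the monorepo:
    cd my-yarn-repo                  # hypothetical path
    pnpm import                      # converts yarn.lock into pnpm-lock.yaml
    pnpm install --frozen-lockfile   # sanity-check the converted lockfile

    # Then copy the project into the pnpm monorepo, list it in
    # pnpm-workspace.yaml, and let pnpm re-resolve inside the workspace:
    cd ../pnpm-monorepo
    pnpm install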
Thanks!

Related

Go workspaces - what to check in to git, and whether to use git submodules?

I have been playing around with Go workspaces and everything is working as it should.
However, I am confused about whether I should be committing the root directory where the go.work lives, and whether I should, in fact, be committing go.work itself.
Assuming the root should be committed, I added the projects as git submodules.
I am not sure I like this workflow, though :-) I mean, using git submodules.
So I have this directory structure:
root (where go.work is)
proj1 (also added to the go.work)
proj2 (also added to the go.work)
Here is my go.work in the root:

    go 1.18

    use (
        ./proj1
        ./proj2
    )
I cannot find any information about this. If we should be managing the "root", then I assume it needs to end up in git - but if I don't want to manage this as one massive monorepo, then I need to use git submodules.
Or maybe the "root" should never be added to git, and we just use it locally?
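Concretely, the submodule variant I am considering would look something like this (URLs are placeholders):

    # root repo holds only the workspace glue
    git init workspace-root && cd workspace-root
    git submodule add https://example.com/you/proj1.git proj1
    git submodule add https://example.com/you/proj2.git proj2
    go work init ./proj1 ./proj2    # writes go.work
    # optionally keep go.work local-only instead of committing it:
    echo "go.work" >> .gitignore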
Does anyone have any experience with a good workflow ?
Information seems thin on the ground, although workspaces are a fairly new addition.
Thanks in advance.

Why and when does yarn decide not to hoist a package in a workspace?

I'm working on a large project using yarn workspaces. I know that yarn workspaces essentially does two things:
It automates the symlinking process we had to do manually years ago when we wanted to share private packages.
It hoists packages to the top-level node_modules in order to be more efficient.
However, I have noticed that my packages still contain code in their own node_modules, and I'm not sure why. When I make a sample monorepo app and install, say, lodash in one workspace, it goes straight to the root node_modules.
Why and when does yarn decide to install a package inside a package's own node_modules?
I found the answer on yarn's Discord: yarn will always hoist unless doing so would conflict with another version of the same package.
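For example, if two workspaces pin different lodash majors (hypothetical versions), the resulting layout looks like this:

    node_modules/lodash                  # v4.x, hoisted (used by app-a)
    packages/app-b/node_modules/lodash   # v3.x, kept local because it
                                         # conflicts with the hoisted copy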

Lerna advantages with yarn workspaces - why use Lerna at all?

I'm trying to set up a monorepo and came across lerna+yarn. My problem is: what actual advantage do I get from using Lerna? I can use yarn workspaces alone and have the same functionality. I came across the following: Are there any advantages to using Lerna with Yarn workspaces? but there is no specific answer there as to the concrete advantage of using Lerna.
As stated in the yarn workspaces documentation,
workspaces are the primitives that tools like lerna can use
You might have a project that only needs workspaces as "primitives", and not Lerna. Both are tools, but Lerna, as a higher-level tool than yarn workspaces, helps you organize your monorepo when you are working on open source or in a team:
The install (bootstrap) / build / start scripts live at the project root: one install and one package resolution instead of one per package.
node_modules at the root is 'fat', while in sub-projects it is tiny: one place to look when you resolve package conflicts.
Parallel execution of scripts in sub-projects: e.g. starting up both server and frontend development in watch mode can be a single command (see the sketch after this list).
Publishing and versioning is much easier.
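As an illustration of the last two points, with Lerna these become one-liners (the script name is a placeholder):

    lerna run dev --parallel   # start every package's "dev" script at once
    lerna publish              # version-bump and publish all changed packages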
There is certainly a lot of overlap between the two. The big difference is that Lerna handles package management, versioning and deployment.

How to disable the removal of unused packages in composer?

I have many branches in git with different sets of packages in composer.json.
After each git checkout I need to run composer install, and composer starts downloading the missing packages. At that moment, composer also removes the packages that are needed by other branches, so when I check out another branch, I will have to download those packages again. With packages such as PHPUnit, Codeception, or other frameworks, this takes a very long time.
Is it possible to disable the removal of unused packages in composer?
(I have seen this feature in bower and npm.)
Thank you.
Right now this is not supported: install just performs the actions needed to comply with the project requirements. Since the requirements technically change in your case, its behavior is correct. The feature could be implemented in Composer, but it is not trivial, as it is 'unnatural' behavior that sits quite low in the stack and would be hacky to bolt on.
However, I think the real issue here is that your workflow is off. If different branches in Git have wildly different dependencies, it is first of all doubtful that they should be branches at all rather than entirely separate repositories, because they are really different projects then.
If that is not the case, the easiest solution is to clone the repository multiple times and keep the different clones on their respective branches. That solves all your problems immediately and lets Composer do its work as intended. This is also a very common workflow in bigger projects, as in-place branch switching is really only practical for short-lived branches like PRs and feature branches.
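A minimal sketch of that multi-clone workflow (URL and branch names are placeholders):

    git clone git@example.com:you/project.git project-main
    git clone git@example.com:you/project.git project-feature
    (cd project-main && composer install)
    (cd project-feature && git checkout feature-x && composer install)
    # each clone keeps its own vendor/ directory, so "switching branches"
    # is just switching directories - nothing gets removed or re-downloaded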

Project structure for scientific Python projects

I am looking for a better way to structure my research projects. I have the following setup:
There are projects a, b, c and a library lib. Each project tackles a different research question, and the library carries code that is used across projects, so all projects depend on lib. Things get more complicated because project c depends on projects a and b as well. When I work on project c, I will also update a, b, or lib simultaneously. Each project is in a separate git repository.
So far I have dealt with this situation by including the dependencies above via git submodule, with all source files located in the root dir of each project. The advantage is that I keep track of which version of lib each of my projects depends on; one of my projects could even depend on an outdated version of lib. I run everything from the root directory without "installing" any of the packages to site-packages or the like. When a path is not set correctly, I override it via sys.path.insert.
However, the following points make me want to change layout:
I keep losing track of which version of lib I am editing.
I want to make use of automated testing tools (tox, Jenkins, etc.), which seem to be much easier to handle with a standard project setup.
sys.path.insert can lead to subtle problems which are hard to debug.
I usually want all my projects to work with the tip of lib anyway.
Therefore I am currently rearranging all projects (especially lib) to follow the standard Python directory structure (source stored in a subdirectory, a setup.py at the root) so that I can work in a virtualenv. Then I can list all my dependencies in requirements.txt. First I install lib as editable via pip install -e ., then I run pip freeze > requirements.txt, which includes a line similar to this:
-e git+<path_to_remote>@<sha>#egg=lib
So, as with git submodule, I have again generated a dependency on a specific commit (sha), ensuring that I can check out an old commit and the project should still run. I can now install everything in a virtualenv and am rid of my path problems. Great.
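For concreteness, the layout I am converging on looks roughly like this (names are illustrative):

    lib/
        setup.py
        lib/...            # the importable package
    project_a/
        setup.py
        requirements.txt   # pins lib to a specific commit
        project_a/...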
I face some new trouble, though. One problem is how to update the sha in requirements.txt. The easiest (but probably not the most elegant) solution I see is to write a pre-commit hook that updates the sha before committing. Is there a better way?
And more generally - do you see a better solution given my setup?
As far as I can see, you have mostly solved your problem and there are only small bits left.
1) Don't use hashes to identify versions of your libraries. Even if you don't publish your libraries to the Cheese Shop (PyPI), do normal library versioning (semver) and tag your git repositories accordingly. That way you will have human-readable and manageable versions in the git+https://github.com/... URLs of your dependencies.
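With tags, the pinned line in requirements.txt becomes readable (repo URL and version are placeholders):

    git+https://github.com/you/lib.git@v1.2.0#egg=lib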
2) Set up tox so that it lets you test both the stable versions of your dependencies (the ones you tagged last) and the master versions straight from the latest repo revision.
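A minimal tox.ini sketch of that idea (repo URL, tag, and test runner are assumptions):

    [tox]
    envlist = stable, master

    [testenv]
    deps =
        pytest
        stable: git+https://github.com/you/lib.git@v1.2.0#egg=lib
        master: git+https://github.com/you/lib.git@master#egg=lib
    commands = pytest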
