The dependency resolution of my Poetry environments frequently takes >20 min. My personal best is 6 hrs!!! I'm clearly doing something wrong. Running poetry lock -vvv, I notice that various sdist versions get downloaded, and each download takes several seconds. Additionally, I see the following messages repeating:
or
It seems this is where the resolution takes the longest. I am using a private PyPI server as my secondary source:
[[tool.poetry.source]]
name = "private_pypi"
url = "https://pypi.private_pypi.com.au/simple"
secondary = true
[[tool.poetry.source]]
name = "pypi-public"
url = "https://pypi.org/simple/"
I also see a message earlier in the logs, Private PyPi: Response url ... differs from request url ..., but I'm not sure if this is related.
There is an extensive thread in Poetry GitHub issue #2094 that seems to indicate many of the resolution woes are out of Poetry's hands. I'm not sure if this is the case for me.
I'm simply looking for some next steps to try and speed things up.
Will nailing down versions in my pyproject.toml (i.e. using == rather than ^ or >=) help? Is there something immediately obvious that I'm doing wrong? How many goats should I sacrifice?
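For reference, this is the kind of change I mean (the package names are just examples):
[tool.poetry.dependencies]
requests = "^2.28"    # caret constraint: the resolver may have to consider many candidate versions
numpy = "==1.23.5"    # pinned constraint: exactly one candidate to check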
First, run:
poetry cache clear pypi --all
Then run:
poetry lock
I'm trying to optimize our Poetry build times by storing a tar.gz of the venv at the end of the build in an Azure Storage blob (similar to S3), with a key generated by taking an md5sum of poetry.lock and pyproject.toml (this part works fine).
So, at the beginning of another build, I hash those files and check whether that blob exists in storage. If it does, I download it and extract its contents into the project's .venv/ directory.
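A rough sketch of what the build steps do (fetch_blob and upload_blob are placeholders for the actual Azure Storage calls, not real commands):
# cache key derived from the two files that determine the environment
KEY="venv-$(cat pyproject.toml poetry.lock | md5sum | cut -d' ' -f1).tar.gz"
if fetch_blob "$KEY" cached-venv.tar.gz; then
    mkdir -p .venv
    tar -xzf cached-venv.tar.gz -C .venv        # restore the cached virtualenv
else
    poetry install                              # cache miss: install as usual...
    tar -czf cached-venv.tar.gz -C .venv .      # ...then archive the venv
    upload_blob "$KEY" cached-venv.tar.gz
fi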
This worked fine at first, but after a while I started getting builds that threw this error when trying to run any command through Poetry:
+ poetry run reorder-python-imports --diff-only <SOME_FILES>
FileNotFoundError
[Errno 2] No such file or directory
at /usr/local/lib/python3.8/os.py:591 in _execvpe
587│ argrest = (args,)
588│ env = environ
589│
590│ if path.dirname(file):
→ 591│ exec_func(file, *argrest)
592│ return
593│ saved_exc = None
594│ path_list = get_exec_path(env)
595│ if name != 'nt':
I have confirmed that there is no pre-existing .venv folder in the project dir and that the extracted contents are exactly the same as if Poetry had installed everything from the network.
If I just run poetry install on top of the cached venv, it says there are no more dependencies to install, but it still throws the error above.
If I delete the venv and install everything again, commands work fine.
I have no more ideas on how to debug and solve this issue. Any help would be much appreciated! :)
OK, I think I've understood it: virtualenvs have some hardcoded paths that are not easy to move around.
From what is said in the post Can I move a virtualenv?, moving venvs is not good practice.
And there goes my way of speeding up builds...
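The hardcoded paths are easy to see: the console scripts inside the venv embed the absolute path of the interpreter they were created with (the path shown here is only illustrative):
head -n1 .venv/bin/reorder-python-imports
#!/path/of/the/build/that/created/it/.venv/bin/python
# once the venv is extracted somewhere else, that interpreter path no longer exists,
# which is exactly what os.execvpe complains about in the traceback above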
I have a React application using ApolloClient with Apollo-Link-Schema. The application works fine locally, but in our staging environment (using GoCD) we get the following error:
Uncaught Error: Cannot use e "__Schema" from another module or realm.
Ensure that there is only one instance of "graphql" in the node_modules
directory. If different versions of "graphql" are the dependencies of other
relied on modules, use "resolutions" to ensure only one version is installed.
https://yarnpkg.com/en/docs/selective-version-resolutions
Duplicate "graphql" modules cannot be used at the same time since different
versions may have different capabilities and behavior. The data from one
version used in the function from another could produce confusing and
spurious results.
at t.a (instanceOf.mjs:21)
at C (definition.mjs:37)
at _ (definition.mjs:22)
at X (definition.mjs:284)
at J (definition.mjs:287)
at new Y (definition.mjs:252)
at Y (definition.mjs:254)
at Object.<anonymous> (introspection.mjs:459)
at u (NominationsApprovals.module.js:80)
at Object.<anonymous> (validate.mjs:1)
Dependencies are installed with yarn, and I've added the resolutions field to package.json:
"resolutions": {
"graphql": "^14.5.8"
},
I've checked the yarn.lock and can only find one reference for the graphql package.
npm ls graphql does not display any duplicates.
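(For reference, yarn why is another way to see which packages pull graphql in and which copy gets resolved:)
yarn why graphql    # lists every dependent of graphql and the resolved version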
I thought maybe it's a build issue with webpack; I have a different build script for staging, but running that locally I am still able to get the React application to run with that bundle.
Can anyone suggest anything else to help me fix this?
I managed to find the cause of the issue, in case this helps anyone else. The issue has nothing to do with duplicate instances of the package at all; it is a false positive triggered by our use of webpack's DefinePlugin to set process.env.NODE_ENV to staging for our staging build.
However, webpack's mode (see https://webpack.js.org/configuration/mode/), which sets process.env.NODE_ENV, only accepts none, development, and production as valid values. This caused an environment check in the graphql package to fail and produce this error message.
In our case we need to differentiate between staging and production, because our API endpoint differs between them, so the solution we implemented is not to rely on process.env.NODE_ENV but to assign a custom variable at build time (e.g. process.env.API_URL).
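For anyone doing the same, a minimal sketch of that approach (API_URL and the URL value are just examples):
// webpack.staging.config.js
const webpack = require('webpack');

module.exports = {
  mode: 'production', // keep one of the values webpack and graphql expect
  plugins: [
    new webpack.DefinePlugin({
      // inject our own variable instead of overriding NODE_ENV with "staging"
      'process.env.API_URL': JSON.stringify('https://staging-api.example.com'),
    }),
  ],
};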
I would try to replicate the error locally and debug it. Try this:
rm -rf node_modules yarn.lock
# also remove any lock files if you have package-lock.json too
yarn install
# build the project locally and see if you got the error
I got this problem once when I was working with Gatsby and two different themes were using different versions of GraphQL. Also, be more explicit with the version (without the caret) and check if the error persists.
Do you have a repo you can share? That would also help us to help you :)
While changing NODE_ENV to production might solve the issue, it is not an ideal solution if you have different variables for each environment and don't want to mess with your metrics.
You said you use webpack. If the build with the issue uses some kind of source map in its devtool setting, you might want to disable that to see if the problem persists. That's how I solved this without setting my NODE_ENV to production.
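For example, something along these lines in the staging config (sketch only):
// webpack.config.js (staging): disable source maps to rule them out
module.exports = {
  mode: 'production',
  devtool: false, // instead of e.g. 'source-map' or 'eval-source-map'
};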
I had a similar problem when trying to run Apollo codegen and was able to fix it by deduping my npm packages. Run this:
rm -rf node_modules && npm i && npm dedupe
I was having this problem, so I switched to yarn; after deleting node_modules and the npm lockfile and then running yarn, the problem went away :-)
I ended up here because I use the AWS CDK and the NodejsFunction construct, and I was bundling with minify: true.
Toggling minify to false resolved this for me.
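For anyone else on the CDK, this is roughly what that change looks like (sketch assuming aws-cdk-lib v2; the entry path and names are just examples):
import { Stack, StackProps } from 'aws-cdk-lib';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { Construct } from 'constructs';

export class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    new NodejsFunction(this, 'GraphqlHandler', {
      entry: 'src/handler.ts',
      bundling: { minify: false }, // was true, which triggered the "another module or realm" error
    });
  }
}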
I need to configure WireGuard to bring up a VPN on boot on an embedded Linux device.
My recipe installs a /etc/wireguard/wg0.conf file, pretty much like the examples found on the Internet.
Then I try to enable the service in systemd like this in my wireguard.bb:
SYSTEMD_SERVICE = "wg-quick#wg0.service"
SYSTEMD_AUTO_ENABLE = "enable"
But bitbake throws an error:
ERROR: Function failed: SYSTEMD_SERVICE_my-conf value wg-quick@wg0.service does not exist
I checked the temporary directory and the wg0.conf file appears in the correct places, but it seems that bitbake's SYSTEMD_SERVICE doesn't know how to expand the "wg0" after the @ sign.
If I try without the interface name (wg0):
SYSTEMD_SERVICE = "wg-quick#.service"
Bitbake is happy and finalizes my recipe, but it is not what systemd is expecting. Starting a service without an interface makes no sense...
Then I tried another approach and split the "wireguard" package itself from the configuration ("wireguard-conf" package) and added DEPENDS and RDEPENDS on "wireguard".
This got even worse, since my wireguard-conf.bb recipe does not contain a "wg-quick@.service" file (it comes from the "wireguard" dependency).
Well, I don't know how to fix this properly, and any suggestions will be highly appreciated.
Additional Info
I am using Yocto 2.0.3 in this project (with no hope of updating it).
Thanks to @TomasNovotny's comments, I managed to compare my systemd.bbclass against GitHub and noticed a change in systemd_populate_packages() that seems to solve the problem.
It works in newer OpenEmbedded releases (apparently starting with krogoth, version 2.1, released Apr 2016) and was introduced by this commit. It works for me in rocko (version 2.4, released Oct 2017). According to j4x's comment, it doesn't work in jethro (version 2.0, Nov 2015).
For older (and currently unsupported) OpenEmbedded releases, you can try to backport the patch or handle the symlinks for enabling the service yourself in do_install() (sketched below).
Also please note that the SYSTEMD_SERVICE_${PN} variable is package-specific, so the _${PN} suffix has to be added (see the manual).
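A sketch of the do_install() workaround for those older releases (it assumes the wg-quick@.service unit ends up in ${systemd_unitdir}/system; adjust to your image):
do_install_append() {
    # mimic what "systemctl enable wg-quick@wg0" would do at runtime:
    # link the template instance into multi-user.target.wants
    install -d ${D}${sysconfdir}/systemd/system/multi-user.target.wants
    ln -sf ${systemd_unitdir}/system/wg-quick@.service \
        ${D}${sysconfdir}/systemd/system/multi-user.target.wants/wg-quick@wg0.service
}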
I also tried to enable OpenVPN with my profile (in Yocto rocko), without success at first.
Finally, I got it working by providing an OpenVPN recipe extension instead of a custom recipe. The openvpn_%.bbappend file looks like this:
inherit systemd
SYSTEMD_SERVICE_${PN} = "openvpn@clientprofile.service"
SYSTEMD_AUTO_ENABLE = "enable"
do_install_append() {
    install -d ${D}${sysconfdir}/openvpn/
    ln -sf /data/etc/openvpn/clientprofile.conf ${D}${sysconfdir}/openvpn/clientprofile.conf
}
As you can see, I'm using a symlink to my profile instead of a normal file. You can install a normal OpenVPN profile file instead of creating a symlink, and it also works fine.
I'm working through the J primer, and getting stuck when it comes to the load command.
In particular, there are times when the next step in a tutorial is load 'foo' and I'll get an error like the following:
load 'plot'
not found: /users/username/j64-801/addons/graphics/plot/plot.ijs
|file name error: script
| 0!:0 y[4!:55<'y'
When I do ls /users/username/j64/addons/, I only have config and ide in there, so it makes sense that graphics is not found.
My question:
if given an example that says load 'foo', how do I go about finding and installing foo?
I'd recommend simply installing all the JAL packages ("Addons"). There aren't too many, so the download won't take long, and you'll have access to everything you need to run the Labs, Wiki examples, and any code posted by the community (e.g. on the J Forums).
To install all available Addons, type the following into Jconsole (you could theoretically type it into JHS or JQT instead, but since those are distributed as Addons, you might not be able to upgrade them while they're running):
load'pacman' NB. J PACkage MANager
install'all'
The package manager will start running, and you'll see output like:
Updating server catalog...
Installing 52 packages
Downloading base library...
Installing base library...
Downloading api/gl3...
Installing api/gl3...
Downloading api/ncurses...
Installing api/ncurses...
Then stop and restart Jconsole, and run:
load 'pacman'
'update' jpkg 'all'
This makes sure all recursive dependencies are satisfied and all packages are up to date (in particular, the base library). Ultimately, you want to see something like:
Updating server catalog...
Local JAL information was last updated: <datetime>
All available packages are installed and up to date.
Then stop & restart J one last time. When that's done, you should have everything you need to run the Labs.
To answer your final question, if you see a line like:
load'foo'
The first thing you should do is run getscripts_j_ 'foo'. In your example:
getscripts_j_ 'plot'
+---------------------------------------------------+
|c:/users/user/j64-801/addons/graphics/plot/plot.ijs|
+---------------------------------------------------+
Here you can see the fully qualified path where J expects the package to live.
In particular, you can see where it sits relative to the addons directory, which will always be of the form addons/category/module/foo.ijs. The category and module names indicate which addon you need to install, so all you have to do is pick the corresponding entry from the catalog shown in the package manager.
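For the plot example above, that would be (using the same jpkg interface as before):
load 'pacman'
'install' jpkg 'graphics/plot'   NB. category/module taken from the getscripts_j_ path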
I have a script that works great, except that it randomly fails when generating new files...
This is the code:
...
file_log_path = File.join(Rails.root, 'log', "xls_import_#{Time.now.to_i}.log")
@log = File.new(file_log_path, 'w+')
....
And this is the error inside delayed_job.log:
2012-12-21T18:18:41+0100: [Worker(delayed_job host:webserver2.netbanana.it pid:24482)] LoadDataFromCsv failed with Errno::ENOENT: No such file or directory - /var/www/rails/myapp/releases/20121210093945/log/xls_import_1356110321.log - 0 failed attempts
2012-12-21T18:18:41+0100: [Worker(delayed_job host:webserver2.netbanana.it pid:24482)] PERMANENTLY removing LoadDataFromCsv because of 1 consecutive failures.
Other times, it works! Can someone help me?
-- edit:
Well... it seems that Rails.root uses the wrong deploy path... in fact, /var/www/rails/myapp/releases/20121210093945 doesn't exist.
But, as I said, the script sometimes works and sometimes doesn't... If I restart delayed_job, my script works for a while and then starts failing again.
If you're using Capistrano to manage your releases, which I'm guessing is the case based on the path structure, then you'll need to be careful about referencing paths which can be removed after a deployment has occurred. DelayedJob needs to be restarted each time you deploy or it might be working in an orphaned directory.
If possible, you might want to use the shared/log path instead since that persists between deployments.
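For example, restarting the workers as a post-deploy step (the wrapper may be script/delayed_job or bin/delayed_job depending on your delayed_job version):
# run from the new release so the workers stop using an old releases/<timestamp> dir
cd /var/www/rails/myapp/current
RAILS_ENV=production script/delayed_job restart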
I found several zombie delayed_job processes still running... killed them all (not the Metallica song) and now it works!