Define setup.py dependencies from a private PyPI - pip

I'd like to install dependencies from my private PyPI by specifying them within a setup.py.
I've already tried specifying where to find the dependencies via dependency_links, like this:
setup(
    ...
    install_requires=["foo==1.0"],
    dependency_links=["https://my.private.pypi/"],
    ...
)
I've also tried specifying the entire URL within dependency_links:
setup(
    ...
    install_requires=[],
    dependency_links=["https://my.private.pypi/foo/foo-1.0.tar.gz"],
    ...
)
but when I try to install with python setup.py install, neither of them works.
Can anybody help me?
EDITS:
With the first piece of code I got this error:
...
Installed .../test-1.0.0-py3.7.egg
Processing dependencies for test==1.0.0
Searching for foo==1.0
Reading https://my.private.pypi/
Reading https://pypi.org/simple/foo/
Couldn't find index page for 'foo' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.org/simple/
No local packages or working download links found for foo==1.0
error: Could not find suitable distribution for Requirement.parse('foo==1.0')
while in the second case I didn't get any error, just the following:
...
Installed .../test-1.0.0-py3.7.egg
Processing dependencies for test==1.0.0
Finished processing dependencies for test==1.0.0
UPDATE 1:
I've tried to change the setup.py following sinoroc's instructions. Now my setup.py looks like this:
setup(
    ...
    install_requires=["foo==1.0"],
    dependency_links=["https://username:password@my.private.pypi/folder/foo/foo-1.0.tar.gz"],
    ...
)
I built the library test with python setup.py sdist and tried to install it with pip install /tmp/test/dist/test-1.0.0.tar.gz, but I still get this error:
Processing /tmp/test/dist/test-1.0.0.tar.gz
ERROR: Could not find a version that satisfies the requirement foo==1.0 (from test==1.0.0) (from versions: none)
ERROR: No matching distribution found for foo==1.0 (from test==1.0.0)
Regarding the private PyPI, I don't have any additional information because I'm not its administrator. As you can see, I just have the credentials (username and password) for that server.
Additionally, that PyPI is organised in sub-folders, https://my.private.pypi/folder/.., which is where the dependency I want to install lives.
UPDATE 2:
By running pip install --verbose /tmp/test/dist/test-1.0.0.tar.gz, it seems there is only one location searched for versions of the library foo: the public server https://pypi.org/simple/foo/, and not our private server https://my.private.pypi/folder/foo/.
Here is the output:
...
1 location(s) to search for versions of foo:
* https://pypi.org/simple/foo/
Getting page https://pypi.org/simple/foo/
Found index url https://pypi.org/simple
Looking up "https://pypi.org/simple/foo/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/foo/ HTTP/1.1" 404 13
Status code 404 not in (200, 203, 300, 301)
Could not fetch URL https://pypi.org/simple/foo/: 404 Client Error: Not Found for url: https://pypi.org/simple/foo/ - skipping
Given no hashes to check 0 links for project 'foo': discarding no candidates
ERROR: Could not find a version that satisfies the requirement foo==1.0 (from test==1.0.0) (from versions: none)
Cleaning up...
Removing source in /private/var/...
Removed build tracker '/private/var/...'
ERROR: No matching distribution found for foo==1.0 (from test==1.0.0)
Exception information:
Traceback (most recent call last):
...

In your second attempt, I believe you should still have foo==1.0 in the install_requires.
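For reference, here is a minimal sketch of that suggestion, reusing the URL from the question (depending on the setuptools version, the link may also need an #egg=foo-1.0 fragment so it gets matched to the requirement):
from setuptools import setup

setup(
    # ...
    # keep the pinned requirement AND the direct link to its sdist;
    # the URL below is the one from the question and is only illustrative
    install_requires=["foo==1.0"],
    dependency_links=["https://my.private.pypi/folder/foo/foo-1.0.tar.gz"],
    # ...
)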
Update
Be aware that pip does not support dependency_links (it used to, but does not anymore).
For pip, the alternative is to use command-line options such as --index-url, --extra-index-url, or --find-links. These options cannot be enforced on the user of your project (contrary to the dependency links from setuptools), so they have to be properly documented. To facilitate this, a good idea is to provide an example requirements.txt file to the users of your project. This file can contain some of these pip options.
For example:
# requirements.txt
# ...
--find-links 'https://my.private.pypi/'
foo==1.0
# ...
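If the private server exposes a PEP 503 "simple" index, --extra-index-url is another option; the sketch below assumes a /folder/simple/ index path (I cannot verify your server's layout) and embeds the credentials from the question directly in the URL:
# requirements.txt (sketch; the index path and credential placement are assumptions)
--extra-index-url 'https://username:password@my.private.pypi/folder/simple/'
foo==1.0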

Related

Yocto build broken when setting a remote rpm repository with https

I have generated a Yocto image to be used on all my target devices. When that image is running on the target devices, it must be possible to update it from a remote RPM repository over HTTPS.
To try doing that, I have added a dnf bbappend to my custom layer:
$ cat recipes-devtools/dnf/dnf_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += " \
    file://yocto-adv-rpm.repo \
"
do_install_append () {
    install -d ${D}/etc/yum.repos.d
    install -m 0600 ${WORKDIR}/yocto-adv-rpm.repo ${D}/etc/yum.repos.d/yocto-adv-rpm.repo
}
FILES_${PN} += "/etc/yum.repos.d"
This is the content of the repository configuration file included by the dnf bbappend recipe:
$ cat recipes-devtools/dnf/files/yocto-adv-rpm.repo
[yocto-adv-rpm]
name=Rocko Yocto Repo
baseurl=https://storage.googleapis.com/my_repo/
gpgkey=https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
enabled=1
gpgcheck=1
This repository configuration breaks the build process of the image. When I try to build the myimage recipe, I always get this error:
ERROR: myimage-1.0-r0 do_rootfs: [log_check] myimage: found 1 error message in the logfile:
[log_check] Failed to synchronize cache for repo 'yocto-adv-rpm', disabling.
ERROR: myimage-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/yocto/yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/log.do_rootfs.731
ERROR: Task (/home/yocto/yocto/sources/meta-mylayer/recipes-images/myimage.bb:do_rootfs) failed with exit code '1'
However, when I replace "https" with "http" in the "baseurl" variable:
baseurl=http://storage.googleapis.com/my_repo/
Then the myimage recipe builds fine.
The host machine can download files from the https repository using wget:
$ wget https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
The previous command works fine, so the problem is not related to the host machine; I think it must be something related to Google's certificates and the Yocto tooling.
I found some relevant information inside this file:
yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/dnf.librepo.log
The relevant part:
15:56:41 lr_download: Downloading started
15:56:41 check_transfer_statuses: Transfer finished: repodata/repomd.xml (Effective url: https://storage.googleapis.com/my_repo/repodata/repomd.xml)
15:56:41 check_finished_transfer_status: Fatal error - Curl code (77): Problem with the SSL CA cert (path? access rights?) for https://storage.googleapis.com/my_repo/repodata/repomd.xml [error setting certificate verify locations:
CAfile: /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/ca-certificates.crt
CApath: none]
15:56:41 lr_yum_download_repomd: repomd.xml download was unsuccessful
Can anyone provide useful advice on how to fix this?
Thank you in advance for your time! :-)
I finally fixed my issue by completely removing my dnf bbappend recipe from my custom layer and adding this variable to my distro.conf file:
PACKAGE_FEED_URIS = "https://storage.googleapis.com/my_repo/"
After that, at the end of the build process the image contains a valid /etc/yum.d/oe-remote-repo file and all the necessary stuff to manage it. There is no need to copy "ca-certificates.crt" manually at all.
Also, it's important to execute this command after finishing the build of the image:
$ bitbake package-index
This command generates a "repodata" directory within the package feed, which the target device needs once it uses the repo to update packages with the dnf client.
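On the target device the feed is then consumed with the usual dnf commands; a rough sketch (the package name is just a placeholder):
# refresh the metadata of the remote feed configured via PACKAGE_FEED_URIS
$ dnf makecache
# install or upgrade packages from it
$ dnf install <some-package>
$ dnf upgrade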
I found a temporary hack to fix my issue:
$ cp /etc/ssl/certs/ca-certificates.crt /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/
After that, I was finally able to build the image using the "https" repo.
Now I am in the process of fixing this issue in the right way. I'll come back with the final solution.

Haskell on Windows - Stack fails to fetch package index

I'm trying to install Haskell on Windows. I downloaded the installer and just clicked through everything, then tried to use Stack to install a package, running it from a temporary folder to which everyone has write access:
C:\t>stack install hfmt
Using latest snapshot resolver: lts-8.3
Writing implicit global project config file to: C:\sr\global-project\stack.yaml
Note: You can change the snapshot via the resolver field there.
Downloaded lts-8.3 build plan.
Fetching package index ...=.git""=="gui" was unexpected at this time.
C:\sr\indices\Hackage\git-update\all-cabal-hashes>@if ""--git-dir=.git""=="gui" @goto gui
Process exited with ExitFailure 255: C:\Program Files (x86)\Git\cmd\git.CMD --git-dir=.git fetch --tags
Failed to fetch package index, retrying.
removeDirectoryRecursive: permission denied (Access is denied.)
What's going wrong, and how can I fix it? Or should I forget about Stack and just use Cabal instead?
I tried rerunning the command as administrator. This time the response was instant:
C:\t>stack install hfmt
Fetching package index ...=.git""=="gui" was unexpected at this time.
C:\sr\indices\Hackage\git-update\all-cabal-hashes>@if ""--git-dir=.git""=="gui" @goto gui
Process exited with ExitFailure 255: C:\Program Files (x86)\Git\cmd\git.CMD --git-dir=.git fetch --tags
Failed to fetch package index, retrying.
removeDirectoryRecursive: permission denied (Access is denied.)

Install libraries in OpenShift

I've started to use OpenShift (free account) and have had success with Python. But I need to install some libraries (requests and others). How do I do that? I can't find any docs on it...
The forum's info is obscure... I've followed this thread (for third-party libs):
Setup.py
from setuptools import setup

setup(name='Igor YourAppName',
      version='1.0',
      description='OpenShift App',
      author='Igor Savinkin',
      author_email='igor.savinkin@gmail.com',
      url='http://www.python.org/sigs/distutils-sig/',
      install_requires=['requests>=2.0.0'],
      )
WSGI.py
def application(environ, start_response):
    ctype = 'text/plain'
    if environ['PATH_INFO'] == '/health':
        response_body = "1"
    elif environ['PATH_INFO'] == '/env':
        response_body = ['%s: %s' % (key, value)
                         for key, value in sorted(environ.items())]
        response_body = '\n'.join(response_body)
    else:
        ctype = 'text/html'
        import requests
See the last line, where I try to import requests.
This results in a 500 error:
Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request.
Custom Python package attempt
My second attempt followed this thread:
I've created a libs directory in my root dir, then added this to wsgi.py:
import os
import sys

sys.path.append(os.path.join(os.getenv("OPENSHIFT_REPO_DIR"), "libs"))
and cloned requests into that directory. When I do:
C:\Users\Igor\mypythonapp\libs\requests\requests>git ls-files -c
I get the full list of requests package files... but again, the result is a 500 error.
You should try reading through this section (https://developers.openshift.com/en/python-deployment-options.html) of the Developer Portal, which describes how to install dependencies for Python applications on OpenShift Online.
You should use a requirements.txt file. My requirements.txt is below:
admin$ cat requirements.txt
Flask==0.10.1
Requests==2.6.0
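For what it's worth, you can exercise the same file locally with pip before pushing; the deployment docs linked in the other answer describe how the platform picks it up on deployment:
$ pip install -r requirements.txt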

pip can't find distributions from within virtualenv

I set up a new virtualenv. From within it, pip cannot find any distributions. Outside of the env, it can. Here's the output:
(wagon-admin)[me@pjs-macbook-pro wagon-admin]$ pip install Django
Downloading/unpacking Django
Could not fetch URL https://pypi.python.org/simple/Django/: There was a problem confirming the ssl certificate: <urlopen error [Errno 1] _ssl.c:480: error:0D0890A1:asn1 encoding routines:ASN1_verify:unknown message digest algorithm>
Will skip URL https://pypi.python.org/simple/Django/ when looking for download links for Django
Could not fetch URL https://pypi.python.org/simple/: There was a problem confirming the ssl certificate: <urlopen error [Errno 1] _ssl.c:480: error:0D0890A1:asn1 encoding routines:ASN1_verify:unknown message digest algorithm>
Will skip URL https://pypi.python.org/simple/ when looking for download links for Django
Cannot fetch index base URL https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/Django/: There was a problem confirming the ssl certificate: <urlopen error [Errno 1] _ssl.c:480: error:0D0890A1:asn1 encoding routines:ASN1_verify:unknown message digest algorithm>
Will skip URL https://pypi.python.org/simple/Django/ when looking for download links for Django
Could not find any downloads that satisfy the requirement Django
No distributions at all found for Django
Storing complete log in /Users/me/.pip/pip.log
I'm on OS X and created the virtual environment using virtualenvwrapper: $ mkvirtualenv <env name>
This happens for all packages, not just Django.
Edit: The only similar thing I've found in my searching is https://github.com/pypa/pip/issues/829
I had the same problem but realized I hadn't activated my virtualenv. Once I activated it, the installation worked. Not sure why.
Looking at the command line you pasted, it looks like you activated your env, but just wanted to note this for others who happen to stumble across this.
I updated to Python 2.7 and everything works fine now.
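If you go the interpreter-update route, the virtualenv has to be recreated against the new Python; a sketch using virtualenvwrapper (the env name is taken from your prompt, the interpreter name is an assumption about your install):
# remove the old environment and rebuild it on the updated interpreter
$ rmvirtualenv wagon-admin
$ mkvirtualenv -p python2.7 wagon-admin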

Can't start deluge

I'm using Windows XP SP3 with deluge-1.2.0_rc3-win32-setup.exe, but when I try to run \deluge\deluge-python\deluge.exe, an error happens:
[error] init:1982 Dll load failed: The specified module could not be found.
...
...
..
ImportError: Dll load failed: The specified module could not be found.
[error] xxxxxx ui:147 There was an error whilst launching the request UI: gtk
[error] xxxxxx ui:148 Look at the traceback above for more information
So, Deluge doesn't start :(
I had no problem installing. I chose the complete install rather than the recommended one, and made sure to select the proper directory for installing the DLL files (in my case it was the recommended /bin option). Try reinstalling.
