I recently installed Anaconda and need to use asammdf.
By default, version 5.8.0 was installed, but I need 5.21.0.
Running the installer from https://anaconda.org/conda-forge/asammdf did not update the package.
I then created a new environment and installed asammdf and its dependencies from scratch. That way I got 5.19.16 installed, but still not the latest.
Running
conda install -c conda-forge asammdf
from the console returns the following message:
All requested packages already installed.
But it's not true…
What am I doing wrong?
Thank you in advance for your help.
Here is the result of pip freeze:
aenum==2.2.3
alabaster==0.7.12
anaconda-client==1.7.2
anaconda-navigator==1.9.12
anaconda-project==0.8.3
applaunchservices==0.2.1
appnope==0.1.0
appscript # file:///opt/concourse/worker/volumes/live/aa928f42-721f-468a-5637-76f7349085f8/volume/appscript_1594840167539/work
argh==0.26.2
argon2-cffi # file:///opt/concourse/worker/volumes/live/c4db8eed-7de0-4d68-400c-2ea7e21d3750/volume/argon2-cffi_1596828478065/work
asammdf==5.19.16
asn1crypto # file:///tmp/build/80754af9/asn1crypto_1596577642040/work
astroid # file:///opt/concourse/worker/volumes/live/b22b518b-f584-4586-5ee9-55bfa4fca96e/volume/astroid_1592495912194/work
astropy==4.0.1.post1
atomicwrites==1.4.0
attrs==19.3.0
autopep8 # file:///tmp/build/80754af9/autopep8_1596578164842/work
Babel==2.8.0
backcall==0.2.0
backports.functools-lru-cache==1.6.1
backports.shutil-get-terminal-size==1.0.0
backports.tempfile==1.0
backports.weakref==1.0.post1
beautifulsoup4==4.9.1
bitarray # file:///opt/concourse/worker/volumes/live/c6d4988e-d50d-45f2-57a8-6d845ccc7c1b/volume/bitarray_1597678765739/work
bitstruct==8.10.0
bkcharts==0.2
bleach==3.1.5
bokeh # file:///opt/concourse/worker/volumes/live/4dce277b-856c-418f-50a0-50eb18d48058/volume/bokeh_1593187628308/work
boto==2.49.0
Bottleneck==1.3.2
brotlipy==0.7.0
canmatrix==0+unknown
cantools==34.0.0
cchardet==2.1.4
certifi==2020.6.20
cffi==1.14.0
chardet==3.0.4
click==7.1.2
cloudpickle # file:///tmp/build/80754af9/cloudpickle_1594141588948/work
clyent==1.2.2
colorama==0.4.3
conda==4.8.4
conda-build==3.18.11
conda-package-handling==1.7.0
conda-verify==3.4.2
contextlib2==0.6.0.post1
cryptography==2.9.2
cycler==0.10.0
Cython # file:///opt/concourse/worker/volumes/live/a8b919bf-06b6-4be5-60d8-71d16518c421/volume/cython_1594833990717/work
cytoolz==0.10.1
dask==2.11.0
decorator==4.4.2
defusedxml==0.6.0
diff-match-patch # file:///tmp/build/80754af9/diff-match-patch_1594828741838/work
diskcache==4.1.0
distributed # file:///opt/concourse/worker/volumes/live/0a62cbc9-591a-4d87-7487-c239010d690a/volume/distributed_1594750312816/work
docutils==0.16
entrypoints==0.3
et-xmlfile==1.0.1
fastcache==1.1.0
filelock==3.0.12
flake8==3.8.3
Flask==1.1.2
fsspec==0.7.4
future==0.18.2
gevent # file:///opt/concourse/worker/volumes/live/272caec7-c260-4aae-6e70-894795c188a9/volume/gevent_1593009579780/work
glob2==0.7
gmpy2==2.0.8
greenlet==0.4.16
h5py==2.10.0
HeapDict==1.0.1
html5lib # file:///tmp/build/80754af9/html5lib_1593446221756/work
idna # file:///tmp/build/80754af9/idna_1593446292537/work
imageio # file:///tmp/build/80754af9/imageio_1594161405741/work
imagesize==1.2.0
importlib-metadata # file:///opt/concourse/worker/volumes/live/84197498-cbc0-4436-7ce0-03c4490b7a28/volume/importlib-metadata_1593446431408/work
iniconfig # file:///tmp/build/80754af9/iniconfig_1596827328212/work
intervaltree # file:///tmp/build/80754af9/intervaltree_1594361675072/work
ipykernel # file:///opt/concourse/worker/volumes/live/73e8766c-12c3-4f76-62a6-3dea9a7da5b7/volume/ipykernel_1596206701501/work/dist/ipykernel-5.3.4-py3-none-any.whl
ipython # file:///opt/concourse/worker/volumes/live/7ea70a71-5624-4799-50b3-3b90e07922f3/volume/ipython_1593447385860/work
ipython-genutils==0.2.0
ipywidgets==7.5.1
isort==4.3.21
itsdangerous==1.1.0
jdcal==1.4.1
jedi==0.14.1
Jinja2==2.11.2
joblib # file:///tmp/build/80754af9/joblib_1594236160679/work
json5==0.9.5
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client # file:///tmp/build/80754af9/jupyter_client_1594826976318/work
jupyter-console==6.1.0
jupyter-core==4.6.3
jupyterlab==2.1.5
jupyterlab-server # file:///tmp/build/80754af9/jupyterlab_server_1594164409481/work
keyring # file:///opt/concourse/worker/volumes/live/9f4aa601-8a6f-42a7-4180-58f1f835bf99/volume/keyring_1593109772323/work
kiwisolver==1.2.0
lazy-object-proxy==1.4.3
libarchive-c==2.9
llvmlite==0.33.0+1.g022ab0f
locket==0.2.0
lxml # file:///opt/concourse/worker/volumes/live/0dfefafa-9e58-455e-535c-75e73a127cfd/volume/lxml_1594826856790/work
lz4 # file:///opt/concourse/worker/volumes/live/70907f0c-3bc5-43d1-77d3-0e51df042440/volume/lz4_1595342290641/work
MarkupSafe==1.1.1
matplotlib # file:///opt/concourse/worker/volumes/live/415740a8-e00a-411a-7129-aa05c3842a43/volume/matplotlib-base_1597876353062/work
mccabe==0.6.1
mistune==0.8.4
mkl-fft==1.1.0
mkl-random==1.1.1
mkl-service==2.3.0
mock==4.0.2
more-itertools==8.4.0
mpmath==1.1.0
msgpack==1.0.0
multipledispatch==0.6.0
natsort==7.0.1
navigator-updater==0.2.1
nbconvert==5.6.1
nbformat==5.0.7
networkx # file:///tmp/build/80754af9/networkx_1594377231366/work
nltk # file:///tmp/build/80754af9/nltk_1592496090529/work
nose==1.3.7
notebook # file:///opt/concourse/worker/volumes/live/4befab56-a4d7-490d-6835-d3c47a466e66/volume/notebook_1596838669687/work
numba==0.50.1
numexpr==2.7.1
numpy # file:///opt/concourse/worker/volumes/live/40d975a3-7e82-44c8-53df-feccdbce8b96/volume/numpy_and_numpy_base_1596233859108/work
numpydoc # file:///tmp/build/80754af9/numpydoc_1594166760263/work
olefile==0.46
openpyxl # file:///tmp/build/80754af9/openpyxl_1594167385094/work
packaging==20.4
pandas # file:///opt/concourse/worker/volumes/live/8386378f-860c-458e-6b3d-426dc3c0374b/volume/pandas_1596825862710/work
pandocfilters==1.4.2
parso==0.5.2
partd==1.1.0
path==13.1.0
pathlib2==2.3.5
pathtools==0.1.2
patsy==0.5.1
pep8==1.7.1
pexpect==4.8.0
pickleshare==0.7.5
Pillow # file:///opt/concourse/worker/volumes/live/05b3bc99-cf9f-44f2-42cf-267ed3a55922/volume/pillow_1594307311725/work
pkginfo==1.5.0.1
pluggy==0.13.1
ply==3.11
prometheus-client==0.8.0
prompt-toolkit==3.0.5
psutil==5.7.0
ptyprocess==0.6.0
py # file:///tmp/build/80754af9/py_1593446248552/work
pycodestyle==2.6.0
pycosat==0.6.3
pycparser # file:///tmp/build/80754af9/pycparser_1594388511720/work
pycrypto==2.6.1
pycurl==7.43.0.5
pydocstyle # file:///tmp/build/80754af9/pydocstyle_1592848020240/work
pyflakes==2.2.0
Pygments==2.6.1
pylint # file:///opt/concourse/worker/volumes/live/42ede439-2571-4cb2-513c-394625d2381b/volume/pylint_1592496039330/work
pyodbc===4.0.0-unsupported
pyOpenSSL # file:///tmp/build/80754af9/pyopenssl_1594392929924/work
pyparsing==2.4.7
pyrsistent==0.16.0
PySocks==1.7.1
pytest==6.0.1
python-can==3.3.3
python-dateutil==2.8.1
python-jsonrpc-server==0.3.4
python-language-server==0.31.7
pytz==2020.1
PyWavelets==1.1.1
PyYAML==5.3.1
pyzmq==19.0.1
QDarkStyle==2.8.1
QtAwesome==0.7.2
qtconsole # file:///tmp/build/80754af9/qtconsole_1592848611704/work
QtPy==1.9.0
regex # file:///opt/concourse/worker/volumes/live/b2afe0bb-a615-4845-4439-4b4e32958733/volume/regex_1596829787473/work
requests # file:///tmp/build/80754af9/requests_1592841827918/work
rope==0.17.0
Rtree==0.9.4
ruamel-yaml==0.15.87
scikit-image==0.16.2
scikit-learn # file:///opt/concourse/worker/volumes/live/eab838ab-55a5-40d0-5cbd-f1854d35aa9d/volume/scikit-learn_1592503039331/work
scipy # file:///opt/concourse/worker/volumes/live/14622691-c324-4e90-5a99-87aced91207b/volume/scipy_1592930535761/work
seaborn==0.10.1
Send2Trash==1.5.0
simplegeneric==0.8.1
singledispatch==3.4.0.3
six==1.15.0
snowballstemmer==2.0.0
sortedcollections==1.2.1
sortedcontainers==2.2.2
soupsieve==2.0.1
Sphinx # file:///tmp/build/80754af9/sphinx_1597428793432/work
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==1.0.3
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.4
sphinxcontrib-websupport # file:///tmp/build/80754af9/sphinxcontrib-websupport_1597081412696/work
spyder==4.0.1
spyder-kernels==1.8.1
SQLAlchemy # file:///opt/concourse/worker/volumes/live/701c360b-024a-4c1d-72ce-e82e6bfb4620/volume/sqlalchemy_1593446320654/work
statsmodels==0.11.1
sympy # file:///opt/concourse/worker/volumes/live/794b4585-6914-43d5-6dd6-9c84fb6d47ed/volume/sympy_1594236608820/work
tables==3.6.1
tblib==1.6.0
terminado==0.8.3
testpath==0.4.4
textparser==0.23.0
threadpoolctl # file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
toml # file:///tmp/build/80754af9/toml_1592853716807/work
toolz==0.10.0
tornado==6.0.4
tqdm # file:///tmp/build/80754af9/tqdm_1596810128862/work
traitlets==4.3.3
typed-ast==1.4.1
typing-extensions # file:///tmp/build/80754af9/typing_extensions_1592847887441/work
ujson # file:///opt/concourse/worker/volumes/live/b0b42640-6047-4d8e-4be3-9ef9916c7708/volume/ujson_1592441805801/work
unicodecsv==0.14.1
urllib3 # file:///tmp/build/80754af9/urllib3_1597086586889/work
watchdog # file:///opt/concourse/worker/volumes/live/fe5d349c-1d61-4feb-6b8a-cbc5288b20a8/volume/watchdog_1593447347275/work
wcwidth # file:///tmp/build/80754af9/wcwidth_1593447189090/work
webencodings==0.5.1
Werkzeug==1.0.1
widgetsnbextension==3.5.1
wrapt==1.11.2
wurlitzer # file:///opt/concourse/worker/volumes/live/59acb45c-07e7-4304-57af-e84f3889d3ef/volume/wurlitzer_1594753863419/work
xlrd==1.2.0
XlsxWriter==1.2.9
xlwings==0.19.5
xlwt==1.3.0
xmltodict==0.12.0
yapf # file:///tmp/build/80754af9/yapf_1593528177422/work
zict==2.0.0
zipp==3.1.0
zope.event==4.4
zope.interface==4.7.1
Running:
conda create -n test-env python=3 asammdf
returns:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- asammdf
Current channels:
- https://repo.anaconda.com/pkgs/main/osx-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/osx-64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
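For reference, here is a sketch of the same command with the conda-forge channel passed explicitly, since the channel list above only shows the default channels (whether this actually resolves 5.21.0 on this machine is untested):
conda create -n test-env -c conda-forge python=3 asammdf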
I have yet to use Poetry to run a project, so please excuse my lack of understanding.
I successfully installed Poetry, the Python dependency manager, using:
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3
The next step, poetry install, initially returned this error:
me@LAPTOP-G1DAPU88:~/.ssh/workers-python/workers$ poetry install
RuntimeError
Poetry could not find a pyproject.toml file in /home/me/.ssh/workers-python/workers or its parents
at ~/.poetry/lib/poetry/_vendor/py3.8/poetry/core/factory.py:369 in locate
365│ if poetry_file.exists():
366│ return poetry_file
367│
368│ else:
→ 369│ raise RuntimeError(
370│ "Poetry could not find a pyproject.toml file in {} or its parents".format(
371│ cwd
372│ )
373│ )
I soon realised I needed to create my own pyproject.toml file. Running poetry install again yielded:
$ poetry install
TOMLError
Invalid TOML file /home/me/.ssh/workers-python/workers/pyproject.toml: Key "json " already exists.
at ~/.poetry/lib/poetry/_vendor/py3.8/poetry/core/toml/file.py:34 in read
30│ def read(self): # type: () -> "TOMLDocument"
31│ try:
32│ return super(TOMLFile, self).read()
33│ except (ValueError, TOMLKitError) as e:
→ 34│ raise TOMLError("Invalid TOML file {}: {}".format(self.path.as_posix(), e))
35│
36│ def __getattr__(self, item): # type: (str) -> Any
37│ return getattr(self.__path, item)
38│
The above error indicates there were duplicate entries in the file.
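For illustration only (a hypothetical fragment, not my actual file), repeating the same dependency key is enough to trigger that kind of message:
[tool.poetry.dependencies]
python = "^3.8"
json = "*"
json = "*"   # second occurrence of the same key -> Key "json" already exists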
Running poetry install again with the now-updated pyproject.toml file in the cwd threw this error (the one in the post's title):
$ poetry install
Creating virtualenv my_project-1_EUeV5I-py3.8 in /home/me/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies... (28.4s)
SolverProblemError
Because my_project depends on string (*) which doesn't match any versions, version solving failed.
at ~/.poetry/lib/poetry/puzzle/solver.py:241 in _solve
237│ packages = result.packages
238│ except OverrideNeeded as e:
239│ return self.solve_in_compatibility_mode(e.overrides, use_latest=use_latest)
240│ except SolveFailure as e:
→ 241│ raise SolverProblemError(e)
242│
243│ results = dict(
244│ depth_first_search(
245│ PackageNode(self._package, packages), aggregate_package_nodes
However, temporarily removing all the = "*" entries gave me this error about a '\n' on line 12, column 5, which doesn't appear to be there:
$ poetry install
TOMLError
Invalid TOML file /home/me/.ssh/workers-python/workers/pyproject.toml: Unexpected character: '\n' at line 12 col 5
at ~/.poetry/lib/poetry/_vendor/py3.8/poetry/core/toml/file.py:34 in read
30│ def read(self): # type: () -> "TOMLDocument"
31│ try:
32│ return super(TOMLFile, self).read()
33│ except (ValueError, TOMLKitError) as e:
→ 34│ raise TOMLError("Invalid TOML file {}: {}".format(self.path.as_posix(), e))
35│
36│ def __getattr__(self, item): # type: (str) -> Any
37│ return getattr(self.__path, item)
38│
me@LAPTOP-G1DAPU88:~/.ssh/workers-python/workers$ cat pyproject.toml
[tool.poetry]
name = "my_project"
version = "0.1.0"
description = "Top-level package for my_project."
authors = [""]
packages = [
{ include = "my_project"},
]
[tool.poetry.dependencies]
python = "^3.8"
click # Suspect line
I have reverted this.
Current pyproject.toml:
[tool.poetry]
name = "data_simulator"
version = "0.1.0"
description = "Top-level package for data_simulator."
authors = ["iotahoe <iotahoe#iotahoe.com>"] # daniel.bell#hitachivantara.com / daniel#iotahoe.com
packages = [
{ include = "data_simulator"},
]
[tool.poetry.dependencies]
python = "^3.8"
click = "*"
#logging = "*"
#os = "*"
#pathlib = "*"
#time = "*"
numpy = "*"
pandas = "*"
#json = "*"
#random = "*"
faker = "*"
transformers = "4.4.2"
#re = "*"
#itertools = "*"
#datetime = "*"
#requests = "*"
#copy = "*"
#collections = "*"
#collections.abc = "*"
#multiprocessing = "*"
#multiprocessing.dummy = "*"
nltk = "*"
#nltk.corpus = "*"
#string = "*"
[tool.poetry.dev-dependencies]
isort = "5.6.4"
black = "^20.8b1"
invoke = "^1.4.1"
coveralls = "^2.2.0"
pytest = "^3.0"
flake8 = "^3.8.3"
mypy = "^0.782"
[[tool.poetry.source]]
name = "azure"
url = "https://pkgs.dev.azure.com/iotahoe/Halo/_packaging/private-sources/pypi/simple/"
secondary = true
[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
Note: 'name', 'authors', 'include', 'url' have been censored.
As general advice, I recommend using Poetry's command line instead of creating and editing the pyproject.toml by hand.
Start with poetry init or poetry init -n and add your dependencies with poetry add.
The problem with your current pyproject.toml is that you declare built-in modules as dependencies, such as os, pathlib, string and others. These are part of the Python standard library and are not published on PyPI, so Poetry cannot find any matching package information in the repository. That is why you receive the message "Because my_project depends on string (*) which doesn't match any versions, version solving failed."
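A minimal sketch of that workflow, using the third-party package names from the pyproject.toml above (exact flags can differ between Poetry versions; --dev was later replaced by --group dev):
poetry init -n
poetry add click numpy pandas faker nltk transformers
poetry add --dev pytest flake8 mypy isort black invoke coveralls
poetry install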
tl;dr: Flush the *.egg-info directories before running poetry lock.
This answer is not strictly related to the current issue, but a similar error message can appear in other circumstances, so I think it's valuable to share it here.
If you are locking in a project where sub-dependencies are directly available on the file system, some *.egg-info directories may interfere with the locking process, causing issues when trying to run poetry install in a context where those *.egg-info files are missing. To avoid the problem: Flush the *.egg-info directories prior to locking. You should then have an updated poetry.lock file with more content.
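A minimal sketch of that cleanup on a Unix-like shell, run from the project root (the find pattern is an assumption about where the *.egg-info directories live):
find . -type d -name "*.egg-info" -prune -exec rm -rf {} +
poetry lock
poetry install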
I have just started dabbling with Python and I'm stuck on my first project.
I need help trying to make some sense of gpg. I have been struggling to get gpg to work with Python 3.8.1. If I run the code in Thonny with Python 3.6.9, it runs just fine.
The version is gpg (GnuPG) 2.2.4, libgcrypt 1.8.1.
Home directory : /home/bob/.gnupg
gnupg : /usr/local/lib/python3.8/site-packages/gnupg
Using Python 3.6.9, it works just fine:
#!/usr/bin/python3
from pathlib import Path
import gnupg
# My gpg keys home directory.
#gpg = gnupg.GPG(homedir='/home/bob/.gnupg')
gpg = gnupg.GPG(gnupghome='/home/bob/.gnupg')
local_path = Path("/home/bob")
src_dir = ("/home/bob/Tbox/Channels2.csv")
with open(src_dir, 'rb') as afile:
    # text = afile.read()
    status = gpg.encrypt_file(afile,
                              ['bobh@gunas.co.uk'],
                              output='/home/bob/Tbox/Channels2.csv.gpg')

print('ok: ', status.ok)
print('status: ', status.status)
print('stderr: ', status.stderr)
SHELL OUTPUT
ok: True
status: encryption ok
stderr: [GNUPG:] KEY_CONSIDERED 4678A2C439E752DA3DAE2EBA7357BB95381CD73 0
[GNUPG:] KEY_CONSIDERED 4678A2C439E752DA3DAE2EBA7357BB95381CD73 0
[GNUPG:] ENCRYPTION_COMPLIANCE_MODE 23
[GNUPG:] BEGIN_ENCRYPTION 2 9
[GNUPG:] END_ENCRYPTION
However, if I run the code in Thonny with Python 3.8.1, it does not work and I get an error message in the shell:
#!/usr/bin/python3
from pathlib import Path
import gnupg
# My gpg keys home directory.
gpg = gnupg.GPG(homedir='/home/bob/.gnupg')
#gpg = gnupg.GPG(gnupghome='/home/bob/.gnupg')
local_path = Path("/home/bob")
backup_dir = Path("/home/bob/Tbox/tbackup-test")
src_dir = ("/home/bob/Tbox/Channels2.csv")
with open(src_dir, 'rb') as afile:
    text = afile.read()
    # status = gpg.encrypt_file(text,
    status = gpg.encrypt(afile,
                         ['bobh@gunas.co.uk'],
                         output='/home/bob/Tbox/Channels2.csv.gpg')

print('ok: ', status.ok)
print('status: ', status.status)
print('stderr: ', status.stderr)
SHELL OUTPUT
ok: False
status: None
stderr: gpg: Sorry, no terminal at all requested - can't get input
I have tried adding the line no-tty to the gpg.conf file, but this did not help.
I have also tried some examples from the net, but with no joy. One problem I found concerns gpg and the word Context: e.g. c = gpg.core.Context(armor=True) fails with AttributeError: 'GPG' object has no attribute 'core'.
In the second example, instead of:
status = gpg.encrypt(afile,
you probably need:
status = gpg.encrypt(text,
Basically you need to decide if you are encrypting a file, or the contents of a file (that you're reading in variable 'text'), and then you either use gpg.encrypt or gpg.encrypt_file accordingly.
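A minimal sketch of both corrected variants, assuming the python-gnupg package (the one that accepts gnupghome=, as in the working 3.6.9 example) and the paths from the question:

import gnupg

gpg = gnupg.GPG(gnupghome='/home/bob/.gnupg')
src = '/home/bob/Tbox/Channels2.csv'
recipients = ['bobh@gunas.co.uk']

# Variant 1: hand the open file object to encrypt_file().
with open(src, 'rb') as afile:
    status = gpg.encrypt_file(afile, recipients, output=src + '.gpg')

# Variant 2: read the bytes yourself and pass them to encrypt().
with open(src, 'rb') as afile:
    text = afile.read()
status = gpg.encrypt(text, recipients, output=src + '.gpg')

print('ok:', status.ok, 'status:', status.status)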
The title is self-explanatory. Is there a way to downgrade the conda packages to the ones that were the latest on a certain date?
This is not possible programmatically. Packages in Conda are specified through MatchSpec, which does not currently have any way to constrain on a build timestamp.
Manual Searching
When searching for packages via conda search, the --info flag will print the build timestamps if they are available. So, for example, if one wanted to find the latest version of PyMC3 that someone with Python 3.6 was running a year ago (9 Dec 2018), one could check
conda search --info 'conda-forge::pymc3'
and see that version 3.5, build py36_1000 would satisfy this. If one wanted to create an env with this build in it, they could use
conda create -n py36_pymc35 -c conda-forge pymc3=3.5=py36_1000
2023 Update
In addition to Merv's post, I might add that the --json flag actually makes it quite easy to gather the history programmatically. Once you have the history, you can search for the latest package versions as of some date and make an environment with them (we do that routinely to establish "low watermark" environments for our CIs).
The conda command line invocation is:
conda search -q {package} --info --json
Here is some code that uses that to gather the history of a few packages. It is multi-threaded to speed things up a little.
import io
import json
import subprocess
import yaml
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timedelta
from tqdm import tqdm
def shell(cmd):
    proc = subprocess.run(cmd, shell=True, capture_output=True)
    return proc.stdout.decode('utf-8')

def version_as_tuple(v):
    return tuple(map(int, v.split('.')))

def get_history(p):
    txt = shell(f"conda search -q {p} --info --json")
    d = json.loads(txt)
    h = defaultdict(set)
    for vv in d.values():
        for x in vv:
            h[version_as_tuple(x['version'])].add(
                datetime.fromtimestamp(x.get('timestamp', 0) / 1e3)
            )
    h = {vers: min(dates) for vers, dates in h.items()}
    return p, h
Example usage:
metayaml = """
- boto3
- pandas >=0.25
- python >=3.8
"""
reqs = yaml.safe_load(metayaml) # in real life, read from conda.recipe/meta.yaml
all_pkgs = sorted(set([p.split()[0] for p in reqs]))
with ThreadPoolExecutor() as pool:
    history = dict(tqdm(pool.map(get_history, all_pkgs), total=len(all_pkgs)))
After that, we have a neat version history for all dependent packages. For example:
>>> {v: f'{t:%Y-%m-%d}' for v, t in history['pandas'].items()}
{(0, 20, 3): '2017-09-18',
(0, 21, 0): '2017-11-06',
(0, 21, 1): '2017-12-12',
...
(1, 4, 4): '2022-09-21',
(1, 5, 1): '2022-11-16',
(1, 5, 2): '2022-12-07'}
And:
asof = datetime.now() - timedelta(weeks=2*52)
new = {
    name: max([(vers, t) for vers, t in v.items() if t < asof])
    for name, v in history.items()
}
print(f'# as of {asof:%Y-%m-%d}')
for name, (vers, t) in new.items():
    print(f' - {name} =={".".join(map(str, vers))} # released on {t:%Y-%m-%d}')
Which produces:
# as of 2021-01-20
- boto3 ==1.16.55 # released on 2021-01-15
- pandas ==1.2.0 # released on 2020-12-26
- python ==3.9.1 # released on 2020-12-11
ref: https://github.com/louking/loutilities/tree/0.14.7
I am getting an ImportError after a pip install of this package. When I use easy_install with the egg, all is well. Any help on how to debug this would be appreciated.
I used the following commands to publish this package to PyPI, and then installed it using "pip 18.1 from c:\users\lking\anaconda2\lib\site-packages\pip (python 2.7)":
python setup.py install
python setup.py sdist bdist_wheel
twine upload dist/loutilities-0.14.7.*
:
# later
pip install loutilities
When I run the project, I see the following:
NoAppException: While importing "run", an ImportError was raised:
Traceback (most recent call last):
File "C:\Users\lking\Documents\Lou's Software\projects\contracts\contracts\venv\lib\site-packages\flask\cli.py", line 235, in locate_app
__import__(module_name)
File "C:\Users\lking\Documents\Lou's Software\projects\contracts\contracts\run.py", line 31, in <module>
app = create_app(Development(configpath), configpath)
File "C:\Users\lking\Documents\Lou's Software\projects\contracts\contracts\contracts\__init__.py", line 82, in create_app
from contracts.views.frontend import bp as frontend
File "C:\Users\lking\Documents\Lou's Software\projects\contracts\contracts\contracts\views\frontend\__init__.py", line 18, in <module>
import frontend
File "C:\Users\lking\Documents\Lou's Software\projects\contracts\contracts\contracts\views\frontend\frontend.py", line 21, in <module>
from loutilities.flask_helpers.blueprints import add_url_rules
ImportError: No module named flask_helpers.blueprints
find_packages seems to be finding the appropriate packages:
from setuptools import find_packages
find_packages()
Out[4]: ['loutilities', 'tests', 'loutilities.flask_helpers']
setup.py:
#!/usr/bin/python
# [irrelevant comments deleted]
import glob
import pdb
# home grown
from loutilities import version
from setuptools import setup, find_packages
def globit(dir, filelist):
    outfiles = []
    for file in filelist:
        filepath = '{0}/{1}'.format(dir,file)
        gfilepath = glob.glob(filepath)
        for i in range(len(gfilepath)):
            f = gfilepath[i][len(dir)+1:]
            gfilepath[i] = '{0}/{1}'.format(dir,f) # if on windows, need to replace backslash with frontslash
            outfiles += [gfilepath[i]]
    return (dir, outfiles)

setup(
    name = "loutilities",
    version = version.__version__,
    packages = find_packages(),
    # include_package_data = True,
    scripts = [
        'loutilities/agegrade.py',
        'loutilities/apikey.py',
        'loutilities/applytemplate.py',
        'loutilities/filtercsv.py',
        'loutilities/makerst.py',
    ],

    # Project uses reStructuredText, so ensure that the docutils get
    # installed or upgraded on the target machine
    install_requires = [
        'unicodecsv>=0.13.0',
    ],

    # If any package contains any of these file types, include them:
    data_files = ([
        globit('loutilities', ['*.conf','*.pyc','*.pyd','*.dll','*.h','*.xlsx']),
        globit('doc/source', ['*.txt', '*.rst', '*.html', '*.css', '*.js', '*.png', '*.py', ]),
        globit('doc/build/html', ['*.txt', '*.rst', '*.html', '*.css', '*.js', '*.png', ]),
        globit('doc/build/html/_sources', ['*.txt', '*.rst', '*.html', '*.css', '*.js', '*.png', ]),
        globit('doc/build/html/_static', ['*.txt', '*.rst', '*.html', '*.css', '*.js', '*.png', ]),
        globit('doc/build/html/_images', ['*.png', ]),
    ]),

    entry_points = {
        'console_scripts': [
            'agegrade = loutilities.agegrade:main',
            'apikey = loutilities.apikey:main',
            'applytemplate = loutilities.applytemplate:main',
            'filtercsv = loutilities.filtercsv:main',
            'makerst = loutilities.makerst:main',
        ],
    },

    zip_safe = False,

    # metadata for upload to PyPI
    description = 'some hopefully useful utilities',
    long_description = open("README.md").read(),
    license = 'Apache License, Version 2.0',
    author = 'Lou King',
    author_email = 'lking@pobox.com',
    url = 'http://github.com/louking/loutilities',
    # could also include long_description, download_url, classifiers, etc.
)
After pip install loutilities, I see an unexpected loutilities directory at the top level of my virtual environment. The directory contains a few *.pyc files but no subdirectories. I suspect import loutilities.flask_helpers tries to find a subdirectory loutilities/flask_helpers there and fails.
I think the top-level directory comes from this line in data_files:
globit('loutilities', ['*.conf','*.pyc','*.pyd','*.dll','*.h','*.xlsx']),
I suspect there is a bug in globit() that puts the data in the wrong directory.
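One way to check that suspicion is to reproduce the data_files entry in isolation and look at what it returns. This is a condensed copy of globit() from the setup.py above, run from the project root; the remark about sys.prefix is an assumption about how relative data_files paths are typically installed:

import glob

def globit(dir, filelist):
    # condensed version of the globit() defined in setup.py
    outfiles = []
    for pattern in filelist:
        for g in glob.glob('{0}/{1}'.format(dir, pattern)):
            outfiles.append('{0}/{1}'.format(dir, g[len(dir) + 1:]))
    return (dir, outfiles)

# Prints the target directory and the files pip will copy into it.
# If *.pyc files show up here, they typically land under
# <sys.prefix>/loutilities (the bare top-level directory seen above),
# not inside the importable package.
print(globit('loutilities', ['*.conf', '*.pyc', '*.pyd', '*.dll', '*.h', '*.xlsx']))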
When I deploy my app on Heroku there isn't any trouble, but when I try to run the app I get the error:
ModuleNotFoundError: No module named 'keras'
I use pip and I'm working in a virtualenv.
absl-py==0.7.0 asn1crypto==0.24.0 astor==0.7.1 attrs==18.2.0
backcall==0.1.0 bleach==3.1.0 certifi==2019.3.9 cffi==1.11.5
chardet==3.0.4 Click==7.0 cryptography==2.4.2 cycler==0.10.0
Cython==0.29.6 decorator==4.3.0 entrypoints==0.3
fix-yahoo-finance==0.0.22 Flask==1.0.2 Flask-WTF==0.14.2 gast==0.2.2
graphviz==0.10.1 grpcio==1.16.1 h5py==2.9.0 idna==2.8
ipykernel==5.1.0 ipython==7.4.0 ipython-genutils==0.2.0
ipywidgets==7.4.2 itsdangerous==1.1.0 jedi==0.13.2 Jinja2==2.10
jsonschema==3.0.0a3 jupyter==1.0.0 jupyter-client==5.2.4
jupyter-console==6.0.0 jupyter-contrib-core==0.3.3
jupyter-contrib-nbextensions==0.5.1 jupyter-core==4.4.0
jupyter-highlight-selected-word==0.2.0 jupyter-latex-envs==1.4.6
jupyter-nbextensions-configurator==0.4.1 jupyterlab==0.35.3
jupyterlab-server==0.2.0 Keras==2.1.6 Keras-Applications==1.0.6
Keras-Preprocessing==1.0.5 kiwisolver==1.0.1 lxml==4.3.0
Markdown==3.0.1 MarkupSafe==1.1.0 matplotlib==3.0.2 mistune==0.8.4
mkl-fft==1.0.10 mkl-random==1.0.2 multitasking==0.0.7
nbconvert==5.3.1 nbformat==4.4.0 notebook==5.7.4 numpy==1.15.4
pandas==0.23.4 pandas-datareader==0.7.0 pandocfilters==1.4.2
parso==0.3.1 patsy==0.5.1 pexpect==4.6.0 pickleshare==0.7.5
pipenv==2018.11.26 plotly==3.5.0 pmdarima==1.1.0
prometheus-client==0.5.0 prompt-toolkit==2.0.7 protobuf==3.6.1
ptyprocess==0.6.0 pycparser==2.19 Pygments==2.3.1 pyOpenSSL==18.0.0
pyparsing==2.3.1 pyrsistent==0.14.9 PySocks==1.6.8
python-dateutil==2.7.5 pytz==2018.9 PyYAML==3.13 pyzmq==17.1.2
qtconsole==4.4.3 requests==2.21.0 retrying==1.3.3
scikit-learn==0.19.2 scipy==1.1.0 seaborn==0.9.0 Send2Trash==1.5.0
simplejson==3.16.0 six==1.12.0 statsmodels==0.9.0 ta==0.3.8
tensorboard==1.12.2 tensorflow==1.12.0 termcolor==1.1.0
terminado==0.8.1 testpath==0.4.2 tornado==5.1.1 traitlets==4.3.2
urllib3==1.24.1 wcwidth==0.1.7 webencodings==0.5.1 Werkzeug==0.14.1
widgetsnbextension==3.4.2 wrapt==1.11.0 WTForms==2.2.1 xgboost==0.81
Heroku Error: ModuleNotFoundError: No module named 'keras'