How to change default `pyproject.toml` that is generated when running `poetry new` - python-poetry

When I run poetry new [directory] it generates a pyproject.toml in the directory, but I always find that I make the same initial changes to it, such as:
the python version (from 2.7 to 3.8)
the pytest version (from 4.6 to 5.4)
How can I change these defaults so that when I run poetry new, python and pytest will have my desired versions?
Is the python version specified in the pyproject.toml based on the system's python version? If that is the case, I don't think I can change my system's python version since I am using a mac and it might mess up my OS. Or can I?

The standard way to create project templates in python is with the cookiecutter utility. Its documentation is quite good and you can get started creating your own templates easily, but I'll give a short intro using the example you gave.
Cookiecutter uses a template language that allows you to specify which parts of your project template can be parametrized. In your case, that would be the project name (freely choosable) and maybe also the python & pytest versions (from a list of values). This information is stored in a file called cookiecutter.json (some more examples of how that file can look here), which should look more or less like this:
{
"full_name": "<your name>",
"email": "<your name>@<email>.com",
"project_name": "default",
"version": "0.1.0",
"python_version": ["3.8", "2.7"],
"pytest_version": ["5.4", "4.6"]
}
Now you need to:
run poetry new my_cookie to create the base for the template
put the cookiecutter.json into the resulting folder
replace all mentions of the project name under the top-level folder with {{cookiecutter.project_name}}, including files and directories
repeat that step for all other parameters in cookiecutter.json
if you're done, create a project from your template by running cookiecutter path/to/my_cookie
if you get stuck, take a look at this sample project template or the docs I linked for guidance
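After those steps, the templated pyproject.toml inside your template folder might look something like this (a sketch; the exact keys depend on the poetry version that generated the file):

```toml
[tool.poetry]
name = "{{ cookiecutter.project_name }}"
version = "{{ cookiecutter.version }}"
authors = ["{{ cookiecutter.full_name }} <{{ cookiecutter.email }}>"]

[tool.poetry.dependencies]
python = "^{{ cookiecutter.python_version }}"

[tool.poetry.dev-dependencies]
pytest = "^{{ cookiecutter.pytest_version }}"
```

When you run cookiecutter, it prompts for each key in cookiecutter.json (list values become a choice menu) and substitutes them everywhere the {{cookiecutter.*}} placeholders appear.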

Related

Installing Julia packages using a .toml file?

I am totally new to Julia!
I would like to install a large number of packages for a project I'm joining.
The project comes with a "Project.toml" file
It seems like there should be a nice way to install all the packages I need for the project, perhaps using the Project.toml file
However, I have not yet found any site that indicates how I might be able to do this.
Could anyone please let me know if what I am doing is possible, and if so, point me to a reference that would show how?
If your Project.toml is located in a folder, like myproject/Project.toml, you can simply start Julia with julia --project=/path/to/myproject, and it will automatically detect the Project.toml file in myproject.
The first time you activate this project, in the REPL, you can switch to Pkg mode by typing ], and type the command instantiate. This will cause the package manager to download and install all of the packages listed in Project.toml and their dependencies.
Another way to switch between projects during interactive use is to run activate /path/to/myproject in Pkg-mode in the REPL.
How to install julia packages from a Project.toml
First, you will have to navigate to the folder containing your Project.toml.
cd ../your-folder-containing-the-project.toml-file
in your terminal:
julia --project=.
]
instantiate
or
julia --project=. -e 'using Pkg; Pkg.instantiate()'
The other answers went already to the point, but I want to add another important aspect.
If this project comes with only a Project.toml file, you will be able to install some version of these packages; the Project.toml may also give you a range of versions known to work with the project you have been given.
But if this project also comes with a Manifest.toml file, you will be able to recreate on your machine the exact environment, with the exact versions of all dependent packages, recursively, of the person who passed you the project, using the ways described in detail in the other answers (e.g. ] activate [folder]; instantiate).
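To make the distinction concrete, a minimal Project.toml might look like this (package names and UUIDs below are illustrative). The [compat] section is where those known-good version ranges live, while the exact versions that were resolved end up in Manifest.toml:

```toml
name = "MyProject"

[deps]
JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"

[compat]
JSON = "0.21"
julia = "1.6"
```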

Building documentation in RTD using a makefile

I created a package for sagemath and used Sphinx to create its documentation. Now I'm trying to create the necessary configuration files to build the documentation in readthedocs.
The problem I've come across is that, in order for the documentation to build, I have to run Sphinx inside a sagemath shell (that is, sage -sh -c "make html").
Is there any way to achieve this with the configuration file for readthedocs? Or to use a makefile to build the docs? I can't seem to find the information in their documentation.
At the moment, Read the Docs has a fixed build process that involves calling python -m sphinx -b html (with some more parameters) from the virtual environment where the requirements are installed. This is described in the documentation.
Therefore, running custom commands is not possible.
However, since sage is available from conda and Read the Docs supports conda environments, you might be able to build the docs without invoking a Sage shell.
Read the Docs has released a beta feature that allows users to override the build process completely and execute custom commands. You can achieve what you want by using build.commands config key in the configuration file (see https://docs.readthedocs.io/en/stable/config-file/v2.html)
Full documentation of this beta feature at https://docs.readthedocs.io/en/stable/build-customization.html#override-the-build-process
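With that beta feature, a .readthedocs.yaml along these lines could run the Sage shell directly (a sketch, assuming sage is available in the build environment and the Makefile writes to _build/html; adjust paths and versions to your setup):

```yaml
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.10"
  commands:
    # run sphinx inside the sage shell, exactly as done locally
    - sage -sh -c "make html"
    # copy the result to where Read the Docs expects the output
    - mkdir -p $READTHEDOCS_OUTPUT/html
    - cp -r _build/html/. $READTHEDOCS_OUTPUT/html/
```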

import local package over global package

I'm working on a support library for a large Python project which heavily uses relative imports by appending various project directories to sys.path.
Using The Hitchhiker's Guide to Packaging as a template I attempted to create a package structure which will allow me to do a local install, but can easily be changed to a global install later if desired.
One of the dependencies of my package is the pyasn1 package for the encoding and decoding of ASN.1 annotated objects. I have to include the pyasn1 library separately as the version supported by the CentOS 6.3 default repositories is one major version back and has known bugs that will break my custom package.
The top-level of the library structure is as follows:
MyLibrary/
setup.py
setup.cfg
LICENSE.txt
README.txt
MyCustomPackage/
pyasn1-0.1.6/
In my setup configuration file I define the install directory for my library to be a local directory called .lib. This is desirable as it allows me to do absolute imports by running the command import site; site.addsitedir("MyLibrary/.lib") in the project's main application without requiring our engineers to pass command line arguments to the setup script.
setup.cfg
[install]
install-lib=.lib
setup.py
setup(
name='MyLibrary',
version='0.1a',
package_dir = {'pyasn1': 'pyasn1-0.1.6/pyasn1'},
packages=[
'MyCustomPackage',
'pyasn1',
'pyasn1.codec',
'pyasn1.compat',
'pyasn1.codec.ber',
'pyasn1.codec.cer',
'pyasn1.codec.der',
'pyasn1.type'
],
license='',
long_description=open('README.txt').read(),
data_files = []
)
The problem I've run into with doing the installation this way is that when my package tries to import pyasn1 it imports the global version and ignores the locally installed version.
As a possible workaround I have tried installing the pyasn1 package under a different name than the global package (e.g. pyasn1_0_1_6) by doing package_dir = {'pyasn1_0_1_6':'pyasn1-0.1.6/pyasn1'}. However, this fails since the imports used internally by the pyasn1 package do not use the pyasn1_0_1_6 name.
Is there some way to either a) force Python to import a locally installed package over a globally installed one or b) force a package to install under a different name?
Use virtualenv to ensure that your application runs in a fully known configuration which is independent from the OS version of libraries.
EDIT: a quick (unix) solution is setting the PYTHONPATH environment variable, which works just like PATH for Python modules (a module is loaded from the first path in which it is found, so simply prepend your directory to PYTHONPATH). Anyway, I strongly recommend you proceed with virtualenv, since it was specifically engineered for handling situations like the one you are facing.
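Both suggestions rely on the fact that Python imports the first match it finds while scanning sys.path in order. A minimal, self-contained sketch of that behavior (mymod is a made-up module name standing in for the locally installed package):

```python
import os
import sys
import tempfile

# Create a throwaway directory containing a module, standing in for the
# locally installed copy of a package.
libdir = tempfile.mkdtemp()
with open(os.path.join(libdir, "mymod.py"), "w") as f:
    f.write("VERSION = 'local'\n")

# Putting the directory at the FRONT of sys.path (which is what prepending
# to PYTHONPATH does) makes it win over any later entries.
sys.path.insert(0, libdir)
import mymod

print(mymod.VERSION)  # -> local
```

A virtualenv achieves the same effect more robustly, by giving the interpreter its own site-packages directory ahead of the system one.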
Rationale
The process is easily automatable if you write a setuptools script specifying dependencies with install_requires. For a complete example, refer to this one I wrote
Setup
Note that you can easily insert the steps below in a setup.sh shell script.
First create a virtualenv and enter it:
$ virtualenv $name
$ cd $name
Activate it:
$ source bin/activate
Now cd to your project directory and run the installer script:
$ cd $my_project_dir
$ python ./setup.py install --prefix=$path_to_virtualenv
Note the --prefix=$path_to_virtualenv, which tells the script to install into the virtualenv instead of system-wide. Call this after activating the virtualenv. Note that all the dependencies are automatically downloaded and installed in the virtualenv.
Then you are done. When you want to leave the virtualenv, issue:
$ deactivate
On subsequent calls, you will only need to activate the virtualenv (step 2), maybe using a runawesomeproject.sh if you really want.
As noted on the virtualenv website, you should use virtualenv >= 1.9, as the previous versions did not download dependencies via HTTPS. If you consider plain HTTP to be sufficient, then any version should do.
You might also try relocatable virtualenvs: set one up and copy the folder to your host. Note, however, that this feature is still experimental.

What's the best way to use CoffeeScript with Django if you're developing on Windows?

While starting to use Sass / Compass with Django couldn't be much easier regardless of platform, it has taken a bit of searching around to find the best way to use CoffeeScript with Django on a Windows development box.
Node support on Windows has greatly improved since I posted my original answer (which I will leave for historical purposes), so now it's much easier to get this working.
Download and install Node using the Windows installer. You get node and npm commands added to your Windows PATH automatically (available in cmd.exe).
Install CoffeeScript: npm install -g coffee-script. Then just to test, using cmd.exe...
coffee --version
CoffeeScript version 1.4.0 #sweet!
Install django-compressor: pip install django-compressor.
Add to your settings.py so django-compressor will precompile your CoffeeScript.
COMPRESS_PRECOMPILERS = (
('text/coffeescript', 'coffee --compile --stdio'),
)
Profit! Now use *.coffee files or inline CoffeeScript in Django templates and have it automatically compiled to javascript and combined with your other scripts into a single compressed file.
Example (taken from django-compressor docs):
{% load compress %}
{% compress js %}
<script type="text/coffeescript" charset="utf-8" src="/static/js/awesome.coffee"></script>
<script type="text/coffeescript" charset="utf-8">
# Functions:
square = (x) -> x * x
</script>
{% endcompress %}
Original answer (obsolete):
The goal is to be able to write CoffeeScript right inside Django templates and have it get automatically converted to Javascript (along with .coffee files). django-compressor has a precompiler that does this, prior to the file compression it's known best for.
Of course the issue is you want to use Windows (what's wrong with you?), and the precompiler assumes you have a typical Linux installation of node.js and coffee-script, able to invoke 'coffee' from the command line with all its standard options. To get the same functionality on Windows (without resorting to cygwin), you just have to make a little .bat file:
Grab the latest Windows binary of node
Add the path containing node.exe to PATH in Windows system environment variables
Pick one of:
Given that npm is not available for Windows, you can use ryppi, a minimal Python node package manager, to install the coffee-script package. Put ryppi.py in your Python scripts folder.
cd /d C:\Users\<USERNAME>\ #'node_modules' folder can live here or wherever
ryppi.py install coffee-script
Just download coffee-script from the main site
Add the path\to\coffeescript\bin (containing 'cake' and 'coffee') to your PATH in Windows system environment variables
Make a batch file so you can use 'coffee' from the command line (credit for this) by creating a coffee.bat file in path\to\coffeescript\bin folder above, with this as its contents:
@pushd .
@cd /d %~dp0
@node coffee %*
@popd
Without this you have to do 'node \path\to\bin\coffee' instead of just 'coffee'.
Try reopening cmd.exe and type...
coffee --version
CoffeeScript version 1.1.2 #sweet!
Now you're using the real coffee-script program on node.
Setup the django-compressor precompiler to use coffee.bat:
COMPRESS_PRECOMPILERS = (
('text/coffeescript', 'coffee.bat --compile --stdio'),
)
I put that in my local_settings.py file. Just leave off the .bat as usual in the settings file used by your Linux production server or development box. Windows wasn't happy without the .bat.
Profit!
Now you can use inline CoffeeScript in your Django templates, and have it automatically compiled to javascript and combined with all your other scripts into a single compressed .js file. I'll leave the details of using django-compressor to its documentation.
You can use one of these CoffeeScript compilers.
Some of them support file system watching, like the official node package. So you can start a console and do
coffee --watch -c src/ -o bin/
and all the CoffeeScript files in src/ will be automatically recompiled when they change. You don't need any special integration with Django, although it might be nice.
Django Pipeline (Django >= 1.5) supports CoffeeScript compilation, as well as loads of other stuff (e.g. LESS, SASS, JS/CSS minification, etc). Make sure you have CoffeeScript installed, then pip install django-pipeline, add 'pipeline' to your INSTALLED_APPS and then create the following config entry:
PIPELINE_COMPILERS = (
'pipeline.compilers.coffee.CoffeeScriptCompiler',
)
Then you can set up files to compile as per the linked docs - basically just source file(s), destination file and a name. You can refer to the compressed files by this name in templates like this:
{% compressed_js 'my_compressed_js' %}
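The per-file setup mentioned above looks roughly like this in settings.py, using the old-style django-pipeline settings from that era (the file names here are placeholders):

```python
# Maps a name ('my_compressed_js') to a group of sources and an output file.
# .coffee sources are handled by the CoffeeScriptCompiler configured above.
PIPELINE_JS = {
    'my_compressed_js': {
        'source_filenames': (
            'js/app.coffee',
        ),
        'output_filename': 'js/app.min.js',
    },
}
```

The key of the outer dict is the name you pass to the {% compressed_js %} template tag.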
This looks promising to me: http://pypi.python.org/pypi/django-coffeescript/
I find the delay that compiling via compressor adds to be too much. So I compile on the client side instead, and check in the js files. Instant, and very convenient if you start watching files when the runserver command is run:
https://gist.github.com/EmilStenstrom/4761479

configure question: What would be the most appropriate place to install example programs for a library?

I'm currently writing a configure script for a library and would like to package it with some sample binaries to give examples on how to use it. Where would be the most appropriate location to install these binaries when a user runs "make install"? I ask what would be appropriate in terms of what would comply with GNU standards.
Thanks,
Sam
You can use the /your_prefix_installation_path/share/your_package_name folder, which is the general folder to put documentation and examples in.
To do it:
For instance, the following snippet shows how to install your files into '$(datadir)/your_package_name'.
yourexampledir = $(datadir)/your_package_name/
yourexample_DATA = your file here
In an "Examples" folder, off the installation directory.
If the installation folder for the library is a "System" folder, make an installation directory in the usual "My Programs" place, and put instructions in the README on how to find it.
