I'm getting the following message when running some predefined Pipenv scripts in Travis-CI, and it brings me to the question: should I be running Pipenv at all in a Travis environment? Does it defeat the purpose of the CI tests?
Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning.
What is the best practice when you use Pipenv for development and Travis for CI? Should I instead manually run the commands that scripts such as pipenv run unit_tests map to? See below for a section of my Pipfile.
.travis.yml:
language: python
python:
- "3.6"
install:
- pip install pipenv
- pipenv install --dev
script:
- pipenv run unit_tests
- pipenv run linting
- pipenv run docs
Pipfile:
[scripts]
deploy = "python ./deploy.py"
docs = "python ./docs.py"
linting = "pylint **/*.py"
unit_tests = "python -m pytest --cov=marian tests"
serve = "sh ./serve.sh"
So Travis already runs the build inside a virtual env, and Pipenv detects and reuses it. Beyond installing via pipenv install --dev, using Pipenv here seemed awkward, so I dropped all Pipfile scripts and went with the following in .travis.yml:
install:
- pip install pipenv
- pipenv install --dev
script:
- pylint **/*.py
- python -m pytest --cov=marian
- python ./docs.py
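Alternatively, if you'd rather keep the Pipfile scripts, the courtesy notice itself points at the fix: set PIPENV_VERBOSITY=-1 to silence it. A sketch of that variant, keeping the original script entries:

```yaml
# Keep using `pipenv run`, but suppress the courtesy notice,
# as the warning message itself suggests.
env:
  global:
    - PIPENV_VERBOSITY=-1
install:
  - pip install pipenv
  - pipenv install --dev
script:
  - pipenv run unit_tests
  - pipenv run linting
  - pipenv run docs
```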
Related
I am trying to migrate from pip to pipenv and have my projects running on a pipenv environment. The steps I'm taking to set up a pipenv environment are listed below:
Create a folder named "Project" and set the working directory to "Project"
pip freeze > requirements.txt to create a requirements.txt file containing the packages installed in the existing pip environment
pipenv --python 3.9 to set a specific version of the Python interpreter to use in the pipenv environment
pipenv shell to run the pipenv environment
pipenv install -r ./requirements.txt to install the packages listed in requirements.txt
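The steps above can be written out as a shell session (a sketch, in the same order as listed; it assumes pip, pipenv and Python 3.9 are already installed on the machine):

```shell
# Sketch of the migration steps above, in the order listed.
mkdir Project && cd Project

# Export the packages from the existing pip environment.
pip freeze > requirements.txt

# Create a Pipenv environment pinned to Python 3.9.
pipenv --python 3.9

# Enter the environment (spawns a subshell when run interactively).
pipenv shell

# Install the exported packages into it.
pipenv install -r ./requirements.txt
```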
Questions...
Q1.
On step 3, I am not sure if I should have executed pipenv install --python 3.9 instead of pipenv --python 3.9. Are they doing the same thing?
Q2.
I am also curious why I'm supposed to run step 3 before step 4, not after it. To me, it seems more reasonable to set the Python version after you're inside the pipenv environment.
Please let me know if the steps above are incorrect or additional steps need to be taken.
I'm working in a conda template repo on GitLab. I'm looking to replace pylint with flake8 in the GitLab CI, and to install it using conda instead of pip. I swapped pip install flake8 for conda install flake8, and after I push and view the pipeline I get the error "command conda not found". Any ideas why this might be?
You can use conda like this in Gitlab
image: continuumio/miniconda3:latest
before_script:
- conda env create -f environment.yml
- source activate koopa
tests:
stage: test
script:
- python -m unittest discover -v
I'm getting an SSL certificate error while deploying the following build spec on AWS Lambda using the AWS CodeStar build pipeline.
Looked at multiple community discussions, nothing worked out.
version: 0.2
phases:
install:
commands:
# Upgrade AWS CLI & PIP to the latest version
- pip install --upgrade awscli
- pip install --upgrade pip
# Define Directories
- export HOME_DIR=`pwd`
- export NLTK_DATA=$HOME_DIR/nltk_data
pre_build:
commands:
- cd $HOME_DIR
# Create VirtualEnv to package for lambda
- virtualenv venv
- . venv/bin/activate
# Install Supporting Libraries
- pip install -U scikit-learn
- pip install -U requests
# Install WordNet
- pip install -U nltk
- python -m nltk.downloader -d $NLTK_DATA punkt
# Output Requirements
- pip freeze > requirements.txt
# Discover and run unit tests in the 'tests' directory. For more information, see <https://docs.python.org/3/library/unittest.html#test-discovery>
- python -m unittest discover tests
build:
commands:
- cd $HOME_DIR
- mv $VIRTUAL_ENV/lib/python3.6/site-packages/* .
The only way that worked for me was to download the modules into an nltk_data folder inside my source folder, and then create a Lambda environment variable NLTK_DATA with the value ./nltk_data
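If you'd rather not set the variable in the Lambda console, the same workaround can be done in code. A minimal sketch, assuming the data was bundled into ./nltk_data in the deployment package (the nltk import lines are commented out so the sketch stays stdlib-only):

```python
import os

# Point NLTK at the data bundled with the deployment package *before*
# importing nltk. NLTK reads the NLTK_DATA environment variable (and
# nltk.data.path) to locate corpora such as punkt.
os.environ["NLTK_DATA"] = "./nltk_data"

# import nltk                       # nltk will now search ./nltk_data
# from nltk.tokenize import word_tokenize
# word_tokenize("Hello world")      # uses the bundled punkt model

print(os.environ["NLTK_DATA"])
```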
I have the following yml configuration file with 3 different jobs:
stages:
- build
- test
- analyze
build:
stage: build
script:
- apt-get update -qy
- apt-get install -y python3-dev python3-pip
- pip3 install -r requirements.txt
test:
stage: test
script:
- coverage run --source='.' manage.py test
cache:
paths:
- .coverage
analyze:
stage: analyze
script:
- flake8
- coverage report
In the first job I install the requirements, among which are coverage and flake8, but these tools are also used in the following jobs. I have tried using 'dependencies' and 'cache', but it didn't work: only files/dirs under the project root directory can be shared, not the binaries under /usr/local/bin.
I have tried pointing pip install at another directory, but the binaries still end up in /usr/local/bin.
The workaround I have found is to install the dependencies in each job, but I think that this is far from optimal.
I think that there must be a better solution for that.
Thanks.
I just found a solution, at least for python3 (enough for me):
python3 has a built-in tool for managing virtual envs: venv
Using venv, we can create the virtual env in the project root dir, cache this dir, and enable our virtual env in each job.
variables:
VENV_NAME: "env"
cache:
paths:
- $VENV_NAME
first_job:
script:
- apt-get update -qy
- apt-get install -y python3-dev python3-pip python3-venv
- python3 -m venv $VENV_NAME
- source $VENV_NAME/bin/activate
- pip3 install -r requirements.txt
next_jobs:
script:
- source $VENV_NAME/bin/activate
- echo "hello world!"
PS: don't forget to exclude the virtual env dir from coverage and other analysis tools
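For coverage specifically, that exclusion can go in a .coveragerc at the project root — a sketch, assuming the virtual env dir is named env as in the config above:

```ini
# .coveragerc — keep the cached virtual env out of the coverage report.
[run]
omit =
    env/*
```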
I've tried many things, but have ultimately failed to get the build for gulp-pipeline-rails running. The script runs locally with no problem.
The last problem I've narrowed it down to: this is a Ruby project that also uses Node, and I need Node 5. I found one snippet:
#------------------------------
# Update the node version
env:
- TRAVIS_NODE_VERSION="5"
install:
- pwd
- rm -rf ~/.nvm && git clone https://github.com/creationix/nvm.git ~/.nvm && (cd ~/.nvm && git checkout `git describe --abbrev=0 --tags`) && source ~/.nvm/nvm.sh && nvm install $TRAVIS_NODE_VERSION
- npm install
While this seems to get Node updated, it does something to my Ruby env that makes it fail to execute rspec:
$ pwd && bundle exec rake
/home/travis/build/alienfast/gulp-pipeline-rails
Could not find gem 'rspec' in any of the gem sources listed in your Gemfile or available on this machine.
Run `bundle install` to install missing gems.
Question
With all that said, how do I simply use Node 5 with this .travis.yml?
language: ruby
rvm:
- 2.2.2
- ruby-head
matrix:
allow_failures:
- rvm: ruby-head
cache: bundler
#------------------------------
# Setup
before_script:
- node -v
# update npm
- npm install npm -g
# install Gulp 4 CLI tools globally from 4.0 GitHub branch
- npm install https://github.com/gulpjs/gulp-cli/tarball/4.0 -g
#------------------------------
# Build
script: bundle exec rake
Try using a before_install stage for adding a second language on Travis, maybe something like:
before_install:
- nvm install node
nvm should be installed by default on the Travis build image (depending on which one you're using), and this command will install the latest version of Node.
After that, maybe just have npm install -g gulpjs/gulp-cli#4.0 (installing the 4.0 branch from GitHub) as the first step in your before_script stage (i.e. don't worry about updating npm), hopefully that should mean that bundler still runs fine and installs all your gems.
I found this article that helped me out quite a bit.
Relevant information from article:
You can use nvm to manage your Node versions in Travis; however, you have to enable it first:
install:
- . $HOME/.nvm/nvm.sh
- nvm install stable
- nvm use stable
If the project's language is ruby, Travis CI will run bundle install --jobs=3 --retry=3 by default.
If you define an install stage yourself in .travis.yml, the default will not execute in favor of the newly specified commands. The thinking here is to have sane magic by default that should easily be overridden.
There are two solutions to this issue:
Add bundle install --jobs=3 --retry=3 to the install stage
Rename the header of the node replacement snippet to before_install as suggested by #ocean.
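A sketch of what solution 1 might look like, combining the snippets above (the nvm line and TRAVIS_NODE_VERSION come from the question, and the explicit bundle install restores the Ruby default that a custom install stage overrides):

```yaml
# Sketch of solution 1: define `install` yourself, but keep the Ruby
# default (`bundle install`) alongside the nvm-based Node install.
language: ruby
rvm:
  - 2.2.2
env:
  - TRAVIS_NODE_VERSION="5"
install:
  # Restore the default Ruby behaviour that a custom `install` overrides.
  - bundle install --jobs=3 --retry=3
  # Then install the required Node version.
  - . $HOME/.nvm/nvm.sh && nvm install $TRAVIS_NODE_VERSION
  - npm install
```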