solve ResolvePackageNotFound in yml environments - anaconda

I'm trying to follow this repository on my local computer:
https://github.com/lyakaap/ISC21-Descriptor-Track-1st
In the first step it wants me to set up its conda environment on my computer.
I'm not familiar with yml environments, and running the setup gives me a ResolvePackageNotFound error.
I don't know how to solve it. Should I install new packages? How do I fix this?
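
Since the post doesn't show the yml file, one assumption worth checking: ResolvePackageNotFound often means the environment.yml pins build strings from the author's platform. A sketch of a common workaround is to drop the build hashes so conda can resolve versions available for your OS:

# Turn lines like "numpy=1.21.2=py38h..." into "numpy=1.21.2", then
# create the environment from the relaxed file:
sed -E 's/(=[^=]+)=[^=]+$/\1/' environment.yml > environment_relaxed.yml
conda env create -f environment_relaxed.yml

If individual packages still fail to resolve, moving them under the file's pip: section is another common fallback.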

Related

how to set up an environment for avalanche's test network

I'm new to Avalanche and have never used Go before. I have been trying to deploy an Avalanche local test network according to the documentation on my Ubuntu 20.04 machine, but it's not very clear what exactly should be done about the GOPATH.
It is mentioned:
avalanche-network-runner will be installed into $GOPATH/bin, please make sure that $GOPATH/bin is in your $PATH, otherwise, you may not be able to run commands below.
but it's not specified what to set the PATH or GOPATH to.
The documentation also mentions:
# replace execPath with the path to AvalancheGo on your machine
# e.g., ${HOME}/go/src/github.com/ava-labs/avalanchego/build/avalanchego
AVALANCHEGO_EXEC_PATH="${HOME}/go/src/github.com/ava-labs/avalanchego/build/avalanchego"
however, the instructions never say to clone the avalanchego project.
Can someone please provide the specific steps needed to get the Avalanche local test network up and working?
Thanks in advance for your help.
I solved it as follows:
From here, download the binary file and give its path as AVALANCHEGO_EXEC_PATH.
Steps:
Install the Avalanche CLI and the Avalanche Network Runner
Download the AvalancheGo binary
Export its path:
export AVALANCHEGO_EXEC_PATH="PATH_TO_AVALANCHEGO_BINARY"
Then, follow the instructions here.
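
Putting the pieces together, a minimal sketch of the environment setup (the binary path stays a placeholder, as in the docs):

# With a standard Go install, GOPATH defaults to $HOME/go; ask Go itself:
export GOPATH="$(go env GOPATH)"
# avalanche-network-runner is installed into $GOPATH/bin, so put that on PATH:
export PATH="$PATH:$GOPATH/bin"
# Point at the downloaded AvalancheGo binary:
export AVALANCHEGO_EXEC_PATH="PATH_TO_AVALANCHEGO_BINARY"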

Airflow local install on Mac pointing DAG folder to Conda site-packages instead of /Users/myuser/airflow/dags

So I installed Airflow locally on my Mac (Big Sur 11.5) per the install guide below:
https://airflow.apache.org/docs/apache-airflow/stable/start/local.html
And I have my dag folder in:
/Users/myuser/airflow/dags
Config Changes made to airflow.cfg:
load_examples = False
sql_alchemy_conn = postgresql+psycopg2://airflow_user:password@localhost/airflow_db
executor = LocalExecutor
I then initialized the db and started the webserver and scheduler.
However, I am not able to see the dags I load into my specified dag folder. Instead I see a list of example dags that Airflow seems to be sourcing from /Users/myuser/opt/anaconda3/lib/python3.8/site-packages/airflow/example_dags/
I looked for other folks who may have run into the same issue, which is how I came to set load_examples = False. I reran everything and still no luck.
Help overcoming this last hurdle would be much appreciated.
Thanks in advance
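
A diagnostic sketch that may help pin this down (an assumption, not confirmed by the post: the webserver/scheduler could be reading a different airflow.cfg because AIRFLOW_HOME differs between shells):

# Confirm which DAG folder the running configuration actually points at;
# it should print /Users/myuser/airflow/dags:
airflow config get-value core dags_folder
# If it doesn't, export AIRFLOW_HOME before starting the webserver/scheduler:
export AIRFLOW_HOME=/Users/myuser/airflow
# Already-serialized example dags also linger in the metadata db; a reset
# wipes that db (destructive), so only do this if you're starting fresh:
airflow db reset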

Using Ansible for ScaleIO provisioning

I am using this playbook to install a 3 node ScaleIO cluster on CentOS 7.
https://github.com/sperreault/ansible-scaleio
In the EMC documentation they specify that a CSV file needs to be uploaded to the IM to complete installation, I am not sure though how I can automate that part within this playbook. Has anyone got any practical experience of doing so?
This playbook is used to install ScaleIO manually, not through the IM, so you do not need to prepare a CSV file.
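
For completeness, running such a playbook manually follows the usual Ansible flow (the inventory and playbook file names below are assumptions; check the repo's README for the real ones):

# Run the play against the three ScaleIO nodes defined in your inventory:
ansible-playbook -i hosts site.yml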

How do I speed up my puppet module development-testing cycle?

I'm looking for some best practices on how to increase my productivity when writing new puppet modules. My workflow looks like this right now:
vagrant up
Make changes/fixes
vagrant provision
Find mistakes/errors, GOTO 2
After I get through all the mistakes/errors I do:
vagrant destroy
vagrant up
Make sure everything is working
commit my changes
This is too slow... how can I make this workflow faster?
I am in denial about writing tests for puppet. What are my other options?
cache your apt/yum repository on your host with the vagrant-cachier plugin
use --profile and --evaltrace to find where you lose time on full provisioning
use package-based distribution:
e.g., rvm install ruby-2.0.0 vs. a pre-compiled ruby package created with fpm
avoid a "wget the internet and compile" approach
this will probably make your provisioning more reproducible and faster.
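
A hypothetical fpm invocation for this approach (the path and version are made up for illustration): compile once, package it, and every later provision becomes a fast package install.

# Wrap an already-compiled ruby living in /opt/ruby-2.0.0 into an rpm:
fpm -s dir -t rpm -n ruby -v 2.0.0 -C /opt --prefix /opt ruby-2.0.0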
don't write modules yourself
try reusing some from the Forge/GitHub/...
note that this can run against my previous advice
if this is an option, upgrade your puppet/ruby version
iterate and prevent full provisioning
vagrant up
vagrant provision
modify manifest/modules
vagrant provision
modify manifest/modules
vagrant provision
vagrant destroy
vagrant up
launch server-spec
minimize typed commands
launch commands automatically as you modify your files
you can perhaps set up Guard to launch lint/test/spec/provision as you save
you can also send notifications from the guest to the host machine with vagrant-notify
test without actually provisioning in vagrant
rspec puppet (ideal when refactoring modules)
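
As a sketch of that loop (the module layout is hypothetical), rspec-puppet compiles the catalog in seconds with no VM involved:

# One-time setup inside the module directory:
gem install rspec-puppet puppetlabs_spec_helper
rspec-puppet-init        # scaffolds spec/ for the module
# Then, on every change:
rspec spec/classes/      # compiles the catalog and checks your expectations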
test your provisioning instead of manual checking
stop vagrant ssh-ing to check whether a service is running or a config has a given value
launch server-spec
take a look at Beaker
delegate running the test to your preferred ci server (jenkins, travis-ci,...)
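
For example, a serverspec skeleton (the target OS and backend are whatever you answer at the prompts) replaces those manual SSH checks:

gem install serverspec
serverspec-init    # prompts for OS and backend (ssh/exec), scaffolds spec/
rake spec          # runs the generated specs against the box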
if you are a bit frustrated by puppet... take a look at Ansible
easy to set up (no Ruby to install/compile)
you can select which portions of the playbook to run with tags
you can share the playbooks via synced folders and run Ansible locally in the vagrant box (no librarian-puppet to launch)
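
For instance, tags let you re-run one slice of a playbook instead of the whole thing (the tag name here is illustrative):

# Only run the tasks tagged 'config':
ansible-playbook site.yml --tags config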
Update: after a discussion with @garethr, take a look at his latest presentation about Guard.
I recommend using language-puppet. It comes with a command-line tool (puppetresources) that can compute catalogs on your computer and let you examine them. It has a few useful features that can't be found in Puppet:
It is really fast (6 times faster on a single catalog, something like 50 times on many catalogs)
It tracks where each resource was defined, and what the "class stack" was at that point, which is really handy when you have duplicate resources
It automatically checks that the files you refer to exist
It is stricter than Puppet (it breaks on undefined variables, for example)
It lets you print the content of any file to standard output, which is useful for developing complex templates
The only caveat is that it only works with "modern" Puppet practices. For example, require is not implemented. It also only works on Linux.
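
A sketch of that workflow (the exact flags are an assumption based on the project's README; double-check them there):

# Compute and print the catalog for one node from a local puppet tree,
# with no puppet master involved:
puppetresources -p /path/to/puppetdir -o mynode.example.com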

Using dotcloud and vagrant together

Has anyone thought about using dotcloud and vagrant together? It would be super sweet to be able to type "vagrant up" in a dotcloud application and have vagrant read from the dotcloud.yml file and create a local environment that would mirror what you'd get if you did a "dotcloud push".
I have a semi-working version, but it isn't automatic. You just create both the dotcloud.yml and Vagrantfile in the same folder and have slightly different setup scripts.
I think the nearest thing you could use would be http://docker.io
