I'm having some issues creating unit tests for my Puppet control repository.
I mostly work with roles and profiles with the following directory structure:
[root@puppet]# tree site
site
├── profile
│   ├── files
│   │   └── demo-website
│   │       └── index.html
│   └── manifests
│       ├── base.pp
│       ├── ci_runner.pp
│       ├── docker.pp
│       ├── gitlab.pp
│       ├── logrotate.pp
│       └── website.pp
├── role
│   └── manifests
│       ├── gitlab_server.pp
│       └── nginx_webserver.pp
Where do I need to place my spec files and what are the correct filenames?
I tried placing them here:
[root@puppet]# cat spec/classes/profile_ci_runner_spec.rb
require 'spec_helper'
describe 'profile::ci_runner' do
...
But I get an error:
Could not find class ::profile::ci_runner
The conventional place for a module's spec tests is in the module, with the spec/ directory in the module root. So site/profile/spec/classes/ci_runner_spec.rb, for example.
You could consider installing PDK, which can help you set up the structure and run tests, among other things.
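A minimal rspec-puppet spec at site/profile/spec/classes/ci_runner_spec.rb could then look like this (a sketch, assuming a spec_helper.rb generated by PDK or rspec-puppet):
# site/profile/spec/classes/ci_runner_spec.rb
require 'spec_helper'

describe 'profile::ci_runner' do
  # The class should compile into a catalog together with its dependencies.
  it { is_expected.to compile.with_all_deps }
end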
I have a role that I'm trying to call using include_role.
This is my file structure:
.
├── foo_A
│   └── roles
│       ├── foo_deploy
├── foo_B
│   └── roles
│       ├── db_foo
│       │   └── tasks
├── foo_C
│   └── roles
│       ├── package_deploy
│       │   ├── defaults
│       │   ├── files
│       │   └── tasks
│       │       └── main.yml
├── group_vars
└── roles
    └── utilities
        ├── defaults
        ├── files
        ├── handlers
        ├── meta
        ├── tasks
        │   └── dpackage.yml
        ├── templates
        └── vars
I'm calling include_role with the name utilities from package_deploy's tasks/main.yml,
but I'm getting an error that the top-level role is not in the role search paths:
ERROR! the role 'utilities' was not found in /home/ec2-user/ansible/foo_C/roles:/home/ec2-user/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/home/ec2-user/ansible/foo_C
The error appears to be in '/home/ec2-user/ansible/foo_C/roles/package_deploy/tasks/main.yml': line 78, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
include_role:
name: utilities
^ here
How can I get Ansible to find roles in the main roles directory under /home/ec2-user/ansible?
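One common way to do this (a suggestion, not something from the original post) is to add the top-level roles directory to the role search path, either in an ansible.cfg picked up by the run or via the ANSIBLE_ROLES_PATH environment variable, so that it appears in the search path list shown in the error:
# /home/ec2-user/ansible/foo_C/ansible.cfg (location assumed for illustration)
[defaults]
# Search the project-local roles dir first, then the shared top-level one.
roles_path = ./roles:/home/ec2-user/ansible/roles
The same effect can be achieved with export ANSIBLE_ROLES_PATH=/home/ec2-user/ansible/roles before running the playbook.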
I am trying to generate client code using k8s.io/code-generator.
These are the instructions that I am following: https://itnext.io/how-to-generate-client-codes-for-kubernetes-custom-resource-definitions-crd-b4b9907769ba
My question is, does my go module need to be present on a repository or can I simply run the generate-groups.sh script on a go module that is ONLY present on my local system and not on any repository?
I have already tried running it and from what I understand, there needs to be a repository having ALL the contents of my local go module. Is my understanding correct?
You CAN run kubernetes/code-generator's generate-groups.sh on a go module that is only present on your local system. Neither code-generator nor your module needs to be in your GOPATH.
Verification
Cloned kubernetes/code-generator into a new directory.
$HOME/somedir
├── code-generator
Created a project called myrepo and mocked it with content to resemble sample-controller. Did this in the same directory to keep it simple.
somedir
├── code-generator
└── myorg.com
    └── myrepo # mock of sample-controller
        ├── go.mod
        ├── go.sum
        └── pkg
            └── apis
                └── myorg
                    ├── register.go
                    └── v1alpha1
                        ├── doc.go
                        ├── register.go
                        └── types.go
My go.mod looked like
module myorg.com/myrepo
go 1.14
require k8s.io/apimachinery v0.17.4
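For illustration (not part of the original answer), a types.go resembling sample-controller's, with the markers code-generator needs, looks roughly like this; the Foo kind and its fields are placeholders:
// pkg/apis/myorg/v1alpha1/types.go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Foo is a placeholder custom resource.
type Foo struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec FooSpec `json:"spec"`
}

// FooSpec is the spec for a Foo resource.
type FooSpec struct {
	Replicas *int32 `json:"replicas"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// FooList is a list of Foo resources.
type FooList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata"`

	Items []Foo `json:"items"`
}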
Ran generate-groups.sh. The -h flag specifies which header file to use. The -o flag specifies the output base, which is necessary here because we're not in GOPATH.
$HOME/somedir/code-generator/generate-groups.sh all myorg.com/myrepo/pkg/client myorg.com/myrepo/pkg/apis "myorg:v1alpha1" \
  -h $HOME/somedir/code-generator/hack/boilerplate.go.txt \
  -o $HOME/somedir
Confirmed code generated in correct locations
myrepo
├── go.mod
├── go.sum
└── pkg
    ├── apis
    │   └── myorg
    │       ├── register.go
    │       └── v1alpha1
    │           ├── doc.go
    │           ├── register.go
    │           ├── types.go
    │           └── zz_generated.deepcopy.go
    └── client
        ├── clientset
        │   └── versioned
        │       ├── clientset.go
        │       ├── doc.go
        │       ├── fake
        │       ├── scheme
        │       └── typed
        ├── informers
        │   └── externalversions
        │       ├── factory.go
        │       ├── generic.go
        │       ├── internalinterfaces
        │       └── myorg
        └── listers
            └── myorg
                └── v1alpha1
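As a further sanity check (not from the original answer), the generated clientset can be consumed like any other; a minimal sketch, assuming k8s.io/client-go is also added to go.mod:
// main.go (illustrative only)
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"

	clientset "myorg.com/myrepo/pkg/client/clientset/versioned"
)

func main() {
	// Build a rest.Config from a kubeconfig file path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// Construct the generated clientset for the myorg/v1alpha1 group.
	cs, err := clientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%T\n", cs)
}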
Sources
Go modules support https://github.com/kubernetes/code-generator/issues/57
Documentation or support for Go modules https://github.com/kubernetes/sample-controller/issues/47
In Drupal 7 I use
drush-patchfile
to automatically apply patches when installing/updating modules via drush. But in DDEV I don't know how to extend the existing drush with drush-patchfile.
As you can see in the Installation section of https://bitbucket.org/davereid/drush-patchfile, I need to clone the repository into the
~/.drush
directory, and that will add it to the existing drush.
On another project without DDEV, I've already done that by creating a new Docker image:
FROM wodby/drupal-php:7.1
USER root
RUN mkdir -p /home/www-data/.drush && chown -R www-data:www-data /home/www-data/;
RUN cd /home/www-data/.drush && git clone https://bitbucket.org/davereid/drush-patchfile.git \
&& echo "<?php \$options['patch-file'] = '/home/www-data/patches/patches.make';" \
> /home/www-data/.drush/drushrc.php;
USER wodby
But I'm not sure how to do that in the DDEV container.
Do I need to create a new service based on drud/ddev-webserver, or something else?
I've read the documentation but I'm not sure which direction to go.
Based on @rfay's comment, here is the solution that works for me (and which, with a little modification, can work for other projects).
I've cloned the repo outside of the Docker container; for example, into
$PROJECT_ROOT/docker/drush-patchfile
Create a custom drushrc.php in the $PROJECT_ROOT/.esenca/patches folder (you can choose a different folder):
<?php
# Location of the patches.make file. This should be the location within the Docker container.
$options['patch-file'] = '/var/www/html/.esenca/patches/patches.make';
Add the following hooks to $PROJECT_ROOT/.ddev/config.yaml:
hooks:
  post-start:
    # Symlink the drush-patchfile directory into /home/.drush
    - exec: "ln -s -t /home/.drush/ /var/www/html/docker/drush-patchfile"
    # Symlink the custom drushrc.php file
    - exec: "ln -s -t /home/.drush/ /var/www/html/.esenca/patches/drushrc.php"
The final project structure should look like this:
.
├── .ddev
│   ├── config.yaml
│   ├── docker-compose.yaml
│   ├── .gitignore
│   └── import-db
├── docker
│   ├── drush-patchfile
│   │   ├── composer.json
│   │   ├── patchfile.drush.inc
│   │   ├── README.md
│   │   └── src
├── .esenca
│   └── patches
│       ├── drushrc.php
│       └── patches.make
├── public_html
│   ├── authorize.php
│   ├── CHANGELOG.txt
│   ├── COPYRIGHT.txt
│   ├── cron.php
│   ├── includes
│   ├── index.html
│   ├── index.php
│   ├── INSTALL.mysql.txt
│   ├── INSTALL.pgsql.txt
│   ├── install.php
│   ├── INSTALL.sqlite.txt
│   ├── INSTALL.txt
│   ├── LICENSE.txt
│   ├── MAINTAINERS.txt
│   ├── misc
│   ├── modules
│   ├── profiles
│   ├── README.txt
│   ├── robots.txt
│   ├── scripts
│   ├── sites
│   │   ├── all
│   │   ├── default
│   │   ├── example.sites.php
│   │   └── README.txt
│   ├── themes
│   ├── Under-Construction.gif
│   ├── update.php
│   ├── UPGRADE.txt
│   ├── web.config
│   └── xmlrpc.php
└── README.md
At the end, start the DDEV environment:
ddev start
and now you can use the drush-patchfile commands within the web Docker container.
You can ddev ssh and then sudo chown -R $(id -u) ~/.drush/ and then do whatever you want in that directory (~/.drush is /home/.drush).
When you get it going and you want to do it repetitively for every start, you can encode the instructions you need using post-start hooks: https://ddev.readthedocs.io/en/latest/users/extending-commands/
Please follow up with the exact recipe you use, as it may help others. Thanks!
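For example (a sketch of that recipe, not part of the original answer), the clone could be encoded as post-start hooks in .ddev/config.yaml:
hooks:
  post-start:
    # Take ownership of ~/.drush, which is /home/.drush in the web container.
    - exec: "sudo chown -R $(id -u) /home/.drush/"
    # Clone drush-patchfile into it on first start only.
    - exec: "test -d /home/.drush/drush-patchfile || git clone https://bitbucket.org/davereid/drush-patchfile.git /home/.drush/drush-patchfile"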
How can I add the folders and subfolders of my libs so that SimpleCov generates coverage for them?
My SimpleCov Config
SimpleCov.start do
  add_group 'Bot', 'app/bots'
  add_group 'Bot', 'lib/bot'
  add_group 'Controllers', 'app/controllers'
  add_group 'Models', 'app/models'
  add_group 'Helpers', 'app/helpers'
  add_group 'Libraries', 'lib'
end
This is my lib tree
├── assets
├── bot
│   ├── base_bot_logic.rb
│   ├── bot_logic.rb
│   ├── core
│   │   ├── blacklist.rb
│   │   ├── bot_core.rb
│   │   ├── broadcast.rb
│   │   ├── emoji.rb
│   │   ├── profile.rb
│   │   ├── reply.rb
│   │   ├── setup.rb
│   │   ├── state_machine.rb
│   │   └── webview.rb
│   └── geoutils
│       └── geoutils.rb
├── estrutura.txt
├── solar
│   ├── api.rb
│   ├── assistido.rb
│   ├── atendimento.rb
│   └── validation
│       └── cpf.rb
├── solar.rb
└── tasks
7 directories, 18 files
But only two files are recognized by SimpleCov.
How can I add the missing folders?
EDIT:
I added track_files '{app,lib}/**/*.rb' to my SimpleCov.start block and it now recognizes my files, but it doesn't calculate the coverage rate.
Although I'm late to the party, I will answer.
The Getting Started guide contains the answer to your question:
If SimpleCov starts after your application code is already loaded (via require), it won't be able to track your files and their coverage! The SimpleCov.start must be issued before any of your application code is required!
So this is correct:
require 'simplecov'
require 'yourapp'
and this is NOT correct:
require 'yourapp'
require 'simplecov'
track_files just adds files matching the glob to the report, whether or not they were explicitly required; files that are never loaded show up with 0% coverage.
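Putting both points together, a test helper would start SimpleCov first and only then load the application. A sketch (the helper filename and the final require are assumptions; adjust them to how your app is loaded):
# spec/spec_helper.rb (or test/test_helper.rb)
require 'simplecov'

SimpleCov.start do
  add_group 'Bot', 'lib/bot'
  add_group 'Libraries', 'lib'
  # Also report files the test run never requires; they appear with 0% coverage.
  track_files '{app,lib}/**/*.rb'
end

# Application code must be loaded only after SimpleCov.start.
require_relative '../config/environment'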
So I have:
buildSrc/
├── build.gradle
└── src
    ├── main
    │   ├── groovy
    │   │   └── build
    │   │       ├── ExamplePlugin.groovy
    │   │       └── ExampleTask.groovy
    │   └── resources
    │       └── META-INF
    │           └── gradle-plugins
    │               └── build.ExamplePlugin.properties
    └── test
        └── groovy
            └── build
                ├── ExamplePluginTest.groovy
                └── ExampleTaskTest.groovy
Question:
It seems like build.ExamplePlugin.properties maps directly to the build.ExamplePlugin class. Is this the case? It seems terribly inefficient to have only one property in the file. Does it have to be fully qualified, i.e. does the name have to exactly match the fully qualified name of the class?
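For context (not from the original post): by Gradle convention the properties file name is the plugin ID, and its single implementation-class entry points at the plugin class, which is why it only ever holds one property:
# buildSrc/src/main/resources/META-INF/gradle-plugins/build.ExamplePlugin.properties
implementation-class=build.ExamplePlugin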
Now in the example, I see:
project.pluginManager.apply 'build.ExamplePlugin'
...however, if I have that in my test, I get an error to the effect that the simple task the plugin defines is already defined.
Why bother with test examples that require 'apply' when that is inappropriate for packaging?
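A minimal test along those lines, applying the plugin by ID through ProjectBuilder, might look like this (a sketch assuming Spock is on buildSrc's test classpath; the task name exampleTask is a placeholder for whatever the plugin registers):
// buildSrc/src/test/groovy/build/ExamplePluginTest.groovy
package build

import org.gradle.testfixtures.ProjectBuilder
import spock.lang.Specification

class ExamplePluginTest extends Specification {
    def "applying the plugin registers its task"() {
        given: "an empty in-memory project"
        def project = ProjectBuilder.builder().build()

        when: "the plugin is applied once, by ID"
        project.pluginManager.apply 'build.ExamplePlugin'

        then: "the task it defines exists"
        project.tasks.findByName('exampleTask') != null
    }
}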