Puppet Nodes.pp Include Modules Execution Order - provisioning

I am trying to set a sequential order on some of my modules for certain nodes.
node basenode {
  include ps
  include netfx
  include hg
  include reportviewer2012
  include wdeploy30
  include sqlexpress2008
  include windowsrolesfeatures
  include tcbase
}
node 'myserver' inherits basenode {
  include tcuiagent
  Class['tcuiagent'] -> Class['tcbase'] -> Class['windowsrolesfeatures'] -> Class['ps']
}
I definitely don't want to set dependencies inside the module resources, because that would make the modules interdependent. In this case, I want to accomplish this order:
ps (first)
windowsrolesfeatures
any other package {hg, netfx, ...} (provisioning order doesn't matter)
tcbase
tcuiagent (last)

If you really don't want to express relationships between modules, you can use stages to enforce an order.
You must first declare the stages in your top manifest:
## Very important: we define stages.
## Can only be done here.
stage { 'first': } # the very first stage
stage { 'apt': } # to install apt sources and run apt-get update if necessary
# stage main # default stage, always available
stage { 'last': } # a stage after all the others
# Now we define the order:
Stage[first] -> Stage[apt] -> Stage[main] -> Stage[last]
Then use them:
# basics needing a run state
# We use the "class" syntax here because we need to specify a run stage.
class {
  'puppeted': # debug
    stage => first, # note the explicit stage!
  ;
  'apt_powered': # very important for managing apt sources
    stage => apt, # note the explicit stage!
    # offline => 'true', # uncomment this if you are offline or don't want updates
  ;
  'apt_powered::upgraded': # systematically upgrades packages; dev machine -> we want to stay up to date
    stage => apt, # note the explicit stage!
  ;
}
But this is ugly and this is not what stages are made for.

I would strongly suggest rewriting the modules so that the order in which they are installed no longer matters, or creating the necessary relationships between the resources.
If you are installing/configuring related resources from different modules, consider merging those modules.
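For instance, the ordering from the question could be expressed once in a wrapper class with chaining arrows (a sketch; the wrapper class name ordered_stack is invented here, while the module names come from the question):

```puppet
# Hypothetical wrapper class: chaining arrows order whole classes
# relative to each other without touching the modules' internals.
class ordered_stack {
  include ps
  include windowsrolesfeatures
  include tcbase
  include tcuiagent

  Class['ps']
    -> Class['windowsrolesfeatures']
    -> Class['tcbase']
    -> Class['tcuiagent']
}
```

A node can then simply include ordered_stack, and modules with no stated relationship (hg, netfx, ...) remain free to apply in any order.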
Ger.

I solved it using a different approach, with node inheritance.
node windowsmachine {
  include ps                    # PowerShell
  include windowsrolesfeatures  # Windows roles and features
  include netfx                 # .NET Framework 4.0 and 4.5
}
node teamcitybase inherits windowsmachine {
  include hg              # Mercurial
  include nuget           # NuGet configuration
  include reportviewer2012
  include wdeploy30       # Web Deploy 3.0
  include tcbase          # ASP.NET configuration
  include sqlexpress2008  # SQL Server Express
}
node 'myserver1', 'myserver2' inherits teamcitybase {
  # pending installation of puppet clients
}
node 'myserver3' inherits teamcitybase {
  include tcuiagent
}
The Windows machine configuration modules do not depend on each other, but myserver1 with sqlexpress2008 depends on that baseline.
No stages or module dependencies!

After running into the same problem, I came across the following post, which works the best of everything I have found.
#####################
# 1) Define the stages
#####################

stage { 'prereqs':
  before => Stage['main'],
}

stage { 'final':
  require => Stage['main'],
}

#####################
# 2) Define the classes
#####################

# We don't care when this class is executed; it will
# be included at random in the main stage
class doThisWhenever1 {
}

# We don't care when this class is executed either; it will
# be included at random in the main stage
class doThisWhenever2 {
}

# We want this class to be executed before the
# main stage
class doThisFirst {
  exec { 'firstThingsFirst':
    command => '/bin/echo firstThingsFirst',
  }
}

# We want this class to be executed after the
# main stage
class doThisLast {
  exec { 'lastly':
    command => '/bin/echo lastly',
  }
}

#####################
# 3) Assign the classes
#    to a stage
#####################

class { 'doThisFirst':
  stage => prereqs,
}

class { 'doThisLast':
  stage => final,
}

include doThisFirst
include doThisLast
http://pipe-devnull.com/2013/09/20/puppet-ensure-class-execution-ordering.html
Regards

Related

pytest database access takes 66 seconds to start

This is my core/tests.py that I use with pytest-django:
import pytest

def test_no_db():
    pass

def test_with_db(db):
    pass
Seems that setting up to inject db takes 66 seconds. When the tests start, collection is almost instant, followed by a 66-second pause, then the tests run rapidly.
If I disable the second test, the entire test suite runs in 0.002 seconds.
The database runs on PostgreSQL.
I run my tests like this:
$ pytest -v --noconftest core/tests.py
================================================================================ test session starts ================================================================================
platform linux -- Python 3.8.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/mslinn/venv/aw/bin/python
cachedir: .pytest_cache
django: settings: main.settings.test (from ini)
rootdir: /var/work/ancientWarmth/ancientWarmth, configfile: pytest.ini
plugins: timer-0.0.11, django-4.4.0, Faker-8.0.0
collected 2 items
core/tests.py::test_with_db PASSED [50%]
core/tests.py::test_no_db PASSED [100%]
=================================================================================== pytest-timer ====================================================================================
[success] 61.18% core/tests.py::test_with_db: 0.0003s
[success] 38.82% core/tests.py::test_no_db: 0.0002s
====================================================================== 2 passed, 0 skipped in 68.04s (0:01:08) ======================================================================
pytest.ini:
[pytest]
DJANGO_SETTINGS_MODULE = main.settings.test
FAIL_INVALID_TEMPLATE_VARS = True
filterwarnings = ignore::django.utils.deprecation.RemovedInDjango40Warning
python_files = tests.py test_*.py *_tests.py
Why does this happen? What can I do?
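One thing worth checking (an assumption on my part, not stated in the question): pytest-django creates the test database and applies migrations the first time the db fixture is requested, which can easily dominate the runtime. pytest-django's --reuse-db option keeps the test database between runs, so it isolates whether that setup is where the 66 seconds go:

```ini
; pytest.ini -- hypothetical addition; --reuse-db skips recreating the test DB
[pytest]
DJANGO_SETTINGS_MODULE = main.settings.test
addopts = --reuse-db
```

If the second run is fast with --reuse-db, the pause is database creation/migrations rather than the tests themselves.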

Problem with get_children() method when using TreeManager. Wrong result

I have encountered a strange problem with the use of TreeManager.
Here is my code:
# other imports
from mptt.models import MPTTModel, TreeForeignKey
from mptt.managers import TreeManager

class SectionManager(TreeManager):
    def get_queryset(self):
        return super().get_queryset().filter(published=True)

class Section(MPTTModel):
    published = models.BooleanField(
        default=True,
        help_text="If unpublished, this section will show only"
                  " to editors. Else, it will show for all."
    )
    objects = TreeManager()
    published_objects = SectionManager()
When I test it, I get the following correct results:
# show all objects
Section.objects.count()  # result is correct - 65
Section.objects.root_nodes().count()  # result is correct - 12
# show published objects; just one is not published
Section.published_objects.count()  # result is correct - 64
Section.published_objects.root_nodes().count()  # result is correct - 12
But one child of the roots is unpublished and it does not show in the results. Here is the test:
for root in Section.objects.root_nodes():
    print(f"root_section_{root.id} has {root.get_children().count()} children")
# results ...
root_section_57 has 13 children # correct - 13 items
# ... more results
for root in Section.published_objects.root_nodes():
    print(f"root_section_{root.id} has {root.get_children().count()} children")
# results ...
root_section_57 has 13 children # WRONG - should be only 12 children
# ... more results
Maybe I'm misunderstanding something, or maybe I've hit a bug?
Any ideas?
NOTE: This issue has been posted on the django-mptt github issues page at: https://github.com/django-mptt/django-mptt/issues/689
https://github.com/django-mptt/django-mptt/blob/master/mptt/managers.py
You overrode the wrong method. root_nodes() calls _mptt_filter():
@delegate_manager
def root_nodes(self):
    """
    Creates a ``QuerySet`` containing root nodes.
    """
    return self._mptt_filter(parent=None)
And _mptt_filter is not given any qs, so it falls back to the manager itself rather than your get_queryset():
@delegate_manager
def _mptt_filter(self, qs=None, **filters):
    """
    Like ``self.filter()``, but translates name-agnostic filters for MPTT
    fields.
    """
    if qs is None:
        qs = self
    return qs.filter(**self._translate_lookups(**filters))
Now you need to customize this based on your use case.
Hope this helps.
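To see concretely why overriding get_queryset() alone is not enough, here is a minimal pure-Python stand-in for the delegation (invented names and list-of-dicts "querysets" for illustration only; this is not the real django-mptt code):

```python
# Mimics the call path root_nodes() -> _mptt_filter(), which falls back to
# the raw data instead of going through get_queryset().
class TreeManagerMimic:
    def __init__(self, items):
        self.items = items  # stands in for the underlying table

    def get_queryset(self):
        return list(self.items)

    def _mptt_filter(self, qs=None, **filters):
        if qs is None:
            qs = list(self.items)  # note: bypasses get_queryset()
        return [row for row in qs
                if all(row[key] == value for key, value in filters.items())]

    def root_nodes(self):
        return self._mptt_filter(parent=None)


class PublishedManagerMimic(TreeManagerMimic):
    # Overriding get_queryset() alone does not change root_nodes() ...
    def get_queryset(self):
        return [row for row in self.items if row["published"]]

    # ... the filter entry point itself must route through get_queryset().
    def _mptt_filter(self, qs=None, **filters):
        if qs is None:
            qs = self.get_queryset()
        return super()._mptt_filter(qs, **filters)
```

With only the get_queryset() override, root_nodes() would still see unpublished rows; routing _mptt_filter() through get_queryset() is what actually narrows the tree queries.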

Executing specific testng group using build.gradle

I have checked the following questions, but none of them helped:
Gradle + TestNG Only Running Specified Group
Gradle command syntax for executing TESTNG tests as a group
The project I am using is available at - https://github.com/tarun3kumar/gradle-demo
It is a standard Maven-layout project and I am not using a testng.xml file.
The test method com.org.corpsite.LandingPageTest is grouped as smoke.
I run the tests as gradle clean test and the test is executed. The test fails for a genuine reason; let's ignore that.
Then I passed a test group from the command line:
gradle clean test -P testGroups='doesnotexist'
Notice that 'doesnotexist' is not a valid group, but it still executes the test.
Following this I added includeGroups in build.gradle:
test {
    useTestNG() {
        includeGroups 'smoke'
    }
}
and now gradle clean test -P testGroups='doesnotexist' fails with an NPE in one of the Java classes:
java.lang.NullPointerException
    at com.org.pageobjects.BasePage.findElements(BasePage.java:24)
Questions:
What is the right flag to specify a test group from the command line? It seems -P is wrong, else gradle clean test -P testGroups='doesnotexist' would not execute the test.
What is wrong with specifying includeGroups 'smoke'?
I am using Gradle 5.1 on a MacBook Pro.
Here are the set of things that need to be done to get this to work.
You need to add the attribute alwaysRun = true to the @BeforeMethod and @AfterMethod annotations in your base class com.org.core.SelTestCase. This ensures that TestNG executes these configuration methods all the time, irrespective of which group is chosen.
Alter the test task in your build.gradle to look like below:
test {
    def groups = System.getProperty('groups', 'smoke')
    useTestNG() {
        includeGroups groups
    }
}
This extracts the groups JVM argument's value; if it's not specified, we default to smoke.
We now execute the tests by specifying the groups needed using the below command:
./gradlew clean test --info -Dgroups=smoke
Now if we execute the below command, you would notice that no tests are executed.
./gradlew clean test --info -Dgroups=smoke1
Here's a patch that you can apply to your project
From 25133a5d2a0f96d4a305f34e1f5a17e70be2bb54 Mon Sep 17 00:00:00 2001
From: Krishnan Mahadevan <krishnan.mahadevan@stackoverflow.com>
Date: Mon, 14 Jan 2019 22:38:27 +0530
Subject: [PATCH] Fixing the bug
---
 build.gradle                                | 2 ++
 src/main/java/com/org/core/SelTestCase.java | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/build.gradle b/build.gradle
index 10ba91d..2d08991 100644
--- a/build.gradle
+++ b/build.gradle
@@ -38,7 +38,9 @@ task smokeTests(type: Test) {
 }*/
 test {
+    def groups = System.getProperty('groups', 'smoke')
     useTestNG() {
+        includeGroups groups
     }
 }
diff --git a/src/main/java/com/org/core/SelTestCase.java b/src/main/java/com/org/core/SelTestCase.java
index 80cad09..651529a 100644
--- a/src/main/java/com/org/core/SelTestCase.java
+++ b/src/main/java/com/org/core/SelTestCase.java
@@ -22,7 +22,7 @@ public class SelTestCase {
     private WebDriver webDriver;
-    @BeforeMethod
+    @BeforeMethod(alwaysRun = true)
     @Parameters({"browser", "url"})
     public void setUp(@Optional("firefox") String browser, @Optional("https://www.google.com/") String URL) {
         switch (browser) {
@@ -40,8 +40,9 @@ public class SelTestCase {
         webDriver.get(URL);
     }
-    @AfterMethod
+    @AfterMethod(alwaysRun = true)
     public void tearDown() {
         webDriver.quit();
     }
+
 }
--
2.20.1
You can save the above contents to a file, say mypatch.patch, and then apply the patch using the instructions detailed in this Stack Overflow post.
You should be able to run a specific test with the 'testInstrumentationRunnerArguments' flag:
-Pandroid.testInstrumentationRunnerArguments.class=com.abc.NameOfMyTestClass

Multi-Module Task Dependencies

I want the outputs of one task to be available to an identical task in another submodule.
I'm trying to make yet-another plugin for compilation (of C/++, .hs, .coffee, .js et al) and source code generation.
So, I'm making a plugin and task/s that (so far) generate CMakeLists.txt, Android.mk, .vcxproj or whatever for each module to build the source code.
I have a multi-module build for this.
I can reach around and find the tasks from "other" submodules, but I can't seem to enforce any execution order.
So, with ...
root project: RootModule
sub project: NativeCommandLine (requires SharedModule)
sub project: NativeGUI (requires SharedModule)
sub project: SharedModule
... I find that the NativeGUI tasks are executed before SharedModule which means that the SharedModule results aren't ready.
Bad.
Since the dependency { ... } configuration happens after plugins are applied (AFAIK), I'm guessing that the dependencies are wired up afterwards.
I need my tasks executed in order based on the dependency relations ... right? How can I do that?
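The usual way to get this is to wire the task graph explicitly across projects with dependsOn, which both declares the dependency and enforces execution order. A sketch, assuming each subproject registers its generator task under the same hypothetical name generateBuildFiles:

```groovy
// NativeGUI/build.gradle -- 'generateBuildFiles' is an assumed task name
tasks.named('generateBuildFiles') {
    // SharedModule's generator must run (and finish) first
    dependsOn ':SharedModule:generateBuildFiles'
}
```

Gradle then topologically sorts the task graph, so SharedModule's outputs exist before NativeGUI's task runs.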
I have created a (Scala) TaskBag that lazily registers a collection of all participating Task instances.
I add instances of my task to this, along with a handler for when a new task appears.
During configure, any task can include logic in the lambda to filter and act on other tasks and it will be executed as soon as both tasks are participating.
package peterlavalle

import java.util

import org.gradle.api.Task

object TaskBag {

  class AnchorExtension extends util.LinkedList[(Task, Task => Unit)]()

  /**
    * connect to the group of tasks
    */
  def apply(task: Task)(react: Task => Unit): Unit =
    synchronized {
      // lazily create the central anchor ... thing ...
      val anchor: AnchorExtension =
        task.getProject.getRootProject.getExtensions.findByType(classOf[AnchorExtension]) match {
          case null =>
            task.getProject.getRootProject.getExtensions.create(classOf[AnchorExtension].getName, classOf[AnchorExtension])
          case anchor: AnchorExtension =>
            anchor
        }

      // show us off to the old ones
      anchor.foreach {
        case (otherTask, otherReact) =>
          require(otherTask != task, "Don't double register a task!")
          otherReact(task)
          react(otherTask)
      }

      // add us to the list
      anchor.add(task -> react)
    }
}

ducttape sometimes-skip task: cross-product error

I'm trying a variant of sometimes-skip tasks for ducttape, based on the tutorial here:
http://nschneid.github.io/ducttape-crash-course/tutorial5.html
([ducttape][1] is a Bash/Scala based workflow management tool.)
I'm trying to do a cross-product to execute task1 on "clean" data and "dirty" data. The idea is to traverse the same path, but without preprocessing in some cases. To do this, I need to do a cross-product of tasks.
task cleanup < in=(Dirty: a=data/a b=data/b) > out {
  prefix=$(cat $in)
  echo "$prefix-clean" > $out
}

global {
  data=(Data: dirty=(Dirty: a=data/a b=data/b) clean=(Clean: a=$out@cleanup b=$out@cleanup))
}

task task1 < in=$data > out {
  cat $in > $out
}

plan FinalTasks {
  reach task1 via (Dirty: *) * (Data: *) * (Clean: *)
}
Here is the execution plan. I would expect 6 tasks, but I have two duplicate tasks being executed.
$ ducttape skip.tape
ducttape 0.3
by Jonathan Clark
Loading workflow version history...
Have 7 previous workflow versions
Finding hyperpaths contained in plan...
Found 8 vertices implied by realization plan FinalTasks
Union of all planned vertices has size 8
Checking for completed tasks from versions 1 through 7...
Finding packages...
Found 0 packages
Checking for already built packages (if this takes a long time, consider switching to a local-disk git clone instead of a remote repository)...
Checking inputs...
Work plan (depth-first traversal):
RUN: /nfsmnt/hltfs0/data/nicruiz/slt/IWSLT13/analysis/workflow/tmp/./cleanup/Baseline.baseline (Dirty.a)
RUN: /nfsmnt/hltfs0/data/nicruiz/slt/IWSLT13/analysis/workflow/tmp/./cleanup/Dirty.b (Dirty.b)
RUN: /nfsmnt/hltfs0/data/nicruiz/slt/IWSLT13/analysis/workflow/tmp/./task1/Baseline.baseline (Data.dirty+Dirty.a)
RUN: /nfsmnt/hltfs0/data/nicruiz/slt/IWSLT13/analysis/workflow/tmp/./task1/Dirty.b (Data.dirty+Dirty.b)
RUN: /nfsmnt/hltfs0/data/nicruiz/slt/IWSLT13/analysis/workflow/tmp/./task1/Clean.b+Data.clean+Dirty.b (Clean.b+Data.clean+Dirty.b)
RUN: /nfsmnt/hltfs0/data/nicruiz/slt/IWSLT13/analysis/workflow/tmp/./task1/Data.clean+Dirty.b (Clean.a+Data.clean+Dirty.b)
RUN: /nfsmnt/hltfs0/data/nicruiz/slt/IWSLT13/analysis/workflow/tmp/./task1/Data.clean (Clean.a+Data.clean+Dirty.a)
RUN: /nfsmnt/hltfs0/data/nicruiz/slt/IWSLT13/analysis/workflow/tmp/./task1/Clean.b+Data.clean (Clean.b+Data.clean+Dirty.a)
Are you sure you want to run these 8 tasks? [y/n]
Removing the symlinks from the output below, my duplicates are here:
$ head task1/*/out
==> Baseline.baseline/out <==
1
==> Clean.b+Data.clean/out <==
1-clean
==> Data.clean/out <==
1-clean
==> Clean.b+Data.clean+Dirty.b/out <==
2-clean
==> Data.clean+Dirty.b/out <==
2-clean
==> Dirty.b/out <==
2
Could someone with experience with ducttape assist me in finding my cross-product problem?
[1]: https://github.com/jhclark/ducttape
So why do we have 4 realizations involving the branch point Clean at task1 instead of just two?
The answer is that in ducttape, branch points are always propagated through all transitive dependencies of a task. So the branch point "Dirty" from the task "cleanup" is propagated through clean=(Clean: a=$out@cleanup b=$out@cleanup). At this point the variable "clean" contains the cross product of the original "Dirty" and the newly introduced "Clean" branch point.
The minimal change is to replace
clean=(Clean: a=$out@cleanup b=$out@cleanup)
with
clean=$out@cleanup
This would give you the desired number of realizations, but it's a bit confusing to use the branch point name "Dirty" just to control which input data set you're using -- with only this minimal change, the two realizations of the task "cleanup" would be (Dirty: a b).
It may make your workflow even more grokkable to refactor it like this:
global {
  raw_data=(DataSet: a=data/a b=data/b)
}

task cleanup < in=$raw_data > out {
  prefix=$(cat $in)
  echo "$prefix-clean" > $out
}

global {
  ready_data=(DoCleanup: no=$raw_data yes=$out@cleanup)
}

task task1 < in=$ready_data > out {
  cat $in > $out
}

plan FinalTasks {
  reach task1 via (DataSet: *) * (DoCleanup: *)
}
