This is my core/tests.py that I use with pytest-django:
import pytest

def test_no_db():
    pass

def test_with_db(db):
    pass
It seems that setting up to inject the db fixture takes about 66 seconds. When the tests start, collection is almost instant, followed by a 66-second pause; then the tests run rapidly.
If I disable the second test, the entire test suite runs in 0.002 seconds.
The database runs on PostgreSQL.
I run my tests like this:
$ pytest -v --noconftest core/tests.py
================================================================================ test session starts ================================================================================
platform linux -- Python 3.8.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/mslinn/venv/aw/bin/python
cachedir: .pytest_cache
django: settings: main.settings.test (from ini)
rootdir: /var/work/ancientWarmth/ancientWarmth, configfile: pytest.ini
plugins: timer-0.0.11, django-4.4.0, Faker-8.0.0
collected 2 items
core/tests.py::test_with_db PASSED [50%]
core/tests.py::test_no_db PASSED [100%]
=================================================================================== pytest-timer ====================================================================================
[success] 61.18% core/tests.py::test_with_db: 0.0003s
[success] 38.82% core/tests.py::test_no_db: 0.0002s
====================================================================== 2 passed, 0 skipped in 68.04s (0:01:08) ======================================================================
pytest.ini:
[pytest]
DJANGO_SETTINGS_MODULE = main.settings.test
FAIL_INVALID_TEMPLATE_VARS = True
filterwarnings = ignore::django.utils.deprecation.RemovedInDjango40Warning
python_files = tests.py test_*.py *_tests.py
Why does this happen? What can I do?
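If the pause turns out to be the one-time creation of the PostgreSQL test database plus running migrations (which is what the first test requesting the db fixture triggers in pytest-django), one thing worth trying is keeping the test database between runs with pytest-django's --reuse-db option. A minimal sketch, assuming the settings above stay otherwise unchanged:
[pytest]
DJANGO_SETTINGS_MODULE = main.settings.test
addopts = --reuse-db
With that in place, pass --create-db on the command line whenever the schema changes and the test database needs to be rebuilt.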
Related
I would like to run the pytest test for all the items in the for loop. The test should fail at the end, but it should run through all the elements in the for loop first.
The code looks like this
import pytest

@pytest.fixture
def library():
    return Library(spec_dir=service_spec_dir)

@pytest.fixture
def services(library):
    return list(library.service_map.keys())

def test_properties(library, services):
    for service_name in services:
        model = library.models[service_name]
        proxy = library.get_service(service_name)
        if len(model.properties) != 0:
            for prop in model.properties:
                try:
                    method = getattr(proxy, f'get_{prop.name}')
                    method()
                except Exception as ex:
                    pytest.fail(str(ex))
The above code fails as soon as one property of one service fails. I am wondering if there is a way to run the test for all the services and get a list of the failed cases across all of them.
I tried parametrize, but based on this Stack Overflow discussion, the parameter list has to be resolved during the collection phase, while in our case the library is loaded during the execution phase. Hence I am not sure it can be parametrized.
The goal is to run all the services and their properties and get a list of the failed items at the end.
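One way to get a single report listing every failing property, without parametrizing, is to collect the errors and fail once at the end. This is only a sketch, reusing the library and services fixtures and the get_<prop> accessors assumed in the snippet above:
import pytest

def test_properties(library, services):
    failures = []
    for service_name in services:
        model = library.models[service_name]
        proxy = library.get_service(service_name)
        for prop in model.properties:
            try:
                # Call the accessor; record any exception instead of aborting the test
                getattr(proxy, f'get_{prop.name}')()
            except Exception as ex:
                failures.append(f"{service_name}.{prop.name}: {ex}")
    if failures:
        pytest.fail("Failed properties:\n" + "\n".join(failures))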
I moved the variables to the global scope. I can parametrize the test now:
library = Library(spec_dir=service_spec_dir)
service_names = list(library.service_map.keys())

@pytest.mark.parametrize("service_name", service_names)
def test_properties(service_name):
    pass
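For illustration, a sketch of what the parametrized body might look like, continuing from the module-level library and service_names above and still assuming the hypothetical get_<prop> accessors from the question; each service then becomes its own test case, so a failing property only fails that case:
import pytest

@pytest.mark.parametrize("service_name", service_names)
def test_properties(service_name):
    model = library.models[service_name]
    proxy = library.get_service(service_name)
    for prop in model.properties:
        # Exceptions raised here fail only this service's test case
        getattr(proxy, f'get_{prop.name}')()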
Don't use pytest.fail; use pytest_check.check instead.
The point of fail is that it stops test execution as soon as the condition is hit, while check is made to collect how many cases failed.
import logging
import pytest
import pytest_check as check

def test_000():
    li = [1, 2, 3, 4, 5, 6]
    for i in li:
        logging.info(f"Test still running. i = {i}")
        if i % 2 > 0:
            check.is_true(False, msg=f"value of i is odd: {i}")
Output:
tests/main_test.py::test_000
-------------------------------- live log call --------------------------------
11:00:05 INFO Test still running. i = 1
11:00:05 INFO Test still running. i = 2
11:00:05 INFO Test still running. i = 3
11:00:05 INFO Test still running. i = 4
11:00:05 INFO Test still running. i = 5
11:00:05 INFO Test still running. i = 6
FAILED [100%]
================================== FAILURES ===================================
__________________________________ test_000 ___________________________________
FAILURE: value of i is odd: 1
assert False
FAILURE: value of i is odd: 3
assert False
FAILURE: value of i is odd: 5
assert False
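(pytest_check is the import name of the pytest-check plugin, so it has to be installed separately, e.g. with pip install pytest-check.)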
I have a simple groovy script that leverages the GPars library's withPool functionality to launch HTTP GET requests to two internal API endpoints in parallel.
The script runs fine locally, both directly as well as a docker container.
When I deploy it as a Kubernetes Job (in our internal EKS cluster: 1.20), it runs there as well, but the moment it hits the first withPool call I see a giant thread dump; execution then continues and completes successfully.
NOTE: Containers in our cluster run with the following pod security context:
securityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
Environment
# From the k8s job container
groovy@app-271df1d7-15848624-mzhhj:/app$ groovy --version
WARNING: Using incubator modules: jdk.incubator.foreign, jdk.incubator.vector
Groovy Version: 4.0.0 JVM: 17.0.2 Vendor: Eclipse Adoptium OS: Linux
groovy@app-271df1d7-15848624-mzhhj:/app$ ps -ef
UID PID PPID C STIME TTY TIME CMD
groovy 1 0 0 21:04 ? 00:00:00 /bin/bash bin/run-script.sh
groovy 12 1 42 21:04 ? 00:00:17 /opt/java/openjdk/bin/java -Xms3g -Xmx3g --add-modules=ALL-SYSTEM -classpath /opt/groovy/lib/groovy-4.0.0.jar -Dscript.name=/usr/bin/groovy -Dprogram.name=groovy -Dgroovy.starter.conf=/opt/groovy/conf/groovy-starter.conf -Dgroovy.home=/opt/groovy -Dtools.jar=/opt/java/openjdk/lib/tools.jar org.codehaus.groovy.tools.GroovyStarter --main groovy.ui.GroovyMain --conf /opt/groovy/conf/groovy-starter.conf --classpath . /tmp/script.groovy
groovy 116 0 0 21:05 pts/0 00:00:00 bash
groovy 160 116 0 21:05 pts/0 00:00:00 ps -ef
Script (relevant parts)
@Grab('org.codehaus.gpars:gpars:1.2.1')
import static groovyx.gpars.GParsPool.withPool
import groovy.json.JsonSlurper
final def jsl = new JsonSlurper()
//...
while (!(nextBatch = getBatch(batchSize)).isEmpty()) {
    def devThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = dev + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                        useCaches: true, allowUserInteraction: false))
                    devResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    def stgThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = stg + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                        useCaches: true, allowUserInteraction: false))
                    stgResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    devThread.join()
    stgThread.join()
}
Dockerfile
FROM groovy:4.0.0-jdk17 as builder
USER root
RUN apt-get update && apt-get install -yq bash curl wget jq
WORKDIR /app
COPY bin /app/bin
RUN chmod +x /app/bin/*
USER groovy
ENTRYPOINT ["/bin/bash"]
CMD ["bin/run-script.sh"]
The bin/run-script.sh simply downloads the above groovy script at runtime and executes it.
wget "$GROOVY_SCRIPT" -O "$LOCAL_FILE"
...
groovy "$LOCAL_FILE"
As soon as the execution hits the first call to withPool(poolSize), there's a giant thread dump, but execution continues.
I'm trying to figure out what could be causing this behavior. Any ideas 🤷🏽♂️?
Thread dump
For posterity, answering my own question here.
The issue turned out to be this log4j2 JVM hot-patch that we're currently leveraging to fix the recent log4j2 vulnerability. This agent (running as a DaemonSet) patches all running JVMs in all our k8s clusters.
This, somehow, causes my OpenJDK 17-based app to thread dump. I found the same issue with an ElasticSearch 8.1.0 deployment as well (it also uses a pre-packaged OpenJDK 17). That one is a service, so I could see a thread dump happening pretty much every half hour! Interestingly, there are other JVM services (and some SOLR 8 deployments) that don't have this issue 🤷🏽♂️.
Anyway, I worked with our devops team to temporarily exclude the node that the deployment was running on, and lo and behold, the thread dumps disappeared!
Balance in the universe has been restored 🧘🏻♂️.
In a Pester v5 implementation, is there any way to have a data-driven tag?
My Use Case:
Operating on larger data sets
To have all tests runnable on a data set
To be able to run against a specific element of my data set via the Config Filter
My Conceptual example:
Describe "Vehicles" {
Context "Type: <_>" -foreach #("car","truck") {
# Should be tagged Car for iteration 1, Truck for iteration 2
It "Should be True" -tag ($_) { $true | should -betrue }
}
}
TIA
Your example works for me, so the answer seems to be yes, you can do that. Usage examples:
~> Invoke-Pester -Output Detailed
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 18ms (1ms|17ms)
Context Type: truck
[+] Should be True 19ms (2ms|16ms)
Tests completed in 129ms
Tests Passed: 2, Failed: 0, Skipped: 0 NotRun: 0
~> Invoke-Pester -Output Detailed -TagFilter car
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Filter 'Tag' set to ('car').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 9ms (4ms|5ms)
Tests completed in 66ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~> Invoke-Pester -Output Detailed -TagFilter truck
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 11ms.
Filter 'Tag' set to ('truck').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: truck
[+] Should be True 21ms (1ms|19ms)
Tests completed in 97ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~>
I'm testing with a minimal example how futures work when using ProcessPoolExecutor.
First, I want to know the result of my processing functions, then I would like to add complexity to the scenario.
import time
import string
import random
import traceback
import concurrent.futures
from concurrent.futures import ProcessPoolExecutor
def process(*args):
    was_ok = True
    try:
        name = args[0][0]
        tiempo = args[0][1]
        print(f"Task {name} - Sleeping {tiempo}s")
        time.sleep(tiempo)
    except:
        was_ok = False
    return name, was_ok

def program():
    amount = 10
    workers = 2
    data = [''.join(random.choices(string.ascii_letters, k=5)) for _ in range(amount)]
    print(f"Data: {len(data)}")
    tiempo = [random.randint(5, 15) for _ in range(amount)]
    print(f"Times: {len(tiempo)}")
    with ProcessPoolExecutor(max_workers=workers) as pool:
        try:
            index = 0
            futures = [pool.submit(process, zipped) for zipped in zip(data, tiempo)]
            for future in concurrent.futures.as_completed(futures):
                name, ok = future.result()
                print(f"Task {index} with code {name} finished: {ok}")
                index += 1
        except Exception as e:
            print(f'Future failed: {e}')

if __name__ == "__main__":
    program()
If I run this program, the output is as expected, obtaining all the future results. However, just at the end I also get a failure:
Data: 10
Times: 10
Task utebu - Sleeping 14s
Task klEVG - Sleeping 10s
Task ZAHIC - Sleeping 8s
Task 0 with code klEVG finished: True
Task RBEgG - Sleeping 9s
Task 1 with code utebu finished: True
Task VYCjw - Sleeping 14s
Task 2 with code ZAHIC finished: True
Task GDZmI - Sleeping 9s
Task 3 with code RBEgG finished: True
Task TPJKM - Sleeping 10s
Task 4 with code GDZmI finished: True
Task CggXZ - Sleeping 7s
Task 5 with code VYCjw finished: True
Task TUGJm - Sleeping 12s
Task 6 with code CggXZ finished: True
Task THlhj - Sleeping 11s
Task 7 with code TPJKM finished: True
Task 8 with code TUGJm finished: True
Task 9 with code THlhj finished: True
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/concurrent/futures/process.py", line 101, in _python_exit
thread_wakeup.wakeup()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/concurrent/futures/process.py", line 89, in wakeup
self._writer.send_bytes(b"")
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 183, in send_bytes
self._check_closed()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
AFAIK the code doesn't have an error in itself. I've been researching really old questions like the following, without any luck finding a fix for it:
this one related to the same error msg but for Python 2,
this issue in GitHub which seems to be exactly the same I'm having (and seems not fixed yet...?),
this other issue where the comment I linked to seems to point to the actual problem, but doesn't find a solution to it,
of course the official docs for Python 3.7,
...
And so forth. However, I still haven't found how to solve this behaviour. I even found an old question here on SO that suggested avoiding the as_completed function and using submit instead (and from there came this actual test; before, I just had a map to my process function).
Any idea, fix, explanation or workaround are welcome. Thanks!
EDIT: I found a solution. Killing the terminal seems to resolve the problem; I guess it was not terminated when the computer was rebooted.
I still don't know, however, where the problem came from.
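For what it's worth, this kind of OSError raised from the atexit hook in concurrent.futures looks like the interpreter-shutdown race that affected older CPython releases (the traceback above is from 3.7); if that is indeed what is happening, re-running the same script on a newer Python release is a cheap way to check.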
I noticed that ScalaTest suddenly became a lot slower to start the tests.
I removed all the tests and only left one, simply testing true.
Here is what I get (using sbt)
MacBook-Pro:simulator nicolas$ sbt
[info] Set current project to Simulator (in build file:/Users/nicolas/Private/simulator/)
> compile
[success] Total time: 1 s, completed 29-Oct-2016 14:30:04
> test
[info] MySpec:
[info] A pip
[info] - should pop
[info] Run completed in 312 milliseconds.
[info] Total number of tests run: 1
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 1, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[success] Total time: 31 s, completed 29-Oct-2016 14:30:37
As you can see, the compile is instant (1s) and the tests themselves run in 312 milliseconds. What could explain that it actually needs 31s to run them?
It was not like this to start with; they were running in a few seconds, then suddenly jumped up to 30s (even with only one extremely quick test).
This happens even after a fresh restart of the computer.
Here is my build.sbt just in case:
lazy val root = (project in file(".")).
  settings(
    name := "Simulator",
    version := "0.1",
    scalaVersion := "2.11.8"
  )
// scala JSON library
libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "1.0.2"
// ScalaTest dependencies
libraryDependencies += "org.scalactic" %% "scalactic" % "3.0.0"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.0" % "test"
// QuickLens
libraryDependencies += "com.softwaremill.quicklens" %% "quicklens" % "1.4.8"
Thanks!
Edit: I made a new (minimal) project and I have the same issue; here is the full tree:
project
| - build.sbt
| - src
|   | - main
|   |   | - scala
|   |   |   | - hw.scala
|   | - test
|   |   | - scala
|   |   |   | - myTest.scala
hw.scala:
object Hi { def main(args: Array[String]) = println("Hi!") }
myTest.scala:
import org.scalatest._
class MySpec extends FlatSpec with Matchers {
  "A pip" should "pop" in { true should be(true) }
}
same build.sbt as above
If you recently upgraded to macOS Sierra, you may be running into this issue: SBT test extremely slow on macOS Sierra