EDIT: I found a solution: killing the terminal seems to resolve the problem. I guess it is not actually terminated when the computer is rebooted. I still don't know, however, where the problem came from.
I noticed that ScalaTest suddenly became a lot slower to start the tests.
I removed all the tests and left only one, which simply tests true.
Here is what I get (using sbt):
MacBook-Pro:simulator nicolas$ sbt
[info] Set current project to Simulator (in build file:/Users/nicolas/Private/simulator/)
> compile
[success] Total time: 1 s, completed 29-Oct-2016 14:30:04
> test
[info] MySpec:
[info] A pip
[info] - should pop
[info] Run completed in 312 milliseconds.
[info] Total number of tests run: 1
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 1, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[success] Total time: 31 s, completed 29-Oct-2016 14:30:37
As you can see, the compile is instant (1s) and the tests themselves run in 312 milliseconds. What could explain that it actually needs 31s to run them?
It was not like this to start with: the tests ran in a few seconds, then suddenly jumped up to 30s (even with only one extremely quick test).
It also happens after a fresh restart of the computer.
Here is my build.sbt just in case:
lazy val root = (project in file(".")).
  settings(
    name := "Simulator",
    version := "0.1",
    scalaVersion := "2.11.8"
  )
// scala JSON library
libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "1.0.2"
// ScalaTest dependencies
libraryDependencies += "org.scalactic" %% "scalactic" % "3.0.0"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.0" % "test"
// QuickLens
libraryDependencies += "com.softwaremill.quicklens" %% "quicklens" % "1.4.8"
Thanks!
Edit: I made a new minimal project and I have the same issue; here is the full tree:
project
| - build.sbt
| - src
|   | - main
|   |   | - scala
|   |   |   | - hw.scala
|   | - test
|   |   | - scala
|   |   |   | - myTest.scala
hw.scala:
object Hi { def main(args: Array[String]) = println("Hi!") }
myTest.scala:
import org.scalatest._
class MySpec extends FlatSpec with Matchers {
  "A pip" should "pop" in { true should be(true) }
}
Same build.sbt as above.
If you recently upgraded to macOS Sierra, you may be running into this issue: SBT test extremely slow on macOS Sierra
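For reference, the commonly cited cause in that thread is that on Sierra the JVM stalls for roughly 30 seconds trying to resolve the machine's own hostname, and the usual fix is to map that hostname to the loopback addresses in /etc/hosts. A sketch, assuming the machine is called MacBook-Pro (check yours with the hostname command) and adjusting the name accordingly:
# /etc/hosts — make the local hostname resolvable so the JVM's hostname lookup returns instantly
127.0.0.1   localhost MacBook-Pro.local MacBook-Pro
::1         localhost MacBook-Pro.local MacBook-Pro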
I have a simple Groovy script that leverages the GPars library's withPool functionality to launch HTTP GET requests to two internal API endpoints in parallel.
The script runs fine locally, both directly and as a Docker container.
When I deploy it as a Kubernetes Job (in our internal EKS cluster, 1.20), it runs there as well, but the moment it hits the first withPool call I see a giant thread dump; execution nevertheless continues and completes successfully.
NOTE: Containers in our cluster run with the following pod security context:
securityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
Environment
# From the k8s job container
groovy#app-271df1d7-15848624-mzhhj:/app$ groovy --version
WARNING: Using incubator modules: jdk.incubator.foreign, jdk.incubator.vector
Groovy Version: 4.0.0 JVM: 17.0.2 Vendor: Eclipse Adoptium OS: Linux
groovy#app-271df1d7-15848624-mzhhj:/app$ ps -ef
UID PID PPID C STIME TTY TIME CMD
groovy 1 0 0 21:04 ? 00:00:00 /bin/bash bin/run-script.sh
groovy 12 1 42 21:04 ? 00:00:17 /opt/java/openjdk/bin/java -Xms3g -Xmx3g --add-modules=ALL-SYSTEM -classpath /opt/groovy/lib/groovy-4.0.0.jar -Dscript.name=/usr/bin/groovy -Dprogram.name=groovy -Dgroovy.starter.conf=/opt/groovy/conf/groovy-starter.conf -Dgroovy.home=/opt/groovy -Dtools.jar=/opt/java/openjdk/lib/tools.jar org.codehaus.groovy.tools.GroovyStarter --main groovy.ui.GroovyMain --conf /opt/groovy/conf/groovy-starter.conf --classpath . /tmp/script.groovy
groovy 116 0 0 21:05 pts/0 00:00:00 bash
groovy 160 116 0 21:05 pts/0 00:00:00 ps -ef
Script (relevant parts)
#Grab('org.codehaus.gpars:gpars:1.2.1')
import static groovyx.gpars.GParsPool.withPool
import groovy.json.JsonSlurper
final def jsl = new JsonSlurper()
//...
while (!(nextBatch = getBatch(batchSize)).isEmpty()) {
    def devThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = dev + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                            useCaches: true, allowUserInteraction: false))
                    devResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }

    def stgThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = stg + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                            useCaches: true, allowUserInteraction: false))
                    stgResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }

    devThread.join()
    stgThread.join()
}
Dockerfile
FROM groovy:4.0.0-jdk17 as builder
USER root
RUN apt-get update && apt-get install -yq bash curl wget jq
WORKDIR /app
COPY bin /app/bin
RUN chmod +x /app/bin/*
USER groovy
ENTRYPOINT ["/bin/bash"]
CMD ["bin/run-script.sh"]
The bin/run-script.sh simply downloads the above groovy script at runtime and executes it.
wget "$GROOVY_SCRIPT" -O "$LOCAL_FILE"
...
groovy "$LOCAL_FILE"
As soon as the execution hits the first call to withPool(poolSize), there's a giant thread dump, but execution continues.
I'm trying to figure out what could be causing this behavior. Any ideas 🤷🏽♂️?
Thread dump
For posterity, answering my own question here.
The issue turned out to be this log4j2 JVM hot-patch that we're currently leveraging to fix the recent log4j2 vulnerability. This agent (running as a DaemonSet) patches all running JVMs in all our k8s clusters.
This, somehow, causes my OpenJDK 17 based app to thread dump. I found the same issue with an ElasticSearch 8.1.0 deployment as well (also uses a pre-packaged OpenJDK 17). This one is a service, so I could see a thread dump happening pretty much every half hour! Interestingly, there are other JVM services (and some SOLR 8 deployments) that don't have this issue 🤷🏽♂️.
Anyway, I worked with our devops team to temporarily exclude the node that deployment was running on, and lo and behold, the thread dumps disappeared!
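For anyone curious how such an exclusion can look: a minimal sketch, assuming the hot-patch runs as a DaemonSet you are allowed to edit. The label key/value (log4j-hotpatch=exclude) is made up for illustration, and you would first mark the node with something like kubectl label node <node-name> log4j-hotpatch=exclude.
# DaemonSet pod template: keep the patcher off nodes carrying the exclusion label (hypothetical label)
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: log4j-hotpatch
                    operator: NotIn
                    values: ["exclude"]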
Balance in the universe has been restored 🧘🏻♂️.
In a Pester v5 implementation, is there any way to have a data-driven tag?
My Use Case:
Operating on larger data sets
To have all tests runnable on a data set
To be able to run against a specific element of my data set via the Config Filter
My Conceptual example:
Describe "Vehicles" {
Context "Type: <_>" -foreach #("car","truck") {
# Should be tagged Car for iteration 1, Truck for iteration 2
It "Should be True" -tag ($_) { $true | should -betrue }
}
}
TIA
Your example works for me, so the answer seems to be: yes, you can do that. Usage examples:
~> Invoke-Pester -Output Detailed
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 18ms (1ms|17ms)
Context Type: truck
[+] Should be True 19ms (2ms|16ms)
Tests completed in 129ms
Tests Passed: 2, Failed: 0, Skipped: 0 NotRun: 0
~> Invoke-Pester -Output Detailed -TagFilter car
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Filter 'Tag' set to ('car').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 9ms (4ms|5ms)
Tests completed in 66ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~> Invoke-Pester -Output Detailed -TagFilter truck
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 11ms.
Filter 'Tag' set to ('truck').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: truck
[+] Should be True 21ms (1ms|19ms)
Tests completed in 97ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~>
This is my core/tests.py that I use with pytest-django:
import pytest


def test_no_db():
    pass


def test_with_db(db):
    pass
Setting up to inject the db fixture seems to take 66 seconds: when the tests start, collection is almost instant, followed by a 66-second pause, and then the tests run rapidly.
If I disable the second test, the entire test suite runs in 0.002 seconds.
The database runs on PostgreSQL.
I run my tests like this:
$ pytest -v --noconftest core/tests.py
================================================================================ test session starts ================================================================================
platform linux -- Python 3.8.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/mslinn/venv/aw/bin/python
cachedir: .pytest_cache
django: settings: main.settings.test (from ini)
rootdir: /var/work/ancientWarmth/ancientWarmth, configfile: pytest.ini
plugins: timer-0.0.11, django-4.4.0, Faker-8.0.0
collected 2 items
core/tests.py::test_with_db PASSED [50%]
core/tests.py::test_no_db PASSED [100%]
=================================================================================== pytest-timer ====================================================================================
[success] 61.18% core/tests.py::test_with_db: 0.0003s
[success] 38.82% core/tests.py::test_no_db: 0.0002s
====================================================================== 2 passed, 0 skipped in 68.04s (0:01:08) ======================================================================
pytest.ini:
[pytest]
DJANGO_SETTINGS_MODULE = main.settings.test
FAIL_INVALID_TEMPLATE_VARS = True
filterwarnings = ignore::django.utils.deprecation.RemovedInDjango40Warning
python_files = tests.py test_*.py *_tests.py
Why does this happen? What can I do?
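One hedged avenue, assuming the 66 seconds goes into creating and migrating the test database rather than into the tests themselves: pytest-django can keep the test database between runs with its --reuse-db flag, which you can set permanently via addopts in pytest.ini (and pass --create-db once whenever the schema changes):
[pytest]
# ... existing settings from the pytest.ini above ...
addopts = --reuse-db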
I have the following problem: I have a class (ExpertTest.java) with one feature (tagged @expert):
package opi;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features/Expert.feature",
        tags = "@expert"
)
public class ExpertTest {
}
I want to run only this one feature from Maven with the command
mvn clean test -Ptest -Dcucumber.options="--tags @expert"
but no tests are executed; logs from the console:
[INFO] Running opi.ExpertTest None of the features at
[src/test/resources/features/Expert.feature] matched the filters:
[@expert]
0 Scenarios 0 Steps 0m0,000s
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed:
0.013 s - in opi.ExpertTest
Do you know why Cucumber doesn't see my @expert tag and why this case is not executed?
I use Cucumber with Selenide.
OK, I put the tags in the feature file instead of in CucumberOptions in TestRunner.class, and it works.
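For illustration, a minimal sketch of what the tagged feature file can look like; the feature and scenario text here are made up, only the @expert tag placement matters:
@expert
Feature: Expert
  Scenario: An expert scenario
    Given some precondition
    Then some expected result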
When running my application locally, everything works fine. I even ran
rm -r ~/.m2
to make sure I was redownloading everything. When I deploy to Heroku, however, Heroku reports that it's unable to download commons-codec. I don't think this is an intermittent issue with the repo, since the repo has been up (of course, it could be an intermittent issue with Heroku...).
I was unable to find anything I understood with Google (I'm still a bit unclear on exactly how sbt works). Any idea how I can get up and running again on Heroku?
[warn] [NOT FOUND ] commons-codec#commons-codec;1.5!commons-codec.jar (9ms)
[warn] ==== Typesafe Releases Repository: tried
[warn] http://repo.typesafe.com/typesafe/releases/commons-codec/commons-codec/1.5/commons-codec-1.5.jar
[info] downloading http://repo.typesafe.com/typesafe/releases/org/apache/httpcomponents/httpclient/4.1/httpclient-4.1.jar ...
[info] [SUCCESSFUL ] org.apache.httpcomponents#httpclient;4.1!httpclient.jar (101ms)
[info] downloading http://repo.typesafe.com/typesafe/releases/org/apache/httpcomponents/httpcore/4.1/httpcore-4.1.jar ...
[info] [SUCCESSFUL ] org.apache.httpcomponents#httpcore;4.1!httpcore.jar (90ms)
[info] downloading http://s3pository.heroku.com/maven-central/org/apache/httpcomponents/httpclient/4.1.2/httpclient-4.1.2.jar ...
[info] [SUCCESSFUL ] org.apache.httpcomponents#httpclient;4.1.2!httpclient.jar (457ms)
[info] downloading http://s3pository.heroku.com/maven-central/org/apache/httpcomponents/httpcore/4.1.3/httpcore-4.1.3.jar ...
[info] [SUCCESSFUL ] org.apache.httpcomponents#httpcore;4.1.3!httpcore.jar (450ms)
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: FAILED DOWNLOADS ::
[warn] :: ^ see resolution messages for details ^ ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: commons-codec#commons-codec;1.5!commons-codec.jar
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[error] {file:/tmp/build_19dmxderfnxd/}Xonami WWW/*:update: sbt.ResolveException: download failed: commons-codec#commons-codec;1.5!commons-codec.jar
My build.scala contains:
val appDependencies = Seq(
  "org.hibernate" % "hibernate-c3p0" % "4.1.7.Final",
  "org.hibernate" % "hibernate-entitymanager" % "4.1.7.Final",
  "javax.servlet" % "servlet-api" % "2.5",
  "spy" % "spymemcached" % "2.7.3",
  "postgresql" % "postgresql" % "9.1-901.jdbc4",
  "org.slf4j" % "slf4j-api" % "1.6.4",
  "javax.mail" % "mail" % "1.4.4",
  "com.thoughtworks.xstream" % "xstream" % "1.4.2",
  "org.slf4j" % "slf4j-simple" % "1.6.4",
  "org.jdom" % "jdom" % "1.1",
  "junit" % "junit" % "4.10",
  "com.amazonaws" % "aws-java-sdk" % "1.3.6",
  "joda-time" % "joda-time" % "2.1",
  "org.restlet.jee" % "org.restlet" % "2.1-RC3",
  "org.restlet.jse" % "org.restlet.ext.jetty" % "2.1-RC3",
  "org.restlet.jee" % "org.restlet.ext.json" % "2.1-RC3",
  "org.restlet.jee" % "org.restlet.ext.servlet" % "2.1-RC3",
  "org.restlet.jee" % "org.restlet.ext.xml" % "2.1-RC3",
  "org.restlet.jee" % "org.restlet.ext.xstream" % "2.1-RC3",
  "org.restlet.jee" % "org.restlet.ext.wadl" % "2.1-RC3",
  "xalan" % "xalan" % "2.7.1",
  "com.rabbitmq" % "amqp-client" % "3.0.2"
)

val main = PlayProject(appName, appVersion, appDependencies).settings(defaultScalaSettings:_*).settings(
  resolvers += "spy" at "http://files.couchbase.com/maven2/",
  resolvers += "project.local" at "file:${project.basedir}/repo",
  resolvers += "repository.jboss.org-public" at "https://repository.jboss.org/nexus/content/groups/public",
  resolvers += "maven-restlet" at "http://maven.restlet.org"
)
The problem started when I added rabbitmq, but it seems to persist even when I try removing it.
I came across this thread which suggested I "fix" this by changing my PlayProject function as follows:
val main = PlayProject(appName, appVersion, appDependencies).settings(defaultScalaSettings:_*).settings(
  resolvers := Seq("typesafe" at "http://repo.typesafe.com/typesafe/repo"),
  resolvers += "spy" at "http://files.couchbase.com/maven2/",
  resolvers += "project.local" at "file:${project.basedir}/repo",
  resolvers += "repository.jboss.org-public" at "https://repository.jboss.org/nexus/content/groups/public",
  resolvers += "maven-restlet" at "http://maven.restlet.org"
)
I don't really like this, because I don't understand it. I feel like I'm working around a problem with Heroku. Can someone explain why this works and why it's correct (or incorrect)?