I am aware that the currently supported method for invoking a task from another task is to use dependsOn or finalizedBy, but I run into an issue with this.
I have a task, taskA, that is usable on its own. I have another task, taskB, which, when called, depends on taskA. The problem is that taskB has additional conditions that require it to be skipped if they fail. Here is the workflow I am going for:
$ gradle taskA
:taskA
BUILD SUCCESSFUL
$ gradle taskB
checking condition 1... PASS
checking condition 2... PASS
:taskA
:taskB
BUILD SUCCESSFUL
$ gradle taskB
checking condition 1... PASS
checking condition 2... FAIL
:taskA SKIPPED
:taskB SKIPPED
BUILD SUCCESSFUL
If called directly, or as a doFirst or dependsOn or something from a different task, taskA should execute regardless of the conditions. But if taskB is called, the conditions must pass before taskA is executed. Here's what I've tried:
project.tasks.create(name: "taskB", type: MyTask, dependsOn: "taskA")
project.taskB {
    onlyIf {
        if (!conditionA()) {
            return false
        }
        if (!conditionB()) {
            return false
        }
        return true
    }
}
The problem here is that taskA will execute before the onlyIf is checked on taskB, so even if a condition fails, taskA will execute.
How can I accomplish this?
It seems this can be configured no earlier than after the task graph has been resolved. At any earlier stage the conditions would be evaluated during the configuration phase, which is too early. Have a look at this:
task a {
    doLast {
        println 'a'
    }
}

task b {
    dependsOn a
    doLast {
        println 'b'
    }
}

gradle.taskGraph.whenReady { graph ->
    if (graph.hasTask(b)) {
        a.enabled = b.enabled = check1() && check2()
    }
}

boolean check1() {
    def ok = project.hasProperty('c')
    println "check 1: ${ok ? 'PASS' : 'FAIL'}"
    ok
}

boolean check2() {
    def ok = project.hasProperty('d')
    println "check 2: ${ok ? 'PASS' : 'FAIL'}"
    ok
}
And the output:
~/tutorial/stackoverflow/40850083/ [master] gradle a
:a
a
BUILD SUCCESSFUL
Total time: 1.728 secs
~/tutorial/stackoverflow/40850083/ [master] gradle b
check 1: FAIL
:a SKIPPED
:b SKIPPED
BUILD SUCCESSFUL
Total time: 1.739 secs
~/tutorial/stackoverflow/40850083/ [master] gradle b -Pc
check 1: PASS
check 2: FAIL
:a SKIPPED
:b SKIPPED
BUILD SUCCESSFUL
Total time: 1.714 secs
~/tutorial/stackoverflow/40850083/ [master] gradle b -Pc -Pd
check 1: PASS
check 2: PASS
:a
a
:b
b
BUILD SUCCESSFUL
Total time: 1.745 secs
I know it's not recommended by most people, but I did it by actually executing the task, like this (note that Task.execute() is an internal API that was later removed, so this only works on older Gradle versions):
task a {
    doLast {
        println 'a'
    }
}

task b {
    doLast {
        a.execute()
        println 'b'
    }
    outputs.upToDateWhen {
        conditionA() && conditionB()
    }
}
I need to execute something like go test ./... -v -args -name1 val1
But nothing that works with go test ... seems to work with go test ./...
The Go test framework uses the global flag.CommandLine FlagSet. Any flags created in test files are available from the command line. Positional arguments that aren't consumed by the test framework are available via flag.Args() (and flag.Arg, flag.NArg). Positional args need -- to separate them on the command line.
For example:
package testflag

import (
    "flag"
    "testing"
)

var value = flag.String("value", "", "Test value to log")

func TestFlagLog(t *testing.T) {
    t.Logf("Value = %q", *value)
    t.Logf("Args = %q", flag.Args())
}
Assuming the above test is in several directories testflag, testflag/a, and testflag/b, running go test -v ./... -value bar -- some thing outputs:
=== RUN TestFlagLog
testflag_test.go:11: Value = "bar"
testflag_test.go:12: Args = ["some" "thing"]
--- PASS: TestFlagLog (0.00s)
PASS
ok testflag 0.002s
=== RUN TestFlagLog
testflag_test.go:11: Value = "bar"
testflag_test.go:12: Args = ["some" "thing"]
--- PASS: TestFlagLog (0.00s)
PASS
ok testflag/a 0.001s
=== RUN TestFlagLog
testflag_test.go:11: Value = "bar"
testflag_test.go:12: Args = ["some" "thing"]
--- PASS: TestFlagLog (0.00s)
PASS
ok testflag/b 0.002s
I have the following use-case where I struggle to identify the right concurrency pattern.
I have a service that has a list of computational tasks to do:
Some are fast to execute (a few seconds), others can take hours
I never want the same task to be executed twice at the same time
Different tasks can be executed in parallel
If a task is supposed to be refreshed every hour, for example, but at some point an execution takes more than an hour, I want to discard the new execution but log it somewhere
I want to have a timeout on tasks
I'm fine with using an external library of course.
I looked at singleflight, but it seems to be aimed at caching, and new executions of a task block until that task finishes computing. In my case, I want to log the fact that it was already computing but discard the new execution (or just not do it).
Maintain a map of tasks in progress. When adding a task, check map and log if task is in progress; add task to map. Remove task from map when done.
Here's an example assuming that a task is identified by a string and implemented by a func().
var (
    mu         sync.Mutex          // guards inProgress
    inProgress = map[string]bool{} // ids of tasks currently running
)

func startTask(id string, fn func()) {
    mu.Lock()
    ip := inProgress[id]
    if !ip {
        inProgress[id] = true
    }
    mu.Unlock()
    if ip {
        // Task already running: log and discard this execution.
        log.Printf("task %s in progress", id)
    } else {
        go func() {
            fn()
            mu.Lock()
            delete(inProgress, id)
            mu.Unlock()
        }()
    }
}
I am running a parallel test using go test. It's a scenario-based test and test runs based on provided JSON data.
Sample JSON data looks like below
[
    {
        "ID": 1,
        "dependency": [],
        ...
    },
    {
        "ID": 2,
        "dependency": [1],
        ...
    },
    {
        "ID": 3,
        "dependency": [2],
        ...
    },
    {
        "ID": 4,
        "dependency": [1, 2],
        ...
    }
]
Here ID is the identifier of a test case, and dependency lists the test cases that should run before it.
A circular-dependency checker is implemented and ensures there are no cycles in the JSON.
The test code looks like this:
t.Run(file, func(t *testing.T) {
    t.Parallel()
    var tcs []TestCase
    bytes := ReadFile(file, t)
    json.Unmarshal(bytes, &tcs)
    CheckCircularDependencies(tcs, t) // check circular dependency graph
    for idx, tc := range tcs {
        queues = append(queues, queitem{
            idx:    idx,
            stepID: tc.ID,
            status: NotStarted,
            ....
        })
    }
    for _, tc := range tcs {
        UpdateWorkQueStatus(tc, InProgress, t)
        ProcessQueItem(tc, t)
    }
})
func ProcessQueItem(tc TestCase, t *testing.T) {
    WaitForDependencies(tc, t) // waits for all dependency test cases to finish
    ...
    // DO TEST code, which takes about 2-3s:
    // provide a command to microservices,
    // wait 1-2s for the command to be processed,
    // if not processed within 1-2s, wait again for 1-2s,
    // once the command is processed, check the result and finish the test case
    UpdateWorkQueStatus(tc, Done, t)
}
func WaitForDependencies(tc TestCase, t *testing.T) {
    if !GoodToGoForTestCase(tc, t) {
        SleepForAWhile()
        WaitForDependencies(tc, t)
    }
}

func SleepForAWhile() {
    time.Sleep(100 * time.Millisecond)
}
Parallel test behavior:
This works as expected on my personal computer, which has multiple processors.
But when it runs on Google Cloud (maybe 1 CPU), it gets stuck and makes no progress for 10 minutes.
Current solution:
Running the tests without parallelism, in order of ID (I just removed the t.Parallel() call to avoid parallelism).
Problem:
This serial solution takes more than 3 times as long as the parallel one, since most of the testing time is sleep time.
The test code itself also contains sleep calls.
My questions:
1. Is this architecture correct, and can it work on a single-processor CPU?
2. If it can work, what could be causing it to get stuck?
I have a makefile that executes go test -cover. Is it possible to fail the make unit_tests target if coverage is below X? How would I do that?
You can use TestMain in your test to do that. TestMain can act as a custom entry point to tests, and then you can invoke testing.Coverage() to get access to the coverage stats.
So for example, if you want to fail at anything below 80%, you could add this to one of your package's test files:
func TestMain(m *testing.M) {
    // call flag.Parse() here if TestMain uses flags
    rc := m.Run()
    // rc 0 means we've passed,
    // and CoverMode will be non-empty if run with -cover
    if rc == 0 && testing.CoverMode() != "" {
        c := testing.Coverage()
        if c < 0.8 {
            fmt.Println("Tests passed but coverage failed at", c)
            rc = -1
        }
    }
    os.Exit(rc)
}
Then go test -cover will call this entry point and you'll fail:
PASS
coverage: 63.0% of statements
Tests passed but coverage failed at 0.5862068965517241
exit status 255
FAIL github.com/xxxx/xxx 0.026s
Note that the number testing.Coverage() returns is lower than what the test reports. I've looked at the code, and the function calculates coverage differently from the test's internal reporting. I'm not sure which is more "correct".
I want the init function to execute one of these two lines:
func init() {
    log.SetPrefix(">>>>>>>>>>>>> ")
    log.SetOutput(ioutil.Discard)
}
How can I achieve something similar to C macro definitions in a makefile?
I build my Go program with go build my_project and I don't have any custom makefile. Can I pass a flag to the build command and read it in code?
Create a string variable that you can test. Then set that string variable using -X passed to the linker. For example:
package main

var MODE string

func main() {
    if MODE == "PROD" {
        println("I'm in prod mode")
    } else {
        println("I'm NOT in prod mode")
    }
}
Now, if you build this with just go build, MODE will be "". But you can also build it this way:
go build -ldflags "-X main.MODE=PROD"
And now MODE will be "PROD". You can use that to modify your logic based on build settings. I generally use this technique to set version numbers in the binary; but it can be used for all kinds of build-time configuration. Note that it can only set strings. So you couldn't make MODE a bool or integer.
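As a concrete illustration of the version-number use mentioned above (the variable name Version and the version string are my own illustrative choices, not from the original answer):

```go
package main

import "fmt"

// Version is meant to be overridden at build time with, e.g.:
//   go build -ldflags "-X main.Version=1.2.3"
// A plain `go build` keeps the default below.
var Version = "dev"

func main() {
	fmt.Println("version:", Version)
}
```

With no linker flags this prints "version: dev"; built with the -X flag above it prints "version: 1.2.3".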
You can also achieve this with tags, which are slightly more complicated to set up, but much more powerful. Create three files:
main.go
package main

func main() {
    doThing()
}

main_prod.go
// +build prod

package main

func doThing() {
    println("I'm in prod mode")
}

main_devel.go
// +build !prod

package main

func doThing() {
    println("I'm NOT in prod mode")
}
Now when you build with go build, you'll get your main_devel.go version (the filename doesn't matter; just the // +build !prod line at the top). But you can get a production build by building with go build -tags prod.
See the section on Build Constraints in the build documentation. (On Go 1.17 and later, the //go:build prod form is preferred; gofmt keeps the two forms in sync.)