Can you define the step order in Spring Batch dynamically?

English is not my first language, so please bear with me.
I'm using Spring Boot and Spring Batch on an ongoing project.
As I understand it, the beans that make up a batch job are all created together when the application starts, so the step order is fixed at that point.
Say the job is defined as A->B->C. Is there a way to change that to B->A->C and run it in that order?
I was looking at JobExecutionDecider as a way to do this, but I can already foresee the problem that the amount of flow configuration balloons as the number of steps grows.
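The decider part I have in mind looks something like this (a sketch; the stepOrder job parameter is just my example):

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;

public class StepOrderDecider implements JobExecutionDecider {
    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        // e.g. launch the job with stepOrder=BAC to request the B->A->C flow
        String order = jobExecution.getJobParameters().getString("stepOrder", "ABC");
        return new FlowExecutionStatus(order);
    }
}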
For example, with StepA, StepB and StepC there are 3! = 6 possible orderings: A->B->C, A->C->B, B->A->C, B->C->A, C->A->B, C->B->A. If I have to handle every one of them, the flow definition ends up something like:
.start(decider)
    .on("ABC").to(stepA()).next(stepB()).next(stepC())
.from(decider)
    .on("ACB").to(stepA()).next(stepC()).next(stepB())
.from(decider)
    .on("BAC").to(stepB()).next(stepA()).next(stepC())
// ... three more .from(decider) blocks for the remaining orderings
.end()
Do I really have to spell out all 3! = 6 cases this way?
And as the number of steps grows, the number of cases grows factorially...
Isn't there a better way?

Related

How to loop a Maven execution?

On a Maven project, in the process-test-resources phase, I set up the database schemas with sql-maven-plugin. The project has N database shards, which I set up with N repeated execution blocks that are identical except for the database name. Everything works as expected.
The problem is that with a growing number of shards, the number of near-identical blocks grows, which is cumbersome and makes maintenance annoying (since, by definition, all of those databases are literally the same). I would like to define a "list" of database names and let sql-maven-plugin run once for each, without having to repeat the whole block many times.
I'm not looking for changes to the test setup, as I positively want to set up as many shards as needed in the test environment. I solely need some "Maven sugar" to conveniently define the values the executions should "loop" over.
I understand that Maven itself does not support iteration, and I am looking for alternatives or ideas on how to better achieve this. Things that come to mind are:
Using/writing a "loop" plugin that manages the multiple parameterized executions
Extending sql-maven-plugin to support my use case
???
Does anyone have a better/cleaner solution?
Thanks in advance.
In this case I would recommend using the maven-antrun-plugin to handle this situation, though of course it is also possible to implement a dedicated Maven plugin for this kind of purpose.
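Something along these lines, for example (an untested sketch: the shard names, JDBC driver and credentials are placeholders, and ant-contrib supplies the loop):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>process-test-resources</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target xmlns:ac="antlib:net.sf.antcontrib">
          <!-- run the same schema script once per shard -->
          <ac:for list="shard_01,shard_02,shard_03" param="db">
            <sequential>
              <sql driver="com.mysql.jdbc.Driver"
                   url="jdbc:mysql://localhost/@{db}"
                   userid="test" password="test"
                   src="src/test/sql/schema.sql"
                   classpathref="maven.plugin.classpath"/>
            </sequential>
          </ac:for>
        </target>
      </configuration>
    </execution>
  </executions>
  <dependencies>
    <!-- ant-contrib provides <ac:for>; the JDBC driver is needed by <sql> -->
    <dependency>
      <groupId>ant-contrib</groupId>
      <artifactId>ant-contrib</artifactId>
      <version>1.0b3</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.49</version>
    </dependency>
  </dependencies>
</plugin>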

Test Driven Development initial implementation

A common practice of TDD is that you make tiny steps. But one thing which is bugging me is something I've seen a few people do, whereby they just hardcode values/options, and then refactor later to make it work properly. For example…
describe Calculator do
  it "should multiply" do
    expect(Calculator.multiply(4, 2)).to eq(8)
  end
end
Then you do the least possible to make it pass:
class Calculator
  def self.multiply(a, b)
    return 8
  end
end
And it does!
Why do people do this? Is it to ensure they're actually implementing the method in the right class or something? Because it just seems like a sure-fire way to introduce bugs and give false confidence if you forget something. Is it a good practice?
This practice is known as "Fake it 'til you make it." In other words, put fake implementations in until such time as it becomes simpler to put in a real implementation. You ask why we do this.
I do this for a number of reasons. One is simply to ensure that my test is being run. It's possible for the setup to be configured wrong, so that when I hit my magic "run tests" key I'm actually not running the tests I think I'm running. If I press the button and it's red, then put in the fake implementation and it's green, I know I'm really running my tests.
Another reason for this practice is to keep a quick red/green/refactor rhythm going. That is the heartbeat that drives TDD, and it's important that it have a quick cycle. Important so you feel the progress, important so you know where you're at. Some problems (not this one, obviously) can't be solved in a quick heartbeat, but we must advance on them in a heartbeat. Fake it 'til you make it is a way to ensure that timely progress. See also flow.
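To make the "make it" half concrete, here is a sketch of the usual next move (often called triangulation): add a second example that the hardcoded value can't satisfy, and the fake implementation has to become real.

describe Calculator do
  it "should multiply" do
    expect(Calculator.multiply(4, 2)).to eq(8)
    expect(Calculator.multiply(3, 5)).to eq(15) # the hardcoded 8 now fails
  end
end

class Calculator
  def self.multiply(a, b)
    a * b # the simplest code that passes both expectations
  end
end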
There is a school of thought, which can be useful in training programmers to use TDD, that says you should not have any lines of source code that were not originally part of a unit test. By first coding the algorithm that passes the test into the test, you verify that your core logic works. Then, you refactor it out into something your production code can use, and write integration tests to define the interaction and thus the object structure containing this logic.
Also, religious TDD adherence would tell you that there should be no logic coded that a requirement, verified by an assertion in a unit test, does not specifically call for. Case in point: at this time, the only test for multiplication in the system asserts that the answer must be 8. So, at this time, the answer is ALWAYS 8, because the requirements tell you nothing different.
This seems very strict, and in the context of a simple case like this, nonsensical; to verify correct functionality in the general case you would need an infinite number of unit tests, when you as an intelligent human being "know" how multiplication is supposed to work and could easily set up a test that generated and checked a multiplication table up to some limit (sketched below) that would make you confident it works in all necessary cases.
However, in more complex scenarios with more involved algorithms, this becomes a useful study in the benefits of YAGNI. If the requirements state that you need to be able to save record A to the DB, and the ability to save record B is omitted, then you must conclude "you ain't gonna need" the ability to save record B until a requirement comes in that says so. If you implement the ability to save record B before you know you need it, and it turns out you never do, then you have wasted time and effort building it into the system; you have code with no business purpose that can still "break" your system and thus requires maintenance.
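For what it's worth, that table-driven test is only a few lines in the calculator example; a sketch:

describe Calculator do
  it "should multiply any pair up to 12 x 12" do
    (0..12).each do |a|
      (0..12).each do |b|
        expect(Calculator.multiply(a, b)).to eq(a * b)
      end
    end
  end
end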
Even in simpler cases, you may end up coding more than you need if you code beyond requirements that you "know" are too light or too specific. Let's say you were implementing some sort of parser for string codes. The requirements state that the string code "AA" = 1 and "AB" = 2, and that's the limit of the requirements. But you know the full library of codes in this system includes 20 others, so you include logic and tests that parse the full library. You go back to the client, expecting payment for your time and materials, and the client says "we didn't ask for that; we only ever use the two codes we specified in the tests, so we're not paying you for the extra work". And they would be exactly right; you've technically tried to bilk them by charging for code they didn't ask for and don't need.

When applying TDD, what heuristics do you use to select which test to write next?

The first part of the TDD cycle is selecting a test to fail. I'd like to start a community wiki about this selection process.
Sometimes selecting the test to start with is obvious, start with the low hanging fruit. For example when writing a parser, an easy test to start with is the one that handles no input:
def testEmptyInput():
    result = parser.parse("")
    assertNullResult(result)
Some tests are easy to pass requiring little implementation code, as in the above example.
Other tests require complex slabs of implementation code to pass, and I'm left with the feeling I haven't done the "easiest thing possible to get the test to pass". It's at this point that I stop trying to pass the test and select a new one, in the hope that it will reveal a simpler route to the problematic implementation.
I'd like to explore some of the characteristics of these easy and challenging tests, and how they impact test case selection and ordering.
How does test selection relate to top-down and bottom-up strategies? Can anyone recommend writings that address these strategies in relation to TDD?
I start by anchoring the most valuable behaviour of the code.
For instance, if it's a validator, I'll start by making sure it says that valid objects are valid. Now we can showcase the code, train users not to do stupid things, etc. - even if the validator never gets implemented any further. After that, I start adding the edge cases, with the most dangerous validation mistakes first.
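As a sketch, that first anchoring test might be no more than this (the validator API is invented for illustration):

def shouldAcceptAValidObject():
    # the most valuable behaviour first: a known-good object passes
    assert validator.is_valid(a_valid_order())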
If I start with a parser, rather than start with an empty string, I might start with something typical but simple that I want to parse and something I'd like to get out of that. For me unit tests are more like examples of how I'm going to want to use the code.
I also follow BDD's practice of naming tests "should..." - so for your example I'd have shouldReturnNullIfTheInputIsEmpty(). This helps me identify the next most important thing the code should do.
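Applied to the parser example from the question, that might look like:

def shouldReturnNullIfTheInputIsEmpty():
    # the name reads as a sentence stating the expected behaviour
    result = parser.parse("")
    assert result is None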
This is also related to BDD's "outside-in". Here are a couple of blog posts I wrote which might help: Pixie Driven Development and Bug Driven Development. Bug Driven Development helps me to work out what the next bit of system-level functionality I need should be, which then helps me find the next unit test.
Hope this gives you a slightly different perspective, anyway - good luck!
To begin with, I'll pick a simple typical case and write a test for it.
While I'm writing the implementation I'll pay attention to corner cases that my code handles incorrectly. Then I'll write tests for those cases.
Otherwise I'll look for things that the function in question should do, but doesn't.

TDD: Where to start the first test [closed]

So I've done some unit testing and have experience writing tests, but I have not fully embraced TDD as a design tool.
My current project is to rework an existing system that generates serial numbers as part of the company's assembly process. I have an understanding of the current process and workflow from looking at the existing system. I also have a list of new requirements and of how they are going to modify the workflow.
I feel like I'm ready to start writing the program, and I've decided to force myself to finally do TDD from start to end.
But now I have no idea where to start. (I also wonder if I'm cheating the TDD process by already having an idea of the program flow for the user.)
The user flow is really serial and is just a series of steps. As an example, the first step would be:
the user submits a manufacturing order number and receives a list of serializable part numbers from that order's bill of materials
The next step is started when the user selects one of the part numbers.
So I was thinking I can use this first step as a starting point. I know I want a piece of code that takes a manufacturing order number and returns a list of part numbers.
// This isn't what I'd want my code to end up looking like,
// but it is the simplest statement of what I want.
IList<string> partNumbers = GetPartNumbersForMfgOrder(mfgOrder);
Reading Kent Beck's book of examples, he talks about picking small tests. This seems like a pretty big black box: it's going to require a mfg order repository, I'll have to crawl a product structure tree to find all applicable part numbers for the mfg order, and I haven't even defined my domain model in code at all.
So on one hand that seems like a crappy start - a very general, high-level function. On the other hand, I feel like if I start at a lower level I'm really just guessing what I might need, and that seems anti-TDD.
As a side note... is this how you'd use stories?
As an assembler
I want to get a list of part numbers on a mfg order
So that I can pick which one to serialize
To be truthful, an assembler would never say that. All an assembler wants is to finish the operation on the mfg order:
As an assembler
I want to mark parts with a serial number
So that I can finish the operation on the mfg order
Here's how I would start. Let's suppose you have absolutely no code for this application.
Define the user story and the business value that it brings: "As a User I want to submit a manufacturing order number and receive a list of part numbers for that order, so that I can send the list to the inventory system."
Start with the UI. Create a very simple page (let's suppose it's a web app) with three fields: label, list and button. That's good enough, isn't it? The user could copy the list and send it to the inventory system.
Use a pattern to base your design on, like MVC.
Define a test for the controller method that gets called from the UI. You're testing here that the controller works, not that the data is correct: Assert.AreEqual(3, controller.RetrieveParts(mfgOrder).Count)
Write a simple implementation of the controller to make sure that something gets returned: return new List<MfgOrder> { new MfgOrder(), new MfgOrder(), new MfgOrder() }; You'll also need to implement a class for MfgOrder, for example.
Now your UI is working! Working incorrectly, but working. So let's expect the controller to get the data from a service or DAO. Create a mock DAO in the test case, and add an expectation that the method partsDao.GetPartsInMfgOrder() is called (see the sketch after these steps).
Create the DAO class with the method. Call the method from the controller. Your controller is now done.
Create a separate test to test the DAO, finally making sure it returns the proper data from the DB.
Keep iterating until you get it all done. After a little while, you'll get used to it.
The main point here is separating the application into very small parts and testing those small parts individually.
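Put together, the controller steps above might look roughly like this (a sketch using NUnit and Moq; PartsController, IPartsDao and the rest are just the hypothetical names from the steps):

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

public class MfgOrder { }

public interface IPartsDao
{
    IList<MfgOrder> GetPartsInMfgOrder(string mfgOrder);
}

public class PartsController
{
    private readonly IPartsDao dao;
    public PartsController(IPartsDao dao) { this.dao = dao; }

    // The controller now delegates to the DAO instead of faking the data.
    public IList<MfgOrder> RetrieveParts(string mfgOrder) => dao.GetPartsInMfgOrder(mfgOrder);
}

[TestFixture]
public class PartsControllerTests
{
    [Test]
    public void RetrievePartsAsksTheDaoForTheOrdersParts()
    {
        var dao = new Mock<IPartsDao>();
        dao.Setup(d => d.GetPartsInMfgOrder("MO-1234"))
           .Returns(new List<MfgOrder> { new MfgOrder(), new MfgOrder(), new MfgOrder() });

        var controller = new PartsController(dao.Object);

        Assert.AreEqual(3, controller.RetrieveParts("MO-1234").Count);
        dao.Verify(d => d.GetPartsInMfgOrder("MO-1234"), Times.Once());
    }
}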
This is perfectly okay as a starting test. With it you define expected behavior - how the code should work. Now, if you feel you've taken a much bigger bite than you'd have liked, you can temporarily ignore this test and write a more granular test that takes you part of the way. Then write other tests that carry you towards the goal of making the first big test pass. Red-Green-Refactor at each step.
Small tests, I think, mean that you should not be testing a whole lot of stuff in one test, e.g. are components A, B and C in state1, state2 and state3 after I've called Method1(), Method2() and Method3() with these parameters on D?
Each test should test just one thing. You can search SO for the qualities of good tests. But I'd consider your test a small test, because it is short and focused on one task: getting part numbers from a manufacturing order.
Update: As a to-try suggestion (AFAIR from Beck's book), you may want to sit down and come up with a list of one-line tests for the SUT on a piece of paper. You can then start with the easiest ones (tests that you're confident you'll be able to get done) in order to build some confidence, OR attempt one that you're 80% confident about but that has some gray areas (my choice too), because it'll help you learn something about the SUT along the way. Keep the ones you've no idea how to approach for the end; hopefully they'll be clearer by the time the easier ones are done. Strike them off one by one as they turn green.
I think you have a good start, even if you don't quite see it that way. A test that is supposed to spawn more tests makes total sense to me: if you think about it, do you even know yet what a manufacturing order number or a part number is? You may have to build those, which leads to other tests, but eventually you'll get down to the itty-bitty tests, I believe.
Here's a story that may require a bit of breaking down:
As a User I want to submit a manufacturing order number and receive a list of serializable part numbers from that order's bill of materials
I think the key is to break things down over and over again into pieces tiny enough that it becomes easy to build the whole thing. That "divide and conquer" technique is handy at times. ;)
Well well, you've hit the exact same wall I did when I tried TDD for the first time :)
Since then, I've given up on it, simply because it makes refactoring too expensive - and I tend to refactor a lot during the initial stage of development.
With those grumpy words out of the way: I find that one of the most overlooked and most important aspects of TDD is that it forces you to define your class interfaces before actually implementing them. That's a very good thing when you need to assemble all your parts into one big product (well, into sub-products ;) ). What you need to do before writing your first tests is to have your domain model, deployment model and preferably a good chunk of your class diagrams ready before coding - simply because you need to identify your invariants, min and max values etc. before you can test for them. You should be able to identify these at the unit-testing level from your design.
So, in my experience (not in the experience of some author who enjoys mapping real-world analogies to OO :P ), TDD should go like this:
Create your deployment diagram, from the requirement specification (ofc, nothing is set in stone - ever)
Pick a user story to implement
Create or modify your domain model to include this story
Create or modify your class-diagram to include this story (including various design classes)
Identify test-vectors.
Create the tests based on the interface you made in step 4
Test the tests(!). This is a very important step..
Implement the classes
Test the classes
Go have a beer with your co-workers :)

Is midiOutPrepareHeader a quick call?

Do midiOutPrepareHeader and midiInPrepareHeader just set up some data fields, or do they do something more time-intensive?
I am trying to decide whether to build and destroy the MIDIHDRs as needed, or to maintain a pool of them.
You really have only two ways to tell (without the Windows source):
1) Profile it. Depending on how long it takes, either add a debug-only scoped timer that logs whenever the call takes longer than what you think is acceptable for your application, or go with your pool solution. Note, though, that the docs say not to modify the buffer once you call the prepare function, and it seems that if you wanted to re-use a buffer you would have to modify it. I'm not familiar enough with the docs to say one way or the other whether your proposed solution would work.
2) Step through the assembly and see. Don't be afraid. Get the MSFT public symbols and see if it looks like it's just filling out fields or if it's doing something complicated.
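For the profiling route, the debug-only scoped timer can be as simple as this sketch (the 50-microsecond budget is an arbitrary placeholder):

#include <chrono>
#include <cstdio>

// Logs when the enclosing scope exceeds its time budget.
class ScopedTimer {
public:
    ScopedTimer(const char* label, std::chrono::microseconds budget)
        : label_(label), budget_(budget),
          start_(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start_);
        if (elapsed > budget_)
            std::fprintf(stderr, "%s took %lld us\n", label_,
                         static_cast<long long>(elapsed.count()));
    }
private:
    const char* label_;
    std::chrono::microseconds budget_;
    std::chrono::steady_clock::time_point start_;
};

// Usage (wrap in #ifdef _DEBUG or similar to keep it debug-only):
// {
//     ScopedTimer t("midiOutPrepareHeader", std::chrono::microseconds(50));
//     midiOutPrepareHeader(hMidiOut, &hdr, sizeof(MIDIHDR));
// }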
