We have multiple projects, which can be roughly divided into three groups: A, B and C. A-projects and B-projects depend on C-projects, but A and B are not dependent on each other.
I see two major possible architectures:
Have one solution with all projects, and define build configurations (so when I build A, I won't build B)
Have two solutions (A and B), and include the C-projects in both.
Which one is preferred? Or is there better architecture?
How does Gradle resolve dependencies when version ranges are involved? Unfortunately, I couldn't find a sufficient explanation in the official docs.
Suppose we have a project A that has two declared dependencies B and C. Those dependencies specify a range of versions, i.e. for B it is [2.0, 3.0) and for C it is [1.0, 1.5].
B itself does not depend on C until version 2.8; that version introduces a strict dependency on C in the range [1.1, 1.2].
Looking at this example, we might determine that the resolved versions are:
B: 2.8 (because it is the highest version in the range)
C: 1.2 (because given B at 2.8, this is the highest version that satisfies both required ranges)
In general, it is not clear to me how this entire algorithm is carried out exactly. In particular, when ranges are involved, every possible choice of a concrete version inside a range might introduce different transitive dependencies (as in my example, where B introduces a dependency on C only at version 2.8), which can themselves declare dependencies with ranges, and so on, making the number of possibilities explode quickly.
Does it apply some sort of greedy strategy, in that it tries to settle on a version as early as possible, and if a later dependency conflicts with the already chosen version, does it backtrack and choose another one?
Any help in understanding this is very much appreciated.
EDIT: I've read here that the problem in general is NP-hard. So does Gradle actually simplify the process somehow, to make it solvable in polynomial time? And if so, how?
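For what it's worth, here is how I understand the example above would play out, written as a toy Python sketch. This is not Gradle's actual resolver; the available releases of B and C are made up, and it simply intersects the declared ranges and takes the highest version that fits.

b_releases = ["2.0", "2.5", "2.8"]          # hypothetical available versions of B
c_releases = ["1.0", "1.1", "1.2", "1.5"]   # hypothetical available versions of C

def as_tuple(v):
    return tuple(int(p) for p in v.split("."))

def highest_in(releases, lo, hi, hi_inclusive):
    """Highest release inside [lo, hi] (or [lo, hi) when hi_inclusive is False)."""
    fits = [v for v in releases
            if as_tuple(lo) <= as_tuple(v)
            and (as_tuple(v) <= as_tuple(hi) if hi_inclusive else as_tuple(v) < as_tuple(hi))]
    return max(fits, key=as_tuple) if fits else None

# Step 1: A's constraint on B is [2.0, 3.0) -> pick the highest match.
b = highest_in(b_releases, "2.0", "3.0", hi_inclusive=False)       # "2.8"

# Step 2: B 2.8 adds the constraint C in [1.1, 1.2]; intersect with A's [1.0, 1.5].
lo = max("1.0", "1.1", key=as_tuple)                               # "1.1"
hi = min("1.5", "1.2", key=as_tuple)                               # "1.2"
c = highest_in(c_releases, lo, hi, hi_inclusive=True)              # "1.2"

print(b, c)   # 2.8 1.2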
I want to use multiple variables to predict multiple targets. Note that multiple targets here doesn't mean multi-label.
Let's go for an example like this:
# In this example: x1, x2, x3 are used to predict y1, y2
import pandas as pd
pd.DataFrame({'x1': [1, 2, 3], 'x2': [2, 3, 4], 'x3': [1, 1, 1], 'y1': [1, 2, 1], 'y2': [2, 3, 3]})
x1 x2 x3 y1 y2
0 1 2 1 1 2
1 2 3 1 2 3
2 3 4 1 1 3
In my limited experience with data mining, I have found two solutions that might help:
Build two xgboost models to predict y1 and y2 respectively (a rough sketch of this is shown below)
Use a fully-connected layer to map [x1, x2, x3] to [y1, y2], which seems like a promising solution
I wanted to know whether this is good practice, and what would be a better way to predict multiple targets?
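For reference, here is a minimal sketch of the first option, assuming xgboost is installed. It just fits one independent XGBRegressor per target on the toy frame above (an equivalent shortcut would be sklearn's MultiOutputRegressor).

import pandas as pd
from xgboost import XGBRegressor

df = pd.DataFrame({'x1': [1, 2, 3], 'x2': [2, 3, 4], 'x3': [1, 1, 1],
                   'y1': [1, 2, 1], 'y2': [2, 3, 3]})
X = df[['x1', 'x2', 'x3']]

# One independent model per target
models = {target: XGBRegressor(n_estimators=50).fit(X, df[target])
          for target in ['y1', 'y2']}

predictions = {target: model.predict(X) for target, model in models.items()}
print(predictions)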
Regardless of your approach, two outputs means you need two functions. I hope it's clear that a layer producing two outputs is equivalent to two layers producing an output each.
The only thing worth taking into account here (only relevant for deeper models) is whether you want to build intermediate representations of your input that are shared for predicting both outputs, i.e. x → h1 → h2 → .. → hN, hN → y1, hN → y2. Doing so would force your hN representation to act as a task-agnostic, multi-purpose encoder, while avoiding the redundancy of having two models learn the same thing.
For shallow architectures, such as the single-layer one you described, this is meaningless.
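If you do go deeper, a minimal Keras sketch of that shared-trunk idea might look like the following (layer sizes and epoch count are arbitrary assumptions, not recommendations): one shared hidden stack hN feeding two separate output heads.

import numpy as np
from tensorflow.keras import Input, Model, layers

X = np.array([[1, 2, 1], [2, 3, 1], [3, 4, 1]], dtype="float32")
Y = np.array([[1, 2], [2, 3], [1, 3]], dtype="float32")   # columns: y1, y2

inputs = Input(shape=(3,))
h = layers.Dense(16, activation="relu")(inputs)   # shared representation h1
h = layers.Dense(16, activation="relu")(h)        # shared representation hN
y1 = layers.Dense(1, name="y1")(h)                # head for y1
y2 = layers.Dense(1, name="y2")(h)                # head for y2

model = Model(inputs, [y1, y2])
model.compile(optimizer="adam", loss="mse")       # same loss applied to both heads
model.fit(X, [Y[:, :1], Y[:, 1:]], epochs=200, verbose=0)
print(model.predict(X))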
I formerly used Eclipse and opened many projects in a single workspace; for example, I have projects A, B and C. Both B and C depend on A, so when I change code in A, I can see its usages in B and C immediately.
Then I switched to IDEA, which is awesome, but projects are opened independently and are only linked through their pom declarations. The side effect is that I cannot see usages in B and C immediately; I need to open B and C, build A, and check whether a change such as an access-level adjustment broke code in B and C.
So what is the best practice to resolve such issues?
PS: For me it's not a good option to add B and C as modules of A.
I'm looking for an algorithm to help me build 2D patterns based on rules. The idea is that I could write a script using a given set of parameters, and it would return a random, 2-dimensional sequence up to a given length.
My plan is to use this to generate image patterns based on rules. Things like image fractals or sprites for game levels could possibly use this.
For example, let's say that you can use A, B, C and D to create the pattern. The rule is that C and A can never be next to each other, and that D always follows C. Next, let's say I want a pattern of size 4x4. The result might be the following, which respects all the rules.
A B C D
B B B B
C D B B
C D C D
Are there any existing libraries that can do calculations like this? Are there any mathematical formulas I can read-up on?
While pretty inefficient in terms of runtime, backtracking is an often-used algorithm for such a problem.
It follows a simple pattern, and if written correctly, you can easily swap a different rule set into it.
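As a concrete illustration, here is a minimal Python sketch for the 4x4 example, under one possible reading of the rules (A and C never orthogonally adjacent; every C has a D immediately to its right, and D appears only there):

import random

SYMBOLS = "ABCD"
N = 4   # grid size from the example

def ok(grid, r, c, s):
    """Check the example rules for placing symbol s at row r, column c.
    Only the left and upper neighbours exist at this point (row-major fill)."""
    left = grid[r][c - 1] if c > 0 else None
    up = grid[r - 1][c] if r > 0 else None
    for nb in (left, up):
        if nb and {nb, s} == {"A", "C"}:     # A and C never adjacent
            return False
    if left == "C" and s != "D":             # every C is followed by a D
        return False
    if s == "D" and left != "C":             # D appears only right after a C
        return False
    if s == "C" and c == N - 1:              # a C in the last column can't get its D
        return False
    return True

def fill(grid, pos=0):
    """Fill the grid cell by cell; undo and retry when a placement leads nowhere."""
    if pos == N * N:
        return True
    r, c = divmod(pos, N)
    for s in random.sample(SYMBOLS, len(SYMBOLS)):   # random order -> random pattern
        if ok(grid, r, c, s):
            grid[r][c] = s
            if fill(grid, pos + 1):
                return True
            grid[r][c] = None                        # backtrack
    return False

grid = [[None] * N for _ in range(N)]
if fill(grid):
    print("\n".join(" ".join(row) for row in grid))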
Define your rule data structures; i.e., define the set of operations that the rules can encapsulate, and define the available cross-referencing that can be done. Once you've done this, you should have a clearer view of what type of algorithms to use to apply these rules to a potential result set.
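For instance, adjacency-style rules like the ones in the question could be encoded as plain data that the generator looks up (just one possible encoding; richer rule languages would need richer structures):

# One possible encoding of the example rules as data rather than code:
RULES = {
    # unordered pairs that may never be orthogonally adjacent
    "forbidden_neighbours": {frozenset({"A", "C"})},
    # symbol -> symbol that must appear immediately to its right
    "must_follow": {"C": "D"},
}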
Supposing that your rules are restricted to "type X is allowed to have type Y immediately to its left/right/top/bottom", you potentially have situations where generating possible patterns is computationally difficult. Take a look at Wang Tiles (a good source is the book Tilings and Patterns by Grunbaum and Shephard) and you'll see that with the sort of rules you might define, you can construct sets of Wang Tiles. Appropriate sets of these are Turing Complete.
For small rectangles, or for rule sets like yours, this may be only of academic interest. As mentioned elsewhere, a backtracking approach might be appropriate for your ruleset, in which case you may want to consider appropriate heuristics for the order in which new components are added to your grid. Again, depending on your rulesets, other approaches might work. E.g. if your ruleset admits many solutions, you might get a long way by randomly allocating many items to the grid before attempting to fill in the remaining gaps.
I'm putting together a tool for a colleague which helps to create a nice fixture list. I got about two-thirds of the way through the tool, collecting various data ... and then I hit a brick wall. It's less of a JavaScript problem and more of a maths/processing mental block.
Let's say I have 4 teams, and they all need to play each other at home and away. Using this tool - http://www.fixturelist.com/ - I can see that a home-and-away fixture with 4 teams would take 6 weeks/rounds/whatever. For the life of me, though, I can't work out how that was computed programmatically.
Can someone explain the logic to process this?
For info, I would use this existing tool, but there are other factors/features that I need to work in, hence doing a custom job. If only I could understand how to represent that logic!
In your example of 4 teams, call them a, b, c and d:
a has to play b, c, d
b has to play c, d (game against a already included in a's games)
c has to play d (game against a already included in a's games, against b already included in b's games)
That's 6 pairings; if they need to play at home and away, that doubles to 12 games. You can play at most 4/2 = 2 games a week, so that's 6 weeks.
With n teams you need x games, where:
x = (n-1 + n-2 + n-3 + ... + 1) * 2
This takes y weeks, where:
y = x/(n/2) = 2x/n
This can be simplified with an arithmetic series fairly easily, or calculated with a for loop if you want.
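Here is a minimal sketch of how the schedule itself can be generated (not necessarily how fixturelist.com does it, just the standard "circle method" of fixing one team and rotating the rest); the 6 weeks and 12 games for 4 teams fall straight out of it:

def round_robin(teams):
    """Double round-robin via the circle method: fix one team, rotate the rest,
    then repeat every pairing with home/away swapped."""
    teams = list(teams)
    if len(teams) % 2:
        teams.append(None)                 # dummy team = bye week
    n = len(teams)
    rounds, order = [], teams[:]
    for _ in range(n - 1):
        pairs = [(order[i], order[n - 1 - i]) for i in range(n // 2)]
        rounds.append([p for p in pairs if None not in p])
        order = [order[0]] + [order[-1]] + order[1:-1]   # rotate all but the first
    rounds += [[(away, home) for (home, away) in rnd] for rnd in rounds]
    return rounds

schedule = round_robin(["a", "b", "c", "d"])
print(len(schedule), "weeks")                          # 6 = 2 * (4 - 1)
print(sum(len(week) for week in schedule), "games")    # 12 = 4 * 3
for week, games in enumerate(schedule, 1):
    print(week, games)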