Computing optimum solution for arranging blocks with minimum moves - algorithm

What started as a simple problem has turned into a challenge. And now I'm perilously close to being defeated by it. Help?
It starts off so simply. Picture a class like this:
class Unit
{
// Where we are
public int CurrentPos;
// How long we are
public int Length;
// Where we belong
public int TargetPos;
}
Now, assume you have a (static) collection of hundreds of thousands of these (like this). The goal is to move things from their CurrentPos, to their TargetPos. The catch is that sometimes there's already something at TargetPos (or partially overlapping it). In that case, the 'something' (or somethings) will need to get moved out of the way first.
Since 'moving' is an expensive operation, an optimum solution would only ever move a Unit once (from its current position to its target position). So I start by moving Units whose TargetPos is already free, then moving things into the space freed by the first moves, etc.
But eventually I run into the real challenge. At its simplest: I am trying to move A, but B is in the way, so I try to move B, but C is in the way, so I try to move C, but A is in the way. A->B->C->A.
For people who prefer numbers:
+------------+--------+-----------+
| CurrentPos | Length | TargetPos |
+------------+--------+-----------+
| 100 | 10 | 110 | A
| 110 | 5 | 120 | B
| 120 | 10 | 100 | C
+------------+--------+-----------+
I want to move the 10 at Pos 100 to 110, but there's already something there. So I try to move the 5 at 110 to 120, but there's already something there too. Finally I try to move the 10 at 120 to 100, but, wait, that's a loop!
To "break" the loop I could pick any of those entries and just move it out of the way. But picking the '5' allows me to minimize the size of "Move Out of the Way" moves (as contrasted with "Move Down to TargetPos" moves). Items that are "Moved out of the Way" will still have to be moved a second time to their own targets once the way is clear.
To be clear, what I'm trying to minimize isn't the number of "Move out of the ways," it's the size. Moving four Units of Length 2 is a better deal than moving one Unit of Length 10.
Logic tells me there's got to be 1 optimum solution for a given data set, no matter how large. One that is the absolute minimum "Lengths Moved" required to break all the loops. The trick is, how do you find it?
Rather than lay out all the reasons this is hard, let's head straight to the code. I've created a simple framework (written in C#) that enables me to try various strategies without having to code each one from scratch.
First there's my implementation of Unit:
class Unit : IComparable<int>
{
/// <summary>
/// Where we are
/// </summary>
public int CurrentPos;
/// <summary>
/// How long we are
/// </summary>
public readonly int Length;
/// <summary>
/// Where we belong
/// </summary>
public readonly int TargetPos;
/// <summary>
/// Units who are blocking me
/// </summary>
public List<Unit> WhoIsBlockingMe;
public Unit(int c, int l, int t)
{
CurrentPos = c;
Length = l;
TargetPos = t;
WhoIsBlockingMe = null;
}
/// <summary>
/// Indicate that a child is no longer to be considered blocking.
/// </summary>
/// <param name="rb">The child to remove</param>
/// <returns>How many Units are still blocking us.</returns>
public int UnChild(Unit rb)
{
bool b = WhoIsBlockingMe.Remove(rb);
Debug.Assert(b);
return WhoIsBlockingMe.Count;
}
public override string ToString()
{
return string.Format("C:{0} L:{1} T:{2}", CurrentPos, Length, TargetPos);
}
public override int GetHashCode()
{
return TargetPos.GetHashCode();
}
/// <summary>
/// Used by BinarySearch
/// </summary>
/// <param name="other">CurrentPos being sought.</param>
/// <returns></returns>
public int CompareTo(int other)
{
return CurrentPos.CompareTo(other);
}
}
Mostly what you'd expect. Probably worth highlighting WhoIsBlockingMe. This is the list of other Units that are currently preventing this Unit from moving to its desired TargetPos. It is automatically populated and maintained by the framework.
And here's the framework:
abstract class FindOpt
{
#region Members
protected static readonly string DataDir = @"c:\vss\findopt2\data\"; // <--------- Adjust to suit!!!
/// <summary>
/// The Pos where I move Units "out of the way" (see MoveOut).
/// </summary>
private int m_RunningLast;
/// <summary>
/// Count of MoveOuts executed.
/// </summary>
private int m_Moves;
/// <summary>
/// The total size of MoveOuts executed. This is what I'm trying to minimize.
/// </summary>
private int m_MoveSize;
/// <summary>
/// The complete list of Units read from export.tab.
/// </summary>
protected readonly List<Unit> m_Units;
/// <summary>
/// A collection to keep track of who would get freed by moving a particular unit.
/// </summary>
protected readonly Dictionary<Unit, List<Unit>> m_Tree;
/// <summary>
/// Units freed (possibly due to cascading) waiting to be MoveDown.
/// </summary>
protected readonly Queue<Unit> m_ZeroChildren;
/// <summary>
/// Is m_Units currently sorted properly so BinarySearch will work?
/// </summary>
private bool UnitsOutOfDate;
#endregion
public FindOpt()
{
m_RunningLast = int.MaxValue;
m_Moves = 0;
m_MoveSize = 0;
m_Units = new List<Unit>();
m_Tree = new Dictionary<Unit, List<Unit>>();
m_ZeroChildren = new Queue<Unit>();
UnitsOutOfDate = true;
// Load the Units
using (StreamReader sr = new StreamReader(DataDir + @"export.tab"))
{
string s;
while ((s = sr.ReadLine()) != null)
{
string[] sa = s.Split('\t');
int c = int.Parse(sa[0]);
int l = int.Parse(sa[1]);
int t = int.Parse(sa[2]);
Unit u = new Unit(c, l, t);
m_Units.Add(u);
}
}
}
public int CalcBest()
{
// Build the dependency tree.
BuildTree();
// Process anything that got added to m_ZeroChildren Queue while
// building the tree.
ProcessQueue();
// Perform any one time initialization subclasses might require.
Initialize();
// Keep looping until no Units are blocking anything.
while (m_Tree.Count > 0)
{
// Pick a Unit to MoveOut.
Unit rb = PickVictim();
// Subclass gave up (or is broken)
if (rb == null)
return int.MaxValue;
// When the Unit gets MoveOut, any items in
// m_Tree that were (solely) blocked by it will get
// added to the queue.
WhackVictim(rb);
// Process any additional Units freed by WhackVictim
ProcessQueue();
}
Console.WriteLine("{0} Moves: {1}/{2}", this.GetType().Name, m_Moves, m_MoveSize);
return m_MoveSize;
}
// Intended to be overridden by child class
protected virtual void Initialize()
{
}
// Intended to be overridden by child class
protected abstract Unit PickVictim();
// Called by BinarySearch to re-sort m_Units as
// needed. Both MoveOut and MoveDown can trigger this.
private void CheckUnits()
{
if (UnitsOutOfDate)
{
m_Units.Sort(delegate (Unit a, Unit b)
{
return a.CurrentPos.CompareTo(b.CurrentPos);
});
UnitsOutOfDate = false;
}
}
protected int BinarySearch(int value)
{
CheckUnits();
int lower = 0;
int upper = m_Units.Count - 1;
while (lower <= upper)
{
int adjustedIndex = lower + ((upper - lower) >> 1);
Unit rb = m_Units[adjustedIndex];
int comparison = rb.CompareTo(value);
if (comparison == 0)
return adjustedIndex;
else if (comparison < 0)
lower = adjustedIndex + 1;
else
upper = adjustedIndex - 1;
}
return ~lower;
}
// Figure out who all is blocking someone from moving to their
// TargetPos. Null means no one.
protected List<Unit> WhoIsBlockingMe(int pos, int len)
{
List<Unit> ret = null;
int a1 = BinarySearch(pos);
if (a1 < 0)
{
a1 = ~a1;
if (a1 > 0)
{
Unit prev = m_Units[a1 - 1];
if (prev.CurrentPos + prev.Length > pos)
{
ret = new List<Unit>(2);
ret.Add(prev);
}
}
}
int endpoint = pos + len;
while (a1 < m_Units.Count)
{
Unit cur = m_Units[a1];
if (cur.CurrentPos < endpoint)
{
if (ret == null)
ret = new List<Unit>(2);
ret.Add(cur);
}
else
{
break;
}
a1++;
}
return ret;
}
// Move a Unit "Out of the way." This is the one we are
// trying to avoid. And if we *must*, we still want to
// pick the ones with the smallest rb.Length.
protected void MoveOut(Unit rb)
{
// By definition: Units that have been "MovedOut" can't be blocking anyone.
// Should never need to do this to a Unit more than once.
Debug.Assert(rb.CurrentPos < m_RunningLast, "Calling MoveOut on something that was already moved out");
// By definition: Something at its target can't be blocking anything and
// doesn't need to be "MovedOut."
Debug.Assert(rb.CurrentPos != rb.TargetPos, "Moving from TargetPos to Out");
m_Moves++;
m_MoveSize += rb.Length;
m_RunningLast -= rb.Length;
rb.CurrentPos = m_RunningLast;
UnitsOutOfDate = true;
}
// This is the "good" move that every Unit will eventually
// execute, moving it from CurrentPos to TargetPos. Units
// that have been "MovedOut" will still need to be moved
// again using this method to their final destination.
protected void MoveDown(Unit rb)
{
rb.CurrentPos = rb.TargetPos;
UnitsOutOfDate = true;
}
// child of rb has been moved, either out or down. If
// this was rb's last child, it's free to be MovedDown.
protected void UnChild(Unit rb, Unit child)
{
if (rb.UnChild(child) == 0)
m_ZeroChildren.Enqueue(rb);
}
// rb is being moved (either MoveOut or MoveDown). This
// means that all of the things that it was blocking now
// have one fewer thing blocking them.
protected void FreeParents(Unit rb)
{
List<Unit> list;
// Note that a Unit might not be blocking anyone, and so
// would not be in the tree.
if (m_Tree.TryGetValue(rb, out list))
{
m_Tree.Remove(rb);
foreach (Unit rb2 in list)
{
// Note that if rb was the last thing blocking rb2, rb2
// will get added to the ZeroChildren queue for MoveDown.
UnChild(rb2, rb);
}
}
}
protected void ProcessQueue()
{
// Note that FreeParents can add more entries to the queue.
while (m_ZeroChildren.Count > 0)
{
Unit rb = m_ZeroChildren.Dequeue();
FreeParents(rb);
MoveDown(rb);
}
}
protected bool IsMovedOut(Unit rb)
{
return (rb == null) || (rb.CurrentPos >= m_RunningLast) || (rb.CurrentPos == rb.TargetPos);
}
private void BuildTree()
{
// Builds m_Tree (Dictionary<Unit, List<Unit>>).
// When the Unit in the Key is moved (either MoveOut or MoveDown), each of
// the Values has one less thing blocking them.
// Victims handles the special case of Units blocking themselves. By definition,
// no moving of other units can free this, so it must be a MoveOut.
List<Unit> victims = new List<Unit>();
foreach (Unit rb in m_Units)
{
rb.WhoIsBlockingMe = WhoIsBlockingMe(rb.TargetPos, rb.Length);
if (rb.WhoIsBlockingMe == null)
{
m_ZeroChildren.Enqueue(rb);
}
else
{
// Is one of the things blocking me myself?
if (rb.WhoIsBlockingMe.Contains(rb))
{
victims.Add(rb);
}
// Add each of my children to the appropriate node in m_Tree, indicating
// they are blocking me.
foreach (Unit rb2 in rb.WhoIsBlockingMe)
{
List<Unit> list;
if (!m_Tree.TryGetValue(rb2, out list))
{
// Node doesn't exist yet.
list = new List<Unit>(1);
m_Tree.Add(rb2, list);
}
list.Add(rb);
}
}
}
foreach (Unit rb in victims)
{
WhackVictim(rb);
}
}
// Take the "Victim" proposed by a subclass's PickVictim
// and MoveOut it. This might cause other items to get added
// to the ZeroChildren queue (generally a good thing).
private void WhackVictim(Unit rb)
{
FreeParents(rb);
MoveOut(rb);
}
}
Things worth highlighting here:
The DataDir controls where to read data. Adjust this to point to where you've downloaded (and extracted) export.tab.
The expectation is that child classes will choose which Unit (aka Victim) to move out of the way. Once it does, the framework will move it, along with any Units that that move frees up.
You might also want to pay attention to m_Tree. If I have a Unit, I can use this Dictionary to find out who all is being blocked by it (the reverse of Unit.WhoIsBlockingMe).
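For example, from inside a subclass a lookup might look something like this (a small sketch; victim is just a hypothetical local for whatever Unit you are considering moving):
List<Unit> blockedByVictim;
if (m_Tree.TryGetValue(victim, out blockedByVictim))
{
    // Each of these Units would have one fewer blocker once 'victim' is moved.
    foreach (Unit u in blockedByVictim)
        Console.WriteLine("{0} is (partly) waiting on {1}", u, victim);
}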
And here is a simple class that uses the framework. Its purpose is to tell the framework which Unit it should MoveOut next. In this case it just offers up Victims starting from the largest length and working its way down. Eventually it's going to succeed, since it will just keep offering Units until there are none left.
class LargestSize : FindOpt
{
/// <summary>
/// The list of Units left that are blocking someone.
/// </summary>
protected readonly List<Unit> m_AltTree;
private int m_Index;
public LargestSize()
{
m_AltTree = new List<Unit>();
m_Index = 0;
}
protected override void Initialize()
{
m_AltTree.Capacity = m_Tree.Keys.Count;
// m_Tree.Keys is the complete list of Units that are blocking someone.
foreach (Unit rb in m_Tree.Keys)
m_AltTree.Add(rb);
// Process the largest Units first.
m_AltTree.Sort(delegate (Unit a, Unit b)
{
return b.Length.CompareTo(a.Length);
});
}
protected override Unit PickVictim()
{
Unit rb = null;
for (; m_Index < m_AltTree.Count; m_Index++)
{
rb = m_AltTree[m_Index];
if (!IsMovedOut(rb))
{
m_Index++;
break;
}
}
return rb;
}
}
Nothing too surprising. Perhaps worth noting: moving one Unit will often allow other Units to be moved as well (that's kinda the point of breaking a loop). Such being the case, this code uses IsMovedOut to see if the next Victim it's planning to offer has already been moved (either out of the way or down to its TargetPos). If so, we skip it and move on to the next.
As you might imagine, LargestSize does a pretty terrible job at minimizing the size of MoveOuts (moving a total of almost 12 million "Lengths"). Although it does a pretty good job at minimizing the number of moves (895), that's not what I'm after. It's also pleasantly fast (~1 second).
LargestSize Moves: 895 / 11,949,281
A similar routine can be used to start with the smallest and work its way up. That gives way more moves (which is interesting, but not really important), and a much smaller move size (which is a good thing):
SmallestSize Moves: 157013 / 2,987,687
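For reference, the smallest-first variant isn't spelled out above, but it is just the LargestSize skeleton with the sort direction flipped; a minimal sketch:
class SmallestSize : FindOpt
{
    // Same bookkeeping as LargestSize, but sorted ascending by Length.
    protected readonly List<Unit> m_AltTree = new List<Unit>();
    private int m_Index = 0;

    protected override void Initialize()
    {
        m_AltTree.Capacity = m_Tree.Keys.Count;
        foreach (Unit rb in m_Tree.Keys)
            m_AltTree.Add(rb);
        // Process the smallest Units first.
        m_AltTree.Sort(delegate (Unit a, Unit b)
        {
            return a.Length.CompareTo(b.Length);
        });
    }

    protected override Unit PickVictim()
    {
        while (m_Index < m_AltTree.Count)
        {
            Unit rb = m_AltTree[m_Index++];
            if (!IsMovedOut(rb))
                return rb; // the smallest Unit that still needs a MoveOut
        }
        return null; // nothing left to offer
    }
}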
As I've mentioned, I've got others, some better (with as few as 294 moves) and some worse. However, my very best move size so far is 1,974,831. Is that good? Bad? Well, I happen to know that there's a solution that requires less than 340,000, so... pretty bad.
For completeness, here's the code to call all this:
class Program
{
static void Main(string[] args)
{
FindOpt f1 = new LargestSize();
f1.CalcBest();
}
}
Stitch those 4 pieces together and you've got the complete test harness. To test out your own approach, just modify PickVictim in the subclass to return your best guess at which Unit should be MovedOut next.
So, what's my goal here?
I'm trying to find a way to compute the "optimal" set of MoveOuts to break every loop, where optimal means the smallest total Length moved. And I'm not just interested in the answer for this sample set; I'm trying to create a way to find the optimal result for any data set in a reasonable amount of time (think: seconds, not days). Thus walking all possible permutations of a dataset with hundreds of thousands of records (can you say 206858!?) is probably not the solution I'm after.
What I can't quite wrap my head around is where to start. Should I pick this Unit? Or that one? Since virtually every Unit is in a loop, every one can be freed up by moving something else. So given 2 Units, how can you say with certainty which one is going to lead to the optimal solution?
Smallest and Largest clearly aren't going to get you there, at least not alone.
Looking to see which Units would free the most parents? Tried that.
Looking to see who frees the biggest parents? Tried that too.
How about making a list of all the loops? Then you could pick the smallest in the loop. You could even figure out which Units are involved in multiple loops. Moving a Unit with a length of 10 that breaks 10 different loops seems like a better deal than a Length of 5 that only breaks 1, yes? Turns out this is much harder than you might think. There are a LOT of loops (more than will fit in my 64 GB of RAM). Still, my current 'best' lies down this path. Of course my current best stinks...
What else is worth mentioning?
This code is written in c#, but that's just because I find it easier to prototype there. Since what I'm after is the algorithm, feel free to write in whatever language you like. The sample data set is just tab-delimited text, and using the framework is entirely optional. If your solution is written in something I can't read, I'll ask questions.
Remember the goal isn't (just) to figure out the optimum solution for this dataset. I want an efficient way to compute the optimum result for any data set.
Don't use threading/GPU/etc to speed up processing. Again: looking for an efficient algorithm. Without that, it doesn't matter what else you do.
Assume all Lengths, CurrentPos and TargetPos are > 0.
Assume target regions (TargetPos through TargetPos + Length) never overlap each other.
Splitting Units into smaller lengths is not permitted.
In case you missed the link above, the sample data set is here. Note that in order to keep the download size down, I have omitted the (~300,000) Units that aren't blocked by loops. The framework already handles them, so they're just distracting.
There is 1 Unit that is blocking itself (good old 900). I left it in the dataset, but the framework already handles it explicitly.
There are some Units that aren't blocking anything, but still can't be moved because someone is blocking them (ie they are in m_Units and have values in their WhoIsBlockingMe, but are not in m_Tree.Keys since moving them won't free up anything else). Not sure what to do with this information. Move them first? Last? Can't see how knowing this helps, but there it is.
Doing some analysis, I find that roughly 1/3 of the 206,858 Units in this dataset are of length 1. In fact, 2/3 are Length 8 or less. Only 3 of them are hugely big (ie bigger than the currently known Optimal solution). Move them first? Last? Not quite sure what to do with this info either.
Is StackOverflow the best place for this question? The code is 'broken' in that it doesn't give me the result I want. I've heard of CodeGolf, but never been there. Since this is just a test harness and not production code, CodeReview seemed like a poor fit.
Edit 1: In response to the comment by @user58697, here's a case where a single Unit (A) is blocking 10 others; for good measure, I made it a loop:
+------------+--------+-----------+
| CurrentPos | Length | TargetPos |
+------------+--------+-----------+
| 100 | 10 | 1000 | A
| 120 | 20 | 81 | B
| 140 | 1 | 101 | C
| 141 | 1 | 102 | D
| 142 | 1 | 103 | E
| 143 | 1 | 104 | F
| 144 | 1 | 105 | G
| 145 | 1 | 106 | H
| 146 | 1 | 107 | I
| 1003 | 1 | 108 | J
| 148 | 50 | 109 | K
+------------+--------+-----------+
Here we see that B is blocked by A (the last bit of B overlaps the first bit of A). Likewise the last bit of A blocks the first bit of K. C-J are obviously blocked as well. So not only is A blocking multiple Units, it's blocking Lengths totaling 78, even though it is only Length 10 itself. And of course A itself is blocked by J.
Edit 2: Just a quick update.
The original size of my sample data was just over 500,000 Units. After handling the simple cases (TargetPos already free, etc.), I was able to trim that down to just over 200,000 (which is the set I posted).
However, there's another chunk that can be removed. As described above, a loop would normally look like A->B->C->A. But what about M->N->O->A->B->C->A? MNO aren't really in loops. In particular, N has both a parent and children, but still isn't in a loop. As soon as A->B->C is broken, MNO will be fine. Contrariwise, moving out MNO isn't going to free anything that won't be freed when ABC is broken.
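One way to identify these MNO-type Units (a sketch of a standard technique, not necessarily the code I actually ran): compute the strongly connected components of the blocking graph, with an edge from each Unit to everything in its WhoIsBlockingMe list, and keep only Units that sit in a component of size greater than one or that block themselves. Everything else gets freed by the normal ProcessQueue cascade once the real loops are broken.
using System;
using System.Collections.Generic;

static class LoopTrimmer
{
    static readonly List<Unit> s_Empty = new List<Unit>();

    // Returns the Units that are genuinely part of a blocking cycle: members of a
    // strongly connected component with more than one Unit, or a Unit that blocks
    // itself. Iterative Tarjan, so deep chains don't blow the call stack.
    public static HashSet<Unit> UnitsInLoops(List<Unit> units)
    {
        var index = new Dictionary<Unit, int>();
        var lowLink = new Dictionary<Unit, int>();
        var onStack = new HashSet<Unit>();
        var sccStack = new Stack<Unit>();
        var inLoop = new HashSet<Unit>();
        int next = 0;

        foreach (Unit root in units)
        {
            if (index.ContainsKey(root))
                continue;
            var dfs = new Stack<(Unit node, int edge)>();
            index[root] = lowLink[root] = next++;
            sccStack.Push(root); onStack.Add(root);
            dfs.Push((root, 0));

            while (dfs.Count > 0)
            {
                var (node, edge) = dfs.Pop();
                List<Unit> adj = node.WhoIsBlockingMe ?? s_Empty;
                if (edge < adj.Count)
                {
                    dfs.Push((node, edge + 1));          // come back for the next edge
                    Unit to = adj[edge];
                    if (!index.ContainsKey(to))
                    {
                        index[to] = lowLink[to] = next++;
                        sccStack.Push(to); onStack.Add(to);
                        dfs.Push((to, 0));
                    }
                    else if (onStack.Contains(to))
                    {
                        lowLink[node] = Math.Min(lowLink[node], index[to]);
                    }
                }
                else
                {
                    // All edges done. If 'node' is the root of its SCC, pop the component.
                    if (lowLink[node] == index[node])
                    {
                        var scc = new List<Unit>();
                        Unit w;
                        do
                        {
                            w = sccStack.Pop(); onStack.Remove(w);
                            scc.Add(w);
                        } while (!ReferenceEquals(w, node));

                        bool selfBlocking = scc.Count == 1 && adj.Contains(node);
                        if (scc.Count > 1 || selfBlocking)
                            foreach (Unit u in scc) inLoop.Add(u);
                    }
                    // Pass our final low-link up to whoever pushed us.
                    if (dfs.Count > 0)
                    {
                        Unit parent = dfs.Peek().node;
                        lowLink[parent] = Math.Min(lowLink[parent], lowLink[node]);
                    }
                }
            }
        }
        return inLoop;
    }
}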
Trimming out these MNO types of Units drops the count from 200,000 to more like 52,000. Does that make a difference? Well, kinda. Even my simple samples from above improve. Largest drops from 12 million to 9 million, and Smallest drops from 3 million to 1 million.
Still a long ways from the < 340,000 I know can be done, but an improvement.
I still choose to believe there's a way to logic my way through this that doesn't involve testing 52,000! permutations.
Edit 3: After trying some complex (and ultimately fruitless) alternatives attempting to map all the loops and figure out how best to break them, I scratched all that and started over.
There are 2 ways to move a Unit:
Use MoveOut to move it out.
Use MoveOut on all the things blocking it (ie using WhoIsBlockingMe from the framework).
With that in mind, I start with the largest (remaining) Unit. Where the total cost of moving all the children (the Units blocking it) is cheaper than moving it directly, I walk each of them, again figuring out whether it's cheaper to move them or their children, and so on.
There are some frills, but that's the basic idea.
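Stripped of the frills, the cost comparison at the heart of that idea looks roughly like this (a sketch of the concept only; it has no memoization, and a real version needs the bookkeeping to actually perform the MoveOuts down whichever branch wins):
// Cost of getting 'u' moved: either MoveOut it (its own Length), or clear everything
// blocking it so it can MoveDown for free. Units already on the current path are
// treated as "just MoveOut it" so the recursion can't chase a loop forever.
int ClearCost(Unit u, HashSet<Unit> onPath)
{
    if (u.WhoIsBlockingMe == null || u.WhoIsBlockingMe.Count == 0)
        return 0;                       // nothing in the way: a free MoveDown
    if (!onPath.Add(u))
        return u.Length;                // back where we started: break the loop here
    int children = 0;
    foreach (Unit b in u.WhoIsBlockingMe)
        children += ClearCost(b, onPath);
    onPath.Remove(u);
    return Math.Min(u.Length, children);
}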
This gives me a new personal best of 382,962. Not quite the best (known) solution (< 340,000), but getting closer. And not bad for something that runs in 1/2 a second.
I could still use some help here, so if someone is feeling motivated to give this a go, I'm prepared to post updated code/files.

Related

Text typing effect in C# based on current FPS and characters per second?

I am working on a typing effect for TextMeshPro texts in Unity that should take into account both the current FPS and the user input 'characters per second' (to determine the speed).
The implementation runs in an IEnumerator and displays one or more characters of a given text at a time. After displaying the char(s), there is a 'yield return new WaitForSeconds()' before the next round of revealing begins. (It should be possible to display more than one char at a time, because WaitForSeconds() takes too much time in some cases, even if I enter very small numbers. So rather than waiting after every single char, the wait should happen after a precomputed number of chars to maintain the specified typing speed.)
I'm not sure if my approach works as intended in the different possible scenarios and additionally, I'm not happy with the computations within the IEnumerator because I think it could slow down the revealing process.
I tested the typing effect with different FPS in PlayMode (by setting the FPS with "Application.targetFrameRate" manually) and noticed that with FPS lower than 30 the text is revealed very haltingly and thus the viewer gets frustrated because it looks laggy. Maybe someone has experience with this and can suggest an easier way of implementation?
// Precompute the time for displaying a single character by a given number of characters per second:
public void GetTimePerChar() {
if (_charsPerSecond > 0) {
TimePerChar = 1f / _charsPerSecond;
} else {
TimePerChar = 0f;
}
}
// Coroutine that reveals characters of a given text over time:
private IEnumerator DisplayText() {
/* some other code */
while (TmpText.maxVisibleCharacters < TotalCharCount) { // Reveal characters until the total amount of chars is reached
if (CharsPerSecond > 0) {
TimePerFrame = Time.deltaTime; // How much time does 1 frame take
if (TimePerChar > 0) {
CharsPerFrame = TimePerFrame / TimePerChar; // How many chars can be displayed within a single frame
} else {
CharsPerFrame = 0f;
}
CharsPerRound = (int)Math.Ceiling(CharsPerFrame); // Rounded-up number of chars (as fractions don't make sense to display)
TimePerRound = TimePerChar * CharsPerRound; // The individual waiting time before the next char(s) get revealed
TmpText.maxVisibleCharacters += CharsPerRound; // Reveal one or more characters at a time
if (CharsPerRound - CharsPerFrame > 0) { // If the number of chars got rounded up wait afterwards as long as it shall take to display them
yield return new WaitForSeconds(TimePerRound);
}
} else {
TmpText.maxVisibleCharacters = TotalCharCount; // With zero CharsPerSecond, display the whole text at once
}
}
/* some other code */
}
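One possible alternative worth trying (a sketch only, assuming the same TmpText, TotalCharCount and CharsPerSecond members as above): skip WaitForSeconds entirely, accumulate the fractional characters earned each frame from Time.deltaTime, and reveal whole characters as they become due, yielding exactly once per frame.
// Sketch: reveal characters based on accumulated frame time instead of WaitForSeconds.
private IEnumerator DisplayTextByDelta()
{
    if (CharsPerSecond <= 0)
    {
        TmpText.maxVisibleCharacters = TotalCharCount; // show everything at once
        yield break;
    }
    float owed = 0f;                                   // fractional characters not yet shown
    while (TmpText.maxVisibleCharacters < TotalCharCount)
    {
        owed += Time.deltaTime * CharsPerSecond;       // characters earned this frame
        int whole = (int)owed;
        if (whole > 0)
        {
            TmpText.maxVisibleCharacters = Mathf.Min(
                TmpText.maxVisibleCharacters + whole, TotalCharCount);
            owed -= whole;
        }
        yield return null;                             // wait exactly one frame
    }
}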

How can I randomize a video to play after another by pressing a key on Processing?

I'm quite new to Processing.
I'm trying to make Processing randomly play a video after I clear the screen by mouse click, so I create an array that contains 3 videos and play one at a time.
Holding 'Spacebar' will play a video and releasing it will stop the video. A mouse click will clear the screen to an image. The question is how can it randomize to another video if I press spacebar again after clearing the screen.
I've been searching all over the internet but couldn't find any solution for my code, or maybe my logic is wrong. Please help me.
Here's my code.
int value = 0;
PImage photo;
import processing.video.*;
int n = 3; //number of videos
float vidN = random(0, n+1);
int x = int (vidN);
Movie[] video = new Movie[3];
//int rand = 0;
int index = 0;
void setup() {
size(800, 500);
frameRate(30);
video = new Movie[3];
video[0] = new Movie (this, "01.mp4");
video[1] = new Movie (this, "02.mp4");
video[2] = new Movie (this, "03.mp4");
photo = loadImage("1.jpg");
}
void draw() {
}
void movieEvent(Movie video) {
video.read();
}
void keyPressed() {
if (key == ' ') {
image(video[x], 0, 0);
video[x].play();
}
}
void mouseClicked() {
if (value == 0) {
video[x].jump(0);
video[x].stop();
background(0);
image(photo, 0, 0);
}
}
You have this bit of logic in your code which picks a random integer:
float vidN = random(0, n+1);
int x = int (vidN);
In theory, if you want to randomise to another video when the spacebar is pressed again you can re-use this bit of logic:
void keyPressed() {
if (key == ' ') {
x = int(random(n+1));
image(video[x], 0, 0);
video[x].play();
}
}
(Above I've condensed the two lines declaring vidN and x into one, but the logic is the same. If having two operations on the same line (picking a random float between 0 and n+1, then rounding down to an integer) makes it harder to follow, feel free to expand it back into two lines: readability is more important.)
As side notes, these bits of logic look a bit off:
The if (value == 0) condition will always be true since value never changes, making both value and the condition redundant. (Perhaps you plan to use it for something else later? If so, you could save separate sketches, but start with the simplest version and exclude anything you don't need; in general, remove any bit of code you don't need. It will be easier to read, follow and change.)
Currently your logic says that whenever you click, the current video resets to the start and stops playing. Once you add the logic to randomise the video on spacebar, the most recent frame of the newly randomised video will display (image(video[x], 0, 0);) and that video will start playing. Unless you click to stop the current video, previously started videos (via play()) will keep playing in the background (e.g. if they have audio you'll hear them even though you only see one static frame from the last time space was pressed).
Maybe this is the behaviour you want? You've explained a localised section of what you want to achieve, but not what the program as a whole should do. That would help others provide suggestions regarding logic.
In general, try to break the problem down to simple steps that you can test in isolation. Once you've found a solid solution for each part, you can add each part into a main sketch one at a time, testing each time you add something. (This way if something goes wrong it's easy to isolate/fix).
Kevin Workman's How To Program is a great article on this.
As a mental exercise it will help to read through the code line by line and imagine what it might do. Then run it and see if the code behaves as you predicted/intended. Slowly but surely this will get better and better. Have fun learning!

Spritekit collisions between arrays of SpriteNodes

I'm developing a game that involves a number of Sprite Arrays and I want to detect collisions between them and specify functions depending on which etc.
So say I have an array of 16 balls, ballArray[i], and 16 blocks, blockArray[i], which I can easily iterate through using the index number i.
I have given the balls a Physics Category - Balls - and similarly for the Blocks. Then I have 16 ID Physics categories, say ID1, ID2, ID3, ID4...
So I can detect a collision and know that it was a Ball hitting a Block, but I then need to know which ball and which block.
What's the best or easiest way to do this? I've been reading about the enumerateChildNodes(withName:) function but have not used it. Or can I create an array of PhysicsCategories which I could iterate through along with the sprite arrays to compare and identify?
EDIT:
Thanks everyone for the help. I have finally cracked it. Surprisingly, in the end the code is a lot simpler than I first thought. I'm still not fully understanding where the bits are sitting in my categories, but I have it working.
I'll try to post my final working code - you may have suggestions to improve it. Many thanks again and apologies for my poor Stack Overflow etiquette - I am new here :-)
So my Physics Categories were defined:
struct PhysicsCategories {
static let BoxCategoryMask = UInt32(1<<7)
static let BallCategoryMask = UInt32(1<<8)
}
and then in my function to build an array of Sprites
boxBloqArray[i].physicsBody?.categoryBitMask = PhysicsCategories.BoxCategoryMask | UInt32(i)
boxBloqArray[i].physicsBody!.contactTestBitMask = PhysicsCategories.BallCategoryMask
and the same for the ball array but just the categoryBitMask
ballBloqArray[i].physicsBody?.categoryBitMask = PhysicsCategories.BallCategoryMask | UInt32(i)
I'm still not really sure why it has to be this way round, but the final problem this evening was that I had the two bodies the wrong way round in the && comparison. Here is the final working detection code:
var body1 = SKPhysicsBody()
var body2 = SKPhysicsBody()
if contact.bodyA.categoryBitMask < contact.bodyB.categoryBitMask {
body1 = contact.bodyA
body2 = contact.bodyB
}
else {
body1 = contact.bodyB
body2 = contact.bodyA
}
// Check node collisions
for n in 0...15 {
for i in 0...15 {
if body2.categoryBitMask == PhysicsCategories.BallCategoryMask | UInt32(n) && body1.categoryBitMask == PhysicsCategories.BoxCategoryMask | UInt32(i) {
//if body1.node != nil {
print("Ball\(n) hit Box\(i)")
//}
}
}
}
and that is now printing the correct collisions.... lovely!... onwards to
the next step... thanks again
Once you have the two nodes involved in the collision as discussed in the answer by @Luca Angeletti, you can turn those into an index in various ways.
If you've made each type of node a specialized subclass and you have the appropriate indexes stored as class members, then you can convert to the appropriate class and look at the index fields, e.g.,
if let block = nodeA as? BlockNode, let ball = nodeB as? BallNode {
print("block \(block.blockIndex) hit ball \(ball.ballIndex)")
}
Nodes are hashable, so you can have dictionaries to map them back to indexes:
if let blockIndex = blockIndexes[nodeA], let ballIndex = ballIndexes[nodeB] {
print("block \(blockIndex) hit ball \(ballIndex)")
}
You can use the userData property of nodes to store whatever you like, including the indexes. The mucking around with NS things gets kind of ugly though.
https://developer.apple.com/documentation/spritekit/sknode/1483121-userdata
You can do the linear scan through each array.
if let blockIndex = blocks.firstIndex(of: nodeA), let ballIndex = balls.firstIndex(of: nodeB) {
print("block \(blockIndex) hit ball \(ballIndex)")
}
It sounds like from your question that you might have a separate category bit mask for each individual block and each individual ball. Or if you don't, that is possible if there are at most 16 of each. Anyway, if that's the case, then you can do some bit flicking to take the categoryBitMask from the physics bodies, shift the ball/block one by 16 bits (whichever is using the high bits gets shifted), and then take log2 of the bit masks to get your indexes. You can find various bit flicking techniques for log2 here:
https://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
Given 16 things of each type, I'd say just do #4. If you already have subclass nodes, #1 is fine. Number 2 is spreading state around a bit, so I'm not such a fan of that. Number 3 I would not really recommend because of the NS stuff. Number 5 is too cute for its own good.
Edit: Now that I read again, it sounds like maybe you've got separate ID's for categories 1...16, so your block category bit masks are like:
blockCategoryMask | ID1, blockCategoryMask | ID2, etc. That can also work (basically a variant of #5). If you're going down that route though, you may as well just stick the index directly into the category masks:
let blockCategoryMask = UInt32(1<<4)
let ballCategoryMask = UInt32(1<<5)
Then the physics body for a block gets mask blockCategoryMask | UInt32(index), and similarly for a ball. In that case the index extraction is just categoryBitMask & UInt32(0xf). Or if you put the block and ball categories in bits 0 and 1 and the indexes in bits 2-5, then right shift by 2 to get the index.
Edit in response to comment:
OK, so let's take the case of 6 distinct categories of objects, and each object can fall into one of 16 distinct subcategories. To be able to control which contacts are reported, you'd assign a bit mask to each of the 6 main categories:
enum Category: UInt32 {
// Basic categories
case block = 0b000001
case ball = 0b000010
case shot = 0b000100
case obstacle = 0b001000
case wizard = 0b010000
case food = 0b100000
}
Since you've used 6 bits for the main category, you have 26 bits remaining. To encode the 16 subcategories needs 4 bits. You can put those in the category bit mask above the main 6 bits. Example manipulations:
func encodeObject(category: Category, subcategory: Int) -> UInt32 {
return category.rawValue | (UInt32(subcategory) << 6)
}
func nodeIsA(node: SKNode, category: Category) -> Bool {
guard let body = node.physicsBody else { return false }
return (body.categoryBitMask & category.rawValue) != 0
}
func subcategory(node: SKNode) -> Int {
guard let body = node.physicsBody else { fatalError("missing physicsbody") }
return Int(body.categoryBitMask >> 6)
}
Note that the subcategories are just sort of tagging along for the ride; all your contact bit masks would deal only with the main categories.
Essentially you're using the fact that you've got some extra bits in the physics body category bit masks to just store random information. I've done that before with simple sprites. But if the information needed is going to get any more complex than a simple number or index, I'd recommend making subclasses of nodes rather than trying to squirrel stuff away in the unused bits.
Using contact.bodyA.node and contact.bodyB.node you can get the SKNode(s) which are involved in the contact
extension GameScene: SKPhysicsContactDelegate {
func didBegin(_ contact: SKPhysicsContact) {
switch (contact.bodyA.node, contact.bodyB.node) {
case (let ball as Ball, let block as Block):
didBeginContactBetween(ball: ball, andBlock: block)
case (let block as Block, let ball as Ball):
didBeginContactBetween(ball: ball, andBlock: block)
default:
break
}
}
func didBeginContactBetween(ball: Ball, andBlock block: Block) {
// TODO: put your code here
}
}

Performance drop loop vs iterator

I am using Kotlin in combination with LWJGL. So far I had the following code, which ran several thousand times per second:
// val textureMap = HashMap<Int, Texture>()
fun bind() {
var index = 0
for(entry in textureMap) {
glActiveTexture(GL_TEXTURE0 + index)
entry.value.bind()
program.setInt(entry.key, index)
++index
}
}
So while this was running absolutely fast and consumed virtually none of my frame time as expected, I had to replace it because it created an Iterator in every call, leading to tens of thousands of those objects eventually getting garbage collected and halting my program for a few milliseconds, which is of course not usable in my application.
So I went ahead and changed it to the following code:
// textures = ArrayList<Texture>()
// indices = ArrayList<Int>()
fun bind() {
var index = 0
while(index < textures.size) {
val uniform = indices[index]
val texture = textures[index]
glActiveTexture(GL_TEXTURE0 + index)
texture.bind()
program.setInt(uniform, index)
++index
}
}
Now for some reason I am noticing a massive drop in performance, namely the function now uses several seconds per frame. Using jvisualvm I was able to determine that all that time is spent in glActiveTexture in the native part as well as the native function in program.setInt(...). I am absolutely stumped why this is the case, especially after comparing the byte code of the two.
This is the decompiled class file for the first (fast) version:
public final void bind()
{
int index = 0;
Map localMap = (Map)this.textureMap;
for (Map.Entry entry : localMap.entrySet())
{
GL13.glActiveTexture(33984 + index);
((Texture)entry.getValue()).bind(); Program
tmp66_63 = this.program;
if (tmp66_63 == null) {
Intrinsics.throwUninitializedPropertyAccessException("program");
}
tmp66_63.setInt(((Number)entry.getKey()).intValue(), index);
index++;
}
}
And that is the byte code of the slow version:
public final void bind()
{
int index = 0;
while (index < this.textures.size())
{
Integer uniform = (Integer)this.indices.get(index);
Texture texture = (Texture)this.textures.get(index);
GL13.glActiveTexture(33984 + index);
texture.bind(); Program
tmp52_49 = this.program;
if (tmp52_49 == null) {
Intrinsics.throwUninitializedPropertyAccessException("program");
}
Integer tmp62_61 = uniform;Intrinsics.checkExpressionValueIsNotNull(tmp62_61, "uniform");tmp52_49.setInt(tmp62_61.intValue(), index);
index++;
}
}
I am extremely confused what is going on here. In both versions the call to glActiveTexture is GL_TEXTURE0 + <an int value>, yet one takes so much more time than the other.
Does anyone have an idea what I am missing here?
Basically my entire question can be removed. I should have debugged and not only profiled. The problem was the code that populated the lists: it didn't remove the old values, so the lists grew larger and larger and the loop just ran so many more times over time...
In case anyone was wondering how I fixed my problem with the allocations: I essentially created two collections, one containing the uniforms and one mapping them to textures. Then I can iterate over the uniforms and get the respective texture. So no pointless Iterator objects are created, but I am also not keeping any duplicates :)

JFreeChart Performance

I'm trying to plot several graphs simultaneously:
Each represents an attribute and displays the results for several objects, each containing its own series of data items.
I encounter very bad performance using either the add(...) or addOrUpdate(...) methods of TimeSeries - the time for plotting ~16,000 items is about 60 seconds.
I read about the performance issue - http://www.jfree.org/phpBB2/viewtopic.php?t=12130&start=0 - but it seems to me like it is much worse in my case for some reason.
I'd like to understand whether this is truly the best performance I can squeeze out of the library (on a 2.5GHz machine running Windows - I doubt that it is).
How can I speed up my application in this respect?
Here is a basic version of the code (note that it is all done in a dedicated thread):
/* attribute -> (Object -> graph values) */
protected HashMap<String,HashMap<Object,Vector<TimeSeriesDataItem>>> m_data =
new HashMap<String,HashMap<Object,Vector<TimeSeriesDataItem>>>();
public void loadGraph() {
int items = 0;
for (String attr : m_data.keySet())
for (Object obj : m_data.get(attr).keySet())
for (TimeSeriesDataItem dataItem : m_data.get(attr).get(obj))
items++;
long before = System.currentTimeMillis();
// plot each graph
for (String attr : m_data.keySet()) {
GraphXYPlot plot = m_plots.get(attr);
plot.addToObservation(m_data.get(attr));
}
System.err.printf("Time for plotting %d items is: %d ms", items, System.currentTimeMillis()-before);
// => Time for plotting 16540 items is: 59910 ms
}
public void addToObservation(HashMap<Object, Vector<TimeSeriesDataItem>> plotData) {
for (Object obj : plotData.keySet()) {
SeriesHandler handler = m_series.get(obj);
if (handler != null) {
TimeSeries fullSeries = handler.getFullSeries();
TimeSeries periodSeries = handler.getPeriodseries();
for (TimeSeriesDataItem dataItem : plotData.get(obj)) {
fullSeries.add(dataItem);
periodSeries.add(dataItem);
}
}
}
}
Thanks a lot !
Guy
Absent more details, any of several general optimizations should be considered:
Invoke setNotify(false), as suggested here.
Cache already calculated values, as discussed here.
Adopt a paging strategy, as shown here.
Chart a summary of average/time-unit values; based on the ChartEntity seen in a ChartMouseListener, show an expanded subset in an adjacent panel.
