Network Flow: Can I change edge capacity while solving for max flow? - algorithm

I want to know if I can change edge capacities in a network flow problem while solving it.
I have a supply/demand problem with goods A, B, and AB. A unit of demand for A takes one unit of A, a unit of demand for B takes one unit of B, and a unit of demand for AB takes either one unit of AB or one unit of A plus one unit of B. Given a list of supply and demand for each good, I want to figure out if there are enough goods on hand to satisfy the demand.
So my network looks like this:
Let sX be supply of X.
Let dX be demand of X.
All flows go from left to right.
You can see that if I push x units of A, I have to subtract x from the capacity going to (A+B). Similarly, if I 'undo' a push, I have to add capacity back to (A+B). So I have to do this during the algorithm. Does that mess up the algorithm?

This is not a network flow problem. Suppose that sA = 10, sB = 10, dA = 10, dB = 10, dAB = 10. From the graph you can supply 10 As, Bs, and A+Bs, and therefore meet the demand. But in fact you need 20 As and 20 Bs to supply that need.
I do not know of a way for a simple flow network to represent the condition that you need the flow in one place to match the flow in another.
What you're describing is an interesting problem that I am sure has been studied, but I don't know what you call it.
This can be solved by turning it into a linear programming problem. See http://en.wikipedia.org/wiki/Linear_programming if you are not familiar with linear programming problems. Consider your simple case. You can start with 6 variables:
x is the flow from input A to output A.
y is the flow from input B to output B.
z is the flow from input AB to output AB.
w is the flow from A into A + B.
w' is the flow from B into A + B.
w'' is the flow from A + B into AB.
Of course the last 3 are all equal to each other. So we have 4 variables. (If we didn't note this we'd have more equations.) Now add the following inequalities:
0 ≤ x
0 ≤ y
0 ≤ z
0 ≤ w
x + w ≤ sA
y + w ≤ sB
z ≤ sAB
x ≤ dA
y ≤ dB
z + w ≤ dAB
This is a set of inequalities that says we're producing stuff, we aren't using more than our supply and we aren't creating more than the final demand for any particular thing. This defines our "feasible region".
Next we need an objective function, the thing that we're trying to maximize. The obvious choice is that we want to maximize the amount that we produce. So we want to maximize x + y + z + w.
The answer to your original question can then be found as follows. Given a set of available inputs, and available outputs, solve the above linear programming problem to optimize production. You are able to satisfy production goals if and only if the optimum level of production is dA + dB + dAB. And better yet, the solution you will get will tell you exactly how to satisfy production.
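For concreteness, here is a minimal sketch of that linear program in Python using scipy.optimize.linprog (my choice of solver, not something the question requires; any LP library would do). linprog minimizes, so the objective is negated, and the numbers are the counterexample above, taking sAB = 0 since no AB stock was mentioned:
from scipy.optimize import linprog

sA, sB, sAB = 10, 10, 0
dA, dB, dAB = 10, 10, 10

# Variable order: [x, y, z, w]; maximize x + y + z + w by minimizing its negation.
c = [-1, -1, -1, -1]
A_ub = [[1, 0, 0, 1],   # x + w <= sA
        [0, 1, 0, 1],   # y + w <= sB
        [0, 0, 1, 0],   # z     <= sAB
        [1, 0, 0, 0],   # x     <= dA
        [0, 1, 0, 0],   # y     <= dB
        [0, 0, 1, 1]]   # z + w <= dAB
b_ub = [sA, sB, sAB, dA, dB, dAB]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # default bounds already enforce x >= 0
produced = -res.fun
print(produced, produced >= dA + dB + dAB)   # 20.0 False: demand cannot be met
The optimum of 20 is less than dA + dB + dAB = 30, which confirms the counterexample above.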

Related

Global convergence of ray cast against convex object

In Gino Van Den Bergen's 2004 paper Ray Casting against General Convex Objects with Application to Continuous Collision Detection, he first presents a naive iterative method to prep his discussion:
Algorithm 1: An iterative method for performing a ray cast of a ray s + λr against a convex object C. For positive results, this algorithm terminates with λ being the hit parameter, x the hit spot, and n the normal at x.
λ ← 0;
x ← s;
n ← 0;
c ← “the point of C closest to x”;
while not “x is close enough to c” do
begin
n ← x − c;
if n · r ≥ 0 then return false
else
begin
λ ← λ − (n · n)/(n · r);
x ← s + λr;
c ← “the point of C closest to x”
end
end;
return true
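(For concreteness, here is a small Python transcription of that loop, my own sketch rather than anything from the paper, using a sphere as the convex object C so that "the point of C closest to x" has a closed form; the tolerance and iteration cap are arbitrary choices.)
import math

def ray_cast_sphere(s, r, centre, radius, eps=1e-9, max_iter=100):
    # Basic 3-vector helpers.
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    sub = lambda a, b: tuple(ai - bi for ai, bi in zip(a, b))
    add_scaled = lambda a, b, t: tuple(ai + t * bi for ai, bi in zip(a, b))

    def closest_point(x):
        # "The point of C closest to x" for a sphere: project onto the surface,
        # or return x itself if x is already inside.
        d = sub(x, centre)
        length = math.sqrt(dot(d, d))
        return x if length <= radius else add_scaled(centre, d, radius / length)

    lam, x, n = 0.0, s, (0.0, 0.0, 0.0)
    c = closest_point(x)
    for _ in range(max_iter):
        d = sub(x, c)
        if dot(d, d) <= eps:                 # "x is close enough to c"
            return True, lam, x, n
        n = d
        if dot(n, r) >= 0:                   # ray points away from C: a miss
            return False, lam, x, n
        lam -= dot(n, n) / dot(n, r)         # advance to the supporting plane
        x = add_scaled(s, r, lam)
        c = closest_point(x)
    return True, lam, x, n

# A ray starting at (-5, 0, 0) pointing along +x hits the unit sphere at lambda = 4.
print(ray_cast_sphere((-5.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0))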
He then states:
The property λi < λi+1 ≤ λhit is a necessary yet not sufficient condition for global convergence. In order to show that, in case of a hit, xi indeed approaches the hit spot for i → ∞, we need to show that the mapping from λi to λi+1 is continuous at all λ < λhit.
However, for real numbers at least, the fact that the sequence of λ's is monotonically increasing and bounded above is already a sufficient condition for convergence, so I am unsure why he needs to prove continuity.
If there's a singularity at some point, it might be the case that λi+1 will asymptotically approach it, but will never be able to get past it. With limited-precision numbers it'll just mean that λ will stop increasing.
EDIT: ok I think I came up with an example of how a similar algorithm may fail to converge. Let's imagine our advancement strategy is a bit more cautious: once we find the closest point c, we advance λ halfway towards the point on the ray that's closest to c, i.e. λi+1 = λi - (n·r)/2, and additionally the object is not entirely convex but has a hole:
AB=AC, so there's a singularity at A. Our new algorithm will approach A, but will never reach it and be able to jump to C (since it only moves halfway each time), and yet λ will steadily increase at any point.
EDIT 2: Since A is a singularity point, n·r will actually get to 0 there, so I'm not sure if it ruins the argument:) Perhaps not if we define that exactly at A, C will be chosen as the closest point instead of B

Random solution to a set of integer constraints

I have a set of integer constraints that I would like to solve. The constraints can consist of additions of variables that are greater than, less than or equal to some constant.
Example:
A >= 20
A <= 30
B <= 10
A + B <= 25
...
There will be hundreds of such simple constraints, and the constants are much larger values (hundreds of thousands) in practice.
However, I don't just want a solution to these constraints: I want a random solution from the solution space. That doesn't mean each solution has to have equal probability (I don't think that's possible without enumerating them all?) but what I want is that for instance for the variable A the solution will typically not be 20 or 30, but rather that values in between are just as likely (or even more likely) to be picked.
What techniques would be appropriate for this kind of problem? I'm having trouble knowing where to look, because most algorithms focus on finding optimal or fast or minimal solutions rather than random ones.
Many Constraint Programming systems have a search heuristic (often called "indomain_random" or something similar) which gives the solutions in a random order (given some seed). Here's a MiniZinc model for a simple version of your problem:
var 20..30: A;
var 0..10: B;
solve :: int_search([A,B], first_fail, indomain_random, complete) satisfy;
constraint A + B <= 25;
output [ show([A,B])];
Here are some solutions for a few different seeds, using Gecode's FlatZinc solver:
Seed Solution
---------------
0 [22,0]
2 [25,0]
3 [22,2]
I would start by establishing the relationships between variables: build a graph in which the variables are nodes and any constraint that involves two or more variables links them.
Make a pass over your graph, marking all nodes that depend on no other nodes as visited. Then iterate over each of the nodes that depend on those nodes, shrinking their ranges (increasing min and decreasing max) in such a way that their formulas stay consistent. For example, if you have A.min=10, A.max=20, B.min=10 and A+B=25, you can lower A.max to 15 (because B must be at least 10, and 25-10=15). You've just reduced the range of A by 50%.
This gets easier if you establish a master node: if A+B=25, does A depend on B or does B depend on A? Making the graph directed makes it much easier to deal with, as the algorithms for directed graphs are simpler.
Once you've done all this you will notice islands appear: this is a good thing, because islands represent independent subgraphs that provide walls of separation. If you attempt a trial-and-error method, you only need to retry islands that failed to reach a consistent state.
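A rough Python sketch of the bound-shrinking step (my own illustration, nowhere near a full propagation engine), using the ranges from the question (A in 20..30, B in 0..10, A + B <= 25) and finishing with a uniform pick inside the reduced boxes plus rejection:
import random

bounds = {"A": [20, 30], "B": [0, 10]}          # [min, max] per variable
sum_constraints = [(["A", "B"], 25)]            # A + B <= 25

# Shrink upper bounds until nothing changes (a fixed point).
changed = True
while changed:
    changed = False
    for vars_, limit in sum_constraints:
        for v in vars_:
            others_min = sum(bounds[w][0] for w in vars_ if w != v)
            new_max = limit - others_min
            if new_max < bounds[v][1]:
                bounds[v][1] = new_max          # e.g. A.max drops from 30 to 25
                changed = True

# Sample uniformly inside the shrunken boxes, rejecting the occasional violation.
while True:
    sample = {v: random.randint(lo, hi) for v, (lo, hi) in bounds.items()}
    if all(sum(sample[v] for v in vs) <= lim for vs, lim in sum_constraints):
        break
print(bounds, sample)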
Not really a complete answer, but maybe useful, and too long for a comment:
It may help you to know that the solution space is convex. Meaning, if you have two solutions A1, B1, C1 and A2, B2, C2, then any triple in between them is also a solution.
(Here, "in between" means that there is some real number t in the range [0,1], so that:
Anew = t * A1 + (1 - t) * A2
Bnew = t * B1 + (1 - t) * B2
Cnew = t * C1 + (1 - t) * C2
To see why, you can just try plugging these expressions for Anew, Bnew, Cnew into your inequalities, and the inequalities will expand as true because they do so for A1, B1, C1 and A2, B2, C2.)
You can use this information to limit the region of n space you need to search. For instance, if you find one solution, and you want to know how far out in some direction your solution space extends, you can run something like a binary search down towards the known solution. Etc...
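As a small illustration of using that convexity (my own sketch, with hand-picked corner solutions for the example constraints A in 20..30, B in 0..10, A + B <= 25): take random convex combinations of known solutions, round to integers, and reject the rare rounded point that leaves the feasible region.
import random

sol1 = {"A": 20, "B": 0}
sol2 = {"A": 25, "B": 0}
sol3 = {"A": 20, "B": 5}
corners = [sol1, sol2, sol3]                    # known feasible solutions

def feasible(s):
    return 20 <= s["A"] <= 30 and 0 <= s["B"] <= 10 and s["A"] + s["B"] <= 25

def random_solution():
    while True:
        # Random convex weights over the known corner solutions.
        w = [random.random() for _ in corners]
        total = sum(w)
        cand = {v: round(sum(wi * c[v] for wi, c in zip(w, corners)) / total)
                for v in corners[0]}
        if feasible(cand):                      # rounding can rarely step outside
            return cand

print([random_solution() for _ in range(5)])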

Select N locations from list to maximize sum, with min distance constraint

I want to select 5 localities out of 250 locations so as to maximize expected profit, subject to the constraint that any two selected localities are at least 5 miles apart. The expected profit of each location and the distances between locations are given.
I was trying to find out whether this is a standard problem. Filtering candidate combinations to arrive at a solution seems computationally intensive, so I have been exploring methods like simulated annealing to reach a good-enough solution.
Since you're using great-circle distances, there are a couple possibilities that provide some quality guarantee: approximation schemes for Euclidean graphs, and integer programming. The former is in theory more scalable, but the latter gives exact optima and is a lot easier to implement assuming that a solver is available. (Of course, you could always do something ad hoc.) Since you have so few locations, that's the one I'll describe.
I'll explain integer programs briefly by formulating your problem as one.
maximize profit1 * x1 + profit2 * x2 + ... + profit250 * x250
subject to
x1 + x2 + ... + x250 = 5 (select exactly 5 localities)
for every pair of localities {i, j} less than 5 miles from each other,
xi + xj <= 1
x1, x2, ..., x250 in {0, 1}
The meaning of variable xi is that it's 1 if locality i is selected and 0 if locality i is not selected.
You'll need to write a small subroutine to communicate this program to your favorite solver in its preferred format. To find a solver, search for "MIP solver"; there are free and commercial offerings with bindings to a variety of languages. Try to get one that supports clique cuts (I know the commercial CPLEX and the free GLPK do). If it doesn't, that's OK; you can implement Bron–Kerbosch yourself to generate constraints of the form
xa + xb + ... + xz <= 1
where a, b, ..., z are localities each within 5 miles of one another.
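A sketch of how the integer program above might be written with PuLP (one possible free Python front-end; the profits, coordinates and plain Euclidean distance below are toy placeholders for your real data and great-circle distances):
import itertools
import pulp

# Toy data: replace with your 250 profits and real (great-circle) distances.
profits = [10.0, 12.0, 7.0, 15.0, 9.0, 11.0]
coords = [(0, 0), (2, 1), (8, 3), (4, 9), (9, 9), (1, 15)]

def dist(i, j):
    (xi, yi), (xj, yj) = coords[i], coords[j]
    return ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5

n = len(profits)
prob = pulp.LpProblem("select_localities", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]

prob += pulp.lpSum(profits[i] * x[i] for i in range(n))   # total expected profit
prob += pulp.lpSum(x) == 5                                # select exactly 5 localities

# Pairwise separation: localities closer than 5 miles exclude each other.
for i, j in itertools.combinations(range(n), 2):
    if dist(i, j) < 5:
        prob += x[i] + x[j] <= 1

prob.solve()
print([i for i in range(n) if x[i].varValue > 0.5])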

Algorithm for superimposition of 3d points

I need to superimpose two groups of 3D points on top of each other; i.e. find rotation and translation matrices to minimize the RMSD (root mean square deviation) between their coordinates.
I currently use the Kabsch algorithm, which is not very useful for many of the cases I need to deal with. Kabsch requires an equal number of points in both data sets and needs to know beforehand which point will be aligned with which one. In my case the numbers of points will differ, and I don't care which point corresponds to which in the final alignment, as long as the RMSD is minimized.
So the algorithm should (presumably) find a 1-1 mapping between subsets of the two point sets such that, after rotation and translation, the RMSD is minimized.
I know some algorithms that deal with differing numbers of points, however they are all protein-based; that is, they try to align the backbones together (some continuous segment is aligned with another continuous segment, etc.), which is not useful for points floating in space without any connections. (OK, to be clear, some points are connected; but there are points without any connections which I don't want to ignore during superimposition.)
The only algorithm I have found is DIP-OVL, found in the STRAP software module (open source). I tried the code, but the behaviour seems erratic; sometimes it finds good alignments, sometimes it can't align a set of a few points with itself after a simple X translation.
Anyone know of an algorithm that deals with such limitations? I'll have at most ~10^2 to ~10^3 points if the performance is an issue.
To be honest, the objective function to use is not very clear. RMSD is defined as the RMS of the distance between the corresponding points. If I have two sets with 50 and 100 points, and the algorithm matches only one or a few points within the sets, the resulting RMSD between those few points will be zero, while the overall superposition may not be so great. RMSD between all pairs of points is not a better solution (I think).
The only thing I can think of is to find the closest point in set X for each point in set Y (so there will be exactly min(|X|,|Y|) matches, e.g. 50 in that case) and calculate the RMSD from those matches. But the distance calculation and bipartite matching parts seem too computationally expensive to call repeatedly in a batch fashion. Any help in that area would be welcome as well.
Thanks!
What you describe looks like a "cloud-to-cloud registration" task. Take a look at http://en.wikipedia.org/wiki/Iterative_closest_point and http://www.willowgarage.com/blog/2011/04/10/modular-components-point-cloud-registration for example. You can play with your data in the open-source Point Cloud Library to see if it works for you.
If you know which pairs of points correspond to each other, you can recover the transformation matrix with Linear Least Squares (LLS).
When considering LLS, you normally would want to find an approximation of x in A*x = b. With a transpose, you can solve for A instead of x.
Extend each source and target vector with a "1", so they look like <x, y, z, 1>.
Equation: A · xi = bi
Extend to multiple vectors: A · X = B
Transpose: (A · X)^T = B^T
Simplify: X^T · A^T = B^T
Substitute P = X^T, Q = A^T and R = B^T. The result is: P · Q = R
Apply the formula for LLS: Q ≈ (P^T · P)^-1 · P^T · R
Substitute back: A^T ≈ (X · X^T)^-1 · X · B^T
Solve for A, and simplify: A ≈ B · X^T · (X · X^T)^-1
(B · X^T) and (X · X^T) can be computed iteratively by summing up the outer products of the individual vector pairs:
B · X^T = Σ bi · xi^T
X · X^T = Σ xi · xi^T
A ≈ (Σ bi · xi^T) · (Σ xi · xi^T)^-1
No matrix will be bigger than 4×4, so the algorithm does not use any excessive memory.
The result is not necessarily a rigid (rotation plus translation) transform, but it will probably be close. With some further processing (for example, orthonormalizing the rotational part), you can make it one.
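A small NumPy sketch of the final formula (my own check, with a synthetic rotation plus translation so the recovered 4x4 matrix can be verified; it assumes the point pairs are already in correspondence, which is exactly the part the question is missing):
import numpy as np

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])              # 90-degree rotation about z
t = np.array([1.0, 2.0, 3.0])

src = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 1.0]])
dst = src @ R.T + t                           # corresponding target points

# Homogeneous coordinates: columns are <x, y, z, 1>.
X = np.vstack([src.T, np.ones(len(src))])     # 4 x n
B = np.vstack([dst.T, np.ones(len(dst))])     # 4 x n

A = (B @ X.T) @ np.linalg.inv(X @ X.T)        # A ~= (B X^T)(X X^T)^-1
print(np.round(A, 3))                         # upper-left 3x3 ~ R, last column ~ <t, 1>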
The best algorithm for discovering alignments through superimposition is Procrustes Analysis or Horn's method. Please follow this Stackoverflow link.

Generalizing the algorithm for domino tiling?

In this earlier question the OP asked the following problem:
Given a rectangular grid where some squares are empty and some are filled, what is the largest number of 2x1 dominoes that can be placed into the world such that no two dominos overlap and no domino is atop a filled square?
The (quite beautiful!) answer to this problem recognized that it is equivalent to finding a maximum bipartite matching in a specially-constructed graph. In this graph, each empty square has a node, and each node is linked to its empty neighbors by an edge. Each domino then corresponds to an edge of this graph, and a set of non-overlapping dominoes corresponds to a set of edges no two of which share a vertex (a matching), and vice versa.
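(For reference, here is a quick sketch of that 1x2 case, my own code rather than anything from the linked answer, using checkerboard colouring and a plain augmenting-path matching with no particular library.)
def max_dominoes(grid):
    # grid: list of strings, '.' = empty, '#' = filled.
    rows, cols = len(grid), len(grid[0])
    empty = lambda r, c: 0 <= r < rows and 0 <= c < cols and grid[r][c] == '.'

    # Edges run from "black" empty cells ((r + c) even) to empty white neighbours.
    adj = {}
    for r in range(rows):
        for c in range(cols):
            if empty(r, c) and (r + c) % 2 == 0:
                adj[(r, c)] = [(r + dr, c + dc)
                               for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                               if empty(r + dr, c + dc)]

    match = {}  # white cell -> black cell it is currently matched with

    def augment(u, seen):
        # Kuhn's algorithm: try to find an augmenting path starting at black cell u.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in adj)

print(max_dominoes([".##.",
                    "....",
                    ".#.."]))   # 4: the maximum number of dominoes for this grid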
My question is a generalization of this earlier one:
Given a rectangular grid where some squares are empty and some are filled, what is the largest number of M x N dominoes (for a given M and N) that can be placed into the world such that no two dominos overlap and no domino is atop a filled square?
I cannot see how to convert this into a matching problem as was done in the previous case. However, I also don't see any particular reason why this problem would immediately be NP-hard, so there may be a polynomial time solution to the problem.
Is there an efficient algorithm for solving this problem? Or does anyone have a reduction that would show that this problem is NP-hard?
Thanks so much!
This problem is definitely NP-hard and I can prove it. There is a reduction from 3-SAT to this problem. Specifically, it's a reduction from 3-SAT to the subproblem of this problem in which the dominoes are 1x3. There may also be other reductions for other specific sizes, but this one definitely works.
Essentially, in this reduction, we're going to use domino positions to encode either true or false. Specifically, I'm going to adopt the same notation as the other solution, which is to say that I'll use asterisks to indicate open spaces on the grid. I'll also use sets of three capital letters to represent dominoes and lower-case letters to represent "signals", which are spaces that may or may not be filled depending on the state of the system.
To embed a 3-SAT problem into this space, we're going to need a set of what I'll call gadgets, which allow only certain states to be possible. Most of these gadgets will have a fixed number of dominoes in them. The exception will be the gadgets which represent the clauses: these can hold one extra domino if the clause is true (satisfied) but not when it is false (unsatisfied). We can interconnect these gadgets using paths; together this will allow us to build a 3-SAT circuit. Each path and gadget takes a standard number of dominoes, so we can add those up to get a base number k, and each clause gadget can hold one extra domino if it is true. So if all clauses can be made true (and hence the expression satisfied) and there are n clauses, then the maximum number of dominoes will be n + k; if not, the maximum number will be less than n + k. This is the basic form of the reduction. Next I will describe the gadgets and give examples.
Similar to the other answer, we're going to have two positions which encode true and false for a given variable. So, I'll start with a single tile which can be in two possible places.
****
This can either be covered with one domino like
AAA* or *AAA
Obviously, this cannot be covered with 2 dominoes, and covering it with 0 dominoes would never be maximal. For my purposes, we're going to consider a protrusion to represent the value "false" and a lack of protrusion to represent "true". So we can view this part as carrying two signals:
x**y
And in this case, only one of x or y will be covered, so we can consider the signals to be x and the logical not of x. For our purposes, whichever is covered is false, and whichever is not covered is true. Next, we can transmit signals simply through straight and curved paths. If we have
x*****y
We will again have exactly two dominoes and result in either x or y being covered, but not both.
***y
*
*
x
Will have exactly the same behavior. So we can use this to create long and curving paths in lengths which are increments of 3. However, not all lengths we might want to use are increments of 3, so we need an additional gadget to move a slightly different distance. I call this the fiddler gadget; its only purpose is to move the signal a slightly uneven distance so that things connect up successfully. Its input comes from x, its output goes to y, and it merely transmits the same signal along. It looks like this:
***y
*
**x
It always contains exactly two dominoes and is filled in one of the following two ways:
BBB* ABBB
* A
AAA *AX
If we're going to model 3-SAT, however, we need more than this. Specifically, we need some way to model the clauses. To do this, we have a gadget where one extra domino can be packed in if the clause is true. The clause will be true when one or more of its inputs is true. In this case, that means that we can pack one extra domino in when at least one of the inputs does not protrude. It will look like this:
*x*y*
*
z
If we add an extra path to each for clarity, then it looks like this:
* *
* *
* *
*****
*
****
If x,y, and z are all false, then they'll all have protrusions and it will be filled like
this:
A B
C D
C D
*C*D*
*
EEEF
Where dominoes A, B, and F continue on down a path somewhere. If at least one of the inputs is true, then we can pack in one extra domino (G), like so:
C B A D A B
C D C D C D
C D or C D or C D
GGGD* *CGGG *CGD*
* * G
EEEF EEEF GEEE
However, even if all inputs are true, then we cannot pack in more than one domino. That scenario would look like this:
C D
C D
C D
*****
*
*EEE
And as you can see, we can only insert exactly one extra domino into the empty space, not two.
Now, if terms were never repeated, then we'd be done (or very nearly done). However, they can be repeated, so next, we need a signal splitter so that one variable can appear in
multiple terms. To do this, we utilize the following gadget:
y*** ***z
* *
***
***
x
In this gadget, x is the input and y and z are the outputs, and we can always pack 5 dominoes. If x protrudes, then packing 5 dominoes will always require covering y and z as well. If x does not protrude, then covering y and z is not required. The packing where x does not protrude looks like this:
yAAA BBBz
C D
CED
CED
E
When x does protrude (we use X to indicate the end of the domino protruding into space x), the maximal packing necessarily covers both y and z:
AAAC DBBB
C D
C*D
EEE
X
I will take a moment to note that it would be possible to pack this with five dominoes when x is not protruding in such a way that either y or z protrude. However, doing so would result in terms which could be true (not protruding) becoming false (protruding). Allowing some of the terms (not variables, but actual terms in the clauses) to differ in value only by becoming false unnecessarily will never result in being able to satisfy an otherwise unsatisfiable expression. If our 3-SAT expression was (x | y | z) & (!x | y | !z) then allowing both x and !x to be false would only make things harder. If we were to allow both ends of something to be true, this would result in incorrect solutions, but we do not do this in this scheme. To frame it in terms of our specific problem, protruding unnecessarily will never result in more dominoes being able to be packed in later down the line.
With paths and these three gadgets, we can now solve planar 3-SAT, which would be the sub-problem of 3-SAT where if we draw a graph where the terms and clauses are vertices and there is an edge between every term and every clause which contains that term, that the graph is planar. I believe that planar 3-SAT is probably NP-hard because planar 1-in-3-SAT is, but in case it's not, we can use gadgets to do a signal crossing. But it's really quite complex (if anyone sees a simpler way, please let me know) so first I'm going to do an example of solving planar 3-SAT with this system.
So, a simple planar 3-SAT problem would be (x | y | z) & (!x | y | !z). Obviously, this is satisfiable, using any assignment where y is true or several other assignments. We will build our dominoes problem thus:
*******
* *
* *
**** ***
* *
*** ****
* *
* *
* ******* *
* * * *
* * * *
*z*x* *****
* *
**** ****
* *
***
***
*
*
*
y
Notice that we had to use fiddlers at the top to get things to space correctly or else this would've been substantially less complex.
Adding up the total dominoes from gadgets and paths we have 1 splitter (5 dominoes), two fiddlers (2 dominoes each), and a total of 13 regular paths, for a grand total of 5 + 2*2 + 13 = 22 dominoes guaranteed, even if the clauses cannot be satisfied. If they can be, then we will have 2 more dominoes which can be filled in for a total of 24. One optimal packing with 24 dominoes is as follows:
QRRRSSS
Q T
Q T
OPPP *UT
O U
*ON UVVV
N W
N W
M IIIJJJK W
M H K X
M H K X
*zGH* LLLX*
G *
GEEE FFF*
B D
BCD
BCD
C
A
A
A
This tiling contains 24 dominoes, so we can know that the original expression is satisfiable. In this case, the tiling corresponds to make y and x true and z false. Notice that this is not the only tiling (and not the only satisfying assignment of boolean values), but that there is no other tiling which will increase the number of tiles beyond 24, so it is a maximum tiling. (If you don't want to count all the dominoes you can note that I used every letter except for Y and Z.)
If the maximal tiling had contained either 22 or 23 dominoes, then we would know that one of the clauses was not satisfied (GGG and/or LLL dominoes would not be able to be placed) and hence we would know that the original expression was not satisfiable.
In order to be certain that we can do this even if planar 3-SAT isn't NP-hard, we can build a gadget which allows paths to cross. This gadget is unfortunately kind of big and complex, but it's the smallest one I was able to figure out. I'll first describe the pieces and then the whole gadget.
Piece 1: Crossover point. x and y are the inputs. a,b,and c are the outputs. They will need to be combined using other gadgets to actually relay x and y to the opposite side of each other.
***c
*
***
* *
* *
* *
***
***
ax*yb
This gadget will always fit exactly 7 dominoes. There are four possible input combinations. If neither input protrudes (both are true) then no output will protrude, and it will be filled as in (tt1) or (tt2) below. If only input x protrudes then only c will protrude, as in (ft) below. If only input y protrudes then either output a or c will protrude, as in (tf) below. And if inputs x and y both protrude then output c protrudes, as in (ff) below.
(tt) AAAc (ft) AAAc (tf) AAAc (ff) BAAA
* * * B
BBB BBB BBB CBD
C D C D C D C D
C D C D C D C D
C D C D C D E G
EEE EEE EEE EFG
FFF FFF FFF EFG
aGGGb aXGGG GGGYb aXFYb
I have not included the possibility that in the (ft) or (tf) scenarios c could be covered instead of a or b. This is possible within the scope of this gadget, but once it is combined with other gadgets to form the complete crossover, doing so would never result in a larger number of clauses being satisfied, so we can exclude it. With that in mind, we can then observe that in this case the value of the input x is equal to the value of b & c and the value of the input y is equal to the value of a & c (note that this would be logical or rather than logical and if protrusion were considered true rather than false). So we just need to split c and then use a logical and gadget to connect the values of c with a and b respectively, and we will then have successfully completed our crossover.
The logical and is our simplest gadget so far and it looks like this:
****
*
x*y
You might actually note that there's one embedded towards the top of the crossover point gadget. This gadget will always contain precisely 2 dominoes. One will be at the top to serve as the output. The other one serves as a switch which will be horizontally oriented only if both x and y are true (non-protruding) and vertically oriented otherwise as we can see in the following diagrams:
BBB* ABBB ABBB ABBB
* A A A
AAA XAy xAY XAY
Thus we can complete the crossover by splitting c and then adding two of these gates, one for a & c and one for b & c. Putting it all together requires also adding some fiddler gadgets and looks like this:
******* ****
* * * *
* *** *
*** *** ***
* * *
**** * ****
* * *
* **** *
*** * ***
* *** *
**** * * ****
y * * * * x
* * * * * *
* **** *** **** *
*** *** ***
**********x*y*************
I'm not going to fill in example tilings for that. You'll have to do it yourself if you want to see it in action. So, hooray! We can now do arbitrary 3-SAT. I should take a moment to note that doing this will be a polynomial time transformation because even in the worst case, we can just make a big grid with all of the variables and their opposites along the top and all the terms on the side and do O(n^2) crossovers. So there is a simple, polynomial-time algorithm for laying this all out and the maximum size of the transformed problem is polynomial in the size of the input problem. QED.
Edit note:
Following Tom Sirgedas's excellent work in finding a mistake in the splitter gadget, I've made some changes to the answer. Essentially, my old splitter looked like this and could be packed with 6 dominoes when x does not protrude (rather than the 5 I had intended), like this:
y*** ***z AAAC DBBB
* * C D
*** C*D
*** EEE
*x* FFF
So I revised it by removing the two spaces on either side of x. This eliminates the six domino packing while still allowing a 5-domino packing in which y and z are uncovered when x is uncovered.
To Keith:
Great work and great explanations! Though, I wrote a program to find maximum tilings, and it uncovered a flaw. Hopefully this can be fixed! [Update: Keith did fix the problem!]
Please check out this link:
http://pastebin.com/bABQmfyX (your gadgets analyzed, plus very handy c++ source code)
The problem is that the gadget below can be tiled with 6 dominoes:
y*** ***z
* *
***
***
*x*
-Tom Sirgedas
Really a good question. This problem is equivalent to finding the size of a maximum independent set (or of a maximum clique) in a special graph: the vertices are all possible positions of the MxN rectangle, and edges connect two positions that collide. Finding the size of a maximum independent set then yields the result. Or, vice versa, we could define an edge as connecting two positions which do not collide, and then look for the maximum clique size. Unfortunately, neither graph is claw-free or perfect, so we cannot apply the known polynomial algorithms for maximum independent set / clique.
So we could try to reduce the maximum independent set problem to this tiling problem, but I couldn't find a way to convert a general graph into tiles, because you cannot represent e.g. an induced K1,5 subgraph with the tiles.
The first thing I would do is introduce a third state: "empty, but unreachable". You can easily mark every unreachable square in l*w*m*n time (where l is the length of the world, w is the width of the world, and m and n are the dimensions of the tile). This reduces your space so that every remaining empty square is reachable. Note that you may end up with islands of reachable squares; the simplest example is a world that is cut in half. This lends itself to a recursive approach where each island of reachability is treated as a world in and of itself.
Now that we're dealing with an island (which may or may not be square), you essentially have a special case of the 2D knapsack problem, which is known to be NP-hard (citation under Previous Work). Yours increases the complexity of the problem by adding fixed positions in the knapsack that are always filled, but reduces the complexity (slightly) by making all packages the same size.
1x3 tiles are hard by reduction from cubic planar monotone One-in-three 3SAT. We have to build some "circuitry" to encode the formula.
"Gates":
X********Y
Forces exactly one of X and Y to be covered externally. Used to link a variable and its negation.
Y***
*
*
ooo ****
* * * *
* * * *
X **** Z
Forces none or all of X and Y and Z to be covered externally. Used to copy X or destroy three copies of the same thing. Wires can be shaped more or less arbitrarily using length-3 L pieces.
*******************
* * *
* * *
X Y Z
Forces exactly one of X and Y and Z to be covered externally. One for each clause.
