I am trying to incrementally rotate a curve using the code below in C# and VB.NET (the editor already imports the necessary libraries). In my 3D modelling program I get overlapping lines instead of lines at different angles.
What am I doing wrong?
In C#:
private void RunScript(Curve ln, int x, double angle, ref object A)
{
  List<Curve> lines = new List<Curve>();
  for (int i = 0; i <= x; i++)
  {
    ln.Rotate(angle * i, Vector3d.ZAxis, ln.PointAtEnd);
    lines.Insert(i, ln);
  }
  A = lines;
}
In VB.NET:
Private Sub RunScript(ByVal ln As Curve, ByVal x As Integer, ByVal angle As Double, ByRef A As Object)
  Dim lns As New List(Of Curve)()
  For i As Integer = 0 To x
    ln.Transform(Transform.Rotation(angle * i, Vector3d.ZAxis, ln.PointAtEnd))
    lns.Insert(i, ln)
  Next
  A = lns
End Sub
I needed to duplicate the curve before rotating it in the loop; otherwise every list entry refers to the same object and no trace of the intermediate positions remains.
In C#:
private void RunScript(Curve ln, int x, double angle, ref object A)
{
  List<Curve> lns = new List<Curve>();
  for (int i = 0; i <= x; i++)
  {
    Curve copy = ln.DuplicateCurve();
    copy.Rotate(angle * i, Vector3d.ZAxis, ln.PointAtEnd);
    lns.Add(copy);
  }
  A = lns;
}
In VB.NET:
Private Sub RunScript(ByVal ln As Curve, ByVal x As Integer, ByVal angle As Double, ByRef A As Object)
Dim lns As New List(Of Curve)()
For i As Integer = 0 To x
Dim nl As Curve = ln.DuplicateCurve()
nl.Transform(Transform.Rotation(angle * i, Vector3d.ZAxis, ln.PointAtEnd))
lns.Add(nl)
Next
A = lns
End Sub
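The effect is easy to reproduce outside the 3D environment. Below is a hedged Java sketch (Java, since the scripting snippets above only run inside the modelling program); `Line`, `duplicate` and `rotate` are stand-ins I made up for the real curve type, not RhinoCommon API:

```java
import java.util.ArrayList;
import java.util.List;

// Why the first version overlapped: Rotate/Transform mutate the object in
// place, so every list slot ends up holding a reference to one shared object.
public class MutableCopyDemo {
    static class Line {
        double angle; // orientation in degrees, a stand-in for the curve's state
        Line(double angle) { this.angle = angle; }
        Line duplicate() { return new Line(angle); } // analogous to DuplicateCurve()
        void rotate(double by) { angle += by; }      // analogous to Rotate(...)
    }

    // Buggy variant: rotates the same object and inserts it repeatedly.
    static List<Line> rotateShared(Line ln, int x, double step) {
        List<Line> lines = new ArrayList<>();
        for (int i = 0; i <= x; i++) {
            ln.rotate(step * i);
            lines.add(ln); // same reference every time
        }
        return lines;
    }

    // Fixed variant: duplicate first, then rotate the copy.
    static List<Line> rotateCopies(Line ln, int x, double step) {
        List<Line> lines = new ArrayList<>();
        for (int i = 0; i <= x; i++) {
            Line copy = ln.duplicate();
            copy.rotate(step * i);
            lines.add(copy);
        }
        return lines;
    }

    public static void main(String[] args) {
        List<Line> shared = rotateShared(new Line(0), 3, 10);
        // all four entries are the same object, all at the final angle
        System.out.println(shared.get(0).angle + " " + shared.get(3).angle); // 60.0 60.0
        List<Line> copies = rotateCopies(new Line(0), 3, 10);
        System.out.println(copies.get(0).angle + " " + copies.get(3).angle); // 0.0 30.0
    }
}
```

The shared variant produces one object rotated to the final angle, stored four times; the copy variant preserves each intermediate angle, which is exactly the difference between the two scripts above.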
I want to create a function to determine the maximum number of pieces of paper that can be cut from a parent paper size.
The formula above is still not optimal: it produces at most 32 cuts per sheet.
I want it like below.
This seems to be a very difficult problem to solve optimally. See http://lagrange.ime.usp.br/~lobato/packing/ for a discussion of a 2008 paper claiming that the problem is believed (but not proven) to be NP-hard. The researchers found some approximation algorithms and implemented them on that website.
The following solution uses Top-Down Dynamic Programming to find optimal solutions to this problem. I am providing this solution in C#, which shouldn't be too hard to convert into the language of your choice (or whatever style of pseudocode you prefer). I have tested this solution on your specific example and it completes in less than a second (I'm not sure how much less than a second).
It should be noted that this solution assumes that only guillotine cuts are allowed. This is a common restriction for real-world 2D Stock-Cutting applications and it greatly simplifies the solution complexity. However, CS, Math and other programming problems often allow all types of cutting, so in that case this solution would not necessarily find the optimal solution (but it would still provide a better heuristic answer than your current formula).
First, we need a value-structure to represent the size of the starting stock, the desired rectangle(s) and the pieces cut from the stock (this needs to be a value type because it will be used as the key to our memoization cache and other collections, and we need to compare the actual values rather than an object reference address):
public struct Vector2D
{
public int X;
public int Y;
public Vector2D(int x, int y)
{
X = x;
Y = y;
}
}
Here is the main method to be called. Note that all values need to be integers; for the specific case above this just means multiplying everything by 100. These methods require integers, but are otherwise scale-invariant, so multiplying by 100 or 1000 or whatever won't affect performance (just make sure that the values don't overflow an int).
public int SolveMaxCount1R(Vector2D Parent, Vector2D Item)
{
// make a list to hold both the item size and its rotation
List<Vector2D> itemSizes = new List<Vector2D>();
itemSizes.Add(Item);
if (Item.X != Item.Y)
{
itemSizes.Add(new Vector2D(Item.Y, Item.X));
}
int solution = SolveGeneralMaxCount(Parent, itemSizes.ToArray());
return solution;
}
Here is an example of how you would call this method with your parameter values. In this case I have assumed that all of the solution methods are part of a class called SolverClass:
SolverClass solver = new SolverClass();
int count = solver.SolveMaxCount1R(new Vector2D(2500, 3800), new Vector2D(425, 550));
//(all units are in tenths of a millimeter to make everything integers)
The main method calls a general solver method for this type of problem (that is not restricted to just one size rectangle and its rotation):
public int SolveGeneralMaxCount(Vector2D Parent, Vector2D[] ItemSizes)
{
// determine the maximum x and y scaling factors using GCDs (Greatest
// Common Divisor)
List<int> xValues = new List<int>();
List<int> yValues = new List<int>();
foreach (Vector2D size in ItemSizes)
{
xValues.Add(size.X);
yValues.Add(size.Y);
}
xValues.Add(Parent.X);
yValues.Add(Parent.Y);
int xScale = NaturalNumbers.GCD(xValues);
int yScale = NaturalNumbers.GCD(yValues);
// rescale our parameters
Vector2D parent = new Vector2D(Parent.X / xScale, Parent.Y / yScale);
var baseShapes = new Dictionary<Vector2D, Vector2D>();
foreach (var size in ItemSizes)
{
var reducedSize = new Vector2D(size.X / xScale, size.Y / yScale);
baseShapes.Add(reducedSize, reducedSize);
}
//determine the minimum values that an allowed item shape can fit into
_xMin = int.MaxValue;
_yMin = int.MaxValue;
foreach (var size in baseShapes.Keys)
{
if (size.X < _xMin) _xMin = size.X;
if (size.Y < _yMin) _yMin = size.Y;
}
// create the memoization cache for shapes
Dictionary<Vector2D, SizeCount> shapesCache = new Dictionary<Vector2D, SizeCount>();
// find the solution pattern with the most finished items
int best = solveGMC(shapesCache, baseShapes, parent);
return best;
}
private int _xMin;
private int _yMin;
The general solution method calls a recursive worker method that does most of the actual work.
private int solveGMC(
Dictionary<Vector2D, SizeCount> shapeCache,
Dictionary<Vector2D, Vector2D> baseShapes,
Vector2D sheet )
{
// have we already solved this size?
if (shapeCache.ContainsKey(sheet)) return shapeCache[sheet].ItemCount;
SizeCount item = new SizeCount(sheet, 0);
if ((sheet.X < _xMin) || (sheet.Y < _yMin))
{
// if it's too small in either dimension then this is a scrap piece
item.ItemCount = 0;
}
else // try every way of cutting this sheet (guillotine cuts only)
{
int child0;
int child1;
// try every size of horizontal guillotine cut
for (int c = sheet.X / 2; c > 0; c--)
{
child0 = solveGMC(shapeCache, baseShapes, new Vector2D(c, sheet.Y));
child1 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X - c, sheet.Y));
if (child0 + child1 > item.ItemCount)
{
item.ItemCount = child0 + child1;
}
}
// try every size of vertical guillotine cut
for (int c = sheet.Y / 2; c > 0; c--)
{
child0 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X, c));
child1 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X, sheet.Y - c));
if (child0 + child1 > item.ItemCount)
{
item.ItemCount = child0 + child1;
}
}
// if no children returned finished items, then the sheet is
// either scrap or a finished item itself
if (item.ItemCount == 0)
{
if (baseShapes.ContainsKey(item.Size))
{
item.ItemCount = 1;
}
else
{
item.ItemCount = 0;
}
}
}
// add the item to the cache before we return it
shapeCache.Add(item.Size, item);
return item.ItemCount;
}
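To make the recursion easy to test in isolation, here is a compact Java re-sketch of the same top-down idea, restricted to the single-item-plus-rotation case and without the GCD rescaling (so it should only be run on small dimensions). The class name `GuillotineMax` is mine, not part of the answer's code:

```java
import java.util.HashMap;
import java.util.Map;

// Memoized guillotine-cut solver: the maximum number of iw x ih items
// (rotation allowed) cut from a w x h sheet using edge-to-edge cuts only.
public class GuillotineMax {
    private final Map<Long, Integer> cache = new HashMap<>();
    private final int iw, ih; // item width and height

    public GuillotineMax(int itemW, int itemH) { iw = itemW; ih = itemH; }

    public int solve(int w, int h) {
        if (w <= 0 || h <= 0) return 0;
        long key = ((long) w << 32) | (long) h;
        Integer cached = cache.get(key);
        if (cached != null) return cached;
        // a sheet that exactly matches the item (either orientation) is one piece
        int best = ((w == iw && h == ih) || (w == ih && h == iw)) ? 1 : 0;
        // try every horizontal guillotine cut position
        for (int c = w / 2; c > 0; c--)
            best = Math.max(best, solve(c, h) + solve(w - c, h));
        // try every vertical guillotine cut position
        for (int c = h / 2; c > 0; c--)
            best = Math.max(best, solve(w, c) + solve(w, h - c));
        cache.put(key, best);
        return best;
    }

    public static void main(String[] args) {
        // a 6x6 sheet holds six 2x3 items (two 6x3 strips of three each)
        System.out.println(new GuillotineMax(2, 3).solve(6, 6));   // 6
        // a 10x10 sheet holds four 5x5 items
        System.out.println(new GuillotineMax(5, 5).solve(10, 10)); // 4
    }
}
```

The only structural difference from the C# version is that the "is this sheet itself a finished item" check happens up front rather than after the cut loops; the result is the same, since cutting an item-sized sheet can never yield a full item.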
Finally, the general solution method uses a GCD function to rescale the dimensions to achieve scale-invariance. This is implemented in a static class called NaturalNumbers. I have included the relevant parts of this class below:
static class NaturalNumbers
{
/// <summary>
/// Returns the Greatest Common Divisor of two natural numbers.
/// Returns Zero if either number is Zero,
/// Returns One if either number is One and both numbers are >Zero
/// </summary>
public static int GCD(int a, int b)
{
if ((a == 0) || (b == 0)) return 0;
if (a >= b)
return gcd_(a, b);
else
return gcd_(b, a);
}
/// <summary>
/// Returns the Greatest Common Divisor of a list of natural numbers.
/// (Note: will run fastest if the list is in ascending order)
/// </summary>
public static int GCD(IEnumerable<int> numbers)
{
// parameter checks
if (numbers == null || numbers.Count() == 0) return 0;
int g = numbers.First();
if (g <= 1) return g;
int i = 0;
foreach (int n in numbers)
{
if (i == 0)
g = n;
else
g = GCD(n, g);
if (g <= 1) return g;
i++;
}
return g;
}
// Euclidean method with Euclidean Division,
// From: https://en.wikipedia.org/wiki/Euclidean_algorithm
private static int gcd_(int a, int b)
{
while (b != 0)
{
int t = b;
b = (a % b);
a = t;
}
return a;
}
}
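To see concretely what the rescaling buys for the example parameters above: the x dimensions {425, 2500} share a GCD of 25 and the y dimensions {550, 3800} share a GCD of 50, so the solver effectively works on a 100x76 sheet with 17x11 items. A quick Java check (class and method names are mine):

```java
// Verify the GCD rescaling step on the example dimensions.
public class GcdRescale {
    // Euclidean algorithm, as in the NaturalNumbers class above.
    static int gcd(int a, int b) {
        while (b != 0) { int t = b; b = a % b; a = t; }
        return a;
    }

    // Fold the two-argument GCD over a list of values.
    static int gcd(int... numbers) {
        int g = 0;
        for (int n : numbers) g = gcd(n, g);
        return g;
    }

    public static void main(String[] args) {
        int xScale = gcd(425, 2500);
        int yScale = gcd(550, 3800);
        System.out.println(xScale + " " + yScale);                   // 25 50
        System.out.println((2500 / xScale) + "x" + (3800 / yScale)); // 100x76
        System.out.println((425 / xScale) + "x" + (550 / yScale));   // 17x11
    }
}
```

Shrinking the grid from 2500x3800 to 100x76 cells is what keeps the memoized search fast.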
Please let me know of any problems or questions you might have with this solution.
Oops, forgot that I was also using this class:
public class SizeCount
{
public Vector2D Size;
public int ItemCount;
public SizeCount(Vector2D itemSize, int itemCount)
{
Size = itemSize;
ItemCount = itemCount;
}
}
As I mentioned in the comments, it would actually be pretty easy to factor this class out of the code, but it's still in there right now.
I have some 40000 words and want to find all similar pairs. For similarity, I use a sort of Damerau–Levenshtein distance scaled by the word lengths. For simplicity, I don't consider overlapping edits (just like the linked algorithm). All words (most of them being German, French or English) were converted to lowercase, as case carries no information in our data. I made two modifications to the distance computation:
the distance between two characters is
0, when they're the same
0.2, when they differ just in the accent (like a vs ä or à)
0.2, when they're s and ß (the German Sharp S)
1, otherwise
Additionally, the distance of the strings ß and ss is set to 0.2. Our data shows that this complication is necessary.
For finding all similar pairs, my idea was to only consider pairs found via a common n-gram, but this fails for short words (which is acceptable) and in general because of the above modifications.
My next idea was an early abort of the distance computation if it's known that the result is over a threshold (say 4). However, as many words have common prefixes, this abort comes too late to give a nice speed-up. Reversing the words fails (less badly) on common suffixes.
The whole computation for all 20e6 pairs takes some five minutes, so for now, it's possible to do it once and store all the results (only distances below a threshold are needed). But I'm looking for something more future-proof.
Is it possible to quickly compute a good lower bound on the Damerau–Levenshtein distance (ideally allowing an early exit)?
Is it possible with the above modifications? Note that e.g., "at least the difference of the sizes of the two strings" doesn't hold because of the second modification.
The code
public final class Levenshtein {
private static final class CharDistance {
private static int distance(char c0, char c1) {
if (c0 == c1) return 0;
if ((c0|c1) < 0x80) return SCALE;
return c0<=c1 ? distanceInternal(c0, c1) : distanceInternal(c1, c0);
}
private static int distanceInternal(char c0, char c1) {
assert c0 <= c1;
final String key = c0 + " " + c1;
{
final Integer result = CACHE.get(key);
if (result != null) return result.intValue();
}
final int result = distanceUncached(c0, c1);
CACHE.put(key, Integer.valueOf(result));
return result;
}
private static int distanceUncached(char c0, char c1) {
final String norm0 = Normalizer.normalize("" + c0, Normalizer.Form.NFD).replaceAll("[^\\p{ASCII}]", "");
final String norm1 = Normalizer.normalize("" + c1, Normalizer.Form.NFD).replaceAll("[^\\p{ASCII}]", "");
if (norm0.equals(norm1)) return DIACRITICS;
assert c0 <= c1;
if (c0=='s' && c1=='ß') return DIACRITICS;
return SCALE;
}
private static final Map<String, Integer> CACHE = new ConcurrentHashMap<>();
}
/**
 * Return the scaled distance between {@code s0} and {@code s1}, if it's below {@code limit}.
 * Otherwise, return some lower bound ({@code >= limit}).
 */
int distance(String s0, String s1, int limit) {
final int len0 = s0.length();
final int len1 = s1.length();
int result = SCALE * (len0 + len1);
final int[] array = new int[len0 * len1];
for (int i0=0; i0<len0; ++i0) {
final char c0 = s0.charAt(i0);
for (int i1=0; i1<len1; ++i1) {
final char c1 = s1.charAt(i1);
final int d = CharDistance.distance(c0, c1);
// Append c0 and c1 respectively.
result = get(len0, array, i0-1, i1-1) + d;
// Append c0.
result = Math.min(result, get(len0, array, i0-1, i1) + SCALE);
// Append c1.
result = Math.min(result, get(len0, array, i0, i1-1) + SCALE);
// Handle the "ß" <-> "ss" substitution.
if (c0=='ß' && c1=='s' && i1>0 && s1.charAt(i1-1)=='s') result = Math.min(result, get(len0, array, i0-1, i1-2) + DIACRITICS);
if (c1=='ß' && c0=='s' && i0>0 && s0.charAt(i0-1)=='s') result = Math.min(result, get(len0, array, i0-2, i1-1) + DIACRITICS);
// Handle a transposition.
if (i0>0 && i1>0 && s0.charAt(i0-1)==c1 && s1.charAt(i1-1)==c0) result = Math.min(result, get(len0, array, i0-2, i1-2) + SCALE);
set(len0, array, i0, i1, result);
}
// Early exit.
{
final int j = i0 - len0 + len1;
final int lowerBound = get(len0, array, i0, j);
if (lowerBound >= limit) return lowerBound;
}
}
return result;
}
// Simulate reading from a 2D array at indexes i0 and i1;
private int get(int stride, int[] array, int i0, int i1) {
if (i0<0 || i1<0) return SCALE * (i0+i1+2);
return array[i1*stride + i0];
}
// Simulate writing to a 2D array at indexes i0 and i1;
private void set(int stride, int[] array, int i0, int i1, int value) {
array[i1*stride + i0] = value;
}
private static final int SCALE = 10;
private static final int DIACRITICS = 2;
}
Example words
rotwein
rotweincuv
rotweincuvee
rotweincuveé
rotweincuvée
rotweincuvúe
rotweindekanter
rotweinessig
rotweinfass
rotweinglas
rotweinkelch
rotweißkomposition
rotwild
roug
rouge
rougeaoc
rougeaop
rougeots
rougers
rouges
rougeáaop
rough
roughstock
rouladen
roulette
roumier
roumieu
round
rounded
roundhouse
rounds
rouret
rouss
roussanne
rousseau
roussi
roussillion
roussillon
route
rouvinez
rove
roveglia
rovere
roveri
rovertondo
rovo
rowan
rowein
roxburgh
roxx
roy
roya
royal
royalbl
royalblau
royaldanishnavy
royale
royales
royaline
royals
royer
royere
roze
rozenberg
rozes
rozier
rozès
rozés
roßberg
roßdorfer
roßerer
rpa
rr
rrvb
rry
rs
rsaftschorle
rsbaron
rscastillo
rsgut
rsl
rstenapfel
rstenberg
rstenbrõu
rt
rtd
rtebecker
rtebeker
ru
ruadh
ruanda
rub
rubaiyat
ruban
rubata
rubblez
rubenkov
rubeno
rubentino
ruber
I would suggest putting all of the words into a trie, and then recursively searching the trie against itself.
Generating the trie should be fast, and now matching of common prefixes against each other is only calculated once no matter how many words share them.
You do have to keep track of a lot of state as you wander the trie, because your state is, "All of the intermediate stuff we could be in the middle of calculating."
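That state is just one row of the DP matrix per trie node. Here is a sketch of the idea for plain Levenshtein distance (the diacritic, ß/ss and transposition tweaks are left out for brevity, and all names are mine): a shared prefix is computed once for every word that starts with it, and a branch is abandoned as soon as its entire row exceeds the limit:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Trie-based fuzzy search: each node carries one row of the edit-distance
// DP matrix, so common prefixes are processed once and branches whose whole
// row already exceeds the limit are pruned.
public class TrieSearch {
    static class Node {
        Map<Character, Node> children = new HashMap<>();
        String word; // non-null if a stored word ends here
    }

    final Node root = new Node();

    void add(String word) {
        Node n = root;
        for (char c : word.toCharArray())
            n = n.children.computeIfAbsent(c, k -> new Node());
        n.word = word;
    }

    // all stored words within 'limit' edits of 'query'
    List<String> search(String query, int limit) {
        List<String> hits = new ArrayList<>();
        int[] row = new int[query.length() + 1];
        for (int i = 0; i <= query.length(); i++) row[i] = i; // base DP row
        for (Map.Entry<Character, Node> e : root.children.entrySet())
            walk(e.getValue(), e.getKey(), query, row, limit, hits);
        return hits;
    }

    private void walk(Node node, char c, String query, int[] prev, int limit, List<String> hits) {
        int[] row = new int[prev.length];
        row[0] = prev[0] + 1;
        int rowMin = row[0];
        for (int i = 1; i < row.length; i++) {
            int cost = (query.charAt(i - 1) == c) ? 0 : 1;
            row[i] = Math.min(Math.min(row[i - 1] + 1, prev[i] + 1), prev[i - 1] + cost);
            rowMin = Math.min(rowMin, row[i]);
        }
        if (node.word != null && row[row.length - 1] <= limit) hits.add(node.word);
        if (rowMin <= limit) // otherwise no extension can come back under the limit
            for (Map.Entry<Character, Node> e : node.children.entrySet())
                walk(e.getValue(), e.getKey(), query, row, limit, hits);
    }

    public static void main(String[] args) {
        TrieSearch t = new TrieSearch();
        for (String w : new String[]{"rouge", "rouges", "rough", "round", "rotwein"}) t.add(w);
        System.out.println(t.search("rouge", 1)); // rouge, rouges, rough (order may vary)
    }
}
```

Searching the trie against itself rather than against a single query word works the same way, except the "query" side is also a trie walk, so the state becomes a pair of positions plus the DP row, as described above.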
I want to compare one bitmap with another bitmap (a reference bitmap) and draw all of the differences into a resultant bitmap.
Using the code below I am able to draw the difference area, but not with its exact color.
Here is my code:
Bitmap ResultantBitMap = new Bitmap(bitMap1.Width, bitMap1.Height);
BitmapData bitMap1Data = bitMap1.LockBits(new Rectangle(0, 0, bitMap1.Width, bitMap1.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
BitmapData bitMap2Data = bitMap2.LockBits(new Rectangle(0, 0, bitMap2.Width, bitMap2.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
BitmapData bitMapResultantData = ResultantBitMap.LockBits(new Rectangle(0, 0, ResultantBitMap.Width, ResultantBitMap.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
IntPtr scan0 = bitMap1Data.Scan0;
IntPtr scan02 = bitMap2Data.Scan0;
IntPtr scan0ResImg1 = bitMapResultantData.Scan0;
int bitMap1Stride = bitMap1Data.Stride;
int bitMap2Stride = bitMap2Data.Stride;
int ResultantImageStride = bitMapResultantData.Stride;
for (int y = 0; y < bitMap1.Height; y++)
{
//define the pointers inside the first loop for parallelizing
byte* p = (byte*)scan0.ToPointer();
p += y * bitMap1Stride;
byte* p2 = (byte*)scan02.ToPointer();
p2 += y * bitMap2Stride;
byte* pResImg1 = (byte*)scan0ResImg1.ToPointer();
pResImg1 += y * ResultantImageStride;
for (int x = 0; x < bitMap1.Width; x++)
{
//always get the complete pixel when differences are found
if (Math.Abs(p[0] - p2[0]) >= 20 || Math.Abs(p[1] - p2[1]) >= 20 || Math.Abs(p[2] - p2[2]) >= 20)
{
pResImg1[0] = p2[0]; // B
pResImg1[1] = p2[1]; // G
pResImg1[2] = p2[2]; // R
pResImg1[3] = p2[3]; // A (opacity)
}
p += 4;
p2 += 4;
pResImg1 += 4;
}
}
bitMap1.UnlockBits(bitMap1Data);
bitMap2.UnlockBits(bitMap2Data);
ResultantBitMap.UnlockBits(bitMapResultantData);
ResultantBitMap.Save(@"c:\abcd\abcd.jpeg");
What I want is the difference image with exact color of the reference image.
It's hard to tell what's going on without knowing what all those library calls and "+= 4"s do, but are you sure p and p2 correspond to the first and second images of your diagram?
Also, Format32bppArgb is laid out in memory as B, G, R, A on little-endian systems, so index [0] is blue and [3] is alpha; if your channel comments assume a different order, that may be part of the problem, too.
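To illustrate the channel bookkeeping, here is a sketch of the same per-pixel diff in Java (BufferedImage instead of GDI+; the class and method names are mine). Working with packed ARGB ints and explicit shifts sidesteps the byte-order question entirely, and the reference image's exact pixel is copied wherever any channel differs by at least the threshold:

```java
import java.awt.image.BufferedImage;

// Per-pixel diff: copy the reference pixel wherever R, G or B differs enough.
public class BitmapDiff {
    static BufferedImage diff(BufferedImage a, BufferedImage reference, int threshold) {
        BufferedImage out = new BufferedImage(a.getWidth(), a.getHeight(), BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int p = a.getRGB(x, y), q = reference.getRGB(x, y); // packed 0xAARRGGBB
                int dr = Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF));
                int dg = Math.abs(((p >> 8) & 0xFF) - ((q >> 8) & 0xFF));
                int db = Math.abs((p & 0xFF) - (q & 0xFF));
                if (dr >= threshold || dg >= threshold || db >= threshold)
                    out.setRGB(x, y, q); // the reference pixel, alpha included
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BufferedImage a = new BufferedImage(2, 1, BufferedImage.TYPE_INT_ARGB);
        BufferedImage b = new BufferedImage(2, 1, BufferedImage.TYPE_INT_ARGB);
        a.setRGB(0, 0, 0xFF000000); b.setRGB(0, 0, 0xFFFF0000); // red differs by 255
        a.setRGB(1, 0, 0xFF102030); b.setRGB(1, 0, 0xFF102031); // blue differs by 1: skipped
        BufferedImage d = diff(a, b, 20);
        System.out.printf("%08X %08X%n", d.getRGB(0, 0), d.getRGB(1, 0)); // FFFF0000 00000000
    }
}
```

Untouched output pixels stay fully transparent black, matching the C# code's behavior of only writing where a difference is found.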
I was trying to practice the select function in LINQ.
The code describes a Pair object that holds 2 numbers. Main creates a list with 2 pairs, and I want to select the one in which the first number (n1) equals 1, but I get the above error.
The "pair.getN1" line has a compilation error.
Thanks.
public class Pair
{
private int n1;
private int n2;
public Pair(int n1, int n2)
{
this.n1 = n1;
this.n2 = n2;
}
public int getN1()
{
return this.n1;
}
public static void main(String[] args)
{
Pair pair1 = new Pair(1, 2);
Pair pair2 = new Pair(3, 4);
List<Pair> pairList = new List<Pair>();
pairList.Add(pair1);
pairList.Add(pair2);
var chosen = from pair in pairList
where pair.getN1 = 1
select pair;
Console.WriteLine(chosen.getn1);
Console.ReadLine();
}
}
I guess you came from VB.NET. You don't want = in C# but ==:
var chosen = from pair in pairList
where pair.getN1() == 1
select pair;
In VB.NET = can mean assignment but also comparison; in C#, = only means assignment.
Another thing to fix: since getN1 is not a field or property but a method, you need () to call it; in VB.NET the parentheses are optional when there are no parameters. Note also that chosen is a sequence of pairs, so Console.WriteLine(chosen.getn1) won't compile either; write e.g. Console.WriteLine(chosen.First().getN1()).
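Since the question's code is Java-flavored anyway, here is the equivalent query written as a runnable Java stream pipeline for comparison (class names are mine). The same two points apply: the method is called with parentheses, and the comparison uses ==; and because filtering yields a sequence, you take an element from it before reading n1:

```java
import java.util.ArrayList;
import java.util.List;

// The LINQ query from the answer, expressed as a Java stream filter.
public class PairFilter {
    static class Pair {
        private final int n1, n2;
        Pair(int n1, int n2) { this.n1 = n1; this.n2 = n2; }
        int getN1() { return n1; }
    }

    public static void main(String[] args) {
        List<Pair> pairs = new ArrayList<>();
        pairs.add(new Pair(1, 2));
        pairs.add(new Pair(3, 4));
        Pair chosen = pairs.stream()
                .filter(p -> p.getN1() == 1) // method call with (), comparison with ==
                .findFirst()                 // the query yields a sequence; take one element
                .get();
        System.out.println(chosen.getN1()); // 1
    }
}
```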
I am trying to calculate the entropy of a vector using the Armadillo library; the code is below. The vector sizes are the same. What went wrong, and how do I fix it?
double ent(){
//arma::vec imT(20);
arma::mat A = randu(4, 5);
arma::vec imT = vectorise(A); /// vector A is 20 by 1 col vec
double ent;
ent = 0;
arma::uvec h = hist(imT);
arma::mat hp = arma::conv_to<arma::mat>::from(imT);
arma::mat prob = hp/hp.n_elem;
ent = -arma::accu(prob*log2(prob)); //entropy cal
prob.print("sd");
return 0;
}
Error: matrix multiplication: incompatible matrix dimensions: 20x1 and 20x1.
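The error message points at prob*log2(prob): in Armadillo, * is matrix multiplication, and a 20x1 column vector cannot be multiplied by another 20x1 column vector. The elementwise (Schur) product is the % operator, so the line presumably wants -arma::accu(prob % arma::log2(prob)), with prob built from the histogram counts h rather than from imT. The intended computation, sketched in Java (names are mine):

```java
// Shannon entropy in bits of a discrete distribution given as histogram counts.
public class Entropy {
    static double entropy(int[] counts) {
        double total = 0;
        for (int c : counts) total += c;
        double h = 0;
        for (int c : counts) {
            if (c == 0) continue;                 // skip empty bins: 0*log(0) -> 0
            double p = c / total;                 // bin probability
            h -= p * (Math.log(p) / Math.log(2)); // elementwise p*log2(p), summed
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(entropy(new int[]{10, 10}));     // 1.0 (fair coin)
        System.out.println(entropy(new int[]{20, 0}));      // 0.0 (certain outcome)
        System.out.println(entropy(new int[]{5, 5, 5, 5})); // ~2.0 (4 equal bins)
    }
}
```

Note the explicit skip of empty bins: with elementwise log2, a zero-count bin would otherwise contribute 0 * -inf = NaN, which arma::accu would propagate.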