Block insertion point does not match width and height of an image - autocad

(none of the numbers are real sizes)
I have an image in AutoCAD that is 8000 mm wide and 8000 mm high. When I insert a block in the middle of the image, instead of the block's X and Y being (4000, 4000), they are something like (560, 560). Even when I move the block, the X and Y stay the same. I have no idea why!
I did some research and saw people saying that blocks sometimes have a different coordinate system than the picture itself.
Can you tell me how to figure this out, so that the block's X and Y come out as (4000, 4000)?
Best regards,
Dimitar Georgiev

The X and Y coordinates (or the origin) of your BLOCK entity don't have anything to do with where the BLOCK is placed in your drawing.
When you insert a BLOCK into a drawing, you are actually doing a couple of things.
First, AutoCAD creates an entry in the TABLE that is used to hold BLOCK_RECORD objects. The BLOCK_RECORD that is created will have a handle that references the geometry of the BLOCK that has just been inserted.
This handle is then used in the BLOCKS section. This section contains the BLOCK entities that define the geometry contained in the BLOCK itself. One of the first entries in the BLOCK entity is a set of X and Y coordinates that represent the base point of the BLOCK.
These are not the same as the coordinates for where the BLOCK is actually inserted in the drawing. These coordinates act as a reference point for all the geometry contained in the BLOCK object. Every coordinate for every piece of geometry in the BLOCK is referenced from the BLOCK base point. You can think of all the coordinates for the geometry in the BLOCK object as delta coordinates, because they measure distance from the base point of the BLOCK, not from the origin of the drawing. For example, if the base point is (0, 0) and a line inside the BLOCK ends at (10, 5), then inserting the BLOCK at (560, 560) puts that endpoint at (570, 565) in the drawing (assuming unit scale and no rotation).
The last thing that AutoCAD does when you insert a BLOCK is to create an INSERT object in the ENTITIES section. This is a short entity that contains the name of the BLOCK being inserted as well as the coordinates where the BLOCK will be located in the drawing. These coordinates are the ones that control the location of the BLOCK within the drawing.
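If you want to see the difference concretely, you can look at the drawing file itself: an ASCII DXF is just a stream of (group code, value) line pairs, and the INSERT entity carries its insertion point on group codes 10 and 20. Below is a minimal C sketch (the file name is hypothetical, and it only handles the 2D, ASCII case) that prints the block name and insertion point of every INSERT it finds:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("drawing.dxf", "r");   /* hypothetical file name */
    if (!f) return 1;

    char code[32], value[256], name[256] = "";
    double x = 0.0, y = 0.0;
    int in_insert = 0;

    /* An ASCII DXF is a stream of (group code, value) pairs, one per line. */
    while (fgets(code, sizeof code, f) && fgets(value, sizeof value, f)) {
        int g = atoi(code);
        value[strcspn(value, "\r\n")] = '\0';

        if (g == 0) {                        /* group 0 starts the next entity */
            if (in_insert)
                printf("INSERT of %s at (%.3f, %.3f)\n", name, x, y);
            in_insert = (strcmp(value, "INSERT") == 0);
        } else if (in_insert) {
            if (g == 2)       strncpy(name, value, sizeof name - 1);  /* block name  */
            else if (g == 10) x = atof(value);                        /* insertion X */
            else if (g == 20) y = atof(value);                        /* insertion Y */
        }
    }
    fclose(f);
    return 0;
}
The coordinates printed here are the ones that place the block in the drawing; the base point stored inside the BLOCK definition never appears on the INSERT entity.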

Related

Rotate a block in DXF file format

I am creating a DXF file using XSLT to transform my raw data.
I can create a working DXF file with a block and insert it into my drawing.
The issue is that I want to create one block that is used many times but is translated and rotated each time it is inserted. Looking at the DXF reference document, the INSERT entity only contains group code 50 (the rotation group code). I need to rotate the block, upon each insertion, around its X and Y axes (OCS, not WCS).
I cannot see a way to do this, but the whole point of blocks is to be referenced multiple times in many different orientations, so I find it hard to believe there is no way to do it.
A long workaround is to create a block for each transform, but this defeats the purpose, involves me calculating each coordinate, and dramatically increases the file size.
There are also the extrusion group codes (210, 220, 230), but I cannot figure out whether I could achieve what I want using these, as they are defined relative to the WCS.
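For reference, here is how those group codes sit on an INSERT entity in the file. This is only a minimal C sketch that writes a fragment (placeholder values, not a complete loadable DXF): 41/42/43 are the scale factors, 50 is the rotation about the OCS Z axis, and 210/220/230 are the extrusion direction that defines that OCS:
#include <stdio.h>

static void tag_s(FILE *f, int code, const char *value) { fprintf(f, "%d\n%s\n", code, value); }
static void tag_d(FILE *f, int code, double value)      { fprintf(f, "%d\n%.6f\n", code, value); }

int main(void) {
    FILE *f = fopen("insert_fragment.dxf", "w");   /* fragment only, not a loadable drawing */
    if (!f) return 1;

    tag_s(f, 0, "INSERT");
    tag_s(f, 8, "0");          /* layer */
    tag_s(f, 2, "MYBLOCK");    /* hypothetical block name */
    tag_d(f, 10, 100.0);       /* insertion point X (in OCS) */
    tag_d(f, 20, 50.0);        /* insertion point Y (in OCS) */
    tag_d(f, 30, 0.0);         /* insertion point Z (in OCS) */
    tag_d(f, 41, 1.0);         /* X scale factor */
    tag_d(f, 42, 1.0);         /* Y scale factor */
    tag_d(f, 43, 1.0);         /* Z scale factor */
    tag_d(f, 50, 45.0);        /* rotation angle in degrees, about the OCS Z axis */
    tag_d(f, 210, 0.0);        /* extrusion direction X (210/220/230 define the OCS) */
    tag_d(f, 220, 0.0);        /* extrusion direction Y */
    tag_d(f, 230, 1.0);        /* extrusion direction Z */

    fclose(f);
    return 0;
}
The sketch only shows where the codes go in the file; whether an arbitrary 3D orientation is reachable by combining the extrusion direction with the single rotation angle is exactly the open question above.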

drag drawLine jcanvas optimisation coordinates

Considering this basic case, one might expect the coordinates of the layer to be updated... but they are not.
Instead, it is possible to remember the starting point, compute the mouse offset and then update the coordinates, like in this test, but... the effect is quite extreme.
Expected : point x1,y1 is static
Result : point x1,y1 moves incredibly fast
If the coordinates are set to constants, the drag behaves the same.
The main problem here is that the drag action applies to the whole layer.
Fix: apply the modification at the end of the drag, like in this snippet.
But it is relatively ugly. Does anyone have a better way to
get the actual coordinates of the points of the line on the fly
keep one point of the line static while the others are moving
Looking forward to your suggestions.
In order to maintain the efficiency of dragging layers, jCanvas only offsets the x and y properties for any draggable layer (including paths). Therefore, when dragging, you can compute the absolute positions of any set of path coordinates using something along these lines:
var absX1 = layer.x + layer.x1;
var absY1 = layer.y + layer.y1;
(assuming layer references a jCanvas layer, of course)

Changing point size in autocad

I don't know if this is the right forum; I'm posting a question related to AutoCAD, so please share a link to the right forum if I'm not allowed to ask it here.
How can I change the point size (single point size 1/2/3) of a point cloud (imported as a PCG) in AutoCAD?
POINT Command
Creates a point object
Draw menu: Point
Command line: point
Specify a point:
Points can act as nodes to which you can snap objects. You can specify a full three-dimensional location for a point. The current elevation is assumed if you omit the Z coordinate value.
The PDMODE and PDSIZE system variables control the appearance of point objects. PDMODE values 0, 2, 3, and 4 specify a figure to draw through the point. A value of 1 specifies that nothing is displayed.
Specifying the value 32, 64, or 96, added to the figure value, selects a shape to draw around the point in addition to the figure drawn through it; for example, a PDMODE of 35 (3 + 32) draws an X with a circle around it.
PDSIZE controls the size of the point figures, except for PDMODE values 0 and 1. A setting of 0 generates the point at 5 percent of the drawing area height. A positive PDSIZE value specifies an absolute size for the point figures. A negative value is interpreted as a percentage of the viewport size. The size of all points is recalculated when the drawing is regenerated.
After you change PDMODE and PDSIZE, the appearance of existing points changes the next time AutoCAD regenerates the drawing.
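If the drawing file is being generated or patched programmatically rather than edited at the command line, note that PDMODE and PDSIZE are also stored as header variables in a DXF file. A minimal C sketch of that header fragment (example values only, not a complete header; the output file name is hypothetical):
#include <stdio.h>

int main(void) {
    FILE *f = fopen("header_fragment.dxf", "w");   /* hypothetical output file */
    if (!f) return 1;

    /* A header variable is a 9/$NAME tag followed by its value tag. */
    fprintf(f, "9\n$PDMODE\n70\n3\n");      /* 3 = draw an X through each point       */
    fprintf(f, "9\n$PDSIZE\n40\n-2.0\n");   /* negative = percentage of viewport size */

    fclose(f);
    return 0;
}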

Cocoa - OpenGL ES - Wrapping my head around C-Arrays

This is more of a general theory question that I just can't seem to wrap my head around, so I'll explain what I'm trying to do.
I'm writing a 3D game engine with Cocoa and OpenGL ES. I'm trying to determine the best way to store the vertex data for my 3D models (each vertex has an x, y, and z position).
Previously, I was storing each vertex as an individual custom object (AEVertex) with x, y, and z instance variables. The issue is that I am drawing with glDrawArrays(), which expects the vertex data to be supplied as a single one-dimensional C array storing all of the vertex positions in succession (vertex 1's x, y, and z, then vertex 2's x, y, and z, and so on).
The problem I faced was that I had to gather the vertex data for a given model from each individual vertex object, create a C array big enough to hold all of these vertices, fill it with the vertex data, and then pass in that array. This obviously slows things down a lot, as I am essentially allocating memory for every model twice.
So what I would LIKE to do is simply have a class AEMesh that has a C-array instance variable storing all of the vertex data for the given AEMesh object. My issue with this is that, as far as I know, it is only possible to declare C-array instance variables of a fixed size; however, a) all of my models will have different numbers of vertices, and b) I won't know how many vertices each model has until reading in the model data at runtime.
So, my questions:
Is there some way to create a mutable, dynamic C array as an instance variable for an object, so that I can add a new entry for every vertex read in from a given AEMesh's model file?
If not, I'm wondering whether I can create the vertex-data C array outside of the AEMesh's initialization, give the AEMesh a pointer instance variable that points to nil when it is instantiated, and repoint it to the C array once that array has been created.
Yes, it is possible: you can have an instance variable that is a pointer and use functions like malloc to allocate memory for it at runtime. See this page for a tutorial on dynamic arrays. And don't forget to free your memory later!
Don't make the entire array part of your AEMesh class. Instead, give AEMesh an ivar that's a pointer to the vertex array. That gives you the freedom to use an array of any size, and you can replace it with a different array as often as you like.
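To make that concrete, here is a minimal sketch in plain C (the struct and function names are hypothetical; in Objective-C the vertices pointer would simply be an ivar of AEMesh): one flat heap-allocated array of three floats per vertex, sized at load time, that can later be handed to OpenGL without any copying.
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    float  *vertices;      /* x0,y0,z0, x1,y1,z1, ... laid out contiguously */
    size_t  vertexCount;
} AEMeshData;

/* Allocate the array once the vertex count is known, e.g. after parsing the model file. */
int AEMeshDataAlloc(AEMeshData *mesh, size_t vertexCount) {
    mesh->vertices = malloc(vertexCount * 3 * sizeof *mesh->vertices);
    if (mesh->vertices == NULL) return 0;
    mesh->vertexCount = vertexCount;
    return 1;
}

void AEMeshDataFree(AEMeshData *mesh) {
    free(mesh->vertices);  /* don't forget this when the mesh goes away */
    mesh->vertices = NULL;
    mesh->vertexCount = 0;
}

int main(void) {
    AEMeshData mesh;
    if (!AEMeshDataAlloc(&mesh, 3)) return 1;   /* e.g. a single triangle */

    /* Fill as you parse the model file: mesh.vertices[3*i + 0/1/2] = x/y/z. */
    for (size_t i = 0; i < mesh.vertexCount * 3; ++i)
        mesh.vertices[i] = (float)i;

    /* Later, the same pointer goes straight to OpenGL ES 1.x, e.g.:
       glVertexPointer(3, GL_FLOAT, 0, mesh.vertices);
       glDrawArrays(GL_TRIANGLES, 0, (GLsizei)mesh.vertexCount);        */
    printf("allocated %zu vertices\n", mesh.vertexCount);

    AEMeshDataFree(&mesh);
    return 0;
}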

Testing Object inside an object

I'm writing an image processing application which recognizes objects based on their shapes. The issue that I'm facing is that one object can be composed of one or more sub-objects, e.g. a human face is an object which is composed of eyes, a nose and a mouth.
Applying image segmentation creates separate objects but does not tell you whether one object is inside another object.
How can I check efficiently whether an object is contained inside another object?
For now my algorithm is what I would call an 8-point test, in which you choose 8 points at the 8 corners and check whether all of them are inside the other object. If they are, you can be fairly certain that the entire object is inside the other object... but it has certain limitations and failure cases.
Also, just because an inner object is inside another object, does that mean I should treat it as part of the outer object?
One way to test whether one object is fully inside another is to convert both into binary masks using poly2mask (in case they aren't binary masks already), and to test that all pixels of one object are part of the other object.
%# convert object 1 defined by points [x1,y1] into mask
msk1 = poly2mask(x1,y1,imageSizeX,imageSizeY);
%# do the same for object 2
msk2 = poly2mask(x2,y2,imageSizeX,imageSizeY);
%# check whether object 1 is fully inside object 2
oneInsideTwo = all(msk2(msk1));
However, is this really necessary? The eyes should always be close to the center of the face, and thus, the 8-point-method should be fairly robust at identifying whether you found an eye that is part of the face or whether it is a segmentation artifact.
Also, if an eye is on a face, then yes, you would consider it as part of that face - unless you're analyzing pictures of people that are eating eyes, in which case you'd have to test whether the eye is in roughly the right position on the face.
In sum, the answer to your questions is a big "depends on the details of your application".
