How to create a binary STL file containing more than one solid?

Including more than one solid in an ASCII STL file has been well described.
solid name1
facet normal N.x N.y N.z
outer loop
vertex V1.x V1.y V1.z
vertex V2.x V2.y V2.z
vertex V3.x V3.y V3.z
endloop
endfacet
facet …
…
endfacet
…
endsolid name1
solid name2
…
endsolid name2
…
However, the format described for a binary STL file does not say anything about including multiple solid objects.
80 bytes  string  Header
 4 bytes  uint32  Number of facets

Each facet is 50 bytes:
 4 bytes  float   N.x
 4 bytes  float   N.y
 4 bytes  float   N.z
 4 bytes  float   V1.x
 4 bytes  float   V1.y
 4 bytes  float   V1.z
 4 bytes  float   V2.x
 4 bytes  float   V2.y
 4 bytes  float   V2.z
 4 bytes  float   V3.x
 4 bytes  float   V3.y
 4 bytes  float   V3.z
 2 bytes  uint16  Attribute

facet2, facet3, and so on follow with the same 50-byte layout.

In the binary format, each facet carries a 16-bit attribute value. Facets that share the same attribute value can be treated as belonging to the same solid.
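A minimal sketch of that approach in Python (the helper name write_binary_stl is my own; note that many tools treat the attribute field as an "attribute byte count" and expect zero, so tagging solids this way is not universally portable):

import struct

def write_binary_stl(path, solids):
    # solids: {solid_id (uint16): [(normal, v1, v2, v3), ...]},
    # where the normal and each vertex are 3-tuples of floats
    facets = [(sid, f) for sid, fs in solids.items() for f in fs]
    with open(path, 'wb') as out:
        out.write(b'multi-solid binary STL'.ljust(80, b'\0'))  # 80-byte header
        out.write(struct.pack('<I', len(facets)))              # total facet count
        for sid, (n, v1, v2, v3) in facets:
            out.write(struct.pack('<12f', *n, *v1, *v2, *v3))  # normal + 3 vertices
            out.write(struct.pack('<H', sid))                  # attribute = solid id

For example, write_binary_stl('parts.stl', {1: facets_a, 2: facets_b}) tags every facet of the first solid with attribute 1 and every facet of the second with attribute 2.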

Related

PLY file format - what is the correct header for a point cloud with RGB color info?

I'm trying to export a point cloud and am running into an issue where my files are not being accepted by third-party tools.
I cannot find a concrete example of a valid ASCII PLY point cloud file with color data embedded (I have only seen binary files with color data). I pieced this together from different sources, but when I export a file with this header, I cannot display it on a Mac or view it in a web-based viewer.
Can an ASCII PLY file have 1 million or more points?
Can a valid PLY file have 0 faces?
Is the definition property list uchar int vertex_indices required?
Is float a correct definition, or does it need to be specified like float32?
Do I need a newline \n or both \r\n at the end of each line?
My header:
ply
format ascii 1.0
element vertex \(vertexCount)
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
property uchar alpha
element face 0
property list uchar int vertex_indices
end_header
0.391046 0.00335238 -1.0231568 114 110 94 255
0.39227518 0.0033548833 -1.0226241 114 111 93 255
// no faces
A web-based viewer does load files like the following (though I do not see those type definitions in these docs: http://paulbourke.net/dataformats/ply/):
ply
format ascii 1.0
element vertex 2
property float32 x
property float32 y
property float32 z
element face 13594
property list uint8 int32 vertex_indices
end_header
1.13927 0.985002 0.534429
1.11738 0.998603 0.513986
3 0 1 2
3 0 2 3
//...
3 6539 6367 6736
3 6539 6736 6905
This format was accepted by the Point Cloud Library (PCL) PLY reader:
ply
format ascii 1.0
element vertex \(vertexCount)
property float x
property float y
property float z
property uint8 red
property uint8 green
property uint8 blue
end_header
-0.089456566 0.21365404 -0.7840536 81 51 19
-0.0884854 0.21366915 -0.7838928 82 52 20
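Putting the accepted pieces together, here is a minimal sketch in Python that writes a face-free ASCII PLY with per-vertex color (the function name and the sample point are my own; it emits plain \n line endings, which is the common convention for PLY):

def write_ascii_ply(path, points):
    # points: list of (x, y, z, r, g, b); r, g, b are integers in 0-255
    with open(path, 'w', newline='\n') as out:
        out.write('ply\n'
                  'format ascii 1.0\n'
                  'element vertex %d\n' % len(points))
        for axis in ('x', 'y', 'z'):
            out.write('property float %s\n' % axis)
        for channel in ('red', 'green', 'blue'):
            out.write('property uchar %s\n' % channel)
        out.write('end_header\n')
        for x, y, z, r, g, b in points:
            out.write('%g %g %g %d %d %d\n' % (x, y, z, r, g, b))

write_ascii_ply('cloud.ply', [(-0.0894566, 0.2136540, -0.7840536, 81, 51, 19)])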

Calculating Bytes

Suppose each pixel in a digital image is represented by a 24-bit color value. How much memory does it take to store an uncompressed image of 2048 pixels by 1024 pixels?
I said for this that 24 bits is 3 bytes. And 2048 pixels is 6 KB (2048 * 3 / 1024) and 1024 pixels is 3 KB (1024 * 3 / 1024). And then I multiplied to get 18 KB^2.
But the answer says 6 MB. How is this possible, and how do 1024 and 2048 play into it? The answer just says 6 MB and doesn't explain.
24 bits / 8 bits per byte = 3 bytes per pixel
1) 2048 pixels * 1024 pixels = 2,097,152 pixels (the image area)
2) 2,097,152 pixels * 3 bytes = 6,291,456 bytes (3 bytes per pixel)
3) 6,291,456 bytes / 1024 = 6,144 kilobytes
4) 6,144 kilobytes / 1024 = 6 megabytes
Your mistake was converting each dimension to kilobytes separately and then multiplying: that multiplies the units as well (KB * KB = KB^2, which is not a unit of memory). Convert to bytes once, after computing the total pixel count.
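The same arithmetic as a quick sanity check in Python:

width, height = 2048, 1024
bytes_per_pixel = 24 // 8                  # 24-bit color = 3 bytes per pixel
total_bytes = width * height * bytes_per_pixel
print(total_bytes)                         # 6291456
print(total_bytes / 1024 / 1024)           # 6.0 (megabytes)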

Matlab: Reorder 5 bytes to 4 times 10 bits RAW10

I would like to import a RAW10 file into Matlab. The raw data is appended directly to the JPEG file produced by the Raspberry Pi camera.
Four pixels are packed into five bytes:
each of the first four bytes contains bits 9-2 of one pixel;
the fifth byte packs the two missing LSBs of each of those four pixels.
sizeRAW    = 6404096;   % size of the raw block appended to the JPEG
sizeHeader = 32768;     % BRCM header that precedes the pixel data

fin    = fopen('0.jpeg', 'r');
info   = dir('0.jpeg');
offset = info.bytes - sizeRAW + sizeHeader;
fseek(fin, offset, 'bof');

% On the Pi camera v1, each row of 2592 pixels takes 2592/4*5 = 3240
% payload bytes, padded to a 3264-byte stride.
raw = fread(fin, [3264, 1944], 'uint8=>uint16')';
fclose(fin);
raw = raw(:, 1:3240);                  % drop the per-row padding

high = raw(:, mod(0:3239, 5) ~= 4);    % bytes 1-4 of each 5-byte group
low  = raw(:, mod(0:3239, 5) == 4);    % byte 5: the packed LSB pairs

I = zeros(1944, 2592, 'uint16');
for k = 1:4                            % pixel k within each group
    I(:, k:4:end) = bitor(bitshift(high(:, k:4:end), 2), ...
                          bitand(bitshift(low, -2*(k-1)), 3));
end
Reading the stream with fread's 'ubit10' or 'ubit8' precisions, as I first tried, cannot work here: the two LSBs of four consecutive pixels are packed together into every fifth byte, so each 5-byte group has to be unpacked explicitly, as above.

How "bytesPerRow" is calculated from an NSBitmapImageRep

I would like to understand how "bytesPerRow" is calculated when building up an NSBitmapImageRep (in my case from mapping an array of floats to a grayscale bitmap).
Clarifying this detail will help me to understand how memory is being mapped from an array of floats to a byte array (0-255, unsigned char; neither of these arrays are shown in the code below).
The Apple documentation says that this number is calculated "from the width of the image, the number of bits per sample, and, if the data is in a meshed configuration, the number of samples per pixel."
I had trouble following this "calculation", so I set up a simple loop to find the results empirically. The following code runs just fine:
int Ny = 1; // Ny is arbitrary; note that bytesPerPlane comes out as Ny*bytesPerRow, as we would expect
for (int Nx = 0; Nx < 320; Nx += 64) {
    // grayscale image representation:
    NSBitmapImageRep *dataBitMapRep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes: nil  // allocate the pixel buffer for us
                      pixelsWide: Nx
                      pixelsHigh: Ny
                   bitsPerSample: 8
                 samplesPerPixel: 1
                        hasAlpha: NO
                        isPlanar: NO
                  colorSpaceName: NSCalibratedWhiteColorSpace // 0 = black, 1 = white
                     bytesPerRow: 0   // 0 means "you figure it out"
                    bitsPerPixel: 8]; // must equal bitsPerSample * samplesPerPixel
    long rowBytes = [dataBitMapRep bytesPerRow];
    printf("Nx = %d; bytes per row = %lu \n", Nx, rowBytes);
}
and produces the result:
Nx = 0; bytes per row = 0
Nx = 64; bytes per row = 64
Nx = 128; bytes per row = 128
Nx = 192; bytes per row = 192
Nx = 256; bytes per row = 256
So the bytes per row jumps in 64-byte increments, even when Nx increases by 1 at a time all the way up to 320 (I haven't shown all of those Nx values; the 320 maximum is arbitrary for this discussion).
So, from the perspective of allocating and mapping memory for a byte array, how is "bytes per row" calculated from first principles? Is the result above chosen so the data for a single scan line can be aligned on a "word"-length boundary (64-bit on my MacBook Pro)?
Thanks for any insights; I'm having trouble picturing how this works.
Passing 0 for bytesPerRow: means more than you said in your comment. From the documentation:
If you pass in a rowBytes value of 0, the bitmap data allocated may be padded to fall on long word or larger boundaries for performance. … Passing in a non-zero value allows you to specify exact row advances.
So you're seeing it increase by 64 bytes at a time because that's how AppKit decided to round it up.
The minimum requirement for bytes per row is much simpler. It's bytes per pixel times pixels per row. That's all.
For a bitmap image rep backed by floats, you'd pass sizeof(float) * 8 for bitsPerSample, and bytes-per-pixel would be sizeof(float) * samplesPerPixel. Bytes-per-row follows from that; you multiply bytes-per-pixel by the width in pixels.
Likewise, if it's backed by unsigned bytes, you'd pass sizeof(unsigned char) * 8 for bitsPerSample, and bytes-per-pixel would be sizeof(unsigned char) * samplesPerPixel.
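To make the two notions concrete, here is a small sketch (the function names and the 64-byte alignment figure are my own, the alignment taken from the experiment above; AppKit does not document its exact rounding):

def min_bytes_per_row(pixels_wide, bits_per_sample, samples_per_pixel):
    # The minimum requirement: bytes per pixel times pixels per row.
    bytes_per_pixel = bits_per_sample * samples_per_pixel // 8
    return pixels_wide * bytes_per_pixel

def padded_bytes_per_row(pixels_wide, bits_per_sample, samples_per_pixel, alignment=64):
    # What passing bytesPerRow:0 appears to do: round the minimum
    # stride up to an alignment boundary for performance.
    minimum = min_bytes_per_row(pixels_wide, bits_per_sample, samples_per_pixel)
    return (minimum + alignment - 1) // alignment * alignment

# 8-bit grayscale, one sample per pixel, 100 pixels wide:
print(min_bytes_per_row(100, 8, 1))     # 100
print(padded_bytes_per_row(100, 8, 1))  # 128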

Extracting 32-bit RGBA value from NSColor

I've got an NSColor, and I really want the 32-bit RGBA value that it represents. Is there any easy way to get this, besides extracting the float components, then multiplying and ORing and generally doing gross, endian-dependent things?
Edit: Thanks for the help. Really, what I was hoping for was a Cocoa function that already did this, but I'm cool with doing it myself.
Another, more brute-force approach is to create a temporary CGBitmapContext and fill it with the color:
NSColor *someColor = {whatever};
uint8_t data[4]; // receives A, R, G, B
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Note: plain kCGImageAlphaFirst is not a supported bitmap-context format;
// a premultiplied variant is (components come back premultiplied by alpha).
CGContextRef ctx = CGBitmapContextCreate((void*)data, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(ctx, [someColor redComponent], [someColor greenComponent], [someColor blueComponent], [someColor alphaComponent]);
CGContextFillRect(ctx, CGRectMake(0, 0, 1, 1));
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
FWIW, there are no endianness issues with an 8-bit-per-component color value. Endianness only matters for integers of 16 bits or more. You can lay out the memory any way you want, but the 8-bit integer values are the same whether you are on a big-endian or little-endian machine. (ARGB is the default 8-bit component order for Core Graphics and Core Image, I believe.)
Why not just this?:
uint32_t r = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor redComponent])) * 255.0f);
uint32_t g = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor greenComponent])) * 255.0f);
uint32_t b = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor blueComponent])) * 255.0f);
uint32_t a = (uint32_t)(MIN(1.0f, MAX(0.0f, [someColor alphaComponent])) * 255.0f);
uint32_t value = (r << 24) | (g << 16) | (b << 8) | a;
Then you know exactly how it is laid out in memory.
Or this, if it's clearer to you:
uint8_t r = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor redComponent])) * 255.0f);
uint8_t g = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor greenComponent])) * 255.0f);
uint8_t b = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor blueComponent])) * 255.0f);
uint8_t a = (uint8_t)(MIN(1.0f, MAX(0.0f, [someColor alphaComponent])) * 255.0f);
uint8_t data[4];
data[0] = r;
data[1] = g;
data[2] = b;
data[3] = a;
Not all colors have an RGBA representation. They may have an approximation in RGBA, but that may or may not be accurate. Furthermore, there are "colors" that are drawn by Core Graphics as patterns (for example, the window background color on some releases of Mac OS X).
Converting the 4 floats to their integer representation, however you want to accomplish that, is the only way.
