Retrieve ByteBuffer object from Bytes<ByteBuffer> - chronicle

I have been trying to retrieve the (direct) ByteBuffer object reference from a Bytes object. Here is the code:
TestMessageWire testMessageWire = new TestMessageWire();
testMessageWire.data('H');
testMessageWire.setIntData(100);
//Below does not give ByteBuffer with correct position & offset
Bytes<ByteBuffer> bytes = Bytes.elasticByteBuffer(10);
Wire wire = new RawWire(bytes);
testMessageWire.writeMarshallable(wire);
ByteBuffer byteBuffer = ByteBuffer.wrap(bytes.toByteArray());
//Another approach, but still does not populate "byteBuffer" object correctly.
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(6);
Bytes<?> bytes = Bytes.wrapForWrite(byteBuffer);
Wire wire = new RawWire(bytes);
testMessageWire.writeMarshallable(wire);
I want to avoid multiple allocations when creating the ByteBuffer object, so I want to reuse the same ByteBuffer wrapped by Bytes. How can I achieve this?

You can create a Bytes which wraps a ByteBuffer. This avoids redundant copies.
Bytes<ByteBuffer> bytes = Bytes.elasticByteBuffer();
ByteBuffer bb = bytes.underlyingObject();
Nevertheless, you have to ensure that the ByteBuffer's position and limit are maintained, as Bytes has its own separate read/write position and limit.
E.g. after writing, you might want to do the following so you can read from the ByteBuffer what was written to the Bytes.
bb.position((int) bytes.readPosition());
bb.limit((int) bytes.readLimit());
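Putting the pieces together with the asker's TestMessageWire class, a minimal sketch of the write-then-read-back pattern might look like the following. Note that with an elastic buffer the underlying ByteBuffer can be replaced if the Bytes has to grow, so it is safest to fetch underlyingObject() after writing; calling clear() on the Bytes should let you reuse the same buffer for the next message without reallocating.
// minimal sketch, reusing TestMessageWire from the question
Bytes<ByteBuffer> bytes = Bytes.elasticByteBuffer();
Wire wire = new RawWire(bytes);

TestMessageWire testMessageWire = new TestMessageWire();
testMessageWire.data('H');
testMessageWire.setIntData(100);
testMessageWire.writeMarshallable(wire);

// fetch the underlying buffer after writing, in case the elastic Bytes grew
ByteBuffer bb = bytes.underlyingObject();
bb.position((int) bytes.readPosition());
bb.limit((int) bytes.readLimit());
// bb now exposes exactly what was written, with no copy

// reset the read/write positions to reuse the same Bytes for the next message
bytes.clear();
Unlike ByteBuffer.wrap(bytes.toByteArray()) in the question, this does not copy the data, and it keeps a single buffer alive across messages.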

Related

How to query needed size for output buffer when calling AcceptSecurityContext?

I tried the usual Windows way: I passed nullptr as the output buffer pointer and 0 as the size. AcceptSecurityContext fails with the error SEC_E_INSUFFICIENT_MEMORY. I was expecting to get the needed size in OutSecBuff.cbBuffer, but it is 0. I then call it again with a huge buffer; the call succeeds, but the context is invalid and later calls fail.
// Query needed buffer size
secStatus = AcceptSecurityContext(&hcred, &hctxt, &InBuffDesc, attr, SECURITY_NATIVE_DREP,
                                  &hctxt, &OutBuffDesc, &attr, nullptr);
if (SEC_E_INSUFFICIENT_MEMORY == secStatus)
{
    // Allocate a buffer of the needed size
    OutSecBuff.cbBuffer = *pcbOut;
    OutSecBuff.pvBuffer = pOut;
    // Call again with the buffer of the required size
    secStatus = AcceptSecurityContext(&hcred, &hctxt, &InBuffDesc,
                                      attr, SECURITY_NATIVE_DREP, &hctxt, &OutBuffDesc, &attr, nullptr);
}
If I preallocate a huge buffer, everything works fine.
I would like to dynamically allocate a buffer of the needed size.
SSPI takes a different approach elsewhere: when querying a security package with QuerySecurityPackageInfo, the maximum output buffer size is returned in the cbMaxToken field. You allocate the buffer once and can be assured that it will be large enough for all requests.

best practice threejs geometry disposal?

If I have something like the following:
myObj = new THREE.Object3D();
scene.add(myObj);
doIt();

function doIt() {
    var geometry = new THREE.SphereGeometry( 1, 8, 8 );
    var mesh = new THREE.Mesh( geometry, meshMaterial );
    myObj.add(mesh);
}
As far as I understand, the variables geometry and mesh go out of scope as soon as the function concludes. But the scene object still contains myObj, which still contains the mesh, which still contains the geometry. So now the geometry lives within myObj within scene. Am I getting it right so far?
But if I then do
scene.remove(myObj);
and also
myObj = new Object();
Then I would think there is no more mesh, no more geometry. I no longer have any extant variable or object which contains or refers to those things. But they still exist somewhere, taking up memory?
There is a dispose() function in three.js, but I don't understand where in my sequence of code it should be normally applied, or exactly why?
I am working on a project which needs to create and then remove lots of objects, so I'm afraid if I don't do it right, there will be performance issues.
Any wisdom much appreciated.
In JavaScript, objects exist in memory until they are cleared out by the garbage collector. By assigning a new object to the variable, you are just rebinding that name to new data; the old data still exists in memory until the garbage collector runs and removes it.
Since JavaScript's memory is only cleared out by the garbage collector, and you can't manually trigger a garbage collection (and you shouldn't have to), you should use object pooling instead of creating a ton of disposable objects.
Note: This doesn't mean you should always use object pooling, but rather, you should use an object pool if you find yourself creating and dereferencing a large number of objects within a short time span.
Remember, don't optimize prematurely.
In simple terms, object pooling is the process of retaining a set of unused objects which share a type. When you need a new object for your code, rather than allocating a new one from the system Memory Heap, you instead recycle one of the unused objects from the pool. Once the external code is done with the object, rather than releasing it to main memory, it is returned to the pool. Because the object is never dereferenced (aka deleted) from code it won’t be garbage collected. Utilizing object pools puts control of memory back in the hands of the programmer, reducing the influence of the garbage collector on performance.
source
You can find various object pool boilerplates online, but here's an example: https://gist.github.com/louisstow/5609992
Note: there's no reason to keep a large pool of excess objects in memory if you are no longer creating a large amount of objects. You should reduce the pool size, freeing the unused objects, and allowing the GC to collect them. You can always increase the size again if you need to. Just don't switch between shrinking and increasing the pool size too quickly, otherwise you would just be defeating the point of an object pool.
var objectPool = [];
var marker = 0;
var poolSize = 0;

//any old JavaScript object
function commonObject () { }

commonObject.create = function () {
    if (marker >= poolSize) {
        commonObject.expandPool(poolSize * 2);
    }
    var obj = objectPool[marker++];
    obj.index = marker - 1;
    obj.constructor.apply(obj, arguments);
    return obj;
}

//push new objects onto the pool
commonObject.expandPool = function (newSize) {
    for (var i = 0; i < newSize - poolSize; ++i) {
        objectPool.push(new commonObject());
    }
    poolSize = newSize;
}

//swap it with the last available object
commonObject.prototype.destroy = function () {
    marker--;
    var end = objectPool[marker];
    var endIndex = end.index;
    objectPool[marker] = this;
    objectPool[this.index] = end;
    end.index = this.index;
    this.index = endIndex;
}

//make this as big as you think you need
commonObject.expandPool(1000);

How can I insert a single byte to be sent prior to an I2C data package?

I am developing an application in Atmel Studio 6 using the xMega32a4u. I'm using the TWI libraries provided by Atmel. Everything is going well for the most part.
Here is my issue: In order to update an OLED display I am using (SSD1306 controller, 128x32), the entire contents of the display RAM must be written immediately following the I2C START command, slave address, and control byte so the display knows to enter the data into the display RAM. If the control byte does not immediately precede the display RAM package, nothing works.
I am using a Saleae logic analyzer to verify that the bus is doing what it should.
Here is the function I am using to write the display:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
    uint8_t data_array[513];
    data_array[0] = SSD1306_DATA_BYTE;
    for (int i = 0; i < 512; ++i){
        data_array[i+1] = buffer[i];
    }
    OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
    OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
    OLED_command(SSD1306_SETSTARTLINE | 0x00);
    twi_package_t buffer_send = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = data_array,
        .length = 513
    };
    twi_master_write(&TWIC, &buffer_send);
}
Clearly, this is very inefficient, as each call to this function copies the entire array "buffer" into a new array "data_array", one element at a time. The point of this is to insert the control byte (SSD1306_DATA_BYTE = 0x40) into the array so that the entire "package" is sent at once and the control byte is in the right place. I could make the original "buffer" array one element larger and add the control byte as its first element to skip this process, but that makes the size 513 rather than 512 and might mess with some of the text/graphical functions that manipulate this array and depend on it being the correct size.
Now, I thought I could write the code like this:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
    uint8_t data_byte = SSD1306_DATA_BYTE;
    OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
    OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
    OLED_command(SSD1306_SETSTARTLINE | 0x00);
    twi_package_t data_control_byte = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = &data_byte,
        .length = 1
    };
    twi_master_write(&TWIC, &data_control_byte);
    twi_package_t buffer_send = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = buffer,
        .length = 512
    };
    twi_master_write(&TWIC, &buffer_send);
}
That doesn't work. The first "twi_master_write" command sends a START, address, control, STOP. Then the next such command sends a START, address, data buffer, STOP. Because the control byte is missing from the latter transaction, this does not work. All I need is to insert a 0x40 byte between the address byte and the buffer array when it is sent over the I2C bus. twi_master_write is a function that is provided in the Atmel TWI libraries. I've tried to examine the libraries to figure out its inner workings, but I can't make sense of it.
Surely, instead of figuring out how to recreate a twi_write function to work the way I need, there is an easier way to add this preceding control byte? Ideally one that is not so wasteful of clock cycles as my first code example? Realistically the display still updates very fast, more than enough for my needs, but that does not change the fact this is inefficient code.
I appreciate any advice you all may have. Thanks in advance!
How about having buffer and data_array point to the same uint8_t[513] array, but with buffer starting at its second element? Then you can continue to use buffer as you do today, but also use data_array directly without first having to copy all the elements from buffer.
uint8_t data_array[513];
uint8_t *buffer = &data_array[1];

how to read bitmap and meta-data from a png stream/file efficiently?

I need to decode both the bitmap and the metadata from a PNG input stream using the PNGJ library (http://code.google.com/p/pngj/). The problem is that decoding the metadata will advance the stream, and then I cannot use
bitmap = BitmapFactory.decodeStream().
Creating the Bitmap on my own is OK, but if I need to, say, scale the bitmap with interpolation I'd rather use BitmapFactory. To use it, I have to create a copy of the InputStream every time I use PNGJ to get the metadata and BitmapFactory to get the bitmap. It would be nice to return the metadata AND the Bitmap from a single PNGJ call (at least for the most common ARGB_8888 format).
In a nutshell, I have to copy the stream just so it can also be used by the Java libraries, which looks like a waste. Returning a bitmap would be one solution.
// register an auxiliary chunk name
PngChunk.factoryRegister(ThumbNailProvider.chunkID, chunkPROP.class);
// reader for the stream
PngReader pngr = new PngReader(inStream, "debug label PNG reader");
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
PngWriter pngw = new PngWriter(outputStream, pngr.imgInfo);
// copy pre-data chunks
pngw.copyChunksFirst(pngr, ChunkCopyBehaviour.COPY_ALL_SAFE);
// copy image data
for (int row = 0; row < pngr.imgInfo.rows; row++) {
    ImageLine l1 = pngr.readRow(row);
    pngw.writeRow(l1, row);
}
// copy after-data chunks
pngw.copyChunksLast(pngr, ChunkCopyBehaviour.COPY_ALL);
pngr.end(); // close inStream but not its copy
pngw.end(); // close the output stream
// save a copy of the stream for the Java libraries
data.inputStream = new ByteArrayInputStream(outputStream.toByteArray());
// read the chunk
ChunksList chunkList = pngr.getChunksList();
PngChunk chunk = chunkList.getById1(L2ThumbNailProvider.chunkID);
if (chunk != null) {
    ...
}
This is the problem of having two independent stream consumers, say Class1.parse(inputStream) and Class2.decode(inputStream), that both need to consume the same single stream; it has no simple, elegant solution if we have no control over how the consumers eat the stream.
Simple, but not very elegant (and probably impractical) solutions are: close and reopen the stream (unfeasible if we are reading from a network stream), or buffer the full stream content in memory or in a temporary file.
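For the buffer-in-memory route, a minimal sketch (using only the classes already shown above; the 8 KB chunk size and the "metadata pass" label are arbitrary) would be to read the stream fully into a byte array once, then hand an independent ByteArrayInputStream to each consumer:
// buffer the PNG once
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] chunk = new byte[8192];
int n;
while ((n = inStream.read(chunk)) != -1) {
    baos.write(chunk, 0, n);
}
byte[] pngData = baos.toByteArray();

// PNGJ gets its own stream for the metadata (read/skip the rows and call end(),
// as in the code above, then use getChunksList())
PngReader pngr = new PngReader(new ByteArrayInputStream(pngData), "metadata pass");

// BitmapFactory gets a fresh stream for the pixels
Bitmap bitmap = BitmapFactory.decodeStream(new ByteArrayInputStream(pngData));
This still holds the whole PNG in memory, but it copies the data only once instead of re-encoding it through PngWriter as in the question.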
In your concrete case, the alternatives I can think of are:
1) Let PNGJ consume and decode the data and create the Bitmap yourself, filling in the pixels with setPixels() (a rough sketch follows after this list). This, among other inconveniences, would require you to do the proper color conversions.
2) Use PngReader as an input filter stream, so that it only parses the metadata and passes the full stream on to the consumer. Currently this is not possible without tweaking the PNGJ code. I will give it a look, and if I implement this feature I'll post it here.
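For alternative 1, here is a very rough sketch. It assumes an 8-bit RGB image so that each ImageLine carries three samples per pixel in its scanline array; alpha, palette, and other formats would need the color conversions mentioned above:
PngReader pngr = new PngReader(inStream, "decode to bitmap");
int cols = pngr.imgInfo.cols;
int rows = pngr.imgInfo.rows;
int[] pixels = new int[cols * rows];
for (int row = 0; row < rows; row++) {
    ImageLine line = pngr.readRow(row);
    for (int col = 0; col < cols; col++) {
        int i = col * 3;                        // 3 samples per pixel for RGB8
        int r = line.scanline[i];
        int g = line.scanline[i + 1];
        int b = line.scanline[i + 2];
        pixels[row * cols + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
    }
}
pngr.end();
Bitmap bitmap = Bitmap.createBitmap(cols, rows, Bitmap.Config.ARGB_8888);
bitmap.setPixels(pixels, 0, cols, 0, 0, cols, rows);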

Can't add System.Windows.Media.Imaging.JpegBitmapEncoder in WP

I'm trying to use the type JpegBitmapEncoder in the namespace System.Windows.Media.Imaging, but I can't seem to use it. The namespace itself is available and I can use it, but for some reason JpegBitmapEncoder is not there... How can I use it?
Perhaps you can use the extension methods on WriteableBitmap to load and save a JPEG instead.
Extension methods: LoadJpeg, SaveJpeg
If you want to get a byte array from the image, use a MemoryStream and its ToArray to get the data.
MemoryStream stream = new MemoryStream();
image.SaveJpeg(stream, width, height, 0, 80);
stream.Position = 0;
byte[] buffer = stream.ToArray();
