I'm getting a CVImageBufferRef from my AVCaptureSession, and I'd like to take that image buffer and upload it over the network. To save space and time, I would like to do this without rendering the image into a CIImage or NSBitmapImageRep, which is the solution I've seen everywhere (like here: How can I obtain raw data from a CVImageBuffer object).
This is because my impression is that the CVImageBuffer might be compressed, which is awesome for me: if I render it, I have to uncompress it into a full bitmap and then upload the whole bitmap. I would like to take the compressed data (realizing that a single compressed frame might be unrenderable later by itself) just as it sits within the CVImageBuffer. I think this means I want the CVImageBuffer's base data pointer and its length, but there doesn't appear to be a way to get those within the API. Anybody have any ideas?
CVImageBuffer itself is an abstract type. Your image should be an instance of either CVPixelBuffer, CVOpenGLBuffer, or CVOpenGLTexture. The documentation for those types lists the functions you can use for accessing the data.
To tell which type you have, use the GetTypeID functions:
CVImageBufferRef image = …;
CFTypeID imageType = CFGetTypeID(image);
if (imageType == CVPixelBufferGetTypeID()) {
    // Pixel data
}
else if (imageType == CVOpenGLBufferGetTypeID()) {
    // OpenGL pbuffer
}
else if (imageType == CVOpenGLTextureGetTypeID()) {
    // OpenGL texture
}
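If yours turns out to be a CVPixelBuffer, you can get at the base pointer and length roughly like this (a minimal sketch; note that a CVPixelBuffer holds uncompressed pixel data, so this won't hand you a compressed frame):

CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)image;
// The base address is only valid while the buffer is locked.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t length = CVPixelBufferGetDataSize(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); // rows may be padded
// ... copy or send the bytes here ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);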
Hi, I'm trying to capture a picture using Kotlin and registerForActivityResult, but I always get a blurry image with no quality. I've read several posts, but I can't work out how to apply them to my application. I'm using a fragment to call the camera. Any suggestions? Sorry for my bad English; I've spent about a full week trying to make it work, and nothing. Thanks in advance.
private var imagenUri: Uri? = null

val startForResult = registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result: ActivityResult ->
    if (result.resultCode == Activity.RESULT_OK) {
        try {
            val intent = result.data
            intent!!.putExtra(MediaStore.EXTRA_OUTPUT, imagenUri)
            val bitMap = intent?.extras?.get("data") as Bitmap
            imagenUri = getImageUriFromBitmap(requireContext(), bitMap)
            binding.ivImagen.setImageURI(imagenUri)
            Toast.makeText(context, "la uri es: $imagenUri", Toast.LENGTH_SHORT).show()
        } catch (e: java.lang.Exception) {
            Toast.makeText(context, "NO SE HA PODIDO ENCONTRAR IMAGEN", Toast.LENGTH_SHORT).show()
        }
    }
}

binding.ibTomarFoto.setOnClickListener {
    startForResult.launch(Intent(MediaStore.ACTION_IMAGE_CAPTURE))
}
From the documentation:
public static final String ACTION_IMAGE_CAPTURE
Standard Intent action that can be sent to have the camera application capture an image and return it.
The caller may pass an extra EXTRA_OUTPUT to control where this image will be written. If the EXTRA_OUTPUT is not present, then a small sized image is returned as a Bitmap object in the extra field. This is useful for applications that only need a small image. If the EXTRA_OUTPUT is present, then the full-sized image will be written to the Uri value of EXTRA_OUTPUT.
So you need to add the EXTRA_OUTPUT extra to get a full-size image stored at a URI you supply. Otherwise you get a small image as a data payload in the result Intent (those bundles can't handle large objects).
It looks like you're already trying to do that - you've just added it in the wrong place. You need to add it to the Intent you call launch with, not the result one. It's a configuration option for the task you're launching!
So this should work:
binding.ibTomarFoto.setOnClickListener {
    startForResult.launch(
        Intent(MediaStore.ACTION_IMAGE_CAPTURE).putExtra(MediaStore.EXTRA_OUTPUT, imagenUri)
    )
}
And then remove the same putExtra line from your result-handler code (it doesn't do anything there, so there's no point having it).
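One assumption in the code above: EXTRA_OUTPUT only works if imagenUri already points somewhere the camera app can write to, created before you launch. A minimal sketch using FileProvider (the authority string and file name are placeholders, not from the original code):

// Needs java.io.File and androidx.core.content.FileProvider.
// "your.package.fileprovider" must match a <provider> authority in your manifest.
val photoFile = File(requireContext().cacheDir, "photo.jpg")
imagenUri = FileProvider.getUriForFile(requireContext(), "your.package.fileprovider", photoFile)

With EXTRA_OUTPUT set, the full-size image is written to imagenUri, so the result handler should read from that URI (e.g. binding.ivImagen.setImageURI(imagenUri)) instead of from the small "data" Bitmap.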
Is it possible to convert an NSAttributedString with attachments (RTFD, not RTF) to ASCII, edit the stream, and convert it back? So far I am able to convert an RTFD to a String stream, but turning it back into an NSData object does not work. Here's the code I'm using in a playground.
import Cocoa
func stream(attr: NSAttributedString) -> String? {
    if let d = attr.rtfd(from: NSMakeRange(0, attr.length), documentAttributes: [NSDocumentTypeDocumentAttribute: NSRTFDTextDocumentType]) {
        if let str = String(data: d, encoding: .ascii) { return str }
        else {
            print("Unable to produce RTFD string")
            return nil
        }
    }
    print("Unable to produce RTFD data stream")
    return nil
}
if let im = NSImage(named: "image.png") {
    let a = NSTextAttachment()
    a.image = im
    let s = NSAttributedString(attachment: a)
    if let str = stream(attr: s) {
        print("\(str)\n") // prints a string which contains RTF code combined with the NSTextAttachment string representation
        if let data = str.data(using: .ascii) { // this is where things stop working
            if let newRTF = NSAttributedString(rtfd: data, documentAttributes: nil) {
                print(newRTF)
            }
            else { print("rtfd was not created") }
        }
        else { print("could not make data") }
    }
}
What am I missing? Or is my entire concept wrong here? I am doing this to get around a limitation of the way OS X handles images attached in RTF documents.
Edit:
The limitation I am trying to address is setting the size of an image in an RTF stream. The text handling system requires that we use NSTextAttachment. Whenever an image is pasted in, it is automatically sized to whatever its pixel height and width are, and unfortunately there is no way to control this property. I have tried here, and also all the techniques here.
As for the ASCII stream: I'm not trying to edit the image attachment itself. When the stream is printed, the actual RTF code is visible and editable. This works and would be a good workaround for the limitation. All I need is to edit the RTF code and change the \width and \height properties that Apple uses.
After your edit I can see what you are trying to do. Interesting idea, but it won't work - at least not easily.
Take a look at the value of d: it is not an ASCII string stored as a value of type Data (or NSData). It is a serialised representation of multiple items: the RTF stream (text) and the image data (binary). If you convert this to an ASCII string and back again, it is not going to work; you can't represent arbitrary binary data as ASCII unless you encode it (e.g. with something like Base64).
Now you could attempt what you are trying in a slightly different way: skip the conversion to ASCII and edit the Data value directly. That is certainly possible, but as you are editing a format you don't know (the serialised representation) you would have to be careful... And even if you succeed in editing the representation, there is no guarantee that converting back to an NSAttributedString with an NSTextAttachment will preserve your edits.
I suggest you tackle this another way. You have an NSAttributedString, and you don't like the RTF produced when you write it to a file. So edit the RTF after it is written: open up the RTFD package, open the contained RTF file (TXT.rtf), edit it, and write it back.
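For example, something along these lines (a rough sketch; it assumes the text part of the RTFD package is named TXT.rtf, as it normally is, and the \width values shown are made up; error handling omitted):

if let wrapper = s.rtfdFileWrapper(from: NSMakeRange(0, s.length), documentAttributes: [:]),
   let rtf = wrapper.fileWrappers?["TXT.rtf"],
   let rtfData = rtf.regularFileContents,
   let rtfText = String(data: rtfData, encoding: .ascii) {
    // Edit the RTF source here, e.g. rewrite the \width and \height controls.
    let edited = rtfText.replacingOccurrences(of: "\\width1000", with: "\\width500")
    wrapper.removeFileWrapper(rtf)
    wrapper.addRegularFile(withContents: edited.data(using: .ascii)!, preferredFilename: "TXT.rtf")
    try? wrapper.write(to: URL(fileURLWithPath: "edited.rtfd"), options: [], originalContentsURL: nil)
}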
HTH
I'm altering someone else's code. They used PNGs, which are loaded via BufferedImage. I need to load a TGA instead, which is simply an 18-byte header plus BGR codes. I have the textures loaded and running, but I get a gray box instead of the texture, and I don't even know how to DEBUG this.
Textures are loaded in a ByteBuffer:
final static int datasize = (WIDTH * HEIGHT * 3) * 2; // Double buffer size for OpenGL // not +18, no header
static ByteBuffer buffer = ByteBuffer.allocateDirect(datasize);

FileInputStream fin = new FileInputStream("/Volumes/RAMDisk/shot00021.tga");
FileChannel inc = fin.getChannel();
inc.position(18); // skip header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
I've followed this: how-to-manage-memory-with-texture-in-opengl ... because I am updating the texture once per frame, like video.
Called once:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
Called repeatedly:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, width, height, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, byteBuffer);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
return textureID;
The render code hasn't changed and is based on:
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
Make sure you set the texture sampling mode, especially the min filter: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR). The default setting is mip mapped (GL_NEAREST_MIPMAP_LINEAR), so unless you upload mip maps the texture is incomplete and sampling it will give a blank (typically white) result.
So either set the texture to not use mip maps, or generate them. One way to do the latter is to call glGenerateMipmap after the glTexImage2D call.
(see https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexParameter.xml).
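In LWJGL terms that is something like the following (a sketch; glGenerateMipmap requires a GL 3.0 context and lives in GL30):

// Either opt out of mip mapping entirely:
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
// ...or generate the missing mip levels after uploading level 0:
GL30.glGenerateMipmap(GL11.GL_TEXTURE_2D);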
It's a very common GL pitfall and something people just tend to know after getting bitten by it a few times.
There is no easy way to debug stuff like this. There are good GL debugging tools in, for example, Xcode, but they will not tell you about this case.
Debugging GPU code is always a hassle. I would bet my money on big industry progress in this area as more companies discover the power of the GPU. Until then, I'll share my two best GPU debugging friends:
1) Define a function to print OGL errors:
int printOglError(const char *file, int line)
{
    /* Returns 1 if an OpenGL error occurred, 0 otherwise. */
    GLenum glErr;
    int retCode = 0;

    glErr = glGetError();
    while (glErr != GL_NO_ERROR) {
        printf("glError in file %s # line %d: %s\n", file, line, gluErrorString(glErr));
        retCode = 1;
        glErr = glGetError();
    }
    return retCode;
}

#define printOpenGLError() printOglError(__FILE__, __LINE__)
And call it after your render draw calls (possible earlier errors will also show up):
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
printOpenGLError();
This alerts you if you make some invalid operation (which might just be your case), but you usually have to find where the error occurs by trial and error. (A Java/LWJGL version of this helper is sketched after this list.)
2) Check out gDEBugger, free software with tons of GPU memory information.
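Since the code in the question is Java/LWJGL, here is a rough translation of the helper from (1) (a sketch; GLU.gluErrorString comes from LWJGL 2's org.lwjgl.util.glu package):

import org.lwjgl.opengl.GL11;
import org.lwjgl.util.glu.GLU;

public static boolean printOglError(String where) {
    // Returns true if an OpenGL error occurred, false otherwise.
    boolean hadError = false;
    int glErr = GL11.glGetError();
    while (glErr != GL11.GL_NO_ERROR) {
        System.out.println("glError at " + where + ": " + GLU.gluErrorString(glErr));
        hadError = true;
        glErr = GL11.glGetError();
    }
    return hadError;
}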
Edit:
I would also recommend using the open-source lib DevIL - it's quite competent at loading various image formats.
Thanks to Felix: by not calling glTexSubImage2D (leaving the memory valid, but uninitialized) I noticed a remnant pattern left by the default memory. This indicated that the texture was being displayed, but that the load was most likely the problem.
UPDATE:
The problem with the code above is essentially the buffer. The buffer is 1024*1024, but it is only partially filled by the read, leaving the limit marker of the ByteBuffer at 2359296 (1024*768*3) instead of 3145728 (1024*1024*3). This gives the error:
Number of remaining buffer elements is ..., must be at least ...
I thought that OpenGL needed space to return data, so I doubled the size of the buffer to compensate for the error:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
This is wrong. What is needed is the flip() function (big THANKS to Reto Koradi for the small hint about the buffer rewind) to put the ByteBuffer in read mode. Since the buffer is only semi-full, the OpenGL buffer check gives an error. The correct fix is not to double the buffer size, but to use buffer.position(buffer.capacity()) to mark the buffer as completely filled before doing a flip():
final static int datasize = WIDTH * HEIGHT * 3; // not +18, no header

buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
buffer.position(buffer.capacity()); // make sure the buffer is completely FILLED!
buffer.flip(); // flip buffer to read mode
To figure this out, it is helpful to hardcode the contents of the buffer to make sure the OpenGL calls are working, isolating the load problem. Then, once the OpenGL calls are correct, concentrate on the loading of the buffer. As suggested by Felix K, it is good to make sure one texture has been drawn correctly before calling glTexSubImage2D repeatedly.
Some ideas which might cause the issue:
Your texture is disposed somewhere. I don't know the whole code, but I guess somewhere there is a glDeleteTextures, and this could cause some issues if called at the wrong time.
Are the texture width and height powers of two? If not, this might be an issue depending on your hardware; old hardware sometimes won't support non-power-of-two textures.
The texture parameters were changed between the draw calls at some other point (make a debug check of the parameters with glGetTexParameter).
There could be a loading issue when loading the next image (edit: or even the first image). Check if the first image is displayed without loading the next images. If so, it must be one of the cases above.
I need to decode both the bitmap and the metadata from a PNG input stream using the PNGJ library (http://code.google.com/p/pngj/). The problem is that decoding the metadata will advance the stream, and then I cannot use
bitmap = BitmapFactory.decodeStream().
Creating the Bitmap on my own is OK, but if I need to, say, scale the bitmap with interpolation, I'd rather use BitmapFactory. To use it I have to create a copy of the InputStream every time I need PNGJ for getting the metadata and BitmapFactory for getting the bitmap. It would be nice to return the metadata AND the Bitmap from a single PNGJ call (at least for the most common ARGB_8888 format).
In a nutshell, I have to copy the stream to be used by the Java libraries, which looks like a waste. Returning a bitmap would be one solution.
// register an auxiliary chunk name
PngChunk.factoryRegister(ThumbNailProvider.chunkID, chunkPROP.class);

// reader for the stream
PngReader pngr = new PngReader(inStream, "debug label PNG reader");
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
PngWriter pngw = new PngWriter(outputStream, pngr.imgInfo);

// copy pre-data chunks
pngw.copyChunksFirst(pngr, ChunkCopyBehaviour.COPY_ALL_SAFE);

// copy image data
for (int row = 0; row < pngr.imgInfo.rows; row++) {
    ImageLine l1 = pngr.readRow(row);
    pngw.writeRow(l1, row);
}

// copy after-data chunks
pngw.copyChunksLast(pngr, ChunkCopyBehaviour.COPY_ALL);
pngr.end(); // closes inStream, but not its copy
pngw.end(); // closes the output stream

// save a copy of the stream for the Java libraries
data.inputStream = new ByteArrayInputStream(outputStream.toByteArray());

// read the chunk
ChunksList chunkList = pngr.getChunksList();
PngChunk chunk = chunkList.getById1(ThumbNailProvider.chunkID);
if (chunk != null) {
    ...
}
This is the problem of having two independent stream consumers, say Class1.parse(inputStream) and Class2.decode(inputStream), that we want to feed from the same single stream; it has no simple, elegant solution if we have no control over how the consumers eat the stream.
Simple solutions, but not very elegant and probably impractical, are: close and reopen the stream (infeasible if we are reading from a network stream), buffer the full stream content in memory, or buffer it to a temporary file.
In your concrete case, the alternatives I can think of are:
1) Let PNGJ consume and decode the data and create the Bitmap yourself, filling in the pixels with setPixels(). This, among other inconveniences, would require you to do the proper color conversions; see the sketch after this list.
2) Use PngReader as an input filter stream, so that it only parses the metadata and passes the full stream on to the consumer. Currently this is not possible without tweaking the PNGJ code. I will give it a look, and if I implement this feature I'll post it here.
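Here is roughly what option 1 could look like (a sketch only; it assumes an 8-bit RGB image and the older PNGJ API used in the question, where ImageLine exposes one int per sample through its scanline field - check this against your PNGJ version):

int w = pngr.imgInfo.cols, h = pngr.imgInfo.rows;
int[] pixels = new int[w * h];
for (int row = 0; row < h; row++) {
    ImageLine line = pngr.readRow(row);
    for (int col = 0; col < w; col++) {
        int r = line.scanline[col * 3];     // R, G, B samples, 8 bits each
        int g = line.scanline[col * 3 + 1];
        int b = line.scanline[col * 3 + 2];
        pixels[row * w + col] = 0xFF000000 | (r << 16) | (g << 8) | b; // opaque ARGB
    }
}
Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
bitmap.setPixels(pixels, 0, w, 0, 0, w, h);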
I have a class which inherits QAbstractTableModel and holds some complex structs in a QMap. The QVariant data(QModelIndex index, ...) method just returns an enum which describes how a custom item delegate should draw the contents of a cell. I would like to implement drag and drop in this model so that users can reorder these structs in the QMap, but I can't quite figure out how Qt would like me to do this. All I need is to see the source and destination indices of the drag/drop operation and I can take care of the rest, but the closest thing I've found in QAbstractItemModel is the dropMimeData() function. dropMimeData() doesn't give me the source index and requires me to convert the data into some MIME type (plain text, etc.), which it is definitely not. I can hack my way through this by creating a QMimeData that just contains the source index, but I would like to really learn to use Qt as it's meant to be used, and I feel like I'm missing something. Any thoughts?
Just to help clarify a bit: the application is an animation program which acts sort of like Adobe Flash. The class which inherits QAbstractTableModel has a QMap<int, FrameState> (with struct FrameState { QPointF pos; bool visible; }) to hold keyframes. The state of this QMap is what I would like to display and have users edit. I draw a green circle if there is a visible keyframe, a red circle if there is an invisible keyframe, a line if the previous keyframe was visible, and nothing if the previous keyframe was invisible. I would like users to be able to drag the keyframes around to change their QMap key.
Thanks!
You can use the view's dragEnterEvent to get the indices that were selected initially:
void DropTreeView::dragEnterEvent(QDragEnterEvent *event)
{
    QTreeView::dragEnterEvent(event);
    const QItemSelectionModel *sm = selectionModel();
    if (!sm)
        return;
    dragStartIndicies = sm->selectedIndexes();
}
You'll need to use MIME types for the drag and drop, but Qt provides a nice way to do that using QDataStream:
QMimeData *YourModel::mimeData(const QModelIndexList &indexes) const
{
    QByteArray encodedData;
    QDataStream stream(&encodedData, QIODevice::WriteOnly);
    stream << yourQMap; // OR almost any Qt data structure
    QMimeData *mData = new QMimeData();
    mData->setData(YOUR_MIME_TYPE, encodedData);
    return mData;
}
On the receiving end, you can get your data structure (i.e. QMap if that's what you want to use) back out of the QDataStream:
QByteArray encodedData = yourMimeData->data(YOUR_MIME_TYPE);
QDataStream stream(&encodedData, QIODevice::ReadOnly);
QMap<int, FrameState> decodedMap; // the value type needs QDataStream << and >> operators
stream >> decodedMap;
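To get the destination of the drop, override dropMimeData() in your model; its row and parent arguments tell you where the user dropped. A sketch, assuming the QMap<int, FrameState> from the question and that FrameState has QDataStream operators defined:

bool YourModel::dropMimeData(const QMimeData *data, Qt::DropAction action,
                             int row, int column, const QModelIndex &parent)
{
    if (action == Qt::IgnoreAction)
        return true;
    if (!data->hasFormat(YOUR_MIME_TYPE))
        return false;

    // 'row' and 'column' within 'parent' give the drop destination; row is -1
    // when the drop lands directly on an item rather than between items.
    QByteArray encodedData = data->data(YOUR_MIME_TYPE);
    QDataStream stream(&encodedData, QIODevice::ReadOnly);
    QMap<int, FrameState> decodedMap;
    stream >> decodedMap;

    // ... rekey the dragged frames here and emit dataChanged()/layoutChanged() ...
    return true;
}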