I want to convert a UINT16 monochrome image to an 8-bit image, in C++.
I have that image in a
char *buffer;
I'd like to give the new converted buffer to a QImage (Qt).
I'm trying with FreeImagePlus:
fipImage fimage;
if (fimage.loadFromMemory(...) == false)
//error
loadFromMemory needs a fipMemoryIO address:
loadFromMemory(fipMemoryIO &memIO, int flag = 0)
So I do
fipImage fimage;
BYTE *buf = (BYTE*)malloc(gimage.GetBufferLength() * sizeof(BYTE));
// 'buf' is empty, I have to fill it with 'buffer' content
// how can I do it?
fipMemoryIO memIO(buf, gimage.GetBufferLength());
fimage.loadFromMemory(memIO);
if (fimage.convertTo8Bits() == true)
cout << "Good";
Then I would do something like
fimage.saveToMemory(...
or
fimage.saveToHandle(...
I don't understand what a FREE_IMAGE_FORMAT is, which is the first argument to either of those two functions. I can't find information on that type in the FreeImage documentation.
Then I'd finish with
imageQt = new QImage(destiny, dimX, dimY, QImage::Format_Indexed8);
How can I fill 'buf' with the content of the initial buffer?
And get the data from the fipImage to a uchar* data for a QImage?
Thanks.
The conversion is simple to do in plain old C++, no need for external libraries unless they are significantly faster and you care about such a speedup. Below is how I'd do the conversion, at least as a first cut. The data is converted in place, inside the input buffer, since the output is smaller than the input.
QImage from16Bit(void * buffer, int width, int height) {
    int size = width*height*2; // length of data in buffer, in bytes
    if (!size) return QImage();
    quint8 * output = reinterpret_cast<quint8*>(buffer);
    const quint16 * input = reinterpret_cast<const quint16*>(buffer);
    do {
        *output++ = *input++ >> 8; // keep the most significant byte of each pixel
    } while (size -= 2);
    // Wrap the start of the buffer, not the advanced 'output' pointer.
    return QImage(static_cast<uchar*>(buffer), width, height, width, QImage::Format_Indexed8);
}
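Note that this QImage constructor does not copy the data, so the buffer must outlive the image, and Format_Indexed8 needs a color table before it displays as grayscale. A short usage sketch (raw, width and height stand in for your own variables):
QImage img = from16Bit(raw, width, height);
QVector<QRgb> gray(256);
for (int i = 0; i < 256; ++i)
    gray[i] = qRgb(i, i, i); // identity grayscale palette
img.setColorTable(gray);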
I'm new to FFMPEG and trying to use it to do some screen capture to a video file, but after a lot of online searching I am stumped as to what I'm doing wrong. Basically, I've already done the work of capturing screen data via DirectX, which stores it in a BGR pixel format, and I'm just trying to put each frame into a video file. There are two functions: setup, which does all the ffmpeg initialization work, and addImage, which is called in the main program loop and puts each buffer of BGR image data into the video file. The technique I'm using is to make two frames, one with the BGR data and one with YUV420P (it doesn't need to be the latter, but after a lot of trial and error it was all I was able to get working with H.264), use sws_scale to copy data between the two, and then send that frame to video.mp4.

The file seems to be having data written to it successfully (the file size grows and grows as the program runs), but when I try to view it in VLC I see nothing. Indeed, VLC fails to fetch a length for the video, and the codec and media information dialogs are both empty. I turned on ffmpeg verbose logging, but all that is spit out is the following:
Setting default whitelist 'Epu��'
Timestamps are unset in a packet for stream -1259342440. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
Encoder did not produce proper pts, making some up.
From what I am reading, I understand these to be warnings rather than errors that would totally corrupt my video file. I separately went through all the error codes being returned and everything seems nominal to me (zero for success for most calls, -11 (AVERROR(EAGAIN)) sometimes for avcodec_receive_packet, but the docs indicate that's expected sometimes).
Based on my understanding of things as they are, this should be working, but isn't, and the logs and error codes give me nothing to go on, so someone with experience with this I reckon would save me a ton of time. The code is as follows:
VideoService.h
#ifndef VIDEO_SERVICE_H
#define VIDEO_SERVICE_H
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
}
class VideoService {
public:
void setup();
void addImage(unsigned char* data, int lineSize, int width, int height, int align);
private:
AVCodecContext* context;
AVFormatContext* formatContext;
AVFrame* bgrFrame;
AVFrame* yuvFrame;
AVStream* videoStream;
SwsContext* swsContext;
};
#endif
VideoService.cpp
#include "VideoService.h"
#include <stdio.h>
void FfmpegLogCallback(void *ptr, int level, const char *fmt, va_list vargs)
{
FILE* f = fopen("ffmpeg.txt", "a");
fprintf(f, fmt, vargs);
fclose(f);
}
void VideoService::setup() {
int result = 0;
av_log_set_level(AV_LOG_VERBOSE);
av_log_set_callback(FfmpegLogCallback);
bgrFrame = av_frame_alloc();
bgrFrame->width = 1920;
bgrFrame->height = 1080;
bgrFrame->format = AV_PIX_FMT_BGRA;
bgrFrame->time_base.num = 1;
bgrFrame->time_base.den = 60;
result = av_frame_get_buffer(bgrFrame, 1);
yuvFrame = av_frame_alloc();
yuvFrame->width = 1920;
yuvFrame->height = 1080;
yuvFrame->format = AV_PIX_FMT_YUV420P;
yuvFrame->time_base.num = 1;
yuvFrame->time_base.den = 60;
result = av_frame_get_buffer(yuvFrame, 1);
const AVOutputFormat* outputFormat = av_guess_format("mp4", "video.mp4", "video/mp4");
result = avformat_alloc_output_context2(
&formatContext,
outputFormat,
"mp4",
"video.mp4"
);
formatContext->oformat = outputFormat;
const AVCodec* codec = avcodec_find_encoder(AVCodecID::AV_CODEC_ID_H264);
result = avio_open2(&formatContext->pb, "video.mp4", AVIO_FLAG_WRITE, NULL, NULL);
videoStream = avformat_new_stream(formatContext, codec);
AVCodecParameters* codecParameters = videoStream->codecpar;
codecParameters->codec_type = AVMediaType::AVMEDIA_TYPE_VIDEO;
codecParameters->codec_id = AVCodecID::AV_CODEC_ID_HEVC;
codecParameters->width = 1920;
codecParameters->height = 1080;
codecParameters->format = AVPixelFormat::AV_PIX_FMT_YUV420P;
videoStream->time_base.num = 1;
videoStream->time_base.den = 60;
result = avformat_write_header(formatContext, NULL);
codec = avcodec_find_encoder(videoStream->codecpar->codec_id);
context = avcodec_alloc_context3(codec);
context->time_base.num = 1;
context->time_base.den = 60;
avcodec_parameters_to_context(context, videoStream->codecpar);
result = avcodec_open2(context, codec, nullptr);
swsContext = sws_getContext(1920, 1080, AV_PIX_FMT_BGRA, 1920, 1080, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
}
void VideoService::addImage(unsigned char* data, int lineSize, int width, int height, int align) {
int result = 0;
result = av_image_fill_arrays(bgrFrame->data, bgrFrame->linesize, data, AV_PIX_FMT_BGRA, 1920, 1080, 1);
sws_scale(swsContext, bgrFrame->data, bgrFrame->linesize, 0, 1080, &yuvFrame->data[0], yuvFrame->linesize);
result = avcodec_send_frame(context, yuvFrame);
AVPacket *packet = av_packet_alloc();
result = avcodec_receive_packet(context, packet);
if (result != 0) {
return;
}
result = av_interleaved_write_frame(formatContext, packet);
}
My environment is Windows 10, I'm building with clang++ 12.0.1, and using the FFMPEG 5.1 libs.
See the official sample, muxing.c.
Fix your code like the following:
Set the fields of an AVCodecContext and call avcodec_parameters_from_context(), instead of calling avcodec_parameters_to_context(). You should set width, height, bit_rate, pix_fmt, framerate and time_base at least. (See the implementation of add_stream() in the sample.)
Specify an algorithm such as SWS_BILINEAR when calling sws_getContext(). (Although a default algorithm will be selected if you pass 0, that's an undocumented feature.)
Set the pts (presentation timestamp) field of each AVFrame.
Implement a loop calling avcodec_receive_packet() after calling avcodec_send_frame(). See write_frame() in the sample. (A single frame can result in multiple packets.) A combined sketch follows below.
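A minimal sketch combining those points, assuming the question's 1920x1080 / 60 fps setup (the bit rate and the frameIndex counter are illustrative additions, not values from the question):
// In setup(): configure the codec context first, then copy it into the stream.
const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
context = avcodec_alloc_context3(codec);
context->width = 1920;
context->height = 1080;
context->pix_fmt = AV_PIX_FMT_YUV420P;
context->time_base = {1, 60};
context->framerate = {60, 1};
context->bit_rate = 4000000; // illustrative value
avcodec_open2(context, codec, nullptr);
avcodec_parameters_from_context(videoStream->codecpar, context);
avformat_write_header(formatContext, NULL);
swsContext = sws_getContext(1920, 1080, AV_PIX_FMT_BGRA,
                            1920, 1080, AV_PIX_FMT_YUV420P,
                            SWS_BILINEAR, nullptr, nullptr, nullptr);

// In addImage(): stamp each frame with a pts, then drain all pending packets.
yuvFrame->pts = frameIndex++; // monotonically increasing counter kept by the class
avcodec_send_frame(context, yuvFrame);
AVPacket* packet = av_packet_alloc();
while (avcodec_receive_packet(context, packet) == 0) {
    av_packet_rescale_ts(packet, context->time_base, videoStream->time_base);
    packet->stream_index = videoStream->index;
    av_interleaved_write_frame(formatContext, packet);
    av_packet_unref(packet);
}
av_packet_free(&packet);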
In all the examples I've seen for GDCM on how to write image data, they always treat the image volume as a single, cohesive buffer. The basic structure is along these lines:
#include "gdcmImage.h"
#include "gdcmImageWriter.h"
#include "gdcmFileDerivation.h"
#include "gdcmUIDGenerator.h"
int write_image(...)
{
size_t width = ..., height = ..., depth = ...;
auto im = new gdcm::Image;
std::vector<...> buffer;
auto p = buffer.data();
im->SetNumberOfDimensions(3);
im->SetDimension(0, width);
im->SetDimension(1, height);
im->SetDimension(2, depth);
im->GetPixelFormat().SetSamplesPerPixel(...);
im->SetPhotometricInterpretation( gdcm::PhotometricInterpretation::... );
unsigned long l = im->GetBufferLength();
if( l != width * height * depth * sizeof(...) ){ return SOME_ERROR; }
gdcm::DataElement pixeldata( gdcm::Tag(0x7fe0,0x0010) );
pixeldata.SetByteValue( buffer.data(), buffer.size()*sizeof(*buffer.data()) );
im->SetDataElement( pixeldata );
gdcm::UIDGenerator uid;
auto file = new gdcm::File;
gdcm::FileDerivation fd;
const char UID[] = ...;
fd.AddReference( ReferencedSOPClassUID, uid.Generate() );
fd.SetFile( *file );
// If all Code Values are OK the filter will execute properly
if( !fd.Derive() ){ return SOME_ERROR; }
gdcm::ImageWriter w;
w.SetImage( *im );
w.SetFile( fd.GetFile() );
// Set the filename:
w.SetFileName( "some_image.dcm" );
if( !w.Write() ){ return SOME_ERROR; }
return 0;
}
The problem I'm facing with this approach is, that the amount of image data I need to store easily exceeds the available system memory, if an additional copy is being made; specifically these are volumes of 4096×4096×2048 voxels of 12 bits each, so about 48GiB of data in memory.
However, the approach of using gdcm::DataElement and gdcm::Image::SetDataElement will obviously create a full copy of the data in buffer, which is troublesome. For one, the data as produced by my imaging system does not reside in memory as a single, cohesive block of values; it is split into slices. Also, the total amount of data fits into the memory of the systems being used only once, so there is no headroom for a full copy.
It is trivial for me, to read in the data slice by slice, which would cut down the memory requirements significantly. However I'm at a loss, how that'd be done with GDCM.
Did you check gdcm::FileStreamer:
http://gdcm.sourceforge.net/3.0/html/classgdcm_1_1FileStreamer.xhtml
See typical setup at:
https://github.com/malaterre/GDCM/blob/master/Examples/Csharp/FileStreaming.cs
The example shows how to create an out-of-memory private element, but you can do the same with a public DataElement.
A more complex example, where the Pixel Data is written in chunks, is at:
https://github.com/malaterre/GDCM/blob/master/Examples/Csharp/FileChangeTS.cs#L126-L154
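A C++ sketch of that streaming setup, assuming gdcm::FileStreamer exposes the Start/Append/Stop DataElement calls used in those samples (write_slices and the readSlice callback are illustrative names, not part of GDCM):
#include "gdcmFileStreamer.h"
#include "gdcmTag.h"
#include <cstddef>
#include <vector>

bool write_slices(const char* templateFile, const char* outFile,
                  size_t sliceBytes, size_t numSlices,
                  bool (*readSlice)(char* dst, size_t len)) // your per-slice reader
{
  gdcm::FileStreamer fs;
  fs.SetTemplateFileName(templateFile); // existing DICOM file supplying the header
  fs.SetOutputFileName(outFile);
  const gdcm::Tag pixeldata(0x7fe0, 0x0010);
  if (!fs.StartDataElement(pixeldata)) return false;
  std::vector<char> slice(sliceBytes); // only one slice in memory at a time
  for (size_t i = 0; i < numSlices; ++i) {
    if (!readSlice(slice.data(), sliceBytes)) return false;
    if (!fs.AppendToDataElement(pixeldata, slice.data(), slice.size()))
      return false;
  }
  return fs.StopDataElement(pixeldata); // finalizes the lengths in the output
}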
I have a mesh from which I need to read the vertex positions, but all I can get is a buffer with that data, which I seemingly can get as a UTF-8 char array.
Currently I'm getting the data from the buffer into the array I mentioned and writing it into a char*, but I can't get the decoding right, or so it seems.
The following code reads the data from the buffer:
char* GetDataFromIBuffer(Windows::Storage::Streams::IBuffer^ container)
{
unsigned int bufferLength = container->Length;
auto dataReader = Windows::Storage::Streams::DataReader::FromBuffer(container);
Platform::Array<unsigned char>^ managedBytes =
ref new Platform::Array<unsigned char>(bufferLength);
dataReader->ReadBytes(managedBytes);
char * bytes = new char[bufferLength];
for (unsigned int i = 0; i < bufferLength; i++)
{
if (managedBytes[i] == '\0')
{
bytes[i] = '0';
}
else
{
bytes[i] = managedBytes[i];
}
}
return bytes;
}
I can see the data in debug mode, but I need a method to make it readable and write it into a file, where I can copy the mesh data and draw the mesh in a separate program.
A screenshot of the array data as shown in the debugger: [debug mode screenshot]
Be careful not to mix up text encoding and data types.
char is a type often used for buffers because it has the size of a byte, but that doesn't mean that the data contained in the buffer is text.
Your debug view seems to confirm that the data inside your buffer is not text, because when interpreted as text it gives weird characters such as 'ÿ', '^', etc.
UTF-8 is a way to encode Unicode text, so it has nothing to do with binary data.
You need to find a way to cast your buffer data into the internal type of the data; it should be documented where you got that data (maybe it's just an array of floats?).
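If it does turn out to be, say, tightly packed 32-bit floats (three per vertex: x, y, z), a sketch of the reinterpretation could look like this; that layout is an assumption you must verify against the mesh API's documentation:
#include <cstddef>
#include <cstring>
#include <vector>

std::vector<float> AsFloats(const unsigned char* bytes, std::size_t byteCount)
{
    std::vector<float> values(byteCount / sizeof(float));
    // memcpy sidesteps the alignment and strict-aliasing pitfalls of a raw cast.
    std::memcpy(values.data(), bytes, values.size() * sizeof(float));
    return values;
}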
I wanted to create a separate function to which I could just send a string and have it render the text appropriately, so that I didn't need to copy-paste the same code everywhere. The function I came up with is the following.
void renderText(SDL_Renderer* renderer, char* text,
char* font_name, int font_size,
SDL_Color color, SDL_Rect text_area)
{
/* If TTF was not initialized initialize it */
if (!TTF_WasInit()) {
if (TTF_Init() < 0) {
printf("Error initializing TTF: %s\n", SDL_GetError());
return;
}
}
TTF_Font* font = TTF_OpenFont(font_name, font_size);
if (font == NULL) {
printf("Error opening font: %s\n", SDL_GetError());
return;
}
SDL_Surface* surface = TTF_RenderText_Blended(font, text, color);
SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);
if (!texture) {
printf("error creating texture: %s\n", SDL_GetError());
SDL_FreeSurface(surface); /* don't leak the surface on failure */
TTF_CloseFont(font);
return;
}
SDL_RenderCopy(renderer, texture, NULL, &text_area);
SDL_FreeSurface(surface);
SDL_DestroyTexture(texture);
TTF_CloseFont(font);
}
Now, sometimes I want to align the text with the window, for which I need to know the height and width of the surface that contains the text, so that I can use something like (WINDOW_WIDTH - surfaceText->w) / 2 or (WINDOW_HEIGHT - surfaceText->h) / 2. But there is no way to know the height and width of the surface containing the text without creating the surface. And if I end up needing to create the surface, then the separation of this function would not live up to its objective.
How do I find out the height and width of the surface containing the text without actually creating the surface in SDL2_ttf library?
You can pass the string to the TTF_SizeText() function, which is declared as:
int TTF_SizeText(TTF_Font *font, const char *text, int *w, int *h)
The documentation for this function states:
Calculate the resulting surface size of the LATIN1 encoded text rendered using font. No actual rendering is done, however correct kerning is done to get the actual width. The height returned in h is the same as you can get using 3.3.10 TTF_FontHeight.
Then, once you have the dimensions of the string, you can call your rendering function with the necessary information to align it.
There are also TTF_SizeUTF8() and TTF_SizeUNICODE() versions for different encodings.
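For example, to center the text you could measure first and then call the rendering function. A sketch: font here is assumed to be a TTF_Font* opened with the same font file and size that renderText() will open, and WINDOW_WIDTH/WINDOW_HEIGHT are your own constants.
int w = 0, h = 0;
if (TTF_SizeText(font, text, &w, &h) == 0) {
    SDL_Rect text_area = { (WINDOW_WIDTH - w) / 2, (WINDOW_HEIGHT - h) / 2, w, h };
    renderText(renderer, text, font_name, font_size, color, text_area);
}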
I'm using D3DXSaveSurfaceToFile to save windowed Direct3D 9 surfaces to PNG, BMP and JPG files. There are no errors returned from the D3DXSaveSurfaceToFile call and all files open fine in Windows Photo Viewer and Paint. But they will not open in a higher end image editing program such as Paint Shop Pro or Photoshop. The error messages from these programs basically say that the file is corrupted. If I open the files in Paint and then save them in the same file format with a different file name, then they'll open fine in the other programs.
This leads me to believe that D3DXSaveSurfaceToFile is writing out non-standard versions of these file formats. Is there some way I can get this function to write out files that can be opened in programs like Photoshop without the intermediate step of resaving the files in Paint? Or is there another function I should be using that does a better job of saving a Direct3D surfaces to an image?
Take a look at the file in an image meta viewer. What does it tell you?
Unfortunately D3DXSaveSurfaceToFile() isn't the most stable (it's also exceptionally slow). Personally I do something like the code below. It works even on anti-aliased displays by doing an offscreen render to take the screenshot, then getting it into a buffer. It also supports only the most common pixel formats. Sorry for any errors in it; I pulled it out of an app I used to work on.
You can then, in your code and probably in another thread, convert said 'bitmap' to anything you like using a variety of different code.
void HandleScreenshot(IDirect3DDevice9* device)
{
DWORD tcHandleScreenshot = GetTickCount();
LPDIRECT3DSURFACE9 pd3dsBack = NULL;
LPDIRECT3DSURFACE9 pd3dsTemp = NULL;
// Grab the back buffer into a surface
if ( SUCCEEDED ( device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pd3dsBack) ))
{
D3DSURFACE_DESC desc;
pd3dsBack->GetDesc(&desc);
LPDIRECT3DSURFACE9 pd3dsCopy = NULL;
if (desc.MultiSampleType != D3DMULTISAMPLE_NONE)
{
if (SUCCEEDED(device->CreateRenderTarget(desc.Width, desc.Height, desc.Format, D3DMULTISAMPLE_NONE, 0, FALSE, &pd3dsCopy, NULL)))
{
if (SUCCEEDED(device->StretchRect(pd3dsBack, NULL, pd3dsCopy, NULL, D3DTEXF_NONE)))
{
pd3dsBack->Release();
pd3dsBack = pd3dsCopy;
}
else
{
pd3dsCopy->Release();
}
}
}
if (SUCCEEDED(device->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format, D3DPOOL_SYSTEMMEM, &pd3dsTemp, NULL)))
{
DWORD tmpTimeGRTD = GetTickCount();
if (SUCCEEDED(device->GetRenderTargetData(pd3dsBack, pd3dsTemp)))
{
D3DLOCKED_RECT lockedSrcRect;
if (SUCCEEDED(pd3dsTemp->LockRect(&lockedSrcRect, NULL, D3DLOCK_READONLY | D3DLOCK_NOSYSLOCK | D3DLOCK_NO_DIRTY_UPDATE)))
{
int nSize = desc.Width * desc.Height * 3;
BYTE* pixels = new BYTE[nSize + 1]; // +1: the DWORD store below writes one byte past the last 3-byte pixel
int iSrcPitch = lockedSrcRect.Pitch;
BYTE* pSrcRow = (BYTE*)lockedSrcRect.pBits;
LPBYTE lpDest = pixels;
LPDWORD lpSrc;
switch (desc.Format)
{
case D3DFMT_A8R8G8B8:
case D3DFMT_X8R8G8B8:
for (int y = desc.Height - 1; y >= 0; y--)
{
lpSrc = reinterpret_cast<LPDWORD>(pSrcRow + y * iSrcPitch); // honor the surface pitch
for (unsigned int x = 0; x < desc.Width; x++)
{
*reinterpret_cast<LPDWORD>(lpDest) = *lpSrc;
lpSrc++; // increment source pointer by 1 DWORD
lpDest += 3; // increment destination pointer by 3 bytes
}
}
break;
default:
ZeroMemory(pixels, nSize);
}
pd3dsTemp->UnlockRect();
BITMAPINFOHEADER header;
header.biWidth = desc.Width;
header.biHeight = desc.Height;
header.biSizeImage = nSize;
header.biSize = sizeof(BITMAPINFOHEADER);
header.biPlanes = 1;
header.biBitCount = 3 * 8; // RGB
header.biCompression = 0;
header.biXPelsPerMeter = 0;
header.biYPelsPerMeter = 0;
header.biClrUsed = 0;
header.biClrImportant = 0;
BITMAPFILEHEADER bfh = {0};
bfh.bfType = 0x4d42;
bfh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
bfh.bfSize = bfh.bfOffBits + nSize;
unsigned int rough_size = sizeof(BITMAPINFOHEADER) + sizeof(BITMAPFILEHEADER) + nSize;
unsigned char* bmp = new unsigned char[rough_size];
unsigned char* p = bmp;
memcpy(p, &bfh, sizeof(BITMAPFILEHEADER));
p += sizeof(BITMAPFILEHEADER);
memcpy(p, &header, sizeof(BITMAPINFOHEADER));
p += sizeof(BITMAPINFOHEADER);
memcpy(p, pixels, nSize);
delete [] pixels;
/**********************************************/
// 'bmp' now holds a full BMP file, write it out here (and delete [] it when done)
}
}
pd3dsTemp->Release();
}
pd3dsBack->Release();
}
}
Turns out that it was a combination of a bug in my code and Paint being more forgiving than Photoshop when it comes to reading files. The bug in my code caused the files to be saved with the wrong extension (i.e. Image.bmp was actually saved using D3DXIFF_JPG). When opening a file that contained a JPG image but had a BMP extension, Photoshop just rejected the file. I guess Paint worked since it ignored the file extension and just decoded the file contents.
Looking at a file in an image meta viewer helped me to see the problem.
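For reference, one way to make that class of bug harder to write is to keep the extension and the D3DXIMAGE_FILEFORMAT constant paired in a single place (a sketch; the struct and names are illustrative, not from the original code):
struct SaveFormat { const wchar_t* ext; D3DXIMAGE_FILEFORMAT fmt; };
const SaveFormat kJpg = { L".jpg", D3DXIFF_JPG };
std::wstring path = std::wstring(L"Image") + kJpg.ext;
D3DXSaveSurfaceToFile(path.c_str(), kJpg.fmt, surface, NULL, NULL);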