Why does D3DX11CreateShaderResourceViewFromMemory only upload a partial copy of my texture? - directx-11

I have a problem with the D3DX11CreateShaderResourceViewFromMemory helper function.
I read a texture from a file or a ZIP archive and pass the raw bytes and their length to the helper function, but only part of the texture is uploaded (as confirmed by PIX).
I tried fiddling with the length manually but to no avail.
Here is the code that loads the texture from file:
struct FileDataLoader
{
    void Load()
    {
        std::ifstream file(mFileName);
        if (file)
        {
            file.seekg(0, std::ios::end);
            std::streampos length = file.tellg();
            file.seekg(0, std::ios::beg);
            mBuffer.resize(length);
            file.read(&mBuffer[0], length);
            file.close();
        }
    }

    void Decompress(void*& data, std::size_t& numBytes)
    {
        data = &mBuffer[0];
        numBytes = mBuffer.size();
    }

    std::wstring mFileName;
    std::vector<char> mBuffer;
};
FileDataLoader fdl;
fdl.mFileName = L"Content\\Textures\\Smoke.dds";
fdl.Load();
void* bytes;
std::size_t size;
fdl.Decompress(bytes, size);
DXCall(D3DX11CreateShaderResourceViewFromMemory(device, bytes, size, NULL, NULL, &particleTexture, NULL));
That is only sample code that I am using to debug this problem, and I have narrowed it down to the file loading and the D3DX helper function.
Now if I do this instead:
DXCall(D3DX11CreateShaderResourceViewFromFileW(device, L"Content\\Textures\\Smoke.dds", NULL, NULL, &particleTexture, NULL));
it works perfectly fine.
Any idea why it would not upload the texture entirely?

When opening the file, you need to specify that the file is binary:
std::ifstream file( fileName, std::ios::in | std::ios::binary );
Without the std::ios::binary flag the stream opens in text mode by default. On Windows, text mode translates \r\n sequences and treats a 0x1A (Ctrl-Z) byte as end-of-file, so a binary file such as a DDS texture gets truncated or altered on the way in, and D3DX11CreateShaderResourceViewFromMemory only ever sees part of it.
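For reference, a minimal sketch of the corrected Load() (same structure as the snippet above; only the open mode changes):

void Load()
{
    // Open in binary mode so the bytes are read exactly as stored on disk.
    std::ifstream file(mFileName, std::ios::in | std::ios::binary);
    if (file)
    {
        file.seekg(0, std::ios::end);
        std::streampos length = file.tellg();
        file.seekg(0, std::ios::beg);
        mBuffer.resize(static_cast<std::size_t>(length));
        file.read(&mBuffer[0], length);
    }
}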

Related

Resize and Upload Images with Blazor WebAssembly

I am using the following sample to resize the uploaded images with Blazor WebAssembly
https://www.prowaretech.com/Computer/Blazor/Examples/WebApi/UploadImages .
I also need the original file to be converted to base64, and I don't know how I can access it.
I tried to find the file's original width and height to pass them to the RequestImageFileAsync function, but with no success.
I need to store both files: the original one and the resized one.
Can you help me, please?
Thank you very much!
The InputFile control emits an IBrowserFile type. RequestImageFileAsync is a convenience method on IBrowserFile to resize the image and convert the type. The result is still an IBrowserFile.
One way to do what you are asking is with SixLabors.ImageSharp. Based on the ProWareTech example, something like this...
async Task OnChange(InputFileChangeEventArgs e)
{
    var files = e.GetMultipleFiles(); // get the files selected by the users
    foreach (var file in files)
    {
        // Original-sized file
        var buf1 = new byte[file.Size];
        using (var stream = file.OpenReadStream())
        {
            await stream.ReadAsync(buf1); // copy the stream to the buffer
        }
        origFilesBase64.Add(new ImageFile { base64data = Convert.ToBase64String(buf1), contentType = file.ContentType, fileName = file.Name }); // convert to a base64 string!!

        // Resized file
        var resizedFile = await file.RequestImageFileAsync(file.ContentType, 640, 480); // resize the image file
        var buf = new byte[resizedFile.Size]; // allocate a buffer to fill with the file's data
        using (var stream = resizedFile.OpenReadStream())
        {
            await stream.ReadAsync(buf); // copy the stream to the buffer
        }
        filesBase64.Add(new ImageFile { base64data = Convert.ToBase64String(buf), contentType = file.ContentType, fileName = file.Name }); // convert to a base64 string!!
    }
    // To get the image sizes for the first image, decode the stored base64 data with ImageSharp
    // (requires: using SixLabors.ImageSharp;)
    using var origImage = Image.Load(Convert.FromBase64String(origFilesBase64[0].base64data));
    int origImgHeight = origImage.Height;
    int origImgWidth = origImage.Width;

    using var resizedImage = Image.Load(Convert.FromBase64String(filesBase64[0].base64data));
    int resizedImgHeight = resizedImage.Height;
    int resizedImgWidth = resizedImage.Width;
}

Failure to create EGLSurface using the RenderResolutionScale property on Windows

I'm trying to create an EGLSurface in a Windows UWP app. The creation code is in a xaml.cpp file, as shown below.
When I try creating the surface using the optional property EGLRenderResolutionScaleProperty, it fails with an EGL_BAD_ALLOC error. Two alternative approaches work, but I need to use the resolution scale option for my app.
void MyClass::CreateRenderSurface()
{
    if (mRenderSurface == EGL_NO_SURFACE)
    {
        // NOTE: in practice, I only have one of the three following implementations in the code;
        // all are included together here for ease of comparison.

        // 1. This works
        mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, nullptr);

        // 2. and this works (here I hardwired the size to twice the
        // size of the window I happen to be using, because
        // Windows display settings is set at 200%)
        Size size;
        size.Height = 1448; // hardwired value for testing, in this case window height is 724 pix
        size.Width = 1908;  // hardwired value for testing, in this case window width is 954 pix
        mRenderSurface = CreateSurface(mSwapChainPanel, &size, nullptr);

        // 3. but this fails (and this is the one I want to use)
        float resolutionScale = 1.0;
        mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
    }
}

EGLSurface MyClass::CreateSurface(SwapChainPanel^ panel, const Size* renderSurfaceSize, const float* resolutionScale)
{
    if (!panel)
    {
        throw Exception::CreateException(E_INVALIDARG, L"SwapChainPanel parameter is invalid");
    }
    if (renderSurfaceSize != nullptr && resolutionScale != nullptr)
    {
        throw Exception::CreateException(E_INVALIDARG, L"A size and a scale can't both be specified");
    }

    EGL _egl = this->HelperClass->GetEGL();
    EGLSurface surface = EGL_NO_SURFACE;

    const EGLint surfaceAttributes[] =
    {
        EGL_ANGLE_SURFACE_RENDER_TO_BACK_BUFFER, EGL_TRUE,
        EGL_NONE
    };

    // Create a PropertySet and initialize with the EGLNativeWindowType.
    PropertySet^ surfaceCreationProperties = ref new PropertySet();
    surfaceCreationProperties->Insert(ref new String(EGLNativeWindowTypeProperty), panel);

    // If a render surface size is specified, add it to the surface creation properties
    if (renderSurfaceSize != nullptr)
    {
        surfaceCreationProperties->Insert(ref new String(EGLRenderSurfaceSizeProperty), PropertyValue::CreateSize(*renderSurfaceSize));
    }

    // If a resolution scale is specified, add it to the surface creation properties
    if (resolutionScale != nullptr)
    {
        surfaceCreationProperties->Insert(ref new String(EGLRenderResolutionScaleProperty), PropertyValue::CreateSingle(*resolutionScale));
    }

    surface = eglCreateWindowSurface(_egl._display, _egl._config, reinterpret_cast<IInspectable*>(surfaceCreationProperties), surfaceAttributes);
    EGLint err = eglGetError();
    if (surface == EGL_NO_SURFACE)
    {
        throw Exception::CreateException(E_FAIL, L"Failed to create EGL surface");
    }

    return surface;
}
where
const wchar_t EGLNativeWindowTypeProperty[] = L"EGLNativeWindowTypeProperty";
const wchar_t EGLRenderSurfaceSizeProperty[] = L"EGLRenderSurfaceSizeProperty";
const wchar_t EGLRenderResolutionScaleProperty[] = L"EGLRenderResolutionScaleProperty";
I have tried changing the cast of the EGLNativeWindowType argument (as in How to create EGLSurface using C++/WinRT and ANGLE?) - that only creates other problems. As indicated, this code does work to create a surface in the basic case, just not when using the EGLRenderResolutionScaleProperty.
My guess is that something about the way I'm supplying that property is failing, because it fails on what should be reasonable values (e.g., 1.0).
Solved this by first checking that swapChainPanel size is not zero:
void MyClass::CreateRenderSurface()
{
    if (mRenderSurface == EGL_NO_SURFACE)
    {
        // Only create the surface once the panel has a non-zero size.
        if (mSwapChainPanel->ActualHeight > 0 && mSwapChainPanel->ActualWidth > 0)
        {
            float resolutionScale = 1.0f;
            mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
        }
    }
}
(The code checks elsewhere whether the render surface has been created, and will call this again if needed.)
Interestingly, the original code that used nullptr for both size and resolution arguments (case 1 in original snippet above) didn't need that check.
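One way that retry can be wired up is by reacting to the panel's size changes. This is a sketch only: SizeChanged and SizeChangedEventHandler are standard XAML, but OnPanelSizeChanged and where the subscription lives are hypothetical choices, not part of the original code.

// Hypothetical wiring, e.g. in the class constructor:
mSwapChainPanel->SizeChanged += ref new Windows::UI::Xaml::SizeChangedEventHandler(this, &MyClass::OnPanelSizeChanged);

void MyClass::OnPanelSizeChanged(Platform::Object^ sender, Windows::UI::Xaml::SizeChangedEventArgs^ e)
{
    // CreateRenderSurface() does nothing until both dimensions are non-zero,
    // so it is safe to call it again here once the panel has been laid out.
    CreateRenderSurface();
}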

How to store text on the system clipboard after application has quit using GTK3?

I am trying to update the system clipboard from a GTK application. Here is a simplified program:
#include <gtk/gtk.h>

void callback(GtkClipboard *clipboard, const gchar *text, gpointer data) {
    printf("In callback: text = '%s'\n", text);
}

int main() {
    gtk_init(NULL, NULL);
    GdkScreen *screen = gdk_screen_get_default();
    GdkDisplay *display = gdk_display_get_default();
    GtkClipboard *clipboard = gtk_clipboard_get_for_display(
        display, GDK_SELECTION_PRIMARY);
    gtk_clipboard_set_text(clipboard, "Hello world", -1);
    gtk_clipboard_request_text(clipboard, callback, NULL);
    if (gdk_display_supports_clipboard_persistence(display)) {
        printf("Supports clipboard persistence.\n");
        gtk_clipboard_store(clipboard);
    }
}
The output (after compiling the above program on my Ubuntu 19.10 laptop):
In callback: text = 'Hello world'
Note that the text "Supports clipboard persistence." is not shown, so apparently the display does not support updating the system clipboard(?). However, I can easily update it with the xclip command. Why is it not possible to do it from GTK?
GDK_SELECTION_PRIMARY -> is used to get the currently-selected object or text
GDK_SELECTION_CLIPBOARD -> is used to perform operations like cut/copy/paste
(https://developer.gnome.org/gtk3/stable/gtk3-Clipboards.html#gtk-clipboard-get-for-display)
Also, to store the text, the application has to remain in the main loop long enough to let the clipboard manager copy it:
#include <gtk/gtk.h>

void callback(GtkClipboard *clipboard, const gchar *text, gpointer data) {
    printf("In callback: text = '%s'\n", text);
}

int main() {
    gtk_init(NULL, NULL);
    GdkScreen *screen = gdk_screen_get_default();
    GdkDisplay *display = gdk_display_get_default();
    GtkClipboard *clipboard =
        gtk_clipboard_get_for_display(display, GDK_SELECTION_CLIPBOARD);
    gtk_clipboard_set_text(clipboard, "Hello world", -1);
    gtk_clipboard_request_text(clipboard, callback, NULL);
    if (gdk_display_supports_clipboard_persistence(display)) {
        printf("Supports clipboard persistence.\n");
        gtk_clipboard_store(clipboard);
    }
    // Stay in the main loop briefly so the clipboard manager can take over the contents.
    g_timeout_add(100, (GSourceFunc) gtk_main_quit, NULL);
    gtk_main();
}
According to the doc (https://developer.gnome.org/gdk3/stable/GdkDisplay.html#gdk-display-supports-clipboard-persistence), the clipboard-persistence check only looks for a running clipboard daemon. I am guessing that something has changed in this area, as I was not able to find any running clipboard daemon on my machine (it might have been integrated into the window manager).
(https://wiki.ubuntu.com/ClipboardPersistence) -> this doc explains the problems with clipboard persistence and ways to fix them.
If you install "clipit" (a clipboard manager) and try to copy the text without waiting in the main loop for a few milliseconds, your output will be "Clipboard is null, recovering".
xclip most likely stays alive for a few milliseconds to allow the text to be copied.

How to convert SoftwareBitmap from Bgra8 to JPEG in Windows UWP

How can I convert a SoftwareBitmap from Bgra8 to JPEG in Windows UWP? The GetPreviewFrameAsync function is used to get videoFrame data in Bgra8. What is going wrong in the following code? I am getting a JPEG size of 0.
auto previewProperties = static_cast<MediaProperties::VideoEncodingProperties^>
    (mediaCapture->VideoDeviceController->GetMediaStreamProperties(Capture::MediaStreamType::VideoPreview));
unsigned int videoFrameWidth = previewProperties->Width;
unsigned int videoFrameHeight = previewProperties->Height;
FN_TRACE("%s videoFrameWidth %d videoFrameHeight %d\n",
    __func__, videoFrameWidth, videoFrameHeight);

// Create the video frame to request a SoftwareBitmap preview frame
auto videoFrame = ref new VideoFrame(BitmapPixelFormat::Bgra8, videoFrameWidth, videoFrameHeight);

// Capture the preview frames
return create_task(mediaCapture->GetPreviewFrameAsync(videoFrame))
    .then([this](VideoFrame^ currentFrame)
{
    // Collect the resulting frame
    auto previewFrame = currentFrame->SoftwareBitmap;
    auto inputStream = ref new Streams::InMemoryRandomAccessStream();
    create_task(BitmapEncoder::CreateAsync(BitmapEncoder::JpegEncoderId, inputStream))
        .then([this, previewFrame, inputStream](BitmapEncoder^ encoder)
    {
        encoder->SetSoftwareBitmap(previewFrame);
        encoder->FlushAsync();
        FN_TRACE("jpeg size %d\n", inputStream->Size);
        Streams::Buffer^ data = ref new Streams::Buffer(inputStream->Size);
        create_task(inputStream->ReadAsync(data, (unsigned int)inputStream->Size, InputStreamOptions::None));
    });
});
The BitmapEncoder.FlushAsync() method is an asynchronous method. We should consume it like the following:
// Capture the preview frames
return create_task(mediaCapture->GetPreviewFrameAsync(videoFrame))
    .then([this](VideoFrame^ currentFrame)
{
    // Collect the resulting frame
    auto previewFrame = currentFrame->SoftwareBitmap;
    auto inputStream = ref new Streams::InMemoryRandomAccessStream();
    return create_task(BitmapEncoder::CreateAsync(BitmapEncoder::JpegEncoderId, inputStream))
        .then([this, previewFrame](BitmapEncoder^ encoder)
    {
        encoder->SetSoftwareBitmap(previewFrame);
        return encoder->FlushAsync();
    }).then([this, inputStream]()
    {
        FN_TRACE("jpeg size %d\n", inputStream->Size);
        //TODO
    });
});
Then you should be able to get the right size. For more info, please see Asynchronous programming in C++.

ttf text won't show up in SDL

No matter what I try, I can't get my text to load into a texture in SDL 2.0 using SDL_ttf.
Here is my textToTexture code:
void sdlapp::textToTexture(string text, SDL_Color textColor, SDL_Texture* textTexture)
{
    //free previous texture in textTexture if texture exists
    if (textTexture != nullptr || NULL)
    {
        SDL_DestroyTexture(textTexture);
    }
    SDL_Surface* textSurface = TTF_RenderText_Solid(m_font, text.c_str(), textColor);
    textTexture = SDL_CreateTextureFromSurface(m_renderer, textSurface);
    //free surface
    SDL_FreeSurface(textSurface);
}
And then here is how I load the font and the text texture:
bool sdlapp::loadMedia()
{
    bool success = true;
    //load media here

    //load font
    m_font = TTF_OpenFont("Fonts/MotorwerkOblique.ttf", 28);

    //load text
    SDL_Color textColor = { 0x255, 0x255, 0x235 };
    textToTexture("im a texture thing", textColor, m_font_texture);

    return success;
}
And this is the code I am using to render it:
void sdlapp::render()
{
    //clear the screen
    SDL_RenderClear(m_renderer);

    //do render stuff here
    SDL_Rect rect = { 32, 64, 128, 32 };
    SDL_RenderCopy(m_renderer, m_font_texture, NULL, NULL);

    //update the screen to the current render
    SDL_RenderPresent(m_renderer);
}
Does anyone know what I am doing wrong?
Thanks in advance, JustinWeq.
textToTexture renders the text with SDL_ttf, and the resulting SDL_Texture address is then assigned to a variable called textTexture. The problem is that textTexture is a local copy of the pointer, holding the same address as m_font_texture. They are not the same variable; they are different variables pointing to the same place, so you are not changing the caller's variable.
For clarification on pointers, I'd recommend seeing question 4.8 of the C-FAQ.
I'd make textToTexture return the new texture's address, and not have it free resources it doesn't manage (m_font_texture belongs to sdlapp, so it should be managed there). A sketch of that approach follows.
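A minimal sketch of that version (names follow the snippets above; error handling kept short):

SDL_Texture* sdlapp::textToTexture(const string& text, SDL_Color textColor)
{
    SDL_Surface* textSurface = TTF_RenderText_Solid(m_font, text.c_str(), textColor);
    if (textSurface == nullptr)
    {
        return nullptr; // rendering failed, e.g. the font was not loaded
    }
    SDL_Texture* texture = SDL_CreateTextureFromSurface(m_renderer, textSurface);
    SDL_FreeSurface(textSurface);
    return texture;
}

// In loadMedia(), the caller keeps ownership of the member pointer:
// m_font_texture = textToTexture("im a texture thing", textColor);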
