I was trying to build and run the code below in Visual Studio 2013. It builds fine and runs in Debug mode, but it fails in Release mode. I can see that the code stops when it calls the VideoWriter constructor, and I assume it might be related to file permissions. But why is it okay in Debug mode? It's confusing. Can somebody tell me what's happening here, or what I can try next to nail down the problem?
Thanks.
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap(0);
    // Get size of frames
    Size S = Size((int)cap.get(CV_CAP_PROP_FRAME_WIDTH),
                  (int)cap.get(CV_CAP_PROP_FRAME_HEIGHT));
    // Make a video writer object and initialize it at 30 FPS
    VideoWriter put("output.mpg", CV_FOURCC('M', 'P', 'E', 'G'), 30, S);
    if (!put.isOpened())
    {
        cout << "File could not be created for writing. Check permissions" << endl;
        return -1;
    }
    namedWindow("Video");
    // Play the video in a loop till it ends
    while (char(waitKey(1)) != 'q' && cap.isOpened())
    {
        Mat frame;
        cap >> frame;
        // Check if the video is over
        if (frame.empty())
        {
            cout << "Video over" << endl;
            break;
        }
        imshow("Video", frame);
        put << frame;
    }
    return 0;
}
I'm writing a small C++ app that uses the Desktop Duplication API to capture the display output. I've never done C programming before, and I got to where I am by staring at the Win32 API documentation: https://learn.microsoft.com/en-us/windows/win32/api/dxgi1_2/
#include <iostream>
#pragma comment(lib, "windowsapp")
#include <roapi.h>
//#pragma comment(lib, "dxgi")
#include <dxgi1_2.h>

using namespace std;

int main()
{
    cout << RoInitialize(RO_INIT_SINGLETHREADED);
    // intermediate variables for casting
    IDXGIOutput* pDisplay_old;
    IDXGIFactory1* pFactory;
    IDXGIAdapter1* pGPU;
    IDXGIOutput1* pDisplay;
    IDXGIOutputDuplication* pCapture;
    DXGI_OUTDUPL_DESC captureDesc;
    // create factory
    if (CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&pFactory) != S_OK) return 1;
    // get GPU
    if (pFactory->EnumAdapters1(0, &pGPU) != S_OK) return 1;
    // get display
    if (pGPU->EnumOutputs(0, &pDisplay_old) != S_OK) return 1;
    pDisplay = (IDXGIOutput1*)pDisplay_old;
    DXGI_OUTDUPL_FRAME_INFO frameInfo;
    IDXGIResource* pFrame;
    HRESULT captureResult;
    do
    {
        // create capture
        //cout << pDisplay->DuplicateOutput(pGPU, &pCapture);
        //return 0;
        if (pDisplay->DuplicateOutput(pGPU, &pCapture) != S_OK) return 1;
        pCapture->GetDesc(&captureDesc);
        cout << captureDesc.ModeDesc.Width << ' ' << captureDesc.ModeDesc.Height;
        do
        {
            captureResult = pCapture->AcquireNextFrame(2000, &frameInfo, &pFrame);
            if (captureResult == S_OK)
            {
                cout << "HI";
                captureResult = pCapture->ReleaseFrame();
            }
            else if (captureResult == DXGI_ERROR_ACCESS_LOST) break;
            else return 1;
        } while (true);
    } while (true);
}
I'm using Visual Studio 2022 with only "Desktop development with C++" enabled, on Windows 11 Insider build 22623.1037 ni_release, on a regular home PC with a display, keyboard, mouse, etc.
The code worked fine until DuplicateOutput(), which complained E_NOINTERFACE. I'm certain there is an interface, since index 0 for EnumAdapters1 and EnumOutputs is where the desktop is displayed, and I obviously have a display attached showing the desktop. According to this post https://devblogs.microsoft.com/oldnewthing/20041213-00/?p=37043, I need marshalling and apartments or something, so after more research I tried RoInitialize() with both RO_INIT_SINGLETHREADED and RO_INIT_MULTITHREADED. Now DuplicateOutput throws this exception.
It seems to happen within the library itself, which makes me think that it's either not my fault or I really messed something up, probably the latter.
I'm really confused now, and would like some assistance, thanks!
EDIT: I replaced "pDisplay = (IDXGIOutput1*)pDisplay_old;" with "pDisplay_old->QueryInterface(&pDisplay);", and I'm back to E_NOINTERFACE. I think I'm on the right track, though; how do I fix this error?
EDIT2: I looked at a related question, "AcquireNextFrame not working (Desktop Duplication API & D3D11)", and followed the answer there to add D3D11CreateDevice to my code:
#include <iostream>
#pragma comment(lib, "dxgi")
#pragma comment(lib, "d3d11")
#include <d3d11.h>
#include <dxgi1_2.h>

using namespace std;

int main()
{
    // intermediate variables for casting
    IDXGIOutput* pDisplay_old;
    IDXGIFactory1* pFactory;
    IDXGIAdapter* pGPU;
    ID3D11Device* pD3DDevice;
    IDXGIDevice* pDevice;
    IDXGIOutput1* pDisplay;
    IDXGIOutputDuplication* pCapture;
    DXGI_OUTDUPL_DESC captureDesc;
    // create DXGI factory
    if (CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&pFactory) != S_OK) return 1;
    // get GPU adapter
    if (pFactory->EnumAdapters(0, &pGPU) != S_OK) return 2;
    // create D3D11 device
    D3D_FEATURE_LEVEL D3DFeatures[6]
    {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,
        D3D_FEATURE_LEVEL_9_2,
        D3D_FEATURE_LEVEL_9_1
    };
    cout << D3D11CreateDevice(pGPU, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0, D3DFeatures, sizeof(D3DFeatures), D3D11_SDK_VERSION, &pD3DDevice, NULL, NULL); //!= S_OK) return 3;
    return 0;
    // get DXGI device from that
    pD3DDevice->QueryInterface(&pDevice);
    // get display
    if (pGPU->EnumOutputs(0, &pDisplay_old) != S_OK) return 4;
    pDisplay_old->QueryInterface(&pDisplay);
    DXGI_OUTDUPL_FRAME_INFO frameInfo;
    IDXGIResource* pFrame;
    HRESULT captureResult;
    do
    {
        // create capture
        cout << pDisplay->DuplicateOutput(pD3DDevice, &pCapture);
        return 0;
        if (pDisplay->DuplicateOutput(pGPU, &pCapture) != S_OK) return 5;
        pCapture->GetDesc(&captureDesc);
        cout << captureDesc.ModeDesc.Width << ' ' << captureDesc.ModeDesc.Height;
        do
        {
            captureResult = pCapture->AcquireNextFrame(2000, &frameInfo, &pFrame);
            if (captureResult == S_OK)
            {
                cout << "HI";
                captureResult = pCapture->ReleaseFrame();
            }
            else if (captureResult == DXGI_ERROR_ACCESS_LOST) break;
            else return 6;
        } while (true);
    } while (true);
}
D3D11CreateDevice seems like a complex function, and for me it keeps complaining E_INVALIDARG. I'm not sure how to fix that.
The solution was provided in the comments:
pDisplay = (IDXGIOutput1*)pDisplay_old; is wrong, you must always use QueryInterface to get an interface from another. And you don't need RoInitialize. –
Simon Mourier Jan 1 at 9:11
I replaced it with "pDisplay_old -> QueryInterface(&pDisplay);", and I'm back to E_NOINTERFACE, but I think I'm on the right track, how do I fix this error? –
Tiger Yang Jan 4 at 16:54
I don't get E_NOINTERFACE (you shouldn't) on this QueryInterface call. What is wrong then is DuplicateOutput expects a Direct3D device, not an adapter interface reference. –
Simon Mourier Jan 4 at 17:18
I've worked on it and updated the post above –
Tiger Yang Jan 7 at 1:51
Your code is wrong again, use D3D_DRIVER_TYPE_UNKNOWN if you pass an adapter as 1st arg (or ask for hardware and pass nullptr as 1st arg) and use ARRAYSIZE(D3DFeatures), not sizeof(D3DFeatures) as 6th arg. Use DirectX Debug Layer learn.microsoft.com/en-us/windows/win32/direct3d11/… walbourn.github.io/direct3d-sdk-debug-layer-tricks to ease debugging –
Simon Mourier Jan 7 at 7:36
Win10, VS 2019 v16.11.5
Below is the minimum code to output some text to a console in Win10 and attempt to clear the buffer contents (a screen clear, if you will). When I use the default buffer, the text is merely scrolled up. When I use the alternate buffer, the same terminal sequence performs as expected.
The first two dozen lines or so (until the window title change) are preamble to get the console handle and set the correct output mode for terminal processing. Problem code commences after this.
Why doesn't this work in the main buffer?
Reference: https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences
#include <iostream>
#include <windows.h>
#include <conio.h>

#define ESC "\x1b"
#define CSI "\x1b["

int main()
{
    // Get the handle for standard out
    HANDLE hStdout = GetStdHandle(STD_OUTPUT_HANDLE);
    // If the handle is invalid, exit
    if (hStdout == INVALID_HANDLE_VALUE)
    {
        std::cout << "GetStdHandle failed with " << GetLastError() << "\n";
        return 1;
    }
    std::cout << "Console Handle valid";
    DWORD dwMode = 0;
    if (!GetConsoleMode(hStdout, &dwMode))
    {
        std::cout << "GetConsoleMode call failed with " << GetLastError() << "\n";
        return 1;
    }
    // see https://learn.microsoft.com/en-us/windows/console/setconsolemode.
    // Don't reset, if already set.
    if (!(dwMode & ENABLE_VIRTUAL_TERMINAL_PROCESSING))
    {
        dwMode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING;
        if (!SetConsoleMode(hStdout, dwMode)) // set dwMode
        {
            std::cout << "SetConsoleMode failed with " << GetLastError() << "\n";
            return 1;
        }
    }
    // Change the window title (OSC sequence terminated by ST, ESC \x5c)
    printf(ESC "]0;Console Screen Exploration\x1b\x5c");
    // Add another line of text
    std::cout << "\nIn main buffer by default\n";
    char wait = _getch();
    // Cursor to 1,1 and clear console but all it seems to do is scroll up??
    // see https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#text-modification
    printf(CSI "1;1H");
    printf(CSI "2J");
    wait = _getch();
    // Whereas, if I perform these operations on the alternate buffer, it works
    // Set alternate buffer
    // see https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#alternate-screen-buffer
    printf(CSI "?1049h");
    std::cout << "Alternate Buffer\nNow clear console";
    wait = _getch();
    // Move to 1,1, then clear screen
    printf(CSI "1;1H");
    printf(CSI "2J");
    wait = _getch();
    // back to main buffer
    printf(CSI "?1049l");
    std::cout << "Back to main buffer";
}
I'm thinking of coding something up that will change a laptop's refresh rate based on whether or not the device is plugged in.
From my research, these are two links I came across. One is 20 years old and the other is from Microsoft, but I don't see any mention of refresh rate specifically.
https://www.codeproject.com/Articles/558/Changing-your-monitor-s-refresh-rate
https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-changedisplaysettingsa?redirectedfrom=MSDN
Does anyone have any insight into how to do this? I'm not too particular about what language would have to be used for it, so let me know whatever would be most viable. Of course I'd also have to be able to check a change in state for plugged in/unplugged, but I haven't gotten to that point yet.
I'm mostly targeting Windows 10 since that's what my device is on.
You can use EnumDisplaySettings to retrieve the current display mode, and then apply a new mode with ChangeDisplaySettingsA.
If you want to modify the refresh rate, you only need to change the dmDisplayFrequency member of DEVMODEA.
Here is the sample:
#include <Windows.h>
#include <iostream>

using namespace std;

int main(int argc, const char* argv[])
{
    DEVMODE dm;
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize = sizeof(dm);
    if (0 != EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm))
    {
        cout << "DisplayFrequency before setting = " << dm.dmDisplayFrequency << endl;
        dm.dmDisplayFrequency = 60; // set the DisplayFrequency
        LONG ret = ChangeDisplaySettingsEx(NULL, &dm, NULL, 0, NULL);
        std::cout << "ChangeDisplaySettingsEx returned " << ret << '\n';
        if (0 != EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm))
        {
            cout << "DisplayFrequency after setting = " << dm.dmDisplayFrequency << endl;
        }
        switch (ret)
        {
        case DISP_CHANGE_SUCCESSFUL:
            std::cout << "display successfully changed\n";
            break;
        case DISP_CHANGE_BADDUALVIEW:
            std::cout << "The settings change was unsuccessful because the system is DualView capable\n";
            break;
        case DISP_CHANGE_BADFLAGS:
            std::cout << "An invalid set of flags was passed in.\n";
            break;
        case DISP_CHANGE_BADMODE:
            std::cout << "The graphics mode is not supported.\n";
            break;
        case DISP_CHANGE_BADPARAM:
            std::cout << "An invalid parameter was passed in. This can include an invalid flag or combination of flags.\n";
            break;
        case DISP_CHANGE_FAILED:
            std::cout << "The display driver failed the specified graphics mode.\n";
            break;
        case DISP_CHANGE_NOTUPDATED:
            std::cout << "Unable to write settings to the registry.\n";
            break;
        case DISP_CHANGE_RESTART:
            std::cout << "The computer must be restarted for the graphics mode to work.\n";
            break;
        }
    }
    system("pause");
}
This example is not always successful: whether you can modify the refresh rate depends on whether your monitor supports the requested mode.
I am using protobuf 3 to serialize a simple message.
I get a bad_alloc when I set a string value for one of the members of my protobuf message, like so:
std::string a("eeee");
hello_in.set_name(a);
The bad_alloc exception happens in libprotobuf.dll, in this function:
void CreateInstance(Arena* arena, const ::std::string* initial_value) {
    GOOGLE_DCHECK(initial_value != NULL);
    // uses "new ::std::string" when arena is nullptr
    ptr_ = Arena::Create< ::std::string>(arena, *initial_value);
}
But I think the real problem is that initial_value has been corrupted somehow and has a size of [size] = 3435973836.
I'm not sure how it is being corrupted. CreateInstance does get called a few times prior to this, but this is the first time it is called from main.cpp, which leads me to believe it has something to do with DLLs and ownership of memory.
Using any of the other set_name overloads also causes a bad_alloc exception.
Setting the bool or int in the message works fine.
Here are the message definition and main.cpp. I didn't include hello.pb.h/pb.cc as they are quite big, but I can if it helps.
// See README.txt for information and build instructions.
//
// Note: START and END tags are used in comments to define sections used in
// tutorials. They are not part of the syntax for Protocol Buffers.
//
// To get an in-depth walkthrough of this file and the related examples, see:
// https://developers.google.com/protocol-buffers/docs/tutorials

// [START declaration]
syntax = "proto3";
package commands;

import "google/protobuf/timestamp.proto";
// [END declaration]

// [START messages]
message Hello {
    string name = 1;
    int32 id = 2; // Unique ID number for this person.
    bool on = 3;
    google.protobuf.Timestamp last_updated = 4;
}
// [END messages]
#include "hello.pb.h"

// stl
#include <fstream>
#include <iostream>

int main()
{
    GOOGLE_PROTOBUF_VERIFY_VERSION;
    commands::Hello hello_in;
    hello_in.set_id(2);
    std::string a("eeee");
    hello_in.set_name(a);
    hello_in.set_on(false);
    {
        // Write the new address book back to disk.
        std::fstream output("hello.txt", std::ios::out | std::ios::trunc | std::ios::binary);
        if (!hello_in.SerializeToOstream(&output)) {
            std::cerr << "Failed to write address book." << std::endl;
            return -1;
        }
    }
    commands::Hello hello_out;
    {
        // Read the existing address book.
        std::fstream input("hello.txt", std::ios::in | std::ios::binary);
        if (!input) {
            std::cout << "hello.txt" << ": File not found. Creating a new file." << std::endl;
        }
        else if (!hello_out.ParseFromIstream(&input)) {
            std::cerr << "Failed to parse address book." << std::endl;
            return -1;
        }
    }
    // Optional: Delete all global objects allocated by libprotobuf.
    google::protobuf::ShutdownProtobufLibrary();
    return 0;
}
I have observed the same behavior (Visual Studio 2019 C++ project). The solution that helped me: in debug/x86 mode, libprotobuf.lib and libprotobuf.dll had to be replaced by their debug versions, libprotobufd.lib and libprotobufd.dll.
I am creating a large pintool and I have two questions:
The tool (abridged below to the relevant part only) sometimes cannot identify the image/routine for particular executed instructions. Does anybody know when/why can that happen?
The tool (when instrumenting a Barnes-Hut benchmark) always terminates with an out-of-memory (OOM) error after running for a while (although the benchmark, when run standalone, completes successfully). Which tools to use to debug/trace the OOM error of Pin-instrumented applications?
#include <iostream>
#include "pin.H"

// forward declarations (the tool is abridged to the relevant parts)
VOID Instruction(INS ins, VOID *v);
VOID handle_ins_execution(ADDRINT addr, ...);

int main(int argc, char *argv[])
{
    PIN_InitSymbols();
    if (PIN_Init(argc, argv))
    {
        return 0;
    }
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_StartProgram();
    return 0;
}

VOID Instruction(INS ins, VOID *v)
{
    INS_InsertPredicatedCall(ins,
                             IPOINT_BEFORE,
                             (AFUNPTR) handle_ins_execution,
                             IARG_INST_PTR,
                             .....);
}

VOID handle_ins_execution(ADDRINT addr, ...)
{
    PIN_LockClient();
    IMG img = IMG_FindByAddress(addr);
    RTN rtn = RTN_FindByAddress(addr);
    PIN_UnlockClient();
    if (IMG_Valid(img)) {
        std::cerr << "From Image   : " << IMG_Name(img) << std::endl;
    } else {
        std::cerr << "From Image   : " << "(UNKNOWN)" << std::endl;
    }
    if (RTN_Valid(rtn)) {
        std::cerr << "From Routine : " << RTN_Name(rtn) << std::endl;
    } else {
        std::cerr << "From Routine : " << "(UNKNOWN)" << std::endl;
    }
}
I recently asked this on the PinHeads forum, and I'm awaiting a response. What I have read in the documentation is that IMG_FindByAddress works by checking, for each image, whether the address falls within the mapped memory region of one of its segments. It may be that some executed instructions simply do not lie within any of those ranges (e.g. dynamically generated code).
The best way to know which image an address belongs to in cases like this is to look at the context. My pintool (based on DebugTrace) continues to run even without knowing what image it is in; you can look at the log entries before and after the unknown address occurs. I see this all the time in dyld on OS X.