Can std::mutex be used from a thread that is not a std::thread, e.g. a regular pthread? In that case the header <thread> is not included at all.
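To make the setup concrete, here is a minimal sketch of what is being asked (the worker and g_mutex names are just for illustration): only <mutex> and <pthread.h> are included, never <thread>.

#include <mutex>      // std::mutex, std::lock_guard -- no <thread> anywhere
#include <pthread.h>  // the thread is created with the raw pthread API
#include <cstdio>

std::mutex g_mutex;
int g_counter = 0;

void* worker(void*)
{
    std::lock_guard<std::mutex> lock(g_mutex);  // ordinary RAII locking
    ++g_counter;
    return nullptr;
}

int main()
{
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        ++g_counter;
    }
    pthread_join(t, nullptr);
    std::printf("counter = %d\n", g_counter);
}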
cppreference.com and cplusplus.com say that std::move is defined in <utility>. But my IDE sends me to "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\include\type_traits".
I can't understand why.
The standard only specifies that #include <utility> gives you access to std::move. It does not require the definition to be physically present in that header file. The standard library is free to be organized internally however implementers see fit. For example, <utility> could consist of nothing but #include <utility_internal> (which then contains the actual library implementation); nothing in the standard forbids this.
In Microsoft's implementation of the standard library, <utility> contains an #include <type_traits>, which is where std::move actually lives. Thus, if you #include <utility>, you will get std::move, and that's all you should have to care about.
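To illustrate, the following relies only on the guarantee the standard gives: after #include <utility>, std::move is usable, regardless of which internal header physically defines it.

#include <utility>  // the standard guarantees std::move is available after this
#include <string>

int main()
{
    std::string a = "hello";
    // Works whether the implementation defines std::move in <utility> itself
    // or in an internal header such as <type_traits>.
    std::string b = std::move(a);
    (void)b;
}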
According to the documentation, a critical section object cannot be copied or moved.
Does that imply it cannot be safely stored by value in a std::vector-style collection?
Correct; the CRITICAL_SECTION object should not be copied or moved, as it may stop working (e.g. it may contain pointers to itself).
One approach would be to store a vector of smart pointers, e.g. (C++17 code):
#include <windows.h>
#include <memory>
#include <vector>

// Deleter that tears down the OS object and frees the heap allocation.
struct CS_deleter
{
    void operator()(CRITICAL_SECTION* cs) const noexcept
    {
        DeleteCriticalSection(cs);  // release the resources used by the critical section
        delete cs;                  // free the memory obtained with new
    }
};

using CS_ptr = std::unique_ptr<CRITICAL_SECTION, CS_deleter>;

CS_ptr make_CriticalSection()
{
    CS_ptr p(new CRITICAL_SECTION);
    InitializeCriticalSection(p.get());
    return p;
}

int main()
{
    std::vector<CS_ptr> vec;
    vec.push_back(make_CriticalSection());
}
Consider using std::recursive_mutex, which is a drop-in replacement for CRITICAL_SECTION (and probably just wraps one in a Windows implementation), and which performs the correct initialization in its constructor and the correct release in its destructor.
Standard mutexes are also non-copyable, so for this case you'd use std::unique_ptr<std::recursive_mutex> if you wanted a vector of them (see the sketch below).
As discussed here also consider whether you actually want std::mutex instead of a recursive mutex.
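A rough sketch of that std::unique_ptr<std::recursive_mutex> approach (whether you actually need the recursive variant depends on your locking pattern) might look like this:

#include <memory>
#include <mutex>
#include <vector>

int main()
{
    // The mutexes live on the heap; the unique_ptrs that own them are movable,
    // so the vector can grow even though the mutexes themselves cannot move.
    std::vector<std::unique_ptr<std::recursive_mutex>> locks;
    locks.push_back(std::make_unique<std::recursive_mutex>());

    std::lock_guard<std::recursive_mutex> guard(*locks[0]);  // lock the first one
}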
NOTE: Windows Mutex is an inter-process object; std::mutex and friends correspond to a CRITICAL_SECTION.
I would also suggest reconsidering the need for a vector of mutexes; there may be a better solution to whatever you're trying to do.
I am attempting to port the DirectX11/XAML UWP template over to a C++/WinRT version... where EVERYTHING is done via C++/WinRT and I can turn off CX.
I'm currently stuck on how to ResizeBuffers on the swapchain. I keep getting the error that says I haven't released all of the buffer references. If I comment out anything to do with resizing buffers and just hardcode in a size, the app works. So... I am probably doing something wrong.
I believe it has to do with the new winrt::com_ptr. There is no Reset method like on the WRL ComPtr. I have set them to nullptr just like in the original C++/CX templates, but that doesn't seem to be enough.
Other things I've had to do that may have an effect on what's going on:
The DeviceResources class is now a C++/WinRT class that I am creating by default in all of the other classes (SampleScene3DRenderer, DirectXPage, & Main) using the nullptr_t parameter. That way, I can create it in the DirectXPage, pass in the swapChainPanel reference, then pass this one DeviceResources instance to all of the other classes I create.
There's one spot in the DirectX initialization where you have to pass in an IUnknown**. The docs for C++/WinRT mention using a function called winrt::get_unknown to return an IUnknown*. I couldn't get it to work for the following DWriteCreateFactory call, so I tried it this way:
DX::ThrowIfFailed(
    DWriteCreateFactory(
        DWRITE_FACTORY_TYPE_SHARED,
        __uuidof(IDWriteFactory3),
        reinterpret_cast<::IUnknown**>(m_dwriteFactory.put())
    )
);
I'm not sure what else to do. Only the swapchain resizing doesn't work. I'm doing this on PC (not windows phone).
The DWriteCreateFactory call above, using winrt::com_ptr<T> and its put member, is correct. Assigning nullptr is also the correct way to reset a com_ptr<T>:
com_ptr<IUnknown> ptr = ...
assert(ptr);
ptr = nullptr;
assert(!ptr);
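For the ResizeBuffers failure itself, the usual DXGI requirement (independent of C++/WinRT) is that every outstanding reference to the back buffers, including any render-target views created from them, is released before the call. A rough sketch of that sequence, assuming hypothetical members m_d3dContext, m_renderTargetView and m_swapChain, and new pixel sizes newWidth/newHeight:

// Hypothetical member names; the point is that all back-buffer references are
// dropped (and the context flushed) before IDXGISwapChain::ResizeBuffers runs.
ID3D11RenderTargetView* nullViews[] = { nullptr };
m_d3dContext->OMSetRenderTargets(ARRAYSIZE(nullViews), nullViews, nullptr);
m_renderTargetView = nullptr;   // com_ptr reset via nullptr, as shown above
m_d3dContext->Flush();

DX::ThrowIfFailed(
    m_swapChain->ResizeBuffers(
        2,                            // buffer count
        newWidth, newHeight,          // new size in pixels
        DXGI_FORMAT_B8G8R8A8_UNORM,   // swap-chain format
        0));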
You can also use winrt::check_hresult rather than ThrowIfFailed if you wish to be consistent with how C++/WinRT reports errors. Here's a simple DirectX example written entirely with C++/WinRT:
https://github.com/kennykerr/cppwinrt/blob/master/Store/Direct2D/App.cpp
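For instance, the DWriteCreateFactory call shown earlier could be written with winrt::check_hresult instead of DX::ThrowIfFailed; the call itself is unchanged:

winrt::check_hresult(
    DWriteCreateFactory(
        DWRITE_FACTORY_TYPE_SHARED,
        __uuidof(IDWriteFactory3),
        reinterpret_cast<::IUnknown**>(m_dwriteFactory.put())));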
I had the same problem. Now, it works after I moved my include for unknwn.h to before all of the winrt includes.
// pch.h
#pragma once
#include <unknwn.h>
#include <winrt/windows.globalization.datetimeformatting.h>
#include <winrt/windows.web.syndication.h>
#include <winrt/windows.foundation.collections.h>
#include <winrt/windows.foundation.numerics.h>
#include <winrt/windows.graphics.display.h>
#include <winrt/windows.applicationmodel.h>
#include <winrt/windows.applicationmodel.activation.h>
#include <winrt/windows.applicationmodel.core.h>
#include <winrt/windows.ui.h>
// #include <winrt/Windows.ui.core.h>
#include <winrt/Windows.UI.Core.h>
#include <winrt/windows.ui.composition.h>
#include <iostream>
#include <d2d1_1.h>
#include <d3d11.h>
#include <d3d11_2.h>
#include <d3d12.h>
#include <dxgi1_2.h>
I found an explanation saying that defining WIN32_LEAN_AND_MEAN "reduces the size of the Win32 header files by excluding some of the less frequently used APIs". Somewhere else I read that it speeds up the build process.
So what does WIN32_LEAN_AND_MEAN exclude exactly? Should I care about this pre-processor directive? Does it speed up the build process?
I've also seen a pre-processor directive in projects named something along the lines of extra lean. Is this another esoteric pre-processor incantation I should know about?
According to the Windows Dev Center, WIN32_LEAN_AND_MEAN excludes APIs such as Cryptography, DDE, RPC, Shell, and Windows Sockets.
Directly from the Windows.h header file:
#ifndef WIN32_LEAN_AND_MEAN
#include <cderr.h>
#include <dde.h>
#include <ddeml.h>
#include <dlgs.h>
#ifndef _MAC
#include <lzexpand.h>
#include <mmsystem.h>
#include <nb30.h>
#include <rpc.h>
#endif
#include <shellapi.h>
#ifndef _MAC
#include <winperf.h>
#include <winsock.h>
#endif
#ifndef NOCRYPT
#include <wincrypt.h>
#include <winefs.h>
#include <winscard.h>
#endif
#ifndef NOGDI
#ifndef _MAC
#include <winspool.h>
#ifdef INC_OLE1
#include <ole.h>
#else
#include <ole2.h>
#endif /* !INC_OLE1 */
#endif /* !MAC */
#include <commdlg.h>
#endif /* !NOGDI */
#endif /* WIN32_LEAN_AND_MEAN */
If you want to know what each of the headers actually do, typing the header names into the search in the MSDN library will usually produce a list of the functions in that header file.
Also, from Microsoft's support page:
To speed the build process, Visual C++ and the Windows Headers provide
the following new defines:
VC_EXTRALEAN
WIN32_LEAN_AND_MEAN
You can use them to reduce the size of the Win32 header files.
Finally, if you choose to use either of these preprocessor defines and something you need is missing, you can just include that specific header file yourself. Typing the name of the function you're after into MSDN will usually produce an entry that tells you, at the bottom of the page, which header to include.
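As a concrete illustration of that workflow (ShellExecuteA is just one example of an API declared in shellapi.h, which the excerpt above shows is skipped when the macro is defined):

// Trim windows.h, then explicitly pull back in the one excluded header we need.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <shellapi.h>                 // ShellExecuteA lives here
#pragma comment(lib, "shell32.lib")   // MSVC-style link directive

int main()
{
    // Without the explicit <shellapi.h> include, this call would not compile
    // under WIN32_LEAN_AND_MEAN.
    ShellExecuteA(nullptr, "open", "https://example.com", nullptr, nullptr, SW_SHOWNORMAL);
}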
Complementing the above answers, and quoting from the Windows Dev Center documentation:
The Winsock2.h header file internally includes core elements from the Windows.h header file, so there is not usually an #include line for the Windows.h header file in Winsock applications. If an #include line is needed for the Windows.h header file, this should be preceded with the #define WIN32_LEAN_AND_MEAN macro. For historical reasons, the Windows.h header defaults to including the Winsock.h header file for Windows Sockets 1.1. The declarations in the Winsock.h header file will conflict with the declarations in the Winsock2.h header file required by Windows Sockets 2.0. The WIN32_LEAN_AND_MEAN macro prevents Winsock.h from being included by the Windows.h header.
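In practice that advice amounts to an include order like this (a minimal sketch; Ws2_32.lib must also be linked):

// Keep windows.h from dragging in the old winsock.h, which clashes with winsock2.h.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "Ws2_32.lib")

int main()
{
    WSADATA data;
    if (WSAStartup(MAKEWORD(2, 2), &data) != 0)
        return 1;
    // ... Windows Sockets 2.0 calls go here ...
    WSACleanup();
}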
Here's a good answer on the motivation for it from Raymond Chen's blog:
https://devblogs.microsoft.com/oldnewthing/20091130-00/?p=15863
...defining WIN32_LEAN_AND_MEAN brought you back to the 16-bit Windows philosophy of a minimal set of header files for writing a bare-bones Windows program. This appeased the programmers who liked to micro-manage their header files, and it was a big help because, at the time the symbol was introduced, precompiled header files were not in common use. As I recall, on a 50MHz 80486 with 8MB of memory, switching to WIN32_LEAN_AND_MEAN shaved three seconds off the compile time of each C file. When your project consists of 20 C files, that’s a whole minute saved right there.
Is there a GCC macro that allows me to identify whether something is being compiled in 64-bit mode?
Duplicate question: Is there a GCC preprocessor directive to check if the code is being compiled on a 64 bit machine?
__LP64__ seems to be what you want.
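For example, a quick check of the data model (GCC and Clang define __LP64__ when long and pointers are 64 bits wide):

// Prints which mode this translation unit was compiled in.
#include <cstdio>

int main()
{
#if defined(__LP64__)
    std::printf("64-bit (LP64) build, sizeof(void*) == %zu\n", sizeof(void*));
#else
    std::printf("32-bit build, sizeof(void*) == %zu\n", sizeof(void*));
#endif
}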
And you could also, at least on Linux,
#include <features.h>
#include <endian.h> // perhaps you skip that
#include <limits.h>
#include <stdint.h>
Then <bits/wordsize.h> gets included and gives you __WORDSIZE (either 64 or 32).
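So, assuming a glibc-based Linux system (note that __WORDSIZE is a glibc-internal macro, not a standard one), that looks roughly like:

// On glibc, <stdint.h> and <limits.h> indirectly include <bits/wordsize.h>.
#include <stdint.h>
#include <limits.h>
#include <cstdio>

int main()
{
#if __WORDSIZE == 64
    std::puts("compiled in 64-bit mode");
#else
    std::puts("compiled in 32-bit mode");
#endif
}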
But why do you ask, and why aren't the standard types provided by <stdint.h> enough for you?
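If the underlying goal is just "how wide are pointers and integers here", the portable facilities from <cstdint> and <climits> are often enough on their own, without any compiler-specific macro:

#include <cstdint>
#include <climits>
#include <cstdio>

int main()
{
    // Pointer width in bits, computed portably.
    std::printf("pointer width: %zu bits\n", sizeof(void*) * CHAR_BIT);
    std::printf("UINTPTR_MAX:   %ju\n", static_cast<std::uintmax_t>(UINTPTR_MAX));
}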