I was trying to use gl_FragDepthEXT in a shader but ran into issues. Is there something I have to do to enable this extension?
Yes, you are missing one requirement. When you are using a raw shader, you must enable the extension with the following line in your shader code:
"#extension GL_EXT_frag_depth : enable"
When using a THREE.ShaderMaterial, the program string is partly auto-generated, so the line above cannot be added early enough in your shader string to avoid a shader compiler error. Instead, you enable it with:
material.extensions.fragDepth = true
This will make gl_FragDepthEXT available as a fragment shader output if the extension is supported.
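For illustration, here is a minimal sketch of how the pieces fit together (the fragment shader body is made up; the relevant parts are the gl_FragDepthEXT write and the extensions flag):
var material = new THREE.ShaderMaterial( {
    // three.js supplies a default vertex shader if you don't pass one
    fragmentShader: [
        "void main() {",
        "    gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );",
        "    gl_FragDepthEXT = 0.5; // write a custom depth value",
        "}"
    ].join( "\n" )
} );
// asks three.js to prepend "#extension GL_EXT_frag_depth : enable" for you
material.extensions.fragDepth = true;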
It is your hardware that determines whether or not an extension is supported, so what you can do is query it at runtime. If you look in the source of three.js (src/renderers/webgl/WebGLExtensions.js), there are helper functions to determine whether an extension is supported:
// assuming here that _gl is the webgl context
var extensions = new THREE.WebGLExtensions( _gl );
// the return value is null if the extension is not supported,
// or otherwise an extension object
extensions.get( "gl_FragDepthEXT" );
or in pure WebGL:
// returns an array of strings, one for each supported extension
// for informational purposes only
var available_extensions = _gl.getSupportedExtensions();
// the return value is null if the extension is not supported,
// or otherwise an extension object
var object_ext = _gl.getExtension( "EXT_frag_depth" );
Answering your question from the comments above, re: how widely the extension is supported. You can check http://webglstats.com/ to get an idea of the WebGL extensions currently supported by devices/OSes/browsers. The data comes from visitors to the participating websites only, but it should give you a general idea.
I'm building a NativeScript plugin for iOS to integrate a card payment terminal as an external accessory. It is almost done and working, but I have a problem with passing one argument called "optionals". This is the whole code I'm trying to implement. It's the payworks framework for a Miura terminal. http://www.payworks.mpymnt.com/node/143
MPTransactionParameters *tp = [MPTransactionParameters chargeWithAmount:[NSDecimalNumber decimalNumberWithString:@"5.00"]
                                                               currency:MPCurrencyEUR
                                                              optionals:^(id<MPTransactionParametersOptionals> _Nonnull optionals) {
    optionals.subject = @"Bouquet of Flowers";
    optionals.customIdentifier = @"yourReferenceForTheTransaction";
}];
I cannot find a way of sending this "optionals" block.
In the generated typings metadata I see that MPTransactionParametersOptionals is a @protocol, but I still don't know how to use it here as a parameter.
This is my current JavaScript code for the block:
const tp = MPTransactionParameters.chargeWithAmountCurrencyOptionals(
    amount,
    MPCurrencyEUR,
    function (optionals) {
        // logs the newly created MPTransactionParameters instance, with the
        // amount and currency properties set, but I cannot touch or set the
        // optional properties
        console.log(optionals);
    }
);
The 3rd parameter of chargeWithAmountCurrencyOptionals() should be a function, but I'm doing it wrong; I've searched everywhere on Google for how to do this with no success, and I've been trying for two days now.
It works when the 3rd parameter is null, but I need to set the optional properties.
EDIT: adding the metadata. There are a lot of typings for MPTransactionParameters, so I decided to give you the whole file so you can search.
https://drive.google.com/open?id=1kvDoXtGbCoeCT20b9_t2stc2Qts3VyQx
EDIT2: Adding the typings:
https://drive.google.com/open?id=1lZ3ULYHbX7DXdUQMPoZeSfyEZrjItSOS
I'm new to C++ and OpenCV and am having difficulty running the background subtractor model. I'm using Visual Studio 2017 (VC++) and OpenCV 3.2.
I need to use BackgroundSubtractorMOG2. It creates the model fine but throws an exception when I pass it a new frame that has been converted to cv::Mat from an encoded image (I'm using a USB3 camera that doesn't allow OpenCV to read frames directly). The code snippet is below:
if (EncodeRawImage(pRawImage, &frameDesc, imageFormat, &pEncodedImage, &encodedImageSize) == SUCCESS) {
    // Convert current raw image to openCV Mat format for analysis
    auto store = gcnew cli::array<Byte>(encodedImageSize);
    System::Runtime::InteropServices::Marshal::Copy(IntPtr(pEncodedImage), store, 0, encodedImageSize);
    auto stream = gcnew System::IO::MemoryStream(store);
    System::Drawing::Bitmap^ bitmapFrame = safe_cast<System::Drawing::Bitmap ^>(Image::FromStream(stream));
    Mat imgBuf = Mat(bitmapFrame->Width, bitmapFrame->Height, CV_8U, pEncodedImage);
    Mat imgMat = imdecode(imgBuf, CV_LOAD_IMAGE_COLOR);
    bgm->apply(imgMat, fgMaskMOG);
    returnCode = 1;
}
The exception is
owner    0x0000023afa2a2ec0 <Information not available, no symbols loaded for opencv_world320d.dll>    cv::detail::PtrOwner *
stored   0x0000023afa29baa0 <Information not available, no symbols loaded for opencv_world320d.dll>    cv::BackgroundSubtractorMOG2 *
This happens even though opencv_world320d.lib is linked properly and a few other basic OpenCV samples run for me in the same program (e.g. cv::subtract, cv::calcHist, etc.). I wonder if it is because the image size is too large (4608x3288) and the bitmapFrame I'm creating has an issue?
Or if I'm trying to access image data in the stream/memory in a way that is not allowed?
Even though the "Information not available" string is still there when I hit the breakpoint, I found that the model was throwing the exception because of the large image size mentioned in my original question (4608x3288). I resized the image (resize(newImageMat, newImageMat, cv::Size(1000, 1000 * newImageMat.rows / newImageMat.cols), INTER_LINEAR);) and the error is gone; the model learns and shows the result. The debugger text may be a Visual Studio bug; I was reading about a similar issue here: https://stackoverflow.com/a/7736034/3377101
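In terms of the snippet from the question, the fix boils down to something like this (just a sketch; bgm and fgMaskMOG are the objects from the original code):
Mat imgMat = imdecode(imgBuf, CV_LOAD_IMAGE_COLOR);
// downscale to ~1000 px wide (preserving the aspect ratio) before feeding the
// model; the full 4608x3288 frame is what triggered the exception
resize(imgMat, imgMat, cv::Size(1000, 1000 * imgMat.rows / imgMat.cols), 0, 0, INTER_LINEAR);
bgm->apply(imgMat, fgMaskMOG);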
I have a variable in my main JavaScript file, e.g. var example = {};.
After webpack has finished its job, I find that example is now referenced as t. This presents me with a problem, as I am using the variable across the web project. I bind functions onto objects, for example:
var example = {};
example.initialise = function () {};
Finally at the bottom of a page I may invoke this section of script e.g:
<script>example.initialise()</script>
This way of writing JavaScript functions is not unusual...
This is obviously a huge pain in the ass, as I have no control over the minification. Moreover, it appears that webpack doesn't figure out that example.initialise = function () {}; relates to its newly minified var example (now var t), i.e. it doesn't become t.initialise = function () {}; either.
What am I supposed to do here?
I've tried using rollup as well. The same kind of variable minification happens.
The thing is, this kind of minification/obfuscation is great, particularly on the inner workings of functions where there's little cause for concern over the parameter names. But not on the top level. I do not understand why this is happening, or how to prevent it.
Any ideas?
I assume that there are ways to configure webpack, e.g. inside webpack.config.js, but my perusal of the webpack docs gives me no clear idea of which options I could use to resolve this, like preventing property minification in some way.
In the laravel-elixir-webpack-official code you can see that minify() is being applied; minify() uses UglifyJS2, and mangling is on by default.
Mangling is an optimisation that shortens the names of local variables and functions, usually to single letters (this explains your example object being renamed to t). See the UglifyJS2 documentation on mangling.
I don't see any way to customize the minify() behaviour in laravel-elixir-webpack, so for now you might have to monkey patch the WebpackTask.prototype.gulpTask method before using the module (not an ideal solution). See the lines I am commenting out:
// gulp is used inside the patched task, so it must be in scope here
const gulp = require('gulp');
const WebpackTask = require('laravel-elixir-webpack-official/dist/WebpackTask').default;
WebpackTask.prototype.gulpTask = function () {
return (
gulp
.src(this.src.path)
.pipe(this.webpack())
.on('error', this.onError())
// .pipe(jsFiles)
// .pipe(this.minify())
// .on('error', this.onError())
// .pipe(jsFiles.restore)
.pipe(this.saveAs(gulp))
.pipe(this.onSuccess())
);
};
Turns out I have been silly. I've discovered that you can prevent top-level properties from being minified by binding them to window... which in hindsight is something I've always known and was stupid not to have realised sooner. D'oh!
So all that needed to be done was to change every top-level property like var example = {}; to something like window.app.example = {};, in which app helps namespace things and avoids overriding anything set by the language itself.
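For example (the app object is just my own namespace; the only thing that matters is hanging everything off window):
// main.js (bundled by webpack)
window.app = window.app || {};
window.app.example = {};
window.app.example.initialise = function () {
    // ...setup code...
};
The inline page script then calls window.app.example.initialise(). Because these are property accesses on the global window object rather than local variable names, the mangler leaves them intact.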
My problem is this: my SAPI in-proc recognizer refuses to load my grammar file when the Windows 10 display language is set to English (UK).
The system display language is set to UK. The speech recognition language is UK. The system locale is UK. The SAPI XML-format grammar even specifies LANGID=809 - as far as I can tell, EVERYTHING is set to EN-GB, and yet the grammar still won't load.
But it loads and works just fine when the display language is set to English (US).
Does anyone know what's up with this? It's extremely frustrating... Hopefully I'm just missing something simple.
SAPI Initialization code:
//////////////INITIALIZE SAPI ENGINE AND GRAMMAR//////////////////////////////
HRESULT SpeechObject::Initialize(){
    //INITIALIZE SR ENGINE
    if (FAILED(test=::CoInitialize(NULL)))
        SRError(L"COM Initialization Fail");
    //Create recognizer instance
    if (FAILED(test=cpEngine.CoCreateInstance(CLSID_SpInprocRecognizer))){
        SRError(L"Can't Load Reco Engine");
        return test;
    }
    //Load the audio Input (in separate function to facilitate reload)
    LoadAudio(); //should I check this?
    //load Default recognizer settings
    cpEngine->SetRecognizer(NULL);
    //get and load default reco profile
    if (FAILED(SpGetDefaultTokenFromCategoryId(SPCAT_RECOPROFILES, &cpObjectToken)))
        SRError(L"Can't Find Recognition Profile");
    if (FAILED(cpEngine->SetRecoProfile(cpObjectToken)))
        SRError(L"Can't Load Recognition Profile");
    //create reco context
    if (FAILED(test=cpEngine->CreateRecoContext(&cpContext))){
        SRError(L"Can't Create Reco Context");
        return test;
    }
    //send pSpeechObject to global callback function
    cpContext->SetNotifyCallbackFunction(
        (SPNOTIFYCALLBACK*)SpeechCallBack,
        NULL, (LPARAM)this);
    if(FAILED(cpContext->CreateGrammar(NULL, &cpGrammar)))
        SRError(L"Can't Create context");
    char str[80]; ////TEST
    sprintf(str, "LANGID: %X", GetUserDefaultUILanguage());
    MessageBoxA(GetActiveWindow(), str, 0, 0);
    //load grammar from compiled grammar resource
    if (FAILED(test = cpGrammar->LoadCmdFromResource(
            hModule, MAKEINTRESOURCE(GRAMMARCFG),
            L"FILE", GetUserDefaultUILanguage(), SPLO_STATIC))){
        SRError(L"Can't Load Grammar. Please check language settings");
        return test;
    }
    //(comment above and uncomment following to load from raw xml file for testing)
    //cpGrammar->LoadCmdFromFile(L"Grammar.xml", SPLO_STATIC);
    //Enable Engine and Reco Context
    cpEngine->SetRecoState(SPRST_ACTIVE);
    cpContext->SetContextState(SPCS_ENABLED);
    //enable ALWAYS ACTIVE and GROUND ENGINES ON commands
    return(cpGrammar->SetRuleState(NULL, NULL, SPRS_ACTIVE));
}

////////////LOAD (AND RELOAD) AUDIO INPUT//////////////////////
HRESULT SpeechObject::LoadAudio(bool dlgFlag){
    if (FAILED(test = SpCreateDefaultObjectFromCategoryId(SPCAT_AUDIOIN, &cpAudioIn))){
        SRError(L"Can't Find Default Audio Input");
        return test;
    }
    if (FAILED(test = cpEngine->SetInput(cpAudioIn, TRUE))){
        if (!dlgFlag)
            SRError(L"Can't Set Audio Input");
        return test;
    }
    if (pSRDisplay)
        pSRDisplay->DisplayText("Audio Reloaded");
    if (pDLog)
        pDLog->LogEvent("Audio Reloaded");
    //RecoState must be reenabled after audio reset
    cpEngine->SetRecoState(SPRST_ACTIVE);
    if (pDLog)
        pDLog->LogEvent("SR ENABLED");
    return test;
}
I get the "Can't Load Grammar. Please check language settings" error any time the display language is not English(US), even if I confirm that ALL SETTINGS match...
Would really appreciate any sort of insight from persons more knowledgeable than I.
Farley
You need to explicitly load the recognizer for your preferred language. In particular, this:
//load Default recognizer settings
cpEngine->SetRecognizer(NULL);
always loads the recognizer specified in the speech control panel. You likely want something like this:
CComPtr<ISpObjectToken> cpEngineToken;
hr = SpFindBestToken(SPCAT_RECOGNIZERS, L"Language=<hex language id>", NULL, &cpEngineToken);
// check hr
hr = cpEngine->SetRecognizer(cpEngineToken);
where you'll need to convert the LCID from GetUserDefaultUILanguage to a hex number.
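For example, a rough sketch of that conversion (the variable names here are just for illustration):
// build the required-attributes string from the UI language,
// e.g. 0x0809 (en-GB) becomes L"Language=809"
LANGID langId = GetUserDefaultUILanguage();
WCHAR reqAttribs[32];
swprintf_s(reqAttribs, L"Language=%x", langId);
CComPtr<ISpObjectToken> cpEngineToken;
HRESULT hr = SpFindBestToken(SPCAT_RECOGNIZERS, reqAttribs, NULL, &cpEngineToken);
if (SUCCEEDED(hr))
    hr = cpEngine->SetRecognizer(cpEngineToken);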
The issue was that I had misinterpreted the meaning of the "language" parameter in LoadCmdFromResource(). I'll blame it on the ambiguous SAPI documentation, though if I had had experience loading some other types of resources before, I might have been tipped off to this. ;) I had thought it was somehow used by SAPI and should match the language of the system and recognizer (that's what it sounded like in the documentation). It turns out it actually just specifies the language used to compile the .rc file the grammar is included in (presumably to allow multiple translations to be included in separate .rc files).
The code works perfectly as originally posted, so long as I replace GetUserDefaultUILanguage() with an explicit 0x409 (the language specified in the resource compiler) in the call to LoadCmdFromResource(). Now it works with US English, UK English and presumably all English recognizers, and it loads the recognizer selected in the speech control panel regardless of the display language setting (which can even be non-English).
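In other words, the load call becomes (keeping the original identifiers):
if (FAILED(test = cpGrammar->LoadCmdFromResource(
        hModule, MAKEINTRESOURCE(GRAMMARCFG),
        L"FILE", 0x409 /* the language the .rc was compiled with */, SPLO_STATIC))){
    SRError(L"Can't Load Grammar. Please check language settings");
    return test;
}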
Many, many thanks to Eric Brown for tipping me off to this; I was starting to lose my mind.
Farley
I'm trying to get a simple OpenGL program going, but I can't get it to display anything. I'm using Swift on a Mac, neither of which I am familiar with, although I've got a good amount of experience with OpenGL on Windows. The program runs fine (no GL errors or anything), but nothing displays, until I add this at the end of my OpenGL initialization:
var vao:GLuint=0;checkGlError();
glGenVertexArrays(1, &vao);checkGlError();
glBindVertexArray(vao);checkGlError();
Then it gives GL_INVALID_OPERATION as soon as I call glGenVertexArrays(); however, the docs don't mention that as a possible error.
I worried that I might not have a GL3 context (really I'd prefer to just have a GL1/2 context, but the Swift headers seem to be missing things like glBegin() and glColor3f(), so I decided to settle for GL3), so I tried to manually request one:
required init?(coder: NSCoder) {
    super.init(coder: coder);
    let attribs = [NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core, 0];
    pixelFormat = NSOpenGLPixelFormat(attributes: UnsafePointer<NSOpenGLPixelFormatAttribute>(attribs));
    openGLContext = NSOpenGLContext(format: pixelFormat, shareContext: nil);
    openGLContext.view = self;
    openGLContext.makeCurrentContext();
};
However this didn't seem to affect things at all.
First, glBegin (...) is not valid in a core profile context. You are going to need a legacy (non-core) context for that.
Now, since the legacy context is limited to OpenGL 2.1, it only offers GL_APPLE_vertex_array_object. In a 2.1 context, you can call glGenVertexArraysAPPLE (...) and glBindVertexArrayAPPLE (...) and they will do the same thing.
The problem you are encountering is that on OS X, you link to the same framework regardless which context version you get. That means that you can call functions that are not supported by your context at run-time. Any time you try to use a GL 3.2+ function from the legacy context, you get GL_INVALID_OPERATION.
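As a quick sanity check (a sketch, not from the original code), you can print the version string right after makeCurrentContext() to see which kind of context you actually got:
// a legacy context on OS X reports something like "2.1 ...", while a
// 3.2 core profile reports "3.2" or higher
if let version = glGetString(GLenum(GL_VERSION)) {
    print(String(cString: version));
}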
I was able to solve this by creating a bridging header, adding the relevant OpenGL #include to it, and then using a GL1 context.