For the [2018] EME Conformance Tests - v20171221
WidevineH264MultiMediaKeySession
I see that this test calls Initialize and GenerateKeyRequest 8 times each, so I receive 8 provisioning messages for key licenses.
However, the test's pass conditions are as follows:
In emeTest-20171221164539.js
runner.checkGE(video.currentTime, 15, 'currentTime');
runner.checkEq(testEmeHandler.keySessions.length, 8, 'keySessionCount');
runner.checkEq(testEmeHandler.keyCount, 128, 'keyCount');
The currentTime check (greater than 15 seconds) passes.
The session count check also passes: I do create 8 sessions.
But I cannot pass the keyCount == 128 check; I only call generateRequest 8 times.
I don't get 128 keys (16 keys per session) in this test.
In emeManager-20171221164539.js I find:
onKeyStatusesChange() { self.keyCount++; }
So I assume I need 128 AddKey() calls in order for onKeyStatusesChange() to be triggered 128 times.
But right now I have no idea how to solve this.
I am using Cobalt RC 11.119147 and Widevine CDM 3.2.1.
Does this RC 11 version support multiple key sessions?
Is there any sample showing how the Widevine CDM should handle this test?
Why does the test need 8 Initialize calls with 8 GenerateRequest calls?
Why not one Initialize and 8 GenerateRequest calls?
Does Initialize need to clean something up? Or do we need 8 CDM instances, each with only one session?
We recently fixed a bug in the test. Could you try again and see whether you can still reproduce the issue mentioned above?
My question is fairly simple:
I am coding on a single-device small laptop and I am using jax.pmap because my code will run on multiple TPUs. I would like to "fake" having multiple devices to test my code and try different things.
Is there any way to do it? I doubt the solution will be within JAX itself, though. Thanks!
You can spoof multiple XLA devices backed by a single device by setting the following environment variable:
$ export XLA_FLAGS="--xla_force_host_platform_device_count=8"
In Python, you could do it like this
# Note: must set this env variable before jax is imported
import os
os.environ['XLA_FLAGS'] = "--xla_force_host_platform_device_count=8"
import jax
print(jax.devices())
# [CpuDevice(id=0), CpuDevice(id=1), CpuDevice(id=2), CpuDevice(id=3),
# CpuDevice(id=4), CpuDevice(id=5), CpuDevice(id=6), CpuDevice(id=7)]
import jax.numpy as jnp
out = jax.pmap(lambda x: x ** 2)(jnp.arange(8))
print(out)
# [ 0 1 4 9 16 25 36 49]
Note that when only a single physical device is present, all the "devices" here will be backed by the same threadpool. This will not improve the performance of the code, but it can be useful for testing the semantics of parallel implementations on a single-device machine.
I need a way to limit the amount of memory that a service may allocate in order to prevent the service from starving the system, similar to the way SQL Server allows you to set "Maximum server memory".
I know SetProcessWorkingSetSize doesn't do exactly what I want, but I'm trying to get it to behave the way that I believe it should. Regardless of the values that I use, my test app's working set is not limited. Further, if I call GetProcessWorkingSetSize immediately afterwards, the values returned are not what I previously specified. Here's the code used by my test app:
var
  MinWorkingSet: SIZE_T;
  MaxWorkingSet: SIZE_T;
begin
  if not SetProcessWorkingSetSize(GetCurrentProcess(), 20, 12800) then
    RaiseLastOSError();
  if GetProcessWorkingSetSize(GetCurrentProcess(), MinWorkingSet, MaxWorkingSet) then
    ShowMessage(Format('%d'#13#10'%d', [MinWorkingSet, MaxWorkingSet]));
end;
No error occurs, but both the Min and Max values returned by GetProcessWorkingSetSize are 81,920.
I tried SetProcessWorkingSetSizeEx, passing QUOTA_LIMITS_HARDWS_MAX_ENABLE ($00000004) in the Flags parameter. Unfortunately, SetProcessWorkingSetSizeEx fails with "Code 87. The parameter is incorrect" if I pass anything other than $00000000 in Flags.
I've also pursued using Job Objects to accomplish the same goal. I have memory limits working with Job Objects when launching a child process. However, I need the ability for a service to set its own memory limits rather than depending on a "launching" service to do it. So far, I haven't found a way for a single process to create a job object and then add itself to the job object. This always fails with Access Denied.
Any thoughts or suggestions?
The documentation of the SetProcessWorkingSetSize function says:
dwMinimumWorkingSetSize [in]
...
This parameter must be greater than zero but less than or equal to the maximum working set size. The default size is 50 pages (for example, this is 204,800 bytes on systems with a 4K page size). If the value is greater than zero but less than 20 pages, the minimum value is set to 20 pages.
In the case of a 4K page size, the imposed minimum value is 20 * 4096 = 81,920 bytes, which is the value you saw.
The values are specified in bytes.
To actually limit the memory for your service process, I think it's possible to create a new job (CreateJobObject), set the memory limit (SetInformationJobObject), and assign your current process to the job (AssignProcessToJobObject) in the service's startup routine; a rough sketch follows below.
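A minimal, hedged sketch of that sequence in C++ (the 256 MB cap and the helper name LimitOwnProcessMemory are illustrative, not from the original question):

#include <windows.h>
#include <stdio.h>

// Illustrative helper: cap the committed memory of the calling process
// by putting it into an unnamed job object with a per-process memory limit.
bool LimitOwnProcessMemory(SIZE_T maxProcessMemoryBytes)
{
    HANDLE hJob = CreateJobObject(NULL, NULL);  // unnamed job, default security
    if (hJob == NULL)
        return false;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = { 0 };
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = maxProcessMemoryBytes;
    if (!SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                                 &limits, sizeof(limits)))
        return false;

    // On Windows 7 / Server 2008 R2 and earlier this fails with
    // ERROR_ACCESS_DENIED if the process already belongs to a job.
    return AssignProcessToJobObject(hJob, GetCurrentProcess()) != FALSE;
}

int main()
{
    if (!LimitOwnProcessMemory(256 * 1024 * 1024))  // example: 256 MB cap
        printf("Failed to apply job memory limit, error %lu\n", GetLastError());
    return 0;
}

The same three calls should translate directly to Delphi through the standard Windows unit declarations.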
Unfortunately, on Windows before 8 and Server 2012, this won't work if the process already belongs to a job:
Windows 7, Windows Server 2008 R2, Windows XP with SP3, Windows Server 2008, Windows Vista and Windows Server 2003: The process must not already be assigned to a job; if it is, the function fails with ERROR_ACCESS_DENIED. This behavior changed starting in Windows 8 and Windows Server 2012.
If this is your case (i.e. you get ERROR_ACCESS_DENIED on older Windows), check whether the process is already assigned to a job (in which case you're out of luck), but also make sure that it has the required access rights: PROCESS_SET_QUOTA and PROCESS_TERMINATE.
Today I finally found out what has been stalling my development process: even though no error code is set, the function wglChoosePixelFormatARB returns 0 pixel formats.
I am trying to set up an OpenGL context in my C++ application and I have managed to retrieve the function pointers for the extensions.
glGetIntegerv(GL_MAJOR_VERSION, &maj)
returns 4, so naturally I assumed it would be possible to create an OpenGL 3.2 context. However, after finding that there were no matches, I started commenting out some of the requirements I pass in the attribList parameter. There were still no matches whatsoever.
Only when I, just to be certain, commented out
WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
WGL_CONTEXT_MINOR_VERSION_ARB, 2,
did I finally get matches. Of the 8 pixel formats that match the remaining requirements, not ONE seems to support version 3 of OpenGL.
Has anyone ever run into this? I have tried updating/reinstalling my video drivers, but nothing has changed. I am running this on Windows 7, MS Visual Studio 2008, and my graphics card is one from the AMD Radeon HD 7700 Series.
The WGL_CONTEXT_MAJOR_VERSION_ARB, WGL_CONTEXT_MINOR_VERSION_ARB and related attributes are not attributes of the Windows pixel format.
You must not use them with wglChoosePixelFormatARB().
Those options belong in the attribute list of wglCreateContextAttribsARB, as defined by the WGL_ARB_create_context extension.
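A rough C++ sketch of that split, assuming the two extension function pointers have already been obtained via wglGetProcAddress and that hdc is a valid device context (the specific pixel-format attributes below are just example values):

// Pixel-format attributes only: no context-version attributes here.
const int pixelAttribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     32,
    WGL_DEPTH_BITS_ARB,     24,
    0                                       // terminator
};
int pixelFormat = 0;
UINT numFormats = 0;
wglChoosePixelFormatARB(hdc, pixelAttribs, NULL, 1, &pixelFormat, &numFormats);
// ... then SetPixelFormat(hdc, pixelFormat, &pfd) with a filled-in PIXELFORMATDESCRIPTOR ...

// The requested GL version goes into the context attributes instead:
const int contextAttribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0                                       // terminator
};
HGLRC hglrc = wglCreateContextAttribsARB(hdc, NULL, contextAttribs);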
os.time() in Luaj returns time in milliseconds, but according to the Lua documentation, it should return time in seconds.
Is this a bug in Luaj?
Can you also suggest a workaround that will work with both Luaj (for Java) and real Lua (C/C++)? I have to use the same Lua source for both applications. (I can't simply divide by 1000, because the two return different time scales.)
Example from my Lua file:
local start = os.time()
while(true) do
print(os.time() - start)
end
In C++, I received this output:
1
1
1
...(1 second passed)
2
2
2
In Java (using Luaj), I got:
1
...(terminated in Eclipse as fast as my finger could)
659
659
659
659
FYI, I tried this on Windows.
Yes, there's a bug in luaj.
The implementation just returns System.currentTimeMillis() when you call os.time(). It should really return something like (long)(System.currentTimeMillis()/1000.)
It's also worth pointing out that the os.date and os.time handling in luaj is almost completely missing. I would recommend that you assume that they've not been implemented yet.
The Lua manual says this about os.time():
The returned value is a number, whose meaning depends on your system. In POSIX, Windows, and some other systems, this number counts the number of seconds since some given start time (the "epoch"). In other systems, the meaning is not specified, and the number returned by time can be used only as an argument to os.date and os.difftime.
So any Lua implementation is free to change the meaning of the os.time() value.
It appears like you've already confirmed that it's a bug in LuaJ; as for the workaround you can replace os.time() with your own version:
if (runningunderluaj) then
  local ostime = os.time
  os.time = function(...) return ostime(...)/1000 end
end
where runningunderluaj can check for some global variable that is only set under luaj. If that's not available, you can probably come up with your own check by comparing the results from calls to os.clock and os.time that measure time difference:
local s = os.clock()
local t = os.time()
while true do
  if os.clock() - s > 0.1 then break end
end
-- (at least) 100ms has passed
local runningunderluaj = os.time() - t > 1
Note: It's possible that os.clock() is "broken" as well. I don't have access to luaj to test this...
In luaj-3.0-beta2, this has been fixed to return time in seconds.
This was a bug in all versions of luaj up to and including luaj-3.0-beta1.
I got an ERROR_INSUFFICIENT_BUFFER error when invoking FindNextUrlCacheEntry(). I then want to retrieve the failed entry again using an enlarged buffer, but when I invoke FindNextUrlCacheEntry() again, it seems I am retrieving the entry after the one that failed. Is there any way to go back and retrieve the information for the entry that just failed?
I also observed the same behavior on XP. I am trying to clear the IE cache programmatically using the WinInet APIs. The code at the following MSDN link works perfectly fine on Win7/Vista, but on XP it deletes cache files only in batches (multiple runs). While debugging I found that the FindNextUrlCacheEntry API gives different sizes for the same entry when executed multiple times.
MSDN Link: http://support.microsoft.com/kb/815718
Here is what I am doing:
First of all, I make a call to determine the size of the next URL entry:
fSuccess = FindNextUrlCacheEntry(hCacheHandle, 0, &cacheEntryInfoBufferSizeInitial); // cacheEntryInfoBufferSizeInitial = 0 at this point
The above call returns FALSE with the error set to ERROR_INSUFFICIENT_BUFFER and with the cacheEntryInfoBufferSizeInitial parameter set to the size, in bytes, of the buffer required to retrieve the cache entry. After allocating the required size (cacheEntryInfoBufferSizeInitial), I call the same WinInet API again, expecting it to retrieve the entry successfully this time. But sometimes it fails again: even with the buffer size the API itself reported, it now expects more bytes than it asked for earlier. Most of the time the difference is a few bytes, but I have also seen cases where the difference is almost 4 to 5 times.
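For reference, here is a hedged C++ sketch of the size/retry pattern being described; the helper name and the std::vector buffer are mine, and on the affected systems the retry may still misbehave in the way discussed above:

#include <windows.h>
#include <wininet.h>   // link with wininet.lib
#include <vector>

// Illustrative retry loop: ask FindNextUrlCacheEntry for the required size,
// grow the buffer, and call again until the entry fits or enumeration ends.
bool GetNextCacheEntry(HANDLE hCacheHandle, std::vector<BYTE>& buffer)
{
    DWORD size = static_cast<DWORD>(buffer.size());
    for (;;)
    {
        INTERNET_CACHE_ENTRY_INFO* entry = buffer.empty()
            ? NULL
            : reinterpret_cast<INTERNET_CACHE_ENTRY_INFO*>(buffer.data());

        if (FindNextUrlCacheEntry(hCacheHandle, entry, &size))
            return true;                        // entry copied into buffer

        DWORD err = GetLastError();
        if (err != ERROR_INSUFFICIENT_BUFFER)
            return false;                       // ERROR_NO_MORE_ITEMS or a real failure

        // The API reported a larger required size; grow and retry. Looping
        // (rather than assuming one resize is enough) guards against the
        // reported size changing between calls, as observed above.
        buffer.resize(size);
    }
}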
For what it's worth, this seems to be resolved in Vista.