What is opt_initialTime relative to? - chromecast

I am calling player.load(protocol, 17800) for a live audio stream (duration Infinity). I want to start 17800 seconds after the timestamp of the first segment, based on #EXT-X-PROGRAM-DATE-TIME.
The player is not starting 17800 seconds in; I am off by about 7 minutes. The playlist manifest is 5 hours long. I want to start close to live, but not necessarily at the live point. Is there a way to do this?
I do not know what opt_initialTime is relative to. I assume it is the first segment of the playlist that is pulled. Can someone from Google explain how the Google Cast media player handles that second parameter of load?

Based on this documentation, opt_initialTime is the time, in seconds, from which to start playback. This parameter is used in the load and preload methods.
This link discusses how load(opt_protocol, opt_initialTime) and preload(opt_protocol, opt_initialTime) are supposed to work in the Media Player Library.
Our testing shows that load with an initial time specified works. Preload with an initial time works by downloading segments starting from the specified initial time. If that is then followed by a load, the load takes precedence and playback starts from the beginning.

Related

How to prevent doubling values on Stream analytics due to blob input

We see an issue on Stream Analytics when using a blob reference input. Upon restarting the stream, it outputs double values for things joined to it. I assume this is an issue with having more than one blob active during the time it restarts. Currently we pull the files from a folder path in ADLS structured as Output/{date}/{time}/Output.json, which ends up being Output/2021/04/16/01/25/Output.json. These files have a key that the data matches on in the stream with:
IoTData
LEFT JOIN kauiotblobref kio
ON kio.ParentID = IoTData.ConnectionString
which I don't see any issue with, but those files are actually getting created every minute, on the minute, by an Azure Function. So it may be possible that during the start of Stream Analytics it grabs the last blob and the one that gets created right after. (That would be my guess, but I'm not sure how we would fix that.)
Here's a visual of the issue in Power BI (screenshots of the peak and the trough).
This is easily explained when looking at the Cosmos DB for the device it's capturing from: there are two entries with the same value, assetID, and timestamp, but different recordIDs (which just means Cosmos DB counted them as two separate events). This shouldn't be possible, because we can't send duplicates with the same timestamp from a device.
This seems to be a core issue with blob storage inputs on Stream Analytics, since a stream traditionally takes more than a minute to start. The best way I've found to resolve it is to stop the corresponding functions before starting the stream back up. I'm working to automate this through CI/CD pipelines, which is good practice anyway for editing the stream.

R307 Fingerprint Sensor working with more than 1000 fingerprints

I want to integrate a fingerprint sensor into my project. For now I have shortlisted the R307, which has a capacity of 1000 fingerprints. But since the project requires more than 1000 prints, I am going to store the prints on the host.
The procedure I understand from reading the datasheet to meet the project requirements is:
1. Register the fingerprint with "GenImg".
2. Download the template with "UpChar".
3. Whenever a new fingerprint comes in, follow steps 1 and 2.
4. Then run some sort of matching algorithm that matches the recently downloaded template file against the template files stored in the database.
So below are the points I would like your thoughts on:
1. Is the procedure I have written above correct and optimized?
2. Is the matching algorithm straightforward, like a simple comparison, or is it trickier? How can I implement it? Please suggest a library if one already exists.
3. The sensor stores the image as 256 × 288 pixels, and if I transfer this file to the host at the maximum data rate it takes ~5 seconds (256 × 288 × 8 / 115200), which seems very long.
Thanks
Abhishek
PS: By "host" I mean the device I am going to connect the sensor to; it can be an Arduino, a Pi, or any other compute device. I will select one depending on how much computing this task requires.
You have most probably figured it out yourself by now, but for anyone stumbling here in the future: you're correct for the most part.
1. Take the finger image (GenImg).
2. Generate a character file (Img2Tz) in BufferID 1.
3. Repeat the above two steps, but this time store the character file in BufferID 2.
4. Generate a template file by combining those two character files (RegModel). The device combines them for you and stores the template in both character buffers.
5. As a last step, store this template in the sensor's storage (Store).
For searching for a finger: take the finger image once, generate a character file in BufferID 1, and search the library (Search). This performs a linear search and returns the finger ID along with a confidence score. (A code sketch of this flow follows at the end of this answer.)
There's also another method (GR_Identify) that does all of the above automatically.
The question about optimization isn't really applicable here; you're using a third-party device and you have to follow its working instructions whether they're optimized or not.
The sensor stores the image as 256 × 288 pixels, and if I transfer this file to the host at the maximum data rate it takes ~5 seconds (256 × 288 × 8 / 115200), which seems very long.
I don't really get what you mean by this, but the template file (the one you intend to upload to your host) is only 512 bytes, so it shouldn't take much time: by the same arithmetic, 512 × 8 / 115200 is roughly 0.036 seconds.
If you want an overview of how this system is implemented, Adafruit's library is a good reference.
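For illustration only, here is a minimal Arduino-style sketch of that flow based on the Adafruit_Fingerprint library mentioned above; the serial pins, baud rate and template ID are assumptions, and the exact call names may differ between library versions:

#include <Adafruit_Fingerprint.h>
#include <SoftwareSerial.h>

SoftwareSerial sensorSerial(2, 3);            // RX, TX pins chosen for this example
Adafruit_Fingerprint finger(&sensorSerial);

// Enroll: two scans -> two character files -> one template stored in the sensor.
bool enrollFinger(uint16_t id) {
  if (finger.getImage() != FINGERPRINT_OK) return false;     // GenImg (first scan)
  if (finger.image2Tz(1) != FINGERPRINT_OK) return false;    // Img2Tz -> buffer 1
  delay(1000);                                                // lift and place the finger again
  if (finger.getImage() != FINGERPRINT_OK) return false;     // GenImg (second scan)
  if (finger.image2Tz(2) != FINGERPRINT_OK) return false;    // Img2Tz -> buffer 2
  if (finger.createModel() != FINGERPRINT_OK) return false;  // RegModel: merge both buffers
  return finger.storeModel(id) == FINGERPRINT_OK;            // Store the template at slot 'id'
}

// Identify: one scan -> character file in buffer 1 -> linear search of the library.
int identifyFinger() {
  if (finger.getImage() != FINGERPRINT_OK) return -1;
  if (finger.image2Tz(1) != FINGERPRINT_OK) return -1;
  if (finger.fingerSearch() != FINGERPRINT_OK) return -1;    // Search command
  return finger.fingerID;                                    // match score is in finger.confidence
}

void setup() {
  finger.begin(57600);                                       // default baud rate for R30x sensors
}

void loop() {
  // call enrollFinger(...) / identifyFinger() as needed
}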

Azure Kinect DK - releasing capture/cluster containing timestamp has already been written to disk

I'm trying to record a relatively lengthy video (7-8 hours) using k4arecorder.exe. I have tried splitting it up into hour-long recordings and recording it in one go. It will occasionally record fine, but it randomly crashes with the error messages below. Sometimes this occurs at 15 minutes, sometimes after 5 hours, sometimes not at all. Can anyone explain this error to me?
The errors are:
releasing capture early due to full queue T5
The cluster containing the timestamp -XXXX has already been written to disk.
Returned failure in k4a_record_write_capture()

Appropriate way to cancel saving file via file stream?

A tool I'm writing is responsible for downloading thousands of image files over a matter of many hours. Originally, using TIdHTTP, I would Get the file(s) into a TMemoryStream, and then save that to a file, so long as there were no exceptions. In order to improve speed, I changed the TMemoryStream to a TFileStream.
However, now if the resource is not found, or any other sort of exception occurs that results in no actual file data, it still saves an empty file.
Completely understandable, since I simply create a file stream just prior to the download...
// The file is created here, before the download, so it exists even if Get fails
FileStream := TFileStream.Create(FileName, fmCreate);
try
  Web.Get(AURL, FileStream);
finally
  FileStream.Free;
end;
I know I could simply delete the file if there was an exception. But it seems far too sloppy. I'm sure there's a more appropriate method of aborting such a situation.
How can I make this not save a file if there was an exception, while not altering the performance (if at all possible)?
How can I make this not save a file if there was an exception, while not altering the performance (if at all possible)?
This isn't possible in general. Errors and failures can happen at any step of the way, including partway through the download. Once this point is understood, you must accept that the file can be partially downloaded and then abandoned. At that point, where do you store it?
The obvious choices are memory and a file. You don't want to store it in memory, which leaves a file.
This takes you back to your current solution.
I know I could simply delete the file if there was an exception.
This is the correct approach. There are a few variants on this. For instance you might download to a temporary file that is created with flags to arrange its deletion when closed. Only if the download completes do you then copy to the true destination. This is the approach that a browser takes. But the basic idea is to download to file and deal with any failure by tidying up.
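To make that concrete, here is a minimal sketch of the "tidy up on failure" idea, written in C++ rather than Delphi purely for illustration; saveUrlToFile and the download callback are made-up names standing in for the real code around TIdHTTP.Get:

#include <cstdio>
#include <fstream>
#include <functional>
#include <string>

// RAII guard: removes the named file on destruction unless commit() was called.
class PartialFileGuard {
public:
    explicit PartialFileGuard(std::string path) : path_(std::move(path)) {}
    ~PartialFileGuard() { if (!committed_) std::remove(path_.c_str()); }
    void commit() { committed_ = true; }
private:
    std::string path_;
    bool committed_ = false;
};

// 'download' stands in for the real HTTP call; it may throw on failure.
void saveUrlToFile(const std::string& fileName,
                   const std::function<void(std::ostream&)>& download) {
    PartialFileGuard guard(fileName);
    std::ofstream out(fileName, std::ios::binary);
    download(out);     // if this throws, the stream closes and the guard deletes the partial file
    out.close();
    guard.commit();    // only a completed download keeps the file
}

The same shape works in Delphi: wrap the try/finally in an outer try/except (or check a success flag) and call DeleteFile on the target when the download did not finish.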
Instead of downloading the entire image in one go, you could consider using HTTP range requests, if the server supports them. Then you could split the file into smaller parts, requesting the next part after the first finishes (or even requesting multiple parts at the same time to increase performance). If there is an exception, you can abort the future requests so they never start in the first place.
YouTube and a number of streaming media sites started doing this a while ago. It used to be that if you started playing a video and then paused it, it would eventually cache the entire video. Now it only caches a little ahead of the current position. This saves a ton of bandwidth because of the abandon rate for videos.
You could write the partial file to disk or keep it in memory.
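As a rough illustration of a ranged request (not Indy-specific; libcurl and the placeholder URL are assumptions for this sketch), asking the server for only the first 64 KiB looks like this:

#include <curl/curl.h>
#include <cstdio>

// Append received bytes to the FILE* passed via CURLOPT_WRITEDATA.
static size_t writeToFile(char* data, size_t size, size_t nmemb, void* userdata) {
    return std::fwrite(data, size, nmemb, static_cast<FILE*>(userdata));
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    FILE* out = std::fopen("image.part", "wb");
    if (!out) { curl_easy_cleanup(curl); return 1; }

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/image.jpg"); // placeholder URL
    curl_easy_setopt(curl, CURLOPT_RANGE, "0-65535");   // bytes 0..65535 only (needs server support)
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeToFile);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);

    CURLcode rc = curl_easy_perform(curl);
    std::fclose(out);
    if (rc != CURLE_OK) std::remove("image.part");       // tidy up the partial file on failure

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}

A server that honours the range replies with 206 Partial Content; subsequent requests just move the range window forward.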

AVAudioPlayer not playing from start of file

I have an application using AVFoundation that plays an mp3 sound file as a sound effect. Its recorded length is 90 seconds. However, I play it repeatedly, on and off depending on various things, for different lengths of time using tickingSounds.play() and tickingSounds.stop(). It seems, however, that once I get to 90 seconds it won't play any more. I am assuming the play() command resumes from where I last stopped it?
My question is: how do I reset its play position, or am I missing something else?
You should set numberOfLoops. Set the property numberOfLoops to -1 and the sound will loop indefinitely.
numberOfLoops
The number of times a sound will return to the beginning, upon reaching the end, to repeat playback.
@property NSInteger numberOfLoops
Discussion: A value of 0, which is the default, means to play the sound once. Set a positive integer value to specify the number of times to return to the start and play again. For example, specifying a value of 1 results in a total of two plays of the sound. Set any negative integer value to loop the sound indefinitely until you call the stop method.
Availability: Available in iOS 2.2 and later. Declared in AVAudioPlayer.h.
