I have a large collection of video files with missing attributes such as length, resolution, and bitrate. I would like to fill those in somehow, but I'm not sure what solution to use. It would take ages by hand, so I'd rather see if a program can do it first. I tried searching for a solution but didn't find anything substantial.
The reason is that I'm trying to filter the videos by attributes: if a video is shorter than 30 minutes, I delete it. I can sort in Windows Explorer, but that doesn't work if the attributes are missing.
Since you asked this on StackOverflow and not on SuperUser I'm going to answer this as a programming question.
Those properties are provided by the Windows Property System. It is possible to write your own property provider and overwrite the system default provider registration, but this is a fair amount of work if all you want to do is sort some files.
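If the end goal is only to filter by duration rather than to make Explorer show the missing columns, a small script may be enough. Here is a rough sketch that asks ffprobe (part of ffmpeg; this assumes it is installed and on your PATH) for each file's duration and flags files shorter than 30 minutes; the folder path and extension filter are placeholders:

# Sketch: list video files shorter than 30 minutes using ffprobe (assumes ffmpeg/ffprobe is installed).
import subprocess
from pathlib import Path

VIDEO_DIR = Path(r"C:\videos")   # hypothetical folder
MIN_SECONDS = 30 * 60

def duration_seconds(path: Path) -> float:
    # ffprobe prints the container duration in seconds when asked this way
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

for f in VIDEO_DIR.rglob("*.mp4"):   # adjust the pattern to your file types
    try:
        if duration_seconds(f) < MIN_SECONDS:
            print("candidate for deletion:", f)   # print first; delete only after reviewing
    except (subprocess.CalledProcessError, ValueError):
        print("could not read duration:", f)

Printing the candidates first and deleting only after a manual review is deliberate, since a failed probe should not silently cost you a video.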
Could anybody give me pointers on how to process the Switchboard dataset for training with RETURNN? I did see the BlissDataset class that seems to be designed for Switchboard, but it's not clear to me what I should include in the paths given in the example:
Example:
./tools/dump-dataset.py "
{'class':'BlissDataset',
'path': '/u/tuske/work/ASR/switchboard/corpus/xml/train.corpus.gz',
'bpe_file': '/u/zeyer/setups/switchboard/subwords/swb-bpe-codes',
'vocab_file': '/u/zeyer/setups/switchboard/subwords/swb-vocab'}"
The Switchboard dataset has several folders with audio files, i.e. swb1_d2/data/*.sph, and transcripts under swb1_LDC97S62/swb_ms98_transcriptions/**/*.
I'm not quite sure how to proceed with this to get a dataset that can be used to train RETURNN.
At our group (RWTH Aachen University), we use the config as it was published on GitHub. As you see, this one uses ExternSprintDataset. That dataset uses Sprint (publicly called RWTH ASR (RASR), see here) as an external tool (run in a subprocess) to handle the data (feature extraction, etc.). Sprint gets a Bliss XML file which describes all the segments with paths to the audio, audio offsets, and transcriptions, and it also gets further configs for the feature extraction and maybe other things. There is an open-source version of RASR which should work, but it might be a bit involved to get it working.
The BlissDataset was planned to be a simpler replacement for that. However, the implementation is incomplete. Also, you would still need to generate the Bliss XML yourself in some way (we have used some of our own internal scripts to prepare that, based on the official LDC data).
So, unfortunately, there is no simple way yet. Actually, I think the easiest way would be to come up with yet another custom format, which might be similar to the LibriSpeechDataset implementation, or maybe just the same, and then you could reuse LibriSpeechDataset, or at least parts of it. That dataset implementation takes the data in a zip format which contains the transcripts in txt files and the audio in ogg or wav files. It uses librosa to do MFCC feature extraction (or other feature types as well). I planned to implement that for Switchboard and then reproduce the results; however, I have not had time yet and am not sure when I will get to it. But if you want to try that on your own, I will be happy to help you however I can. The starting point would be to look at LibriSpeechDataset and understand what its format looks like.
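For reference, the librosa side of that approach is only a few lines. A minimal sketch of MFCC extraction for a single audio file (the file name, 16 kHz sample rate, and 40 coefficients are just illustrative assumptions, not necessarily what RETURNN or our setups use):

# Sketch: MFCC extraction with librosa, similar in spirit to what LibriSpeechDataset does internally.
import librosa

# hypothetical Switchboard segment, already converted from .sph to .wav (e.g. via sph2pipe/sox)
audio, sample_rate = librosa.load("sw02001A_0001.wav", sr=16000)
mfccs = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=40)
print(mfccs.shape)  # (n_mfcc, number_of_frames)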
I'm starting to work with the Google Tango Tablet, hopefully to create (basic) 2D / 3D maps from scanned areas. But first I would like to read as much about the Tango (sensors / API) as I can, in order to create a plan to be as time efficient as possible.
I instantly noticed the ability to learn areas, which is a very interesting concept; nevertheless, I couldn't find anything about these so-called Area Description Files (ADF).
I know the ADF files can be geographically referenced, and that they contain metadata and a unique UUID. Furthermore, I know their basic functionality, but that's about it.
In some parts of the modules ADF files are referred to as 'maps', in other parts they are just called 'descriptions'.
So what do these files look like? Are they already basic 2D grid maps, or are they just descriptions?
I know there are people who already extracted the ADF files, so any help would be greatly appreciated!
From the Tango ADF documentation:
Important: Saved area descriptions do not directly record images or video of the location, but rather contain descriptions of images of the environment in a very compressed form. While those descriptions can't be directly viewed as images, it is in principle possible to write an algorithm that can reconstruct a viewable image. Therefore, you must ask the user for permission before saving any of their learned areas to the cloud or sharing areas between users to protect the user's privacy, just as you would treat images and video.
Other than that, there doesn't seem to be much info about the file internals. I use a lot of them, but I've never been compelled to look inside; curious, yes, but not compelled.
Without any direct info from the project Tango folks anything we provide would be merely speculation. I'm with Mark, not much compelling reason to get details. My speculation: probably contains a set of image descriptors, like SIFT, and whatever other known device settings are available, like GPS location, orientation (gravity), time(?), etc.
I got hold of the ADF file; it is basically encoded binary and seems difficult to decode.
I will be happy to share the file if anyone is still interested.
I have an idea to create a censor plugin/extension for VLC player.
Problem scenario:
An adult scene lasting one minute in an otherwise nice movie makes it unwatchable with family.
My solution:
Create a plugin/extension which does the following:
Reads time positions from a file, similar to subtitle files
Skips these time positions (which are adult or inappropriate) during playback
Help I need:
I searched on Google and on the VideoLAN website, but couldn't find an exact solution.
Are there already similar plugins available?
Where should I start?
Please help me if you can. Thanks!
I was looking into having/developing the exact same solution. This might be helpful to you:
http://code.google.com/p/movie-content-editor/
A similar thing is also available on GitHub:
https://github.com/rdp/sensible-cinema
You may also want to read this discussion thread:
https://forum.videolan.org/viewtopic.php?t=89466
I found a great similar answer here.
If you chop random bytes out, the movie will likely not be playable. The player might crash or fail to resynchronize the stream, or the video might just stop. Plus, you're going to have a hard time figuring out where the "adult" bytes are, so to speak.
If you already know where the parts are that you want to cut out, I would edit the file in any of the numerous video editors. Even Windows Movie Maker or iMovie would do the job, and those are easily available on both major OSes.
This is a requested feature for VLC. Not really anything user-friendly out there. Still, VLC offers the possibility to create playlists in a certain format that would mute or skip parts of a file. This is called XSPF. You might be able to figure out the proper format for this.
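As a starting point, here is a rough Python sketch that writes such a playlist: one XSPF track per segment you want to keep, using VLC's start-time/stop-time options in the track extension. Whether your VLC version honors those options from inside a playlist is an assumption you would need to verify, and the file name and segment times are placeholders:

# Sketch: generate an XSPF playlist that plays only the listed segments of one video file.
# Assumes VLC honors start-time/stop-time options inside a playlist; verify with your VLC version.
from pathlib import Path

def build_xspf(video_path, keep_segments):
    """keep_segments: list of (start_seconds, stop_seconds) tuples to play, in order."""
    location = Path(video_path).resolve().as_uri()
    lines = [
        '<?xml version="1.0" encoding="UTF-8"?>',
        '<playlist xmlns="http://xspf.org/ns/0/" '
        'xmlns:vlc="http://www.videolan.org/vlc/playlist/ns/0/" version="1">',
        '  <trackList>',
    ]
    for i, (start, stop) in enumerate(keep_segments):
        lines += [
            '    <track>',
            f'      <location>{location}</location>',
            '      <extension application="http://www.videolan.org/vlc/playlist/0">',
            f'        <vlc:id>{i}</vlc:id>',
            f'        <vlc:option>start-time={start}</vlc:option>',
            f'        <vlc:option>stop-time={stop}</vlc:option>',
            '      </extension>',
            '    </track>',
        ]
    lines += ['  </trackList>', '</playlist>']
    return "\n".join(lines)

# Hypothetical example: play 0-10 min, skip one minute, then play from 11 min to 120 min.
print(build_xspf("movie.mp4", [(0, 600), (660, 7200)]))

Generating the playlist from a plain list of (start, stop) pairs keeps the "skip list" in a simple data file, much like the subtitle-style file the question describes.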
Also, there's movie-content-editor:
A VLC based editor built in python that allows users to create and use custom filter files to make movies more family friendly. Allows users to have the player automatically mute specific words or skip certain scenes based on the content of those scenes.
And sensible-cinema:
Clean Editing Movie Player allows you to watch edited movies by applying delete lists (EDLs) (i.e. "mute out" or "cut out" scenes) to DVDs/files, with preliminary support for also applying them to arbitrary web/internet-based players like Netflix Instant, Hulu/Hulu Plus, etc.
See also these threads on The VideoLAN Forums:
auto skip unwanted parts of a video
Clearplay-like (content filter) module exists?
My dearest stackoverflowers,
I want to access the serialized data contained in files with extensions that are strange to me. The bulk of the data seems to be in a .st and an .idt file.
The program is meant to be run on Windows, and the Unix file command gives me only false positives. Any ideas on either what these extensions mean or how to investigate and extract their contents?
Below I provide the full list of extensions in the hope that somebody recognizes them. Googling also gives me false positives; for example, .st is commonly used for Atari emulation files.
Thanks in advance!
.cix
.cmp
.cnt
.dam
.das
.drf
.idt
.irc
.lxp
.mp
.mbr
.str
.vlf
.rpf
.st
Some general advice on how to approach this:
One way to approach this is to use a site like http://filext.com/ to try to figure out where the files came from. This can be tough, because it's not like there's a file extension standard anywhere - anyone can use any extension, so you're going to have a lot of conflicts/disambiguation issues to solve.
Sometimes you can get lucky, and if you open up the files in a plain text editor you can occasionally see plain string data that is readable, which can help identify the general sort of data contained in a file, and therefore help cut down on the possible number of sources for a file. For example, I have often helped people who received a file as an email attachment with no extension, figure out what file type it was using this technique, adding the file extension, and then opening it in the appropriate program.
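If you don't have a strings-style utility handy on Windows, a few lines of Python can do the same job of pulling readable ASCII runs out of an unknown binary file; the minimum run length of 4 is an arbitrary choice:

# Sketch: print runs of printable ASCII from a binary file, like the Unix `strings` tool.
import re
import sys

MIN_LEN = 4  # arbitrary: shorter runs are usually noise

with open(sys.argv[1], "rb") as f:
    data = f.read()

# printable ASCII range is 0x20 (space) through 0x7e (~)
for match in re.finditer(rb"[ -~]{%d,}" % MIN_LEN, data):
    print(match.group().decode("ascii"))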
There are also sites like http://www.oldversion.com/ that keep old versions of programs that you (typically) can download for free. This is especially helpful if the data you're working with was created 5+ years ago in a proprietary program, and that program is no longer available/purchasable from the vendor who created it.
Once you have a good idea of which files belong to which programs, you're probably going to spend a lot of time trying to find online resources describing the structure of the files. If that isn't available, or you can get a copy of the original program but it won't open the files you're interested in, or you still want raw access to the data, then try generating some sample output files from data that you input, and go Rosetta Stone on it, comparing your known file to the original file.
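For that "Rosetta Stone" step, even a crude byte-level comparison of two files that should differ only in the data you typed in can be revealing. A minimal sketch (file paths passed on the command line):

# Sketch: report byte offsets where two files differ, to help locate where known input data ends up.
import sys
from itertools import zip_longest

with open(sys.argv[1], "rb") as f1, open(sys.argv[2], "rb") as f2:
    data1, data2 = f1.read(), f2.read()

for offset, (b1, b2) in enumerate(zip_longest(data1, data2)):
    if b1 != b2:
        # bytes are shown as integers; None means one file ended before the other
        print(f"offset 0x{offset:08x}: {b1!r} != {b2!r}")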
From there, the additional knowledge you'll probably want is what language/compiler the software was written in, which can give you a lead on what code libraries were used to serialize the data in the first place. Once you know all that, then it's a matter of reading through any available documentation on the serialization process, and then writing a deserializer.
The one thing this technique won't solve is, if you're dealing with corrupt/truncated data files, it may be very difficult to tell the difference between that and whether or not you have the file structure correct. The "Rosetta Stone" technique might be helpful in that case.
Depending on how many different pieces of source software you're talking about, sounds like a pretty big project. Good luck!
I've got the same problem as in this question, except on Windows. Our product has a 100+ MB code base, and searching for stuff in there takes an awfully long time (several minutes). It's nice when you can narrow your search to a specific subfolder, but that isn't always possible.
I was wondering if there is some tool that would make it faster, probably by indexing. Accuracy is paramount, if a substring exists somewhere, it must be found, even if the file is not indexed or the index is out of date. Also it would be ideal if .svn folders would be ignored when searching.
Failing that, I was wondering if I could make something like that myself. Is there maybe a ready-made indexing engine available for such tasks? I was wondering about the Windows Indexing Service (or whatever it is called these days), but so far my experience with it (the standard Windows file search facility) has been rather dismal, with it often missing files that were right in front of its nose.
Yes, I have seen the Windows Indexing Service miss files too, but I haven't checked KBs or user forums for explanations. I'm glad to see it confirmed that it's not just me ;-)!
There look to be a lot of file indexing programs available; I would be surprised if you can't find one that meets your needs (although, see below).
Here are some things to consider:
If your team is using an IDE, isn't there an index feature/plug-in? (Do none of the SVN tools provide indexing capabilities?) Also, add some tags to your question so it will be seen by other Windows developers using the same dev environment you are using.
The SO link you provided mentions several options: slocate, rlocate, and I found mlocate. The Wikipedia page for slocate says:
Locate32 for Windows: Windows analog of GNU locate with GUI, released under GNU license
which seems to meet your main requirement. Looking at the screen shots with the multi-tab interface (one labeled advanced) would give me hope that you can exclude svn (at least from results, possibly from what is indexed).
Your requirement that "if a substring exists somewhere, it must be found, even if the file is not indexed or the index is out of date" seems contradictory. For the substring requirement, I can see many indexing programs ignoring C-language syntax elements ( {([])}, etc.); and, for example, 'then' either gets removed because it is considered a noise word, or it gets stemmed down to 'the' and THEN is removed because that is a noise word.
To get to 'must be found', and really be sure, you would have to develop a test suite to see what the index program is doing for anything that is corner case. (For a 100 MB code base, not out of the question, especially since you are considering rolling your own).
Finally, 'even if the file is not indexed ...'. Well, you either use an index or you don't (obviously). Unfortunately for your requirement, while rlocate is looking for changes all the time, slocate (on Unix) doesn't seem to. If you read/check the docs or user forums for Locate32, you'll probably get the answers you need.
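If the must-be-found guarantee matters more than speed, the only safe fallback is a brute-force scan that ignores the index entirely. A rough sketch of what that could look like (plain substring match, .svn directories pruned from the walk; treating every file as text with errors ignored is a simplifying assumption):

# Sketch: brute-force substring search over a source tree, skipping .svn directories.
# Slow, but it cannot miss a match the way a stale or word-based index can.
import os
import sys

def search(root, needle):
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".svn"]  # prune .svn from the walk in place
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if needle in line:
                            print(f"{path}:{lineno}: {line.rstrip()}")
            except OSError:
                pass  # unreadable file; skip it

if __name__ == "__main__":
    search(sys.argv[1], sys.argv[2])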
Rlocate would give you what you need, but according to an rlocate page, 'rlocate will work only on Linux with version 2.6.'. mlocate doesn't seem to have a Windows port either.
Finally, here is an interesting link I found about mlocate: mlocate vs rlocate. This is the Google cache, because the redhat.com page said 'not available'.