Choosing a random .mp4 file from a directory in Processing

I have a directory with a Processing sketch and some .mp4 files. How do I choose a random one to display?

Break your problem down into smaller steps.
Can you write a program that simply lists all of the files in a directory? The File class might help, and the Java API is your best friend.
Can you write a program that takes that list of files and creates an array or ArrayList that contains all of them?
Can you write a program that takes an array or ArrayList and chooses a random element from it? Use hard-coded String values for testing.
When you get all of these individual steps working, you can combine them into a single program that chooses a random file from a directory. If you get stuck on a specific step, you can post an MCVE of just that step, and we'll go from there.
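For what it's worth, here is a minimal sketch of the three steps combined, written as plain Java (in a Processing sketch you would do the same thing in setup() and point it at your data folder, e.g. via dataPath("")). The folder name "data" is just a placeholder:

import java.io.File;
import java.util.ArrayList;
import java.util.Random;

public class RandomMp4 {
    public static void main(String[] args) {
        File dir = new File("data");                  // placeholder folder
        File[] entries = dir.listFiles();             // step 1: list the directory
        if (entries == null) {
            System.out.println("Not a directory: " + dir);
            return;
        }
        ArrayList<String> movies = new ArrayList<>(); // step 2: collect .mp4 names
        for (File f : entries) {
            if (f.isFile() && f.getName().toLowerCase().endsWith(".mp4")) {
                movies.add(f.getName());
            }
        }
        if (movies.isEmpty()) {
            System.out.println("No .mp4 files found.");
            return;
        }
        Random rng = new Random();                    // step 3: pick a random element
        String pick = movies.get(rng.nextInt(movies.size()));
        System.out.println("Chosen: " + pick);        // hand this to Movie in Processing
    }
}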

Related

Letting bash know to search for a corresponding floating-point number when given an integer

I use a CFD code to run a simulation. Output files are written into a folder whose name is the time stamp. If the time is greater than 1e6, the folder name is printed in floating-point format, like ..., 993600, 997200, 1.0008e+06, 1.0044e+06, 1.008e+06, ... and so on.
I need to extract some data from these output files. I wrote a bash script and it works well as long as the output time of the folder is less than 1e6. After that, when folder names are greater than 1e6, the bash script keeps generating the numbers in integer format whereas my folders are named in floating-point format, and it therefore reports a "file not found" error due to the mismatch.
For example, bash looks for the folder 1000800 whereas I have the folder 1.0008e+06. Is there a way to tell bash that what it is looking for is in floating-point format, so it can finish the job?
Any pointers, please?
After several trials, I found a non-elegant way to do this.
I create a string variable:
time="1.008e+06"
to look for the specific folder name, and get into that folder to run a few commands.
A drawback of this procedure is that it becomes cumbersome if there are many folders, as I need to explicitly enter each folder name in the bash script; looping through them is not possible this way.
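For what it's worth, one way to avoid hard-coding each name is to let printf generate the floating-point form for you: bash's printf '%g' switches to e-notation at 1e6 and strips trailing zeros, which matches names like 1.0008e+06. Whether it reproduces your CFD code's formatting exactly is an assumption worth checking against a few real folder names first; the time range in the loop below is also made up:

#!/bin/bash
# Derive each folder name from the integer time with printf '%g':
# 997200 -> "997200", 1000800 -> "1.0008e+06".
for t in $(seq 993600 3600 1008000); do   # hypothetical time range
    dir=$(printf '%g' "$t")
    if [ -d "$dir" ]; then
        (
            cd "$dir" || exit
            # ... run your extraction commands here ...
        )
    else
        echo "missing folder for time $t ($dir)" >&2
    fi
done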

Process 100K image files with bash

here is the script to optimize jpg images: https://github.com/kormoc/imgopt/blob/master/imgopt
There is a CMS with image files (not mine).
I assume there is a complicated structure of subdirectories and that the script just recursively finds all image files in a given folder.
The question is: how do I mark already-processed files so that on the next run the script won't touch them and just skips them?
I don't know when the maintainers will add new files to be processed. I also think renaming is not a good choice.
I was thinking about a hash table or associative array that would be filled from a text file at startup. But is it OK to have a 100K-item array in bash? It seems complicated for a script.
Any other ideas about optimization are also welcome.
I think the easiest thing to do is just to output a marker file with a similar name for each processed image.
For example, after image1.jpg is processed, you would create an empty file with a similar name, e.g. .image1.jpg.processed.
Then, when your script runs, it just checks for the current image NAME.EXT whether a file .NAME.EXT.processed exists. If the file doesn't exist, then you know it needs to be processed. No memory issues and no hash table needed, granted you will have 100K extra empty files.
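A short bash sketch of that check, assuming the images live under /path/to/cms and optimize_image stands in for whatever command imgopt runs on a single file:

#!/bin/bash
# Skip files that already have a .NAME.EXT.processed marker next to them;
# create the marker only after a successful optimization.
find /path/to/cms -type f \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' \) |
while IFS= read -r img; do
    marker="$(dirname "$img")/.$(basename "$img").processed"
    [ -e "$marker" ] && continue             # already processed: skip it
    optimize_image "$img" && touch "$marker" # mark it only on success
done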

WholeFileInputFormat with multiple files Input

How can I use WholeFileInputFormat with many files as input?
Many files as one file...
FileInputFormat.addInputPaths(job, String ...) doesn't seem to work properly.
You need to override isSplitable() in your InputFormat to return false, so that an input file doesn't get split and is processed by just one mapper. One small suggestion, though: you could give SequenceFile a try. Combine the multiple files you are trying to process into a single SequenceFile and then process that. It would be more efficient, as SequenceFiles already store data in key/value form.
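For reference, a minimal sketch of such a non-splittable InputFormat against the org.apache.hadoop.mapreduce API; WholeFileRecordReader is assumed to be a record reader you already have that emits one key/value pair covering the whole file:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class WholeFileInputFormat extends FileInputFormat<Text, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // never split: each file goes whole to a single mapper
    }

    @Override
    public RecordReader<Text, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new WholeFileRecordReader(); // assumed to exist in your code
    }
}

With this in place, FileInputFormat.addInputPaths(job, "dir1,dir2") can list as many files or directories as you like; each file still arrives at a mapper in one piece.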

What is the most efficient way to make sure hadoop skips certain input files?

I have a Hadoop application that, depending on a parameter, needs only certain (few!) input files from the input directory. My question is: where is the best place (read: as early as possible) to skip those files? Right now I have customized a RecordReader to take care of that, but I was wondering whether I could skip those files sooner. In my current implementation Hadoop still has a huge overhead due to irrelevant files.
Maybe I should add that it is very easy to see whether I need a certain input file: if the file name starts with the parameter, it is needed. Structuring my input directory hierarchically might be a solution, but not a very likely one for my project, since every file would end up alone in its own directory.
I'd propose filtering out the input files by applying an appropriate pattern to the input paths, as mentioned here: https://stackoverflow.com/a/13454344/1050422
Note that this solution doesn't consider subdirectories. Alter it to be able to recursively visit all subdirectories within the base path.
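A sketch of the filter idea from that answer: a PathFilter runs while Hadoop lists the input directory, i.e. before any mappers start, so irrelevant files are never turned into splits. The prefix below is hypothetical; a real version would read it from the job Configuration:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Accept only inputs whose file name starts with the parameter prefix.
// Register it with FileInputFormat.setInputPathFilter(job, PrefixPathFilter.class).
public class PrefixPathFilter implements PathFilter {
    private static final String PREFIX = "paramA_"; // hypothetical prefix

    @Override
    public boolean accept(Path path) {
        return path.getName().startsWith(PREFIX);
    }
}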
I've had success using the setInputPaths() method on TextInputFormat to specify a single String containing comma-separated file names.
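A sketch of that variant; the job name and paths are made up:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class SelectedInputsJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "selected-inputs");
        job.setInputFormatClass(TextInputFormat.class);
        // Pass only the relevant files as one comma-separated String:
        FileInputFormat.setInputPaths(job,
                "/input/paramA_run1.txt,/input/paramA_run2.txt");
        // ... set mapper, reducer, and output path as usual ...
    }
}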

What is the best way to edit the middle of an existing flat file?

I have a tool that creates variables for a simulation. The current workflow involves hand-copying those variables into the simulation input file. The input file is a standard flat file, i.e. not binary or XML. I would like to automate the addition of the variables to the flat input file.
The new variables should overwrite existing variables in the file, e.g.
New Variables:
Length 10
Height 20
Depth 30
Old Variables:
...
Weight 100
Age 20
Length 10
Height 20
Depth 30
...
I would like to have the new variables copy over the old ones. They are 200 lines into the flat input file.
Thanks for any insights.
P.S. This is on Windows.
If you're stuck with flat files, then you're stuck with the old-fashioned way of updating them: read from the original, write to a temp file, and for each row either write the original row unchanged or change the data and write that instead. To add data, write it to the temp file at the appropriate point; to delete data, simply don't copy it from the original file.
Finally, close both files and rename the temp file to the original file name.
Alternatively, it might be time to think about a little database.
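A sketch of that temp-file rewrite in Java, assuming one "Name value" pair per line; the file names and the variable map are placeholders:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Map;

public class UpdateFlatFile {
    public static void main(String[] args) throws IOException {
        Map<String, String> newValues = Map.of(
                "Length", "10", "Height", "20", "Depth", "30");
        Path input = Paths.get("sim_input.txt");     // placeholder names
        Path temp = Paths.get("sim_input.txt.tmp");

        try (BufferedReader in = Files.newBufferedReader(input);
             BufferedWriter out = Files.newBufferedWriter(temp)) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.trim().split("\\s+", 2);
                if (parts.length == 2 && newValues.containsKey(parts[0])) {
                    out.write(parts[0] + " " + newValues.get(parts[0])); // replace value
                } else {
                    out.write(line); // copy everything else through unchanged
                }
                out.newLine();
            }
        }
        // Finally, replace the original with the updated copy.
        Files.move(temp, input, StandardCopyOption.REPLACE_EXISTING);
    }
}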
For something like this I'd be looking at a simple template engine. You'd have a base template with predefined marker tokens instead of variable values and then just pass the values required to your engine along with the template and it will spit out the resultant file, all present and correct. There are a number of Open Source template engines available in Java that would meet your needs, I imagine such things are also available in your language of choice. You could even roll your own without too much difficulty.
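A roll-your-own version really can be tiny. This sketch assumes a hypothetical template file using marker tokens like ${Length} (Java 11+ for Files.readString/writeString):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

public class FillTemplate {
    public static void main(String[] args) throws Exception {
        Map<String, String> vars = Map.of(
                "Length", "10", "Height", "20", "Depth", "30");
        // Read the template, substitute each ${Name} token, write the result.
        String text = Files.readString(Paths.get("input.template"));
        for (Map.Entry<String, String> e : vars.entrySet()) {
            text = text.replace("${" + e.getKey() + "}", e.getValue());
        }
        Files.writeString(Paths.get("sim_input.txt"), text);
    }
}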
Note that under Unix, one would probably look at using mmap(), because you can then use functions such as memmove() to move the data around and add new data, or truncate() the result if the file ends up smaller (you may also want to use truncate() to grow the file). Under MS-Windows, you have the MapViewOfFileEx() function to do the same thing. The API is different, though, and I'm not exactly sure how growing or shrinking the file works there (MSDN also documents a truncate()-like function, and maybe that works). Of course, it's important to use memcpy() or memmove() properly so as not to overwrite the wrong data or run outside the buffer.