I use MediaRecorder to record video and audio from a user's browser. We record every 15 seconds and then upload that blob to S3. Then we combine all the files together to make one webm file. I believe the first file isn't right, because when I combine the files there is no audio - only video.
Is there a way to alter the headers in the first file so that the audio in all of the subsequent files is used? Or is there an ffmpeg command to force using the audio? I know it exists in the other files.
I don't believe this is important but here is the code that I use to save and combine the webm blobs.
First I save the blobs from the media recorder
recorder = new MediaRecorder(local_media_stream.remoteStream, {
  mimeType: encoding_options,
  audioBitsPerSecond: 96000,
  videoBitsPerSecond: bits_per_second,
});
recorder.ondataavailable = function(e) {
  that.save_blob(e.data, blob_index);
}
Then later I combine each of those blobs.
bucket = Aws::S3::Resource.new(region: 'us-east-1').bucket("files")
keys = bucket.objects(prefix: "files").collect(&:key)
temp_webm_file = Tempfile.new(['total', '.webm'])
temp_webm_file.binmode # webm is binary data; avoid newline translation
keys.each do |key|
  temp_webm_file.write bucket.object(key).get.body.read
end
temp_webm_file.close
One thing I know fixes the issue: if I prepend a short webm file that has audio to the very beginning, then the audio all works.
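One thing that may also be worth trying (an assumption on my part, not verified against these files): re-muxing the concatenated file so ffmpeg rebuilds the container headers from the tracks it actually finds. A minimal sketch driving ffmpeg from Python, assuming ffmpeg is on PATH and the chunks were already concatenated into total.webm:

```python
import subprocess

def remux_cmd(src, dst):
    # -c copy re-muxes without re-encoding; ffmpeg writes fresh
    # container headers, which can expose an audio track that the
    # first chunk's header did not advertise.
    return ["ffmpeg", "-y", "-i", src, "-c", "copy", dst]

# To actually run it:
# subprocess.run(remux_cmd("total.webm", "fixed.webm"), check=True)
```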
I am trying to record the screen and referring to the following tutorial.
http://appium.io/docs/en/commands/device/recording-screen/start-recording-screen/
I tried the following piece of code but it doesn't save anything at this path.
@driver.start_recording_screen video_type: 'h264', time_limit: '260', remote_path: '/recordings'
I am putting it in a before hook so that it records everything for all five tests that I have in the particular spec file.
Am I missing something here?
To start recording, use the below C# code:
driver.StartRecordingScreen(AndroidStartScreenRecordingOptions
    .GetAndroidStartScreenRecordingOptions()
    .WithTimeLimit(TimeSpan.FromMinutes(1))
    .EnableBugReport());
And then, to stop the recording, you need to use the following code. Since the recording is returned in Base64 format, you need to decode it to view it.
String video = driver.StopRecordingScreen();
byte[] decode = Convert.FromBase64String(video);
String fileName = "VideoRecording_test.mp4";
File.WriteAllBytes(fileName, decode);
In order to start the recording, we just need to call the start_recording_screen method from the respective classes.
before(:all) do  # or before(:each)
  @driver.start_recording_screen video_quality: 'low'
end
For iOS, please install ffmpeg (brew install ffmpeg).
We can add screen recording configurations like time limit, video size, etc. when starting the video recording.
In order to stop the recording, we need to call the stop_recording_screen method from the respective classes.
Now, coming to the most important question! Where is our video?
The stop_recording_screen method returns a Base64 string. Using this string we need to build our video. There are many ways to do it; I have used the decode64 method from Ruby's Base64 module.
after(:all) do
  record = @driver.stop_recording_screen
  File.open('sample.mp4', 'wb') do |file|
    file.write(Base64.decode64(record))
  end
end
Finally you can find the recording under sample.mp4. I would recommend playing the video with VLC or MPlayer if you cannot play it with other video players.
The background of my problem is that I want to extract the video data of Motion Photos (taken by my Samsung S7). Manually it is easy but time consuming: just open the .jpg file in a hex editor and extract all data after the line "MotionPhoto_Data". The first part is the image and the second part is the video.
My current code is
im = 'test.jpg'
with open(im, 'rb') as fin:
    data = fin.read()
data_latin = data.decode('latin1')
position = data_latin.find('MotionPhoto_Data')
data_pic = data[:position]
data_mpg = data[position:]
My problem now is that I can't figure out how to save these strings in a way that data_pic is saved as a working jpg and data_mpg as a working video.
I tried
with open('test_pic.jpg', 'a') as fin:
    fin.write(str(data_pic))
But this didn't work. I think there is a basic issue in how I try to handle/save my data, but I can't figure out how to fix it.
I assume you use Python 3, as the question is tagged that way.
You should not decode with data.decode('latin1'); it is binary data. Search the bytes directly after reading:
data = fin.read()
position = data.find(b'MotionPhoto_Data')
Then later write it also as binary data (open mode 'wb', and no str() conversion):
with open('test_pic.jpg', 'wb') as fout:
    fout.write(data_pic)
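Putting both pieces together, a minimal end-to-end sketch. It assumes the marker occurs exactly once, and it keeps the marker bytes with the video part, as the original slice data[position:] did:

```python
def split_motion_photo(path, pic_out, mpg_out):
    marker = b'MotionPhoto_Data'
    with open(path, 'rb') as fin:
        data = fin.read()
    position = data.find(marker)
    if position == -1:
        raise ValueError('no MotionPhoto_Data marker found')
    with open(pic_out, 'wb') as f:
        f.write(data[:position])  # the JPEG part
    with open(mpg_out, 'wb') as f:
        f.write(data[position:])  # the video part, marker included

# split_motion_photo('test.jpg', 'test_pic.jpg', 'test_mpg.mp4')
```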
I have an mp4 file that doesn't seek accurately, and seeking backwards is slow. I believe it is caused by heavy compression with very few key frames.
What is the best way to re-encode this file using AVFoundation so that I can make every frame a key frame?
The standard way to set the (max) key frame interval is AVVideoMaxKeyFrameIntervalKey.
The corresponding value is an instance of NSNumber; 1 means key frames only.
You could apply it to an AVAssetWriter object, which might look similar to this:
NSDictionary *compProps = @{ AVVideoAverageBitRateKey : @(bitsPerSecond),
                             AVVideoExpectedSourceFrameRateKey : @(30),
                             AVVideoMaxKeyFrameIntervalKey : @(1) };
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecH264,
                                  AVVideoWidthKey : @(width),
                                  AVVideoHeightKey : @(height),
                                  AVVideoCompressionPropertiesKey : compProps };
_videoInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                             outputSettings:outputSettings];
See the documentation for AVVideoMaxKeyFrameIntervalKey.
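If AVFoundation is not a hard requirement, the same all-key-frames re-encode can be done with ffmpeg's GOP-size option. A sketch driving it from Python; the tool choice and flags are my suggestion, not part of the AVFoundation answer above, and ffmpeg is assumed to be on PATH:

```python
import subprocess

def all_keyframes_cmd(src, dst):
    # -g 1 sets the GOP size to 1, i.e. every frame is a key frame,
    # which makes seeking frame-accurate at the cost of file size.
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-g", "1",
            "-c:a", "copy", dst]

# subprocess.run(all_keyframes_cmd("in.mp4", "out.mp4"), check=True)
```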
I have a folder of jpg files and I want to make them into a movie. I am using this script:
% Create video out of list of jpgs
clear
clc
% Folder with all the image files you want to create a movie from, choose this folder using:
ImagesFolder = uigetdir;
% Verify that all the images are in the correct time order, this could be useful if you were using any kind of time lapse photography. We can do that by using dir to map our images and create a structure with information on each file.
jpegFiles = dir(strcat(ImagesFolder,'\*.jpg'));
% Sort by date from the datenum information.
S = [jpegFiles(:).datenum];
[S,S] = sort(S);
jpegFilesS = jpegFiles(S);
% The sub-structures within jpegFilesS is now sorted in ascending time order.
% Notice that datenum is a serial date number, for example, if you would like to get the time difference in hours between two images you need to subtract their datenum values and multiply by 1440.
% Create a VideoWriter object, in order to write video data to an .avi file using a jpeg compression.
VideoFile = strcat(ImagesFolder,'\MyVideo');
writerObj = VideoWriter(VideoFile);
% Define the video frames per second speed (fps)
fps = 1;
writerObj.FrameRate = fps;
% Open file for writing video data
open(writerObj);
% Running over all the files, converting them to movie frames using im2frame and writing the video data to file using writeVideo
for t = 1:length(jpegFilesS)
Frame = imread(strcat(ImagesFolder,'\',jpegFilesS(t).name));
writeVideo(writerObj,im2frame(Frame));
end
% Close the file after writing the video data
close(writerObj);
(Courtesy of http://imageprocessingblog.com/how-to-create-a-video-from-image-files/)
But it gives me this error:
Warning: No video frames were written to this file. The file may be invalid.
> In VideoWriter.VideoWriter>VideoWriter.close at 289
In Movie_jpgCompilation at 37
I'm sure my jpg files are fine, and they are in the folder I specify. What is the problem?
(This is my first post ever, so I hope it helps).
If you're on Linux or Mac, the backslashes need to be forward slashes. When I ran it on my Mac, my jpegFiles was an empty struct; when I changed them, it worked:
% Create video out of list of jpgs
clear
clc
% Folder with all the image files you want to create a movie from, choose this folder using:
ImagesFolder = uigetdir;
% Verify that all the images are in the correct time order, this could be useful if you were using any kind of time lapse photography. We can do that by using dir to map our images and create a structure with information on each file.
jpegFiles = dir(strcat(ImagesFolder,'/*.jpg'));
% Sort by date from the datenum information.
S = [jpegFiles(:).datenum];
[S,S] = sort(S);
jpegFilesS = jpegFiles(S);
% The sub-structures within jpegFilesS is now sorted in ascending time order.
% Notice that datenum is a serial date number, for example, if you would like to get the time difference in hours between two images you need to subtract their datenum values and multiply by 1440.
% Create a VideoWriter object, in order to write video data to an .avi file using a jpeg compression.
VideoFile = strcat(ImagesFolder,'/MyVideo.avi');
writerObj = VideoWriter(VideoFile);
% Define the video frames per second speed (fps)
fps = 1;
writerObj.FrameRate = fps;
% Open file for writing video data
open(writerObj);
% Running over all the files, converting them to movie frames using im2frame and writing the video data to file using writeVideo
for t = 1:length(jpegFilesS)
Frame = imread(strcat(ImagesFolder,'/',jpegFilesS(t).name));
writeVideo(writerObj,im2frame(Frame));
end
% Close the file after writing the video data
close(writerObj);
Edit: You can also use filesep so that the file separator is OS-specific, or fullfile to join the folder and file name with the correct separator. http://www.mathworks.com/help/matlab/ref/filesep.html
It would be simpler to use Windows Movie Maker (Windows) or iMovie (Mac). For your purposes, though, you could use PowerPoint.
I am developing an application in Java. Part of it includes playing an mp3 file via another application. I need to find the duration (total play time) of the mp3 file. How do I do that?
You can do this very easily using JAudioTagger:
java.util.logging.Logger.getLogger("org.jaudiotagger").setLevel(Level.OFF);
AudioFile audioFile = AudioFileIO.read(new File(filePath));
System.out.println("Track length = " + audioFile.getAudioHeader().getTrackLength());
That will print out the track length of the file at filePath. The logger line is to suppress a lot of (probably) unwanted info/debugging logging from JAudioTagger. Besides this, JAudioTagger supports reading all sorts of metadata tags embedded in different audio file types (MP3, MP4, WMA, FLAC, Ogg Vorbis). You can even get MusicBrainz info easily, but I haven't tried that yet. For more info:
http://www.jthink.net/jaudiotagger/examples_read.jsp
You can get the jar files for it here:
http://download.java.net/maven/2/org/jaudiotagger/
For small .mp3 files you can use:
AudioFile audioFile = AudioFileIO.read(MyFileChooser.fileName);
int duration = audioFile.getAudioHeader().getTrackLength();
System.out.println(duration);
This gives the duration in whole seconds as an int, e.g. 196 for a track lasting 3:16.
Note that this method may not be suitable for larger files.
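Since the track length comes back as whole seconds, formatting it as minutes and seconds is a one-liner; a quick sketch, in Python for illustration:

```python
def mmss(seconds):
    # convert a whole-second track length to an m:ss display string
    return f"{seconds // 60}:{seconds % 60:02d}"

print(mmss(196))  # prints 3:16
```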