ffmpeg-python: why won't concat work after trim? - ffmpeg

I want to split the video, do some processing on each segment, and finally merge the segments back together.
import ffmpeg
info = ffmpeg.probe("test.mp4")
vs = next(c for c in info['streams'] if c['codec_type'] == 'video')
num_frames = vs['nb_frames']
arr = []
in_file = ffmpeg.input('test.mp4')
for i in range(int(int(num_frames) / 30) + 1):
    startTime = i * 30 + 1
    endTime = (1 + i) * 30
    if endTime >= int(num_frames):
        endTime = int(num_frames)
    # ... per-segment processing goes here ...
    arr.append(in_file.trim(start_frame=startTime, end_frame=endTime))
(
    ffmpeg
    .concat(arr)
    .output('out.mp4')
    .run()
)
I don't understand why I'm getting this error:
TypeError: Expected incoming stream(s) to be of one of the following types: ffmpeg.nodes.FilterableStream; got <class 'list'>

Perhaps this is a little late, but you could try
.concat(*arr)
This worked for me with a list of defined start and end frames.
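For reference, here is a minimal corrected sketch of the script from the question (same hypothetical test.mp4 input). The only functional change is unpacking the list with *, since ffmpeg.concat expects each stream as a separate argument rather than a single list:

import ffmpeg

# Probe the input to get the video stream's frame count.
info = ffmpeg.probe('test.mp4')
vs = next(s for s in info['streams'] if s['codec_type'] == 'video')
num_frames = int(vs['nb_frames'])

in_file = ffmpeg.input('test.mp4')
segments = []
for i in range(num_frames // 30 + 1):
    start = i * 30 + 1
    end = min((i + 1) * 30, num_frames)
    if start > end:
        break  # frame count was an exact multiple of 30
    # ... per-segment processing would go here ...
    segments.append(in_file.trim(start_frame=start, end_frame=end))

# Unpack the list: concat(*segments), not concat(segments).
ffmpeg.concat(*segments).output('out.mp4').run()

Depending on the input, you may also need to reset each segment's timestamps (for example with .setpts('PTS-STARTPTS') after each trim) so the concatenated output plays back smoothly.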

Related

TypeError: unsupported operand type(s) for +=: 'datetime.timedelta' and 'NoneType'

@property
def total_duration_of_videos(self):
    videos_qs = self.videos.all()
    total_duration = datetime.timedelta(0, 0, 0)
    for video in videos_qs:
        total_duration += video.total_time
    hours, remainder = divmod(total_duration.seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return str(hours) + ":" + str(minutes) + ":" + str(seconds)
This is what I've written; I wasn't able to find a solution.
For some video in videos_qs, video.total_time is returning None. That's why you are getting the error.
To resolve the error, change the total_duration += video.total_time line to total_duration += (video.total_time or datetime.timedelta(0, 0, 0)), as follows.
@property
def total_duration_of_videos(self):
    videos_qs = self.videos.all()
    total_duration = datetime.timedelta(0, 0, 0)
    for video in videos_qs:
        # None is falsy, so a missing duration counts as zero.
        total_duration += (video.total_time or datetime.timedelta(0, 0, 0))
    hours, remainder = divmod(total_duration.seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return str(hours) + ":" + str(minutes) + ":" + str(seconds)
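A quick standalone illustration of the fallback (hypothetical values, not from the question):

import datetime

durations = [datetime.timedelta(minutes=5), None, datetime.timedelta(seconds=90)]
total = datetime.timedelta(0)
for t in durations:
    total += (t or datetime.timedelta(0))  # None is falsy, so it contributes nothing
print(total)  # 0:06:30

One caveat: timedelta.seconds wraps at 24 hours, so if the combined duration can exceed a day, total_duration.total_seconds() is the safer starting point for the divmod arithmetic.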

Why is my code throwing a binary operator error on the second + sign for the counterLabel.text variable?

This code throws the error "Binary Operator '+' cannot be applied to operands of type 'String' and 'Double'" on the second + sign of the counterLabel.text assignment. The error only appears there; if I delete everything after minutesLabel in counterLabel.text, the error goes away. (This is written in Swift.) Also, sorry for any formatting confusion; I'm a first-time user.
func updateCounter(timer: NSTimer) {
    let hours = floor(stopWatchTime / pow(60, 2))
    let hoursInSeconds = hours * pow(60, 2)
    let minutes = floor((stopWatchTime - hoursInSeconds) / 60)
    let minutesInSeconds = minutes * 60
    let seconds = floor((stopWatchTime - hoursInSeconds - minutesInSeconds) / 60)
    let secondsInCentiseconds = seconds * 100
    let centiseconds = stopWatchTime - hoursInSeconds - minutesInSeconds - secondsInCentiseconds
    let hoursLabel = String(format: "%02.0f:", hours)
    let minutesLabel = String(format: "%02.0f:", minutes)
    let secondsLabel = String(format: "%02.0f:", seconds)
    let centisecondsLabel = String(format: "%02.0f", centiseconds)
    counterLabel.text = hoursLabel + minutesLabel + secondsLabel + centiseconds
    stopWatchTime = stopWatchTime + 1
}
You used centiseconds instead of centisecondsLabel: centiseconds is a Double, not a String, while centisecondsLabel is a String. That is why you got the binary operator error.
Use the line below:
counterLabel.text = hoursLabel + minutesLabel + secondsLabel + centisecondsLabel
That will fix the problem!
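As an aside, the same constraint exists in most languages; dynamically typed ones just report it at runtime instead of compile time. A minimal Python comparison (hypothetical values):

label = "elapsed: "
seconds = 1.5
# label + seconds            # TypeError: can only concatenate str (not "float") to str
text = label + str(seconds)  # convert the number to a string first
print(text)  # elapsed: 1.5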

YouMax video upload date

I've been using the YouMax plugin, which enables you to embed your YouTube channel on your website. However, I'm having a problem: it displays the upload date only in months and years. I'd like it to display days, weeks, months, and years.
You can view the source code here http://jsfiddle.net/wCKKU/
I believe this is the part that needs adjusting to make it calculate in days, weeks, months, and years:
function getDateDiff(timestamp) {
    if (null == timestamp || timestamp == "" || timestamp == "undefined") return "?";
    var splitDate = ((timestamp.toString().split('T'))[0]).split('-');
    var d1 = new Date();
    var d1Y = d1.getFullYear();
    var d2Y = parseInt(splitDate[0], 10);
    var d1M = d1.getMonth();
    var d2M = parseInt(splitDate[1], 10);
    var diffInMonths = (d1M + 12 * d1Y) - (d2M + 12 * d2Y);
    if (diffInMonths <= 1) return "1 month";
    else if (diffInMonths < 12) return diffInMonths + " months";
    var diffInYears = Math.floor(diffInMonths / 12);
    if (diffInYears <= 1) return "1 year";
    else if (diffInYears < 12) return diffInYears + " years";
}
You could modify the plugin by inserting a small block of code in the middle of the function:
var d2M = parseInt(splitDate[1], 10); // this line is already there
var d1D = d1.getDate();
var d2D = parseInt(splitDate[2], 10);
// Approximate each date as a day count (30-day months), then take the difference.
var diffInDays = (d1D + 30 * (d1M + 12 * d1Y)) - (d2D + 30 * (d2M + 12 * d2Y));
if (diffInDays < 2) return "1 day";
else if (diffInDays < 7) return diffInDays + " days";
else if (diffInDays < 14) return "1 week";
else if (diffInDays < 30) return Math.floor(diffInDays / 7) + " weeks";
var diffInMonths = (d1M + 12 * d1Y) - (d2M + 12 * d2Y); // this line is already there
Note that this isn't a particularly elegant way to handle the issue, but it matches the coding style the plugin is already using, and at least won't break anything else.
Also, as a side comment, if you're modifying the plugin code you'll want to fix a bug in it at the same time. Getting the current month should look like this:
var d1M = d1.getMonth() + 1;
This is because in JavaScript the getMonth() function returns the month as a zero-based index, and the math won't be reliable unless you switch it to a one-based index.
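If you ever need this "time ago" bucketing outside the plugin, doing the arithmetic on real date objects avoids the 30-day-month approximation entirely. Here is a rough sketch of the equivalent logic (in Python, as an aside; the thresholds mirror the plugin's buckets):

from datetime import date

def date_diff_label(upload, today):
    days = (today - upload).days
    if days < 2:
        return "1 day"
    if days < 7:
        return str(days) + " days"
    if days < 30:
        weeks = days // 7
        return "1 week" if weeks == 1 else str(weeks) + " weeks"
    months = (today.year - upload.year) * 12 + (today.month - upload.month)
    if months < 12:
        return "1 month" if months <= 1 else str(months) + " months"
    years = months // 12
    return "1 year" if years == 1 else str(years) + " years"

print(date_diff_label(date(2024, 1, 1), date(2024, 1, 20)))  # 2 weeks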

R: tm Textmining package: Doc-Level metadata generation is slow

I have a list of documents to process, and for each record I want to attach some metadata to the document "member" inside the "corpus" data structure that tm, the R package, generates (from reading in text files).
This for-loop works, but it is very slow; throughput seems to degrade roughly as 1/n_docs, i.e. each assignment gets slower as the corpus grows.
for (i in seq(from = 1, to = length(corpus), by = 1)) {
    if (opts$options$verbose == TRUE || i %% 50 == 0) {
        print(paste(i, " ", substr(corpus[[i]], 1, 140), sep = " "))
    }
    DublinCore(corpus[[i]], "title") = csv[[i, 10]]
    DublinCore(corpus[[i]], "Publisher") = csv[[i, 16]]  # institutions
}
This may be doing something to the corpus variable, but I don't know what.
When I put the same work inside tm_map() (similar to the lapply() function), it runs much faster, but the changes are not made persistent:
i = 0
corpus = tm_map(corpus, function(x) {
    i <<- i + 1
    if (opts$options$verbose == TRUE) {
        print(paste(i, " ", substr(x, 1, 140), sep = " "))
    }
    meta(x, tag = "Heading") = csv[[i, 10]]
    meta(x, tag = "publisher") = csv[[i, 16]]
})
The corpus variable has empty metadata fields after tm_map returns, when they should be filled. I still have a few other things to do with the collection.
The R documentation for the meta() function says this:
Examples:
data("crude")
meta(crude[[1]])
DublinCore(crude[[1]])
meta(crude[[1]], tag = "Topics")
meta(crude[[1]], tag = "Comment") <- "A short comment."
meta(crude[[1]], tag = "Topics") <- NULL
DublinCore(crude[[1]], tag = "creator") <- "Ano Nymous"
DublinCore(crude[[1]], tag = "Format") <- "XML"
DublinCore(crude[[1]])
meta(crude[[1]])
meta(crude)
meta(crude, type = "corpus")
meta(crude, "labels") <- 21:40
meta(crude)
I tried many of these calls (with my variable corpus in place of crude), but they do not seem to work.
Someone else appears to have had the same problem with a similar data set (a forum post from 2009, with no response).
Here's a bit of benchmarking...
With the for loop:
expr.for <- function() {
    for (i in seq(from = 1, to = length(corpus), by = 1)) {
        DublinCore(corpus[[i]], "title") = LETTERS[round(runif(26))]
        DublinCore(corpus[[i]], "Publisher") = LETTERS[round(runif(26))]
    }
}
microbenchmark(expr.for())
# Unit: milliseconds
#         expr      min       lq   median       uq      max
# 1 expr.for() 21.50504 22.40111 23.56246 23.90446 70.12398
With tm_map:
corpus <- crude
expr.map <- function() {
    tm_map(corpus, function(x) {
        meta(x, "title") = LETTERS[round(runif(26))]
        meta(x, "Publisher") = LETTERS[round(runif(26))]
        x
    })
}
microbenchmark(expr.map())
# Unit: milliseconds
#         expr      min       lq   median       uq      max
# 1 expr.map() 5.575842 5.700616 5.796284 5.886589 8.753482
So the tm_map version, as you noticed, seems to be about 4 times faster.
In your question you say that the changes in the tm_map version are not persistent; that is because you don't return x at the end of your anonymous function. In the end it should be:
meta(x, tag = "Heading") = csv[[i,10]]
meta(x, tag = "publisher" ) = csv[[i,16]]
x
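The underlying rule is the same in any map-style API: the new collection is built from the callback's return values, so in-place edits that are not returned are discarded. A tiny illustration of the pattern (in Python, as an aside; the dict records are hypothetical):

def tag(doc):
    doc["publisher"] = "Example Press"
    return doc  # without this line, map() would collect None for every document

docs = [{"title": "a"}, {"title": "b"}]
docs = list(map(tag, docs))  # each element is now the tagged dict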

How to read large matrix from a csv efficiently in Octave

There are many reports of slow performance with Octave's dlmread. I was hoping this had been fixed in 3.2.4, but when I tried to load a CSV file of about 8 columns by 4 million rows (roughly 32 million values in total), it also took a very, very long time. I searched the web but could not find a workaround for this. Does anybody know a good one?
I experienced the same problem and had R handy, so my solution was to use "read.csv" in R, and then use the R package "R.matlab" to write a ".mat" file, and then load that in Octave.
"read.csv" can be pretty slow too, but this worked very well in my case.
The reason is that Octave handles growing matrices poorly: appending data to a very large matrix takes much more time than appending the same amount of data to a small matrix, since each append copies the whole matrix.
Below is my attempt. I chose to save the data every 50000 lines, so that I could already take a look at partial results instead of being forced to wait until the end. It is slower for small files, but much faster for large ones.
function alldata = load_data(filename)
    fid = fopen(filename, 'r');
    s = 0;
    data = [];
    alldata = [];
    save "temp.mat" alldata;
    if fid == -1
        printf("Couldn't find file %s\n", filename);
    else
        while (~feof(fid))
            line = fgetl(fid);
            # Read the time as hh:mm:ss:ms and the value as a float.
            [t1, t2, t3, t4, d] = sscanf(line, '%i:%i:%i:%i %f', "C");
            s++;
            t = (t1 * 3600000 + t2 * 60000 + t3 * 1000 + t4);  # time in ms
            data = [data; t, d];
            if (mod(s, 10000) == 0)
                disp(s);
                fflush(stdout);
            end
            # Every 50000 lines, merge the small working matrix into the
            # accumulated result on disk; this keeps the in-memory appends
            # cheap and lets you inspect partial results as they arrive.
            if (mod(s, 50000) == 0)
                load "temp.mat";
                alldata = [alldata; data];
                data = [];
                save "temp.mat" alldata;
                disp("data saved");
                fflush(stdout);
            end
        end
        disp(s);
        load "temp.mat";
        alldata = [alldata; data];
        save "temp.mat" alldata;
        disp("data saved");
        fflush(stdout);
        fclose(fid);
    end
endfunction
Here is a workaround that I am using.
I could not get sscanf to parse input lines as indicated above, and I didn't use the temp file.
My .csv files have a large number of rows: an 18-line header followed by a data block whose rows each have 135 columns. Each row begins with a dd/mm/yyyy hh:mm field. The following code has been tested; it also catches bad lines and reports where they are, using try/catch.
My .csv file came from a customer who dumped his PARCView load in an Excel file.
function [tags, descr, alldata] = fbcsvread(filename)
    fid = fopen(filename, 'r');
    s = 0;
    data = [];
    alldata = zeros(1, 135);
    if fid == -1
        printf("Couldn't find file %s\n", filename);
    else
        linecount = 1;
        while (~feof(fid))
            line = fgetl(fid);
            data2 = zeros(1, 135);
            if linecount == 1
                tags = strsplit(line, ",");
            elseif linecount == 2
                descr = strsplit(line, ",");
            elseif linecount >= 19
                data = strsplit(line, ",");
                # The first field is a date/time stamp; convert it to a datenum.
                datetime = strsplit(char(data(1)), " ");
                modyyr = strsplit(char(datetime(1)), "/");
                hrmin = strsplit(char(datetime(2)), ":");
                year1 = sscanf(char(modyyr(3)), "%d", "C");
                day1 = sscanf(char(modyyr(2)), "%d", "C");
                month1 = sscanf(char(modyyr(1)), "%d", "C");
                hour1 = sscanf(char(hrmin(1)), "%d", "C");
                minute1 = sscanf(char(hrmin(2)), "%d", "C");
                realtime = datenum(year1, month1, day1, hour1, minute1);
                data2(1) = realtime;
                for location = 2:134
                    try
                        data2(location) = sscanf(char(data(location)), "%f", "C");
                    catch
                        printf("Error at %s %s\n", char(datetime(1)), char(datetime(2)));
                        fflush(stdout);
                    end_try_catch
                endfor
                alldata(linecount - 18, :) = data2;
                # Progress marker every 50 lines.
                if mod(linecount, 50) == 0
                    printf(".");
                    fflush(stdout);
                endif
            endif
            linecount = linecount + 1;
        endwhile
        fclose(fid);
    endif
endfunction
