VLCj: get snapshot at the last millisecond

I'm using VLCj 2.4.1 and want to take a screenshot at the last millisecond of the video. Usually I just do
getMediaPlayer().setTime(someTime);
getMediaPlayer().getSnapshot();
and it works. But if someTime >= getMediaPlayer().getLength() - 120 (I arrived at 120 ms through experimentation), VLCj doesn't respond (the video position stays the same).
If, however, someTime is in the range 0 to getMediaPlayer().getLength() - 120, everything works as expected.
Where is the problem? Why are those ~120 ms "missing"?

I "solved" the problem with a completely different library, Xuggle for Java. It works flawlessly; see DecodeAndCaptureFrames.java.
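If you have to stay with vlcj, one workaround is to clamp the seek target so it never lands inside the ~120 ms tail that setTime() appears to ignore. A minimal sketch; SAFETY_MARGIN_MS is my own constant, not part of the vlcj API:

// Clamp the seek target to just before the unseekable tail.
final long SAFETY_MARGIN_MS = 150; // margin based on the ~120 ms observed above
long length = getMediaPlayer().getLength();
getMediaPlayer().setTime(Math.min(someTime, length - SAFETY_MARGIN_MS));
getMediaPlayer().getSnapshot();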

Related

"Select jobs to execute..." runs literally forever

I have a rather complex workflow with 750 samples and roughly 18,000 jobs. At first Snakemake runs just fine, but after around 4,000 jobs it suddenly freezes, and upon restart it hung at "Select jobs to execute..." for 24 hours, after which I terminated it. The initial DAG building takes roughly 2-3 minutes, though.
When I run snakemake (v5.32.0 and v5.32.1) with the --verbose option, I get tons of lines similar to this one:
Cbc0010I After 600 nodes, 304 on tree, -52534.791 best solution, best possible -52538.194 (7.08 seconds
I tried deleting the .snakemake folder in the hope that something had gone wrong in there, but that wasn't the case, unfortunately. It seems to me that the CBC MILP solver somehow does not converge, and it keeps going and going, trying to bring the best and the best possible solution closer together!?
Now I have no idea how to proceed and fix the problem. Possible solutions would be to change the convergence criteria or the solver itself. In the manual I found the option --scheduler-ilp-solver, but it apparently has only one choice, the default COIN_CMD.
After terminating a (shorter) run, I get this verbose output:
Result - User ctrl-c
Objective value: 52534.79114334
Upper bound: 52538.202
Gap: -0.00
Enumerated nodes: 186926
Total iterations: 1807277
Time (CPU seconds): 1181.97
Time (Wallclock seconds): 1188.11
Next I will try limiting the number of samples in the workflow and see if this has any impact. (For other datasets with 500 samples it ran without any problems with Snakemake version 5.24, but there the DAG building took some hours; hence I am not very eager to try the old version.)
So, any idea how to fix this is highly appreciated. I do not even know whether this is a bug.
EDIT: Actually, I believe it is a bug in the current version. I downgraded Snakemake back to version 5.24; it created the DAG within 10 minutes and started running the pipeline. So apparently there is some bug in the latest version. I will make this an answer to my own question, since downgrading to an older version solved the problem.
I also ran into this issue with a smaller workflow (~1,500 jobs total) and Snakemake version 6.0.2. About half the jobs had run when the workflow got stuck and refused to run any more. It looks like a problem specific to the ILP solver, because when I re-ran with --scheduler greedy, it worked fine.
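For reference, switching schedulers is a single flag on the command line (a sketch; the core count is a placeholder for your own setting):

snakemake --cores 16 --scheduler greedy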

Xcode 12 not displaying all search results

When I search for something in the Project navigator, some results are missing from the list. If I delete my query and type it again, it starts working again, but only in some cases.
What is this issue and how can I get around it?
UPDATE: This issue has been fixed in Xcode 12.0.1.
Original answer:
This is a bug in Xcode 12 caused by typing too fast. (Yeah, I know...)
Essentially, what happens is that Xcode starts fetching the list, but if you type the second character before the first query finishes (or the third before the second finishes, and so on), you'll get a list with missing items. What likely happens is that Xcode tries to filter the previous list to save time instead of querying all files again, and in this case it filters the incomplete list instead of waiting for the query to finish.
There are two easy ways to get around this issue for now:
Use the Open Quickly menu instead (⌘⇧O, Cmd+Shift+O).
Type slower. (No joke, this is what I usually do.)
Hopefully this will get fixed soon.

Rcpp in RStudio: can't cache in memory when running in parallel unless I open the cpp file in RStudio

I ran into a weird problem, but I wonder if I'm asking the right question:
result = parLapply(cl, 1:4,
  function(j, rho_list_needed, delta0_needed, V_iter_s, Sigma_list_needed) {
    rhoj = rho_list_needed[[j]]
    delta0_in_cpp = delta0_needed
    v = as.vector(V_iter_s[, , , j])
    sigmaj = Sigma_list_needed[[j]]
    sourceCpp('sample_Z.cpp')  # first compile is slow, then cached
    return(Sample_Z(rhoj, delta0_in_cpp, v, sigmaj, A, Cmatrix))
  },
  rho_list_needed, delta0_needed, V_iter[[s]], Sigma_list_needed)
When I was testing my sample_Z.cpp in parallel through parLapply, a single calculation takes around 1 second. In parallel, my 4 iterations take around 1.2 seconds, which is a big improvement over the unparallelized version's 8 seconds.
There was no problem at all when I ran my program yesterday. Just now I noticed a bug and revised my program. To give my PC a fresh environment, I restarted my computer. When I started to run my program, I only opened the .R file and ran it. But the parallel part took 9 seconds, where it used to take 1.2. The 9 seconds were measured after warming up my cores, i.e., I had already sourced the cpp file before timing it.
I just don't know where the bug is. I then tried to source the cpp file directly in my global environment, and found that there was no caching at all: the second run took the same time as the first.
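(A quick way to check the cache, as a sketch: time two consecutive calls; with a warm cache the second one should return almost immediately.)

library(Rcpp)
system.time(sourceCpp('sample_Z.cpp'))  # first call: full compile
system.time(sourceCpp('sample_Z.cpp'))  # unchanged file: near-instant when cached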
But then I accidentally opened sample_Z.cpp in RStudio, explicitly in the editor. After that, everything worked correctly.
I don't know what keywords to search for this problem with, and I don't know whether opening the cpp file is a must; I had never known it to be before.
Can anyone tell me what the real issue is? Thanks!
After restarting your PC, you probably had extra processes running that competed for CPU cores and slowed down your algorithm. The fact that you're rebooting suggests to me you're not using Linux... but if you are, watch with top while starting your code, or the equivalent for your platform.
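If the slowdown really is recompilation rather than CPU contention, one way to rule it out is to compile once per worker up front and keep sourceCpp() out of the timed function. A minimal sketch, reusing cl and sample_Z.cpp from the question:

library(parallel)
library(Rcpp)
cl = makeCluster(4)
# Compile once on each worker; later calls use the already-loaded
# Sample_Z() instead of re-running sourceCpp() inside parLapply().
clusterEvalQ(cl, Rcpp::sourceCpp('sample_Z.cpp'))
# ... run the parLapply() call from the question, without sourceCpp() ...
stopCluster(cl)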

ffmpeg: how to get start_time and start_pts to 0

ffprobe showed that one of our videos has an incorrect start time of 2.1 (it has to be 0).
Does anybody know how to get start_pts and start_time to 0?
I have been searching for hours and have not yet found a way to force them to become 0.
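Two things worth trying (a sketch, not verified against your file; input.mp4 and output.mp4 are placeholder names):

# shift timestamps so the output starts at 0, without re-encoding
ffmpeg -i input.mp4 -c copy -avoid_negative_ts make_zero output.mp4
# or reset PTS explicitly via filters (re-encodes both streams)
ffmpeg -i input.mp4 -vf setpts=PTS-STARTPTS -af asetpts=PTS-STARTPTS output.mp4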

How do I avoid follow-mode's large text spillovers into the next window when the text size is decreased?

When I set text-scale-mode-amount to -2, i.e., C-x C-- C--, and use follow-mode alongside, I get an annoying large spillover of text (about 20 lines) into the next window, which almost defeats the main purpose of using follow-mode. The spillover grows as text-scale-mode-amount is decreased further (i.e., as the text gets smaller).
Any solutions to this?
Update 1:
Just tested this on my Emacs running on Ubuntu (Linux), and nothing of the sort happens. No spillovers. It's the Windows Emacs that is causing the problem.
Update 2:
It's a bug that occurs in GNU Emacs 24.2.1 (i386-mingw-nt6.1.7600) of 2012-08-29 on MARVIN, and it seems to have been fixed in Emacs 24.3, since I am no longer facing the problem after upgrading. Therefore this question is specific to the above-mentioned version of Emacs and to others facing the same issue.
Simple solution:
As phils suggested, upgrade Emacs.
Not so simple solution:
Replace Emacs_installation_dir/lisp/follow.el with this and delete follow.elc in the same directory. Or force Emacs to use the patched file instead of the built-in one (overriding it), as sketched below.
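A minimal sketch of that override for your init file, assuming the patched follow.el lives in ~/emacs-patches/ (the path is a placeholder):

;; Put the patched directory ahead of the built-in lisp directory so
;; (require 'follow) picks up the patched copy instead of the bundled one.
(add-to-list 'load-path "~/emacs-patches/")
(require 'follow)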
I think it's a bug in follow-mode, which doesn't really pay attention to the actual text displayed. I'd recommend M-x report-emacs-bug to get a bug number for the issue.
