Usually logo detection means two things: finding the logo and recognizing it. Some common works do the two steps together using SIFT/SURF matching, as detailed in:
(1) Logo recognition in images
(2) Logo detection using OpenCV
But if the logo is tiny and blurry, the results are poor and the matching is rather time-consuming. I want to split the two steps: first find where the logo is in the video, then recognize it using template matching or another method, like:
(3) Logo recognition - how to improve performance
(4) OpenCV logo recognition
My problem is mainly focused on automatically finding the logo in a video. I tried two methods:
Brightness method. The logo on a TV screen is usually present the whole time the show is on. I select a list of frames at random and compute differences between them; in the logo area the difference tends to be 0. I then do some statistics on the zero-difference pixels with a threshold to decide whether a pixel belongs to the logo (a rough sketch of this is shown after the two methods). This method usually does well but fails when the show has a static background.
Edge method. Similarly, if the logo is there, its border tends to be obvious. I do the same statistical work as in the brightness method, but the edges are sometimes unstable, for example against a very bright background.
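For concreteness, here is a minimal sketch of the brightness/difference method described above, using OpenCV and NumPy; the video path, the number of sampled frames and the thresholds are all placeholders you would have to tune.

```python
import cv2
import numpy as np

# Minimal sketch of the "brightness method": sample random frames,
# accumulate absolute frame differences, and treat pixels that almost
# never change as logo candidates. All thresholds are placeholders.
cap = cv2.VideoCapture("show.mp4")              # hypothetical input video
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
indices = np.random.choice(total, size=50, replace=False)

frames = []
for i in sorted(indices):
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
    ok, frame = cap.read()
    if ok:
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
cap.release()

# Mean absolute difference of each pixel across the sampled frames.
stack = np.stack(frames)
mad = np.abs(stack - stack.mean(axis=0)).mean(axis=0)

# Pixels that barely change are logo candidates (this fails on static scenes).
candidate_mask = (mad < 5.0).astype(np.uint8) * 255
candidate_mask = cv2.morphologyEx(candidate_mask, cv2.MORPH_OPEN,
                                  np.ones((5, 5), np.uint8))
cv2.imwrite("logo_candidates.png", candidate_mask)
```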
Are there any suggestions or state-of-the-art methods for automatically finding logo areas, or any other logo recognition methods besides SIFT or template matching?
Let's assume your list of logos is known beforehand and you have access to examples (video streams/frames) of all the logos.
The 2017 answer to your question is to train a logo classifier, most likely a deep neural network.
With sufficient training data, if a logo is identifiable to TV viewers, the network will be able to detect it. It will also handle local blurring and intensity changes (which may thwart "classic" image-processing methods based on brightness and edges).
OpenCV can load and run network models from multiple frameworks like Caffe, Torch and TensorFlow, so you can use one of their pre-trained models or train one yourself.
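As a hedged illustration (not any particular published model), loading and running such a network through OpenCV's dnn module might look roughly like this; the model files, input size and mean values are placeholders for whatever classifier you actually train:

```python
import cv2
import numpy as np

# Sketch of running a (hypothetical) pre-trained logo classifier through
# OpenCV's dnn module. The .prototxt/.caffemodel names, the input size
# and the mean values are placeholders, not a real logo model.
net = cv2.dnn.readNetFromCaffe("logo_net.prototxt", "logo_net.caffemodel")

frame = cv2.imread("frame.png")
blob = cv2.dnn.blobFromImage(frame, scalefactor=1.0, size=(224, 224),
                             mean=(104, 117, 123))
net.setInput(blob)
scores = net.forward()                 # class scores, one per known logo
print("predicted logo id:", int(np.argmax(scores)))
```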
You could also try TensorFlow's Object Detection API: https://github.com/tensorflow/models/tree/master/research/object_detection
The good thing about this API is that it contains state-of-the-art models for object detection and classification. The models TensorFlow provides are free to train, and some of them promise quite astonishing results. I have already trained a model for the company I work for that does quite an amazing job at logo detection in images and video streams. You can check out more about my work here: https://github.com/kochlisGit/LogoLens
The problem with TV is that the logos will probably not be static and will move across frames. This results in motion blur, which may confuse your classifier or make it miss the logos. However, once you find a logo you can use an object tracking algorithm to keep track of it (e.g. DeepSORT).
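DeepSORT needs a separate package and an appearance model, so as a simpler hedged sketch, here is how you might follow a detected logo box with one of OpenCV's built-in trackers (requires opencv-contrib-python; the video path and the detection box are placeholders, and on some OpenCV versions the tracker constructors live under cv2.legacy):

```python
import cv2

# Simpler alternative sketch to DeepSORT: follow a detected logo box with
# one of OpenCV's built-in trackers (requires opencv-contrib-python).
cap = cv2.VideoCapture("show.mp4")       # hypothetical input video
ok, frame = cap.read()

# Suppose the detector returned this box (x, y, w, h) for the logo.
logo_box = (20, 30, 80, 40)              # placeholder detection
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, logo_box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("logo tracking", frame)
    if cv2.waitKey(1) == 27:             # Esc to quit
        break
cap.release()
```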
Can someone tell me how (or the name of it, so that I could look it up) I can implement this interpolation effect? https://www.youtube.com/watch?v=36lE9tV9vm0&t=3010s&frags=pl%2Cwn
I tried using r = r+dr, g = g+dg and b = b+db for the RGB values in each iteration, but it looks way too simple compared to the effect in the video.
"Can someone tell me how I can implement this interpolation effect?
(or the name of it, so that I could look it up)..."
It's not actually a named interpolation effect. It appears to interpolate, but really it's just real-time updated variations of some fictional facial "features" (the hair, eyes, nose, etc. are synthesized pixels taking hints from a library/database of possible matching feature types).
For this technique they used neural networks to do a process similar to DFT image reconstruction. You'll be modifying the image data in the frequency domain (with u,v), not the spatial domain (using x,y).
You can read about it at this PDF: https://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf
The (Python) source code:
https://github.com/tkarras/progressive_growing_of_gans
For ideas, on YouTube you can look up:
DFT image reconstruction (there's a good example with b/w Nicholas Cage photo reconstructed in stages. Loud music warning).
Image synthesis with neural networks (one clip showed alternative shoe and handbag designs (item photos) being "synthesized" by a neural network after it analyzed features from other existing catalogue photos as "inspiration").
Image enhancement / super-resolution using neural networks (this method is closest to answering your question. One example has a very low-res, blurry, pixelated image in b/w where you cannot tell if it's a boy or a girl. During a test, the network synthesizes various higher-quality face images that it thinks are the correct match for the test input).
After understanding what they achieve and how, you could think of shortcuts to get a similar effect without needing networks, e.g. only using regular pixel-editing functions.
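For example, here is a minimal NumPy/OpenCV sketch of the "DFT image reconstruction in stages" idea mentioned above, with no networks involved; the input path and the radii are placeholders:

```python
import cv2
import numpy as np

# Sketch of staged DFT reconstruction: keep only the lowest frequencies
# first, then progressively add higher ones back. Input path is a placeholder.
img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
spectrum = np.fft.fftshift(np.fft.fft2(img))

h, w = img.shape
cy, cx = h // 2, w // 2
for stage, radius in enumerate([4, 16, 64, max(h, w)]):
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2   # low-pass disc
    partial = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    cv2.imwrite(f"reconstruction_stage_{stage}.png",
                np.clip(partial, 0, 255).astype(np.uint8))
```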
I found it in another video; it is called "latent space interpolation" and it has to be applied to the encoded (compressed) images. If I have image A and the next image is image B, I first have to encode A and B, interpolate on the encoded data, and finally decode the resulting image.
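A minimal sketch of that encode-interpolate-decode loop; the encode and decode functions here are placeholders for whatever autoencoder or GAN you actually use:

```python
import numpy as np

# Sketch of latent-space interpolation. encode() and decode() are
# placeholders for your actual model (e.g. an autoencoder or GAN).
def interpolate_images(image_a, image_b, encode, decode, steps=30):
    z_a = encode(image_a)                 # latent vector for image A
    z_b = encode(image_b)                 # latent vector for image B
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b     # linear interpolation in latent space
        frames.append(decode(z))          # decode back to an image
    return frames
```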
As of today, I have found that this kind of interpolation effect can also be implemented easily for 3D image data, provided the data is available in a normalized form centred at the 3D origin, for example inside a unit sphere around the origin with the data of each face image inside that sphere. With the data of two images stored this way, the interpolation can be calculated by taking the differences of rays going through the origin and through each area of the sphere at some desired resolution.
I am thinking of using the OpenCV library for image analysis. Basically, I want to automate the extraction of the label image from a wine bottle in my project.
This is the sample input image:
This is the sample output:
I am thinking what should be my general strategy to extract the image. I am not asking for direct code. Just want to know the general approach to solve the problem.
Thanks!
Sorry for the vague answer, but in applied computer vision there is no such thing as a general approach.
Some will disagree, of course, but in reality all CV applications are custom made for some specific purpose/task.
In your case the idea is to find a cylindrical and probably upright object (the bottle) and then find the irregular parts in it.
I would do it like this (a rough OpenCV sketch follows the notes below):
1. Remove noise as much as possible (smoothing/sharpening filters).
2. (Optionally) reduce the image data (via (i)FT or (i)DCT, for example).
3. Segment objects (usually by homogeneity of colour, by edge detection, or by both).
4. Identify the bottle object (by colour, shape, or illumination (glass is transparent)).
5. Identify objects inside the bottle (homogeneous, not transparent, usually sharp edges; colour alone is not reliable, as some labels are black on dark glass).
6. (Optional) project the label back from cylindrical space to a flat texture.
[notes]
Create an app with many scrollbars and checkboxes so you can change all the thresholds and enable/disable filters (or reorder them) at runtime.
All parts will take a lot of tweaking of thresholds and weights; you will have to do a lot of trial-and-error runs to find the best filters and their configuration for your task.
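Here is a very rough OpenCV sketch of steps 1, 3 and 5, assuming OpenCV 4.x; the input path and all thresholds are placeholders you would tune (e.g. via cv2.createTrackbar sliders, as suggested in the notes):

```python
import cv2
import numpy as np

# Rough sketch of steps 1, 3 and 5: denoise, segment by edges/contours,
# and keep the largest region as the label candidate. All thresholds are
# placeholders meant to be tuned interactively.
img = cv2.imread("bottle.jpg")                       # hypothetical input
blur = cv2.GaussianBlur(img, (5, 5), 0)              # step 1: remove noise

gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                     # step 3: edge-based segmentation
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
# Step 5: pick the largest contour as the label candidate and crop it.
best = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(best)
cv2.imwrite("label_candidate.jpg", img[y:y + h, x:x + w])
```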
Assuming a simple product demo e.g. the one found on http://www.sublimetext.com/
i.e. something that isn't traditional high-res video and could reasonably be accomplished with:
animated gif
video (can be embedded youtube, custom html5 player, whatever is most competitive)
canvas
The question is, which performs better for the user? Both in terms of:
The size of the files the user must download to view the 'product demo'
The requirements in terms of processing power to display the 'product demo'
If you feel that there's a superior technology to accomplish this or another metric to judge its usefulness, let me know and I'll adjust accordingly.
I know it's already answered, but as you specifically referred to the Sublime Text animation I assume you're wanting to create something similar?
If that's the case then here is a post explaining how it was created by the Sublime Text author, himself:
http://www.sublimetext.com/~jps/animated_gifs_the_hard_way.html
The interesting part of the article is how he reduces the file size - which I believe is your question.
With a simple animation such as the one at the link you're referring to, which has a very low frame rate, a simple animated PNG or animated GIF will probably be the best solution.
However, you need to consider the bandwidth factor here. If the final size of the GIF or PNG is large, then a buffered video is probably better.
This is because the whole GIF/PNG file needs to be downloaded before it shows (I am not sure how interlaced PNGs work when they contain animation, though).
A video may be larger in file size, but as it is typically buffered you will be able to show the animation almost right away.
Using external hosts such as YouTube or others can also benefit your site, as the bandwidth is drawn from those sites and not from your server (in case you use a provider that limits or charges for this in various ways).
For more information on animated PNGs or APNG (as this is not so well-known):
https://en.wikipedia.org/wiki/APNG
The canvas in this is only a displaying device and not really necessary (an image container does the same job and can also animate the GIF/PNG whereas a canvas cannot).
If you use a lot of vectors then canvas can be considered.
CSS3 animation is also an option for things such as presentation slides.
I'm thinking of stitching images from 2 or more (currently maybe 3 or 4) cameras in real-time using OpenCV 2.3.1 on Visual Studio 2008.
However, I'm curious about how it is done.
Recently I've studied some techniques of feature-based image stitching method.
Most of them require at least the following steps (a rough OpenCV sketch of these is shown after the list):
1. Feature detection
2. Feature matching
3. Finding the homography
4. Transforming the target images to the reference image
...etc.
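For reference, here is a minimal OpenCV sketch of steps 1-4 for a single pair of frames; ORB is used instead of SURF/SIFT/ASIFT simply because it ships with the core build, and the file names are placeholders:

```python
import cv2
import numpy as np

# Minimal sketch of steps 1-4 for one pair of frames.
ref = cv2.imread("camera_ref.png")        # reference camera frame
tgt = cv2.imread("camera_target.png")     # target camera frame
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
tgt_gray = cv2.cvtColor(tgt, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create(2000)                               # 1. feature detection
kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
kp_tgt, des_tgt = orb.detectAndCompute(tgt_gray, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_tgt, des_ref),        # 2. feature matching
                 key=lambda m: m.distance)[:200]

src = np.float32([kp_tgt[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)     # 3. homography

h, w = ref.shape[:2]                                     # 4. warp target onto reference
panorama = cv2.warpPerspective(tgt, H, (w * 2, h))
panorama[0:h, 0:w] = ref
cv2.imwrite("stitched.png", panorama)
```

For a fixed camera rig you would typically compute the homographies once (or only occasionally) and reuse them for every incoming frame, which is what makes the real-time case more plausible.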
Now most of the techniques I've read only deal with images "ONCE", while I would like it to deal with a series of images captured from a few cameras and I want it to be "REAL-TIME".
This may still sound confusing, so here are the details:
Put 3 cameras at different angles and positions; each of them must have an overlapping area with its neighbour so as to build REAL-TIME video stitching.
What I would like to do is similar to the content in the following link, where ASIFT is used.
http://www.youtube.com/watch?v=a5OK6bwke3I
I tried to consult the owner of that video but I got no reply from him:(.
Can I use image-stitching methods to deal with video stitching?
Video itself is composed of a series of images so I wonder if this is possible.
However, detecting feature points seems to be very time-consuming whatever feature detector (SURF, SIFT, ASIFT, etc.) you use. This makes me doubt whether real-time video stitching is possible.
I have worked on a real-time video stitching system and it is a difficult problem. I can't disclose the full solution we used due to an NDA, but I implemented something similar to the one described in this paper. The biggest problem is coping with objects at different depths (simple homographies are not sufficient); depth disparities must be determined and the video frames appropriately warped so that common features are aligned. This essentially is a stereo vision problem. The images must first be rectified so that common features appear on the same scan line.
You might also be interested in my project from a few years back. It's a program which lets you experiment with different stitching parameters and watch the results in real-time.
Project page - https://github.com/lukeyeager/StitcHD
Demo video - https://youtu.be/mMcrOpVx9aY?t=3m38s
Let's say you are taking a video (with the camera in a steady position) and a bird flies through the view of the camera. It should be possible to do image segmentation and automatically remove this bird from the video.
What are these styles of algorithms called and how are they normally accomplished?
There's a technique called Simple Interactive Object Extraction (SIOX) - it identifies foreground vs. background objects in still and video images. The open-source GIMP editor has an implementation of it, and there's more information about it here.
From the overview:
SIOX stands for Simple Interactive Object Extraction and is a solution for extracting foreground from still images with very little user interaction. SIOX is fast, noise robust, and can therefore also be used for the segmentation of videos. It avoids many of the drawbacks of graph-based segmentation methods but performs about equally well on different benchmarks. SIOX is open and free (Apache License) and the authors have intentionally not patented any part of the technology. As a result, it has been integrated into several open-source image manipulation programs over the past years. SIOX is the underlying algorithm of the foreground extraction tool in the GNU Image Manipulation Program (GIMP) and is part of the tracer tool in Inkscape. SIOX originates from E-Chalk where an instructor standing in front of an electronic chalkboard is segmented. Variants of SIOX are being used for robotic vision and for improving 3D time-of-flight camera segmentation.
Here's a link to the Java Reference Implementation of SIOX.
Here's a link to the PDF with details about how a variation of the algorithm works.
You should be able to adapt it to use inter-frame interpolation to remove a specific foreground object from each frame of a video by using temporal data from surrounding frames.
If the camera is fixed and there isn't too much motion in the scene, then I would suggest a method based on background subtraction.
Step 1: Compute background for each frame of the video. There are complicated algorithms for doing this, but a very simple and effective one would be to compute the median value of every pixel in the image across a 3 second time window. Longer if the object in question is moving slowly. Incidentally, if you just perform this kind of filtering it will remove most moving objects from the video if the camera is fixed, hence my earlier question about all objects vs. one object.
Step 2: Mark the regions you want to remove in each frame with a brush tool, and replace them with the background pixels. Don't bother with a fine brush or lasso tool as any non-object pixels you mark will just be replaced with their filtered version. You could probably use the same brush marks for several frames since the boundary is not so important. If the object is the only thing moving in the scene, you could just mark the entire frame and have it replaced with the background.
Anyway, to answer your more general question, the topic you want to research is called inpainting for images and video. There is quite a bit of literature on the subject; what I described above is just a super simple method you could implement in an hour or so with OpenCV (a rough sketch follows).
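A minimal sketch of that median-background idea with OpenCV/NumPy; the video path, the window length and the mask are placeholders, and the mask would normally come from your brush strokes:

```python
import cv2
import numpy as np

# Sketch of the median-background removal described above. The input path,
# window length and mask are placeholders; the mask would normally come
# from the brush strokes marking the object to remove.
cap = cv2.VideoCapture("fixed_camera.mp4")
frames = []
while len(frames) < 90:                  # ~3 seconds at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Step 1: per-pixel temporal median gives a background without moving objects.
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

# Step 2: replace the marked region of one frame with the background.
frame = frames[len(frames) // 2]
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[100:200, 300:420] = 255             # placeholder "brush" region
result = frame.copy()
result[mask > 0] = background[mask > 0]
cv2.imwrite("bird_removed.png", result)
```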