Theory of Computation | Turing Machine Question - computation-theory

Could anyone help me with this question, please:
Draw the transition diagrams for Standard Turing Machines that accept the languages
below:
[image of the language definitions omitted]

Related

Manipulate Physical Object to Solve a Connection Puzzle (Using Software)

There is a simple problem that teaches wishful thinking. It goes like this.
Connect A to A, B to B, C to C, without lines crossing or going out of the box.
The answer to this puzzle is to employ wishful thinking and imagine the box rearranged like this.
From there the solution follows easily, as shown in this YouTube video.
My question is how to do the "box moving" in software similar to Paint (Paint itself cannot do this): physically dragging the box from the A position to the C position while the lines curve to give way and settle into their new positions. No hand drawings, please; I want physical manipulation of the objects, with the connections giving way and curving around as in the video.
This would be very useful for giving a demonstration to kids.
Is there software available that can do this? Please share.
PC, Android, or web software is preferable.

Explanation - Image segmentation using feed-forward neural network by clustering algorithm

Image segmentation using a feed-forward neural network by a clustering algorithm.
I know the machine learning algorithms (which algorithm does what).
I want to know what the line above implies; an example would be best. How would a person go about this project?
Image segmentation is the act of finding features in an image. Those features can then be analysed by a separate algorithm.
For example, if you had an image of a dog chasing a ball in a park, image segmentation would break up the pixels of the image into features such as the ball, the dog, the sky, ground and any trees. Once you have those features, you can ignore the ones you're not interested in.
Neural networks can be used to perform k-means clustering or edge detection to find your features. A feed-forward network in this case would be a multi-layer perceptron (MLP).
As an aside, I would not use a feed forward network for this task as there are much simpler algorithms that can perform image segmentation. Instead, neural networks usually act on the result of image segmentation.
References: Wikipedia on Image Segmentation
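
Not from the original answer, but here is a minimal sketch of the clustering step it describes, using scikit-learn's KMeans on raw pixel colors instead of a neural network; the file name park.jpg and the choice of k=5 are placeholders.

```python
# Minimal sketch: image segmentation by k-means clustering of pixel colors.
# Uses scikit-learn's KMeans rather than a neural network; "park.jpg" and
# k=5 are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from PIL import Image

img = np.asarray(Image.open("park.jpg").convert("RGB"))
h, w, _ = img.shape

# Treat every pixel as a 3-D point (R, G, B) and cluster into k groups.
pixels = img.reshape(-1, 3).astype(np.float64)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)

# Each pixel's cluster id is a crude "feature" label (sky, grass, dog, ...).
labels = kmeans.labels_.reshape(h, w)

# Replace each pixel with its cluster centre to visualise the segments.
segmented = kmeans.cluster_centers_[kmeans.labels_].reshape(h, w, 3)
Image.fromarray(segmented.astype(np.uint8)).save("segmented.jpg")
```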

Match Sketch(Drawing) face photo to digital color photo

I'm trying to match a sketch of a face (a drawn photo) to a color photo. For my research I want to find out what the challenges are in matching sketch drawings to color faces. So far I have found:
resolution/pixel difference
texture difference
distance difference
color (not much effect)
I want to know (in technical terms) what other challenges there are, and what OpenCV and JavaCV methods and algorithms are available to overcome them.
Here are some examples of the sketches and the photos that are known to match them: [example images omitted]
This problem is called multi-modal face recognition. There has been a lot of interest in comparing a high quality mugshot (modality 1) to low quality surveillance images (modality 2), another is frontal images to profiles, or pictures to sketches like the OP is interested in. Partial Least Squares (PLS) and Tied Factor Analysis (TFA) have been used for this purpose.
A key difficulty is computing two linear projections, one from each modality, into a common space where two points being close means the images show the same individual. This is the key technical step. Here are some papers on this approach; a small code sketch of the shared-projection idea follows the references:
Abhishek Sharma, David W. Jacobs: Bypassing Synthesis: PLS for Face Recognition with Pose, Low-Resolution and Sketch. CVPR 2011.
S. J. D. Prince, J. H. Elder, J. Warrell, F. M. Felisberti: Tied Factor Analysis for Face Recognition across Large Pose Differences. IEEE Trans. Pattern Anal. Mach. Intell., 30(6), 970-984, 2008. Elder is a specialist in this area and has a variety of papers on the topic.
B. Klare, Z. Li, A. K. Jain: Matching Forensic Sketches to Mugshot Photos. IEEE Trans. Pattern Anal. Mach. Intell., 29 Sept. 2010.
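
As an illustration of the shared-projection idea (not taken from any of the papers above), here is a rough sketch using CCA from scikit-learn as a simple stand-in for PLS/TFA; the feature matrices are random placeholders standing in for real paired sketch/photo features.

```python
# Sketch of the "two linear projections" idea using CCA from scikit-learn
# as a simple stand-in for PLS/TFA. X_sketch and X_photo are assumed to be
# paired feature matrices (one row per person, same row order); here they
# are random placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_sketch = rng.normal(size=(100, 64))   # e.g. 64-D features from sketches
X_photo = rng.normal(size=(100, 64))    # matching features from photos

# Learn one projection per modality so paired samples land close together.
cca = CCA(n_components=10)
cca.fit(X_sketch, X_photo)

# Project a probe sketch and the photo gallery into the shared space,
# then match by nearest neighbour (cosine similarity here).
probe, gallery = cca.transform(X_sketch[:1], X_photo)
sims = (gallery @ probe.T).ravel() / (
    np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe) + 1e-9)
print("best matching gallery index:", int(np.argmax(sims)))
```

CCA is not the method of the papers above, just the simplest off-the-shelf way to get a pair of projections; PLS (sklearn's PLSCanonical) would be a drop-in alternative.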
As you can understand, this is an active research area/problem. In terms of using OpenCV to overcome the difficulties, let me give you an analogy: you need to build a house (match sketches to photos) and you're asking how having a Stanley hammer (OpenCV) will help. Sure, it will probably help. But you'll also need a lot of other resources: wood, time/money, pipes, cable, etc.
I think that James Elder's old work on the completeness of the edge map (using reconstruction by solving the Laplace equation) is quite relevant here. See the results at the end of this paper: http://elderlab.yorku.ca/~elder/publications/journals/ElderIJCV99.pdf
You could give Eigenfaces a try; though I have never tested them with sketches, I think they could at least be a good starting point for your research.
See Wiki: http://en.wikipedia.org/wiki/Eigenface and the Tutorial for OpenCV: http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html (including not only Eigenfaces!)
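
If you want to try that, here is a minimal Eigenfaces sketch with OpenCV's face module (this requires the opencv-contrib-python package); the file names and labels are placeholders.

```python
# Minimal Eigenfaces sketch with OpenCV's face module (requires the
# opencv-contrib-python package). File names are placeholders; all images
# must be grayscale and the same size.
import cv2
import numpy as np

train_files = ["photo_01.png", "photo_02.png", "sketch_01.png"]
labels = np.array([0, 0, 1], dtype=np.int32)  # identity label per image

images = [cv2.imread(f, cv2.IMREAD_GRAYSCALE) for f in train_files]

recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.train(images, labels)

probe = cv2.imread("probe_sketch.png", cv2.IMREAD_GRAYSCALE)
label, distance = recognizer.predict(probe)  # lower distance = closer match
print("predicted identity:", label, "distance:", distance)
```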
OpenCV can be used for the feature extraction and machine learning this task requires. I suggest you start from the papers in the answers above, try some basic features, and prototype a classifier with OpenCV.
I guess you might also want to detect and match feature points on the faces. If you use this approach, you will have to build the feature point detectors on your own (training the Viola-Jones detector in OpenCV with your own data is an option).
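
For the Viola-Jones option, a minimal detection sketch with OpenCV's pre-trained frontal-face cascade might look like this; training a cascade on your own data is a separate offline step (the opencv_traincascade tool), and input.jpg is a placeholder.

```python
# Sketch: running OpenCV's pre-trained Viola-Jones face detector.
# "input.jpg" is a placeholder.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns a list of (x, y, w, h) boxes, one per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```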

Open/closed hand recognition: a simple method

I'm searching for a simple method to recognize whether a hand is open or closed.
I'm using C# and EmguCV, but this is not significant in this context. I only need pseudo-code that describes what I need to do.
The input image to this algorithm is a binary image (I've already implemented the segmentation process) that represents the hand. The output must be a boolean (true for open hand, false otherwise).
This is an input example: [example binary hand image omitted]
I tried to consider something about the convex hull, or the percentage of white area, but I guess these methods are not robust enough for this kind of problem.
In machine learning terms, what you are trying to do is classification: mapping a binary input matrix the size of your input image (1 for a white pixel, 0 for a black pixel) to a single binary output (1 for open hand, 0 for closed hand).
If you build up a training set by taking lots of images of closed and open hands and hand-label them (pun not intended), then you can apply a supervised learning algorithm to create the classifier.
There are many choices for supervised learning algorithms. Perhaps the best one to try for a first shot would be a support vector machine:
http://en.wikipedia.org/wiki/Support_vector_machine
Support vector machines work by essentially calculating the "distance" between the input image and the examples provided in the training set. If the input image is "closer" on average to the examples of open hands from the training set than to the closed hands, it will classify it as an open hand (and vice versa).
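
To make that concrete, here is a rough sketch of the supervised pipeline with scikit-learn's SVC; the data is random placeholder noise standing in for real hand-labelled masks, all assumed resized to 32x32.

```python
# Sketch of the supervised approach: flatten each binary hand mask into a
# 0/1 feature vector and train an SVM. X and y are placeholders standing in
# for a hand-labelled set of segmented images, all resized to 32x32.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 32 * 32)).astype(np.float64)
y = rng.integers(0, 2, size=200)  # 1 = open hand, 0 = closed hand

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf")          # RBF kernel is a reasonable first choice
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
is_open = bool(clf.predict(X_test[:1])[0])  # True = open hand
```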
There are many other supervised learning algorithms:
http://en.wikipedia.org/wiki/Supervised_learning
A convex hull should work well: you can calculate the percentage of black area lying inside the convex hull, and if it's greater than some threshold, the hand is open.
Otherwise, you can just calculate the area and perimeter of the white region and check their ratio; for open hands, area/perimeter should be lower than for closed hands.
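
Here is a sketch combining both heuristics with OpenCV; hand_mask.png is a placeholder, and the 0.8 and 9.0 thresholds are made-up values that would need tuning on real data.

```python
# Sketch of both heuristics. "hand_mask.png" is a placeholder binary image;
# the 0.8 and 9.0 thresholds are guesses that need tuning.
import cv2

mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)   # largest blob = the hand

# Heuristic 1: solidity = blob area / convex hull area. Fingers of an open
# hand leave black gaps inside the hull, so solidity drops.
hull = cv2.convexHull(hand)
solidity = cv2.contourArea(hand) / cv2.contourArea(hull)

# Heuristic 2: area / perimeter ratio. An open hand has much more contour
# (finger outlines) per unit area than a fist.
ratio = cv2.contourArea(hand) / cv2.arcLength(hand, True)

is_open = solidity < 0.8 or ratio < 9.0
print("open hand" if is_open else "closed hand")
```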

Determining which are the text and graphic regions in an image

I don't know whether I should post this question here or not, but if someone knows the answer, please reply.
What are the algorithms for determining which region in an image is text and which is a graphic (figure or diagram)? That is, how can such regions be separated?
Most OCR software, e.g., Ocropus, supports layout analysis, which is what you need.
Mao, Rosenfeld & Kanungo (2003), "Document structure analysis algorithms: a literature survey", provides a fairly recent survey of layout analysis algorithms.
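
As a quick way to experiment with off-the-shelf layout analysis, here is a sketch using Tesseract through pytesseract (used here instead of OCRopus for brevity); it prints the bounding boxes of the blocks Tesseract finds, and page.png is a placeholder.

```python
# Sketch: block-level layout analysis via Tesseract (pytesseract).
# Regions Tesseract groups as text blocks come back with bounding boxes;
# anything outside them can be treated as graphics. "page.png" is a
# placeholder.
import pytesseract
from PIL import Image

data = pytesseract.image_to_data(Image.open("page.png"),
                                 output_type=pytesseract.Output.DICT)

# Level 2 entries are block-level boxes in Tesseract's layout hierarchy.
for i, level in enumerate(data["level"]):
    if level == 2:
        x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
        print("block at", (x, y, w, h))
```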
The first step would probably be to isolate the sharper contrast between text and image regions. This can be done by taking the derivative of the image, which highlights changes in color; the high-gradient areas can then be compared against text-like shapes.
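
A rough sketch of that gradient idea with OpenCV: Sobel derivatives give the edge magnitude, and averaging it over windows separates dense-edge (text-like) regions from smoother graphics. The window size and threshold here are guesses, and page.png is a placeholder.

```python
# Sketch: separating text-like regions by local edge density. Text has
# dense, high-frequency edges; graphics are usually smoother.
import cv2
import numpy as np

gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

# Derivative of the image: Sobel gradients in x and y.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx ** 2 + gy ** 2)

# Average edge energy per 32x32 window; dense-edge windows are likely text.
window = 32
energy = cv2.boxFilter(magnitude, -1, (window, window))
text_mask = (energy > energy.mean() * 1.5).astype(np.uint8) * 255
cv2.imwrite("text_mask.png", text_mask)
```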
