What I'm trying to do
I'm trying to create a terminal app to use as an advanced calculator, and I want the equations to be formatted the way they are in LaTeX and mathway.com, both as input and output. I was considering using FinalCut to create the TUI, but I am unsure how to actually output special characters like sigma, delta, infinity, etc. Even graphs.
I would also like to ask: how do Casio and Texas Instruments do the GUI on their calculators?
[Screenshot: equations in Mathway]
[Screenshot: equations in Markdown]
I haven't tried a whole lot because I am fairly inexperienced, but I'm looking for directions I can go in.
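For the special characters at least, the short version is that they are ordinary Unicode code points, so any terminal emulator with a Unicode-capable font can display them; here is a minimal Python sketch to illustrate, independent of any particular TUI library:

# Math symbols are plain Unicode code points, so a Unicode-capable
# terminal can print them directly; a TUI library draws them the same way.
symbols = {
    "sigma": "\u03a3",        # Σ
    "delta": "\u0394",        # Δ
    "infinity": "\u221e",     # ∞
    "integral": "\u222b",     # ∫
    "square root": "\u221a",  # √
}
for name, glyph in symbols.items():
    print(f"{name:>12}: {glyph}")

# A crude two-line layout of a summation, the way a TUI widget might draw it:
print("  n")
print("  \u03a3 k = n(n+1)/2")
print(" k=1")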
Related
I want to use a Makefile to produce lots of images by script in the same manner as TeX tikz. It can be a different script language, but it should be either simple or worth the extra power.
I use graphviz and gnuplot and love them both. Both are driven by text scripts describing what is to be drawn. Absolutely no GUI should be necessary.
The script should be able to do the usual things like drawing a red line.
\draw[line width=2pt, color=red] (0,0,0) -- (1,1,1);
Are there any good choices for script-driven 3D image generating programs with the look and feel of graphviz or gnuplot?
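For comparison, here is that same red-line request in one existing script-driven, GUI-free tool. This is a minimal sketch in Python with matplotlib's Agg (off-screen) backend; matplotlib is only one candidate and may or may not have the gnuplot-like feel being asked for:

# Draw a red 3D line from (0,0,0) to (1,1,1) and write a PNG,
# entirely from a script, with no GUI (Agg renders off-screen).
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot([0, 1], [0, 1], [0, 1], color="red", linewidth=2)
fig.savefig("line.png")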
For a project I am using YOLO to detect phallusia (microbial organisms) that swim into focus in a video. The issue is that I have to train YOLO on my own data. The data needs to be segmented so I can isolate the phallusia. I am not sure how to properly segment/cut out the phallusia to fit the format that YOLO needs. For example, in the picture below I want YOLO to detect when a phallusia is in focus, similar to the one I have boxed in red. Do I just cut out that segment of the image, save it as its own image, and feed that to YOLO? Do all segmented images need to have the same dimensions? Not sure what I am doing and could use some guidance.
It looks like you need to start from the basics. OK, no fear: I will try to suggest a simple route into using YOLO techniques efficiently. Luckily the web has a lot of examples.
Understand WHAT a YOLO method is.
Andrew Ng's YOLO explanation is a good start, but only if you already know what classification and detection are.
Understand the YOLO Loss function, the heart of the algorithm.
Check the YOLO paper itself; don't be scared. On page 2, in the Unified Detection section, you will find the information about the bounding-box encoding used, but be aware that you can use whatever notation you want (even invent a new one), as long as it is compatible with the loss function, the real heart of this algorithm.
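For reference, that loss as it appears in the YOLOv1 paper (reproduced here in LaTeX from memory, so double-check it against the paper; the grid has S² cells with B boxes each, the indicator terms select the predictor responsible for an object, and the paper uses λ_coord = 5 and λ_noobj = 0.5):

\begin{aligned}
\mathcal{L} ={}& \lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
&+ \lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
&+ \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left(C_i - \hat{C}_i\right)^2
+ \lambda_{\text{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{noobj}} \left(C_i - \hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\text{obj}} \sum_{c \in \text{classes}} \left(p_i(c) - \hat{p}_i(c)\right)^2
\end{aligned}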
Start to implement an example
As I wrote above, there are plenty of examples. You can check this one if you are familiar with Python and TensorFlow.
Inside it you will find a way to prepare the dataset, which I think is your target for this question. In this case a tool named labelImg is used.
I hope it will be useful. Please share your code when it's ready, I'm curious :). Good luck!
Do I just cut-out that segment of the image and save it as its own image and feed to that to YOLO?
You need as many images of your microbial organism as you can get, in different sizes, positions, etc. It doesn't need to be the only thing in the image, but you do need to know its <x> <y> <width> <height> position, as sketched below.
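Concretely, darknet-style YOLO labels are one .txt file per image, one line per object, with the box normalized to the image size. A small sketch of the conversion (plain Python; the numbers in the example are made up):

# Convert a pixel-space box (x_min, y_min, x_max, y_max) into a darknet
# label line: "<class> <x_center> <y_center> <width> <height>", all in [0, 1].
def to_darknet(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# e.g. an organism boxed at (120, 80)-(260, 190) in a 640x480 frame, class 0:
print(to_darknet(0, 120, 80, 260, 190, 640, 480))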
Do all segmented images need to have the same dimensions?
No, they can be of any size; YOLO rescales them. See the VOC dataset for examples of the images YOLO is normally trained on. A couple of examples:
kitchen, dogs
Not sure what I am doing and could use some guidance.
My advice would be to follow the instructions for "Training YOLO on VOC" from the original YOLO website: https://pjreddie.com/darknet/yolo/
Once you have that working, you will have a better idea of the steps you need to take.
I had similar problems when I wanted to train YOLOv2 for some game cards.
To solve the problem, I took a picture of every game card with my cellphone and cut them out. Because I didn't have enough training data, I wrote a dataset-generator program that produced the training data from the card photos. The program can multiply, rotate, and scale an image and then place it on a background.
You may run into problems if you don't have enough training data. In that case don't panic: by rotating and scaling a handful of raw images you can generate a large dataset.
Here you can find my dataset generator, which can produce Pascal VOC-style and darknet-style training data: https://github.com/szaza/dataset-generator. Feel free to reuse it if you need something similar.
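If you would rather roll your own, the core of such a generator is only a few lines. A minimal sketch of the same idea using Pillow (all file names here are placeholders):

# Paste a rotated/scaled cut-out onto a background at a random position,
# recording the resulting box for the label file. Assumes the cut-out
# (with transparency) stays smaller than the background.
import random
from PIL import Image

background = Image.open("background.jpg").convert("RGB")
card = Image.open("card_cutout.png").convert("RGBA")

scale = random.uniform(0.5, 1.5)
card = card.resize((int(card.width * scale), int(card.height * scale)))
card = card.rotate(random.uniform(0, 360), expand=True)

x = random.randint(0, background.width - card.width)
y = random.randint(0, background.height - card.height)
background.paste(card, (x, y), card)  # alpha channel used as the paste mask

background.save("train_0001.jpg")
# (x, y, card.width, card.height) is the pixel box to convert into a label.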
At the Hebrew University of Jerusalem there are a few MATLAB applications consisting of both calculations and UI. Since the UI is becoming increasingly complex, it's getting very hard to maintain.
What I'd like to do is keep the calculations and the rendering of 2D and 3D graphs in MATLAB, but control the entire UI from elsewhere. I know MATLAB exports a COM interface, which is OK for using MATLAB calculations, but I couldn't find a way to pass rendered data (MATLAB plots, basically) back through it.
Is there a way to do that?
The simplest thing for you to do would be to issue an instruction to MATLAB to create the plot (perhaps creating it off-screen, to avoid an unwelcome popup window), adjust its appearance and size, then save it to an image file. Pass the filename back, then load it from your UI code and display it.
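Through the COM Automation interface, that whole round trip is only a couple of calls. Here is a sketch from Python via pywin32, assuming the documented Matlab.Application server and its Execute method (adjust the ProgID and paths to your installation):

# Ask MATLAB (running as a COM Automation server) to render a plot
# off-screen into a PNG; the UI side then just loads that file.
import win32com.client

matlab = win32com.client.Dispatch("Matlab.Application")
matlab.Execute(
    "f = figure('Visible', 'off');"     # off-screen figure, no popup window
    "plot(1:10, (1:10).^2);"
    "print(f, '-dpng', 'C:\\temp\\plot.png');"
    "close(f);"
)
# Now display C:\temp\plot.png in whatever UI toolkit you are using.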
However, that will not of course get you a plot that is "live", so you won't be able to edit it, or click on it/interact with it, or even resize it nicely.
If you need that, I'm afraid there's no documented or supported way to do it. But if you're willing to go undocumented, then MATLAB also has a Java interface (jmi.jar) that you can call from Java, and you can embed a live MATLAB plot within a Java GUI, attaching MATLAB or Java callbacks to plot elements.
Note that this capability is completely undocumented and may well change from release to release without warning. If you'd like to learn how to approach it, I'd recommend reading through the blog Undocumented MATLAB, and probably buying a copy of the book by that blog's author.
It's known that 3D rendering is computationally expensive, and I want to use Apache Hadoop for distributed 3D rendering (rendering images or videos) to reduce rendering time. So after learning about Hadoop, I understand that I need two things:
Data to be visualized (probably some kind of file containing instructions such as draw a rectangle, set coordinates, set a color, etc.).
A tool/program/utility to render the file described above. I want to invoke it from my program, pointing it at the data file (ideally it has a command-line API).
But I don't know anything about 3D rendering, so I need your help with suggestions for open-source tools that render 3D images/videos. I also don't know anything about the input data, so it would be nice if you suggested a render tool plus a file format to render.
I have heard about using Hadoop with the .rib file format as the data to visualize and the rndr program to render it, so I need some analogue.
Please note, my goal is to learn more deeply about Hadoop and distributed computation, not about 3D rendering, so please suggest the simplest solution.
Thanks.
KISS: Gnuplot
If you really only care about using Hadoop, and your only requirement of the rendering is that it takes an input file and makes an image so you can show a completed animation, then I would suggest gnuplot. It is actually a graph-drawing program, but it takes scripts and produces image files, and, most usefully for you, you can enter mathematical formulae to draw rather than constructing 3D worlds to render.
You would simply prepare n files which are all the same except for an offset value for the time since start, and gnuplot would produce the appropriate frame.
This is the simplest option, and lets you concentrate on the Hadoop side. To show you how simple, this would produce a frame for an animation of a 3-blade fan spinning:
set xrange[-1:1]
set yrange[-1:1]
set polar                        # polar mode: t is the angle variable
unset key
unset border
unset tics
set terminal png size 1000,1000
set output "frame_$FRAME.png"    # $FRAME is substituted per frame before running
plot cos(3*t+$FRAME/5)           # three-petal rose, rotated by the frame offset
A great thing about Gnuplot is that you type in commands in an interactive prompt to manipulate the graph, and these are the same commands you put in the script. So once you have something you're happy with, you can either do save 'newscript.gpt' or copy out the commands you used. You can recreate the graph by just running gnuplot newscript.gpt at a prompt.
Incidentally, it is easier to simulate hard-to-render scenes than to actually construct them, so just put a pause command in the gnuplot script (e.g. pause 15) to make each frame take 15 seconds or however long.
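The per-frame file preparation is itself trivial to script. A local driver might look like the following sketch (Python for brevity; on Hadoop, each mapper would do the equivalent for its assigned frame numbers, and fan.gpt is assumed to be the template above):

# Substitute $FRAME into the gnuplot template and render each frame.
# Assumes gnuplot is on the PATH and the template is saved as fan.gpt.
import subprocess

template = open("fan.gpt").read()
for frame in range(100):
    with open(f"fan_{frame}.gpt", "w") as f:
        f.write(template.replace("$FRAME", str(frame)))
    subprocess.run(["gnuplot", f"fan_{frame}.gpt"], check=True)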
The whole banana: Blender
Blender is a 3D rendering system. I believe it is used as the rendering system for some mainstream animations on server farms, in exactly the manner you describe. It has quite a learning curve, but I think you should be able to pick up a tutorial or other pre-existing animation project files. From there you would need to work out how to invoke the rendering engine as a command for a specific frame; a sketch of that invocation follows below. I've only ever done static rendering in Blender, so I can't take you further there. This would be more impressive visually, but no more impressive academically, and you'll waste more time on that side of things.
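For completeness, Blender's headless rendering is invoked with its documented command-line flags, which is exactly the shape of job you would hand to a mapper. A sketch (scene.blend is a placeholder for a pre-built project file):

# Render a single frame of a pre-built scene without opening the GUI.
import subprocess

frame = 42
subprocess.run([
    "blender", "-b", "scene.blend",  # -b: run in background (no GUI)
    "-o", "/tmp/frame_####",         # output pattern; #### becomes the padded frame number
    "-F", "PNG",                     # output image format
    "-f", str(frame),                # render just this one frame
], check=True)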
My choice
Personally I would go with gnuplot. You can make 3D plots with the splot command rather than the 2D plot command, and although it's not 3D scene rendering as you might be imagining, it achieves the purpose of making a picture using a script. You can begin with something totally basic like the above until you have your setup going, then introduce more complexity. From an implementation perspective, running a gnuplot script is easier than running a script that also requires a data file, but it's still easy to pre-generate the data and have gnuplot read it when you want to do command + script + data instead of command + script. The key is incrementally increasing the difficulty, not running before you can walk.
If you ultimately find you have spare time at the end and switch over to Blender, then that's all a free win, and you haven't jeopardised your Hadoop project by making it pretty.
I am learning image processing and I am trying to start my first project: simple number recognition in an image.
So far I have applied thresholding to the image. Now I would like to know some algorithms by which my system can recognize the number in the image. Preferably the algorithm should be simple, and it doesn't have to be robust, as I would be generating the images in Paint using the same font.
I have looked at the similar questions here on SO and they all point to using libraries. Remember, I am trying to learn, so please don't just point me at libraries.
Are the numbers printed or hand-written?
The Computer Vision System Toolbox includes a function called ocr, which will recognize both letters and numbers.
If you are looking for hand-written digit recognition, please take a look at this example.
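Since the question rules out libraries and the digits come from one fixed font, the simplest algorithm to implement from scratch is plain template matching: keep one thresholded reference image per digit and pick the digit whose template agrees with the candidate region on the most pixels. A minimal sketch (Python, with numpy used only as an array container; all names are illustrative):

# Template matching for a fixed font: compare a thresholded digit region
# against one stored binary template per digit and take the best match.
import numpy as np

def match_score(region, template):
    # Both are binary (0/1) arrays of identical shape; count agreeing pixels.
    return int(np.sum(region == template))

def recognize(region, templates):
    # templates: dict mapping digit -> binary array, each pre-sized to
    # match the shape of the candidate region.
    return max(templates, key=lambda digit: match_score(region, templates[digit]))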