Example code for a minimal paint program (MS Paint style) - Cocoa

I want to write a paint program in the style of MS Paint.
At the most basic level, I have to draw a dot on the screen whenever the user drags the mouse:
def onMouseMove():
    if mouse.button.down:
        draw circle at (mouse.position.x, mouse.position.y)
Unfortunately, I'm having trouble with my GUI framework (see previous question): I'm not getting mouse move messages frequently enough. I'm using the wxWidgets GUI framework and the Haskell programming language.
Question: Could you give me some example code that implements such a minimal paint procedure? Preferably, your code should use wxWidgets, but I also accept GTK+ or Cocoa. I don't mind any programming language, as long as I can install it easily on Mac OS X. Please include the whole project, makefiles and all, since I probably don't have much experience with compiling your language.
Basically, I would like to have a small example that shows me how to do it right in wxWidgets or another GUI framework, so I can figure out why my combination of Haskell and wxWidgets doesn't give a decent frequency of mouse move events.

For Cocoa, Apple provides an example named CIMicroPaint, though it's a bit complicated in that it uses Core Image instead of Quartz 2D. Here's a screenshot:

I know this is an old question, but nevertheless: in order to get smooth drawing, it is not enough to simply put an instance of your brush at the location of the mouse, because input events are not polled anywhere near as fast as they need to be for smooth drawing.
Drawing lines is a very limited solution as lines ... are lines, and for a drawing app you need to be able to use custom bitmap brushes.
The solution is simple: interpolate between the previous and current position of the cursor, i.e. find the line between the two points and stamp the brush at every pixel along it.
For my solution I used Qt, so here is the method that interpolates a line between the last and current position in order to fill it out smoothly. Basically it finds the distance between the two points, calculates the increment and interpolates using a regular for loop.
void Widget::drawLine()
{
    QPointF point, drawPoint;
    point = newPos - lastPos;                 // vector from the last to the current position

    // Use the Manhattan length as a cheap approximation of the distance in pixels.
    int length = point.manhattanLength();
    if (length == 0)
        return;                               // cursor has not moved, nothing to interpolate

    double xInc, yInc;
    xInc = point.x() / length;                // per-step increment along x
    yInc = point.y() / length;                // per-step increment along y

    drawPoint = lastPos;
    for (int x = 0; x < length; ++x) {
        drawPoint.setX(drawPoint.x() + xInc);
        drawPoint.setY(drawPoint.y() + yInc);
        drawToCanvas(drawPoint);              // stamp the brush at the interpolated point
    }
}
This should give you smooth results, and performance is very good; I have even tested it on my Android tablet, which is a pretty slow and laggy device, and it works very well.
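For completeness, here is a rough sketch (not part of the original code) of how drawLine() might be driven from Qt mouse events, assuming lastPos and newPos are QPointF members of the same Widget and drawToCanvas() is the routine that stamps the brush:
#include <QMouseEvent>

// Sketch only: wire drawLine() to the mouse. lastPos/newPos are assumed members.
void Widget::mousePressEvent(QMouseEvent *e)
{
    lastPos = e->pos();    // a new stroke starts where the button went down
    newPos  = lastPos;
    drawToCanvas(lastPos); // put down the first dab of the brush
    update();
}

void Widget::mouseMoveEvent(QMouseEvent *e)
{
    newPos = e->pos();     // current cursor position
    drawLine();            // fill in every pixel between lastPos and newPos
    lastPos = newPos;      // the end of this segment starts the next one
    update();              // schedule a repaint of the widget
}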

To answer my own question, here is a minimal paint example in C++ using wxWidgets. I have mainly assembled snippets from the book Cross-Platform GUI Programming with wxWidgets, which is available online for free.
The drawing is as smooth as it can get; there are no problems with mouse event frequency, as can be seen from the screenshot. Note, however, that the drawing will be lost when the window is resized.
Here is the C++ source code, assumed to be in a file minimal.cpp.
// Name:    minimal.cpp
// Purpose: Minimal wxWidgets sample
// Author:  Julian Smart, extended by Heinrich Apfelmus
#include <wx/wx.h>

// **************************** Class declarations ****************************
class MyApp : public wxApp {
    virtual bool OnInit();
};

class MyFrame : public wxFrame {
public:
    MyFrame(const wxString& title); // constructor

    void OnQuit(wxCommandEvent& event);
    void OnAbout(wxCommandEvent& event);
    void OnMotion(wxMouseEvent& event);

private:
    DECLARE_EVENT_TABLE() // this class handles events
};
// **************************** Implementation ****************************

// **************************** MyApp

DECLARE_APP(MyApp)   // Implements MyApp& GetApp()
IMPLEMENT_APP(MyApp) // Give wxWidgets the means to create a MyApp object

// Initialize the application
bool MyApp::OnInit() {
    // Create main application window
    MyFrame *frame = new MyFrame(wxT("Minimal wxWidgets App"));
    // Show it
    frame->Show(true);
    // Start event loop
    return true;
}

// **************************** MyFrame

// Event table for MyFrame
BEGIN_EVENT_TABLE(MyFrame, wxFrame)
    EVT_MENU(wxID_ABOUT, MyFrame::OnAbout)
    EVT_MENU(wxID_EXIT , MyFrame::OnQuit)
END_EVENT_TABLE()

void MyFrame::OnAbout(wxCommandEvent& event) {
    wxString msg;
    msg.Printf(wxT("Hello and welcome to %s"), wxVERSION_STRING);
    wxMessageBox(msg, wxT("About Minimal"), wxOK | wxICON_INFORMATION, this);
}

void MyFrame::OnQuit(wxCommandEvent& event) {
    Close();
}

// Draw a dot on every mouse move event
void MyFrame::OnMotion(wxMouseEvent& event) {
    if (event.Dragging())
    {
        wxClientDC dc(this);
        wxPen pen(*wxBLACK, 3); // black pen of width 3
        dc.SetPen(pen);
        dc.DrawPoint(event.GetPosition());
        dc.SetPen(wxNullPen);
    }
}
// Create the main frame
MyFrame::MyFrame(const wxString& title)
       : wxFrame(NULL, wxID_ANY, title)
{
    // Create menu bar
    wxMenu *fileMenu = new wxMenu;
    wxMenu *helpMenu = new wxMenu;
    helpMenu->Append(wxID_ABOUT, wxT("&About...\tF1"), wxT("Show about dialog"));
    fileMenu->Append(wxID_EXIT, wxT("E&xit\tAlt-X"), wxT("Quit this program"));

    // Now append the freshly created menu to the menu bar...
    wxMenuBar *menuBar = new wxMenuBar();
    menuBar->Append(fileMenu, wxT("&File"));
    menuBar->Append(helpMenu, wxT("&Help"));
    // ... and attach this menu bar to the frame
    SetMenuBar(menuBar);

    // Create a status bar just for fun
    CreateStatusBar(2);
    SetStatusText(wxT("Warning: Resize erases drawing."));

    // Create a panel to draw on.
    // Note that the panel will be erased when the window is resized.
    wxPanel* panel = new wxPanel(this, wxID_ANY);

    // Listen to mouse move events on that panel
    panel->Connect(wxID_ANY, wxEVT_MOTION, wxMouseEventHandler(MyFrame::OnMotion));
}
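As an aside, to address the "resize erases drawing" limitation mentioned above, one possible approach (a sketch only, not part of the original sample) is to draw into an off-screen wxBitmap and blit it back in a paint handler. This assumes MyFrame gains a wxBitmap m_buffer member created at a suitable size, an OnPaint method, and a matching EVT_PAINT entry in the event table:
// Sketch: keep the strokes in an off-screen bitmap so a resize does not erase them.
void MyFrame::OnMotion(wxMouseEvent& event) {
    if (event.Dragging()) {
        wxMemoryDC mem(m_buffer);          // draw into the off-screen bitmap
        mem.SetPen(wxPen(*wxBLACK, 3));
        mem.DrawPoint(event.GetPosition());
        mem.SelectObject(wxNullBitmap);    // release the bitmap again
        Refresh(false);                    // ask for a repaint without erasing
    }
}

void MyFrame::OnPaint(wxPaintEvent& event) {
    wxPaintDC dc(this);
    dc.DrawBitmap(m_buffer, 0, 0);         // copy the buffer back onto the window
}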
To build, I use the following Makefile, but this will not work for you, since you probably don't have the macosx-app utility. Please consult the wiki guide to Building a MacOSX application bundle.
CC = g++ -m32

minimal: minimal.o
	$(CC) -o minimal minimal.o `wx-config --libs`
	macosx-app $@

minimal.o: minimal.cpp
	$(CC) `wx-config --cxxflags` -c minimal.cpp -o minimal.o

clean:
	rm -f *.o minimal

Like your eyes, the cursor moves in jumps, so you will want to draw lines between each point at which the cursor was recorded.
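In terms of the wxWidgets sample above, that could look roughly like this (a sketch only; m_lastPos is an assumed wxPoint member that the original code does not have):
// Sketch: connect consecutive mouse positions with line segments instead of dots.
void MyFrame::OnMotion(wxMouseEvent& event) {
    if (event.Dragging()) {
        wxClientDC dc(this);
        dc.SetPen(wxPen(*wxBLACK, 3));
        dc.DrawLine(m_lastPos, event.GetPosition()); // bridge the jump since the last event
        dc.SetPen(wxNullPen);
    }
    m_lastPos = event.GetPosition();                 // remember the position for the next event
}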

Related

How to incrementally save images loaded on a QGraphicsView using QPushButton & OpenCV::imwrite

I prepared this small verifiable .ui in Figure 1 that replicates the issue I have:
I am trying to use the QPushButton "Print Screen Both Images" to incrementally save images on Left and Right of the QGraphicsView into two different folders present on my Desktop, see below Figure 2:
I can take a print screen of either the leftScene or the rightScene by just clicking on their related QPushButton Print Screen Left and Print Screen Right.
However, I am trying for this specific case not to use QFileDialog as I need to silently and incrementally save the images in the two different destination folders as I move on with the right/left arrow.
See below the snippet of code I am using:
mainwindow.h
public:
    void bothPrintScreen(const std::string& pathImg);

private slots:
    void on_bothPrintScreen_clicked(const std::string& imgPath);

private:
    int counterA = 0;
    int counterB = 0;
mainwindow.cpp
void MainWindow::on_bothPrintScreen_clicked(const std::string& imgPath)
{
    bothPrintScreen(imgPath);
}

void MainWindow::bothPrintScreen(const std::string& pathImg){
    cv::Mat left, right;

    std::string outA = pathImg + "/printScreenA_" + std::to_string(counterA++) + ".png";
    cv::imwrite(outA, left);

    std::string outB = pathImg + "/printScreenB_" + std::to_string(counterB++) + ".png";
    cv::imwrite(outB, right);
}
I am missing something in the code but I am not sure what exactly.
The compiler is sending this allocate()/deallocate() error that I don't understand:
Please shed light on this matter.
You need to add the OpenCV libraries to your Qt project (like this):
INCLUDEPATH += -I/usr/local/include/opencv
LIBS += -L/usr/local/lib -lopencv_stitching -lopencv_superres ...and the other libraries you need
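If OpenCV is not strictly required for the screenshots themselves, here is a rough Qt-only sketch of the silent, incremental saving; the view names ui->leftView and ui->rightView and the two directory parameters are hypothetical placeholders, not taken from the question:
// Sketch only: grab each QGraphicsView and save it with an incrementing counter.
void MainWindow::bothPrintScreen(const QString& dirA, const QString& dirB)
{
    QPixmap left  = ui->leftView->grab();   // what the left view currently shows
    QPixmap right = ui->rightView->grab();  // what the right view currently shows

    left.save(QString("%1/printScreenA_%2.png").arg(dirA).arg(counterA++));
    right.save(QString("%1/printScreenB_%2.png").arg(dirB).arg(counterB++));
}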

Check if exact image in exact location is on screen

I am looking to create a program in Visual Studio (C#) which scans the screen for an exact image in an exact location of the screen. I have seen many discussions which involve algorithms to find a "close" image, but mine will be 100% exact; location, size and all.
I have obtained a png from a section of my screen [Image 1] using this code:
private void button1_Click(object sender, EventArgs e)
{
    // Create a new bitmap.
    var bmpScreenshot = new Bitmap(Screen.PrimaryScreen.Bounds.Width,
                                   Screen.PrimaryScreen.Bounds.Height);

    // Create a graphics object from the bitmap.
    var gfxScreenshot = Graphics.FromImage(bmpScreenshot);

    // Take the screenshot from the upper left corner to the right bottom corner.
    gfxScreenshot.CopyFromScreen(1555, 950,
                                 1700, 1010,
                                 Screen.PrimaryScreen.Bounds.Size,
                                 CopyPixelOperation.SourceCopy);

    // Save the screenshot to the specified path that the user has chosen.
    bmpScreenshot.Save("Screenshot.png");
}
So, basically here is the flowchart of my program and how I want to move forward:
1) Create the master png using the above code.
2) Run a loop:
   - create the same screenshot using the same procedure as the master png;
   - compare the master png to the new screenshot png; if they match, move on, otherwise reiterate the loop.
I am very new to programming, but I don't believe this is beyond me, given a little guidance. I have written fairly complicated (in my opinion) VBA and Matlab programs. Any help is greatly appreciated.
Thank You,
Sloan
Digging around a bit through Microsoft's documentation, I came up with a rough function that would do something similar to what you want.
https://msdn.microsoft.com/en-us/library/hh191601.aspx
This function can get stuck in an endless loop, so you might consider calling it with a timeout from your main. See here for info on synchronous methods with timeouts:
Monitoring a synchronous method for timeout
From your main, all you'd have to do is see if it returns true.
static int Main(string[] args)
{
    if (ImageInLocation(left, right, top, bottom)) {
        // do other things
    }
    return 0;
}
The only thing I'm not entirely sure about is how strict you can be with the ColorDifference. Even if the images look identical, the slightest pixel difference combined with an entirely non-tolerant ColorDifference will make the comparison come up false. If you know it should work and it's not, consider increasing the tolerance. Here's some more info on that:
https://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.uitesting.colordifference.aspx
public bool ImageInLocation(int left, int right, int top, int bottom) {
    bool image_found = false;
    var masterImage = Image.FromFile("path_to_master");
    while (!image_found) {
        // screenshot code above, output to "path_to/Screenshot.jpg"
        var compImage = Image.FromFile("path_to/Screenshot.jpg");
        // note, all zeroes may not be tolerant enough
        var color_diff = new ColorDifference(0, 0, 0, 0);
        Image diffImage;
        image_found = ImageComparer.Compare(masterImage, compImage, color_diff, out diffImage);
    }
    return true;
}
Good luck! Welcome to the programming community.
Also, if anyone has any suggestions/changes, feel free to edit this. Happy imaging, friends!

Unity2d game shooting and animation sync issue

I'm new to Unity and making my first 2D game. I have seen several topics on this forum about this issue, but I haven't found the solution.
So I have a lovely shooting animation and the bullet generation. My problem is that I have to generate the bullet somewhere around the middle of the animation, but the character shoots the bullet and starts the animation at the same time, which kills the UX :)
I attached an image about the issue; this is the moment when the bullet should be initialized, but as you can see it's already on its way.
Please find my code:
The GameManager update method calls the attackPlayer function:
public void Awake(){
    animator = GetComponent<Animator> ();
    animator.SetTrigger ("enemyIdle");
}

//if the enemies pass this point, they stop shooting, and just go off the screen
private float shootingStopLimit = -6f;

public override void attackPlayer(){
    //animator.SetTrigger ("enemyIdle");
    if (!isAttacking && gameObject.transform.position.y > shootingStopLimit) {
        isAttacking = true;
        animator.SetTrigger("enemyShoot");
        StartCoroutine(doWait());
        gameObject.GetComponentInChildren ().fireBullet ();
        StartCoroutine (Reload ());
    }
}

private IEnumerator doWait(){
    yield return new WaitForSeconds(5);
}

private IEnumerator Reload(){
    animator.SetTrigger ("enemyIdle");
    int reloadTime = Random.Range (4,7);
    yield return new WaitForSeconds(reloadTime);
    isAttacking = false;
}......
My questions:
- How can I sync the animation and the bullet generation?
- Why does doWait() not work? :)
- Is it okay to call the attackPlayer method from the GameManager update?
- The enemies are flying from the right side of the screen to the left; when they reach the rightmost side of the screen, they become visible to the user. I don't know why, but they do a shooting animation first (no bullet generation happens), and only after that do they do the idle. Any idea why?
Thanks,
K
I would suggest checking out animation events. Using animation events, you can call a method to instantiate your bullet.
To use Mecanim Animation Events you need to write the name of the function you want to call at the selected frame in the "Function" area of the "Edit Animation Event" window.
The other boxes are for any variables that you want to pass to that function to trigger whatever you have in mind.
Triggering/blending between different animations can be done in many different ways. The event area is more for other things that you want to trigger that are not related to animation (e.g. audio, particle fx, etc).

How to identify when a Qt application/widget is dragged across monitors

My Qt-based application has a QMainWindow and another modal widget. This modal widget does not have a restore option. The user is allowed to drag this widget across displays when more than one monitor is available. However, when my widget is large and the user drags it to an extended monitor (usually a projector with a very low resolution), the widget is too large for the screen and gets cut off.
I want to be able to detect when the user has moved it to a different screen so that I can resize the widget (and the content within) to fit the new screen's dimensions and resolution. Is there any signal that Qt emits for this purpose?
This is probably the function call you are looking for:
http://qt-project.org/doc/qt-5/qdesktopwidget.html#screenNumber
int QDesktopWidget::screenNumber(const QWidget * widget = 0) const
Returns the index of the screen that contains the largest part of
widget, or -1 if the widget is not on a screen.
To use this as part of a signal, you should subclass QWidget, override moveEvent and resizeEvent, and put your logic for deciding how to place/resize your widget there. If you want to resize like a browser tab that gets dragged onto a new monitor, you may want to use the position of the mouse instead of the widget to decide which monitor to react to.
So your end code could look something like this:
void Widget::moveEvent(QMoveEvent * e)
{
    // Remember the geometry of the screen the widget is currently on.
    m_newScreenSize = qApp->desktop()->screenGeometry(this);
}

void Widget::mouseReleaseEvent(QMouseEvent * e)
{
    // Resize the widget to the recorded screen size once the drag ends.
    this->resize(m_newScreenSize.size());
}
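Alternatively, here is a rough sketch (not part of the code above, and assuming an int m_lastScreen member initialised to -1) that reacts as soon as screenNumber() reports a different screen and clamps the widget to that screen's available geometry:
#include <QApplication>
#include <QDesktopWidget>

// Sketch: detect a screen change during a move and shrink the widget to fit.
void Widget::moveEvent(QMoveEvent *e)
{
    QWidget::moveEvent(e);
    const int screen = qApp->desktop()->screenNumber(this);
    if (screen != m_lastScreen) {
        m_lastScreen = screen;
        const QRect avail = qApp->desktop()->availableGeometry(this);
        resize(qMin(width(),  avail.width()),
               qMin(height(), avail.height()));
    }
}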
Other Links and References
QMoveEvent
QResizeEvent
QDesktopWidget
http://qt-project.org/doc/qt-5/qdesktopwidget.html#details
http://qt-project.org/doc/qt-5/qapplication.html#desktop
http://qt-project.org/doc/qt-5/qmoveevent.html#details
http://qt-project.org/doc/qt-5/qwidget.html#moveEvent
http://qt-project.org/doc/qt-5/qwidget.html#resizeEvent
http://qt-project.org/doc/qt-5/qrect.html#intersects
http://qt-project.org/doc/qt-5/qdesktopwidget.html#availableGeometry-2
const QRect QDesktopWidget::availableGeometry(const QWidget * widget) const

How to get multitouch to work in QGraphicsView, Qt 5.0.2 in Windows 8

I am struggling with getting multi-touch to work on a couple of QWidgets that I have added to a QGraphicsView. I have created a subclass of QWidget in which I set up a QGraphicsScene and QGraphicsView. This is my (test) subclass of QWidget:
#include "qttest1.h"
#include <QtWidgets>
#include <QTouchEvent>
qttest1::qttest1(QWidget *parent)
    : QWidget(parent)
{
    setEnabled(true);
    if(!QCoreApplication::testAttribute(Qt::AA_DontCreateNativeWidgetSiblings))
        setAttribute(Qt::WA_NativeWindow);
    setAttribute(Qt::WA_AcceptTouchEvents);

    scene = new QGraphicsScene(this);
    scene->setSceneRect(0, 0, 1920, 1080);

    graphicsView = new QGraphicsView(scene, this);
    graphicsView->setRenderHints(QPainter::Antialiasing);
    graphicsView->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
    graphicsView->setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
    graphicsView->setAttribute(Qt::WA_AcceptTouchEvents);
    graphicsView->viewport()->setAttribute(Qt::WA_AcceptTouchEvents);

    QBoxLayout *layout = new QVBoxLayout;
    layout->addWidget(graphicsView);
    setLayout(layout);
}

qttest1::~qttest1() {}

void qttest1::showGraphics()
{
    for(int i = 0; i < 10; i++)
    {
        QDial *dial = new QDial();
        dial->move(i * 120 + 50, 200);
        dial->resize(120, 120);
        dial->setAttribute(Qt::WA_AcceptTouchEvents);
        QGraphicsProxyWidget *proxy = scene->addWidget(dial);
        proxy->setAcceptTouchEvents(true);
    }
}
This is my main:
int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    app.setAttribute(Qt::AA_DontCreateNativeWidgetSiblings);

    QRect rect = app.desktop()->screenGeometry();

    qttest1 test;
    test.resize(rect.width(), rect.height());
    test.showFullScreen();
    test.showGraphics();

    return app.exec();
}
I know the code isn't pretty and probably leaks a bit, but the point is to try to get multi-touch to work.
I can see and use every kind of widget I add to the scene, but as soon as I touch a dial it swallows every touch that comes after the first, which makes the dial jump between several positions. What I want is for every dial (or any type of widget) to be usable individually and at the same time. I am using Qt 5.0.2 on Windows 8 with a monitor that supports up to 10 touches.
The Qt docs state:
Reimplement QWidget::event() or QAbstractScrollArea::viewportEvent()
for widgets and QGraphicsItem::sceneEvent() for items in a graphics
view to receive touch events.
With that, I believe that you need to handle the QEvent::TouchBegin, QEvent::TouchUpdate and QEvent::TouchEnd events, which I don't see in the code you've posted.
Qt may handle the first touch for you, but it's not going to know what you want to do with the second, third, fourth etc. simultaneous touches. For example, you may want your app to do any of the following when the second touch moves:
1) Rotate the object that the first touch is over
2) Scale the object that the first touch is over
3) Select the second item
4) Translate the view
5) etc.
So, you need to handle the consecutive touches to do what you want it to do. Also, you may want to look at Gestures in Qt.
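For instance, a minimal sketch of intercepting those events by reimplementing QWidget::event() on the qttest1 class could look like this (Qt 5 API; the matching declaration must be added to the class, and what you do per touch point is up to you):
#include <QEvent>
#include <QTouchEvent>
#include <QDebug>

// Sketch only: accept the whole touch sequence and look at every finger.
bool qttest1::event(QEvent *event)
{
    switch (event->type()) {
    case QEvent::TouchBegin:
    case QEvent::TouchUpdate:
    case QEvent::TouchEnd: {
        QTouchEvent *touch = static_cast<QTouchEvent *>(event);
        // Each simultaneous finger shows up as its own touch point.
        foreach (const QTouchEvent::TouchPoint &p, touch->touchPoints()) {
            // Decide here which dial each finger controls, e.g. by
            // hit-testing p.pos() against the proxy widgets in the scene.
            qDebug() << "touch point" << p.id() << "at" << p.pos();
        }
        return true;   // the touch sequence has been handled
    }
    default:
        return QWidget::event(event);
    }
}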
