Qt, CEGUI or wxWidgets for a text game GUI? - user-interface

I tried to sign up, but I wasn't able to; perhaps it's a problem on my side. Hopefully I'll get an answer as an anonymous user.
I apologize for the grammar/syntax, but English isn't my native language.
Recently I lost my job, so I have enough spare time to try something fun. I decided to create a simple text RPG game for me and some friends. It will be very close to board games like Talisman, Dungeon Run, and HeroQuest, using dice and a simple attribute/skill system. So no 3D graphics. The only 2D element, if I decide to include it, will be a map
that will allow the hero to move between locations. Currently I'm using Windows XP SP3, for the game I use wxDev-C++, and although cross-platform would be cool, I don't really care.
I have some experience in C++ (currently using wxDev-C++), but I'm far from being called an expert or even a great programmer. I was about to start writing parts of the code, but I decided to check first whether creating a GUI for the game is possible. In some forums, many suggested I use Qt, CEGUI or wxWidgets, but most examples I saw are grey boxes that are
indifferent at best, when I want something that fits better in a fantasy setting. I don't claim I would do better, but I want a GUI that is more fantasy related.
What I want from the GUI:
1. A "cool" Gui with decent graphics. I could even create an image to serve as a mask in Photoshop, but the GUI builder will have to support imported images.
2. A relatively large textbox in the middle (with a scrollbar) that will display die rolls, damage and options.
3. The ability to display values dynamically (like the change in health after each action, without requiring a manual refresh).
4. Display an icon or a small image of the character in the area where I display stats/abilities.
5. Open new windows created with the same GUI builder to allocate points, buy/sell things, and open a map.
About the map in the game: I decided to create a map in Photoshop. When the hero decides to move to another location, a new window will open showing the map. I thought of two possible ways to move between locations: 1) Create hotspots on the image and select one by clicking on the name of the location (I dare not think about the complexity of this, so we
move to idea #2), and 2) Have the image as a background to a grid with vertical and horizontal coordinates. When the hero selects a new area to visit, he clicks on the area, but what he really does is click on the grid, which returns the two values (x, y) of the location and informs the game about the area the hero wants to visit.
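Something like this is the arithmetic I have in mind for idea #2 (a plain C++ sketch, independent of any GUI library; the cell size is just an example value):

// Illustration only: map a pixel click on the map image to grid coordinates.
struct GridPos { int col; int row; };

GridPos pixelToGrid(int clickX, int clickY, int cellWidth, int cellHeight)
{
    GridPos pos;
    pos.col = clickX / cellWidth;   // which column of the grid was hit
    pos.row = clickY / cellHeight;  // which row of the grid was hit
    return pos;
}

The game would then look up which location occupies cell (col, row).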
Yeah, yeah, I know it's too much, so what I'm most interested in are points 1-3. I know that even if they are possible, it will probably take forever, but as I said I have spare time, and I like learning new things. I apologize for the size of the post, but I decided to post as much info as possible so you know what I want.
If any of you have used Qt, CEGUI or wxWidgets, could you tell me which covers most of my criteria? I saw some great stuff built with CEGUI, but I don't know if it is too hard to learn.
Thanks in advance.

I know my answer comes pretty late; I only started using Stack Overflow fairly recently, but maybe this response will help somebody.
CEGUI fully supports skinning widgets using XML. Our CEED editor (WYSIWYG) fully supports layout editing, but the skinning (look-n-feel) editor is not finished as of now (11.11.2014); the development version does support exchanging images and changing sizes and proportions, but more advanced adjustments have to be done in XML.
CEGUI has an imageset editor, fully supported by the CEED editor. Creating imagesets (sets of named sub-images, with position and dimensions inside a big texture atlas) is supported there. Additionally, there is a tool for creating imagesets from just a bunch of jpg/png/... files. You would have to ask for specifics in the forum though, because it is not integrated into CEED yet.
So basically with CEGUI you are free to make whatever fantasy GUI you want. Skinning simple elements like buttons and progress bars isn't much work in XML anyway. Without the finished editor, some of the more advanced widgets are more work to skin, but many skins have already been created this way, and some of them are even publicly available in the forum and in the CEGUI stock files.
Regarding point 2: the StaticText widget supports what you want; you can even use images in there, or change fonts and colours within the text. Scrollbars are supported too.
Regarding point 3: I am not sure what you mean by this; you would have to specify it.
Regarding point 4: a simple "Generic/Image" widget is available in CEGUI for this purpose. You can use pre-created images or even RTT textures.
Regarding point 5: you can create and destroy windows in CEGUI without issues.
Regarding the map: I'm not sure exactly what you mean, but getting the position of a click with respect to an image (representing the map) is possible in CEGUI.
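A rough sketch of what that could look like (this assumes a CEGUI 0.8-style API; the widget names, the "MapImageset/FullMap" image and the cell size are only placeholders, so check them against the version you use):

#include <CEGUI/CEGUI.h>

// Assumes the CEGUI renderer, resource groups and a loaded scheme
// (e.g. TaharezLook) have already been set up elsewhere.
bool onMapClicked(const CEGUI::EventArgs& args)
{
    const CEGUI::MouseEventArgs& mouse =
        static_cast<const CEGUI::MouseEventArgs&>(args);

    // Convert the screen-space click into coordinates relative to the map window.
    CEGUI::Vector2f local = CEGUI::CoordConverter::screenToWindow(
        *mouse.window, mouse.position);

    // local.d_x / local.d_y can now be divided by the cell size to get the
    // grid square, exactly as described in the question.
    return true;
}

void createMapWindow()
{
    CEGUI::WindowManager& wm = CEGUI::WindowManager::getSingleton();

    // "Generic/Image" displays the Photoshop map; the image name is a placeholder.
    CEGUI::Window* map = wm.createWindow("Generic/Image", "WorldMap");
    map->setProperty("Image", "MapImageset/FullMap");
    map->subscribeEvent(CEGUI::Window::EventMouseClick,
                        CEGUI::Event::Subscriber(&onMapClicked));
}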
CEGUI is not particularly hard to learn. There are always the forums and the chat if you have questions. For an open-source project it is quite well documented, so if you read all of the API documentation and look at the supplied samples in the sample browser, you should already get quite far. And for everything else there is the forum (search), the IRC chat and a community wiki (mind the targeted versions of an article there, though).
For a project like yours, CEGUI seems perfectly suited (this is what it was created for in the first place). Qt is not really optimal for games for numerous reasons. wxWidgets I have never used.

Related

How can I programmatically interact with a video game GUI

Before I get shot down on this one, I realize that the 'how' answer for this question might be slightly debatable; however, I'm more interested in the 'what'.
In a nutshell, I want to know which methods I can use to interact with a PC video game interface. I want to create a program that can extract data from a video game market interface.
My first thought was that I would need to programmatically take screenshots and then use some optical character recognition software to extract the text, then run whatever operations on the extracted text to derive my insights.
Then I was thinking it might just be easier to have a bunch of mini screenshots that I use to find matches on certain sections of the screen. When a match is found, I would then know what the text on the screen is, without having to actually 'extract' it.
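For example, the matching part of that idea could be as small as this (a sketch using OpenCV, which I haven't committed to; the file names are made up):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // screen.png = a captured screenshot, template.png = one of the mini screenshots
    cv::Mat screen = cv::imread("screen.png");
    cv::Mat templ  = cv::imread("template.png");
    if (screen.empty() || templ.empty()) return 1;

    // Slide the template over the screenshot and score every position.
    cv::Mat result;
    cv::matchTemplate(screen, templ, result, cv::TM_CCOEFF_NORMED);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // A score close to 1.0 means the mini screenshot was found at maxLoc.
    if (maxVal > 0.9)
        std::cout << "match at " << maxLoc.x << "," << maxLoc.y << "\n";
    return 0;
}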
For those out there who have done this, can you point me in one direction or the other? Perhaps there is a method that I am completely unaware of.
If it's the case that this question is not suitable for this forum, it would be much appreciated if you could direct me elsewhere.
Edit: I should probably add that I'm not looking to spend a fortune on this project... so any free software would be the best. Perhaps that's a tall order.
I'm starting to think Sikuli is the direction I'm going to go. It's open-source image recognition software that integrates with Python, Ruby, Java, JDBC, JavaScript and more.
-- Expanding on the question --
There are basically 3 categories of tools:
Recorder: while you manually work through your workflow, a recorder tracks your mouse and keyboard actions. After stopping the recording, you can play it back (autorun your workflow). The recordings can usually be edited and augmented with additional features.
GUI-aware: the tool allows you to programmatically operate on GUI elements like buttons. This is based on knowledge of the internal structures and names of the GUI elements and their features. Some of these tools also have a recording feature.
Visual: the tool "sees" images (usually rectangular pixel areas) on the screen and allows you to act on these images using mouse and keyboard simulation. There might be some recorder feature in such a tool as well.
SikuliX belongs to the 3rd category and currently does not have a recorder feature.
Answer in progress...
In games with moddable UIs, like many MMOs, you could create a mod that streams data through a series of black and white squares that could be read with optical sensors. From there, a microcontroller could deliver the data back to the PC via USB or wifi.
My approach as a noob: first determine whether OCR is 100% needed; I think this plays a role in speed.
If possible:
-run the game in a window (allows for easy troubleshooting)
-check whether there is a high-contrast option for the game; it will help Sikuli find things
Then you plan out your scenarios:
You have to create different functions for different situations. A lot of gaming is "do you see this?" then "do this" until that is gone.
Start with the small parts you want to automate, then build on them, making sure your parts can scale in case small changes need to happen (they will). For instance, you want to open the menu if you see an object, let's say a tree.
Assume you have some sort of walking algorithm.
setROI(region1)     # focus the search on the region where the tree should appear
if exists(tRee):    # tRee is a saved image of the tree
    click(loCation) # or you could hit the shortcut key that opens the menu
    click(iTem)     # if the item moves in the menu you may need to scroll to find it first,
                    # or change the ROI and check whether Sikuli can tell your item apart
                    # from ones you don't want to click
You would get that to loop into other actions and proceed. Good luck.

What do I need to do this?

I am new to coding and wanted to get some hands on practice with a project I have in mind. Here it is:
Let's say you have a blank page, and on the side of the screen you have several items you can choose to draw on the blank page. For example, the background can be mountains, the ocean, a forest, etc. On top of that you can place a house, a church or another selectable element. Whatever you like.
It is like a picture editor where you can put together a picture with different pre-given elements. Or like in video games where you can create your own character.
What would I need to build a web application for that kind of thing?
This link should get you started but it won't be the complete answer to your question - http://www.webdesignerdepot.com/2013/08/how-to-use-html5s-drag-and-drop/
Essentially, you can achieve your image dragging and dropping using similar techniques. It will require a bit of spike work from you, and looking into how HTML5 handles drag and drop. I discovered this resource fairly quickly, and I think the solution you want isn't as complicated as you may think; it just requires a bit of know-how regarding drag and drop operations within HTML5 :-)
Also, there may already be some JavaScript-based APIs that make this sort of thing easier, but I'm not aware of any - I suppose starting this way could be a great introduction for you, and you may wish to expand once you've done some work on it :-)
Hope this helps you and your coding journey!

Image/form to Pascal/Delphi code converter?

Does anyone know of an editor that allows you to visually design a form (by form I do not mean a DFM or Delphi form, but a "paper form", like those pre-printed forms that you fill in with some info) and that generates Pascal commands to draw that form on a Printer (or Image) canvas?
What I want is an easy way to draw/design this form visually, composed just of lines and text, and a way to convert this into Pascal commands that, when run, will draw the form on a canvas (Image or Printer), respecting the original layout and scale, regardless of the DPI of the canvas it is being drawn on.
Update: Maybe I wasn't clear enough about what I need and why I need it. I developed an open-source component called TFreeBoleto (freeboleto.sf.net). It is used to generate and print bank billets (a common method for billing people in Brazil). Right now, the component uses a TBitmap image containing the "billet" mask, and TextOut methods for the dynamic areas (i.e. billet number, customer name, etc.). It looks fine on the screen, but some people complain that the quality of the printed image is not good. The component uses a BltTBitmapAsDib procedure to maximize the print quality, but some people still think it is not good enough. So my idea was to avoid using a bitmap image as the form layout, and to draw everything directly on the canvas (both form and printer). Check here for a sample of what a bank billet looks like.
Of course ReportBuilder and/or FastReport could solve the problem, but they are not free, so I cannot include them in the component. I need a "native" solution that any standard Delphi install would be able to compile.
You might get what you want out of the Fast Reports Report Designer which is a commercial reporting system for Delphi. Remember that a report is just a page. That page can be shown on the screen or printed on the printer.
You also might find that something like TRichView helps you.
Whether using TRichView in particular or not, I would look into using HTML to do what you want. I would use HTML+CSS to do both a screen and printer layout, that can also be viewed on the web. For simple text layout plus text boxes I think even bare HTML and HTML tables might be sufficient. To visually design simple text pages, using a Delphi application, I would use TRichView.
In both cases, you would be creating documents, not code. To create code that creates a page, without using any document system, would be very difficult indeed, and I am not sure what you would really do with that code, since you would need a compiler or interpreter to convert that code into something that you could use. Please clarify what you mean by "creating code", and what syntax you would want that code to be using. If HTML is code in your definition of "code" then maybe HTML is the best kind of "code" for your problem.
I do my form work with WPTools. It is also a commercial product. The core is a very good word processor and form designer. The engine can render text and forms to any canvas (screen, printer, and it can also create PDFs) and is highly flexible. Output is mainly RTF and HTML.
I also see no advantage in creating Pascal code to redraw the form. What you need, I think, is a good WYSIWYG editor which creates a document that fits your needs.
Check out ReportBuilder at http://www.digital-metaphors.com/
It is a commercial reporting tool for Delphi - around a long time, very high quality, with all native Delphi source code packaged with it. I am using it for an important commercial project right now and I recommend it highly (I'm not working for them.) I've used MANY Delphi reporting tools over the years and this one is the best IMO.
RBuilder also has extensive support for paper form emulation see:
http://www.digital-metaphors.com/products/report_design/form_emulation.html
I haven't worked with that feature, but you can download a full-featured demo and try it.
You can use Adobe Acrobat (the full version) to create forms.
Then you can use the free Acrobat Reader, or another COM object, in your application to display and print the forms.
I think it is best solution for you.
PS
All report tools that are included in Delphi are free for you to design forms with, and are free to distribute if the user only previews and prints already-designed reports.
The same is valid for Adobe Acrobat (you may distribute forms), but you have added that you need to print a form plus some text over the form. Maybe it is easier if you use reports, but it is possible to do the same using PDF.
Most report engines are not open source but are free to distribute. There are many components for creating PDFs - paid (one-time), free, as well as open source.
PPS
I have read your update for a second time. Since you are using TBitmap and you can do TextOut: you can use TMetafile. There are many editors for metafiles, and metafiles are free to distribute.

OpenGL window with mouse control for Win32

I am new to OpenGL and almost new to C++.
I am looking for some code that does the following things.
Open an OpenGL window (maybe using GLUT)
Rotate the viewpoint when the user presses the left mouse button
Zoom when the user presses the right mouse button
Translate the point of view when the user presses the middle button
Basically what I need is a very simple graphics platform in which I will plot results coming from my algorithms. I have tried using the GLUT library and some code from the web, but no luck!
This should be a basic project; can you please point me to where to find it? It just seems unreal to me that such a simple project turns out to be so hard to find, but I have been googling for hours with no results.
I really appreciate your help,
thank you very much
You're asking for a fair amount of code there. Basic, but not insubstantial. Even if we do provide the code to do what you've asked, I'm not sure you'll be able to use it to do what you want. The Red Book is a "bible" of OpenGL programming of sorts and will show you many of the functions and how to use them. I found the entire thing online here. Look into Chapters 1-3 for your drawing and rotating. Also, Lighthouse 3D has some great tutorials for you to look at for mouse events (Link). Some knowledge of linear algebra really helps, but you can manage without it.
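To give you a concrete starting point, here is a minimal sketch of the kind of viewer those tutorials build up to (it assumes classic GLUT/freeglut and the old fixed-function pipeline; the teapot is just a stand-in for whatever you plot from your algorithms):

#include <GL/glut.h>
#include <GL/glu.h>

static float rotX = 0.0f, rotY = 0.0f;   // view rotation in degrees
static float zoom = -5.0f;               // distance from the origin
static float panX = 0.0f, panY = 0.0f;   // view translation
static int lastX = 0, lastY = 0;         // last mouse position
static int activeButton = -1;            // which button is currently held

static void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(panX, panY, zoom);          // pan and zoom
    glRotatef(rotX, 1.0f, 0.0f, 0.0f);       // rotate around X
    glRotatef(rotY, 0.0f, 1.0f, 0.0f);       // rotate around Y
    glutWireTeapot(1.0);                     // placeholder for your own results
    glutSwapBuffers();
}

static void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)w / (h ? h : 1), 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}

static void mouse(int button, int state, int x, int y)
{
    activeButton = (state == GLUT_DOWN) ? button : -1;
    lastX = x; lastY = y;
}

static void motion(int x, int y)
{
    int dx = x - lastX, dy = y - lastY;
    if (activeButton == GLUT_LEFT_BUTTON)        { rotY += dx * 0.5f; rotX += dy * 0.5f; }
    else if (activeButton == GLUT_RIGHT_BUTTON)  { zoom += dy * 0.05f; }
    else if (activeButton == GLUT_MIDDLE_BUTTON) { panX += dx * 0.01f; panY -= dy * 0.01f; }
    lastX = x; lastY = y;
    glutPostRedisplay();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(800, 600);
    glutCreateWindow("Simple viewer");
    glEnable(GL_DEPTH_TEST);
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMouseFunc(mouse);
    glutMotionFunc(motion);
    glutMainLoop();
    return 0;
}

Left drag rotates, right drag zooms, middle drag pans; you would replace the teapot call with your own drawing code.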
I don't think it directly implements everything you want, but you might want to look at the 3D graph control on Code Project. This is hardly unique though -- you might want to Google for something like "opengl activex" and look at some of the alternatives. I doubt any will directly implement all you've asked for -- they'll probably include most of the basic operations, but it'll be up to you to make the connection between the mouse operations and the actions in the window.

Best language for quickly creating user interfaces without drag and drop? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I'm a blind college student who is taking an introduction to programming class that focuses on user interface design. The class is using Processing, which is completely inaccessible. I'm looking for a language that will allow me to create GUIs without drag and drop, and hopefully be smart enough to do most of the layout without forcing me to specify control positions in pixels.
I know Perl, Java, C/C++, c#, and HTML. I was considering creating HTA applications. My only requirements are that the language must run under MS Windows, and must not use SWING or GTK as the underlying toolkit.
I would say that XAML would be a good choice:
-pixel manipulation is not needed
-item functionality goes in the code-behind
-you can add pixel-level adjustments for a control later on
-there is a lot of documentation on how to use it
Maybe if you give us an idea of what you will need the language for we can give you better suggestions.
Speaking as a blind programmer:
C# + WinForms: You can either create the code by hand and use layout managers or calculate the sizes in your head, or if you're using the JAWS screen reader then there are scripts which will help you in the WinForms designer.
C# + WPF: Here you define your UI in XAML (an XML dialect), but it is more complex to get your head around. Certainly look at this as it is a very nice solution. The other problem with WPF at the moment is that not all screen readers support this newer technology.
Jamal Mazrui at www.EmpowermentZone.com has created something called "Layout By Code", but I have no experience with this.
HTML+Javascript would be nice, but I doubt it'd be allowed in your course.
wxWidgets: I don't have a lot of experience with this cross-platform, multi-language UI toolkit, but I believe it has layout managers and is thus used by several blind programmers I know.
Finally, I used to design Win32 resource scripts by hand, calculating sizes in my head (no layout managers). This is certainly achievable if you wanted to take this route.
In summary, WPF's nice, but make sure your screen reader works with this kind of app. The next best alternative is probably WinForms. If you like Layout By Code then use it, but if this is a skill you want for employment, then keep that in mind.
Take a look at XAML. I think it could be a good start for both modern Windows and web UI creation.
Tcl/Tk will do exactly what you want. The pack and grid layout managers are based on logical relative placement of the widgets.
Although the "native" language of Tk is Tcl, many other languages have a Tk binding.
label .l -text "this is a label"
button .b -text "Quit" -command exit
# pack lays the widgets out top-to-bottom with no pixel coordinates
pack .l .b
Check out this project on CodePlex. It may help you (as an alternative to Processing & Java):
http://bling.codeplex.com/
Bling is a C#-based library for easily programming images, animations, interactions, and visualizations on Microsoft's WPF/.NET. Bling is oriented towards design technologists, i.e., designers who sometimes program, to aid in the rapid prototyping of rich UI design ideas. Students, artists, researchers, and hobbyists will also find Bling useful as a tool for quickly expressing ideas or visualizations. Bling's APIs and constructs are optimized for the fast programming of throwaway code as opposed to the careful programming of production code.
Bling has the following features that aid in the rapid prototyping of rich UIs:
* Declarative constraints that maintain dynamic relationships in the UI without the need for complex event handling. For example, button.Width = 100 - slider.Value causes button to shrink as the slider thumb is moved to the right, or grow as it is moved to the left. Constraints have many benefits: they allow rich custom layouts to be expressed with very little code, they are easy to animate, and they support UIs with lots of dynamic behavior.
* Simplified animation with one line of code. For example, button.Left.Animate.Duration(500).To = label.Right will cause button to move to the right of label in 500 milliseconds.
* Pixel shader effects without the need to write HLSL code or boilerplate code! For example, canvas.CustomEffect = (input, uv) => new ColorBl(new Point3DBl(1,1,1) - input[uv].ScRGB, input[uv].ScA); defines and installs a pixel shader on a canvas that inverts the canvas's colors. Pixel shading in Bling takes advantage of your graphics card to create rich, pixel-level effects.
* Support for multi-pass bitmap effects such as diffuse lighting.
* An experimental UI physics engine for integrating physics into user interfaces! The physics supported by Bling is flexible, controllable, and easy to program.
* Support for 2.5D lighting.
* A rich library of geometry routines; e.g., finding where two lines intersect, the base of a triangle, the area of a triangle, or a point on a Bezier curve. These routines are compatible with all of Bling's features; e.g., they can be used to express constraints, pixel shaders, or physical constraints. Bling also provides a rich API for manipulating angles in both degrees and radians.
* And many smaller things; e.g., a frame-based background animation manager and slide presentation system.
* As a lightweight wrapper around WPF, Bling code is completely compatible with conventional WPF code written in C#, XAML, or other .NET languages.
Bling is an open source project created by Sean McDirmid and friends to aid in design rapid prototyping. We used Bling to enhance our productivity and would like to share it with other WPF UI design prototypers.
I'd probably try using C#. It has reasonably friendly interfaces to the Windows common controls and the like, even without making use of drag and drop. Just don't use the designer, and code as normal.
I don't program in Java, but I know that Java provides for the programmatic creation of the UI and provides some wonderful layout management components (native to Java, without requiring Swing). I first got exposed to layout managers back in the good old days of X11 with X Toolkits (anybody remember Motif, OpenLook, HP OpenView?) and Java seems to have adopted similar technology.
You can create Windows, Dialogs and Menus all from simple layout managers.
Being sighted myself, and not having worked too closely on anything that has ever been audited for accessibility or heavily used by blind users, I don't think my answer will be terribly thorough. My first instinct, however, is that some kind of dynamic web-server architecture that generates HTML, like C#, PHP or ColdFusion, fits your description of handling most of the layout for you without requiring that you specify control positions in pixels. You certainly can specify control positions in pixels via CSS, but it's not required. And I know HTML also has well-defined standards for accessibility, whereas I'm not sure what the status is on accessibility standards for other kinds of software.
You could use JavaScript and HTML. There's a port of Processing to JavaScript, so you know that it is powerful enough for the things that your class will cover. You can author HTML without knowing a single thing about what it looks like. In fact, that is the preferred way to author HTML.
The main downside of JavaScript is not JavaScript itself, but the browser DOM. That is the interface for controlling the HTML elements. However, a library like jQuery, MooTools, or Dojo can take care of most of those problems.
As for accessibility, have a look at WAI-ARIA and also Opera's intro to WAI-ARIA.
WAI ARIA is a way to build rich javascript applications while playing nice with screen readers. It's very cool. I've not seen more work and passion put into making the web stack accessible in any other programming stack.
