I am curious about this. I have to learn Prolog for my course, but the applications I have seen are mostly written in C++, C# or Java; applications written in Prolog seem very rare to me.
So I wonder: how is Prolog actually used to implement real-world applications?
The SWI-Prolog website is served from... SWI-Prolog itself, using just a small subset of the available libraries.
Well, it's not a commercial application, but it is rather real world.
Much effort went into making the runtime capable of 24x7 service (mainly garbage collection) and of the required performance scalability (multithreading, among other things).
Several libraries were developed to meet the needs of real-world applications.
I once asked my supervisor a similar question when he was giving us a lecture on Prolog.
He told me that people do not really use Prolog to implement a whole huge system. Instead, they write the main part in another language (which is saner and more straightforward) and link it to a "decision procedure" or something similar written in Prolog.
I'm not sure about other Prolog implementations; we were using BProlog, which provides a C/Java interface.
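To make that architecture concrete, here is a minimal sketch of the "main program in another language, decision procedure in Prolog" idea. It uses SWI-Prolog's C foreign-language interface from C++ purely as an illustration (BProlog's C/Java interface differs); the file decision.pl and the predicate eligible/1 are hypothetical.

```cpp
// Sketch: the host application is ordinary C++, the decision logic lives in a
// Prolog rule base. decision.pl and eligible/1 are hypothetical, e.g.:
//   eligible(alice).
//   eligible(X) :- trusted(X), \+ blacklisted(X).
#include <SWI-Prolog.h>
#include <cstdio>

// Call a unary Prolog predicate with an atom argument; true if it succeeds.
static bool call_unary(const char *pred_name, const char *atom)
{
    predicate_t pred = PL_predicate(pred_name, 1, "user");
    term_t arg = PL_new_term_refs(1);
    PL_put_atom_chars(arg, atom);
    return PL_call_predicate(NULL, PL_Q_NORMAL, pred, arg) == TRUE;
}

int main(int argc, char **argv)
{
    (void)argc;
    // Start the embedded Prolog engine quietly.
    char *plav[] = { argv[0], (char *)"-q", NULL };
    if (!PL_initialise(2, plav))
        PL_halt(1);

    call_unary("consult", "decision");            // load decision.pl
    bool ok = call_unary("eligible", "alice");    // let Prolog decide
    std::printf("alice eligible: %s\n", ok ? "yes" : "no");

    PL_halt(0);
}
```

The point is that the host program stays a thin shell; the actual rules live in decision.pl and can be changed without touching the C++ side.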
Microsoft Windows NT Networking Installation and Configuration applet
One of the notorious and in a way notable examples is Microsoft Windows NT OS network interface configuration code that involved a Small Prolog interpreter built in. Here is a link to the story written by David Hovel for Dr. Dobbs. (The often cited Microsoft Research link seems to be gone.)
Expert systems
Prolog was once considered THE language for a class of software systems called expert systems. These were interactive knowledge-management systems, often with a relational database backend.
Beyond Prolog
More generally, rule-based programming, resolution and various automated reasoning systems are widely used beyond Prolog.
According to the Tiobe Software Index, Prolog is currently #36: between Haskell and FoxPro:
http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
What's it used for?
I first heard of it with respect to Japan's (now defunct) "Fifth Generation" project:
http://en.wikipedia.org/wiki/Fifth_generation_computer
Frankly, I'm not really aware of anybody using Prolog for any serious commercial development.
I'm sure this depends heavily on a few variables. Here are the ones I can think of:
-Desktop, mobile, web, or server application
-With GUI, without GUI
-Object-oriented, non-object-oriented
-Choice of language
So what design patterns are the most prevalent? Which designs are most advanced? Thanks
This question is quite academic. But I'll give the best answer I can:
The software development process first involves obtaining the set of high level goals, and functional requirements from the stakeholders.
Stakeholders are defined as the paying client, the end users, your bosses and co-workers involved with the project, and anyone else on whom the project will have a direct influence.
High-level goals are things like "It needs to be easy to use, because our end users are volunteers with limited computer knowledge" or "It needs to be completely secure because we are storing sensitive personal information".
Functional requirements are the nitty-gritties. "We need to store information about people. First Name and Surname need to be at least 50 characters... etc.".
And from there you consider the relative strengths and weaknesses of each approach.
You want a mobile app? Strengths include portability and versatility. Weaknesses: will the end users even have a phone capable of running the app? Or does the client intend their end users to be only people with a smartphone?
Without reference to a specific project, I would say that the Goals of the project would affect what platform the application will run upon, and the choice of GUI. And the Functional Requirements would influence the choice of programming language.
So I hope I got my point across: you would be better off getting a broad understanding of the strengths and weaknesses of each technology and approach; it is simply the mark of a professional to be able to apply them correctly to each project you come across, often negotiating these strengths, weaknesses and associated costs with the client.
I see that Ruby is a big success when it comes to web programming. However, for desktop applications and scripts, I do not see it being heavily used. In fact, most Linux distros do not install it by default; most applications are coded in Python and some in Perl. What advantages can Ruby offer over Python when it comes to desktop applications and scripts? If I am writing a Linux application, say a music player, how can Ruby's blocks and metaprogramming techniques help?
Edit:
I see that some have voted to close this question, perhaps because it could escalate into a language war. Fear not, I am a day-time Python programmer. I am trying to reconcile these seemingly incompatible observations. It is a fact that most Linux distros do not come with Ruby installed. It is also a fact that most Linux apps are coded in Python. And it is also a fact that Ruby has more advanced metaprogramming features than Python, which can make development easier. I am wondering why Ruby is not used as much in Linux application development, which has been a playground for scripting languages.
Python has become popular on the Linux side because many distributions have built their various front-end tools using it so it's guaranteed to be available.
Ruby does have Qt bindings that might be what you're looking for, and it's possible to write wrappers for any C or C++ library you need to interface with.
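As an illustration of that last point, here is a minimal sketch of a Ruby extension written in C++. The module NativeAudio, the method rms_volume and the audio theme are made up; a real wrapper would bind an existing library and be built via an extconf.rb.

```cpp
// native_audio.cpp - a tiny Ruby extension compiled as C++.
// A stand-in for "wrapping any C or C++ library": the wrapped functionality
// here is just a trivial native computation (names are made up).
#include <ruby.h>
#include <cmath>

// NativeAudio.rms_volume(left, right) -> Float
static VALUE rms_volume(VALUE self, VALUE left, VALUE right)
{
    (void)self;  // unused
    double l = NUM2DBL(left);
    double r = NUM2DBL(right);
    return DBL2NUM(std::sqrt((l * l + r * r) / 2.0));
}

// Ruby calls Init_<name> when `require "native_audio"` loads the .so.
extern "C" void Init_native_audio(void)
{
    VALUE mod = rb_define_module("NativeAudio");
    rb_define_module_function(mod, "rms_volume", RUBY_METHOD_FUNC(rms_volume), 2);
}
```

From Ruby it would then be used as NativeAudio.rms_volume(0.5, 0.7) after require "native_audio".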
In the end it all comes down to finding a suitable example to learn from. You may find that there are far more Python examples to refer to and this may affect your decision.
Both languages are equally capable on the whole, and the default distributions are similar in terms of performance. Python's new PyPy compiler is faster if you don't mind sticking to Python 2.7, and there's also Rubinius, which is an effort to boost Ruby's performance.
I have heard here a lot that MFC is outdated and wraps Win32 in bad OOP.
But Microsoft released MFC 10 with VS2010, and it has up-to-date features like the ribbon interface, so should one use it for simple apps?
MFC is widely considered to be poorly designed but it has been updated recently and appears to be alive and well. You may decide not to use it for other reasons, but you should not reject it for being outdated.
MFC doesn't look bad once you understand it; otherwise you may well consider it poorly designed.
MFC is not outdated: a lot of complex applications use MFC, for example MS Office. You can even find samples showing how to make an Office-look application.
Also, for simple apps you can use ATL.
I have seen that MFC applications are noticeably faster; in particular, they load faster than comparable C# applications. The compelling argument I have heard from C# people is that development is much faster than with MFC. In terms of performance, MFC wins hands down.
You can make the call based on what kind of application you are developing and what features you need. The trend, unfortunately, is moving away from MFC, though I personally can't justify it. I know a couple of companies who are planning to migrate MFC applications to .NET. My friends there told me it's mainly the faster development time and the ease of development.
If you are bold you can still start a new application with MFC and do better than those who would with C#. If you just want to go with the trend, then use the newer, easier tools and take a small hit in performance.
Overall, I would definitely not sideline MFC, as it might be the only fit for certain high-performance applications. For example, I love the GUI threads in MFC, which don't exist in plain C++ but are very powerful if used properly. I don't know whether they exist in C# or newer languages, but I wouldn't like to give them up.
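For readers unfamiliar with them, here is a minimal sketch of such a GUI thread (a CWinThread with its own message pump and window). The class name CStatusThread and the window caption are made up, and the snippet assumes a project already set up to build against MFC.

```cpp
// A UI thread in MFC: unlike a worker thread, it runs its own message loop,
// so the window it owns stays responsive independently of the main thread.
// (Class name and caption are hypothetical.)
#include <afxwin.h>

class CStatusThread : public CWinThread
{
    DECLARE_DYNCREATE(CStatusThread)
public:
    virtual BOOL InitInstance()
    {
        CFrameWnd *frame = new CFrameWnd;
        frame->Create(NULL, _T("Background status"));
        frame->ShowWindow(SW_SHOW);
        frame->UpdateWindow();
        m_pMainWnd = frame;   // the thread exits when this window is destroyed
        return TRUE;
    }
};
IMPLEMENT_DYNCREATE(CStatusThread, CWinThread)

// Launched from the main application, e.g. in a menu handler:
//     AfxBeginThread(RUNTIME_CLASS(CStatusThread));
```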
Where can I find resources related to the design and development of text-based user interfaces (e.g. interfaces exported via serial port from embedded devices to VT100 terminals)? I am interested in any material available - best practices, style guides, frameworks, etc.
Note that I am asking about resources related to the design and development of 'TUIs' rather than command-line interfaces (the thrust of Text User Interface Design Reference?). Wikipedia differentiates TUIs from CLIs (and GUIs) as follows:
TUIs are different from command-line interfaces in that, like GUIs, they use the entire screen area and do not necessarily provide line-by-line output. However, TUIs only use text and symbols available on a typical text terminal, while GUIs typically use high-resolution graphics modes.
I don't have any experience with VT100 and that kind of stuff, but I know that Turbo Vision is still around and kicking on quite a few platforms, DOS and Linux included. Back in its day it was used to write some of the better TUI applications (the Borland C++ and Borland Pascal DOS IDEs come to mind), and I saw it used in LOB applications quite often back then as well.
Screenshot: (source: sourceforge.net)
Perhaps take a look at ncurses? It's a GNU library specifically designed for writing terminal-based UIs.
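To give a feel for what that looks like in practice, here is a minimal ncurses sketch (compile with g++ menu.cpp -lncurses). The menu items are made up; the point is the full-screen, non-line-by-line interaction style described in the Wikipedia quote above.

```cpp
// menu.cpp - a tiny full-screen menu driven with the arrow keys.
// (Menu entries are made up for illustration.)
#include <ncurses.h>
#include <string>
#include <vector>

int main()
{
    initscr();              // enter curses mode, take over the whole screen
    cbreak();               // deliver keys immediately, no line buffering
    noecho();               // don't echo typed characters
    keypad(stdscr, TRUE);   // decode arrow keys

    std::vector<std::string> items = {"Status", "Configure", "Diagnostics", "Exit"};
    int selected = 0, key = 0;

    do {
        clear();
        mvprintw(0, 2, "Device menu (arrows to move, Enter to quit)");
        for (int i = 0; i < (int)items.size(); ++i) {
            if (i == selected) attron(A_REVERSE);      // highlight current item
            mvprintw(i + 2, 4, "%s", items[i].c_str());
            if (i == selected) attroff(A_REVERSE);
        }
        refresh();

        key = getch();
        if (key == KEY_UP && selected > 0) --selected;
        if (key == KEY_DOWN && selected + 1 < (int)items.size()) ++selected;
    } while (key != '\n' && key != KEY_ENTER);

    endwin();               // restore the terminal
    return 0;
}
```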
For best practices and style guides, the IBM Common User Access (CUA) defines a "text subset" that should be helpful especially if your users are used to GUIs. Details are in Chapter 3 of:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/F29BDG00/CCONTENTS
Additional CUA guidelines and standards are in:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/F29AL000/CCONTENTS?DT=19921204095534
http://petesqbsite.com/sections/express/issue21/tuiseriespart1.htm
http://en.wikipedia.org/wiki/Text-based_user_interface
Hope it helps... I still write programs with TUIs (www.harbour-project.org).
We recently built a large ASP.NET web forms application for a client and the main point of contact has told me he wants to learn more about the technical side of web applications. He has no programming experience and has a primarily business background.
I have provided him with many online resources, however he would like to get some book recommendations. After searching myself, I can't seem to find any that fit the bill.
I am looking for a book that will:
Provide a high-level introduction to internet technologies (HTTP, TCP/IP, servers, web farms, hosting, scripting languages, etc.).
Cover issues that commonly affect the success or failure of web applications (performance & scalability, security, data integrity, server maintenance).
Give a very basic introduction to web development (ideally in the ASP.NET world, but not important).
Introduce typical web application architectures (for example, describing N-Tier systems and SOA).
I can obviously find tons of books on each of the topics mentioned above, however I can't seem to find any that would be targeted at people that are not (would-be) web developers.
Anyone have any recommendations?
How about this one? Learning Web Design: A Beginner's Guide to (X)HTML, StyleSheets, and Web Graphics by O'Reilly?
It covers most of your topics, but unfortunately doesn't really cover the programming aspect - just the scripting. A good start nonetheless.
Then, if he's still up for it, you can hit him with Beginning Web Development, Silverlight, and ASP.NET AJAX: From Novice to Professional by Apress, which would finish the job and introduce him to MS technologies. "It adopts a “zero to hero” approach..." which is what you are looking for.
You might get better responses on ServerFault since you're asking about a book that mainly centers on server administration rather than the programming aspect.