What is Unifying Logic within the Semantic Stack Model and who is supposed to take care of it? - logic

Basically, after hours of research I still don't understand what the Unifying Logic layer in the Semantic Web Stack model is, or whose responsibility it is to provide it.

I think this depends on what your conceptualisation of the semantic web is. Suppose the ultimate expression of the semantic web is to make heterogeneous information sources available via web-like publishing mechanisms to allow programs - agents - to consume them in order to satisfy some high-level user goal in an autonomous fashion. This is close to Berners-Lee et al's original conceptualisation of the purpose of the semantic web. In this case, the agents need to know that the information they get from RDF triple stores, SPARQL end-points, rule bases, etc, is reliable, accurate and trustworthy. The semantic web stack postulates that a necessary step to getting to that end-point is to have a logic, or collection of logics, that the agent can use when reasoning about the knowledge it has acquired. It's rather a strong AI view, or well towards that end of the spectrum.
However, there's an alternative conceptualisation (and, in fact, there are probably many) in which the top layers of the semantic web stack, including unifying logic, are not needed, because that's not what we're asking agents to do. In this view, the semantic web is a way of publishing disaggregated, meaningful information for consumption by programs but not autonomously. It's the developers and/or the users who choose, for example, what information to treat as trustworthy. This is the linked data perspective, and it follows that the current stack of standards and technologies is perfectly adequate for building useful applications. Indeed, some argue that even well-established standards like OWL are not necessary for building linked-data applications, though personally I find it essential.
As to whose responsibility it is, if you take the former view it's something the software agent community is already working on, and if you take the latter view it doesn't matter whether something ever gets standardised because we can proceed to build useful functionality without it.

Related

Does it make sense to use actor/agent oriented programming in Function as a Service environment?

I am wondering whether it is possible to apply an agent/actor library (Akka, Orbit, Quasar, JADE, Reactors.io) in a Function as a Service environment (OpenWhisk, AWS Lambda)?
Does it make sense?
If yes, what is a minimal example that demonstrates the added value (something missing when we use only FaaS or only an actor/agent library)?
If no, can we construct a decision graph that helps us decide whether a given problem calls for an actor/agent library, FaaS, or something else?
This is a more opinion-based question, but I think that in its current shape there's no sense in putting actors into FaaS. The opposite actually works quite well: OpenWhisk is implemented on top of Akka.
There are several reasons:
FaaS in its current form is inherently stateless, which greatly simplifies things like request routing. Actors are stateful by nature.
In my experience, FaaS functions are usually disjointed - of course you need some external resources, but the mental model is one of generic resources and capabilities. In actor models we tend to think in terms of particular entities represented as actors (e.g. the user Max) rather than a table of users. (I'm not covering here the use of actors solely as a unit of concurrency.)
FaaS invocations have a very short lifespan - this is one of the founding ideas behind them. Since creation, placement, and state recovery for more complex actors may take a while, and you usually need a lot of them to perform a single task, you may reach a point where restoring the state of the system takes more time than actually performing the task that state is needed for.
That being said, it's possible that these two approaches will eventually converge, but that needs to be accompanied by changes in both the mental and the infrastructural model (i.e. actors live in a runtime, which FaaS must be aware of). IMO, setting up existing actor frameworks on top of existing FaaS providers is not feasible at this point.
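To make the stateless/stateful mismatch concrete, here is a minimal sketch in Python. The names and shapes here are invented for illustration - they don't come from any real FaaS SDK or actor framework:

```python
# FaaS-style handler: stateless, all context arrives in the event.
def handle(event):
    # Every invocation is independent; state lives in external resources.
    return {"total": sum(event["items"])}

# Actor-style entity: an identity plus long-lived in-memory state.
class UserActor:
    def __init__(self, user_id):
        self.user_id = user_id   # a particular entity, e.g. user "Max"
        self.cart = []           # state that must survive between messages

    def receive(self, message):
        if message["type"] == "add_item":
            self.cart.append(message["item"])
        elif message["type"] == "checkout":
            return {"total": sum(self.cart)}

actor = UserActor("max")
actor.receive({"type": "add_item", "item": 5})
actor.receive({"type": "add_item", "item": 7})
```

The handler could be restarted between any two calls with no loss; the actor cannot, which is exactly the state-recovery cost described above.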

What was MVC invented for? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am new to web development, and have read some wikis and discussions about MVC. However, the more I read, the more confused I become about its design purpose.
I just want to know why this design pattern was invented, and what problem it is used to solve.
Thanks in advance.
The goal of the MVC paradigm is, in essence, to ensure a form of separation of code. A problem that often arises when developing software is that the code is written as one long succession, where each part follows another and is directly dependent on what the other parts are doing.
When working on a large project, maintaining and further developing such code can quickly become an issue. You could therefore argue, in a simplified manner, that what the MVC paradigm tries to do is ensure that you separate the business logic (the code that performs the work) from the presentation logic (the code that shows the results). These two parts still need to communicate with each other, which is what the controller is responsible for.
This allows for a clear structure of code where the different parts are more decoupled, meaning less dependent upon each other.
The separation also means that you work in a much more modular way, where each part interacts with the others through an interface (some defined functions and variables used to call upon other parts), so that you can change the underlying functionality without having to change other parts of your code, as long as the interface remains the same.
So the problem it tries to solve is to avoid having a code base that is so entangled that you can't change or add anything without breaking the code, meaning you have to modify the code in all sorts of places beyond where you made your original changes.
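A minimal sketch of this separation, in plain Python with no framework assumed (class and method names are invented for the example): the model knows nothing about presentation, the view knows nothing about storage, and the controller wires the two together through their interfaces.

```python
class UserModel:
    """Business logic and data: the code that performs the work."""
    def __init__(self):
        self._users = []

    def add_user(self, name):
        self._users.append(name)

    def all_users(self):
        return list(self._users)

class UserView:
    """Presentation logic: the code that shows the results."""
    def render(self, users):
        return "Users: " + ", ".join(users)

class UserController:
    """Mediates between model and view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def create_user(self, name):
        self.model.add_user(name)
        return self.view.render(self.model.all_users())

controller = UserController(UserModel(), UserView())
output = controller.create_user("Alice")
```

You could swap UserView for an HTML or JSON renderer without touching UserModel at all - that is the decoupling the paragraph above describes.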
To some degree it's a solution in search of a problem.
As a rather ancient programmer I'm well aware of the benefits of "separation of concerns", but (in my not-so-humble opinion) MVC doesn't do this very well, especially when implemented "cook-book" fashion. Very often it just leads to a proliferation of modules, with three separate modules for every function, and no common code or common theme to tie things together and accomplish the real goal: minimize complexity and maximize reliability/maintainability.
"Classical" MVC is especially inappropriate in your typical phone GUI app, where, eg, management of a database table may be intimately connected to management of a corresponding table view. Spreading the logic out among three different modules only makes things more complicated and harder to maintain.
What does often work well is to think about your data and understand what sorts of updates and queries will be required, then build a "wrapper" for the database (or whatever data storage you use), to "abstract" it and minimize the interactions between the DB and the rest of the system. But planning this is hard, and a significant amount of trial and error is often required -- definitely not cook-book.
Similarly you can sometimes abstract other areas, but abstracting, say, a GUI interface is often too difficult to be worthwhile -- don't just write "wrappers" to say you did it.
Keep in mind that the authors of databases, GUI systems, app flow control mechanisms, etc, have already put considerable effort (sometimes too much) into abstracting those interfaces, so your further "abstraction" is often little more than an extra layer of calls (especially if you take the cook-book approach).
Model-view-controller was created to separate concerns in code instead of creating a hodgepodge all in a single blob (spaghetti code). The view code is merely presentation logic, the model is your objects representing your domain, and the controller handles negotiating business logic and integrations with services on the backend.

Can TDD Handle Complex Projects without an upfront design?

The idea of TDD is great, but I'm trying to wrap my head around how to implement a complex system if a design is not proposed up front.
For example, let's say I have multiple services for a payment processing application. I'm not sure I understand how development would or can proceed across multiple developers if there isn't a somewhat solid design up front.
It would be great if someone can provide an example and high level steps to putting together a system in this manner. I can see how TDD can lead to simpler and more robust code, I'm just not sure how it can bring together 1) different developers to a common architectural vision and 2) result in a system that can abstract out behavior in order to prevent having to refactor large chunks of code (e.g. accept different payment methods or pricing models based on a long term development roadmap).
I see the refactoring as a huge overhead in a production system where data model changes increase risks for customers and the company.
Clearly I'm probably missing something that TDD gurus have discovered...
IMHO, it depends on the team's composition and appetite for risk.
If the team consists of several experienced and good designers, you need a less formal 'architecture' phase. It could be just a back-of-the-napkin doodle or a couple of hours at the whiteboard, followed by furious coding to prove the idea. If the team is distributed and/or contains many less skilled designers, you'd need to put more time and effort (thinking and documenting) into the design phase before everyone goes off on their own path.
The next item that I can think of is to be risk-first. Continually assess the risks to your project, calculate your exposure and impact, and have mitigation plans. Focus on risky and difficult-to-reverse decisions first. If a decision is easily reversible, spend less time on it.
Skilled designers are able to evolve the architecture in tiny steps; if you have them, you can tone down the rigor of an explicit design phase.
TDD can necessitate some upfront design, but definitely not a big design upfront, because no matter how perfect you think your design is before you start writing code, most of the time it won't pass the reality check TDD forces on it and will blow to pieces halfway through your TDD session (or your code will blow up if you absolutely want to bend it to your original plan).
The great force of TDD is precisely that it lets your design emerge and refine as you write tests and refactor. Therefore you should start small and simple, making the least assumptions possible about the details beforehand.
Practically, what you can do is sketch out a couple of UML diagrams with your pair (or the whole team if you really need a consensus on the big picture of what you're going to write) and use these diagrams as a starting point for your tests. But get rid of these models as soon as you've written your first few tests, because they would do more harm than good, misleading you to stick to a vision that is no longer true.
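The rhythm described above can be illustrated with a tiny, hypothetical example using only the standard library's unittest module. The function and its behavior are invented for the sketch; the point is the order of events noted in the comments:

```python
import unittest

def price_with_tax(amount, rate):
    # Written *after* the test below, and only as much as the test demands.
    return round(amount * (1 + rate), 2)

class TestPricing(unittest.TestCase):
    def test_applies_tax_rate(self):
        # Step 1 (red): this test existed first and failed.
        # Step 2 (green): price_with_tax was implemented minimally to pass.
        # Step 3 (refactor): reshape freely; the test is the safety net.
        self.assertEqual(price_with_tax(100.0, 0.2), 120.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPricing)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each new requirement arrives as a new failing test, and the design grows only in response to it, rather than from an upfront model.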
First of all, I don't claim to be a TDD guru, but here are some thoughts based on the information in your question.
My thoughts on #1 above: As you have mentioned, you need to have an architectural design up-front - I can't think of a methodology that can be successful without this. The architecture provides your team with the cohesion and vision. You may want to do just-enough-design up front, but that depends on how agile you want to be. The team of developers needs to know how they are going to put together the various components of the system before they start coding, otherwise it will just be one big hackfest.
It would be great if someone can provide an example and high level steps to putting together a system in this manner
If you are putting together a system that is composed of services, then I would start by defining the service interfaces and any messages that they will exchange. This defines how the various components of your system will interact (this would be an example of your up-front design). Once you have this, you can allocate various development resources to build the services in parallel.
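As a sketch of what "define the interfaces and messages first" might look like (all names here are invented for illustration, not from the question's actual system): the contract below could be agreed on before any service is implemented, letting teams build and test against it in parallel.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)
class ChargeRequest:            # a message exchanged between services
    order_id: str
    amount_cents: int

@dataclass(frozen=True)
class ChargeResult:
    order_id: str
    approved: bool

class PaymentService(ABC):      # the agreed-upon service interface
    @abstractmethod
    def charge(self, request: ChargeRequest) -> ChargeResult: ...

# One team's throwaway implementation; another team can code and test
# against PaymentService using a stub like this long before the real
# service exists.
class DummyPaymentService(PaymentService):
    def charge(self, request: ChargeRequest) -> ChargeResult:
        return ChargeResult(order_id=request.order_id,
                            approved=request.amount_cents > 0)

result = DummyPaymentService().charge(ChargeRequest("o-1", 500))
```

The interface and message types are the up-front design; everything behind them can emerge test-first.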
As for #2; one of the advantages of TDD is that it presents you with a "safety net" during refactoring. Since your code is covered with unit tests, when you come to change some code, you will know pretty soon if you have broken something, especially if you are running continuous integration (which most people do with a TDD approach). In this case you either need to adapt your unit tests to cover the new behavior OR fix your code so that your unit tests pass.
result in a system that can abstract out behavior in order to prevent having to refactor large chunks of code
This is just down to your design, using e.g. a strategy pattern to allow you to abstract and replace behavior. TDD does not prescribe that your design has to suffer. It just asks that you only do what is required to satisfy some functional requirement. If the requirement is that the system must be able to adapt to new payment methods or pricing models, then that is then a point of your design. TDD, if done correctly, will make sure that you are satisfying your requirements and that your design is on the right lines.
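A minimal sketch of the strategy pattern mentioned above, applied to the question's payment-method example (all class names are invented for the sketch): new payment methods plug in without touching existing code.

```python
from abc import ABC, abstractmethod

class PaymentStrategy(ABC):
    @abstractmethod
    def pay(self, amount: float) -> str: ...

class CardPayment(PaymentStrategy):
    def pay(self, amount: float) -> str:
        return f"charged {amount:.2f} to card"

class InvoicePayment(PaymentStrategy):
    def pay(self, amount: float) -> str:
        return f"invoiced {amount:.2f}"

class Checkout:
    def __init__(self, strategy: PaymentStrategy):
        self.strategy = strategy   # behavior is injected, not hard-coded

    def complete(self, amount: float) -> str:
        return self.strategy.pay(amount)

receipt = Checkout(CardPayment()).complete(19.99)
```

Adding a new payment method later means adding one class; Checkout and its existing tests are untouched, which is exactly the kind of design point the requirement drives.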
I see the refactoring as a huge overhead in a production system where data model changes increase risks for customers and the company.
One of the problems of software design is that it is a wicked problem which means that refactoring is pretty much inevitable. Yes, refactoring is risky in production systems, but you can mitigate that risk and TDD will help you. You also need to have a supple design and a system with low coupling. TDD will help reduce your coupling since you are designing your code to be testable. And one of the by-products of writing testable code is that you reduce your dependencies on other parts of the system; you tend to code to interfaces which allows you to replace an implementation with a mock or stub. A good example of this is replacing a call to a database with a mock/stub that returns some known data - you don't want to hit a database in your unit tests. I guess I can mention here that a good mocking framework is invaluable with a TDD approach (Rhino mocks and Moq are both open source).
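A sketch of the mock/stub idea above using the standard library's unittest.mock (the repository interface and service here are invented for illustration):

```python
from unittest.mock import Mock

class OrderService:
    def __init__(self, repository):
        self.repository = repository  # coded to an interface, not a DB

    def total_for_customer(self, customer_id):
        orders = self.repository.find_orders(customer_id)
        return sum(o["amount"] for o in orders)

# In a unit test, the real database is replaced with a mock that returns
# known data, so the test is fast, deterministic, and never hits a DB.
repo = Mock()
repo.find_orders.return_value = [{"amount": 10}, {"amount": 15}]

service = OrderService(repo)
total = service.total_for_customer("c-42")

# The mock also records how it was called, so we can verify the interaction.
repo.find_orders.assert_called_once_with("c-42")
```

Because OrderService only depends on the repository interface, the same code accepts a real database-backed implementation in production and a mock in tests.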
I am sure there are some real TDD gurus out there who can give you some pearls of wisdom... Personally, I wouldn't consider starting a new project without a TDD approach.

Tips on creating user interfaces and optimizing the user experience [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side as people can buy certain items and services.
In my opinion a good blend of user interface, speed and security is essential for these types of websites. It is fairly easy to use ajax and JavaScript nowadays to do almost everything, as there are a lot of libraries available such as jQuery and others. But this can have some performance and incompatibility issues. This can lead to users just going to the next website.
The overall look of the website is important too. Where to place certain buttons, where to place certain types of articles such as faq and support. Where and how to display error messages so that the user sees them but are not bothering him. And an overall color scheme is important too.
The basic question is: How to create an interface that triggers a user to buy/use your services
I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important: when the colors on a website are irritating, you just want to click away. I have not found any articles that explain these concepts.
Does anyone have tips and/or resources pointing to articles that guide you in making the correct choices for your website?
Adhere to some standard UI Design Principles:
The structure principle: Your design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with your overall user interface architecture.
The simplicity principle: Your design should make simple, common tasks simple to do, communicating clearly and simply in the user's own language, and providing good shortcuts that are meaningfully related to longer procedures.
The visibility principle: Your design should keep all needed options and materials for a given task visible without distracting the user with extraneous or redundant information. Good designs don't overwhelm users with too many alternatives or confuse them with unneeded information.
The feedback principle: Your design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user through clear, concise, and unambiguous language familiar to users.
The tolerance principle: Your design should be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions reasonably.
The reuse principle: Your design should reuse internal and external components and behaviors, maintaining consistency with purpose rather than merely arbitrary consistency, thus reducing the need for users to rethink and remember.
Look for websites or web applications that have successfully achieved the goal you are aiming for, study their UIs, and try to find the common parameters and patterns that engage users on those sites.
I have always believed Amazon is very good at keeping users engaged by showing relevant recommendations: "what other people are looking at", "people who bought this also bought that", and the like.
Another good read: What should a developer know about interface design usability and user psychology
Also, a good read on the UI design considerations of e-commerce websites.
When it comes to UI design, ideally you will have an actual visual designer provide some guidance on your use of colors and a UxD provide some insight into your layout and flows based upon their expertise in these areas. Barring these folks having some input, if you design the pages and create the visuals yourself, iterative discovery is the best method to inform your design and provide insight into how these items affect the user and the overall experience you have created.
While there are certainly numerous books you can read and "guidelines" you can follow (and should for the initial design phases), no amount of book learning can replace real user interactions.
Build a functional prototype of your site/app/service/etc. and get it in front of actual users to gauge usability and value. This should be done in an ad-hoc format (versus formal usability testing) and the prototype should consist of smoke and mirrors as needed (i.e. it could be only clickable comps or primarily images with only the flows you're testing actually working).
Once you have some level of prototype, bring it to a place where people tend to be (and where you have internet access if needed). I have found Starbucks to be great for this. Grab some people and ask if you can have 10 minutes of their time - you'll find tons of willing participants. Provide these folks with a simple, specific scenario to complete in your prototype, then watch and learn.
People in a real-world situation using your software will quickly find its flaws and you'll be learning more than you could ever glean from a book or guideline. You'll be iterating on the design and tweaking items every time you test.
Test like this over a few weeks and you'll converge on a strong design very quickly. Once you have something that people can use and find value in, you're ready to go live. But testing should not end there - once live, you should continue to test and tweak via A/B and multivariate testing while keeping a close eye on your analytics and user behavior.
Discovery testing followed by continual A/B testing allows you to continuously tweak, test, and learn, and ultimately to create the best solution possible.
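The A/B mechanics can be surprisingly small. Here is a hedged sketch of deterministic variant assignment: hash a stable user id together with an experiment name so each user always lands in the same bucket across visits (the experiment name and ids are invented for the example).

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("A", "B")) -> str:
    # Hash the (experiment, user) pair so assignment is stable per user
    # but independent across experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

variant = assign_variant("user-123", "checkout-button-color")
```

Because the assignment is a pure function of the inputs, no per-user state needs to be stored, and the same user can be bucketed consistently on any server.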

How do you balance business process changes against the challenges of changing software?

In my admittedly young career I've found myself writing code to support quirky business rules and processes. Inevitably these changes were always in some massively difficult code base and caused many issues. My question has a couple of parts:
While software is a tool for businesses to make their lives easier, at what point do we as developers suggest a change in business process rather than in the software as the "magic bullet" to solve a particular problem.
How do we as developers evangelize a certain level of reverence for the software as well as the difficulty involved in making changes simply to support the quirks of the business?
I understand that these changes in business processes promote our industry, but in an analogy my father would understand: Which is easier, to melt down a hammer to forge a screwdriver to drive screws or to simply use nails since your hammer is already awesome...?
You can look at Seven Habits of Highly Effective People, as there is the sense that you need to develop a sphere of influence large enough to try to change business processes.
Your best bet is to show that you are very competent at your job, and work on developing relationships with people on the business side, so that you can feel comfortable sitting down outside of work to discuss the business process in question.
This is a slow process, but if you try to rush too fast the business will push back and squash you like a bug. If you read The Age of Heretics, for example, you will see examples of companies that were too successful in making changes, and the corporation destroyed them.
At the moment your best bet is to make changes, as you can, to have the software be more adaptable, so that if the process changes you can easily adapt to the new rules.
Before you can do anything, you'd better step back and try to understand the business. If they're reacting to change by adapting their processes, that's a GOOD thing. It's when they leave things exactly the same for years that you can forget about them remaining a company. You need to make sure, however, that the change you're responding to won't negatively impact up- or down-stream business processes. Business units don't often do that checking. But, when it all goes to hell, you know who they're going to blame, right? By doing this, you can head those issues off and evangelize, "better ways." Not doing it is a prescription for eternal frustration.
Learn their business before you even think of codifying it.
As for the mechanics:
What I always had my teams write was "generic software." Some business unit might have needed a way to capture a form and produce a report. Okay, easy enough, right? Wrong. Always consider a request as that same thing times 200: would you want to support 200 such applications, all doing almost the same thing? Not me. Too lazy.
I directed my teams to make a generic form system and use off-the-shelf or generic reporting mechanisms. And I stressed the use of XML/XSLT for as much as possible (not relying, for example, on Microsoft's easy-bake-oven technologies that seem to break with each new release). Then, when another business unit wanted "something similar, but with changes," the core was already there - we only needed a new folder and modified XML/XSLT, and we were done.
That always - ALWAYS - made those future changes easier to handle. "Need a new field? Change an XML file. Need to change the way a report is produced? Change XSLT. No program changes." Get it? NO program changes. Keep as much as you can OUT of the logic. Even business processes can be represented in XML/XSLT.
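A toy sketch of that data-driven idea, in Python (the answer describes XML/XSLT pipelines; this simplified version uses only the standard library's XML parser, and the form definition is invented for the example). The point is that adding a field means editing XML, not program logic.

```python
import xml.etree.ElementTree as ET

FORM_XML = """
<form name="contact">
  <field name="email" label="Email" required="true"/>
  <field name="phone" label="Phone" required="false"/>
</form>
"""

def render_form(xml_text: str) -> str:
    # The generic engine: it knows nothing about any particular form.
    root = ET.fromstring(xml_text)
    lines = []
    for field in root.findall("field"):
        marker = "*" if field.get("required") == "true" else ""
        lines.append(f"{field.get('label')}{marker}: ______")
    return "\n".join(lines)

rendered = render_form(FORM_XML)
```

A new business unit's "similar but different" form is just a new XML file fed to the same engine - no program changes, as the answer puts it.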
In reality, most of the applications you'll come across are the same Programming Wheels (a good algorithm book, by the way) that have been done forever. They'll just be done more poorly by people who didn't understand the business and understood their craft even less.
They're not going to build their business around you or your software, unless you're writing MS DOS for the first time. The second you suggest it, you'll be gone. And... you should be.
One of the most frustrating things any end customer (that is, a customer of your employer or customer) can hear is "the computer won't let me do that". Say, adding items to an order after the shipping is calculated, or canceling something before the sales tax has been calculated, or whatever. The software should serve the business. Sure, that means the software has to change a lot, and sometimes it changes so much from where it was that you have to start over. As you grow in experience you will write software that is easier to change, given the unadjustable reality that business processes change, laws change, tax codes change, customers change, and so on.
Some day you may be a trusted business advisor to your clients. That is unusual early in your career; I'm in that stage now, and I'm in my fourth decade of being paid to program. I rarely suggest the business accommodate the software. It takes a lot of judgment to know when that might be the right thing to suggest. And whatever reverence you might feel for your software, do your best to hide it from the folks who pay for it. They see it as a tool to support the real business they're in.
I think there is value in questioning the cost effectiveness of building new solutions to adapt to existing business processes versus adapting business processes to adapt to existing solutions. However, in reality, I have not seen the business consider this angle.
With that in mind, I think the next best thing you can do is to anticipate specific changes that the business might request in the future and develop your solution such that it can adapt to those changes easily.
Unfortunately, this is entirely situation-dependent.
Even with a great deal of experience in business AND software, it is still a complex issue.
As far as your specific questions:
As soon as you see them. What is important is to couch your suggestion in constructive terms, using terms relevant to the business (ROI, NPV, etc.) and finding ancillary benefits. So if the software change really doesn't mitigate the business problem, the cost is high, and fixing the business process has significant ancillary cost savings, you pose a completely different scenario than just saying "we can't do it because it costs too much".
The software is owned by the business - it isn't owed any more or less reverence than anything else the company owns of similar value.
When facing escalating business rules complexity in relation to the current form of the software, try considering Aspect-Oriented-Software-Development, in order to achieve better modularity and separation of concerns. This way, new or changing business rules, as they appear, may be integrated in your existent code base as plug-ins to only those modules that need them, not being necessary to rewrite large amounts of unrelated code.
The idea is that, after all, a lot of business rules come from specific legislation, and it's the responsibility of the business - transmitted to the software as well - to implement and adapt. I personally believe that a lack of will to follow specifications, due to perceived difficulty, is what led most web browsers to lag behind web standards: bending the rules was a temporary workaround that over time led to a far greater accumulated cost of supporting each browser's particular quirks. Try to implement new business rules as soon as they appear or change - failing to do so leads to an accumulating lack of support for new functionality and ultimately renders your software obsolete.
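A small sketch of the plug-in idea behind this answer (not full aspect-oriented development, but the same separation of concerns, with all names invented for the example): business rules register themselves and are applied without editing the core processing code.

```python
RULES = []

def business_rule(func):
    """Decorator: registers a rule without touching the core pipeline."""
    RULES.append(func)
    return func

@business_rule
def apply_vat(order):
    # e.g. a rule that came from tax legislation
    order["total"] = round(order["total"] * 1.20, 2)
    return order

@business_rule
def loyalty_discount(order):
    # e.g. a later-added marketing rule; the core below never changed
    if order.get("loyal"):
        order["total"] = round(order["total"] - 1.00, 2)
    return order

def process(order):
    # Core logic stays stable; a new regulation becomes a new decorated rule.
    for rule in RULES:
        order = rule(order)
    return order

order = process({"total": 10.00, "loyal": True})
```

When the next rule arrives, it is one new decorated function in its own module, rather than a rewrite of unrelated code.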
That's kind of like the role and strength of the CIO. If the IT side can convince the business side that it would be easier, cheaper, or more cost effective to change the business process than the code, then you have a point. Otherwise, the quirky business practice may be more valuable than you think. I also doubt that you are making it clear that if you spend time on the quirky problem, you won't deliver the needed features on time (good luck with that).
If technologists had their way, the GUI and the mouse/pointer would never have made it out of the lab. For everyday users, they're here to stay.