Software Requirements Specification (SRS): What are 'System Interfaces'? [closed]

In the Wikipedia link for SRS, in the 'Product Perspective' section, there is a mention of the term 'System Interfaces'. I am not clear as to what exactly that means. I have looked at a few other SRS samples available online but am not able to piece together an unambiguous definition from the examples. Could someone elaborate on what 'System Interfaces' refer to?

The IEEE 830-1998 standard defines 'Hardware Interfaces' as
'the logical characteristics of each interface between the software
product and the hardware components of the system'
Similarly, it defines 'User Interfaces' as
'the logical characteristics of each interface between the software
product and its users'.
So, a little reasoning tells us that the 'System Interface' should have been defined as
'the logical characteristics of each interface between the software
product and the bigger system'.
That means, 'system interfaces' are not the bigger system's interfaces with the outside world, but the internal interfaces between the software and everything else within the bigger system, which includes user interfaces, hardware interfaces and software interfaces.
Ironically, 830-1998 is written in such an inconsistent fashion that the recommended section hierarchy is:
2. Overall Description
2.1. Product perspective
2.1.1 System Interfaces
2.1.2. User Interfaces
2.1.3. Hardware Interfaces
2.1.4. Software Interfaces
...
where 2.1.1 should really be the parent section of 2.1.2 - 2.1.4.
So they gave only a vague definition for the 'System Interfaces' section:
This should list each system interface and identify the functionality
of the software to accomplish the system requirement and the interface
description to match the system.
Whoever wrote this, please try to get a B in a 12th-grade English composition class!
Anyway, as a non-native speaker, my understanding of IEEE's version of the 'System Interface' is that:
Software may be an independent product made for general use (e.g. commercial software, video games, etc.), or it may be part of a bigger system which contains both software and hardware. For example, a car is a system, and the embedded computer software is only a part of that system. Another example: the software in a hospital CT scanner is also part of a system (the machine).
Assuming the system requirement is defined before the software requirement (i.e. a top-down approach), think about:
a) What functionality must your software have in order to meet the system requirement? (Don't forget the bigger picture.)
For example, if an automatic driving car system requirement is that 'the car shall detect sudden slow-downs of the vehicles in front of the car within 0.1 seconds', then you may need to write a non-functional requirement for your software system such as 'after the software receives the 'sudden slow-down' signal from the front sensor, it shall process the signal and make a decision. If it is a confirmed real scenario (not a false alarm), then the software shall send a 'hit brake' signal to the braking system. The decision making and signal sending process shall take no longer than 0.05 seconds.'
b) What are the interfaces between your software and everything else within the bigger system?
E.g. your car computer software shall have the following interface with the front sensor:
public int processFrontSensorSignal(Signal signal) {
    // Dispatch on the type of signal reported by the front sensor.
    if (signal.getType() == 1) {
        // Sudden slow-down of the vehicle ahead: decide whether to brake.
        SuddenSlowDownSignalProcessor.process(signal);
    } else if (signal.getType() == 2) {
        // ... handle other signal types ...
    } else {
        // ... unknown signal type ...
    }
    return 0; // e.g. a status code
}
Such interfaces shall be clearly defined.
If your software is not part of a bigger system, or if it is designed as generic software to run on general-purpose systems (e.g. MS Windows applications), then there is little need to specify the 'System Interfaces' section.

System interfaces include the following:
1. User interfaces, e.g. screen formats, keys
2. Hardware interfaces, e.g. configuration characteristics, devices supported
3. Software interfaces, e.g. an OS
4. Communication interfaces, e.g. a LAN
Also, you may want to include a high level view of your system in relation to all these interfaces.
Let me know if you require more details.

System interfaces in this context mean all the interfaces your system needs in order to fulfil its purpose.
For example, if your server consumes a web service response, a queue message, and a database poll, those can be counted as three interfaces to your system. The implementations of these interfaces might be a SOAP endpoint, an ActiveMQ broker, and a database.
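As a minimal sketch of what defining those three interfaces could look like in code (all names below are hypothetical placeholders, not taken from the question):

import java.time.Instant;
import java.util.List;

// Hypothetical sketch: the three system interfaces from the example above,
// written down as plain Java interfaces. Names and methods are illustrative only.
interface PricingWebServiceClient {        // a SOAP (or REST) call into another system
    String requestQuote(String productId);
}

interface OrderQueueListener {             // messages arriving from an ActiveMQ broker
    void onOrderMessage(String messageBody);
}

interface CustomerRepository {             // a periodic database poll
    List<String> findCustomersUpdatedSince(Instant since);
}

Writing each external dependency down as an explicit interface like this is one way to make a 'System Interfaces' section concrete, whatever technology ends up implementing it.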

Related

How to start implementing a Linux ECC block device wrapper [closed]

I'm thinking about student projects to advise; I'm an OK C developer and a C++ developer/architect.
I have not done much Linux kernel development beyond "write your first module".
The thing I want to rudimentarily implement (before advising students doing it properly as a team project) is:
Take an existing block device, say /dev/sda.
Add a second block device, say /dev/fec0, which implements a wrapper around the first, providing error correction on read and writing data plus error-coding information on write.
This implies that the size of fec0 is smaller than that of sda. Also, note that this is somewhat different from the RAID5 approach, as there is no split of parity data across secondary data groups or similar.
Now, I'm trying to get an overview of how to do this. It's clear that it will be necessary, aside from implementing the block device interface itself, to implement a control interface to define things like "use /dev/sda to create fec0", "shut down fec0".
The questions that arise are:
Is there a preferred framework to integrate this into? The goal is to find the sensible place to put this. Candidates that come to mind:
dmraid and
lvm2, or
just a plain block device driver
How to implement control interfacing?
Specific ioctls?
/sys/* pseudofile interface?
framework-specific ways?
For the first part of your question, I'd suggest you have two choices: the block subsystem or the md (multiple device) subsystem. I think the most useful thing I could do would be to point you in the direction of some references so you can decide what fits best. If you want to account for logical volumes then see here for the lvm-raid man page which sits in both the md and dm subsystems. If you just want to restrict yourself to RAID functionality, then see here for the md driver man page. But if you want to keep it simple and just make a wrapper over a simple block device, perhaps emulating some aspects of lvm and raid, then add a driver to the block subsystem.
Your choice would affect the second part of your question, too. Single block devices commonly provide lots of ioctls; md drivers make a lot of use of the sysfs interface. Hope that's helpful.

Is there any difference between HL7 U.S.A. and U.K. standards?

Is there any difference between HL7 USA and UK standards? If there is, then what are those differences?
I'm not aware of any country-specific HL7 features, nor did the Google query site:hl7.org country specific reveal anything striking. But that does not mean there are no differences.
I guess your best bet would be to narrow your question down to a particular set of messages, register as a member at hl7.org, and ask this question through their internal mailing list. (An even better bet might be to talk to some friendly competitor and ask for the lessons learned.)
In our country-specific HL7 community the membership is also paid, but the benefits of becoming a member are close to zero from the developer's perspective. It's prestigious to be a member and it looks good on the business card, but all you get is access to downloadable specifications (but most of them are free for about a year anyway).
In the software I was working with we were exchanging only a very small set of HL7 messages, namely ADT^A04, ADT^A05, ADT^A08, ADT^A18, ADT^A23, ADT^A34, ORM^O01 and ACK, in two different countries, with a third country under consideration, without needing any country-specific features except the convention used for identifying a patient by his/her security number. The number format was country-specific, but it was handled by upper software layers; the topmost customization was the number input mask and the user input validation business rules at the GUI level.
What we did need was installation-specific or third-party-system-specific message customization. For that customization you will need the list of vendors of the hardware and software your system will talk to. Once you have it, check their HL7 conformance statements.
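As a side note, here is a minimal sketch (not taken from the system described above) of how such a message-type whitelist could be checked in Java, by reading MSH-9 from a pipe-delimited HL7 v2 message:

import java.util.Set;

// Minimal illustration of accepting only a fixed set of HL7 v2 message types
// by reading MSH-9 from the MSH segment. Not from any particular product.
public class Hl7MessageTypeFilter {
    private static final Set<String> ACCEPTED = Set.of(
            "ADT^A04", "ADT^A05", "ADT^A08", "ADT^A18",
            "ADT^A23", "ADT^A34", "ORM^O01", "ACK");

    public static boolean isAccepted(String rawMessage) {
        // Segments are separated by carriage returns; the first segment must be MSH.
        String msh = rawMessage.split("\r")[0];
        String[] fields = msh.split("\\|");
        // fields[0] is "MSH"; MSH-1 is the '|' separator itself, so MSH-9
        // (the message type, e.g. "ADT^A04") ends up at index 8.
        if (fields.length <= 8 || !"MSH".equals(fields[0])) {
            return false;
        }
        // Keep only the message code and trigger event, dropping any
        // message-structure component (e.g. "ADT^A04^ADT_A01" -> "ADT^A04").
        String[] parts = fields[8].split("\\^");
        String key = parts.length >= 2 ? parts[0] + "^" + parts[1] : parts[0];
        return ACCEPTED.contains(key);
    }
}

Installation-specific quirks (for example, ACKs that carry the trigger event of the original message) would need a looser match, which is exactly the kind of detail a vendor's conformance statement tells you.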

What is a use-case? How to identify a use-case? [closed]

The question is quite generic. What are the points that should be kept in mind to identify a valid use case? How to approach a use case?
A use case identifies, with specificity, a task or goal that users will be able to accomplish using a program. It should be written in terms that users can understand.
Wikipedia's description is overly formal. I'll dig through my other texts shortly.
In contrast, the original wiki's article is much more accessible.
An early article by Alistair Cockburn, cited positively by The Pragmatic Programmer, contains a good template.
This question, from just a few days ago, is very closely related, but slightly more specific.
The definition of use case is simple:
An actor's interactions with a system to create something of business value.
More formally:
a sequence of transactions performed by a system that yield a measurable set of values for a particular actor.
They're intended to be very simple: Actor, Interaction, Value. You can add some details, but not too many.
Using use cases is easy. Read this: http://www.gatherspace.com/static/use_case_example.html
The biggest mistake is overlooking the interaction between actor and system. A use case is not a place to write down long, detailed, technical algorithm designs. A use case is where an actor does something.
People interact with systems so they can take actions (place orders, approve billing, reject an insurance claim, etc.). To take an action, they first make a decision. To make a decision, they need information:
Information
Decision
Action
These are the ingredients in the "Interaction" portion of a use case.
A valid use case could describe:
intended audience / user
pre-requisites (i.e. must have logged in, etc.)
expected outcome(s)
possible points of failure
workflow of user
From Guideline: Identify and Outline Actors and Use Cases by the Eclipse people:
Identifying actors
Find the external entities with which the system under development must interact. Candidates include groups of users who will require help from the system to perform their tasks and run the system's primary or secondary functions, as well as external hardware, software, and other systems.
Define each candidate actor by naming it and writing a brief description. Include the actor's area of responsibility and the goals that the actor will attempt to accomplish when using the system. Eliminate actor candidates who do not have any goals.
These questions are useful in identifying actors:
Who will supply, use, or remove information from the system?
Who will use the system?
Who is interested in a certain feature or service provided by the system?
Who will support and maintain the system?
What are the system's external resources?
What other systems will need to interact with the system under development?
Review the list of stakeholders that you captured in the Vision statement. Not all stakeholders will be actors (meaning, they will not all interact directly with the system under development), but this list of stakeholders is useful for identifying candidates for actors.
Identifying use cases
The best way to find use cases is to consider what each actor requires of the system. For each actor, human or not, ask:
What are the goals that the actor will attempt to accomplish with the system?
What are the primary tasks that the actor wants the system to perform?
Will the actor create, store, change, remove, or read data in the system?
Will the actor need to inform the system about sudden external changes?
Does the actor need to be informed about certain occurrences, such as unavailability of a network resource, in the system?
Will the actor perform a system startup or shutdown?
Understanding how the target organization works and how this information system might be incorporated into existing operations gives an idea of the system's surroundings. That information can reveal other use case candidates.
Give a unique name and brief description that clearly describes the goals for each use case. If the candidate use case does not have goals, ask yourself why it exists, and then either identify a goal or eliminate the use case.
Outlining Use Cases
Without going into details, write a first draft of the flow of events of the use cases identified as being of high priority. Initially, write a simple step-by-step description of the basic flow of the use case. The step-by-step description is a simple ordered list of interactions between the actor and the system. For example, the description of the basic flow of the Withdraw Cash use case of an automated teller machine (ATM) would be something like this:
1. The customer inserts a bank card.
2. The system validates the card and prompts the person to enter a personal identification number (PIN).
3. The customer enters a PIN.
4. The system validates the PIN and prompts the customer to select an action.
5. The customer selects Withdraw Cash.
6. The system prompts the customer to choose which account.
7. The customer selects the checking account.
8. The system prompts for an amount.
9. The customer enters the amount to withdraw.
10. The system validates the amount (assuming sufficient funds), and then issues cash and a receipt.
11. The customer takes the cash and receipt, and then retrieves the bank card.
12. The use case ends.
As you create this step-by-step description of the basic flow of events, you can discover alternative and exceptional flows. For example, what happens if the customer enters an invalid PIN? Record the name and a brief description of each alternate flow that you identified.
Representing relationships between actors and use cases
The relationship between actors and use cases can be captured, or documented. There are several ways to do this. If you are using a use-case model on the project, you can create use-case diagrams to show how actors and use cases relate to each other. Refer to Guideline: Use-Case Model for more information.
If you are not using a use-case model for the project, make sure that each use case identifies the associated primary and secondary actors.

The future of Naked Objects pattern (and UI auto-generation) [closed]

I am asking about the pattern, not the framework. This is a kind of follow-up to a question on UI auto-generation.
Do you believe in the concept of UI auto-generation from metadata?
What kind of problems can be approached this way?
The question arose when I created a small library to support my student projects, which generates an interactive CLI at runtime based on an object's metadata. I think the CLI it generates is quite decent.
At the other extreme is the Naked Objects Framework, which is rather universal, but the UI it generates is horrible, IMO.
It's clear that every problem is specific and needs a specific UI, but maybe there are several classes of problems where auto-generation is acceptable?
Yes, I believe the concept of metadata-based auto-generated applications is very sound - mainly because it drastically reduces development time and improves code quality by reducing the massive redundancy you have in most applications where each domain data field is represented in the database, in the model, in the UI, and often also several times in various mapping layers.
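As a toy illustration of what "UI from metadata" means in practice, here is a minimal reflection-based sketch; all names are hypothetical, and it is not taken from Naked Objects, JMatter, or Rails:

import java.lang.reflect.Field;

// Toy sketch of "UI from metadata": render a labelled edit form for any object
// by reflecting over its fields. Class and method names are illustrative only.
public class MetadataFormDemo {

    static class Customer {
        String name = "Ada";
        int creditLimit = 1000;
        boolean active = true;
    }

    // Generate one labelled input line per field of the domain object.
    static void renderForm(Object domainObject) throws IllegalAccessException {
        for (Field field : domainObject.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            System.out.printf("%-12s [%s] : %s%n",
                    field.getName(),
                    field.getType().getSimpleName(),
                    field.get(domainObject));
        }
    }

    public static void main(String[] args) throws Exception {
        renderForm(new Customer());
    }
}

The point is that the field list, labels, and types come from one place (the domain class), so adding a field changes the generated UI without touching any UI code, which is exactly the redundancy reduction described above.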
I think the future is auto-generated apps that can be modified wherever necessary. Currently, this is AFAIK not really possible; for example, Rails only allows you to fully customize the UI when you use static scaffolding, which basically means code generation, i.e. many further changes in the domain model are then not automatically represented in the UI because the duplication has happened when the code was generated.
I believe the first framework that manages to combine complete auto-generation with complete modifiability afterwards will become the de-facto development standard to a previously unknown degree. Though most likely we'll get there in small steps so that there will not be such a single dominating framework.
Take a look at JMatter, which is a rather better-looking implementation of Naked Objects.
http://www.jmatter.org
There is also Chris Muller's work on MAUI, and Lukas Renggli's work on Magritte (both Squeak/Smalltalk).
We have lots of generated UI in the configuration part of our apps. All those lists that are around forever and changed once in a blue moon by a system administrator.
I find that most applications with a database back-end tend to have a bad design from an OO and NO perspective, as already shown in the NO book by Pawson and Matthews.
Re: qn #1 ... Do you believe in the concept of UI auto-generation from metadata? ... I'm definitely going to answer 'yes' to your first question, being one of the committers to the Naked Objects (Java) framework and writing a book on DDD + NO.
The question mentions metadata. I think this is key to NO being able to succeed. In the latest version (which will be going beta in Feb) the metamodel has been opened up so that it is very extensible, either so that you can write your domain model following your own programming conventions/annotations, or, potentially, so that more sophisticated viewers can look for their own metadata to provide more sophisticated views. (For example, consider that if an object implemented a Location interface, it could be displayed on a Google map.)
Regarding qn #2 ... what kind of problems can be approached this way ... we've always said that NO is more suitable for "sovereign applications" (transactional, operational systems used internally within an organization) than for "transient applications" (like an airport kiosk, say). An NO GUI does require that the user is familiar with the domain, otherwise they won't know what they are looking at.
What's missing still is sophisticated viewers, of course. You are right about the NO GUI, it is definitely low fidelity (though the .NET version is a big improvement, see recent infoq.com article). On the Java side there is a sister project called scimpi.org that has a lot of promise though... it provides a basic web GUI for free but lets you hand-craft web pages as necessary and incrementally. I'm also working on an Eclipse RCP GUI that'll work similarly.
The other thing to add to this though is that the NO approach has value (I believe) even if you choose to write a custom GUI and/or presentation layer. That is, you can use it as a design tool for building a very solid pojo domain layer, and then skin it as you will. Trouble is that NO was never originally sold in those terms, so most will see the NO pattern as an all-or-nothing affair.
Dan
One way to look at this is to consider the difference between the user interface you get from something like Toad or MySQL Browser, where the user interface is directly constructed from the tables and their associated metadata, and the user interface that a skilled designer would develop for the actual application. If the two are not too dissimilar, then it should be fairly low-hanging fruit for an auto-generation framework.
As you say, there are classes of problems which will work quite well with this kind of auto-generation and some which won't. To my mind the key things are, first, how well the implementation model (or the portion of it) that you expose in the user interface maps to the user's conceptual model, and secondly, how well the behavior of the application can be expressed through a limited set of user interface components (assuming this is a general-purpose UI generation framework).
The article "Universal Model of a User Interface" may be of interest.
I think the idea of automatically generated UIs has a lot of potential especially for your average form-and-table layout database user interface. However, even there a human needs to be in the loop, having the ability to override the output without it being overwritten with the next regeneration.
I suspect automatically generated UIs would be more successful today if interaction designers were more involved in developing the generation algorithms. My impression is that historically the creators of these systems don’t know what kinds of UI-related metadata to include or how to use it. Specifying labels, value ranges, formats, and orders for fields is a start, but more high level information is needed. Sufficient modeling of the tasks and user roles in particular tends to be lacking, along with some basic style-guide-level principles for UI.
Oracle’s Designer 2000, for example, was on the right track in including not only the entities and relations in the model, but also the tasks in the form of a functional hierarchy. Then they blew it by misapplying this metadata (e.g., assuming that depth is always preferred to breadth) and including fundamental flaws when generating the UI (e.g., only one primary window can be opened at a time). The result was UIs that were not even consistent with Oracle’s own Applications User Interface Standards.
Getting a basic UI up quickly that lets the customer try out the system and create test data must be of value. Naked Objects frameworks can help with the bootstrapping even if you have to replace them with a "hand crafted" UI before you ship.
In most systems I have worked on, there have been lots of simple housekeeping tables. All these tables need a UI to edit and view them, and there is also great value in these simple editors being consistent. Here a Naked Objects framework could save a lot of time, even if the main "day to day" UI is "hand crafted".
I have seen a couple of failed projects (cases where I was brought in as a rather expensive consultant to help architect the replacement) which used the "naked objects" approach (not the framework, AFAIK) - all with simply atrocious UIs, and worked replacing a lot of the UI on one project which, in its original incarnation, had a similar approach (the entire application was a tree of objects accessed through context menus and property sheets - this was NetBeans 2.0 circa 1998 - IDE as a giant hierarchical JavaBean).
The bottom line is, your users don't care about your architecture, they care about getting what they need to do done in the most comprehensible-to-mere-mortals set of interactions you can come up with. If that happens to align with your architecture, you are having a lucky day - but it really is serendipity. Trying to force users to care (or even know) about your architecture is a recipe for software nobody wants to use.
Code generally needs to be designed around two not-always-compatible goals:
Maintainability - people who didn't write the code can understand the code
Stability and performance - i.e. the activities the code asks the computer to physically do are both possible, and can be completed within a reasonable time frame
The abstractions and code structures that it makes sense to create to meet those two goals very, very rarely map exactly to user interface elements of any sort. Sometimes you can get away with it - barely - if your audience is technical. But even there, you are likely to please more users with at least a "presentation layer" adapter layer on top of the architecture that makes sense for programmers and machines.

How do you perform address validation? [closed]

Is it even possible to perform address (physical, not e-mail) validation? It seems like the sheer number of address formats, even in the US alone, would make this a fairly difficult task. On the other hand it seems like a task that would be necessary for several business requirements.
Here's a free and sort of "outside the box" way to do it. Not 100% perfect, but it should reject blatantly non-existent addresses.
Submit the entire address to Google's geocoding web service. This service attempts to return the exact coordinates of the location you feed it, i.e. latitude and longitude.
In my experience if the address is invalid you will get a result of 602 from the service. There's definitely a possibility of false positives or false negatives, but used in conjunction with other consistency checks it could be useful.
(Yahoo's geocoding web service, on the other hand, will return the coordinates of the center of the town if the town exists but the rest of the address is bogus. Potentially useful as long as you pay close attention to the "precision" field in the result).
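A rough sketch of that idea in Java follows; the endpoint URL and the "found a match" heuristic are placeholders, not the actual Google or Yahoo API, so check the documentation and terms of whichever geocoding service you actually use:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Rough sketch of the geocoding-based sanity check described above.
// The endpoint and the "returned a candidate" heuristic are placeholders only.
public class GeocodeAddressCheck {

    public static boolean looksLikeARealAddress(String address) throws Exception {
        String url = "https://geocoder.example.com/search?q="
                + URLEncoder.encode(address, StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Placeholder heuristic: the service answered and returned at least one candidate.
        return response.statusCode() == 200 && response.body().contains("\"results\"");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(looksLikeARealAddress("1600 Pennsylvania Avenue, Washington, DC"));
    }
}

Used on its own this will produce false positives and negatives, as noted above, so treat it as one consistency check among several.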
There are a number of good answers in here but most of them make the assumption that the user wants an "API" solution where they must write code to connect to a 3rd-party service and/or screen scrape the USPS. This is all well and good, but should be factored into the business requirements and costs associated with the implementation and then weighed against the desired benefits.
Depending upon the business requirements and the way that the data is received into the system, a real-time address processing solution may be the best bet. If a real-time solution is required, you will want to consider the license agreements and technical limitations of the Google Maps/Bing/Yahoo APIs. They typically limit the number of calls you can make each day. The USPS Web Tools API is similar; in addition, they restrict how and why you can use their system and how you are allowed to use the data thereafter.
At the same time, there are a handful of great service providers that can easily process a static list of addresses. Essentially, you give the service provider a CSV file or Excel file, they clean it up and get it back to you. It's a one-time deal with no long-term commitment or obligation—usually.
Full disclosure: I'm the founder of SmartyStreets. We do address verification for addresses within the United States. We can easily CASS-certify a list, and we also offer an address verification web service API. We have no hidden fees, contracts, or anything. You use our service until you no longer need it and you can walk away. (Unlike cell phone companies that require a contract.)
USPS has an address cleaner online, which someone has screen scraped into a poor man's webservice. However, if you're doing this often enough, it'd be a better idea to apply for a USPS account and call their own webservice.
I will refer you to my blog post, A lesson in address storage, where I go into some of the techniques and algorithms used in the process of address validation. My key thought is: "Don't be lazy with address storage, it will cause you nothing but headaches in the future!"
Also, there is another StackOverflow question that asks this question entitled How should international geographic addresses be stored in a relational database.
In the course of developing an in-house address verification service at a German company I used to work for I've come across a number of ways to tackle this issue. I'll do my best to sum up my findings below:
Free, Open Source Software
Clearly, the first approach anyone would take is an open-source one (like openstreetmap.org), which is never a bad idea. But whether or not you can really put this to good and reliable use depends very much on how much you need to rely on the results.
Addresses are an incredibly variable thing. Verifying U.S. addresses is not an easy task, but it is bearable; once you're going for Europe, however, especially the U.K. with its extensive postal code system, the open-source approach will simply lack data.
Web Services / APIs
Enterprise-Class Software
Money gets it done, obviously. But not every business or developer can spend ~$0.15 per address lookup (that's $150 for 1,000 API requests) - a very expensive business model the vast majority of address validation APIs have implemented.
What I ended up integrating: streetlayer API
Since I was not willing to take on the programmatic approach of verifying address data manually I finally came to the conclusion that I was in need of an API with a price tag that would not make my boss want to fire me and still deliver solid and reliable international verification results.
Long story short, I ended up integrating an API built by apilayer, called "streetlayer API". I was easily convinced by a simple JSON integration, surprisingly accurate validation results and their developer-friendly pricing. Also, 100 requests/month are entirely free.
Hope this helps!
I have used the services of http://www.melissadata.com. Their "address object" works very well. It's pricey, yes. But when you consider the costs of writing your own solution, the cost of dirty data in your application, returned mailers, lost sales, and the like, the costs can be justified.
For us-based address data my company has used GeoStan. It has bindings for C and Java (and we created a Perl binding). Note that it is a commercial product and isn't cheap. It is quite fast though (~300 addresses per second) and offers features like CASS certification (USPS bulk mail discount), DPV (Delivery point verification) flagging, and LON/LAT geocoding.
There is a Perl module Geo::PostalAddress, but it uses heuristics and doesn't have the other features mentioned for GeoStan.
Edit: some have mentioned 'doing it yourself', if you do decide to do this, a good source of information to start with is the US Census Tiger Data Set, which contains a lot of information about the US including address information.
As seen on reddit:
$address = urlencode('1600 Pennsylvania Avenue, Washington, DC');
$json = json_decode(file_get_contents("http://where.yahooapis.com/geocode?q=$address&flags=J"));
print_r($json);
The Fixaddress.com service is available and provides the following:
1) Address validation.
2) Address correction.
3) Address spelling correction.
4) Correction of phonetic mistakes in addresses.
Fixaddress.com uses USPS and Tiger data as reference data.
For more detail, visit the link below:
http://www.fixaddress.com/
One area where address lookups have to be performed reliably is for VOIP E911 services. I know companies reliably using the following services for this:
Bandwidth.com 9-1-1 Access API MSAG Address Validation
MSAG = Master Street Address Guide
https://www.bandwidth.com/9-1-1/
SmartyStreet US Street Address API
https://smartystreets.com/docs/cloud/us-street-api
There are companies that provide this service. Service bureaus that deal with mass mailing will scrub an entire mailing list so that it's in the proper format, which results in a discount on postage. The USPS sells databases of address information that can be used to develop custom solutions. They also have lists of approved vendors who provide this kind of software and service.
There are some (but not many) packages that have APIs for hooking address validation into your software.
However, you're right that it's a pretty nasty problem.
http://www.usps.com/ncsc/ziplookup/vendorslicensees.htm
As mentioned there are many services out there, if you are looking to truly validate the entire address then I highly recommend going with a Web Service type service to ensure that changes can quickly be recognized by your application.
In addition to the services listed above, webservice.net has this US Address Validation service. http://www.webservicex.net/WCF/ServiceDetails.aspx?SID=24
We have had success with Perfect Address.
Their database has all the US street names and street number ranges. Also acts as a pretty decent parser for free-form address fields, if you are lucky enough to have that kind of data.
Validating that it is a valid address is one thing.
But if you're trying to validate a given person lives at a given address, your only almost-guarantee would be a test mail to the address, and even that is not certain if the person is organised or knows somebody at that address.
Otherwise people could just specify an arbitrary random address which they know exists and it would mean nothing to you.
The best you can do for immediate results is to request that the user send a photographed/scanned copy of the header of their bank statement or some other proof of recent residence, because then they at least have to work harder to forge it, and forgeries of such things show up easily with a basic level of image forensic analysis.
There is no global solution. For any given country it is at best rather tricky.
In the UK, the Post Office controls postal addresses, and can provide (at a cost) address information for validation purposes.
Government agencies also keep an extensive list of addresses, and these are centrally collated in the NLPG (National Land and Property Gazetteer).
Actually validating against these lists is very difficult. Most people don't even know exactly how their address is held by the Post Office. Some businesses don't even know what number they are on a particular street.
Your best bet is to approach a company that specialises in this kind of thing.
Yahoo also has a Placemaker API. It is good only for locations, but it has a universal ID for all world locations.
It looks like there is no standard in the ISO list.
You could also try SAP's Data Quality solutions, which are available both as a server platform for processing a large number of requests and as an embeddable SDK if you want to run it in-process with your application. We use it in our application and it's very robust and scalable.
NAICS.com is coming out with an API that will add all kinds of key business data including street address. This would happen on the fly as your site's forms are processed. https://www.naics.com/business-intelligence-api/
You can try Pitney Bowes “IdentifyAddress” Api available at - https://identify.pitneybowes.com/
The service analyses and compares the input addresses against the known address databases around the world to output a standardized result. It corrects addresses, adds missing postal information and formats them using the format preferred by the applicable postal authority. It also uses additional address databases so it can provide enhanced detail, including address quality, type of address, transliteration (such as from Chinese Kanji to Latin characters) and whether an address is validated to the premise/house number, street, or city level of reference information.
You will find a lot of samples and SDKs available on the site, and I found it extremely easy to integrate.
For US addresses you can require a valid state, and verify that the zip is valid. You could even check that the zip code is in the right state, but beyond that I don't think there are many tests you could run that wouldn't provide a lot of false negatives.
What are you trying to do -- prevent simple mistakes or enforce some kind of identity check?
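A minimal sketch of those basic checks in Java; the ZIP-prefix-to-state table here has only two illustrative entries, whereas a real one would cover every prefix:

import java.util.Map;
import java.util.Set;
import java.util.regex.Pattern;

// Minimal sketch of the basic checks described above: the state code is known,
// the ZIP has the right shape, and the ZIP's 3-digit prefix belongs to that state.
// The prefix table is illustrative only; a real table covers every prefix.
public class UsAddressSanityCheck {

    private static final Pattern ZIP = Pattern.compile("\\d{5}(-\\d{4})?");
    private static final Set<String> STATES = Set.of("AL", "AK", "AZ", /* ... */ "DC", "WY");
    private static final Map<String, String> ZIP_PREFIX_TO_STATE = Map.of(
            "200", "DC",   // Washington, DC
            "900", "CA");  // Los Angeles area

    public static boolean plausible(String state, String zip) {
        if (!STATES.contains(state) || !ZIP.matcher(zip).matches()) {
            return false;
        }
        String expected = ZIP_PREFIX_TO_STATE.get(zip.substring(0, 3));
        return expected == null || expected.equals(state); // unknown prefix: don't reject
    }

    public static void main(String[] args) {
        System.out.println(plausible("DC", "20001"));   // true
        System.out.println(plausible("CA", "20001"));   // false: DC prefix with CA state
    }
}

Anything stricter than this quickly needs real reference data (USPS, TIGER, or a commercial service), which is what most of the answers above recommend.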
