Sort of a very long-winded explanation of what I'm looking at, so I apologize in advance.
Let's consider a Recipe:
Take the bacon and weave it ...blahblahblah...
This recipe has 3 Tags
author (most important) - Chandler Bing
category (medium importance) - Meat recipe (out of meat/vegan/raw/etc categories)
subcategory (lowest importance) - Fast food (out of fast food / haute cuisine etc)
I am a new user that sees a list of randomly sorted recipes (my palate/profile isn't formed yet). I start interacting with different recipes (reading them, saving them, sharing them) and each interaction adds to my profile (each time I read a recipe a point gets added to the respective category/author/subcategory). After a while my profile starts to look something like this :
Chandler Bing - 100 points
Gordon Ramsay - 49 points
Haute cuisine - 12 points
Fast food - 35 points
... and so on
Now, the point of this whole exercise is to sort the recipe list based on the individual user's preferences. For example, in this case I will always see Chandler Bing's recipes at the top (regardless of category), then Ramsay's recipes. At the same time, Bing's recipes will be sorted based on my preferred categories and subcategories, so I see his fast food recipes above his haute cuisine ones.
What am I looking at here in terms of a sorting algorithm?
I hope that my question has enough information but if there's anything unclear please let me know and I'll try to add to it.
I would allow the "Tags" with the most importance to have the greatest capacity in point difference. Example: Give author a starting value of 50 points, with a range of 0-100 points. Give Category a starting value of 25 points, with a possible range of 0-50 points, give subcategory a starting value of 12.5 points, with a possible range of 0-25 points. That way, if the user's palate changes over time, s/he will only have to work down from the maximum, or work up from the minimum.
From there, you can simply add up the points for each "Tag", and use one of many languages' sort() methods to compare each recipe.
You can write a comparison function that is used in your sort(). The point is when you're comparing two recipes just add up the points respectively based on their tags and do a simple comparison. That and whatever sorting algorithm you choose should do just fine.
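For illustration, here is a minimal Python sketch of that idea; the profile points and tag names are assumptions lifted from the question, not a fixed schema:

profile = {"Chandler Bing": 100, "Gordon Ramsay": 49,
           "Haute cuisine": 12, "Fast food": 35}

recipes = [
    {"title": "Bacon weave", "author": "Chandler Bing",
     "category": "Meat recipe", "subcategory": "Fast food"},
    {"title": "Beef Wellington", "author": "Gordon Ramsay",
     "category": "Meat recipe", "subcategory": "Haute cuisine"},
]

def score(recipe):
    # Sum the user's points for each of the recipe's tags; unseen tags score 0.
    return sum(profile.get(recipe[tag], 0)
               for tag in ("author", "category", "subcategory"))

# A key function plays the role of the comparison callback here.
recipes.sort(key=score, reverse=True)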
You can use a recursively subdividing MSD sort (a sort of radix sort). It works as follows:
Take the most significant category of each recipe.
Sort the list of elements based on that category, grouping elements with the same category into one bucket (Ramsay bucket, Bing bucket etc).
Recursively sort each bucket, starting with the next category of importance (Meat bucket etc).
Concatenate the buckets together in order.
Complexity: O(kn) where k is the number of category types and n is the number of recipes.
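A rough Python sketch of the bucketing recursion, assuming each recipe is a dict of tags and the user's points live in a profile dict as in the question:

from itertools import groupby

def bucket_sort(recipes, tags, profile):
    # tags lists the tag names in decreasing importance,
    # e.g. ["author", "category", "subcategory"].
    if not tags:
        return recipes
    tag, rest = tags[0], tags[1:]
    # Order by the user's points for this tag; the tie-break on the tag value
    # keeps equal values adjacent so groupby() forms one bucket per tag value.
    ordered = sorted(recipes, key=lambda r: (-profile.get(r[tag], 0), r[tag]))
    result = []
    for _, bucket in groupby(ordered, key=lambda r: r[tag]):
        # Recursively sort each bucket on the next most important tag.
        result.extend(bucket_sort(list(bucket), rest, profile))
    return result

profile = {"Chandler Bing": 100, "Gordon Ramsay": 49, "Fast food": 35}
recipes = [{"author": "Gordon Ramsay", "category": "Meat", "subcategory": "Fast food"},
           {"author": "Chandler Bing", "category": "Meat", "subcategory": "Fast food"}]
print(bucket_sort(recipes, ["author", "category", "subcategory"], profile))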
I think what you're looking for is not a sorting algorithm, but a rating scheme.
You say you want to sort by preferences. Let's assume these preferences have different “dimensions”, like level of complexity, type of cuisine, etc.
These dimensions have different levels of measurement. These can be e.g. numeric or simple categories/tags. It would be your job to:
Create a scheme of dimensions and scales that can represent a user's preferences.
Operationalize real-world data to fit into this scheme.
Create a profile for the users which reflects their preferences. Same for the chefs; treat them just like normal users here.
To actually match a user to a chef (or, even to another user), create a sorting callback that matches all your dimensions against each other and makes sure that in each of the dimension the compared users have a similar value (on a numeric scale), or an overlapping set of properties (on a nominal scale, like tags). Then you sort the result by the best match.
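A toy Python sketch of such a callback; the two dimensions (a numeric complexity scale and a nominal cuisine tag set) and the equal weighting are illustrative assumptions:

user = {"complexity": 3, "cuisines": {"fast food", "meat"}}
chefs = [
    {"name": "Bing", "complexity": 4, "cuisines": {"fast food", "meat"}},
    {"name": "Ramsay", "complexity": 9, "cuisines": {"haute cuisine", "meat"}},
]

def match(a, b):
    # Numeric dimension: closer values score higher (normalized to 0..1).
    numeric = 1 - abs(a["complexity"] - b["complexity"]) / 10
    # Nominal dimension: overlap of the two tag sets (Jaccard index).
    nominal = len(a["cuisines"] & b["cuisines"]) / len(a["cuisines"] | b["cuisines"])
    return (numeric + nominal) / 2

# Sort the result by the best match.
chefs.sort(key=lambda chef: match(user, chef), reverse=True)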
I have an interesting problem that I need help with. I am currently working on a feature of my program and stumbled into this issue:
I have a huge list of street names in Indonesia ( > 100k rows ) stored in a database.
Each street name may have more than 1 word. For example : "Sudirman", "Gatot Subroto", or "Jalan Asia Afrika" are all legit street names
I have a bunch of texts ( > 1 million rows ) in a database that I split into sentences. Now, the feature ( function, to be exact ) that I need to build is a test of whether there are street names inside the sentences or not, so just a true / false test.
I have tried to solve it by doing these steps:
a. Putting the street names into a Key,Value Hash
b. Split each sentences into words
c. Test whether words are in the hash
This is fast, but will not work with multi-word street names
Another alternative that I thought of is to do these steps:
a. Split each sentences into words
b. Query the database with a LIKE statement ( i.e. SELECT #### FROM street_table WHERE name LIKE '%word%' )
c. If query returned a row, it means that the sentence contains street names
Now, this solution is going to be very I/O intensive.
So my question is: what is the most efficient way to do this test, regardless of the programming language? I mainly work in Python, but any language will do as long as I can grasp the concepts.
============ EDIT 1 ============
Will this be periodic?
Yes, I will call this feature / function at an interval of 1 minute. Each call will take at least 100 rows of text and test them against the street name database.
A simple solution would be to create a dictionary/multimap with first-word-of-street-name=>full-street-name(s). When you iterate each word in your sentence you'll look up potential street names, and check if you have a match (by looking at the next words).
This algorithm should be fairly easy to implement and should perform pretty well too.
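A rough Python sketch of this idea, using the street names from the question (real text would also need tokenization and case normalization):

from collections import defaultdict

def build_index(street_names):
    # Multimap: first word of a street name -> all full names (as word lists).
    index = defaultdict(list)
    for name in street_names:
        words = name.split()
        index[words[0]].append(words)
    return index

def contains_street(sentence, index):
    words = sentence.split()
    for i, word in enumerate(words):
        for candidate in index.get(word, ()):
            # Match only if the following words complete the street name.
            if words[i:i + len(candidate)] == candidate:
                return True
    return False

index = build_index(["Sudirman", "Gatot Subroto", "Jalan Asia Afrika"])
print(contains_street("Kantor kami ada di Jalan Asia Afrika nomor 8", index))  # True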
Using NLP, you can determine the proper nouns in a sentence. Please refer to the link below.
http://nlp.stanford.edu/software/lex-parser.shtml
The Stanford parser is accurate in its analysis. Once you have the proper nouns, you can decide the approach to follow.
So you have a document and want to search whether it contains any of your list of street names?
Turbo Boyer-Moore is a good starting point for doing that.
Here is more information on Turbo Boyer-Moore.
But I strongly believe you will have to do something about the organisation of your list of street names. There should be some bucketed access to it, i.e. you can easily filter the street names:
Here is an example:
Street name: Asia-Pacific-street
You can access your list by:
A (getting a starting point for all that start with an A)
AS (getting a starting point for all that start with an AS)
and so on...
I believe you should have lots of buckets for that, at least 26 (first letter) * 26 (second letter)
more information about bucketing
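As one possible Python sketch of that bucketed access (assuming, on my part, that the whole list fits in memory), a sorted list plus binary search gives you a prefix "bucket" without pre-building 26 * 26 of them:

import bisect

streets = sorted(["Asia-Pacific-street", "Asia-street", "Main-street"])

def bucket(prefix):
    # All street names starting with `prefix`, found by binary search.
    # "\uffff" sorts after any ordinary character, so it bounds the prefix range.
    lo = bisect.bisect_left(streets, prefix)
    hi = bisect.bisect_right(streets, prefix + "\uffff")
    return streets[lo:hi]

print(bucket("As"))  # ['Asia-Pacific-street', 'Asia-street']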
The Aho-Corasick algorithm could be pretty useful. One of its advantages is that its run time is independent of how many words you are searching for (it depends only on the length of the text you are searching through). It will be especially useful if your list of street names does not change frequently.
http://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_string_matching_algorithm
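A short sketch with the third-party pyahocorasick package (the package choice is my assumption; any Aho-Corasick implementation will do):

import ahocorasick  # pip install pyahocorasick

automaton = ahocorasick.Automaton()
for name in ["Sudirman", "Gatot Subroto", "Jalan Asia Afrika"]:
    automaton.add_word(name, name)
automaton.make_automaton()  # build the failure links once, reuse for every sentence

def contains_street(sentence):
    # A single pass over the sentence, regardless of how many names are indexed.
    return next(automaton.iter(sentence), None) is not None

print(contains_street("Belok kiri ke Gatot Subroto"))  # True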
I have a list of requirements for a software project, assembled from the remains of its predecessor. Each requirement should map to one or more categories. Each of the categories consists of a group of keywords. What I'm trying to do is find an algorithm that would give me a score ranking which of the categories each requirement is likely to fall into. The results would be used as a starting point to further categorize the requirements.
As an example, suppose I have the requirement:
The system shall apply deposits to a customer's specified account.
And categories/keywords:
Customer Transactions: deposits, deposit, customer, account, accounts
Balance Accounts: account, accounts, debits, credits
Other Category: foo, bar
I would want the algorithm to score the requirement highest in category 1, lower in category 2, and not at all in category 3. The scoring mechanism is mostly irrelevant to me, but needs to convey how much more likely category 1 applies than category 2.
I'm new to NLP, so I'm kind of at a loss. I've been reading Natural Language Processing in Python and was hoping to apply some of the concepts, but haven't seen anything that quite fits. I don't think a simple frequency distribution would work, since the text I'm processing is so small (a single sentence.)
You might want to look the category of "similarity measures" or "distance measures" (which is different, in data mining lingo, than "classification".)
Basically, a similarity measure is a mathematical way to:
Take two sets of data (in your case, words)
Do some computation/equation/algorithm
The result being that you have some number which tells you how "similar" that data is.
With similarity measures, this number is a number between 0 and 1, where "0" means "nothing matches at all" and "1" means "identical"
So you can actually think of your sentence as a vector - and each word in your sentence represents an element of that vector. Likewise for each category's list of keywords.
And then you can do something very simple: take the "cosine similarity" or "Jaccard index" (depending on how you structure your data.)
What both of these metrics do is they take both vectors (your input sentence, and your "keyword" list) and give you a number. If you do this across all of your categories, you can rank those numbers in order to see which match has the greatest similarity coefficient.
As an example:
From your question:
Customer Transactions: deposits, deposit, customer, account, accounts
So you could construct a vector with 5 elements: (1, 1, 1, 1, 1). This means that, for the "Customer Transactions" keyword list, you have 5 words, and (this will sound obvious but) each of those words is present in your search string. Stay with me.
So now you take your sentence:
The system shall apply deposits to a customer's specified account.
This has 3 words from the "Customer Transactions" set: {deposits, account, customer}
(actually, this illustrates another nuance: you actually have "customer's". Is this equivalent to "customer"?)
The vector for your sentence might be (1, 0, 1, 1, 0)
The 1's in this vector are in the same position as the 1's in the first vector - because those words are the same.
So we could say: how many times do these vectors differ? Let's compare:
(1,1,1,1,1)
(1,0,1,1,0)
Hm. They have the same "bit" 3 times - in the 1st, 3rd, and 4th positions. They only differ by 2 bits. So let's say that when we compare these two vectors, we have a "distance" of 2. Congrats, we just computed the Hamming distance! The lower your Hamming distance, the more "similar" the data.
(The difference between a "similarity" measure and a "distance" measure is that the former is normalized - it gives you a value between 0 and 1. A distance is just any number, so it only gives you a relative value.)
Anyway, this might not be the best way to do natural language processing, but for your purposes it is the simplest and might actually work pretty well for your application, or at least as a starting point.
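As a minimal Python sketch of the Jaccard variant on your example (cosine works the same way on the vectors; note this does no stemming, so "customer's" will not match "customer"):

def jaccard(a, b):
    # |intersection| / |union|: 0 means nothing shared, 1 means identical sets.
    return len(a & b) / len(a | b)

requirement = set(
    "the system shall apply deposits to a customer's specified account".split())
categories = {
    "Customer Transactions": {"deposits", "deposit", "customer", "account", "accounts"},
    "Balance Accounts": {"account", "accounts", "debits", "credits"},
    "Other Category": {"foo", "bar"},
}

# Rank the categories by similarity to the requirement.
for name, kws in sorted(categories.items(),
                        key=lambda kv: jaccard(requirement, kv[1]), reverse=True):
    print(name, round(jaccard(requirement, kws), 3))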
(PS: "classification" - as you have in your title - would be answering the question "If you take my sentence, which category is it most likely to fall into?" Which is a bit different than saying "how much more similar is my sentence to category 1 than category 2?" which seems to be what you're after.)
good luck!
The main characteristics of the problem are:
Externally defined categorization criteria (keyword list)
Items to be classified (lines from the requirements document) are made of a relatively small number of attribute values, for effectively a single dimension: "keyword".
As defined, no feedback/calibration (although it may be appropriate to suggest some of that)
These characteristics bring both good and bad news: the implementation should be relatively straightforward, but a consistent level of accuracy in the categorization process may be hard to achieve. Also, the small amounts of the various quantities (number of possible categories, max/average number of words per item, etc.) should give us room to select solutions that may be CPU and/or space intensive, if need be.
Yet, even with this license to "go fancy", I suggest starting with (and staying close to) a simple algorithm, and expanding on this basis with a few additions and considerations, while remaining vigilant of the ever-present danger called overfitting.
Basic algorithm (conceptual, i.e. no focus on performance tricks at this time)
Parameters =
CatKWs = an array/hash of lists of strings. The list contains the possible
keywords, for a given category.
usage: CatKWs[CustTx] = ('deposits', 'deposit', 'customer' ...)
NbCats = integer number of pre-defined categories
Variables:
CatAccu = an array/hash of numeric values with one entry per each of the
possible categories. usage: CatAccu[3] = 4 (if array) or
CatAccu['CustTx'] += 1 (hash)
TotalKwOccurences = counts the total number of keyword matches (counting
a word multiple times when it is found in several pre-defined categories)
Pseudo code: (for categorizing one input item)
1. for x in 1 to NbCats
CatAccu[x] = 0 // reset the accumulators
2. for each word W in Item
for each x in 1 to NbCats
if W found in CatKWs[x]
TotalKwOccurences++
CatAccu[x]++
3. for each x in 1 to NbCats
CatAccu[x] = CatAccu[x] / TotalKwOccurences // calculate rating
4. Sort CatAccu by value
5. Return the ordered list of (CategoryID, rating)
for all corresponding CatAccu[x] values above a given threshold.
Simple but plausible: we favor the categories that have the most matches, but we divide by the overall number of matches as a way of lessening the confidence rating when many words were found. Note that this division does not affect the relative ranking of the categories selected for a given item, but it may be significant when comparing the ratings of different items.
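A direct Python translation of the pseudo code above (the sample keyword sets are taken from the question):

def categorize(item_words, cat_kws):
    # cat_kws maps a category name to its keyword set.
    cat_accu = {cat: 0 for cat in cat_kws}           # step 1: reset the accumulators
    total = 0
    for word in item_words:                          # step 2: count keyword matches
        for cat, keywords in cat_kws.items():
            if word in keywords:
                total += 1
                cat_accu[cat] += 1
    if total:                                        # step 3: normalize into ratings
        cat_accu = {cat: n / total for cat, n in cat_accu.items()}
    # steps 4-5: return (category, rating) pairs above a threshold, best first
    return sorted(((c, r) for c, r in cat_accu.items() if r > 0),
                  key=lambda cr: cr[1], reverse=True)

cat_kws = {"CustTx": {"deposits", "deposit", "customer", "account", "accounts"},
           "BalAcct": {"account", "accounts", "debits", "credits"}}
print(categorize("the system shall apply deposits to a customer account".split(), cat_kws))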
Now, several simple improvements come to mind: (I'd seriously consider the first two, and give thoughts to the other ones; deciding on each of these is very much tied to the scope of the project, the statistical profile of the data to be categorized and other factors...)
We should normalize the keywords read from the input items and/or match them in a fashion that is tolerant of misspellings. Since we have so few words to work with, we need to ensure we do not lose a significant one because of a silly typo.
We should give more importance to words found less frequently in CatKWs. For example, the word 'account' should count for less than the word 'foo' or 'credit'.
We could (but maybe that won't be useful or even helpful) give more weight to the ratings of items that have fewer [non-noise] words.
We could also include considerations based on digrams (two consecutive words), for with natural languages (and requirements documents are not quite natural :-) ), word proximity is often a stronger indicator than the words themselves.
We could add a tiny bit of importance to the category assigned to the preceding (or even the following, in a look-ahead logic) item. Items will likely come in related series and we can benefit from this regularity.
Also, aside from the calculation of the rating per-se, we should also consider:
some metrics that would be used to rate the algorithm outcome itself (tbd)
some logic to collect the list of words associated with an assigned category and to eventually run statistics on these. This may allow the identification of words representative of a category that were not initially listed in CatKWs.
The question of metrics should be considered early, but this would also require a reference set of input items: a "training set" of sorts, even though we are working off a pre-defined dictionary of category-keywords (typically, training sets are used to determine this very list of category-keywords, along with a weight factor). Of course such a reference/training set should be both statistically significant and statistically representative [of the whole set].
To summarize: stick to simple approaches; anyway, the context doesn't leave room to be very fancy. Consider introducing a way of measuring the efficiency of particular algorithms (or of particular parameters within a given algorithm), but beware that such metrics may be flawed and prompt you to specialize the solution for a given set to the detriment of the other items (overfitting).
I was also facing the same issue of creating a classifier based only on keywords. I had a class-keywords mapper file which contained a class variable and a list of keywords occurring in that particular class. I came up with the following algorithm, and it is working really well.
# predictor algorithm -- docKywrdmppr maps each class ('Document Type')
# to its keyword list ('Keywords'); removeStopWords() returns a word list
predictedDoc = []
for doc in readContent:
    catAccum = [0] * len(docKywrdmppr)   # one accumulator per class
    for i in range(len(docKywrdmppr)):
        keywords = {w.casefold() for w in removeStopWords(docKywrdmppr['Keywords'][i])}
        for word in removeStopWords(doc):
            if word.casefold() in keywords:
                catAccum[i] += 1         # one point per keyword hit
    ind = catAccum.index(max(catAccum))  # the class with the highest score wins
    predictedDoc.append(docKywrdmppr['Document Type'][ind])
A large international company deploys a new web and MOTO (Mail Order and Telephone Order) handling system. Among other things you are tasked to design format for both order and customer identification numbers.
What would be the best format in your opinion? Please list any assumptions and considerations.
Accepted Answer
Michael Haren's answer was selected as it received the most upvotes, but please do read the other answers and comments, as they make Michael's answer more complete.
Go with all numbers or all letters. If you must mix it up, then make sure there are no ambiguous characters (Il1m, O0, etc.).
When displayed/printed, put spaces every 3-4 characters, but make sure your systems can handle input without the spaces.
Edit:
Another thing to consider is having a built in way to distinguish orders, customers, etc. e.g. customers always start with 10, orders always start with 20, vendors always start with 30, etc.
DON'T encode ANY mutable customer/order information into the numbers! And you have to assume that everything is mutable!
Some of the above suggestions include a region code. Companies can move. Your own company might reorganize and change its own definition of regions. Customer/company names can change as well.
Customer/order information belongs in the customer/order record. Not in the ID. You can modify the customer/order record later. IDs are generally written in stone.
Even just encoding the date on which the number was generated into the ID might seem safe, but that assumes that the date is never wrong on the systems generating the numbers. Again, this belongs in the record. Otherwise it can never be corrected.
Will more than one system be generating these numbers? If so, you have the potential for duplication if you use only date-based and/or sequential numbers.
Without knowing much about the company, I'd start down this path:
A one-character code identifying the type of number. C for customers, R for orders (don't use "O" as it could be confused with zero), etc.
An identifier of the system that generated the number. The length of this identifier depends on how many of these systems there will be.
A sequence number, unique to the system generating it. Just a counter.
A random number, to prevent guessable order/customer numbers. Make this as long as your paranoia requires.
A simple checksum. Not for security, but for error checking.
Breaking this up into segments makes it more human-readable as others have pointed out.
CX5-0000758-82314-12 is a possible number generated by this approach. It consists of:
C: it's a customer number.
X5: the station that generated the number.
0000758: this is the 758th number generated by X5. We can generate 10 million before retiring this station ID or the station itself. Or don't pad with zeros and there's no limit.
82314: this was randomly generated and results in a 1/100,000 chance of guessing a customer ID.
12: checksum.
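The answer doesn't pin down a checksum scheme, so as an illustrative Python sketch, here is one way to derive a two-digit check like the "12" above (a position-weighted digit sum; a Luhn check would be another common choice):

def checksum(s):
    # Position-weighted digit sum mod 97: two digits, catches most
    # single-digit typos and many transpositions. Not for security.
    return sum(i * int(c) for i, c in enumerate(s, 1) if c.isdigit()) % 97

base = "CX5-0000758-82314"
print(f"{base}-{checksum(base):02d}")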
A primary advantage of using only numbers is that they can be entered much more efficiently using 10-key.
The length of that number should be as short as possible while still encompassing the entire entity space you expect to catalog with room to spare. This can be tricky and should be given a bit of thought. A little set theory can give you the number of unique keys you will have access to, given a group of elements.
It is natural when speaking, to break numbers up into sets of two to four digits. By inserting dashes in some pattern, you can "force" the customer to repeat them in a more efficient and unambiguous manner.
For instance, 323-23-5344, which, of course, is social security number format, helps to inform the speaker where to pause when vocalizing the number. It also provides a visual delineation when writing the number and makes it easy to compare when copying the number.
I second the recommendation that the ordering system masks the input correctly so that no dashes need to be entered at any time. This should be carried through to printed forms to provide a clear expectation of what should be entered. For instance, a printed box for each digit separated by printed dashes.
I disagree that too much information should be embedded in this number especially if those attributes might change. For instance, say we give "323" the meaning of "is a nice customer" but then they call in four times with an attitude. Are we then going to change their customer key to "324", "is a jerk"? What if they are in region 04 and move their company to region 05?
If that happens, your options will be to update that primary key throughout the database or live with the ambiguity that the information embedded in that key is no longer reliable, thus rendering all of the information embedded in the keys of questionable utility.
It is better to store attributes that may change as separate fields in the database and have the customer number be a unique, unchanging key for that customer.
To build on Daniel and Michael's questions: it's even better if the separated numbers MEAN something else. For example, I worked for a company where account numbers were like this:
xxxx-xxxx-xxxxxxxx
The first set of numbers represented the region and the second set represented the market within that region. Once you got used to knowing which numbers were from where, it was really easy to tell what area an account was in without even having to look at the customer's account.
There are several assumptions that I make when answering this question; some are based on the fact that it is a large international organization, and some are based on the fact that the format is for two separate table types.
Assumptions based on the fact that it's an international organization:
It is probable that each region will need to operate independently -- that is, region A must be able to add customer numbers independently from region B
Each region probably uses a different language, so to make the identifiers easily typeable by users around the world, it is best to stick to numbers and spaces only.
Assumptions based on the fact that there are two tables for which this format will be used:
This format may be used by more than the two tables listed, so it should be able to handle an arbitrarily large number of tables.
Experienced users should be able to know what type of identifier they are looking at based on information encoded into the identifier itself.
It would be nice if identifiers were globally unique within the entire system.
Considerations:
For a global company, identifiers can be very long if only numerics are used. We should attempt to limit the amount of extraneous information encoded into the identifier as much as possible.
Identifiers should be self-verifiable to a limited extent; that is a program should be able to detect a large percent of invalid identifiers without looking anything up at all. This implies a checksum.
Proposed format:
SSSS0RR0TTC
The format proposed is as simple as possible, but no simpler:
C The first (rightmost) character will be a checksum of all other characters in the identifier. A simple checksum will do. This will eliminate 90% of all typing errors. If it is decided that this is not enough, then this can be expanded to 2 digits which will eliminate 99% of all typing errors.
TT The next N digits represent the table type number. No table type number can contain the digit zero.
The next digit is a zero. This zero separates the table type number from the region number.
RR The next N digits are the region number. No region numbers can contain a zero.
The next digit is a zero. This zero separates the region from the sequence number.
SSSS The next N digits are the sequence number. This number can contain zeros.
Each set of four digits is separated by spaces when printed or typed, by convention. Internally they are not separated, but this helps the user transfer them correctly.
Examples
Assuming:
Customer table type=1
Order table type=2
Region code for US-Alabama=1
Region code for CA-Alberta=43
Region code for Ethiopia=924
10 1013 - Customer #1 in Alabama (3 is the checksum: 1 + 1 + 1)
10 1024 - Order #1 in Alabama
9259 0304 3016 - customer # 925903 in Alberta, Canada
20 3043 4092 4023 - order number 2030434 in Ethiopia
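For concreteness, a small Python sketch that assembles these identifiers; the "simple checksum" is not specified above, but a plain digit-sum mod 10 is consistent with all three examples:

def make_id(seq, region, table_type):
    # SSSS 0 RR 0 TT, then the checksum digit C appended on the right.
    body = f"{seq}0{region}0{table_type}"
    return body + str(sum(int(d) for d in body) % 10)

print(make_id(1, 1, 1))           # 101013         -> "10 1013"
print(make_id(925903, 43, 1))     # 925903043016   -> "9259 0304 3016"
print(make_id(2030434, 924, 2))   # 20304340924023 -> "20 3043 4092 4023"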
Advantages of this approach:
90% of mistyped numbers will be caught
There are an unlimited number of table types
There are an unlimited number of regions
There are an unlimited number of sequential numbers for each table
Identifier numbers are globally unique to the system. This is important - a customer number cannot be mistaken for an order number and vice versa.
Each region can independently add sequence numbers without a global key
Disadvantages
Each identifier is at least six characters
Table type numbers and region numbers cannot contain a zero, because zero is used to separate the sequence number from the region number, and the region number from the table type number.
Make the number as long as necessary, but not any longer. Every time I pay my water bill, I have to enter my 20-digit customer number, and an 18-digit invoice number. Thankfully, a dash in my customer number separates it into two parts.
Do not depend on leading zeros. Having to figure out how many zeros are in my invoice number is extremely annoying. Take 000000000051415432 for example. Their system won't recognize just 51415432.
Group digits together. If you absolutely have to use long numbers, four-digit chunks should work well.
I would never use user information in IDs. Suppose you use the first letters of the customer's last name followed by some number: e.g. Thomson could be customer THOM-0001.
Only, it appears you made a mistake, and the man's name is Tomson instead of Thomson. User data can be corrected, IDs should never be modifiable. So next time you look up Tomson under TOMS-... you can't find him. Same with other data, like a customer type. It can always change, the ID can't.
This is very basic RDBMS practice.
Simply use counting numbers. For readability it's a good idea to insert separators such that you never have more than 4 successive digits: 9999-9999 is better than 999-99999. And don't make the number longer than necessary; people are much more annoyed by being reduced to a 20 digit number than just being reduced to a number.
There's a catch, though. Especially if you have a small business, simple counters can give more away than you would appreciate. Say I order something from you, and the order number is 090145. Next month I order again, and the order number is 090171. Er.. 26 orders in a month? Likewise, I wouldn't feel comfortable becoming customer 0006 in a business which has been active for 10 years.
The solution is simple: skip numbers. Don't use random numbers, because you still want them to be in sequence.
I would have my order numbers follow this format:
ddmmyyyy-####-####
Where ####-#### resets to zero at the beginning of every day. This makes it very easy to correlate orders with the dates they were placed.
For customer IDs, I would mix capital letters and numbers, but as Michael said, avoid commonly mistaken letters (0, o, L, 1, 5, s). This will give you 30 characters to deal with. If you use 20 characters, that will give you well over a 64-bit range of customer IDs -- pretty good for security. Make sure you use a secure random number generator when generating IDs. As for how you display the format, it should be the following:
####-####-####-####-####
As Michael said again, make sure your system can deal with dashes, spaces, no spaces, or no dashes. (It should just strip all those characters from the input before validation.)
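A possible sketch of that generation step in Python, using the standard secrets module (the exact 30-character alphabet is my reading of the excluded letters above):

import secrets

# Digits and capitals minus the ambiguous 0, 1, 5, o, L, s: 30 symbols.
ALPHABET = "2346789ABCDEFGHIJKMNPQRTUVWXYZ"

def customer_id(length=20, group=4):
    raw = "".join(secrets.choice(ALPHABET) for _ in range(length))
    # Display with dashes every 4 characters; strip them again on input.
    return "-".join(raw[i:i + group] for i in range(0, length, group))

print(customer_id())  # e.g. 7Q3A-9MKX-42TB-CGH8-EJ2N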
I hope that helps!
You may add a small checksum (using XOR, for instance) to ensure (enhance) the correctness of given IDs.
If it's by mail, consider z-base-32 encoding. But here, with telephone orders, you may prefer decimal identification.
assuming that the creation of orders/customers is not centralized, or will not always be centralized, use a GUID
if the creation of orders/customers will always be centralized, an unsigned integer would be fine
there is no compelling reason for the order number or customer number to "mean" anything, and it is likely that any segmented number scheme invented will have to be overhauled down the road. Stick to something unique and meaningless.
EDIT: for MOTO, any multi-character alphabetical identifier will cause problems over the phone, so GUIDs are right out. Assuming multiple decentralized MOTO locations, assign each MOTO location a prefix (A, B, C, etc., or 01, 02, ...) and use an integer or big-integer for the customer and order IDs, e.g. 01-1 is the first order from MOTO location #1. Note that zero-padding is unnecessary, imposes an implicit digit limit to the numbers, and requires the customer to distinguish between six zeros and seven zeros when speaking the number. If you must use a padded fixed-length format, break the number up into groups of no more than 4 or 5 digits each.
ADDENDUM: the order number and the customer number do not have to be the primary keys of their respective tables, just unique indexed columns for lookup. You'll probably want to use something simpler/more efficient for the primary keys in the database.
We use leading zeroes for some of our references "numbers" where I work and I can't tell you how many wasted hours I've had over the last seven years forcing Excel to treat them as text. Don't do it.
Auto-incrementing integers are all well and good for computers, but they greatly reduce human beings' ability to spot errors. How important that is will depend on your business. I work with property (housing) related data and our primary reference has the front door number embedded in it. It's not elegant, but it means that experienced admin staff can spot 90% of minor errors (when we get invoices, etc. in) before they get near a database. But in an environment where you're not relying on that kind of process, this argument is less compelling.
(Now, some folks have strongly warned about using meaningful data in references as it could be changed, and there's some truth in that, but you can be smart. You don't have to pick something obviously fickle like whether the person is married - you can anchor yourself on past events, like a character representing the region in which they first opened a particular account. Even if you don't do that, have some kind of pattern to help communication with customers. I've worked in a number of call centres, and people sometimes come to the phone with every piece of documentation from birth certificate onwards as they desperately try to find their account/order/customer number. I don't think saying "It'll be a number between 1 and 100 trillion" would be very handy)
It's been said, but don't create enormously long references. We're busy people, we haven't got time to be keying in this crap over a phone system and making a mistake on digit 17 only to restart (again). Some of your customers may have disabilities and it's likely a growing number will be over 55+. Once again, watch out for the zeroes. You see purchase order numbers and the like with fourteen digits. How many orders do they think they're going to be placing?
If there's going to be any data aggregation outside of your network (and thus not connected to your database), have some sort of check digit/regular expression pattern which your partners/suppliers can use to verify they've not made mistakes. The UK's electricity supply numbering system (MPAN) is a good example of this - designed for people to maintain their own records without having to download the big list of every electricity meter in the universe to check they've not made a typo.
I would use numbers only since it is an international company. I would use spaces or dashes every 4-6 numbers to separate them. I would also keep the formats distinct for quick identification.
Example:
000-00000-00000 - could be a customer number
00000-00000-00000-00000 - could be an order number
Stick to numbers (no chars or special stuff):
Can be easily input in an IVR flow
It's international - no language hassles
No confusion in chars vs. numbers - O vs. 0, I vs. 1
As long as leading 0 is meaningless, you can store/manipulate them more effectively
I would use a completely numeric systems for both Order Number and Customer Number, this will allow you to avoid issues with other languages.
Avoid leading zeros, as this can cause issues with data entry and validation.
The number of digits for each will depend on your expected volume. You will always have a greater number of Order Numbers than Customer Numbers. A six-digit Customer Number starting at 100000 will still give you 900,000 customers. Adding 3-4 digits for the order number will give you 999 to 9,999 orders per customer (more if you consider one-off customers).
There is no need to build any sort of identification into your numbering sequence. You have other database fields to identify where a customer is from, etc. Do not overly complicate your system.
KISS (keep it simple stackoverflow)
I would suggest using 16 digit identifiers that when printed or shown to customers are formatted in the format of xxxx-xxxx-xxxx-xxxx but stored as numbers without the dashes in your system.
The reason for using this format is that it makes it easier for people reading the number out over the phone, as they can do it in batches of 4 rather than trying to remember how much they have said already.
If you wish, the first 4 digits can be used to identify the type of number: 1000 for customers, 2000 for suppliers, 3000 for orders, 4000 for invoices, etc.
The second set can then be a year/month identifier if you wish to keep that sort of information encoded in the number itself, using a format of yymm; so 1000-0903-xxxx-xxxx would be a customer entered in March 2009.
This then leaves you with 8 digits for the actual data itself.
I would consider the use of letters in the identifiers to be a very bad idea for any system that deals with telephones, as the differences in accents and understanding are so great that people are bound to get upset trying to get their identifier recognised by someone who cannot understand their accent properly.
An additional consideration to the format issue: in the code, create separate classes for OrderId and CustomerId. These classes are immutable and validate their input to ensure that they are acceptable IDs. Also, no value could be both an order ID and a customer ID.
The simplest approach would just be to have the backing values for OrderId be ints that start with 1, and CustomerIds be ints that start with 2, or something similar.
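A minimal Python sketch of those value types (frozen dataclasses standing in for immutable classes; the leading-digit rule follows the suggestion above):

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen -> instances are immutable
class OrderId:
    value: int
    def __post_init__(self):
        if not str(self.value).startswith("1"):
            raise ValueError(f"not an order ID: {self.value}")

@dataclass(frozen=True)
class CustomerId:
    value: int
    def __post_init__(self):
        if not str(self.value).startswith("2"):
            raise ValueError(f"not a customer ID: {self.value}")

order = OrderId(100042)   # fine: starts with 1
# CustomerId(100042)      # would raise ValueError: no value is both kinds of ID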
Wow - what a simple yet revealing question! And what a lot of contradictory answers. I think there are 3 obvious candidate answers here:
1) Use an autoincrementing long integer.
2) Use a GUID
3) Use a compound type that includes other information in the ID.
For simpler systems, and especially web-based systems where all users are hitting a central database, (1) works well. It has the advantage that numbers stay as short and simple as possible, but no shorter, and it avoids alphabetic characters (you would be amazed how different the names for the same letters are in different countries - one country's E is another country's I). It does not intrinsically differentiate the order ID from the customer ID, but you could always prepend or append a "C" or "O" to each and silently drop them on entry.
It also does not have a checksum or error check.
For distributed systems where many software components need to create the numbers on the fly, without reference to a master database, (2) is the only way to go. They have the advantage of being largely self-error-checking, since the address space is so large, but by the same token they are too long and alphanumeric to comfortably read over the telephone.
As for (3) - embedding region information or today's date into the number - those are the sorts of ideas that experienced developers train themselves out of. It looks like a good idea at first, but always comes back to haunt you. Consider the case where a customer moves to a new state, or an order is manually rekeyed a week after it was originally issued. These items of information belong in related tables where they can be edited independently of the ID, which should represent the entity's identity only.
To repeat: NEVER ENCODE BUSINESS DATA IN AN ID OR PRIMARY KEY - every time you do that you leave a time bomb for others to clean up one day.
Given that this is a centralised (phone-based) system, I would go with option (1) until a clear need arose to change. Simpler is usually better. Insert hyphens as others suggest, and prepend or append a checksum and/or identifying letter if required.
First step: in an org sufficiently large to require such a system, there is an existing system that you're replacing. Continue the previous system's scheme, if possible. It makes a lot of things easier if you can access, even at a basic level, the data from the old system.
That said, there's often a good reason to change the scheme, particularly when it's coming from a legacy system. I find, though, that it's often helpful to formally rule out the old scheme before proceeding.
Second step: systems like this never exist in a vacuum. Is there already an organization-wide scheme for user and/or order IDs, such as in the accounting, inventory management, or CRM system? If so, consider adopting the existing schemes to make interoperability easier. Many large orgs have multiple ways to specify a single customer or order, and it just makes getting useful intelligence out of the data that much harder.
Third step: if the old system's scheme is too awful to continue and there's no other scheme to adopt, roll your own. In this case, look at the shortcomings of the original scheme, whatever they are, and correct them. The right answer will depend on the specific requirements of the application. The problem statement you've given us is too vague to speculate usefully on what the final form might look like.
I always stick with auto-increment numbers, and I always seed the sequence high enough so that they will all have a consistent number of digits - seems to be less confusing.
I also sometimes start order numbers, say 6 digits, at 200,000 and customer numbers, 5 digits, at 10,000, which would for example give me 90,000 unique customer numbers and 800,000 unique order numbers to use, and you can always tell just by looking at a number whether it is a customer number or an order number (i.e. if a customer rep was asking for a number over the phone, it would immediately be obvious which was which).
I would not however build logic in the app that would depend on that, so even if it did roll over, the system wouldn't care.
The biggest issue here is to try not to overthink the problem.
Although I'm more experienced in e-commerce systems I think some of the points made in this post could be applied to mail order and telephone order systems.
For orders, an auto-increment integer works perfectly as the primary key in the database as well as the number that the customer will see on his/her invoice. There is absolutely no reason to create some overcomplicated algorithm for your numbers. If you want to tell which country/region they're from, use a separate field in your database. Also, if you are concerned about your competitors spying on you: let them! If your business revolves around spying on your competitors because you're not generating enough revenue, then most likely your business idea isn't good in the first place. And if you wanted to fool your competitor, you could just create a script that auto-creates fake orders. If your e-commerce system is well designed then this won't be an issue.
Key stuff using an auto-increment integer:
All numbers/digits => easier to communicate, no ambiguities over the phone, works for all languages/cultures that use 0-9 as their numerical system
No extra coding
Looks nice on the invoice and it's the shortest possible number of digits a customer would ever need to spell out over the phone
Works for small AND large businesses
It's scalable
Service-minded/customer-minded (what's best for the customer; see bullets 1 and 3)
Simple
Whenever or whatever you're designing should always begin with what's best for the customer. At the end of the day they are the ones putting food on your table. A happy customer is a returning customer.
For me, my preference is a combination of the date plus a counter for today's transactions. I was challenged to come up with an order number of only 5 digits. So with that, I came up with the following:
I get the current date, then
get the current counter for today's transactions and add 1.
I decided to use a base larger than decimal (10), so I count in base 16. With that, the maximum 5-digit hexadecimal value (FFFFF) gives 1,048,575 counts. By involving the date, I can get up to 1,048,575 counts per day. To make that count unique every day, I mix in the date by taking the sum of the following:
Current year count, starting from the year of implementation, which is 1
Current hour (max is 24)
Current day of the year (max is 365)
So with that, I will have at most a 3-character start for my counting. The count will be XXX + today's current transaction. Example:
Current Date:
2014-12-31 01:22 PM
Implementation date: 2010
Running total for today's transaction: 100
Count: (5 + 13 + 365) = 383, concatenated with 101 = 383101
Order Number: AD-5D87D
The AD there is just a custom order number prefix. So by the time I run out of order numbers, it will be a very long time after my implementation date.
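A Python sketch of the computation (my reconstruction of the scheme described above; the counter handling is simplified):

from datetime import datetime

def order_number(now, implementation_year=2010, todays_count=100, prefix="AD"):
    # Date mix: years since implementation (starting at 1) + hour + day of year.
    mixed = (now.year - implementation_year + 1) + now.hour + now.timetuple().tm_yday
    # Concatenate the mix with today's counter + 1, then render in base 16.
    count = int(f"{mixed}{todays_count + 1}")
    return f"{prefix}-{count:X}"

print(order_number(datetime(2014, 12, 31, 13, 22)))  # AD-5D87D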
Anyway, this is not a good solution if you think your transactions per day can be as high as 1,000,000 counts.