Are there any alternative explanations for why the Luhn algorithm is good? - algorithm

Good Evening guys,
I built a simple Java program that validates credit card numbers using Luhn's algorithm (mod 10).
I found a lot of explanations of why this formula is the right one to use, but I still do not fully understand it.
Can anyone explain to me why it is a good formula to use for such validation?

It is simply because all the companies agreed to follow this formula, and they do not allow any credit card or bank account number to exist unless it follows the Luhn format.
The reason is to limit the chance of human error.
If you make a single typo, the number will not pass the Luhn algorithm. Why does it catch every single typo? Because the check works mod 10: in every position, distinct digits contribute distinct values mod 10, so changing one digit shifts the checksum by a nonzero amount and the mod 10 check detects it.
If you mistype two digits and are unlucky (only 10% of all digit strings are valid, so there is roughly a 10% chance), you can still send money to the wrong account.
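The check described above can be sketched as follows. This is a minimal Python version (the question mentions a Java program, but the algorithm is the same); 79927398713 is a commonly cited Luhn test number:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn (mod 10) check."""
    digits = [int(d) for d in number]
    # Double every second digit from the right; a two-digit result
    # has 9 subtracted (same as summing its two digits).
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

print(luhn_valid("79927398713"))  # widely used Luhn test number: True
print(luhn_valid("79927398714"))  # same number with a one-digit typo: False
```

Because the doubling map sends distinct digits to distinct values mod 10, any single-digit change moves the checksum off 0, which is exactly the detection property discussed above.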

Related

App Inventor app does not take answers in divisions

I am trying to make an app that has the four basic mathematical operations: addition, subtraction, multiplication and division.
Each operation has a series of exercises, and each correct exercise adds a point to a score counter.
I should clarify that both the exercises and the answers are selected at random; they are not questions and answers stored in a list.
Everything is ready and finished, but I have a problem with division, and it is as follows.
If, for example, the result of the division has at most two decimals, the score counter accepts the selected answer as correct. But if the result of the division has more than two decimals, the score counter does not accept the answer as correct.
Example:
20/8 = 2.5. No more decimals, so the score counter accepts it as the correct answer.
9/7 = 1.28571428571. This answer has many decimals, so the score counter does not accept it as the correct answer.
The problem is not rounding the figures or formatting the number of decimals. The problem is that, for some reason, answers with more than two decimals are not accepted as correct.
No matter whether I round the result to an integer or keep only two decimals for each result, the score counter does not show the result as correct.
For example, if I take the division 9/7 = 1.28571428571 and keep only two decimals, leaving it as 1.28, the score counter does not accept this result as correct.
Even if I round the result to 1, the same problem occurs.
How can this be fixed?
Many thanks to anyone who can help me find a solution.
P.S.: I'm not a programmer, just an amateur who is only starting out, so I would appreciate answers suitable for a layman like me. Thanks in advance.
Here are the blocks
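Though the question is about App Inventor blocks, the likely culprit is the same in any language: the raw quotient keeps all of its decimals, so an exact equality test against a rounded answer fails. A hypothetical Python sketch of the symptom and the usual fixes:

```python
quotient = 9 / 7    # 1.2857142857..., the full-precision result
selected = 1.29     # a hypothetical answer rounded to two decimals

# Exact comparison fails: the full quotient is not equal to its rounding.
print(selected == quotient)              # False

# Fix 1: round BOTH sides to the same number of decimals before comparing.
print(round(quotient, 2) == selected)    # True

# Fix 2: accept any answer within a small tolerance.
print(abs(quotient - selected) < 0.005)  # True
```

In App Inventor this would mean applying the same rounding or "format as decimal" block to both the computed result and the selected answer before the comparison block.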

Converting Logarithmic Bases and Approximating Exponents

I'm studying for an algorithms final. There are some sample questions that I'd like to know how to solve. No calculators are permitted, and the methods used are more relevant to computer science than algebra. I'll try to be as specific as possible, because researching this has only led me to change-of-base formulas that require a calculator.
What is (approximately) the Log base 2 of 130,000,000?
What is (approximately) the Log base 4 of 1,000,000?
I'd also appreciate it if someone would explain base conversions without a calculator to me. Thank you. If this post is inappropriate, I'll remove it without hesitation.
Edit: Answers delivered in log(x)/log(r) will not be considered correct. Thank you.
What you need to do is make your own table during the exam.
2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, ....
Find the power of 2 that is closest to the value you are looking for. If you are asked for log2(1000), you'd see the closest is 1024 (2**10). Ans: 10.
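Worked through on the two exam questions, the bridge is 2**10 = 1024 ≈ 10**3, so every factor of 1000 costs about 10 doublings. A quick Python check of the approximations:

```python
import math

# log2(130,000,000): 130,000,000 ≈ 128 * 10**6 = 2**7 * (10**3)**2
#                                ≈ 2**7 * (2**10)**2 = 2**27, so about 27.
print(math.log2(130_000_000))    # ≈ 26.95

# log4(1,000,000): log4(x) = log2(x) / log2(4) = log2(x) / 2, and
# 1,000,000 = (10**3)**2 ≈ (2**10)**2 = 2**20, so about 20 / 2 = 10.
print(math.log(1_000_000, 4))    # ≈ 9.97
```

Both estimates land within a few percent of the exact values, which is all an approximate, no-calculator answer needs.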
log base r (x) may be evaluated as log(x)/log(r)

Reverse engineering a check digit algorithm

I am trying to reverse engineer an algorithm used to generate a check digit.
Numbers are 8 digits long and the last digit is the check digit. I have thousands of valid numbers to test it on.
I have tried the standard Luhn, Verhoeff and modulo-10 algorithms (brute-force checking of all possible weights), but could not find an answer!
Is it possible to calculate this? Any ideas?
Here are some examples of valid numbers:
1002784-5
1000514-7
1001602-8
1001255-2
1001707-1
1003355-5
1005579-1
1004535-0
1004273-1
1001695-9
1004565-9
1000541-9
1001291-1
1005866-1
1004352-7
EDIT:
Thanks guys - I don't have access to the code, unfortunately. The number is a tax number; I need to be able to verify that it was typed in correctly. From my research it looks like most countries use a fairly standard modulo-10-type system. I have access to about 60,000 numbers.
I understand that the problem could be impossible to solve, it was more of academic concern.
First check your context:
If the context is credit cards, driver's licenses, or government licensing numbers (not SSNs), think Luhn or Mod 10. If it is some other industry, does that industry have a de facto standard? If not, is the developer of the system using the numbers also a player in an industry that has a de facto standard?
Nobody likes to reinvent the wheel if they don't have to.
If that doesn't help keep in mind:
Don't assume that all the digits in the keys you are testing against are used to arrive at the check digit. It's possible only 4 of the 8 digits are being used to calculate the check digit (or any other combination). It's also possible there is some external PREFIX number that is used with the other digits to arrive at the check digit. So line up all your numbers that share the same check digit, and see what the similarities are. Can you add a number to them and then always reach the check digit? Can you test only the first few digits? The last few digits? Every other digit?
Good luck.
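The "test only some digits, try different weights" idea above can be brute-forced directly. A hypothetical Python sketch, with the weight range capped for brevity and the sample numbers taken from the question; the trailing `% 10` maps a mod-11 remainder of 10 to check digit 0, a common mod-11 convention:

```python
from itertools import product

samples = ["10027845", "10005147", "10016028"]  # 7-digit body + check digit

def find_weights(samples, modulus, max_weight=9):
    """Yield every weight vector w such that, for every sample,
    sum(w[i] * digit[i]) % modulus (reduced mod 10) equals the check digit."""
    for w in product(range(1, max_weight + 1), repeat=7):
        for s in samples:
            body, check = [int(c) for c in s[:7]], int(s[7])
            # The trailing % 10 maps a mod-11 remainder of 10 to 0.
            if sum(wi * di for wi, di in zip(w, body)) % modulus % 10 != check:
                break
        else:
            yield w

# With more samples, the surviving weight vectors shrink toward the real
# scheme (or toward nothing, if the scheme is not a weighted sum at all).
for m in (10, 11):
    print(m, len(list(find_weights(samples, m, max_weight=4))))
```

Extending the same loop with offsets, digit subsets, or position-dependent transforms (such as Luhn's doubling) covers most of the standard schemes.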
Count how many times each of the digits 0-9 appears as the check digit in your data (60 thousand numbers). If the digit 0 occurs about twice as often as the other digits, it suggests that the algorithm uses a modulo 11 operation: in such algorithms, if the sum mod 11 = 10, the check digit is set to 0.
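A sketch of that diagnostic on the 15 sample numbers from the question (with the full ~60,000 numbers, any skew would be much clearer):

```python
from collections import Counter

numbers = [
    "1002784-5", "1000514-7", "1001602-8", "1001255-2", "1001707-1",
    "1003355-5", "1005579-1", "1004535-0", "1004273-1", "1001695-9",
    "1004565-9", "1000541-9", "1001291-1", "1005866-1", "1004352-7",
]

# Tally the final (check) digit of every number.
counts = Counter(n[-1] for n in numbers)
print(counts.most_common())
```

In this tiny sample '1' appears five times and '0' only once, which is far too little data to conclude anything; over 60,000 numbers, a doubled frequency of '0' would point at the mod-11 convention described above.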

Likeliest way to compute validation using MOD 7 check on the last number

I am attempting to perform number validation for a proprietary ID implementation. I was anticipating a spec document for the algorithm in detail, but this is all that they sent:
The IDs are 9 digits.
The check digit is a MOD 7 check on the last number.
I think I am to assume that "MOD 7" means to apply modulus 7 to something. I suspect that "the last number" means more than just applying mod 7 to the last digit; otherwise every valid ID would end in 0 or 7.
Meanwhile someone in a separate conversation said that it was actually "a CRC MOD 7 check on the last number" (throwing in "CRC"), but I think that this was a misspoken detail and that CRC is not used at all, but what do I know?
I am having difficulty finding information on some standard way of reading this specification and interpreting this for some "standard algorithm". Most of the samples I've found consist of many different varieties of algorithms, such as weighted or unweighted, etc.
Does anyone know how I am most likely to interpret this, or if I am supposed to ask for more information? If I am supposed to ask for more information, what questions should I ask?
As I said in my comment, this is horrible documentation, but here's the only way to read this that makes sense to me:
You do a MOD 7 check on the whole card number (all nine digits), and then check if that result equals the last digit in the card number.
A slight variation on Briguy37's answer; so far this variation appears to be the correct one in my case, since initial tests pass.
You do a MOD 7 check on the first eight digits of the 9-digit number as a single 8-digit integer, and then check if that result equals the last digit in the card number.
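Both readings are easy to state in code. A hypothetical Python sketch (the ID 123456782 is made up for illustration; 12345678 mod 7 = 2):

```python
def mod7_whole(number: str) -> bool:
    """Briguy37's reading: the whole 9-digit number mod 7
    must equal the last digit."""
    return int(number) % 7 == int(number[-1])

def mod7_first_eight(number: str) -> bool:
    """The variation above: the first eight digits, taken as one
    8-digit integer, mod 7 must equal the ninth digit."""
    return int(number[:8]) % 7 == int(number[8])

# Hypothetical ID: 12345678 % 7 == 2, so appending 2 satisfies
# the first-eight-digits reading but not the whole-number one.
print(mod7_first_eight("123456782"))  # True
print(mod7_whole("123456782"))        # False
```

A quick way to decide between the readings is to run both predicates over a batch of known-valid IDs and see which one holds for all of them.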
You should ask for more information. Think of the consequences if you don't and it turns out the interpretation you received from the internets is wrong.

Correct use of Simplify in Mathematica (with multiphase trig)

I just started working with Mathematica (5.0) for the first time, and while the manual has been helpful, I'm not entirely sure my technique has been correct using (Full)Simplify. I am using the program to check my work on a derived transform to change between reference frames, which consisted of multiplying a trio of relatively large square matrices.
A colleague and I each did the work by hand, separately, to make sure there were no mistakes. We hoped to get a third check from the program, which seemed that it would be simple enough to ask. The hand calculations took some time due to matrix size, but we came to the same conclusions. The fact that we had the same answer made me skeptical when the program produced different results.
I've checked and double checked my inputs.
I am definitely using . (Dot) to multiply the matrices, for correct matrix multiplication.
FullSimplify made no difference.
Neither have combinations with TrigReduce / expanding algebraically before simplifying.
I've taken entries from the final matrix and tried to simplify them in isolation, to no avail, so the problem isn't due to the use of matrices.
I've also tried to multiply the first two matrices, simplify, and then multiply that with the third matrix; however, this produced the same results as before.
I thought Simplify automatically descended into all levels of an expression, so I didn't need to worry about mapping it. Yet even where zeros would be expected in the output matrix there are leftover terms, and where we would expect clean terms there are only near-misses, plus a host of sine and cosine terms that do not reduce.
Is there any technique people routinely use with Simplify to get better results, beyond calling Simplify on its own?
If there are assumptions on parameter ranges you will want to feed them to Simplify. The following simple examples will indicate why this might be useful.
In[218]:= Simplify[a*Sqrt[1 - x^2] - Sqrt[a^2 - a^2*x^2]]
Out[218]= a Sqrt[1 - x^2] - Sqrt[-a^2 (-1 + x^2)]
In[219]:= Simplify[a*Sqrt[1 - x^2] - Sqrt[a^2 - a^2*x^2],
Assumptions -> a > 0]
Out[219]= 0
Assuming this and other responses miss the mark, if you could provide an example that in some way shows the possibly bad behavior, that would be very helpful. Disguise it howsoever necessary in order to hide proprietary features: bleach out watermarks, file down registration numbers, maybe dress it in a moustache.
Daniel Lichtblau
Wolfram Research
As you didn't give much details to chew on I can only give you a few tips:
Mma 5 is pretty old. The current version is 8. If you have access to someone with version 8, you might ask them to try it and see whether that makes a difference. You could also try WolframAlpha online (http://www.wolframalpha.com/), which also understands some (all?) Mma syntax.
Have you tried comparing your own result and Mma's numerically? Generate a Table of differences for various parameter values, or use Plot. If the differences are negligible (use Chop to cut off small residuals), the results are probably equivalent.
Cheers -- Sjoerd
