I am reading "Introduction to Algorithms", third edition. In there under the section "Analyzing Algorithms" it is written that:
We also assume a limit on the size of each word of data. For example when working with inputs of size n, we typically assume that integers are represented by c lg n bits for some constant c>=1. We require c>=1 so that each word can hold the value of n, enabling us to index the individual input elements, and we restrict c to be a constant so that the word size doesn't grow arbitrarily.
What is the significance of the word "word" here? Is it standard to represent data in "words"?
They mean a machine word; basically the size of a processor register, or the "most natural size" of a piece of data for that machine. For a 32-bit machine, it's 32 bits; for a 64-bit machine, it is (not surprisingly) 64 bits.
Word size used to be somewhat more variable, as computer architectures evolved. If you look at this Wikipedia article on word size, you'll see links to descriptions of 12-bit, 18-bit, 21-bit, 24-bit, 31-bit, 36-bit, 48-bit, and 60-bit hardware. I remember reading about a 72-bit machine once, although I can't find a reference right now.
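To tie that back to the book's c lg n assumption, here is a tiny Go sketch; the input size n = 1,000,000 and the 64-bit word are just illustrative numbers, not anything from the book:

    package main

    import (
        "fmt"
        "math/bits"
    )

    func main() {
        // Number of bits needed to hold an index into an input of size n.
        n := uint(1_000_000)
        fmt.Println(bits.Len(n)) // prints 20, so lg n is about 20 here

        // A 64-bit machine word is then roughly c = 3 times lg n,
        // comfortably satisfying the book's requirement that c >= 1.
    }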
A "Word" is typically a multi-byte integer value that matches the data register size of the underlying processor. Might be 16-bits on some systems, 32 on others, for example, although those aren't the only possibilities.
It was a data type in some languages at one point, but the lack of standardization in size caused a lot of portability issues.
I'm reading the documentation to the math/big package here:
https://golang.org/pkg/math/big/#pkg-constants
I am trying to understand how large a number is too big for math.big, and this looked like a constant I could interrogate.
I see on my machine:
fmt.Println(math.MaxUint32)
4294967295
How does this relate to the largest integer possible on my machine, for the purpose of calculation? What are the units of this number? Is this bytes, or decimal places or something other than the number itself?
bignum libraries usually store big numbers as a sequence of digits (e.g. in base 2^64). Their limitation is related to the memory available. So the largest number you could represent is tied to the limitation of your virtual address space. You can safely assume that a number even as large as 10^10000 is representable in bignum. Of course, a googolplex is not representable as a bignum (because it has more bits than the number of particles in the universe).
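As a rough sanity check of the sizes involved, here is a small Go sketch with math/big; the 10^10000 figure is the one from the paragraph above:

    package main

    import (
        "fmt"
        "math/big"
    )

    func main() {
        // 10^10000, computed exactly with math/big.
        n := new(big.Int).Exp(big.NewInt(10), big.NewInt(10000), nil)

        // It needs only about 33,220 bits (roughly 4 KB) of storage,
        // nowhere near any realistic memory limit.
        fmt.Println(n.BitLen())
    }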
Another limitation is the complexity of arithmetic operations. But there exist very efficient bignum algorithms.
FWIW, the GMPlib (a C library for bignums) can deal with numbers as long as there is memory for them. However, it is rumored that when malloc fails, GMPlib aborts.
I don't know what happens inside Go bignums when a number is too big to be representable (and that limit varies from one machine to the next and could be different from one run to the next). For example, Go's Int.Mul gives a product whose size is the sum of the sizes of the arguments, and the "out of memory" error is undocumented (but obviously can happen).
When using bignums, prefer iterative algorithms to recursive ones. For example, a naive recursive factorial might overflow the call stack with large enough bignums, so you want to code it iteratively.
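For instance, a factorial written with a plain loop over math/big values keeps the call stack flat; a minimal sketch (the factorial function and its name are mine, not from any library):

    package main

    import (
        "fmt"
        "math/big"
    )

    // factorial computes n! with a simple loop instead of recursion, so the
    // only thing that grows is the big.Int result, not the call stack.
    func factorial(n int64) *big.Int {
        result := big.NewInt(1)
        for i := int64(2); i <= n; i++ {
            result.Mul(result, big.NewInt(i))
        }
        return result
    }

    func main() {
        fmt.Println(factorial(20)) // 2432902008176640000
    }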
Upon studying hash data structure and cache memory from computer architecture, I noticed that they're very similar.
The division hash function computes the index as hash(k) = k Mod (table size M), but my DS book says M should be a prime number, or at least an odd number: if M is even, the result is even whenever k is even and odd whenever k is odd, so an even M should be avoided, since the keys are often memory addresses, which are always even.
And yet, my CA book says a direct-mapped cache uses (Block address) Mod (Number of blocks in the cache), and the resulting indices look uniform. Why is this? It's all very confusing, because MIPS uses 32-bit addresses aligned to 4 bytes, which are even numbers. But I think that works out because the last 2 bits are thrown away, since they're just the byte offset?
And, since it uses (Block address) Mod (Number of blocks in the cache), the cache size is made a power of 2 so that you can just use the lower x bits of the block address.
But this method looks exactly the same as the division hash function, except that the table size is a power of 2, which is even (the data structure book said to use a prime or an odd number), and you use the lower bits of the block address.
Are these 2 different methods? If so, what's the cache one called? I would really appreciate a reply please. Thank you.
The reason for not using an even number for the hash table size is described here.
And how caches use addresses to calculate line numbers is described here. It's OK for caches to map more than one address to the same line. Just because an address maps to a cache line that holds data, we don't blindly use the data in that cache line. We also do a tag comparison to make sure that the content in the cache line is exactly what we are looking for.
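As a rough illustration of that index-plus-tag split, here is a small Go sketch; the geometry (32-byte blocks, 256 lines) is made up for the example, not any particular cache:

    package main

    import "fmt"

    const (
        offsetBits = 5 // 32-byte blocks -> 5 byte-offset bits
        indexBits  = 8 // 256 cache lines -> 8 index bits
        numLines   = 1 << indexBits
    )

    // split breaks an address into the cache index and the tag. Two addresses
    // can share an index; the stored tag is what tells them apart.
    func split(addr uint32) (index, tag uint32) {
        index = (addr >> offsetBits) & (numLines - 1)
        tag = addr >> (offsetBits + indexBits)
        return
    }

    func main() {
        a, b := uint32(0x0000_1040), uint32(0x0002_1040)
        ia, ta := split(a)
        ib, tb := split(b)
        // Same index, different tags: both map to the same line, and the tag
        // comparison is what keeps us from using the wrong block's data.
        fmt.Printf("a: index=%d tag=%#x\n", ia, ta)
        fmt.Printf("b: index=%d tag=%#x\n", ib, tb)
    }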
The reason for taking the modulo by a prime is to get "mixing" of the bits, which is helpful if the integers you're hashing have a poor structure. That isn't the only way to deal with it, though. For example, the Java standard library doesn't do that; it uses a separate "mixing" function (which XORs the input with right-shifted versions of itself) and then a power-of-two sized table. Either way it's protection against badly distributed input, which isn't necessary in and of itself: if the input were always nicely distributed, you wouldn't need it.
Memory addresses are usually fairly nicely distributed, because memory is typically used in sequential pieces. The obvious exception is highly aligned big objects, which would conflict with each other in the cache if nothing were done about it. Of course you will probably use a set-associative cache rather than a direct-mapped one, since it is far more robust against degradation, and that takes care of a lot of the problem. Nothing is ever immune to bad patterns (that also goes for hash-mod-prime, which you can easily defeat if you know the prime), but a fairly simple improvement, which is also used in practice (or at least was; more advanced techniques exist now, combined with adaptive replacement strategies that mitigate bad access patterns), is to XOR some of the higher address bits into the index. This is hash strengthening, the same technique used in the Java standard library, just a much simpler version of it.
Computing a remainder by a prime number (or really anything that isn't a power of two) is not something you'd want to do in this case, it's a slow computation by itself, and it leaves you with an awkwardly sized cache that doesn't fully use the power of its decoders, which adds to the slowness (or reduces cache size for a given latency, depending on how you look at it). The difference between that and XORing some of the high bits into the low bits is much bigger in hardware than it is in software, since XOR is really a trivial operation in hardware, much faster as a circuit operation than as an instruction.
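Here is a small Go sketch of the general shape described above: a power-of-two table with a mixing step that XORs higher bits into the index. The shift amount and table size are arbitrary choices for the example, not any particular library's or CPU's values:

    package main

    import "fmt"

    // mix folds some of the high bits of h into the low bits, so that keys
    // differing only in high bits (e.g. highly aligned addresses) don't all
    // land in the same bucket.
    func mix(h uint32) uint32 {
        return h ^ (h >> 16)
    }

    // bucket maps a key into a table whose size is a power of two by masking.
    func bucket(h, tableSize uint32) uint32 {
        return mix(h) & (tableSize - 1)
    }

    func main() {
        // Addresses aligned to 64 KiB have identical low bits; without the
        // mixing step they would all fall into bucket 0 of a 16-entry table.
        for _, addr := range []uint32{0x0001_0000, 0x0002_0000, 0x0003_0000} {
            fmt.Printf("%#x -> bucket %d\n", addr, bucket(addr, 16))
        }
    }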
This is from the book Assembly Language Step By Step, Jeff Duntemann:
Here’s the quick tour: A bit is a single binary digit, 0 or 1. A byte
is 8 bits side by side. A word is 2 bytes side by side. A double word
is 2 words side by side. A quad word is 2 double words side by side.
And this is from the book Principles of Computer Organization and Assembly Language: Using the Java Virtual Machine, Patrick Juola:
For convenience, 8 bits are usually grouped into a single block,
conventionally called a byte. The next-largest named block of bits is
a word. The definition and size of a word are not absolute, but vary
from computer to computer. A word is the size of the most convenient
block of data for the computer to deal with.
So is a word 2 bytes (16 bits), or is it the most convenient block of data for the computer to deal with? (I am also not sure what this means..)
I'm not familiar with either of these books, but the second is closer to current reality. The first may be discussing a specific processor.
Processors have been made with quite a variety of word sizes, not always a multiple of 8.
The 8086 and 8087 processors used 16 bit words, and it's likely this is the machine the first author was writing about.
More recent processors commonly use 32 or 64 bit words.
In the 50's and 60's there were machines with word sizes that seem quite strange to us now, such as 4, 9 and 36. Since about the 70's, word size has commonly been a power of 2 and a multiple of 8.
On x86/x64 processors, a byte is 8 bits, and there are 256 possible binary states in 8 bits, 0 through 255. This is how the OS translates your keystrokes into letters on the screen. When you press the 'A' key (unshifted), the OS translates that keystroke into the character code 97, and the computer prints a lowercase 'a' on the screen. You can confirm this in any Windows text-editing software by holding an ALT key, typing 97 on the numpad, then releasing the ALT key. If you replace '97' with any number from 0 to 255, you will see the character associated with that number on the system's character code page printed on the screen.
If a character is 8 bits, or 1 byte, then a WORD must be at least 2 characters, so 16 bits or 2 bytes. Traditionally, you might think of a word as a varying number of characters, but in a computer everything that is calculable is based on static rules. Besides, a computer doesn't know what letters and symbols are; it only knows how to count numbers. So, in this terminology, if a WORD is equal to 2 characters, then a double-word, or DWORD, is 2 WORDs, which is the same as 4 characters or bytes, which is equal to 32 bits. Furthermore, a quad-word, or QWORD, is 2 DWORDs, the same as 4 WORDs, 8 characters, or 64 bits.
Note that these terms are mostly limited to the Windows API for developers, but may appear in other circumstances (e.g. the Linux dd command uses suffixes to scale byte and block counts, where c means 1 byte and w means 2 bytes).
The second quote is correct, the size of a word varies from computer to computer. The ARM NEON architecture is an example of an architecture with 32-bit words, where 64-bit quantities are referred to as "doublewords" and 128-bit quantities are referred to as "quadwords":
A NEON operand can be a vector or a scalar. A NEON vector can be a 64-bit doubleword vector or a 128-bit quadword vector.
Normally speaking, 16-bit words are only found on 16-bit systems, like the Amiga 500.
This is from the book Hackers: Heroes of the Computer Revolution by Steven Levy.
.. the memory had been reduced to 4096 "words" of eighteen bits each.
(A "bit" is a binary digit, either a 1 or 0. A series of binary
numbers is called a "word").
As the other answers suggest, a "word" does not seem to have a fixed length.
In addition to the other answers, a further example of the variability of word size (from one system to the next) is in the paper Smashing The Stack For Fun And Profit by Aleph One:
We must remember that memory can only be addressed in multiples of the
word size. A word in our case is 4 bytes, or 32 bits. So our 5 byte buffer
is really going to take 8 bytes (2 words) of memory, and our 10 byte buffer
is going to take 12 bytes (3 words) of memory.
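The rounding the paper describes is just rounding up to the next multiple of the word size; a tiny sketch, assuming the 4-byte word from the quote:

    package main

    import "fmt"

    const wordSize = 4 // bytes, as in the quoted example

    // roundUp returns n rounded up to the next multiple of wordSize.
    func roundUp(n int) int {
        return (n + wordSize - 1) / wordSize * wordSize
    }

    func main() {
        fmt.Println(roundUp(5))  // 8  (2 words)
        fmt.Println(roundUp(10)) // 12 (3 words)
    }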
"most convenient block of data" probably refers to the width (in bits) of the WORD, in correspondance to the system bus width, or whatever underlying "bandwidth" is available. On a 16 bit system, with WORD being defined as 16 bits wide, moving data around in chunks the size of a WORD will be the most efficient way. (On hardware or "system" level.)
With Java being more or less platform independant, it just defines a "WORD" as the next size from a "BYTE", meaning "full bandwidth". I guess any platform that's able to run Java will use 32 bits for a WORD.
Another instance of a book citing the variable length of the word is Operating System Concepts by Silberschatz, Galvin, and Gagne, where the authors in Chapter 1, page 6, state:
A less common term is "word",
which is a given computer architecture's native storage unit. A word is
generally made up of one or more bytes. For example, a computer may have
instructions to move 64-bit (8-byte) words.
For example, if I add a short integer and a float data type, will each allocate the same amount of memory? They have different maximum values, but have different data structures as well, so I was not sure.
Also, do mathematical operations take different amounts of time with different data types?
It's hard to answer this exactly without knowing which language/hardware you are using, but in most cases different data types do take up different amounts of memory and data type can affect the speed of mathematical operations. For example, in Java and C#, a short is 16 bits and a float is 32 bits. I would expect floating point math to be slower than integer math in general, although this can be complex as explained in this post.
For other languages (such as C), the size of the basic data types is hardware-dependent, so an int might be 32 bits on some machines and 64 bits on others.
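For a concrete (if language-specific) illustration: Go fixes these sizes in its spec, so the numbers below say something about Go rather than about your hardware, but they show the general point that different types occupy different amounts of memory:

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        var s int16   // roughly the counterpart of a "short"
        var f float32 // a single-precision float
        var d float64 // a double-precision float

        // Different types really do occupy different amounts of memory.
        fmt.Println(unsafe.Sizeof(s)) // 2 bytes
        fmt.Println(unsafe.Sizeof(f)) // 4 bytes
        fmt.Println(unsafe.Sizeof(d)) // 8 bytes

        // Mixing them in arithmetic requires an explicit conversion in Go,
        // and the result takes the size of the wider type.
        sum := float32(s) + f
        fmt.Println(unsafe.Sizeof(sum)) // 4 bytes
    }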
The reason we have data types is efficiency. That includes both the size and the treatment of data (types).
A char data type has 8 bits (you may also call it an 8-bit integer, since the numerical values are simply mapped to characters in the ASCII table) = 256 possible values.
A 32-bit integer has ... you guessed it, 32 bits = 2³² possible values.
Mathematical operations are performed by shifting or comparing those bits.
More bits = more operations = more time
Reference: http://en.wikipedia.org/wiki/Data_type
I was asked in an interview whether the number of set bits in an integer can be counted in O(1) time, and the interviewer was very certain of the answer. But I am not sure. Can anyone help me here?
Sure. The obvious brute force method is just a big lookup table with one entry for every possible value of the input number. That's not very practical if the number is very big, but is still enough to prove it's possible.
Edit: the notion has been raised that this is complete nonsense, and the same could be said of essentially any algorithm.
To a limited degree, that's a fair statement -- but the limitations are so severe that for most algorithms it remains utterly meaningless.
My original point (at least as well as I remember it) was that population counting is about equivalent to many other operations like addition and subtraction that we normally assume are O(1).
At the hardware level, circuitry for a single-cycle POPCNT instruction is probably easier than for a single-cycle ADD instruction. Just for one example, for any practical size of data word, we can use table lookups on 4-bit chunks in parallel, then add the results from those pieces together. Even using fairly unlikely worst-case assumptions (e.g., separate storage for each of those tables) this would still be easy to implement in a modern CPU -- in fact, it's probably at least somewhat simpler than the single-cycle addition or subtraction mentioned above¹.
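A software rendering of that 4-bit-chunk idea might look like the sketch below (in hardware the sixteen lookups would happen in parallel; the loop here is only to show the arithmetic):

    package main

    import "fmt"

    // nibbleCount[v] is the number of set bits in the 4-bit value v.
    var nibbleCount = [16]uint{0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4}

    // popCount64 looks up each 4-bit chunk of x and adds the results.
    func popCount64(x uint64) uint {
        var total uint
        for shift := 0; shift < 64; shift += 4 {
            total += nibbleCount[(x>>shift)&0xF]
        }
        return total
    }

    func main() {
        fmt.Println(popCount64(0xFF))       // 8
        fmt.Println(popCount64(0xDEADBEEF)) // 24
    }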
This is a decided contrast to many other algorithms. For one obvious example, let's consider sorting. For even the most trivial sort most people can imagine -- 2 items, 8 bits apiece -- we're already at a 64K-entry (2¹⁶ possible inputs) lookup table to get constant complexity. Long before we can do even a fairly trivial sort (e.g., 100 items) we need a lookup table that contains far more data items than there are atoms in the universe.
Looking at it from the opposite direction, it's certainly true that at some point, essentially nothing is O(1) any more. Let's consider the most trivial operations possible. For an N-bit CPU, bitwise OR is normally implemented as a set of N OR gates in parallel. Unlike addition, there's no interaction between one bit and another, so for any practical size of CPU, this is easy to execute in a single instruction.
Nonetheless, if I specify a bit-wise OR in which each operand is 100 petabits, there's nothing even approaching a practical way to do the job with constant complexity. Using the usual method of parallel OR gates, we end up with (among other things) 300 petabits worth of input and output lines -- a number that completely dwarfs even the number of pins on the largest CPUs.
On reasonable hardware, doing a bitwise OR on 100 petabit operands is going to take a while (not to mention quite a bit of hard drive space). If we increase that to 200 petabit operands, the time is likely to (about) double -- so from that viewpoint, it's an O(N) operation. Obviously enough, the same is going to be true with the other "trivial" operations like addition, subtraction, bit-wise AND, bit-wise XOR, and so on.
Nonetheless, unless you have very specific instructions to say you're going to be dealing with utterly immense operands, you're typically going to treat every one of these as a constant complexity operation. Looked at in these terms, a POPCNT instruction falls about halfway between bit-wise AND/OR/XOR on one hand, and addition/subtraction on the other, in terms of the difficulty to execute in fixed time.
¹ You might wonder how it could possibly be simpler than an add when it actually includes an add after doing some other operations. If so, kudos -- it's an excellent question.
The answer is that it's because it only needs a much smaller adder. For example, a 64-bit CPU needs one half-adder and 63 full-adders. In the simple implementation, you carry out the addition bit-wise -- i.e., you add bit-0 of one operand to bit-0 of the other. That generates an output bit, and a carry bit. That carry bit becomes an input to the addition for the next pair of bits. There are some tricks to parallelize that to some degree, but the nature of the beast (so to speak) is bit-serial.
With a POPCNT instruction, we have an addition after doing the individual table lookups, but our result is limited to the size of the input words. Given the same size of inputs (64 bits) our final result can't be any larger than 64. That means we only need a 6-bit adder instead of a 64-bit adder.
Since, as outlined above, addition is basically bit-serial, this means that the addition at the end of the POPCNT instruction is fundamentally a lot faster than a normal add. To be specific, it's logarithmic on the operand size, whereas simple addition is roughly linear on the operand size.
If the bit size is fixed (e.g. natural word size of a 32- or 64-bit machine), you can just iterate over the bits and count them directly in O(1) time (though there are certainly faster ways to do it). For arbitrary precision numbers (BigInt, etc.), the answer must be no.
Some processors can do it in one instruction, obviously for integers of limited size. Look up the POPCNT mnemonic for further details.
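In Go, for instance, this is exposed through the math/bits package, and on amd64 the compiler will typically lower it to the POPCNT instruction when the CPU supports it; either way the call counts bits in a fixed-size word:

    package main

    import (
        "fmt"
        "math/bits"
    )

    func main() {
        // Count set bits in fixed-size (64-bit) integers.
        fmt.Println(bits.OnesCount64(0xDEADBEEF)) // 24
        fmt.Println(bits.OnesCount64(^uint64(0))) // 64
    }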
For integers of unlimited size, you obviously need to read the whole input, so the lower bound is Ω(n).
The interviewer probably meant the bit counting trick (the first Google result follows): http://www.gamedev.net/topic/547102-bit-counting-trick---new-to-me/
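I can't tell which exact trick that link describes, but one classic of the genre clears the lowest set bit on each pass, so the loop runs once per set bit rather than once per bit position:

    package main

    import "fmt"

    // countSetBits clears the lowest set bit on every iteration (x & (x-1)),
    // so it loops only as many times as there are 1-bits in x.
    func countSetBits(x uint64) int {
        count := 0
        for x != 0 {
            x &= x - 1
            count++
        }
        return count
    }

    func main() {
        fmt.Println(countSetBits(0b101101)) // 4
    }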