Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
In the game 2048, what is the biggest tile that can be achieved, assuming a player who plays optimally and tiles that spawn in the most favourable places?
Naively I would say that the biggest achievable tile is 65536 * 2 = 131072, because it seems that the best possible board is the following:
4 4 8 16
256 128 64 32
512 1024 2048 4096
65536 32768 16384 8192
But I'm not sure if it's correct, or how to prove that my intuition is indeed correct.
(sorry if I should have asked on gaming.stackexchange, but this is more of a CS question than a gaming one afaict)
You haven't finished yet with the board you propose: you can keep sliding to the right, merging all the way down and obtaining 131072. So your analysis was correct, although you missed a spot:
This will be your final board:
4 8 16 32
512 256 128 64
1024 2048 4096 8192
131072 65536 32768 16384
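As a sanity check on the "snake" argument, here is a small Python sketch: the proposed board holds the tiles 4, 4, 8, 16, ..., 65536, and merging from the small end telescopes all the way up.

```python
# Sanity check of the "snake" argument: merging from the small end
# telescopes upward: 4+4=8, 8+8=16, ..., 65536+65536=131072.
snake = [4, 4, 8, 16, 32, 64, 128, 256, 512, 1024,
         2048, 4096, 8192, 16384, 32768, 65536]

tile = snake[0]
for nxt in snake[1:]:
    assert tile == nxt  # each merge yields exactly the next tile in the chain
    tile += nxt         # a 2048-style merge: two equal tiles combine

print(tile)  # 131072
```

The 16 tiles sum to exactly 131072, so the cascade collapses the full board into that single tile.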
The goal is to determine metrics of UDP protocol performance, specifically:
Minimal possible theoretical RTT (round-trip time, ping)
Maximal possible theoretical PPS for 1-byte UDP packets
Maximal possible theoretical PPS for 64-byte UDP packets
Maximal and minimal possible theoretical jitter
This could, and should, be done without taking into account any slow software-caused issues (like 99% CPU usage by a side process, or an inefficiently written test program) or hardware issues (like a busy channel, an extremely long line, and so on).
How should I go about estimating these best-possible parameters on a "real system"?
P.S. I'll offer a prototype of what I call "a real system".
Consider 2 PCs, PC1 and PC2. They are both equipped with:
modern fast processors (read: some average typical socket-1151 i7 CPU), so processing speed and single-core performance are not issues.
some typical DDR4 @ 2400 MHz.
average NICs (read: typical Realtek/Intel/Atheros chips of the kind embedded in motherboards), so there is no very special complicated circuitry.
a couple of meters of four-pair Ethernet cable connecting their NICs, with an established gigabit link. So no internet, and no traffic between them other than what you generate.
no monitors
no other I/O devices
a single USB flash drive per PC, used to boot their initramfs into RAM, and mounted to store the program output after the test program finishes
the lightest possible software stack: probably BusyBox running on top of the latest Linux kernel, with all libs up to date. So virtually no software (read: "busyware") runs on them.
And you run a server test program on PC1 and a client on PC2. After the program runs, the USB stick is mounted, the results are dumped to a file, and the system then powers down. So, I've described an ideal situation; I can't imagine more "sterile" conditions for such an experiment.
For the PPS calculation, divide the throughput of the medium by the total size of the frames.
For IPv4:
Ethernet preamble, start-of-frame delimiter, and interframe gap: 7 + 1 + 12 = 20 bytes (not counted in the 64-byte minimum frame size).
Ethernet II header and FCS (CRC): 14 + 4 = 18 bytes.
IP header: 20 bytes.
UDP header: 8 bytes.
Total overhead: 46 bytes (frame padded to the 64-byte minimum if the payload is less than 18 bytes) + 20 bytes more "on the wire".
Payload (data):
1-byte payload - padded to 18 bytes to reach the 64-byte minimum frame size; with wire overhead, that totals 84 bytes on the wire.
64-byte payload - 46 + 64 = 110 bytes, + 20 for the wire overhead = 130 bytes.
If the throughput of the medium is 125,000,000 bytes per second (1 Gb/s):
1-18 bytes of payload = 1.25e8 / 84 = max theoretical 1,488,095 PPS.
64 bytes of payload = 1.25e8 / 130 = max theoretical 961,538 PPS.
These calculations assume a constant stream: the network send buffers are filled constantly. That is not an issue given your modern hardware description. If this were 40/100 Gigabit Ethernet, then CPU, bus speeds, and memory would all be factors.
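A quick sketch of the arithmetic above (Python; `max_pps` is just an illustrative helper name, and the constants assume the gigabit figures used here). Note that a 64-byte payload gives a 110-byte frame (46 bytes of overhead + 64), i.e. 130 bytes on the wire:

```python
# Max theoretical UDP packets per second on gigabit Ethernet.
LINE_RATE = 125_000_000           # bytes/s at 1 Gb/s
WIRE_OVERHEAD = 7 + 1 + 12        # preamble + SFD + interframe gap = 20 bytes
FRAME_OVERHEAD = 14 + 4 + 20 + 8  # Eth II hdr + FCS + IP hdr + UDP hdr = 46 bytes
MIN_FRAME = 64                    # minimum Ethernet frame size

def max_pps(payload_bytes: int) -> int:
    frame = max(FRAME_OVERHEAD + payload_bytes, MIN_FRAME)  # pad small frames
    return LINE_RATE // (frame + WIRE_OVERHEAD)

print(max_pps(1))   # 1488095  (84 bytes on the wire)
print(max_pps(64))  # 961538   (130 bytes on the wire)
```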
Ping RTT time:
To calculate the time it takes to transfer data through a medium, divide the amount of data transferred by the speed of the medium.
This is harder, since the ping payload could be any size from 64 bytes up to the MTU (~1500 bytes). Ping typically uses the minimum frame size: (64 bytes total frame size + 20 bytes wire overhead) * 2 = 168 bytes, giving a network time of 0.001344 ms. To that, add the process response and reply time, estimated at between 0.35 and 0.9 ms combined. This value depends on too many internal CPU and OS factors: L1-L3 caching, branch prediction, the ring transitions required (0 to 3 and 3 to 0), the TCP/IP stack implementation, CRC calculations, interrupt processing, network card drivers, DMA, validation of data (skipped by most implementations)...
Max time should be < 1.25 ms based on anecdotal evidence. (My best evaluation was 0.6 ms on older hardware; I would expect a consistent average of 0.7 ms or less on hardware as described.)
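The wire-time component of that RTT is easy to verify (a sketch, assuming a minimal 84-byte frame in each direction):

```python
# Serialization time for a minimal ping exchange on gigabit Ethernet:
# one 64-byte frame plus 20 bytes of wire overhead in each direction.
LINE_RATE = 125_000_000        # bytes per second at 1 Gb/s
on_wire = (64 + 20) * 2        # request + reply = 168 bytes total
network_ms = on_wire / LINE_RATE * 1000
print(f"{network_ms:.6f} ms")  # 0.001344 ms
```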
Jitter:
The only inherent theoretical source of network jitter is the asynchronous nature of the transport, which is resolved by the preamble: max < 0.000064 ms (8 bytes at 1 Gb/s). If sync is not established in this time, the entire frame is lost. This is a possibility that needs to be taken into account, since UDP is best-effort delivery.
As evidenced by the description of RTT: the possible variance in CPU time when executing identical code, as well as OS scheduling and drivers, makes this impossible to evaluate precisely.
If I had to estimate, I would design for a maximum of 1 ms of jitter, with provisions for lost packets. It would be unwise to design a system intolerant of faults. Even in the "perfect scenario" described, faults will occur (a nearby lightning strike induces spurious voltages on the wire), and UDP has no inherent method for tolerating lost packets.
This question is from an exam my school gave a year ago.
I have an N-way set-associative cache with 48-bit addresses and a 33-bit tag.
The cache can store 16384 double-type elements, and every address is a multiple of 64.
The question is: how many sets, and how many lines per set, are there?
Since the capacity is 16384 doubles and a double is 8 bytes, I put the capacity at 16384 * 8 = 131072 bytes.
I think the 48-bit (6-byte) address is the dimension of a line.
I saw on a website that cache capacity / line size = number of lines, so I put 131072 / 6 = 21845 (approximately).
I can't get further than this, since I can't find a way to get the number of sets, and I don't know if I'm right about the number of lines per set; that is my problem.
Thanks
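For reference, here is a sketch of how this type of question is usually worked, assuming that "multiple of 64" means 64-byte cache blocks (6 offset bits); that assumption is my reading of the exam wording, not something it states outright:

```python
# Set-associative cache arithmetic: 48-bit addresses, 33-bit tags,
# 131072-byte capacity, assumed 64-byte blocks.
addr_bits, tag_bits = 48, 33
capacity = 16384 * 8                  # 16384 doubles * 8 bytes = 131072 bytes
block = 64                            # assumed block (line) size in bytes

offset_bits = (block - 1).bit_length()           # 6 bits select a byte in a block
index_bits = addr_bits - tag_bits - offset_bits  # 9 bits left for the set index
sets = 2 ** index_bits                           # 512 sets
lines = capacity // block                        # 2048 lines in total
ways = lines // sets                             # 4 lines per set
print(sets, ways)  # 512 4
```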
I'm struggling with a coursework question. I'm stuck; my brain's hurting and I'm going around in circles.
The question is:
(a) Suppose a memory with 4-byte words and a capacity of 2^21 bits is built using 2k x 8 RAM.
i. How many chips are needed?
My answer / idea here can't be correct.
I found a question asking how many chips it takes to make a 32k memory from 2k x 8 RAM. The answer is 16 chips. That makes sense: 2k x 16 = 32k.
However, 2^21 bits?
4-byte words = 32 bits. This must be the number of bits per cell, i.e. the width of the memory?
If the entire memory holds 2^21 bits, does that mean there will be 2^21 / 32 = 65536 rows? I got there by reasoning that I need 2^21 bits altogether; if there are 32 bits per row, I need 65536 rows to reach 2^21 (= 2097152).
Even though I have got this far, I can't see how it helps me.
How many bits are stored on each 2k x 8 RAM?
ii. How many address lines are needed for the memory?
I have read that
"2k x 8 RAM is referred to as 2^k x n memory. There are k address lines and therefore 2^k addresses. Each of these addresses contains an n-bit word.
In this instance, 2k = 2048 = 2^11. You need 11 address lines."
I don't 100% understand the quote. I know that 2 address lines give 4 addresses, and 3 address lines give 8 addresses. Do I need to work this out for 65536 rows?
iii. How many of these address lines are connected to the address inputs of the RAM chips?
????
iv. How many of these address lines will be used to select the appropriate RAM chip(s)?
I understand that some address lines are needed to select the chip whilst others are necessary for the cell in the chip. When I know the number of chips, can I work this out?
Many, many thanks for any help you can give me.
(a) Suppose a memory with 4-byte words and a capacity of 2^21-bit is built using 2k x 8 RAM.
"x 8" implies this RAM chip stores bytes, and thus each chip has the capacity to store 2 kB == 2^11 bytes. Since a word is 4 bytes wide but each chip is only 1 byte wide, four chips work in parallel to deliver one word; such a bank of four chips holds 2^11 words.
Now, to store 2^21 bits == 2^18 bytes == 2^16 words, you need 2^18 / 2^11 == 2^7 chips' worth of bytes == 128 chips.
That wasn't so hard, was it?
How many bits are stored on each 2k x 8 RAM?
2k * 8 = 16384 bits, exactly as your quoted book says.
ii. How many address lines are needed for the memory?
Well, the memory holds 2^16 words, so you need 16 address lines: 11 to address the 2^11 locations inside each chip, and 5 to select one of the 2^5 = 32 banks of four chips. 11 + 5 = 16.
iii. How many of these address lines are connected to the address inputs of the RAM chips?
11, see ii. (a 2k-deep chip has 11 address inputs).
iv.
5, see ii.
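The whole calculation in one place (a Python sketch; note that each 2k-deep chip has 11 address pins, which fixes the 11 + 5 split regardless of how you count):

```python
# 2^21 bits of 4-byte words built from 2k x 8 (2048 x 8-bit) RAM chips.
total_bits = 2 ** 21
chip_bits = 2048 * 8

chips = total_bits // chip_bits           # 128 chips
words = total_bits // 32                  # 65536 = 2^16 four-byte words
addr_lines = (words - 1).bit_length()     # 16 address lines for the whole memory
chip_lines = (2048 - 1).bit_length()      # 11 lines wired to every chip's inputs
select_lines = addr_lines - chip_lines    # 5 lines select the bank of 4 chips
print(chips, addr_lines, chip_lines, select_lines)  # 128 16 11 5
```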
I'm doing an exercise, but I'm confused about how to calculate the megabytes.
A “genome” can be represented as a sequence of “bases”, each of which can be encoded as a two-bit value. Assuming that a megabit is 1,000,000 bits, how many megabytes are required to encode a 16 million base genome?
The line about "a megabit is 1,000,000 bits" seems to suggest:
8 bits in a byte
1,000 bytes in a kilobyte
1,000 kilobytes in a megabyte
Therefore:
1,000,000 bytes in a megabyte; or
8,000,000 bits in a megabyte
16,000,000 bases * 2 bits/base = 32,000,000 bits = 32 Mbit; 32 Mbit / (8 Mbit per MB) = 4 MB.
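The same arithmetic as a quick check (assuming decimal megabytes, as the exercise instructs):

```python
# 16 million bases at 2 bits each, converted to decimal (SI) megabytes.
bases = 16_000_000
bits = bases * 2                   # two bits per base = 32,000,000 bits
megabytes = bits / 8 / 1_000_000   # bits -> bytes -> MB
print(megabytes)  # 4.0
```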
1 byte = 8 bits. Today, at least.
I know this doesn't exactly fit the mold of this site, but it's a better place to ask than, say, Yahoo Answers. Can anyone help me with this?
Suppose you are instructed to populate a memory array of 64K words – where each word is 20 bits wide (let’s assume the extra 4 bits are for error correction) – out of 1K by 4 bit memory chips. How many such chips will you need?
Thanks!
I'll take a guess :-)
20 bits/word * 65536 words = 1,310,720 bits.
1K x 4-bit memory means 4096 bits/chip, is that correct?
Then we have 1,310,720 bits / 4096 bits/chip = 320 chips.
Five 1K x 4 chips per 1K words, so 5 * 64 = 320 chips in total.
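The 320-chip result can be sketched as:

```python
# 64K words of 20 bits built from 1K x 4-bit chips:
# 5 chips side by side cover the 20-bit width, 64 rows cover the depth.
words, word_width = 64 * 1024, 20
chip_depth, chip_width = 1024, 4

chips_per_row = word_width // chip_width  # 5
rows = words // chip_depth                # 64
print(chips_per_row * rows)               # 320
```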