What is the dominant factor in disk price? [closed] - performance

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
What is the dominant factor in disk price: capacity or IOPS? I think the answer to this question should also answer another question of mine, in which I asked why the cost of disk I/O per access is PricePerDiskDrive / AccessesPerSecondPerDisk.

The factor dominating the price is the market segment: home disks are the cheapest, server disks the most expensive.

It depends on several factors. As stated in the previous answer, there is the market segment: home or business.
Then there is the architecture:
SCSI (bus controller with high speeds)
SSD (flash memory, no moving parts)
SATA (regular consumer drive)
SAS (Serial Attached SCSI, backwards compatible with SATA)
SAS and SCSI disks mostly run at high rotational speeds, which makes them more expensive.
SATA disks for home use at normal speeds (5,400 or 7,200 RPM) are priced by capacity and brand. If one company has the first 3 TB disk, it will be very expensive; once three companies offer such disks, prices decrease because of competition.
SSD is a technology that has become affordable, but it is still considerably more expensive than a regular SATA drive with platters, because it has no moving parts and uses faster memory.
Also a very nice thing to remember: the faster the drive, the more expensive it is; therefore it is normal that the better your IOPS, the more you pay.
Capacity has a price too, but it is linked to the drive's speed and to recent developments in technology.
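To make the two cost metrics concrete, here is a small Python sketch comparing price per GB (capacity) and the question's PricePerDiskDrive / AccessesPerSecondPerDisk (price per access per second). The drive figures are made up for illustration, not real prices:

```python
# Hypothetical drive figures, made up for illustration; real prices
# and IOPS vary widely by generation and vendor.
drives = [
    # (name, price in USD, capacity in GB, random-access IOPS)
    ("7200 RPM SATA HDD",    80, 4000,     150),
    ("Consumer SATA SSD",   120, 1000,  90_000),
    ("Enterprise NVMe SSD", 600, 2000, 700_000),
]

for name, price, capacity_gb, iops in drives:
    # Cost per access per second is the question's
    # PricePerDiskDrive / AccessesPerSecondPerDisk.
    print(f"{name}: ${price / capacity_gb:.3f}/GB, "
          f"${price / iops:.5f} per access per second")
```

The answers' point shows up in the numbers: HDDs win on price per GB, SSDs win on price per access, so which factor "dominates" the price depends on which metric your workload is buying.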

Related

Is it possible to have a high performance and high fps graphics without high CPU+GPU specs? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I was watching a video about the .kkrieger FPS game (https://www.youtube.com/watch?v=bD1wWY1YD-M) and was astounded by the incredible work of fitting such a complex game into such an insanely small size (96 kB). However, it consumes a huge amount of CPU and GPU processing.
That raised the following question: is it possible to develop a graphics engine/framework/tool for high performance and high FPS without relying much on high-end CPU + GPU processing power? I am not asking about reducing ROM/storage in this question, but about using less CPU+GPU processing power to improve the FPS.
As #Nicol-Bolas pointed out, there are many ways to read the question and mine was too broad or unfocused, so I will define it as: can an engine, or my own code, deliver high-resolution, high-FPS rendering without a high-end CPU + GPU combo?
Computers are not magic. Everything they do has to come from somewhere and be the result of some process.
It is impressive to be able to generate interesting assets from algorithms. But this is a memory vs. performance tradeoff: you are exchanging small memory sizes for using up processing power to generate those assets. Essentially, algorithmic generation can be thought of as a form of data compression. And generally speaking, the bigger your compression ratios, the longer it will take to decompress the data.
If you want more stuff, it's going to cost you something. They chose to optimize for disk storage space, and that has costs in runtime memory and performance.
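As a toy illustration of that tradeoff (not how .kkrieger actually works), here is a Python sketch that generates a texture from a formula instead of storing it: the raw pixels would cost hundreds of kilobytes on disk, while the generating code costs almost nothing to store but burns CPU time every time the asset is built.

```python
import math
import time

SIZE = 512  # texture is SIZE x SIZE pixels

def generate_texture(size):
    """Build a banded grayscale texture from a closed-form expression."""
    pixels = bytearray()
    for y in range(size):
        for x in range(size):
            # A sum of sines gives a cheap procedural pattern in [-3, 3].
            v = math.sin(x * 0.05) + math.sin(y * 0.05) + math.sin((x + y) * 0.02)
            shade = int((v + 3.0) / 6.0 * 255)   # map [-3, 3] to [0, 255]
            pixels += bytes((shade, shade, shade))  # R, G, B
    return bytes(pixels)

start = time.perf_counter()
texture = generate_texture(SIZE)
elapsed = time.perf_counter() - start

print(f"raw pixel data:  {len(texture):,} bytes")  # 786,432 bytes
print("generating code: a few hundred bytes of source")
print(f"generation cost: {elapsed:.3f} s of CPU time")
```

Treating the generator as a "decompressor" makes the compression analogy in the answer literal: storage shrinks by four orders of magnitude, and the price is paid in processing.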

Hardware Recommendations for an Oracle Server production scenario [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I have to install an Oracle 18c database server. Currently I have four 960 GB solid-state drives, 32 Intel Xeon processors at 2.10 GHz, and 255 GB of RAM. The database mounted on this server is approximately 750 GB in size, grows by 350 GB per year, and has a concurrency of 500 simultaneous connections. Do you consider this hardware sufficient to obtain good performance in Oracle, or should it be increased?
The answer is, "it depends":
Is this enough storage capacity to meet your long term needs for data, archived transaction logs, backups, auditing, and whatever else you need to store?
Do you have an existing production system, and what sort of performance do you get out of it? How does the old hardware and new hardware compare?
How many transactions does your database process in an hour? Your overall data might only grow by 350 GB, but if there are a lot of updates and not just inserts then you could be archiving that much in log files and backups every day.
What those 500 concurrent sessions are actually doing at any moment will drive the size of your memory and processor requirements, as will the amount of data that you need to cache to support them.
Do you have any HA requirements, and if so how does that affect the configuration of your storage (i.e. do you lose some percentage of your storage to RAID)?
Which operating system you choose can also affect how efficiently your hardware performs.
My personal preference is to use virtualized servers running Oracle Linux with an SSD SAN, but that might not be right for you. In the end there really isn't enough information in your question to say for sure whether the hardware you describe is sufficient for your needs.
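On the storage-capacity point alone, a rough sketch shows how quickly the answer changes with the RAID choice. The RAID levels and the "data only" simplification below are illustrative assumptions, not Oracle recommendations; archive logs and backups (mentioned above) would consume the headroom much faster.

```python
# Rough, hypothetical capacity-planning sketch for the hardware in the
# question: 4 x 960 GB SSDs holding a 750 GB database growing ~350 GB/year.
DRIVES, DRIVE_GB = 4, 960
DB_GB, GROWTH_GB_PER_YEAR = 750, 350

raid_usable_gb = {
    "RAID 0 (no redundancy)": DRIVES * DRIVE_GB,
    "RAID 10 (mirrored)":     DRIVES * DRIVE_GB // 2,
    "RAID 5 (single parity)": (DRIVES - 1) * DRIVE_GB,
}

for level, usable in raid_usable_gb.items():
    years = (usable - DB_GB) / GROWTH_GB_PER_YEAR
    print(f"{level}: {usable} GB usable, "
          f"~{years:.1f} years before data alone fills it")
```

Under these assumptions the same four drives give anywhere from about three to nearly nine years of runway, which is why the RAID/HA question has to be answered before the "is it enough?" question.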

How to improve performance of PC, upgrade processor, memory or clock speed? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
A PC has a microprocessor which processes 16 instructions per microsecond. Each instruction is 64 bits long. Its memory can retrieve or store data/instructions at a rate of 32 bits per microsecond.
Mention 3 options to upgrade system performance. Which option gives most improved performance?
And the answer provided is
a) upgrade processor to one with twice the speed
b) upgrade memory with one twice the speed
c) double clock speed
(b) gives the most improved performance.
Overcoming the bottleneck of a PC can improve the integrated performance.
However, my problem is that I am not sure why (b) gives the most improved performance. Additionally, would (a) and (c) give the same performance? Can it be calculated? I am not sure how these different parts affect performance.
Your question's leading paragraph contains the necessary numbers to see why it's b):
The CPU's processing rate is fixed at 16 instructions per microsecond, so executing one instruction takes 1/16 of a microsecond.
Each instruction is 64 bits long, but the memory system retrieves data at 32 bits per microsecond. So it takes two microseconds to retrieve a single instruction (i.e. 64 bits).
The bottleneck is clear: it takes far longer to retrieve an instruction (2 μs) than it does to execute it (1/16 μs).
If you increase the CPU speed (answer (a)), the CPU will execute an individual instruction faster, but it will still wait idle at least 2 μs for the next instruction to arrive, so the improvement is wasted.
To eliminate the bottleneck you need to increase the memory system's speed to match the CPU's execution rate: the memory needs to read 64 bits in 1/16 μs (equivalently, 32 bits in 1/32 μs).
I assume answer c) refers to increasing the speed of some systemwide master clock which would also increase the CPU and Memory data-rates. This would improve performance, but the CPU would still be slaved to the memory speed.
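A quick back-of-envelope model (mine, not from the question) makes the comparison concrete: sustained throughput is the minimum of the execution rate and the fetch rate.

```python
# Back-of-envelope model of the question's numbers. Sustained
# throughput (instructions/us) is limited by the slower of two
# stages: executing instructions and fetching them from memory.

INSTR_BITS = 64  # each instruction is 64 bits long

def throughput(exec_rate, mem_bits_rate):
    """exec_rate: instructions/us the CPU can execute.
    mem_bits_rate: bits/us the memory can deliver."""
    fetch_rate = mem_bits_rate / INSTR_BITS  # instructions fetched per us
    return min(exec_rate, fetch_rate)

print(throughput(16, 32))  # baseline: 0.5 instr/us (memory-bound)
print(throughput(32, 32))  # (a) 2x CPU: still 0.5 - fetch is the bottleneck
print(throughput(16, 64))  # (b) 2x memory: 1.0 - throughput doubles
print(throughput(32, 64))  # (c) 2x clock (both double): 1.0, but the
                           #     system remains memory-bound, so the
                           #     extra CPU speed is wasted
```

Under this simple model (c) matches (b) in raw throughput, but (b) achieves it by upgrading only the component that was actually the bottleneck, which is why it is the expected answer.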
Note that your question describes a simplistic computer. Computers were like this originally, where the CPU accessed memory directly, instruction-by-instruction. However as CPUs got faster, memory did not - so computer-engineers added cache levels: this is much faster memory (but much smaller in capacity) where instructions (and data memory) can be read as fast as a CPU can execute them, solving the bottleneck without needing to make all system memory match the CPU's speed.

For disks/RAM, what is the relationship between access time and read/write speed? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I'm trying to better understand how relatively fast HDD/SSD/RAM is when it comes to reading/writing bytes.
Here are the access times and read/write speeds I've found from online sources:
Storage | Read/write speed | Access time
RAM     | 100 GB/s         | 50 ns
SSD     | 0.5 GB/s         | 500 ns
HDD     | 0.1 GB/s         | 5000 ns
My initial thought was that access time is the time it takes to read 1 byte, but it looks like these numbers don't support that. What exactly is the difference between read/write speed and access time? How are they related?
Is it safe to say that RAM is ~1,000x faster than SSD, and SSD is ~100x faster than HDD, and hence RAM is ~100,000x faster than HDD?
Access time, or latency, is how long the system waits from the request until data starts to arrive. Read and write speeds are the amount of data transferred per unit of time. Usually read and write speeds differ for the same device.
These figures are directly related to the technology each device uses. On physical disks (HDD), the read/write speed is directly affected by the rotational speed, and the access time is related to the movement of the head.
On SSD storage, speed and access time are related to the chip internals and organization. An SSD uses multiple flash memory chips, each with its own inherent access time and speed, and the access time is also affected by the controller that splits data across these chips.
RAM modules use dynamic chips (DRAM) that are very fast in both speed and access time. The speed is affected by the chip but also by the PCB design and the module's data bus. The access time is, in some ways, limited by the chip's refresh rate.
There is also another kind of memory called static RAM (SRAM). SRAM uses a much more expensive technology than DRAM, which limits its capacity, but it is far faster than DRAM. It is used for processor caches.
Comparing these technologies, it is safe to say that RAM is much faster than SSD and SSD is much faster than HDD in a general way. Putting this into numbers is not so easy, because technology evolves and each product generation improves on the last. Server-grade devices also perform much better than consumer devices.
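One way to see how the two figures interact is the standard first-order model: time to read N bytes ≈ access time + N / transfer speed. Here is a small Python sketch using the rough numbers from the question's table (assumptions, not benchmarks):

```python
# First-order model: time to read N bytes ~= latency + N / bandwidth.
# Figures are the rough ones from the question's table, not measurements.
DEVICES = {
    # name: (bandwidth in bytes/s, access time in seconds)
    "RAM": (100e9,   50e-9),
    "SSD": (0.5e9,  500e-9),
    "HDD": (0.1e9, 5000e-9),
}

def read_time(name, n_bytes):
    bandwidth, latency = DEVICES[name]
    return latency + n_bytes / bandwidth

for size in (1, 4096, 100 * 2**20):  # 1 byte, a 4 KiB page, 100 MiB
    times = {name: read_time(name, size) for name in DEVICES}
    print(f"{size:>11,} bytes: " +
          ", ".join(f"{n} {t * 1e6:,.3f} us" for n, t in times.items()) +
          f" -> RAM is {times['HDD'] / times['RAM']:.0f}x faster than HDD")
```

For tiny reads the access-time ratio (~100x with these numbers) dominates; for large sequential reads the bandwidth ratio (~1000x) dominates. So a single "RAM is Nx faster than HDD" figure depends on the access pattern, which is why your multiplication of the two ratios doesn't quite hold.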
Those seem like slightly inflated estimates, but they are in the ballpark. Read and write speeds through a common filesystem will be much slower than that. If you are interested in easy-to-use benchmark utilities, download an ISO of memtest86, which will report your actual raw RAM throughput; Argus Monitor for Windows is demo software but will report your hard drive's raw speeds.
The average I've seen, if I am not mistaken, is roughly 20 GB/s of raw data for 800 MHz DDR2 RAM, and around 90-130 MB/s of raw data on a SATA3 HDD. I have not had the finances to bench-test a solid-state drive yet, but the claimed averages are about two or three times faster than my SATA3 HDD.
Access times are like seek times, it seems. A platter-based HDD has to rotate, and the heads have to move into the position of the data being sought (seek); that takes maybe 1-8 milliseconds, which is a kind of latency. Solid-state access times are about what you mentioned, and RAM is slightly faster than your estimate, at about 10-15 nanoseconds from the time the request is made until the data is retrieved.
http://en.wikipedia.org/wiki/CAS_latency (RAM info)

Where do OSes keep programs that have not been used for some minutes? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Typically in a working environment I have many windows open: Outlook, two or three Word documents, a few browser windows, Notepad++, a VPN client, Excel, etc.
That said, chances are that about 40% of these apps are not frequently used and are referred to only sparingly. They occupy memory nonetheless.
Now, how does a typical OS deal with that kind of memory consumption? Does it suspend such an app to the hard disk (the pagefile, the Linux swap area, etc.), thereby freeing up that memory, or does the app keep occupying memory as it is?
Is this kind of suspension a practical, doable thing? Are there any downsides, such as response time?
Is there some study material I can refer to on this topic? I would appreciate the help.
The detailed answer depends on your OS and how it implements its memory management, but here is a generality:
The OS doesn't look at memory in terms of how many processes are in RAM; it looks at it in terms of discrete units called pages. Most processes occupy several pages of RAM. The least-referenced pages can be swapped out of RAM onto the hard disk when physical RAM becomes scarce. Rarely, therefore, is an entire process swapped out of RAM; usually only certain parts of it are. It could be, for example, that some aspect of your currently running program is idle (i.e., its pages are rarely accessed). In that case those pages could be swapped out even though the process is in the foreground.
Try the Wikipedia article on paging for starters; it covers how this process works and the many methods used to implement it.
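For a feel of the mechanism, here is a toy Python sketch of least-recently-used (LRU) page replacement, the idea behind "least-referenced pages get swapped out". Real kernels use cheaper approximations (e.g., clock/second-chance algorithms), and the page names here are just illustrative.

```python
from collections import OrderedDict

class ToyRAM:
    def __init__(self, frames):
        self.frames = frames            # pages that fit in "RAM"
        self.resident = OrderedDict()   # page -> data, least recent first
        self.swap = {}                  # pages evicted to "disk"

    def touch(self, page):
        """Access a page, faulting it in (and evicting one) if needed."""
        if page in self.resident:
            self.resident.move_to_end(page)  # now most recently used
            return "hit"
        if len(self.resident) >= self.frames:
            victim, data = self.resident.popitem(last=False)  # evict LRU page
            self.swap[victim] = data         # "write to pagefile/swap"
        self.resident[page] = self.swap.pop(page, f"data-{page}")
        return "fault"

ram = ToyRAM(frames=3)
for p in ["outlook", "word", "browser", "excel", "word", "outlook"]:
    print(f"touch {p:<8} -> {ram.touch(p):5}  resident={list(ram.resident)}")
```

Running it shows your scenario in miniature: touching "excel" evicts the idle "outlook" page to swap, and touching "outlook" again costs a fault while it is read back in, which is exactly the response-time downside you asked about.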
