Is read-only memory modifiable in some way (not PROM)? [closed]

Just curious about ROM.
I understand that some types of ROM can be rewritten a few times, but I have a question about the ROM chips that claim to be unmodifiable.
Is there some way to edit them, like a special hardware component or something?
And how are they made uneditable in the first place? Can you give me a brief explanation of how this works?
Thanks in advance for your time.

There are a few different types available: ROM, PROM, EPROM, EEPROM and EAPROM.
ROM, Read-Only Memory, is programmed at the factory and cannot be programmed or altered once delivered.
PROM, Programmable Read-Only Memory, can be written to once, after which there is no changing it.
EPROM, Erasable Programmable Read-Only Memory, can be written to and then erased, typically using ultraviolet light. The chip has a window to let the ultraviolet light in, and the window is usually covered with a sticker or label to prevent accidental corruption of the data.
EEPROM, Electrically Erasable Programmable Read-Only Memory, can be written to and then erased by applying particular voltages, usually higher than the normal operating voltages, with appropriately timed signals (see the write sketch just below).
EAPROM, Electrically Alterable Programmable Read-Only Memory, works a lot like EEPROM except that it can be changed without erasing the entire device.
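As a concrete illustration of the "particular voltages and appropriately timed signals" part, here is a minimal sketch of writing one byte to the on-chip EEPROM of an AVR ATmega-class microcontroller. This is an illustration added here, not part of the original answer: the register and bit names (EEAR, EEDR, EECR, EEMPE, EEPE) are the ones used in the ATmega328 datasheet, older AVRs use slightly different names, and the higher programming voltage itself comes from an on-chip charge pump, so software only has to follow the timed enable sequence.

```c
#include <avr/io.h>
#include <avr/interrupt.h>

/* Minimal EEPROM byte write for an AVR ATmega-class part (sketch only).
   The datasheet requires a timed sequence: set EEMPE, then set EEPE
   within four clock cycles, so interrupts are held off briefly. */
void eeprom_write_byte_raw(uint16_t addr, uint8_t value)
{
    while (EECR & (1 << EEPE))   /* wait for any previous write to finish */
        ;

    uint8_t sreg = SREG;         /* save interrupt state                  */
    cli();                       /* protect the timed sequence            */

    EEAR = addr;                 /* EEPROM address register               */
    EEDR = value;                /* EEPROM data register                  */
    EECR |= (1 << EEMPE);        /* master write enable                   */
    EECR |= (1 << EEPE);         /* start the write                       */

    SREG = sreg;                 /* restore interrupt state               */
}
```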
None of these ROM types require power to retain their contents. Programmer devices were, and still are, available for working with these parts. All of the programmable ROMs that can be altered after programming can only be reprogrammed a limited number of times before the reliability of the device starts to decay.
In some devices, battery backed-up RAM was used to get around the need for abnormal voltages for programming and to allow for frequent changes to the contents.
These days most of the original technology has been supplanted by various types of flash memory.
There is also some confusion introduced by common language usage. For example, a computer's BIOS is still sometimes referred to as ROM, even though nowadays it is usually stored on a flash chip.

Related

What is the most limited and expensive resource in a computer? [closed]

What is the most expensive and limited resource in a computer today?
Is it the CPU? Maybe the memory, or, as I was told, the bandwidth (or something entirely different)?
Does that mean a computer should do everything it can to use that resource more efficiently, even if that puts more load on other resources?
For example, by compressing files we put more load on the CPU so that the file can be transmitted over the network faster.
I think I know the answer to that, but I would like to hear it from someone else; please provide an explanation.
There is a more costly resource that you left out -- Design and Programming.
I answer a lot of questions here. Rarely do I say "beef up the hardware". I usually say "redesign or rewrite".
Most hardware improvements are measured in percentages. Clever redesigns are measured in multiples.
A complex algorithm can be replaced by a big table lookup. -- "Speed" vs "space".
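A toy illustration of that speed-versus-space trade (my own example, not the answerer's): counting set bits with a precomputed 256-entry table instead of looping over every bit.

```c
#include <stdint.h>
#include <stdio.h>

/* 256-entry table holding the bit count of every possible byte value.
   Trades 256 bytes of space for a loop-free lookup per byte. */
static uint8_t popcount_table[256];

static void init_popcount_table(void)
{
    for (int i = 0; i < 256; i++) {
        int bits = 0;
        for (int v = i; v != 0; v >>= 1)
            bits += v & 1;
        popcount_table[i] = (uint8_t)bits;
    }
}

/* Four table lookups instead of up to 32 shift-and-mask iterations. */
static int popcount32(uint32_t x)
{
    return popcount_table[x & 0xff]
         + popcount_table[(x >> 8) & 0xff]
         + popcount_table[(x >> 16) & 0xff]
         + popcount_table[(x >> 24) & 0xff];
}

int main(void)
{
    init_popcount_table();                  /* build the table once */
    printf("%d\n", popcount32(0xF0F0F0F0)); /* prints 16            */
    return 0;
}
```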
"Your search returned 8,123,456 results, here are the first 10" -- You used to see things like that from search engines. Now it says "About 8,000,000 results" or does not even say anything. -- "Alter the user expectations" or "Get rid of the bottleneck!".
One time I was looking at why a program was so slow. I found that 2 lines of code were responsible for 50% of the CPU consumed. I rewrote those 2 lines into about 20 and nearly doubled the speed. This is an example of focusing the effort so that the programmer's time is used efficiently.
Before SSDs, large databases were severely dominated by disk speed. SSDs shrank that by a factor of 10, but disk access is still a big problem.
Many metrics in computing have followed Moore's law. But one hit a brick wall -- CPU speed. That has only doubled in the past 20 years. To make up for it, there are multiple CPUs/cores/threads. But that requires much more complex code. Most products punt -- and simply use a single 'cpu'.
"Latency" vs "throughput" -- These two are mostly orthogonal. The former measures elapsed time, which is limited by the speed of light, etc. The latter measures how much data -- fiber optics is much "fatter" than a phone wire.

Are there any computer viruses that affect GPUs? [closed]

Recent developments in GPUs (the past few generations) allow them to be programmed. Languages like CUDA, OpenCL and OpenACC are specific to this hardware. In addition, certain games allow programmable shaders, which run as part of the graphics pipeline when rendering images. Just as code intended for a CPU can cause unintended execution resulting in a vulnerability, I wonder whether a game or other code intended for a GPU can result in a vulnerability.
The benefit a hacker would get from targeting the GPU is "free" computing power without having to deal with the energy cost. The only practical scenario here is crypto-miner viruses; see this article for example. I don't know the details of how they operate, but the idea is to use the GPU to mine crypto-currencies in the background, since GPUs are much more efficient than CPUs at this. These viruses will cause substantial energy consumption if they go unnoticed.
As for an application running on the GPU exploiting a vulnerability, the use cases here are rather limited, since security-relevant data usually is not processed on GPUs.
At most you could deliberately make the graphics driver crash and in this way sabotage other programs from being properly executed.
There are already plenty of security mechanisms prohibiting reading other processes' VRAM and so on, but there is always some way around them.

Calling antiviruses from software to scan in-memory images [closed]

Our system periodically (several times a minute) calls an external service to download images. As security is a priority, we are performing various validations and checks on our proxy service, which interfaces with the external service.
One control we are looking into is anti-malware which is supposed to scan the incoming image and discard it if it contains malware. The problem is that our software does not persist the images (where they can be scanned the usual way) and instead holds them in an in-memory (RAM) cache for a period of time (due to the large volume of images).
Do modern antiviruses offer APIs that can be called by the software to scan a particular in-memory object? Does Windows offer a unified way to call this API across different antivirus vendors?
On a side note, does anybody have a notion of how this might affect performance?
You should contact the antivirus manufacturers. Some of them do offer such APIs, but you will probably find it tricky even to find out the pricing.
Windows has AMSI, which has a stream interface and a buffer interface. I am not sure whether it makes a copy of the data in the buffer or scans the buffer as it is.
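For reference, a minimal sketch of scanning an in-memory buffer through AMSI from C (link against amsi.lib). The application name string is arbitrary, error handling is trimmed, and the actual verdict depends entirely on whichever AMSI provider (antivirus) is installed.

```c
#include <windows.h>
#include <amsi.h>

#pragma comment(lib, "amsi.lib")

/* Returns 1 if the registered AMSI provider flags the buffer as malware,
   0 if it looks clean, -1 on API failure. */
int scan_image_buffer(const void *data, ULONG length, const wchar_t *name)
{
    HAMSICONTEXT ctx = NULL;
    HAMSISESSION session = NULL;
    AMSI_RESULT result = AMSI_RESULT_CLEAN;
    int verdict = -1;

    if (FAILED(AmsiInitialize(L"ImageProxyService", &ctx)))
        return -1;

    if (SUCCEEDED(AmsiOpenSession(ctx, &session))) {
        if (SUCCEEDED(AmsiScanBuffer(ctx, (PVOID)data, length, name,
                                     session, &result)))
            verdict = AmsiResultIsMalware(result) ? 1 : 0;
        AmsiCloseSession(ctx, session);
    }

    AmsiUninitialize(ctx);
    return verdict;
}
```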
And calling it for every incoming image will probably hurt your performance badly.
What might be faster is code that simply verifies the downloads really are images that can be decoded and re-encoded. Re-encoding JPEGs has obvious quality problems, though, so you may prefer to just sanity-check the header and the data. This could also end up being slower, since decoding large images is slow, but it would probably be better at catching zero-day exploits targeting libpng/libjpeg.
There are also horror stories of scanning servers like that being targeted with malware hidden in otherwise benign files, though the last one I remember is from the previous decade.

Effect on performance of lowering the resolution in the settings vs. a laptop that physically has that resolution [closed]

I'm sorry that the title is confusing, but the question is complicated and I couldn't come up with a better title. I also don't know whether this is the right Stack Exchange site in the first place; if it's in the wrong forum, please migrate it!
HD = 1920x1080, UHD = 3840x2160
Let's say I buy a laptop with a UHD screen and, for example, an NVIDIA GeForce 980M card. Some games may run slowly because the resolution is so high.
Now I change the graphics settings to a lower resolution, from UHD down to HD. Will changing from UHD to HD in the graphics settings make the laptop perform as fast as another laptop that is physically built with an HD screen, or will it still run a bit slower?
It's basically impossible to give an exact answer to this without measuring the exact computers, but either way, you are not going to notice any difference.
If they cost the same, the difference would come from the laptop manufacturer spending its hardware budget on a higher-resolution screen rather than on a better processor, etc.
If the software and hardware are the same except for the screen, you wouldn't notice any difference other than the price of the computer.

Implementing LRU in MIPS [closed]

How can LRU be implemented in MIPS? The procedures involved require a lot of initialisation, and the demand for registers is quite high when trying to implement LRU alongside other routines, such as sorting, that use many variables. How can this issue be addressed?
Very few VM implementations actually use true LRU, because of the cost. Instead they tend to use NRU (Not Recently Used) as an approximation. Associate each mapped-in page with a bit that is set whenever that page is used (read from or written to). Have a process regularly work around the pages in cyclic order, clearing this bit; this is sometimes known as the clock algorithm. When you want to evict a page, choose one that does not have the bit set, and so has not been used since the last time the cyclic sweep reached it.
If you don't even have a hardware-supported "recently used" bit, emulate it: have the cyclic sweep clear the valid bit in the page table, and have the interrupt handler for accessing an invalid page set a software bit to say the page was referenced, mark the page valid again, and restart the instruction that trapped.
See e.g. http://homes.cs.washington.edu/~tom/Slides/caching2.pptx especially slide 19
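A minimal sketch of that clock/NRU sweep in C rather than MIPS assembly (the frame bookkeeping here is invented for illustration; a real VM system would read the referenced bit out of the hardware page-table entry):

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_FRAMES 64

struct frame {
    int  page;        /* virtual page occupying this frame, -1 if free  */
    bool referenced;  /* set by the MMU or fault handler on each access */
};

static struct frame frames[NUM_FRAMES];
static size_t clock_hand = 0;

/* Clock algorithm: sweep from the current hand position.  A frame whose
   referenced bit is set gets a second chance (the bit is cleared and the
   hand moves on); the first frame found with the bit clear has not been
   used since the last sweep, so it is evicted. */
size_t choose_victim(void)
{
    for (;;) {
        struct frame *f = &frames[clock_hand];
        size_t victim = clock_hand;

        clock_hand = (clock_hand + 1) % NUM_FRAMES;

        if (!f->referenced)
            return victim;      /* not referenced since last sweep: evict */

        f->referenced = false;  /* referenced: clear the bit and move on  */
    }
}
```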
