Intel introduced a software-controlled cache partitioning mechanism named Cache Allocation Technology a couple of years ago (you can see this website published in 2016). Using this technology, you can define the portion of the L3 cache an application may use, and at the same time other applications will not evict your selected application's data from its assigned partition(s). Being software controlled, it is very easy to use, and its stated purpose is to ensure quality of service. My question is: how popular is this technology in practice among developers and system architects?
Also, some researchers have used this technology as a protection mechanism against side-channel attacks (such as prime+probe and flush+reload). You can see this paper in this regard. Do you think it is practical?
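For context, on Linux CAT is exposed through the resctrl filesystem: you create a control group and write an L3 capacity bitmask (CBM) into its schemata file. Below is a minimal sketch of how such a bitmask is computed and formatted; the group name and mount commands in the comments are illustrative, and writing to resctrl requires root on real hardware:

```python
# Sketch: building an L3 capacity bitmask (CBM) for Intel CAT via resctrl.
# On a real system you would mount resctrl and write the schemata line as root:
#   mount -t resctrl resctrl /sys/fs/resctrl
#   mkdir /sys/fs/resctrl/latency_sensitive          # hypothetical group name
#   echo "L3:0=ff" > /sys/fs/resctrl/latency_sensitive/schemata

def cbm_for_ways(num_ways: int, offset: int = 0) -> str:
    """Return a hex capacity bitmask reserving `num_ways` contiguous
    cache ways starting at `offset` (CAT requires contiguous set bits)."""
    if num_ways < 1:
        raise ValueError("CAT masks must have at least one bit set")
    return format(((1 << num_ways) - 1) << offset, "x")

def schemata_line(cache_id: int, num_ways: int, offset: int = 0) -> str:
    """Format the line written to a resctrl group's schemata file."""
    return f"L3:{cache_id}={cbm_for_ways(num_ways, offset)}"

print(schemata_line(0, 8))      # L3:0=ff  -> 8 ways on cache domain 0
print(schemata_line(0, 4, 8))   # L3:0=f00 -> 4 ways, disjoint from the above
```

Two groups with disjoint masks (as above) cannot evict each other's lines, which is exactly the isolation property the side-channel papers rely on.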
Related
I have learned about the cache write-back and write-through strategies, and I want to test the impact of each strategy on program IPC. The emulator I used before was gem5, but I just learned from the official mailing list that gem5 does not implement the write-through strategy. Does QEMU have an option to set a write-back or write-through strategy? I want to test SPEC 2006. Can this be done with QEMU, or are there other mature simulators that could help me?
QEMU does not model caches at all, so you cannot use it to look at the performance of software in the way you are hoping to do. (In general, trying to estimate performance by running code on a software model is tricky at best, because the behaviour of software models is often significantly different from the behaviour of real hardware, especially for modern hardware which is significantly out-of-order, speculative and microarchitecturally complex. There are a lot of pitfalls for the unwary.)
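To see the policy difference in isolation before reaching for a full simulator, a toy model can help. The sketch below is my own illustration (not gem5 or QEMU code); it counts only the writes that reach memory for a direct-mapped cache under each policy, and deliberately ignores timing:

```python
# Toy direct-mapped cache comparing write-through vs write-back.
# Counts writes that reach memory; an illustration, not a performance model.

class ToyCache:
    def __init__(self, num_lines, policy):
        self.policy = policy            # "write-through" or "write-back"
        self.num_lines = num_lines
        self.tags = [None] * num_lines  # tag stored per line
        self.dirty = [False] * num_lines
        self.mem_writes = 0

    def write(self, addr):
        idx = addr % self.num_lines
        tag = addr // self.num_lines
        if self.policy == "write-through":
            self.mem_writes += 1        # every store goes to memory
            self.tags[idx] = tag
        else:  # write-back: memory is touched only when a dirty line is evicted
            if self.tags[idx] not in (None, tag) and self.dirty[idx]:
                self.mem_writes += 1    # evict dirty line to memory
            self.tags[idx] = tag
            self.dirty[idx] = True

wt = ToyCache(4, "write-through")
wb = ToyCache(4, "write-back")
for addr in [0, 0, 0, 1, 1, 2]:        # hot addresses, repeated stores
    wt.write(addr)
    wb.write(addr)
print(wt.mem_writes)  # 6: one memory write per store
print(wb.mem_writes)  # 0: no dirty line has been evicted yet
```

The gap between the two counters is what a write-allocate workload with good locality would feel as reduced memory traffic under write-back.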
Whilst this question has obviously been asked before, years have gone by since then. Apple has released a new NFC spec in that time, and further software updates indicate more speculation in this area.
A smartphone has an NFC chip. Is it possible to harness this to take an EMV payment from a contactless card or eWallet? This would obviously require an installed EMV kernel to securely process the payment and possibly a means of accessing the secure layer for any PIN entry.
As much as this may seem like an ambiguous question, clearly the hardware is capable. Is it possible / legal / licensed in any way yet? There is a service that claims to be working on it called PHOS.
Quite obviously, SO is not the right place for such a question, as it's unrelated to programming. There's quite a lot of discussion regarding the topic, and answers will also tend to be opinion based.
Up to this moment, it hasn't been possible on Apple (due to the closed ecosystem, not hardware incompatibility), while it has become allowed on Android. Technically it's been possible for a while already, but regulations made consumer-grade devices incapable of acceptance - they are still quite terrible in the physical aspect, as they are designed neither to handle entry securely nor to generate the electromagnetic field according to EMVCo requirements for shape and operating volume. Payment schemes have created a list of special criteria for solutions based on consumer-grade devices, and the company you mentioned is one of many that have been working on it. There certainly are already some production deployments, with limits that have been set by the schemes.
There might be changes in Apple's approach (especially as they acquired a company dedicated to such solutions), or not - this is just speculation. The fact is that consumer devices tend not to be as good as dedicated hardware, but only time will tell if this stays true. Security research is ongoing; we shall see the results and how they will affect companies' policies and further development in the area. It's just too early to tell.
The latest Intel Xeon processors have 30 MB of L3 cache, which is enough to fit a thin Type 1 hypervisor.
I'm interested in understanding how to keep such a hypervisor within the CPU, i.e. prevent it from being flushed to RAM or, at least, encrypt its data before it is sent to memory/disk.
Assume we are running on bare metal and that we can bootstrap this using DRTM (Late Launch), e.g. we load from untrusted memory/disk, but we can only load the real operating system if we can unseal() a secret which is used to decrypt the operating system - and which takes place after having set the proper rules to make sure anything sent to RAM is encrypted.
p.s. I know TXT's ACEA aka ACRAM (Authenticated Code Execution Area aka Authenticated Code RAM) is said to have such a guarantee (i.e. it is restrained to the CPU cache), so I wonder if some trickery could be done around this.
p.p.s. It seems like this is beyond current research, so I'm actually not quite sure an answer is possible at this point.
Your question is a bit vague, but it seems to boil down to whether you can put cache lines in lockdown on a Xeon. The answer appears to be no, because there's no mention of such a feature in Intel's docs for Intel 64 or IA-32... at least for the publicly available models. If you can throw a few million dollars at Intel, you can probably get a customized Xeon with such a feature; Intel is in the customized-processor business now.
Cache lockdown is typically available on embedded processors. The Intel XScale does have this feature, as do many ARM processors etc.
Do note, however, that cache lockdown does not mean that the cached data/instructions are never found in RAM. What you seem to want is a form of secure private memory (not cache), possibly at the microcode level. But that is not a cache, because it contradicts the definition of cache... As you probably know, every Intel CPU made in the past decade has updatable microcode, which is stored fairly securely inside the CPU, but you need the right cryptographic signing keys to produce code that is accepted by the CPU (via microcode update). What you seem to want is the equivalent of that, but at the x86/x64 instruction level rather than at the microcode level. If this is your goal, then licensing an x86/x64-compatible IP core and adding crypto-protected EEPROM to it is the way to go.
The future Intel Software Guard Extensions (SGX), which you mention in your further comments (after your question, via the Invisible Things Lab link), does not solve the issue of your hypervisor code never being stored in clear in RAM. And that is by design in SGX, so the code can be scanned for viruses etc. before being enclaved.
Finally, I cannot really comment on PrivateCore's tech because I can't find a real technological description of what they do. Twitter comments and news articles on start-up-oriented sites don't provide that, and neither does their site. Their business model comes down to "trust us, we know what we do" right now. We might see a real security description/analysis of their stuff some day, but I can't find it now. Their claims of being "PRISM proof" are probably making someone inside the NSA chuckle...
Important update: it's apparently possible to actually disable the (whole) cache from writing back to RAM in the x86 world. These are officially undocumented modes known as "cache-as-RAM mode" in AMD land and "no-fill mode" in Intel's. More at https://www.youtube.com/watch?v=EHkUaiomxfE Being undocumented, Intel (at least) reserves the right to break that "feature" in strange ways, as discussed at https://software.intel.com/en-us/forums/topic/392495 for example.
Update 2: A 2011 Lenovo patent http://www.google.com/patents/US8037292 discusses using the newer (?) No-Eviction Mode (NEM) on Intel CPUs for loading the BIOS into the CPU's cache. The method can probably be used for other types of code, including hypervisors. There's a big caveat, though: code other than the already-cached stuff will run very slowly, so I don't see this as really usable outside the boot procedure. There's some coreboot code showing how to enable NEM (https://chromium.googlesource.com/chromiumos/third_party/coreboot/+/84defb44fabf2e81498c689d1b0713a479162fae/src/soc/intel/baytrail/romstage/cache_as_ram.inc)
I want to test the performance of a filesystem under different conditions.
Specifically I want to test the performance of Windows virtual machines without compression and with compression both on "normal harddisk" and on USB-disk as it would be interesting to see exactly what the difference is.
What I need is a program that can test different aspects of the filesystem (random access, sequential read/write, etc.) and make pretty graphs that go well with my blog. Preferably the application should be automatable so I can add it to startup; this way the timing is the same for each run and I can repeat runs for verification.
I can post a link to the results here when I get around to testing it. Right now it's just in the planning phase.
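For a quick sanity check before committing to a full tool, a small script can already show the sequential-versus-random gap. A rough sketch follows; the block and file sizes are arbitrary choices, and note that a real benchmark must also defeat the OS page cache, which this does not:

```python
import os, random, tempfile, time

BLOCK = 4096
BLOCKS = 2048                      # 8 MiB scratch file; arbitrary size

# create a scratch file of BLOCKS blocks of random data
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(BLOCK * BLOCKS))

def read_blocks(order):
    """Time reading every block of the scratch file in the given order."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for i in order:
            f.seek(i * BLOCK)
            f.read(BLOCK)
    return time.perf_counter() - start

seq = read_blocks(range(BLOCKS))           # sequential pass
rnd_order = list(range(BLOCKS))
random.shuffle(rnd_order)
rnd = read_blocks(rnd_order)               # same blocks, random order
os.unlink(path)
print(f"sequential: {seq:.4f}s  random: {rnd:.4f}s")
```

On a spinning disk the random pass is usually much slower; on an SSD or with a warm page cache the gap shrinks, which is itself an instructive result to graph.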
Iometer is the I/O measurement tool. And it's free. From the website:
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It was originally developed by the Intel Corporation and announced at the Intel Developers Forum (IDF) on February 17, 1998 - since then it has become widespread within the industry.
Meanwhile, Intel has discontinued work on Iometer, and it was given to the Open Source Development Lab (OSDL). In November 2001, a project was registered at SourceForge.net and an initial drop was provided. Since the relaunch in February 2003, the project has been driven by an international group of individuals who are continuously improving, porting and extending the product.
The tool (the Iometer and Dynamo executables) is distributed under the terms of the Intel Open Source License. The iomtr_kstat kernel module, as well as other future independent components, is distributed under the terms of the GNU Public License.
You said you'd like pretty graphs for your blog. In my use of IOMeter, I've never seen it produce a graph. However, it is possible that I overlooked an existing feature.
Alternatively, (from the look of its website) iozone might give you graphs:
http://www.iozone.org/
Yet, it could be that iozone only collects the data used to create the graphs shown on its web site.
Regardless, this is still another option for I/O Benchmarking.
Additional server oriented disk benchmarks:
Diskspd
fio
vdbench
Many people have online startups in their heads that may potentially attract millions, but most of the time you will only have a minimal budget (time and resources) to start with, so you want to have it delivered within a year. Shortly after launch, you are bound to perform one or a series of upgrades that may include: refactoring code onto a newer foundation, adding hierarchies to the software architecture, or restructuring databases. This cycle of upgrading/refactoring continues as:
New features become available in the latest versions of the language(s)/framework(s) you use.
New components/frameworks/plugins become available that may potentially improve the product.
Requirements change direction, and the existing product wasn't designed to cope with the new needs.
With the above as a prerequisite, I want to take this discussion seriously and identify the essence of an upgradable solution for a web application. In the discussion you may talk about any stage of development (initial, early upgrade, incremental upgrades) and cover one or more of the following:
Choice of language(s) for a web application.
Whether or not to use a framework (consider the overhead)
Choice of DBMS and its design
Choice of hardware and setup
Strategy for coping with constant changes in requirements (which are natural for a web application)
Strategy/decision toward total redesign
Our company's web solution is on its fourth major generation, having evolved considerably over the past 8 years. The most recent generation introduced a broad variety of constructs to help with exactly this task as it was becoming unwieldy to update the previous generation based on new customer demands. Thus, I spent quite a bit of time in 2009 thinking about exactly this problem.
The single most valuable thing you can do is to employ an Agile approach to building software. In particular, you should maintain an environment in which a new build can be (and is) created daily. While daily builds are only one aspect of Agile, this is the practice that is most important in addressing your question. While this isn't the same thing as upgradeability, per se, it nonetheless introduces a discipline into the process that helps reduce the chance that your code base will become unwieldy (or that you'll become an Architect Astronaut).
As far as frameworks and languages go, there are two primary requirements: that the framework be long-lived and stable and that the environment support a Separation of Concerns. ASP.NET has worked well for me in this regard: it has evolved in a rational manner and without discontinuities that invalidate older code. I use a separate Business Logic Layer to manage SoC but ASP.NET does now support MVC development as well. In contrast, I came to dislike PHP after a few months working with it because it just seemed to encourage messy practices that would endanger future upgrades.
With respect to DBMS selection, any modern RDBMS (SQL Server, MySQL, Oracle) would serve you well. Here is the key, though: you will need to maintain DDL scripts for managing upgrades. It is just a fact of life. So, how do you make this a tractable process? The single most valuable tool from any third-party developer is my copy of SQL Compare from Red Gate. This process used to be a complete nightmare and a significant drag on my ability to evolve my code until I found this tool. So, the generic recommendation is to use a database for which a tool exists to compare database structures. SQL Server is just very fortunate in this regard.
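The DDL-script discipline can be kept mechanical with a tiny version table and an ordered list of migrations. Here is a sketch using SQLite; the table and column names are invented for illustration, not taken from any particular product:

```python
import sqlite3

# Ordered DDL migrations; each entry upgrades the schema by one version.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",            # v2: new requirement
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)",
]

def upgrade(conn):
    """Apply any migrations newer than the stored schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    row = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()
    current = row[0] or 0
    for v, ddl in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(ddl)
        conn.execute("INSERT INTO schema_version VALUES (?)", (v,))
    conn.commit()
    return current, len(MIGRATIONS)

conn = sqlite3.connect(":memory:")
print(upgrade(conn))   # (0, 3): fresh database, all three migrations applied
print(upgrade(conn))   # (3, 3): already up to date, nothing to do
```

Tools like SQL Compare generate the DDL deltas for you; the point of the sketch is only that each delta, once written, should be applied in order and recorded so every deployment converges on the same schema.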
Hardware is almost a don't-care. You can always move to new hardware as long as your development process includes a reasonable release build process.
Strategy for constant changes in requirements. Again, see Agile. I'd encourage you not to even think of them as "requirements" any more - in the traditional sense of a large document filled with specifications. Agile changes that in important ways. I don't keep a requirements document either except when working on contract for an external, paying customer so that I can be assured of appropriate billing and prevent feature creep. At this point, our internal process is so rapid and fluid that the reports from our feature request/bug management software (FogBugz if you want to know) serves as our documentation when documenting a new release for marketing.
The strategy/decision for total redesign is: don't. If you put a reasonable degree of thought into the process you'll be using, choose mainstream tools, and enforce a Separation of Concerns then nothing short of a complete abandonment of HTTP and RDBMSs should cause a total redesign.
If you are Agile enough that anything can change, you are unlikely to ever be in a position where everything must change.
To get the ball rolling, I'd have thought a language/framework that supports the concept of dependency injection (or Inversion of Control as is seems to be called these days) would be high on the list.
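As a concrete illustration of why DI aids upgradeability (the class names here are my own invention): the consumer depends only on an interface, so swapping the storage backend in a later generation touches the wiring, not the callers.

```python
from typing import Protocol

class UserStore(Protocol):          # the seam the application depends on
    def find(self, user_id: int) -> str: ...

class SqlUserStore:
    def find(self, user_id: int) -> str:
        return f"sql-user-{user_id}"        # stand-in for a real query

class CachedUserStore:
    """A later-generation replacement; callers never change."""
    def __init__(self, inner: UserStore):
        self.inner, self.cache = inner, {}
    def find(self, user_id: int) -> str:
        if user_id not in self.cache:
            self.cache[user_id] = self.inner.find(user_id)
        return self.cache[user_id]

class ProfilePage:
    def __init__(self, store: UserStore):   # dependency injected, not created
        self.store = store
    def render(self, user_id: int) -> str:
        return f"<h1>{self.store.find(user_id)}</h1>"

page = ProfilePage(CachedUserStore(SqlUserStore()))  # an upgrade is new wiring
print(page.render(7))   # <h1>sql-user-7</h1>
```

Whether the wiring is done by hand as above or by an IoC container is a secondary choice; the upgradeability comes from the interface boundary itself.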
You will find that RDBMS technology is not easily scalable. All vendors will tell you otherwise, yet when you try multiple servers and load balancing, the inherent limitations will show up. Everything else can be beefed up with "bigger iron" and perhaps more efficient code, but databases cannot be split and distributed easily.
Web applications will hopefully drive innovation in database technologies and help us break out of the archaic relational-model mindset. It is long overdue.
I recommend paying a lot of attention to this weak link right from the start.