Are 64 bit programs bigger and faster than 32 bit versions? - performance

I suppose I am focussing on x86, but I am generally interested in the move from 32 to 64 bit.
Logically, I can see that constants and pointers, in some cases, will be larger, so programs are likely to be larger. And the desire to allocate memory on word boundaries for efficiency means more padding between allocations.
I have also heard that 32 bit mode on the x86 has to flush its cache when context switching due to possible overlapping 4G address spaces.
So, what are the real benefits of 64 bit?
And as a supplementary question, would 128 bit be even better?
Edit:
I have just written my first 32/64 bit program. It makes linked lists/trees of 16 byte (32b version) or 32 byte (64b version) objects and does a lot of printing to stderr - not a really useful program, and not something typical, but it is my first.
Size: 81128(32b) v 83672(64b) - so not much difference
Speed: 17s(32b) v 24s(64b) - running on 32 bit OS (OS-X 10.5.8)
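This isn't the program above, but a minimal sketch of the same shape of test in C: a node of three pointers plus an int comes to 16 bytes on a typical 32-bit build and 32 bytes (28 padded to 32) on a 64-bit one.

    #include <stdio.h>
    #include <stdlib.h>

    /* Three pointers + one int: 16 bytes on a typical 32-bit
       build, 32 bytes on a 64-bit build. */
    struct node {
        struct node *next;
        struct node *prev;
        struct node *child;
        int value;
    };

    int main(void) {
        struct node *head = NULL;
        for (int i = 0; i < 1000000; i++) {
            struct node *n = malloc(sizeof *n);
            if (!n) return 1;
            n->next = head;
            n->prev = NULL;
            n->child = NULL;
            n->value = i;
            head = n;
        }
        fprintf(stderr, "node size: %zu bytes\n", sizeof(struct node));
        long sum = 0;
        for (struct node *p = head; p; p = p->next)
            sum += p->value;   /* walk the list, touching every node */
        fprintf(stderr, "sum: %ld\n", sum);
        while (head) {         /* free the list */
            struct node *n = head->next;
            free(head);
            head = n;
        }
        return 0;
    }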
Update:
I note that a new hybrid x32 ABI (Application Binary Interface) is being developed that is 64b but uses 32b pointers. For some tests it results in smaller code and faster execution than either 32b or 64b.
https://sites.google.com/site/x32abi/

I typically see a 30% speed improvement for compute-intensive code on x86-64 compared to x86. This is most likely due to the fact that we have 16 x 64 bit general purpose registers and 16 x SSE registers instead of 8 x 32 bit general purpose registers and 8 x SSE registers. This is with the Intel ICC compiler (11.1) on an x86-64 Linux - results with other compilers (e.g. gcc), or with other operating systems (e.g. Windows), may be different of course.

Unless you need to access more memory than 32-bit addressing allows, the benefits will be small, if any.
When running on a 64-bit CPU, you get the same memory interface whether you are running 32-bit or 64-bit code (you are using the same cache and the same bus).
While the x64 architecture has a few more registers, which allows easier optimization, this is often counteracted by the fact that pointers are now larger, and using any structures with pointers results in higher memory traffic. I would estimate the increase in overall memory usage for a 64-bit application compared to a 32-bit one to be around 15-30%.
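A rough illustration of where an estimate like that comes from (a hypothetical record, not measured data): only the pointer fields grow, so the mix of pointers and plain data decides the overall increase.

    #include <stdio.h>

    /* Only the pointer field grows when this record is rebuilt
       as 64-bit. */
    struct record {
        struct record *next;  /* 4 bytes on 32-bit, 8 on 64-bit */
        int id;               /* 4 bytes either way */
        double amount;        /* 8 bytes either way */
    };

    int main(void) {
        /* Typically prints 16 on an ILP32 build and 24 on an LP64
           build: +50% for this pointer-heavy record; records with
           proportionally more plain data grow less. */
        printf("sizeof(struct record) = %zu\n", sizeof(struct record));
        return 0;
    }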

Regardless of the benefits, I would suggest that you always compile your program for the system's default word size (32-bit or 64-bit). If you compile a library as a 32-bit binary and provide it on a 64-bit system, you will force anyone who wants to link with your library to provide their library (and any other library dependencies) as 32-bit binaries, even though the 64-bit version is the default available. This can be quite a nuisance for everyone. When in doubt, provide both versions of your library.
As to the practical benefits of 64-bit... the most obvious is that you get a bigger address space, so if you mmap a file, you can address more of it at once (and load larger files into memory). Another benefit is that, assuming the compiler does a good job of optimizing, many of your arithmetic operations can be parallelized (for example, placing two pairs of 32-bit numbers in two registers and performing two adds in a single add operation), and big-number computations will run more quickly. That said, the whole 64-bit vs 32-bit thing won't help you with asymptotic complexity at all, so if you are looking to optimize your code, you should probably be looking at the algorithms rather than constant factors like this.
EDIT:
Please disregard my statement about the parallelized addition. This is not performed by an ordinary add statement... I was confusing that with some of the vectorized/SSE instructions. A more accurate benefit, aside from the larger address space, is that there are more general purpose registers, which means more local variables can be kept in the CPU register file, which is much faster to access than placing the variables on the program stack (which usually means going out to the L1 cache).
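To make the address-space benefit concrete, here's a hedged POSIX sketch (the file path is made up, error handling trimmed): a 64-bit process can map a multi-gigabyte file in one go, where a 32-bit process runs out of address space.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/tmp/big.dat", O_RDONLY);  /* illustrative path */
        if (fd < 0) return 1;
        struct stat st;
        if (fstat(fd, &st) < 0) return 1;
        /* In a 32-bit process, a file approaching 4 GiB cannot fit
           in the address space and mmap fails; 64-bit is fine. */
        void *p = mmap(NULL, (size_t)st.st_size, PROT_READ,
                       MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        /* ... scan the mapping ... */
        munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }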

I'm coding a chess engine named foolsmate. The best move extraction using a minimax-based tree search to depth 9 (from a certain position) took:
on Win32 configuration: ~17.0s;
after switching to x64 configuration: ~10.3s;
That's about a 40% reduction in run time, i.e. roughly 1.65x faster!

In addition to having more registers, 64-bit has SSE2 by default. This means that you can indeed perform some calculations in parallel. The SSE extensions had other goodies too. But I guess the main benefit is not having to check for the presence of the extensions. If it's x64, it has SSE2 available. ...If my memory serves me correctly.
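That's why feature checks can usually be done at compile time when targeting x64; a sketch using common compiler-defined macros (GCC/Clang and MSVC):

    /* On x86-64, SSE2 is part of the baseline ISA, so no runtime
       check is needed there. */
    #if defined(__x86_64__) || defined(_M_X64)
        /* 64-bit x86: SSE2 is guaranteed, use it unconditionally. */
    #elif defined(__SSE2__) || (defined(_M_IX86_FP) && _M_IX86_FP >= 2)
        /* 32-bit build compiled with SSE2 code generation enabled. */
    #else
        /* Plain 32-bit build: fall back to x87, or probe CPUID at
           runtime before using SSE2. */
    #endif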

In the specific case of x86 to x86_64, the 64-bit program will be about the same size, if not slightly smaller, use a bit more memory, and run faster. Mostly this is because x86_64 doesn't just have 64-bit registers, it also has twice as many. x86 does not have enough registers to make compiled languages as efficient as they could be, so x86 code spends a lot of instructions and memory bandwidth shifting data back and forth between registers and memory. x86_64 has much less of that, so it takes a little less space and runs faster. Floating-point and bit-twiddling vector instructions are also much more efficient in x86_64.
In general, though, 64 bit code is not necessarily any faster, and is usually larger, both for code and memory usage at runtime.

The only justification for moving your application to 64-bit is the need for more memory, as in large databases or ERP applications with hundreds of concurrent users, where the 2 GB limit is exceeded fairly quickly once the application caches data for better performance. This is especially the case on Windows, where int and long are still 32-bit (there is the newer __int64 type; only pointers are 64-bit). In fact, WOW64 is highly optimised on x64 Windows, so 32-bit applications run with a low penalty on a 64-bit Windows OS. My experience on x64 Windows is that 32-bit application builds run 10-15% faster than 64-bit, since in the former case, at least for proprietary in-memory databases, you can use pointer arithmetic for maintaining B-trees (the most processor-intensive part of database systems). An exception is computation-intensive applications that need large integers for accuracy beyond what double affords; these can use __int64 natively instead of software emulation. Of course, large disk-based databases will also show improvement over 32-bit, simply due to the ability to use large amounts of memory for caching query plans and so on.

Any CPU-intensive application, such as transcoding, display performance, and media rendering, whether audio or visual, will certainly benefit (at this point) from 64-bit over 32-bit, due to the CPU's ability to deal with the sheer amount of data being thrown at it. It's not so much a question of address space as of the way the data is dealt with. A 64-bit processor, given 64-bit code, is going to perform better, especially with mathematically difficult things like transcoding and VoIP data; in fact, any sort of 'math' application should benefit from the usage of 64-bit CPUs and operating systems. Prove me wrong.

On my machine, the same H.265 encode runs almost twice as fast using VirtualDub_x64 (with a 64-bit H.265 library) vs VirtualDub_x32 (the regular 32-bit H.265 library). That's probably because 64-bit integer operations (e.g. add) can be done in a single instruction on x64, but on 32-bit they need two: add the lower part, then add (with carry) the higher part. So unless integer maths are limited to 32-bit integers, most of it will take more time under 32-bit.
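The two-instruction point is easy to see from a one-line function; the comments describe typical code generation, not a guarantee from any particular compiler:

    #include <stdint.h>

    uint64_t add64(uint64_t a, uint64_t b) {
        /* 32-bit x86: add (low halves) + adc (high halves).
           x86-64:     a single 64-bit add.                   */
        return a + b;
    }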

Related

Are there performance advantages of 32 bit apps over 64 bit ones, on x86-64?

I know the advantages of 64-bit over 32-bit, but except for compatibility, are there any advantages of 32-bit applications over 64-bit ones that could make a 32-bit application faster or otherwise more efficient?
There's one big advantage: 32-bit applications use significantly less memory (precisely because pointers are smaller). Not everything is a pointer, e.g. strings and numbers don't change their size, so the effective difference is not 2x. I happen to know about JavaScript engines specifically, where the 64-bit version typically uses around 50% more memory for the same workload than the 32-bit version of the same engine.
V8 has recently addressed this by implementing "pointer compression" in its 64-bit version. In theory, any C/C++ app could do the same thing, but it's a big engineering effort.
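A bare-bones sketch of that idea in C (not V8's actual implementation; heap_base, compress, and decompress are made-up names): confine allocations to one reserved region of at most 4 GiB and store 32-bit offsets instead of full pointers.

    #include <stdint.h>

    static char *heap_base;  /* base of a <= 4 GiB reserved region */

    typedef uint32_t cptr;   /* compressed pointer: offset from base */

    static inline cptr compress(void *p) {
        return (cptr)((char *)p - heap_base);
    }

    static inline void *decompress(cptr c) {
        return heap_base + c;
    }

Every field that used to be an 8-byte pointer becomes a 4-byte cptr, which is where the memory savings come from; the cost is an extra add on every dereference.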
That said, this generally isn't a reason not to move to 64-bit, as other benefits (more registers, more address space) typically outweigh this drawback. But it does mean that if you're targeting devices/machines with less than 4GiB memory anyway, you might want to stick with 32-bit builds, if memory consumption is a concern.
(Performance, in my experience, is a mixed bag: smaller code and smaller data mean better cache utilization on 32-bit; OTOH having more and wider registers on 64-bit can save instructions there. In rare extreme cases, a 64-bit app can process twice as much data in the same time; most of the time the difference will only be in the range 1-5%, and can go in either direction: sometimes a 32-bit build is indeed a little faster than a 64-bit build; it really depends on what the app is doing.)
In short, no. To be more accurate, it might theoretically be the case for some processors, but none that I'm aware of.
The only other difference that comes to my mind is that 32-bit instructions are generally smaller (due to not having a REX prefix, at least), so you might save some space this way, but it probably doesn't outweigh the benefits of x64. Considering that instructions unique to x64 also tend to have higher impact, i.e. manipulating more data at once, the code might even turn out to be more compact in x64. And, for the same reason, 32-bit code is generally slower. So no, 32-bit doesn't have any real advantages over x64, other than compatibility.
For Windows apps, 32-bit is viewed as "most portable" (easier to distribute), though that's becoming less of an issue.
For memory hogs like Ruby it seemed to me like it used 1/2 the RAM, so you could run more apps on boxes that were RAM limited. Not to mention "all apps" use less RAM (kernel, etc.)
It also ran faster: since it traverses all its memory to do garbage collection, a smaller heap fits in cache better and there is less overall RAM to traverse looking for pointers. At the same time, a 64-bit build is less likely to find false positives when the GC scans for pointers, so there's a small win for 64-bit there.
If you're real brave, you can try the hybrid x32 ABI (32-bit pointers, 64-bit registers) https://unix.stackexchange.com/questions/121424/linux-and-x32-abi-how-to-use which is meant to kind of get the best of both worlds. I'm really not sure why it isn't considered more of a popular option; it seems like a nice win to me, the trade-off being that a process can't use more than 4 GiB of address space. My guess is most people aren't in a very RAM-constrained environment, or the win over just going straight "32-bit kernel" (which is well supported) isn't enough motivation. In essence, most boxes have gobs of RAM so it's not as much of a priority?

Why is there a huge performance difference between 32 and 64 bit JDK? [duplicate]

Recently I've been doing some benchmarking of the write performance of my company's database product, and I've found that simply switching to a 64bit JVM gives a consistent 20-30% performance increase.
I'm not allowed to go into much detail about our product, but basically it's a column-oriented DB, optimised for storing logs. The benchmark involves feeding it a few gigabytes of raw logs and timing how long it takes to analyse them and store them as structured data in the DB. The processing is very heavy on both CPU and I/O, although it's hard to say in what ratio.
A few notes about the setup:
Processor: Xeon E5640 2.66GHz (4 core) x 2
RAM: 24GB
Disk: 7200rpm, no RAID
OS: RHEL 6 64bit
Filesystem: Ext4
JVMs: 1.6.0_21 (32bit), 1.6.0_23 (64bit)
Max heap size (-Xmx): 512 MB (for both 32bit and 64bit JVMs)
Constants for both JVMs:
Same OS (64bit RHEL)
Same hardware (64bit CPU)
Max heap size fixed to 512 MB (so the speed increase is not due to the 64bit JVM using a larger heap)
For simplicity I've turned off all multithreading options in our product, so pretty much all processing is happening in a single-threaded manner. (When I turned on multi-threading, of course the system got faster, but the ratio between 32bit and 64bit performance stayed about the same.)
So, my question is... Why would I see a 20-30% speed improvement when using a 64bit JVM? Has anybody seen similar results before?
My intuition up until now has been as follows:
64bit pointers are bigger, so the L1 and L2 caches overflow more easily, so performance on the 64bit JVM is worse.
The JVM uses some fancy pointer compression tricks to alleviate the above problem as much as possible. Details on the Sun site here.
The JVM is allowed to use more registers when running in 64bit mode, which speeds things up slightly.
Given the above three points, I would expect 64bit performance to be slightly slower, or approximately equal to, the 32bit JVM.
Any ideas? Thanks in advance.
Edit: Clarified some points about the benchmark environment.
From: http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#64bit_performance
"Generally, the benefits of being able to address larger amounts of memory come with a small performance loss in 64-bit VMs versus running the same application on a 32-bit VM. This is due to the fact that every native pointer in the system takes up 8 bytes instead of 4. The loading of this extra data has an impact on memory usage which translates to slightly slower execution depending on how many pointers get loaded during the execution of your Java program. The good news is that with AMD64 and EM64T platforms running in 64-bit mode, the Java VM gets some additional registers which it can use to generate more efficient native instruction sequences. These extra registers increase performance to the point where there is often no performance loss at all when comparing 32 to 64-bit execution speed.
The performance difference comparing an application running on a 64-bit platform versus a 32-bit platform on SPARC is on the order of 10-20% degradation when you move to a 64-bit VM. On AMD64 and EM64T platforms this difference ranges from 0-15% depending on the amount of pointer accessing your application performs."
Without knowing your hardware I'm just taking some wild stabs
Your specific CPU may be using microcode to 'emulate' some x86 instructions -- most notably the x87 ISA
x64 uses SSE math instead of x87 math; I've noticed a 10-20% speedup in some math-heavy C++ apps in this case. Math differences could be the real killer if you're using strictfp.
Memory. 64 bits gives you a much larger address space. Maybe the GC is a little less aggressive in 64-bit mode because you have extra RAM.
Is your OS in 64-bit mode, running the 32-bit JVM via some wrapper utility?
The 64-bit instruction set has 8 more registers; this should make the code faster overall.
But since processors nowadays mostly wait for memory or disk, I suppose that either the memory subsystem or the disk I/O might be more efficient in 64-bit mode.
My best guess, based on a quick google for 32- vs 64-bit performance charts, is that 64-bit I/O is more efficient. I suppose you do a lot of I/O... If memcpy is involved when moving the data, it's probably more efficient to copy longs than ints.
Realize that the 64-bit JVM is not magic pixie dust that makes Java apps go faster. The 64-bit JVM allows heaps >> 4 GB and, as such, only makes sense for applications which can take advantage of huge memory on systems which have it.
Generally there is either a slight improvement (due to certain hardware optimizations on certain platforms) or a minor degradation (due to increased pointer size). Generally speaking there will be a need for fewer GCs, but when they do occur they will likely be longer.
In-memory databases or search engines that can use the increased memory for caching objects, and thus avoid IPC or disk accesses, will see the biggest application-level improvements. In addition, a 64-bit JVM will also allow you to run many, many more threads than a 32-bit one, because there's more address space for things like thread stacks, etc. The maximum number of threads for a 32-bit JVM is generally ~1000, but ~100000 threads are possible with a 64-bit JVM.
Some drawbacks though: certain client-oriented features like Java Plug-in and Java Web Start are not supported on the 64-bit JVM, and any native code would also need to be compatible (e.g. JNI for things like Type II JDBC drivers). This is a bonus for pure-Java developers, as pure apps should just run out of the box.
More on this thread at Java.net

Where is the bottleneck when using 64-bit variables and 32-bit systems?

A colleague and I recently were arguing about whether or not it is a good idea to use 64-bit variables in 32-bit code. I took a side saying "It might be dangerous and slow us down somewhere", he said "Nah. Nothing bad will happen." So who is right?
The bitness of parts of our ecosystem is as follows:
Windows: either 32-bit or 64-bit
Compiler: 32-bit only (Delphi...)
Processor: modern Intel ones... that's 64-bit, isn't it?
Let's say we have some variables which use some 64-bit type built into the language, with Delphi that would be Int64. Is it safe (performance-wise) to scatter the code with calculations based on these?
It sounds like you are doing some premature optimization.
Even if it would cause a very slight performance hit, your users' hardware is more and more likely to be 64-bit as the software ages. So it would be a self-correcting problem, if it even is a problem.
If the compiler supports them, go ahead and use them. If you find a slowdown in a tight loop with lots of calculations, then perhaps consider changing it.
It's bad. The data takes twice the memory, so it's like running on a processor with half the cache. It also requires two instructions to load, two to add, and two to store a pair of numbers. You can test this with simple programs like a prime number sieve: these will generally run measurably slower using the longer types on a 32-bit machine, and even on a 64-bit machine once you grow larger than the cache. The worst part is that once you sprinkle this stuff all over your code, you'll have a systemic performance problem that will be hard to correct later. Something similar is even true of using 32-bit integers where shorts would do, but only with regard to memory performance; a 32-bit CPU can do 32-bit math just as fast as 16-bit, so don't worry about that (or your spreadsheet will only support 65K rows, for example).
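Here's the kind of sieve test being described, as a sketch in C; swap the typedef to uint64_t and compare run times of a 32-bit build (LIMIT is arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t num;  /* change to uint64_t for the wide variant */

    #define LIMIT 10000000

    static char composite[LIMIT];  /* zero-initialized static array */

    int main(void) {
        long count = 0;
        for (num i = 2; i < LIMIT; i++) {
            if (composite[i]) continue;
            count++;
            for (num j = i + i; j < LIMIT; j += i)
                composite[j] = 1;   /* mark multiples as composite */
        }
        printf("%ld primes below %d\n", count, LIMIT);
        return 0;
    }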
GCC (and I doubt your Delphi compiler is much better) compiles its long long datatype to use a pair of 32-bit registers (e.g. EDX:EAX); loading both already takes longer than loading only one (whether it is actually double the time depends on the cache). It then assembles multiple instructions where only one (e.g. ADD/SUB/IDIV) would be needed if you were operating on a single 32-bit value. Then follow the stores for both registers, which again take longer than storing only one of them.
On x64 architecture, if you only need 32 bit wide integers, then using 32 bit wide integers will result in faster code, even when running 64 bit code.
Ask yourself this:
Does this variable ever need to store a value that is bigger than 32 bits? Then choose from the following:
If the answer is "no", use an unsigned int.
If the answer is "yes", use an unsigned 64-bit int. 64-bit ints aren't very memory expensive and are cheap to add or subtract (and reasonably cheap to multiply or divide).
If the answer is "yes, but only on 64-bit systems" (for example the length of a buffer might be > 4GB on 64-bit but never on 32-bit), use a size_t. This is defined to be 32-bit on 32-bit systems and 64-bit on 64-bit systems.
Write your program to be correct, and let the optimiser / performance tests help you get it fast afterwards. It's more expensive for you to write it fast first and fix it later than for you to write it correct first and make it fast later.
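Applying that rule of thumb in C (the variable names are illustrative only):

    #include <stddef.h>
    #include <stdint.h>

    unsigned int retry_count;   /* never exceeds 32 bits */
    uint64_t     bytes_written; /* can exceed 4G even on 32-bit OSes */
    size_t       buffer_len;    /* can exceed 4G only on 64-bit */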

Processor architecture

While HDDs evolve and offer more and more space on less room, why are we "sticking with" 32-bit or 64-bit?
Why can't there be a e.g.: 128-bit processor?
(This is not my homework; I'm just a student interested beyond the things they teach us in informatics)
Because the difference between 32-bit and 64-bit is astronomical - it's really the difference between 2^32 (a ten-digit number in the billions) and 2^64 (a twenty-digit number in the squillions :-).
64 bits will be more than enough for decades to come.
There's very little need for this, when do you deal with numbers that large? The current addressable memory space available to 64-bit is well beyond what any machine can handle for at least a few years...and beyond that it's probably more than any desktop will hold for quite a while.
Yes, desktop memory will continue to increase, but 4 billion times what it is now? That's going to take a while... sure, we'll get to 128-bit if the whole current model isn't thrown out before then, which I see as equally likely.
Also, it's worth noting that upgrading something from 32-bit to 64-bit puts you in a performance hole immediately in most scenarios (this is a major reason Visual Studio 2010 remains 32-bit only). The same will happen with 64-bit to 128-bit. The more small objects you have, the more pointers you have, and pointers that are now twice as large mean more data to pass around to do the same thing, especially if you don't need that much addressable memory space.
When we talk about an n-bit architecture we are often conflating two rather different things:
(1) n-bit addressing, e.g. a CPU with 32-bit address registers and a 32-bit address bus can address 4 GB of physical memory
(2) size of CPU internal data paths and general purpose registers, e.g. a CPU with 32-bit internal architecture has 32-bit registers, 32-bit integer ALUs, 32-bit internal data paths, etc
In many cases (1) and (2) are the same, but there are plenty of exceptions and this may become increasingly the case, e.g. we may not need more than 64-bit addressing for the foreseeable future, but we may want > 64 bits for registers and data paths (this is already the case with many CPUs with SIMD support).
So, in short, you need to be careful when you talk about, e.g. a "64-bit CPU" - it can mean different things in different contexts.
Cost. Also, what do you think the 128-bit architecture will get you? Memory addressing and such, but to handle it effectively you need higher-bandwidth buses and basically a new instruction set that handles it. 64-bit is more than enough for addressing (2^64 = 18446744073709551616 bytes).
HDDs still have a bit of ground to catch up to RAM and such. They're still going to be the I/O bottleneck, I think. Plus, newer chips are just supporting more cores rather than making a massive change to the architecture.
Well, I happen to be a professional computer architect (my inventions are probably in the computer you are reading this on), and although I have not yet been paid to work on any processor with more than 64 bits of address, I know some of my friends who have been.
And I have been playing around with 128 bit architectures for fun for a few decades.
I.e. it's already happening.
Actually, it has already happened to a limited extent. The HP Precision Architecture, Intel Itanium, and the higher-end versions of the IBM Power line have what I call a folded virtual memory. I have described these elsewhere, e.g. in comp.arch posts in some detail, http://groups.google.com/group/comp.arch/browse_thread/thread/53a7396f56860e17/f62404dd5782f309?lnk=gst&q=folded+virtual+memory#f62404dd5782f309
I need to create a comp-arch.net wiki post for these.
But you can get the manuals for these processors and read them yourself.
E.g. you might start with a 64-bit user virtual address. The upper 8 bits may be used to index a region table, which returns an upper 24 bits that are concatenated with the remaining 64-8=56 bits to produce an 80-bit expanded virtual address, which is then translated by TLBs, page tables, and hash lookups, as usual, to whatever your physical address is.
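In code, the folding step described above might look like this sketch (field widths follow the example here, not any particular CPU; __uint128_t is a GCC/Clang extension used only to hold the 80-bit result):

    #include <stdint.h>

    static uint32_t region_table[256];          /* 24-bit region IDs */

    static __uint128_t expand(uint64_t va) {
        /* Upper 8 bits pick a region; its 24-bit ID is concatenated
           with the remaining 56 bits of the virtual address. */
        uint32_t region = region_table[va >> 56] & 0xFFFFFF;
        uint64_t offset = va & ((UINT64_C(1) << 56) - 1);
        return ((__uint128_t)region << 56) | offset;
    }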
Why go from 64->80?
One reason is shared libraries. You may want the shared libraries to stay at the same expanded virtual address in all processes, so that you can share TLB entries. But you may be required, by your language tools, to relocate them to different user virtual addresses. Folded virtual addresses allow this.
Folded virtual addresses are not true >64 bit virtual addresses usable by the user.
For that matter, there are many proposals for >64 bit pointers: e.g. I worked on one where a pointer consisted of a 64bit address, and 64 bit lower and upper bounds, and metadata, for a total of 128 bits. Bounds checking. But, although these have >64 bit pointers or capabilities, they are not truly >64 bit virtual addresses.
Linus posts about 128 bit virtual addresses at http://www.realworldtech.com/beta/forums/index.cfm?action=detail&id=103574&threadid=103545&roomid=2
I'd also like to offer a computer architect's view of why 128bit is impractical at the moment:
Energy cost. See Bill Dally's presentations on how today, most energy in processors is spent moving data around (dissipated in the wires). However, since the most significant bits of a 128bit computation should change little, it should mitigate this problem.
Most arithmetic operations have a non-linear cost w.r.t. operand size:
a. A tree multiplier has space complexity n^2 w.r.t. the number of bits.
b. The delay of a hierarchical carry-lookahead adder is log(n) w.r.t. the number of bits (I think). So a 128-bit adder will be slower than a 64-bit adder. Can anyone give some hard numbers (log(n) seems very cheap)?
Few programs use 128bit integers or quad precision floating point, and when they do, there are efficient ways to compose them from 32 or 64bit ops.
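For example, a 128-bit add composes from two 64-bit adds plus a carry; a minimal sketch (u128 and add128 are made-up names for illustration):

    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } u128;

    static u128 add128(u128 a, u128 b) {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of low half */
        return r;
    }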
The next big thing in processor architecture will be quantum computing. Instead of being just 0 or 1, a qubit can be in a superposition of 0 and 1.
This will lead to huge improvements in the performance of some algorithms (for instance, it will become far easier to crack any RSA private/public key pair).
Check http://en.wikipedia.org/wiki/Quantum_computer for more information and see you in 15 years ;-)
The main need for a 64-bit processor is to address more memory - and that is the driving force to switch to 64-bit. On 32-bit systems, you can really only address 4 GB of RAM, at least per process. 4 GB is not much.
64 bits give you an address space of up to 16 exabytes (though a lot of current 64-bit hardware can address "only" 48 bits - that's still enough to support 256 terabytes of RAM).
Upping the natural integer sizes for a processor does not automatically make it "better", though. There are tradeoffs. With 128-bit you'd need twice as much storage (registers/RAM/caches/etc.) compared to 64-bit for common data types, with all the drawbacks that might have: more RAM needed to store data, more data to transmit = slower, wider buses might require more physical space and perhaps more power, etc.

64-bits and Memory Bandwidth

Mason asked about the advantages of a 64-bit processor.
Well, an obvious disadvantage is that you have to move more bits around. And given that memory accesses are a serious issue these days[1], moving around twice as much memory for a fair number of operations can't be a good thing.
But how bad is the effect of this, really? And what makes up for it? Or should I be running all my small apps on 32-bit machines?
I should mention that I'm considering, in particular, the case where one has a choice of running 32- or 64-bit on the same machine, so in either mode the bandwidth to main memory is the same.
[1]: And even fifteen years ago, for that matter. I remember talk as far back as that about good cache behaviour, and also particularly that the Alpha CPUs that won all the benchmarks had a giant, for the time, 8 MB of L2 cache.
Whether your app should be 64-bit depends a lot on what kind of computation it does. If you need to process very large data sets, you obviously need 64-bit pointers. If not, you need to know whether your app spends relatively more time doing arithmetic or memory accesses. On x86-64, the general purpose registers are not only twice as wide, there are twice as many and they are more "general purpose". This means that 64-bit code can have much better integer op performance. However, if your code doesn't need the extra register space, you'll probably see better performance by using smaller pointers and data, due to increased cache effectiveness. If your app is dominated by floating point operations, there probably isn't much point in making it 32-bit, because most of the memory accesses will be for wide vectors anyways, and having the extra SSE registers will help.
Most 64-bit programming environments use the "LP64" model, meaning that only pointers and long int variables (if you're a C/C++ programmer) are 64 bits. Integers (ints) remain 32-bits unless you're in the "ILP64" model, which is fairly uncommon.
I only bring it up because most int variables aren't being used for size_t-like purposes--that is, they stay within ranges comfortably held by 32 bits. For variables of that nature, you'll never be able to tell the difference.
If you're doing numerical or data-heavy work with > 4GB of data, you'll need 64 bits anyways. If you're not, you won't notice the difference, unless you're in the habit of using longs where most would use ints.
I think you're starting off with a bad assumption here. You say that "moving around twice as much memory for a fair number of operations can't be a good thing", and the first question to ask is "why not?" In a true 64-bit machine, the data path is 64 bits wide, so moving 64 bits takes exactly (to a first approximation) as many cycles as moving 32 bits on a 32-bit machine. So, if you need to move 128 bytes, it takes half as many cycles as it would take on a 32-bit machine.
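Seen in code: copying the same 128 bytes needs half as many element moves with 64-bit loads/stores (a sketch; real memcpy implementations use SIMD and do better still):

    #include <stddef.h>
    #include <stdint.h>

    void copy32(uint32_t *dst, const uint32_t *src) {
        for (size_t i = 0; i < 32; i++) dst[i] = src[i];  /* 32 moves */
    }

    void copy64(uint64_t *dst, const uint64_t *src) {
        for (size_t i = 0; i < 16; i++) dst[i] = src[i];  /* 16 moves */
    }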
