Is there any BigEndian hardware out there? - endianness

I'm considering throwing out the code that handles the big-endian case from a library and instead simply throwing an exception during initialization if the platform is not little-endian. I cannot imagine that there is any big-endian hardware left if we restrict ourselves to:
typical server hardware for any hosted web site
servers according to the Open Compute Project spec
all common mobile devices
Has anybody lately encountered a big-endian machine or device that does not belong in the dinosaur park?

Windows only supports little-endian processors ( http://blogs.msdn.com/b/larryosterman/archive/2005/06/07/426334.aspx ); however, it seems all of the platforms that matter (so to speak) are either little-endian already (x86, AMD64) or support a little-endian mode (ARM, POWER/PowerPC, Itanium, etc.).
While exclusively big-endian hardware platforms do exist, they're increasingly rare and obscure. However, if the cost of maintaining BE/LE-compatible code isn't too much trouble, then I think it's worthwhile to keep it: it should only be a matter of performing conversions at the entry points and output calls of your code; internally you shouldn't need to do anything.
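The conversion-at-the-boundary approach suggested above can be sketched with Python's struct module (a sketch of the idea, not any particular library's code): pick one wire byte order at the entry and output points and keep native order internally.

```python
import struct
import sys

value = 0x01020304

le = struct.pack("<I", value)  # force little-endian, regardless of host
be = struct.pack(">I", value)  # force big-endian, regardless of host

assert le == bytes([0x04, 0x03, 0x02, 0x01])
assert be == bytes([0x01, 0x02, 0x03, 0x04])

# Reading back is symmetric: use the same order you wrote with.
assert struct.unpack("<I", le)[0] == value

# The host's own order, if you'd rather fail fast on big-endian machines:
print(sys.byteorder)  # 'little' on x86, AMD64, and ARM in its usual mode
```

With explicit order markers at the boundary, the interior of the code never needs to know what the host's endianness is.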

Related

What is the quantifiable benefit of 64 bit for Visual Studio Code over 32 bit

I'm not a hardware guy, but I know that a request for a 64-bit version of Visual Studio was declined by Microsoft, which stated that a 64-bit version would not have good performance.
One noticeable difference between the two that I feel is obvious is the code base. Visual Studio began its life in 1997; one would think that means more baggage on the Visual Studio side, fewer opportunities for a very modern application architecture and code, and perhaps parts that were built to perform on 32-bit and for some reason are not suitable for 64-bit. I don't know.
Visual Studio Code, on the other hand, is a modern Electron app, which means it's pretty much just compiled HTML, CSS, and JavaScript. I'm betting that making a 64-bit version of Visual Studio Code faces few obstructions, and although the performance gain may not be truly noticeable, why not?
P.S.
I still would like to understand what areas may be improved in performance and whether that improvement is negligible to a developer. Any additional info or fun facts you may know would be great; I would like to have as much info as possible, and I will update the question with any hard facts I uncover that are not mentioned.
The existence of 64-bit Visual Studio Code is largely a side-effect of the fact that the Node.js- and Chromium-based runtimes of Electron support both 32- and 64-bit architectures, not a primary design goal for the application. Microsoft developed VS Code with Electron, a framework used to build desktop applications with web technologies.
Because Electron already includes runtimes for both architectures (and for different operating systems), VS Code can provide both versions with little additional effort—Electron abstracts the differences between machines from the JavaScript code.
By contrast, Microsoft distributes much of Visual Studio as compiled binaries that contain machine-specific instructions, and the cost of rewriting and maintaining the source code for 64-bits historically outweighed any benefits. In general, a 64-bit program isn't noticeably faster to the end user than its 32-bit counterpart if it never exceeds the limitations of a 32-bit system. Visual Studio's IDE shell doesn't do much heavy-lifting—the bulk of the expensive processing in a typical workflow is performed by the integrated toolchains (compilers, etc.) which usually support 64-bit systems.
With this in mind, any benefits we may notice from running a 64-bit version of VS Code are similar to those we would see from using a 64-bit web browser. Most significantly, a 64-bit version can address more than 4 GB of memory, which may matter if we need to open a lot of files simultaneously or very large files, or if we use many heavy extensions. So—most important to us developers—the editor won't run out of memory when abused.
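The 4 GB figure above follows directly from pointer width; a quick back-of-the-envelope check:

```python
# A 32-bit pointer can name 2**32 distinct byte addresses.
addressable_32 = 2 ** 32
print(addressable_32 // 2 ** 30)  # 4 (GiB)

# A 64-bit pointer raises the theoretical ceiling enormously;
# real operating systems expose only part of that range.
addressable_64 = 2 ** 64
print(addressable_64 // addressable_32)  # 2**32 times as much
```

In practice a 32-bit process on Windows usually gets even less than 4 GB of usable address space, so the ceiling is hit sooner than the arithmetic suggests.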
While this sounds like an insurance policy worth signing, even if we never hit those memory limits, remember that 64-bit applications generally consume more memory than their 32-bit counterparts. We may want to choose the 32-bit version if we desire a smaller memory footprint. Most developers may never hit that 4 GB wall.
In rare cases, we may need to choose either a 32-bit or 64-bit version if we use an extension that wraps native code like a DLL built for a specific architecture.
Any other consequences, positive or negative, that we experience from using a 64-bit version of VSCode depend on the versions of Electron's underlying runtime components and the operating system they run on. These characteristics change continuously as development progresses. For this reason, it's difficult to state in a general manner that the 32-bit or 64-bit versions outperform the other.
For example, the V8 JavaScript engine historically disabled some optimizations on 64-bit systems that are enabled today. Certain optimizations are only available when the operating system provides facilities for them.
Future 64-bit versions on Windows may take advantage of address space layout randomization for improved security (more bits in the address space increases entropy).
For most users, these nuances really don't matter. Choose a version that matches the architecture of your system, and switch only if you encounter problems. Updates to the editor will continue to bring optimizations for its underlying components. If resource usage is a big concern, you may not want to use a GUI editor in the first place.
I haven't worked much on Windows but have worked with x86, x64, and ARM (both 32-bit and 64-bit) processors. Based on my experience, before producing a 64-bit build we asked: do we really need 64-bit operations? If every operation fits within 32 bits, why would we need another 32 bits?
Think of it like this: you have a processor with 64-bit address and data buses and 64-bit registers, yet almost all of the values your program manipulates fit in 32 bits. What will you do? Well, I think there are two ways now:
Create a 64-bit version of your program. Pointers, and every pointer-heavy data structure, now occupy 8 bytes instead of 4, so an application that could have executed in 256 MB of RAM may need noticeably more, and the other programs or processes sharing that RAM will suffer.
Keep the program 32-bit and let the 64-bit processor execute it in its 32-bit compatibility mode, which it supports natively.
With the same resources, the second approach keeps the memory footprint and cache pressure lower.
But yes, if your program genuinely needs 64-bit operations (for example, processing 4K video or addressing more than 4 GB of data), then it is better to create a 64-bit program file.
Long story short: try to write compact software and leverage the hardware as much as possible.
So far, from what I have read Here, Here, and Here, I gather that most of the components of VS require only a 32-bit address space.
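The footprint argument above hinges on pointer width, which you can check for the running interpreter (struct's "P" format is the platform's native pointer, i.e. void *, size):

```python
import struct

pointer_bytes = struct.calcsize("P")  # size of a native pointer
print(pointer_bytes * 8)  # 64 on a 64-bit build, 32 on a 32-bit build

# Every pointer-heavy data structure pays this per-pointer cost,
# which is the main reason a 64-bit build's footprint grows.
assert pointer_bytes in (4, 8)
```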
Hope it explains.
Thanks
4 years later, in 2021, you now have:
"Microsoft's Visual Studio 2022 is moving to 64-bit" from Mary Jo Foley
It references the official publication "Visual Studio 2022" from Amanda Silver, CVP of Product, Developer Division
Visual Studio 2022 is 64-bit
Visual Studio 2022 will be a 64-bit application, no longer limited to ~4gb of memory in the main devenv.exe process. With a 64-bit Visual Studio on Windows, you can open, edit, run, and debug even the biggest and most complex solutions without running out of memory.
While Visual Studio is going 64-bit, this doesn’t change the types or bitness of the applications you build with Visual Studio. Visual Studio will continue to be a great tool for building 32-bit apps.
I find it really satisfying to watch this video of Visual Studio scaling up to use the additional memory that’s available to a 64-bit process as it opens a solution with 1,600 projects and ~300k files.
Here’s to no more out-of-memory exceptions. 🎉

Are games/programs compiled for multiple architectures?

This might be a bit broad and somewhat stupid, but it is something I've never understood.
I've never dealt with code that needs to be compiled except Java, which I guess falls somewhere in between, so here goes.
I don't understand how games and programs are compiled. Are they compiled for multiple architectures? Or are they compiled during installation (it does not look that way)?
As far as I've understood, code needs to be compiled based on the local architecture in order to make it work. Meaning that you can't compile something for AMD and "copy" the binaries and execute them on a computer running Intel (or similar).
Is there something I've misunderstood here, or do they use an approach which differs from the example I am presenting?
AMD and Intel are manufacturers. You might be thinking of amd64 (also known as x86_64) versus x86. x86_64 is, as the name suggests, based on x86.
Computers running a 64-bit x86_64 OS can normally run x86 apps, but the reverse is not true. So one possibility is to ship 32-bit x86 games, but that limits the amount of RAM that can be accessed per process. That might be OK for a game, though.
A bigger issue is shipping for different platforms, such as the PlayStation and the (Windows) PC. The PlayStation 3 not only has a completely different CPU architecture (Cell), but a different operating system.
In this case a recompile alone doesn't cut it, because of the operating-system difference: you have to maintain two separate versions of the game, sharing a bunch of common code and media files (also known as assets), one version for PC and one for PlayStation.
You could use Java to overcome that problem, in theory... but that only works when a JVM is available for all target platforms. Also, there is now fragmentation in the Java market, with e.g. Android supporting a different API from Java ME. And iPhones and iPads don't support Java at all.
Many games publishers do not in fact use Java. An exception is Mojang.
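The architecture/OS pair that a native binary must target can be inspected at runtime; a small illustration using Python's standard library:

```python
import platform
import struct

# A compiled binary must match both of these:
print(platform.system())   # OS, e.g. 'Windows', 'Linux', 'Darwin'
print(platform.machine())  # CPU architecture, e.g. 'AMD64', 'x86_64', 'arm64'

# The interpreter's own bitness, independent of what the OS supports:
print(struct.calcsize("P") * 8)  # 32 or 64
```

A publisher shipping natively compiled code builds one binary per such combination, which is exactly why installers often come in several flavors.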

Can I convert a 16-bit .exe program to a 64-bit .exe?

I realize that there will likely be no special converter programs or anything easy like that for such a task, but it is imperative that I find some way to get a 16-bit program to run in 64-bit Windows. Due to the large amount of resources that must be dedicated to them, emulators will not be a good solution.
The idea I had for this project was to decompile all the code from a 16-bit program, copy it, and re-compile it into 64-bit code. Is this at all possible using Eclipse or another programming environment?
Basically, I want to make a 16-bit program run in 64-bit Windows without emulators. I realize that it's a tall order, but is it conceivable?
The problem goes beyond translating 16-bit instructions into 64-bit instructions. There is also the ABI (Application Binary Interface) the program uses to communicate with the rest of the system. A 16-bit program likely makes a lot of DOS calls, and it is not unlikely that it tries to access hardware directly, too. There is no way this can be translated automatically. Even if such a solution existed, I highly doubt the result would be more efficient than running in a virtual machine (which is actually very efficient). Furthermore, programs written for a 16-bit environment are often not very scalable and completely unable to handle amounts of data beyond the capacities of the original target platform.
So I'd say there are really just two realistic solutions: Run it in a virtual machine. Or if that doesn't cut it, write a new application from scratch that does the same thing.
Even though this is a very old question, I thought I'd write this solution for anyone still looking:
Using something like winevdm (a very lightweight Windows program), you can run 16-bit Windows apps (Windows 1.x, 2.x, 3.0, 3.1, etc.) on 64-bit Windows very easily!

high performance runtime

It’s the first time I’ve submitted a question in this forum.
I’m posting a general question; I don’t have to develop an application for a specific purpose.
After a lot of “googling” I still haven’t found a language/runtime/script engine/virtual machine that meets these 5 requirements:
memory allocation of variables/values or objects cleaned up at run time (e.g. à la C++, which uses the keyword delete, or free in C)
the language (and consequently the program) is a script or pseudo-compiled à la byte code, and should be portable across the main operating systems (Windows, Linux, *BSD, Solaris) and platforms (32/64-bit)
native use of multicore (engine/runtime)
no limit on heap usage
a library for networking
I'm open about the programming language used to build applications for this engine (the paradigm is not important).
I hope that this post won’t stir up a holy war, but I'd like to focus on the engine's behavior during program execution.
Sorry for my bad English.
Luke
I think Erlang might fit your requirement:
most data is either allocated in local scopes, and therefore deleted immediately after use, or contained in library-powered permanent storage like ETS, DETS, or Mnesia. There is garbage collection, though, but the paradigm of the language makes it much less of a concern.
the Erlang compiler compiles the source code to byte code for the BEAM virtual machine, which, unlike the JVM, is register-based rather than stack-based, which tends to help performance. The VM is available for:
Solaris (including 64 bit)
BSD
Linux
OSX
TRU64
Windows NT/2000/2003/XP/Vista/7
VxWorks
Erlang has been designed for distributed systems, concurrency and reliability from day one
Erlang's heap grows with your demand for it; it's initially limited and expanded automatically (there are numerous tweaks you can use to configure this on a per-VM basis)
Erlang comes from a networking background and provides tons of libraries from IP to higher-level protocols

Should I provide an x64 build of my application?

Perhaps I'm missing a major point of the x64 platform here, but my perception was that x64 applications were only better performing than x86 versions (on an x64 OS and hardware, obviously) when large amounts of memory, large pointers, or other demanding factors were involved.
However, I've started to notice some smaller applications offering x64 versions of their installers in addition to the standard x86 versions. Since x86 runs just fine on Windows x64 using WoW, is there any benefit to me releasing an x64-compiled version of my application? As I see it:
Pros:
Potentially higher performance (in what conditions, though?)
Cons:
Additional build to create/support
Potential bugs in x64 target that aren't present in the x86 target
Dependence on x64 versions of vendor/OS DLLs, requiring different install checklist and introducing additional troubleshooting complications
What are some compelling reasons that should cause me to reconsider adding an x64-compiled version of my app?
Another potential reason for compiling and debugging an x64 version is that it may expose hidden bugs in the x86 version. For example, this may expose improper conversions between 32-bit integers and (now) 64-bit pointers. This also positions you to support x64 in the future.
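One such hidden bug is storing a pointer value in a 32-bit integer; a minimal illustration of the truncation (the address value here is made up for the example):

```python
import ctypes

# Hypothetical 64-bit address that doesn't fit in 32 bits.
addr = 0x1_0000_0010

# Forcing it through a 32-bit integer silently drops the high bits:
# the classic bug that only surfaces once allocations pass 4 GB.
truncated = ctypes.c_uint32(addr).value

print(hex(addr))       # 0x100000010
print(hex(truncated))  # 0x10
assert truncated == addr & 0xFFFFFFFF
```

In C or C++ the compiler will often warn about such conversions only when you actually build for a 64-bit target, which is why an x64 build can reveal defects that the x86 build hides.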
However, unless your application would benefit from 64-bit integers, the additional CPU registers, or a larger memory space (beyond ~3.5 GB), or implements a device driver, staying with a 32-bit application is fine. All major operating systems support running 32-bit and 64-bit applications side by side, so there will not be a major push towards 64-bit-only applications.
BTW, applications based on .NET automatically benefit from being executed on a 64-bit system without any code changes, although anything involving native interop still deserves a round of testing there.
Potential performance improvement relates mostly to usage of 64-bit integers (around four times as fast in an x64 build on my machine as in an x86 build on the same machine) and the fact that compilers may assume some CPU features to be universally present in CPUs supporting x64, such as SSE2, etc.; this can lead to more optimized code.
For many applications, especially small ones, it isn't too difficult to be 64-bit clean; for larger ones it's a major headache, granted. Compelling reasons are few, usually. But some platforms don't install their 32-bit support in 64-bit versions by default (I think FreeBSD needs to be explicitly told to do so, but I may err on that).
Your program will benefit if it uses a lot of long longs, and WOW64 does of course mean a minor performance hit (very minor, though, because the CPU has a compatibility mode for precisely this reason)...
Windows support for 32-bit programs will degrade in the future (albeit slowly), so I'd say in another year or two you can just about wonder why you would want to deploy a 32-bit application...
Also, a 64-bit build of your application can actually be more optimized than a 32-bit build, because with 64-bit you are guaranteed to have quite a few things, such as SSE2.
Performance: x86_64 is faster than x86 by a small amount (that also depends on your compiler), for the reasons already stated. It's also easier to support really huge data sets, though of course many applications never need to go there.
Certainly on Linux and OS X you really should support x86_64; I'm typing this into a 64-bit browser on OS X, and my Linux box in the corner is also almost exclusively 64-bit. 64-bit Windows is a bit more of a question, but it is now coming (with the drivers).
This is often based on human factors rather than objective technical reasoning: 64-bit is the latest and greatest, so it must be better than 32-bit, and if customers want it, the customer is always right. I have spoken with Windows users whose stated goal was to make it so that, when they view their process list in Windows, *32 does not appear next to any of their apps.
Often this is also fueled by confusion about compatibility: people have 64-bit operating systems and just want to make sure the software will work on their computers. Expecting average people to understand the technical demarcation line of 32-bit processes on a 64-bit OS is unrealistic. Unless explicitly stated on the packaging, it can be a point of confusion or concern for a customer purchasing new software. You will often see 64-bit mentioned for this reason, though typically it really means only 64-bit compatibility.
Now, there are a few 64-bit applications (Flash Player and Google Earth top my list) that can't come soon enough.
Here's one reason: integrating with 64-bit apps, like writing a shell extension. A 64-bit process cannot load a 32-bit DLL directly, so you have to make yours 64-bit as well.
One factor that hasn't been mentioned yet: WoW64 Is Now an Optional Feature for Server Core. Only an issue if your application needs to run on server systems, of course.
Similarly, Windows PE, Windows RE, etc., do not include Wow64.
