I've been paying some attention to Microsoft's recent promotion of Velocity as a distributed caching solution that would compete with the likes of Memcached.
I've been looking for a 64-bit version of Memcached for Windows for some time now with no luck, and since everything about the ASP.NET MVC project I'm working on is 64-bit, it doesn't make sense to use anything but 64-bit.
Now, we're already hedging our bets by using ASP.NET MVC in Beta (RTM soon, hopefully), but Stack Overflow doesn't seem to be doing too badly, so I have limited concerns there. Velocity, however, is still very much an unknown quantity and will remain Beta (or CTP) for ages - but it does have 64-bit support!
Does anyone have relevant experience or a point of view to offer in this situation? Should we bide our time for Velocity - is it even anywhere near good enough to compete with a giant like Memcached - or should we invest time in getting a 64-bit version of Memcached going?
We have recently done a fair amount of comparison between Velocity and Memcached. In a nutshell, we found Velocity to be 3x-5x slower than Memcached, and (even more crucially) it currently has no support for a multi-get operation. So at the moment I would recommend going with Memcached. Another lesson we learned is that the slowest operation in distributed caching is serialization and deserialization (at least in ASP.NET). The in-process ASP.NET cache is orders of magnitude faster, so you have to choose your caching strategies much more carefully.
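To illustrate that last point, here is a minimal sketch of a two-level caching strategy in C#: check the fast in-process ASP.NET cache first and only pay the network and (de)serialization cost of the distributed cache on a local miss. `IDistributedCache` here is a hypothetical wrapper around whichever Memcached (or Velocity) client you choose, not a real API.

```csharp
using System;
using System.Web;
using System.Web.Caching;

// Hypothetical abstraction over your distributed cache client of choice.
public interface IDistributedCache
{
    T Get<T>(string key) where T : class;
    void Set<T>(string key, T value, TimeSpan ttl) where T : class;
}

public static class TwoLevelCache
{
    public static T GetOrAdd<T>(IDistributedCache remote, string key,
                                Func<T> loader, TimeSpan ttl) where T : class
    {
        // 1. In-process cache: no serialization, no network hop.
        var local = HttpRuntime.Cache[key] as T;
        if (local != null)
            return local;

        // 2. Distributed cache: pays the serialization + network cost once.
        var value = remote.Get<T>(key);
        if (value == null)
        {
            value = loader();               // e.g. the database query; assumed non-null
            remote.Set(key, value, ttl);
        }

        // Keep a short-lived local copy to avoid repeated deserialization.
        HttpRuntime.Cache.Insert(key, value, null,
            DateTime.UtcNow.AddSeconds(30), Cache.NoSlidingExpiration);
        return value;
    }
}
```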
If you don't mind paying for a license, you can use Scale Out State Server, which I talk about in my answer to a similar question here. They have both 32- and 64-bit versions.
EDIT: Despite the name of the product, it handles both Session State and distributed caching.
Memcached and its libraries are open source, if I'm not mistaken, so if you want to go the 64-bit route, can't you just recompile?
I evaluated Velocity when it first arrived but came to the conclusion that it was a bit underdeveloped at that stage. Being able to run Memcached on non-Windows servers is also a bonus.
These frameworks - Phalcon and ChicagoBoss - are supposed to be the future of high-speed web development, but I can't find any benchmark or feature comparison of them on Google. Which framework would be better in which situation - for example, for building a high-load online shop? For building a Stack Overflow clone?
Could someone also explain the basic differences in memory management and request handling, please?
Though the official documentation links to TechEmpower, ChicagoBoss is not mentioned anywhere there. Looking closely at ChicagoBoss, it seems to be targeted mostly at Erlang developers, and Erlang is not the most popular language out there. I'm fanatical about Phalcon, but I suspect ChicagoBoss would be faster and more resource-efficient out of the box. Then again, writing your entire app in binary code right away would be even better in that sense.
In less than two years, Phalcon has achieved greater popularity and a better reputation than ChicagoBoss did in five. There is significantly more information and support out there for Phalcon, since all standard PHP rules and information apply to it as well. Phalcon's next big release is under active development and looks very promising.
Which framework would be better in which situation - for example, for building a high-load online shop? For building a Stack Overflow clone?
I'm certain that neither Amazon nor SO uses either of them; both rely on a lot of caching and infrastructure optimisation to get where they are, which is a job for a different kind of framework.
Phalcon is a great lightweight tool for building unique projects with a focus on high performance. It behaves very nicely with PhpStorm, and development/debugging is a pleasure most of the time. But be warned, it will give you plenty of headaches (there are a few bugs, and some information is hard to come by). It isn't the best choice for enterprise software; you will spend a lot of time figuring out how things work and how to fix some of them.
Are there any performance benchmarks between the managed and unmanaged Oracle ODP.Net drivers?
(i.e., is there any advantage to moving to the managed driver other than architectural/deployment simplicity?)
I would like to share some results. I think the small loss of performance is worth it compared to the ease of deployment.
Note: seg means seconds. Sorry about that.
Of course, it is a simple test, and there are several topics it does not cover, like connection pooling, stability, reliability and so on...
It is important to mention that each scenario was executed 100 times, so the reported times are the average of those 100 executions.
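For context, here is a rough sketch (in C#) of the kind of timing harness that produces averaged numbers like these; the warm-up run and the scenario delegate are my own additions for illustration, not the original test code.

```csharp
using System;
using System.Diagnostics;

static class Timing
{
    // Runs a scenario N times and returns the average elapsed milliseconds.
    public static double AverageMs(Action scenario, int runs = 100)
    {
        scenario();                         // warm-up run, not timed
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < runs; i++)
            scenario();                     // e.g. open connection, run query, drain reader
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds / runs;
    }
}
```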
Bullets from the quick start video:
Fewer files (1 or 2 dlls at most)
Smaller footprint (10 MB compared to 200 MB)
Easier side by side deployment
Same assembly for 32 and 64 bit (except for second MTS assembly).
Code Access Security
I'm not sure about performance, but I doubt it will be much different either way. My guess is that the two drivers communicate in an identical way over Oracle Net. While there might be minor differences in the in-memory client-side operations done to prepare a command and process the results, this overhead typically represents only a fraction of the time relative to the entire transaction. Most of the cost/time is spent on the server in physical I/O and in transferring the data back to the client.
This simply isn't the same as moving off the OLE DB provider or the System.Data.OracleClient driver. This is another release from the same RDBMS company - they're going to exploit all the same performance tricks that their other client used. I wish I could post a study, but I'd guess such a thing doesn't exist because in the end it would be unremarkable. A case of no news is good news - if the new provider were somehow worse, you would be reading about it.
Simplicity is enough reason to switch to this, IMO. The vast majority of developers and administrators do not fully understand the provider and its relationship to the unmanaged client. Confusion about Oracle home preference, version mismatches, upgrades, etc. comes up constantly. Eliminating these questions would be a welcome change.
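To that point, in most code the switch is little more than swapping the assembly reference and the namespace, since both providers expose the same ADO.NET surface. A minimal sketch (the connection string and query are placeholders, not from the question):

```csharp
using System;
// Unmanaged ODP.NET (requires a matching Oracle client installation):
//   using Oracle.DataAccess.Client;
// Managed ODP.NET (just reference the Oracle.ManagedDataAccess assembly):
using Oracle.ManagedDataAccess.Client;

class Demo
{
    static void Main()
    {
        // Same ADO.NET types either way: OracleConnection, OracleCommand, OracleDataReader.
        using (var conn = new OracleConnection("User Id=hr;Password=hr;Data Source=ORCL"))
        using (var cmd = new OracleCommand("SELECT first_name FROM employees", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
        }
    }
}
```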
Here is a gotcha for all you folks. It took me a couple of weeks to figure out why the Oracle managed drivers would not connect using EF6: if your database requires certain data integrity algorithms, then you MUST use the unmanaged drivers!!
Which algorithms? That's buried deep in the Oracle documentation!!! THANKS ORACLE!!!!!
The easier deployment and bitness independence are really nice benefits, but you should evaluate your typical driver usage thoroughly first. I faced an almost 50% performance handicap when using the new managed driver in 64-bit processes. Other people are reporting memory leaks etc. on the Oracle forum: https://forums.oracle.com/community/developer/english/oracle_database/windows_and_.net/odp.net . It looks like a typically buggy Oracle product that will need some more months/years to settle down :/
Keep in mind that Custom Types are not supported yet. This could be a reason not to switch to the managed driver.
See this Oracle doc for the differences between the managed and unmanaged version:
http://docs.oracle.com/cd/E16655_01/win.121/e17732/intro004.htm
Is VCL dead, or does it have a future as a GUI library? Since CLX ended, is there any chance of cross-platform support in future releases?
I've had to do some work on a legacy app that uses Borland's VCL (BCB6). Now that new features have to be implemented, it's necessary to re-evaluate the alternatives: whether to stick with VCL or migrate to some other library/framework.
I've never read much about what's happening in the field of Embarcadero (Borland) tools. There seem to be only a few VCL-tagged questions here on SO, and I haven't had much luck with Google either.
Whether to continue using VCL in your project or migrate to an alternative depends a lot on your requirements. The VCL framework is powerful and mature, with lots of 3rd-party components, which makes it well worth considering. The alternatives have been improving rapidly, and picking one out as the ultimate choice really requires you to carefully consider your requirements and weigh the strengths and weaknesses of the different frameworks.
Considering that cross-platform support is on the road map, I'd remind you that 64-bit support has also been on it for quite a while. We might see cross-platform support, perhaps on schedule, perhaps delayed as we have seen with many previous features. I want to believe it's coming because I truly like the VCL framework, but I always have a natural doubt about the official road map of the RAD Studio series - sorry David. ;)
If you've researched the different alternatives and found VCL to be the best choice for your project, then I'd consider using the VCL framework, especially if it is a framework you are already familiar with. Learning a new framework can - while often a good idea - be a time-consuming job. So even though there is a risk of the framework not being kept alive (as there is with any alternative), you might save a lot of work by staying with the familiar framework if it is the one that suits your project the most.
If you do consider going with C++ Builder and the VCL, you might find that the C++ Builder Journal is a valuable source of information; they have a relatively quiet forum, but with some interesting posts in it, and some free hints on their website: www.bcbjournal.com.
Of course, there are also the Embarcadero forums, and this site. It may be a good idea to search the Delphi forums and categories, since they seem to have more active users and far more posts. One good thing, though, is that translating Delphi to C++ in VCL-related questions is quite simple.
VCL is undergoing continued development.
Cross-platform support is on the current roadmap.
The Embarcadero forums are still a valuable resource.
As a user of VCL, I must say that your observations are correct. VCL might appeal to you, but the resources available compared to Qt and other toolkits are poor, especially on SO. Our team has also found several bugs in its components, and has more than once patched components to make our application stable. Still, for me the main reason to migrate is that VCL locks you in to a single set of development tools. I must admit that I have a hard time finding any really good reasons to continue using it if you have the resources to migrate.
Given that bcc32 and its libraries are also very buggy, the lock-in becomes even more serious. Over the last few months my team and I have spent more time fixing issues caused by the compiler than actually developing features. For me this is such a serious impediment that its costs outweigh its benefits tenfold. Unfortunately, the cost of migrating is so high for us that, at least for now, we have to endure the pain.
I'm involved with a project using DotNetNuke version 05.01.04 Community Edition. We are building our new Intranet using it, but performance is terrible.
We have five people adding pages and content to it, and every 15-30 seconds they experience a pause of 10 seconds or longer before the system continues and the next screen loads.
The server is Windows 2003, 3.8GHz with 1GB of RAM. I'm told by our server admin that the CPU and memory performance don't appear to be the bottleneck.
We currently have 350 pages in the system and plan to add 1000, so we need to resolve this performance problem before we can finish entering content and go live.
I just can't see where the bottleneck is. Is there a good way to determine the bottleneck when using DotNetNuke?
Modules installed
Publish:Engage (not currently in use)
Page Blaster (doesn't appear to provide caching when users are logged in using Integrated Authentication)
SimpleGallery
XMod
Content Manager
IIS Setup
Application recycling completely disabled (Apart from a 2am recycle)
New findings: 18th March 2010
The main bottleneck was due to version 5.1.4 having a bug which caused 1300 database roundtrips on an average page, due to broken database in-memory caching. We've upgraded to 5.2.4 which has resolved this bottleneck.
Now the next biggest bottleneck is the navigation. We've used both DDR:Menu and DDN:Nav, but both have a major impact on performance.
Is there a navigation interface out there that doesn't drain performance so badly?
I think you need to start investigating this using performance profiling tools. For the DNN application itself I'd grab something like JetBrains DotTrace or Red Gate's ANTS Performance Profiler.
For the database SQL Server Profiler would be the first choice or a tool such as Red Gate's SQL Response.
Without profiling the application like this, you're just grasping at straws.
And as Tim pointed out in his comment, install Firebug in Firefox with the YSlow add-on to see which resources are taking longest to serve to the browser.
Mitchel Sellers has some good tutorials and checklists to go through with regards to performance in DNN. Start with Explaining High Performance DotNetNuke Configuration and Management (which points to some of his earlier articles).
I have several years of DNN development and maintenance experience. When I run into this kind of problem, I start with database clean-up. The next step is to look for missing indexes and/or rebuild all the indexes periodically (a scheduled SQL job works well for that), but the major performance gain usually comes from cleaning up the tables.
Other good steps are disabling trace, setting debug mode to false, and turning off DNN features you don't use (the scheduler is the first one to turn off).
Edit: consider keep-alive as well.
Hope this helps
Is your database on that server? If so, just throw in some more RAM, or get a faster disk array...
Have you considered creating this batch of pages directly through T-SQL? It's not hard to do and may save you a lot of time.
Would I expect to see any performance gain by building my native C++ client and server as 64-bit code?
What sort of applications benefit from having a 64 bit specific build?
I'd imagine anything that makes extensive use of long would benefit, or any application that needs a huge amount of memory (i.e. more than 2Gb), but I'm not sure what else.
Architectural benefits of Intel x64 vs. x86
larger address space
a richer register set
can link against external libraries or load plugins that are 64-bit
Architectural downside of x64 mode
all pointers (and thus many instructions too) take up 2x the memory, cutting the effective processor cache size in half in the worst case
cannot link against external libraries or load plugins that are 32-bit
In applications I've written, I've sometimes seen big speedups (30%) and sometimes seen big slowdowns (> 2x) when switching to 64-bit. The big speedups have happened in number crunching / video processing applications where I was register-bound.
The only big slowdown I've seen in my own code when converting to 64-bit is from a massive pointer-chasing application where one compiler made some really bad "optimizations". Another compiler generated code where the performance difference was negligible.
Benefit of porting now
Writing 64-bit-compatible code isn't that hard 99% of the time, once you know what to watch out for. Mostly, it boils down to using size_t and ptrdiff_t instead of int when referring to memory addresses (I'm assuming C/C++ code here). It can be a pain to convert a lot of code that wasn't written to be 64-bit-aware.
Even if it doesn't make sense to make a 64-bit build for your application (it probably doesn't), it's worth the time to learn what it would take to make the build so that at least all new code and future refactorings will be 64-bit-compatible.
Before working too hard on figuring out whether there is a technical case for the 64-bit build, you must verify that there is a business case. Are your customers asking for such a build? Will it give you a definitive leg up in competition with other vendors? What is the cost for creating such a build and what business costs will be incurred by adding another item to your accounting, sales and marketing processes?
While I recognize that you need to understand the potential for performance improvements before you can get a handle on competitive advantages, I'd strongly suggest that you approach the problem from the big picture perspective. If you are a small or solo business, you owe it to yourself to do the appropriate due diligence. If you work for a larger organization, your superiors will greatly appreciate the effort you put into thinking about these questions (or will consider the whole issue just geeky excess if you seem unprepared to answer them).
With all of that said, my overall technical response would be that the vast majority of user-facing apps will see no advantage from a 64-bit build. Think about it: how much of the performance problems in your current app comes from being processor-bound (or RAM-access bound)? Is there a performance problem in your current app? (If not, you probably shouldn't be asking this question.)
If it is a Client/Server app, my bet is that network latency contributes far more to the performance on the client side (especially if your queries typically return a lot of data). Assuming that this is a database app, how much of your performance profile is due to disk latency times on the server? If you think about the entire constellation of factors that affect performance, you'll get a better handle on whether your specific app would benefit from a 64-bit upgrade and, if so, whether you need to upgrade both sides or whether all of your benefit would derive just from the server-side upgrade.
Not much else, really. Though writing a 64-bit app can have some advantages to you, as the programmer, in some cases. A simplistic example is an application whose primary focus is interacting with the registry. As a 32-bit process, your app would not have access to large swaths of the registry on 64-bit systems.
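The question is about native C++, but the same WOW64 registry redirection affects any 32-bit process, so here is a quick illustrative sketch in C# of explicitly opening the 64-bit registry view from a 32-bit process (the key path is hypothetical):

```csharp
using System;
using Microsoft.Win32;

class RegistryViewDemo
{
    static void Main()
    {
        // By default, a 32-bit process on 64-bit Windows is redirected to the
        // WOW64 view (e.g. HKLM\SOFTWARE\Wow6432Node) for many keys.
        // Explicitly asking for the 64-bit view works around the redirection.
        using (var hklm64 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine,
                                                    RegistryView.Registry64))
        using (var key = hklm64.OpenSubKey(@"SOFTWARE\SomeVendor\SomeProduct"))
        {
            Console.WriteLine(key?.GetValue("InstallDir") ?? "(not found)");
        }
    }
}
```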
Continuing #mdbritt's comment, building for 64-bit makes far more sense [currently] if it's a server build, or if you're distributing to Linux users.
It seems that far more Windows workstations are still 32-bit, and there may not be a large customer base for a new build.
On the other hand, many server installs are 64-bit now: RHEL, Windows, SLES, etc. NOT building for them would be cutting-off a lot of potential usage, in my opinion.
Desktop Linux users are also likely to be running the 64-bit versions of their favorite distro (most likely Ubuntu, SuSE, or Fedora).
The main obvious benefit of building for 64-bit, however, is that you get around the 3GB barrier for memory usage.
According to this web page, you benefit most from the extra general-purpose registers of a 64-bit CPU if your code uses a lot of loops and/or deeply nested loops.
You can also expect gains thanks to the additional registers and the new parameter-passing convention (which is itself tied to the additional registers).