Performance tuning in a Cocoa application

I am developing a Cocoa application which communicates constantly with a web service to get up-to-date data. This considerably reduces the performance of the application. The calls are made to the web service asynchronously but the number of calls is huge.
In what ways can I improve the performance of the application? Is there a good document or write-up available that gives the best practices to follow when a Cocoa application communicates with a web service?
Thanks

You should try out Shark, which comes with the Mac OS X dev tools. It's really great for digging into your call stack, and it should allow you to limit the analysis to the network libraries and friends.

Yes! Apple actually has some very concise guides on performance that cover a lot of tricks and techniques; I'm sure you'll find something relevant to your own application. There may be some additional guides specific to 10.5 that I haven't seen yet, but here are three I've found useful in the past.
Performance Overview
Cocoa Performance Guidelines
Cocoa Drawing Performance Guidelines
The most important thing to take away, though, is that you need to use performance tools to see exactly where the bottleneck is occurring. Sometimes it may be in the place you least expect.

I think if you use Shark you'll just find your app is blocking while waiting for answers back from the server. Code split across machines is far harder to benchmark, because the standard tools can only see part of the picture.
It sounds like you need to look into bundling calls up into fewer transactions; your bottleneck is almost certainly the network. What about supporting multiple calls sent as a single array of calls, and the same for answers? Maybe you could buffer calls locally and only send them a few times a second as a single transaction, along the lines of the sketch below.
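For illustration, here's a minimal sketch of that buffering idea, written in Go for brevity (the sendBatch function and the 250 ms flush interval are made up; in a Cocoa app the same shape would be a timer plus a mutable array of pending calls):

```go
package main

import (
	"fmt"
	"time"
)

// batcher collects individual web-service calls and flushes them
// a few times per second as one combined transaction.
func batcher(calls <-chan string, flushEvery time.Duration) {
	var pending []string
	ticker := time.NewTicker(flushEvery)
	defer ticker.Stop()
	for {
		select {
		case c, ok := <-calls:
			if !ok {
				return
			}
			pending = append(pending, c)
		case <-ticker.C:
			if len(pending) > 0 {
				sendBatch(pending) // hypothetical: POST the whole slice as one request
				pending = nil
			}
		}
	}
}

// sendBatch stands in for the real network call.
func sendBatch(batch []string) {
	fmt.Printf("sending %d calls in one request\n", len(batch))
}

func main() {
	calls := make(chan string)
	go batcher(calls, 250*time.Millisecond)
	for i := 0; i < 10; i++ {
		calls <- fmt.Sprintf("call-%d", i)
	}
	time.Sleep(time.Second) // give the ticker a chance to flush
}
```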
Tony


Go: Embedded backend vs app engine

I'm one of those classic native programmers who has spent most of his past with .exe's and .jar's. Over the past year I've thrown myself into the world of web frameworks and technologies that never cease to impress me. For the past month and a half I have been in love with Go because of its strictness, and also because of how 'stand-alone' it seems to be. So now to the real question...
Go app engine application, why do we need this?
What is the difference and reasoning to choose a wrapped application (framework)?
I assume its purpose is to offload some of the communication from the application onto the wrapper, but sadly I can't seem to figure out (through documentation and discussion) what the specific purpose behind this modularization is.
Best regards and cyber high fives in your direction!
These really are two different questions.
1. Why GAE?
It's up to you. GAE provides cloud-based hosting that you pay rental to use. It's a bit similar to Amazon Web Services. Your Go app would be uploaded to GAE, where it provides your web service and your users can do lots of wonderful things. Meanwhile you never need to know which actual server is doing the serving at any given time - the app can migrate across their servers dynamically. GAE provides a high uptime and a low effort for you in keeping the server secure, backed up etc. It will also be elastic to cope with surges in load.
You may instead prefer to rent a private server (e.g. at Rackspace) or just a virtual machine. You'd preferably need to be a Linux expert (get lots of help at Serverfault) and you'll have to do the backup, firewall etc. all yourself. It may cost (much) less. Or more.
2. Choosing a framework?
The net/http API allows you to write HTTP server code to do pretty well anything you want. But you have to do quite a lot of hard work. At the opposite extreme, frameworks like Revel make rapid server development possible, as long as it does the things you want of it. If you stray into functionality beyond what it offers, you might have to do quite a lot of digging to find out how to extend the framework.
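To make the trade-off concrete, here's roughly the smallest useful net/http server (a minimal sketch; the route and port are invented for illustration):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// One route, no framework: net/http hands you the raw request/response cycle.
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello, %s!\n", r.URL.Query().Get("name"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Everything beyond this (routing with URL variables, sessions, templating conventions) is what the frameworks and toolkits layer on top.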
Other interesting toolkits include Gorilla, Gocraft Web and Goji. In terms of complexity, these sit about halfway between Revel and basic net/http.
To answer your second question, here are some pros and cons of using a framework (e.g., Revel) vs. something simpler like a toolkit (e.g., Gorilla).
In general, the pros of using a framework are:
it provides a lot of sub-packages to handle important web-related sub-tasks like templating, generating data in specified formats like JSON or XML, query escaping, etc.
it handles boilerplate http handling
it (hopefully) enforces best practices like escaping strings
it helps you manage complexity by enforcing a consistent design pattern in the way you handle requests
Cons of using a framework:
frameworks tend to be "opinionated," meaning you have to buy into their general philosophy and understand their core concepts before you can make use of them; for a lot of frameworks this can be quite a bit of mental overhead
extra layer of abstraction, meaning you're another step removed from what's really going on, and there will be more stuff to understand and debug if something goes wrong
it can be brittle and hard to do something that isn't a standard use case in the framework
future maintainability: most frameworks don't tend to have a super long lifespan. Django and Rails have been around for a long time, but there's a massive graveyard of frameworks that came before them. Hindsight is 20/20, but it's hard to pick the right horse from the outset.
Recommendation
It's hard to make the call upfront. So much depends on the specifics of your problem, but I'd say in the case of Go, opt for the simpler option. Much of the value-add of frameworks in other languages is that they contain useful sub-packages to handle important tasks, but Go already has a lot of these in its standard library (e.g., encoding/json, net/http, net/url, text/template). I've built a fairly sophisticated web app using just the Gorilla toolkit and the Go standard library, and it's been amazingly good. The best part is that it's incredibly easy to understand what the code does, and I can explain it to someone else without requiring them to first read through the massive About page of some third-party framework.
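For a taste of the toolkit level, here's roughly what a Gorilla route looks like (a sketch only; the /users/{id} endpoint and its JSON body are invented for illustration):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	// mux adds URL variables and method matching on top of net/http;
	// handlers keep the standard signature.
	r.HandleFunc("/users/{id}", func(w http.ResponseWriter, req *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"id": mux.Vars(req)["id"]})
	}).Methods("GET")
	log.Fatal(http.ListenAndServe(":8080", r))
}
```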
If you want to get a sense of how other people use Gorilla, you might try looking at real-world usage examples. Compare that to how people use more sophisticated frameworks and pick whichever you like better.

Basic knowledge for a high-traffic application

Thanks for all the questions and responses posted on here. This site usually shows up whenever I search for information on Google, and in many cases the answers are relevant to the issues I needed solved.
I want to preface my question by stating that I've been programming (.NET, XML, T-SQL, AJAX, etc) for less than 2 years, and I still have a lot to learn; so, pardon my ignorance.
Here's my situation (and question): I'm building a social web application, which I know will get a lot of traffic in a short time. As a result:
What basic information do I need to have in order not to be overwhelmed? It's currently a one-man affair, and here is the hosting specification I plan to start with: 2 GB RAM, 600 HDD, 1000 GB bandwidth, and a 2.13 GHz dual-core processor.
I've read about web farms, but I've never had an opportunity to use them, so I'm not entirely sure how to phrase this question: how can one split the same application across multiple physical servers? How do you make all the files act as one entity? And since every .NET application requires a web.config, how is it shared among these multiple servers?
I've built smaller projects before, but this is the first big project I'm building, and to be frank, I'm a little intimidated. So, I would like to ensure I know what I'm getting into before starting.
Thank you.
Based on your background I assume you are developing in a .NET environment? If so, I highly recommend you take a look at Windows Azure. Developing your app against Azure will allow you to deploy it in Microsoft's cloud platform. Once deployed, you can shrink and grow your resources according to demand without the relative hassle of setting up multiple servers in multiple locations and managing it all. This lets you pay for a "little bit" of server up front, and if your app gets popular you can easily pay for "web farm"-like power and geographic diversity. It also gives you a decent framework for developing an app that will scale relatively well. That's an 18,000-foot overview. If you can put some more details in your question, I'm sure you will get more detailed responses. Best of luck!
Your "social web application" will not have any users if it isn't working and deployed. Don't worry about scaling much until the site actually does something useful and has a few hundred users (or at least a few dozen!). Get it working, find people around you who can help when the going gets tough, and keep at it. Otherwise your concerns about needing to scale will never be warranted.

Using ZeroMQ for cross platform development?

We have a large console application in Haskell that I have been charged with making cross-platform and adding a GUI to.
The requirements are:
Native-as-possible look and feel.
Clients for Windows and Mac OS X, Linux if possible.
No separate runtime to install.
No required network communication. The Haskell code deals with very sensitive information that cannot be transmitted over the wire. This is really the only reason this isn't a web application.
Now, the real reason for this question is to explain one solution I'm researching at the moment and to solicit reasons I'm not thinking of that would make this a bad idea.
My solution is a native GUI on each platform: WinForms on Windows, Cocoa on Mac OS X, and GTK/Glade on Linux, each of which simply handles the presentation. Then I would write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI, using ZeroMQ to handle the messages and maybe protobufs for serializing the data back and forth. So the native application would start, which would itself start the daemon where all of the magic happens, and the two would send messages back and forth.
Aside from making sure that the daemon only accepts connections from the application that started it, and the challenge of providing the right data back and forth for advanced GUI elements (I'm thinking table views, cells, etc.), I don't see many downsides to this.
What am I not thinking about that makes this a bad idea?
I should probably mention that at first glance I was going to go with GTK on all platforms. The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'. It's close, but just not native enough in subtle ways which make that solution unacceptable to the people who happen to be writing the check for this work.
Also, the issue of multiple platforms, and thus multiple languages for the GUI, isn't a problem, so I'm not necessarily looking for other ways to solve that problem unless it simplifies something about the interop with the Haskell code.
"Then I would write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI using ZeroMQ to handle the messages and maybe protobufs for serializing the data back and forth."
I think this is reasonable (a client/server model, where the client just happens to be a native look-n-feel desktop app). (I have no strong view about protobufs versus e.g. JSON or Thrift.) The Haskell zeromq bindings are getting some use now, too.
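For concreteness, the daemon's responder loop would be shaped something like this (sketched here in Go with the pebbe/zmq4 bindings purely for illustration; the Haskell bindings expose the same REP socket primitives, and handleRequest is a made-up stand-in for the protobuf decode/dispatch/encode step):

```go
package main

import (
	"log"

	zmq "github.com/pebbe/zmq4"
)

func main() {
	// The daemon side of the REQ/REP pair: the GUI sends a serialized
	// request, the core replies with a serialized answer.
	sock, err := zmq.NewSocket(zmq.REP)
	if err != nil {
		log.Fatal(err)
	}
	defer sock.Close()
	// Bind to loopback only, so nothing off-machine can connect.
	if err := sock.Bind("tcp://127.0.0.1:5555"); err != nil {
		log.Fatal(err)
	}
	for {
		msg, err := sock.Recv(0)
		if err != nil {
			log.Fatal(err)
		}
		sock.Send(handleRequest(msg), 0)
	}
}

// handleRequest is a placeholder for decoding the message,
// running the core logic, and encoding the reply.
func handleRequest(req string) string {
	return "ack:" + req
}
```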
"What am I not thinking about that makes this a bad idea?"
How well tested is zeromq on Windows and Mac? It is probably fine, but something I'd check.
"The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'."
Does the integration package help there?
Here's an interesting possibility: wai-handler-webkit. It essentially packages up QtWebKit with the Warp web server to make your web apps deployable. It hasn't seen intensive use, has never been tested on Mac, and is tricky to compile on Windows, but it's a fairly straightforward approach that lets you use the fairly rich web ecosystem developing in Haskell.
I'm likely going to be doing more development on it in the near future, so if you have interest in using it, let me know what extra features would be useful, as well as whether you could offer any help on the Mac front in particular. I'm also not convinced that we need to stick with QtWebKit on all platforms: it might make more sense to use a different WebKit backend depending on the OS, or maybe even to use Gecko or (shudder) Trident instead.
I've had some problems getting zeromq to play nicely with Haskell on OS X (problems with it looking for a .dylib as opposed to a .o, I think). Protocol buffers and Haskell seem to work fine together, though.
So your reason not to use a web application is the sensitive nature of the Haskell program's output. And THAT's why you are distributing that same sensitive application, which spews out unencrypted data, to ALL client machines? That does not make any sense.
If your application is sensitive, you DEFINITELY should put it on a server and use the strongest possible TLS.

Asterisk AGI framework for IVR; Adhearsion alternative?

I am trying to get started writing scalable, telecom-grade applications with Asterisk and Ruby. I had originally intended to use the Adhearsion framework for this, but it does not have the required maturity and its documentation is severely lacking. AsteriskRuby seems to be a good alternative, as it's well documented and appears to be written by Vonage.
Does anyone have experience deploying AGI-based IVR applications? What framework, if any, did you use? I'd even consider a non-Ruby one if it's justified. Thanks!
SipX is really the wrong answer. I've written some extremely complicated VoiceXML on SipX 3.10.2 and it's been all for naught, since SipX 4 is dropping SipXVXML for an interface that requires IVRs to be compiled JARs. Top that off with Nortel filing for bankruptcy, extremely poor documentation on the open-source version, poor compliance with VXML 2.0 (as of 3.10.2), and SIP standards issues (as of 3.10.2, it does not trunk well with ITSPs). I will applaud it for doing a bang-up job at what it was designed to do: be a PBX. But as an IVR, if I had it to do all over again, I'd do something different. I don't know what for sure, but something different. I'm toying with Trixbox CE now and working on tying it into JVoiceXML or VoiceGlue.
Also, don't read that SipX wiki crap. It compares SipX 3.10 to AsteriskNOW 1 to Trixbox 1. Come on. It's like comparing Mac OS X to Win95! A more realistic comparison would be SipX 4 (due out Q1 2009) to Asterisk 1.6 and Trixbox 2.6, which would show that they accomplish near-identical results except in the arena of scalability and high availability; SipX wins at that. But for maturity and stability, I'd advocate Asterisk.
Also, my real-world performance results with SipXVXML:
Dell PowerEdge R200, Xeon dual-core 3.2 GHz: handles 17 calls before jitter sets in.
HP DL380 G4, dual Xeon HT 3.2 GHz: handles 30 calls before long pauses.
I'll post my findings when I finish evaluating VoiceGlue and JVoiceXML, but I think I'm going to end up writing a custom PHP script called from AGI, since all the tools are native to Asterisk.
You should revisit Adhearsion as v0.8.1 is out, and the documentation has gotten much better quite recently. Have a look here:
http://adhearsion.com
http://docs.adhearsion.com
http://api.adhearsion.com
If you're looking for "telecom-grade" applications, you may want to look into SipXecs instead of Asterisk. It's featureful, free, and open source, with commercial support available from Nortel. You can interact with it via a web services API in Ruby (or any other language).
See the SipXecs wiki for more information. There's a comparison matrix on that site, comparing features with AsteriskNOW and TrixBox.
There really aren't any other frameworks out there. There are, of course, AGI bindings for every language, but as far as full-fledged frameworks for developing telephony applications go, we're just not there yet. At least not in the open-source world.
I have asked somewhat related questions here, here, and here. I'm using Microsoft's Speech Server, and I'm very interested to learn about any alternatives that are out there, especially open-source ones. You might find some good info in the answers to one of those questions.
I used JAGIServer extensively, even though it's not under development anymore, and it's pretty good and easy to use. It's an interface for FastAGI, which I recommend you use instead of simple AGI.
The new version of this framework is OrderlyCalls, which seems to have a lot more features, but since I haven't needed them, I haven't tried it.
I guess it all depends on what you want to do with AGI. Usually I have a somewhat complex dialplan to gather and validate all user input, and then just use AGI to connect to a Java application, which reads some variables, does some stuff with them (performs operations, queries, etc.), then sets some more variables on the AGI channel and disconnects. At that point, the dialplan continues depending on the result of the variables set by the Java app.
This works really fast because you have a ServerSocket in the Java app, which receives incoming connections from AGI, creates a JAGIClient with the new socket and a new instance of a JAGIProcessor (which you have to write; it's the object that does all your processing), and then runs the JAGIClient inside a thread pool.
Your JAGIProcessor implements the processCall method, where it does all the work it needs, interacting with the JAGIClient passed as a parameter to read and set variables or do whatever else the AGI interface allows.
So you have a Java app running all the time, and it can be a simple J2SE app or an EE app in a container; it doesn't matter. Once it's running, it will process AGI requests really fast, since no new processes have to be started (in contrast to simple AGI, which spawns a program for every call).
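For flavour, the same FastAGI pattern (a long-running TCP server that Asterisk connects to once per call) might look roughly like this in Go. This is a hypothetical sketch, not JAGIServer's actual API; the LOOKUP_RESULT variable name is invented:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"strings"
)

// handle processes one FastAGI connection: read the AGI environment,
// do the application work, set a channel variable, then return control
// to the dialplan by closing the connection.
func handle(conn net.Conn) {
	defer conn.Close()
	r := bufio.NewReader(conn)

	// Asterisk sends "agi_name: value" lines terminated by a blank line.
	env := map[string]string{}
	for {
		line, err := r.ReadString('\n')
		if err != nil {
			return
		}
		line = strings.TrimSpace(line)
		if line == "" {
			break
		}
		if k, v, ok := strings.Cut(line, ": "); ok {
			env[k] = v
		}
	}

	// ... perform queries/operations based on env here ...

	// Set a variable the dialplan can branch on, then read the "200 result=..." reply.
	fmt.Fprint(conn, "SET VARIABLE LOOKUP_RESULT \"ok\"\n")
	if reply, err := r.ReadString('\n'); err == nil {
		log.Printf("channel %s: %s", env["agi_channel"], strings.TrimSpace(reply))
	}
}

func main() {
	ln, err := net.Listen("tcp", ":4573") // 4573 is the conventional FastAGI port
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handle(conn) // one goroutine per call, no per-call process spawn
	}
}
```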
Smee again. After migrating my client's IVRs over from SipX to Asterisk using PHPAGI, I must say that I haven't encountered any other architecture that is anywhere near as simple and capable. I'll be stress-testing Trixbox CE 2.8 today on the same hardware I tested SipX on earlier. But I must say, using PHPAGI for the IVR and the Asterisk CLI for debugging has worked perfectly and allowed me to develop IVRs far faster than any other company out there. I'm working on implementing TTS and ASR today, and I'll post my stress-test results when I can.
A simple, small, flexible Asterisk AGI IVR written in PHP:
http://freshmeat.net/projects/phpivr
For small and easy applications I use Asterisk::AGI in Perl. There are also extensions for FastAGI. For bigger applications, like VoIP operators' backends, I use something similar to OrderlyCalls, written in Java (my own code). OrderlyCalls is great, though, if you want to start with a Java FastAGI engine and extend it to your needs.

How do you do performance testing in Ruby webapps?

I've been looking at the ways people test their apps in order to decide where to add caching or apply some extra engineering effort, and so far httperf and a simple sesslog have been quite helpful.
What tools and tricks did you apply on your projects?
I use httperf for a high-level view of performance.
Rails has a performance script built in that uses the ruby-prof gem to analyze calls deep within the Rails stack. There is an awesome Railscast on request profiling using this technique.
New Relic has some seriously cool analysis tools that give near real-time data. They've just made a "Lite" version available for free.
I use JMeter for session-based testing. It allows very fine-grained control over the pages you want to hit, the parameters to inject, the loops to go through, etc. It's great for simulating how many real users your site can handle, rather than just performance-testing a set of static URLs. You can distribute tests over multiple machines quite easily by loading up jmeter-server on computers with publicly accessible IPs. I have found some limits on the number of users/threads any one machine can throw at a server at once (it depends on the test), but JMeter has helped my team improve our app's user capacity by 6x.
It doesn't have any fancy graphing, so I actually use my own in-house graphing with Gruff, which can do performance analysis on request times for certain pages and actions.
I'm evaluating a new open-source web-page instrumentation and measurement suite called Jiffy. It's not specific to Ruby; it works for all kinds of webapps.
There's also a Jiffy Firebug Extension for rendering the metrics inside the browser.
I also suggest you look at Browser Mob for load testing.
A colleague of mine has also posted some interesting thoughts on this.
