What is the advantage of using Alamofire over NSURLSession/NSURLConnection for networking?

Can anyone help me in understanding these questions: What is the advantage of using Alamofire over NSURLSession/NSURLConnection?
What are the differences between NSURLSession and NSURLConnection?

NSURLConnection is Apple's old API for doing networking (e.g. making HTTP requests and receiving responses), while NSURLSession is their new one. The latter is higher level, generally much easier to use, and involves less boilerplate code for most application developers; there's basically no reason to use the former except in legacy code that you don't want to update.
A bit of history: before NSURLSession came out, a third-party library, AFNetworking, became the de facto standard for doing networking in Objective-C, as it provided an easier and more convenient API than NSURLConnection (which it was a wrapper around; I think these days it wraps NSURLSession instead). The same developer later made a similar library for Swift, called Alamofire, in order to fully take advantage of "the Swift way of doing things" (instead of just adding Swift bindings to AFNetworking). Around the same time NSURLSession came out, and it seemed pretty obvious that Apple had been inspired by AFNetworking and wanted to make their new networking API just as convenient to work with; by and large I think they succeeded.

What this means is that while previously, using AFNetworking instead of NSURLConnection was the only sane choice for most developers, today the advantages of using AFNetworking or Alamofire over NSURLSession are much smaller. For most developers starting new projects I'd recommend just starting with NSURLSession, and only looking into Alamofire if they run into some limitation or inconvenience big enough to justify adding another dependency.

Alamofire is built on top of NSURLSession, and requires fewer lines of code for REST interactions (POST/GET/PUT/etc.). It will get you "90%" of the way, but if you need to do super specialized network calls, you will need to use NSURLSession.
Example: a simple Alamofire call that fetches JSON (Alamofire 3-era syntax):
Alamofire.request(.GET, "https://www.google.com/").responseJSON { response in
    if response.result.error == nil {
        // response.result.value holds the parsed JSON object
        print(response.result.value)
    } else {
        print("Network Error")
    }
}
NSURLConnection is now deprecated, and NSURLSession is the new standard.
https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/URLLoadingSystem/Articles/UsingNSURLSession.html

First, let me answer the difference between NSURLSession and NSURLConnection.
NSURLConnection: if you have an open connection and you close your app, everything you received or sent is lost.
NSURLSession: this case is handled with the help of the app delegate method
application:handleEventsForBackgroundURLSession:completionHandler:
(the system calls it when transfers belonging to a background session finish while the app isn't running).
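Here's a minimal sketch in Swift of how the background session and that delegate method fit together; the session identifier and the delegate wiring are my own illustration, not code from the original answer:

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {

    var backgroundCompletionHandler: (() -> Void)?

    // The system calls this when transfers belonging to the identified
    // background session finish while the app isn't running.
    func application(_ application: UIApplication,
                     handleEventsForBackgroundURLSession identifier: String,
                     completionHandler: @escaping () -> Void) {
        // Store the handler; call it once the recreated session has
        // delivered all of its pending delegate events.
        backgroundCompletionHandler = completionHandler
    }
}

// Creating the session: a background configuration hands transfers off
// to a system daemon, so they survive app suspension or termination.
func makeBackgroundSession(delegate: URLSessionDelegate) -> URLSession {
    let config = URLSessionConfiguration.background(withIdentifier: "com.example.app.transfers")
    return URLSession(configuration: config, delegate: delegate, delegateQueue: nil)
}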
Now, let's get to your initial question about the advantage of Alamofire over the default framework classes. I'm just sharing my experience of what I've done so far, so get ready, as it's a long read.
For me the rule of thumb is: never bypass the framework unless you have a strong reason for doing so, irrespective of what technology you are using.
The reason behind that is that I have used third-party libraries in the past, and that did not end well with the latest updates of iOS. Let's say you have a paid app and your third-party library stops working: your client's app will lose its value and end up with negative comments on the App Store.
I advise that before using any third-party library, you check the following questions:
What's the frequency of updates?
How many bugs are open?
Is it really worth using, or can you leverage the framework classes?
What's the license of this library?
Coming back to the question: I used Alamofire in one project which was just making GET, POST, and PUT calls and had some offline sync features.
Now, thinking about it, I could have made my own class with URLSession and handled the responses accordingly; there was no need to use Alamofire, as I was not doing anything special there.
I believe this was all achievable with the default framework rather than a third party, but I used it anyway. Why did I use it? I don't know, maybe I was curious, but the lesson learned was that I could have achieved the same thing using framework classes. If the library stops working, I would have no clue as to why, and would have to rely on a fix from the caretaker/author of that library, which may take a month, a week, a day, who knows.
Once I used a library for a feature I wanted, but that same feature had an open defect that was never fixed, so I ended up making my own custom one, and to this day it's working fine. So do go through the open defects section to avoid unplanned surprises.
Also, as mentioned in the answer above, use third-party components only when you run into limitations with the default framework, and even before you do, check when the library was last updated, what license it has, and how many open defects there are.

For Swift 3.0 and above, Alamofire is a good choice because it is well optimized and reusable, and it has many built-in features. Alamofire calls for a similar approach in that one creates a router by conforming to a protocol, URLRequestConvertible. Under the hood, Alamofire uses a singleton pattern that's built on top of an NSURLSessionConfiguration.
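As a rough illustration of that router pattern (Alamofire 4 syntax; the cases, endpoints, and base URL below are hypothetical examples, not from the original answer):

import Alamofire

enum APIRouter: URLRequestConvertible {
    case getUser(id: Int)
    case createUser(parameters: [String: Any])

    static let baseURLString = "https://api.example.com" // hypothetical

    var method: HTTPMethod {
        switch self {
        case .getUser: return .get
        case .createUser: return .post
        }
    }

    var path: String {
        switch self {
        case .getUser(let id): return "/users/\(id)"
        case .createUser: return "/users"
        }
    }

    // The single URLRequestConvertible requirement: build the URLRequest.
    func asURLRequest() throws -> URLRequest {
        let url = try APIRouter.baseURLString.asURL()
        var request = URLRequest(url: url.appendingPathComponent(path))
        request.httpMethod = method.rawValue
        if case .createUser(let parameters) = self {
            request = try JSONEncoding.default.encode(request, with: parameters)
        }
        return request
    }
}

// Usage: Alamofire.request(APIRouter.getUser(id: 1)).responseJSON { response in ... }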

The advantage is a higher level of abstraction, so you can write fewer lines of code. However, there is still no official support for Codable. Updating the Swift version can come with its downsides, but I think this should improve with ABI stability in Swift 5. Also, you have to be mindful of security vulnerabilities.
Back in the Objective-C and NSURLConnection days, libraries like AFNetworking made a lot of sense, but with Swift, Codable, and URLSession, I think there is less of a case to be made for using Alamofire.
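For comparison, here is a minimal sketch of plain URLSession plus Codable; the User type and the endpoint are made-up examples:

import Foundation

struct User: Codable {
    let id: Int
    let name: String
}

// Hypothetical endpoint, force-unwrapped only for brevity in a sketch.
let url = URL(string: "https://api.example.com/users/1")!

let task = URLSession.shared.dataTask(with: url) { data, _, error in
    guard let data = data, error == nil else {
        print("Network error: \(error?.localizedDescription ?? "unknown")")
        return
    }
    do {
        // Codable replaces the hand-rolled JSON handling Alamofire helps with.
        let user = try JSONDecoder().decode(User.self, from: data)
        print(user.name)
    } catch {
        print("Decoding error: \(error)")
    }
}
task.resume()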

Related

AFNetworking 2.4 (AFURLSessionManager/AFHTTPSessionManager) vs NSURLSession in iOS 7/8

I'm hoping someone can help me understand all the differences at large between the latest version of AFNetworking (at the time of this writing, v2.4.1) and NSURLSession in the context of iOS 8.
I'm not super savvy when it comes to doing this kind of study because I tend to have a hard time discerning what is actually a pro and how that pro can add value to the development process for a team.
I've used AFNetworking in a previous project and enjoyed it. I really do want to make a case for it as an alternative to NSURLSession but some people really don't like having a 3rd-party dependency that has the potential of breaking down the road (an inevitable truth for a lot of developers out there).
I've heard people saying that AFNetworking 2 fills "a lot of the gaps" that NSURLSession currently has (UIKit extensions? SSL Pinning? Built-in reachability? Are those examples?). Can anyone further elaborate on what exactly those gaps are? And leveraging NSURLCache in both those implementations -- is there anything different between the two?
Also, does AFNetworking 2.4 use NSOperation and NSOperationQueue under the hood for all NSURLSessionDataTasks that get fired off via AFURLSessionManager/AFHTTPSessionManager? (for asynchronous optimizations and such)

Shoebox / Library applications with Auto-Save & Versions in OS X Lion

We have a shoebox-style application that we want to make a first-class citizen in Lion. This means integrating Auto-Save & Versions among other things. Currently we don’t have a document-centric model and we just use a plain Core Data stack.
NSPersistentDocument provides a really easy way to integrate both Auto-Save & Versions, and I see two options we could choose from to integrate with the new APIs:
"Abuse" NSPersistentDocument for our shoebox-style application. Technically it would be a document-based application, but the user interface would still be the same iPhoto-like library. Conceptually this doesn't make a lot of sense, but we would get a lot of functionality for free.
Keep the current plain Core Data stack and implement Auto-Save & Versions manually.
I have heard contradictory opinions from Apple representatives about the approach we should take, and it would be great to clarify things before we start our implementation. While I think option 1 shouldn't be used, it's also very tempting, because we get a lot for free. I couldn't even find sufficient documentation on manually implementing Auto-Save & Versions in a Core Data application.
I would really tend toward option 1, but I see some problems:
I'm worried about file-system-level conflicts when using Versions with only one database file. I couldn't find any documentation regarding this topic.
I’m worried about performance issues in Versions when browsing through “space”.
We can’t enforce only one instance of the open database, since Versions has to open several instances. I’m worried about side-effects and concurrency issues.
Conceptually it looks like a hack and I don’t like hacks.
If we only wanted to integrate iCloud sync, I definitely wouldn't think about using a document-centric model for our application, because Core Data supports it directly. I'm mostly worried about the developer overhead we would have if we stuck to our current non-document-based paradigm.
Do you have any advice or ideas how shoebox applications should be integrated in the new Lion world?
I'm afraid you're forced into using the first option. Versions is implemented inside NSDocumentController (sic), and so you will have to use some kind of NSDocument to get anything out of Versions. I think you also have to attach your app's window to that document via an NSWindowController in order to get the nice little pop-up menu at the top. The problem is that Versions is more or less a completely opaque feature...
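For what it's worth, the document route can start out as small as this sketch (modern Swift shown; the class name is hypothetical):

import Cocoa

// NSPersistentDocument wires a Core Data stack into the NSDocument
// machinery, so Auto-Save & Versions come along for free.
class LibraryDocument: NSPersistentDocument {

    // Returning true opts this document class into Lion's Auto-Save,
    // which is what enables Versions for it.
    override class var autosavesInPlace: Bool {
        return true
    }
}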
But there's a question you've got to answer yourself: what portions of your app would you want to put into Versions? Does it really make sense to have everything in a single file when it comes to restoring data? Version restore (other than copy & paste) happens at the file-system level. And thus, does it really make sense to always have everything restored at once? If your answer is no, you probably even have to split up your model into multiple smaller files...
Don't expect improvement here until the next major release. That's what I guessed from the engineers' comments...

Modifying CodeIgniter

I am looking into developing a commercial application using PHP. Since I have experience with CodeIgniter and it has been working well for me, I decided to use it. Now, if my application ever reaches a point where it needs to grow and have custom modifications in the platform, is it possible to modify the CodeIgniter source code to the point that it's far from the original?
I'd strongly recommend not modifying the source code, as it is the backbone of the application and doing so will make updating to future releases of CodeIgniter impossible.
Instead, you should create your own classes/libraries that extend the core base classes. This is best practice.
The CodeIgniter user guide has some fantastic information regarding this: http://codeigniter.com/user_guide/general/creating_libraries.html
Good luck.
Don't modify it, extend it. EllisLab is good about releasing bug fixes and patches, and the last thing you want is to have to re-engineer all the updates into your new hybrid every time they release updated source code. Plus, different features and bug fixes are released at different paces for the core and Reactor, so unless you plan to lock your framework in at the current version, extending is the way to go.
The good news is that CI is built for and encourages extension of the source. The separation between the system folder and the application folder is a clear indication of how you should segregate your enhancements from the base libraries.
CodeIgniter is written in PHP. You can completely rewrite the framework any way you like. So the answer is "yes".
There are many big websites that were written in a different language or built on a framework when they started, and have been rewritten many times since. I think a lot of developers (mostly freelancers) over-think things when starting out. When I start a project, I try to get it done and out on the web ASAP. If a website starts taking off, and most of the time starts making you money, then you can always take the time to rewrite it.
I think it is a lot more important to spend time planning and designing your database and tables. It is a lot harder to redo parts of your database later on than it is to rewrite the code that uses the data.
Just my 2 cents.
If you have used CI, then you should know that, being a PHP framework, all of the source code is there in the download; furthermore, opening the system and application folders and looking through them will tell you a lot. Yes, all of the source is there in plain English (plain programming English), and not only is it in plain English, it has been extensively documented inside and out (literally: in the source and in the user guide). CI gained its initial fame from that simple fact, that all code is extensively and meticulously documented.
Beyond all that, the question itself raises concerns that maybe you should study CI a bit further before writing commercial applications with it. CI is a powerful and very easy to use PHP framework, but it is not a WYSIWYG. In my opinion, a coder should know his tools inside and out in order to be able to create a solid, secure, and trusted application. The first measure of that is to read the user manual; you should know at the very least everything in it, and since there are 12 sections covering everything from extending CI libraries to creating your own libraries, and everything in between, I would say you need to spend a little more time with it.
I want to say, though, that I am not being rude or trying to shame you in any way; I am simply saying that you should learn the framework a bit more before venturing into a commercial application using it.
In the early days of PHP, people realized how amazingly easy it was to use and how fast you could write an application with it. At the time, the major alternatives were very difficult for new and hobby programmers to use, or involved expensive software to run. PHP was free, easy to learn, and most of all ran on a free OS. It also took hardly any setup to get going: you could download PHP and essentially be programming in minutes. All of these factors led to the near destruction of the language.
Entry-level programmers were destroying it with bits of code taken from other applications, never knowing (or caring) what the code actually did beyond the simple fact that it did what they wanted at the time, never considering or even investigating whether the code might be harmful. Because of this practice, PHP applications that had grown into Goliath-sized websites taking thousands of hits an hour were:
beginning to crash
being hacked to reveal sensitive customer/client data
generally crumbling all around the web
All because the language was so easy to use that people had taken advantage of it and failed to take the time to learn it. PHP was becoming a joke to other professional programmers and wasn't even thought of as a viable application language by many, who had dubbed it "the copy and paste" language.
So my advice to you: please take the time to know your tools inside and out, what makes them tick, whether they have any gotchas, and where they are vulnerable. I understand that in order to learn a language to a professional level you have to build with it, so I suggest that you take it slow with CI and stick to the core for now. Trust me when I say that even in its purest form CI is an amazing and powerful tool that in the right hands can create awesome, powerful web applications, but in uneducated/inexperienced hands it can create havoc and destruction.
So (stepping off of the soapbox) I simply ask that if you are serious about creating commercial applications, period, you take your time, learn your tools/language, and become as close to an expert on them as possible. I guarantee that if you do, you will always have work when you need it, and you will spend fewer hours beating your head against the table, or worse, explaining to a client why their site is down.
I truly wish you good luck; just slow down and learn your trade and you will do just fine.
Yes, CodeIgniter is an open source framework. However, I would advise against modifying its core: most files can be extended and overridden safely without touching the core files, and modifying them will cause you headaches if you ever decide to update.
To extend a core class in CodeIgniter, you create your own subclass of it; we'll take the Parser class as the example, but this applies to pretty much all classes. This link in the comprehensive user guide will give you all the information you need to extend and overload methods inside a CodeIgniter core class: http://codeigniter.com/user_guide/general/core_classes.html

Using Regular Expressions in a Cocoa Application for Mac App Store

I have a lot of problems getting regular expressions working in my simple Cocoa application. I know that many people use RegexKitLite, but because it relies on an undocumented API call (to use the ICU library), I am pretty sure my app would get rejected when submitted to the Mac App Store (I know others have been rejected for using ICU in the iOS App Store).
My next step was to integrate the full RegexKit framework. While this works without issue in my application, it doesn't work in my unit tests. I have tried a lot of steps here, but I still keep getting 'library not loaded' for the framework, even though there is a Copy Files build phase that puts the framework in the correct place. In addition, I spent quite a bit of time debugging another issue with the RegexKit framework (dealing with the restrict qualifiers within the framework). Long story short: I don't think the RegexKit framework is a good choice for me.
In reality, I just need a simple solution for regular expressions (speed isn't a primary concern as this will be used sparingly) that can be used within my unit tests.
Ideas?
[Edited post-NDA]
One option is to wait for Lion and then require it. Then you can use NSRegularExpression.
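For example (modern Swift shown; the pattern and the input string are just illustrations):

import Foundation

let input = "Contact: alice@example.com"
let pattern = "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}"

do {
    let regex = try NSRegularExpression(pattern: pattern)
    let range = NSRange(input.startIndex..., in: input)
    if let match = regex.firstMatch(in: input, range: range),
       let matchRange = Range(match.range, in: input) {
        print(input[matchRange]) // prints "alice@example.com"
    }
} catch {
    print("Invalid pattern: \(error)")
}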

Using ZeroMQ for cross platform development?

We have a large console application in Haskell that I have been charged with making cross-platform and adding a GUI to.
The requirements are:
Native-as-possible look and feel.
Clients for Windows and Mac OS X, Linux if possible.
No separate runtime to install.
No required network communication. The Haskell code deals with very sensitive information that cannot be transmitted over the wire. This is really the only reason this isn't a web application.
Now, the real reason for this question is to explain one solution I'm researching at the moment and to solicit for reasons that I'm not thinking of that make this a bad idea.
My solution is a native GUI: WinForms on Windows, Cocoa on Mac OS X, and GTK/Glade on Linux, each simply handling the presentation. Then I would write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI, using ZeroMQ to handle the messages and maybe protobufs for serializing the data back and forth. So the native application would start, which would itself start the daemon where all of the magic happens, and they would send messages back and forth.
Aside from making sure that the daemon only accepts connections from the application that started it, and the challenge of providing the right data back and forth for advanced GUI elements (I'm thinking table views, cells, etc.), I don't see many downsides to this.
What am I not thinking about that makes this a bad idea?
I should probably mention that at first glance I was going to go with GTK on all platforms. The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'. It's close, but just not native enough in subtle ways which make that solution unacceptable to the people who happen to be writing the check for this work.
Also, the issue of multiple platforms and thus multiple languages for the gui isn't a problem so I'm not necessarily looking for other ways to solve that problem unless it simplifies something about the interop with the haskell code.
"Then I would write a layer on top of the Haskell code that turns it into a responder for messages to and from the UI using ZeroMQ to handle the messages and maybe protobufs for serializing the data back and forth."

I think this is reasonable (a client/server model, where the client just happens to be a native look-n-feel desktop app). (I have no strong view about protobufs versus e.g. JSON or Thrift.) The Haskell zeromq bindings are getting some use now, too.

"What am I not thinking about that makes this a bad idea?"

How well tested is zeromq on Windows and Mac? It is probably fine, but something I'd check.

"The problem is that, while it's close, and GTK and Glade support for Haskell is nice to work with, the result doesn't look 'right'."

Does the integration package help there?
Here's an interesting possibility: wai-handler-webkit. It essentially packages up QtWebKit with the Warp web server to make your web apps deployable. It hasn't seen intensive use, has never been tested on Mac, and is tricky to compile on Windows, but it's a fairly straightforward approach that lets you use the fairly rich web ecosystem developing in Haskell.
I'm likely going to be doing more development on it in the near future, so if you have interest in using it, let me know what extra features would be useful, as well as if you could offer any help on the Mac front in particular. I'm also not convinced that we need to stick with QtWebkit on all platforms: it might make more sense to use a different Webkit backend depending on OS, or maybe even using Gecko or (shudder) Trident instead.
I've had some problems getting zeromq to play nicely with Haskell on OS X (problems with it looking for a dylib as opposed to an .o, I think). Protocol buffers and Haskell seem to work fine though.
So your reason not to use a web application is the sensitive nature of the Haskell program's output. And THAT's why you are distributing that same sensitive application, which spews out unencrypted data, to ALL client machines? That does not make any sense.
If your application is sensitive, you DEFINITELY should put it on a server and utilize the strongest possible TLS.
