Why Play! framework chose Groovy for template engine - performance

From their website (http://www.playframework.org/documentation/1.0/faq):
"The biggest CPU consumer in the Play stack at this time is the template engine based on Groovy. But as Play applications are easily scalable, it is not really a problem if you need to serve a very high traffic: you can balance the load between several servers. And we hope for a performance gain at this level with the new JDK7 and its better support of dynamic languages."
So there are no better choices? What about JSP?

JSP is not feasible, as every JSP compiles to a Servlet, and the Servlet API provides things like the server-side session which do not fit the RESTful paradigm. We don't want to go back to the dark ages of badly scalable server-side sessions, back-button problems in the browser, re-posts, etc.
Japid templates are interesting, but they are not backed by a large community and perhaps didn't even exist at the time Play was created (I don't know for sure, though). I tried Japid as a replacement for the Groovy templates in my own application and found in a JMeter test that the benefit would be only marginal, a 10% to at most 25% improvement.
I guess in the end it all depends on your scalability requirements and the structure of your pages. I picked the 90% use case of my application and did the test. To me, the small improvement did not justify the additional cost of the extra dependency (I like to keep dependencies to a minimum for maintainability).
Groovy templates are not bad or slow in general. Use typed variables wherever possible (instead of "def"), even in closures! Keep the values of frequently accessed properties in local variables. Do sensible result paging. Then keep your fingers crossed that GSP might be able to run on Groovy++ in the future, and you're done ;)
To me, the question would not be why they used Groovy in the views, but rather why it is missing from the controller layer. Groovy would make coding the controller behaviour a lot easier IMHO.

First off, JSP was not a valid option for Play since it chose not to go down the Java EE route (which JSP is part of). Instead, Play chose to use Groovy as an intuitive, simple but powerful templating engine.
However, one of Play's greatest features is that it is a pluggable system, meaning that many parts of the core system can simply be replaced. This includes the template engine, and there are a couple that are already available.
The most popular is Japid. It claims to be 2-20x faster than the standard templating engine, and has been around for a while. For more info, see here http://www.playframework.org/modules/japid.
A second option is Cambridge. It has only been out for a little while, but it is reasonably active on the message boards (see https://groups.google.com/forum/?hl=en#!searchin/play-framework/cambridge/play-framework/IxSei-9BGq8/X-3pF5tWAKAJ).
I tend to stick to Groovy, as I like the way it works, and have not found it to be too bad in terms of performance, but every application is individual, so your own performance tests should lead you down your own particular path.

Yes, there is Japid, which is much, much faster.
http://www.playframework.org/modules/japid

I totally agree with the choice of ease over speed the Play Framework designers made here. My guess is that if the templating starts getting in the way in terms of performance, you can (and should!) measure the slow bits, and refactor them into fast tags where possible. With that, you're likely to save 80% of CPU by moving 20% into fast tags, leaving you with flexibility and adequate speed.
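For what it's worth, here is a minimal sketch of such a fast tag, assuming Play 1.x's FastTags convention (static _tagName methods, with the unnamed tag argument arriving under the "arg" key); the money tag itself and its formatting are made up for illustration:

    import groovy.lang.Closure;
    import play.templates.FastTags;
    import play.templates.GroovyTemplate.ExecutableTemplate;

    import java.io.PrintWriter;
    import java.util.Map;

    public class MyFastTags extends FastTags {

        // Intended to be used in a template as #{money 1234.5 /}; the body runs as
        // plain Java instead of dynamically dispatched Groovy template code.
        public static void _money(Map<?, ?> args, Closure body, PrintWriter out,
                                  ExecutableTemplate template, int fromLine) {
            Number amount = (Number) args.get("arg"); // default, unnamed argument (assumed key)
            out.print("$" + String.format("%,.2f", amount.doubleValue()));
        }
    }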
Having said that, I'm looking forward to an experiment I'm planning to see how well the new Scala templates (loosely "borrowed" from Razor.NET - awesome clean syntax) work with Java controllers/models. The Scala backend isn't there yet in terms of comparative ease, but the templating language certainly rocks.

I may be late to the party in 2016: Play 2 is out, and the JDK (not to mention the hardware) has drastically improved. I am not using Play or Spring Boot, since my platform doesn't need them - just pure runtime text/HTML generation from templates.
First, when talking about Groovy templates, there is more than one engine. I use the original Groovy SimpleTemplateEngine for anything from emails to rich Web pages, whereas most people nowadays favor the "advanced" MarkupTemplateEngine with its non-HTML builder syntax. I did not go that route because of the IDE (e.g. IntelliJ) support for JSP-ish HTML files with JavaScript - catching unclosed tags, braces, etc. Besides, how would you include JavaScript in the curly-brace-based "builder" style template?
Whichever you choose, both SimpleTemplateEngine and MarkupTemplateEngine statically compile their templates, even though the Groovy docs only mention it for the latter. Why wouldn't it generate a class for the former too? I didn't compare them against each other, but I'd expect SimpleTemplateEngine to be faster, since it is... well, simpler - it doesn't translate any syntax into String concatenations with ifs and loops in between.
And it is indeed very fast. I was concerned about invoking it in a loop; it doesn't make any difference. There is no overhead, as the template is compiled once.
I use multiple small templates responsible for generating individual form control markup (HTML + JS) to generate a composite form, included in a higher-level container, included in another container, and so on, until the entire page is formed. Decomposing your view like that makes it, as you already guessed, modular, encapsulated, and "object-oriented" - composed of many individual MVC components building upon each other. Sort of like good old custom JSP tags, only evaluated at runtime and compatible with technologies like Spring Boot, if you cannot resist trendy resume-boosting stuff like that.
A test form with 100 fields (all complex "smart" controls with encapsulated state management and behavior) renders in 150ms the first time, and in 10-14ms thereafter - in an IDE debugger on my memory-starved four-year-old notebook. I also verified it is thread-safe, since the Groovy docs never mention that explicitly. Why wouldn't it be, if it is compiled into a stateless Groovy class like any other? Call createTemplate() once, store the Template somewhere, and use it (call Template.make()) in your servlet or another concurrent context. Obviously I'll never have a 100-field form in a real application. Anyone who does needs to reconsider his/her UX.
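For reference, a minimal sketch of that compile-once, render-many pattern from Java, using groovy.text.SimpleTemplateEngine; the template text and field names here are made up for illustration:

    import groovy.text.SimpleTemplateEngine;
    import groovy.text.Template;

    import java.util.HashMap;
    import java.util.Map;

    public class FieldRenderer {

        // Compile the template once and cache the Template instance;
        // Template.make() can then be called per request, even concurrently.
        private static final Template FIELD_TEMPLATE = compile(
                "<label for=\"${id}\">${label}</label>"
                + "<input id=\"${id}\" name=\"${id}\" value=\"${value}\"/>");

        private static Template compile(String text) {
            try {
                return new SimpleTemplateEngine().createTemplate(text);
            } catch (Exception e) {
                throw new IllegalStateException("Template failed to compile", e);
            }
        }

        public static String renderField(String id, String label, String value) {
            Map<String, Object> binding = new HashMap<>();
            binding.put("id", id);
            binding.put("label", label);
            binding.put("value", value);
            return FIELD_TEMPLATE.make(binding).toString();
        }

        public static void main(String[] args) {
            // Rendering in a loop reuses the already-compiled template.
            System.out.println(renderField("email", "E-mail", "user@example.com"));
        }
    }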
The performance is quite adequate. I'd even accept one second to render a 100-field page. Think about it: you don't need nanotrading or nuclear-missile-tracking performance in a Web app. If you do, pick Jamon (http://www.jamon.org/Overview.html), which generates the Java class you'd otherwise write by hand to concatenate Strings. I didn't test it, as I don't like extra compilation steps (automatically executed by Maven, but still). Compiled Groovy bytecode is good enough for me compared to compiled, strongly-typed Java. The difference would be marginal unless you are doing something complex, which you shouldn't be doing inside a template anyway (see below). Playing with typed Groovy variables vs. def, as suggested in this thread, only saved me a couple of milliseconds on that 100-field run.
Templates should not have much procedural logic (internal variables, ifs, and loops) anyway - that's the controller's responsibility, not the view's. That said, ifs and loops are a must for a template engine. Why would one use Handlebars/Mustache if one could simply call String.replace()?
The rest of the template engines are also irrelevant. No String-concatenation approach (e.g. Velocity or Freemarker) or interpreted JS-based technology (e.g. Jade) will ever beat Jamon's most direct approach performance-wise. And being a Java programmer, you want to use your favorite language/syntax: either directly (Jamon) or something 90% close to Java, which Groovy is (a concise, scripting-centric Java). I won't comment on Scala - that's a matter of preference. Its allegedly "concise" syntax (less and less relevant with Java 8+) comes at a price, and only matters for complex loops. You do not want to write your entire app inside one template, like I already said. A couple of loops and up to ten if statements, max.
And, like everyone mentioned, intuitive syntax and ease of use are key. They drastically reduce the number of bugs. A good (additional) server costs $1000, while the developer salaries needed to fix all of the bugs stemming from the complexity of marginal performance optimization are 100 times higher.

Related

Best approach to caching in Ember Octane

I have a project running Ember 3.20. We are currently in the process of migrating from classic to Glimmer-based components and have come across some expensive computational patterns which would benefit from caching.
My question is: what is the best approach to adding caching to getters in Glimmer components? It looks like there are currently a few ways to do this:
@cached via tracked-toolbox - I believe this was released prior to the Ember cached API. I didn't peek under the hood, but it has a @cached decorator which might collide with the future Ember @cached.
ember-cache-primitive-polyfill - mentioned in the Ember docs as a polyfill for the Ember cached API (3.22), but the syntax isn't as concise as the @cached decorator.
ember-cached-decorator-polyfill - related to RFC 566; appears to be based on option 2, with a more ergonomic syntax.
Upgrade to 3.22 - trying to avoid bumping Ember unless there is a significant benefit. At a glance, I didn't see @cached included here though.
Any additional insight/guidelines into how expensive a getter should be to warrant it being cached? For example, preventing re-renders seems a fairly obvious use case but there can be a wide range of what developers might consider an "expensive" computation.
There are two categories of things here:
The two @cached decorators.
The caching primitives introduced via RFC 0566.
In the vast majority of Ember or Glimmer app or normal library code, you’ll just be using the decorator. You’d only ever really reach for the caching primitives if you were building some low-level library code yourself (not never, but not exactly common, either).
As for the @cached decorators, they have basically the same semantics. The tracked-toolbox version was research that fed into the development of the primitive that Glimmer ships (and Ember uses), and so ember-cached-decorator-polyfill is implemented using the actual public API, polyfilling it via ember-cache-primitive-polyfill if necessary.
In terms of the performance characteristics, you don’t even actually need to think about it in terms of preventing re-renders: that’s not how the system works anyway. (See this blog post I wrote last year (2020) for a deep dive on how re-rendering gets scheduled in Ember and Glimmer using the autotracking concepts.) It’s also worth remembering that caching is not free! So it’s not as simple as “this thing costs something, so I should cache it”—the caching has to pay for itself to be worth it, and it costs both memory use and CPU time to create and to check caches.
With that caveat firmly in mind, I tend to think of “expense” here in the following categories:
am I rendering this hundreds or thousands of times?
does rendering this cause a long-running computation that will impact render (i.e. on the order of multiple milliseconds)
does this trigger asynchronous behavior?
(especially) does this trigger an API call?
In a lot of normal app code, the only getters you'll really need to decorate with @cached are getters which produce API calls based on the component's arguments. Since the getter will otherwise be invoked every time it is referenced, you will end up with multiple API calls, which can produce a situation where the apparent state in the UI flips back and forth as references to different promises resolve.

SAPUI5: Does Using Formatters Impact Performance?

I am working on a custom SAPUI5 app using ODataModel, and I have to format some of the fields that I will be displaying in a List control.
I need to know which of the approaches mentioned below is better w.r.t. the performance of the app.
1) Is it a good idea to use a Formatter.js file and write a formatter method for each field?
Example -
There are 2 fields which should be formatted before showing them in the UI, and hence 2 formatter functions.
2) Before binding the model to the List - do the formatting in a loop over each row.
Example -
Loop at OData.
--do formatting here for both the fields
move data to model.
Endloop.
Bind new model to UI
Is there any other way to improve performance - apart from code minification or using grunt?
Appreciate your help.
Thanks,
Rahul
Replacing formatters with other solutions is definitely NOT the place to start when optimizing performance, not to mention that you will lose a lot of the convenience the ODataModel comes with when manually manipulating the data in it.
Formatter Performance
Anyway, using a formatter is of course less performant than pre-formatting your data once after they were loaded. A formatter will be executed on every re-rendering of your control, so you might not want to do heavy calculations or excessive looping in a formatter that is executed frequently. But under normal usage, formatters are absolutely nothing you should worry about, and nothing that noticeably affects the end-user experience. Keep enjoying the convenience of formatters (and take a look at the cool Expression Binding).
General Performance Considerations
To improve performance, it is first of all very important to identify the real bottleneck. In many cases this is simply the backend; there is usually much more to win there with much less effort. Always keep that in mind. UI code optimization is pointless as long as the main backend call runs for, say, 3s.
Things to improve your UI performance might be:
serve SAPUI5 from a CDN
use a Component-preload, which can be generated with grunt-openui5 or gulp-ui5-preload (I think it does not minify XML yet, so you could do that additionally before creating the Component-preload)
try to reduce the number of SAPUI5 libraries you are using
be aware of which SAPUI5 libraries you are NOT using and consistently remove those (don't forget the dependencies section in the Component metadata resp. manifest.json)
be aware that sap.ui.layout is a separate independent library (not registering it as such will result in a lot of extra requests)
if you use an ODataModel make sure you set useBatch to true (default in v2.ODataModel)
intelligently design your OData service (if you can influence it)
intelligently use $expands: sometimes it can make sense to preload $expand data on a parent binding that does not actually use it e.g. if you most probably need the data later on
think about bundling your app as native app and benefit from improved caching (Kapsel)
Check Performance: Speed Up Your App and Performance Issues
squeeze out some more bytes and save some requests by minifying/combining custom css or other resources if you have some
If you are generally interested in Web Performance I can recommend Steve Souders books.
I'm totally open for more ideas on SAPUI5 performance improvements! Anyone?
BR
Chris
The best practice is to use formatters. A formatter receives an input and returns an output. The formatter function is called at runtime, once for each of the rows displayed in your list. The reason it is called for each row is that you cannot guarantee that the input will be the same for all of the rows in the list.
The concept of binding is to loop over your data model and update the UI accordingly. It is much better to use binding for a lot of reasons: maintainability, performance, separating the data layer from the presentation layer, core optimizations, and more.

CodeIgniter + Smarty - is it relevant?

I'm starting a new website, using CodeIgniter for the first time. In the views, PHP code is written directly. I was thinking of completely separating the code from the display, as I did a few years ago using Smarty.
I found a template engine provided by CodeIgniter : http://ellislab.com/codeigniter/user-guide/libraries/parser.html
But inside of the page, I found this note :
Note: CodeIgniter does not require you to use this class since using pure PHP in your view pages lets them run a little faster. However, some developers prefer to use a template engine if they work with designers who they feel would find some confusion working with PHP.
So, I wanted some help choosing the right thing. Should I use pure PHP? What would be the advantage of using a template engine like this one, when the coding style is already MVC? Would it be better to use Smarty, which I already know a little?
The website will need to be very secure AND very fast, and a lot of AJAX will be used (I was also thinking of using WebSockets, but that is not related to the current question).
Thanks for your help!
If you require your application to be fast, then you've been quite inspired in choosing Codeigniter, as it's a very lightweight framework and it's going to solve most speed concerns quite easily, when caching isn't involved.
There's a saying that you shouldn't scale unless you need to, and I think that it applies very well here. Unless you're displaying megabytes of data, I don't see how choosing a templating engine might harm the overall speed of your application. In the event that it does happen, you can always have a look at caching some responses/various other bits of information or third party solutions (i.e. Gearman) which may be overkill for now.
If you want to learn something new, go with the Codeigniter templating library; if you need to develop something fast, use the tools that you know best. As a matter of preference, I love Twig, and there is a CI implementation for it, called Twiggy: http://edmundask.github.io/codeigniter-twiggy/
As for security, I'd say it's not as robust as an enterprise level framework, like Symfony or ZF2, which place higher emphasis on that. They are more complete packages in themselves, and with caching, they perform super-fast, but they come with a higher learning curve.
Update: What I meant by the idea that you shouldn't worry about the speed of templating engines unless you're displaying huge amounts of data is that the effect on your page rendering speed will be negligible. Don't imagine it's something a user would ever notice; a difference on the order of 0.0x seconds of execution time isn't noticeable. Take a look here for a comparison between Smarty and Twig: http://umumble.com/blogs/php/249/
0.058 seconds of execution time for Smarty vs. 0.083 seconds for Twig. Templating engines always carry an overhead; their benefit is that they make development easier, and they help when working with designers.
If you want to go with a templating engine, I suggest Twig. http://twig.sensiolabs.org/

Web site performance - what is it about? Readability and ignorance Vs. Performance

Let me cut to the chase...
On one hand, much of the programming advice given (here and in other places) emphasizes the notion that code should always be as readable and as clear as possible, at (almost?!) any performance cost.
On the other hand, there are SO many slow websites (at least one of which I know from personal experience).
Obviously round trips and DB access, are issues a web developer should always keep in mind. But the trade-off between readability and what not to do because it slows things down, for me is very unclear.
My questions are: 1. What else? 2. Is there a rule (preferably simple, but probably quite general) one should adhere to in order to make sure one's code does not slow things down too much?
General best practices as well as specific advices would be much appreciated. Advices based on experience would be especially appreciated.
Thanks.
Edit: A little clarification: general performance advice isn't hard to find. That's not what I'm looking for. I'm asking about two things: 1. While trying to make my code as readable as possible, when should I stop and say: "Now I'm hurting performance too much"? 2. Little, less-known things like: is selecting just one column faster than selecting all (thanks Otávio)... Thanks again!
See the Stack Overflow discussion here:
What is the most important effect on performance in a database-backed web application?
The top voted answer was, "write it clean, and use a profiler to identify real problems and address them."
In my experience, the biggest mistake (using C#/asp.net/linq) is over-querying due to LINQ's ease-of-use. One huge query is usually much faster than 10000 small ones.
The other ASP.NET gotcha I see a lot is when the viewstate gets extremely fat and bloated. EnableViewState=false is your best friend, start every new project with it!
For web applications that have a database back end, it is extremely important that:
indexing is done properly
retrieval is done for what is needed - avoid SELECT * when selecting specific fields will do, even more so if they are part of a covered index (a small sketch follows below)
Also, whenever possible an appropriate caching strategy can help performance
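As a small illustration of the "retrieve only what you need" point, a plain-JDBC sketch with hypothetical table and column names:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {

        // Fetches just the two columns the page actually renders, instead of SELECT *.
        public static String findDisplayName(Connection conn, long userId) throws SQLException {
            String sql = "SELECT first_name, last_name FROM users WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, userId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        return null;
                    }
                    return rs.getString("first_name") + " " + rs.getString("last_name");
                }
            }
        }
    }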
Optimizing your code.
While making your code as readable as possible is very important, optimizing it is equally important. I've listed some items that will hopefully point you in the right direction.
For example in regards to Databases:
When you define the schema of your database, you should make sure that it is normalized and the indexes of fields are defined properly.
When running a query, specifically SELECT, only select the fields you need.
You should only make one connection to the database per page load.
Re-factor. This is probably the most important factor in producing clean, optimized code. Always go back and look at your code and see what can be done to improve it.
PHP Code:
Always test your work with a tool like PHPUnit.
echo is faster than print.
Wrapping your string in single quotes (') instead of double quotes (") is faster, because PHP searches for variables inside "..." but not inside '...'; use single quotes whenever you don't need variables evaluated in your string.
Use echo's multiple parameters (or stacking) instead of string concatenation.
Unset or null your variables to free memory, especially large arrays.
Use strict code and avoid suppressing errors, notices and warnings; this results in cleaner code and less overhead. Consider having error_reporting(E_ALL) always on.
Incrementing an undefined local variable is 9-10 times slower than a pre-initialized one.
Methods in derived classes run faster than ones defined in the base class.
Error suppression with # is very slow.
Website Optimization
A good place to start is here (http://developer.yahoo.com/performance/rules.html)
Performance is a huge topic and there are a lot of things that you can do to help improve the performance of your website. It's something that takes time and experience.
Best,
Richard Castera
Scott and Rcastera did a good job covering DB and querying optimization. To address your question from a HTML / CSS / JavaScript standpoint:
CSS:
Readability is key. CSS is rendered so fast that you should never feel it is necessary to sacrifice readability for performance. As such, focus on adding as many comments as necessary to document the code, why certain rules (like hacks) are there, and whatever else floats your comment boat. In CSS there are a few obvious rules to follow: 1) Use external stylesheets. 2) Limit the number of external stylesheets to limit GET requests.
HTML: Like CSS, HTML is read so fast by the browser that you should really only focus on writing clean code. Use whitespace, indentation, and comments to properly document your work. The only major things in HTML to remember are: 1) declare the <meta charset /> early within the head section. 2) Follow this guy's advice to minimize browser reflows. *This rule actually applies to CSS as well.
JavaScript: Most optimizations for JavaScript are really well known by now, so these will seem obvious: initializing variables outside of loops, pushing JavaScript to the bottom of the body so the DOM loads before scripts start tying up all of the resources, avoiding costly statements like eval() or with(). Not to sound like a broken record, but keeping a well-commented and easily readable script should still be a priority when developing JavaScript code, especially since you can minimize and compress away all the excess when you deploy it.

Should you wrap 3rd party libraries that you adopt into your project? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
A discussion I had with a colleague today.
He claims that whenever you use a 3rd-party library, you should always write a wrapper for it, so you can always change things later and accommodate them to your specific use.
I disagree with the word always. The discussion arose regarding log4j, and I claimed that log4j has a well-tested and time-proven API and implementation, that everything thinkable can be configured a posteriori, and that there is nothing you should wrap. Even if you wanted to wrap it, there are proven wrappers like commons-logging and log5j.
Another example we touched on in our discussion is Hibernate. I claimed that it has a very big API to be wrapped. Furthermore, it has a layered API which lets you tweak its insides if you need to. My friend claimed that he still believes it should be wrapped, but that he didn't do it because of the size of the API (this co-worker is much more of a veteran than me on our current project).
I claimed that wrapping should be done only in specific cases:
you are not sure how the library will fit your needs
you will only use a small portion of a library (in which case you may expose only the part of its API you need).
you are not sure of the quality of the library's API or implementation.
I also maintained that sometimes you can wrap your own code instead of the library. For example, putting your database-related code in a DAO layer instead of preemptively wrapping all of Hibernate.
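To make that DAO point concrete, a minimal sketch with made-up names; only the implementation class touches the persistence API (JPA's EntityManager here), so nothing else in the application depends on Hibernate directly:

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;

    // The rest of the application depends only on this interface.
    interface InvoiceDao {
        Invoice findById(long id);
        void save(Invoice invoice);
    }

    @Entity
    class Invoice {
        @Id
        Long id;
        String status;
    }

    // The single place where the persistence API is used.
    class JpaInvoiceDao implements InvoiceDao {
        private final EntityManager em;

        JpaInvoiceDao(EntityManager em) {
            this.em = em;
        }

        @Override
        public Invoice findById(long id) {
            return em.find(Invoice.class, id); // null if not found
        }

        @Override
        public void save(Invoice invoice) {
            em.persist(invoice);
        }
    }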
Well, in the end this is not really a question, but your insights, experiences and opinions are highly appreciated.
It's a perfect example for YAGNI:
it is more work
it inflates your project
it may complicate your design
it has no immediate benefit
the scenario you write it for may never manifest
when it does, your wrapper will most likely need to be rewritten completely, because it is tied too closely to the concrete library you were using and the new library's API simply doesn't match yours.
Well, the obvious benefit is for switching technologies. If you have a library that becomes deprecated, and you want to switch, you may end up rewriting a lot of code to accommodate the change, whereas if it were wrapped, you'd have an easier time writing a new wrapper for the new lib, than changing all your code.
On the other hand, it would mean that you have to write a wrapper for every trivial library that you include, which is probably an unacceptable amount of overhead.
My industry is all about speed, so the only time I'd be able to justify writing a wrapper is if it was around some critical library that was likely to change dramatically on a regular basis. Or, more commonly, if I need to take a new library and shoehorn it into old code, which is an unfortunate reality.
It's definitely not an "always" situation. It's something that may be desirable. But the time isn't always going to be there, and, in the end, if writing a wrapper takes hours and the long term code library changes are going to be few, and trivial...Why bother?
No. Java architects/wannabes are too busy designing against imaginary changes.
With modern IDE, it's a piece of cake when you do need change. Until then, keep it simple.
I agree with everything that's been said pretty much.
The only time wrapping third party code is useful (bar violating YAGNI) is for unit testing.
Mocking statics and so forth requires you to wrap the code; this is a valid reason to write wrappers for third-party code.
In the case of logging code, it's not needed though.
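A minimal sketch of that kind of test-driven wrapper; ThirdPartyGeocoder stands in for whatever static third-party API you depend on, and every name here is made up:

    public class GeocoderWrappingSketch {

        // Simple value object so the sketch is self-contained.
        public static final class Coordinates {
            public final double lat;
            public final double lon;
            public Coordinates(double lat, double lon) { this.lat = lat; this.lon = lon; }
        }

        // Stand-in for the third-party class with the static entry point; in reality
        // it lives in the library's JAR and cannot be mocked directly.
        static final class ThirdPartyGeocoder {
            static Coordinates lookup(String address) { return new Coordinates(51.5, -0.1); }
        }

        // The seam your own code depends on.
        public interface Geocoder {
            Coordinates locate(String address);
        }

        // The only place in the codebase that touches the static API.
        public static final class ThirdPartyGeocoderAdapter implements Geocoder {
            @Override
            public Coordinates locate(String address) {
                return ThirdPartyGeocoder.lookup(address);
            }
        }

        // In a unit test, hand the code under test this fake (or a mock) instead.
        public static final class FixedGeocoder implements Geocoder {
            @Override
            public Coordinates locate(String address) {
                return new Coordinates(0.0, 0.0);
            }
        }
    }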
The problem here is partially the word 'wrapper', partially a false dichotomy, and partially a false distinction between the JDK and everything else.
The word 'wrapper'
Wrapping all of Hibernate, as you say, is a completely impractical enterprise.
Restricting the Hibernate dependencies to an identified, controlled, set of source files, on the other hand, may well be practical and achieve the same results.
The false dichotomy
The false dichotomy is the failure to recognize a third option: standards. If you use, say, JPA annotations, you can swap Hibernate for other things. If you are writing a web service and use JAX-WS annotations and JAX-B, you can swap between the JDK, CXF, Glassfish, or whatever.
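As a sketch of what coding to the standard looks like (the Customer entity is hypothetical): only javax.persistence (JPA) annotations appear below, so Hibernate is just one interchangeable provider behind it.

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class Customer {

        @Id
        @GeneratedValue
        private Long id;

        @Column(nullable = false, length = 100)
        private String name;

        protected Customer() { } // no-arg constructor required by JPA

        public Customer(String name) {
            this.name = name;
        }

        public Long getId() { return id; }
        public String getName() { return name; }
    }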
The false distinction
Sure, the JDK changes slowly and is unlikely to die. But major open source packages also change slowly and are unlikely to die. Untold thousands of developers and projects use Hibernate. There's really no more risk of Hibernate disappearing or making radical incompatible API changes than there is of Java itself.
If the library you are planning to wrap is unique in its "access principles, metaphors and idioms" from other offerings in the same domain, then your wrapper is pretty much going to be similar to that library and won't do you any good if you one day switch to a different library since you will need a new wrapper.
If the library is accessed in a similar way to other libraries and the same wrapper can apply to these libraries, then they are probably written based on some existing standard and there is some common layer that already exists to access both of them.
I would only go with wrappers if I knew for sure that I would have to support multiple and substantially different libraries in production.
The main factor in deciding whether to wrap a library or not is the impact a library change would have on the code. When a library is only called from one class, the impact of changing the library is minimal. If, on the other hand, a library is called from all classes, a wrapper is much more likely to be worthwhile.
Any uncertainty around the choice of 3rd party library should be flushed out at the beginning of the project using prototypes to test the scalability/suitability/whatever of the 3rd party library.
If you decide to go ahead and provide full de-coupling/abstraction support, it should be costed and approved by the project sponsor - ultimately it's a commercial decision, as someone has to pay for it and for the work required to do it (unless it's absolutely trivial, in which case the API is probably low-risk anyway).
Generally, an experienced architect will choose a technology that they can be reasonably confident with and have experience of, and that they are confident will last the lifetime of the app; OR they will eliminate any risk in the decision early on in the project, thus removing any need to do this most of the time.
I'd tend to agree with most of your points. Using absolutes often gets you into trouble and saying you should "always" do something limits your flexibility. I'd add some more points to your list.
When you wrap code around a very common API, like Hibernate or log4j, you make it more difficult to bring on new developers. New developers now have to learn a whole new API, whereas if you hadn't wrapped the code they would have been productive right away.
On the flip side of that, you also limit your developers' view into the API. Using an advanced feature of the API takes more time because you have to make sure that your wrapper is implemented in a way that can handle it.
Many of the wrapping layers I've seen also are very specific to the underlying implementation. So, if you write a log wrapper around log4j, you are thinking in log4j terms. If some new cool framework comes out, it may change the whole paradigm, so your wrapping code doesn't migrate as well as you had thought.
I'm definitely not saying wrapping code is always bad, but as you stated, there are a lot of factors you have to consider.
The purpose of wrapping even a well-tested and time-proven 3rd-party library is that you might decide to switch libraries at some point in the future. Wrapping it makes it easier to switch without changing any code in your core application. Only the wrapper needs to change.
If you're absolutely sure that you'll never (another absolute) use a different logging framework in your project, go ahead and skip the wrapper. Even having said that, I'd probably hold off on writing the wrapper until I knew I needed it, like the first time I need to switch.
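A minimal sketch of the kind of wrapper meant here, assuming log4j 1.x as the wrapped library and a made-up AppLog facade; if you ever switch frameworks, only this class changes. (In practice, facades like commons-logging or slf4j already play exactly this role, as the question notes.)

    import org.apache.log4j.Logger;

    // Thin facade: application code logs through this class only.
    public final class AppLog {

        private final Logger delegate;

        private AppLog(Class<?> owner) {
            this.delegate = Logger.getLogger(owner);
        }

        public static AppLog forClass(Class<?> owner) {
            return new AppLog(owner);
        }

        public void info(String message) {
            delegate.info(message);
        }

        public void error(String message, Throwable cause) {
            delegate.error(message, cause);
        }
    }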
This is kind of a funny question.
I've worked in systems where we've found showstopper bugs in libraries we were using, and which upstream was either no longer maintaining, or not interested in fixing. In a language like Java, you usually can't fix internal bugs from a wrapper. (Fortunately, if they're open-source, you can at least fix them yourself.) So it's no help here.
But I'm often working in a language where you can easily modify libraries at any time, without seeing or even having their source code -- I commonly add new methods to existing classes, for example. So in this case, there's no point in wrapping: just make the change you want.
Also, does your colleague draw the line at things called "libraries"? What about Java itself? Does he wrap built-in classes? Does he wrap the filesystem? The thread scheduler? The kernel? (That is, with his own wrappers -- in a sense, everything is a wrapper around the CPU, but it sounds like he's talking about wrappers in your source repo that are completely under your control.) I've had built-in functionality change or disappear when new versions of it appear. Java is not immune from this.
So the idea to always write a wrapper comes down to a bet. Assuming he's only wrapping third-party libraries, he seems to be implicitly betting that:
"first-party" functionality (like Java itself, the kernel, etc.) will never change
when "third-party" functionality changes, it will always be done in a way that can be fixed in a wrapper
Is that true in your case? I don't know. Of the medium-large Java projects I've done, it's rarely true for me. I wouldn't spend effort wrapping all third-party libraries, because it seems like a poor bet, but your situation is certainly different from mine.
There is one situation where you can wrap with good reason: namely, if you need to test stuff and the default third-party object is heavyweight. Then having an interface can really make a difference.
Note, this is not to replace the library, but to make it manageable where it doesn't matter much.
Wrapping a whole library is boilerplate, ineffective, and wrong in most cases. It can be done in a much cleverer way. I'd say that wrapping a library is appropriate mostly in the case of UI component libraries, and even then only if you are adding some additional core functionality of your own to all the components.
if too many modifications and additions are needed, this is most likely not the library you are looking for
if there is a moderate amount of additions and modifications, there are design patterns that come in handy in those cases. The Decorator pattern (which allows new/additional behaviour to be added to an existing object dynamically), for example, is suitable for most cases (see the sketch after this list).
IDE search/replace and refactoring capabilities offer an easy way to change your code in all required places if some important change is needed and a wrapping object appears. (of course, unit-tests would be helpful here ;) )
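A minimal sketch of that Decorator idea, with made-up names: the application depends on the one interface it actually consumes, and extra behaviour is bolted on without wrapping the whole library.

    public class DecoratorSketch {

        // The narrow interface the application consumes.
        public interface HttpFetcher {
            String fetch(String url);
        }

        // Imagine this implementation delegating to the third-party HTTP library.
        public static class LibraryBackedFetcher implements HttpFetcher {
            @Override
            public String fetch(String url) {
                return "<html>...</html>"; // call the library here in real code
            }
        }

        // Decorator: adds timing/logging without touching the underlying implementation.
        public static class TimingFetcher implements HttpFetcher {
            private final HttpFetcher delegate;

            public TimingFetcher(HttpFetcher delegate) {
                this.delegate = delegate;
            }

            @Override
            public String fetch(String url) {
                long start = System.nanoTime();
                try {
                    return delegate.fetch(url);
                } finally {
                    System.out.printf("fetch(%s) took %d us%n", url,
                            (System.nanoTime() - start) / 1_000);
                }
            }
        }

        public static void main(String[] args) {
            HttpFetcher fetcher = new TimingFetcher(new LibraryBackedFetcher());
            fetcher.fetch("https://example.com");
        }
    }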
In my experience the question becomes fairly moot if you're using abstractions sufficiently. Coupling to a library is just like coupling to any other interface. Thus you want to reduce accidental coupling and the scope of rewrite necessary if you need to swap out the implementation. Don't bind your application logic to some construct, but don't just form a bunch of stupid (literally) wrappers around something and expect to gain any benefit.
A wrapper doesn't usually gain you anything unless it's answering a specific purpose (such as polymorphizing a non-polymorphic construct). They often show up in refactoring, but I wouldn't recommend forming an architecture on them. There's a few exceptions of course, but there is with any principle.
This doesn't speak toward adapters. An adapter can be a pretty important component for when you want to actually alter the interface of a library and its use to be in line with architecture, code, or domain concepts in your project.
You should do it always, often, sometimes, rarely, or never. Not even your colleague does it always, but the instructive cases are always and never. Suppose that it is sometimes necessary. If you never wrapped a library, the worst consequence is that one day you discovered that it was necessary for a library that you had used all over the place. It would take you some time to wrap that library and to perform shotgun surgery on the clients. The question is whether that eventuality would take more time than habitually providing wrappers that are rarely necessary, but having never to perform the shotgun surgery.
My instinct is to appeal to the YAGNI (you ain't gonna need it) principle and opt for "rarely".
I would not wrap it as a one-to-one thing, but I would layer the app so that each part is replaceable as much as possible. The ISO OSI model works well for all types of software :-)
