In Turbo Pascal 7 for DOS you can use the Crt unit to define a window. If you define a second window on top of the first one, like a popup, I don’t see a way to get rid of the second one except for redrawing the first one on top again.
Is there a window close technique I’m overlooking?
I’m considering keeping an array of screens in memory to make this work, but the TP IDE does popups just like the ones I want to do, so maybe it’s easy and I’m simply looking in the wrong place?
I don't think there's a window-closing technique you're missing, if you mean one provided by the CRT unit.
The library Borland used for the TP7 IDE was called TurboVision (see https://en.wikipedia.org/wiki/Turbo_Vision), and it was eventually released to the public domain. Well before that, though, a number of 3rd-party screen handling/windowing libraries had become available, and these were much more powerful than anything that could be achieved with the CRT unit. Probably the best known was TurboPower Software's Object Professional (aka OPro).
Afaik, these libraries (and, fairly obviously, TurboVision) were all based on an in-memory representation of a framed window which could be rapidly copied in and out of the PC's video memory and, as in Windows with a capital W, the windows were treated as a stack with a z-order. So the process of closing/erasing the top-level window was one of getting the window(s) it had been covering to redraw itself/themselves. Otoh, CRT had basically evolved from very primitive origins similar to, if not based on, the old DEC VT100 display protocol and wasn't really up to the job of supporting independent, stackable window objects.
Although you may still be able to track down the PD release of TurboVision, it never really caught on as a library for developers. In an ideal world, a better place to start would be OPro. It was apparently on SourceForge for a while, but seems to have been taken down sometime since about 2007, and these days, even if you could get hold of a copy, there is a bit of a question mark over its licensing. However ...
There was also a very popular freeware library available for TP by the name of the "Technojock's toolkit", which had a large functionality overlap (including screen handling) with OPro, and it is still available on GitHub - see https://github.com/lallousx86/TurboPascal/tree/master/TotLib/TOTSRC11. Unlike OPro, I never used TechnoJocks myself, but devotees swore by it. Take a look.
Is there a way to retrieve more detailed error information when OpenGL has flagged an error? I know there isn't in core OpenGL, but is there perhaps some common extension or platform- or driver-dependent way or anything at all?
My basic problem is that I have a game (written in Java with JOGL), and when people have trouble with it, which they do on certain hardware/software configurations, it can be quite hard to track down where the root of the problem lies. For performance reasons, I can't call glGetError after each command but only do so at a few points in the program, so it's hard to even find which command flagged the error to begin with. Even if I could, the extremely general error codes that OpenGL has don't really tell me much about what happened (the man pages even describe how the same error codes are reused for quite a few different actual error conditions).
It would be tremendously helpful if there were a way to find out what OpenGL command actually flagged the error, and also more details about the error that was flagged (like, if I get GL_INVALID_VALUE, what value to what argument was invalid and why?).
It seems a bit strange that drivers wouldn't provide this information, even if in a completely custom way, but search as I have, I haven't found any way to get it. If they really don't, is there any good reason why that is so?
Actually, there is a feature in core OpenGL that will give you detailed debug information. But you are going to have to set your minimum version requirement pretty high to have this as a core feature.
Nevertheless, see this article on debug output -- even though it only went core in OpenGL 4.3 (as KHR_debug), it existed in extension form (ARB_debug_output) for quite some time, and it does not require any special hardware feature. So for the most part all you really need is a recent driver from NV or AMD.
I have an example of how to use this extension in an answer I wrote a while back, complete with a few utility functions to make the output easier to read. It is written in C, so I do not know how helpful it will be, but you might find something useful.
Here is the sort of output you can expect from this extension (AMD Catalyst):
OpenGL Error:
=============
Object ID: 102
Severity: Medium
Type: Performance
Source: API
Message: glDrawElements uses element index type 'GL_UNSIGNED_BYTE' that is not
optimal for the current hardware configuration; consider using
'GL_UNSIGNED_SHORT' instead.
Not only will it give you error information, but it will even give you things like performance warnings for doing something silly like using 8-bit vertex indices (which desktop GPUs do not like).
To answer another one of your questions: if you set the debug output to synchronous and install a breakpoint in your debug callback, you can easily make any debugger break on an OpenGL error. If you examine the call stack, you should be able to quickly identify exactly which API call generated most errors.
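Since your game is Java/JOGL, here is a minimal sketch of hooking that same debug output up through JOGL 2 rather than C. Treat it as an outline: the GLDebug class name is mine, and the JOGL calls (GLContext.enableGLDebugMessage, addGLDebugListener, GLDebugListener.messageSent) are written from memory of the JOGL 2 API, so names and packages may differ slightly between versions (older releases use javax.media.opengl instead of com.jogamp.opengl).

import com.jogamp.opengl.GLAutoDrawable;
import com.jogamp.opengl.GLContext;
import com.jogamp.opengl.GLDebugListener;
import com.jogamp.opengl.GLDebugMessage;

public final class GLDebug {

    // Call once from GLEventListener.init(), after the context exists.
    // Depending on the JOGL version you may also have to request a debug
    // context before the drawable is realized (JOGL exposes a context
    // creation flag for this), otherwise enabling debug messages can fail.
    public static void install(GLAutoDrawable drawable) {
        GLContext ctx = drawable.getContext();
        ctx.enableGLDebugMessage(true); // turn on ARB/KHR debug output
        ctx.addGLDebugListener(new GLDebugListener() {
            @Override
            public void messageSent(GLDebugMessage event) {
                // The message carries source, type, id, severity and the
                // driver's human-readable text, much like the sample above.
                System.err.println("GL DEBUG: " + event);
                // Set a breakpoint here (with synchronous debug output) to
                // break in the debugger right at the offending GL call.
            }
        });
    }
}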
Here are some suggestions.
According to the man pages, glGetError returns the value of the error flag and then resets it to GL_NO_ERROR. I would use this property to track down your bug - if nothing else you can switch up where you call it and do a binary search to find where the error occurs.
I doubt calling glGetError will give you a performance hit. All it does is read back an error flag.
If you don't have the ability to test this on the specific hardware/software configurations those people have, it may be tricky. OpenGL drivers are implemented for specific devices, after all.
glGetError is good for basically saying that the previous line screwed up. That should give you a good starting point - you can look up in the man pages why that function will throw the error, rather than trying to figure it out based on its enum name.
There are other, more specific status queries, such as glGetProgramiv and glCheckFramebufferStatus, that you may want to call, as glGetError doesn't catch every type of error. I.e., just because it reads clean doesn't mean something else didn't fail.
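To make the checkpoint/binary-search idea and those extra status queries concrete, here is a rough Java/JOGL sketch. The helper class and its labels are mine; the GL entry points are the standard ones JOGL exposes on its GL/GL2ES2 interfaces, though the exact array-based overloads may vary by JOGL version.

import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2ES2;

public final class GLChecks {

    // Drop checkpoints around suspect blocks of GL calls and move them
    // around (binary search) until you have bracketed the failing call.
    // e.g. GLChecks.checkpoint(gl, "after texture upload");
    public static void checkpoint(GL gl, String label) {
        int err;
        // glGetError reports one flag at a time, so drain the queue.
        while ((err = gl.glGetError()) != GL.GL_NO_ERROR) {
            System.err.println("GL error 0x" + Integer.toHexString(err)
                    + " at checkpoint '" + label + "'");
        }
    }

    // glGetError stays clean when a program fails to link,
    // so query that status separately.
    public static void checkProgram(GL2ES2 gl, int program) {
        int[] status = new int[1];
        gl.glGetProgramiv(program, GL2ES2.GL_LINK_STATUS, status, 0);
        if (status[0] == GL.GL_FALSE) {
            byte[] log = new byte[4096];
            int[] len = new int[1];
            gl.glGetProgramInfoLog(program, log.length, len, 0, log, 0);
            System.err.println("Link failed: " + new String(log, 0, len[0]));
        }
    }

    // Likewise, an incomplete FBO is reported by glCheckFramebufferStatus,
    // not by glGetError.
    public static void checkFbo(GL gl) {
        int status = gl.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER);
        if (status != GL.GL_FRAMEBUFFER_COMPLETE) {
            System.err.println("Framebuffer incomplete: 0x"
                    + Integer.toHexString(status));
        }
    }
}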
I'm afraid that asking this question may result in some downvotes, but after some unsatisfying research I decided to take the risk and ask more experienced people...
There are many questions here referring to specific problems connected with the Xcode Analyzer tool. It seems to be a very helpful tool. But I would like to ask you - as a beginner in the iOS world - what kinds of memory management problems cannot be noticed by this tool.
In other words, are there any common memory management aspects about which iOS beginners should think, "Oh, be careful with that, because in this case the Xcode Analyzer may not warn you about your mistake"...
For instance, I've found here Why cannot XCode static analyzer detect un-released retained properties? that:
(...)the analyzer can't reliably detect retain/release issues across
method/library boundaries(...)
It sounds like a good hint to consider, but maybe you know about some other common issues...
The analyzer is very good at finding the routine leaks that plague new programmers writing non-ARC code (failures to call release, returning objects of the wrong retain count, etc.).
In my experience, there are a couple of types of memory issues it does not find:
It cannot generally identify strong reference cycles (a.k.a. retain cycles). For example, you schedule a repeating NSTimer that targets a view controller, unaware that the timer maintains a strong reference to its target; if you don't invalidate the timer (or try to do so in the wrong place, such as the dealloc method, which will never be called while the timer holds the controller), neither the view controller nor the timer will ever be released.
It cannot find circular logic errors. For example, view controller A presents view controller B, which in turn presents a new instance of A (rather than dismissing/popping to get back to the original A).
It cannot find many non-reference counting memory issues. While it's getting better in dealing with Core Foundation functions, if you have code that is doing manual memory allocations (such as via malloc and free), the static analyzer may be of limited use. The same is true whenever you're using non-reference counting code (e.g. you use SQLite sqlite3_prepare_v2 and fail to call sqlite3_finalize).
I'm sure that's not a complete list of what it doesn't find, but those are the common issues I see asked about on Stack Overflow for which the static analyzer will be of limited help. But the analyzer is still a wonderful tool (it finds issues other than memory issues, too) and for those individuals not using ARC, it's invaluable.
Having said that, while the static analyzer is an under-appreciated first line of defense, you really should use Instruments to find leaks. See Locating Memory Issues in Your App in the Instruments User Guide. That's the best way to identify leaks.
We have an old (written in CakePHP 1.1!) application that has started groaning under the weight of its responsibilities, so we're giving it an overhaul with the help of some consultants.
One of the "quick wins" that they have identified: some pages are experiencing not-insignificant slowdown due to multiple calls to $html->url. As for whatever reason we're not currently using routing, and even in the future the necessity for reverse routing isn't likely to come into play, use of $html->url could just be creating overhead for the sake of it here. Some questions that arise in my mind in any case:
(1) Is it better to do things "the Cake way" where possible even if this is slower? Are there other advantages to using the HTML helper for links that might not have been considered.
(2) We are planning to upgrade to a much more up-to-date version of Cake, possibly all the way to 2.0, at some point. Is it possible that the current slowness of using the HTML helper is more to do with being in Cake 1.1 than an inherently bad design decision?
Just interested in people's opinions generally on this subject: is it definitely right for us to strip out laborious uses of the HTML helper or not?
An upgrade to CakePHP 2.0 will definitely yield a speed improvement. Beware, however: the upgrade path from 1.1 to 2.0 is not smooth. Expect to do a lot of manual editing.
As to the multiple calls to $html->url(), you might find this useful:
http://debuggable.com/posts/how-to-save-half-a-second-on-every-request-of-your-cakephp-app:49a69610-8648-4d65-815d-754c4834cda3
I would rather consider:
Rewriting HtmlHelper::url(). It is very surprising that such a simple function has a measurable performance impact. Consider using the version from a later release.
Caching the views (or the elements, or extracting these parts into elements and then caching them) so they are not regenerated on every request.
A discussion I had with a colleague today.
He claims that whenever you use a 3rd-party library, you should always write a wrapper for it, so you can change things later and accommodate things for your specific use.
I disagree with the word always. The discussion arose regarding log4j: I claimed that log4j has a well-tested and time-proven API and implementation, that everything thinkable can be configured a posteriori, and that there is nothing you should need to wrap. Even if you wanted to wrap it, there are proven wrappers like commons-logging and log5j.
Another example that we touched on in our discussion is Hibernate. I claimed that it has far too big an API to be wrapped. Furthermore, it has a layered API which lets you tweak its internals if you need to. My friend claimed that he still believes it should be wrapped, but that he didn't do it because of the size of the API (this co-worker is much more of a veteran than me on our current project).
I claimed that wrapping should only be done in specific cases:
you are not sure how the library will fit your needs
you will only use a small portion of a library (in which case you may only expose a part of its API).
you are not sure of the quality of the library's API or implementation.
I also maintained that sometimes you can wrap your code instead of the library. For example, putting your database-related code in a DAO layer, instead of preemptively wrapping all of Hibernate.
Well, in the end this is not really a question, but your insights, experiences and opinions are highly appreciated.
It's a perfect example of YAGNI:
it is more work
it inflates your project
it may complicate your design
it has no immediate benefit
the scenario you write it for may never manifest
when it does, your wrapper most likely needs to be rewritten completely, because it is tied too closely to the concrete library you were using and the new one's API simply doesn't match yours.
Well, the obvious benefit is for switching technologies. If you have a library that becomes deprecated, and you want to switch, you may end up rewriting a lot of code to accommodate the change, whereas if it were wrapped, you'd have an easier time writing a new wrapper for the new lib, than changing all your code.
On the other hand, it would mean that you have to write a wrapper for every trivial library that you include, which is probably an unacceptable amount of overhead.
My industry is all about speed, so the only time I'd be able to justify writing a wrapper is if it was around some critical library that was likely to change dramatically on a regular basis. Or, more commonly, if I need to take a new library and shoehorn it into old code, which is an unfortunate reality.
It's definitely not an "always" situation. It's something that may be desirable. But the time isn't always going to be there, and, in the end, if writing a wrapper takes hours and the library changes down the road are going to be few and trivial... why bother?
No. Java architects/wannabes are too busy designing against imaginary changes.
With a modern IDE, it's a piece of cake to make the change when you actually need to. Until then, keep it simple.
I agree with everything that's been said pretty much.
The only time wrapping third-party code is useful (without violating YAGNI) is for unit testing.
Mocking statics and so forth requires you to wrap the code; that is a valid reason to write wrappers for third-party code (a minimal sketch follows at the end of this answer).
In the case of logging code, it's not needed, though.
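A minimal sketch of that, with entirely hypothetical names (GeoLibrary stands in for any third-party class that only exposes statics): the wrapper is just a small interface you own plus one delegating implementation, which is enough to let unit tests substitute a fake.

// Stand-in for a hypothetical third-party library that only exposes statics.
final class GeoLibrary {
    static String countryFor(String ip) { return "US"; /* imagine JNI or network here */ }
}

// The seam you own: small, and the only thing your code depends on.
interface GeoLookup {
    String countryFor(String ip);
}

// Production implementation: delegates straight to the library.
class GeoLibraryLookup implements GeoLookup {
    public String countryFor(String ip) {
        return GeoLibrary.countryFor(ip); // the only place the library is touched
    }
}

// Test double: deterministic, no native code, no network.
class FakeGeoLookup implements GeoLookup {
    public String countryFor(String ip) {
        return "SE";
    }
}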
The problem here is partially the word 'wrapper', partially a false dichotomy, and partially a false distinction between the JDK and everything else.
The word 'wrapper'
Wrapping all of Hibernate, as you say, is a completely impractical enterprise.
Restricting the Hibernate dependencies to an identified, controlled, set of source files, on the other hand, may well be practical and achieve the same results.
The false dichotomy
The false dichotomy is the failure to recognize a third option: standards. If you use, say, JPA annotations, you can swap Hibernate for other things. If you are writing a web service and use JAX-WS annotations and JAX-B, you can swap between the JDK, CXF, Glassfish, or whatever.
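To make the standards point concrete, here is a small illustrative entity (not from the question) mapped purely with javax.persistence (JPA) annotations. Nothing in it imports org.hibernate, so the persistence provider can be swapped (Hibernate, EclipseLink, ...) in persistence.xml rather than in code.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Only standard JPA annotations are used here; no provider-specific imports.
// Switching persistence providers is then a configuration change, not a code change.
@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    @Column(nullable = false, length = 100)
    private String name;

    protected Customer() { } // no-arg constructor required by JPA

    public Customer(String name) {
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}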
The false distinction
Sure, the JDK changes slowly and is unlikely to die. But major open source packages also change slowly and are unlikely to die. Untold thousands of developers and projects use Hibernate. There's really no more risk of Hibernate disappearing or making radical incompatible API changes than there is of Java itself.
If the library you are planning to wrap differs from other offerings in the same domain in its "access principles, metaphors and idioms", then your wrapper is pretty much going to mirror that library, and it won't do you any good if you one day switch to a different library, since you will need a new wrapper anyway.
If the library is accessed in a similar way to other libraries and the same wrapper can apply to these libraries, then they are probably written based on some existing standard and there is some common layer that already exists to access both of them.
I would only go with wrappers if I knew for sure that I would have to support multiple and substantially different libraries in production.
The main factor in deciding whether or not to wrap a library is the impact a library change would have on the code. When a library is only called from one class, the impact of changing the library will be minimal. If, on the other hand, a library is called from all classes, a wrapper is much more justified.
Any uncertainty around the choice of 3rd party library should be flushed out at the beginning of the project using prototypes to test the scalability/suitability/whatever of the 3rd party library.
If you decide to go ahead and provide full de-coupling/abstraction support, it should be costed up and approved by the project sponsor - ultimately it's a commercial decision, as someone has to pay for it and for the work required to do it (unless it's absolutely trivial, in which case the API is probably low-risk anyway).
Generally, an experienced architect will choose a technology that they can be reasonably confident with, that they have experience of, and that they are confident will last the lifetime of the app; OR else they will eliminate any risk in the decision early on in the project, thus removing any need to do this most of the time.
I'd tend to agree with most of your points. Using absolutes often gets you into trouble and saying you should "always" do something limits your flexibility. I'd add some more points to your list.
When you wrap a very common API, like Hibernate or log4j, you make it more difficult to bring on new developers. New developers now have to learn a whole new API, whereas if you hadn't wrapped the code they would have been familiar with it right away.
On the flip side of that, you also limit your developers' view into the API. Using an advanced feature of the API takes more time because you have to make sure that your wrapper is implemented in a way that can handle it.
Many of the wrapping layers I've seen also are very specific to the underlying implementation. So, if you write a log wrapper around log4j, you are thinking in log4j terms. If some new cool framework comes out, it may change the whole paradigm, so your wrapping code doesn't migrate as well as you had thought.
I'm definitely not saying wrapping code is always bad, but as you stated, there are a lot of factors you have to consider.
The purpose of wrapping even a well-tested and time-proven 3rd-party library is that you might decide to switch libraries at some point in the future. Wrapping it makes it easier to switch without changing any code in your core application. Only the wrapper needs to change.
If you're absolutely sure that you'll never (another absolute) use a different logging framework in your project, go ahead and skip the wrapper. Even having said that, I'd probably hold off on writing the wrapper until I knew I needed it - like the first time I needed to switch.
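As a concrete (hypothetical) sketch of that trade-off in the log4j case: the wrapper is just an interface plus one delegating class, and switching frameworks means writing one new delegating class rather than touching every call site.

// The facade your application code calls.
interface AppLog {
    void info(String msg);
    void error(String msg, Throwable t);
}

// Today: delegate to log4j 1.x.
class Log4jAppLog implements AppLog {
    private final org.apache.log4j.Logger delegate;

    Log4jAppLog(Class<?> owner) {
        this.delegate = org.apache.log4j.Logger.getLogger(owner);
    }

    public void info(String msg)               { delegate.info(msg); }
    public void error(String msg, Throwable t) { delegate.error(msg, t); }
}

// Tomorrow, if you ever switch (here to java.util.logging): only this class
// is new, and the call sites stay untouched.
class JulAppLog implements AppLog {
    private final java.util.logging.Logger delegate;

    JulAppLog(Class<?> owner) {
        this.delegate = java.util.logging.Logger.getLogger(owner.getName());
    }

    public void info(String msg)               { delegate.info(msg); }
    public void error(String msg, Throwable t) {
        delegate.log(java.util.logging.Level.SEVERE, msg, t);
    }
}

This is essentially what commons-logging and similar facades already provide for logging in particular, which is why a hand-rolled wrapper is often unnecessary in that specific case.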
This is kind of a funny question.
I've worked in systems where we've found showstopper bugs in libraries we were using, and which upstream was either no longer maintaining, or not interested in fixing. In a language like Java, you usually can't fix internal bugs from a wrapper. (Fortunately, if they're open-source, you can at least fix them yourself.) So it's no help here.
But I'm often working in a language where you can easily modify libraries at any time, without seeing or even having their source code -- I commonly add new methods to existing classes, for example. So in this case, there's no point in wrapping: just make the change you want.
Also, does your colleague draw the line at things called "libraries"? What about Java itself? Does he wrap built-in classes? Does he wrap the filesystem? The thread scheduler? The kernel? (That is, with his own wrappers -- in a sense, everything is a wrapper around the CPU, but it sounds like he's talking about wrappers in your source repo that are completely under your control.) I've had built-in functionality change or disappear when new versions of it appear. Java is not immune from this.
So the idea to always write a wrapper comes down to a bet. Assuming he's only wrapping third-party libraries, he seems to be implicitly betting that:
"first-party" functionality (like Java itself, the kernel, etc.) will never change
when "third-party" functionality changes, it will always be done in a way that can be fixed in a wrapper
Is that true in your case? I don't know. Of the medium-large Java projects I've done, it's rarely true for me. I wouldn't spend effort wrapping all third-party libraries, because it seems like a poor bet, but your situation is certainly different from mine.
There is one situation where you can wrap with good reason: when you need to test things and the default third-party object is heavyweight. Then having an interface can really make a difference.
Note, this is not to replace the library, but to make it manageable where it doesn't matter much.
Wrapping a whole library is boilerplate, ineffective, and wrong in most cases. It can be done in a much cleverer way. I'd say that wrapping a library is appropriate mostly in the case of UI component libraries, and even then only when you are adding some additional core functionality of your own to all the components.
if too many modifications and additions are needed, this is most likely not the library you are looking for
if there is a moderate amount of additions and modifications - there are always design patterns that come in handy in those cases. The Decorator pattern (which allows new/additional behaviour to be added to an existing object dynamically), for example, is rather suitable for most cases (a minimal sketch follows after this answer).
IDE search/replace and refactoring capabilities offer an easy way to change your code in all the required places if some important change is needed and a wrapping object appears. (Of course, unit tests would be helpful here ;) )
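For example, here is a minimal Decorator sketch with hypothetical names: instead of wrapping an entire mail/transport library, you decorate only the small interface your code already uses in order to add the one behaviour you actually need (retries, in this case).

// An interface your code already programs against.
interface MessageSender {
    void send(String message);
}

// Concrete component, e.g. backed by some third-party mail client.
class SmtpMessageSender implements MessageSender {
    public void send(String message) {
        // ... the call into the mail library would go here ...
        System.out.println("sending: " + message);
    }
}

// Decorator: adds retry behaviour dynamically, without touching the component
// or wrapping the rest of the library. Assumes attempts >= 1.
class RetryingMessageSender implements MessageSender {
    private final MessageSender inner;
    private final int attempts;

    RetryingMessageSender(MessageSender inner, int attempts) {
        this.inner = inner;
        this.attempts = attempts;
    }

    public void send(String message) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                inner.send(message);
                return;
            } catch (RuntimeException e) {
                last = e; // remember the failure and try again
            }
        }
        throw last;
    }
}

// Usage: MessageSender sender = new RetryingMessageSender(new SmtpMessageSender(), 3);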
In my experience the question becomes fairly moot if you're using abstractions sufficiently. Coupling to a library is just like coupling to any other interface. Thus you want to reduce accidental coupling and the scope of rewrite necessary if you need to swap out the implementation. Don't bind your application logic to some construct, but don't just form a bunch of stupid (literally) wrappers around something and expect to gain any benefit.
A wrapper doesn't usually gain you anything unless it answers a specific purpose (such as polymorphizing a non-polymorphic construct). Wrappers often show up in refactoring, but I wouldn't recommend forming an architecture on them. There are a few exceptions, of course, but that's true of any principle.
None of this speaks against adapters, though. An adapter can be a pretty important component when you want to actually alter the interface of a library and bring its use in line with the architecture, code, or domain concepts in your project.
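A quick sketch of what that means in practice, with hypothetical names: the adapter's only job is to translate the library's vocabulary into the one your domain code expects, without adding behaviour.

// Hypothetical third-party class whose interface doesn't match your domain.
final class VendorTaxCalc {
    double computePercent(double grossAmountInCents) {
        return 25.0; // imagine real vendor logic here
    }
}

// The interface your domain code wants to talk to.
interface TaxPolicy {
    long taxInCents(long grossInCents);
}

// Adapter: translates between the two vocabularies; nothing more.
class VendorTaxPolicyAdapter implements TaxPolicy {
    private final VendorTaxCalc vendor = new VendorTaxCalc();

    public long taxInCents(long grossInCents) {
        double percent = vendor.computePercent((double) grossInCents);
        return Math.round(grossInCents * percent / 100.0);
    }
}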
You should do it always, often, sometimes, rarely, or never. Not even your colleague does it always, but the instructive cases are always and never. Suppose that it is sometimes necessary. If you never wrapped a library, the worst consequence is that one day you discover that it was necessary for a library that you had used all over the place. It would take you some time to wrap that library and to perform shotgun surgery on the clients. The question is whether that eventuality would take more time than habitually providing wrappers that are rarely necessary, but never having to perform the shotgun surgery.
My instinct is to appeal to the YAGNI (you ain't gonna need it) principle and opt for "rarely".
I would not wrap it as a one-to-one thing, but I would layer the app so that each part is replaceable as much as possible. The ISO OSI model works well for all types of software :-)