I recently redesigned a view controller in a storyboard; before the redesign it displayed upright in portrait. Once redesigned, it runs upside down, with no code changed. I must have pressed something, but I have no idea what.
How do I reorientate this upright?
This looks very much like a transform setting, most commonly used when porting Mac code to iOS or for working with Core Text. Sometimes people use it when trying to create an inverted infinite table view (such as for chat apps or social media apps). I would expect that somewhere in your code you've accidentally applied it to more views than you meant to (maybe with a poorly considered extension). I would search your code for transform =. Also, you'll want to check your view hierarchy (the left-hand pane with the scenes in it), and make sure you haven't embedded this in something like a scrollview (that you've then flipped in code somewhere).
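For illustration, the kind of accidental flip being described might look like this; the extension and its name here are hypothetical:

```swift
import UIKit

// Hypothetical example of a "poorly considered extension": a helper meant
// for one inverted table view that accidentally gets applied to a whole
// view controller's view.
extension UIView {
    func flipForInvertedTableView() {
        // Flips the view vertically; applied to the root view, this
        // renders the entire screen upside down.
        transform = CGAffineTransform(scaleX: 1, y: -1)
    }
}

// Resetting the transform puts a flipped view right-side up again:
// someView.transform = .identity
```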
It is possible to do this in the storyboard editor, but it's pretty obscure, and unlikely for you to have done by accident. See https://stackoverflow.com/a/43014583/97337 for a picture of where it would be set (that question is about rotation, but flipping is very similar).
(Hopefully you have this project in version control, such as git, so you could use diffs or bisect to find out where the relevant change was.)
One more thought: don't completely discount a bug in Xcode. It's not likely, but it is possible. Do the standard stuff: quit Xcode, delete DerivedData (~/Library/Developer/Xcode/DerivedData), and try again. It probably won't fix it, but when things are really weird, it can be Xcode's fault.
Well, the title almost says it all: why should I not move a GUI (e.g. GTK) window around on screen from code? In GTK 3 there was an API for moving windows on screen, but it was removed in GTK 4 because moving a window from code is considered bad; only the user should do so (don't ask me to provide sources for that; I read it somewhere but have forgotten where and cannot find it). I cannot think of any reason why it would be bad, but I can think of several reasons why it could be useful, for example restoring the position of a window between application restarts. Could you please shed some light on this?
The major reason is that it can't possibly work cross-platform, so it is broken API by definition. That's also why it was removed in GTK 4. For example, it is impossible to implement when running on top of a Wayland session, since the protocol doesn't allow getting or setting global coordinates. If you still want something like this to work, you'll have to call the platform-specific API (for example, X11) for each platform you want to support.
On the reason why it’s not supported by some display protocols: it’s bad for UX and security. In terms of UX: some compositors can have special behavior because they need to work on a small device, or because they have a kiosk mode in which everything should always run fullscreen, or they provide a tiling experience. Applications positioning their windows themselves then tend to give unexpected behaviour. In terms of security: if you allow this, it’s technically possible for an application to reposition and resize itself so that it covers your screens while making itself transparent, without it being noticeable, which means it has the possibility of scraping all input.
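As an aside, on platforms whose native toolkit does expose window placement (AppKit on macOS, for instance), the usual answer to the "restore the position between restarts" use case is to let the toolkit persist the frame rather than moving the window from code. A minimal Swift sketch, with an arbitrary autosave name:

```swift
import Cocoa

// Minimal sketch: AppKit can persist and restore a window's frame across
// launches itself, so the app never has to move the window in code.
@main
final class AppDelegate: NSObject, NSApplicationDelegate {
    let window = NSWindow(
        contentRect: NSRect(x: 0, y: 0, width: 480, height: 320),
        styleMask: [.titled, .closable, .resizable],
        backing: .buffered,
        defer: false)

    func applicationDidFinishLaunching(_ notification: Notification) {
        _ = window.setFrameUsingName("MainWindow")    // restore saved frame, if any
        _ = window.setFrameAutosaveName("MainWindow") // save future moves/resizes
        window.makeKeyAndOrderFront(nil)
    }
}
```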
This is a best-practices question.
When one makes a new Swift application for OS X, Xcode creates a Main.storyboard and places it physically in the Base.lproj folder, but logically within the app's main "group".
I decided to separate different parts of the UI into different storyboards, so I added a Document.storyboard and Preferences.storyboard.
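For concreteness, loading a controller out of one of those separate storyboards might look something like this (a sketch; the scene identifier is hypothetical):

```swift
import Cocoa

// Sketch: opening the preferences window from its own storyboard. Assumes
// Preferences.storyboard contains a window controller scene whose storyboard
// identifier is set to "PreferencesWindowController".
func showPreferences() {
    let storyboard = NSStoryboard(name: "Preferences", bundle: nil)
    guard let controller = storyboard.instantiateController(
        withIdentifier: "PreferencesWindowController") as? NSWindowController
    else { return }
    controller.showWindow(nil)
}
```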
In retrospect it's not clear if this was the correct way to do it - for items that consist of a single window or view, should I use storyboards or just XIBs? I've read the Apple documents but I'm not clear on the practical differences. Are storyboards "replacing" XIBs? Are they the new hotness that I should use from now on?
Now I will be expanding the project with additional views, specifically a series of sheets used for editing certain features of the document. Should I put these all in a single storyboard, one XIB, or individual XIBs? Is there any strong reason to select one over the others?
And finally, when I added my storyboards, Xcode placed them in the root of the project folder. Should these really be moved to Base.lproj?
This is something I've been thinking about, too. I've recently gotten into OS X development, so I'll share my amateur view of XIBs vs. storyboards. To those of you who are more familiar with this, feel free to correct me if I'm mistaken.
Interface Builder inside Xcode seems to do a pretty good job of allowing you to put a skeleton in place, but doesn't always provide all the necessary customization options for a view. When using storyboards, I frequently end up with projects that are half visually based, and half code. It's like working on a cyborg.
Nibs/XIBs suffer from the same problem, but they don't even try to implement transitions. From what I can tell, they represent single windows, views, or menu items. This makes them simpler and more modular. You get to write the code that wires them together, and, at first, that may seem like more trouble, but it actually feels like a benefit to me because of the level of control gained. Storyboards can do a lot of this for you, but I personally tend to prefer having it all together in the code.
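As a sketch of what that hand-wiring can look like, here are two nib-backed window controllers and a plain-code "transition" between them; all names are hypothetical:

```swift
import Cocoa

// Each controller owns one window defined in its own xib.
final class PreferencesWindowController: NSWindowController {
    convenience init() {
        self.init(windowNibName: "Preferences") // loads Preferences.xib
    }
}

final class DocumentWindowController: NSWindowController {
    convenience init() {
        self.init(windowNibName: "Document") // loads Document.xib
    }

    // The "transition" is just code you write yourself: create the other
    // controller and show its window when asked.
    private lazy var preferences = PreferencesWindowController()

    @IBAction func showPreferences(_ sender: Any?) {
        preferences.showWindow(sender)
    }
}
```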
The ideal solution, to me, would be for Apple to implement a more abstract form of user interface design: each window (or iOS view, depending on the platform) would be contained in a nib, and a storyboard would be only a transition mapping between the nibs. For example: you create all your windows and menus, and then use the storyboard to connect them all, but the storyboard can't edit any of the views' details, only transitions and connections.
That being said, I'm quickly getting to the point where I prefer nibs and do all the other coding myself. If nothing else, I'm becoming a better programmer for it. Hope this helps!
I'm from a Windows programming background when it comes to writing tools, but have been programming using Carbon and Cocoa for the past year. I have introduced myself to the Mac by, I admit it, hiding from UI programming: I've basically been wrapping my OpenGL code in a view, then staying in my comfort zone using my platform-agnostic OpenGL C++ code as usual.
However, now I want to start porting one of my more sophisticated applications to Mac OS.
Typically I use the standard Visual Studio dockable MDI approach, which is excellent, but very Windows-like. Having now used a Mac as my primary machine for a while, I don't tend to see this sort of method used in Mac UIs. Even Xcode doesn't support the idea of drag-and-drop/dockable views, unfortunately. I see docked views with splitter panels, but that's about it.
The closest thing I've seen to the Visual Studio approach is Photoshop CS4, which is pretty nice.
So what is the general consensus on this? Is there a more Mac-like way of achieving the same thing that I haven't seen? If not, I'm happy to write a window manager in Cocoa myself, so that I can finally delve in and learn what looks like an excellent API.
Note, I don't want to use Qt or any other cross-platform libraries. The whole point is that I want to make a Mac app look like a Mac app, and leave the Windows app looking like a Windows app. I always find that cross-platform libraries tend to lose this effect, and when I see a native Mac UI, with fancy Cocoa transitions and animations, I always smile. It's also a good excuse for me to learn Cocoa.
That being said, if there is an Open Source Cocoa library to do this, I'd love to know about it! I'd love to see how someone else achieves this, and would help smooth the Cocoa learning curve.
Cheers,
Shane
UPDATE: I forgot to mention a critical point. I support plugins, which can have their own UI to display various plugin specific information. I don't know which plugins will be loaded and I don't know where their UI will live, if I don't support docking. I'd love to hear people's thoughts on this, specifically: How do I support a plugin view architecture, if the UI can't change? Where do I put the plugin views?
Coming from a Windows background, you feel the need to have docking windows, but is it really essential to the app? Apple's philosophy (in my opinion) is that the designer knows better than the user how things should look and work. For example, iTunes is a pretty sophisticated app, but it doesn't let you change the UI around, change the skin, etc., because Apple wants to keep it consistent. They offer the full view, the mini player, and a handful of different viewing options, but they don't let you pull the source list off into a separate window, or dock it in other positions. They think it should be on the left, so there it stays...
You said you "want to make a Mac app look like a Mac app", and as you pointed out, Mac apps don't tend to have docking windows. Therefore, implementing your own docking windows is probably a step in the wrong direction ;)
+1 to Ken's answer.
From a user perspective, unless it's integral to the app, as it is in Adobe CS or Eclipse, I want everything as concise as possible, with all the different options and displays out of my way so I can focus on the document.
I think you will find with Mac users that those who have the "user skill" to make use of rearranging panels will in most cases opt for hotkey bindings instead, and those who don't have that level of "skill" you're just going to confuse.
I would recommend keeping it as simple as possible.
One thing that's common among many Mac apps is the ability to hide all the chrome and focus on your content. That's the point behind the "tic tac" toolbar control in the top right corner of many windows. A serious weakness of many docking UIs is that they expect you to have the window take up most of the screen, because the docked panels can obscure content. Even if docked panels are collapsable, the space left by them is often just wasted and filled with white space. So, if you build a docking panel into your interface, you should expect it to be visible most of the time. For example, iTunes' source list is clearly designed to be visible all the time, but you can double-click a playlist to open it in a new window.
To get used to the range of Mac controls, I'd suggest you try doing some serious work with some apps that don't have a cross-platform UI; for example, the iWork apps, Interface Builder or Preview. Take note of where controls appear and why—in toolbars, in bottom bars, in inspectors, in source lists/sidebars, in panels such as IB's Library or the Font and Color panels, in contextual HUDs. Don't forget the menu bar either. Get an idea of the feel of controls—their responsiveness, modality, sizing, grouping and consistency. Try to develop some taste—not everything is perfect; just try iCal if you want to have something to make fun of.
Note that there's no "one size fits all" for controls, which can be an issue with docking UIs. It's important to think about workflow: how commonly used the control would be, whether you can replace it with direct manipulation, whether a visible indication of its state is necessary, whether it's operable from the keyboard and mouse where appropriate, and so forth. Figure out how the control's placement and behavior lets the user work more efficiently.
As a simple example of good versus bad control placement and behavior in otherwise-decent applications, compare image masking in OmniGraffle and Keynote. In OmniGraffle, this uses the Image inspector, where you have to first click an unlabeled button ("Natural size") in order to enable the appropriate controls, then adjust size and position in a low-fidelity fashion with an image thumbnail or by typing percentages into fields. Trying to resize the frame directly behaves in a bizarre and counterintuitive fashion.
In Keynote, masking starts with a sensibly named menu item or toolbar item, and uses a HUD which pops up the instant you click on a masked image and allows for direct manipulation, including a sensible display of the extent of the image you're masking. While you're dragging a masked image around, it even follows the guides. Advanced users can ignore the HUD entirely, just double-clicking the image to toggle mask editing and using the handles for sizing. It should be easy to see, with a few caveats (e.g. the state of "Edit Mask" mode should be visible in the HUD rather than just from the image; the outer border of the image you're masking should be used more effectively), that Keynote is substantially better at this, in part because it doesn't use an inspector.
That said, if you do have a huge number of options and the standard tabbed inspector layout doesn't work for you, check out the Omni Group's OmniInspector framework. Try to use it for good, and hopefully you'll figure out how to obsess over UI as much as you do over graphics now :-)
(running in slow motion, reaching out in panic) Nnnnnoooooooo!!!!!
:-) Seriously, as I mentioned in reply to Ken's excellent answer, trying to force a "Windowsism" on an OS X UI is definitely a bad idea. In my opinion, the biggest problem with Windows UI is third-party developers inventing new and inconsistent ways of presenting UI, rather than being consistent and following established conventions. To a Mac user, that's the sign of a terrible application. It's that way for a reason.
I encourage you to rethink your app's UI implementation from the ground up with the Mac OS in mind. If you've done your job well, the architecture and model (sans platform-specific implementation) should translate clearly to any platform.
In terms of UI, you've been using a Mac for a year, so you should have a pretty good idea of "the norm". If you have doubts, it's best to post a question specifically detailing what you need to present and your thoughts on how you might do it (or asking how if you have no idea).
Just don't whack your app with the ugly stick by forcing it to behave as if it were running in Windows when it's clearly not. That's the kiss of death for an app to Mac users.
I'm getting back in to Cocoa development on the Mac after a long stint doing iPhone work. My previous experience with Cocoa on the Mac has just been dinky little tools. I'm looking to build something serious.
Looking at a modern Cocoa application like iPhoto (or Mail or Things or....) many apps use the Single-Window, Source-List based approach. I'm trying to wrap my head around that as best I can because it seems to provide a good experience. However, I'm having a little trouble. Here's how I think it should look, but I'm wondering how others are doing it, and what's really the best way:
The starting point of the app is an AppDelegate object which, after launching, creates a Window[Controller?] from a nib, along with setting up its data (from, say, Core Data).
WindowController loads a window which essentially just has an NSSplitView in it.
Left side of the splitview has an NSTableView or NSOutlineView which is set to have the SourceList style.
Right side has the main content of the app, depending on which item of the table view is selected.
I would assume somewhere (where?) there are NSViewControllers managing each of the different views which will appear in the right side (think how iPhoto has All Photos, Events, Faces, Places, etc. and I imagine they could all appear in different nibs... is this correct?).
Those view controllers are probably bound to the source list on the left... how does that work? (Is the source list backed by an NSArrayController of NSViewControllers, maybe?)
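In code, the skeleton I'm imagining is roughly this (all names are placeholders):

```swift
import Cocoa

@main
final class AppDelegate: NSObject, NSApplicationDelegate {
    private var mainWindowController: MainWindowController?

    func applicationDidFinishLaunching(_ notification: Notification) {
        let controller = MainWindowController(windowNibName: "MainWindow")
        controller.showWindow(nil)          // the window holds the NSSplitView
        mainWindowController = controller   // keep it alive
    }
}

final class MainWindowController: NSWindowController {
    @IBOutlet weak var splitView: NSSplitView!    // source list | content
    @IBOutlet weak var sourceList: NSOutlineView! // left pane, source list style
}
```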
Anyway, those are my thoughts, am I completely off-base or...? I've looked around the web, found this post here, and I've looked at some Apple source code but I can't seem to wrap my head around it. Any guidance would be welcome.
Breaking the views up into separate nibs is mainly good if you're going to swap some views out for others, since you can load them lazily. And yes, in a modern app you would use NSViewController, or perhaps KTViewController from KTUIKit (see the posts its author co-wrote about NSViewController).
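A minimal sketch of that lazy loading and swapping, with hypothetical names and each pane in its own nib:

```swift
import Cocoa

// Load each pane's view controller from its own nib on first use, cache
// it, and swap it into the split view's right-hand side.
final class SourceListWindowController: NSWindowController {
    @IBOutlet weak var splitView: NSSplitView!

    private var paneCache: [String: NSViewController] = [:]

    func showPane(named nibName: String) {
        let controller: NSViewController
        if let cached = paneCache[nibName] {
            controller = cached
        } else {
            controller = NSViewController(nibName: nibName, bundle: nil)
            paneCache[nibName] = controller
        }
        let newView = controller.view        // triggers the lazy nib load
        let oldView = splitView.subviews[1]  // right-hand pane
        newView.frame = oldView.frame
        splitView.replaceSubview(oldView, with: newView)
    }
}
```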
Don't just go running into the arms of the source list, however. A single-window interface can be good for simple apps, but it can quickly become unwieldy when you have many things going on, as they may be better served by breaking them into separate windows; iTunes and Xcode both provide many examples of this (especially the latter, since you can switch it between single-window and multi-window interfaces).
You need to think about whether a multiple-window or single-window interface would be better for your app. There is no one answer for all apps; it depends entirely on your app, and what you want it to do, and how you want it to look—you (plus the rest of your team, if you have one) are the only one who can answer this question. You may want to do some paper prototyping to do quick experiments in each direction so that you can hold at least fake examples of both UIs up against each other.
One easy way to get a feel for the way nibs are split up is to just go into the iPhoto directory and start opening up nibs.
If you want to explore the class structure a little more, you can try browsing around using F-Script.
I would like to extend some existing applications' drag-and-drop behavior, and I'm wondering if there is any way to hack in drag-and-drop support, or change drag-and-drop behavior, by monitoring the app's message loop and injecting my own messages.
It would also work to monitor for when a paste operation is executed, basically to create a custom behavior when a control only supports pasting text and an image is pasted.
I'm thinking Detours might be my best bet, but one problem is that I would have to write custom code for each app I wanted to extend. If only Windows was designed with extensibility in mind!
On another note, is there any OS that supports extensibility of this nature?
If you're willing to do in-memory diddling while the application is loaded, you could probably finagle that.
But if you're looking for an easy way to just inject code you want into another window's message pump, you're not going to find it. The skills required to accomplish something like this are formidable (unless someone has wrapped all of this up in an application/library that I'm unaware of, but I doubt it). It's like clipboard hooking, writ large: it's frowned upon, there are tons of gotchas, and you're extremely likely to introduce significant instability into your system if you don't really know what you're doing.
Well, think of this from the point of view of the app designer. If you wrote an application, do you want users to be able to inject things into your application (more importantly, would you want to incur the support/revenue headache of clueless users doing this and then blaming you)? Each application's drag and drop infrastructure is written specifically for the application, not to allow you to drop anything you want onto it (potentially causing crashes and all sorts of other nasty behaviour when you drag something onto an app that simply can't handle it). Stuff like this is hard to do for a reason.
It is possible to do, but it's a lot of work: you need to acquire the window handle of the thing you want to drop something onto, and then replace that window's message handler with your own. That's fraught with danger, of course, since you either have to replicate all of the existing functionality of that window yourself, or risk the app not working correctly.
Hm, that's really too bad. I suppose there are sometimes reasons why apps don't exist yet. Basically what I'm trying to do is simplify the process of sending image links to people from various apps (mainly web browser text forms, but also any time I'm editing in a terminal window) by hooking the paste of an image in a text context, uploading the image in the background, and pasting a URL to where the image was uploaded, all with a single action.
Edit: I suppose the easier solution to this is to just create a new keyboard combo that is hooked by my app before it gets to any other app. There's no reason in particular that I need to tie it to copy/paste functionality.