Hi, I have updated my Xcode and iOS. In iOS 9 they have made some changes to HomeKit: they added some predefined scenes by default, like HMActionSetTypeSleep and HMActionSetTypeWakeUp. When I tap one of those scenes in the HMCatalog app, it throws an error saying the action set contains no actions. Could anyone please tell me how to execute the built-in HMActionSetTypeWakeUp scene? Thank you for your valuable time. Please let me know if I am not clear.
'Scenes' are themselves HMAction aggregators, i.e. HMActionSets. You need to add actions to the HMActionSet; otherwise executing the action set/scene does nothing.
For example, say I'd like to set the thermostat's desired temperature to 22 degrees Celsius when I execute the Wake Up (or Good Morning) predefined scene under an HMHome.
In order to do so, you would create an HMAction (or more precisely, an HMCharacteristicWriteAction, a subclass of HMAction) that writes the value 22 to the thermostat's desired-temperature HMCharacteristic.
Once the HMAction is created (and you can create multiple such actions), add it to the predefined scene. See the iOS API reference for the exact calls; a minimal sketch follows below.
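For illustration, here's a rough sketch in Swift; home and thermostat are assumptions standing in for your HMHome and a paired thermostat HMService:

    import HomeKit

    // A minimal sketch, assuming `home` is your HMHome and `thermostat` is a
    // paired HMService of type HMServiceTypeThermostat.
    func addWakeUpAction(home: HMHome, thermostat: HMService) {
        // Look up the built-in Wake Up scene (iOS 9+) and the thermostat's
        // target-temperature characteristic.
        guard let wakeUpScene = home.builtinActionSet(ofType: HMActionSetTypeWakeUp),
              let targetTemp = thermostat.characteristics.first(where: {
                  $0.characteristicType == HMCharacteristicTypeTargetTemperature
              })
        else { return }

        // Write 22 °C to the target temperature whenever the scene runs.
        let action = HMCharacteristicWriteAction(characteristic: targetTemp,
                                                 targetValue: NSNumber(value: 22))
        wakeUpScene.addAction(action) { error in
            if let error = error {
                print("Failed to add action: \(error)")
                return
            }
            // The scene now has at least one action, so executing it works.
            home.executeActionSet(wakeUpScene) { error in
                print(error.map { "Execution failed: \($0)" } ?? "Scene executed")
            }
        }
    }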
In the HMCatalog app, take a look at the 'ActionSetCreator' class.
You can also try this out in HMCatalog itself by adding HMActions there: drill down to a thermostat (or any other HomeKit-paired service), change the values of some HMCharacteristics under the HMService, then drill back up and save your scene (I may be off on the exact screen-by-screen details, as I'm writing this from memory in an airplane seat, but hopefully you get the point).
Once you have (any) HMActions listed under a predefined scene, try saying the name of the scene to Siri (or simply execute it by tapping the scene in HMCatalog).
If the thermostat (or other HomeKit service) is properly paired, you should see your stated problem resolved.
I am developing a testing algorithm for our iOS apps using Appium. To fully implement this algorithm I need to identify whether I have moved to a different screen or am still on the same screen after performing some action. I need to know: what makes every screen unique/different from the others in terms of Appium?
Going through the pageSource of every screen, I found that most screens have an xpath attribute in the window element. Can I use the value of the window element's xpath to mark the screen as unique, or do I need to do a trivial string comparison between screens' pageSources to tell them apart? Or is there some better solution?
Not sure xpath would be the best solution for this. Normally the UIAWindow remains the same, and developers might use different containers within this UIAWindow to render different screens.
So to verify different screens, you might need to figure out what this container is and check whether the container's properties change when you move to a new screen (i.e. a new container).
If your app uses a different header for every new screen, then you can use this header to detect that the screen has changed. Example: in WhatsApp, you see a different person's name at the top of each chat, so in that case the person's name can be treated as the header.
If this doesn't work, you can verify some of the other controls, or, say, the list of all the UIAStaticText elements on the screen. On a screen change, the entire list of UIAStaticText elements is likely to change, so this can indicate a screen change.
For our automation suite at work I've implemented a series of screen-check steps. Every time we switch screens, I do a find_element command for an element that is unique to that screen. That way, if a button or option takes me to an incorrect screen, my test fails as expected. If it does find the element we're expecting, it adds minimal time to the test suite.
Anish Pillai made a good suggestion of using the header text if there is any. Otherwise a particular tab, menu text, resource_id, or whatever is unique about the page would suffice. All you would need to do is a find_element call and a failure message if it fails.
I've tried searching the internet for a way to get a specific node from the scene. I've seen a lot of methods, but none gives me a clear answer.
I've built a scene in SpriteBuilder for Cocos2D. This is how MainScene.ccb looks:
If needed: this is how Player.ccb looks:
I'm trying to get the player 'object' in Xcode after publishing the project. I've tried to use CCBReader, but I can't find any useful method (unless I missed it). I've also tried self.children, but I don't know how to continue from there.
Can you help me out? In the end, I want to get the position of the player.
Thanks!
By the way, I'm a beginner in Swift, so don't expect me to know all of the terminology.
When you add a node (sprite) to your SpriteBuilder project, make sure it is selected, then on the right click the Code Connections tab. In the 'Doc root var' box, type your variable name.
When you load Xcode, select the file that loads the scene from SpriteBuilder; you can then declare a variable in that file with the same name you gave it in SpriteBuilder. You will then be able to use that node (sprite) whenever you want and access its properties.
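For illustration, a minimal sketch in Swift, assuming the Doc root var is named player and MainScene is set as the root node's custom class (CCNode comes from the Cocos2D bridging header that SpriteBuilder sets up):

    // MainScene.swift — the custom class of the root node in MainScene.ccb.
    class MainScene: CCNode {

        // Connected automatically by CCBReader; the name must match the
        // "Doc root var" you typed in SpriteBuilder's Code Connections tab.
        weak var player: CCNode!

        // Called by CCBReader once the scene and its code connections load.
        func didLoadFromCCB() {
            // The player node is now available, including its position.
            print("Player is at \(player.position)")
        }
    }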
I have just spent the last few hours trying to find the flag to use in Terminal to launch an app with the colored outlines around the various view elements that show how they are nested. I know that Matt Gemmell covered it during the Cocoa Face Off session of NSConference 2009 (at about the 13-minute mark in the video). Unfortunately I can't actually read what he types, and he doesn't speak the exact command. I know it has to be in the Apple docs somewhere, but the search system is currently not being of any use. It looks like he just adds -showAllViews YES to the end of the command to open TextEdit, but that command has no effect in 10.6.6. I have also tried every other capitalization I can think of, as well as using view instead of views. Every command opens TextEdit just fine but doesn't show the colored outlines.
Use -NSShowAllDrawing and -NSShowAllDrawingColor:
/Applications/TextEdit.app/Contents/MacOS/TextEdit -NSShowAllDrawing 200 -NSShowAllDrawingColor cycle
-NSShowAllDrawing sets the delay between drawing commands (allowing you enough time to see the drawing update)
-NSShowAllDrawingColor sets the fill colour for the regions with pending drawing operations (see the class methods on NSColor for valid values, or pass "cycle" to loop through all available colours).
I want to create a window that can display the current content of another application, say PowerPoint or Adobe Reader.
When I run my application, I would first select which of the currently running applications I want to monitor in realtime. Once that's done, I need to get the current content of the selected application and display it. Since my application is going to be realtime, it will need to capture the contents of the selected application as and when they change (with minimal lag), and then display them.
As I understand, this broadly comprise of the following steps:
1. Selecting an application that I want to monitor
2. Get a 'notification' when the content of that application (client area) has changed
3. Capture the new content and display it
Steps [1] and [3] are quite easy, and I found several methods here to perform them. However, for step [2] I am still clueless. Can anybody throw some light on how to achieve this?
Cheers.
You might take a look at UltraVNC, which does exactly what you are trying to do (it has a single window mode as well as full-screen). It has no less than four ways to accomplish your step #2.
The one obvious approach I can think of is to periodically take snapshots of the app's window and compare it to the previous one for changes.
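For illustration, a minimal sketch of that snapshot-and-compare idea, assuming a macOS target; windowID stands in for the CGWindowID of the window you picked in step [1] (obtainable via CGWindowListCopyWindowInfo):

    import Cocoa

    final class WindowMonitor {
        private let windowID: CGWindowID
        private var lastSnapshot: Data?
        private var timer: Timer?

        init(windowID: CGWindowID) {
            self.windowID = windowID
        }

        // Polls the window at a fixed interval and calls onChange only when
        // the captured pixels actually differ from the previous capture.
        func start(interval: TimeInterval = 0.1, onChange: @escaping (CGImage) -> Void) {
            timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
                guard let self = self,
                      let image = CGWindowListCreateImage(.null,
                                                          .optionIncludingWindow,
                                                          self.windowID,
                                                          .boundsIgnoreFraming)
                else { return }
                let bitmap = NSBitmapImageRep(cgImage: image)
                guard let data = bitmap.tiffRepresentation else { return }
                if data != self.lastSnapshot {
                    self.lastSnapshot = data
                    onChange(image)  // redraw your mirror window here
                }
            }
        }

        func stop() { timer?.invalidate() }
    }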
I am trying to make a simple application in which there is an empty red rectangle, and whenever the mouse is moved over the upper half of the rectangle's border, the cursor becomes a closed hand.
I started by selecting the Foundation command-line tool project. I made a transparent NSWindow and embedded an NSView in it with the rectangle, and made the window accept mouse-moved events (via -setAcceptsMouseMovedEvents:). I have overridden -canBecomeKeyWindow and -canBecomeMainWindow to return YES. But somehow none of the -mouseMoved: events are being received by the NSView.
When I put the same code in a Cocoa application project, creating my window in -applicationDidFinishLaunching:, my view was able to receive -mouseMoved: events.
Why is it not receiving mouse-moved events when I use the Foundation command-line tool project?
I have also observed that whenever I make a window (Carbon or Cocoa) from a Foundation command-line tool project, the window doesn't become key even when I click the title bar: the title bar stays light grey instead of becoming dark grey. Why is this happening?
I have overridden -canBecomeKeyWindow and -canBecomeMainWindow of NSWindow to return YES.
I would agree with what Joshua has already said. Any application that is going to show a user interface, be it a faceless background process or one which shows up in the Dock, should be in the form of an application bundle, not a plain old Mach-O executable like the Foundation tool template will create.
Also, there are reasons why views do not respond to mouseMoved: events by default:
1. Mouse-moved events can quickly flood the event queue.
2. There is generally little reason to use mouseMoved:, as tracking areas are far more effective and efficient.
A while back, I wrote a little test app that demonstrates the differences between these 2 approaches:
Moving your mouse around the upper view for roughly 20 seconds results in 1000 events, while the lower view, which uses tracking areas, produces fewer than 50.
Sample GitHub project: https://github.com/NSGod/MouseMoved-vs-TrackingAreas
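To make the tracking-area approach concrete for the original question, here's a minimal sketch in Swift (the class name RectView is just for illustration):

    import Cocoa

    final class RectView: NSView {

        override func draw(_ dirtyRect: NSRect) {
            // The empty red rectangle from the question.
            NSColor.red.set()
            NSFrameRect(bounds)
        }

        override func updateTrackingAreas() {
            super.updateTrackingAreas()
            trackingAreas.forEach(removeTrackingArea)

            // Only track the upper half of the view, per the question.
            let upperHalf = NSRect(x: bounds.minX, y: bounds.midY,
                                   width: bounds.width, height: bounds.height / 2)
            addTrackingArea(NSTrackingArea(rect: upperHalf,
                                           options: [.cursorUpdate, .activeInKeyWindow],
                                           owner: self,
                                           userInfo: nil))
        }

        // Called by AppKit when the cursor enters the tracking rect; no
        // mouseMoved: flood required.
        override func cursorUpdate(with event: NSEvent) {
            NSCursor.closedHand.set()
        }
    }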
Again, as Joshua mentioned, it would be helpful if you could describe what you're trying to accomplish. If your app needs to be a background app (LSUIElement == 1), and present an interface without appearing in the Dock, then there are ways to do that (as Josh mentioned, a command-line, non-bundled app is not the way).
You have no event loop to detect events and pass them to your window because your program does not start an NSApplication. See the main.m file of a typical Cocoa application.
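For illustration, a minimal sketch of the missing piece in Swift; without the run() call at the end, no events (mouse-moved or otherwise) are ever dispatched:

    import Cocoa

    let app = NSApplication.shared
    app.setActivationPolicy(.regular)  // behave like a regular, Dock-visible app

    let window = NSWindow(contentRect: NSRect(x: 200, y: 200, width: 300, height: 200),
                          styleMask: [.titled, .closable],
                          backing: .buffered,
                          defer: false)
    window.acceptsMouseMovedEvents = true
    window.makeKeyAndOrderFront(nil)

    app.activate(ignoringOtherApps: true)
    app.run()  // starts the event loop that delivers mouseMoved: and click events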
It might be helpful to describe what you're trying to accomplish by taking this approach. My guess is you're building a daemon but want a GUI interface to manage the otherwise "headless" daemon. That or you're building a new login management system. In either case, there are specific ways to do both and this isn't it. :-)