General tips for running a Unity3D animation on Microsoft HoloLens

I'm going to be given the task of making sure that an animation created in Unity3D can be run on a Microsoft HoloLens. I don't have any further information about the animation yet, but I wanted to ask in advance if there are any big things I should keep in mind.
In the animation you play a "character" in first-person mode, controlled with WASD or the arrow keys, and you can look up, down, left and right with the mouse. There are (as far as I know) no special interactions besides colliders.
And another question: is it easier to test the animation on the actual HoloLens, or to use a HoloLens emulator on my laptop?
I know it's a lot to ask right now without any code or other material, but I still hope that some of you can give me a little advice :)

In my experience it is difficult to say. The HoloLens, although it is an awesome device with nice specs for its size, has quite limited graphical power. Try to reduce your models' vertices to a reasonably low count (e.g. using Blender's decimate feature), and lower the quality in Unity's quality settings as proposed in the dev guide.
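If it helps, here is a minimal sketch of forcing a low quality level from a startup script; the quality index 0 and the 60 fps cap are assumptions that depend on how your project's Quality settings are ordered:

using UnityEngine;

// Hypothetical startup component: drop to the lowest quality level on the device.
public class HoloLensQualitySetup : MonoBehaviour
{
    void Awake()
    {
        // Index 0 assumes the lowest ("Fastest") level is first in Edit > Project Settings > Quality.
        QualitySettings.SetQualityLevel(0, true);

        // HoloLens targets 60 fps; capping avoids wasting battery on frames the display can't show.
        Application.targetFrameRate = 60;
    }
}

Attach it to an object in your first scene, or simply pick the low level as the per-platform default in the Quality settings directly.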
For your emulation question: the emulator does not emulate the HoloLens' specs (processor, memory and so on), but emulates input concepts etc. while running a Hyper-V virtual machine. So the performance in the emulator depends on your computer's hardware and is not related to the actual performance on a HoloLens.
Also take a look at the performance guidelines from Microsoft.

I worked on HoloLens for a couple of projects. A few points that can be useful for you:
the first big thing I would keep in mind is understanding whether the character has to move in a VR environment. In that case the HoloLens is almost useless, because its see-through lenses will let you see the surroundings [the real ones], distracting you from the virtual world. This is exactly what happens with their pre-installed HoloTour: a nice attempt, but you will not totally feel like you are in Rome or Machu Picchu
the second big thing I would consider is the fact that - at least for the first release - the HoloLens has a very limited field of view, which "amounts to the size of a monitor in front of you – equivalent to 15 inches" [source]. It is likely that - in a situation where the character will look in every direction - the objects you put in the AR space will end up being cut off or invisible
about testing: the emulator is really exceptional; I didn't find great differences between it and the real device. Of course, if you already have the real HoloLens I would use that. But if not, I would first develop and test on the emulator to understand whether the project is worth the purchase


Trying to find a proper profiler to test resource consumption after building in Unity

I have a problem with my game: after I build it and install it on my device, it lags and has low fps (I tested on different devices and it's the same everywhere). I tried Unity's built-in profiler, which shows that everything is fine and always displays 100 (or more) fps. So I think profiling the game after installation could help, but I can't find a proper profiler to use. Can someone give me any suggestions?
Thanks in advance
Unity's profiler is still a valid tool. You can find the slow parts in your scripts.
The main difference on mobile devices is the weaker graphics hardware.
To get good performance on those GPUs you need to bring down the polygon count and the number of draw calls.
You can find this information in the Stats window of the Game tab.
Mobile shaders and baked lighting also help.
Find more hints in Unity's Mobile Optimization Guide.
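As a quick alternative to a full profiler, a small on-device FPS overlay can at least confirm the frame rate after installation. A minimal sketch (the 0.1 smoothing factor is an arbitrary choice):

using UnityEngine;

// Hypothetical overlay: attach to any GameObject in the scene to see the frame rate on the device.
public class FpsOverlay : MonoBehaviour
{
    float smoothedDelta;

    void Update()
    {
        // Exponential smoothing so the number doesn't jitter every frame.
        smoothedDelta += (Time.unscaledDeltaTime - smoothedDelta) * 0.1f;
    }

    void OnGUI()
    {
        float fps = 1f / Mathf.Max(smoothedDelta, 0.0001f);
        GUI.Label(new Rect(10, 10, 200, 30), "FPS: " + fps.ToString("0"));
    }
}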
Look at the Graphy plugin. Maybe it will be useful in your case.
https://assetstore.unity.com/packages/tools/gui/graphy-ultimate-fps-counter-stats-monitor-debugger-105778

Assessing feasibility of an OS X port

I need to figure out whether porting an application to Mac OS X (not iOS) is feasible. I wrote some code for the Mac around 20 years ago, but what I'm looking at now is completely different and may require a complete rewrite, which I cannot afford. After googling for some time, I found a variety of APIs which appear and get deprecated so often that I feel completely lost.
The application draws by copying small fragments of bitmaps to the window. This is accomplished with BitBlt() on Windows or XCopyArea() on X11. In both cases the source is stored in video memory, so copying is really fast: 500K copies per second on a decent card, possibly more. On the Mac there used to be a CopyBits() function which did the same, but it is now deprecated. I found CGContextDrawImage(), which looks like it's getting deprecated too, but it copies from user memory and can only copy the whole image (not fragments). Is there any way to accomplish bitmap copying at a decent speed?
I see everything is 64-bit. I would want to keep it 32-bit for a number of reasons. 32-bit applications still seem to be supported, but with the fast pace of deprecation, Apple may stop supporting them at any time. Is this a correct assessment?
Software distribution. I cannot find any information on this. It looks like you need to be a member of the Apple Developer Program to be able to install your software on users' computers. Is this true? In some other places I have read that any software must undergo Apple approval. Is this correct?
Thank you for your help.
So much has changed in the past twenty years that it may indeed be quite difficult to port your app directly to modern OS X. You may be better served by taking the general design concept and application objectives and creating a fresh implementation using up-to-date software technology.
Your drawing system might be much easier to do with modern APIs, but the first step is deciding which framework to use. Invest some time in reading the documentation and watching the many videos available on the Developer website. A logical place to start is Getting Started with Graphics & Animation, but you may also wish to explore Metal Programming Guide and SpriteKit.
The notion of 64-bit vs 32-bit is irrelevant. All Mac computers run 64-bit code.
If you don't purchase a Developer program membership, you can still create an unsigned application with Xcode. It can be installed on another user's computer, but they'll need to specifically change the setting in System Preferences -> Security to "Allow apps downloaded from: Anywhere".
The WWDC videos are very useful in understanding the concepts and benefits of advancements in these frameworks made over the past few years.
After some investigation, it appears that OS X graphics is completely different from other platforms': the screen is regarded as a target for vector graphics, not bitmaps.
When the user changes the screen resolution, the screen resolution doesn't really change (as it would on Linux or Windows) and remains native; instead, the scale at which the vector graphics are rendered changes. Consequently, it's perfectly possible to set the screen "resolution" higher than the native one - you just see things rendered smaller.
When you take a screenshot, the system simply renders everything to an off-screen bitmap (which can be any size), so you can get nice smooth screenshots at any size.
As everything is vector-based, applications that use bitmap graphics are at a huge disadvantage. It is very hard to get at native pixels without much overhead, and worse yet, an application that uses native pixels will behave strangely because it won't scale when the user changes the screen resolution. It will also have problems when screenshots are taken. Is it possible to make it work? I guess I won't find out until I fork over $2K for a MacBook Pro and try it.
32-bit apps seem to be supported, and I don't think there's an intent to drop that support.
As for code distribution, my Thawte Authenticode certificate is supposed to work on OS X as well, so I probably don't need to become a member of the Apple Developer Program to distribute software, but again, there's no definitive answer to that until I try.

What are the main pain points when learning Windows Phone 7 programming?

I was wondering what the pain points are for other developers when learning Windows Phone 7 programming. For me it's switching between application pages and MVVM. If you have any hints or resources that help overcome these pain points, please share them.
When switching to a new development platform there are bound to be new things to learn.
If you're coming from a web background, it's important to note that you're no longer in the same stateless world as the web. There is also a different navigation model (especially if you're developing in XNA!).
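For the page-switching pain point specifically: in a Silverlight-based Windows Phone 7 app, navigation is URI-based. A minimal sketch, where the page names and handlers are hypothetical and the XAML side is omitted:

using System;
using System.Windows;
using Microsoft.Phone.Controls;

// Hypothetical page in a Silverlight-based WP7 project.
public partial class MainPage : PhoneApplicationPage
{
    // Button click handler wired up in the page's XAML.
    private void GoToDetails_Click(object sender, RoutedEventArgs e)
    {
        // Navigate to another page in the project by its relative URI.
        NavigationService.Navigate(new Uri("/DetailsPage.xaml", UriKind.Relative));
    }

    // Return to the previous page, mirroring what the hardware Back button does.
    private void GoBack_Click(object sender, RoutedEventArgs e)
    {
        if (NavigationService.CanGoBack)
            NavigationService.GoBack();
    }
}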
The biggest and, in my opinion, most important differences in moving to developing for the phone (or any mobile platform) are the following six points.
- "Mobile" applications are used differently to desktop ones. - Expect users to have less time to spend with the application and to be doing other things at the same time.
- Input is different. - Consider [multi-]touch as well as voice, location and sensors rather than mouse and keyboard.
- Output is different. - Even if just considering output to the screen, it's very different developing for a small screen than for a large one.
- Connectivity is not guaranteed. - Create apps which work offline and are occasionally connected. Don't assume a network connection is guaranteed or fast.
- Performance is important. - Part of the way that "mobile" applications are used differently to their desktop counterparts creates a different expectation in users, and they are much less tolerant of applications which display the equivalent of a wait cursor. Do no more than you have to and be sure to keep the app/device as responsive as possible.
- Resources are constrained. - The most important consequence of this is to do no more than you must, so you can preserve battery life. After all, if you run down the user's battery they get frustrated and can't use your app.
Unfortunately, the best way to avoid running into problems is to develop a detailed knowledge and understanding of the platform.
With that in mind, I'd recommend the following resources:
For general information check out the MSDN documentation.
I'd like to particularly draw your attention to:
the design resources, particularly the UI guidelines - so you can create something which looks like it is actually part of the platform.
and the fundamental concepts - so you don't waste time trying to do something which isn't possible.
Other useful resources are:
- Code samples
- Online training (there are updates to this coming soon)
- the book by Charles Petzold
There is a great, organised resource list here which covers pretty much all the major points of Windows Phone 7 development.

Periodic GPU performance problem

I have a WinForms application that uses XNA to animate 3D models in a control. The app has been doing just fine for months, but recently I've started to experience periodic pauses in the animation. Setting out to investigate what is going on, I have established these facts:
1. It happens on my machine only; other machines work fine
2. Removing everything from my render loop does not improve the problem
In 2. I didn't actually remove everything: I limited my loop to setting the viewport on my GraphicsDevice and then calling GraphicsDevice.Present.
Trying to dig further I fired up PIX to capture some statistics. Screenshots of two PIX runs can be viewed here (Run6) and here (Run14). Run6 is using my original render loop and Run14 is using the bare-bones Present loop.
PIX tells me that the GPU is periodically doing something, and I assume this is causing the pauses. What could be the cause of this? Or how do I go about finding out what the GPU is actually doing?
Update: since I usually trust my code to be perfect (who's laughing?), I started a new XNA project from scratch to see if it exhibits the same behavior. Starting a new XNA 3.1 Windows Game project and running PIX, I get this timeline: the same periodic pauses. So the problem must be lower in the stack, in XNA or Direct3D.
So PIX shows that the GPU is working on something; I can see the list of DX calls made within each frame, and the timing calculations show that the pause occurs during (or after) the IDirect3DDevice9::Present call.
Update 2: I had previously installed and uninstalled XNA 4.0 CTP on the problematic machine. I cannot be certain that this is related but I thought that perhaps a reinstall of the XNA Game Studio 3.1 bits could make a difference. Turns out it did.
The underlying question remains the same (and the bounty is still up): what could affect XNA 3.1 (or DirectX) to make it behave like this and is there any logging/tracing power tool for the DirectX and/or GPU level out there that could shed some light on what is going on?
Note: I'm using XNA 3.1 on a Windows 7 x64 dual-core machine with 8GB RAM.
Note2: also posted this question on the XNA Creators forums here.
You could try to see if you can find something with Xperf that matches your periodic problem. Do not run your application, but keep the programs open that would normally run alongside it. You could also try it again with the application running, but that could give a cluttered view.
Start the tracing; do this in an elevated prompt:
xperf -on BASE+LATENCY -stackWalk Profile
Wait for a fair amount of time to be sure that the problem is traced.
Stop the tracing and open it like this.
xperf -d trace.etl
xperfview trace.etl
Analyze by looking at the graphs and consulting the tables for specific intervals, and see if you can find something that is related to the problem. The highest chance of finding it would be in the DPC and Interrupts sections, but it might as well be something odd in the CPU or I/O sections. Good luck!
There is also more information available on Xperf and how to obtain it; hopefully this delivers results.
If not, you can alternatively try GPUView, which has been used for improvements in DWM; it is also included alongside Xperf in the Windows Performance Toolkit, so you can easily try both!
log v
... wait for a fair amount of time to be sure that the problem is traced ...
log
gpuview merged.etl
If GPUView runs out of memory, you can try adding "/limit 3" or removing the v.
Read the documentation of the tools if you are stuck somewhere.
Hmm ... this seems to be occurring on the GPU; however, it sounds like a CPU garbage collection issue. Can you run the CLR Profiler and see if you can see any spikes in GC activity that you can correlate with the slowdowns?
I agree that it sounds unlikely since you can clearly see it in PIX, but it might offer a clue as to the cause.
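If a full CLR Profiler run is inconvenient, a cheap way to test the garbage-collection theory is to log the generation 0 collection count each frame and see whether it jumps when a pause happens. A minimal sketch assuming the default Game template; in the WinForms-hosted control the same check can live in whatever method drives your render loop:

using System;
using System.Diagnostics;
using Microsoft.Xna.Framework;

public class Game1 : Microsoft.Xna.Framework.Game
{
    int lastGen0Count;

    protected override void Update(GameTime gameTime)
    {
        int gen0Count = GC.CollectionCount(0);
        if (gen0Count != lastGen0Count)
        {
            // Collections that line up with the visual pauses would point at GC pressure.
            Debug.WriteLine("Gen0 collections: " + gen0Count + " at " + gameTime.TotalGameTime);
            lastGen0Count = gen0Count;
        }

        base.Update(gameTime);
    }
}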
If it's only happening on your own machine, then could it be drivers? Forgive me for being skeptical, but it's a 64 bit machine after all :D
This looks like either a vsync issue or a GPU in its last throes. Since going back to a different version fixed it, and the "bottleneck" is in IDirect3DDevice9::Present, let's go with the former option.
I'm not familiar with XNA so I don't know how much of the workings of D3D are exposed, but do you know what your PresentationParameters are set to?
Specifically, try setting the swap effect to Discard.
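In XNA 3.1 the presentation parameters can be adjusted just before the device is created, via the GraphicsDeviceManager. A minimal sketch assuming the default Game template; whether this actually cures the pauses is of course not guaranteed:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Game1 : Game
{
    GraphicsDeviceManager graphics;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);

        // Runs right before the GraphicsDevice is created.
        graphics.PreparingDeviceSettings += OnPreparingDeviceSettings;
    }

    void OnPreparingDeviceSettings(object sender, PreparingDeviceSettingsEventArgs e)
    {
        PresentationParameters pp = e.GraphicsDeviceInformation.PresentationParameters;

        // The suggestion above: discard the back buffer contents on Present.
        pp.SwapEffect = SwapEffect.Discard;

        // Optional: sync Present to the vertical blank while chasing the vsync theory.
        pp.PresentationInterval = PresentInterval.One;
    }
}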

Great idea for embedded development

For my university, three others and I are searching for a project that utilizes at least one embedded device, web services or other web technology, and a graphical user interface.
Currently we are looking at developing a unified remote, that is, an extendable application on a cell phone through which you can control your media center. Any ideas or advice on this will be appreciated, though it is not the focus of this question.
We are having a hard time finding interesting (or fun) projects on which we can work for a complete semester. Any ideas will be greatly appreciated. The software will be released as free software (GPL or BSD license).
We all have a BSc in Software Engineering.
EDIT: I am very pleased with the suggestions so far. Thanks to everyone, and keep it coming.
How about a follower: you carry a device, and as you move from room to room in your house, devices configure themselves to your preferences - lights, music, etc. If two people are in the room, some precedence rules apply.
Is that possible based just on the presence of a mobile phone?
Another idea (off the top of my head):
A work environment assurance thing. We programmers like to develop in nice, quiet environments. Unfortunately some people tend to annoy us with their disturbing behaviour (or just by being loud).
So the project could be to create devices which track the stress level (sweat levels, pulse, etc.) of each individual and their impact on others.
An example: one individual is very loud (the device should measure this), and others around him become stressed and/or unfocused because of it. The server-side software should then detect this and warn him to quiet down a bit to improve the work environment.
Comments?
What do you peeps like doing? Build an app for it.
So, if you like drinking coffee, build an application which will find the nearest frothy coffee shoppe (or, if you're particular, the nearest Peets/Starbucks/Whatever-ocino). This idea works for beer too.
If you buy stuff off eBay, build a sniper app.
If you enjoy playing frisbee build an app which locates your nearest friends and sends them a text asking whether they want to goof off lectures and go to the park.
Heck, you could even build an app which monitors your SO questions and alerts you when you get an answer (although I don't know whether the data services SO currently offer will be up to the job).
The standout companies that have made great universal (programmable) remotes are Logitech and Philips.
One of the big problems with these types of devices is the ability of the general consumer to actually program all of their various devices. Logitech has done an outstanding job of providing a fairly simple web-based setup experience that then drives a very usable universal remote.
I would definitely look at what they have done for some ideas on universal remote controls.
How about an app and hardware that will tell me when my wife's plants need watering? (It's somehow my fault if they don't get watered.)
OK then: the recipe-generating fridge. RFID tags on the contents know what's available and the expiry dates. The database knows the recipes. The fridge emails/texts you to say "buy some mushrooms and you can have a delicious ham and mushroom omelette while the eggs are still fresh."
Benjamin and all those aspiring to do embedded projects ...
When you start a project, especially in embedded systems, you need to understand that the hardware is not your PC but some special device, and every sensor will be a transducer in itself. The thing that matters most to students is that everything costs money, and usually a lot of it.
So, it will be good to make sure that the idea is such that:
- it can be completed by the project members within the given timeframe
- all the required development tools, hardware and so on can actually be bought
- above all, the project enables you to learn something useful for your career ...
To do all this it is better to set some achievable goals.
Develop a system in which you can program the lighting system of your house. You set up a schedule once and everything works automatically.
I really love working with the Atmel STK1000/STK1006/STK1002 development boards for the AVR32: ATSTK1000
- 2x Ethernet
- QVGA LCD
- USB 2.0
- SD/MMC
- CompactFlash
- Embedded Linux support
- IR
- Audio
- PS/2 interfaces
- UARTs
- and more
Atmel family page:
AVR32 family home
Online forums:
Forums for the CPU
