It's obvious that using CSS3 shadows, transparency, effects and other features can dramatically lower overall page performance on some phones/tablets, but I can't find a single way to determine whether a device is powerful enough to handle "heavy" CSS3 features. I'd like to create two versions of the CSS: a lite one for budget phones/tablets and a full one for PCs and so on; but how should I choose between them? Using media queries doesn't look like an adequate option.
This is quite a sophisticated topic and hard to answer properly without writing an epic. I'll try to keep it short and simple:
Tooling
Chrome Dev Tools offer many convenient features for profiling JS and CSS. As a first step I'd recommend the Timeline: activate capturing of 'JS Profile', Memory and Paint, then analyse the largest colored blocks (that's where a lot of calculation/layout/painting needed to be done). More detailed information: https://developer.chrome.com/devtools/docs/timeline
How to (CSS performance focused)
A quite popular misconception is that long selectors, or a high number of them, cause performance issues. From what I've seen, debugged and read from great performance pros like Paul Lewis, this is usually NOT the bottleneck: computing and applying CSS values is extremely fast. Painting and layout, especially during animations, are what's costly. There are tricks like using translate instead of left/top positioning, using the will-change property and many more. You can find an overview of what causes what here: https://csstriggers.com/
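To make the translate/will-change tip concrete, here's a minimal sketch (the `.box` element and the numbers are invented for illustration); profile both variants in the Timeline to see the difference:

```typescript
// Hypothetical sketch: move an element with transform instead of left/top.
// Changing `left` forces layout on every frame; `transform` usually stays
// on the compositor and skips layout entirely.

const box = document.querySelector<HTMLElement>('.box')!;

// Costly variant: triggers layout + paint each frame.
function moveWithLeft(x: number): void {
  box.style.left = `${x}px`;
}

// Cheap variant: composited, no layout.
function moveWithTransform(x: number): void {
  box.style.transform = `translateX(${x}px)`;
}

// Hint to the browser that transform is about to be animated.
box.style.willChange = 'transform';

let x = 0;
function tick(): void {
  x = (x + 2) % 300;
  moveWithTransform(x); // swap in moveWithLeft(x) to compare in the Timeline
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```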
Conclusion: media queries might not be a perfect solution, but they're not bad either, as the sheer number of selectors probably won't matter.
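If you want to pick between a lite and a full stylesheet at run-time, one alternative to media queries is a quick frame-time benchmark. This is only a sketch; the `css-heavy` class name and the 20 ms threshold are assumptions you'd tune yourself:

```typescript
// Hypothetical sketch: sample frame durations for ~30 frames, then decide
// whether the device looks fast enough for the "full" stylesheet.
function benchmarkFrames(samples: number, done: (avgMs: number) => void): void {
  const times: number[] = [];
  let last = performance.now();

  function frame(now: number): void {
    times.push(now - last);
    last = now;
    if (times.length < samples) {
      requestAnimationFrame(frame);
    } else {
      done(times.reduce((a, b) => a + b, 0) / times.length);
    }
  }
  requestAnimationFrame(frame);
}

benchmarkFrames(30, (avgMs) => {
  // ~16.7 ms per frame is 60 fps; well above that suggests a weak device.
  if (avgMs < 20) {
    document.documentElement.classList.add('css-heavy'); // enable full effects
  } // otherwise keep the lite default styles
});
```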
Maybe try to choose by screen resolution? Budget devices almost always have bad screens :)
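If you go that route, the check can be done at run-time with matchMedia; a tiny sketch (the cut-off values are arbitrary assumptions):

```typescript
// Hypothetical sketch: treat small, low-DPI screens as "budget" devices.
const isLowEnd =
  window.devicePixelRatio <= 1 &&
  window.matchMedia('(max-width: 480px)').matches;

document.documentElement.classList.toggle('css-lite', isLowEnd);
```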
I'm not 100% sure what factors matter when deciding between Unity's NavMesh and a more advanced pathing algorithm such as HPA* or similar. Considering the mechanics below, what are the implications of using Unity's NavMesh versus rolling my own algorithms:
Grid-based real-time building.
A large number of AI entities: friendly, hostile and neutral, into the hundreds. Not all visible on screen at once, but the playfield would be very large.
AI adheres to a hierarchy: AI entities issue commands, receive commands, and execute them in tandem with one another. This could allow advanced pathing to be done for a single unit, which relays rough directions to others so they can do lower-level pathing of their own to save on performance.
The world has a strong chance of being procedural. I wanted to go infinite proc-gen, but I think that's out of scope. I don't intend for the ground plane to vary much in actual height, just the objects placed on it.
Additions to and removals from the environment will happen dynamically at run-time, driven by both the player and AI entities.
I've read some posts saying NavMesh can't handle runtime changes very well, but I've also seen tutorials/store assets suggesting the contrary. Maybe I could combine methods, too? The pathing is going to be a heavy investment of time, so any advice here would be greatly appreciated.
There are lots of solutions. It's way too much for a single answer, but here are some keywords to look into:
Swarm pathfinding
Potential fields
Flocking algorithms
Boids
Collision avoidance
Which one you use depends on how many units will be pathing at a time, whether they're pathing to the same place or to different places, and how you want them to behave if several are going to the same place (e.g. should they intentionally avoid collisions with each other? Take alternate routes when one is gridlocked? Or all just stupidly cram into the same hallway?)
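To give a feel for the potential-field / flow-field family above, here's a minimal sketch on a uniform grid, assuming 4-way movement and a single shared goal (all names are made up for illustration). The field is computed once per goal and every unit just reads it, which is what makes it attractive for hundreds of units:

```typescript
// Hypothetical sketch: a flow field (potential field) on a walkable grid.
// One BFS from the goal gives every cell a distance to it; each unit then
// steps toward the neighbouring cell with the smallest distance.

type Cell = { x: number; y: number };

const DIRS: Cell[] = [
  { x: 1, y: 0 }, { x: -1, y: 0 }, { x: 0, y: 1 }, { x: 0, y: -1 },
];

// walkable[y][x] is true if units may enter that cell.
function buildFlowField(walkable: boolean[][], goal: Cell): number[][] {
  const h = walkable.length;
  const w = walkable[0].length;
  const dist: number[][] = walkable.map(row => row.map(() => Infinity));
  dist[goal.y][goal.x] = 0;

  const queue: Cell[] = [goal];
  let head = 0; // plain index instead of shift() keeps the BFS O(cells)
  while (head < queue.length) {
    const c = queue[head++];
    for (const d of DIRS) {
      const nx = c.x + d.x, ny = c.y + d.y;
      if (nx >= 0 && ny >= 0 && nx < w && ny < h &&
          walkable[ny][nx] && dist[ny][nx] === Infinity) {
        dist[ny][nx] = dist[c.y][c.x] + 1;
        queue.push({ x: nx, y: ny });
      }
    }
  }
  return dist;
}

// Per tick, each unit moves one cell "downhill" on the shared field.
function nextStep(dist: number[][], unit: Cell): Cell {
  let best = unit;
  for (const d of DIRS) {
    const nx = unit.x + d.x, ny = unit.y + d.y;
    if (dist[ny] !== undefined && dist[ny][nx] !== undefined &&
        dist[ny][nx] < dist[best.y][best.x]) {
      best = { x: nx, y: ny };
    }
  }
  return best;
}
```

When the world changes at run-time you rebuild the field (or just the affected region), which is typically much cheaper than re-planning hundreds of individual A* paths.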
In code I have basic things like the viewport height and width. After I've laid out the initial items in my view via Xcode, I figure I can then rearrange them any way I want in Swift (rather than with Auto Layout). All I need is the viewport height/width; that way I can make things 90% of the width, set them to a certain percentage of the height, align them to the middle, etc. My app is pretty basic, portrait mode only (on both iPhone and iPad), but the layout looks identical on both.
When I tried to use Auto Layout, I kept running into so many issues with constraints that I don't like it one bit. I'm asking this question to see whether other developers have taken the "code route" instead of Auto Layout, and whether that seems sensible.
You are certainly free to implement layoutSubviews or some comparable method to perform manual layout of subviews; that is what we used to do before auto layout was invented, in situations where autoresizing was insufficient.
("Sensible" is not a good fit for Stack Overflow so I'm going to ignore that part of the question. If manual layout works best for you it works best for you.)
Yes; in fact, the code route used to be the only way to lay out a view. You can create and manage your entire view hierarchy in code if you want; Apple won't reject you for that.
As for sensible, well... the path you are on is likely to lead to much pain. Auto Layout handles a lot of pain points for you and will make it much easier to adapt to different device sizes.
I have an issue regarding the performance of d3.js when displaying points on a map.
I'm using this beautiful piece of code that I found here as a starting point: https://gist.github.com/bobuss/1985684
Basically, what the code at that link does is draw points on a map and draw curves to connect them.
However, when I tried to add more data points (around 300), it would either crash my browser or lag a LOT.
So I was wondering if there's any way to improve the performance in this case.
Thanks!
I considered using d3 to show some genomic data (23k points on a scatter plot).
It just couldn't work: most browsers will crash when you add 23k DOM nodes, and if you try to have some interactivity (hover, click) you end up with too many event listeners and everything dies.
I love d3 and I've been using it since the Protovis days, but in my experience it becomes unusable after a few thousand elements, and every time I had to create a chart like that I ended up building it from scratch and implementing it on canvas. And there you end up with an entirely new set of problems: implementing your own hit testing, events, simulating hover...
It's a nightmare.
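For what it's worth, the canvas version of "draw lots of points" is short. Here's a minimal sketch without d3, assuming the data has already been projected to pixel coordinates (the names are illustrative):

```typescript
// Hypothetical sketch: draw tens of thousands of points in one pass on a
// canvas. No DOM nodes and no per-point event listeners - that is the trick.

interface Point { x: number; y: number; }

function drawPoints(canvas: HTMLCanvasElement, points: Point[]): void {
  const ctx = canvas.getContext('2d')!;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = 'steelblue';
  for (const p of points) {
    ctx.beginPath();
    ctx.arc(p.x, p.y, 1.5, 0, 2 * Math.PI);
    ctx.fill();
  }
}

// Hit testing is now manual: e.g. on click, find the nearest point yourself.
function nearest(points: Point[], x: number, y: number): Point | undefined {
  let best: Point | undefined;
  let bestD = Infinity;
  for (const p of points) {
    const d = (p.x - x) ** 2 + (p.y - y) ** 2;
    if (d < bestD) { bestD = d; best = p; }
  }
  return best;
}
```

A spatial index such as a quadtree makes that nearest-point lookup fast enough to run on mousemove.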
There is no good solution to "big data" charting in JS, at least not to my knowledge.
And that is a shame, to be honest. Seeing my MacBook Pro spin its fans at max speed, and browsers become unresponsive, because I'm trying to plot 25k points on an i7 machine with 8 GB of RAM, is nonsense.
But that is what we get when we try to use the browser for something it's not meant to do.
And yes, 2D GPU acceleration helped, but not that much.
Hi,
When I want to start a new project, I usually have enough details to just start it. But as every programmer does, I need to analyse the project to understand how to write the code and classes and the relations between them.
Normally I do this on lots of sheets of paper, which is really annoying, and I also can't concentrate very well that way (on huge projects).
I want to know: what is the best way (or tool) to write down implementation and design steps in order to analyse, break down and follow project progress?
Thanks
I strongly recommend PowerDesigner from Sybase.
You can build requirements documents and link each requirement to classes. You can generate a physical data model straight from your class model. It supports a wide variety of RDBMSs. There's a 15-day, fully functional trial at the link above.
If the project is huge, there's plenty of budget for this tool. It's a lifesaver. The ROI is self-evident.
I suggest VS 2008's Class Designer, a handy tool; it writes the classes behind the class diagram and also has tools to help analyze architecture.
This is incredibly easy. Use yellow sticky notes on a whiteboard or a large sheet of white cardboard.
Treat each sticky note as a separate process. For decisions, turn the sticky note so it looks like a diamond. This way you can move them around until it's right.
You can also split a complicated sticky note into two or three sticky notes. If you know that something needs to get done but don't know what that something is, simply write "Process goes here, ask (Marketing or Compliance)".
I've used this many, many times and it's very cost-effective.
When you are rapidly prototyping features, should you really worry about code quality and optimization?
Looking back at the number of times a "prototype" ended up becoming the product, the answer would be yes.
Don't forget that you are not only prototyping the feature, you are also prototyping the design.
Yes to quality. No to optimization. This question should be community wiki.
I would focus on clarity.
If quality and optimisation are requirements for the prototype then yes. If not, then no. Just because you are doing rapid prototyping you don't abandon standard operating procedures like programming to a specification, using source code control, testing, etc. It is, perhaps, relatively unusual for high performance to be a requirement for a rapidly developed prototype, but that's another matter.
Yes. Focus on quality, clarity and simplicity, AND on comments that explain what the code is doing and why (don't bother with the how unless it's really complicated; that is what the code itself is for).
Just about all the work we do here starts out as a "what if?", and if it works we continue with it.
We write comments that describe what should happen before we write the code, then write the code to match the comments. Writing the comments first forces you to think about how you will structure it all. We've found that it prevents a lot of false assumptions and actually makes development faster.
It also makes reuse much easier when you come back to the code later: you don't need to read the code and understand it, just read the comments. Don't fall for the nonsense of self-documenting code; all that does is self-document the bugs, because you have nothing to check the code against to see whether it matches its comments/documentation.
You can worry about optimization later; see this description of a huge win I got by changing from MFC CMaps to STL while working on a hobby project parsing some Apache log files. That change was made after I had the initial concept working, and only once it became apparent there was a performance problem.