3d.io combined with A-Frame: performance issue

Our company has been evaluating 3d.io combined with A-Frame as a solution.
Here is a floorplan sample in 3d.io:
https://spaces.archilogic.com/model/Kendra_Chen/5r29uqxy?modelResourceId=69db3a0d-e1fb-4fdf-b7a8-2ce4dfe30f85&mode=edit&view-menu=camera-bookmarks&main-menu=none&logo=false
I use 3d.io combined with A-Frame via the app creator that 3d.io provides here
(change the scene id to "69db3a0d-e1fb-4fdf-b7a8-2ce4dfe30f85"):
https://appcreator.3d.io/default_setup?m=nv
With this page open, my computer's CPU usage rises to 70% and above.
I also tried it served as static files, sample here:
https://www.istaging.com/dollhouse/artgallery/
My computer's CPU usage still rises to 70% and above.
Is this an A-Frame performance issue?
If so, how can I ask the A-Frame team to improve it?
We think this is a serious issue that needs to be resolved.
Is it possible to improve, or are there simply too many components to handle?

Related

Performance tools (Lighthouse) show very slow "Time to Interactive" on my Gatsby site. How can I improve it?

I'm looking for some help. My website is getting a very poor Time to Interactive score (over 10 seconds).
Lighthouse tells me I need to "Minimize main-thread work" and "Reduce JavaScript execution time".
Could you share some advice or tips on what I can do? How do you improve these metrics in your projects?
From googling, it seems the main thing people do is code splitting, but Gatsby already does that automatically, right?
What else can be done?
You actually have quite a good Lighthouse score. Try testing your page in incognito mode to rule out browser extensions.
That said, you can still improve it:
Load a bundle analyzer and check what is inside your bundles. Try to find duplicated scripts, or scripts you are not using.
Check the import strategy for big scripts. If you are doing something like import * from 'lodash', change it to import { groupBy } from 'lodash'.
Look for opportunities to use lazy loading; this is not done automatically. The best candidates for this would be other pages of your site (see the sketch after this list).
Then review all the scripts you have; you may be able to remove something that isn't very important to you. Some people load jQuery only to do a bit of DOM querying.
Do general performance improvements:
Reduce image sizes (your landing page has one quite large image at 450 KB).
Reduce the number of fonts (you have four of them, totalling 547 KB).
Think about removing analytics, or replacing it with something simpler.
To track performance changes over time, use perfrunner or lighthouse-ci.
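As a rough illustration of the import-strategy and lazy-loading tips above, here is a minimal sketch, assuming a React/Gatsby setup; HeavyChart is a hypothetical component standing in for any code not needed on first render:

    // A per-module import lets the bundler drop the rest of lodash:
    import groupBy from 'lodash/groupBy';  // instead of: import * as _ from 'lodash'

    // Lazy-load code that is not needed for the first paint.
    import React, { Suspense, lazy } from 'react';

    const HeavyChart = lazy(() => import('./HeavyChart')); // hypothetical component

    export function Dashboard() {
      return (
        <Suspense fallback={<p>Loading…</p>}>
          <HeavyChart />
        </Suspense>
      );
    }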

Performance of POD files in Cocos3d

I've run into a performance problem with loaded POD files. I'm using models created for an online WebGL service, so they are quite detailed. I have a large number of models and really want to avoid remaking them all. As the number of models loaded into the scene increases, the fps drops. Are there any general tips for improving performance without changing the models? I've disabled multisampling and tried decreasing texture sizes, the number of lights, and other things like that. Also, all models are in the camera's view, so I can't use culling. The models are also all different. Any suggestions?
I knew there was something I'd missed! :) In short: I created the cocos3d template, then used my own mechanisms to add POD files. But if you look into Scene.m, there are
[self createGLBuffers];      // moves vertex data into GL vertex buffer objects
[self releaseRedundantData]; // frees the now-redundant main-memory copies
methods in -initializeScene. And, of course, I was not calling them after adding my POD files. Doing so improved performance from 7 fps to 30.

Performance Issue with Doctrine, PostGIS and MapFish

I am developing a WebGIS application using Symfony with the MapFish plugin: http://www.symfony-project.org/plugins/sfMapFishPlugin
I use the GeoJSON produced by MapFish to render layers through OpenLayers, in a vector layer of course.
With layers of up to 3k features everything works fine. With layers of 10k features or more, the application crashes. I don't know the exact threshold, because my layers have either 2-3k features or 10-13k features.
I think the problem is related to Doctrine, because the last entry in the log is something like:
Sep 02 13:22:40 symfony [info] {Doctrine_Connection_Statement} execute :
followed by the query that fetches the geographical records.
As I said, I think the problem is the number of features, so I used OpenLayers.Strategy.BBOX() to decrease the number of features fetched and shown. The result is the same: the app seems stuck while executing the query.
If I add a limit to the query string used to fetch the features' GeoJSON, the application works. So I don't think this is related to the MapFish plugin, but to Doctrine.
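For reference, the viewport-limited setup I tried looks roughly like this; a sketch assuming OpenLayers 2.x, where the layer name and endpoint URL are placeholders and whether maxfeatures is honoured depends on the server:

    // Fetch GeoJSON only for the current viewport; the BBOX strategy
    // re-queries the server when the view moves past the cached extent.
    var vectors = new OpenLayers.Layer.Vector("parcels", {
        strategies: [new OpenLayers.Strategy.BBOX()],
        protocol: new OpenLayers.Protocol.HTTP({
            url: "/geojson/parcels",          // placeholder MapFish endpoint
            params: { maxfeatures: 1000 },    // server-side cap, if supported
            format: new OpenLayers.Format.GeoJSON()
        })
    });
    map.addLayer(vectors);  // assumes an existing OpenLayers.Map instance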
Can anyone shed some light on this?
Thanks!
Even if it's theoretically possible, it's a bad idea to try to show that many vector features on a map.
You'd better change the way the features are displayed (e.g. raster tiles at low zoom levels, fetching feature details on click; see the sketch below).
Even if your service answers in a reasonable time, your browser will be stuck, or at least will perform very badly.
I'm the author of sfMapFishPlugin, and I have never tried to query that many features, let alone show them on an OL map.
Check out the OpenLayers FAQ on this subject: http://trac.osgeo.org/openlayers/wiki/FrequentlyAskedQuestions#WhatisthemaximumnumberofCoordinatesFeaturesIcandrawwithaVectorlayer . It's a bit outdated given recent browser improvements, but 10k vector features on a map is not reasonable.
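A rough sketch of that approach, assuming OpenLayers 2.x and a WMS server publishing the same data; the layer name and URL are placeholders:

    // Draw the data as raster tiles instead of client-side vectors.
    var raster = new OpenLayers.Layer.WMS(
        "parcels (raster)",
        "http://example.com/wms",                  // placeholder WMS endpoint
        { layers: "parcels", transparent: true }
    );
    map.addLayer(raster);

    // Fetch feature details only when the user clicks, instead of
    // shipping every geometry to the browser up front.
    var info = new OpenLayers.Control.WMSGetFeatureInfo({
        url: "http://example.com/wms",
        queryVisible: true,
        eventListeners: {
            getfeatureinfo: function (event) {
                console.log(event.text);  // server response for the clicked point
            }
        }
    });
    map.addControl(info);
    info.activate();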
HTH,

How can I boost my performance in a CakePHP application?

I'm currently using CakePHP for my training plan application, but I'm seeing rendering times of 800 ms per page.
Do you have any tips for improving the performance?
Thanks in advance.
br, kms
In general, the tool that will give you the greatest speed boost is caching, and CakePHP has a dedicated helper for it. The second most useful thing is optimizing your queries so you only ask for the data you need; this can be done with the Containable behavior.
You can find more detailed tips in this post.
Install APC if you don't have it; that alone will instantly bring you under 500 ms, without touching a single line of code.
Make sure your tables have all the proper indexes so that queries are as fast as they can be.
Next up, look into caching routes/URLs, as that is a huge drain.
Those will give you the biggest speed boost for the least amount of work.
Have you tried any of the CSS/JS asset combiners for CakePHP? They combine, compress, and minify your CSS/JS scripts and cache them where applicable. Here's one that's quite recent.
Not specific to CakePHP, but you could work through all the factors in Google Page Speed; it will help you speed up your page load times by suggesting which scripts you could combine and advising how to reduce requests.
Other than that, look into the Containable behaviour and see if you can cut out any unnecessary info/queries by selecting only what you need at any time.
This question is well populated with information on speeding Cake up.

Using flickr to get photos of a specific location and put together a model

I've read about systems that use the Flickr database of photos to fill in gaps in photos (http://blogs.zdnet.com/emergingtech/?p=629).
How feasible is a system like this? I was toying with the idea (not just as a way of killing time, but as a good addition to something I am coding) of using Flickr to get photos of a certain entity (in this case, race tracks) and reconstructing a model from them. My biggest concern is that there aren't enough photos of a particular track, and even then, it would be difficult to tell whether two photos are of the same part of the racetrack, in which case one of them may be irrelevant.
How feasible is something like this? Is it worth attempting as a sole developer?
Sounds like you want to build a Photosynth-style system; check out Blaise Aguera y Arcas' demo at TED back in 2007. There's a section about 4 minutes in where he builds a model of the Sagrada Família from photographs.
I say +1 for the Photosynth answer; it's a great tool. I'm not sure how well you could incorporate it into your own app, though.
It's definitely feasible. Anything is possible. And yes, it's doable for a single developer; it just depends how much free time you have. It would be great to see something like this integrated into Virtual Earth or Google Maps Street View. Someone who could nail software like this could help 3D-model the entire world based purely on photographs. That would be a great product, and it would make any single developer rich and famous.
So get coding. :)
I have plenty of free time, as I am in between jobs.
One way to do it is to get an overhead view of the track layout, make a blueprint based on it, then get one photo of the track and mimic the track's road colour. That would be a start.
LINQ to Flickr on CodePlex has a great API and would be helpful for your task.
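If you'd rather call the REST API directly than use LINQ to Flickr, the first step (gathering candidate photos of a track) might look like the sketch below; it assumes a valid Flickr API key (YOUR_API_KEY is a placeholder) and a runtime with a global fetch, such as Node 18+:

    // Search Flickr for geotagged photos of an entity (e.g. a race track)
    // using the documented flickr.photos.search method.
    async function searchTrackPhotos(query: string): Promise<string[]> {
        const params = new URLSearchParams({
            method: "flickr.photos.search",
            api_key: "YOUR_API_KEY",   // placeholder
            text: query,
            has_geo: "1",              // geotagged shots are easier to correlate
            per_page: "50",
            format: "json",
            nojsoncallback: "1",
        });
        const res = await fetch(`https://api.flickr.com/services/rest/?${params}`);
        const data = await res.json();
        // Assemble static image URLs from the documented server/id/secret scheme
        return data.photos.photo.map(
            (p: any) => `https://live.staticflickr.com/${p.server}/${p.id}_${p.secret}.jpg`
        );
    }

    // Example: searchTrackPhotos("Silverstone circuit").then(console.log);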
