We are noticing a sharp drop in Odoo performance for users on ChromeOS, whether on Chromebooks or Chromeboxes (with different configurations, including very fast ones). After the machine starts up everything seems fine, but after a while (depending on your activity) we see a severe slowdown. It is as if the machine is not responding at all, and then after a few seconds (sometimes up to 30 seconds) it carries on opening the screen and seems very quick again. Restarting the browser and clearing the cache gives some relief, but the issue keeps coming back. The strange thing is that on our older Windows machines (running the Chrome browser) we do not see this issue.
This just started happening for no good reason I can find.
If I launch MSACCESS.EXE and then open a database from within it, the database opens within 1 second.
If I launch the same database by double-clicking the .accdb file's icon, it takes about 40 seconds for the Access window to appear, and then less than 1 second after that for the database to open.
The database is local, and both Access and the DB are on an SSD. The system is an Asus Z97 motherboard, i7-4790K @ 4 GHz (not overclocked), with 32 GB RAM and about 200 GB of free disk space.
In both cases, performance after opening is excellent with no issues. It appears it's only the launching of MSACCESS.EXE by double-clicking a .accdb file that is affected. I double-checked the file association for .accdb and it points to the correct executable.
I captured some data with Performance Monitor during the 40-second pause. MSACCESS.EXE is using about 0.4% CPU, doing almost no disk I/O, and there's no network activity.
I've already tried "Compact and Repair" but that had no effect.
This just started happening, and now seems to be affecting Access on ALL .accdb files. They open instantly from within Access but take 40 seconds to open when double-clicked. I haven't installed any new software or Windows updates recently.
Curiously, if I change the .accdb extension to .accdr (which runs the database in the Access runtime instead of full Access), the database launches instantly.
What could possibly be going on here? I've searched the web and found some posts having to do with databases on a network share, but that doesn't apply here.
For anyone else encountering this issue, it appears this bug has nothing to do with Access specifically.
I needed to shut down the machine, and when I did so, Windows seemed to completely ignore multiple shutdown requests. While I was googling to troubleshoot, after about 10 minutes, the shutdown finally started. It took another 10 minutes to complete.
After rebooting, the slow-launch problem no longer occurs; there's only about a 2-second delay, which I assume is just MSACCESS.EXE loading "cold".
So, the problem is most likely in Windows and not Access.
I spent ages looking for the answers to this on various sites but eventually cobbled together my own fix, so hopefully this saves others some time.
This worked for me and reduced the load time from roughly 4 minutes (even when just opening a blank .accdb file) to seconds. So it was about 4 minutes when double-clicking an .accdb; once MS Access was open, using File | Open was fast.
I had two instances of MS Access, both on Windows Servers that can see the Internet but go through a corporate proxy.
After getting some hints by googling this issue, I suspected that the 4 minutes or so was some sort of timeout while trying to reach a site or sites (MS Office apps do this), and that once the proxy eventually returned a timeout, Access started responding again. It was quick on the second open because it didn't repeat the request.
Based on this, I tried diverting certain sites to 127.0.0.1 and turning off all the Internet options under Trust Centre | Privacy, etc. Nothing worked.
Finally, I found the solution. In Windows Defender Firewall I created a new application rule for MSACCESS.EXE: an outbound rule that blocks all Internet traffic. After this, the first double-click was fast again. I assume that with traffic totally blocked, whatever request goes out to those sites is stopped immediately and returns "no internet" to Access, which then carries on executing instead of waiting for the 3-4 minute timeout.
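(For anyone who wants to script the same rule rather than click through the firewall UI, here is a rough sketch. The MSACCESS.EXE path is an assumption for a typical Click-to-Run Office install; adjust it to your machine and run the script from an elevated prompt.)

```python
# Rough sketch: create the outbound block rule for MSACCESS.EXE by shelling out
# to netsh. Must be run as Administrator. The Office path below is an assumption.
import subprocess

msaccess = r"C:\Program Files\Microsoft Office\root\Office16\MSACCESS.EXE"  # adjust as needed

subprocess.run(
    'netsh advfirewall firewall add rule '
    'name="Block MSACCESS outbound" dir=out action=block '
    f'program="{msaccess}"',
    shell=True,
    check=True,
)
```

Deleting the rule later (netsh advfirewall firewall delete rule name="Block MSACCESS outbound") restores the original behaviour.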
I want to deploy my machine learning web app on a Linux server. I find that when I open Firefox remotely (via MobaXterm), it is too slow because of the X11 forwarding bottleneck.
I also have JupyterLab running on the same Linux server (directly accessible from the browser), and it works without any delay.
Why is it so? What can I do to run my Flask app through Firefox without the delay, same as with JupyterLab?
(Your support in editing the question to make it clear will be appreciated)
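For what it's worth, JupyterLab feels instant because only HTTP crosses the network and the rendering happens in your local browser; a Flask app can be accessed the same way instead of running a remote Firefox over X11. A minimal sketch, with the route, port, and tunnel command as illustrative assumptions:

```python
# Minimal sketch (route, port, and host are assumptions): serve the Flask app
# over HTTP on the Linux server and open it from a local browser, so no X11
# forwarding is involved.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "model server is up"

if __name__ == "__main__":
    # Bind to localhost and forward the port from your own machine, e.g.:
    #   ssh -L 5000:localhost:5000 user@server
    # then browse to http://localhost:5000 locally.
    app.run(host="127.0.0.1", port=5000)
```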
Try this:
In the Firefox address bar, enter:
about:config
(click through the warning)
Look up:
gfx.xrender.enabled
and set it from the default false to true.
This is over SSH over local Wi-Fi.
Without xrender, Firefox versions from the last several years would spend about 0.5-2 seconds per window sending the window content as some kind of raw, uncached image. Not terrible, but if you scrolled it would do another 0.5-2 second redraw each time, so not great either. More recent versions (maybe because WebRender is on by default?) seem to send MB after MB of traffic for 30 seconds or more (I don't know if it's the page-load spinner or what); once the page does load it actually scrolls fast (the X server must have a local copy of the page content), but it takes far too long to get there.
With xrender, it still sends pixmaps to the local X server, but it uses a surprisingly small amount of traffic doing so. Pages like Stack Overflow and lighter comic sites load indistinguishably from a local copy of Firefox; sites with heavy graphics may spend a second or two sending the large images, but then they're in the local X server and the page scrolls around and operates at full speed.
Enabling xrender when you run Firefox locally doesn't seem to cause any harm either (i.e. you don't have to toggle the setting depending on whether you are using Firefox remotely or locally).
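If you'd rather script the change than click through about:config, one option is to append the pref to user.js in your profile, which Firefox reads at startup. A small sketch; the profile directory name is an assumption (check about:profiles for yours):

```python
# Small sketch: persist gfx.xrender.enabled by appending it to user.js.
# The profile directory name below is an assumption -- find yours under
# about:profiles or in ~/.mozilla/firefox/.
from pathlib import Path

profile = Path.home() / ".mozilla" / "firefox" / "xxxxxxxx.default-release"  # assumption
with (profile / "user.js").open("a") as f:
    f.write('user_pref("gfx.xrender.enabled", true);\n')
```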
Enjoy the speed!
Cheers!
--Henry
I'm trying, but failing, to set up a reliable continuous integration environment using Xcode Server.
I have a Git repository on a headless Mac mini server running the Xcode Server service; the server has a separate development user account with administrator privileges that is used by Xcode.
I have set up my schemes with testing included and shared them to the repository.
The bots run, check out code, build, analyze, and archive, but they only seem to run tests when they feel like it, which is almost never. I've checked the schemes, and they have not changed between the runs where Xcode ran the tests and the runs where it didn't.
When I first set the bots up, tests wouldn't run at all until I added administrator privileges to the development account; then the tests ran a couple of times before Xcode Server decided to stop running them again.
I never get any reason why the tests aren't run. Sometimes the bots fail because of a crash during setup, and an error is reported, but mostly the bots seem to run; they just don't execute the tests, and no error is reported.
I've logged in remotely to the server, and the simulator is running, but never seems to do anything.
Here's a screenshot of an example bot. You can see that the tests used to run, and that I've reduced my warnings and got rid of an analysis issue. You can also see where no tests run, and no kind of warning or error is given as to why.
I've tried restarting the server, nope.
I've tried restarting the client, nope.
It's really frustrating, and I can't find any recent issues that offer a proper solution to this. The server is in constant use running backups and other tasks, so I'd rather not have a solution that involves logging in to the server and restarting something every time there's a problem (which is always); it defeats the whole point of bots if I'm spending more time logging in to my server trying to get them to work than they spend actually running.
Anyone have similar issues and a solution?
Edit: I noticed that memory usage was very high on the server (memory pressure was practically always amber), so I went out and got some memory today and increased the Mac mini's memory from 4 GB to 16 GB, and now the tests have started running again. Also, the whole process is much faster (not surprising, I guess).
Could it just be low memory causing problems with the simulator? I've only just installed the memory and restarted, so I'll give it a few test runs before I confirm this solution; it's stopped working before...
It seems this may be a memory issue. I upgraded the server's memory from 4 GB to 16 GB, as Activity Monitor was showing significant memory pressure.
Since doing this, the bots have started running tests again, and the total running time for the bot is a quarter of what it was.
As per my edit, I've been running the bots for a day now, including bots that run on multiple simulators, and everything seems to be fine.
It's not very good that no obvious indication is given in Xcode as to why the tests didn't run.
For reference, and to see whether this might fix your problems, the original server specs were:
Mac Mini Server edition (late 2012)
2.3 GHz Intel Core i7
4GB memory
2x1TB drives
Replaced the 2x2 GB memory sticks with 2x8 GB sticks (the maximum allowed for the model).
EDIT: After a month of running with no problems, increasing the memory has solved the problem permanently.
I'm developing a web app running on WildFly 8.2, and I'm experiencing annoying UI responsiveness lags that I don't know how to pinpoint. The lags occur shortly after loading a new page; they last about 3-4 seconds, during which the whole UI is unresponsive (hovering has no effect, and you cannot close the window or open DevTools).
I considered the following aspects:
code-related: Angular, animations, Kendo, less.js, unoptimized selectors or iterators in my code, etc. Disproved: I rolled back to a very early version in which I had never observed the problem (to rule out a hidden impact of newly introduced features), and the problem was there too.
CPU-related: restarted afresh many times, running no extra apps. No go.
server-related
browser-related: disabled extensions and the hardware acceleration setting. Disproved. Chrome/FF usually silently choke, sometimes asking whether to close the unresponsive page; IE complains about a script, but when I choose to debug it, I just get directed to a random script.
The key test, however, was to view my local deployment over the LAN from other desktops: the app performed sluggishly regardless, whereas the same app (same code, same branch revision) deployed locally on those desktops performed superbly.
So this suggested the issue is more server- or CPU-related. The app ran fine on WildFly 8.0. When the problem started to bite, I upgraded to 8.2, but that does not seem to have had any effect.
As I'm running out of ideas, does anyone have a hint what to do/check next?
Last minute: I followed the advice from here http://www.tomshardware.co.uk/forum/1843-73-windows-slow-browsing-chrome-firefox-faster and turned off Windows Defender's real-time protection; nothing got better.
The problem was caused by invisible content (shown on request in a modal) that had grown to 5000+ table rows and used Angular binding (ng-repeat, among others). Thanks for your hints.
So the times you see are an example of typical development: you fire up your server and MySQL database, then log in to the backend and try to add something simple like a menu item.
The times shown are only for the server to start responding, not for the page to actually finish loading. So this is time spent on the server, in the code, executing queries, etc. The JS files and CSS are not part of this measurement.
I can keep going: clicking on "New Menu Item" and hitting "Save" each take just as long.
So for something as simple as adding a menu item, the user spends roughly a minute looking at a blank screen (assuming the user knows Joomla by heart, makes no wrong clicks, and thus never has to go back).
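(To make the measurement concrete: the "server starts responding" time above is essentially time to first byte, which you can check separately from the JS/CSS downloads. A small sketch; the URL is just an example for a local install:)

```python
# Small sketch: measure time to first byte for a backend page -- the
# "server starts responding" time discussed above, excluding JS/CSS downloads.
# The URL is an example for a local Joomla install.
import time
import requests

start = time.time()
resp = requests.get("http://localhost/administrator/index.php", stream=True)
next(resp.iter_content(1))  # block until the first byte of the body arrives
print(f"Time to first byte: {time.time() - start:.2f} s (HTTP {resp.status_code})")
```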
Caching
So I read about caching. If I enable Page Caching I cannot keep developing, because my changes don't seem to get refreshed, and seeing your changes is something you really need while developing.
View Caching actually speeds up the backend and the frontend a lot, but you still have to access the page once, slowly, before it gets cached, and you have to access it again within the cache's lifetime to benefit from it. So for me, this means the backend is basically always slow, unless I try to do something like adding 10 menu items within 15 minutes.
By the way, I'm running on a brand-new notebook, which really should not be the problem.
Is there something I am missing out on?
Is this actually normal?
EDIT
I managed to improve my times to around 2 seconds. The profile still shows a lot of red, though; does anyone have an idea? The picture is for the Menu Manager view, Main Menu menu items.
My times are all below 2 seconds, usually around 1 second, both on my development server (a VM running CentOS 6 in VirtualBox hosted on Windows 7, i7 / 6 GB RAM / SSD) and my production server (dual 2 GHz Xeon / 4 GB / 10,000 rpm SATA disks).
Enable debug for your site and look at the bottom of the page for the times each module / component / event takes to run; this will make it possible to determine whether a single extension or piece of Joomla is eating up all the time, or whether it's just your machine.
I don't have a particularly good local machine (just a cheap Windows 8 box using EasyPHP), and my times are all much faster than either yours or those other people are reporting. One of the things you can do is turn on debug and look at the profiling data. When I load the admin login page, even with debug on, I can see that onAfterDispatch is the slowest part of the process.
A lot of times upgrading MySQL will give massive speed improvements.
When working on localhost, the loading time usually depends on the PC's performance. I spend a huge amount of time using WampServer (localhost) at work and on my computer at home.
When installing a fresh copy of Joomla 3.2 on WAMP at home, the step that creates the database and inserts the default content takes around 7-9 seconds, whereas at work it takes under 2 seconds. The reason? My work computer's performance is much better than my personal computer's.
It's the same concept for loading pages in the backend.
Hope this bit of info helped you.