I'm a bit confused by a problem that has only become more apparent lately, and I'm hoping someone might be able to point me in the direction of either where I might look for appropriate settings, or whether I'm running into a problem they have come across before.
I have a Laravel application on a private server that I use for our little museum. As the application has become more complex, the lag has become noticeable, and you can see how it almost queues the connections, finishing one request before moving along to the next, whether it's an API call, an AJAX request, a view response, whatever.
I am running Apache 2.4.29 on Ubuntu Server 18.04.1.
I have been looking around, but not much has helped with regard to connection settings. If I look at my phpinfo() I see Max Requests Per Child: 0, Keep Alive: on, Max Per Connection: 100, but I believe these are fine the way they are.
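For reference, the same values can be checked against what Apache is actually configured to do (paths assume a stock Ubuntu/Apache 2.4 layout):

    # Which MPM is Apache running? (prefork vs. event matters a lot here)
    apache2ctl -V | grep -i mpm

    # The KeepAlive settings actually in effect
    grep -Rin keepalive /etc/apache2/apache2.conf /etc/apache2/conf-enabled/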
If I check my memory, it says I have 65 GB available, with 5 GB used for caching. Watching the live data, memory use never crosses into GB territory and stays solidly in the MB range. This server is used only for this Laravel project, so I don't have to worry about affecting other projects; I'd just like to make sure this application is getting the best use of the hardware for its purpose.
I'd appreciate any suggestions. I know there's a chance the terms I'm searching for are incorrect, or maybe just outdated, so if there are any potentially useful resources out there, I'd appreciate those as well.
Thank you so much!
It's really hard to tell since a lot of details are missing, but here are some things that can give you a direction of where to look:
Try installing htop via apt-get and watch what happens to your CPU/RAM load with each request to the server (there's a sketch of this after the list).
Do you use php-fpm to manage the PHP requests? This might help in finding out whether the problem lies in your PHP code or in the Apache configuration.
Did you try deploying to a different server? Do you still see the lagging on the other server as well? If not, this indicates a misconfiguration problem and not an issue with your code.
Do you have other processes running in the background that might slow things down? Cron jobs? A Laravel queue worker?
If you try to install another app on the server (let's say phpMyAdmin), is it slow as well, or does it work fine?
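A minimal sketch of the first two checks, assuming a stock Ubuntu 18.04 box (the FPM service name depends on your PHP version; 7.2 is the 18.04 default):

    # Watch per-process CPU/RAM while you fire requests at the app
    sudo apt-get install htop
    htop

    # Is PHP served by FPM, or by mod_php inside Apache?
    systemctl status php7.2-fpm     # service name varies with the PHP version
    apache2ctl -M | grep -i php     # a loaded php7_module means mod_php, not FPM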
Try to take it from here. Best of luck.
Currently I am a bit lost, or maybe just have a mental block.
The topic of my question is a Prestashop 1.7.3.3 installation currently on shared hosting. Due to slow performance and a long TTFB I am moving it to a VPS running Plesk, hosted on DigitalOcean.
Now comes the part where I am a bit lost: I copied the files via wget, dumped the database, and applied permissions correctly (to my knowledge). The shop comes up on the new Plesk host under the new domain without issues.
As soon as I enable MySQL caching, I can still edit pages with Apollo Pagebuilder but no longer save them; at least the changes don't show up in the front office. If I switch back to the file cache, changes are propagated as intended, but the modules page in the back office no longer works (error 500, which can be fixed by removing /app/cache/prod and /app/cache/dev).
So, to summarize my issue: if I enable the file cache, everything except the modules page works; if I enable the MySQL cache, everything except Apollo Pagebuilder propagation works.
What I already tried:
I have reinstalled Apollo Pagebuilder, but this pretty much completely breaks my front office (meaning I'd have to rebuild everything from scratch, as the current state doesn't seem to be read properly).
Exported, reimported, and ran "update and fix" on Apollo; not successful :(
Only thing that comes to my mind as a fix would be sacrificing something to the gods, but I'd rather not do that.
Environment:
Ubuntu 16.04 LTS; Plesk Onyx 17.8.11; Prestashop 1.7.3.3; PHP 7.1.26
If no one has had this problem before, maybe someone has an idea of what to delete to just get the modules page working in the back office. I'd be willing to accept MySQL caching being unavailable.
Thank you in advance for your help.
OK, I think I found the answer. Since the server was migrated cache and all, the database connection was cached as well. (Fortunately it wasn't able to write to the previous DB.)
So if ever someone faces the same issue:
prestaroot/app/cache/prod/appProdProjectContainer.php stores the connection strings in two places:
once in protected function getDoctrine_Dbal_DefaultConnectionService() (around line 670),
and once around line 5000. Easiest is to just search for your previous connection credentials.
You also need to make sure that prestaroot/app/cache/prod/appParameters.php contains the same, valid credentials.
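A quick way to confirm the stale credentials really are in the cache (the host below is a placeholder for your previous DB host), run from the Prestashop root:

    # Look for leftover copies of the old connection string
    grep -rn "old-db-host" app/cache/prod/appProdProjectContainer.php app/cache/prod/appParameters.php

    # Or simply clear the compiled caches so they are rebuilt with the new credentials
    rm -rf app/cache/prod app/cache/dev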
Hope it will help someone one day.
I was asked this question once at an interview:
"Suppose you own a website where the server is at some remote location. One day, some user calls/emails you saying the site is abominably slow. How would you identify why the site is slow? Also, when you check the website yourself as any user would (using your browser), the site behaves just fine."
I could think of only one thing (which was shot down):
Check the server logs to analyse incoming traffic; maybe a DoS attack or exceptionally high traffic. The interviewer told me to assume the server has normal traffic and no DoS.
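For what it's worth, the kind of quick check I had in mind looks like this (log path assumed; with Apache's common log format the client IP is the first field):

    # Top ten client IPs by request count
    awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head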
I was kind of lost because I had never thought about this problem. I have almost no idea how running a server/website works, so if someone could highlight a few approaches, that would be nice.
While googling around, I could find only this relevant, wonderful article. That article is kind of too technical for me now, but I'm slowly breaking it down and understanding it.
Since you already said when you check the site yourself the speed is fine, this means that (at least for the pages you checked) there is nothing wrong with the server and it can serve those pages at a good speed. What you should be figuring out at this point is what the difference is between you and the user that reports your site is slow. It might be a lot of different things:
Is the user using a slow network connection (mobile for example)?
Does the user experience the same problems with other websites hosted at the same webhoster? If so, this could indicate a network problem. Normally this could also indicate a resource problem at the webserver, but in that case the site would also be slow for you.
If neither of the above leads to an answer, you can assume that the connection to the server and the server itself are fine. This means the problem must be on the user's device. Find out which browser/OS they use and try to replicate the problem. If that fails, find out whether they use any antivirus or similar software that might cause problems.
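One concrete way to compare your path with the user's is to have both of you time the same URL, for instance with curl (the write-out fields are standard curl variables; substitute your own site for example.com):

    # Where does the time go: DNS, TCP connect, first byte, or the full download?
    curl -o /dev/null -s -w "dns %{time_namelookup}s  connect %{time_connect}s  ttfb %{time_starttransfer}s  total %{time_total}s\n" https://example.com/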
This is a great tool to find the speed of web pages and tells you what makes it slow: https://developers.google.com/speed/pagespeed/insights
I think one of the important things missing from the answers above is the server location, which can play a vital role in web performance.
When someone says a web page takes a long time to open, that means high latency, and high latency can be caused by server location.
Let's assume that, since you are the owner of the site, the server and your client are co-located, so you see low latency.
But if a user is on the other side of the world, their latency increases drastically, and hence the performance is slow.
Another factor is caching, which drastically affects latency.
Take the example of Facebook: they have servers all over the world to reduce latency (and for several other advantages), and they use a huge caching system for their hot data (trending topics), whereas cold data (old data) is stored on disk, so it takes longer to load an older photo or post.
So the user might have complained because they were trying to load some cold data.
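A rough way for the complaining user to measure that latency from their side (example.com stands in for the real site):

    ping -c 5 example.com     # round-trip latency to the server
    traceroute example.com    # where along the route the delay builds up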
I can think of these few reasons (first two are already mentioned above):
High Latency due to location of client
Server memory might need to be increased
The number of service calls made from the page.
If a service was down at the time of the complaint, it could prevent the page from loading.
The server load might have been too high at the time of the poor experience; the server might need more resources (e.g. adding another server/web server to the cluster).
Check whether any background job was running on the server at that time.
It is important to check the logs and the schedules of batch jobs to determine what was running at that time.
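A minimal sketch of that check on a typical Linux box (log locations are the Debian/Ubuntu defaults):

    grep CRON /var/log/syslog         # which cron jobs fired, and when
    crontab -l                        # the current user's schedule
    ls /etc/cron.d /etc/cron.daily    # system-wide schedules
    uptime                            # load averages over the last 1/5/15 minutes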
Hope this helps.
Normally the user takes the page loading time as the measure of whether the site is slow. But if you really want to know what is taking the most time, you can open the browser's developer tools by pressing F12. If your browser is Chrome, click on Network and see which calls your application is making and which take the longest. If you are using Firefox, you need to install Firebug; once you have it, again press F12 and click on Net.
One reason could be that the user's role is different from yours. You might have, say, administrator privileges (something like a superuser role), and the code might simply allow everything for such a role, meaning it doesn't do much conditional checking of what is allowed or not. Sometimes it's considerable overhead to fetch all of a user's privileges and evaluate those checks; of course, it depends on how the authorization is implemented. That means the page might be slow only for specific roles. Hence, you should find out the user's role and see whether that is the reason.
It could simply be an issue with the connection of the person visiting your site, or it's possible it was a temporary issue and by the time you checked your site, everything was dandy. You could check your logs or ask your host whether there was an issue at the time the slowdown occurred.
This is usually a memory issue, and it can be resolved by increasing the heap size of the web server hosting the application. If the application is running on WebLogic Server, the heap size can be increased in the "setEnv" file located in the application home.
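For illustration, raising the JVM heap comes down to the standard -Xms/-Xmx flags; on WebLogic these are usually exported from the domain's setEnv/setDomainEnv script (the file name and the sizes below are assumptions, check your installation):

    # In the domain's bin/setDomainEnv.sh (the "setEnv" file mentioned above)
    # -Xms is the initial heap, -Xmx the maximum; the sizes are illustrative only
    export USER_MEM_ARGS="-Xms1024m -Xmx2048m"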
Good luck!
Michael Orebe
Though your question is quite clear, web site optimisation is a very extensive subject.
The majority of the popular web development frameworks are, for some reason, extremely processor-inefficient.
The old-fashioned way of developing n-tier web applications is still very relevant and is still considered best practice according to the W3C. If you take a little time to read the source code structure of the most popular web development frameworks, you will see that they run much more code at the server than is necessary.
This may seem like a simple answer, but the less code you run at the server and the more you run at the client, the faster your servers will work.
Sometimes contrasting framework code against the old-fashioned way is the best way to get an understanding of this. Here is a link to a fully working mini web application that follows W3C best practices and runs the minimum amount of code at the server and the maximum at the client: http://developersfound.com/W3C_MVC_EX.zip This codebase is also MVC compliant.
This codebase comes with a MySQL database dump, PHP, and client-side code. To see this code in action you will need to restore the SQL dump to a MySQL instance (the dump came from MySQL 8 Community) and add the user and schema permissions found in the PHP file (conn_include.php), setting the user to have execute permissions on the schema.
If you contrast this codebase against the most popular web frameworks, it will really open your eyes to just how inefficient those frameworks are. The popular PHP frameworks that claim to be MVC frameworks aren't actually MVC compliant at all, because they rely on embedding PHP tags inside HTML tags or vice versa (considered very bad practice according to the W3C). Also, most popular Node frameworks run far more code at the server than is necessary. Embedded tags also stop asynchronous calls from working properly unless the framework supports AJAX dumps, as Yii 2 does.
Two of the most important rules to follow for MVC compliance are: never embed server-side tags (such as PHP tags) in HTML tags or vice versa (unless there is a very good excuse, such as SEO), and religiously never write code to run at the server if it can run at the client. Also, true MVC is based on tier separation, whereas the MVC frameworks are based on code separation. True MVC compliance is very processor-efficient. Don't get me wrong, MVC frameworks are very useful for a lot of things, but if you're developing a site that is going to get millions of hits they are quite useless, or at least they will drive your cloud bills so high that it will really eat into your company's profits.
In summary, frameworks don't give you much control over what code runs at the client or the server and are very inefficient, but you can get prototypes up and running quicker with less code.
In contrast, the old-fashioned way takes a bit more elbow grease, but you have complete control over what runs at the server and what runs at the client.
As an additional bit of optimisation advice, avoid pass-through queries and triggers and opt for stored procedures instead. Historically, stored procedures didn't yet exist when MVC emerged as a paradigm, but they definitely increase the separation of concerns between the tiers and are much more processor-efficient.
Hope this advice helps.
I'm just starting out with Ehcache, and it seems pretty good so far. I'm using it in a simple fashion to speed up reads against a database, but I wonder whether I can also use it to keep the application up if the database is unavailable for short periods. (Update: my context is an application with high-availability modules that only read from the database.)
It seems like I could do that by disabling expiry in the event of a database read problem, and re-enabling it when reads work again.
What do you think? Is that a reasonable approach, or have I missed something? If it's a fair approach, any tips on how best to implement it are appreciated.
Update: Ehcache supports a dynamically configurable option to mark a cache as 'eternal' (and to unset it again). This seems to do what I need.
Interesting question - usually, the answer would be "it depends".
Firstly, if you have database reliability problems, I'd invest time and energy in fixing them, rather than applying a bandaid solution.
Secondly, most applications need both reading and writing to work - it doesn't seem to make sense to keep your app up for reads only.
However, if your app has a genuine "read only" function, and there's a known and controlled reason for database down time (e.g. backups), then yes, you can use your cache to keep the application up and running while the database is down. I would do this by extending the cache periods, rather than trying to code specific edge cases. For instance, you might have a background process which checks whether the database is available and swaps in a different configuration file when there's trouble.
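A sketch of that background process as a cron-driven shell script (everything here is an assumption: the mysqladmin reachability check, the config file names, and that your app notices the swapped Ehcache config):

    #!/bin/sh
    # If MySQL answers, keep the normal cache config; if not, swap in one
    # whose caches are marked eternal so entries never expire while the DB is down.
    if mysqladmin -h db.example.com ping >/dev/null 2>&1; then
        cp /etc/myapp/ehcache-normal.xml /etc/myapp/ehcache.xml
    else
        cp /etc/myapp/ehcache-eternal.xml /etc/myapp/ehcache.xml
    fi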
I have a couple of different projects running at the moment, some PHP apps and a few WordPress instances, all currently kept at a web hosting company. The contract period is about to end, and I'd be lying if I said I hadn't seriously considered making the switch to a VPS in the cloud, with the prices getting really attractive.
I am totally in love with the idea of being able to turn performance up or down as demand increases or goes away, and thereby cut costs.
With my background as a PHP developer, and only a little Linux (Ubuntu) knowledge, I am thoroughly concerned about security if I run my own VPS.
Sure, I am able to install things and get them running with my current knowledge (and some help from Google), but is it realistic nowadays to expect that my server (LAMP, really) will stay secure running out-of-the-box software and keeping it up to date?
Thanks
Maintaining your server is just one more thing to worry about, and if you're a developer, your focus should probably be on development. That said, it needs to make financial sense to go the managed route. If you're just working on toy projects (I've got a $20/month VPS that I use for my personal projects and homepage, and it's pretty hands-off) or if you're just getting off the ground, VPSes have the great advantage of being cheap and giving you lots of control of your environment. You can even mitigate some of the risk by keeping aggressive backups, since it's easy to redeploy a server quickly.
But, if you get to the point where it won't affect your profitability to do so, you probably should seriously consider getting someone else to take care of infrastructure for you either by buying managed hosting services or hiring someone to do it for you. It all depends on what you can afford to lose if you get rooted and how much time you can afford to invest in server management and recovery as opposed to coding.
I wouldn't. We did the same thing because non-managed VPSs are sooo cheap, but unless you really need to install applications or libraries that are not part of standard shared-hosting setups, in my experience (being a pure developer as well) the time spent is never worth it.
Unless, of course, it is your own tiny blog or you just want to play around.
But imagine you (or whatever automation you use) update PHP, and for some reason it fails (or worse, you render your current installation unusable). Are you good enough to handle this? And if so, how long will it take you? Do you have a friend at hand who can help?
We, as a small company, are getting rid of our VPSs step-by-step and moving back to our reseller package, hosted at a good hosting provider.
Good question, though.
As for security, I have successfully used Amazon EC2 for a number of things. It's not the cheapest around, but it's quite comprehensive: shared data stores between instances, connection to S3, running hosts in different hosting centers, grouping hosts in different clusters, and so on.
They have a firewall built in, where you can turn everything off except, say, TCP traffic on port 22 for SSH and port 80 for web. That, combined with something like Ubuntu, where you can easily run updates without worrying much about breakage, is probably all you need from a security point of view.
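On the Ubuntu side, the same "everything off except 22 and 80" policy is a few ufw commands (ufw ships with Ubuntu):

    sudo ufw default deny incoming   # drop all inbound traffic by default
    sudo ufw allow 22/tcp            # SSH
    sudo ufw allow 80/tcp            # web
    sudo ufw enable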
You need to consider cloud computing as a statement of availability, not cost. You can be seriously surprised by the cost at the end.
I have already opted for VPS hosting. Good VPS hosting is costly; these days you may find a dedicated host that's cheap compared to a VPS. Have a look at hivelocity.com – I like their services.
About security: most VPS hosting companies take care of security for you at the infrastructure level, and some may run antivirus software on files. On a dedicated host you need to take care of it yourself or contract managed support services: a tradeoff.
A LAMP server is cheap everywhere. You can hire a private VPS and have some security, and you may count on services like DNS hosting too – that is a hassle to configure yourself. A VPS can be your first step while you're doubtful and have no hosting experience. Later, when you discover the advantages of having your own server, you'll migrate straight to a dedicated server.
What is acceptable from a security standpoint will differ depending on the people involved, what you want to secure, and the requirements of the product/service.
For a development server I usually don't care so much, so I do some basic securing of the server and then don't pay attention to it again. My main concern is someone getting a session and using my cycles to run something. I don't normally care about IP, so that's not a concern for me.
If I'm setting up a box that has to meet Sarbanes-Oxley, Safe Harbor, or other PII/PCI standards, I would probably go managed, just because I don't want the additional security workload.
Somewhere in between, it's a judgment call on whether I want to commit the time required to secure the server to the level I want. If I don't want to do it myself, I pay someone to do it.
I would be careful about assuming you're getting a certain level of security just because you're paying someone to manage your server. I've come across plenty of shops where security is really an afterthought.
If I understood you correctly, you are considering a move from a web host to a VPS, and wonder if you have the skills to ensure the OS remains secure now that it's under your control?
I guess it's an open-ended question. You are moving from a managed environment to an unmanaged environment, and whether you maintain your environmental security is up to you. If you're running your own server then you need to make sure that default passwords aren't in use (for the database, OS and any services on top), patches are quickly identified and applied, host firewalls are configured properly and suspicious activity alerts are immediately sent to you. Hang on, does your current web host do any of this for you? Without details about your current web host and the planned VPS, you are pretty much comparing apples to oranges.
BTW, I would be somewhat concerned about my LAMP server security, but frankly I would be much more concerned about development errors (SQL injection, XSS) and the packages running on top of my server (default passwords + dev errors).
For a LAMP stack, I would probably not do it. It would be a different case if you were using a Platform-as-a-Service provider like Windows Azure: in my own experience there is minimal operational overhead, and you just upload the app and it runs in a VM (and yes, it supports PHP).
But for Linux there are no such providers that I know of, which means you will have to manage the operating system, the app frameworks, the web server, and anything else you install on the instance. I wouldn't do it myself. I would weigh hiring a person with the relevant experience to do this for me against the cost of managed services from the VPS provider, and go with one of those two.
Rather than give you advice about what you should do, or tell you what I would do, I'm just going to address your question "is it realistic nowadays to expect that my server (LAMP, really) will stay secure by running out-of the box stuff and keeping it up-to date?" The answer to this question, in my opinion, is basically yes.
dietbuddha is right, of course: what constitutes an acceptable level of security depends on the context, but for all but the most security-sensitive purposes, if you're using a current (i.e. supported) distro, with sane defaults, and keeping up with the security updates, then you ought to be fine.
I have two VPSs, each currently running Ubuntu 10.04 Server. On one of them, I spent some time installing and configuring tiger and tripwire, and taking various other security measures. On the other, I simply installed fail2ban, set security updates to automatic, and left it at that. They've been running for a few years now, and I've had no problems with either.
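For the record, that second, low-effort setup is only a couple of commands on Ubuntu (package names are the standard ones):

    sudo apt-get install fail2ban unattended-upgrades
    # turn on automatic security updates
    sudo dpkg-reconfigure -plow unattended-upgrades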
You should do it for fun and for learning purposes. Other than that, don't; you're wasting your own time and a lot of other people's time.
I say this because I've wasted serious time setting up an EC2 instance to host my SVN server and a few other things. I mean, I loved setting everything up and messing with the server; I learned a lot, especially because I'd never done anything with a Linux server before. However, looking back, I wasted a ton of time and had to keep bugging @Jordan S. Jones for help.