I have installed Apache Airflow 2.0 on a new EC2 instance (r5a.xlarge: 4 vCPUs, 32 GB RAM, SSD disk).
The Airflow webserver and scheduler are running stably, but the web UI responds slowly even with 0 DAGs executing.
The instance lives in N. Virginia, and the Postgres database (Aurora RDS) is running there as well.
Since the web response is slow (~5 seconds for a page to load), I tried scaling the number of webserver workers to 7, but that change did not help as expected.
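For reference, the webserver's gunicorn worker count lives in airflow.cfg (or the matching AIRFLOW__WEBSERVER__* environment variables). These option names exist in Airflow 2.0; the values below are illustrative, and on a 4-vCPU box going far past roughly twice the CPU count rarely helps:

```ini
[webserver]
workers = 8                     # gunicorn workers serving the UI
worker_refresh_interval = 30    # seconds between staggered worker restarts
web_server_worker_timeout = 120 # seconds before a hung worker is killed
```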
Whenever I click any link, e.g. the Admin > Variables link, the browser has to wait about 5 seconds to receive a response from http://ec2-#-###-###-###.compute-1.amazonaws.com:8080/variable/list/
I would like to hear your suggestions. Is there anything I am missing?
Is it possible to split the webserver and scheduler onto different instances?
Is this just a problem with the UI, separate from the scheduler and executors?
Thanks a lot for any tip, suggestion, or fix!
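On the split question: the webserver and scheduler only coordinate through the metadata database, so they can run on separate instances that point at the same Postgres. A sketch, with a placeholder connection string:

```shell
# Both machines share one metadata DB (connection string is a placeholder):
export AIRFLOW__CORE__SQL_ALCHEMY_CONN="postgresql+psycopg2://user:pass@aurora-host:5432/airflow"

# Instance A:
airflow webserver --port 8080
# Instance B:
airflow scheduler
```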
Related
What happens is that a web service on my IIS server significantly increases its RAM usage.
It normally works in the range of 200-700 MB, but for a few days now it has suddenly started using 3, 4, 5 GB of RAM.
As a stopgap, so users are not blocked, I kill the service from Task Manager and it goes back to normal, but some time later it increases again:
[Task Manager screenshot]
I used Performance Monitor and saw that this is the part that increases:
[Performance Monitor screenshot]
I really don't know how to solve this; I'm stuck. Can anyone help me?
There are a lot of reasons your IIS worker process could be using a lot of CPU. To start, look at which web requests are currently executing in IIS; that may help you identify the issue and troubleshoot the worker process.
Via the IIS worker processes view
Via the IIS management console, you can view the running worker processes: which IIS application pool is causing high CPU, and the currently running web requests. After selecting "Worker Processes" from the main IIS menu, you can see the currently running IIS worker processes. If you double-click a worker process, you can see all of its currently executing requests.
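The same information is available from the command line with appcmd; a sketch, run on the IIS server (assumes the default inetsrv path; /elapsed filters to requests that have been running longer than the given number of milliseconds):

```shell
%windir%\system32\inetsrv\appcmd list wp
%windir%\system32\inetsrv\appcmd list requests /elapsed:5000
```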
I have a small Ruby on Rails application deployed on an Amazon EC2 instance using Capistrano. The instance is a t2.small with nginx installed, and a local Postgres DB is installed on the same server. It is a development instance on which I do frequent deployments. Recently, whenever I do a Capistrano deployment, CPU utilization spikes enormously: it is usually between 20-25%, but during deployment it goes up to 85%, which makes my instance unresponsive, and I have to do a hard restart on the server to get it working again.
I don't know why this is happening or what I should do to solve it. Load balancing and auto scaling make no sense in this scenario, because the issue occurs only during deployment.
I have attached a screenshot of my server's CPU utilization; the two high peaks are both from when I performed a cap deployment.
The only solution I can think of is increasing the instance type, but I want to know what other options I have. Any help is appreciated; thanks in advance.
If this is an interim spike (only during deployment) and you don't need high CPU during normal application usage, you may try the t2.unlimited approach.
If t2.unlimited can't support your need, I think increasing the instance type is the only option left.
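The reason a t2 becomes unresponsive under sustained load is its CPU-credit model: the instance earns credits at a fixed rate and spends one credit per vCPU-minute at 100%, and once the bank is empty it is throttled to the baseline (t2.unlimited lets you pay to burst past that). A rough model of the mechanics, using the t2.small figures (12 credits earned per hour, 288 banked at most) as assumptions to sanity-check against AWS's documentation:

```python
# Illustrative t2 CPU-credit model (t2.small figures assumed below).
EARN_PER_HOUR = 12   # credits a t2.small earns each hour (baseline = 12/60 = 20%)
MAX_BALANCE = 288    # 24 hours' worth of accrual can be banked

def balance_after(hours, cpu_utilization):
    """Credit balance after running `hours` at a given average CPU utilization.

    One credit = one vCPU at 100% for one minute, so an hour at
    `cpu_utilization` (0.0-1.0) spends 60 * cpu_utilization credits.
    Starts from a full bank; the balance is clamped to [0, MAX_BALANCE].
    """
    balance = MAX_BALANCE
    for _ in range(hours):
        balance += EARN_PER_HOUR - 60 * cpu_utilization
        balance = min(max(balance, 0), MAX_BALANCE)
    return balance
```

At the 85% deployment load described above, the bank drains by about 39 credits per hour, so a long build eventually pins the instance at its 20% baseline, which matches the "unresponsive until restarted" symptom.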
After endless searching, and countless hours of trial and error with different settings, I've come up completely empty on why my server is performing so slowly.
Here are the basics. I've switched hosting from a local server (CF8 running on Ubuntu) to a better-equipped hosting company (CF10 running on Windows Server 2008). Both servers ran Xeon processors. My old Linux server had 8 GB of RAM; the Windows server runs with 9 GB. Both are 64-bit. The problem I am having is with a very simple task: initial CFC creation.
I have a custom-built CMS that runs two sets of CFCs (application and session scoped). For the general public, only application-scoped CFCs are created; when a user logs into the site, additional session-scoped CFCs are created (anywhere from 8 to 16, depending on the number of modules the site contains).
On the Linux box this worked great: fast, with no issues. However, since switching to the Windows server and CF10, the creation process has become dreadful. When I log into a site, authentication is done and the CFCs are created. The first login can take anywhere from 15 to 50 seconds. When I log out, the session-scope variables are all killed. If I log in a second time within a short period, my login takes about 1 to 5 seconds, depending on server load.
My initial thinking is that it's a memory allocation issue, but I'm running out of ideas. Here are some of my settings:
JVM: 1.7.0_40
JVM heap size: 1280 MB
PermSize: 256m
Simultaneous request limit: 100
CFC request limit: 60
cfthread pool size: 50
Trusted cache: currently off
I've set worker threads in IIS to 5 per application. Each worker process runs at about 12,000 K.
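For reference, the heap settings above live in ColdFusion's jvm.config; a sketch with illustrative values, on the assumption that first-time CFC creation is bumping into the 1280 MB ceiling (measure with VisualVM or similar before committing to numbers, and note that turning the trusted cache on in CF Admin also removes repeated file checks once templates are stable):

```
java.args=-server -Xms1280m -Xmx2048m -XX:MaxPermSize=512m
```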
If anyone could help, it would be greatly appreciated.
I used Elastic Beanstalk to manage/deploy my .NET MVC 3 application on an EC2 micro instance (613 MB of memory). It's mainly a static site for now, as it is in beta, with registration (including email confirmation) and some error logging (ELMAH).
It was doing fine until recently; now I keep getting notifications of CPU utilization greater than 95.00%.
Is a micro instance with 613 MB of memory not enough to run an MVC application in production?
Additional info: Windows Server 2008 R2, running IIS 7.5.
Thanks!
I've tried running JetBrains TeamCity (which uses Tomcat, I think) on a Linux EC2 micro instance, and there wasn't enough memory available to support what it needed.
I also tried running a Server 2008/2012 box on a micro instance, and it was pointless: it took minutes to open anything.
I think you'll find that running Windows on one of those boxes isn't really a viable option unless you start disabling services like crazy and get really creative with your tweaking.
A micro instance is clearly not enough for Production.
The micro instances have a low I/O limit, and once this limit is reached (per month, I think), all later operations are throttled.
So I advise you to use at least a small instance for production, and keep your micro for your dev/test/preprod environments!
Edit: I got this info from an Amazon guy.
Make sure your load balancer is pinging a blank HTML file. I got that message because it was pinging my home page, which performed DB loads. When I set it to ping a blank HTML file, it ran smoothly.
This might also belong on Server Fault; it's kind of a combination of server config and code (I think).
Here's my setup:
Rails 2.3.5 app running on JRuby 1.3.1
Service-oriented backend over JMS with ActiveMQ 5.3 and Mule 2.2.1
Tomcat 5.5 with opts: "-Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"
Java JDK 1.5.0_19
Debian Etch 4.0
Running top, every time I click a link on my site I see the Java process's CPU usage spike. If it's a small page, it's sometimes just 10%, but on a more complicated page my CPU goes up to 44% (never above; not sure why). In that case a request can take upwards of minutes, while my server's load average steadily climbs to 8 or greater. This is just from clicking one link that triggers a few requests to some services, nothing too complicated. The Java process's memory usage hovers around 20% most of the time.
If I leave it for a bit, the load average goes back down to nothing. If I click a few more links, it climbs back up.
I'm running a small Amazon instance for the Rails frontend and a large instance for all the services.
Now, this is obviously unacceptable. A single user can spike the load average to 8, and with two people using it, it maintains that load average for as long as we use the site. I'm wondering what I can do to inspect what's going on; I'm at a complete loss as to how to debug this. (It doesn't happen locally when I run the Rails app through JRuby outside the Tomcat container.)
Can someone enlighten me as to how I might inspect my JRuby app to find out how it could possibly be using such huge resources?
Note: I noticed this a little before, seemingly at random, but now, after upgrading from Rails 2.2.2 to 2.3.5, I'm seeing it all the time, and it makes the site completely unusable.
Any tips on where to look are greatly appreciated. I don't even know where to start.
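One general way to see where the time goes is thread-level inspection on the Tomcat box; a sketch, assuming the JDK's bin directory is on PATH (jstack shipped with JDK 5 on Linux) and with <pid>/<tid> as placeholders:

```shell
top -H -p <pid>            # per-thread view of the java process; note a hot TID
jstack <pid> > /tmp/stacks.txt
printf '%x\n' <tid>        # jstack reports native thread ids (nid) in hex
grep -n "nid=0x<hex-tid>" /tmp/stacks.txt
```

Matching the hot thread's hex id to its stack trace usually shows whether the time is going into Ruby code, JMS dispatch, or garbage collection.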
Make sure there is no unexpected communication between Tomcat and anything else. I would check first that:
the ActiveMQ broker doesn't communicate with other brokers on your network (by default, an AMQ broker starts in OpenWire auto-discovery mode);
JGroups/multicast in general does not communicate with anything on your network.
This unnecessary load may result from processing messages coming from another application.
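To rule out broker-to-broker chatter, you can pin the broker to an explicit connector and make sure no network connectors or multicast discovery URIs are configured; a sketch of an activemq.xml fragment (AMQ 5.x syntax, broker name illustrative):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localBroker">
  <!-- Explicit endpoint only: no <networkConnectors> section and no
       multicast:// discovery URI, so this broker will not find or join
       other brokers on the network. -->
  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://127.0.0.1:61616"/>
  </transportConnectors>
</broker>
```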