I had run into a few problems getting Ruby fully installed and working so that I could start building. I finished installing and tested my server at localhost:3000, and it came up fine. Then the next day, when I tried to go back to it, it wouldn't connect and I could not figure out why.
First of all, there's no specific way for us to address this issue: you haven't mentioned anything about what happened between the server working and malfunctioning. It could be as simple as the server not being started. Is the server currently running? If it is, then you should get some sort of response when you try to hit localhost:3000.
Make sure your server is running with rails s and then let us know.
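If you want a quick sanity check from Ruby itself that something is answering on port 3000, a throwaway sketch like this will tell you (purely illustrative; the host and port are assumed from your description):

    require 'socket'
    begin
      # try to open a TCP connection to the port the Rails server should be on
      TCPSocket.new('127.0.0.1', 3000).close
      puts 'something is listening on port 3000'
    rescue Errno::ECONNREFUSED
      puts 'nothing is listening -- the server probably is not running'
    end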
Perhaps you can try resetting your DB with a rake db:reset if that doesn't work.
Besides that, there really isn't anything we can do on this end without more information.
I have a Scrapyd server on Heroku. It works fine, and the spider runs and connects to the databases without any issue.
I have set it to run every day via the scheduler in the ScrapydWeb UI.
However, every day the spider seems to disappear, and I have to re-run scrapyd-deploy from my local machine to push the spider to the server again before it can be scheduled. It never runs anything past that single day, even though I have set it to run every day at a certain time.
Does anyone know what might be the problem?
I am not sure what kind of details people need to see to help me resolve this. Please do ask and I shall provide what I can.
I'm not sure if it's something I added to my code recently, but when I try to use Shotgun or Rerun to run my server, it will sometimes hang before it actually prints the "Listening on 0.0.0.0" line in Terminal.
I haven't seen this occur when I run my Sinatra server directly with ruby.
There are no errors popping up anywhere - it just seems to hang on starting or reloading after a save. It happens about 20% of the time, I'd say.
How can I figure out what is causing the issue? I would assume it's either A) something in my code, or B) something going on with my file system.
Why are you using rerun with shotgun? shotgun already reloads your code automatically when it changes.
If you're serving it with puma or something that doesn't refresh when you change the project, then rerun would be necessary.
If you're concerned with speed have a look at this: http://scorchedrb.com/docs/further_reading/code_reloading
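To make the earlier point concrete, here's a minimal setup of the kind being described; the file name and route are just placeholders:

    # app.rb -- a minimal Sinatra app; running it via `shotgun app.rb` already
    # reloads the code on every request, so wrapping that in rerun just layers
    # two reloaders on top of each other
    require 'sinatra'

    get '/' do
      'hello from a freshly loaded process'
    end

Run it with either shotgun app.rb (reload per request) or rerun 'ruby app.rb' (restart on file change), but there's no need for both at once.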
I have done something silly and written a script for a website that does an AJAX check every 2 seconds. In this case it's a WordPress site, and the script hits its admin-ajax.php file every 2 seconds. This essentially burned up all the CPU power of the server and made every site on the server run really slowly.
After a lot of detective work, I finally found the script and stopped it, so it doesn't happen on new loads of that website. But looking at my Apache log, I can see that it is still running in one browser somewhere.
Is there a way for me to stop that browser from making the AJAX call, or perhaps block it from my server? Or will I just have to wait until that browser is refreshed or closed?
Try using netstat or something similar over SSH to detect the IP and port of the unknown browser. You could also try rebooting the server so the client loses its connection.
PS: It's pretty hard to point you in the right direction without any logs or other evidence to go on.
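As an alternative to netstat, you can usually pin down the offending client straight from the Apache access log by tallying hits on admin-ajax.php per IP. A rough Ruby sketch; the log path is an assumption and will vary per vhost:

    # count admin-ajax.php requests per client IP in an Apache access log
    counts = Hash.new(0)
    File.foreach('/var/log/apache2/access.log') do |line|
      counts[line.split.first] += 1 if line.include?('admin-ajax.php')
    end
    # print the five noisiest IPs
    counts.sort_by { |_, n| -n }.first(5).each { |ip, n| puts "#{ip}  #{n}" }

Once you know the IP, you can deny it in your Apache config or firewall it until the stray browser gets closed.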
I'm trying to debug a very weird issue that is happening in our environments.
I have a Rails app running on ree-1.8.7-2011.12 that exhibits strange behavior when the ElasticSearch server it is connected to gets suspended (to suspend it, start the ES server in the foreground and then CTRL-Z it): the Ruby process hangs forever and never times out. It doesn't happen if the ES server is suspended before the Rails app connects for the first time; it only happens when the Rails app runs some queries and then the ES server stops answering requests.
Since I'm not that good with native code, I don't even know where to start trying to figure this out (and yeah, unfortunately we can't upgrade our Ruby for many reasons right now). Going into Activity Monitor and getting a sample of the process, I get this thread dump:
So it seems that it's locked at this curl_wait_for_resolv call.
I'm using the latest ElasticSearch and the Rubberband ES adapter for Ruby. This is reproducible on Mac OS and Linux.
Any hints on how to debug this, or how to make the requests time out correctly?
And the issue was in Rubberband; I have filed a pull request for it.
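For anyone hitting the same hang: since the block happens down at the libcurl level (the curl_wait_for_resolv call above), Ruby-level Timeout won't help; the timeouts have to be set on the curl handle itself. A hedged sketch using curb directly; how Rubberband wires these options through is not shown here, so treat the URL and values as placeholders:

    require 'curb'

    easy = Curl::Easy.new('http://localhost:9200/_search')
    easy.connect_timeout = 2   # seconds allowed for DNS resolution + TCP connect
    easy.timeout         = 5   # seconds allowed for the whole request
    begin
      easy.perform
      puts easy.body_str
    rescue Curl::Err::TimeoutError => e
      warn "ES did not answer in time: #{e.message}"
    end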
I want to start my Rails server in a background thread from within a Ruby script. I could use Kernel#system, but I want to be able to kill the Rails server when the thread is stopped. Is there a way to execute the Rails server using some Rails API call instead? I'm thinking it would be nice to be able to write something like Rails.run_server(:port => 3000, ...).
I'm on Windows Server 2008.
Check out the file gems/rails.x.x.x/lib/commands/server.rb. It looks like that's the starting point that script/server uses.
Since script/server is itself a ruby script, it stands to reason that you ought to be able to start a server by doing something similar to what's in server.rb. But I imagine you might have some difficulty getting your ruby environment right...
Note that I'm looking at rails 2.3.8 here, so if you're on 3.whatever your results will probably be different.
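As a rough, untested sketch of what that might look like on 2.3.x (boot the environment, then hand the Rails dispatcher to a Rack handler, which is roughly what server.rb ends up doing):

    # untested sketch for Rails 2.3.x -- boot the app, then serve the Rails
    # dispatcher with WEBrick in a background thread
    require File.expand_path('config/environment', Dir.pwd)
    require 'rack/handler/webrick'

    server = Thread.new do
      Rack::Handler::WEBrick.run(ActionController::Dispatcher.new, :Port => 3000)
    end

    # killing the thread later (server.kill) stops serving, but it doesn't give
    # WEBrick a clean shutdown -- which is part of the difficulty mentioned above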
I eventually decided to avoid any ickiness and start the rails server in its own process, as detailed in this post. (Being able to kill it plus its child processes consistently was the main blocker and the original reason I'd considered starting it in a thread instead.)
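For reference, a minimal sketch of that separate-process approach, assuming a Ruby where Process.spawn is available; the command line is just the stock script/server invocation:

    # launch the Rails server as a child process and remember its pid
    pid = Process.spawn('ruby', 'script/server', '-p', '3000')

    # ... exercise the app here ...

    # stop it when done; on Windows this terminates only the immediate child,
    # so any grandchild processes may need extra handling
    Process.kill('KILL', pid)
    Process.wait(pid)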