Problems with my TBDev code - uTorrent

Hi, I started a tracker a few days ago and I have two problems. I don't know exactly which source code it is; all I know is that it's a modified TBDev. I got the script from a friend who went off the net some time ago...
The first problem is that I am getting Error 500 on uTorrent versions 2.2 and above; all other versions are working perfectly.
I have attached my announce here.
The second problem is that in my browse.php the seeder/leecher counts are showing incorrectly.
I'll give an example: for a torrent with 11 seeders and 2 leechers,
it shows only 7 seeders, and one of them is a dupe, so it's actually 6...
My tracker is hosted on a VPS, but I also have both problems on my PC using XAMPP.
If any other file is needed, please let me know so I can attach it here.
Any help would be very much appreciated.

There is a list of clients, and the versions of each, that are allowed to access your site. It is (depending on the TBDev version) in one of the folders inside the include folder.
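For reference, here is a hypothetical sketch of the kind of user-agent whitelist you'll find in TBDev-derived trackers; the file name, array name, and matching logic all vary by mod, so treat every identifier below as an assumption and check your own include folder:

<?php
// include/allowed_clients.php -- hypothetical file and variable names
$allowed_clients = array(
    'uTorrent/2.0',   // prefix-matched against the announce User-Agent
    'uTorrent/2.2',   // adding newer prefixes here is what stops 2.2+ being rejected
);

// announce.php -- a typical prefix check
$agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
$ok = false;
foreach ($allowed_clients as $client) {
    if (strpos($agent, $client) === 0) {
        $ok = true;
        break;
    }
}
if (!$ok) {
    // well-behaved mods bencode a failure reason; a mod that bails out
    // uncleanly here is one way to end up with the Error 500 you are seeing
    die('d14:failure reason18:Client not allowede');
}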
Second, the problem with seeders/leechers is due to timing. TBDev only updates the counts once every 30 minutes by default. You can change this, but I would not suggest it. It will also show ghost connections if someone stopped seeding/leeching and then started again before the 30-minute update. I would worry about setting up your client access first; the rest should work itself out.
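If you do decide to change the interval anyway, in stock TBDev it is a single config line (the variable name and location may differ in your mod):

// include/config.php -- stock TBDev; name varies between mods
$announce_interval = 60 * 30;   // 30 minutes by default; the peer counts on browse.php can lag behind by up to this much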

Related

Error 502 with Laravel when exporting to Excel on Azure Web App Linux

I have a Laravel app running on the Azure Web App Linux service, all running nice and smoothly until I hit a feature that exports a query to an XLS for download. Then I receive ERROR 502.
In my local environment it works normally; I can export the query to XLS with no issues. It is not a large query, just a few rows.
In the same app, I have a function that exports to XLS just 1 row at a time and works fine, so it only fails when I go for a larger(ish) query.
Any ideas? I have tried scaling up, restarting the app and Apache, and changing .ini settings (via .htaccess) to increase the execution time.
There is no trace in the logs either; there is something about the container crashing, but I cannot tie it to this particular error.
OK, managed to solve it... it was not straightforward at all. It has to do with the size of the query: even though it is not big by any means (a couple of thousand rows max), raising the memory limit to 1024M or beyond still ended in a 502 error. I decided to try something different and moved from Laravel Excel to Fast-Excel, which is less featured, but man... it works. Now everything downloads perfectly. In case you are having this issue, give fast-excel a try.
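For future readers, a minimal sketch of what the swap looks like; the model, route, and file names are invented, and the calls follow the rap2hpoutre/fast-excel README:

<?php
// composer require rap2hpoutre/fast-excel
use Rap2hpoutre\FastExcel\FastExcel;
use App\Order;   // hypothetical model

// routes/web.php -- hypothetical route
Route::get('/export', function () {
    // streams the collection straight to the browser instead of
    // building the whole spreadsheet in memory first
    return (new FastExcel(Order::all()))->download('orders.xlsx');
});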

Rstudio AMI linkDropbox() gives no link

It is not the first time I have used the RStudio AMI from Louis Aslett on Amazon EC2, but this time the linkDropbox() function is driving me crazy and I could not find any help on Google: it just doesn't give any link!
> linkDropbox()
Launching Dropbox client, please wait ...
Dropbox launched. Please visit the following URL in your browser now to link the server to your Dropbox account:
isn't
waiting (please do this now or linking may fail) ...
I don't know what this "isn't" is doing here, but there should be a link. Has anyone had the same problem?
EDIT
It appears that the function gets stuck in the loop and returns:
"This computer isn't linked to any Dropbox account..."
The best solution I came up with was to launch a new instance from a previous working image/snapshot. Obviously this is not ideal if your existing instance has many packages/versions you need, but when I created a new AMI from Louis Aslett's site (after having trouble restarting an old instance and updating Dropbox), I could not replicate the working behavior from February 2018 (except by using a snapshot of that previously functioning AMI). I am writing this in September 2018, so it is possible that the AMIs on Aslett's site will be updated and this won't be a problem anymore.
Good luck, future readers...
I had this problem, and realized that the security settings for the instance didn't allow outbound traffic on my current IP. I updated my security settings, ran unlinkDropbox(), and then ran linkDropbox() again, and it worked this time.

CodeIgniter error - unable to connect to database using the provided settings

I have a CodeIgniter setup that has been running fine for the past two months, but recently I keep getting:
CodeIgniter error- unable to connect to database using the provided settings
I've recently added a new domain that has a landing page for the database login (zPanel), but I don't see how that could have caused a problem. Maybe the page keeps getting hit by directory attacks or something, but I'm not sure.
Is there a way to check whether this is the problem through the logs? I'm at a dead end with this problem, as when I restart the server (DigitalOcean) it works fine again.
Really not sure. If anyone else has had a similar problem, I'd love to hear your solution.
Thanks.
I think your MySQL server is going down, so CodeIgniter can't connect with the provided database settings.
Please log in over SSH and check the processes with the top command. See what is using resources, RAM or CPU.
Also check your MySQL conf settings and make sure everything is filled in; values left empty can cause a lot of problems.
An example:
http://www.maxwhale.com/how-to-optimize-mysql-for-1gb-memory-vps/
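And on your question about the logs: CodeIgniter can surface the real connection error itself. A minimal sketch, assuming the stock CodeIgniter 2.x config layout (credentials are placeholders):

<?php
// application/config/database.php -- placeholder credentials
$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'ci_user';
$db['default']['password'] = 'secret';
$db['default']['database'] = 'ci_app';
$db['default']['dbdriver'] = 'mysqli';
$db['default']['db_debug'] = TRUE;   // show the underlying MySQL error instead of the generic message

// application/config/config.php
$config['log_threshold'] = 1;        // errors get written to application/logs/, so you can see when MySQL drops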

Laravel 3 APC session lifetime is ignored

I have a Laravel 3 project running on Plesk 11.5 on CentOS 4 (dedicated). It used to be on an IIS server, but I had to migrate it to Plesk, since the company I'm working for is dumping the IIS server. Everything seemed to be running smoothly until I logged out of my application: at first I got a WSOD (white screen of death), then I enabled PHP error reporting, and this is the error that was displayed:
Fatal error: Cannot override final method Laravel\Database\Eloquent\Model::sync()
This is a very strange error, since I have no method called sync in any of my classes, and needless to say there was no such error while the project was running on IIS.
I tried several different combinations of session/cache drivers; the only one that seems to work is the APC driver.
When I have the APC driver enabled for cache and session, the above fatal error is not displayed and everything works correctly. The PROBLEM is that I have given the session lifetime a value of 60 (minutes), but it is completely ignored, meaning that the user is logged out after 2 or 3 minutes.
I've been to the Laravel IRC channel with this issue; some people kindly suggested tweaking the APC memory and TTL (time to live) settings, but with no luck unfortunately :(.
Here are some APC settings from my server configuration:
apc.gc_ttl 3600
apc.shm_size 1024M
apc.shm_strings_buffer 32M
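For completeness, the user-cache TTLs that the IRC folks had me tweak look like this (illustrative values; changing them did not fix it for me):
apc.ttl       = 7200   ; system/opcode cache entries
apc.user_ttl  = 7200   ; user cache entries (apc_store) -- where APC-backed session data should live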
I desperately need help if anyone has any to offer! This is for a live running project and i need to find a solution asap.
I had the exact same issue and couldn't find a solution. I was going round in circles trying to figure out what on earth was going wrong.
I finally came across this post:
Fatal error: Cannot override final method
You need to make sure that the apc.include_once_override setting is set to 0. In your apc.ini file, set it like so:
apc.include_once_override=0
This error seems to be caused by caching of included classes.
I solved the problem after looking around the Plesk panel.
The problem was that I had "Run PHP as FastCGI application" selected.
I switched to "Run PHP as CGI application" and everything now works perfectly.
I'm not sure what the exact source of the problem was, only that FastCGI triggered the error.

Magento - Upgrade website 1.4 to 1.6.1.0 issue

I am upgrading an existing Magento website from 1.4 to 1.6.1.0.
I dumped the existing database, copied all the required custom extensions into a blank Magento 1.6.1.0 install, and after running the installation got the following error:
Error in file:
"/app/code/core/Mage/Customer/sql/customer_setup/mysql4-upgrade-1.5.9.9-1.6.0.0.php"
- SQLSTATE[HY000]: General error: 1025 Error on rename of './sales_flat_order' to './#sql2-3af-a7' (errno: 152)
How can I fix this issue?
Upgrading Magento is a very painful process. I suggest you import/export the data from the old shop to the new one.
I just went through the same heartburn. I found that letting the page try to load until the script hit an error or timed out, and then trying again, eventually worked. The upgrade script will attempt to start where it last stopped.
Before you do that, make a backup of your site and database. If it continually errors in the same spot, restore and try again.
These tips may help improve the odds of a quicker success:
Put the site in maintenance mode (by adding the maintenance.flag file to the root directory) before starting.
Increase server and PHP timeouts by a very large amount (3-5 minutes); a sketch follows this list.
Clean up temp and log database tables that you don't care about (carefully; everybody has different needs here).
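On the timeout point, a minimal sketch assuming you can edit index.php (values are illustrative; any web server or FastCGI timeout has to be raised separately in its own config):

<?php
// index.php, before Mage::run() -- illustrative values only
set_time_limit(300);              // allow each upgrade request up to 5 minutes of PHP time
ini_set('memory_limit', '512M');  // upgrade steps on big tables like sales_flat_order eat memory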
I tried several different methods and that is the only thing that worked. It took probably 10 reloads (waiting for a 3-minute timeout each time). In the end, everything upgraded correctly. No matter what method you choose, if you want to keep your store data, you will have to run the bulky DB upgrade scripts, and they take forever.
I had similar issues when updating from 1.4.2 to the latest version.
I built a custom maintenance script, included from my index.php, that only allows access from my IP (a sketch follows below). But the update process via the shell replaced my index.php, so the site became accessible to everyone.
That was why the final SQL scripts were run by several clients and caused errors like "can't move table" etc., because those steps had already been done.
--> Summing it up: make sure the site gets called only once until the upgrade has succeeded!
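The gate I used looked roughly like this (the IP address is a placeholder; remember to re-add the block if the updater overwrites index.php):

<?php
// top of index.php -- placeholder IP; everyone else gets a 503
$allowed = array('203.0.113.10');
if (!in_array($_SERVER['REMOTE_ADDR'], $allowed)) {
    header('HTTP/1.1 503 Service Unavailable');
    exit('Down for maintenance.');
}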
The very best way to migrate Magento, in my opinion, is to import your entire DB into an environment that already has your new Magento. Then Magento will run all the scripts and updates and keep your data.
You may find some problems in the upgrade scripts, but it's easier to fix those than to fix model/EAV problems on the fly.
I succeeded with this approach when migrating from 1.4.1 to 1.8.1.
