Arena 1v1 on TrinityCore 3.3.5a Rated Arena Queue

I decided to open a question here because this is a very complex issue for me, one that neither I nor my four developers can fix or reproduce.
Emulator: TrinityCore 3.3.5a
Database: latest TrinityCore 3.3.5a database
I successfully applied a patch that simply changes arena 5v5 to 1v1; the patch is very small and only changes the number of players required to join 5v5. The patch applies and compiles without warnings or errors. When testing on my local machine it works like a charm: it queues for 1v1 rated and unrated without any problems. However, when I run the same build on the dedicated server where my realm is hosted, the patch will not queue for rated 1v1 arena; it only allows the unrated (skirmish) queue. Testing 2v2 and 3v3, you can queue just fine, both rated and unrated.
We are stuck at the same place: the 2v2 and 3v3 queues are working as intended, but 1v1 simply will not queue rated. We are using the same core and database on our dedicated server.
If anyone can give me some assistance with this or point me in the right direction, it would be greatly appreciated.
Thank you

It's hard to help you this way. We need to take a look at that patch and also at your modified core files.

After looking at the arena 1v1 patch, one thing caught my attention: it doesn't have a hook to check the world config file for the arena points distribution setting (instant vs. weekly). So if your world config is set to distribute arena points, your rated 1v1 will never queue; that setting always has to be set to true/1.
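To make that suggestion concrete, here is a minimal, self-contained C++ sketch of the kind of check being described. It is not the actual patch code: in a real TrinityCore 3.3.5a core you would read the value through the core's own config accessors instead of re-parsing the file yourself, and the key name Arena.AutoDistributePoints is my assumption about which worldserver.conf setting is meant, so verify it against your revision.

```cpp
// Hedged illustration only, not the actual patch: a self-contained mock of the
// kind of hook the answer describes. In a real TrinityCore 3.3.5a build you
// would go through the core's own config accessors; the key name
// "Arena.AutoDistributePoints" is an assumption to verify against your revision.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Look up a "Key = value" entry in a worldserver.conf-style file.
bool readBoolSetting(const std::string& path, const std::string& key, bool fallback)
{
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line))
    {
        auto eq = line.find('=');
        if (eq == std::string::npos)
            continue;
        std::string k = line.substr(0, eq);
        // trim whitespace around the key
        k.erase(0, k.find_first_not_of(" \t"));
        k.erase(k.find_last_not_of(" \t") + 1);
        if (k != key)
            continue;
        int value = fallback;
        std::istringstream(line.substr(eq + 1)) >> value;
        return value != 0;
    }
    return fallback;
}

int main()
{
    // The hook the answer is asking for: refuse rated 1v1 queues unless the
    // points-distribution setting is enabled (set to 1).
    bool distribute = readBoolSetting("worldserver.conf", "Arena.AutoDistributePoints", false);
    if (!distribute)
        std::cout << "Rated 1v1 queue disabled: set Arena.AutoDistributePoints = 1\n";
    else
        std::cout << "Rated 1v1 queue allowed\n";
}
```

If a check like this were wired into the 1v1 patch's queue handling, it would explain why a dedicated server whose worldserver.conf differs from the local test machine refuses rated 1v1 queues while 2v2 and 3v3 still work.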
Thank you

Related

Interactive queue for command line tasks on Windows

Sorry if this question is a bit vague. I don't know the right technical terms.
Basically in my research group we use a shared windows machine with a lot of RAM to run models, using remote desktop to access it from our own computers.
It would be great if we could build a queue so that we get the most use out of the machine, especially if we could then rearrange the order once it is up and running. Often someone will want to run, say, 50 runs of a 2-hour model, and someone else will just want to run once and check the results immediately, so they should get priority, but it's a pain stopping and starting large sets of runs.
We run the models via the command line. Any ideas?
You could store the total time each user has spent on the machine, and it would also be a good feature to let users estimate how long they intend to use it. The queue could be built from these data and, if possible, once a run has used 110% of its estimated time, the user is automatically kicked out and the next one is allowed on. I would implement a very basic system first, without too much effort; once all of you see it in use, you will have ideas about the best direction for the project.
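As a starting point, here is a minimal C++ sketch of that bookkeeping (all names are made up for illustration): the pending run whose owner has used the machine least goes first, and a running job is flagged once it passes 110% of its own estimate. Actually launching and killing the model processes on Windows is left out.

```cpp
// Hedged sketch, not a ready-made tool: a tiny scheduler that orders pending
// runs by each user's accumulated machine time (least-used first) and flags a
// running job once it exceeds 110% of its own estimate.
#include <algorithm>
#include <chrono>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Job {
    std::string user;
    std::chrono::minutes estimate;   // user-supplied estimate
};

struct Scheduler {
    // minutes already consumed per user, accumulated across finished runs
    std::vector<std::pair<std::string, long>> usage;
    std::vector<Job> pending;

    long usedMinutes(const std::string& user) const {
        for (const auto& u : usage)
            if (u.first == user) return u.second;
        return 0;
    }

    // Pick the next job: the user who has consumed the least time so far.
    Job popNext() {
        auto it = std::min_element(pending.begin(), pending.end(),
            [this](const Job& a, const Job& b) {
                return usedMinutes(a.user) < usedMinutes(b.user);
            });
        Job next = *it;
        pending.erase(it);
        return next;
    }

    // A run is over budget once it has used 110% of its estimate.
    static bool overBudget(const Job& job, std::chrono::minutes elapsed) {
        return elapsed.count() * 10 > job.estimate.count() * 11;
    }
};

int main() {
    Scheduler s;
    s.usage = {{"alice", 600}, {"bob", 30}};            // past usage in minutes
    s.pending = {{"alice", std::chrono::minutes(120)},  // long batch
                 {"bob", std::chrono::minutes(10)}};    // quick check
    Job next = s.popNext();
    std::cout << "next up: " << next.user << '\n';      // bob (least past usage)
    std::cout << "over budget? "
              << Scheduler::overBudget(next, std::chrono::minutes(12)) << '\n';
}
```

From there you could persist the usage table to a shared file on the machine and put a small command-line front end over it so people can add, reorder, and cancel runs.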

Azure: What if a guest OS update breaks my application?

We are investigating the option of shifting our small company's infrastructure to Azure PaaS (Websites, Cloud Services, SQL) as we do not have the resources to maintain our infrastructure at scale and it takes a lot of developer time to keep our current servers maintained.
The last problem we have with moving to Azure PaaS is that control over updates seems somewhat limited: according to this article, Azure enforces that you remain within two patch versions of the guest OS that Microsoft rolls out.
Aside from the fact that this places a testing burden on us (we would have to test that our software works with new OS releases forced upon us), there is nothing about what can be done if an Azure update DOES break one of our applications... and it has happened before with Windows Updates.
How is this supposed to be dealt with? Has no one else had this problem?
This is typically dealt with by updating your applications and/or fixing your custom code to work with newer patches and/or updates.
There's really very little else you can do. I've worked at places that didn't, and seen the results of blocking an incompatible update long-term (or turning off updates altogether), and it's far worse than just keeping your software maintained. Failure to do so is how you end up paying a group of consultants thousands of dollars an hour to troubleshoot a code base or application that isn't compatible with anything made in the last decade.
I would like to add that you may want to have your whole deployment replicated, but always running on the latest available patch.
This way you could test updates weeks in advance before updating your production environment.

Setting up an Automatic Uninstall

I am trying to set up an automatic uninstaller for a program. Basically, I want the program to uninstall itself after a certain time has passed (let's say 1 year).
Is there any way I could do this? It would basically be a trial version of the software.
Sorry for not being more specific, but I just want some options on how I could do this easily.
Thank you in advance for your responses, and sorry for my bad English.
I have never seen such a design. I suppose you could use a scheduled Windows task, but why do this? You could just have the application expire after a year and offer a button on launch that kicks off the uninstaller. It can launch the uninstall asynchronously and shut down the application right away.
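For illustration, here is a rough, self-contained C++ sketch of that approach under some stated assumptions: the installer writes a plain-text marker file holding the install timestamp, and uninstall.exe is a hypothetical uninstaller sitting next to the application. It is trivially defeated (see the next answer), so treat it as a sketch of the mechanism, not a licensing solution.

```cpp
// Hedged sketch of the "expire after a year" idea, not production licensing
// code. Assumes a marker file written at install time with the install date
// as seconds since the epoch; the file name and uninstaller path are
// hypothetical. A scheduled task or registry timestamp would work the same way.
#include <chrono>
#include <cstdlib>
#include <fstream>
#include <iostream>

int main() {
    // Read the install timestamp written by the installer.
    std::ifstream marker("install_date.txt");   // hypothetical marker file
    long long installedAt = 0;
    if (!(marker >> installedAt)) {
        std::cerr << "missing install marker, treating as expired\n";
        installedAt = 0;
    }

    using namespace std::chrono;
    auto now = duration_cast<seconds>(system_clock::now().time_since_epoch()).count();
    constexpr long long oneYear = 365LL * 24 * 60 * 60;

    if (now - installedAt > oneYear) {
        std::cout << "Trial period is over.\n";
        // On Windows, `start` returns immediately, so the uninstaller runs
        // asynchronously while this process exits right away.
        std::system("start \"\" uninstall.exe");   // hypothetical uninstaller
        return 0;
    }

    std::cout << "Trial still active, starting application...\n";
    // ... normal application start-up would go here ...
}
```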
I have also never seen such a design, likely because it fails to consider several issues, namely how do you keep users from:
reinstalling it?
installing it on another machine, or on a VM with snapshots?
restoring a hard drive backup over it?
killing the uninstallation?
Software licensing is hard to get right. I would recommend using a third-party licensing package that offers trial licensing. I would avoid trying to roll your own solution, as it will likely take you a lot of time to develop and be ineffective nonetheless. Picking the right product for this depends on first answering some questions, though:
How (a) skilled and (b) determined will the adversaries be who are causing you the most financial loss? That is determined by:
How much money will you lose if you don't protect it? This should determine the next question, which is:
What is your budget for software protection? It should be less than the amount you would lose without it. This should include the next question, which is:
How many hours do you want to invest to get this working?
It sounds to me like you want an automated wrapper that will work with precompiled applications / installers, as opposed to using an SDK you must integrate into your code.

Travelling Visual Studio developers

I am about to travel to Europe (I'm Australian but imagine this is a similar circumstance for US users and simply flipped for European users).
However, there is the slim possibility I will need to do some Visual Studio work while I'm travelling.
As I see it I have three options:
Leave a desktop PC on at home, access remotely via net cafes.
Carry a laptop with me on the trip, upload files as required using public wifi.
Option 2, but instead buy a cheap, light netbook that is miraculously capable of running VS.
Does anyone have any experience with, or advice on, any of these options?
For reference, this existing post suggests that using VS remotely over short distances is okay, but over longer distances it could be more problematic. I've used VS via RDP to a US server before and it was pretty laggy, but for small changes I could get by.
Concerns I have that you may have some experience with:
Weight of luggage (ideally like to travel light)
Security of laptop (imagine it'll be too heavy to carry around all the time so have to leave it at hotel/hostel etc. and hope for the best)
Security of data (don't want someone stealing RDP access to my home PC)
Security of FTP (don't want someone stealing FTP passwords over wireless)
I'd go with option #2 (carry a laptop that can run VS).
This way you can use the "more convenient" method if it works well (use it as a RDP client if the connection is low-latency enough), but you can still work locally if the connection you find is not reliable.
I think the bottom line is, always have a backup method when depending on networks that are far away and beyond your control.
Edit: Regarding the additional security concerns, most of those are things you should deal with anyway, traveling or not. If the stuff you're working with is that sensitive, you should probably improve the security of your remote work environment with a VPN and a more secure file transfer method. Before you take your laptop anywhere, know what your plan is if you were to lose it.
It's a vacation. How do you expect to rest up properly if you're always worrying about work? Leave the phone at home too.
I used to leave a home PC on with VS and use GoToMyPC, LogMeIn, or some similar service.
Since I started using a laptop, I just carry it with me on business trips, with VPN connectivity and a 3G data card.
But seriously, if on vacation, I do not want to take my laptop with me.
security
First and foremost, encrypt the contents of the HDD - be safe.
If I am on a business trip, the laptop is with me so I am not as concerned with where it is. If I am on vacation, I do not know that I want to take one with me.
If it is important, then I would keep my laptop/PC at work ON, with someone there who has access to turn it on or reboot it. I would then carry a light laptop that lets me connect and work if I need to. If that goes down, I can always head into a cybercafe.
database
If you are anticipating working, bring your dev database with you. I know it hogs space and memory (while in use), but pulling data over the wire has taken long enough to make me lose concentration.
standalone
Make the laptop standalone so that it can work without a connection to VPN or internet - coverage is not the best / uniform in all areas.
Use TrueCrypt to encrypt your hard disk. Use a VPN, SSH, or something similar for remote connections. I always bring my laptop, but if I were to lose it, it would just be a brick to whoever finds it, and I have a good backup system that lets me get up and running on another computer quickly.
I tried installing VS2010 on my NetBook and it was a no-go. I was, however, able to install Expression Blend/Web which is good for most tasks.
Edit: To make this more useful... my netbook is HP Mini 1100 Series w/1GB RAM running Windows 7 "Starter"
Beware: I don't know where you are going in Europe, but do not count on a reliable internet connection in a hotel. It generally works, but when it does not, don't count on the staff to repair it. Of course, if you also carry your own connection (3G or EDGE on your mobile phone), then this will not be a problem.
I suggest using option 2 when working on your source code.
I also recommend using Git so you can keep working with source control while disconnected from the office source control. When you get access again, you can sync your whole repository with the office repository.
Of course, it all depends on which source control provider you are using.
For the occasional stuff that is not in Git, use a VPN for enhanced security.
My experience:
1) Purchased a small netbook (a Samsung with 2 GB or so of RAM; I can look up the exact model number if anyone is interested, but I think it's comparable to, or just above, the NC10).
2) Internet is bad in Europe (at least the options available to travellers). Something to note.
3) The netbook performance was absolutely fine. You don't want to be doing too much dev because of the small screen (though it was only really an issue for me because I got sick of the trackpad and didn't have a separate mouse) but it's honestly pretty fast and easy to use for .NET MVC development in Visual Studio.

IBM RAD 7 and Websphere 6.1 are slow and unresponsive

How can I improve performance when developing locally with Websphere and RAD? I am using one web application of moderate size (1000? classes) and it is impossible to handle the app locally on a Windows box. The Websphere 6.1 configuration uses the default configs. RAD 7 is configured with a max heap of 1024 MB. I thought about increasing the heap of the server; at present, the min and max are 128/300 MB.
In terms of unresponsiveness, sometimes it may take minutes to load a page, if the page loads at all. Also, I disabled "Build Automatically" and "Publish Automatically". Maybe those should be turned on?
I'm not sure about RAD 7, but from my past experience I'd suggest giving MyEclipse Blue a try.
Since that might not be an option, here are some other usual culprits you can check:
How much RAM does your machine have? It's good to give WS 1 GB of RAM, but if your computer only has 1 GB of real RAM, it's going to swap itself to death. If your boss won't pay for it, go get some RAM with your own money. 2 GB is less than $80 at the moment; I suggest getting at least 4 GB. Yes, Windows can only use 3.5 GB even when 4 GB are installed, but that half GB costs $20 or less. Even thinking about this for more than five minutes will cost more than simply buying it.
Next, make sure you are using the correct Java GC options; there should be some info about this in the docs. Also make sure that the process uses the "jvm.dll" from the "server" directory, not the "client" one. Process Explorer will help.
Since I'm not using RAD, I'm not 100% sure about "Build Automatically" and "Publish Automatically" but since RAD7 is based on Eclipse, these options will compile code in the background as you type. This will greatly reduce the time between you saving your last change and the moment the app server can start to load the new code.
When all else fails, run Websphere in a profiler and look at where it spends all its time.
Aaron had great advice.
I would also suggest using JConsole to see what is going on, to help you determine if you need more memory, larger heap size, etc. My experience with running Websphere and RAD locally is that it will be slow, but then I was on an old machine that needed more memory. :)
http://java.sun.com/j2se/1.5.0/docs/guide/management/jconsole.html
Berlin,
RAD 7 saps your PC! When I was using it to develop Portlets, I followed this optimization guide and it made the IDE significantly quicker to develop Portlets in. Obviously it is aimed at Portlet development but it might help you.
Following the advice given in the answer to this question will also help.
I definitely agree with Aaron Digulla. You will see a major performance improvement with 4GB RAM installed on your development machine. I developed an Eclipse/RAD plugin with some buddies of mine and we were able to measure how much time we saved by upgrading from 2GB to 4GB.
The plugin is available here: http://lopb.org/
After gathering some hard numbers on how much time we spent waiting for publishing and loading the app on our 2GB development machines, we were able to convince management to upgrade the rest of the developers on the team.
Anyway, you should really consider upgrading to 4GB if you want to run RAD 7 and Websphere 6 on the same development machine. Each one needs -Xms512m -Xmx1024m as JVM args to run well, and that means you will swap to disk way too much if you only have 2GB of RAM or less. HTH
Make sure you are running WAS in development mode for your development and testing.
The option is under the server in the console.
Karl
Hehe, we had the same problem with RAD 6 and Websphere 6.
The way we sped things up was to move to Eclipse and JBoss.
We developed on Eclipse and JBoss, and the first round of testing was on Websphere. We had some issues with the differences, but we would never have completed the project were it not for our switch (a lot fewer issues than developing on RAD/WAS).
But to help you in the meantime...
You probably want Build Automatically and Publish Automatically off. That way you can make a bunch of changes and then tell RAD to compile and deploy while you go and get coffee.
There is a "run in dev mode" option in Websphere (I know there was for 6.0), so track that down and turn it on (it's in the WAS console somewhere).
I found WAS's on stack replacement to work fairly well. I found that at the beginning of the day I'd deploy to WAS and then not have to redeploy at least until lunch time (as I was debugging). I would make changes and the changes would be fed to the server without my having to redeploy.
Chances are, even after running the profiler, you'll find there's not much you can do.
Turn off all validations (in RAD); they tend to take forever.
Depending on what you're doing with Java EE, investigate the possibility of developing on another IDE/server combo; maybe you can do the bulk of your work there and then deploy from RAD/WAS to do some final testing. If you're using vanilla EJBs or web services, this is feasible.
That max heap does sound a bit small to me. The suggestion to fire up JConsole is a good one because it will tell you how much heap is being used, though I'm not sure whether it will work on the IBM VM (RAD's). You might also try turning on the memory usage monitor in RAD; it tells you how much memory is being used, so you can tell whether you're hitting the max.
JConsole will not work without specifically enabling it via a JVM command-line switch (on Java 5, -Dcom.sun.management.jmxremote).
Suggestions from Michael Wiles sound reasonable but please update your RAD first to the latest FixPack available.
You can also contact support.
