I want to implement football match processing, and I have spent a lot of time looking for good algorithms to do it. As input I have players with certain parameters. Some of the parameters are static while the match is being processed (skills) and some are dynamic (physical state, psychological state) and change as the match goes on. I also have external parameters that I can change manually. I don't need it to be very close to real football (except for the result; 20:0 would be awful anyway). The last main idea is that the same input should not always lead to the same output: some of the intermediate calculations should return random values.
The algorithm should not be very slow, because in the near future it will need to process about 1000 matches at the same time, step by step, with each step calculated once every 3 seconds. These steps should also be logically linked, because I will render the match graphically with all ball and player movements.
What algorithms can you recommend? I thought about a neural network, but I'm not sure that it is a good solution.
You would really help me, because I have spent about half a year looking for this, so thank you very much!
Let's say you have an "action" every 5 minutes of the game, so 90/5 = 18 actions. To make it more realistic you can choose a random number instead, something like:
numberOfActions = random(10, 20); // random integer between 10 and 20
This number can then be used as the length of your for() loop.
Then you have interactions between the defence and offence parameters of your two sets of players. Let's say each point of difference between TeamA's offence and TeamB's defence adds a ten per cent chance to succeed:
if ((TeamA.Offence - TeamB.Defence) * 10 > random(0, 100))
{
    TeamA.points++;
}
Of course the goalkeeper can decrease this probability, maybe even significantly.
And so on. Of course you can make it more complicated. For example, you can compare the stats only of certain players, depending on who has the ball. Both the offence and defence parameters can be decreased over time and raised by condition:
// i is the index of the current action (use floating-point division)
TeamA._realOffenceValue =
    TeamA.Offence *
    (1 - i / numberOfActions) *
    TeamA.leftOffencePlayer.Condition;
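A minimal, self-contained Python sketch of this action-loop idea is below. The attribute scale and the "10% per point of difference" rule are assumptions carried over from the pseudocode above, not a finished design; every run gives a different result because of the random draws.

import random

def simulate_match(offence_a, defence_a, offence_b, defence_b):
    number_of_actions = random.randint(10, 20)      # random number of actions per match
    score = {"A": 0, "B": 0}
    for i in range(number_of_actions):
        fatigue = 1 - i / number_of_actions         # attacking quality decays as the match goes on
        if random.random() < 0.5:                   # team A attacks this action
            chance = (offence_a * fatigue - defence_b) * 10
            if chance > random.uniform(0, 100):
                score["A"] += 1
        else:                                       # team B attacks this action
            chance = (offence_b * fatigue - defence_a) * 10
            if chance > random.uniform(0, 100):
                score["B"] += 1
    return score

print(simulate_match(offence_a=8, defence_a=5, offence_b=6, defence_b=6))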
Remember that in games like Football Manager or Europa Universalis it's all about cheating the player. Balancing the game is a job of many hours, and nobody will do it for you on a forum :)
I have the following optimization problem setup:
Given:
about 100 Mechanics, each with
Work time per day [e.g. 8 hours]
Break time per day [e.g. min 1 hour]
Maximum overtime per day [e.g. 1 hour]
Location [e.g. Detroit]
about 1000 Tasks, each with
Location [e.g. Chicago]
Duration [e.g. 1 hour]
Fixed time slot [e.g. 1pm] [optional]
The goal is to assign all tasks to the mechanics with short travel paths. One constraint is that every mechanic starts and ends at his home location.
Is there any way to solve this problem in an easy and understandable way? Are there any similar examples online, e.g. in Python?
Not all workers would be available for a task because of Location. If the Locations don't overlap, you could at least segment the problem into Location-specific subproblems to avoid dealing with it. Then you could assign the fixed timeslots first, always picking the worker with the least hours on the schedule. Since hours are a discrete value, you could pick the nearest worker by distance when choosing among several workers that have an equal number of scheduled hours.
This would be a very basic approach that would do the scheduling, but it may not do it in a practical manner - for example, two close-by jobs may be assigned to different workers, and efficiency may not be good at all once you consider travel time between jobs. You would have to iterate with the business and apply some heuristics to get to a usable solution.
I'd advise you to get a real-world sample of the input data - availability, locations, jobs, etc. - as large as possible, and create an evaluation function first: overtime, travel time, and utilization of the workforce should all factor in. Then you can see which heuristics need to be applied to the basic algorithm.
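A rough Python sketch of that basic "least-hours first, nearest on ties" assignment is below. The field names (home, capacity_hours, fixed_slot, ...) are assumptions based on the question, and travel time between consecutive jobs, breaks, and overlapping fixed slots are all ignored here; it is a starting point, not a usable scheduler.

import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_schedule(mechanics, tasks):
    # mechanics: list of dicts {"id", "home": (x, y), "capacity_hours"}
    # tasks:     list of dicts {"id", "location": (x, y), "duration_hours", "fixed_slot" or None}
    hours = {m["id"]: 0.0 for m in mechanics}
    schedule = {m["id"]: [] for m in mechanics}

    # fixed-slot tasks first, then the freely schedulable ones
    ordered = sorted(tasks, key=lambda t: t.get("fixed_slot") is None)
    for task in ordered:
        candidates = [m for m in mechanics
                      if hours[m["id"]] + task["duration_hours"] <= m["capacity_hours"]]
        if not candidates:
            continue                                 # not assignable with this simple rule
        least = min(hours[m["id"]] for m in candidates)
        tied = [m for m in candidates if hours[m["id"]] == least]
        chosen = min(tied, key=lambda m: distance(m["home"], task["location"]))
        schedule[chosen["id"]].append(task["id"])
        hours[chosen["id"]] += task["duration_hours"]
    return schedule

mechanics = [{"id": "m1", "home": (0, 0), "capacity_hours": 8},
             {"id": "m2", "home": (5, 5), "capacity_hours": 8}]
tasks = [{"id": "t1", "location": (1, 0), "duration_hours": 2, "fixed_slot": "13:00"},
         {"id": "t2", "location": (4, 5), "duration_hours": 1, "fixed_slot": None}]
print(greedy_schedule(mechanics, tasks))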
Another approach would be to cluster the jobs by location, into 1-worker-per-day clusters, and assign close-by jobs to the same worker. Look into graph clustering algorithms for that. Within a cluster you could assign the fixed-time jobs first, then the rest in random order. You could also limit the clusters to not have overlapping fixed-time jobs.
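For illustration only, here is a tiny greedy distance-based grouping in that spirit; a real graph-clustering algorithm would do better, and the radius and hour limits are made-up parameters.

import math

def cluster_jobs(jobs, max_hours_per_cluster=8.0, max_radius=10.0):
    # jobs: list of dicts {"id", "location": (x, y), "duration_hours"}
    clusters = []
    for job in jobs:
        for cluster in clusters:
            close_enough = math.hypot(cluster["centre"][0] - job["location"][0],
                                      cluster["centre"][1] - job["location"][1]) <= max_radius
            fits = cluster["hours"] + job["duration_hours"] <= max_hours_per_cluster
            if close_enough and fits:
                cluster["jobs"].append(job["id"])
                cluster["hours"] += job["duration_hours"]
                break
        else:                                        # no existing cluster fits: start a new one
            clusters.append({"centre": job["location"],
                             "hours": job["duration_hours"],
                             "jobs": [job["id"]]})
    return clusters

jobs = [{"id": "t1", "location": (0, 0), "duration_hours": 3},
        {"id": "t2", "location": (1, 1), "duration_hours": 4},
        {"id": "t3", "location": (50, 50), "duration_hours": 2}]
print(cluster_jobs(jobs))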
Whichever approach you take, you'll have to come up with heuristics.
Finding the optimal solution may be an NP-hard problem; see http://www.cs.mun.ca/~kol/courses/6901-f14/lec3.pdf
Scenario:
We have 10 kinds of toy, and every kind includes 10 toys.
We will distribute the toys to 100 children. Every child has a different degree of satisfaction for each of the 10 kinds. Tip: in the real project we will have 300,000+ children records in my database.
My question is: how do we measure and define the best solution for the distribution?
And how do we get that result? Please give me a hint.
Some friends suggested I try the KM (Kuhn-Munkres) algorithm, but I'm not sure it will work for me.
Thanks.
This problem is hard because you haven't decided what you want to optimize, and because many optimization methods will be expensive to run if you have 300K children - or customers - to worry about.
What do you want to optimize? If you try to optimize the sum of some set of per-child satisfaction scores, can you really compare the subjective satisfaction of two different children, let alone add them up to produce anything sensible? If you decide on such a system, can you prove that it cannot be distorted by children who decide to lie about their satisfaction, for instance saying that they will be devastated if they don't get one particular toy?
What if somebody decides that the sum of satisfaction scores isn't the right metric, but instead that you should minimize the dis-satisfaction of the most dis-satisfied child?
What if somebody decides that inequality is the real problem, so if there is one very happy child, you should take away their toy and give it to somebody else to minimize the difference in satisfaction between the most and least satisfied child?
What if somebody decides that some children count more than other children, because of something their great-grandparents did, or didn't do?
Just so as not to be completely negative, here is a cheap scheme, and an attempt to prove a property about it. Put the children in random order and allocate the toys as if each child were to choose according to their preferences in this order - so each child gets the toy they most prefer among the toys still left when their turn comes.
One property you might want from a method of choosing is that, after the toys are distributed, the children can't find that they could trade toys amongst themselves to produce a better distribution, making you look silly (i.e., that your allocation was not Pareto optimal). Suppose such a pattern of trades were possible among the children under this scheme. Consider the trading child who came first among them in the initial randomization. They chose the toy they wanted most from all those still available, so there is in fact nothing the other trading children could offer that they would prefer. So this scheme is at least not vulnerable to later trades.
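A minimal Python sketch of that random-order scheme (often called random serial dictatorship), assuming each child's preferences are given as a ranked list of the ten toy kinds:

import random

def serial_dictatorship(preferences, stock):
    # preferences: {child_id: [most preferred kind, ..., least preferred kind]}
    # stock:       {kind: number of copies available}
    allocation = {}
    order = list(preferences)
    random.shuffle(order)                       # random order of children
    for child in order:
        for kind in preferences[child]:         # most preferred first
            if stock.get(kind, 0) > 0:
                allocation[child] = kind
                stock[kind] -= 1
                break
    return allocation

kinds = ["toy%d" % k for k in range(10)]
stock = {k: 10 for k in kinds}                  # 10 copies of each of the 10 kinds
prefs = {"child%d" % c: random.sample(kinds, len(kinds)) for c in range(100)}
print(serial_dictatorship(prefs, stock))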
I've studied complexity theory, and I come from a solid programming background, and it has always seemed odd to me that so many things seem to run in times that are intrinsic to humans. I'm wondering if anyone has any ideas as to why this is?
I'm generally speaking of times in the range of 1 second to 1 hour. If you consider how narrow that span of time is relative to the billions of operations per second a computer can handle, it seems odd that such a large number of things fall into that category.
A few examples:
Encoding video: 20 minutes
Checking for updates: 5 seconds
Starting a computer: 45 seconds
You get the idea...
Don't you think most things should fall into one of two categories: instantaneous / millions of years?
Probably because it signifies the cut-off where people consider further optimization not worth the effort.
And clearly, having a computer that takes millions of years to boot wouldn't be very useful (or maybe it would, but you just wouldn't know yet, because it's still booting :P).
Given that computers are tools, and tools are meant to be set up, used, and have their results analyzed by humans (mostly), it makes sense that the majority of operations would be created in a way that doesn't take longer than the lifespan of a typical human.
I would argue that most single operations are effectively "instantaneous" (in that they run in less than perceptible time), but they are rarely used as single operations. Humans are capable of creating complexity, and given that many computational operations intrinsically involve a balance between speed and some other factor (quality, memory usage, etc.), it actually makes sense that many operations are designed in a way where that balance places them into "times that are intrinsic to humans". However, I'd personally word that as "a time that is assumed to be acceptable to a human user, given the result generated."
In the movie The Social Network I saw that Mark used the Elo rating system.
But was the Elo rating system necessary?
Can anyone tell me what the advantage of using Elo's rating system was?
Can the problem be solved in this way too?
Is there any problem with the algorithm [written below]?
Table structure:
Name            Name of the woman
Pic_Name [pk]   Path to the picture
Impressions     Number of times the image was shown
Votes           Number of times people selected her as hot
Now we randomly show 2 photos from the database, and the hottest woman is the one selected by the maximum number of votes.
Before voting to close/downvote, please write your reason.
But was that necessary?
No, there are several different ways of implementing such a system.
Can anyone tell me what was the advantage of using Elo's rating system?
The main advantage, and the central idea in Elo's system, is that if someone with a low rating wins over someone with a high rating, their ratings are updated by a larger amount than if the two had had similar ratings to start with. This means that the ratings converge fairly quickly.
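For reference, a minimal Elo update in Python. The K-factor of 32 is just a commonly used value, not anything taken from the movie; an upset moves both ratings by about 24 points, while a win over an equally rated opponent moves them by only 16.

def expected_score(rating_a, rating_b):
    # probability that A beats B under the Elo model
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(rating_a, rating_b, a_won, k=32):
    e_a = expected_score(rating_a, rating_b)
    s_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (s_a - e_a)
    new_b = rating_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return new_a, new_b

print(update_elo(1400, 1600, a_won=True))   # upset: roughly +24 / -24
print(update_elo(1500, 1500, a_won=True))   # evenly matched: +16 / -16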
I don't really see how your approach is a good one. First of all, it seems to depend on how often a pic is randomly selected for potential upvoting. And even if you showed all pics equally often, the property described above doesn't hold: if someone wins over a really hot girl, she would still get only a single upvote. This means that your approach wouldn't converge as quickly as Elo's system. In fact, the approach you propose doesn't converge to steady rating values at all.
Simply counting the number of votes and ranking the women by that is not adequate, and I can think of two reasons why:
What if a woman is average-looking, but by luck her picture gets displayed more often? Then she would get more votes and her ranking would rise inappropriately.
What if a woman is average-looking, but by luck your website always compares her to ugly women? Then she would get more votes and her ranking would rise inappropriately.
I don't know much about the Elo rating system but it probably doesn't suffer from problems like this.
It's a movie about geeks. Elo is a geeky way to rate competitors on the basis of the results of pairwise contests between them. Its association with chess adds extra geekiness. It's precisely the kind of thing that geeks in movies should be doing.
It may have happened exactly that way in real life too, in which case Zuckerberg probably chose Elo because it's a well-known algorithm for doing this, one that has been used in practice in several sports. Why go to the effort of inventing a worse algorithm?
Is it better to describe improvements using percentages or just the differences in the numbers? For example, if you improved the performance of a critical ETL SQL query from 4000 ms to 312 ms, how would you present it as an 'Accomplishment' on a performance review?
In currency. Money is the most effective medium for communicating value, which is what you're trying to use the performance review to demonstrate.
Person hours saved, (very roughly) estimated value of $NEW_THING_THE_COMPANY_CAN_DO_AS_RESULT, future hardware upgrades averted, etc.
You get the nice bonus that you show that you're sensitive to the company's financial position; a geek who can align himself with what the company is really about.
Take potato
Drench Potato in Lighter Fluid
Light potato on fire
Hand potato to boss
Make boss hold it for 4 seconds.
Ask boss how long those 4 seconds felt
Ask boss how much better half a second would have been
Bask in glory
It is always better to measure relative improvement.
So, if you brought it down to 312 ms from 4000 ms, that is an improvement of 3688 ms, which is 92.2% of the original runtime. So you reduced the runtime by 92.2%. In other words, you brought the runtime down to only 7.8% of what it was originally.
Absolute numbers, on the other hand, usually are not that good since they are not comparable. (If your original runtime was 4,000,000ms then an improvement of 3688ms isn't that great.)
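The arithmetic above, as a small Python snippet using the numbers from the question:

old_ms, new_ms = 4000, 312
absolute_saving = old_ms - new_ms                 # 3688 ms saved
reduction_pct = 100 * absolute_saving / old_ms    # runtime reduced by ~92.2%
remaining_pct = 100 * new_ms / old_ms             # runtime is now ~7.8% of the original
speedup = old_ms / new_ms                         # ~12.8x faster

print("%d ms saved, %.1f%% reduction, %.1f%% of original, %.1fx speedup"
      % (absolute_saving, reduction_pct, remaining_pct, speedup))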
See this link for some nice chart suggestions.
Comparison to Requirements
If I have requirements (response time, throughput), I like to color code the absolute numbers like so:
Green: <= 80% of the requirement (response time); >= 120% of the requirement (throughput)
No formatting: Meets the requirement.
Red: Does not meet the requirement.
Comparisons are interesting, but only if we have enough to see trends over time; Is our performance steadily improving or degrading? Ultimately, the business only cares if we're meeting the requirement. It's only when we don't that they ask for comparisons to previous releases.
Comparison of Benchmarks
If I'm comparing benchmarks to some baseline, then I like to use percentages, but only if the benchmark is a statistically significant change from the baseline.
Hardware Sizing
If I'm doing hardware sizing or capacity planning, then I like to express the performance as the absolute number plus the cost per transaction. For example:
System A: 1,000 transactions/second, $0.02/transaction
System B: 1,500 transactions/second, $0.04/transaction
Use whichever appears most impressive given the change. According to one method of calculation, that change sped up the query by 1,300%, which looks more impressive than a 13x improvement, or:
============= <-- old query
= <-- new query
Although the graph isn't a bad method.
If you can calculate the improvement in money, then go for that. One piece of software I wrote many years ago saved a few engineers a little bit of time each day. Factoring in the cost of salary, benefits, and overhead, it turned into savings of more than $12k per year for a small company.
-Adam
Rule of thumb: whichever sounds more impressive.
If you went from 10 tasks done in a period to 12, you could say you improved performance by 20%.
Saying you did two more tasks doesn't seem that impressive.
In your case, both numbers sound good, but try different representations and see what you get!
Sometimes graphics help a lot when the improvement is spread across a number of factors but the combined figure somehow does not look that impressive.
Example: you have 5 params, A, B, C, D, E. You could make a bar chart with those 5 params and the "before and after" values side by side for each param. That will surely look impressive.
God, I'm starting to sound like my friend from marketing!
runs away screaming
You can make numbers and graphs say anything you want - the important thing is to make them say something meaningful and relevant to the audience you're presenting them to. If it's end users, you can show them the differences in screen refreshes (something they understand); for managers, perhaps the reduced number of servers they'll need in order to support the application ($ savings); for finance, it's all about the $: how much did it save them? A general rule is that the less technical the group, the more graphical and dramatic you need to be.