Could anyone point me in the right direction on how to use the GPT-J model for text paraphrasing? Generating text is easy, but paraphrasing?
Do I need to fine-tune on a paraphrasing dataset, or could I just use few-shot learning?
GPT does only one thing: completing the input you provide it with. This means the main attribute you use to control GPT is the input.
A good way of approaching a given use case is to explicitly write out what the model's task should be, insert the needed variables, and then initiate the task.
In your use-case this would be something like this (actual demo using GPT-J):
Input:
Paraphrase the sentence.
Sentence: The dog was scared of the cat.
Paraphrase:
Output:
Paraphrase the sentence.
Sentence: The dog was scared of the cat.
Paraphrase: The cat scared the dog.
GPT-J is very good at paraphrasing content. In order to achieve this, you have to do 2 things:
Properly use few-shot learning (aka "prompting")
Play with the top p and temperature parameters
Here is a few-shot example you could use:
[Original]: Algeria recalled its ambassador to Paris on Saturday and closed its airspace to French military planes a day later after the French president made comments about the northern Africa country.
[Paraphrase]: Last Saturday, the Algerian government recalled its ambassador and stopped accepting French military airplanes in its airspace. It happened one day after the French president made comments about Algeria.
###
[Original]: President Macron was quoted as saying the former French colony was ruled by a "political-military system" with an official history that was based not on truth, but on hatred of France.
[Paraphrase]: Emmanuel Macron said that the former colony was lying and angry at France. He also said that the country was ruled by a "political-military system".
###
[Original]: The diplomatic spat came days after France cut the number of visas it issues for citizens of Algeria and other North African countries.
[Paraphrase]: Diplomatic issues started appearing when France decided to stop granting visas to Algerian people and other North African people.
###
[Original]: After a war lasting 20 years, following the decision taken first by President Trump and then by President Biden to withdraw American troops, Kabul, the capital of Afghanistan, fell within a few hours to the Taliban, without resistance.
[Paraphrase]:
Depending on whether you want GPT-J to stick to the original, or be more creative, you should respectively decrease or increase top p and temperature.
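A minimal sketch of putting this into practice. The prompt-building part runs anywhere; the generation call is shown for illustration using the Hugging Face transformers library and the public EleutherAI/gpt-j-6B checkpoint:

```python
def build_prompt(examples, new_original, separator="###"):
    """Assemble a few-shot paraphrasing prompt from (original, paraphrase) pairs."""
    parts = []
    for original, paraphrase in examples:
        parts.append(f"[Original]: {original}\n[Paraphrase]: {paraphrase}")
    # The final block is left open so the model completes the paraphrase.
    parts.append(f"[Original]: {new_original}\n[Paraphrase]:")
    return f"\n{separator}\n".join(parts)

examples = [
    ("The dog was scared of the cat.", "The cat scared the dog."),
]
prompt = build_prompt(examples, "Kabul fell within a few hours to the Taliban.")

# The prompt is then passed to the model, e.g.:
# from transformers import pipeline
# generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")
# out = generator(prompt, do_sample=True, top_p=0.9, temperature=0.8,
#                 max_new_tokens=60)
```

In practice you would also truncate the output at the first `###` so the model's continuation of further examples is discarded.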
I actually wrote an article about few-shot learning with GPT-J that you might find useful: effectively using GPT-J with few-shot learning
Say there is a single elevator at ground level in a building of G+24.
A man on G+7 calls the elevator to go down (presses the down button).
The elevator will therefore show an up arrow until it reaches G+7, then toggle to a down arrow, as it is supposed to go down from there.
If a man at G+2 presses the up button (the lift has just started moving up and has not yet crossed G+2),
will the elevator/lift actually open at G+2?
If it will, what if the person at G+2 then presses the button for G+20?
What will the path of the elevator be (as in G -> G+2 -> ...)?
I am very confused about how exactly elevators/lifts handle these cases!
As a programmer, you use statistics and modeling (best case) or made-up assumptions based on your own subjective experience.
In your example the programmer would use the assumption (or the statistics) that most people who want to go down want to go all the way down to ground level or parking. Couple that with the assumption that people who want to enter an elevator on the 2nd floor (if the building is a tall one) want to go up.
You would therefore not stop before going as far down as the lowest floor input by the people in the elevator.
Basically, the general answer is you use statistics of movements. Those differ between different buildings. If the building is new and there is no data yet you look at what is on the floor and try to make predictions about movements. Basically, you create a model of people's movements. Then you try to create an optimization function that minimizes waiting time, for example, or queue size, or energy consumption.
You may also take into account time of day. For example, in a business tower you may optimize for going up during the morning rush hour, and for going down in the late afternoon/early evening.
Modeling, simulation and statistics are key to finding good algorithms in such scenarios.
Add to that constraints. For example, you may set a condition on the optimization that nobody should wait longer than 20 seconds, even if overall efficiency goes down. For example, if all traffic is on the lower floors and there is a single person on the 50th floor, it might be most efficient to ignore him/her for an hour, but that's not acceptable. Or an elevator that senses it is full may not stop except on the floors selected by the people inside it.
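One classic baseline that also answers the path question in the original post is the SCAN ("elevator") algorithm: keep moving in the current direction, serving all calls in that direction, then reverse. A minimal sketch (the floor numbers match the scenario in the question; this is an illustration, not how any particular manufacturer does it):

```python
def scan_route(current, up_stops, down_stops):
    """SCAN / collective control, assuming the car is currently moving up:
    serve up-direction stops above the current floor in ascending order,
    then reverse and serve down-direction stops in descending order."""
    ascending = sorted(f for f in up_stops if f > current)
    descending = sorted(down_stops, reverse=True)
    return ascending + descending

# Scenario from the question: car at G (0), up-call at G+2 whose rider
# wants G+20, and a down-call at G+7.
route = scan_route(0, up_stops={2, 20}, down_stops={7})
print(route)  # [2, 20, 7] -- the G+7 down-call is served on the way back down
```

So yes, the car would open at G+2 (an up-call in its direction of travel), and the G+7 down-call gets served only after the car has finished going up.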
You can find courses on modeling on the Internet, for example on education sites such as edX. Here is an example (the course is closed but still accessible): "Mathematical Modeling Basics -- Use mathematics to create models to solve real-life problems."
Here is an example of a paper (of which there are many!) on how to model elevators: "Modeling Elevator System With Coloured Petri Nets" (PDF)
Just to show that this modeling approach is indeed used in practice, here is an example of software (Oasys MassMotion) and how it can be used to model elevators: "Oasys Software Blog: Modelling Lifts"
That is the computer-science-graduate way of doing it. What works almost as well in practice and requires a lot less skill and knowledge: come up with whatever you feel like (making common-sense assumptions), and if somebody (in charge) complains, adjust the algorithm :-)
We have a team of about four developers working in Scrum with two-week sprints. We use YouTrack, and when performing time estimation in sprint planning, we hit the two weeks of work quickly.
The issue is, for example, developer John will pick the first item on the backlog and say it'll take about 1 day. Developer Brian will take the next item and say it'll be about 1 day. Since each developer works on his own item in parallel, that's actually only one day of elapsed time, but YouTrack will sum it as 2 days.
When the whole team doesn't have to work on the same item at once, how do we appropriately estimate time? Are we breaking Scrum rules somewhere?
If we're doing it right and we simply have to go over two weeks, how do we make sure we're not giving ourselves too much work?
It should NOT be complexity alone, but complexity, effort, risk, and a gentle touch of common sense.
I think you are estimating in the wrong way to begin with, which has nothing to do with Scrum, Kanban or XP. You are estimating on a per-person basis when it should actually be done on a per-story basis. Think about the feature as a whole and then estimate the story, not individual members' effort towards it.
Estimating in man-days/hours has always been criticised because it doesn't account for differences in skill sets. Say one feature with complexity 5 can be done in 1 day by a senior engineer but will take 2 days for a new joiner or less experienced member. Hence, you can't take only complexity into account when sizing stories and sprint commitments.
Many people estimate stories by complexity only and then sub-tasks in hours; that has drawbacks as well, due to duplication of effort in the estimation. Above all, always keep in mind that an estimate will always be an estimate; planning poker can only help you find it, and it is not the ultimate estimation technique.
Adding my five cents here: the complexity of backlog items is not judged by individual team members.
First of all, a backlog item most likely involves more than a single team member and could be broken down into smaller tasks like 'implement UI', 'run tests' or 'change the database schema'. So John and Brian, and maybe also Mary (a testing engineer), will be involved. And thus you'll need to estimate 'complexity, effort, risk and a gentle touch of common sense' for the whole team, not just for John or Brian.
Also, John, being a senior guy, might say it is of complexity 2, while Brian (a junior) could say it is 5, and Mary will anticipate a lot of regression because of the database schema change and give it 8. In the end, in Scrum it comes down to the whole team's commitment for the sprint. So you, as Scrum Master, will need to facilitate John, Brian and Mary coming to an agreement, and here's where tools like planning poker come into play.
As far as I know and understand, you don't estimate effort in Scrum. You estimate complexity of user stories in relation to each other and the contained functionality.
Following this, you estimate story points, not man-days. The burndown chart shows your team's progress. Over time you get a quite accurate team velocity, which basically tells you how many story points your team completes per sprint.
Use this team velocity as a basis to decide whether to commit to another user story for any given sprint, or if it will probably be too much.
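The velocity check described above can be sketched in a few lines (all numbers below are hypothetical):

```python
def can_commit(candidate_story_points, recent_velocities):
    """Compare a proposed sprint backlog (in story points) against the
    team's average velocity over recent sprints."""
    velocity = sum(recent_velocities) / len(recent_velocities)
    return sum(candidate_story_points) <= velocity

# Hypothetical: the team finished 21, 18 and 24 points in the last three sprints.
print(can_commit([8, 5, 5, 3], [21, 18, 24]))  # True  (21 points vs. avg 21)
print(can_commit([8, 8, 8], [21, 18, 24]))     # False (24 points vs. avg 21)
```

Note this compares story points to story points; it never touches man-days, which is the whole point of estimating this way.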
In the movie The Social Network I saw that Mark used the Elo rating system.
But was the Elo rating system necessary?
Can anyone tell me the advantage of using Elo's rating system?
Can the problem be solved the following way too?
Is there any problem with this algorithm [written below]?
Table structure:
Name: name of the woman
Pic_Name [pk]: path to the picture
Impressions: number of times the image was shown
Votes: number of times people selected her as hot
We then show 2 photos at random from the database, and the hottest woman is the one with the maximum number of votes.
Before voting to close or downvoting, please write your reason.
But was that necessary?
No, there are several different ways of implement such system.
Can anyone tell me what was the advantage using elo's rating system ?
The main advantage and the central idea in Elo's system is that if someone with low rating wins over someone with high rating their ratings are updated by a larger number, than if the two had similar rating to start with. This means that the ratings will converge fairly quickly.
I don't really see how your approach is a good one. First of all, it seems to depend on how often a pic is randomly selected for potential upvoting. Even if you showed all pics equally many times, the property described above doesn't hold. I.e., if someone wins over a really hot girl, she would still get only a single upvote. This means that your approach wouldn't converge as quickly as Elo's system. In fact, the approach you propose doesn't converge to steady rating values at all.
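The update rule Elo uses can be sketched in a few lines (K = 32 is a common but arbitrary choice; the 400 in the exponent is the conventional scale factor):

```python
def expected_score(r_a, r_b):
    """Expected score of player A against player B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_winner, r_loser, k=32):
    """Return new (winner, loser) ratings after one pairwise contest."""
    e = expected_score(r_winner, r_loser)
    return r_winner + k * (1 - e), r_loser - k * (1 - e)

# An upset (1200 beats 1600) moves ratings much more than an expected win:
print(round(elo_update(1200, 1600)[0] - 1200, 1))  # 29.1 points gained
print(round(elo_update(1600, 1200)[0] - 1600, 1))  # 2.9 points gained
```

This is exactly the convergence property described above: surprising results carry more information, so they move the ratings more.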
Simply counting the number of votes and ranking women by that is not adequate and I can think of two reasons why:
What if a woman is average-looking but by luck her picture gets displayed more often? Then she would get more votes and her ranking would rise inappropriately.
What if a woman is average-looking but by luck your website always compares her to ugly women? Then she would get more votes and her ranking would rise inappropriately.
I don't know much about the Elo rating system but it probably doesn't suffer from problems like this.
It's a movie about geeks. Elo is a geeky way to rate competitors on the basis of the results of pairwise contests between them. Its association with chess adds extra geekiness. It's precisely the kind of thing that geeks in movies should be doing.
It may have happened exactly that way in real life too, in which case Zuckerberg probably chose Elo because it's a well-known algorithm for doing this, one that has been used in practice in several sports. Why go to the effort of inventing a worse algorithm?
Is it better to describe improvements using percentages or just the differences in the numbers? For example if you improved the performance of a critical ETL SQL Query from 4000 msecs to 312 msecs how would you present it as an 'Accomplishment' on a performance review?
In currency. Money is the most effective medium for communicating value, which is what you're trying to use the performance review to demonstrate.
Person hours saved, (very roughly) estimated value of $NEW_THING_THE_COMPANY_CAN_DO_AS_RESULT, future hardware upgrades averted, etc.
You get the nice bonus that you show that you're sensitive to the company's financial position; a geek who can align himself with what the company is really about.
Take potato
Drench Potato in Lighter Fluid
Light potato on fire
Hand potato to boss
Make boss hold it for 4 seconds.
Ask boss how long those 4 seconds felt
Ask boss how much better half a second would have been
Bask in glory
It is always better to measure relative improvement.
So, if you brought it down from 4000 ms to 312 ms, that's an improvement of 3688 ms, which is 92.2% of the original runtime. So you reduced the runtime by 92.2%; in other words, you brought the runtime down to only 7.8% of what it was originally.
Absolute numbers, on the other hand, usually are not that good since they are not comparable. (If your original runtime was 4,000,000ms then an improvement of 3688ms isn't that great.)
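The arithmetic above as a tiny helper, for the numbers in the question:

```python
def relative_improvement_pct(old, new):
    """Percentage reduction relative to the original value."""
    return (old - new) / old * 100

print(round(relative_improvement_pct(4000, 312), 1))  # 92.2 (% of runtime removed)
print(round(312 / 4000 * 100, 1))                     # 7.8  (% of original remaining)
```

Stating both figures ("reduced runtime by 92.2%, to 7.8% of the original") avoids the ambiguity of a bare percentage.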
See this link for some nice chart suggestions.
Comparison to Requirements
If I have requirements (response time, throughput), I like to color code the absolute numbers like so:
Green: <= 80% of the requirement (response time); >= 120% of the requirement (throughput).
No formatting: Meets the requirement.
Red: Does not meet the requirement.
Comparisons are interesting, but only if we have enough of them to see trends over time: is our performance steadily improving or degrading? Ultimately, the business only cares whether we're meeting the requirement. It's only when we don't that they ask for comparisons to previous releases.
Comparison of Benchmarks
If I'm comparing benchmarks to some baseline, then I like to use percentages, but only if the benchmark is a statistically significant change from the baseline.
Hardware Sizing
If I'm doing hardware sizing or capacity planning, then I like to express the performance as the absolute number plus the cost per transaction. For example:
System A: 1,000 transactions/second, $0.02/transaction
System B: 1,500 transactions/second, $0.04/transaction
Use whichever appears most impressive given the change. According to one method of calculation, that change sped up the query by roughly 1,300%, which looks more impressive than a 13x improvement, or
============= <-- old query
= <-- new query
Although the graph isn't a bad method.
If you can express the improvement in money, go for that. One piece of software I wrote many years ago saved a few engineers a little bit of time each day. After figuring in the cost of salary, benefits and overhead, it turned into savings of more than $12k per year for a small company.
-Adam
Rule of thumb: whichever sounds more impressive.
If you went from 10 tasks done in a period to 12, you could say you improved performance by 20%.
Saying you did two more tasks doesn't seem that impressive.
In your case, both numbers sound good, but try different representations and see what you get!
Sometimes graphics help when the improvement spans a number of factors but the combined result somehow does not look that impressive.
Example: you have 5 parameters A, B, C, D, E. You could make a bar chart with those 5 parameters and "before and after" values side by side for each one. That will certainly look impressive.
God, I'm starting to sound like my friend from marketing!
runs away screaming
You can make numbers and graphs say anything you want; the important thing is to make them say something meaningful and relevant to the audience you're presenting them to. If it's end users, you can show them differences in screen refreshes (something they understand); for managers, perhaps the reduced number of servers they'll need in order to support the application ($ savings); for finance, it's all about the $: how much did it save them? A general rule: the less technical the group, the more graphical and dramatic you need to be.
The fundamental equation of weight loss/gain is:
weight_change = convert_to_weight_diff(calories_consumed - calories_burnt);
I'm going on a health kick, and like a good nerd I thought I'd start keeping track of these things and write some software to process my data. I'm not attentive and disciplined enough to count calories in food, so I thought I'd work backwards:
I can weigh myself every day
I can calculate my BMR and hence how many calories I burn doing nothing all day
I can use my heart-rate monitor to figure out how many calories I burn doing exercise
That way I can generate an approximate "calories consumed" graph based on my exercise and weight records, and use that to motivate myself when I'm tempted to have a donut.
The thing I'm stuck on is the function:
int convert_to_weight_diff(int calorie_diff);
Anybody know the pseudo-code for that function? If you've got some details, make sure you specify whether we're talking calories, Calories, kilojoules, pounds, kilograms, etc.
Thanks!
Look at The Hacker's Diet and physicsdiet.com - this wheel has already been invented.
I think the conversion factor is about 3500 calories per pound. Google search (not the calculator!) seems to agree: http://www.google.com/search?q=calories+per+pound
I mean, if this is what you're looking for, you should be set.
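As a sketch, the function from the question using that rule of thumb (returning a float rather than an int, since fractional pounds matter; the 3500 kcal per pound figure is the usual approximation, not an exact constant):

```python
KCAL_PER_LB = 3500  # rule-of-thumb energy content of one pound of body fat

def convert_to_weight_diff_lb(calorie_diff_kcal):
    """Approximate weight change in pounds for a net calorie surplus (+)
    or deficit (-). Positive input -> weight gained, negative -> lost."""
    return calorie_diff_kcal / KCAL_PER_LB

print(convert_to_weight_diff_lb(3500))   # 1.0  (one pound gained)
print(convert_to_weight_diff_lb(-7000))  # -2.0 (two pounds lost)
```

Keep the units straight: food-label "Calories" are kilocalories, so this function takes kcal in and gives pounds out.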
Supposedly, by Einstein's mass-energy equivalence (from the theory of relativity), a calorie has an exact mass equivalent (about 0.000000000000046 grams).
With this said, something like this should work:
double convert_to_weight_diff(int calories)
{
    /* mass-energy equivalence: ~4.6e-14 grams per calorie; must return
       a double, since an int would truncate the result to zero */
    return 0.000000000000046 * calories;
}
That would return, in grams, how much weight was lost. To make it more practical, I would find out how many calories correspond to, say, half a pound.
From what I read, that is what you are trying to do. Tell me if not.
I don't know how accurate this is, since it's Wikipedia, but it looks like a good basis for a rule-of-thumb-o-meter.
http://en.wikipedia.org/wiki/Food_energy
Assuming you only burn fat, the conversion is as follows:
to burn 1 g of fat, you have to burn about 9 kcal.
Source: http://en.wikipedia.org/wiki/Food_energy
I think everyone else has summed it up well, however there is something (maybe more) that you have forgotten:
water and stimulants (you're a developer, right? So caffeine is a standard drug, like spice is in Dune)
For example, if I eat 2000 cal of food in a day, and through metabolism and exercise I burn 1750 (I get next to no exercise at the moment; it should be 2500 or so), I have 350 cal left over, which is stored as fat, so I'm up about +50 grams (where 3500 cals == about 500 g of fat. Not sure if that's right, but let's go with it for the moment).
If I do the exact same thing tomorrow, but I have 2 cups of coffee (keep in mind my coffee of choice is Espresso with nothing else in it, so close to zero cals), I have to take two things into account:
caffeine ups my metabolism, so I burn more: my burn may be +100 cals
caffeine is a diuretic, so I'll lose more water: my WEIGHT will be down maybe 200 g, depending on my body's reaction to it.
So, I think for a basic idea, your proposal is a good one, but once you start getting more specific, it gets NASTY complex.
Another example: if you burn 500 cals during a RUN, you will continue to burn cals for a number of hours afterwards. If you burn 200 cals through weight training, you'll get the same post-exercise burn (maybe more), and your baseline metabolic burn (how much you burn just sitting on your backside) will be higher until that muscle atrophies back to whatever it was before.
I think you are right though: not really an SO question, but fun nonetheless.
I would also suggest finding a different measurement than BMI for your considerations, because it doesn't take body composition into account. For example, I remember seeing an article about Evander Holyfield being considered "dangerously obese" based on his high BMI, yet he looked like he had barely an ounce of fat on him. Anyway, just a consideration.