Can I renew the 4-year hosting plan at Hostinger after it expired? - hosting

I'm about to build a website for personal use and I found Hostinger to be very cheap. The plan I chose is $2.15 a month for 48 months, but I haven't been able to determine whether that plan can be renewed after those initial 4 years are up. What I mean is: after 4 years, can I buy another 4-year plan for $2.15/month, or do I have to pay the full price (~$11/month) once the initial 4 years are done?
Thank you

Renewal prices are a bit higher than the introductory price, but not the full price. There are discounts on renewals too, though the initial cost is lower. I have a Premium account with them, and renewing for 4 more years currently costs $3.49/month.
Keep in mind that prices can still change over 4 years due to economic conditions. That said, these renewal prices really aren't bad.

You can renew it after the specified period, and in the meantime you can also compare other vendors' hosting plans in case one turns out to be cheaper than your current plan. But yes, you can renew the existing plan after 4 years.

Related

Preferred recommendation system

I am implementing an employee planning solution where staff can have their preferred work times, and the system can also recommend the best times a staff member should work.
To provide working-time recommendations to a staff member, I'd like a recommendation system that can recommend a number of working shifts based upon:
The organisation's staff requirements. These are interval-based (1 hour), with min/max staff needed for each interval (e.g., between 13:00 and 14:00 I need a minimum of 4 and a maximum of 6 staff).
Rules that a recommended shift has to follow (e.g., any recommended shift must not exceed max_allowed_work_hours_in_week; if an employee has completed 35 hours by Thursday and max_allowed_work_hours_in_week is 40, I can only recommend a shift of up to 5 hours).
Recommendations also need to respect my historical shifts (e.g., I like to work evenings on Friday and my history says so, so a good recommendation for Friday would be, you guessed it, an evening shift).
I have not done much homework, as everything leads to the Hadoop ecosystem, and I have about as much idea of Hadoop as a toddler (non-prodigy) has of quantum physics. Anyhow, here's what I've come up with:
I could use Apache Spark or Mahout, or standalone Apache PredictionIO (I'm in the Java world).
I know of constraint solvers like OptaPlanner that I could press into solving this problem, but I believe it's not the right tool for this job (though I could be wrong).
My question is: what system would you recommend for generating such recommendations, and are Spark/PredictionIO the best tools for this job?
I am implementing an employee planning solution where staff can have their preferred work times, and the system can also recommend the best times a staff member should work.
Your use case is really similar to the employee rostering example from OptaPlanner. Each employee has their own preferred work times, and these are written down in a contract between the employee and the hospital.
The organisation's staff requirements: interval-based (1 hour), with min/max staff needed for each interval (e.g., between 13:00 and 14:00 I need a minimum of 4 and a maximum of 6 staff).
The example has the same requirement: every shift has a minimum number of staff needed.
Rules that a recommended shift has to follow (e.g., any recommended shift must not exceed max_allowed_work_hours_in_week; if an employee has completed 35 hours by Thursday and max_allowed_work_hours_in_week is 40, I can only recommend a shift of up to 5 hours).
Those rules are all captured in the employee's contract, e.g. an employee must work a minimum of 35 hours per week, or must work 3 consecutive days per week.
Recommendations also need to respect my historical shifts (e.g., I like to work evenings on Friday and my history says so, so a good recommendation for Friday would be an evening shift).
This could be added as a new soft constraint whenever the employee has historical data.
I have not done much homework, as everything leads to the Hadoop ecosystem, and I have about as much idea of Hadoop as a toddler (non-prodigy) has of quantum physics. Anyhow, here's what I've come up with: I could use Apache Spark or Mahout, or standalone Apache PredictionIO (I'm in the Java world). I know of constraint solvers like OptaPlanner that I could press into solving this problem, but I believe it's not the right tool for this job (though I could be wrong).
I think you could use Hadoop to store and process your big data, then feed the processed data to OptaPlanner to get an optimized result. If you want to build real-time planning, Apache Spark could be used to quickly process new data and feed it to OptaPlanner to get the latest optimized result.
So I really recommend you go and try the nurse rostering example from OptaPlanner. Hope this helps; kind regards.
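To make the hard/soft constraint idea above concrete, here is a minimal Java sketch of scoring a single candidate shift. This is not the actual OptaPlanner API; the class name, method names, and the simple preference-fraction score are all illustrative assumptions. It shows the essential split: a hard constraint (the weekly-hours cap from the question) disqualifies a shift outright, while a soft score (matching historical preferences) only ranks the remaining candidates.

```java
// Illustrative sketch (NOT the OptaPlanner API): hard constraints must hold,
// soft constraints express preferences used for ranking.
public class ShiftScorer {

    // Hard constraint: a recommended shift must not push the employee past
    // the weekly cap (e.g. 35 hours already worked, cap 40 -> at most 5 more).
    public static boolean withinWeeklyCap(int hoursWorkedThisWeek,
                                          int shiftHours,
                                          int maxAllowedWorkHoursInWeek) {
        return hoursWorkedThisWeek + shiftHours <= maxAllowedWorkHoursInWeek;
    }

    // Soft score: reward shifts matching historical preference, e.g. the
    // fraction of past Fridays on which this employee worked an evening shift.
    public static double preferenceScore(int pastMatchingShifts, int pastTotalShifts) {
        return pastTotalShifts == 0 ? 0.0 : (double) pastMatchingShifts / pastTotalShifts;
    }

    // Combined score: a shift violating a hard constraint is never recommended.
    public static double score(int hoursWorked, int shiftHours, int cap,
                               int pastMatching, int pastTotal) {
        if (!withinWeeklyCap(hoursWorked, shiftHours, cap)) {
            return Double.NEGATIVE_INFINITY;
        }
        return preferenceScore(pastMatching, pastTotal);
    }
}
```

In OptaPlanner proper, the same idea is expressed declaratively (hard and soft score levels), and the solver searches the assignment space instead of scoring one shift at a time.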

The Perfect Checklist - AGILE [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I'm starting an interesting new project and, with my team, we are looking for a way to define our checklist, so that we have ideas that are as clear as possible about what we have to do to release a feature, starting from a user story.
I've found many interesting resources:
Scrum Checklist
ALL ABOUT AGILE
...and something else
So, my proposal is to start a discussion with someone who is experienced in this area.
I hope someone can help me!
There are various things to consider when choosing to develop a project using Agile methodology.
Roles:
Product Owner:
Defines features of the product
Decides on release date and content
Prioritizes and adjusts features every sprint
Scrum Master (typically a developer):
Manages the project
Ensures team is fully functional
Enables close cooperation across all roles and functions
Shields team from external interferences
Ideal Scrum team size ~7 people.
Stages:
1) Create a product backlog (list of user stories):
Using a list of requirements given by the client, create a list of user stories.
2) Conduct a planning poker session:
Only developers are involved in this session, clients may watch but cannot interact.
The purpose of planning poker is to assign a "story point" value to each of the user stories.
A story point value is the estimated "effort" of developing a story.
Set up a series of poker cards that range from 0 to 100, the series of cards I am familiar with are 0, 1/2, 1, 2, 3, 5, 8, 13, 20, 40 and 100.
Each developer is given a series of poker cards. A user story is read aloud to the group, and each person has a few seconds to pick a story point value. The values picked are shown at the same time. If a consensus has been reached, move on to the next story. If not, there should be a quick discussion of why each person picked their value, and another round of planning poker begins.
If the poker value selected is greater than 20, you should consider breaking the user story down into smaller stories.
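The planning poker round described above can be sketched in a few lines of Java. This is a hypothetical helper (the class and method names are mine, not from any tool): a round reaches consensus when everyone shows the same card, and a story estimated above 20 points is a candidate for splitting.

```java
// Hypothetical sketch of the planning poker rules described above.
public class PlanningPoker {

    // The card series from the answer (the 1/2 card is omitted here
    // to keep values as integers).
    static final int[] DECK = {0, 1, 2, 3, 5, 8, 13, 20, 40, 100};

    // A round reaches consensus when every developer shows the same card.
    public static boolean consensus(int[] votes) {
        for (int v : votes) {
            if (v != votes[0]) return false; // disagreement -> discuss, re-vote
        }
        return true;
    }

    // Stories estimated above 20 points should be broken into smaller stories.
    public static boolean shouldSplit(int storyPoints) {
        return storyPoints > 20;
    }
}
```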
3) Sprint planning:
Sprint Backlog is created
Team selects items from product backlog which they can commit to completing
Tasks are identified and each is estimated
4) Sprint:
Ideal duration = 2-4 weeks
Daily scrum meeting:
Ideally held early in the day. A quick, stand-up meeting, managed by the Scrum Master. Three questions are asked at each meeting: What did you do yesterday? What are you going to do today? Is there anything in your way?
Design, development and testing done throughout sprint.
5) Sprint review:
Scrum team present what they accomplished during the sprint (demo new features)
Attendees - Scrum Team, Product Owner, stakeholders
What went well, problems, how problems were resolved
Demonstrate what user stories are "done done"
Receive feedback from Product Owner
6) Sprint Retrospective:
Occurs after the Sprint Review and before planning for the next sprint
Look at what is and isn't working
Inspect how the Sprint went
Create plan for making improvements on how the scrum team operates
Develop better processes/practices
7) Repeat Stage 3
Plan next sprint using same processes as before.

building an automated timetable scheduling app

I want to build an automated timetable scheduling app, but I'm finding it difficult to get my logic right. This is what I have done so far.
The system is meant for a university, but I'm just considering one faculty for a start.
There is an existing list of all courses and their credit hours. Most of the courses are 3 credit hours and the rest are 2 credit hours. Every 3-credit-hour course must be split across two separate periods (days) in a 2:1 proportion.
There are 12 hours in a normal class day, but each student or class can have at most 6 hours and at least 1 hour a day.
Each student from first year to third year takes at least 5 and at most 8 courses.
Fourth/final year students offer 5 courses.
Each course must be taken by a particular year group, and each course must therefore be assigned a location (classroom) where it will be taught.
Up to this stage I am mixing ideas and not getting anything right. If anyone could help, or has a better algorithm for building this in PHP, I would be glad.
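One way to untangle the logic is to encode each rule from the list above as its own small, testable check before worrying about the search algorithm. The sketch below (in Java for illustration; the same structure carries over to PHP) is a hypothetical starting point covering two of the stated rules: the 2:1 split of 3-credit courses, and the 1-to-6-hours-per-day bound.

```java
// Hypothetical encoding of two of the timetable rules stated above.
public class TimetableRules {

    // Rule: every 3-credit-hour course is split across two days in a
    // 2:1 proportion (a 2-hour period and a 1-hour period).
    // 2-credit courses stay in a single period.
    public static int[] splitCreditHours(int creditHours) {
        if (creditHours == 3) {
            return new int[]{2, 1};
        }
        return new int[]{creditHours};
    }

    // Rule: each student or class has at least 1 and at most 6 hours a day.
    public static boolean validDailyLoad(int hoursOnDay) {
        return hoursOnDay >= 1 && hoursOnDay <= 6;
    }

    // Rule: students in years 1-3 take between 5 and 8 courses.
    public static boolean validCourseCount(int yearGroup, int courseCount) {
        if (yearGroup == 4) return courseCount == 5; // final year: exactly 5
        return courseCount >= 5 && courseCount <= 8;
    }
}
```

With the rules isolated like this, a simple backtracking assignment loop (or a constraint solver) can call them to reject invalid partial timetables.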

Predicting opening and closing of an organisation

Given training data consisting of an organisation's meter readings, recorded at 15-minute intervals each day: for some N days we will be provided with data.
Now, with the help of this data, we need to tell whether on a particular day the organisation was closed or open. I'd like to know of any links that could help here, if someone has worked in this field.
By closed I mean that on that day the consumption of electricity will obviously be almost constant, though this is just a single feature to take into account.
So what is the best way to predict this?
A single threshold may not suit every kind of organisation well, but there is certainly a minimum baseline consumption for each.
You might also want to use the information from the past 15 minutes to increase reliability, adding one more input to your classifier.
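As a baseline before any learned classifier, the threshold idea can be sketched directly. The class below is a hypothetical illustration (names and the threshold value are assumptions, not from the question): with readings every 15 minutes there are 96 per day, and a day whose mean consumption stays close to the organisation's baseline minimum is flagged as closed. In practice the baseline and threshold would be learned per organisation from labelled days.

```java
// Hypothetical baseline classifier: flag a day as "closed" when its mean
// consumption stays near the organisation's minimum (baseline) consumption.
public class OpenClosedClassifier {

    // One reading every 15 minutes -> 96 readings per day.
    public static double dailyMean(double[] readings) {
        double sum = 0.0;
        for (double r : readings) {
            sum += r;
        }
        return sum / readings.length;
    }

    // Closed when mean consumption exceeds the baseline by less than the
    // threshold. Both baseline and threshold should be fit per organisation.
    public static boolean isClosed(double[] readings, double baseline, double threshold) {
        return dailyMean(readings) - baseline < threshold;
    }
}
```

A learned classifier could then use the same quantities (daily mean, variance, previous-interval reading) as input features rather than a hand-set threshold.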

Effort Estimation based on Use Case Points [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 3 years ago.
Improve this question
As of now I have done effort estimation based on experience and recently using function points.
I am now exploring UCP. I read this article, http://www.codeproject.com/KB/architecture/usecasep.aspx, and then checked various other articles on Use Case Points (UCP), but I am not able to work out exactly how it works and whether it is accurate.
For example, I have a login functionality where user provides userid and password and I check against a table in database to allow or deny login. I define a user actor and Login as a Use Case.
As per UCP, I categorize the Login use case as Simple and the GUI interface as Complex. From the UCP factor table I get 5 and 3, so the total is 15. After applying the technical factor and environmental factor adjustments it becomes 7. If I take the productivity factor as 20, I get 140 hours. But I know it will take at most 30 hours, including documentation and testing effort.
Am I doing something wrong in defining the use case here? UCP says that if the interface is a GUI then it's complex, but here the GUI is easy enough, so should I downgrade that factor? Also, the factor for Simple is 5; should I define another level, Very Simple? But am I not then overcomplicating matters?
Ironically, the prototypical two-box logon form is much more complicated than a two-box CRUD form, because the logon form needs to be secure while the CRUD form only needs to save to a database table (and read, update, and delete).
A logon form needs to decide where to redirect to, how to cryptographically secure an authentication token, whether and how to cache roles, and how (or whether) to deal with dictionary attacks.
I don't know what this converts to in UCP points; I just know that the logon screen in my app has consumed much more time than a form with a similar number of buttons and boxes.
The last time I was encouraged to count function points, it was a farce, because no one had the time to set up a "function points court" to get rulings on hard-to-measure things, especially ones that didn't fall neatly into the model that function point counting assumes.
Here's an article talking about Use Case Points via Normalized Use Case. I think the one factor overlooked in your approach is productivity, which is supposed to be based on past projects. 20 seems to be the average; however, if you are very productive (there's a known 10-to-1 ratio between moderate and good programmers), the productivity could be 5, bringing the UCP estimate close to what you think it should be. I would suggest looking at past projects, calculating the UCP, getting the total hours, and determining what your productivity really is. Productivity, being a key factor, needs to be calculated for individuals and teams for it to be used effectively in estimation.
Part of the issue may be how you're counting transactions. According to the author of UCP, a transaction is a "round trip" from the user to the system and back to the user; a transaction is finished when the system awaits a new input stimulus. In this case, a logon is probably just 1 transaction, unless there are several round trips to and from the system.
Check out this link for more info...
http://www.ibm.com/developerworks/rational/library/edge/09/mar09/collaris_dekker/index.html
First, note that in a previous work Ribu stated that the effort for 1 UCP ranges from 15 to 30 hours (see http://thoughtoogle-en.blogspot.com/2011/08/software-quotation.html for some details).
Second, it is clear that this kind of estimation, like Function Points, is more accurate when there are many use cases, not just one. You are not considering, for example, project startup, project management, creation of environments, etc., which are all packed into those 20 hours.
I think there is something wrong in your computation: "I get 5 and 3 so the total is 15". UAW and UUCW must be added, not multiplied.
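The corrected arithmetic can be written out explicitly. The standard UCP chain is: UUCP = UAW + UUCW (added, as the answer says), then UCP = UUCP × TCF × ECF, then effort = UCP × hours-per-UCP. In the sketch below the actor weight 3 and use case weight 5 come from the question; the adjustment factors (TCF = 0.875, ECF = 1.0) are illustrative round-number assumptions, not values from the question.

```java
// Standard UCP arithmetic: weights are ADDED to form the unadjusted count,
// which is then MULTIPLIED by the adjustment factors.
public class UcpEstimate {

    public static double effortHours(double uaw,   // unadjusted actor weight
                                     double uucw,  // unadjusted use case weight
                                     double tcf,   // technical complexity factor
                                     double ecf,   // environmental complexity factor
                                     double hoursPerUcp) { // productivity factor
        double uucp = uaw + uucw;       // e.g. 3 + 5 = 8, not 3 * 5 = 15
        double ucp = uucp * tcf * ecf;  // adjusted use case points
        return ucp * hoursPerUcp;
    }
}
```

With the question's weights (3 and 5) and the illustrative factors above, this gives 8 × 0.875 × 1.0 × 20 = 140 hours, showing that even with the addition fixed, the productivity factor dominates the estimate.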
