Why does composer.phar need to be updated after 60 days? - composer-php

I just want to understand why the composer.phar file needs to be updated after 60 days.

Why does composer.phar need to be updated after 60 days?
Composer doesn't need to be updated after 60 days. But an update is recommended after that time!
Why 60 days?
I can't speak for the core developers on this decision, so I can only give you an educated guess:
Composer used a shorter interval for recommending updates during its development and alpha stages. The former message was "Warning: This development build of composer is over 30 days old. It is recommended to update...".
Users were encouraged to pull updates frequently to stay on the latest, bleeding-edge version. A short update interval lets users report bugs and issues back to the project quickly. This results in a very fast feedback and development cycle, with fast roll-outs of new versions to address urgent issues.
Now that Composer has a stable release, the interval was increased to 60 days, because (I guess) a very fast feedback cycle is no longer needed. This reflects that the project is no longer in a pre-release/alpha development mode, but has stabilized. Stats might lie, but you might also take a look at their issue tracker, where you will find a massive number of closed tickets and only a few open ones: 120 open to 3,356 closed issues (02-06-2016).
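For illustration only, here is a minimal Python sketch of the kind of staleness check behind that warning: compare the phar's build date to the current date against a fixed threshold. Composer itself is written in PHP and its internals may differ; the 60-day constant simply mirrors the message quoted above.

from datetime import datetime, timedelta

# Illustrative sketch, not Composer's actual code: warn when a build is older
# than a fixed threshold (60 days for stable builds, 30 days for the old dev builds).
STALE_AFTER = timedelta(days=60)

def needs_update_warning(build_date, now=None):
    # Return True if the build is old enough to trigger the update recommendation.
    now = now or datetime.utcnow()
    return now - build_date > STALE_AFTER

# A phar built 75 days ago would trigger the recommendation.
print(needs_update_warning(datetime.utcnow() - timedelta(days=75)))  # True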

Related

How can I keep track of my CircleCI build minutes usage?

I'm currently trialling the free tier of CircleCI on their Linux build machines. They claim to offer:
1,500 build minutes per month
1 container
1 concurrent build
I've added some repos and done some builds, but there does not seem to be any way to keep track of my minutes usage. In the short term I doubt I will go over this generous limit, but it might be possible in the future, and I'd like to keep an eye on that before I put too much effort into building around this CI system.
The only thing that looks relevant is the screen Settings > Plan > Overview, which looks like this:
It says my usage data will be updated within 30 days. Does that mean I have to wait for the end of the billing period every month to see what my usage was? I want to see my usage data in real time.
Aha, I found an old answer here, in a customer support forum, from an employee in October 2016. It seems that, at least at that time, this feature was not supported:
From time to time, we look at the logs and send emails to organizations that do not have paid plans and whose aggregate builds are more than 1,500 minutes in a given month. To date, we haven't had the need (or desire) to shut anyone down for going over. Most folks either upgrade or realize that they unintentionally left a webhook running on a project that builds constantly (which uses AWS resources that we still incur costs against!)
As an organization, CircleCI cares that users get value out of their CI system. If you are getting value and using us for work that matters to you, hopefully you'll consider our paid options (you will get more containers for concurrency AND parallelism, you'll get engineer support, and features like Insights!).
Finally - we do have plans to show linux minutes in-app in the future but it's not on the current quarter's roadmap.
However, a day or so after I took the screenshot in the question, the data has now populated:
I will keep an eye on this to see if it is indeed real-time, but it looks correct to me.
Update
The usage facility has been non-operational for two weeks, and others have reported problems as well.
While this may be fixed in due course, it may be worth bearing in mind that it seems to be fragile, so open your Network tab if you're having problems!

Updating the complication gradually degrades the Apple Watch app performance in watchOS 3

I've been stressing over this issue for about a week now, trying to pinpoint the source of a slow but steady Apple Watch app performance degradation. Over the course of about two days, my app's UI becomes progressively more sluggish. I've narrowed it down to the complication update code. Even if I strip the complication update down to an absolute bare minimum, the problem still happens, albeit more slowly than if I update the complication with some actual data. I update the complication every 10 minutes. Once the new data comes in, I simply execute
// Ask the complication server to reload the timeline of every active complication.
for (CLKComplication *comp in [CLKComplicationServer sharedInstance].activeComplications) {
    [[CLKComplicationServer sharedInstance] reloadTimelineForComplication:comp];
}
which in turn calls:
- (void)getCurrentTimelineEntryForComplication:(CLKComplication *)complication withHandler:(void(^)(CLKComplicationTimelineEntry * __nullable))handler {
...
}
This works fine and the new data displays, but when this is repeated a few dozen times, the UI responsiveness of the main app begins to noticeably degrade, and when it's repeated about a hundred times (which happens in less than a day with 10-minute updates) the UI slows down significantly.
I have nothing fancy going on with the complication structure: no time travel, just displaying the current data, and everything is set up for that. To make sure I'm not looking in the wrong place, I've made a test that reloads the timeline every second, and in this test my getCurrentTimelineEntryForComplication looks like this:
- (void)getCurrentTimelineEntryForComplication:(CLKComplication *)complication withHandler:(void(^)(CLKComplicationTimelineEntry * __nullable))handler {
    // Return no entry at all; nothing is built or displayed here.
    handler(nil);
}
so there's literally nothing going on there, just calling the handler with nil. Yet, even in this scenario, after a hundred or so timeline reloads, the main app's UI slows down visibly.
Some other things to note:
If I'm not updating the complication, the app's UI performance never degrades, no matter how many times I open it, or how long I use it, or how many times the data fetching code runs in the background.
When testing this in the simulator, I can't get the performance degradation to happen, but I can consistently see a small but steady memory leak coming from the complication update (again, this happens no matter how simple an update I do inside the getCurrentTimelineEntryForComplication method).
Has anyone else noticed this, and is there any hope to deal with it? Am I doing something wrong? For the time being I make sure only to update the complication if the data has changed, but that just postpones the problem, rather than solving it.
Oct 24 edit
I've done more careful testing on a real watch, and while for some reason I didn't notice the memory leak on a real watch before, I have now definitely seen it happen. The real device mirrors the issue seen on the simulator completely, just with a different initial amount of memory allocated.
Again, all I do is call reloadTimelineForComplication in a constant loop, and the complication is updated with a single line of text from a cached data object; the complication controller is otherwise stripped to a bare minimum. When the complication is removed from the watch face, the memory leak predictably stops.
My main project is written in Objective-C, but I have repeated the test with a test project written in Swift, and there is no difference. Also, the problem persists with the latest Xcode 8.1 GM and the watchOS 3.1 beta supplied with its simulator, as well as when running on a real watch with watchOS 3.1 installed.
Jan 24, 2017 edit
Sadly, the issue persists in watchOS 3.1.3, completely unchanged. In the meantime I've contacted Apple's code-level support, sent them sample code, and they've confirmed that the problem exists, and told me to file a bug report. I did file a bug report about two months ago, but up until now it remains unclassified, which I guess means no one looked at it yet.
Jan 31, 2017 edit
Apple has fixed the problem in watchOS 3.2 beta 1. I've been testing it both in the simulator and on a real watch. Everything's working great, with no memory leaks or performance degradation anymore. In the end there was no workaround for this until they decided to fix it.
I noticed that when using the native Calendar complication, everything I do becomes very sluggish. So maybe it's a bug in the new watchOS.
After using the Calendar complication for a couple of days, it's impossible to use that watch face. Even if I change to another complication and switch back to the Calendar one, it doesn't "reset" the performance. The only thing that fixes it is rebooting the watch (or forgetting about the Calendar and using another complication instead).

What’s the ROI of Continuous Integration?

Currently, our organization does not practice Continuous Integration.
In order for us to get a CI server up and running, I will need to produce a document demonstrating the return on the investment.
Aside from cost savings by finding and fixing bugs early, I'm curious about other benefits/savings that I could stick into this document.
My #1 reason for liking CI is that it helps prevent developers from checking in broken code, which can sometimes cripple an entire team. Imagine I make a significant check-in involving some db schema changes right before I leave for vacation. Sure, everything works fine on my dev box, but I forget to check in the db schema change script, which may or may not be trivial. Well, now there are complex changes referring to new/changed fields in the database, but nobody who is in the office the next day actually has that new schema, so now the entire team is down while somebody looks into reproducing the work I already did and just forgot to check in.
And yes, I used a particularly nasty example with db changes but it could be anything, really. Perhaps a partial check-in with some emailing code that then causes all of your devs to spam your actual end-users? You name it...
So in my opinion, avoiding a single one of these situations will make the ROI of such an endeavor pay off VERY quickly.
If you're talking to a standard program manager, they may find continuous integration to be a little hard to understand in terms of simple ROI: it's not immediately obvious what physical product that they'll get in exchange for a given dollar investment.
Here's how I've learned to explain it: "Continuous Integration eliminates whole classes of risk from your project."
Risk management is a real problem for program managers that is outside the normal ken of software engineering types who spend more time writing code than worrying about how the dollars get spent. Part of working with these sorts of people effectively is learning to express what we know to be a good thing in terms that they can understand.
Here are some of the risks that I trot out in conversations like these. Note, with sensible program managers, I've already won the argument after the first point:
Integration risk: in a continuous integration-based build system, integration issues like "he forgot to check in a file before he went home for a long weekend" are much less likely to cause an entire development team to lose a whole Friday's worth of work. Savings to the project from avoiding one such incident = number of people on the team (minus one, for the villain who forgot to check in) * 8 hours per work day * hourly rate per engineer (see the rough calculation after this list). Around here, that amounts to thousands of dollars that won't be charged to the project. ROI win!
Risk of regression: with a unit test / automatic test suite that runs after every build, you reduce the risk that a change to the code breaks something that used to work. This is much more vague and less assured. However, you are at least providing a framework wherein some of the most boring and time-consuming (i.e., expensive) human testing is replaced with automation.
Technology risk: continuous integration also gives you an opportunity to try new technology components. For example, we recently found that Java 1.6 update 18 was crashing in the garbage collection algorithm during a deployment to a remote site. Due to continuous integration, we had high confidence that backing down to update 17 had a high likelihood of working where update 18 did not. This sort of thing is much harder to phrase in terms of a cash value but you can still use the risk argument: certain failure of the project = bad. Graceful downgrade = much better.
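To put a number on that first risk, here is the savings formula from the integration-risk bullet as a tiny Python sketch; the team size and hourly rate are invented example figures, not numbers from the answer.

# Rough cost of one "forgot to check in before the long weekend" incident.
# All inputs are hypothetical example values; plug in your own.
team_size = 8        # people on the team
hours_lost = 8       # one full work day
hourly_rate = 75     # fully loaded cost per engineer-hour, in dollars

blocked_engineers = team_size - 1   # minus the one who forgot to check in
savings = blocked_engineers * hours_lost * hourly_rate
print("Avoiding one such incident saves roughly $%d" % savings)  # $4200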
CI assists with issue discovery. Measure the amount of time currently that it takes to discover broken builds or major bugs in the code. Multiply that by the cost to the company for each developer using that code during that time frame. Multiply that by the number of times breakages occur during the year.
There's your number.
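As a worked example of that multiplication, with made-up placeholder numbers (the answer leaves the actual measurements to you):

# Annual cost of late breakage discovery, following the recipe above.
# Replace these placeholders with your own measurements.
hours_to_discover = 4       # average time a broken build or major bug goes unnoticed
devs_affected = 10          # developers using that code during that window
hourly_cost = 75            # cost to the company per developer-hour, in dollars
breakages_per_year = 50

annual_cost = hours_to_discover * devs_affected * hourly_cost * breakages_per_year
print("Estimated annual cost of late discovery: $%d" % annual_cost)  # $150000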
Every successful build is a release candidate - so you can deliver updates and bug fixes much faster.
If part of your build process generates an installer, this allows a fast deployment cycle as well.
From Wikipedia:
when unit tests fail or a bug emerges, developers might revert the codebase back to a bug-free state, without wasting time debugging
developers detect and fix integration problems continuously, avoiding last-minute chaos at release dates (when everyone tries to check in their slightly incompatible versions)
early warning of broken/incompatible code
early warning of conflicting changes
immediate unit testing of all changes
constant availability of a "current" build for testing, demo, or release purposes
immediate feedback to developers on the quality, functionality, or system-wide impact of the code they are writing
frequent code check-in pushes developers to create modular, less complex code
metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and features complete) focus developers on developing functional, quality code, and help develop momentum in a team
a well-developed test suite is required for best utility
We use CI (two builds a day) and it saves us a lot of time by keeping working code available for test and release.
From a developer's point of view, CI can be intimidating when the automatic build result, sent by email to the whole world (developers, project managers, etc.), says:
"Error in loading DLL Build of 'XYZ.dll' failed." and you are Mr. XYZ and they know who you are :)!
Here's my example from my own experiences...
Our system has multiple platforms and configurations, with over 70 engineers working on the same code base. We suffered from somewhere around 60% build success for the less commonly used configs and 85% for the most commonly used, and there was a constant daily flood of e-mails about compile errors or other failures.
I did some rough calculations and estimated that we lost an average of an hour a day per programmer to bad builds, which totals nearly 10 man-days of work every day. That doesn't factor in the costs incurred in iteration time when programmers refuse to sync to the latest code because they don't know if it's stable; that costs us even more.
After deploying a rack of build servers managed by TeamCity, we now see an average success rate of 98% on all configs, the average compile error stays in the system for minutes rather than hours, and most of our engineers are now comfortable staying at the latest revision of the code.
In general I would say that a conservative estimate of our overall savings was around 6 man-months of time over the last three months of the project, compared with the three months prior to deploying CI. This argument has helped us secure resources to expand our build servers and focus more engineer time on additional automated testing.
Our biggest gain is from always having a nightly build for QA. Under our old system, at least once a week each product would find out at 2 AM that someone had checked in bad code. That meant no nightly build for QA to test with; the remedy was to send release engineering an email. They would diagnose the problem and contact a dev. Sometimes it took as long as noon before QA actually had something to work with. Now, in addition to having a good installer every single night, we actually install it on VMs of all the different supported configurations every night. So now when QA comes in, they can start testing within a few minutes. Under the old way, QA came in, grabbed the installer, fired up all the VMs, installed it, then started testing. We save QA probably 15 minutes per configuration to test on, per QA person.
There are free CI servers available, and free build tools like NAnt. You can implement it on your dev box to discover the benefits.
If you're using source control, and a bug-tracking system, I imagine that consistently being the first to report bugs (within minutes after every check-in) will be pretty compelling. Add to that the decrease in your own bug-rate, and you'll probably have a sale.
The ROI is really the ability to provide what the customer wants. This is of course very subjective, but when implemented with involvement of the end customer, you will see that customers start appreciating what they are getting, and hence you tend to see fewer issues during User Acceptance.
Would it save cost? Maybe not.
Would the project fail during UAT? Definitely no.
Would the project be closed midway? A high possibility, when the customers find that it would not deliver the expected result.
Would you get real-time, real data about the project? Yes.
So it helps in failing faster, which helps mitigate risks earlier.

Techniques for keeping your projects on the latest version [closed]

I'm working on a new project and we're using a pretty nice stack. NHibernate, Spring, MVC... the list goes on.
One thing I've noticed, though, is that in the 6 months since we started, there has been a new release of NHibernate, a new release of a third-party control toolkit, and Windows 7 is on the horizon.
We've had problems before where being stuck on an old version of a technology has cost us dearly, so I'm wondering what are some techniques we could use to help ensure that our transitions to the latest versions of things are as painless as possible?
Quite simply, make it a priority and upgrade as you go along. If you keep up to date with the latest version, there will be fewer breaking changes than if you have to update five versions at a time.
Perhaps create a branch and do a test update to betas so that you are aware of forthcoming problems when that version RTMs (if using betas is a concern to you).
I agree with the other comments here about updating often. If you wait too long, it will be enough work that you will notice it in the project's productivity.
The way we do it is the following.
One person on the team gets the latest version and makes sure that all tests run.
That person then upgrades whatever DLLs / tools are to be upgraded.
He also documents the upgrade.
Build all the code and make any changes necessary for it to build.
Run all tests and make sure they pass.
Do a manual smoke test of the UI.
Send the upgrade doc to the rest of the team.
Check in, and make sure it builds on the build server.
This way we do not lose team productivity during the upgrade. Note this would be much more difficult without unit tests.
"update early, update often"
If you wait, it will become more and more difficult, so put a high priority on updating the system. Developers mostly like to be on the bleeding edge, so they will not mind too much; the key challenge is to sell this idea to management.
It is always good to take a step-by-step approach where you upgrade one tool at a time. Then it is also easier if you need to roll back to an older version. A big-bang approach is more difficult, and lots of things can go wrong.
Let's be realistic: every update will cost you and your team time to make the switch, but after a while the team learns to deal with it, and the level of stress when switching versions is much lower.
Speaking from a management point of view, don't upgrade unless there is a compelling reason. You have to look at what the upgrade brings to your project. If there are no benefits to the upgrade, don't do it. Obviously this isn't a hard and fast rule, but most teams I know don't have time to spend upgrading systems for no reason, they are too busy with feature requests and bug fixes. I recommend working in upgrades on the following basis:
The new version runs [significantly] faster or more efficiently, and your customers/clients will see this improvement or it will reduce your imminent hardware needs.
Features have been added that you or your customers/clients want and can take [immediate] advantage of.
Security enhancement for a security flaw that affects your current or immediate-future architecture.
License/support reasons. If you are at the end of your contract then you will probably want to make the final jump to the last version of the software that you are entitled to, while you still have support for the upgrade. Alternately, if you are on such an old version of the software that finding support documentation for it is difficult, then upgrading is certainly called for.
Some aspect of the project that you are working on is directly impacted by the software that could be upgraded. If you are already going to be working with it and testing the functionality, it is probably a good time to upgrade and [probably] won't add significant load to the project.
Major changes. If your project or the software it relies on have undergone major changes, then it is probably a good time to add the update(s) to your project plan. Major changes imply a more difficult upgrade path and should be pursued on a scheduled basis rather than shoehorned in at the last minute due to a needed fix or enhancement.
Specific reasons not to upgrade:
Software, installation, and regression testing costs money. Hence the need for a compelling reason to upgrade.
New software is often buggy or has unknown "features." For this reason many choose to stay one version behind the latest release.
Newer versions can often be slower than previous versions, this is especially true for small updates and patches.
Compatibility issues. Upgrades break things; it is better to skip as many incremental upgrades as possible, to avoid updates that break compatibility in ways that may be fixed again in the next update.
I recommend keeping a list of all the software that your project uses, along with its version and last upgrade date (and other important information such as licensing and support info). Evaluate every item on this list once a year to ensure that you don't miss any updates matching the reasons to upgrade above. Software on this list with an old version/date and a newer version available may be incentive enough to convince management that an upgrade should be done.
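One lightweight way to keep such a list is a small machine-readable manifest plus a once-a-year review check. This is a minimal Python sketch; the package names, versions, and field names are invented examples, not real project data.

from datetime import date

# Hypothetical dependency manifest: name -> version, last upgrade date, licensing info.
dependencies = {
    "NHibernate": {"version": "x.y.z", "last_upgrade": date(2022, 3, 1), "license": "LGPL"},
    "ThirdPartyControls": {"version": "x.y.z", "last_upgrade": date(2021, 6, 15), "license": "Commercial"},
}

REVIEW_AFTER_DAYS = 365  # evaluate every entry at least once a year

def overdue_for_review(today=None):
    # Return the entries that have not been reviewed within the review window.
    today = today or date.today()
    return [name for name, info in dependencies.items()
            if (today - info["last_upgrade"]).days > REVIEW_AFTER_DAYS]

print(overdue_for_review())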

SVN performance after many revisions

My project is currently using a svn repository which gains several hundred new revisions per day.
The repository resides on a Win2k3-server and is served through Apache/mod_dav_svn.
I now fear that over time the performance will degrade due to too many revisions.
Is this fear reasonable?
We are already planning to upgrade to 1.5, so having thousands of files in one directory will not be a problem in the long term.
Subversion only stores the delta (differences) between two revisions, so this helps save a LOT of space, especially if you only commit code (text) and no binaries (images and docs).
Does that mean that, in order to check out revision 10 of the file foo.baz, svn will take revision 1 and then apply the deltas 2-10?
What type of repo do you have? FSFS or BDB?
(Let's assume FSFS for now, since that's the default.)
In the case of FSFS, each revision is stored as a diff against the previous. So, you would think that yes, after many revisions, it would be very slow.
However, this isn't the case. FSFS uses what are called "skip deltas" to avoid having to do too many lookups on previous revs.
(So, if you are using an FSFS repo, Brad Wilson's answer is wrong.)
In the case of a BDB repo, the HEAD (latest) revision is full-text, but the earlier revisions are built as a series of diffs against the head. This means the previous revs have to be re-calculated after each commit.
For more info: http://svn.apache.org/repos/asf/subversion/trunk/notes/skip-deltas
P.S. Our repo is about 20GB, with about 35,000 revisions, and we have not noticed any performance degradation.
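To see why skip deltas keep old revisions cheap to reconstruct, here is a small Python sketch of the counting argument; it is a simplification of the scheme described in the linked notes, where each delta is taken against the revision obtained by clearing the lowest set bit of the revision number, so rebuilding revision N touches roughly log N deltas instead of N.

def deltas_needed_linear(rev):
    # Plain delta chain: revision N needs N-1 delta applications.
    return max(rev - 1, 0)

def deltas_needed_skip(rev):
    # Simplified skip-delta scheme: each delta points back to the revision
    # number with its lowest set bit cleared, so the chain length is the
    # number of set bits in the revision number (O(log N)).
    count = 0
    while rev > 0:
        rev &= rev - 1   # clear the lowest set bit
        count += 1
    return count

for r in (10, 1000, 35000):
    print(r, deltas_needed_linear(r), deltas_needed_skip(r))
# Even at revision 35,000 only a handful of deltas are applied, not ~35,000.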
Subversion stores the most current version as full text, with backward-looking diffs. This means that updates to head are always fast, and what you incrementally pay for is looking farther and farther back in history.
I personally haven't dealt with Subversion repositories with codebases bigger than 80K LOC for the actual project. The biggest repository I've actually had was about 1.2 gigs, but this included all of the libraries and utilities that the project uses.
I don't think the day to day usage will be affected that much, but anything that needs to look through the different revisions might slow down a tad. It may not even be noticeable.
Now, from a sys admin point of view, there are a few things that can help you minimize performance bottlenecks. Since Subversion is mostly a file-based system, you can do this:
Put the actual repositories in a different drive
Make sure that no file locking apps, other than svn, are working on the drive above
Make the drives at least 7,500 RPM. You could try getting 10,000 RPM, but it may be overkill
Update the LAN to gigabit, if everybody is in the same office.
This may be overkill for your situation, but that's what I've usually done for other file-intensive applications.
If you ever "outgrow" Subversion, then Perforce will be your next step up. It's hands down the fastest source control app for very large projects.
We're running a subversion server with gigabytes worth of code and binaries, and it's up to over twenty thousand revisions. No slowdowns yet.
Subversion only stores the delta (differences) between two revisions, so this helps save a LOT of space, especially if you only commit code (text) and no binaries (images and docs).
Additionally, I've seen a lot of very big projects using svn and have never heard complaints about performance.
Maybe you are worried about checkout times? Then I guess this would really be a networking problem.
Oh, and I've worked on CVS repositories with 2 GB+ of stuff (code, imgs, docs) and never had a performance problem. Since svn is a great improvement on cvs, I don't think you should worry about it.
Hope it helps ease your mind a little ;)
I do not think our Subversion has slowed down with age. We currently have several terabytes of data, mostly binary, and we check out/commit up to 50 gigabytes of data daily. In total we currently have 50,000 revisions. We are using FSFS as the storage type and interface either directly via SVN (Windows server) or via Apache mod_dav_svn (Gentoo Linux server).
I cannot confirm that svn slows down over time, as we set up a clean server for performance comparison. We could NOT measure a significant degradation.
However, I have to say that our Subversion is uncommonly slow by default, and it is obviously Subversion itself, as we tried with another computer system.
For some unknown reason, Subversion seems to be completely limited by server CPU. Our checkout/commit rates are limited to between 15-30 megabytes/s per client, because at that point one server CPU core is completely used up. This is the same for an almost empty repository (1 gigabyte, 5 revisions) as for our full server (~5 terabytes, 50,000 revisions). Tuning, such as setting compression to 0 (off), did not improve this.
Our high-bandwidth FC array (delivering ~1 gigabyte/s) idles, the other cores idle, and the network (currently 1 gigabit/s for clients, 10 gigabits/s for the server) idles as well. Okay, not really idling, but if only 2-3% of the available capacity is used, I call it idling.
It is no real fun to see all components idling while we wait for our working copies to be checked out or committed. Basically, I have no idea what the server process is doing by fully consuming one CPU core all the time during checkout/commit.
However, I am just trying to find a way to tune Subversion. If that is not possible, we might need to switch to another system.
Therefore, my answer: no, SVN does not degrade in performance over time; it is just slow to begin with.
Of course if you do not need (high) performance you won't have a problem.
By the way, all of the above applies to Subversion 1.7, the latest stable version.
The only operations which are likely to slow down are things which read information from multiple revisions (e.g. SVN Blame).
I am not sure... I am using SVN with Apache on CentOS 5.2. It works OK. The revision number was around 8230. On all client machines, commits were so slow that we had to wait at least 2 minutes for a 1 KB file; I am talking about one file with no big file size.
Then I made a new repository, started from rev. 1, and now it works fine and fast.
I used svnadmin create xxxxxx.
I did not check whether it is FSFS or BDB.
Maybe you should consider improving your workflow.
I don't know if a repository will have performance issues under these conditions, but your ability to go back to a sane revision will suffer.
In your case, you may want to include a validation process: a team commits to a team leader's repo, each team leader commits to the team manager's repo, and the team manager commits to the read-only, clean company repo. You have to make a clean selection at each stage of which commits must go to the top.
This way, anybody can go back to a clean copy with an easy-to-browse history. Merges are much easier, and devs can still commit their mess as much as they want.
