Sometimes the diff between the consume offset and the broker offset turns out to be negative, which is quite abnormal.
Many users in the community have run into this problem.
This question is intended to collect all the possible causes and their solutions.
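To make the terminology concrete, the "diff" here is the consumer lag per queue, computed roughly like this (a minimal sketch with made-up values, not RocketMQ's actual code):
public class LagExample {
    public static void main(String[] args) {
        // Illustrative values only, not read from a real broker.
        long brokerOffset = 1000L;   // max offset stored on the broker for a queue
        long consumeOffset = 1005L;  // offset the consumer group has committed
        // The diff (consumer lag) should normally be >= 0.
        long diff = brokerOffset - consumeOffset;
        System.out.println("diff = " + diff); // -5: the abnormal negative case
    }
}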
This is a known concurrency bug which has been fixed in release 3.4.6 (see the release notes here: https://github.com/alibaba/RocketMQ/releases).
BTW, you are highly recommended to use the latest release of Apache RocketMQ (https://github.com/apache/incubator-rocketmq/); we always have compatibility in mind. Further, with the latest release you can expect significant performance enhancements, new features, and bug fixes.
As the title says, is it possible? Right now we have Java 6 + sqljdbc4 + GlassFish 2.1.1. We're planning on upgrading our Java 6 to Java 8 in order for sqljdbc42.jar to work, because we are having a JDBC connectivity issue and the solution might be to upgrade to sqljdbc42. Please see Option 1 in this link:
https://blogs.msdn.microsoft.com/dataaccesstechnologies/2016/11/30/intermittent-jdbc-connectivity-issue-the-driver-could-not-establish-a-secure-connection-to-sql-server-by-using-secure-sockets-layer-ssl-encryption-error-sql-server-returned-an-incomplete-respons/
Of course some of you might say to upgrade GlassFish to a later version, but in case this is not an option, would errors occur? I found out that editing asenv.bat will do the trick (http://alvinalexander.com/blog/post/java/fixing-glassfish-jdk-path-problem-solved), but I'm not sure about the deeper problems we might face.
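For context, the failing connection attempt is essentially just a plain JDBC open, like this sketch (the host, database, and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectivitySmokeTest {
    public static void main(String[] args) throws Exception {
        // Needed on Java 6 with older drivers; newer drivers self-register.
        Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        // Placeholder host, database, and credentials.
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=mydb";
        Connection conn = DriverManager.getConnection(url, "user", "password");
        try {
            System.out.println("Driver: " + conn.getMetaData().getDriverVersion());
        } finally {
            conn.close();
        }
    }
}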
Thank you so much for your answers.
You would have to be very brave (and have lots of free time) to try JDK 8 on GlassFish 2. While you can still download GlassFish 2, Extended Support ends this year (2017). This is not even Premium Support, it's Extended, meaning you have already gone too far by this point. I know this because of a client I used to work for that used GlassFish 2.
There were multiple bugs and complaints reported for the 3rd and 4th versions with JDK 8, let alone the 2nd. Unfortunately, you should upgrade to something much more recent (and have an ongoing plan to keep upgrading from there). Obviously you can try changing the JDK version and see what happens, but I bet you would be visiting this site far more often than you would want to.
The real reason you should seriously consider upgrading is that not a lot of people can answer the deeper problems you might face, because not a lot of people use this version. Just my $0.02.
I wonder if Doctrine 2 is stable enough to use for a production project.
I guess the project will be finished three months from now, so maybe Doctrine 2 will have a final release by then.
I'm wondering if it's smarter to learn and use Doctrine 2 right away instead of learning the current version and then converting everything to version 2, because I've read that the difference between them is huge.
Thanks.
I've been using Doctrine 2 in production for a few weeks now. Performance-wise, it is much speedier than Doctrine 1. And it's much easier to develop with. I've had a few minor issues with bugs or unimplemented features, but nothing I couldn't work around.
Honestly, I don't think learning Doctrine 1 is worth your time. Development on it will stop in 2011. And the two frameworks are so different that you're going to need to teach yourself twice.
As mentioned elsewhere, there have been some backwards compatibility API changes between the last Alpha release and the upcoming Beta (which is slated for the end of April), but they haven't been huge.
It's very possible by the time you're really picking up speed with a project, it'll be into the Beta phase.
The difference is huge, but one additional concern is API stability. I think they've stated in some blog posts that the API won't be considered final until a beta release (so far everything's been alpha). So, there's a chance you'll still have to refactor some code to fit any API changes they may make before beta.
I doubt they change anything earth-shattering, but not being able to say so definitively means that it's a little disconcerting for production use. My suggestion would be to at least wait until the first beta release, which should mark the API freeze.
My job is to upgrade all of our applications that use OpenSceneGraph (an OpenGL toolkit) from version 0.9.9 to the latest and greatest, 2.8. Quite a few caveats come with this task:
1.) After version 0.9.9 there was a major overhaul of the core OpenSceneGraph (OSG). Commonly used functions and classes were added and removed. Simply put, I can't just replace the old with the new and modify a few deprecated or removed functions; there is a lot that needs to change.
2.) Our applications were built using MFC in Visual Studio 2003. They want me to stick with that for the upgrade.
3.) Well-organized documentation for my particular OSG scenario seems impossible to find; what exists is unorganized and scattered at best.
My question is: what would be a somewhat detailed, methodical approach to tackling this problem? I've got about two weeks to upgrade one of the applications. The plan is then to follow and apply this approach to the rest of the applications. For me, the biggest hurdle is finding a starting point. On most projects I work on, I can easily dig right in and figure things out with a little organization and a plan at hand. This seems like a more convoluted problem that needs a more accurate and precise plan of action. Your ideas and suggestions are much appreciated.
In my opinion, you should upgrade your OSG to the new version and try to build against it again until you get no errors.
For every error that is not easily solved, you can ask on the OSG mailing list.
You can also ask what the major changes are between your current version and the latest version.
Bugs can be difficult enough to resolve when they're your (or a coworker's) fault. However, we all know that the technology we use to implement our programs is written by infallible people such as ourselves. So it stands to reason that some people have been affected by bugs in the implementation of the tools they used.
So, have you found a bug in your program that was caused by a widespread underlying technology, such as a programming language or framework? If so, did it fail with some indication, or did it silently overwrite some data? How difficult was it to debug? Did it cause a potential security vulnerability? Were you able to contact the provider and confirm that it was fixed (or fix it yourself)?
Here are some of the worst (in my opinion) technologies to have a bug in (especially one that fails silently):
Programming language
Concurrency framework
Remote API
Database
I deal with one on a daily basis called Internet Explorer.
To be fair though, there are lots of bugs in all of the browsers. I have filed several bugs for Firefox as well, and just the other day I found a strange case where the border doesn't take padding into consideration.
This is a good argument for writing lots of unit tests. If you upgrade your platform to a newer version that has some new bug, hopefully you have a test that reveals the bug.
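A minimal sketch of what such a test might look like in Java (JUnit 4 assumed; the pinned behavior here is just an illustration):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PlatformBehaviorTest {
    // Pins a platform behavior the code depends on; if a JDK or
    // library upgrade changes it, this test fails instead of
    // production code.
    @Test
    public void roundsHalfUp() {
        assertEquals(1, Math.round(0.5f));
    }
}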
In one case I had, the vendor was working on a brand new API. They were not ready to release the new API, but they were not very keen to fix the bug in the old one either, as they considered it dead from a $$ perspective.
A colleague once stumbled across a bug in the Jikes Java compiler. He had something like this:
if (condition)
{
}
else
{
    System.out.println("Code that does stuff.");
}
He hadn't intended to leave the top block empty permanently, but just had it that way during development. He discovered that the condition was ignored unless he put a comment in that block so that it was no longer empty.
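The workaround looked roughly like this (reconstructed from his description):
if (condition)
{
    // Intentionally empty for now; the comment keeps the block
    // non-empty so Jikes no longer ignores the condition.
}
else
{
    System.out.println("Code that does stuff.");
}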
During my time developing (mostly) with Java I've run into bugs in the following components:
Java compiler
This actually happened several times. Usually we found that ecj (the Eclipse compiler) and javac (the Sun compiler) disagreed about the validity of some Java code. Usually I entered bug reports for both systems; one got accepted and the other was closed as invalid.
Database engine
Those are very rare and very, very nasty, because no one expects the DB itself to have a bug. In our case it was an old product (the bug was already fixed in a newer release) that accepted values in a field that were not within the defined range (similar to having a NOT NULL field containing NULL).
JDBC driver
There were several bug fixes to a JDBC driver due to a project I've been working on. The fixes ranged from trivial ("why is there debug output in the production release?") to might-not-even-be-a-real-bug ("you can easily save one roundtrip per statement by doing that-and-that").
JVM implementations
Those are hard to diagnose and often present themselves as effectively random crashes on one JVM and stable operation on another. We temporarily switched JVM vendors several times due to things like this.
Each time it took quite some time (and usually the involvement of the vendor of the component) before I actually believed it was a bug in that component.
And yes: the cases of false positives (i.e. the bug was actually in my/our code) were orders of magnitude more common.
The only place where bugs in the third-party component are kind-of expected seem to be web browsers. Almost no one questions you when you say "that's a bug in <insert buggy browser of the week>, we need to work around it like this ...".
I guess almost anyone who has programmed JavaScript with Internet Explorer has found a bug in their program which was caused by a widespread underlying technology.
The indication of failure is the blue "e" on your Windows desktop.
The first that comes to mind was with version 1 of the .NET Framework; for some reason, the Random.NextDouble() method never produced a value greater than 0.5. I was completely baffled, and having run a test app that called the method thousands and thousands of times, I had to presume it was a bug and work around it.
Never did find out what the cause was...
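A Java analogue of that test app might look like this (a sketch; the original was .NET's Random.NextDouble()):
import java.util.Random;

public class RandomRangeCheck {
    public static void main(String[] args) {
        Random random = new Random();
        double max = 0.0;
        // Call the method many times and track the maximum to check
        // the output range empirically.
        for (int i = 0; i < 1000000; i++) {
            max = Math.max(max, random.nextDouble());
        }
        // A healthy generator gets very close to 1.0 here; a maximum
        // stuck near 0.5 would reproduce the bug described above.
        System.out.println("max over 1,000,000 draws: " + max);
    }
}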
I've run into something with gcc 4.4.0, but as the product I'm currently working on is still pre-alpha, it was fairly easy to repair locally. Hopefully they'll fix it soon.
I found a very strange bug in gcc on MIPSel (OpenWrt). We were testing a small app (about 3K SLOC) that gave me a SIGSEGV even though the code was correct in theory.
I don't know the details of the bug (and I don't have that code anymore), but changing the gcc version from 4.1 to 3.6 solved the problem.
I'm working on a new project and we're using a pretty nice stack. NHibernate, Spring, MVC... the list goes on.
One thing I've noticed, though, is that in the six months since we started, we've had a new release of NHibernate, a new release of a third-party control toolkit, and Windows 7 is on the horizon.
We've had problems before where being stuck on an old version of a technology has cost us dearly, so I'm wondering what are some techniques we could use to help ensure that our transitions to the latest versions of things are as painless as possible?
Quite simply, make it a priority and upgrade as you go along. If you keep up to date with the latest version, there will be fewer breaking changes than if you have to update five versions at a time.
Perhaps create a branch and do a test update to betas so that you are aware of forthcoming problems when that version RTMs (if using betas is a concern to you).
I agree with the other comments here about updating often. If you wait too long it will be enough work that you will notice it in the project productivity.
The way we do it is the following:
One person on the team gets the latest version and makes sure that all tests run.
That person then upgrades whatever DLLs/tools are to be upgraded.
He also documents the upgrade.
Build all the code and make the necessary changes for it to build.
Run all tests and make sure they pass.
Do a manual smoke test of the UI.
Send the upgrade doc to the rest of the team.
Check in and make sure it builds on the build server.
This way we do not lose team productivity during the upgrade. Note this would be much more difficult without unit tests.
"update early, update often"
If you wait, it will get more and more difficult, so put a high priority on updating the system. Developers mostly like to be on the bleeding edge, so they will not mind too much; the key challenge is to sell this idea to management.
It is always good to take a step-by-step approach where you upgrade one tool at a time. It is then also easier to roll back to an older version if you need to. A big-bang approach is more difficult, and lots of things can go wrong.
Let's be realistic: every update will cost you and your team time to switch to the new tooling version, but after a while the team learns to deal with it, and the level of stress when switching versions becomes much lower.
Speaking from a management point of view, don't upgrade unless there is a compelling reason. You have to look at what the upgrade brings to your project. If there are no benefits to the upgrade, don't do it. Obviously this isn't a hard and fast rule, but most teams I know don't have time to spend upgrading systems for no reason, they are too busy with feature requests and bug fixes. I recommend working in upgrades on the following basis:
The new version runs [significantly] faster or more efficiently, and your customers/clients will see this improvement, or it will reduce your imminent hardware needs.
Features have been added that you or your customers/clients want and can take [immediate] advantage of.
A security enhancement exists for a security flaw that affects your current or immediate-future architecture.
License/support reasons. If you are at the end of your contract, you will probably want to make the final jump to the last version of the software that you are entitled to while you still have support for the upgrade. Alternately, if you are on such an old version of the software that finding support documentation for it is difficult, then upgrading is certainly called for.
Some aspect of the project you are working on is directly impacted by the software that could be upgraded. If you are already going to be working with it and testing the functionality, it is probably a good time to upgrade and [probably] won't add significant load to the project.
Major changes. If your project or the software it relies on has undergone major changes, then it is probably a good time to add the update(s) into your project plan. Major changes imply a more difficult upgrade path and should be pursued on a scheduled basis rather than shoehorned in at the last minute due to a needed fix or enhancement.
Specific reasons not to upgrade:
Software, installation, and regression testing costs money. Hence the need for a compelling reason to upgrade.
New software is often buggy or has unknown "features." For this reason many choose to stay one version behind the latest release.
Newer versions can often be slower than previous versions; this is especially true for small updates and patches.
Compatibility issues. Upgrades break things, so it can be better to skip as many incremental upgrades as possible in order to avoid an update that breaks compatibility, only for that compatibility to be fixed in the next update.
I recommend keeping a list of all software that your project utilizes, along with its version and last upgrade date (plus other important information such as licensing info, support info, etc.). Evaluate every item on this list once a year to ensure that you don't miss any updates that match one of the reasons to upgrade above. Software on this list with an old version/date and a newer version available may be incentive enough to convince management that an upgrade should be done.
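If you wanted to keep that inventory in code rather than a spreadsheet, one entry might look like this minimal sketch (the field names are illustrative, not a standard format):
public class DependencyRecord {
    // One entry in the suggested inventory.
    String name;           // e.g. "NHibernate"
    String version;        // currently deployed version
    String lastUpgrade;    // date of last upgrade, e.g. "2009-06-01"
    String licensingInfo;  // licensing details
    String supportInfo;    // support contract details

    DependencyRecord(String name, String version, String lastUpgrade,
                     String licensingInfo, String supportInfo) {
        this.name = name;
        this.version = version;
        this.lastUpgrade = lastUpgrade;
        this.licensingInfo = licensingInfo;
        this.supportInfo = supportInfo;
    }
}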