POST card displaying error - CPU

The problem is: my computer starts, but the screen stays black (as always when there is a big problem :D).
I have already tried changing my motherboard, my graphics card, and my power supply, and resetting the CMOS: no success. Could it be the CPU? :D :D :D ... :'(
Here is everything I know about when the problem occurred:
- "I was peacefully playing a game when my PC instantly switched off. I opened my PC and saw a lot of dust in the power supply fan. I vacuumed it!"
I recently bought a POST card that, they say, "gives information about the POST checks".
When I power up my computer after it has been off for a long time, the card shows --AA on its 4-digit display.
When I switch my computer off and back on right away, the card shows --FF.
I noticed the CPU fan sounds different between the AA code and the FF code (so the fans do run).
Where does this error come from, and how do I solve the problem?
Thank you for your time! Have a great day.

You haven't tried replacing your memory yet...
Since you can see a checkpoint code at all, I believe the CPU is still good.
Good luck.

To know what AA and FF mean, you need to get a list of POST codes for your system. Check with your motherboard manufacturer, perhaps in the manual.
If you can't find it there, the next best thing would be to get a list of generic POST codes for the BIOS. You will need to determine what BIOS is actually in use on the mobo though (AMI Aptio, Phoenix SecureCore Tiano, etc). Then do a Google search for the BIOS name + POST codes, and you will pull up a list.
Note that just the code by itself may not be enough to debug it. The BIOS spits out hundreds of POST codes during boot up. The sequence of the codes can tell you something too.


How to help Windows or Windows Applications handle Composite Joysticks correctly?

The context for this question is primarily Windows 7, though I've tried it on 10 as well.
I've built a 4-player composite joystick using the Arduino Mega 2560. It is a USB device composed of 4 HID joystick interfaces, each with its own endpoint. Each joystick, along with its buttons, shows up correctly in Device Manager as a separate HID interface. They are identified correctly by a VID/PID/MI_# triplet, with MI_# being the interface index (MI_0, MI_1, etc). Calibration also sees each interface as separate, with inputs correctly corresponding to each controller in their enumerated order (i.e. the first one receives inputs from only the joystick at index 0). When I dump the descriptors, they also look correct.
There are two issues:
1) Naming
Windows only reads the interface string from the first interface. According to the descriptor dump, each interface should have its own string, going from "Player 1" to "Player 4", but Windows 7 sees them all as "Player 1". Inspecting regedit, this may be because Windows 7 only stores one OEM Name per joystick device, and so only picks up the one for the first interface. Am I stuck with this behaviour unless I somehow get a resolution from Microsoft?
For some reason, Windows 10 calls them all "Arduino Joystick". I'm not sure if this is because I'm using the same test VID/PID combo I got from an Arduino joystick tutorial, and Windows is just picking up the name someone else has used for their device, or if it is concatenating my manufacturer string with the interface type "Joystick". I'm not sure why it would do the latter, but if it's the former, I'd prefer to block that look-up somehow.
I'd like to resolve both, but practically speaking I'm using Windows 7 mostly.
2) Mixed Inputs
I've seen this behaviour only in some applications, but unfortunately one of them is Steam, and the others may be due to Unity. It manifests differently in each, so I'm led to believe there is no standard way of dealing with composite joysticks.
On Steam, in Big Picture mode, when I try to add/test a controller, it detects all 4 controllers (all as Player 4, I might add), but it only accepts input from Joy4, no matter which of the controllers I choose. After saving the config, however, all the joysticks have the same mappings applied. This is actually good, as I can use any controller to navigate Big Picture mode, but I'm concerned it's symptomatic of other problems I might be seeing in other applications.
In "Race the Sun", when manually configuring joystick controls (it says Player 4 is detected), it will interpret inputs from a single joystick as coming from multiple joysticks. Usually, two of the four directional inputs come from Joy1, while the two other come from another Joystick other than the one being used. Eg: if I'm configuring Joy2, it'll register inputs from Joy1 and say Joy3.
In "Overcooked", it allows a single joystick to register as 4 different players. Normally you'd hit a particular button on the controller you want to use to register as a player, but in my case if you hit that button on joy1 4 times, then 4 players will be registered. If you start the game like this, you end-up controlling all 4 characters simultaneously with one joystick. Interesting, but not the intended usage, I'm sure.
Both "Race the Sun" and "Overcooked" are developed using Unity, and I understand that Unity's joystick management is rather lacking. Overcooked at least is designed to handle multiple players though (it's a couch co-op game), so this probably has more to do with the composite nature of my controllers.
I should note that other applications have no problems differentiating between the joysticks. Even xbox360ce sees them as separate, and the emulation works on several Steam games, single and multiplayer. Overcooked is still getting the joysticks crossed even though I'm using xbox360ce with it.
The question I'm bringing to Stack Overflow is: what can I do to improve how applications handle my joysticks? Right now I'm using the generic Windows game-controller driver. Theoretically this should have been enough, but issue #1 shows that composite joysticks may not be an expected use case. Would driver development even have any hope of resolving the issues with the applications mentioned above? I don't see how the device would differ significantly in its identification. I'm hoping someone more experienced with coding for USB devices can offer some insight.
For what it's worth, my Arduino sketch and firmware can be found here.
I have found a solution for "Race the Sun" and "Overcooked".
"Race the Sun" may have expected my joystick to provide axis ranges from 0 to a certain maximum (eg: 32767). I had the firmware set up going from -255 to +255. While other applications handle this fine, "Race the Sun" may expect the more common 0-X behaviour (this is the range that a Logitech joystick of mine provides). After changing this, I was able to configure and play it correctly. I've also updated my GitHub project; the link is in the original question.
The problem with "Overcooked" was actually caused by a badly configured or corrupted xbox360ce installation. Somewhere in tinkering with the emulator I must have screwed something up, as I messed up games that were previously working. I solved it by wiping all its files, including the content in ProgramData/X360CE, and re-downloading everything and redoing the controllers. Now all my games seem to be behaving correctly.
This still leaves the problem with Steam. For some reason Steam doesn't remember my joystick configuration from reboot to reboot. For the time being I've decided just to put up with the default joystick behaviour, but I'd like to sort this one out eventually, too.

How "Unique" and safe actually is WMI Win32_xxxxx serial number property? (aka is it possible to change it by any way?)

As read in the topic here: How to find the unique serial number of a flash device?, and especially here: How to get manufacturer serial number of an USB flash drive?, I know it is possible to get the properties of hardware devices (particularly hard drives and USB drives) using WMI Win32_PhysicalMedia and Win32_DiskDrive, which I already have working successfully.
However, I really want to know how safe this information is.
The PhysicalMedia property SerialNumber returns the actual serial number of the main hard drive, while using Win32_LogicalDisk and other calls we can map the drive letter of a flash drive to the actual Win32_DiskDrive device, and from there read properties like Name, Model, FirmwareRevision, SerialNumber, DeviceID, Manufacturer...
Now, DeviceID is generated by Windows / the PC itself, while SerialNumber should be the one the manufacturer added to the physical flash drive.
Manufacturer in most cases returns "Standard" something, and Name is also of no use, while SerialNumber actually gets me something that looks like a unique ID (I've read that in some cases this is not returned, so PNPDeviceID should be used instead?). Model gives the actual model of the flash drive, and FirmwareRevision is just a number that could add a safety switch to the licensing, but is not vital.
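For concreteness, here is roughly how I'm reading these properties: a trimmed C++ sketch of the standard COM/WMI boilerplate, with error handling mostly omitted.

    // Minimal sketch: read SerialNumber (and friends) from Win32_DiskDrive.
    #include <comdef.h>
    #include <Wbemidl.h>
    #include <iostream>
    #pragma comment(lib, "wbemuuid.lib")

    int main() {
        CoInitializeEx(0, COINIT_MULTITHREADED);
        CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
                             RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL);

        IWbemLocator* loc = NULL;
        CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                         IID_IWbemLocator, (LPVOID*)&loc);

        IWbemServices* svc = NULL;
        loc->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), NULL, NULL, 0, 0, 0, 0, &svc);
        CoSetProxyBlanket(svc, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, NULL,
                          RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                          NULL, EOAC_NONE);

        IEnumWbemClassObject* rows = NULL;
        svc->ExecQuery(_bstr_t(L"WQL"),
                       _bstr_t(L"SELECT Model, SerialNumber, PNPDeviceID FROM Win32_DiskDrive"),
                       WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY, NULL, &rows);

        IWbemClassObject* row = NULL;
        ULONG got = 0;
        while (rows && SUCCEEDED(rows->Next(WBEM_INFINITE, 1, &row, &got)) && got) {
            VARIANT v;
            VariantInit(&v);
            if (SUCCEEDED(row->Get(L"SerialNumber", 0, &v, 0, 0)) && v.vt == VT_BSTR)
                std::wcout << L"SerialNumber: " << v.bstrVal << L"\n";
            VariantClear(&v);
            row->Release();
        }
        if (rows) rows->Release();
        if (svc) svc->Release();
        if (loc) loc->Release();
        CoUninitialize();
        return 0;
    }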
However, the only one of these that seems like it should actually be safe to use is SerialNumber, right?
So, the question here goes: what level does Win32_DiskDrive actually read this info from? Is it possible to fake it at all (OK, leaving aside actual low-level hacking, driver injection, etc.), and if so, how hard is it?
If there's a known way / guide / example, I'd be happy to read it too (not strictly necessary info, though).
This is not with the intention of bypassing someone's licensing. I'm building licensing for my own software, and am curious whether it would be safe enough to use a USB drive's SerialNumber property and lock the license to the presence of the USB flash drive the license was bought for. Basically, to use it as a kind of dongle, but not the way real dongles actually work (communicating with actual hardware inside the dongle...).
I know it may not seem a safe solution, as flash drives die quite often these days, get lost, etc., but this is just to add an option to my licensing, from "per PC" to "portable - per USB device".
Thanks for any info!!!
EDIT:
I am completely aware that bypassing these kinds of safety switches is very possible. Of course, even Windows itself is not licensed in a way that can't be hacked, nor are Adobe, Pro Tools, etc. (software that is widely used and costs a lot!).
But that wasn't the real question, and that's also not my case -> the software will not be that expensive, nor used by that many people, so I'm not afraid of attracting someone who would do extensive programming to make a patch/crack for it. Regular debugger use and workarounds are pretty unlikely for the regular clients who would need the software (and also, since it is something to be used in a business environment, where stability is vital, I doubt they will really play around with that...).
Main point here:
It is possible for sure, but: HOW hard is it to do for a regular person? (I know, the answer is: it depends on your code.)
Main question of the post: is it possible to change the ID on the USB device itself, OR to make an app that will fake that data to my app? If it is, I'm sure it might be easier than making a crack/patch; that's why I wanted to know whether WMI reads explicitly from the hardware, or whether one could make an app that passes fake data to it.
WMI just returns what the hardware tells it. It's as unique as the hardware. Which ultimately depends on the vendor.
But...
If someone has an administrator account on the computer†, then there is very little you can do to keep them from hooking the kernel debugger up to your program and overriding your checks, or recording the raw USB communication session and replaying it on an unauthorized system. Real dongles do something to mitigate this, by having the hardware generate a response to a particular challenge. The challenge/response changes on each request, so it's not as susceptible to replay attacks, but the debugger tricks still work.
This is the real problem with the serial number approach. Uniqueness is not the primary concern for dongled software. The primary concern is unpredictability.
An illustrative example-
Let's say that I'm a bouncer at an exclusive night club. We're so exclusive that you have to answer a question to get in. You really want to get in, but no one will tell you the answer to the question. One night, you hatch a plan. You hang out in the alley and listen to the conversations I'm having with the patrons trying to enter the club. It doesn't take you long to realize that I'm asking everyone the exact same question, and you're in. (This is the serial number approach.)
After a while, I notice that there are a lot of people coming into the club that I've never seen before, and begin to suspect something. The people we really want to allow in are all given a card with a formula‡ on it. Whenever they come to the door of the club, I give them a number and they apply their formula and tell me the result. Since I also know the formula, I can tell if they are really allowed in. Now, even if you hear the entire challenge and response, without the formula, you aren't getting in. (This is one common approach taken by dongles.)
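If it helps, here is a toy sketch of that idea in code. It is illustrative only: the multiply-by-a-constant "formula" stands in for whatever keyed function a real dongle computes in hardware.

    #include <cstdint>
    #include <random>

    // Runs inside the dongle: the secret never leaves the hardware.
    uint32_t device_response(uint32_t challenge) {
        const uint32_t secret = 0xC0FFEE42;           // the "formula" on the card
        return (challenge ^ secret) * 2654435761u;    // stand-in for a real MAC
    }

    // Runs inside the protected application.
    bool admit_patron() {
        std::random_device rd;
        uint32_t challenge = rd();                    // a fresh question every time,
                                                      // so replaying old answers fails
        uint32_t expected = (challenge ^ 0xC0FFEE42) * 2654435761u;
        return device_response(challenge) == expected;
    }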
But what about the debugger? The debugger just made herself the club's owner, fired me, and can come and go as she pleases.
†Or has physical access to the machine and a password reset disk.
‡Stop laughing, this could totally happen. :)
Edit to address the question edit:
HOW hard is it to do for a regular person? (I know, the answer is: depending on your code.)
The question is how skilled is the 'regular person'? If you're talking about software/electrical engineers, then this is a trivial task. If you're talking about sales/marketing then it's a challenging task.
Is it possible to change the ID on the USB itself, OR to make an app that will fake that data to my app?
It depends, and yes. Changing the ID on the device itself is possible with some devices and impossible with others. Software to spoof or man-in-the-middle the USB communication, or to create a virtual USB device, is possible.
If it is, I'm sure it might be easier than making a crack/patch, that's why I wanted to know, whether WMI reads explicitly from hardware, or could one make an app that would pass fake data to it?
As I led with above, WMI reads from the hardware. This can be intercepted or bypassed.
Some ways to bypass the check:
Make a virtual USB device
Modify the USB MSD device driver to report the same serial number for all devices.
Build hardware using commercially available cheap host controllers that identifies with the same information as the authorized device. ($10 worth of raw components and a little bit of time.)
Redirect the system calls to/from USB to a compromised library.
Note also that:
Some places have restrictions on USB storage devices, ranging from discouraging their use, to outright bans. This would prevent your software from being used in sensitive computing environments processing private data, like credit cards, PII, trade secrets, classified information, etc. (In the US many governmental agencies have outright bans on USB storage devices, and block the install of any MSD.)
The Mass Storage specification doesn't require serial numbers. They are usually there, but they don't have to be, and many low-cost vendors omit them.
A USB PKI token costs a little bit more, but would probably do what you want. Here's an example from SafeNet. (Disclaimer: I am in no way affiliated with SafeNet Inc, and you should evaluate all the possible options from all vendors. I suggest this one because it was the first thing that came up through CDW, and the price was ~$30.)

How to find the SQL Server connection string?

Hello,
I have a VB6 program for which I only have the compiled executable file and not the source code. This program connects to a SQL Server 2000 database. I get the
[Microsoft][ODBC Driver Manager] Data source name not found and no
default driver specified
error. Is there a way to find out the connection string coded inside the VB6 program? VB6 decompilers did not work.
You can try something unusual that might help: Cheat Engine, a tool generally used for cheating in Facebook games. It's a nice tool that helps you read changing values in memory.
What we do is:
We run the game in Firefox or Chrome, open Cheat Engine, and select the Firefox or Chrome process from its process window. On the home screen there is a search option over the memory registers. We generally look for a value on screen that changes with every step: we enter the value, set the search type to exact, and click New Search. The list box on the left will give you tens of thousands of memory addresses, but don't worry. On your next turn, check the value after it changes, enter the new value, and click Next Scan. Now you will have fewer addresses; repeat until you are down to exactly two registers. It generally takes 3-4 clicks and 4-5 retries, so around 15-20 minutes of your time.
I am not sure that this will work. In your case, you would have to search for a unique keyword like "ODBC Driver Manager", because that register or a nearby one might contain the connection string.
If that seems like a hectic and failure-prone process, you can still explore Cheat Engine, as it is full of options and nice little utilities. Check it out, it might help. Or send me the exe, and I will give it a try for you, if you wish.
I know this is not an appropriate answer, but it might help, as the asker is totally stuck. Kindly do not downvote; it is solely his decision whether to try it or not.

What are some good examples showing that "I am not the user"?

I'm a software developer who has a background in usability engineering. When I studied usability engineering in grad school, one of the professors had a mantra: "You are not the user". The idea was that we need to base UI design on actual user research rather than our own ideas as to how the UI should work.
Since then I've seen some good examples that seem to prove that I'm not the user.
A user trying to use an e-mail template authoring tool gets stuck trying to enter the pipe (|) character. The problem turns out to be that the pipe glyph printed on the keyboard key has a gap in the middle (¦).
In a web app, a user doesn't see content below the fold. Not unusual. We tell her to scroll down. She has no idea what we're talking about and is not familiar with the scroll thumb.
I'm listening in on a tech support call. The rep tells the user to close the browser. In the background I hear the Windows shutdown jingle.
What are some other good examples of this?
EDIT: To clarify, I'm looking for examples where developers make assumptions that turn out to be horribly false about what users will know, understand, etc.
I think one of the biggest examples is that expert users tend to play with an application.
They say, "Okay, I have this tool, what can I do with it?"
Your average user sees the ecosystem of an operating system, filesystem, or application as a big scary place where they are likely to get lost and never return.
For them, everything they want to do on a computer is task-based.
"How do I burn a DVD?"
"How do I upload a photo from my camera to this website."
"How do I send my mom a song?"
They want a starting point, a reproducible work flow, and they want to do that every time they have to perform the task. They don't care about streamlining the process or finding the best way to do it, they just want one reproducible way to do it.
In building web applications, I long ago learned to make the start page of my application something separate from the menus, with task-based links to the main things the application does in a really big font. For the average user, this increased usability hugely.
So remember this: users don't want to "use your application", they want to get something specific done.
In my mind, the most visible example of "developers are not the user" is the common Confirmation Dialog.
In almost any document-based application, from the most complex (MS Word, Excel, Visual Studio) to the simplest (Notepad, Crimson Editor, UltraEdit), when you close the application with unsaved changes you get a dialog like this:
The text in the Untitled file has changed.
Do you want to save the changes?
[Yes] [No] [Cancel]
Assumption: Users will read the dialog
Reality: With an average reading speed of 2 words per second, this would take 9 seconds. Many users won't read the dialog at all.
Observation: Many developers read much much faster than typical users
Assumption: The available options are all equally likely.
Reality: Most (>99%) of the time users will want their changes saved.
Assumption: Users will consider the consequences before clicking a choice
Reality: The true impact of the choice will occur to users a split second after pressing the button.
Assumption: Users will care about the message being displayed.
Reality: Users are focussed on the next task they need to complete, not on the "care and feeding" of their computer.
Assumption: Users will understand that the dialog contains critical information they need to know.
Reality: Users see the dialog as a speedbump in their way and just want to get rid of it in the fastest way possible.
I definitely agree with the bolded comments in Daniel's response--most real users frequently have a goal they want to get to, and just want to reach that goal as easily and quickly as possible. Speaking from experience, this goes not only for computer novices or non-techie people but also for fairly tech-savvy users who just might not be well-versed in your particular domain or technology stack.
Too frequently I've seen customers faced with a rich set of technologies, tools, utilities, APIs, etc. but no obvious way to accomplish their high-level tasks. Sometimes this could be addressed simply with better documentation (think comprehensive walk-throughs), sometimes with some high-level wizards built on top of command-line scripts/tools, and sometimes only with a fundamental re-prioritization of the software project.
With that said... to throw another concrete example on the pile, there's the Windows start menu (excerpt from an article on The Old New Thing blog):
Back in the early days, the taskbar didn't have a Start button.
...
But one thing kept getting kicked up by usability tests: People booted up the computer and just sat there, unsure what to do next.
That's when we decided to label the System button "Start". It says, "You dummy. Click here." And it sent our usability numbers through the roof, because all of a sudden, people knew what to click when they wanted to do something.
As mentioned by others here, we techie folks are used to playing around with an environment, clicking on everything that can be clicked on, poking around in all available menus, etc. Family members of mine who are afraid of their computers, however, are even more afraid that they'll click on something that will "erase" their data, so they'd prefer to be given clear directions on where to click.
Many years ago, in a CMS, I stupidly assumed that no one would ever try to create a directory with a leading space in the name... someone did, and it made many other parts of the system very, very sad.
On another note, trying to explain to my mother to click the Start button to turn the computer off is just a world of pain.
How about the apocryphal tech support call about the user with the broken "cup holder" (CD/ROM)?
Actually, one that bit me was cut/paste -- I always trim my text inputs now, since some of my users cut/paste text from emails, etc., and end up selecting extra whitespace. My tests never considered that people would "type" in extra characters.
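In code terms, the guard is as simple as this (a trivial C++ sketch; the whitespace set is whatever your inputs call for):

    #include <string>

    // Strip the leading/trailing whitespace that rides along with pasted text.
    std::string trim(const std::string& s) {
        const char* ws = " \t\r\n";
        std::string::size_type b = s.find_first_not_of(ws);
        if (b == std::string::npos) return "";        // all whitespace
        std::string::size_type e = s.find_last_not_of(ws);
        return s.substr(b, e - b + 1);
    }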
Today's GUIs do a pretty good job of hiding the underlying OS. But the idiosyncrasies still show through.
Why won't the Mac let me create a folder called "Photos: Christmas 08"?
Why do I have to "eject" a mounted disk image?
Can't I convert a JPEG to TIFF just by changing the file extension?
(The last one actually happened to me some years ago. It took forever to figure out why the TIFF wasn't loading correctly! It was at that moment that I understood why Apple used to use embedded file types (as metadata) and to this day I do not understand why they foolishly went back to file extensions. Oh, right; it's because Unix is a superior OS.)
I've seen this plenty of times; it seems to be something that always comes up. I seem to be the kind of person who can pick up on these kinds of assumptions (in some circumstances), but I've been blown away by what the user was doing many other times.
As I said, it's something I'm quite familiar with. Some of the software I've worked on is used by the general public (as opposed to specially trained people) so we had to be ready for this kind of thing. Yet I've seen it not be taken into account.
A good example is a web form that needs to be completed. We need this form completed; it's important to the process. The user is no good to us if they don't complete the form, but the more information we get out of them, the better. Obviously these are two conflicting demands. If we just present the user with a screen of 150 fields (a random large number), they'll run away scared.
These forms had been revised many times in order to improve things, but users weren't asked what they wanted. Decisions were made based on the assumptions or feelings of various people, but how close those feelings were to actual customers wasn't taken into account.
I'm also going to mention the corollary to Bevan's "Users will read the dialog" assumption. Operating on the assumption that users don't read anything makes much more sense. Yet people who argue that users don't read anything will often suggest adding bits of long, dry explanatory text to help users who are confused by some random poor design decision (like using checkboxes for something that should be radio buttons, because you can only select one).
Working any kind of tech support can be very informative on how users do (or do not) think.
Pretty much anything at the OS level in Linux is a good example, from the choice of names ("grep" obviously means "search" to the user!) to the choice of syntax ("rm *" is good for you!).
[I'm not hatin' on Linux; it's just chock full of Unix-legacy un-usability examples.]
How about the desktop and wallpaper metaphors? It's getting better, but 5-10 years ago was the bane of a lot of remote tech support calls.
There's also the backslash vs. slash issue, the myriad names for the various keyboard symbols, and the antiquated print screen button.
Modern operating systems are great because they all support multiple user profiles, so everyone who uses my application on the same workstation can have their own settings and user data. Only, a good portion of the support requests I get are asking how to have multiple data files under the same user account.
Back in my college days, I used to train people on how to use a computer and the internet. I'd go to their house, set up their internet service, show them email and everything. Well, there was this old couple (late 60's). I spent about three hours showing them how to use their computer, made sure they could connect to the internet and everything. I left feeling very happy.
That weekend I got a frantic call about them not being able to check their email. Now, I was in the middle of enjoying my weekend, but I decided to help them out and walked through all the steps. Thirty minutes later, I asked them if they had two phone lines... "of course, we only have one". Needless to say, they had forgotten that they needed to connect to the internet first (yes, this was back in the day of modems).
I suppose I should have set up shortcuts like DUN -> Check Email Step 1, Eudora -> Check Email Step 2...
What users don't know, they will make up. They often work with an incorrect theory of how an application works.
Especially for data entry, users tend to type much faster than developers which can cause a problem if the program is slow to react.
Story: Once upon a time, before the personal computer, there was timesharing. A timesharing company's customer rep told me that once, when he was giving a "how to" class to two or three nice older women, he told them how to stop a program that was running (in case it was started in error or was taking too long). He had one of the students type ^K, and the timesharing terminal responded "Killed!". The lady nearly had a heart attack.
One problem that we have at our company is employees who don't trust the computer. If you computerize a function that they do on paper, they will continue to do it on paper, while entering the results in the computer.

What's the toughest bug you ever found and fixed? [closed]

What made it hard to find? How did you track it down?
Not close enough to close but see also
https://stackoverflow.com/questions/175854/what-is-the-funniest-bug-youve-ever-experienced
A jpeg parser, running on a surveillance camera, which crashed every time the company's CEO came into the room.
100% reproducible error.
I kid you not!
This is why:
For those of you who don't know much about JPEG compression - the image is kind of broken down into a matrix of small blocks, which are then encoded using magic, etc.
The parser choked when the CEO came into the room because he always wore a shirt with a square pattern on it, which triggered some special case in the contrast and block-boundary algorithms.
Truly classic.
This didn't happen to me, but a friend told me about it.
He had to debug an app which would crash very rarely. It would only fail on Wednesdays -- in September -- after the 9th. Yes, 362 days of the year it was fine, and three days out of the year it would crash immediately.
It would format a date as "Wednesday, September 22 2008", but the buffer was one character too short -- so it would only cause a problem when you had a two-digit day-of-month, on the day with the longest name, in the month with the longest name.
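I don't know the actual code, but a hedged reconstruction of the failure mode might look like this: the worst-case string, "Wednesday, September 22 2008", is 28 characters, so with the trailing NUL it needs 29 bytes.

    #include <cstdio>

    // Hypothetical reconstruction (names invented): day and month names as
    // strings, day-of-month unpadded, as in "Wednesday, September 22 2008".
    bool format_date(char* out, std::size_t n,
                     const char* dayName, const char* monthName,
                     int day, int year) {
        // snprintf reports the length it wanted; the original code presumably
        // used sprintf into a buffer one byte too small for the worst case.
        int needed = std::snprintf(out, n, "%s, %s %d %d",
                                   dayName, monthName, day, year);
        return needed >= 0 && (std::size_t)needed < n;
    }

    // char buf[28];  // 28 chars + NUL = 29 bytes needed in the worst case
    // format_date(buf, sizeof buf, "Wednesday", "September", 22, 2008);  // fails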
This requires knowing a bit of Z-8000 assembler, which I'll explain as we go.
I was working on an embedded system (in Z-8000 assembler). A different division of the company was building a different system on the same platform, and had written a library of functions, which I was also using on my project. The bug was that every time I called one function, the program crashed. I checked all my inputs; they were fine. It had to be a bug in the library -- except that the library had been used (and was working fine) in thousands of POS sites across the country.
Now, Z-8000 CPUs have 16 16-bit registers, R0, R1, R2 ...R15, which can also be addressed as 8 32-bit registers, named RR0, RR2, RR4..RR14 etc. The library was written from scratch, refactoring a bunch of older libraries. It was very clean and followed strict programming standards. At the start of each function, every register that would be used in the function was pushed onto the stack to preserve its value. Everything was neat & tidy -- they were perfect.
Nevertheless, I studied the assembler listing for the library, and I noticed something odd about that function --- at the start of the function, it had PUSH RR0 / PUSH RR2, and at the end it had POP RR2 / POP R0. In case you didn't follow that: it pushed 4 values onto the stack at the start, but only removed 3 of them at the end. That's a recipe for disaster. There was an unknown value left on top of the stack where the return address needed to be. The function couldn't possibly work.
Except, may I remind you, that it WAS working. It was being called thousands of times a day on thousands of machines. It couldn't possibly NOT work.
After some time debugging (which wasn't easy in assembler, on an embedded system, with mid-1980s tools), I confirmed it: it would always crash on the return, because the bad value sent it off to a random address. Evidently I now had to debug the working app, to figure out why it didn't fail.
Well, remember that the library was very good about preserving the values in the registers, so once you put a value into a register, it stayed there. R1 had 0000 in it, and would always have 0000 in it when that function was called. The bug therefore left 0000 on the stack. So when the function returned, it would jump to address 0000, which just so happened to contain a RET, which would pop the next value (the correct return address) off the stack and jump to that. The data perfectly masked the bug.
Of course, in my app, I had a different value in R1, so it just crashed....
This was on Linux but could have happened on virtually any OS. Now most of you are probably familiar with the BSD socket API. We happily use it year after year, and it works.
We were working on a massively parallel application that would have many sockets open. To test its operation, we had a testing team that would open hundreds, and sometimes over a thousand, connections for data transfer. With the highest channel numbers, our application began to show weird behavior. Sometimes it just crashed. Other times we got errors that simply could not be true (e.g. accept() returning the same file descriptor on subsequent calls, which of course resulted in chaos).
We could see in the log files that something went wrong, but it was insanely hard to pinpoint. Tests with Rational Purify said nothing was wrong. But something WAS wrong. We worked on this for days and got increasingly frustrated. It was a showstopper, because the already-negotiated test would cause havoc in the app.
As the error only occurred in high-load situations, I double-checked everything we did with sockets. We had never tested high-load cases in Purify, because it was not feasible in such a memory-intensive situation.
Finally (and luckily) I remembered that the massive number of sockets might be a problem for select(), which waits for state changes on sockets (may read / may write / error). Sure enough, our application began to wreak havoc exactly the moment it reached the socket with descriptor 1025. The problem is that select() works with bit-field parameters. The bit fields are filled by the macro FD_SET() and friends, which DON'T CHECK THEIR PARAMETERS FOR VALIDITY.
So every time we got over 1024 descriptors (each OS has its own limit; vanilla Linux kernels have 1024, and the actual value is defined as FD_SETSIZE), the FD_SET macro would happily overwrite its bit field and write garbage into the next structure in memory.
I replaced all the select() calls with poll(), which is a well-designed alternative to the arcane select() call, and high-load situations have never been a problem since. We were lucky, because all the socket handling was in one framework class, where 15 minutes of work could solve the problem. It would have been a lot worse if select() calls had been sprinkled all over the code.
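In code, the difference looked roughly like this (a minimal sketch, not our actual framework code):

    #include <sys/select.h>
    #include <poll.h>
    #include <stddef.h>

    void watch_with_select(int fd) {
        fd_set readable;
        FD_ZERO(&readable);
        // Undefined if fd >= FD_SETSIZE (1024 on vanilla Linux): FD_SET
        // happily writes past the end of the bit field and corrupts
        // whatever structure sits next to it in memory.
        FD_SET(fd, &readable);
        select(fd + 1, &readable, NULL, NULL, NULL);
    }

    int watch_with_poll(int fd) {
        // poll() takes a caller-sized array of descriptors, so there is
        // no hidden 1024-descriptor ceiling to fall over.
        struct pollfd pfd;
        pfd.fd = fd;
        pfd.events = POLLIN;
        pfd.revents = 0;
        return poll(&pfd, 1, -1);    // -1 = block until an event arrives
    }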
Lessons learned:
even if an API function is 25 years old and everybody uses it, it can have dark corners you don't know yet
unchecked memory writes in API macros are EVIL
a debugging tool like Purify can't help with all situations, especially when a lot of memory is used
Always have a framework for your application if possible. Using it not only increases portability but also helps in case of API bugs
many applications use select() without thinking about the socket limit. So I'm pretty sure you can cause bugs in a LOT of popular software by simply using many many sockets. Thankfully, most applications will never have more than 1024 sockets.
Instead of providing a safe API, OS developers like to put the blame on the developer. The Linux select() man page says:
"The behavior of these macros is undefined if a descriptor value is less than zero or greater than or equal to FD_SETSIZE, which is normally at least equal to the maximum number of descriptors supported by the system."
That's misleading. Linux can open more than 1024 sockets. And the behavior is absolutely well defined: using unexpected values will ruin the running application. Instead of making the macros resilient to illegal values, the developers simply overwrite other structures. FD_SET is implemented as inline assembly(!) in the Linux headers and evaluates to a single assembler write instruction. Not the slightest bounds checking happens anywhere.
To test your own application, you can artificially inflate the number of descriptors used by programmatically opening FD_SETSIZE files or sockets directly after main() and then running your application.
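A sketch of that trick (illustrative; it leaks the descriptors on purpose, since this is a test rig):

    #include <sys/select.h>   // FD_SETSIZE
    #include <fcntl.h>        // open()

    // Burn the first FD_SETSIZE descriptors at startup so every socket the
    // app opens afterwards gets a descriptor >= FD_SETSIZE, making the
    // FD_SET overflow reproducible immediately instead of under high load.
    void inflate_descriptors(void) {
        for (int i = 0; i < FD_SETSIZE; ++i)
            open("/dev/null", O_RDONLY);
    }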
Thorsten79
Mine was a hardware problem...
Back in the day, I used a DEC VAXstation with a big 21" CRT monitor. We moved to a lab in our new building, and installed two VAXstations in opposite corners of the room. Upon power-up, my monitor flickered like a disco (yeah, it was the 80's), but the other monitor didn't.
Okay, swap the monitors. The other monitor (now connected to my VaxStation) flickered, and my former monitor (moved across the room) didn't.
I remembered that CRT-based monitors were susceptible to magnetic fields. In fact, they were -very- susceptible to 60 Hz alternating magnetic fields. At first, I suspected something in my work area was generating one. Unfortunately, the monitor still flickered even when all the other equipment was turned off and unplugged. At that point, I began to suspect something in the building.
To test this theory, we converted the VAXstation and its 85 lb monitor into a portable system. We placed the entire system on a rollaround cart, and connected it to a 100-foot orange construction extension cord. The plan was to use this setup as a portable field-strength meter, in order to locate the offending piece of equipment.
Rolling the monitor around confused us totally. The monitor flickered in exactly one half of the room, but not the other. The room was square, with doors in opposite corners, and the monitor flickered on one side of a diagonal line connecting the doors, but not on the other side. The room was surrounded on all four sides by hallways. We pushed the monitor out into the hallways, and the flickering stopped. In fact, we discovered that the flicker only occurred in one triangular half of the room, and nowhere else.
After a period of total confusion, I remembered that the room had a two-way ceiling lighting system, with light switches at each door. At that moment, I realized what was wrong.
I moved the monitor into the half of the room with the problem, and turned the ceiling lights off. The flicker stopped. When I turned the lights on, the flicker resumed. Turning the lights on or off from either light switch, turned the flicker on or off within half of the room.
The problem was caused by somebody cutting corners when they wired the ceiling lights. When wiring up a two-way switch on a lighting circuit, you run a pair of wires between the SPDT switch contacts, and a single wire from the common on one switch, through the lights, and over to the common on the other switch.
Normally, these wires are bundled together. They leave as a group from one switchbox, run to the overhead ceiling fixture, and on to the other box. The key idea is that all of the current-carrying wires are bundled together.
When the building was wired, the single wire between the switches and the light was routed through the ceiling, but the wires travelling between the switches were routed through the walls.
If all of the wires ran close and parallel to each other, then the magnetic field generated by the current in one wire was cancelled out by the magnetic field generated by the equal and opposite current in a nearby wire. Unfortunately, the way that the lights were actually wired meant that one half of the room was basically inside a large, single-turn transformer primary. When the lights were on, the current flowed in a loop, and the poor monitor was basically sitting inside of a large electromagnet.
Moral of the story: The hot and neutral lines in your AC power wiring are next to each other for a good reason.
Now, all I had to do was to explain to management why they had to rewire part of their new building...
A bug where you come across some code, and after studying it you conclude, "There's no way this could ever have worked!" -- and then suddenly it stops working, even though it always did work before.
One of the products I helped build at my work was running on a customer site for several months, collecting and happily recording each event it received to a SQL Server database. It ran very well for about 6 months, collecting about 35 million records or so.
Then one day our customer asked us why the database hadn't updated for almost two weeks. Upon further investigation, we found that the database connection that was doing the inserts had failed to return from an ODBC call. Thankfully, the thread that did the recording was separated from the rest of the threads, allowing everything but the recording thread to continue functioning correctly for almost two weeks!
We tried for several weeks on end to reproduce the problem on any machine other than that one, and never could. Unfortunately, several of our other products then began to fail in about the same manner, none of which had their database threads separated from the rest of their functionality, causing the entire application to hang; each one then had to be restarted by hand every time it crashed.
Weeks of investigation turned into several months, and we still had the same symptom: full ODBC deadlocks in any application in which we used a database. By this time our products were riddled with debugging information and ways to determine what went wrong and where, even to the point that some of the products would detect the deadlock, collect information, email us the results, and then restart themselves.
While I was working on the server one day, still collecting debugging information from the applications as they crashed and trying to figure out what was going on, the server BSoD'd on me. When it came back online, I opened the minidump in WinDbg to figure out what the offending driver was. I got the file name and traced it back to the actual file. After examining the version information in the file, I figured out it was part of the McAfee anti-virus suite installed on the computer.
We disabled the anti-virus and haven't had a single problem since!!
I just want to point out a quite common and nasty bug that can happen in this Google era:
code pasting and the infamous minus
That is, when you copy-paste some code that contains a real minus sign (U+2212) instead of the regular ASCII hyphen-minus ('-', U+002D).
Now, even though the minus sign is supposedly rendered as longer than the hyphen-minus, in certain editors (or in a DOS shell window), depending on the charset used, it is actually rendered as a regular '-' hyphen-minus sign.
And... you can spend hours trying to figure out why this code does not compile, removing each line one by one, until you find the actual cause!
May be not the toughest bug out there, but frustrating enough ;)
(Thank you ShreevatsaR for spotting the inversion in my original post - see comments)
The first was that our released product exhibited a bug, but when I tried to debug the problem, it didn't occur. I thought this was a "release vs. debug" thing at first -- but even when I compiled the code in release mode, I couldn't reproduce the problem. I went to see if any other developer could reproduce the problem. Nope. After much investigation (producing a mixed assembly code / C code listing of the program) and stepping through the assembly code of the released product (yuck!), I found the offending line. But the line looked just fine to me! I then had to look up what the assembly instructions did -- and sure enough, the wrong assembly instruction was in the released executable. Then I checked the executable that my build environment produced -- it had the correct assembly instruction. It turned out that the build machine had somehow become corrupted and produced bad assembly code for only one instruction in this application. Everything else (including previous versions of our product) produced code identical to other developers' machines. After I showed my research to the software manager, we quickly rebuilt our build machine.
Somewhere deep in the bowels of a networked application was the line (simplified):
if (socket = accept() == 0)
return false;
//code using the socket()
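// (The culprit: == binds tighter than =, so the line above assigns the
// boolean result of (accept() == 0) to socket, not the descriptor.)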
What happened when the call succeeded? socket was set to 1. And what does send() do when given a 1, as in:
send(socket, "mystring", 7);
It prints to stdout... this I found after 4 hours of wondering why, with all my printf()s taken out, my app was printing to the terminal window instead of sending the data over the network.
With FORTRAN on a Data General minicomputer in the 80's, we had a case where the compiler caused the constant 1 (one) to be treated as 0 (zero). It happened because some old code passed a constant of value 1 to a function which declared the variable as a FORTRAN parameter, which meant it was (supposed to be) immutable. Due to a defect in the code, we did an assignment to the parameter variable, and the compiler gleefully changed the data in the memory location it used for the constant 1 to 0.
Many unrelated functions later, we had code that did a compare against the literal value 1, and the test would fail. I remember staring at that code for the longest time in the debugger. I would print out the value of the variable, it would be 1, yet the test 'if (foo .EQ. 1)' would fail. It took me a long time before I thought to ask the debugger to print out what it thought the value of 1 was. It then took a lot of hair-pulling to trace back through the code to find where the constant 1 became 0.
I had a bug in a console game that occurred only after you fought and won a lengthy boss-battle, and then only around 1 time in 5. When it triggered, it would leave the hardware 100% wedged and unable to talk to outside world at all.
It was the shyest bug I've ever encountered; modifying, automating, instrumenting or debugging the boss-battle would hide the bug (and of course I'd have to do 10-20 runs to determine that the bug had hidden).
In the end I found the problem (a cache/DMA/interrupt race thing) by reading the code over and over for 2-3 days.
Not very tough, but I laughed a lot when it was uncovered.
When I was maintaining a 24/7 order-processing system for an online shop, a customer complained that his order was "truncated". He claimed that while the order he placed actually contained N items, the system accepted far fewer items without any warning whatsoever.
After we traced the order's flow through the system, the following facts were revealed. There was a stored procedure responsible for storing order items in the database. It accepted the list of order items as a string, which encoded a list of (product-id, quantity, price) triples like this:
"<12345, 3, 19.99><56452, 1,
8.99><26586, 2, 12.99>"
Now, the author of the stored procedure was too smart to resort to anything like ordinary parsing and looping. He directly transformed the string into a SQL multi-insert statement by replacing "<" with "insert into ... values (" and ">" with ");". Which was all fine and dandy, if only he hadn't stored the resulting string in a varchar(8000) variable!
What happened is that his "insert ...; insert ...;" was truncated at the 8000th character, and for that particular order the cut was "lucky" enough to happen right between inserts, so the truncated SQL remained syntactically correct.
Later I found out the author of sp was my boss.
This is back when I thought that C++ and digital watches were pretty neat...
I got a reputation for being able to solve difficult memory leaks. Another team had a leak they couldn't track down. They asked me to investigate.
In this case, they were COM objects. In the core of the system was a component that gave out many twisty little COM objects that all looked more or less the same. Each one was handed out to many different clients, each of which was responsible for doing AddRef() and Release() the same number of times.
There wasn't a way to automatically calculate who had called each AddRef, and whether they had Released.
I spent a few days in the debugger, writing down hex addresses on little pieces of paper. My office was covered with them. Finally I found the culprit. The team that asked me for help was very grateful.
The next day I switched to a GC'd language.*
(*Not actually true, but would be a good ending to the story.)
Bryan Cantrill of Sun Microsystems gave an excellent Google Tech Talk on a bug he tracked down using a tool he helped develop, called DTrace.
The Tech Talk is funny, geeky, informative, and very impressive (and long, about 78 minutes).
I won't give any spoilers here on what the bug was but he starts revealing the culprit at around 53:00.
While testing some new functionality that I had recently added to a trading application, I happened to notice that the code to display the results of a certain type of trade would never work properly. After looking at the source control system, it was obvious that this bug had existed for at least a year, and I was amazed that none of the traders had ever spotted it.
After puzzling for a while and checking with a colleague, I fixed the bug and went on testing my new functionality. About 3 minutes later, my phone rang. On the other end of the line was an irate trader who complained that one of his trades wasn’t showing correctly.
Upon further investigation, I realized that the trader had been hit with the exact same bug I had noticed in the code 3 minutes earlier. This bug had been lying around for a year, just waiting for a developer to come along and spot it so that it could strike for real.
This is a good example of a type of bug known as a Schroedinbug. While most of us have heard about these peculiar entities, it is an eerie feeling when you actually encounter one in the wild.
The two toughest bugs that come to mind were both in the same type of software, only one was in the web-based version, and one in the windows version.
This product is a floorplan viewer/editor. The web-based version has a Flash front-end that loads the data as SVG. Now, this was working fine, except that sometimes the browser would hang. Only on a few drawings, and only when you wiggled the mouse over the drawing for a bit. I narrowed the problem down to a single drawing layer containing 1.5 MB of SVG data. If I took only a subsection of the data -- any subsection -- the hang didn't occur. Eventually it dawned on me that the problem was probably that several different sections in the file in combination caused the bug. Sure enough, after randomly deleting sections of the layer and testing for the bug, I found the offending combination of drawing statements. I wrote a workaround in the SVG generator, and the bug was fixed without changing a line of ActionScript.
In the same product on the Windows side, written in Delphi, we had a comparable problem. Here the product takes AutoCAD DXF files, imports them into an internal drawing format, and renders them in a custom drawing engine. This import routine isn't particularly efficient (it uses a lot of substring copying), but it gets the job done. Only in this case it didn't. A 5-megabyte file generally imports in 20 seconds, but on one file it took 20 minutes, because the memory footprint ballooned to a gigabyte or more. At first it seemed like a typical memory leak, but memory-leak tools reported it clean, and manual code inspection turned up nothing either. The problem turned out to be a bug in Delphi 5's memory allocator. Under some conditions, which this particular file was duly recreating, it would be prone to severe memory fragmentation. The system would keep trying to allocate large strings and find nowhere to put them except above the highest allocated memory block. Integrating a new memory-allocation library fixed the bug, without changing a line of import code.
Thinking back, the toughest bugs seem to be the ones whose fix involves changing a different part of the system than the one where the problem occurs.
When the client's pet bunny rabbit gnawed partway through the ethernet cable. Yes. It was bad.
Had a bug on a platform with a very bad on-device debugger.
We would get a crash on the device if we added a printf to the code. It would then crash at a different spot than the location of the printf. If we moved the printf, the crash would either move or disappear. In fact, if we changed that code by reordering some simple statements, the crash would happen somewhere unrelated to the code we changed.
It turned out there was a bug in the relocator for our platform: the relocator was not zero-initializing the ZI section, but rather using the relocation table to initialize the values. So any time the relocation table changed in the binary, the bug would move. Simply adding a printf would change the relocation table, and therefore move the bug.
This happened to me during the time I worked at a computer store.
One day a customer came into the shop and told us that his brand-new computer worked fine in the evenings and at night, but did not work at all at midday or in the late morning.
The trouble was that the mouse pointer would not move at those times.
The first thing we did was replace his mouse with a new one, but the trouble wasn't fixed. Of course, both mice worked in the store with no fault.
After several tries, we found the trouble was with that particular brand and model of mouse.
The customer's workstation was close to a very big window, and at midday the mouse sat in direct sunlight.
Its plastic was so thin that under those circumstances it became translucent, and the sunlight prevented the optomechanical wheel from working :|
My team inherited a CGI-based, multi-threaded C++ web app. The main platform was Windows; a distant, secondary platform was Solaris with POSIX threads. Stability on Solaris was a disaster, for some reason. We had various people look at the problem for over a year, off and on (mostly off), while our sales staff successfully pushed the Windows version.
The symptom was pathetic stability: a wide range of system crashes with little rhyme or reason. The app used both Corba and a home-grown protocol. One developer went so far as to remove the entire Corba subsystem as a desperate measure: no luck.
Finally, a senior, original developer wondered aloud about an idea. We looked into it and eventually found the problem: on Solaris, there was a compile-time (or run-time?) parameter to adjust the stack size for the executable. It was set incorrectly: far too small. So, the app was running out of stack and printing stack traces that were total red herrings.
It was a true nightmare.
Lessons learned:
Brainstorm, brainstorm, brainstorm
If something is going nuts on a different, neglected platform, it is probably an attribute of the environment platform
Beware of problems that are transferred from developers who leave the team. If possible, contact the previous people on a personal basis to garner info and background. Beg, plead, make a deal. The loss of experience must be minimized at all costs.
Adam Liss's message above, talking about the project we both worked on, reminded me of a fun bug I had to deal with. Actually, it wasn't a bug, but we'll get to that in a minute.
Executive summary of the app, in case you haven't seen Adam's message yet: sales-force automation software... on laptops... at the end of the day they dialed up... to synchronize with the Mother database.
One user complained that every time he tried to dial in, the application would crash. The customer support folks went through all their usual over-the-phone diagnostic tricks and found nothing. So, they had to resort to the ultimate: have the user FedEx the laptop to our offices. (This was a very big deal, as each laptop's local database was customized to the user: a new laptop had to be prepared and shipped to the user for him to use while we worked on his original; then we had to swap back, and have him finally sync the data on the first original laptop.)
So, when the laptop arrived, it was given to me to figure out the problem. Now, syncing involved hooking the phone line up to the internal modem, going to the "Communication" page of our app, and selecting a phone number from a drop-down list (with the last number used pre-selected). The numbers in the DDL were part of the customization, and were, basically, the number of the office, the number of the office prefixed with "+1", the number of the office prefixed with "9,,," in case they were calling from a hotel, etc.
So, I clicked the "COMM" icon and pressed return. It dialed in, it connected to a modem -- and then immediately crashed. I tried a couple more times. 100% repeatability.
So, I hooked a data scope between the laptop and the phone line, and looked at the data going across the line. It looked rather odd... The oddest part was that I could read it!
The user had apparently wanted to use his laptop to dial into a local BBS system, and so had changed the configuration of the app to use the BBS's phone number instead of the company's. Our app was expecting our proprietary binary protocol -- not long streams of ASCII text. Buffers overflowed -- KaBoom!
The fact that the dial-in problems started immediately after he changed the phone number might have given the average user a clue that it was the cause, but this guy never mentioned it.
I fixed the phone number, and sent it back to the support team, with a note electing the guy the "Bonehead user of the week". (*)
(*) OkOkOk... There's probably a very good chance that what actually happened is that the guy's kid, seeing his father dial in every night, figured that's how you dial into BBSs too, and changed the phone number sometime when he was home alone with the laptop. When it crashed, he didn't want to admit he'd touched the laptop, let alone broken it; so he just put it away and didn't tell anyone.
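I no longer have the original source, but the failure mode was the classic unchecked read into a fixed-size buffer: the code trusted the peer to send a frame terminator within the buffer's length, and a BBS spewing ASCII never sent one. A hypothetical sketch (the buffer size, terminator byte, and function names are all made up):

    // Hypothetical sketch of the failure mode -- not the app's actual code.
    #include <cstddef>

    char buffer[256];

    // Buggy version: trusts the peer to send the end-of-frame byte in time.
    void read_frame_buggy(int (*read_byte)()) {
        std::size_t i = 0;
        int c;
        while ((c = read_byte()) != 0x03)   // 0x03 = end-of-frame (illustrative)
            buffer[i++] = (char)c;          // KaBoom once i reaches 256
    }

    // Fixed version: bounds-check no matter what comes down the line.
    bool read_frame_safe(int (*read_byte)()) {
        std::size_t i = 0;
        int c;
        while ((c = read_byte()) != 0x03) {
            if (i >= sizeof buffer)
                return false;               // malformed peer: bail out cleanly
            buffer[i++] = (char)c;
        }
        return true;
    }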
It was during my diploma thesis. I was writing a FORTRAN program to simulate the effect of a high-intensity laser on a helium atom.
One test run worked like this:
Calculate the initial quantum state using program 1: about 2 hours.
Run the main simulation on the data from the first step: for the simplest cases, about 20 to 50 hours.
Then analyse the output with a third program to get meaningful values like energy, torque, and momentum.
These quantities should have been conserved in total, but they weren't. They did all kinds of weird things.
After debugging for two weeks, I went berserk on the logging and logged every variable in every step of the simulation, including the constants.
That way I found out that I was writing past the end of an array, which changed a constant!
A friend said he once changed the value of the literal 2 with a mistake like that.
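The classic explanation for changing a literal is that old FORTRAN compilers stored constants in writable memory and passed them by reference, so clobbering one really could change the value of 2 program-wide. A hypothetical C++ analog of the array overrun itself (what actually gets stomped depends entirely on how the compiler lays out memory):

    // Hypothetical sketch: writing one element past the end of an array
    // lands on whatever the compiler happened to place next to it.
    #include <cstdio>

    int main() {
        double conserved_energy = 42.0;   // plays the role of the "constant"
        double state[8];                  // the simulation array

        // Off-by-one loop: i runs from 0 to 8 inclusive, so state[8] is
        // written -- one past the end. This is undefined behavior.
        for (int i = 0; i <= 8; ++i)
            state[i] = 0.0;

        // Depending on stack layout, the "constant" may now be 0.
        std::printf("energy = %f\n", conserved_energy);
        return 0;
    }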
A deadlock in my first multi-threaded program!
It was very tough to find because it happened in a thread pool. Occasionally a thread in the pool would deadlock, but the others would still work. Since the pool was much larger than needed, it took a week or two to notice the first symptom: the application hung completely.
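For the record, here is a minimal sketch of the classic lock-ordering deadlock (a generic example, not my actual program): two threads take the same two mutexes in opposite order. In a pool, only the threads involved freeze while the rest keep working, which is exactly why the symptom can hide for weeks.

    // Minimal lock-ordering deadlock: worker1 holds a and wants b,
    // worker2 holds b and wants a. Neither can ever proceed.
    #include <chrono>
    #include <mutex>
    #include <thread>

    std::mutex a, b;

    void worker1() {
        std::lock_guard<std::mutex> la(a);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::lock_guard<std::mutex> lb(b);   // waits on worker2 forever
    }

    void worker2() {
        std::lock_guard<std::mutex> lb(b);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::lock_guard<std::mutex> la(a);   // waits on worker1 forever
    }

    int main() {
        std::thread t1(worker1), t2(worker2);
        t1.join();   // never returns
        t2.join();
    }

Acquiring both mutexes atomically with std::lock(a, b), or agreeing on a single global lock order, prevents it.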
I have spent hours to days debugging a number of things that ended up being fixable with literally just a couple of characters.
A few examples:
ffmpeg has this nasty habit of producing a warning about "brainfart cropping" (referring to a case where in-stream cropping values are >= 16) when the crop values in the stream were actually perfectly valid. I fixed it by adding three characters: "h->".
x264 had a bug where, in extremely rare cases (one in a million frames) with certain options, it would produce a random block of completely the wrong color. I fixed the bug by adding the letter "O" in two places in the code. It turned out I had misspelled the name of a #define in an earlier commit.
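To illustrate the general trap (this is emphatically not x264's actual code): the preprocessor treats a misspelled name in an #ifdef as simply "not defined", so the guarded code silently drops out of the build instead of producing an error.

    // Hypothetical illustration: the macro name in the #ifdef is missing
    // an O, so the fix-up is silently never compiled in.
    int repair_block(int);        // some hypothetical fix-up routine

    #define FIX_COLOR_BLOCKS 1

    int process_block(int value) {
    #ifdef FIX_CLOR_BLOCKS        // misspelled: never defined anywhere
        value = repair_block(value);
    #endif
        return value;             // broken values sail through untouched
    }

Writing the test as #if FIX_COLOR_BLOCKS and compiling with -Wundef at least gets you a warning when the name is misspelled.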
My first "real" job was for a company that wrote client-server sales-force automation software. Our customers ran the client app on their (15-pound) laptops, and at the end of the day they dialed up to our unix servers to synchronize with the Mother database. After a series of complaints, we found that an astronomical number of calls were dropping at the very beginning, during authentication.
After weeks of debugging, we discovered that the authentication always failed if the incoming call was answered by a getty process on the server whose Process ID contained an even number followed immediately by a 9. Turns out the authentication was a homebrew scheme that depended on an 8-character string representation of the PID; a bug caused an offending PID to crash the getty, which respawned with a new PID. The second or third call usually found an acceptable PID, and automatic redial made it unnecessary for the customers to intervene, so it wasn't considered a significant problem until the phone bills arrived at the end of the month.
The "fix" (ahem) was to convert the PID to a string representing its value in octal rather than decimal, making it impossible to contain a 9 and unnecessary to address the underlying problem.
Basically, anything involving threads.
I held a position at a company once in which I had the dubious distinction of being one of the only people comfortable enough with threading to debug nasty issues. The horror. You should have to get some kind of certification before you're allowed to write threaded code.
I heard about a classic bug back in high school; a terminal that you could only log into if you sat in the chair in front of it. (It would reject your password if you were standing.)
It reproduced pretty reliably for most people; you could sit in the chair, log in, log out... but if you stand up, you're denied, every time.
Eventually it turned out some jerk had swapped a couple of adjacent keys on the keyboard, E/R and C/V IIRC, and when you sat down, you touch-typed and got in; but when you stood, you had to hunt 'n' peck, so you looked at the incorrect labels and failed.
While I don't recall a specific instance, the toughest category are those bugs which only manifest after the system has been running for hours or days, and when it goes down, leaves little or no trace of what caused the crash. What makes them particularly bad is that no matter how well you think you've reasoned out the cause, and applied the appropriate fix to remedy it, you'll have to wait for another few hours or days to get any confidence at all that you've really nailed it.
Our network interface, a DMA-capable ATM card, would very occasionally deliver corrupted data in received packets. The AAL5 CRC had checked out as correct when the packet came in off the wire, yet the data DMA'd to memory would be incorrect. The TCP checksum would generally catch it, but back in the heady days of ATM, people were enthused about running native applications directly on AAL5, dispensing with TCP/IP altogether. We eventually noticed that the corruption only occurred on some models of the vendor's workstation (who shall remain nameless), not others.
By calculating the CRC in the driver software we were able to detect the corrupted packets, at the cost of a huge performance hit. While trying to debug we noticed that if we just stored the packet for a while and went back to look at it later, the data corruption would magically heal itself. The packet contents would be fine, and if the driver calculated the CRC a second time it would check out ok.
We'd found a bug in the data cache of a shipping CPU. The cache in this processor was not coherent with DMA, requiring the software to explicitly flush it at the proper times. The bug was that sometimes the cache didn't actually flush its contents when told to do so.
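The software detection was conceptually simple, even if expensive: recompute the packet's CRC over the buffer after DMA and compare it with what the card verified on the wire. Something along these lines, using a generic bitwise CRC-32 (AAL5's exact bit-ordering conventions differ in the details, and this is not the vendor driver's code):

    // Illustrative driver-side check: recompute CRC-32 over the DMA'd
    // payload; a mismatch means the data was corrupted on its way to RAM.
    #include <cstddef>
    #include <cstdint>

    std::uint32_t crc32(const std::uint8_t* data, std::size_t len) {
        std::uint32_t crc = 0xFFFFFFFFu;
        for (std::size_t i = 0; i < len; ++i) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; ++bit)        // one bit at a time
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    bool packet_ok(const std::uint8_t* payload, std::size_t len,
                   std::uint32_t wire_crc) {
        return crc32(payload, len) == wire_crc;      // mismatch => corruption
    }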

Resources