Windows Crash Reporting - windows

Not sure if this is the place to ask for this kind of help... but I've recently been having issues with my PC randomly shutting off at night (idle) and sometimes while in use.
Monitors+USB devices all power down
Motherboard Lights + CPU Fan Lights Stay On
Have to cycle power to fix.
This is generally at least a once a day occurrence, sometimes multiple times.
Things tried:
Reapplying thermal compound on both CPU + GPU
Replaced Motherboard
I have not tried replacing the GPU/CPU yet as that's... a last resort, since I don't want to spend any money I don't need to.
Last failure
Failure - Event Viewer
Not sure if this helps or anything, but I'm just trying to get an idea of what's going on.
Please note: this particular crash was a BSOD, whereas most of the time it simply blacks out without anything.

Is it plugged into a power strip?
If it is, that might be the problem. My PC was doing something close to that (though it was never a BSOD), and it was plugged into a power strip.
It's either that, or it could be a fault of some sort.
It could also be a Windows issue; if you have the install disc, try re-installing Windows, that could help...

Related

windowed OpenGL first frame delay after idle

I have a windowed WinAPI/OpenGL app. The scene is drawn rarely (compared to games), in WM_PAINT, mostly triggered by user input - WM_MOUSEMOVE/clicks etc.
I noticed that when the scene hasn't been moved by the mouse for a while (application "idle") and the user then starts some mouse action, the first frame is drawn with an unpleasant delay - around 300 ms. The following frames are fast again.
I implemented a 100 ms timer which only calls InvalidateRect, which is later followed by WM_PAINT/drawing the scene. This "fixed" the problem, but I don't like this solution.
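For reference, a minimal sketch of what such a workaround can look like in a bare Win32 program; the window class name, timer ID and interval are placeholders, and the actual OpenGL drawing is left as a comment:

```cpp
#include <windows.h>

// Illustrative sketch of the 100 ms redraw-timer workaround described above.
// Names (IDT_REDRAW, "IdleRedrawDemo") are made up; error handling omitted.
static const UINT_PTR IDT_REDRAW = 1;

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_CREATE:
        SetTimer(hWnd, IDT_REDRAW, 100, nullptr);   // WM_TIMER roughly every 100 ms
        return 0;
    case WM_TIMER:
        if (wParam == IDT_REDRAW)
            InvalidateRect(hWnd, nullptr, FALSE);   // schedules a WM_PAINT
        return 0;
    case WM_PAINT: {
        PAINTSTRUCT ps;
        BeginPaint(hWnd, &ps);
        // ... render the OpenGL scene and SwapBuffers() here ...
        EndPaint(hWnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        KillTimer(hWnd, IDT_REDRAW);
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = hInst;
    wc.lpszClassName = TEXT("IdleRedrawDemo");
    RegisterClass(&wc);

    HWND hWnd = CreateWindow(wc.lpszClassName, TEXT("Idle redraw demo"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             640, 480, nullptr, nullptr, hInst, nullptr);
    ShowWindow(hWnd, nCmdShow);

    MSG m;
    while (GetMessage(&m, nullptr, 0, 0) > 0) {
        TranslateMessage(&m);
        DispatchMessage(&m);
    }
    return 0;
}
```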
I'd like to know why this is happening, and also some tips on how to tackle it.
Does the OpenGL render context release resources when not in use? Or could this be caused by some system behaviour, like processor underclocking/energy saving etc.? (Although I noticed that the processor runs underclocked even when the app is under "load".)
This sounds like the Windows virtual memory system at work. The sum of the memory used by all active programs is usually greater than the amount of physical memory installed on your system, so Windows swaps idle processes out to disk according to whatever rules it follows, such as the relative priority of each process and how long it has been idle.
You are preventing the swap-out (and the delay) by artificially making the program active every 100 ms.
If a swapped-out process is reactivated, it takes a little time to retrieve the memory contents from disk and restart the process.
It's unlikely that OpenGL is responsible for this delay.
You can improve the situation by starting your program with a higher priority.
https://superuser.com/questions/699651/start-process-in-high-priority
You can also use the VirtualLock function to prevent Windows from swapping out part of the memory, but this is not advisable unless you REALLY know what you are doing!
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366895(v=vs.85).aspx
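For illustration, a rough sketch of both suggestions in code - raising the priority programmatically with SetPriorityClass (the linked question does it from the command line instead) and pinning a hypothetical buffer with VirtualLock. The buffer and its size are made up, and VirtualLock is subject to working-set quotas, so real code should check the return values:

```cpp
#include <windows.h>

int main()
{
    // Run this process at a higher scheduling priority.
    SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS);

    // Keep a (hypothetical) performance-critical buffer resident in RAM.
    const SIZE_T size = 1 << 20;  // 1 MiB, for example
    void* buffer = VirtualAlloc(nullptr, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (buffer && !VirtualLock(buffer, size)) {
        // Locking can fail if it exceeds the process working-set quota;
        // SetProcessWorkingSetSize can raise the quota if really needed.
    }

    // ... do the work that should not be swapped out ...

    if (buffer) {
        VirtualUnlock(buffer, size);
        VirtualFree(buffer, 0, MEM_RELEASE);
    }
    return 0;
}
```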
EDIT: You can certainly improve things by adding more memory, and 4 GB does sound low for a modern PC, especially if you run Chrome with multiple tabs open.
If you want to be scientific before spending any hard-earned cash :-), open Performance Monitor and look at Cache Faults/sec. This will show the swap activity on your machine. (I have 16 GB on my PC, so this number is mostly very low.) To make sure you learn something, check Cache Faults/sec before and after the memory upgrade - then you can quantify the difference!
Finally, there is nothing wrong with the solution you found already - kick-starting the graphics app every 100 ms or so.
The problem was in the NVIDIA driver's global 3D setting "Power management mode".
The options "Optimal Power" and "Adaptive" save power and cause the problem.
Only "Prefer Maximum Performance" does the right thing.

need help debugging an unstable program

Following some changes, my Arduino sketch became unstable; it only runs for 1-2 hours and then crashes. I've now spent a month trying to understand why, without making real progress: the main difficulty is that the slightest change makes it run apparently "OK" for days...
The program is ~1500 lines long
Can someone suggest how to progress?
Thanks in advance for your time
Well, embedded systems are very well known for a continuous fight against the universe's fourth dimension: time. It is known that some delays must be added inside the code - this does not always mean using a system delay routine; just reordering operations may solve a lot.
Debugging a system with such a problem is difficult. Some techniques that can be used:
a) invasive: mark various places in your software (i.e. with printf statements) at the entry or exit of important routines or other key steps and run again - when the application crashes, note the last message seen and conclude that the crash happened after that step.
b) less invasive: use an available GPIO pin as an output; set it high at the entry of a suspect routine and low at its exit. The crashing point will leave the pin either high or low. You can use several pins if available and watch the activity with an oscilloscope (see the sketch after this list).
c) non-invasive: use JTAG or SWD debugging - this is the best option. If your micro supports fault debugging, you have the means to locate the bug.
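As an illustration of technique (b), a minimal Arduino-style sketch; the pin number and suspectRoutine() are placeholders standing in for whatever part of your program you want to bracket:

```cpp
// Bracket a suspect routine with a spare output pin so a scope or logic
// analyser shows where execution stops. Pin 13 is only an example.
const int DEBUG_PIN = 13;

void setup() {
  pinMode(DEBUG_PIN, OUTPUT);
}

void suspectRoutine() {
  digitalWrite(DEBUG_PIN, HIGH);   // entry marker
  // ... original code of the routine ...
  digitalWrite(DEBUG_PIN, LOW);    // exit marker: if the crash happens inside,
                                   // the pin is left HIGH
}

void loop() {
  suspectRoutine();
}
```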

Using usb cable for random number generation

I have a thought, but am unsure how to execute it. I want to take a somewhat long USB cable and plug both ends into the same machine. Then I would like to send a signal from one end and time how long it takes to reach the other end. I think the signal should arrive after slightly varying times, and that would give me random numbers.
Can someone suggest a language in which I could do this the quickest? I have zero experience sending signals over USB and don't know where or how to start. Any help will be greatly appreciated.
I simply want to do this as a fun at-home project, so I don't need anything official; I'd just like to see if this idea can work.
EDIT: What if I store the USB cable in liquid nitrogen, or a substance just as cold, in order to slow down the signal as much as possible? (I have access to liquid nitrogen.)
Sorry I can't comment (not enough rep), but the delay should always be the same through the wire. This might limit the true randomness of your numbers. Plus, the actual delay time in the wire might be shorter than even a CPU cycle.
If your operating system is Windows, you may run into this type of issue:
Why are .NET timers limited to 15 ms resolution?
Apparently the minimum timer resolution on Windows is around 15 ms.
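To put rough numbers on it (assuming signal propagation in a cable of roughly 5 ns per metre): a 2 m USB cable adds only about 10 ns of delay, while the default Windows timer tick is around 15 ms - more than a million times coarser than anything you would be trying to measure.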
EDIT: In response to your liquid nitrogen edit, according to these graphs, you may have more luck with heat! Interestingly enough...
Temperature vs Conductivity http://www.emeraldinsight.com/content_images/fig/1740240120008.png
I want to take a somewhat long usb cable and plug both ends into the same machine.
Won't work. A USB connection is always host -> device, and a PC can only be a host. And the communication uses predictable 1 ms intervals - bad for randomness.
Some newer microcontrollers have both an RNG and USB on chip; that way you can make a real USB RNG.
What if I store the usb cable in liquid nitrogen or a substance just as cold in order to slow down the signal
The signal would travel a tiny bit faster, as the resistance of the cable is lower.

Is it possible for computers to tell time without a built in clock?

Computers normally keep time with a built-in clock on the motherboard. But out of curiosity, can a computer determine when a certain interval of time has passed without one?
I would think not, as a computer can only execute instructions. Of course, you could rely on the computer knowing its own processing speed, but that would be an extremely crude hack that would take up way too many clock cycles and be error-prone.
Not without something constantly running to keep track of it, constantly pulling the time off the internet, or a piece of hardware that receives a constantly broadcast time signal.
In certain cases, this is possible. Some microcontrollers and older processors execute their instructions in a defined time period, so by tracking the number of instructions executed, one can keep track of periods of time. This works well, and is useful, for certain cases (such as oscillating to play a sound wave), but in general, you're right, it's not particularly useful.
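As a purely illustrative sketch of that idea (it only works on simple, in-order hardware where each loop iteration costs a fixed, known number of cycles):

```cpp
// If each pass through the loop costs a fixed, known number of CPU cycles,
// counting iterations amounts to measuring time. 'volatile' keeps the
// compiler from optimizing the empty loop away.
static void delay_roughly(unsigned long iterations) {
    for (volatile unsigned long i = 0; i < iterations; ++i) {
        // known cost per iteration on such hardware
    }
}

int main() {
    delay_roughly(1000000); // the count would be derived from the CPU's clock rate
    return 0;
}
```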
In the olden days there was a fixed timer interrupt. Often every 60th of a second in the US.
Old OSes simply counted these interrupts -- effectively making a clock out of them.
And, in addition to counting them, the OS also used them to reschedule tasks, thereby preventing a long-running task from monopolizing the processor.
This scheme did mean that you had to set the time every time you turned the power on.
But in those days, powering down a computer (and getting it to reboot) was an epic task performed only by specialists.
I recall days when the machine wouldn't "IPL" (Initial Program Load) for hours because something was not right in the OS and the patches and the hardware.
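For a rough sketch of that counting scheme, here the fixed-rate timer interrupt is simulated with a background thread, since an ordinary user-space program can't install a real interrupt handler:

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// The "interrupt handler": all it does is bump a tick counter at ~60 Hz,
// like the old line-frequency timer interrupt.
std::atomic<unsigned long> ticks{0};

void timer_interrupt_simulator() {
    for (;;) {
        std::this_thread::sleep_for(std::chrono::microseconds(16667)); // ~1/60 s
        ++ticks;
    }
}

int main() {
    std::thread t(timer_interrupt_simulator);
    t.detach();

    // Elapsed time is simply ticks * (1/60 s), counted from whatever
    // time the operator typed in at power-on.
    std::this_thread::sleep_for(std::chrono::seconds(2));
    std::printf("elapsed: roughly %.2f s\n", ticks.load() / 60.0);
    return 0;
}
```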

Proving that replacing hardware will improve developer performance

Now the machines we are forced to use have 2 GB RAM and an Intel Core 2 Duo E6850 @ 3 GHz CPU...
The policy within the company is that everyone has the same computer no matter what, and that they are on a 3-year refresh cycle... meaning I will have this machine for the next 2 years... :S
We have been complaining like crazy, but they said they want proof that upgrading the machines will save exactly X amount of time before they will do anything... And even with that, they are only semi-considering giving us more RAM...
Even when you point out that developer resources are much more expensive than hardware, they first say go away, then after a while they say prove it. As far as they are concerned, paying wages comes from a different bucket of money than the machines do, so they don't care (i.e. the people who can replace the machines don't care, because paying wages doesn't come out of their pockets)...
So how can I prove that $X of benefit will be gained by spending $Y on new hardware?
The stack I'm working with is as follows: VS 2008, SQL Server 2005/2008. As our duties dictate, we are SQL admins as well as Web/WinForms/WebService developers, so it's very typical to have 2 VS sessions and at least one SQL session open at the same time.
Cheers
Anthony
Actually, the main cost for your boss is not the lost productivity. It is that his developers don't enjoy their working conditions. This leads to:
loss of motivation and productivity
more stress causing illness
external opportunities causing developers to go away
That sounds like a decent machine for your stack. Have you proven to yourself that you're going to get better performance, using real-world tests?
Check with your IT people to see if you can get the disks benchmarked, and max out the memory. Management should be more willing to take these incremental steps first.
The machine looks fine apart from the RAM.
If you want to prove this sort of thing, time all the things you wait for (typically load times and compile times), add it all up, and work out how much it costs you to sit around. From that, make some sort of estimate of how much time you'll save (it will have to be a guess unless you can compare like with like, which is difficult if they won't upgrade your systems). You'll probably find that they'll make the money back on the RAM, at least, in next to no time - and that's before you even begin to factor in the loss of productivity from people's minds wandering while they wait for stuff to happen.
Unfortunately if they're skeptical then it's unlikely you can prove it to them in a quantitative way alone. Even if you came up with numbers, they'll probably question the methodology. I suggest you see if they're willing to watch a 10 minute demo (maybe call it a presentation), and show them the experience of switching between VS instances (while explaining why you'd need to switch and how often), show them the build process (again explaining why you'd need to create a build and how often), etc.
Ask them if you're allowed to bring your own hardware. If you're really convinced it would make you more productive, upgrade it yourself and when you start producing more ask for a raise or to be reimbursed.
Short of that though..
I have to ask: what else are you running? I'm not really that familiar with that stack, but it really shouldn't be that taxing. Are they forcing you to run some kind of system-slowing monitoring or antivirus app?
You'd probably have better luck convincing them to let you change that than getting them to roll out upgrades.
If you really must convince them, your best bet is to benchmark your machine as accurately as you can and price out exactly what you need upgraded. It's a lot easier to get them to agree to an exact (and low) dollar amount than to some open-ended upgrade.
Even discussing this with them for more than five minutes will cost more than just calling your local PC dealer and buying the RAM out of your own pocket. Ask your project lead whether they can put it on the tab of the project as another "development tool". If (s)he can't, don't bother and cough up the money yourself.
When they come complaining, put the time of these meetings on their budget (since they're the ones crying about it). See how long they can take that.
When we had the same issue, my boss bought better gfx cards for the whole team out of his own pocket and went to the PC guys to get each of us a second monitor. A few days later, he went again to get each of us 2 GB more RAM, too.
The main cost from slow developer machines comes from the slow builds and the 'context switching', ie the time that it takes you to switch between the tasks required of you:
Firing up the second instance of VS and waiting for it to load and build
Checking out or updating a source tree
Starting up another instance of VS or checking out a clean source tree to 'have a quick look at' some bug that's been assigned
Multiple build/debug cycles to fix difficult bugs
The mental overhead in switching between different tasks, which shouldn't be underestimated
I made a case a while ago for new hardware after doing a breakdown of the amount of time that was wasted waiting for the machine to catch up. In a typical day we might need to do 2 or 3 full builds at half an hour each. The link time was around 3 minutes, and in a build/debug cycle you might do that 40 times a day. So that's 3.5 hours a day waiting for the machine. The bulk of that is in small 2 or 3 minute pockets which isn't long enough for you to context switch and do something else. It's long enough to check your mail, check stackoverflow, blow your nose and that's about it. So there's nothing else productive you can do with that time.
If you can show that a new machine will build the full project in 15 minutes and link in 1 minute then that's theoretically given you an extra 2 hours of productivity a day (or more realistically, the potential for more build cycles).
So I would get some objective timings that show how long it takes for different parts of your work cycle, then try to do comparative timings on machines with 4GB of RAM, a second drive (eg something fast like a WD Raptor), an SSD, whatever, to come up with some hard figures to support your case.
EDIT: I forgot to mention: present this as your current hardware making you lose productivity, and put a cost on the amount of time lost by multiplying it by a typical developer hourly rate. On this basis I was able to show that a new PC would pay for itself in about a month.
Take a task you do regularly that would be improved with faster hardware - ex: running the test suite, running a build, booting and shutting down a virtual machine - and measure the time it takes with current hardware and with better hardware.
Then compute the monthly, or yearly cost: how many times per month x time gained x hourly salary, and see if this is enough to make a case.
For instance, suppose you made $10,000/month and gained 5 minutes a day with a better machine. The loss to your company per month would be around (5/60 hours lost per day) x 20 work days/month x ($10,000 / (20 work days x 8 hours/day)) ≈ $105/month, or roughly $1,250/year lost because of the machine (assuming I didn't mess up the math...). Now before talking to your manager, think about whether this number is significant.
Note this assumes that 1) you can measure the improvement even though you don't have a better machine yet, and 2) while you are "wasting" your 5 minutes a day you are not doing anything productive, which is not obvious.
For me, the cost of a slow machine is more psychological, but it's hard to quantify - after a few days of having to wait for things to happen on the PC, I begin to get cranky, which is both bad for my focus, and my co-workers!
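For anyone who wants to plug in their own numbers, here is the same back-of-the-envelope calculation as a tiny program (all figures are just the example values from above):

```cpp
#include <cstdio>

int main() {
    // Example figures from the answer above - substitute your own.
    const double salary_per_month     = 10000.0;
    const double work_days_per_month  = 20.0;
    const double hours_per_day        = 8.0;
    const double minutes_lost_per_day = 5.0;

    const double hourly_rate = salary_per_month / (work_days_per_month * hours_per_day);
    const double hours_lost_per_month = (minutes_lost_per_day / 60.0) * work_days_per_month;
    const double cost_per_month = hours_lost_per_month * hourly_rate;

    std::printf("Cost of the slow machine: about $%.0f/month ($%.0f/year)\n",
                cost_per_month, cost_per_month * 12.0);
    return 0;
}
```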
It’s easy; hardware is cheap, developers are expensive. Throwing reasonable amounts of money at the machinery should be an absolute no brainer and if your management doesn’t understand that and won’t be guided by your professional opinion then you might be in the wrong job.
As for your machine, throw some more RAM at it and use a fast disk (have a look at how intensive VS is on disk IO using Resource Monitor – it's very hungry). Lots of people are going to 10,000 RPM drives or even SSDs these days, and they make a big difference to your productivity.
Try this; take the price of the hardware you need (say fast disk and more RAM), split it across a six month period (a reasonable time period in which to recoup the investment) and see what it’s worth in “developer time” each day. You’ll probably find it only needs to return you a few minutes a day to pay for itself. Once again, if your management can’t understand or support this then question if you’re in the right place.

Resources