Debug makefile that is hanging indefinitely - makefile

I am debugging a very large make system. When I do a full build, about two hours after it starts it seems to hang, and no more output appears. Unfortunately, there are thousands of makefiles involved, and I'd like to narrow down which one is executing at the time of the hang. Is there a good way to suspend make and then see what recipes it is running at that point in time? (I'm using GNU make 3.81, if that matters.)
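One way to narrow it down from another terminal, sketched here under the assumption of a Linux host with the psmisc and strace tools installed: inspect the hung make's process tree. The leaf children are the recipe commands running at that moment, and each sub-make's working directory tells you which makefile it is executing (<child-pid> stands for whatever PID the pstree output shows):

pstree -alp "$(pgrep -o make)"     # oldest make process, full tree with command arguments
ls -l /proc/<child-pid>/cwd        # a sub-make's cwd reveals which makefile it is in
strace -p <child-pid>              # which syscall a stuck leaf command is blocked on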


Windows 10 became unresponsive after installing GCC ARM compiler

I would really appreciate some help here or at least an indication of where to post this.
After unsuccessfully trying to get my programs to build in e2studio, I installed the GCC ARM compiler, and with it the program builds successfully.
However, my previously fast Windows 10 computer is now extremely slow. It is unresponsive a lot of the time and can no longer open any Office applications (in the Task Manager I can see they are running). The screen sometimes goes black, and File Explorer in particular takes a long time to do the simplest of things (it seems as if any action takes a lot of processing).
In other words, I can't work normally on my computer anymore since I installed this compiler.
My disk resources are ample; no problem there.
I am trying to salvage all my work somehow, since backing up now takes a lot of time.
Any help would be greatly appreciated. If you believe this does not belong here, instead of downvoting me please help me redirect my problem to the appropriate place.

GDB is getting slower over time

When debugging with GDB, during one debug session it becomes slower and slower over time. Even the simplest operations, like step over and step into, can take dozens of seconds and sometimes even minutes.
I was debugging a rather big project (the Chromium browser). The only explanation I could think of was that GDB gets slower over time because it loads more and more symbols and it takes longer to work with them. However, Chromium compiles the entire codebase into one huge executable, which contains all the symbols, and these should be loaded at the very beginning. Thus the symbol database won't grow during the debugging session. Moreover, why would one need to look up symbols just to perform a step over or step into operation?
While testing, I tried using GDB with front-ends (Eclipse, QtCreator, Emacs) and from the command line, to confirm that this is not an IDE problem. All use cases demonstrate the same problem, though it seems to appear sooner in IDEs (probably because the IDE also loads symbols for the watch view, call stack, list of threads, etc.).
Why is GDB getting slower? Is it a design flaw, a bug, or some specific problem on my computer? Are there any free alternatives to GDB that work faster?
Why is GDB getting slower?
It's a bug. Try a newer version of GDB (preferably the current CVS snapshot). If the problem is still there, report it to the GDB Bugzilla with repro instructions.
all symbols which should be loaded in the very beginning.
GDB loads partial symbols (psymbols) on startup, and reads more "on-demand", so some growth is expected.
why would one need to look up symbols just to perform step over or step into
In order to step over or into, GDB would likely need line tables for the current translation unit (TU). If your "step into" operation takes you to a new TU, then new line tables would have to be loaded.
Still, it should not take GDB anywhere near minutes to step or next.
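One way to test whether on-demand symbol reading is even a factor, not part of the original answer but using a documented GDB flag (the binary name here is illustrative), is to force all symbol tables to be expanded up front and compare stepping behavior:

gdb --readnow ./chrome    # expand all symbol tables at startup instead of lazily

If stepping still degrades over a long session with --readnow, lazy symbol loading probably isn't the cause.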

Best practices for maintaining cronjobs and shell scripts?

I have inherited a sprawling crontab that I need to maintain and update. I don't have much experience with cron or bash scripting (though I think I've got a decent grip on the basics), and I want to do a good job.
Short request: any guidelines for 'refactoring' a messy crontab and set of bash scripts?
Long request: I've run into a number of issues, but so many people use cron files etc. that I feel like I must be missing some large repository of information, best practices and tools - or is this just a stylistic difference for this kind of programming? (My bias: why do something manually if I can use a tool to do it faster, consistently and well?)
Examples of issues so far:
Due to an external event, the crontab didn't run for a couple of days. Someone else and I manually went through the list, trying to figure out what didn't run, what we needed to rerun, and which scripts we needed to edit and run with earlier dates, etc.
What I can't find:
There are plenty of (slightly pointless) 'cron generators' online. Where is the reverse? Something I could feed a long crontab and two dates, and have it output which jobs should have run when, or just how many times in total?
This seems within my meager scripting capabilities, so shouldn't it exist already? ;)
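For illustration, a minimal bash sketch of such a reverse generator. It assumes GNU date and a crontab restricted to plain numeric fields and '*' (no ranges, lists, steps or @reboot), and note that real cron ORs the day-of-month and day-of-week fields when both are restricted, whereas this ANDs them:

#!/bin/bash
# usage: ./cron-replay jobs.tab '2011-01-01' '2011-01-03'
crontab_file=$1
t=$(date -d "$2" +%s)
end=$(date -d "$3" +%s)
while (( t <= end )); do
    # current minute, broken into the five cron fields (no leading zeros)
    read -r min hour dom mon dow < <(date -d "@$t" '+%-M %-H %-d %-m %-w')
    while read -r m h d mo w cmd; do
        [[ $m =~ ^# || -z $cmd ]] && continue    # skip comments and blank lines
        [[ $m  == '*' || $m  -eq $min  ]] || continue
        [[ $h  == '*' || $h  -eq $hour ]] || continue
        [[ $d  == '*' || $d  -eq $dom  ]] || continue
        [[ $mo == '*' || $mo -eq $mon  ]] || continue
        [[ $w  == '*' || $w  -eq $dow  ]] || continue
        echo "$(date -d "@$t" '+%F %R')  $cmd"
    done < "$crontab_file"
    (( t += 60 ))                                # advance one minute
done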
Alternatively, if I ever have to do that again, is there some way of calling a bash script so that any calls to date are pre-set to an earlier time, rather than changing every date call within the script? (e.g. for all the missed reports and billing invoices)
It turns out a particular report hadn't been running for two years. It was just requested again, and lo, there it was in the crontab! The bash script just had broken path references to the relevant files.
What I can't find: some kind of path checker for bash files? Like a website link checker. Yes, I'll be going through these all manually eventually, but it'd show up at least some of the problem areas.
It sounds like sometimes there has been either too long or too short a gap between dependent processes, so updates have happened after the first job has already run, or the first hasn't finished before the second is called. I've seen a few possible options for this (e.g. anacron runs jobs sequentially), but what would you recommend?
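One common pattern, not taken from this thread's answers, is to chain dependent jobs so the second only starts after the first succeeds, and to let flock(1) prevent overlap when a run outlasts its interval. A sketch, with hypothetical script names:

0 2 * * *  flock -n /var/lock/nightly.lock sh -c 'fetch-updates.sh && build-report.sh'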
There are also a large number of essentially meaningless emails generated from the crontab (scripts throwing errors but running 'correctly', failing mostly silently, or just printing every step of non-essential scripts). I'll be manually going through the scripts and trying to get them to provide more useful data, or 'succeed quietly', but y'know - any guidelines?
If my understanding or layout of the issue is confused, then I apologize, but hey, you see my problem then! I need to go from newbie to knowing what to do to get this right, without screwing up a touchy system further. Thanks!
Not a full answer, but more resources that have been helpful:
http://blog.endpoint.com/2008/12/best-practices-for-cron.html
I am slowly going through this and trying to implement each of the points. I hadn't thought to google 'best practices cron' till after my post. :P
For version control, I'm just going to use RCS in the meantime, as I edit scripts on a file-by-file basis, but I've been advised to get Git set up (or Mercurial if I were on a Windows system).
This actually sounds great:
http://everythingsysadmin.com/2010/09/xed-202-released.html
"xed is a perl script that locks a file, runs $EDITOR on the file, then unlocks it."...and puts it in RCS if it wasn't already.
Completely brainless version control. If I get my head around bash, I'd like to create an editing shortcut that automatically commits to whichever version control system I use.
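A bash sketch of that shortcut for the RCS case; the function name and log message are made up, but co(1)/ci(1) and their flags are standard RCS. It locks, edits, then checks the file back in, keeping a readable working copy:

edit() {
    co -l "$1" 2>/dev/null                      # lock and check out, if already under RCS
    "${EDITOR:-vi}" "$1"
    ci -u -t-auto -m"edit() auto-commit" "$1"   # check in; -u keeps a working copy
}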
Other tips I received from a system admin:
Dates: rather than using, say, date or --date="last monday", use a fixed date and add a day/week etc. to it on each run (as long as that doesn't go past the current day, obviously), because then if the script doesn't run one day, I can just re-run it repeatedly until it catches up. Ah!
(And, this might sound obvious, but heaps of the reports I'll eventually be editing don't say prominently what dates the report covers. Will fix.)
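A bash sketch of that fixed-date tip, assuming GNU date; the state-file path and report script are hypothetical. The last processed date lives in a file and advances one day per successful run, so re-running the job replays any missed days:

state=/var/lib/reports/last-run                     # hypothetical state file
last=$(cat "$state" 2>/dev/null || date -d yesterday +%F)
next=$(date -d "$last + 1 day" +%F)
if [[ ! $next > $(date +%F) ]]; then                # never process a future day
    generate-report.sh --date "$next" && echo "$next" > "$state"
fi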
And I was reassured that I should try to get the cron emails as quiet as possible, so that I actually notice when there is an error email.
There are wrappers for better cron error reporting that I have not yet investigated, linked here: http://habilis.net/cronic/
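For what it's worth, cronic's usage (per the page linked above) is just a command prefix: it suppresses output unless the job actually fails, so mail only arrives on real errors. A hypothetical crontab line:

0 4 * * *  cronic /usr/local/bin/nightly-report.sh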
Herculean task ahead of you, best of luck. :)
I'd suggest finding all the tasks that run daily and shoving them into their own scripts in /etc/cron.daily/. Same for weekly into /etc/cron.weekly, hourly, and monthly.
You might want to investigate use of anacron(8) for scheduling your jobs, if the machine won't always be online, but you still need some level of control over when the jobs are run. It's been the default cron-helper-tool for multiple distributions for a few years, so hopefully it's stable enough to rely on for your own tasks; but I could easily imagine that it might not perfectly meet your needs.
Faking the dates to scripts can be done with at least two packages on Ubuntu: datefudge and faketime. I have no experience with either, but both sound like they should be able to help. I hope you won't need it in the future. :)
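For instance, faketime's documented form is a command prefix, so a missed day's report could be regenerated without touching the script (the script name is hypothetical):

faketime '2011-03-01 06:00:00' ./daily-billing-report.sh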
Sorry, I know of no path-checker for bash scripts. It seems unlikely, since simple scripts are simple and easy to check by eye :) and complex scripts will be generating their pathnames at runtime anyhow. Maybe you could keep a database of pathnames used by each script and write a new script to verify that database regularly.
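That said, a crude approximation is easy to sketch: pull anything that looks like an absolute path out of each script and report the ones that don't exist. It will miss paths built at runtime and flag some false positives, but it surfaces obvious breakage (the script location is hypothetical):

for script in /usr/local/bin/reports/*.sh; do
    grep -oE '/[A-Za-z0-9._/-]+' "$script" | sort -u |
    while read -r path; do
        [[ -e $path ]] || echo "$script: missing $path"
    done
done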
You could disable the cron email by setting MAILTO="". I'm not sure I like this. Maybe setting MAILTO to a logging-only account would help stem the deluge. Another option is getting really good at your procmail(1) rules so you can stuff the messages in another mailbox completely.
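In crontab syntax those two options are just a variable assignment at the top of the file (the address below is hypothetical):

MAILTO=""                        # silence all cron mail
MAILTO="cron-log@example.com"    # or divert everything to a logging-only inbox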
Getting good at mutt's color or score controls can help you spot the wheat amongst the chaff. (color index red black ERROR or similar commands might help you spot the problems more quickly.)

Improving Scala script startup time -- client mode?

I'd like to get short Scala scripts running as fast as python scripts do, particularly in terms of script startup time.
Can anyone recommend some ways of doing this that don't involve compiling with GCJ, for instance?
One way I can think of is to run the script using the JVM's client mode, but I can't seem to get this working. An example (known-good) shebang for this would be great.
UPDATE: I'm aware of the other questions, but don't think any workable answers have been found so far, as I'm looking for solutions that work on STANDARD installs, without additional requirements. This is what I was trying to get at with "doesn't involve compiling with GCJ, for instance".
It seems that -client mode is designed for this express purpose, but it's just awkward to activate from scala scripts for some reason.
As many other questions have answered before (if only one knew how to look for them): use Nailgun.
Other ways to improve script performance are to start fsc at system boot, so it will be available for scripts, and to use -savecompiled to avoid repeated compilation of scripts.
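A sketch of the -savecompiled route; the flag is a real option of the scala script runner, and the script itself is just illustrative. On the first run the runner caches the compiled form next to the script and reuses it afterwards:

#!/bin/sh
exec scala -savecompiled "$0" "$@"
!#
args.foreach(println)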
EDIT
You mention -client mode, but I think that's really not a good choice. It will give you a slower Scala compiler, and does little to improve the startup time of the compiler itself, as opposed to that of the JVM. Much better to have fsc as a daemon, running with -server, and/or to save compiled scripts with -savecompiled.
Now, I don't know what problems you are having with -client, but I have read that it doesn't work with 64-bit JVMs. Might that be your case?
PS: Looking up similar questions, I noticed JRuby has builtin Nailgun support!
I haven't tried this yet, but scala-native promises near-instant startup because it compiles into a native binary. So one solution is to provide it as a number of binary downloads.
http://www.scala-native.org/en/latest/
I just tried to pass the '-client' parameter through Scala to the JVM this way:
#!/bin/sh
exec scala -J-client "$0" "$@"
!#
args.foreach(println)
It seems to work. Daniel C. Sobral wrote that he read that it doesn't work with 64-bit JVMs. I don't know; maybe that information is outdated. Anyway, it seems to drop the startup time a little bit.
Running:
:~$ time /tmp/testScalasScript arg1
arg1
real 0m2,170s
user 0m2,228s
sys 0m0,217s
This was the fastest run of just a couple of tests. Without the flag it takes up to 0.5s longer. But this was really a quick test; it should be done more systematically to produce meaningful results.
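A slightly more systematic version might repeat each variant several times and print every timing instead of trusting a single run. A sketch (TIMEFORMAT='%R' makes bash's time builtin print plain seconds; $flag is deliberately unquoted so the empty case disappears):

#!/bin/bash
TIMEFORMAT='%R'
for flag in '' '-J-client' '-J-server'; do
    echo "flag: ${flag:-none}"
    for run in 1 2 3 4 5; do
        time scala $flag /tmp/testScalasScript arg1 >/dev/null
    done
done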
Wasn't there a way to make Scala compile and save the compilation result on the first run of the script, for faster reuse? (Presumably the -savecompiled option mentioned in the answer above.) But I don't know for sure at this point in time.
UPDATE:
I just saw that in 'java -help' the '-client' option is not documented (anymore?). Anyway, no error is thrown (as happens when you use nonexistent options), so I am not sure whether the '-client' option really has any effect.

Program is slower when compiled

Any suggestions on why a VB6 program would be slower when compiled than when running in the debugger? I'm compiling it with "Optimize for fast code."
Notes:
I measure performance by running the compiled version and the non-compiled version on the same machine. I base my conclusions on wall-clock time, since 30 minutes vs. 100 minutes is a big enough difference to be visible.
Several months ago, I configured a debugging tool to attach itself to my program whenever it ran. I totally forgot that I had done this.
Special thanks to Process Monitor for making this very obvious.
Turning it off made the program run fast.
AppVerifier, for those who are curious.
You should select the "Compile to Native Code" option.
The "Compile to P-Code" option forces your program to run in an interpreted mode, which can be slower.
There are some optimizations in the advanced section; try them out too.
Some more points to consider:
Are you running the compiled application in the same environment? Is it taking the same data as input?
How do you know that it is slow? What if your timing method is wrong?
How do you measure the performance?
It is hard to judge the performance from what you've said. You have to ensure the running environment is exactly the same when comparing performance.
Are you running on the same machine? Do you connect to a DB? Does the DB have the same workload on each run? You need to isolate these other factors before reaching such a conclusion.
