What is a "local no-change full rebuild"? - makefile

Android's AOSP 10 no longer includes its own prebuilt version of the ccache utility.
In the explanation given for why, they mention that "Local no-change full rebuilds were showing better results, but why not just use incremental builds at that point?"
What exactly is a "local no-change full rebuild"?
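For context, the usual reading of the phrase (an assumption on my part, not something the AOSP notes spell out) is a full build from a clean output directory, repeated without touching any source in between, as opposed to an incremental build on top of an existing output directory:

# first full build
source build/envsetup.sh
lunch aosp_arm64-eng
m
# change nothing, wipe the output, and run the full build again:
# this second pass is the "local no-change full rebuild", the case where
# ccache helps most; an incremental build ("m" without the clean) would
# finish almost immediately here anyway
m clean
m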

Why does running a clang compiled executable on a network drive, hang all subsequent executions of compiled executables?

I'm perplexed by this one and not sure what's relevant so will include all context:
MacBook Pro with an M1 Pro running macOS 12.6.
Apple clang version 14.0.0, freshly installed by deleting DeveloperTools folder and running xcode-select --install.
Using zsh in Terminal.
Network share mounted using no-configuration Finder method (seems to use standard SMB, but authenticates with my Apple ID)
Network share is my home directory on an iMac with a Core i5 running macOS 11.6.8.
Update: I also tried the root directory and the tmp directory, to eliminate one category of doubt. Same result.
The minimum repeatable example of the issue I've managed to find is:
Use gcc from Apple's Developer Tools to compile a “Hello World” C application (originally discovered using ghc to compile Haskell - effect is the same).
Run the compiled executable. No surprises.
cd to the mounted network drive.
Do the same thing there - compiled executable hangs! First surprise, but relatively minor.
Return to the local machine. Original compiled executable still runs fine.
Use the DeveloperTools to compile anything, including the original source - compiled executable on local machine now hangs!
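Roughly, the steps above as shell commands (the share name and paths are placeholders, not my real ones):

cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("Hello, World\n"); return 0; }
EOF
gcc hello.c -o hello
./hello                      # prints as expected
cd /Volumes/<SHARE>          # the SMB mount
gcc ~/hello.c -o hello
./hello                      # hangs
cd ~ && ./hello              # the original local binary still works
gcc hello.c -o hello2
./hello2                     # a freshly compiled local binary now hangs too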
I've created an asciinema recording of the MRE; the key part of the transcript is captured in a still from that recording.
I’ve tried killing processes, checking lsof, unmounting the drive, logging in and out, checking the PATH, etc. Nothing gets me back to a working state short of a reboot.
Some more troubleshooting data:
gcc -v is identical for both executables, except for -fdebug-compilation-dir (set to cwd) and the name of the object file (randomly generated).
Just performing the compilation doesn't trigger the issue - running the networked executable does.
Trawling through the voluminous Console log reveals nothing relevant.
system.log shows no entries around the time of the issue.
lsof and ps -axww show reams and reams of output that is hard to spot patterns in, but I'm pretty sure there is no significant before/after differences.
I left the hung process running on the local machine overnight, and there's no change the next day.
Have I triggered some sandboxing or security fault and am being protected from disastrous consequences? Or is this some clang/llvm-related quirk I'm not familiar with? Or, given that ghc using its native code generator seems to have the same result, is this a bug in the way stdout is provided to executables? I'm at a loss!
Oh boy, avoiding Apple ID authentication of the network share fixed this for me.
I forced Finder not to use its magic no-configuration Apple ID login method by opening the Location in Finder, clicking the "Disconnect" button and then clicking the "Connect As..." button that appears in its place. If I choose "Registered User" and use my username and password, I can then execute exactly the same commands (since the mount name ends up being the same) and execution works without an issue. I can continue to compile and execute to my heart's content.
That the Apple ID method is being used in the first place is not obvious (in true minimal design fashion), but subtly indicated at the top of the Finder window as "Connected as ". The only obvious difference this makes is the username shown in mount:
Apple ID:
//com.apple.idms.appleid.prd.<UUID>#<HOSTNAME>._smb._tcp.local/<SHARE> on /Volumes/<SHARE> (smbfs, nodev, nosuid, mounted by <USERNAME>)
"Registered User":
//<USERNAME>#<HOSTNAME>._smb._tcp.local/<SHARE> on /Volumes/<SHARE> (smbfs, nodev, nosuid, mounted by <USERNAME>)
Obviously something far more significant is different, given the fundamental impact, but it's not at all clear to me what that is. So at this stage, this answer is just a workaround to a nasty bug.
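If you would rather skip the Finder dance entirely, I believe (an untested assumption on my part) the equivalent "Registered User" mount can be made from the shell with mount_smbfs, using the same placeholders as the mount output above:

umount /Volumes/<SHARE>                            # drop the Apple ID-authenticated mount
mkdir -p /Volumes/<SHARE>
mount_smbfs //<USERNAME>@<HOSTNAME>.local/<SHARE> /Volumes/<SHARE>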

Intel Advisor - view results from Linux cloud on local Windows GUI

With Intel Advisor, I ran the following on Linux in the cloud and downloaded the result folders to my local Windows machine:
advixe-cl --collect=survey ./My_Program.exe
advixe-cl --collect=map ./My_Program.exe
advixe-cl --collect=dependencies ./My_Program.exe
Then I opened my Windows GUI for Advisor. I clicked the "Open Result" icon and opened the advixeproj file. It correctly shows sub-folders for survey, map and dependencies. When I open any of them, all I see in each of the window panes is:
No Data
To collect data about your application's performance, compile your application in Release Build settings and run Survey analysis.
My application is a C program (My_Program.exe) that calls a shared object written in assembly language (NASM). I assume Advisor can work with assembly language programs and shared objects because VTune does, so I don't think that's the problem.
Next to the Application field I click "Browse" and browse to the My_Program.exe, but Advisor says the file "is not an executable binary." Maybe that's because this is Windows and the binary is for Linux.
My question is: how do I view results from a Linux cloud server downloaded to my local Windows machine for analysis with the Windows GUI? I do that regularly with VTune without any problems.
Thanks.
In short: the method described in the question is generally correct, but for Advisor it is also important to specify --project-dir (and keep it the same across all analysis types).
1) [on linux] advixe-cl --collect=survey --project-dir ./my_project_dir ./My_Program.exe
2) [on linux] advixe-cl --collect=tripcounts --project-dir ./my_project_dir ./My_Program.exe
etc..
3) copy my_project_dir folder from Linux to Windows
4) [on windows, in the Advisor GUI (advixe-gui)] open ./my_project_dir and use the "Show My Result" button.
This is covered on the Intel forum too, as noted in the comments on the original question.
In addition, there are two other methods: using the --snapshot command, or just exchanging the interactive HTML GUIs (available for the Roofline and Offload features). They are described, for example, in this nice article: https://software.intel.com/content/www/us/en/develop/documentation/advisor-cookbook/top/analyze-performance-remotely-and-visualize-results-on-macos.html
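For reference, the snapshot route looks roughly like this (option names are from my memory of the Advisor CLI, so double-check them against advixe-cl --help):

# on Linux: pack results plus cached sources/binaries into one portable file
advixe-cl --snapshot --project-dir ./my_project_dir --pack --cache-sources --cache-binaries -- ./my_snapshot
# copy the resulting my_snapshot.advixeexpz to Windows and open it in the Advisor GUI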

What is the correct way to use Intel Advisor on a remote machine?

Intel VTune Amplifier has the possibility to profile a parallel application executed on a remote machine.
Intel Advisor doesn't have such an option. According to this document, you have to use the command-line version of Intel Advisor:
This makes it possible to automate many tasks as well as analyze an application running on remote hosts
However, the GUI version has many features not offered by the cl version (like suggestions about how to solve vectorization/multi-thread inefficiency etc).
I tried to run advixe-cl on the remote machine and then copy the project (and produced results) locally. It works, but some features are lost. As a last resort I tried to ssh -X into the remote machine and then use advixe-gui, but it seems that the main core of my Xeon Phi KNL is too weak to run such a graphical application properly.
What is the correct/best use of Intel Advisor in such a scenario?
The recommended way is described by you here: "run advixe-cl on the remote machine and then copy locally the project".
But you mentioned that "some features were lost". What did you lose exactly?
The key deficiency of the given command-line+GUI approach is that you may not see your source code in the "Source View" tabs initially. To overcome this limitation, you have to adjust the Project Properties of your local project copy and fill in "Source Search" (and sometimes "Binaries/Symbol Search") with the directories where the original source code, and sometimes the executable binary plus DWARF/pdb debug info files, are located.
If you used the "-no-auto-finalize" option on the command line (a more advanced scenario), you may also need to use the Re-Finalize feature (available only starting from the 2017 Update 2 release), or (for older versions) make sure that you provide the Binary/Symbol/Source Search paths after opening the local project copy but before the "Show My Result" data-upload action.
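If it helps, I believe the same search paths can also be passed on the command line when generating a report from the copied project; the exact option spelling below is an assumption on my part, so verify it against your Advisor version's advixe-cl --help:

advixe-cl --report=survey --project-dir ./my_project_dir --search-dir src:r=/path/to/sources --search-dir bin:r=/path/to/binaries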

"Hacking: The Art of Exploitation" - Assembly Inconsistencies in book examples vs. my system's gcc

I am studying "Hacking: The Art of Exploitation". I am trying to follow the code examples, but for some reason the assembly code simply does not match what I see on my actual Linux system (running as a VirtualBox guest). I have made sure that I have installed a 32-bit Linux OS. Are there any args I can pass to gcc that let me compile the code into assembly that matches the book's more closely?
I would be fine reconciling the differences between the book and what I see if they were minor, but the difference is stark. I'd rather not run the code from the "Preconfigured incubator environment", as this inhibits my skill development.
I've actually been in the same boat. For the last week or two I've tried a ton of ways to produce comparable assembly code in my normal development environment (LMDE), including chroot, compiling with the -m32 flag, installing an x86 Ubuntu, etc., and nothing really worked. Today I found http://www.nostarch.com/hackingCD.htm, followed the instructions, and was able to get the LiveCD to boot in VMware Workstation 10. Here's what I did:
Download the iso from the link above (though it should work with the LiveCD as well)
Create a .vmx file and copy and paste the config from the link
I took out the section defining the cdrom device, since I was using an iso
Open the file with VMware Workstation; if you are using the iso, go to "Edit VM Settings", set up a cdrom device, and point it to the iso
The VM booted without any issues
I know this isn't as convenient as going through the examples in your main OS/system, and that you were trying to avoid using the LiveCD, but after doing a lot of research I've discovered that this is an extremely common issue and hopefully this answer helps someone. Using the LiveCD might not be ideal but it is still a heck of a lot better than dual booting.
for some reason the assembly codes simply does not match the one on my actual linux
The most likely reason is that the book was published in 2008, and used then-stable GCC (you can see GCC release history here).
GCC that you are using now is likely much newer, and so generates significantly different (and one hopes better) code.
Is there any args that I can pass to gcc that lets me compile the code into an assembly that matches closely with the ones given in the book?
No. You can try to compile and install a version from 2008, perhaps 4.2.3 or 4.3.0, and check whether that gives you closer output.
P.S. It looks like the first revision of the book is from 2003, and it's unlikely that the authors rebuilt all of their examples for the second edition in 2008, so perhaps try GCC 3.3 instead?
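If you do experiment, it also helps to look at the assembly directly and to switch off the hardening that modern toolchains enable by default; the flags below are the ones commonly suggested for following this book (example.c stands in for whichever listing you are on), though they still won't make the output match exactly:

gcc -m32 -O0 -g -fno-stack-protector -fno-pie -no-pie -z execstack -o example example.c
gcc -m32 -O0 -fno-stack-protector -S example.c    # writes example.s so you can read the generated assembly
objdump -d -M intel example                       # disassembly in Intel syntax, as used in the book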
This is why the book comes with a LiveCD with a linux distro and all of the example source code from the book on there. All of the examples in the book match exactly with what will happen in the LiveCD.
Just run the included LiveCD using VirtualBox or VMware and follow along with the book using that. If you don't have the CD, it can be downloaded from a torrent provided by No Starch (linked from their website)
It doesn't matter whether the output of gcc is different; the only thing that changes is the memory addresses. Plus, you said you are using a VM to run it, meaning that the memory you get is dummy memory. Try booting the iso and running it directly; it will be almost the same.
https://www.youtube.com/watch?v=pIN7oFkz5rM

Reason for Windows crashing

I wrote a program (a big one, so I can't post the code here) that reads information about the current PC's hardware via Windows, and sometimes my Windows 7 crashes. The worst thing is that I have no idea why, and debugging doesn't help me. Is there any way to get some kind of log from Windows 7 explaining why it crashed? Thanks in advance for any help.
The correct (but somewhat ugly) answer:
Go to Computer -> Properties and open "Advanced System Settings".
Under startup and recovery, make sure it is set to "Kernel memory dump" and note the location of the dump file (on a completely default install, you are looking at C:\windows\memory.dmp)
Optimally, you want to install the Windows Debugging Tools (now in the Windows SDK) as well as set the MS symbol store in your symbol settings (http://msdn.microsoft.com/en-us/library/ff552208(v=vs.85).aspx).
Once you've done all that, wait for a crash and inspect memory.dmp in the debugger. Usually you will not see the exact crash because your driver vendors don't include symbols, but you will generally get to see the DLL name involved in the crash, which should point you to the driver you are dealing with.
If you are not seeing a specific driver DLL name in the stack, that often indicates to me a hardware failure (like memory or overheating) that needs to be addressed.
MS has a good article on TechNet that describes what I mentioned above (step by step and in greater detail): http://blogs.technet.com/b/askcore/archive/2008/11/01/how-to-debug-kernel-mode-blue-screen-crashes-for-beginners.aspx
You can also look at the event log, as someone else noted, but generally the information there is next to useless beyond the actual kernel message (which can sometimes vaguely indicate whether the problem is a driver or something else).
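For completeness, here is roughly what inspecting the dump looks like once the Debugging Tools are installed (the paths and the public symbol server URL are the usual defaults; adjust to your setup):

kd -z C:\Windows\MEMORY.DMP -y srv*C:\symbols*https://msdl.microsoft.com/download/symbols
Then, at the kd> prompt:
!analyze -v   (automatic analysis; note the module it flags as "Probably caused by")
lm kv         (list loaded modules with version info, to identify the driver vendor)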

Resources