Can Visual Studio 2010 ignore specific pragmas?

I have literally hundreds of C++ source files used by many projects. They were written quite a while back, and they are all wrapped in packing pragmas:
#pragma pack(push, 1)
/* Code here ... */
#pragma pack(pop)
I have been put in charge of porting to x64. Amongst the many changes that need to be made, one is the requirement for a 16-byte aligned stack for Windows API calls. After some analysis of our system, we've determined that 1-byte structure alignment is not necessary and won't have any adverse effects on the system. I need to get rid of the 1-byte packing.
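To illustrate what the 1-byte packing actually does to layout, here is a minimal sketch (the struct names are made up):
#include <cstdint>
#include <cstdio>

#pragma pack(push, 1)
struct PackedMsg {          // hypothetical example struct
    std::uint8_t  type;     // 1 byte
    std::uint32_t value;    // 4 bytes, no padding before it under pack(1)
};
#pragma pack(pop)

struct UnpackedMsg {        // same fields at natural alignment
    std::uint8_t  type;     // 1 byte, followed by 3 bytes of padding
    std::uint32_t value;    // aligned to a 4-byte boundary
};

int main() {
    // Typically prints 5 and 8 on MSVC and other common ABIs.
    std::printf("%zu %zu\n", sizeof(PackedMsg), sizeof(UnpackedMsg));
}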
I know I can do a quick find/replace on all the files and just strip them out. This is an OK solution; I'm perfectly happy to do this if it's the only way. However, if I can avoid having to check in a revision which involves changes to literally hundreds of source files, and all the conflicts that might go with it, then that would be preferable.
Is there a way to get the Microsoft Compiler to ignore the #pragma pack?

As far as I know, there is no way to disable #pragmas when using MSVC.
What I would probably do is write a script in a language that is well suited to text processing (I'd probably use Ruby, but Python, Perl, or sed should all do the job) and simply use it to comment out or remove the #pragma pack lines. This should be comparatively easy, as the #pragmas are going to be the only statement on a given line of code, and scripting languages usually include functionality to iterate through a set of directories.
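As a sketch of the same idea - here in C++17 rather than a scripting language, since that toolchain is already at hand; the file extensions and the in-place rewrite are assumptions:
#include <filesystem>
#include <fstream>
#include <regex>
#include <sstream>
#include <string>

namespace fs = std::filesystem;

int main() {
    // Matches "#pragma pack(push, 1)" and "#pragma pack(pop)" lines.
    const std::regex pack(R"(^\s*#pragma\s+pack\s*\(\s*(push\s*,\s*1|pop)\s*\))");
    for (const auto& entry : fs::recursive_directory_iterator(".")) {
        const auto ext = entry.path().extension();
        if (ext != ".cpp" && ext != ".h") continue;  // assumed extensions

        std::ifstream in(entry.path());
        std::ostringstream kept;
        std::string line;
        while (std::getline(in, line))
            if (!std::regex_search(line, pack))      // drop matching lines
                kept << line << '\n';
        in.close();

        std::ofstream(entry.path()) << kept.str();   // rewrite in place
    }
}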

Related

How To Structure Large OpenCL Kernels?

I have worked with OpenCL on a couple of projects, but have always written the kernel as one (sometimes rather large) function. Now I am working on a more complex project and would like to share functions across several kernels.
But the examples I can find all show the kernel as a single file (very few even call secondary functions). It seems like it should be possible to use multiple files - clCreateProgramWithSource() accepts multiple strings (and combines them, I assume) - although pyopencl's Program() takes only a single source.
So I would like to hear from anyone with experience doing this:
Are there any problems associated with multiple source files?
Is the best workaround for pyopencl to simply concatenate files?
Is there any way to compile a library of functions (instead of passing in the library source with each kernel, even if not all are used)?
If it's necessary to pass in the library source every time, are unused functions discarded (no overhead)?
Any other best practices/suggestions?
Thanks.
I don't think OpenCL has a concept of multiple source files in a program - a program is one compilation unit. You can, however, use #include and pull in headers or other .cl files at compile time.
You can have multiple kernels in an OpenCL program - so, after one compilation, you can invoke any of the set of kernels compiled.
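To make that concrete: the multiple strings that clCreateProgramWithSource accepts are effectively concatenated into that single compilation unit, and any kernel in the built program can then be instantiated. A minimal sketch (error handling omitted, kernel names made up):
#include <CL/cl.h>

cl_program build(cl_context ctx, cl_device_id dev) {
    const char* sources[] = {
        // shared helper, then two kernels that both live in one program
        "float shared_helper(float x) { return x * x; }\n",
        "__kernel void square(__global float* d) {\n"
        "    size_t i = get_global_id(0); d[i] = shared_helper(d[i]);\n"
        "}\n"
        "__kernel void negate(__global float* d) {\n"
        "    size_t i = get_global_id(0); d[i] = -d[i];\n"
        "}\n",
    };
    cl_int err;
    cl_program prog = clCreateProgramWithSource(ctx, 2, sources, NULL, &err);
    clBuildProgram(prog, 1, &dev, "", NULL, NULL);

    cl_kernel square = clCreateKernel(prog, "square", &err);  // any kernel in the
    cl_kernel negate = clCreateKernel(prog, "negate", &err);  // program is usable
    (void)square; (void)negate;
    return prog;
}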
Any code not used - functions, or anything statically known to be unreachable - can be assumed to be eliminated during compilation, at some minor cost to compile time.
In OpenCL 1.2 you can compile program objects separately and link them together.
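A rough sketch of that flow, assuming libSrc and kernelSrc hold the two source texts (error handling again omitted):
#include <CL/cl.h>

cl_program compile_and_link(cl_context ctx, cl_device_id dev,
                            const char* libSrc, const char* kernelSrc) {
    cl_int err;
    cl_program lib  = clCreateProgramWithSource(ctx, 1, &libSrc, NULL, &err);
    cl_program kern = clCreateProgramWithSource(ctx, 1, &kernelSrc, NULL, &err);

    // Compile each piece separately, like producing object files.
    clCompileProgram(lib,  1, &dev, "", 0, NULL, NULL, NULL, NULL);
    clCompileProgram(kern, 1, &dev, "", 0, NULL, NULL, NULL, NULL);

    // Link the compiled pieces into a single executable program.
    cl_program parts[] = { lib, kern };
    return clLinkProgram(ctx, 1, &dev, "", 2, parts, NULL, NULL, &err);
}
Passing "-create-library" as the link options should instead produce a reusable library that later programs can link against.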

How expensive is it for the compiler to process an include-guarded header?

To speed up the compilation of a large source file, does it make more sense to prune back the sheer number of headers used in a translation unit, or does the cost of compiling code far outweigh the time it takes to process out an include-guarded header?
If the latter is true, engineering effort would be better spent creating more, lighter-weight headers rather than fewer.
So how long does it take for a modern compiler to handle a header that is effectively include-guarded out? At what point would the inclusion of such headers become a hit on compilation performance?
I read an FAQ about this the other day... First off, write correct headers, i.e. include all the headers that you actually use and don't depend on undocumented dependencies (which can and will change).
Second, compilers usually recognize include guards these days, so they're fairly efficient. However, you still need to open a lot of files, which may become a burden in large projects. One suggestion was to do this:
Header file:
// file.hpp
#ifndef H_FILE
#define H_FILE
/* ... */
#endif
Now to use the header in your source file, add an extra #ifndef:
// source.cpp
#ifndef H_FILE
# include <file.hpp>
#endif
It'll be noisier in the source file, and you need predictable include-guard names, but you could potentially avoid a lot of include directives that way.
Assuming C/C++, the repeated recompilation of header files scales non-linearly for a large system (hundreds of files), so if compilation performance is an issue, it is very likely down to that - at least unless you are trying to compile a million-line source file on a 1980s-era PC...
Pre-compiled headers are available for most compilers, but generally take specific configuration and management to work on non system-headers, which not every project does.
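On MSVC, for instance, the conventional setup looks roughly like this (the stdafx names are convention, not a requirement):
// stdafx.h - the precompiled header: heavy, rarely-changing includes only
#include <windows.h>
#include <string>
#include <vector>

// stdafx.cpp - compiled once with /Yc"stdafx.h" to produce the .pch file;
// every other .cpp is compiled with /Yu"stdafx.h" and must begin with
// #include "stdafx.h" so the compiler can substitute the precompiled state.
#include "stdafx.h"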
See for example:
http://www.cygnus-software.com/papers/precompiledheaders.html
'Build time on my project is now 15% of what it was before!'
Beyond that, you need to look at the techniques in:
http://www.amazon.com/exec/obidos/ASIN/0201633620/qid%3D990165010/002-0139320-7720029 (John Lakos, Large-Scale C++ Software Design)
Or split the system into multiple parts with clean, non-header-based interfaces between them, say .NET components.
The answer:
It can be very expensive!
I found an article where someone had done some testing of the very issue addressed here, and am astonished to find you can cut your compilation time under MSVC by more than an order of magnitude if you write your include guards properly:
http://www.bobarcher.org/software/include/index.html
The most astonishing line from the results is that a test file compiled under MSVC 2008 goes from 5.48s to 0.13s given the right include guard methodology.

Why can't the compiler just compile my code as I type it?

From the user's point of view, it could work as smoothly as syntax colouring does today. If you stop typing for long enough (maybe a couple of seconds) the compilation (not linking) would finish, and code errors would be identified using something like syntax colouring.
It's not like my 3GHz quad core monster computer was really busy doing something else. Why not let it compile all the time?
That's exactly what the VB.NET code editor in Visual Studio does.
The advantage is much more accurate IntelliSense than C#. The disadvantage is that it wastes truly vast amounts of processor time and memory. :-(
It can. Or, to be more useful, the answer to this question depends on:
What language
What degree of optimization you require
How annoyed you will be if you temporarily type something dumb, and the compiler compiles and injects the result into the binary you are debugging before you can fix it.
Some really strong optimizations would be very messy to redo on the fly. On the other hand, a basic compilation, with no need to worry about assigning offsets for x86 instructions? Sure.
Some IDEs do compile code (or at least check its syntax and some semantics) as it is typed. For example, I think Eclipse does it. I think Visual Basic 6 (and maybe earlier versions) did this.
Not sure what IDE you're using, but that's how VB.NET works.
I'm not well-versed in compilers or the methods by which code is converted to IL and machine language, etc. But even so I can see how altering my program by one flow control statement can completely invalidate the work a compiler has done up to that point. By adding or changing a single line of code, entire portions of a program may become obsolete, unused, or in some other way require re-evaluation.
I think I'd rather save those CPU cycles for distributed.net or SETI@home instead of constantly recompiling my code as I alter it.
That totally depends on the language.
Languages that have context-independent syntaxes "could" pre-compile expressions as they are typed. However, compiling projects in such languages is always fast anyway, so why use the CPU constantly when you can quickly batch the work once the code is ready?
Other languages, like (infamously) C++, are context-dependent. In most cases, the compiler can't understand an expression without having already read all the code that precedes it. That makes it really, really hard to parse, which is why we are only now getting error checking before compilation (in VS2010 and other recent IDEs). In this case it looks close to impossible to implement the feature you're asking for.
That said, I'm not a specialist at all. That's all I know about it.
Even interpreted languages like PHP have support for this in the Komodo editor. I'm sure there are many more editors out there that support this for almost any language.

JavaScript source code analysis (specifically duplication checking)

Notes:
I already use JSLint extensively via a tool I wrote that scans my current project directory at intervals for recently updated/created .js files. It has drastically improved productivity for me, and I doubt there is anything as good as JSLint for the price (it's free).
That said, is there any analysis tool out there that can find repetitive or near-duplicate code blocks, the goal being to make it easier to find opportunities to consolidate large files or small- to medium-sized projects?
It may not be exactly what you're after, but Google's JavaScript optimizer is worth a look.
Our CloneDR is a tool for finding exact and near-miss cloned code blocks in a variety of languages. It will find duplicates in the same file or across literally thousands of files if you have them. You don't have to provide it with any guidance; it can find the cloned code by itself. And it won't be fooled by different indentation or line breaks, or even by consistent renaming of identifiers.
It does support JavaScript, even if it isn't clear from the website.
You can see sample clone reports for a variety of languages at the website.
You may want to have a look at jsinspect.
jsinspect ./src
It will print a list of code blocks that are either identical or structurally very similar.
And there's also jscpd.
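I haven't verified its current command line, but the basic invocation is presumably similar:
jscpd ./src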

Patching an EXE using IDA

Say there is a buggy program that contains a sprintf() call, and I want to change it to snprintf() so it doesn't have a buffer overflow. How do I do that in IDA?
You really don't want to make that kind of change using information from IDA Pro.
Although IDA's disassembly is relatively high quality, it's not high quality enough to support executable rewriting. Converting a call to sprintf into a call to snprintf requires pushing a new argument onto the stack. That requires introducing a new instruction, which shifts the effective address (EA) of everything that follows it in the executable image. Updating those effective addresses requires extremely high-quality disassembly. In particular, you need to be able to:
Identify which addresses in the executable are data, and which ones are code
Identify which instruction operands are symbolic (address references) and which are numeric.
IDA can't (reliably) give you that information. Also, if the executable is statically linked against the CRT, it may not contain snprintf at all, which would make performing the rewriting by hand VERY difficult.
There are a few potential workarounds. If there is sufficient padding available in (or after) the function making the call, you might be able to get away with only rewriting a single function. Alternatively, if you have access to object files, and those object files were compiled with the /GY switch (assuming you are using Visual Studio) then you may be able to edit the object file. However, editing the object file may still require substantial fix ups.
Presumably, however, if you have access to the object files you probably also have access to the source. Changing the source is probably your best bet.
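For comparison, the source-level change is tiny (the buffer and format string here are hypothetical):
#include <cstdio>

void greet(const char* name) {
    char buf[64];

    // Before: no bound, so a sufficiently long name overruns buf.
    // sprintf(buf, "Hello, %s!", name);

    // After: snprintf writes at most sizeof(buf) bytes including the
    // terminating NUL, truncating instead of overflowing.
    std::snprintf(buf, sizeof(buf), "Hello, %s!", name);
    std::puts(buf);
}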
