I was wondering if there is a quick and effective way to remove all the unused variables (local, instance, even properties) in Xcode... I am doing a code cleanup on my app, and a quick way to do this kind of refactoring would help me a lot...
Thanks...
It's been a long time since you asked your question and maybe you have found an answer already, but here is an excerpt from an answer to a related question:
For static analysis, I strongly recommend the Clang Static Analyzer (which is happily built into Xcode 3.2 on Snow Leopard). Among all its other virtues, this tool can trace code paths and identify chunks of code that cannot possibly be executed, which should either be removed or have the surrounding code fixed so that they can be called.
For dynamic analysis, I use gcov (with unit testing) to identify which code is actually executed. Coverage reports (read with something like CoverStory) reveal un-executed code, which, coupled with manual examination and testing, can help identify code that may be dead. You do have to tweak some settings and run gcov manually on your binaries. I used this blog post to get started.
Both methodologies do exactly what you want: they detect unused code (both variables and methods) so you can remove it.
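If it helps, here is a rough sketch of the manual gcov workflow the quoted answer alludes to. The target name, the output paths, and even the exact build settings are assumptions from memory of that era's GCC-based Xcode toolchain, so treat it as a starting point rather than a recipe:

    # Build the unit-test target with coverage instrumentation enabled.
    xcodebuild -target MyAppTests build \
        GCC_GENERATE_TEST_COVERAGE_FILES=YES \
        GCC_INSTRUMENT_PROGRAM_FLOW_ARCS=YES

    # Running the tests drops .gcno/.gcda files next to the object files.
    # Run gcov on a source file from that directory, or open the .gcda
    # files with CoverStory for a GUI report.
    cd build/MyApp.build/Debug/MyAppTests.build/Objects-normal/x86_64
    gcov MyClass.m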
Is there a way to run my .fs file with either a breakpoint or a command like System.Diagnostics.Debugger.Launch() in it so that I can use FSI to examine values, apply functions to them, plot them, etc.?
I know this question has been asked before but I tried the answers and could not make them work. Clear instructions or a link to a write-up explaining the process would be of great help not only to myself, but also, I believe, to all beginners.
Unfortunately, you cannot hit a breakpoint and jump into FSI. The context of a running .NET program is quite different from that of an interactive FSI session, and they are not compatible enough to just switch between one and the other. I can understand expecting this kind of debugging if you're coming from dynamic/interpreted languages such as JavaScript or Python; it is a really powerful feature.
As you have already noted, you can use the VS immediate window but you need to use its C#-like syntax and respect its limitations. Also, since it's not F#, you need to understand the F# to .NET conversion in order to make full use of it.
If you use Paket for dependency management in your project you have another option: Add generate_load_scripts: true to your paket.dependencies. This will give you a file like .paket\load\net452\main.group.fsx, which you can load into FSI to get all of the dependencies for your project. However, you are still responsible for loading in your own source files and building up some state similar to what is found at your desired breakpoint.
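As a sketch of that workflow (the target framework folder, the source file name, and the buildState function are placeholders invented for this example):

    // In paket.dependencies, add the line:  generate_load_scripts: true
    // then run `paket install` to regenerate the load scripts.

    // In an FSI session (or a scratch .fsx file):
    #load @".paket\load\net452\main.group.fsx"  // brings in every package reference
    #load "MyProject/State.fs"                  // your own sources still load manually

    // Rebuild something close to the state at your would-be breakpoint by hand.
    let state = MyProject.State.buildState ()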
To hit a breakpoint in Visual Studio or Visual Studio Code, just click to the left of the line number where you want the breakpoint. This is very much a supported feature in .fs files.
We write code, and we write tests. When you write tests, you're thinking both about the expected usage of the thing you're testing and about its internal implementation, writing your test to try to expose bad behaviors.
Whenever you change the implementation of the thing, you're adding new lines of code, and sometimes it's difficult to be sure that your test actually exercises all the code in the thing. One thing you can do is set a breakpoint, and step through painstakingly. Sometimes you have to. But I'm looking for a faster, better way to ensure all the code has been tested.
Imagine setting a breakpoint and stepping through the code, but every line that was run gets highlighted and stays highlighted after it was run. So when the program finishes, you can easily look at the code and identify lines which were never run during your test.
Is there some kind of tool, extension, add-in, or something out there, that will let you run a program or test, and identify which lines were executed and which were not? This is useful for improving the quality of tests.
I am using Visual Studio and Xamarin Studio, but even if a tool like this exists for different languages and different IDEs, knowing about it will help me find analogous answers for the IDEs and languages I personally care about.
As paddy responded in a comment (not sure why it wasn't an answer), the term you're looking for is coverage. It's a pretty critical tool to pair with unit testing, because if some code isn't covered by a test and never runs, you'd want to know so you can test more completely. Of course, the pitfall is that knowing THAT a line of code was touched doesn't automatically tell you HOW the line was touched: you can have 100% line coverage without covering 100% of the use cases of a particular line of code. Maybe you have an inline conditional where one branch never gets hit, just as an example.
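A tiny, made-up C# illustration of that pitfall: a test that only ever passes a non-empty list reports the line below as covered, even though the "empty" branch of the inline conditional has never run.

    using System.Collections.Generic;

    public static class Labeler
    {
        // One line, two behaviors: line coverage marks this as "hit" as soon
        // as either branch runs, so 100% line coverage can still hide an
        // untested case (items.Count == 0 here).
        public static string Describe(List<int> items) =>
            items.Count > 0 ? $"{items.Count} item(s)" : "empty";
    }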
Coverage is also useful to know even outside of testing. By performing a coverage analysis on code in production, you can get a feel for what aspects of your codebase are most critical in real use and where you need to focus on bugfixing, testing, and robustness. Similarly, if there are areas of your codebase that rarely or never get hit in a statistically significant period of production uptime, maybe that piece of the codebase can or should get trimmed out.
dotCover and dotTrace seem like they'll probably put you on the right path. Coming from use in Python, I know I used to use Django-Nose, which comes with coverage testing and reporting integrated. There's coverage.py for standalone analysis, and of course these aren't the only tools in the ecosystem, just the ones I used.
I am learning compilers and want to make changes of my own to the GCC parser and lexer. Is there any testing tool or some other way that lets me change the GCC code and test it accordingly?
I tried changing the lexical analysis file, but now I am stuck because I don't know how to compile these files. I tried compiling them with another GCC compiler, but it shows errors. I even tried configure and make, but doing this with every change does not seem efficient.
The purpose of these changes is just learning, and I have to stick with GCC as this is the only compiler my instructor allowed.
I even tried configure and make but doing that with every change is not at all efficient.
That is exactly what you should be doing. (Well, you don't need to re-configure after every change, just run make again.) However, by default GCC configures itself in bootstrap mode, which means that not only does your host compiler compile GCC, but that compiled GCC then compiles GCC again (and again). That is overkill for your purposes, and you can prevent it from happening by adding --disable-bootstrap to the configuration options.
Another option that can help significantly reduce build times is enabling only the languages you're interested in. Since you're experimenting, you'll likely be very happy if you create something that works for C or for C++, even if for some obscure reason Java happens to break. Testing other languages becomes relevant when you make your changes available for a larger audience, but that isn't the case just yet. The configuration option that covers this is --enable-languages=c,c++.
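A minimal out-of-tree build with those options might look like this (directory names are just examples; GCC's own documentation recommends building outside the source tree):

    # In the source tree, fetch GMP/MPFR/MPC if they aren't installed system-wide.
    cd gcc-source && ./contrib/download_prerequisites && cd ..

    mkdir build && cd build
    ../gcc-source/configure --disable-bootstrap --enable-languages=c,c++
    make -j$(nproc)    # after editing a file, just re-run make here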
Most of the configuration options are documented on the Installing GCC: Configuration page. Thoroughly testing your changes is documented on the Contributing to GCC page, but that's likely something for later: you should first know how to make your own simpler tests pass, by simply trying code that makes use of your new feature.
You make changes (which are made "permanent" by saving the files you modify), compile the code, and run the test suite.
You typically write additional tests or remove those that are invalidated by your changes and that's it.
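For the "run the test suite" part, the DejaGnu-based suite can be narrowed so you don't have to wait for everything; the test file pattern below is a placeholder:

    make check-gcc                                   # the full C testsuite (slow)
    make check-gcc RUNTESTFLAGS="dg.exp=mytest*.c"   # only tests matching a pattern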
If your changes don't contribute anything "positive" to the compiler, upstream will probably never accept them, and the only "permanence" you can get is the modifications in your local copy.
I wish to sell a Go application. I will provide a serial number to my clients. Are there ways to make the app a bit harder to crack?
I'd say it is hard to crack a C app and easy to crack a Java app. Are there tools that make cracking a Go app as hard as cracking a C app, or some tutorial? At least something I could do to protect my project a bit; I am not asking about super heavy protection.
Once you have the binary itself, obfuscation is pretty difficult. People have tried stripping the symbols out of Go binaries before, but it usually leads to instability and unpredictable behavior, since symbols are required for certain reflection operations.
While you can't necessarily obfuscate the libraries you're statically linking against, you can certainly obfuscate your /own/ code by changing variable, type, and function names prior to compilation to names that are meaningless. If you want to go one step further, you can try obtaining the source code for the libraries you're using (the source code for the standard libraries is available and is included in most Go installations), and applying this obfuscation to the library source code as well.
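A trivial, made-up before/after to show the idea; nothing here is a real API, it only demonstrates that behavior survives a mechanical rename while the intent becomes much harder to read:

    package main

    import "fmt"

    // Before renaming, this might have been:
    //   func isValidSerial(serial string) bool { return len(serial) == 16 }
    // After a mechanical rename pass, behavior is identical but intent is hidden.
    func a0(b0 string) bool { return len(b0) == 16 }

    func main() {
        fmt.Println(a0("0123456789ABCDEF")) // prints: true
    }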
As for post-compilation binary modification, as I mentioned before, it's probably best to stay away from it.
To add to joshlf13's answer: while stripping Go binaries is not recommended, there is a flag you can pass to the linker to omit the debugging symbols in the first place:
Pass the '-s' flag to the linker to omit the debug information (for example, go build -ldflags "-s" prog.go).
(Debugging Go Code with GDB)
This should at least be a better way, since I haven't seen any warnings for this like the ones about stripping symbols post-compilation.
Another option, with Go 1.16+ (Feb. 2021):
burrowers/garble
Produce a binary that works as well as a regular build, but that has as little information about the original source code as possible.
The tool is designed to be:
Coupled with cmd/go, to support modules and build caching
Deterministic and reproducible, given the same initial source code
Reversible given the original source, to de-obfuscate panic stack traces
That might not be obfuscated enough for your needs, but it is a good start.
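Usage is meant to be a drop-in replacement for the go command. The commands below are from memory of the project's README, so double-check them against burrowers/garble before relying on them:

    go install mvdan.cc/garble@latest    # module path per the project's README
    garble build ./...                   # like `go build`, but obfuscated
    garble -literals -tiny build ./...   # also obfuscate literals and drop extra info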
After some time using make to build C++ programs I still don't have a very good understanding of Makefiles. I am thinking about asking for a "good" example and using it from now on. I have been searching, but the ones I found are too complicated for me to understand. Please give me a template, with comments explaining how it works.
Thanks.
Makefiles have the tendency to get really hairy really fast, particularly when working with multiple directories. Many of the Makefiles I came across in my professional life were little more than glorified shell scripts, with the dependency part mostly nonexistent. This kind of problem was noted in the seminal paper Recursive Make Considered Harmful.
There, and in a follow-up article, Implementing non-recursive make, you can find a reasonable template.
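Since you asked for a commented template, here is a minimal single-directory sketch in that spirit. The file names, flags, and target name are assumptions for illustration, and remember that recipe lines must start with a literal tab:

    # Minimal C++ Makefile sketch; SRCS and TARGET are placeholders for your project.
    CXX      := g++
    # -MMD -MP make the compiler emit .d files listing each object's header dependencies.
    CXXFLAGS := -Wall -Wextra -O2 -MMD -MP
    SRCS     := $(wildcard *.cpp)
    OBJS     := $(SRCS:.cpp=.o)
    DEPS     := $(OBJS:.o=.d)
    TARGET   := myapp

    all: $(TARGET)

    # Link step: $@ is the target, $^ is all prerequisites (the object files).
    $(TARGET): $(OBJS)
    	$(CXX) $(CXXFLAGS) -o $@ $^

    # Pattern rule: build each .o from the .cpp of the same name ($< is the source).
    %.o: %.cpp
    	$(CXX) $(CXXFLAGS) -c $< -o $@

    clean:
    	rm -f $(OBJS) $(DEPS) $(TARGET)

    # Pull in the generated dependency files so header edits trigger rebuilds.
    -include $(DEPS)

    .PHONY: all clean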
Also here and here on SO you can find people searching for the elusive Makefile template.
Typically, the good Makefiles I have seen were the result of an expert who worked for several months and created an infrastructure that transformed the Makefile syntax into something almost completely different. The developers just needed to assign a few special variables, include the magic code, and build.
The next step in this evolution is a more modern tool such as CMake. CMake will generate well-formed Makefiles for you. If you have control over it, please consider such a tool.
IMHO you will find it makes much more sense and makes you much more productive; it gives you the added value of cross-platform builds and supports the entire build process (including configuration, packaging, continuous integration, etc.).
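For comparison, a CMake equivalent of the template above is short (names are placeholders again):

    # CMakeLists.txt
    cmake_minimum_required(VERSION 3.16)
    project(myapp CXX)

    add_executable(myapp main.cpp util.cpp)
    target_compile_features(myapp PRIVATE cxx_std_17)

    # Generate the build system out of tree, then build:
    #   cmake -S . -B build && cmake --build build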