How to refactor correctly and effectively - xcode

How to refactor effectively
I tried cleaning up the code to make it more readable, but it didn't work out the way I intended.

Make sure that the code you are refactoring is under test. Run the tests after each refactoring step.
Commit your refactored code to version control regularly - as soon as you find it hard to remember all your recent refactorings. The code repository has a better memory than you do!
Use a good IDE (e.g. Eclipse, IntelliJ, VS Code) and, as far as possible, use its wizards to do the refactoring for you, e.g. renaming variables, making a method static, etc. It won't make mistakes!
If you have to make a more complicated change that the IDE does not support (e.g. strength reduction of a loop), first think carefully about whether the test coverage is adequate - tests that cover the original code might not cover your refactored code. If the coverage is lacking, first add and commit the extra tests. (A small sketch of this test-protected rhythm follows.)
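
A minimal sketch of that rhythm in Ruby with Minitest (the Invoice class and its methods are invented for illustration): a test pins the behaviour down, one small extract-method step is made, and the test is run again before committing.

require 'minitest/autorun'

# Hypothetical class mid-refactoring: the subtotal calculation used to be
# inlined in total_with_tax; extracting it is one small, test-protected step.
class Invoice
  def initialize(prices)
    @prices = prices
  end

  def total_with_tax
    subtotal * 1.2
  end

  private

  # Extracted from total_with_tax in this refactoring step.
  def subtotal
    @prices.sum
  end
end

class InvoiceTest < Minitest::Test
  def test_total_with_tax
    assert_in_delta 36.0, Invoice.new([10, 20]).total_with_tax
  end
end

# Tests green? Commit: git add -A && git commit -m "extract Invoice#subtotal"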


Why might it be necessary to force rebuild a program?

I am following the book "Beginning STM32" by Warren Gay (excellent so far, btw), which goes over how to get started with the Blue Pill.
A part that has confused me: while putting our first program on the Blue Pill, the book advises force-rebuilding the program before flashing it to the device. I use:
make clobber
make
make flash
My question is: why is this necessary? Why not just flash the program, since it is already made? My guess is that it is just to learn how to build a program from scratch... but I also wonder whether rebuilding before flashing to the device is best practice. The book does not say why.
You'd have to ask the author, but I would suggest it is "just in case", no more - perhaps a lack of trust that the makefile specifies all possible dependencies. If the makefile were hand-built, without automatically generated dependencies, that is entirely possible. Also, it is easier simply to advise a rebuild than to explain all the situations where it might or might not be necessary, and such a list would not be exhaustive anyway.
From the author's point of view, it eliminates a number of possible build-consistency errors that are beyond his control, so it ensures you don't end up thinking the book is wrong when the real cause is something you have done that the author has no control over.
Even with automatically generated dependencies, a project may have dependencies that the makefile or dependency generator does not catch - resource files used for code generation by custom tools, for example.
For large projects developed over a long time, some seldom-modified modules may well have been compiled with an older version of the toolchain; a clean build ensures everything is compiled and linked with the current tools.
make decides what to rebuild based on file timestamps; if you have build variants controlled by command-line macros, make cannot tell which objects depend on such a macro. So when building a different variant (switching from a "debug" to a "release" build, for example), it is a good idea to rebuild everything to ensure each module is consistent and compatible. The sketch below illustrates both points.
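
A sketch of what automatically generated dependencies look like in a typical GCC makefile, and why variant macros still defeat them (the file names and the DEBUG macro are illustrative, not from the book):

# Illustrative fragment, not the book's makefile.
DEBUG  ?= 0
CFLAGS += -MMD -MP -DDEBUG=$(DEBUG)   # -MMD writes a .d dependency file per object

SRCS := main.c timer.c
OBJS := $(SRCS:.c=.o)

app.elf: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^

-include $(OBJS:.o=.d)   # header edits now trigger rebuilds of the right objects

# But make only compares timestamps: nothing records the value of DEBUG,
# so 'make DEBUG=1' after a DEBUG=0 build happily links stale objects.
# Hence the full rebuild (make clobber && make) when switching variants.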
I would suggest that during the build/debug cycle you use incremental builds for development speed, as intended, and perform a full rebuild for a release or when changing some build configuration (such as enabling/disabling floating-point hardware, or switching to a different processor variant).
If during debugging you get results that seem to make no sense, such as breakpoints and code stepping not aligning with the source code, or a crash or behaviour that seems unrelated to some small change you made (perhaps that code has not even been executed), it may be down to a build inconsistency, which can happen for a variety of reasons in complex builds. In such cases it is wise to at least eliminate build inconsistency as a cause by performing a full rebuild.
Certainly if you are releasing code to a third party, such as a customer, or for production of some product, you would want to perform a clean build just to ensure build consistency. You don't want users reporting bugs you cannot reproduce because the build they were given is not reproducible.
Rebuilding the complete software is good practice because it regenerates all dependencies and symbol files using the paths on your local machine.
If you need to debug the application with a debugger, you will need the symbol file and the paths to where your source code lives. If you flash the application without rebuilding, you may not be able to debug certain code paths, because you won't know where the application was compiled and may be missing the symbol information.

Make a Ruby file unreadable to a user

Can I make a Ruby file (e.g. script.rb) unreadable to a user?
The file is on an Ubuntu (offline) machine. The user will use a local Sinatra app that will use some ruby files. I don't want the user to see the code in some of those files.
How can I do it?
EDIT:
Can I set up the project in a way that the user will be able to start the app but won't have access to specific files in it?
Thanks
Does that correspond to what you are searching for?
chmod 711 yourfile.rb
As I said in my comment, it is almost impossible to hide the content of your Ruby source file; many people try this in many different ways, but it is almost always trivial to reverse-engineer. There are some "suggestions" for making your code hidden, but they never really work. Still, here are a few:
Obfuscation - the process of making your code executable but unreadable. A tool like ProGuard for Java (there are equivalents for most major languages) will try to make your code a mess, and as unreadable as possible, while still maintaining execution speed. Normally this consists of renaming variables, using strange characters, and generally hiding, moving or wrapping functions in complicated structures.
Package the interpreter - you can use a tool like ocra to package the script inside an executable together with the interpreter and standard library, but anyone with even a little reverse-engineering experience will be able to tear out the source code given a small amount of time.
Write a custom interpreter - now we are getting somewhere with making it harder. Writing a custom interpreter will allow you to compile your script to a "bytecode" that can then be executed. This is, of course, a very time-consuming, expensive and incompatible solution when it comes to working with other code bases.
Write most of your code in C and call out to it via extensions - again, this mostly moves the problem, but it is still there. It will take more time, but anyone can pull apart the machine code of the C library you load, and Bob's your uncle, they have your logic back.
Many more alternatives - this isn't a comprehensive list; I am probably missing a few ideas or suggestions.
As far as it goes, making code unreadable is hard; a better solution might be to provide a licence agreement with your code. That way, if someone reads or modifies the source file, you can take them to court for a legal settlement.
Extract your code and its functionality into an external API, and then provide it as a service. This way you don't have to expose your source code to your 'users' at all; a minimal sketch of the idea follows.
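
A minimal sketch of that idea with Sinatra (which the question already uses). Here secret_calculation and the /calculate route are hypothetical stand-ins, and of course the approach only helps if the service runs somewhere the user cannot read the files:

require 'sinatra'
require 'json'

# Hypothetical proprietary logic the user must never see.
def secret_calculation(value)
  value.to_i * 42
end

# The client only ever sees this HTTP surface, never the source behind it.
post '/calculate' do
  payload = JSON.parse(request.body.read)
  content_type :json
  { result: secret_calculation(payload['value']) }.to_json
end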

Separate visual studio solution for unit testing?

I have a Visual Studio solution that contains around 18 projects. I want to write unit tests for those projects (by creating a test project containing unit tests for each source project).
Should I use a separate solution that contains all the test projects? Or should I use the partitioned-solution approach of Visual Studio 2008 and create a sub-solution for all the test projects?
Putting unit tests in a separate solution would seem to me to create problems. Inherently, unit tests must be able to reference the types (classes) they are testing; if they live in a separate solution, that implies referencing compiled binaries rather than projects, which would be very awkward.
So, to investigate the reasons for the question, and hopefully provide some help, these are the reasons I would put my unit tests in the same solution:
Project referencing. I like to name unit test projects <Name>.Tests, where <Name> is the name of the production assembly holding the types under test. That way the two appear next to each other (in alphabetical order) in VS; see the layout sketch after this list.
With TDD I need rapid switching between production code and unit tests. It is about writing the code, not so much about testing it after the fact.
I can select the solution node at the top of the solution pane and say 'run all tests' (I use ReSharper).
The build farm (TeamCity) always runs all tests on all commits. Much easier with them all in one solution.
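
To illustrate the naming convention from the first point (the project names here are invented), the solution pane then reads:

Acme.Billing            <- production assembly
Acme.Billing.Tests      <- its unit tests, immediately below it
Acme.Invoicing
Acme.Invoicing.Tests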
As I write this, I wonder why you would put them in another solution at all. Unit tests have no meaning outside the solution containing the production code.
Perhaps you can explain why you're asking the question. Can you see a problem with them being in the same solution? 18 projects does not, to me, seem like a lot. The solutions I work on have many more ... but if you use ReSharper (and why wouldn't you?) you will need sufficient RAM in your box.
Hope this is of some help.

Testing file copy, move, delete operations in Ruby

I am developing a backup library in Ruby. And, as you may expect, there are many files copied, moved and deleted during the backup. In my test I want to make sure that the proper files and folders are copied from source to destination. What are the best practices of testing it? Should I deal with physical files during the tests? Or is it better to mock it?
It is better to avoid using the real filesystem for testing (it results in slow, brittle tests with messy setup/cleanup). Better to stub it out, with the fakefs gem for example; a sketch follows.
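
A minimal sketch of that approach with fakefs and Minitest; backup_copy is a hypothetical stand-in for the library's real copy routine:

require 'fakefs/safe'
require 'fileutils'
require 'minitest/autorun'

# Hypothetical stand-in for the backup library's copy operation.
def backup_copy(src, dest)
  FileUtils.cp(src, dest)
end

class BackupCopyTest < Minitest::Test
  def test_file_is_copied_to_destination
    FakeFS do   # everything inside this block hits an in-memory filesystem
      FileUtils.mkdir_p('/source')
      FileUtils.mkdir_p('/dest')
      File.open('/source/a.txt', 'w') { |f| f.write('payload') }
      backup_copy('/source/a.txt', '/dest/a.txt')
      assert File.exist?('/dest/a.txt')
      assert_equal 'payload', File.read('/dest/a.txt')
    end
  end
end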
Unit tests need to run fast, so that they can be run very often, after each change. So touching the file system is not an option here.
Then integration tests (or whatever they can be called) will ensure the physical files are actually copied. These tests can be slower, as they are run less often.

Reading from web.config while running Pex explorations

I've just started using Pex to generate parameterized unit-tests for my project. However, when I let Pex run its explorations, my code crashes because it cannot read from the web.config (ConfigurationSettings.AppSettings has zero elements to be more precise). The working-directory during the explorations is: "C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE". I assume this is the root-cause.
I know that the supposedly proper way to handle this is to create mock-objects corresponding to the values I need. However, this would force me to create tons of mock-code and wouldn't provide any tangible value IMHO, because I have no problem bundling web.config with the test-project.
How do I enable reading from web.config (or app.config) while the Pex explorations executes?
You've answered your own question I'm afraid - you wouldn't directly access your database from your code, so why do it with your config files? Just put a thin wrapper around your config file settings and stub it out in your tests. You don't have to do it all in one go, start with the piece of code under test and move the direct references behind your wrapper bit by bit. The tangible benefit of doing this is that it makes testing easy.
Also, with Pex, if your code is getting fully torn down between runs (whether it is depends on your code and the tests), you'll be hitting the file system each time, which will have a serious impact on performance.
The Pex developers don't (often) read Stack Overflow. You'd better ask Pex-related questions on the forums at http://social.msdn.microsoft.com/Forums/en/pex/threads
