How to keep your own debug lines without checking them in?

When working on some code, I add extra debug logging of some kind to make it easier for me to trace the state and values that I care about for this particular fix.
But if I checked this into the source code repository, my colleagues would get angry at me for polluting the log output and polluting the code.
So how do I locally keep these lines of code that are important to me, without checking them in?
Clarification:
Many answers relate to the log output and point out that you can filter it out with log levels. And I agree with that.
But I also mentioned the problem of polluting the actual code. If someone puts a log statement between every other line of code to print the value of all variables all the time, it really makes the code hard to read. So I would really like to avoid that as well, basically by not checking in the logging code at all. So the question is: how do you keep your own special-purpose log lines so you can use them in your debug builds, without cluttering up the checked-in code?

If the only objective of the debugging code you are having problems with is to trace the values of some variables, I think that what you really need is a debugger. With a debugger you can watch the state of any variable at any moment.
If you cannot use a debugger, then you can add some code to print the values to some debug output. But this should only be a few lines whose objective is to make the fix you are working on easier. Once the fix is committed to trunk, you shouldn't need those debug lines any more, so you must delete them. Don't delete all the debug code, though; good debug code is very useful. Delete only your "personal" tracing code.
If the fix is so long that you want to save your progress by committing to the repository, then what you need is a branch. In the branch you can add as much debugging code as you want, but you should still remove it when merging into trunk.

"But if I checked this into the source code repository, my colleagues would get angry at me for polluting the log output and polluting the code."
I'm hoping that your Log framework has a concept of log levels, so that your debugging could easily be turned off. Personally I can't see why people would get angry at more debug logging - because they can just turn it off!

Why not wrap them in preprocessor directives (assuming the construct exists in the language of your choice)?
#if DEBUG
logger.debug("stuff I care about");
#endif
Also, you can use a log level like trace, or debug, which should not be turned on in production.
if (logger.isTraceEnabled()) {
    logger.trace("My expensive logging operation");
}
This way, if something in that area does crop up one day, you can turn logging at that level back on and actually get some (hopefully) helpful feedback.
Note that both of these solutions would still allow the logging statements to be checked in, but I don't see a good reason not to have them checked in. I am providing solutions to keep them out of production logs.

If this was really an ongoing problem, I think I'd assume that the central repository is the master version, and I'd end up using patch files to contain the differences between the official version (the last one I worked on) and my version with the debugging code. Then, when I needed to reinstate my debugging, I'd check out the official version, apply my patch (with the patch command), fix the problem, and before checking in, remove the patch with patch -R (a reversed patch).
However, there should be no need for this. You should be able to agree on a methodology that preserves the information in the official code line, with mechanisms to control the amount of debugging that is produced. And it should be possible regardless of whether your language has conditional compilation in the sense that C or C++ does, with the C pre-processor.

I know I'm going to get negative votes for this...
But if I were you, I'd just build my own tool.
It'll take you a weekend, yes, but you'll keep your coding style, and your repository clean, and everyone will be happy.
Not sure what source control you use. With mine, you can easily get a list of the things that are "pending to be checked in", and you can trigger a commit, all through an API.
If I had that same need, I'd make a program to commit instead of using the built-in command in the source control GUI. The program would go through the list of pending changes, take all the files you added or changed, make a copy of them, remove all log lines, commit, and then replace them with your versions again.
Depending on what your log lines look like, you may have to add a special comment at the end of them for your program to recognize them.
Again, it shouldn't take too much work, and it's not much of a pain to use later.
I don't expect you'll find something already built that does this for you (and for your source control); it's pretty specific, I think.
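As a rough illustration, here is a minimal C# sketch of the stripping step, assuming your personal log lines are tagged with an end-of-line marker like the //##LOG comment suggested in another answer below (the marker, file handling, and commit step are all hypothetical; adapt them to your source control's API):
using System.IO;
using System.Linq;

class LogStripper
{
    // Hypothetical marker appended to every personal log line.
    const string Marker = "//##LOG";

    static void Main(string[] args)
    {
        foreach (var path in args) // the files pending check-in
        {
            // Keep a copy with your debug lines so you can restore it afterwards.
            File.Copy(path, path + ".withlogs", overwrite: true);

            var stripped = File.ReadAllLines(path)
                               .Where(line => !line.TrimEnd().EndsWith(Marker));
            File.WriteAllLines(path, stripped);

            // ...call your source control's commit API here, then copy
            // the ".withlogs" version back over the committed file.
        }
    }
}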

Similar to the #if DEBUG ... #endif approach, but that will still mean that anyone running the 'Debug' configuration will hit those lines.
If you really want them skipped, then use a log level that no one else uses, or...
Create a different build configuration called MYDEBUGCONFIG (with a conditional compilation symbol of the same name) and then put your debug code between blocks like this:
#if MYDEBUGCONFIG
...your debugging code
#endif

What source control system are you using? Git allows you to keep local branches. If worse comes to worst, you could just create your own 'Andreas' branch in the repository, though branch management could become pretty painful.

If you really are doing something like:
"puts a log statement between every other line of code to print the value of all variables all the time, it really makes the code hard to read"
that's the problem. Consider using a test framework, instead, and write the debug code there.
On the other hand, if you are writing just a few debug lines, then you can manage to remove them by hand (e.g. removing the relevant lines in the editor before the commit and undoing the change after it's done) - but of course it has to be very infrequent!

IMHO, you should avoid the #if solution. That is the C/C++ way of doing conditional debugging. Instead, attribute all of your logging/debugging functions with the ConditionalAttribute. The attribute's constructor takes a string, and the method will only be called if a pre-processor symbol of the same name as that string is defined. This has exactly the same runtime implications as the #if/#endif solution, but it looks a heck of a lot better in code.
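A minimal C# sketch of this approach (the DebugLog method name is invented for illustration):
using System;
using System.Diagnostics;

static class DebugLogger
{
    // Call sites are removed by the compiler unless the
    // DEBUG conditional compilation symbol is defined.
    [Conditional("DEBUG")]
    public static void DebugLog(string message)
    {
        Console.WriteLine(message);
    }
}

class Example
{
    static void Main()
    {
        DebugLogger.DebugLog("stuff I care about"); // compiled out of release builds
    }
}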

This next suggestion is madness, do not do it, but you could...
Surround your personal logging code with comments such as
// ##LOG-START##
logger.print("Ooh, a log statement");
// ##LOG-END##
And before you commit your code, run a shell script that strips out your logs.
I really wouldn't recommend this as it's a rubbish idea, but that never stops anyone.
Alternatively, you could add a comment at the end of every log line and have a script remove those lines...
logger.print("My inane log message"); //##LOG
Personally I think that using a proper logging framework with a debug log level etc. should be good enough. And remove any superfluous logs before you submit your code.

Treat it as first-class code: keep it with the code, use a proper logging API, and provide a build option to compile it out or completely disable it.

Related

Does Netlogo have a way for Code Debugging?

Does anybody know if it is possible to trace code in NetLogo statement by statement, like using F7 or F8 in C/C++,...
Generic longer comment:
/*
So as the comments above say, there is not a debugger but there are work-arounds.
Replayable random seeds and many print statements help. I define a v-print routine that prints only if a "verbose" switch is TRUE, so I can put in lots of v-print statements, sometimes one per line of code, but shut them all off until needed with one interface switch.
For many Netlogo users, judging by the questions, this is the first programming language they have ever used. So there is some generic advice like "take small steps and do sanity checks often, expecting that something will go wrong."
But maybe the best and hardest advice is to learn enough Git and GitHub to set up a repository and "SAVE COMMENTED VERSIONS OFTEN", because otherwise it is a losing proposition to try to debug tangled code with multiple errors that you have been making random changes to in the hope that they help. Sometimes you need to fall back to the prior last-good version and work forward from there again, step by step, looking for where it breaks. Working forward is easy. Debugging backwards is hard.
Debugging code with one error is much easier than debugging code with multiple errors.
Decades ago I used to repair analog televisions. I loved rich people because they called as soon as one thing was wrong. Poor people waited until the set was full of errors then called. I've seen similar behavior in programmers continuing to code hoping that prior errors will somehow magically go away. Slow is fast.
GitHub has been a life-saver a few times when I accidentally deleted a section of code in the editor and didn't notice I'd done it. It's easy to have lines of code highlighted and then type a word while not looking at the screen. Then you go crazy with code that worked yesterday and doesn't work today and you "didn't change anything".
Using GitHub to restore a prior version, or simply doing a differences ( "diff" ) between the current version and prior version shows what changed.
*/

Beckhoff PLC using ENUM's in CASE OF question

When I use an enum in a switch statement in C#, I am used to adding a debug break statement to the default case to catch items that are added to the enum but not covered by the switch. During debugging, the code will then break if it hits the default case.
Now I am programming a Beckhoff PLC and want to do the same in a CASE .. OF ELSE ... END_CASE in structured text (ST). Is this possible and/or normal in PLC programming?
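For reference, the C# habit described above looks roughly like this minimal sketch (the enum and its values are invented for illustration):
using System.Diagnostics;

enum Mode { Idle, Running, Stopped }

static class Machine
{
    public static void Handle(Mode mode)
    {
        switch (mode)
        {
            case Mode.Idle: /* ... */ break;
            case Mode.Running: /* ... */ break;
            case Mode.Stopped: /* ... */ break;
            default:
                // In debug builds this breaks into the debugger if a new
                // enum value is added but not handled above.
                Debug.Fail("Unhandled enum value: " + mode);
                break;
        }
    }
}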
I don’t think you can. Also it wouldn’t be desirable to stop a PLC program and prevent it from executing machine relevant code.
Instead you could use the ADSLOGSTR function to log to the event logger. Or show a message box. This will work in both TC2 and TC3.
You can set breakpoints when you are in online mode, but as pboedker pointed out, as soon as the breakpoint is reached (unless you have a special configuration, but that is another subject) your EtherCAT master will time out, your safety module will produce a com error and your drives will need a reset as well.
If you don't have real hardware and an EtherCAT master attached to your project, you can use breakpoints without any worries.
I personally take another approach.
I always build a separate Debug-Visualization in the PLC together with a special Debug-FunctionBlock which helps me track bugs in the project.
In your case, for example, I would simply call a special method of the Debug-FunctionBlock with an error code and a string when the program flow reaches the default case.
The error code and the string would then be visualized in the Debug-Visualization.
Even if it's a little more effort than simply calling adslogstr I would rather implement a separate Debug-FunctionBlock for 3 reasons:
You need more logic than simply calling adslogstr anyway because if by any chance adslogstr is called cyclically, you end up spamming the event logger.
Reuse in other projects
You can expand the Debug-Visualization to a Test-Suite if needed, which can come in handy
You can find more info about the beckhoff visualization here:
https://infosys.beckhoff.com/english.php?content=../content/1033/tc3_plc_intro/3523377803.html&id=
Breakpoints are possible like Filippo said. You can prevent outputs from being reset during breakpoint by setting KeepOutputsOnBP (see this: https://stackoverflow.com/a/52158801/8140625).
You could also send an error/warning/note message to Visual Studio when that happens by using ADSLOGSTR (see this: https://stackoverflow.com/a/51700613/8140625). So add an ADSLOGSTR call to your CASE ELSE with an appropriate message and you will see it in the error list / TwinCAT console.
Edit: Somehow I missed pboedker's answer; he already covered ADSLOGSTR.
I like Filippo's solution. It could make it easy to change the behavior of the debug function in the future without touching the code too much.
I was thinking too much in terms of C# solutions :)
Thanks!

Can I mark some code as optional while debugging in Visual Studio 2012?

I'm not sure how to really put my question into words so let me try to explain it with an example:
Let's say my program runs into some weird behavior during a specific action. I have already found the code which causes this weird behavior; when I disable this sequence I don't run into the behavior. Unfortunately, I need this code because something else doesn't work without it.
So what I am going to do next is figure out why something goes differently when that code excerpt is active.
In order to better understand what's going on, I sometimes want to run the whole action including the 'bad code' and sometimes without it. Then I can compare the outcomes, for example what happens in the UI or what my function returns.
The first approach which comes to mind is to run my program with the code enabled, do whatever I want, then stop the program, comment out the code, recompile and run again. Um... that sounds dumb. Especially if I then need to turn the code back on to see the other behavior again, then off, and on, and off, and so on.
It's not an option for me to use breakpoints and influence the statement order, or to modify values so that I do or don't run into if statements, for loops, etc. Two examples:
I debug timing-critical behavior, and when I halt the program the timing changes significantly. Thus, the first breakpoint I can set must be at the end of the action.1
I expect a tooltip or other window to appear which is 'suppressed' when focus is given to VS. Thus, I cannot use any breakpoints at all, neither at the beginning nor at the end of the action.1
Is there any technique in Visual Studio 2012 which allows me to mark this code to be optional and I can decide whether or not I want to run this code sequence before I execute the action? I think of something like if(true|false) on a higher level.
I'm not looking for a solution where I need to re-run my program several times. In that case I could still use the simple approach of commenting out the code with #if false.
1 Note that I, of course, may set a breakpoint when I need to look into a specific variable at a certain position (if I haven't written the value to the output), but I will turn breakpoints off again to run the whole action in one go.
In the Visual Studio debugger you can set a breakpoint right in front of your "code in question". When the code stops at that point, you can elect to let it continue or you can right-click on any other line and select Set Next Statement.
It's kind of a weird option, but I've come to appreciate it.
The only option I can think of is to add something to your UI that only appears when debugging, giving you the option to include/exclude the operations in question.
While you're at it, you might want to enable resetting the application to a "known state" from the UI as well.
I think of something like if(true|false) on a higher level.
Why "on a higher level"? Why not use exactly this?
You want a piece of code sometimes executed, sometimes not, and the switch should be changed at run time, not at compile time - this obviously leads to
if(condition)
{
// code at stake
}
The catch here is what kind of condition you will use - maybe a variable you set to true in the release version of your code, and to false sometimes in your debug version. Maybe the value is taken from a configuration file, maybe from an environment variable, maybe calculated by some kind of logic in your program, whatever and whenever you like.
EDIT: you could also introduce a boolean variable in your code for condition, initialize it to true by default and change its value using the debugger whenever you like.
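A minimal C# sketch of that EDIT (the names are invented): pause the program and flip the flag from the debugger's Immediate or Watch window:
static class Debugging
{
    // While paused, evaluate e.g. "Debugging.RunOptionalCode = false"
    // in the Immediate window to skip the optional block.
    public static bool RunOptionalCode = true;
}

class SomeAction
{
    public void Execute()
    {
        if (Debugging.RunOptionalCode)
        {
            // the 'bad code' under investigation
        }
        // the rest of the action
    }
}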
Preprocessor directives might be what you're after. They're instructions to the compiler rather than code that is executed, identifiable by starting with a # character (and stylistically, by default they don't follow the indent pattern of your code, instead always residing firmly at the left-hand edge of the editor):
#define INCLUDE_DODGY_CODE
public void MyMethodWithDodgyBits() {
#if INCLUDE_DODGY_CODE
myDodgyMethod();
#endif
myOkMethod();
}
In this case, if #define INCLUDE_DODGY_CODE is present, the myDodgyMethod() call will be compiled into your program. Otherwise, the call will be skipped by the compiler and will simply not exist in your binary. (Note that in C# the #define must appear at the top of the file before any other code, or the symbol can be defined in the project's build settings instead.)
There are a couple of options for debugging as you ask.
Visual Studio has a number of options to directly navigate through code. You can use the Set Next Statement feature to move directly to a particular statement. You can also directly edit values through the Immediate Window, QuickWatch, and the data tips that hover over variables while debugging.
Visual Studio also has the ability to playback the execution history. Take a look at IntelliTrace to get started. It can be helpful when you have multiple areas of concern that are interacting and generating the error condition.
You can also wrap your sections of code within conditional blocks, and set the conditional variables as appropriate. That could be while you're debugging, or you could pass parameters in through a configuration file. Using conditional checks may be easier than manually stepping through code if there are a number of statements you wish to exclude.
It sometimes depends on the version of VS and the language, but you can happily edit the code (to comment it out, or wrap it in a big #if 0), then press Alt+F10 and the compiler will recompile, relink and continue execution as if you'd never fiddled with it.
While that works beautifully in VC++ (since VS6, IIRC), C# can have issues - I find (with VS2010) that I cannot edit and continue in this way with functions containing any lambda (mainly LINQ) statements, and 64-bit code never used to support this either. Still, it's worth experimenting with, as it's really useful sometimes.
I have worked on applications that have optional code used for debugging alone that should not appear in the production environment. This segment of optional code was easiest for us to control using a config file, since it didn't require a recompile to change.
Such a fix might not be the be-all and end-all for your end result, but it might help until a proper fix is found. If you have multiple optional sections that need to be tested in combination, this style of fix could require multiple keys in the config file, which could be a downside and a pain to keep track of.
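One hedged sketch of that config-file approach in C# (the key name is invented), using the standard System.Configuration API so the flag can be changed without recompiling:
using System.Configuration; // reference the System.Configuration assembly

static class OptionalDebug
{
    // App.config:
    //   <appSettings>
    //     <add key="RunOptionalCode" value="true" />
    //   </appSettings>
    public static bool Enabled =>
        bool.TryParse(ConfigurationManager.AppSettings["RunOptionalCode"], out var on) && on;
}

// Usage:
// if (OptionalDebug.Enabled) { /* optional section */ }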
Your question isn't exactly clear, which is possibly why there are so many answers which you think are invalid. You may want to consider rewording it if no one seems able to answer the question.
With the risk of giving another non-valid answer I'll add some input on how I've dealt with the issue in the past.
The easiest way is to place any optional code within
#if DEBUG
//Optional code here
#endif
That way, when you run in debug mode the code is implemented and when you run in release mode it's not. Switching between the two requires clicking one button.
I've also solved the same problem in a similar way with a simple flag:
bool runOptionalCode = false;
then
if (runOptionalCode)
{
//Place optional code here
}
Again, switching between modes requires changing one word, so is a simple task. You mention this in your question but discount it for reasons that are unclear. As I said, it requires very little effort to switch between the two.
If you need to switch between the two while the program is running, the best way is to use a UI element or a keystroke which modifies the flag mentioned in the example above. Depending on your application, though, this could be more effort than it's worth. In the past I've found that when I already have a key listener implemented as part of the project, having a couple of keystrokes decide whether to run my debug (optional) code works best. In an application without key listeners I'd rather stick with one of the previous methods.

Refactoring and non-refactoring changes as separate check-ins?

Do you intermingle refactoring changes with feature development and bug-fix changes, or do you keep them separate? The suggestion is that large-scale refactorings or reformattings of code, such as those performed with a tool like ReSharper, should be kept separate from feature work or bug fixes, because it is difficult to diff between revisions and see the real changes to the code amongst the numerous refactoring changes. Is this a good idea?
When I remember, I like to check in after a refactoring in preparation for adding a feature. Generally it leaves the code in a better state but without a change in behaviour. If I decide to back out the feature, I can always keep the better structured code.
Keep it simple.
Every check in should be a distinct, single, incremental change to the codebase.
This makes it much easier to track changes and understand what happened to the code, especially when you discover that an obscure bug appeared somewhere on the 11th of last month. Trying to find a one-line change amidst a 300-file refactoring checkin really, really sucks.
Typically, I check in when I have done some unit of work, and the code is back to compiling/unit tests are green. That may include refactorings. I would say that the best practice would be to try to separate them out. I find that to be difficult to do with my workflow.
I agree with the earlier responses. When in doubt, split your changes up into multiple commits. If you don't want to clutter the change history with lots of little changes (and have your revisions appear as one atomic change), perform these changes in a side branch where you can split them up. It's a lot easier to read the diffs later (and be reassured that nothing was inadvertently broken) if each change is clear and understandable.
Don't change functionality at the same time you are fixing the formatting. If you change the meaning of a conditional so that a whole bunch of code can be outdented, change the logic in one change and perform the outdent in a subsequent change. And be explicit with your commit messages.
If the source control system allows it...
(This does not work in my current job because the source control system doesn't like a single user checking out a single file to more than one location.)
I have two working folders.
Both folders are checked out from the same branch.
I use one folder to implement the new feature development/bug-fixing changes.
In the other folder I do the refactoring.
After each refactoring I check in from the refactoring folder.
Then I update the feature-development folder, which merges in my refactorings.
Hence each refactoring is in its own check-in and other developers get the refactorings quickly, so there are fewer merge problems.

What helps you improve your ability to find a bug?

I want to know if there are methods for quickly finding bugs in a program.
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
How do programmers improve their ability to find bugs?
Logging, and unit tests. The more information you have about what happened, the easier it is to reproduce it. The more modular you can make your code, the easier it is to check that it really is misbehaving where you think it is, and then check that your fix solves the problem.
Divide and conquer. Whenever you are debugging, you should be thinking about cutting down the possible locations of the problem. Every time you run the app, you should be trying to eliminate a possible source and zero in on the actual location. This can be done with logging, with a debugger, assertions, etc.
Here's a prophylactic method for after you have found a bug: I find it really helpful to take a minute and think about the bug.
What was the bug, exactly, in essence?
Why did it occur?
Could you have found it earlier, or more easily?
Is there anything else you learned from the bug?
I find that taking a minute to think about these things makes it far less likely that you will produce the same bug in the future.
I will assume you mean logic bugs. The best way I have found to catch logic bugs is to implement some sort of testing scheme. Check out JUnit as the standard. Pretty much, you define a set of accepted outputs for your methods. Every time you build your system, it can check all of your test cases. If you have introduced new logic that breaks your tests, you will know about it instantly and know exactly what you have to fix.
Test-driven design is a pretty big movement in programming right now. You will be hard pressed to find a language that doesn't support some kind of testing. Even JavaScript has a multitude of test suites.
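As a hedged illustration of the same idea in C# terms (using NUnit; the class and method are invented):
using NUnit.Framework;

public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfArguments()
    {
        // If new logic breaks this expectation, the next test run reports it immediately.
        Assert.AreEqual(4, Calculator.Add(2, 2));
    }
}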
Experience makes you a better debugger. Pay close attention to the bugs that you AND others commonly make. Try to figure out if/how these bugs apply to ALL code that affects you, not the single instance of where the bug was seen.
Raymond Chen is famous for his powers of psychic debugging:
"Most of what looks like psychic debugging is really just knowing what people tend to get wrong."
That means that you don't necessarily have to be intimately familiar with the architecture / system. You just need enough knowledge to understand the types of bugs that apply and are easy to make.
I personally take the approach of thinking about where the bug may be in the code before actually opening the code and taking a look. When you first start with this approach it may not work very well, especially if you are unfamiliar with the code base. However, over time someone will be able to tell you the behavior they are experiencing and you'll have a good idea where the problem is located, or you may even know what to fix before even looking at the code.
I was on a project for several years that was maintained by a vendor. They were not very good debuggers and most of the time it was up to us to point them to the area of code that had the problem. What made the problem worse was that we didn't have a nice way to view the source code, so a lot of our "debugging" was done by feel.
Error checking and reporting. The #1 newbie debugging mistake is to turn off error reporting, to avoid checking whether what's going on makes sense, etc. In general, people feel that if they can't see anything going wrong then nothing is going wrong, which of course could not be further from the truth.
Instead, your code should be chock full of error checks that make lots of noise, with detailed reporting, somewhere you will see it. (This doesn't mean inside a production web page.) Then, instead of having to trace an error all over the place because it got passed through sixteen layers of execution before it finally reached something that broke, your errors start happening close to the actual issue.
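A minimal sketch of that fail-fast style in C# (the types are invented): validate assumptions at the boundary and fail loudly there, instead of letting a bad value travel down sixteen layers first:
using System;

class Order
{
    public int Quantity { get; set; }
}

class OrderService
{
    public void Ship(Order order)
    {
        // Noisy, detailed failures close to the actual mistake...
        if (order == null) throw new ArgumentNullException(nameof(order));
        if (order.Quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(order.Quantity),
                "Quantity must be positive, got " + order.Quantity);

        // ...rather than a mysterious break sixteen layers later.
    }
}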
"It seems that the more you master the architecture of your software, the more quickly you can locate the bugs."
After understanding the architecture, one's ability to find bugs in the application increases with their ability to identify and write extensive tests.
Know your tools.
Make sure that you know how to use conditional breakpoints and watches in your debugger.
Use static analysis tools as well - they can point out the more obvious issues.
Sleep and rest.
Use programming methods that produce fewer bugs in the first place.
If to implement a single stand-alone functional requirement it takes N separate point-edits to source code, the number of bugs put into the code is roughly proportional to N, so find programming methods that minimize N. Ways to do this: DRY (don't repeat yourself), code generation, and DSL (domain-specific-language).
Where bugs are likely, have unit tests.
Obviously. IMHO, the best unit tests are Monte Carlo.
Make intermediate results visible.
For example, compilers have intermediate representations, in the form of 4-tuples. If there is a bug, the intermediate code can be examined. That tells if the bug is in the first or second half of the compiler.
P.S. Most programmers are not aware that they have a choice of how much data structure to use. The less data structure you use, the less are the chances for bugs (and performance issues) caused by it.
I find tracepoints to be an invaluable debugging tool. They are a bit like logging, except you create them during a debugging session to solve a particular issue, like breakpoints.
Printing the stacktrace in a tracepoint can be especially useful. For example, you can print the hash code and stacktrace in the constructor of an object, and then later on when the object is used again you can search for its hashcode to see which client code created it. Same for seeing who disposed it or called a certain method etc.
They are also great for debugging issues related to window focus changes etc., where the debugger would interfere if you dropped into break mode.
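As a hedged illustration (the exact syntax varies by Visual Studio version), a tracepoint's "print a message" action can mix literal text, expressions in curly braces, and built-in keywords such as $FUNCTION and $CALLSTACK, so a message along these lines records the hash code and creator of an object:
Created {this.GetHashCode()} in $FUNCTION at $CALLSTACK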
Static code tools like FindBugs
Assertions, assertions, and assertions.
Some areas of our code have 4 or 5 assertions for each line of real code. When we get a bug report, the first thing that happens is that the customer data is processed in our debug build; 99 times out of a hundred an assert will fire near the cause of the bug.
Additionally, our debug build performs redundant calculations to ensure that an optimized algorithm is returning the correct result, and debug functions are used to examine the sanity of data structures.
The hardest thing new developers have to contend with is getting their code to survive the assertions of the code they are calling.
Additionally, we do not allow any code to be put back to top level that causes any integration or unit test to fail.
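A hedged C# sketch of that assertion-heavy style (the class is invented), using Debug.Assert, which is compiled out of release builds; note the redundant calculation used to cross-check the main one:
using System.Diagnostics;

class Inventory
{
    private readonly int[] counts = new int[10];

    public int Total()
    {
        Debug.Assert(counts != null, "counts must be allocated");

        int total = 0;
        foreach (var c in counts)
        {
            Debug.Assert(c >= 0, "counts must never go negative");
            total += c;
        }

        // Redundant check, active in debug builds only: verify the result
        // against an obviously-correct reference implementation.
        Debug.Assert(total == ReferenceTotal(), "total disagrees with reference");
        return total;
    }

    private int ReferenceTotal()
    {
        int sum = 0;
        for (int i = 0; i < counts.Length; i++) sum += counts[i];
        return sum;
    }
}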
Stepping through the code, examining flow/state where unexpected behavior is occurring. (Then develop a test for it, of course).
Writing Debug.Write(message) in your code and using DebugView is another option; then run your application to find out what is going on.
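A minimal sketch: in a debug build, Debug.WriteLine output goes to the default trace listener, which Visual Studio's Output window shows while debugging and which DebugView can capture (via OutputDebugString) when the program runs outside the debugger:
using System.Diagnostics;

class Worker
{
    public void Process(int id)
    {
        // Visible in the Output window under the debugger,
        // or in DebugView when running standalone (debug builds only).
        Debug.WriteLine("Process started for id=" + id);
    }
}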
"Architecture" in software means something like:
Several components
The components interact across clearly-defined interfaces
Each component has a well-defined responsibility
The responsibility of one component is unlike the responsibilities of other components
So, as you said, the better the architecture the easier it is to find bugs.
First: knowing the bug, you can decide which functionality is broken, and therefore know which component implements that functionality. For example, if the bug is that something isn't being logged properly, then the bug should be in one of three places:
In the component that's responsible for logging (your logging library)
Or, above that in the application code which is using this library
Or, below that in the system code which this library is using
Second: examine the data transferred across the interfaces between components. To continue the previous example:
Set a debugger breakpoint on the application code which invokes the logger API, to verify whether the logger API is being used correctly (e.g. whether it's being invoked at all, whether parameters are as-expected, etc.).
Doing this tells you whether the bug is in the component above this interface, or in the component that's below this interface.
Repeat (perhaps using binary search if the call stack is very deep) until you've found which component is at fault.
When you come to the point that you think there must be a bug in the OS, check your assertions -- and put them into the code with "assert" statements.
Conversely, as you are writing the code, think of the range of valid inputs for your algorithms and put in assertions to make sure you have what you think you have. Same goes for output: Check that you produced what you think you produced.
E.g. if you expect a non-empty list:
l = getList(input)
assert l, "List was empty for input: %s" % str(input)
I'm part of the QA team at work, and knowing anything about the product and how it is developed helps a lot in finding bugs. Also, when I make new QA tools I pass them to our dev team to test; finding bugs in your own code is just plain hard!
Some people say programmers are tainted and cannot see the bugs in their own product; and we are not just talking about code here, we are beyond that: usability and functionality itself.
Meanwhile, unit testing seems to be a nice way to find bugs in your own code, but it's totally pointless if you're wrong even before writing the unit test. How are you going to find the bugs then? You don't! Let your co-workers find them; hire a QA guy.
Scientific debugging is what I have always used, and it helps greatly.
Basically, if you can replicate a bug, you can track down its origin. You then run some experiments, observe the results, and form hypotheses about why the bug happens.
Writing down all your hypotheses, attempts, expected results and observed results can help you track down bugs, particularly the nasty ones.
There are automated tools that can help you with that process, particularly git-bisect (and similar bisection tools on other revision systems) to quickly find which change introduced the bug, unit testing to reproduce a bug and prevent regressions in your code (can be used in combination with bisect), and delta debugging to find the culprit in your code (similar to git-bisect but whereas git-bisect works on the code history, delta debugging works on the code directly).
But whatever the tools you are using, the most important benefit is in the scientific methodology, as this is the formalization of what most experienced debuggers do.
