I have a data structure with references to something like pLimit.TopPoint, where TopPoint holds information such as an x and y location, so it is itself a structure.
Before accessing it, it certainly makes sense to verify that pLimit is not null.
Running a linter suggests I should also check *pLimit before accessing pLimit.TopPoint. Why is that? Is the linter wrong?
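For what it's worth, here is a small C++ sketch of the situation such a linter usually has in mind; the declarations below are guesses (the real definition of pLimit isn't shown), assuming pLimit is reached through one more level of indirection, so that the outer and inner pointers can each be null independently:
struct Point { double x; double y; };
struct Limit { Point TopPoint; };

void useLimit(Limit **pLimit) {
    // Each level of indirection can be null on its own, so a strict linter
    // wants both checks before the member access.
    if (pLimit != nullptr && *pLimit != nullptr) {
        double x = (*pLimit)->TopPoint.x;
        double y = (*pLimit)->TopPoint.y;
        // ... use x and y ...
    }
}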
I have been using C++11 for some time, but I have always avoided std::move because I was afraid that the user of a library, who does not have access to its code, would try to use a variable after it had been moved from.
So basically something like
void loadData(std::string&& path);
would not be enough to make the user understand that the argument will be moved from.
Is it expected that the use of && implies that the data will be moved? I know that comments can be used to explain the use case, but a lot of people don't pay attention to them.
Is it safe to assume that when you see a && the data will be moved from? When should I use std::move, and how can I make that explicit from the signature?
Is it expected that the use of && implies that the data will be moved?
Generally speaking, yes. A user cannot call loadData with an lvalue; they must provide a prvalue or an xvalue. So if you have a variable to pass, your code will generally look like loadData(std::move(variable)), which is a pretty good indicator of what you're doing on your side. Forwarding (std::forward) could also be employed, but you'd still see it at the call site.
Indeed, generally speaking it is extremely rude to move from a parameter which is not an rvalue reference.
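To make this concrete, here is a minimal sketch (the body of loadData and the calling code are made up for illustration; only the signature comes from the question) showing how the && parameter forces the caller to be explicit at the call site:
#include <string>
#include <utility>

void loadData(std::string&& path) {
    // Take ownership of the caller's string; the caller's variable is left
    // in a valid but unspecified state afterwards.
    std::string owned = std::move(path);
    // ... load from 'owned' ...
}

int main() {
    std::string path = "data/config.txt";
    // loadData(path);            // error: an lvalue does not bind to std::string&&
    loadData(std::move(path));    // OK: the move is visible right at the call site
    loadData("other.txt");        // OK: a temporary (prvalue), nothing of the caller's is reused
}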
I want to store brief snippets of code in the database (following a standard signature) and "inject" them at runtime. One way would be using eval(my_code). Is there some way to debug the injected code using breakpoints, etc.? (I'm using RubyMine.)
I'm aware I can just log to console, etc, but I'd prefer IDE-style debugging if possible.
Hm. Let's analyze your question. Firstly, it does not seem to have anything to do with databases: you simply store a code block in source form somewhere; it can be a file or a database. Secondly, what you want is not so much IDE-style "debugging" as TDD-style checking. (But don't concentrate on that question now.)
What you need are assertions about your code. That is, you need to describe what output your code should produce given some example inputs, and then run the code and see whether its behaviour matches those expectations. Furthermore, if you are not sure about the source of your snippets, run them in a sandbox (with $SAFE = 4). If your code fails the expectations, raise nice errors (TypeError, or even better, your own custom exception); you can then, e.g., rescue those exceptions and fall back to some default code snippets...
... but maybe I'm not actually answering the same question you are asking. If that's the case, then let me share a link to the sourcify gem, which lets you get at the source, so that you can insert a breakpoint by saying, e.g., require 'rdebug' in the middle of the code, or even convert the code to s-expressions. That's all I know.
I am new to Ruby, and I am currently working with an API which is unfamiliar to me. In order to use code completion, which helps me learn, I installed RadRails in Eclipse. However, I am having trouble with Content Assist: specifically, the Content Assist does not reveal the methods for objects in the API.
For example, one of my objects, ins, represents a loaded XBRL instance document. If I run ins.methods, the list contains all of the methods I want, including those in the API (such as functions that allow me to access items in the instance):
...
item
item_all
item_all_groupby_vocab
item_all_map
item_by_vocab
item_ctx_filter
...
etc.
However, if I just type ins. with Content Assist enabled, it only shows options like:
dclone
gem
gem_original_require
JSON
Pathname(path)
...
etc.
which appear to be system options. As a result, the Content Assist exposes exactly zero of the methods I actually want to use. If I know the methods ahead of time and start typing them, I can get Content Assist to give them to me, eventually, by pressing Ctrl+Space. However, that requires me to know what I want ahead of time; since I am using this to explore the API, that doesn't work for me.
Does anyone know how to get RadRails/Eclipse to show me the correct methods?
Regards,
Matt
This is a general problem inherent to dynamic languages and IDEs/editors. The IDE has to guess at the type of the variable that the code assist is being invoked upon, and from that generate the list of applicable methods.
IRB has type information at runtime, so it knows what methods apply. The IDE is trying to guess the type by analyzing your code statically (not running it).
Having said that, the IDE should often be able to guess correctly. Providing the larger context of the code snippet that completion is being invoked on would help us look into whether or not we could provide useful content assist for this object. You may want to file a ticket with the version number and the sample code here: http://aptana.com/r/apbugs
I'm looking into writing an add-in (or package, if necessary) for Visual Studio 2005 that needs watch-window-style functionality -- evaluation of expressions and examination of the resulting types. The automation facilities provide Debugger::GetExpression, which is useful enough, but the information provided is a bit crude.
From looking through the docs, it sounds like an IDebugExpressionContext2 would be more useful. With one of these it looks as if I can get more information out of an expression -- detailed information about the type and any members and so on, without having everything come through as strings.
I can't find any way of actually getting an IDebugExpressionContext2, though! IDebugProgramProvider2 sort of looks relevant, in that I could start with IDebugProgramProvider2::GetProviderProcessData and then slowly drill down until reaching something that can supply my expression context -- but I'll need to supply a port to this, and it's not clear how to retrieve the port corresponding to the current debug session. (Even if I tried every port, it's not obvious how to tell which port is the right one...)
I'm becoming suspicious that this simply isn't a supported use case, but with any luck I've simply missed something crashingly obvious.
Can anybody help?
By using IDebugExpressionContext you'll ultimately end up getting hold of an instance of IDebugProperty. This interface is implemented by the Expression Evaluator service, which is typically a language-specific service. It's designed to abstract away the language-specific details of evaluating an expression, and it understands much higher-level operations such as "Evaluate" and inspection.
I don't think you're going to get what you're looking for, though, because you can't get hold of any kind of type object this way. Nearly all of the inspection methods return their results in string form: for example, you won't get the type Int32 but instead the string "int". This makes type inspection next to impossible.
I don't believe what you're trying to do is a supported case. The type system being evaluated doesn't exist in the current process; it exists in the debuggee process and is fairly difficult to get access to.
There's a hack you could use to get more information about the type of a variable you've evaluated with the Debugger::GetExpression method.
You could evaluate "AppDomain.CurrentDomain.GetAssemblies()" to get all the assemblies loaded into the debuggee, and cache them in your add-in. You may also need to listen for new assemblies being loaded into the AppDomain.
Then, run the following:
Expression myExpression = Debugger.GetExpression(...);
Expression typeRefExpression = Debugger.GetExpression("typeof(" + myExpression.Type + ").FullName");
Once you have the type's FullName, you can search your assemblies cache for a matching System.Type, and once you have that, you can dig into it all you want using the standard Reflection API.
Note that this will only work in C#, because of its "typeof" keyword. You'll have to use a different keyword for VB.Net, for example.
For example, referencing something as System.Data.Datagrid as opposed to just Datagrid. Please provide examples and explanation. Thanks.
The benefit is that you don't need to add an import for everything you use, especially if it's the only thing you use from a particular namespace; it also prevents naming collisions.
The downside, of course, is that the code balloons out in size and gets harder to read the more you use specific qualifiers.
Personally I tend to use imports for most things unless I know for sure I will only be using something from a particular namespace once or twice, so it won't impact the readability of my code.
You're being very explicit about the type you're referencing, and that is a benefit. At the same time, though, you're giving up code clarity, which is clearly a downside in my case, as I want code to be readable and understandable. I go for the short version unless I have a conflict between different namespaces that can only be solved with explicit references to classes, or unless I make an alias for it with the using keyword:
using Datagrid = System.Data.Datagrid;
Actually the full path is global::System.Data.DataGrid. The point of using a more qualified path is to avoid having to use additional using statements, especially if the introduction of another using will cause problems with type resolution. More fully qualified identifiers exist so that you can be explicit when you need to be explicit, but if the class's namespace is clear, then the DataGrid version is clearer to many.
I generally use the shortest form available in order to keep the code as clean and readable as possible. That's what using directives are for, after all, and tooltips in the VS editor give you instant detail on the provenance of a type.
I also tend to use a namespace tag for RCWs in a COM interop layer, to call out those variables explicitly in the code (they may need special attention on lifecycle and collection), e.g.
using _Interop = Some.Interop.Namespace;
In terms of performance there is no upside/downside. Everything is resolved at compile time and the generated MSIL is identical whether you use fully-qualified names or not.
Full qualification is prevalent in the .NET world because of auto-generated code, such as designer markup. In that case it is better to fully qualify names, such as class names, because of possible conflicts with other classes you may have in your code.
If you have a tool like ReSharper, it will actually tell you which of your fully-qualified references are unnecessary (e.g. by graying them out) so you can lop them off. If you frequently cut and paste code across your various code bases, fully qualifying names is a must. (Then again, why would you want to cut and paste all the time? It's a bad form of code reuse!)
I don't think there is really a downside, just readability versus actual time spent coding. In general, if you don't have namespaces with ambiguous objects, I don't think it's really needed. Another thing to consider is the level of use: if you have one method that uses reflection and you are all right with typing System.Reflection ten times, it's not a big deal, but if you plan on using a namespace a lot then I would recommend a using directive.
Depending on your situation, extra qualifiers will generate a warning (if this is what you mean by redundant). If you then treat warnings as errors, that's a pretty serious downside.
I've run into this with GCC, for example:
struct A {
    int A::b; // warning!
};
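For completeness, dropping the redundant qualifier is all it takes to silence the diagnostic:
struct A {
    int b; // no extra qualification
};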