Is there any way to autoload Hack type aliases? I've placed them in separate files on PSR-4-compliant paths, and although I understand they are Hack-only and aren't formally mentioned in PSR-0 or PSR-4, I figured one of the following would happen:
HHVM would expand type aliases to their base types, or
spl_autoload would treat the type as a class, function or interface name and execute the script, resolving the alias.
However, neither happens. At runtime, the method calls fail due to incompatibility with the type hints, i.e.:
Catchable fatal error: Argument passed to {method_name} must be an instance of {type_alias}, {concrete_type} given.
Edit: I should mention that I am specifically using Composer. I'm unsure if this is Composer-specific or not.
Yes, you can autoload types in HHVM. You need to be using a class-map approach and the HH\autoload_set_paths function.
There is the hhvm-autoload package, which adds support for generating the necessary map as part of a Composer-based build.
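A minimal sketch of the mapping call, assuming a type alias defined in src/Foo.php (the file layout and names here are made up; hhvm-autoload generates an equivalent map for you):

<?hh
// Sketch: register a class-map with HHVM's autoloader.
// Symbol names are lowercased keys; paths are relative to the root argument.
HH\autoload_set_paths(
  [
    'type'     => ['foo' => 'src/Foo.php'], // type aliases go in the 'type' map
    'class'    => [],
    'function' => [],
    'constant' => [],
  ],
  __DIR__.'/',
);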
I don't believe this is possible. PHP doesn't register type hints for autoloading, and it doesn't need to: the only way to fulfill a type hint is to pass an instance of that class or a subclass, whose construction would already have triggered the autoloader. It is therefore impossible for a type hint to be unknown to the interpreter at the time it is checked.
This is only a problem in Hack because type aliases introduce this possibility. To stay consistent with PHP, I expect the only viable solution of the two mentioned would be for HHVM to expand type aliases while compiling the bytecode.
I'm cross-compiling with MinGW and got this error:
undefined reference to `rpl_realloc'
After some searching I found this can be resolved as follows in configure.ac or as environment variables set prior to calling ./mingw64-configure:
ac_cv_func_malloc_0_nonnull=yes
ac_cv_func_realloc_0_nonnull=yes
What defines these macros, and is there any documentation on the subject? I couldn't find any...
What defines these macros, and is there any documentation on the subject?
Autoconf uses the ac_cv_ prefix for its "cache variables", in which it records the results of configuration tests it has performed. In the event that the same check is requested multiple times, these allow it to use the previously-determined result instead of performing the check again.
The general naming convention for these is documented in the Autoconf manual. The particular cache variable names you ask about are documented to cache the results of Autoconf's AC_FUNC_MALLOC and AC_FUNC_REALLOC macros, respectively. That documentation also speaks to the rpl_realloc name.
You may use these variables in configure.ac to programmatically determine the results of those checks, but assigning values to them directly is a relatively nasty hack. In this particular case, however, the error suggests that whoever prepared the autotooling for the project you're trying to build did a sloppy job of it. If fudging the cache variables gets you a successful build and a working program, then that's a tempting and much easier alternative to actually fixing the project.
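For instance, a one-shot override at configure time might look like this (using the mingw64-configure wrapper from the question; this is a sketch of the workaround, not a fix for the underlying autotooling):

# Pre-seed the cache so AC_FUNC_MALLOC / AC_FUNC_REALLOC skip their
# run-time tests (which cannot run when cross-compiling) and do not
# substitute rpl_malloc / rpl_realloc:
ac_cv_func_malloc_0_nonnull=yes \
ac_cv_func_realloc_0_nonnull=yes \
./mingw64-configure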
If there is a function from an external package that is not used at all in my project, will the compiler remove the function from the generated machine code?
This question could be asked of any language compiler in general, but I think the behaviour may vary from language to language, so I am interested in knowing what the Go compilers do.
I would appreciate any help on understanding this.
The language spec does not mention this anywhere, and from a correctness point of view this is irrelevant.
But know that the current version does remove certain constructs that the compiler can prove are not used and whose removal will not change the runtime behaviour of the app.
Quoting from The Go Blog: Smaller Go 1.7 binaries:
The second change is method pruning. Until 1.6, all methods on all used types were kept, even if some of the methods were never called. This is because they might be called through an interface, or called dynamically using the reflect package. Now the compiler discards any unexported methods that do not match an interface. Similarly the linker can discard other exported methods, those that are only accessible through reflection, if the corresponding reflection features are not used anywhere in the program. That change shrinks binaries by 5–20%.
Methods are a "harder" case than functions because methods can be listed and called via reflection (unlike functions), but the Go tools do what they can to remove even unused methods.
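For instance, here is a minimal experiment you can try (the type and method names are made up; results vary by Go version):

package main

import "fmt"

type t struct{}

// Unexported and matching no interface used by this program,
// so the toolchain is free to prune it from the binary.
func (t) unused() string { return "never called" }

func main() {
	fmt.Println("hello")
}

After building, inspecting the symbol table (go tool nm on the binary, grepping for "unused") should come up empty, while the method reappears if it is made to satisfy an interface that the program actually uses.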
You can see examples and proof of removed / unlinked code in this answer:
How to remove unused code at compile time?
Also see other relevant questions:
Splitting client/server code
Call all functions with special prefix or suffix in Golang
I would like to define my Error Codes in a package models.
error.go
package models
const (
	EOK = iota
	EFAILED
)
How can I use them in another package without referring to them as models.EOK? I would like to use them directly as EOK, since these codes would be common across all packages.
Is it the right way to do it? Any better alternatives?
To answer your core question
You can use the dot import syntax to import the exported symbols from another package directly into your package's namespace (godoc):
import . "models"
This way you could directly refer to the EOK constant without prefixing it with models.
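A complete usage sketch, keeping the question's models package (though, as explained below, this style is discouraged):

package main

import (
	"fmt"

	. "models" // dot import: models' exported names enter this file's scope
)

func main() {
	fmt.Println(EOK) // refers to models.EOK without the prefix
}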
However, I'd strongly advise against doing so, as it generates rather unreadable code; see below.
General/style advice
Don't use an unprefixed import path like models. This is considered bad style, as it can easily clash with other packages' names. Even for small projects that are used only internally, use something like myname/models (see the Go blog).
Regarding your question about error generation, there are functions for generating error values, e.g. errors.New (godoc) and fmt.Errorf (godoc).
For a general introduction to Go and error handling, see the Go blog.
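A small sketch of both functions (the function and messages are made up):

package models

import (
	"errors"
	"fmt"
)

// A fixed error value, created once and comparable with ==:
var ErrFailed = errors.New("operation failed")

// Open completes the message with data via fmt.Errorf:
func Open(name string) error {
	return fmt.Errorf("open %s: %v", name, ErrFailed)
}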
W.r.t. the initial question, use a compact package name, for example err.
Choosing an approach to propagating errors and generating error messages depends on the scale and complexity of the application. The error style you show, using an int and then a function to decode it, is quite C-ish.
That style was partly caused by:
the lack of multiple value returns (unlike Go),
the need to use a simple type (to be easily propagated), and
one that gets translated to text with a function (unlike Go's error interface), so that the local-language strings can be changed.
For small apps with simple error strings, I put the package's error strings at the head of a package file and just return them, maybe using errors.New(...), or fmt.Errorf if the string needs to be completed using some data.
That 'int' style of error reporting doesn't offer something as flexible as Go's error interface. The error interface lets us build information-rich error structures, to return useful information, and not just an int value or string.
An implication is that different packages can yield different concrete types which implement the error interface. We don't need to agree on a single concrete error type across an entire set of packages. So error is an interface which can be propagated as easily as an int, yet the concrete type behind it can be much richer than an int. Error generation (implementing error) can be as centralised or distributed as we need, unlike strerror()-style functions, which can be awkward to extend.
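A sketch of such an information-rich error type (all names are illustrative):

package store

import "fmt"

// QueryError carries structured context, not just an int or a string.
type QueryError struct {
	Query string
	Code  int
	Err   error
}

// Error implements the error interface, so a *QueryError
// propagates like any other error value.
func (e *QueryError) Error() string {
	return fmt.Sprintf("query %q failed (code %d): %v", e.Query, e.Code, e.Err)
}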
Effective Go has the following advice on the naming of getters:
Go doesn't provide automatic support for getters and setters. There's nothing wrong with providing getters and setters yourself, and it's often appropriate to do so, but it's neither idiomatic nor necessary to put Get into the getter's name. If you have a field called owner (lower case, unexported), the getter method should be called Owner (upper case, exported), not GetOwner. The use of upper-case names for export provides the hook to discriminate the field from the method. A setter function, if needed, will likely be called SetOwner. Both names read well in practice.
Source: https://golang.org/doc/effective_go.html#Getters
Now, this advice doesn't seem consistent, as the stdlib itself violates it multiple times.
As you can see in the os package (Getenv, Getwd, and so on), there are multiple methods which use the GetX naming convention that is advised against in the Effective Go guide.
So the question is: is the advice given in the guide wrong, or are these methods named wrongly and will they be fixed in future versions?
These names are not consistent with Go naming by design. Rob Pike, one of the Go creators, says this about the names in the OS package:
There are inconsistencies but this is the key point. It should be Stdout not StdOut, because that name is coming from the underlying system. Similarly it's Fprintf not FPrintf or FPrintF because that is a very familiar name. These names are coming into Go, not being created there, and the initial cap is the admission fee.
The names will not be changed in a future version of Go.
[go-nuts] FunctionName case inconsistencies
A lot of the all lowercase names were chosen before we had really figured out what the naming conventions should be. The rule we adopted, which might be worth revisiting later, was that entry points in package os or syscall, which are named after equivalents in C, just had a single capital at the beginning, to avoid needing to decide where the internal capitalizations are in abbreviations like geteuid or getwd or chdir. Names like Readdirnames, which are actual words, might be worth revisiting at some point.

Russ
os: inconsistent casing in names #1187
Is there any sort of rule about the casing of functions used in the "os" package? Looking through, it doesn't sound like it's very easy to recall whether a given function should be called LikeThat or Likethat.
For instance:
Mkdir
MkdirAll
TempDir
Getenv
ForkExec
Readlink
ReadAt
Readdir
It feels very ad-hoc, and hard to recall.
It's a known issue. It's unplanned.
The term "getters" refers to methods on structs that allow you to read values of (often unexported) fields on that struct. The functions you're pointing to are top-level functions which allow you to read values from the OS. That idiomatic rule is not relevant to this case.
What are the benefits and downsides of fully qualifying a type name, for example referencing something as System.Data.Datagrid as opposed to just Datagrid? Please provide examples and explanation. Thanks.
The benefit is that you don't need to add a using directive for everything you use, especially if it's the only thing you use from a particular namespace. It also prevents name collisions.
The downside, of course, is that the code balloons out in size and gets harder to read the more you use specific qualifiers.
Personally, I tend to rely on using directives for most things, unless I know for sure I will only be using something from a particular namespace once or twice, so it won't impact the readability of my code.
You're being very explicit about the type you're referencing, and that is a benefit. At the same time, you're giving up code clarity, which clearly is a downside in my case, as I want code to be readable and understandable. I go for the short version unless I have a conflict between namespaces which can only be solved by explicitly referencing the classes, unless I make an alias for it with the using keyword:
using DataGrid = System.Data.DataGrid;
Actually the full path is global::System.Data.DataGrid. The point of using a more qualified path is to avoid having to use additional using statements, especially if the introduction of another using will cause problems with type resolution. More fully qualified identifiers exist so that you can be explicit when you need to be explicit, but if the class's namespace is clear, then the DataGrid version is clearer to many.
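For example, a sibling namespace can shadow a type so that only the global:: prefix reaches it (contrived names):

namespace MyApp.System
{
    class Console { } // shadows the BCL's System namespace inside MyApp
}

namespace MyApp
{
    class Program
    {
        static void Main()
        {
            // Plain "System.Console" here binds to MyApp.System.Console,
            // which has no WriteLine; global:: anchors lookup at the root.
            global::System.Console.WriteLine("hello");
        }
    }
}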
I generally use the shortest form available in order to keep the code as clean and readable as possible. That's what using directives are for, after all, and tooltips in the VS editor give you instant detail on the provenance of a type.
I also tend to use a namespace tag for RCWs in a COM interop layer, to call out those variables explicitly in the code (they may need special attention on lifecycle and collection), e.g.
using _Interop = Some.Interop.Namespace;
In terms of performance there is no upside/downside. Everything is resolved at compile time and the generated MSIL is identical whether you use fully-qualified names or not.
The reason why its use is prevalent in the .NET world is because of auto-generated code, such as designer markup. In that case it would be better to fully-qualify names like class names because of possible conflicts with other classes you may have in your code.
If you have a tool like ReSharper, it will actually tell you what fully-qualified references you have are unnecessary (e.g. by graying them out) so you can lop them off. If you frequently cut-paste code across your various code bases, it would be a must to fully qualify them. (then again, why would you want to do cut-paste all the time; it's a bad form of code reuse!)
I don't think there is really a downside, just readability vs. actual time spent coding. In general, if you don't have namespaces with ambiguous objects, I don't think it's really needed. Another thing to consider is the level of use: if you have one method that uses reflection and you are alright with typing System.Reflection 10 times, then it's not a big deal, but if you plan on using a namespace a lot, then I would recommend a using directive.
Depending on your situation, extra qualifiers will generate a warning (if this is what you mean by redundant). If you then treat warnings as errors, that's a pretty serious downside.
I've run into this with GCC for example.
struct A {
    int A::b; // warning: extra qualification 'A::' on member 'b'
};