Vim settings to detect includes in D source - include

I am trying to add some support for the D programming language to my Vim config. For autocompletion I need to detect the packages that are imported. This is not exactly hard to do in the simple case:
import std.stdio;
import std.conv;
My config:
set include=^\\s*import
set includeexpr=substitute(v:fname,'\\.','/','g')
Works great.
However, imports can have a more complicated format, for example:
package import std.container, std.stdio = io, std.conv;
I was not able to find a simple way to parse this with include and includeexpr.
Also there is a second problem: imports can have different access modifiers, like public and private. Vim scans included files recursively, so import statements from included files are parsed too. But I need to distinguish between the file I am currently working on and the files that are scanned automatically: in the current file all imports should be detected, but in other files only public import statements should add more files to the search.
Thanks for help.
Update
It's a shame if this cannot be done without a full parser. Essentially, I only need two things:
the ability to return an array of file names from includeexpr instead of a single name
the ability to distinguish between includes in the current file and includes in other files

I think the only way to do it reliably is to use a complete parser and semantic analyzer. The D Completion Daemon (https://github.com/Hackerpilot/DCD/tree/master/editors/vim) has a Vim plugin and is not very resource-hungry.

Vim's include mechanism and 'includeexpr' are heavily influenced by the C programming language and only work for single files. You cannot return a list of filenames, so it won't be possible to support D's complex include mechanism with Vim. Use an IDE that is fully tailored to support the programming language, not a general-purpose text editor.
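That said, the simple single-module case can be made a little more robust. A minimal sketch (assumptions: modules map one-to-one onto .d files reachable through 'path'; it tolerates attributes such as public, private or static in front of import, but it still handles neither comma-separated import lists nor the current-file versus other-file distinction):
set include=^\\s*\\(\\w\\+\\s\\+\\)*import
set includeexpr=substitute(v:fname,'\\.','/','g')
set suffixesadd=.d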

Related

How to merge similar multi-package projects in go

Several organizations distribute variants of the same project, and we regularly pull changes from one another. It would be great if we could eventually merge code repositories and maybe, maybe have a common source tree managed by a consortium. However, each member would probably want the option of distributing their own variant without too much pain for customers in case there is trouble upstreaming changes required to work with newer products.
The project consists of three packages:
A library
A compiler executable that outputs go code that needs to import the library
A utility executable that uses code generated by #2 and links with #1
A big annoyance, when pulling changes back and forth, is gratuitous differences in import paths. We basically have to edit every version of import "github.com/companyA/whatever" to import "companyB.com/whatever". Of course these problems would go away with (gasp) relative import paths. If we resorted to such heresy, our compiler could just hard-code the absolute import path in generated code to isolate end users from the library's import path. It would also require only one gratuitous difference in the source trees (the line in the compiler that outputs import statements) rather than a bunch.
But anyway, I know relative import paths are bad - this is a tricky situation. I know this is similar to questions such as this or this, because the answer of just asking end users to create a directory called companyB.com and cloning something from companyA in there is just not going to fly for practical and political reasons.
I already know that go is not really good at accommodating this scenario, so I'm also not asking for a magic bullet to make go handle something it can't. Another thing that unfortunately won't fly is asking customers to curl whatever | sh, as this is viewed as too much of a liability (deemed "training customers to do dangerous things"). Maybe we could forego go get and have everyone clone to some neutral non-DNS-name under $GOPATH/src, but we would need to do this without a "flag day" in which code suddenly breaks if it's in the wrong place.
My question is whether anyone has successfully merged SDK-type projects with existing end users, and if so, how did you do it, what worked, and what didn't? Did you in fact avoid relative import paths or gnarly GOPATH hacking, and if so was it worth it? What mechanisms did you employ (environment variables, configuration files, .project-config files in the current working directory, abusing the vendor directory, code-generation packages that figure out their absolute import path at compilation time) to make this work smoothly? Did you just muddle through with a huge amount of sed or maybe gofmt -r? Are there tricks involving clever use of .gitattributes or go generate to rewrite import paths on checkout/checkin?
Merging them is pretty easy - cross-merge so that they all match, pick one (or create a new one) as the canonical source of truth, then migrate all references to the canonical import and make all future updates there.
The problem arises here:
each member would probably want the option of distributing their own variant without too much pain for customers in case there is trouble upstreaming changes required to work with newer products
There's no particularly good way to do that without one of the known solutions you've already ruled out, and depending on your threshold for "too much pain" there may be no way at all.
Without knowing more about the situation it's hard to suggest options, but if there's any way that each company can abstract their portion out into a separate library they could maintain and update at their pace while using a smaller shared library with shared responsibility, that would likely be the best option - something like the model used by Terraform for its providers? Basically you'd have shared maintenance of a shared "core" and then independent maintenance of "vendor-specific" packages.
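To make that split concrete, here is a rough sketch (all package paths and names below are invented for illustration): a jointly maintained core defines the stable interfaces, and each organization ships its own provider package that implements them and can be released on its own schedule.
// file: shared/core/core.go - maintained jointly by the consortium
package core

// Generator is the stable contract every vendor-specific package implements.
type Generator interface {
    Generate(spec []byte) ([]byte, error)
}

// file: companya/provider/provider.go - maintained independently by company A
package provider

import "example.com/shared/core"

type generator struct{}

// Generate emits company A's flavour of the generated code.
func (generator) Generate(spec []byte) ([]byte, error) {
    return append([]byte("// generated by company A\n"), spec...), nil
}

// New returns company A's implementation of the shared contract.
func New() core.Generator { return generator{} }
Customers would then depend only on the shared core's import path; which provider gets linked in stays under each vendor's control.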

Can I develop a go package in multiple source directories?

I am developing a go package, which is a little bit complex and thus I want to organize the source code into multiple directories.
However, I don't want the users of the package to have to use overly long imports. Anyway, the internal structure of the package isn't their concern.
Thus, my package structure looks so:
subDir1
    subSubDir1
    subSubDir2
subDir2
    subSubDir3
...and so on. All of them have their exported calls.
I would like to avoid that my users have to import
import (
    "mypackage/subDir1"
    "mypackage/subDir1/subSubDir2"
)
...and so on.
All I want is that, if they want to use an exported function from my package, they should be able to access all of them simply by importing mypackage.
I tried declaring package mypackage in all of the .go files. Thus, I had source files in different directories, but with the same package declaration.
In this case, the problem I ran into was that I simply couldn't import multiple directories from the same package. It said:
./src1.go:6:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir1"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
./src1.go:5:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir2"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
Is it somehow possible?
You should not do this in any case, as the language spec allows a compiler implementation to reject such constructs. Quoting from Spec: Package clause:
A set of files sharing the same PackageName form the implementation of a package. An implementation may require that all source files for a package inhabit the same directory.
Instead "structure" your file names to mimic the folder structure; e.g. instead of files of
foo/foo1.go
foo/bar/bar1.go
foo/bar/bar2.go
You could simply use:
foo/foo1.go
foo/bar-bar1.go
foo/bar-bar2.go
Also if your package is so big that you would need multiple folders to even "host" the files of the package implementation, you should really consider not implementing it as a single package, but breaking it into multiple packages.
Also note that Go 1.5 introduced internal packages. If you create a special internal subfolder inside your package folder, you may create any number of subpackages inside it (even using multiple levels). Your package will be able to import and use them (or, to be more precise, all packages rooted at your package folder will), but no one outside will be able to do so; it would be a compile-time error.
E.g. you may create a foo package, have a foo/foo.go file, and a foo/internal/bar package. foo will be able to import foo/internal/bar, but e.g. boo won't. foo/baz will also be able to import and use foo/internal/bar because it is rooted at foo/.
So you may use internal packages to break down your big package into smaller ones, effectively grouping your source files into multiple folders. The only thing you have to pay attention to is to put everything your package wants to export into the package itself and not into the internal packages (as those are not importable / visible from the "outside").
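A small sketch of that layout (the import path is invented for illustration): the implementation can live in as many folders under foo/internal/ as you like, while foo itself only re-exports what it wants to make public.
// file: foo/internal/bar/bar.go
package bar

// Sum is an implementation detail; only packages rooted at foo/ may import it.
func Sum(a, b int) int { return a + b }

// file: foo/foo.go
package foo

import "example.com/foo/internal/bar"

// Add is part of foo's public API; it simply forwards to the internal code.
func Add(a, b int) int { return bar.Sum(a, b) }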
Inside your package source code, you have to differentiate your source directories by renamed imports. You can declare the same package mypackage in all of your source files (even if they are in different directories).
However, when you import them, you should give individual names to the directories. In your source file src1.go, import the other directories this way:
import (
    "mypackage"
    submodule1 "mypackage/mySubDir"
)
And you will be able to reach the API defined in "mypackage" as mypackage.AnyThing(), and the API defined in mySubDir as submodule1.AnyThing().
The external world (i.e. the users of your package) will see all exported entities as mypackage.AnyThing().
Avoid namespace collisions, and use understandable, intuitive naming as in the example.
Yes, this is doable without any problems: just invoke the Go compiler by hand, that is, not via the go tool.
But the best advice is: Don't do that. It's ugly and unnecessarily complicated. Just design your package properly.
Addendum (because the real intention of this answer seems to get lost sometimes, maybe because irony is too subtle): Don't do that!! This is an incredibly stupid idea! Stop fighting the tools! Everybody will rightfully hate you if you do that! Nobody will understand your code or be able to compile it! Just because something is doable in theory doesn't mean it is a sensible idea in any way. Not even for "learning purposes"! You probably don't even know how to invoke the Go compiler by hand, and if you figure it out it will be a major pain.

Can gdb set break at every function inside a directory?

I have a large source tree with a directory that has several files in it. I'd like gdb to break every time any of those functions are called, but don't want to have to specify every file. I've tried setting break /path/to/dir/:*, break /path/to/dir/*:*, rbreak /path/to/dir/.*:* but none of them catch any of the functions in that directory. How can I get gdb to do what I want?
There seems to be no direct way to do it:
rbreak file:. does not seem to accept directories, only files. Also note that you would want a dot (.), not an asterisk (*)
there seems to be no way to loop over symbols in the Python API, see https://stackoverflow.com/a/30032690/895245
The best workaround I've found is to loop over the files with the Python API, and then call rbreak with those files:
import os

import gdb

class RbreakDir(gdb.Command):
    def __init__(self):
        super().__init__(
            'rbreak-dir',
            gdb.COMMAND_BREAKPOINTS,
            gdb.COMPLETE_NONE,
            False
        )

    def invoke(self, arg, from_tty):
        # Walk the given directory and run rbreak on every file found in it.
        for root, dirs, files in os.walk(arg):
            for basename in files:
                path = os.path.abspath(os.path.join(root, basename))
                gdb.execute('rbreak {}:.'.format(path), to_string=True)

RbreakDir()
Sample usage:
source a.py
rbreak-dir directory
This is ugly because of the gdb.execute call, but seems to work.
It is however too slow if you have a lot of files under the directory.
My test code is in my GitHub repo.
You could probably do this using the Python scripting that comes with modern gdb's. Two options: one is to list all the symbols and then if they contain the required directory create an instance of the Breakpoint class at the appropriate place to set the breakpoint. (Sorry, I can't recall off hand how to get a list of all the symbols, but I think you can do this.)
You haven't said why exactly you need to do this, but depending on your use-case an alternative may be to use reversible debugging - i.e. let it crash, and then step backwards. You can use gdb's inbuilt reversible debugging, or for radically improved performance, see UndoDB (http://undo-software.com/)
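For reference, gdb's built-in recorder is driven roughly like this (a sketch using the stock record/replay commands; UndoDB has its own front end): start the program, turn on recording, let it run to the failure, then step backwards from there.
start
record
continue
reverse-step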

Unifying enums across multiple languages

I have one large project with components in multiple languages that each depend on some of the same enum values. What solutions have you come up with to unify enums across multiple arbitrary languages? I can think of a few, but I'm looking for the best solution.
(In my implementation, I'm using PHP, Java, JavaScript, and SQL.)
You can put all of the enums in a text file, then use a code generator to write out the appropriate syntax for each language from that common file so that each component has the enums. Make that text file the authoritative source of information.
You can express the text file in XML but I'd think a tab-delimited flat file would work just fine.
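A rough sketch of that pipeline (the file format, names and paths are invented for illustration): one tab-delimited file is the source of truth, and a small generator per target language runs from the build. The generator below emits a JavaScript module; the PHP, Java and SQL emitters would follow the same pattern. The generator itself can be written in whatever language is convenient; Go is used here only as an example.
// gen.go: reads enums.tsv, whose lines look like "OrderStatus<TAB>PENDING<TAB>0",
// and writes a JavaScript module with one constant per enum member.
package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    f, err := os.Open("enums.tsv")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    fmt.Println("// generated from enums.tsv - do not edit")
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Split(sc.Text(), "\t")
        if len(fields) != 3 {
            continue // skip blank or malformed lines
        }
        // e.g. "export const OrderStatus_PENDING = 0;"
        fmt.Printf("export const %s_%s = %s;\n", fields[0], fields[1], fields[2])
    }
    if err := sc.Err(); err != nil {
        panic(err)
    }
}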
Make them in a format that every language can understand or has a library for. I am using JSON for this at the moment.
Then you can include it with two ways:
For development: Load it from a file/URL at runtime
good for small changes you want to see immediately
slow
For productive usage: Include it in the files
using a build script
fast
no instant feedback
I would apply the DRY principle and use a code generator; that way you could easily add a new language even if it has no native enum support.

Usage/availability of import function in Processing.js

The reference page (use "toggle all") for Processing.js says the import command remains unimplemented. It references a page for the Java Processing language that describes a usage pattern like this: import processing.opengl.*;
I see at Github that some work on the import command was committed to the root in May. Does anyone know how this syntax works in a JavaScript environment? It's not clear what the path to the library file and its assets would be. Does this depend on the use of an environment variable similar to PYTHONPATH, or is there a directory naming convention?
Finally, would you care to discuss the relative merits of the import command (assuming it now works) versus the approach described here, and discussed briefly here on StackOverflow?
I was looking for a similar solution. Perusing the processing.js base code, I noticed that you can simply list multiple files in your datasrc declaration if you separate the file names with spaces. That kind of does what I want, although the result is multiple ajax calls to load the separate scripts.
<canvas id="test" datasrc="resources/pjs/Spot.class.pjs resources/pjs/cursor.pjs"></canvas>
I think a cleaner solution, given the current state of the processing.js code (as of July 2010) would be to simply build a server-side code concatenator a-la minify:
http://groups.google.com/group/minify

Resources