We've been looking into ways to keep our API documentation accurate and up to date. This has led me down a rabbit hole of standards such as OpenAPI and a whole universe of open-source tools for integrating API documentation into a build pipeline, setting up mock servers, failing the build when APIs change or no longer match the documentation, and so on. Right now I'm fairly overwhelmed, so to avoid having this question closed as "Too Broad", I'd like to narrow things down to a key piece of the puzzle that I'm lost on. In simple terms: how do you connect the auto-generated Swagger JSON to the fully documented, detailed version of the spec (most likely a collection of YAML and Markdown files)?
Right now, we have the ability to generate Swagger JSON based on our compiled code, but it's sparse and doesn't contain any information that can't be obtained from the code itself. Obviously I'd want to fill in details such as longer descriptions of each API, sample code, etc. From what I can tell, it's standard practice to generate your OpenAPI spec with a tool that can import directly from Swagger, then use an editor (such as Swagger Editor) to fill in all the details. This gets bundled up as a bunch of YAML files, Markdown files, etc., which get checked into GitHub, and then there are various tools that can build that into a portal for your users to go to (Redoc does this sort of thing).
However, how do changes to your original auto-generated JSON get merged into the final product? My guess would be that there's some build step that runs a diff tool comparing the auto-generated JSON from the current build against the published OpenAPI spec, and then somehow alerts the team if there are new APIs that have to be documented or existing APIs that have changed and need their documentation updated. Is this roughly on track, or is there a big part I'm missing? Thanks!
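To make that concrete, here's a minimal sketch of the kind of check I'm imagining, in Python (the file names are placeholders, and I'm assuming the published spec is YAML, so it needs the PyYAML package):

import json
import sys
import yaml  # PyYAML

# Load the auto-generated spec from the build and the hand-maintained, published spec.
with open("build/autogenerated-swagger.json") as f:
    generated = json.load(f)
with open("docs/openapi.yaml") as f:
    published = yaml.safe_load(f)

def operations(spec):
    # Collect (path, HTTP method) pairs from an OpenAPI/Swagger document.
    return {(path, method)
            for path, item in spec.get("paths", {}).items()
            for method in item
            if method in ("get", "post", "put", "patch", "delete")}

undocumented = operations(generated) - operations(published)
stale = operations(published) - operations(generated)

if undocumented or stale:
    print("New operations missing from the documentation:", sorted(undocumented))
    print("Documented operations no longer in the code:", sorted(stale))
    sys.exit(1)  # fail the build so the team knows the docs need attention

Is hand-rolling something like this the usual approach, or is there a standard tool that does this comparison for you?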
I am creating a PythonAnalyzer using the following code:
var interpreterFactory = InterpreterFactoryCreator.CreateAnalysisInterpreterFactory(
    PythonLanguageVersion.V36.ToVersion());
var analyzer = PythonAnalyzer.Create(interpreterFactory);
Later on I also create and analyze a simple Python module that looks like this:
name = input('What is your name?\n')
print('Hi, %s.' % name)
Then I do module.Analysis.GetValuesByIndex("name", 4).
At this point I expected the "value" to be 'str', because that's what Visual Studio shows when I open the same file in it. However, I get 'object' instead. So it seems that the PythonAnalyzer, when constructed as shown above, lacks some important information about where to look for the standard library and/or its types.
Unfortunately, the documentation on PythonAnalyzer is lacking, so I was hoping the community could help me understand how to configure it properly.
Congratulations on getting so far :)
What you're hitting here is the fact that CreateAnalysisInterpreterFactory is really intended for "pure" cases, where you have access to all the code that you're trying to analyze and nothing needs to be looked up. It is mostly used for the unit tests, or as a fallback when no copies of Python are installed. Depending on precisely which version of PTVS you are using, the bare information you're getting is either coming from DefaultDB\v3\python.pyi or CompletionDB\__builtin__.idb, both of which are somewhat lacking (by design).
Assuming you have a copy of Python installed, I would suggest creating an instance of InterpreterConfiguration with all of its details, and passing that to CreateInterpreterFactory (without "Analysis").
If you're on the latest sources (strongly recommended), this may run the interpreter in the background to collect information from it (you can control caching of this info with the DatabasePath and UseExistingCache members of InterpreterFactoryCreationOptions). If you are using the older version still, you'll need to trigger a completion DB regeneration or have one that you've created through VS.
And a final caveat: this part of PTVS is under some pretty heavy development at the time of writing, so you'll either want to keep updating the version you're working against or stick with a slightly older one. Also, feel free to post questions like this on the GitHub site: while this is technically public API, it's barely documented at all, so the best help will come from the dev team.
I am currently trying to analyse Bugzilla in order to find the ratio of bugs to lines of code for each Firefox component. However, I have never worked with Bugzilla before and have no knowledge of Firefox's codebase.
How would I go about finding the lines of code per Firefox component (as they appear on Bugzilla under the Comp header)? I have made an attempt at looking through mozilla-central, but have no idea which source files relate to which components.
EDIT: Dexter pointed out that there is a BUG_COMPONENT directive in the mozilla-central tree, but the coverage of this directive seems extremely incomplete, so it is not helpful on its own. Any other advice, or pointers as to where I could get such advice, would be much appreciated.
Great question! We recently added the BUG_COMPONENT directive (see the meta bug) to the Firefox code: it's in the moz.build file contained in each directory in the source. This directive allows linking each file in the repository to the related Bugzilla component.
For example, the following directive, found here, says that all the files in test/browser whose names contain the word Telemetry belong to the Toolkit::Telemetry component on Bugzilla.
with Files("test/browser/*Telemetry*"):
    BUG_COMPONENT = ("Toolkit", "Telemetry")
You can use either DXR or searchfox to quickly search the Firefox repository.
Updated the answer to account for the questions in the comments.
As noted in the comments, some components are tracked on Bugzilla (e.g. Activity Stream) but do not have a direct mapping to source files within the mozilla-central repository (the one Firefox is built from). That's because some newer components do not ride "the trains" (the ~6-week development cycle), but are instead updated more frequently and deployed as add-ons.
The code for these components usually lives under the Mozilla GitHub account, along with other projects. Since there are quite a number of projects, one way to identify the ones you might be interested in is to restrict them to the JavaScript ones. If you follow this last link, you'll see the repositories for both test-pilot and Activity Stream (plus other add-ons).
I'm afraid the only way to match GitHub projects to Bugzilla components is to look at the name of the repository on GitHub and find the matching component in Bugzilla: you can type the name here to get some component suggestions. If you want to get fancy, you might also leverage the Bugzilla REST API (see the sketch after this list):
Get a list of the JS GitHub projects.
Extract the name of each project.
Use the REST API to get the component suggestion.
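Here's a rough sketch of those steps in Python (it needs the requests package; note that the component-suggestion endpoint path below is an assumption on my part, so check the Bugzilla REST API documentation for the exact route):

import requests

# 1. List the Mozilla GitHub projects and keep the JavaScript ones.
repos = requests.get("https://api.github.com/orgs/mozilla/repos",
                     params={"per_page": 100}).json()
js_repos = [repo["name"] for repo in repos if repo.get("language") == "JavaScript"]

# 2./3. Feed each project name to Bugzilla's component-suggestion endpoint.
# (Assumed endpoint -- verify the path against the Bugzilla REST API docs.)
for name in js_repos:
    suggestions = requests.get(
        "https://bugzilla.mozilla.org/rest/prod_comp_search/" + name).json()
    print(name, suggestions)

(In practice you'd also want to page through the GitHub results, since the organization has more repositories than fit on one page.)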
I would personally just consider the mozilla-central repository as a starting point, as it is mostly annotated: scrape the BUG_COMPONENT values from the source tree, map them to the paths, then use the REST API to get the list of bugs.
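As a rough illustration of that approach (the checkout path is a placeholder, and the limit=0 trick for lifting the default result cap is something you should double-check against the REST API docs):

import os
import re
import requests

# Match the BUG_COMPONENT = ("Product", "Component") form shown above.
pattern = re.compile(r'BUG_COMPONENT\s*=\s*\(\s*"([^"]+)"\s*,\s*"([^"]+)"\s*\)')
components = set()

# Walk a local mozilla-central checkout and collect annotations from moz.build files.
for root, dirs, files in os.walk("mozilla-central"):
    if "moz.build" in files:
        with open(os.path.join(root, "moz.build"), encoding="utf-8") as f:
            components.update(pattern.findall(f.read()))

# Ask the Bugzilla REST API how many bugs are filed against each component.
for product, component in sorted(components):
    resp = requests.get("https://bugzilla.mozilla.org/rest/bug",
                        params={"product": product, "component": component,
                                "include_fields": "id", "limit": 0}).json()
    print(product, component, len(resp.get("bugs", [])))

You'd then divide those bug counts by the lines of code under the matching paths to get the ratio you're after.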
Sidenote: the Download Panel seems to be correctly annotated in the main repo.
I want to learn more about Sketch in order to build a plugin for it. I was looking at the JSON files we can extract from a project, and I noticed all the "classes" tagged in it (I looked them up and they show up as headers), such as "MSRect", "MSColor", "MSExportOptions", etc.
I've looked at the Sketch developer webpage and some forums, and I found some mentions of these classes, but I couldn't find anything useful at a basic level.
The question is: where can I find information about what those classes are and what they do?
Thank you.
https://github.com/abynim/Sketch-Headers is where I found all the information I needed about these "classes". Someone took the time to use the dump command to extract all the information (methods, properties, etc.) and upload it to GitHub.
These files took some time to understand and work with, but they solved all the problems I had with the creation of my plugin for Sketch.
I have been working with the HL7 FHIR .NET API reference implementation, utilizing the existing resource models embedded in the library. Now I am trying to use the Forge tool to modify the resources (constraints/extensions) to suit my requirements.
I noticed that the HL7 publishing mechanism does not generate C# models from DSTU 2 onwards, and I was wondering: what is the best way of converting profiles created with Forge into C# resource classes so that they may be included in the HL7.Fhir.Model assembly that is part of the reference implementation?
You're correct that generation of the models is not part of the official build.
This has now moved to https://github.com/ewoutkramer/fhir-net-api where the rest of the API is maintained more easily.
It is done using T4 templates on the output from the official builds.
There is a simple process for updating the models with new versions of the spec, and we keep it fresh as people need it; for each connectathon we publish a new build on NuGet and maintain a branch of the code on GitHub.
(It's a PowerShell script that downloads all the latest build outputs and puts them in the appropriate folders; then you need to run the T4 templates in Visual Studio.)
Such as this one for the May Connectathon in Montreal:
https://github.com/ewoutkramer/fhir-net-api/tree/ft-connectathon-may2016
This is something you can do yourself with a little assistance.
As for generating code for a profile, we haven't done that yet, but it should theoretically be possible.
I don't know that I'd advise it at the moment, though, while the profiles are under so much development and change.
background
I have built many tools in the past year or so that are designed to help me program for XPages. These tools include primarily helper Java classes, extended logging (making use of OpenLogger and my own stuff), and a few other things that I personally feel I cannot work without. It has been discussed with my employer, and we feel that it might be a good idea to start publishing these items to OpenNTF. Since these tools are made up of about 3 .nsfs, all designed to use the same Java code, key JavaScript classes, CSS, and even a custom control or two, I would like to consolidate the key items into a plug-in that can be installed at the server and client level. I want to do this consolidation before I even think about publishing any of the work I've done so far; it would just be far too much work to maintain otherwise, not just for me, but for potential users. I have not really found any information on how to do such a thing through Google searches. I also have to make sure that I am able to make use of the ExtLib libraries, the OpenNTF Domino API, and the Notes API.
my questions
How does one best go about designing such plug-ins? Must a designer use Eclipse, or is it possible to do this directly in the Notes Designer?
How does a designer best go about keeping a server and client up to date while designing and updating the plug-in code? Is this why GitHub is often used?
Where is the best place to get material to get started in this direction? I sort of feel lost in the woods, knowing I need to head north, but not having a compass for that first step.
Thank you very much for your input.
In my experience, I found that diving into plug-in development is a huge PITA until you get used to it, but it's definitely worth it overall.
As for whether you can use Designer for plugin development: yes, but you will likely eventually want to not do so. I started out by using Designer for this sort of thing for a while, presumably with the same sentiment as you: why bother installing another instance of Eclipse when I'm already sitting in one all day? However, between Designer's age (it's roughly equivalent to, I think, Eclipse 3.4), oddities when it comes to working sets between the "Applications" and "Project Explorer" views, and, in my case, my desire to use a Mac app, I ended up switching.
There are two major starting points: the XSP Starter Kit (http://www.openntf.org/internal/home.nsf/project.xsp?name=XSP%20Starter%20Kit) and Niklas Heidloff's video on setting up Eclipse for XPages development (http://www.openntf.org/main.nsf/blog.xsp?permaLink=NHEF-8RVB5H). The latter mentions the XPages SDK (http://www.openntf.org/internal/home.nsf/project.xsp?name=XPages%20SDK%20for%20Eclipse%20RCP), which is also useful. In my setup, I found the video largely useful, but some aspects either difficult to find (IBM's downloads are shifting sands) or optional (debugging, which will depend on whether or not you're using Eclipse on Windows).
Those resources should generally get you set up. The main thing to worry about when setting up your Eclipse environment is making sure your Plug-in Execution Environment is configured properly. If you're following the SDK setup instructions, that SHOULD get you where you need to be.
The next thing to know about is the way plugins are structured. Each plugin you want to install in Designer or Domino will also be paired with a feature project (a feature can house several plugins), and potentially an update site - the last one is optional if you just want to import the features into an Update Site NSF. That's how I often do my normal plugin development: export the paired feature to a directory and then import the feature into the server's Update Site NSF and then install in Designer from there using Application -> Install. You can also set things up so that you deploy into the server's plugin/feature directories instead of taking the step of installing into an update site if you'd prefer. GitHub doesn't really come into play for this aspect - it's more about sharing/collaborating with your code and also having a remote storage location for your git repositories (which I highly advise).
And as for the "lost in the woods" feeling: yep, you'll have that for a good while. There are lots of moving parts and esoteric concepts to get a hold of all at once. If you mostly follow the above links and then start with some basics from the XSP Starter Kit (which is itself a plugin project that you can pair with a feature) - say, printing text in the Activator class and making an implicit global variable just to make sure it works - that should help get your feet wet.
It's best done in Eclipse. You can debug your code running on the server from there, as well as run it directly from there. The editors are also more up-to-date. You want:
Eclipse for RCP and RAP developers
XPages SDK for Eclipse RCP (from OpenNTF)
XPages Debug Plugin (from OpenNTF - basically allows you to load the plugins to the Domino server dynamically, rather than exporting to an Update Site all the time)
The XSP Starter Kit on OpenNTF is a good starting point for a plugin. There are various references to the library id, which has to be unique for your plugin. Basically, references to org.openntf.xsp.starter need changing to whatever you want to call your plugin. You're also best advised to remove what you don't need. I tend to work in a copy of the Starter, remove stuff, build, and if there are errors because required classes are missing (Activator.java will obviously be required, along with some others), paste them back in from the Starter.
XPages OpenLog Logger is a good cross-reference; it was built from the Starter Kit. It's pretty much stripped down, and you'll be able to see what had to be changed. A lot of the elements of the XSP Starter Kit correspond to Java classes you'll probably be familiar with from your XPages Java development.
GitHub etc. tends to be used for source control, which is useful for working out what's changed from time to time.