I'm trying to convert my ontology from OWL to SHACL. However, the SKOS labels, prefLabels, comments etc. are not being converted; at least, they don't come back in the TTL file that is generated. Right now, I load my original TTL file into TopBraid and use Model -> Convert OWL/RDFS To SHACL... with the standard settings. This works great: all of my object properties, data properties, cardinalities etc. are neatly converted. However, everything that is not SHACL is completely omitted from the resulting TTL file, including my prefLabels and comments.
What am I doing wrong? Or better: how do I convert everything to SHACL but keep the SKOS parts in there?
Thank you for any hints!
The OWL to SHACL importer only produces the SHACL-specific triples. The rest of the class definitions can remain in the OWL file, and the generated SHACL file will typically have an owl:imports statement pointing at the original OWL file. As a result, if anyone opens the SHACL file, the definitions from the OWL file will still be there.
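For illustration, a minimal Turtle sketch of that import (the URIs here are hypothetical; the converter uses the ones from your own files):

@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://example.org/shapes> a owl:Ontology ;
    owl:imports <http://example.org/ontology> .

With that import in place, anything that resolves imports will still see the skos:prefLabel and rdfs:comment triples from the original graph; if you need a single self-contained file, you would have to merge the two graphs yourself.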
My goal is to extract the localization keys and strings from a Base.lproj's .nib files.
While most compiled nib files use the binary plist format, I ran into a few that are in a different format, where the file starts with "NIBArchive".
An example (in macOS Monterey) is the file at:
/System/Library/CoreServices/Finder.app/Contents/Resources/Base.lproj/ClipWindow.nib
For "bplist" files, I can easily read them via CFPropertyListCreateFrom… into a NSDictionary, and then find the translatable strings therein (inside the "$classes" entry they're always three consecurity dict, string and string entries, with the dict containing the keys "NS.string", "NSKey" and "NSDev", and the following strings being the key and value of a translation entry, similar to what .strings files contain).
The NIBArchive format, however, doesn't seem to be documented anywhere. Has anyone figured out how to decode the entries in a meaningful manner so that I could find the translation items in them? Or how to convert them into the bplist format?
Note that this kind of file is a compiled nib, and ibtool won't work because it gives the error: "Interface Builder cannot open compiled nibs".
I am working with arbitrary .nib files, for which I don't know the implementation specifics. All I want to extract is the .strings content that was originally compiled into the Base localization file.
I had googled for this format before but found nothing. Now, with a slightly modified search, I ran into some answers.
My best hope for solving this so far is this format description, determined through reverse engineering:
https://github.com/matsmattsson/nibsqueeze/blob/master/NibArchive.md
I can build a parser based on this, but still wonder if there are easier ways.
Another possible solution is to use NSKeyedUnarchiver to decode the data after loading it into an NSNib object, as suggested here:
https://stackoverflow.com/a/4205296/43615
This method of decoding keyed archives of unknown types is also shown in the PlistExplorer project:
https://github.com/karstenBriksoft/PlistExplorer
It seems https://github.com/kam800/MachObfuscator includes a NIBArchive reader, NibArchive+Loading, written in Swift.
I am writing lots of Sphinx/reStructuredText, and this includes Sequence Diagrams using PlantUML. I have lots of text that I am reusing, so to make things cleaner, I created a definitions.iuml file. In this file, I can create named text references (via !startsub/!endsub blocks) that allow me to reference them in several different Sequence Diagrams. Change it once in the source location, and they all change. Perfect.
My problem is how to use these references outside of Sequence Diagrams. I use the exact same code (!includesub ../definitions.iuml!NAMED-REFERENCE) in the .rst file, but when I make docx/pdf, I see that link and not the text that it is referencing. To make things worse, Google has practically no documentation or search results on this; queries for includesub, startsub, endsub +sphinx come back with nothing.
Help me, Obi-Wan Kenobi.
I found the answer, which only resulted in more questions haha. Anyway, one thing at a time:
To create reference variables for your text document, use rst_prolog or rst_epilog in your conf.py file. (Why there are two settings I don't know; rst_prolog is inserted at the start of every source file, rst_epilog at the end.)
rst_prolog = """
.. |Variable1| replace:: Monday
.. |Variable2| replace:: Tuesday
"""
Now whenever you write |Variable1| in your text, the document will generate Monday.
The problem with the above is that it's just for short words/phrases. You can't use it for code blocks or anything that is more than one line. To reference code blocks:
Create a new .rst file with the code you want to display. Best practice is to create a Code folder and place them all in there.
A further best practice is to stop using '.. code-block::' and instead use '.. parsed-literal::'. The output looks exactly the same, but parsed-literal allows you to use the conf.py variables and '.. code-block::' doesn't.
So in this .rst file, the first line is '.. parsed-literal::' and all the text indented below it is the code you want to reference.
In the original document that you wanted this code, type:
<4 spaces indent>.. include:: <Folder/File.rst>
Generate your document, and notice how the code is now being referenced. You can include this reference all throughout your document.
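For example (the file and variable names here are hypothetical), a file Code/Schedule.rst could contain just:

.. parsed-literal::

   run_report --day |Variable1| --fallback-day |Variable2|

and the document that needs it pulls it in with the include line described above, e.g.:

.. include:: Code/Schedule.rst

Because the included file uses parsed-literal, the |Variable1| and |Variable2| substitutions defined in rst_prolog are expanded inside what otherwise renders like a code block.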
I will soon be creating a new thread, this time asking how the text body and the sequence diagrams can use the same reference. Currently, all text needs one reference and all sequence diagrams need another, so we have double updates. Not ideal.
I want to create a fragment file that will only contain a CustomTable in the file. This is easy enough, but I do not know how to link/include it back into the main product.wxs file.
The fragment file is in the same project as the product file, and I have also tried adding an include tag for the file without success, and even putting the custom table into a WiX include file.
Is there a way to do this? Or is it going to have to live in the product file?
The WiX toolset compiles and links in a similar manner to a C/C++ compiler. The linker starts at the "main" entry point (the Product element, in your case) and then follows the references from there, which in turn pull in further references, until everything is resolved.
Part of your question is missing, but based on the title I'm going to guess that you want a CustomTable element. Typically that CustomTable is processed by a CustomAction. There are a couple of good ways to reference a CustomAction.
I would not use an include file.
You could try using EnsureTable if you'd like to make sure the table is created whether or not there is data in it. If you'd like to separate the custom table's schema definition from the data, I believe you can define them in separate fragments and reference the schema definition from the data fragment by opening with <CustomTable Id="your table name"> and defining the rows of data within it.
In general, WiX won't pull fragments into the main authoring unless they contain elements that are referred to somewhere. Since there is currently no such thing as a CustomTableRef, you may opt to use other elements, such as an empty PayloadGroup or ComponentGroup, that you can refer to (using a PayloadGroupRef or ComponentGroupRef respectively) from your main Bundle, Product or Module element, as the case may be.
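For illustration, a minimal sketch of that trick in WiX v3 syntax (the names are hypothetical, and the empty ComponentGroup exists only to give the linker something to reference so the whole fragment gets pulled in):

<Fragment>
  <!-- Empty group used purely as a reference hook for the linker. -->
  <ComponentGroup Id="CustomTableHook" />

  <CustomTable Id="MyCustomTable">
    <Column Id="Property" Type="string" PrimaryKey="yes" Width="72" />
    <Column Id="Value" Type="string" Width="255" />
    <Row>
      <Data Column="Property">SomeKey</Data>
      <Data Column="Value">SomeValue</Data>
    </Row>
  </CustomTable>
</Fragment>

and then, inside a Feature in your Product.wxs:

<ComponentGroupRef Id="CustomTableHook" />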
I have an XML file (actually a Visual C# project file) that I want to manipulate using a Ruby script. I want to read the XML into memory, do some work on it that includes changing some attributes and some text (fixing up some path references), and then write the XML file back out. This isn't so hard.
The hard part is, I want the file I write to look the same as the file I read in, except where I made changes. If the input file used double quotes, I want the output to use double quotes. If the input had a space before />, I want the output to do the same. Basically, I want the output to be the same as the input, except where I explicitly made changes (which, in my case, will only be to attribute values, or to the text content of an element).
I want minimal diffs because this project file is checked into version control -- and because the next time I make a change in Visual Studio, it's going to rewrite it in its preferred format anyway. I want to avoid checking in a bunch of meaningless diffs that will then be changed back again in the near future. I also want to avoid having to open the project in Visual Studio, make a change, and save, before I can commit my Ruby script's changes. I want my Ruby script to just make its changes, nothing more.
I originally just parsed the file with regexes, but ran into cases where I really needed an XML library because I needed to know more about child elements. So I switched to REXML. But it makes the following undesirable changes to my formatting:
It changes all the attributes from double quotes to single quotes.
It escapes all the apostrophes inside attribute values (changing them to &apos;).
It removes the space before />.
It sorts each element's attributes alphabetically, rather than preserving the original order.
I'm working around this by doing a bunch of gsub calls on REXML's output, but is there a Ruby XML-manipulation library that's a better fit for "minimal diff" scenarios?
You can build your own SAX parser (using Nokogiri, for example; it's very easy and I recommend using it) to parse your XML file, change some data in it, and write the processed XML back out with your own customized, built-from-scratch XML generator. The bad news is that you have to build a tiny XML library and generator routine in this case, so it is not a trivial task.
Another way: don't build a SAX parser, but do write an XML generator. Parse the XML with your favourite library, change what you need to change, and generate anything you want. You just need to walk recursively through all the nodes in your document and output them according to your conventions.
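A rough Ruby sketch of that second approach: Nokogiri does the parsing, and a small hand-rolled writer emits the tree with Visual Studio's conventions (double quotes, original attribute order, a space before "/>", apostrophes left alone). The HintPath fix-up is just a hypothetical example of an edit; prefixed names, CDATA and processing instructions are not handled, so adapt it to your own files.

require 'nokogiri'

# Hypothetical edit: adjust to whatever attributes/text you actually fix up.
def fix_paths!(doc)
  doc.xpath('//*[local-name()="HintPath"]').each do |node|
    node.content = node.content.gsub('..\\old\\', '..\\new\\')
  end
end

def esc(text)
  text.gsub('&', '&amp;').gsub('<', '&lt;').gsub('>', '&gt;')
end

def esc_attr(text)
  esc(text).gsub('"', '&quot;')   # leave apostrophes as they are
end

# Emit the tree with double-quoted attributes, original attribute order,
# and a space before "/>"; whitespace text nodes pass through untouched.
def emit(node, out)
  if node.is_a?(Nokogiri::XML::Document)
    out << %(<?xml version="1.0" encoding="utf-8"?>\n)   # match your file's declaration
    node.children.each { |child| emit(child, out) }
  elsif node.comment?
    out << "<!--#{node.content}-->"
  elsif node.text?
    out << esc(node.content)            # original indentation lives in these text nodes
  elsif node.element?
    # csproj files put xmlns last on the root element, so append it after the attributes.
    xmlns = node.namespace_definitions.map do |ns|
      ns.prefix ? %( xmlns:#{ns.prefix}="#{ns.href}") : %( xmlns="#{ns.href}")
    end.join
    attrs = node.attribute_nodes.map { |a| %( #{a.name}="#{esc_attr(a.value)}") }.join
    if node.children.empty?
      out << "<#{node.name}#{attrs}#{xmlns} />"
    else
      out << "<#{node.name}#{attrs}#{xmlns}>"
      node.children.each { |child| emit(child, out) }
      out << "</#{node.name}>"
    end
  end
end

doc = Nokogiri::XML(File.read("MyProject.csproj"))   # default parse keeps whitespace text nodes
fix_paths!(doc)
out = String.new
emit(doc, out)
out << "\n" unless out.end_with?("\n")               # keep a trailing newline
File.write("MyProject.csproj", out)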
I have a tool that creates variables for a simulation. The current workflow involves hand-copying those variables into the simulation input file. The input file is a standard flat file, i.e. not binary or XML. I would like to automate the addition of the variables to the flat input file.
The variables copy over existing variables in the file, e.g.
New Variables:
Length 10
Height 20
Depth 30
Old Variables:
...
Weight 100
Age 20
Length 10
Height 20
Depth 30
...
I would like to have the new variables copied over the old ones. They are about 200 lines into the flat input file.
Thanks for any insights.
P.S. This is on Windows.
If you're stuck using flat files, then you're stuck updating them the old-fashioned way: read from the original and write to a temp file, either copying each row as-is or writing the changed data instead. To add data, write it to the temp file at the appropriate point; to delete data, simply don't copy it over from the original file.
Finally, close both files and rename the temp file to the original file name.
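A small Ruby sketch of that loop, assuming the "Name value" layout shown in the question (the file names and values are made up):

# Overwrite the values of known variables; copy every other line verbatim.
new_vars = { "Length" => "10", "Height" => "20", "Depth" => "30" }

input = "sim_input.txt"
temp  = input + ".tmp"

File.open(temp, "w") do |out|
  File.foreach(input) do |line|
    name = line.split.first
    if name && new_vars.key?(name)
      eol = line.end_with?("\r\n") ? "\r\n" : "\n"   # keep the file's line endings
      out.write "#{name} #{new_vars[name]}#{eol}"
    else
      out.write line
    end
  end
end

File.rename(temp, input)   # swap the temp file in place of the original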
Alternatively, it might be time to think about a little database.
For something like this I'd be looking at a simple template engine. You'd have a base template with predefined marker tokens instead of variable values and then just pass the values required to your engine along with the template and it will spit out the resultant file, all present and correct. There are a number of Open Source template engines available in Java that would meet your needs, I imagine such things are also available in your language of choice. You could even roll your own without too much difficulty.
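In Ruby, for instance, the standard library's ERB is already enough for this; a minimal sketch with a hypothetical template file:

require 'erb'

# sim_input.erb holds the full input file with marker tokens, e.g.:
#   Length <%= length %>
#   Height <%= height %>
#   Depth  <%= depth %>
length, height, depth = 10, 20, 30

template = ERB.new(File.read("sim_input.erb"))
File.write("sim_input.txt", template.result(binding))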
Note that under Unix, one would probably look at using mmap() because you can then use functions such as memmove() to move the data around and add new data or truncate() the result if the file is then smaller (you may also want to use truncate() to grow the file).
Under MS-Windows, you have the MapViewOfFileEx() function to do the same thing. The API is different, though, and I'm not exactly sure what happens or how to grow/shrink the file (MSDN also includes a truncate()-like function, and maybe that works).
Of course, it's important to use memcpy() or memmove() properly to not overwrite the wrong data or go outside the buffer.