I have the following problem:
I want to transform a PDF/A-1A document into a PDF/A-3A document.
The original document validates in Acrobat Pro, so I can assume it is PDF/A-1A conformant.
I try to convert the PDF metadata with the following code:
private PDDocumentCatalog makeA3compliant(PDDocument doc) throws IOException, TransformerException {
    PDDocumentCatalog cat = doc.getDocumentCatalog();
    PDMetadata metadata = new PDMetadata(doc);
    cat.setMetadata(metadata);

    XMPMetadata xmp = new XMPMetadata();
    XMPSchemaPDFAId pdfaid = new XMPSchemaPDFAId(xmp);
    xmp.addSchema(pdfaid);

    XMPSchemaDublinCore dc = xmp.addDublinCoreSchema();
    String creator = "TestCr";
    String producer = "testPr";
    dc.addCreator(creator);
    dc.setAbout("");

    XMPSchemaBasic xsb = xmp.addBasicSchema();
    xsb.setAbout("");
    xsb.setCreatorTool(creator);
    xsb.setCreateDate(GregorianCalendar.getInstance());

    PDDocumentInformation pdi = new PDDocumentInformation();
    pdi.setProducer(producer);
    pdi.setAuthor(creator);
    doc.setDocumentInformation(pdi);

    XMPSchemaPDF pdf = xmp.addPDFSchema();
    pdf.setProducer(producer);
    pdf.setAbout("");

    PDMarkInfo markinfo = new PDMarkInfo();
    markinfo.setMarked(true);
    doc.getDocumentCatalog().setMarkInfo(markinfo);

    pdfaid.setPart(3);
    pdfaid.setConformance("A");
    pdfaid.setAbout("");

    metadata.importXMPMetadata(xmp);
    return cat;
}
If I try to validate the new file with Acrobat again, I get a validation error:
CIDset in subset font is incomplete (font contains glyphs that are not listed)
If I validate the file with this online validator (http://www.pdf-tools.com/pdf/validate-pdfa-online.aspx), it is reported as a valid PDF/A-3A...
Am I missing something?
Is nobody able to help?
EDIT: Here is the PDF file
This is what worked for us to become fully PDF/A-3 compliant with regard to the CIDSet issue:
private void removeCidSet(PDDocumentCatalog catalog) {
    COSName cidSet = COSName.getPDFName("CIDSet");
    // iterate over all pdf pages
    for (Object object : catalog.getAllPages()) {
        if (object instanceof PDPage) {
            PDPage page = (PDPage) object;
            Map<String, PDFont> fonts = page.getResources().getFonts();
            Iterator<String> iterator = fonts.keySet().iterator();
            // iterate over all fonts
            while (iterator.hasNext()) {
                PDFont pdFont = fonts.get(iterator.next());
                if (pdFont instanceof PDType0Font) {
                    PDType0Font typedFont = (PDType0Font) pdFont;
                    if (typedFont.getDescendantFont() instanceof PDCIDFontType2Font) {
                        PDCIDFontType2Font f = (PDCIDFontType2Font) typedFont.getDescendantFont();
                        PDFontDescriptor fontDescriptor = f.getFontDescriptor();
                        if (fontDescriptor instanceof PDFontDescriptorDictionary) {
                            PDFontDescriptorDictionary fontDict = (PDFontDescriptorDictionary) fontDescriptor;
                            fontDict.getCOSDictionary().removeItem(cidSet);
                        }
                    }
                }
            }
        }
    }
}
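For completeness, here is a hypothetical call site wiring the two snippets together; it assumes PDFBox 1.8.x (which matches the classes used above) and placeholder file names:
// Hypothetical usage of the two methods above; file names are placeholders.
PDDocument doc = PDDocument.load(new File("input-a1a.pdf"));
makeA3compliant(doc);
removeCidSet(doc.getDocumentCatalog());
doc.save("output-a3a.pdf");   // in PDFBox 1.8.x this throws IOException and COSVisitorException
doc.close();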
OK - I think I have an answer on your question from the perspective of the callas and/or Adobe technology (and once more, I'm affiliated with callas and its pdfToolbox technology that is also used inside of Acrobat).
According to my research and the people I consulted, your example PDF document contains a font with a CID character set that is incomplete. Why does pdfToolbox or Acrobat say it's a valid PDF/A-1a file but not a valid PDF/A-3a file? Interesting question:
1) The rules for incomplete CID sets changed between PDF/A-1a and PDF/A-3a. They are stricter in PDF/A-3a.
2) But while in PDF/A-1a a CID set always had to be there, in PDF/A-3a you can have a valid, compliant file, without such a CID set.
So, your PDF file contains a CID set (which makes it valid for PDF/A-1a and A-3a), but while that CID set is fine for A-1a, it does not contain all the characters needed to be A-3a compliant.
To test at least part of this theory, I processed your file through pdfToolbox with a fixup entitled "Remove CIDset if incomplete". That correction (as the name implies) removes the CID set from the file but doesn't change anything else. After doing so your file validates as a valid A-3a file.
That leaves the question why the pdftools web site claims this is a valid PDF/A-3a file; according to the people I've spoken to, the result from preflight for this file is correct and there should be an error on this file. So perhaps that's something you need to take up with the pdftools guys (and they possibly with callas to figure out who's finally right).
Feel free to send me a personal message if you want to discuss this further - more discussion on the tools themselves probably becomes off-topic for this public site.
In a Spring Boot application I read an image file from a remote service, which returns a byte array; in the response headers I can check what the file extension is:
ResponseEntity<byte[]> result = restTemplate.exchange(url, HttpMethod.GET, entity, byte[].class);
Now I want to put this byte array into an S3 bucket, in a folder that I decide at run time; for example, the folder name can be based on the current timestamp.
I checked the AmazonS3 class, but it doesn't seem to have any such API that can help me.
How can this be done?
As per example from documentation:
https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/examples-s3-objects.html#upload-object
// Put Object. Here 'bytes' is the byte array.
PutObjectResponse response = s3.putObject(
        PutObjectRequest.builder().bucket(bucketName).key(filePathLocation).build(),
        RequestBody.fromBytes(bytes));
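To get the run-time folder asked about in the question, a minimal sketch (still assuming the AWS SDK for Java v2 used in the example above; bucket name, region and file name are placeholders) can simply build the key with a timestamp prefix, since an S3 "folder" is just a key prefix:
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;

// 'bytes' is the byte[] taken from the ResponseEntity in the question.
S3Client s3 = S3Client.builder().region(Region.EU_WEST_1).build();   // pick your region
String folder = LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"));
String key = folder + "/image.jpg";                                  // placeholder object name
PutObjectResponse response = s3.putObject(
        PutObjectRequest.builder().bucket("my-bucket").key(key).build(),
        RequestBody.fromBytes(bytes));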
You can use the MinIO java S3 client. Here you can find the documentation.
The code will look something like the following one:
MinioClient minioClient =
    MinioClient.builder()
        .endpoint("https://play.min.io")
        .credentials("Q3AM3UQ867SPQQA43P2F", "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG")
        .build();

StringBuilder builder = new StringBuilder();
for (int i = 0; i < 1000; i++) {
    builder.append(
        "Sphinx of black quartz, judge my vow: Used by Adobe InDesign to display font samples. ");
    builder.append("(29 letters)\n");
    builder.append(
        "Jackdaws love my big sphinx of quartz: Similarly, used by Windows XP for some fonts. ");
    builder.append("(31 letters)\n");
    builder.append(
        "Pack my box with five dozen liquor jugs: According to Wikipedia, this one is used on ");
    builder.append("NASAs Space Shuttle. (32 letters)\n");
    builder.append(
        "The quick onyx goblin jumps over the lazy dwarf: Flavor text from an Unhinged Magic Card. ");
    builder.append("(39 letters)\n");
    builder.append(
        "How razorback-jumping frogs can level six piqued gymnasts!: Not going to win any brevity ");
    builder.append("awards at 49 letters long, but old-time Mac users may recognize it.\n");
    builder.append(
        "Cozy lummox gives smart squid who asks for job pen: A 41-letter tester sentence for Mac ");
    builder.append("computers after System 7.\n");
    builder.append(
        "A few others we like: Amazingly few discotheques provide jukeboxes; Now fax quiz Jack! my ");
    builder.append("brave ghost pled; Watch Jeopardy!, Alex Trebeks fun TV quiz game.\n");
    builder.append("---\n");
}

// Create an InputStream for object upload.
ByteArrayInputStream bais = new ByteArrayInputStream(builder.toString().getBytes("UTF-8"));

// Create object 'my-objectname' in 'my-bucketname' with content from the input stream.
minioClient.putObject(
    PutObjectArgs.builder().bucket("my-bucketname").object("my-objectname").stream(
            bais, bais.available(), -1)
        .build());
bais.close();
System.out.println("my-objectname is uploaded successfully");
The full code can be found here.
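For the question's byte array (rather than the sample string above), a minimal sketch, assuming the minioClient built above, the MinIO 8.x PutObjectArgs API, and a placeholder object name and content type:
import java.io.ByteArrayInputStream;
import io.minio.PutObjectArgs;

// 'result' is the ResponseEntity<byte[]> from the question; the "folder" is just a key prefix.
byte[] bytes = result.getBody();
String objectName = System.currentTimeMillis() + "/image.jpg";   // placeholder name
try (ByteArrayInputStream bais = new ByteArrayInputStream(bytes)) {
    minioClient.putObject(
        PutObjectArgs.builder()
            .bucket("my-bucketname")
            .object(objectName)
            .stream(bais, bytes.length, -1)
            .contentType("image/jpeg")   // assumed; in practice take it from the response headers
            .build());
}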
Check out the AWS Java SDK.
Here is the getting-started section:
https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/getting-started.html
In order to use it in a Spring context, use the Maven dependency:
https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/setup-project-maven.html
Uploading an object to S3 Bucket:
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-objects.html#upload-object
import com.amazonaws.AmazonServiceException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.File;

// file_path, bucket_name and key_name come from your own configuration or arguments
System.out.format("Uploading %s to S3 bucket %s...\n", file_path, bucket_name);
final AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).build();
try {
    s3.putObject(bucket_name, key_name, new File(file_path));
} catch (AmazonServiceException e) {
    System.err.println(e.getErrorMessage());
    System.exit(1);
}
All I'd like to do here is open an InDesign CC 2018 file, pull out text selectively (here I've chosen to grab the content inside the XML tag called "Title" from the named-tag window on the InDesign application side), save it to a txt file, and close the InDesign document. I'm working in the ExtendScript app, using Adobe InDesign CC 2018 (13.064). I just need to push to a txt file only certain named data (text boxes, XML tags, page items, etc.), selecting the contents by the name of the data holder. But XML tags are the only objects I can name in the InDesign app apart from layers, and layers won't work for other reasons. So I'm stuck, unable to refer to XML-tagged contents. Please help.
Note:
I get an error with this code saying "Title" isn't defined. I understand the error, but I'm not sure how to use the method XML.toString() without referring to an object that's named inside the InDesign file. So I guess I'm using the wrong method to refer to XML-tagged data already located in a file?
So naturally I throw out XML.toString() and use the commented-out code (below), "app.activeDocument.xmlItems.item", thinking maybe I will get an array of all XML-tagged items. That isn't even specific enough for my goal, but I'm desperate, and I get another, newer error regarding the "exportFile" line of code: myArticles.exportFile() is not a function.
My code so far:
app.open(File("C:/Users/Sean/Desktop/New folder/va tech 2.indd"), true);
myArticles = Title.toString();
//THIS ATTEMPT WON'T WORK EITHER AS RPLCMNT FOR LINE ABOVE: myArticles= app.activeDocument.xmlItems.item;
myArticles.exportFile(ExportFormat.textType, new File("/C/Users/Sean/Desktop/New folder/test.txt"), false);
app.documents.everyItem().close(SaveOptions.NO);
var main = function() {
    var doc, root, xes, n, nXE, st, xc, strs = [],
        f = File ( Folder.desktop + "/contents.txt" );
    try {
        // NEED TO CHANGE THE URL. Ex: some/url > /Users/user/Desktop/foo.indd
        doc = app.open ( File ( "some/url" ) );
    }
    catch(err) {
        alert(err.message);
        return;
    }
    if ( !doc ) {
        alert("You need an open document" );
        return;
    }
    root = doc.xmlElements[0];
    xes = root.evaluateXPathExpression("//Title");
    n = xes.length;
    while ( n-- ) {
        nXE = xes[n];
        xc = nXE.xmlContent;
        if ( xc instanceof Story ) {
            strs.push( xc.contents );
        }
    }
    if ( strs.length ) {
        f.open('w');
        f.write ( strs.reverse().join("\r") );
        f.close();
    }
}

var u;
app.doScript ( "main()", u, u, UndoModes.ENTIRE_SCRIPT, "The Script" );
I am trying to display a mainframe file in an Eclipse RCP application using the JRecord library. I already have the COBOL copybook as a text file.
To accomplish that:
I transfer the file from the mainframe to my desktop through the Apache Commons Net FTPClient API
Now I have a text file
I remove the newline and carriage return characters
Then I read it via a CobolIoProvider and convert it into an ArrayList of type AbstractLine
But I have offset issues because of some special characters.
Here are the issues:
When I don't perform step #3, there are offset issues right from record 1; hence I included step #3.
Even when I perform step #3, the first few thousand records seem to be formatted (or read) by the AbstractLineReader correctly, until it encounters a special character (not sure, but that's my assumption).
Code snippet:
ArrayList<AbstractLine> lines = new ArrayList<AbstractLine>();
InputStream copyStream;
InputStream fis;
try {
    copyStream = new FileInputStream(new File(copybookfile));
    String filec = FileUtils.readFileToString(new File(datafile));
    System.out.println("initial len: " + filec.length());
    filec = filec.replaceAll("\r", "");
    filec = filec.replaceAll("\n", "");
    System.out.println("initial len: " + filec.length());
    fis = new ByteArrayInputStream(filec.getBytes());
    CobolIoProvider ioProvider = CobolIoProvider.getInstance();
    AbstractLineReader reader = ioProvider.newIOBuilder(copyStream, "REQUEST",
            Convert.FMT_MAINFRAME).newReader(fis);
    AbstractLine line;
    while ((line = reader.read()) != null) {
        lines.add(line);
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
What am I missing here? Is there additional preprocessing that I need to do for the file transferred from the mainframe?
If it is a Text File (no binary data) with \r\n line delimiters try:
ArrayList<AbstractLine> lines = new ArrayList<AbstractLine>();
InputStream copyStream;
try {
    copyStream = new FileInputStream(new File(copybookfile));
    AbstractLineReader reader = CobolIoProvider.getInstance()
            .newIOBuilder(copyStream, "REQUEST", ICopybookDialects.FMT_MAINFRAME)
            .setFileOrganization(Constants.IO_STANDARD_TEXT_FILE)
            .newReader(datafile);
    AbstractLine line;
    while ((line = reader.read()) != null) {
        lines.add(line);
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
Note: The setFileOrganization tells JRecord what type of file it is. So .setFileOrganization(Constants.IO_STANDARD_TEXT_FILE) tells JRecord it is a Text file with \n or \r\n end-of-line markers. Here is a Description of FileOrganisation in JRecord.
The special characters worry me though; if there is a \n in the 'Data' it will be treated as an end-of-line. You may need to do a binary transfer and keep the RDW (Record-Descriptor-Word) if it is a VB file.
If the file contains binary data, you will need to (see the sketch after this list):
do a binary transfer (with the RDW if it is a VB file)
use the appropriate File-Organisation
specify EBCDIC (.setFont("cp037") tells JRecord the file is US EBCDIC)
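Putting those three points together, a minimal sketch (reusing the builder calls from the snippet above, and assuming a VB file transferred in binary with the RDW kept; to my knowledge Constants.IO_VB is the JRecord file organization for that layout):
ArrayList<AbstractLine> lines = new ArrayList<AbstractLine>();
try (InputStream copyStream = new FileInputStream(new File(copybookfile))) {
    AbstractLineReader reader = CobolIoProvider.getInstance()
            .newIOBuilder(copyStream, "REQUEST", ICopybookDialects.FMT_MAINFRAME)
            .setFileOrganization(Constants.IO_VB)   // binary VB file with the RDW kept
            .setFont("cp037")                       // US EBCDIC
            .newReader(datafile);
    AbstractLine line;
    while ((line = reader.read()) != null) {
        lines.add(line);
    }
    reader.close();
} catch (IOException e) {
    e.printStackTrace();
}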
I will add a second answer for Generating Code using the RecordEditor
If you are absolutely sure all the records are the same length, you can use the low-level routines to do the reading; see the ReadAqtrans.java program in https://sourceforge.net/p/jrecord/discussion/678634/thread/4b00fed4/
Basically you would do:
ICobolIOBuilder iobuilder = CobolIoProvider.getInstance()
        .newIOBuilder("copybookFileName", ICopybookDialects.FMT_MAINFRAME)
        .setFont("CP037")
        .setFileOrganization(Constants.IO_FIXED_LENGTH);
LayoutDetail layout = iobuilder.getLayout();

FixedLengthByteReader br
        = new FixedLengthByteReader(layout.getMaximumRecordLength() + 2);
br.open("...");

byte[] bytes;
while ((bytes = br.read()) != null) {
    lines.add(iobuilder.newLine(bytes));
}
Future Reference / Binary File
If the file does contain Binary Data, you really need to do a binary transfer. You may find the RecordEditor useful.
The RecordEditor 0.98 has a JRecord code Generation
function. The advantages of using the RecordEditor Generate function are
The RecordEditor will try to work out the appropriate file attributes by looking at the file
You can try out various attributes (left hand pane) and see what the file looks like with those attributes
(right hand side).
When happy, hit the Generate button and the RecordEditor will generate JRecord code. There are several Code Templates
available:
Standard - will generate basic JRecord code (with a field-name class)
lineWrapper - will generate a "wrapper" class with the Cobol fields represented as get/set methods
RecordEditor Generate
In the RecordEditor select Generate >>> Java~JRecord code for Cobol
Generate Screen
Enter the Cobol CopyBook / Sample file and adjust the attributes as needed
Code Template
Next you can select the Code Template
Generated Code
Finally the RecordEditor will generate JRecord code based on the Attributes entered.
I'm using MaxentTagger for POS tagging and sentence splitting with the following code:
MaxentTagger tagger = new MaxentTagger("models/left3words-wsj-0-18.tagger");
@SuppressWarnings("unchecked")
List<Sentence<? extends HasWord>> sentences = MaxentTagger.tokenizeText(new BufferedReader(new StringReader(out2)));
for (Sentence<? extends HasWord> sentence : sentences) {
    content.append(sentence + "\n");
    Sentence<TaggedWord> tSentence = MaxentTagger.tagSentence(sentence);
    out.append(tSentence.toString(false) + "\n");
}
The problem is that it complains there are untokenizable characters in the text, and the tagged output omits those untokenizable characters. So, for example, the original text is:
Let Σ be a finite set of function symbols, the signature.
where Σ is in Big5 encoding. But the program shows the following warning message:
Untokenizable: Σ (first char in decimal: 931)
and the tagged output is:
Let/VB be/VB a/DT finite/JJ set/NN of/IN function/NN symbols/NNS ,/, the/DT signature/NN ./.
The split sentence I get is:
Let be a finite set of function symbols , the signature .
My question is how to retain these untokenizable characters?
I've tried modifying the model's props file, but with no luck:
tagger training invoked at Sun Sep 21 23:03:26 PDT 2008 with arguments:
model = left3words-wsj-0-18.tagger
arch = left3words,naacl2003unknowns,wordshapes(3)
trainFile = /u/nlp/data/pos-tagger/train-wsj-0-18 ...
encoding = Big5
initFromTrees = false
Any suggestions?
Thanks for Prof. Manning's help. But I encounter the same issue when using the parse tree.
The sequel
I need to get the parse tree of a sentence, so I used the following code:
PTBTokenizer<Word> ptb = PTBTokenizer.newPTBTokenizer(new StringReader(sentences));
List<Word> words = ptb.tokenize();
Tree parseTree2 = lp.apply(words);
TreebankLanguagePack tlp = new PennTreebankLanguagePack();
GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
GrammaticalStructure gs = gsf.newGrammaticalStructure(parseTree2);
But I don't know how to configure the PTBTokenizer to resolve the issue of untokenizable characters this time.
If I use the factory method to generate a PTBTokenizer object, I don't know how to connect it to the StringReader.
List<Word> words = ptb.getTokenizer(new StringReader(sentences));
doesn't work.
The Stanford tokenizer accepts a variety of options to control tokenization, including how characters it doesn't know about are handled. However, to set them, you currently have to instantiate your own tokenizer. But that's not much more difficult than what you have above. The following complete program makes a tokenizer with options and then tags using it.
The "noneKeep" option means that it logs no messages about unknown characters but keeps them and turns each into a single character token. You can learn about the other options in the PTBTokenizer class javadoc.
NOTE: you seem to be using a rather old version of the tagger. (We got rid of the Sentence class and started just using Lists of tokens about 2 years ago, probably around the same time these options were added to the tokenizer.) So you may well have to upgrade to the latest version. At any rate, the code below will only compile correctly against a more recent version of the tagger.
import java.io.*;
import java.util.*;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.process.*;
import edu.stanford.nlp.objectbank.TokenizerFactory;
import edu.stanford.nlp.tagger.maxent.MaxentTagger;

/** This demo shows user-provided sentences (i.e., {@code List<HasWord>})
 *  being tagged by the tagger. The sentences are generated by direct use
 *  of the DocumentPreprocessor class.
 */
class TaggerDemo2 {

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("usage: java TaggerDemo modelFile fileToTag");
            return;
        }
        MaxentTagger tagger = new MaxentTagger(args[0]);
        TokenizerFactory<CoreLabel> ptbTokenizerFactory =
            PTBTokenizer.factory(new CoreLabelTokenFactory(), "untokenizable=noneKeep");
        BufferedReader r =
            new BufferedReader(new InputStreamReader(new FileInputStream(args[1]), "utf-8"));
        PrintWriter pw = new PrintWriter(new OutputStreamWriter(System.out, "utf-8"));
        DocumentPreprocessor documentPreprocessor = new DocumentPreprocessor(r);
        documentPreprocessor.setTokenizerFactory(ptbTokenizerFactory);
        for (List<HasWord> sentence : documentPreprocessor) {
            List<TaggedWord> tSentence = tagger.tagSentence(sentence);
            pw.println(Sentence.listToString(tSentence, false));
        }
    }

}
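For the follow-up about the parse tree, a minimal sketch (assuming the same recent version and imports as TaggerDemo2 above, plus edu.stanford.nlp.trees.Tree, and the 'lp' and 'sentences' variables from the question) instantiates a tokenizer from the factory and feeds the tokens to the parser:
// Build a tokenizer that keeps unknown characters, then hand the tokens to the parser.
TokenizerFactory<CoreLabel> factory =
    PTBTokenizer.factory(new CoreLabelTokenFactory(), "untokenizable=noneKeep");
List<CoreLabel> words = factory.getTokenizer(new StringReader(sentences)).tokenize();
Tree parseTree2 = lp.apply(words);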
The MSI stores the installation directory for the future uninstall tasks.
Using the INSTALLPROPERTY_INSTALLLOCATION property (that is, "InstallLocation") works only if the installer has set the ARPINSTALLLOCATION property during the installation. But this property is optional and almost nobody uses it.
How could I retrieve the installation directory?
Use a registry key to keep track of your install directory, that way you can reference it when upgrading and removing the product.
Using WiX, I would create a Component that creates the key, right after the Directory tag declaration of the install directory.
I'd use MsiGetComponentPath() - you need the ProductId and a ComponentId, but you get the full path to the installed file - just pick one that goes to the location of your installation directory. If you want to get the value of a directory for any random MSI, I do not believe there is an API that lets you do that.
I would try to use Installer.OpenProduct(productcode). This opens a session, on which you can then ask for Property("TARGETDIR").
Try this:
var sPath = this.Context.Parameters["assemblypath"].ToString();
As stated elsewhere in the thread, I normally write a registry key in HKLM to be able to easily retrieve the installation directory for subsequent installs.
In cases when I am dealing with a setup that hasn't done this, I use the built-in Windows Installer feature AppSearch: http://msdn.microsoft.com/en-us/library/aa367578(v=vs.85).aspx to locate the directory of the previous install by specifying a file signature to look for.
A file signature can consist of the file name, file size, file version and other file properties. Each signature can be specified with a certain degree of flexibility, so you can find different versions of the same file, for instance by specifying a version range to look for. Please check the SDK documentation: http://msdn.microsoft.com/en-us/library/aa371853(v=vs.85).aspx
In most cases I use the main application EXE and set a tight signature by looking for a narrow version range of the file with the correct version and date.
Recently I needed to automate a Natural Docs install through Ketarin. I could have assumed it was installed into the default path (%ProgramFiles(x86)%\Natural Docs), but I decided to take a safer approach. Sadly, even though the installer created a key under HKLM\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall, none of its values led me to the install dir.
Stein's answer suggests the AppSearch MSI function, and it looks interesting, but sadly the Natural Docs MSI installer doesn't provide a Signature table, so that approach doesn't work here.
So I decided to search through the registry for any reference to the Natural Docs install dir, and I found one under the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components key.
I developed a Reg class in C# for Ketarin that allows recursion. I look through all values under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components, and if the main application executable (NaturalDocs.exe) is found in one of the subkey values, its directory is extracted (C:\Program Files (x86)\Natural Docs\NaturalDocs.exe becomes C:\Program Files (x86)\Natural Docs) and added to the system environment variable %PATH% (so I can call "NaturalDocs.exe" directly instead of using the full path).
The Registry "class" (functions, actually) can be found on GitHub (RegClassCS).
System.Diagnostics.ProcessStartInfo startInfo = new System.Diagnostics.ProcessStartInfo("NaturalDocs.exe", "-h");
startInfo.UseShellExecute = false;
startInfo.CreateNoWindow = true;
var process = System.Diagnostics.Process.Start (startInfo);
process.WaitForExit();
if (process.ExitCode != 0)
{
string Components = @"SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components";
bool breakFlag = false;
string hKeyName = "HKEY_LOCAL_MACHINE";
if (Environment.Is64BitOperatingSystem)
{
hKeyName = "HKEY_LOCAL_MACHINE64";
}
string[] subKeyNames = RegGetSubKeyNames(hKeyName, Components);
// Array.Reverse(subKeyNames);
for(int i = 0; i <= subKeyNames.Length - 1; i++)
{
string[] valueNames = RegGetValueNames(hKeyName, subKeyNames[i]);
foreach(string valueName in valueNames)
{
string valueKind = RegGetValueKind(hKeyName, subKeyNames[i], valueName);
switch(valueKind)
{
case "REG_SZ":
// case "REG_EXPAND_SZ":
// case "REG_BINARY":
string valueSZ = (RegGetValue(hKeyName, subKeyNames[i], valueName) as String);
if (valueSZ.IndexOf("NaturalDocs.exe") != -1)
{
startInfo = new System.Diagnostics.ProcessStartInfo("setx", "path \"%path%;" + System.IO.Path.GetDirectoryName(valueSZ) + "\" /M");
startInfo.Verb = "runas";
process = System.Diagnostics.Process.Start (startInfo);
process.WaitForExit();
if (process.ExitCode != 0)
{
Abort("SETX failed.");
}
breakFlag = true;
}
break;
/*
case "REG_MULTI_SZ":
string[] valueMultiSZ = (string[])RegGetValue("HKEY_CURRENT_USER", subKeyNames[i], valueKind);
for(int k = 0; k <= valueMultiSZ.Length - 1; k++)
{
Ketarin.Forms.LogDialog.Log("valueMultiSZ[" + k + "] = " + valueMultiSZ[k]);
}
break;
*/
default:
break;
}
if (breakFlag)
{
break;
}
}
if (breakFlag)
{
break;
}
}
}
Even if you don't use Ketarin, you can easily paste the function and build it through Visual Studio or CSC.
A more general approach can be taken using RegClassVBS, which allows registry key recursion and doesn't depend on the .NET Framework platform or build processes.
Please note that the process of enumerating the Components key can be CPU intensive. The example above has a Length property that you can use to show some progress to the user (maybe something like "i of (subKeyNames.Length - 1) keys remaining"; be creative). A similar approach can be taken in RegClassVBS.
Both classes (RegClassCS and RegClassVBS) have documentation and examples that can guide you. You can use them in any software and contribute to their development by making a commit on the Git repo, and (of course) by opening an issue on their GitHub pages if you find a problem you can't resolve yourself, so we can try to reproduce it and figure out what to do about it. =)