JSON file as config - go

I have a JSON file as config. The problem I see is that it cannot be compiled into the Go binary, and I'm worried that this might also affect the performance of the application, since the JSON is parsed for every request. Would I be better off using a struct and initialising it in a separate Go file?

If you can store the configuration in Go code, then I assume the configuration does not change during the execution of the application. Load the configuration when the application starts and store the parsed representation in memory, possibly referenced from a package-level variable.
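A minimal sketch of that approach, assuming a hypothetical config.json; the struct fields (Port, BaseURL) are placeholders, not taken from your actual config:

package config

import (
	"encoding/json"
	"os"
)

// Config mirrors the JSON file; adjust the fields to match your own config.
type Config struct {
	Port    int    `json:"port"`
	BaseURL string `json:"base_url"`
}

// C holds the parsed configuration, referenced from a package-level variable.
var C Config

// Load reads and parses the file once; call it from main at startup.
func Load(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	return json.Unmarshal(data, &C)
}

After calling config.Load("config.json") once in main, request handlers read config.C directly, so the JSON is never re-parsed per request.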

Related

How to parse and convert local DART configuration file to objects in dart / flutter on windows

(Windows platform)
I have a complex local configuration file (a Dart file) full of objects, enums, etc. that I'd like to parse and load as objects.
Note: the data is full of nested complex objects, and I don't want to spend ten years trying to convert it to JSON with no benefit, possibly ending up with something unserializable.
So basically the program loads, opens its default configuration file:
loadedConfig = default embedded config.
I want to read/write/maintain a local Dart asset file and load its contents as loadedConfig.
There is plenty of info on converting to/from JSON, but there is no apparent info on loading a simple Dart (text) file and deserializing it into Dart objects. The closest I've found is storing simple numbers or lists of strings.
It really makes sense to have a local Dart file as a config file on a static platform.
Anyone with the wisdom on how to do this or a better unrelated approach? Cheers.

how to generate an embed.FS?

I have an embed.FS, like:
//go:embed static
var embedStatic embed.FS
and I want to (at startup time) pass the files through a minifier. I want to be able to create an in-memory fs.FS with the same files available on embedStatic, but with their content minified.
I know there are external libraries (like Afero and MemFS), but I'd usually try to avoid adding dependencies.
I also know I can do this by creating a new interface and implementing all the methods that I care about (Open for fs.FS, ReadDir, etc...) by myself, but it seems like everything that I want to do is already done by embed.FS, except for the construction of the files.
My question is: is there a way to do this while re-using embed.FS? Can I create an embed.FS on the fly?
I can see that embed.FS has a files *[]file, but it's obviously private. I wonder if there's a way to create a new type and tell Go to "pretend this was created properly and just use it as an embed.FS".
embed.FS is a specific implementation for reading files embedded in the binary - it can't be used for filesystems built at runtime.
There are some fs.FS implementations in the standard library that may work for your use case. You could process your files into:
A temporary directory on disk, then pass it to os.DirFS.
An in-memory ZIP file, using archive/zip.Reader as an fs.FS.
A testing/fstest.MapFS. This is really intended for testing, but it is there (see the sketch below).
Personally, I would either:
Minify via go generate before building the binary and use embed.FS. This could give a smaller binary with less startup time and memory usage.
Write my own fs.FS or pull in a dependency if the files need to be modified at runtime. It's not much code.
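A rough sketch of the fstest.MapFS option, assuming a placeholder minify function (standing in for whatever minifier you actually use) and a throwaway main that serves the result over HTTP purely to show usage:

package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
	"testing/fstest"
)

//go:embed static
var embedStatic embed.FS

// minify is a stand-in for your real minifier; it currently returns the data unchanged.
func minify(path string, data []byte) []byte { return data }

// minifiedFS walks the embedded files and returns an in-memory fs.FS
// with the same paths but minified contents.
func minifiedFS() (fs.FS, error) {
	out := fstest.MapFS{}
	err := fs.WalkDir(embedStatic, ".", func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		data, err := fs.ReadFile(embedStatic, path)
		if err != nil {
			return err
		}
		out[path] = &fstest.MapFile{Data: minify(path, data)}
		return nil
	})
	return out, err
}

func main() {
	fsys, err := minifiedFS()
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/", http.FileServer(http.FS(fsys)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}

The embedded files themselves stay untouched; only the in-memory copies handed out by minifiedFS are minified.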

existdb: identify database server

We have a number of (developer) existDb database servers, and some staging/production servers.
Each has its own configuration, and they differ slightly.
We need to select which configuration to load and use in queries.
The configuration is to be stored in an XML file within the repository.
However, when syncing the content of the servers, a single burnt-in XML file is not sufficient, since it is overwritten during copying from the other server.
For this, we need the physical name of the actual database server.
The only function I have found, request:get-server-name, is not quite stable, since a single eXist server can be accessed through a number of different URLs (localhost, intranet or external). That leads to unnecessary duplication of the configuration, one copy for each external URL...
(Accessing local files in the file system is neither secure nor fast.)
How can I get the physical name of the eXist-db server from XQuery?
I'm sorry, but I don't fully understand your question: are you talking about eXist's default conf.xml, or your own configuration file that you need to store in a VCS repo? Should the XQuery be executed on one instance and trigger an event in all the others, or just some, or...? Without some code it is difficult to see why and when something gets overwritten.
You could try console:jmx-token, which does not vary depending on the URL (at least it shouldn't).
Also, you might find it much easier to use a Docker-based approach, either with multiple instances coordinated via docker-compose, or to keep the individual configs from interfering with each other when moving from dev to staging to production: https://github.com/duncdrum/exist-docker
If I understand correctly, you basically want to be able to get the hostname or the IP address of a server from XQuery. If the functions in the XQuery Request module are not doing as you wish, then another option would be to set a Java System Property when starting eXist-db. This system property could be the internal DNS name or IP of your server, for example: -Dour-server-name=server1.mydomain.com
From XQuery you could then read that Java System property using util:system-property("our-server-name").

Google Cloud Logs Export Names

Is there a way to configure the names of the files exported from Logging?
Currently the exported files include colons in their names. These are invalid characters in a path element in Hadoop, so PySpark, for instance, cannot read the files. Obviously the easy solution is to rename them, but this interferes with syncing.
Is there a way to configure the names, or change them to not include colons? Any other solutions are appreciated. Thanks!
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md
At this time, there is no way to change the naming convention when exporting log files as this process is automated on the backend.
If you would like to request this feature in GCP, I would suggest creating a Public Issue Tracker (PIT) request; that page allows you to report bugs and request new features to be implemented within GCP.

Is it possible to configure properties like jcr:PrimaryType from a maven install

I'm following the steps from the Adobe instructions on How to Build AEM Projects using Maven, and I'm not seeing how to populate or configure the metadata for the contents.
I can edit and configure the actual files, but when I push the zip file to the CQ instance, the installed component has a jcr:primaryType of nt:folder, while the item I'm trying to duplicate has a jcr:primaryType of cq:Component (as well as many other properties). So is there a way to populate that data without needing to manually interact with CQ?
I'm very new to AEM, so it's entirely possible I've overlooked something very simple.
Yes, it is possible to configure JCR node types without manually changing them in CQ.
Make sure you have a .content.xml file in the component folder and that it contains the correct jcr:primaryType (e.g. jcr:primaryType="cq:Component").
This file contains the metadata for mapping the JCR node onto the file system.
For beginners it may be useful to take a look at VLT, which is used for importing and exporting JCR content to and from the file system. The component's files in your project should look similar to the result of a VLT export of that component from the JCR.
