In Dart we can read YAML config files with, for example, this package:
https://pub.dartlang.org/packages/safe_config
From what I understand, it performs a file access every time.
So I was wondering, is there a clean way to cache that data?
I could do something like a Config.warmUp() on init to load the file, but then, apart from setting a global variable somewhere and importing it, I don't see another option, and I don't think that is a "classy" move.
Is there an internal cache or buffer system included in Dart, or am I obliged to do this global-variable stuff?
PS: It is for an Angular app, so something like localStorage in JS (but hidden from the user) would be a potential solution.
You can always use package:yaml to load the data and then just hold on to the result of the loadYaml call. This will be an in-memory data structure (like a YamlMap) that you read from.
If you want to get fancy, you could use package:json_serializable to map the YAML to a data object. See an example here: https://github.com/dart-lang/json_serializable/blob/master/json_serializable/test/yaml/build_config.dart
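For the first approach, a minimal sketch (assuming the YAML text has already been fetched however your app loads assets; the Config class and warmUp name are just illustrative):

import 'package:yaml/yaml.dart';

/// Illustrative helper: parse the YAML once, then serve all reads from memory.
class Config {
  static YamlMap? _cached;

  /// Call once at startup with the raw YAML text.
  static void warmUp(String yamlSource) {
    _cached = loadYaml(yamlSource) as YamlMap;
  }

  /// Subsequent reads never touch the file (or the network) again.
  static dynamic value(String key) => _cached?[key];
}

Any library that imports this then reads from the cached map instead of re-parsing the file, without needing a loose global variable.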
I am creating a web application in Go.
I have modified my working code so that it can read and write files on both the local filesystem and a Google Cloud Storage bucket, based on a flag.
Basically I introduced a small package in the middle and implemented my-own-pkg.ReadFile, my-own-pkg.WriteFile and so on...
I have replaced all calls in my code where I read or save files from the local filesystem with calls to my methods.
Finally, these methods include a simple switch case that runs either the standard code to read/write locally or the code to read/write from/to a GCS bucket.
My current problem
In some parts I need to perform a ReadDir to get the list of DirEntries and then cycle through them. I do not want to change my code except for replacing os.ReadDir with my-own-pkg.ReadDir.
So far I understand that there is no native function for this in the GCP module. So I suppose (but here I need your help, because I am just guessing) that I would need an implementation of fs.FS for GCS. It being a new feature of Go 1.16, I guess it's too early to find one.
So I am simply trying to create a my-own-pkg.ReadDir(folderpath) function that does the following:
case "local": { }
case "gcp": {
<Use gcp code sample to list objects in my bucket with Query.Prefix = folderpath and
Query.Delimiter="/"
Then create a slice of my-own-pkg.DirEntry (because fs.DkrEntry is just an interface and so it needs to be implemented... :-( ) and return them.
In order to do so I also need to implement the fs.DirEntry interface (which in turn requires implementing FileInfo and maybe something else...).
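To make the plan concrete, a rough skeleton of what I have in mind (names are placeholders; the local branch just delegates to os.ReadDir, and the GCS branch is sketched after the answer below):

package myownpkg

import (
	"fmt"
	"io/fs"
	"os"
)

// backend is the flag the rest of my code already switches on ("local" or "gcp").
var backend = "local"

// ReadDir mirrors the os.ReadDir signature so the call sites don't change.
func ReadDir(folderpath string) ([]fs.DirEntry, error) {
	switch backend {
	case "local":
		return os.ReadDir(folderpath)
	case "gcp":
		// Would list bucket objects with Query.Prefix/Delimiter and wrap the
		// results in a custom fs.DirEntry implementation (see below).
		return nil, fmt.Errorf("gcp backend not implemented in this sketch")
	default:
		return nil, fmt.Errorf("unknown backend %q", backend)
	}
}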
Question 1) is this the right path to follow to solve my issue or is there a better way?
Question 2) (only) if so, does the GCP method that lists objects with a prefix and a delimiter return just files? I can't see a method that also returns the list of prefixes found.
(If I have prefix/file1.txt and prefix/a/file2.txt I would like to get both "file1.txt" and "a" as files and prefixes...)
I hope I was clear enough... This time I can't include code because it's incomplete... But in case it helps I can paste what I can.
NOTE: by the way, Go 1.16 allowed me to elegantly solve a similar issue when dealing with assets either embedded or on the filesystem, thanks to the existing implementation of fs.FS and the related ReadDirFS. So it would be good if I could follow the same route 🙂
By the way, I will keep studying and experimenting, so in case I am successful I will contribute back as well :-)
I think your abstraction layer is good, but you need to know something about Cloud Storage: directories don't exist.
In fact, all the objects are placed at the root of the bucket / and the fully qualified name of an object is /path/to/object.file. You can filter on a prefix; that returns all the objects (i.e. files, because directories don't exist) with the same path prefix.
It's not a full answer to your question, but I'm sure you can rethink and redesign the rest of your code with this particularity in mind.
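With that in mind, the "gcp" branch could look roughly like this (a sketch using the cloud.google.com/go/storage client; names are illustrative, and it returns plain names rather than fs.DirEntry values, which you would still need to wrap). Note that with Query.Delimiter set, the iterator yields both objects (attrs.Name set) and synthetic "directory" prefixes (attrs.Prefix set), which addresses question 2:

package myownpkg

import (
	"context"
	"path"
	"strings"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

// listGCS returns the file names and sub-prefixes directly under folderpath.
func listGCS(ctx context.Context, client *storage.Client, bucket, folderpath string) ([]string, error) {
	if folderpath != "" && !strings.HasSuffix(folderpath, "/") {
		folderpath += "/" // GCS prefixes are plain strings, so the trailing slash matters
	}
	it := client.Bucket(bucket).Objects(ctx, &storage.Query{
		Prefix:    folderpath,
		Delimiter: "/",
	})
	var entries []string
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, err
		}
		if attrs.Prefix != "" {
			// Synthetic "directory" entry, e.g. "prefix/a/" -> "a".
			entries = append(entries, path.Base(attrs.Prefix))
		} else {
			// Regular object, e.g. "prefix/file1.txt" -> "file1.txt".
			entries = append(entries, path.Base(attrs.Name))
		}
	}
	return entries, nil
}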
I'm currently trying to add scripting functionality to my C++ application by using V8. The goal is to process some data in buffers with JS and then return the result. I think I can generate an ArrayBuffer by using New with an appropriate BackingStore. The result would be a Local<ArrayBuffer>. I would now like to run scripts via v8::Script::Compile and v8::Script::Run. What would be the name of the ArrayBuffer - or how can I assign it a name so that it's accessible in the script? Do I need to make it a Global if I need to run multiple scripts on the same ArrayBuffer?
If you want scripts to be able to access, say, my_array_buffer, then you'll have to install the ArrayBuffer as a property on the global object. See https://v8.dev/docs/embed for an introduction to embedding V8, and the additional examples linked from there. In short, it'll boil down to something like:
global_object->Set(context,
                   v8::String::NewFromUtf8(isolate, "my_array_buffer").ToLocalChecked(),
                   array_buffer).Check();
You don't need to store the ArrayBuffer in a Global for this to work.
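Putting the pieces together, a minimal sketch of the whole flow (assuming an Isolate and Context are already set up as in the embedding guide; error handling is reduced to ToLocalChecked()/Check() for brevity, and the function name is made up):

#include <memory>
#include "v8.h"

// Wraps externally owned memory in an ArrayBuffer, exposes it to scripts as the
// global "my_array_buffer", then compiles and runs a script against it.
void RunScriptOnBuffer(v8::Isolate* isolate, v8::Local<v8::Context> context,
                       std::shared_ptr<v8::BackingStore> backing_store,
                       const char* source_text) {
  v8::HandleScope handle_scope(isolate);
  v8::Context::Scope context_scope(context);

  v8::Local<v8::ArrayBuffer> array_buffer =
      v8::ArrayBuffer::New(isolate, backing_store);

  context->Global()
      ->Set(context,
            v8::String::NewFromUtf8(isolate, "my_array_buffer").ToLocalChecked(),
            array_buffer)
      .Check();

  v8::Local<v8::String> source =
      v8::String::NewFromUtf8(isolate, source_text).ToLocalChecked();
  v8::Local<v8::Script> script =
      v8::Script::Compile(context, source).ToLocalChecked();
  script->Run(context).ToLocalChecked();
}

Every script compiled and run in the same context sees the same my_array_buffer property; a v8::Global is only needed if a handle has to outlive the HandleScope it was created in, for example if you keep the context or buffer around between calls.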
I have set up direct uploads to S3 with Shrine. This works great. Among others, I have the following plugins enabled:
Shrine.plugin :backgrounding
Shrine.plugin :store_dimensions
Shrine.plugin :restore_cached_data
Correct me if I'm wrong, but image dimension extraction appears to be done synchronously. If I let the user bulk-upload images via Uppy and then persist them all, this seems to take a long time.
What I'd like to do is perform image dimensions extraction asynchronously - I don't need the dimensions available for the cached file. If possible, I'd like to do that in the background when the file gets promoted to the store. Is there a way to do it?
The way I got this to work is by making use of the :refresh_metadata plugin, instead of :restore_cached_data which I used originally. Thanks to Janko for pointing me in the right direction.
Reading into the source code provided some useful insights. The :store_dimensions plugin by itself doesn't extract dimensions - it adds width and height to the metadata hash so that when Shrine's base class requests metadata, they get extracted too.
By using :restore_cached_data, this was being done on every assignment. :restore_cached_data uses :refresh_metadata internally so we can use that knowledge to only call it when the file is promoted to the store.
I have :backgrounding and :store_dimensions set up in the initializer so the final uploader can be simplified to this:
class ImageUploader < Shrine
  plugin :refresh_metadata
  plugin :processing

  # Runs during promotion (in the background job), not on assignment.
  process(:store) do |io, context|
    io.refresh_metadata!(context) # dimensions are extracted here via :store_dimensions
    io
  end
end
This way persisting data we get from Uppy is super fast and we let the background job extract dimensions when the file is promoted to the store, so they can be used later.
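For completeness, the :backgrounding wiring in the initializer can look roughly like this (a sketch following the Shrine 2.x backgrounding documentation and assuming Sidekiq; PromoteJob is an illustrative name):

Shrine.plugin :backgrounding
Shrine.plugin :store_dimensions

Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }

class PromoteJob
  include Sidekiq::Worker

  # Promotion happens here, off the request thread, and triggers the
  # process(:store) block above - including refresh_metadata!.
  def perform(data)
    Shrine::Attacher.promote(data)
  end
end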
Finally, should you have questions related to Shrine, I highly recommend its dedicated Google Group. Kudos to Janko for not only creating an amazing piece of software (seriously, go read the source), but also for his dedication to supporting the community.
I am trying to optimize a RequireJS project first for HTTP/2 and SPDY, and secondly for HTTP/1.x. Rather than concatenating all of my RequireJS modules into one big file, I am hacking r.js to resave the modules that would have been concatenated as separate files. This will be better for the new protocols, because each file will be cached individually, and a change to one file won't invalidate the caches of all the other files. For the old protocols, I will pay a penalty in additional HTTP requests, but I can at least avoid subsequent rounds of requests by appending revision hashes to my files (i.e. "module.js" becomes "module.89abcdef.js") and setting a long max-age on them, so I want to do that.
r.js adds names to previously-anonymous modules. That means that the file "module.js" with this definition:
define(function () { ... })
Becomes this:
define("module.js", function () { ... })
I want this, because I will be loading some of my modules via <script> elements (to prevent RequireJS HTTP requests acting as a bottleneck to dependency resolution; I can instead utilize r.js's topological sorting to generate <script> elements which the browser can start loading as soon as it parses them). However, I also plan to load some modules dynamically (i.e., via RequireJS's programmatic insertion of <script> elements). This means RequireJS may at some point request a module via a URL.
Here is the problem I have run into: I am generating a RequireJS map configuration object to map my modules' original module IDs to their revision-hashed module IDs, so that when RequireJS loads a module dynamically, it requests the revision-hashed file name. But the modules it loads, which are now named modules thanks to r.js, have the un-revisioned name of the module, because I don't revision the files until after I minify them, which I only do after r.js generates them (thus r.js doesn't know about the revisions).
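For reference, the kind of map configuration described here looks something like this (module IDs and hashes are made up, following the naming used above):

require.config({
  map: {
    "*": {
      "module.js": "module.89abcdef.js",
      "other.js": "other.0123abcd.js"
    }
  }
});

So when a module is requested dynamically as "module.js", RequireJS actually requests the revision-hashed name.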
So the file "module.89abcdef.js" contains the module:
define("module.js", function () { ... })
When these named modules execute, define will register a module with the wrong name. When other modules require "module.js", this name will be mapped to "module.89abcdef.js" (thanks to the map configuration), but since a module with that name was not actually defined after the onload event of a <script> with the src attribute "module.89abcdef.js" fired, the module they receive is undefined.
To be faithful to the new file name, the file "module.89abcdef.js" should probably instead contain the definition:
define("module.89abcdef.js", function () { ... })
So, I believe the solution to my problem will be to replace the unrevisioned name of the module in the r.js-generated file with the revisioned name. However, this means the hash of the file is now invalidated. My question: In this situation, does it matter that the hash of the file is invalid?
I think it's impossible that the module name in the source code could be faithful to the file name, because if the module name had a hash, well, then the hash of the file would be different, so the module name would become the new hash, and so on recursively.
I fear there might be a case where the unfaithful hash would cause the file to still be cached even when it'd actually changed, but I cannot think of one. It seems that appending the same hash for the file's original contents to the file's name and the module's name should always still result in a unique file.
(My lengthy explanation is an attempt to eliminate certain "workaround"-type answers. If I weren't loading modules dynamically, then I wouldn't need to use map and I wouldn't have a file-name-vs-module-name mismatch; but I do load modules dynamically, so that workaround won't work. I also believe I would not need to revision files if I were only serving clients on newer protocols [I could just use ETags as there would be no additional overhead to make the comparison], but I do serve clients on the older protocols, so that workaround doesn't work either. And finally, I don't want to run r.js after revisioning files, because I only want to revision after minifying the files, in case minification maps two sources to the same output, in which case I could avoid an unnecessary cache invalidation.)
I am trying to add autosave support to the Core Data file wrapper example.
Now, if I have a new/untitled document, writeSafelyToURL is called with the NSAutosaveElsewhereOperation type.
The bad thing is, I get this type in both typical use cases:
- new file: which stores a completely new document by creating the file wrapper and the persistent store file
- save diff: where the file wrapper already exists and only an update is required
Has somebody else already dealt with this topic, or has somebody already migrated this?
The original sample uses the originalStoreURL to distinguish those two use cases; which solution worked best for you?
Thanks