How to generate an embed.FS in Go?

I have an embed.FS, like:
//go:embed static
var embedStatic embed.FS
and I want to pass the files through a minifier at startup time. I want to be able to create an in-memory fs.FS with the same files available on embedStatic, but with their content minified.
I know there are external libraries (like Afero and MemFS), but I'd usually try to avoid adding dependencies.
I also know I can do this by creating a new type and implementing all the methods that I care about (Open for fs.FS, ReadDir, etc.) myself, but it seems like everything I want to do is already done by embed.FS, except for the construction of the files.
My question is: is there a way to do this while re-using embed.FS? Can I create an embed.FS on the fly?
I can see that embed.FS has a files *[]file, but it's obviously private. I wonder if there's a way to create a new type and tell Go to "pretend this was created properly and just use it as an embed.FS".

embed.FS is a specific implementation for reading files embedded in the binary - it can't be used for filesystems built at runtime.
There are some fs.FS implementations in the standard library that may work for your use case. You could process your files into:
A temporary filesystem directory and pass it to os.DirFS.
An in-memory ZIP file, using archive/zip.Reader as an fs.FS.
testing/fstest.MapFS. This is really intended for testing, but it is there (a sketch follows below).
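For illustration, a minimal sketch of the MapFS approach (untested; minifyBytes is a hypothetical stand-in for a real minifier such as github.com/tdewolff/minify):

package main

import (
	"embed"
	"io/fs"
	"log"
	"testing/fstest"
)

//go:embed static
var embedStatic embed.FS

// minifyBytes is a hypothetical placeholder: swap in a real minifier
// (e.g. github.com/tdewolff/minify) dispatched on the file extension.
func minifyBytes(path string, data []byte) []byte { return data }

// minifiedFS walks src and returns an in-memory copy of it with every
// regular file passed through the minifier.
func minifiedFS(src fs.FS) (fs.FS, error) {
	out := fstest.MapFS{}
	err := fs.WalkDir(src, ".", func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		data, err := fs.ReadFile(src, path)
		if err != nil {
			return err
		}
		out[path] = &fstest.MapFile{Data: minifyBytes(path, data)}
		return nil
	})
	return out, err
}

func main() {
	fsys, err := minifiedFS(embedStatic)
	if err != nil {
		log.Fatal(err)
	}
	_ = fsys // e.g. serve it with http.FileServer(http.FS(fsys))
}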
Personally, I would either:
Minify via go generate before building the binary, and use embed.FS as-is. This gives a smaller binary with less startup time/memory usage (see the sketch after this list).
Write my own fs.FS, or pull in a dependency, if the files need to be modified at runtime. It's not much code.
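For the go generate option, one possible shape (an untested sketch assuming the tdewolff/minify CLI is installed; its -r and -o flags minify a directory tree, but verify the output layout it produces before embedding):

//go:generate minify -r -o static_min/ static/

//go:embed static_min
var embedStatic embed.FS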

Related

Laravel 8: is it fine to leave unused functionality config files untouched in terms of performance, correctness and security?

Laravel 8 has config files for auth, mail, broadcasting, queue, services, session, etc., but I am not using those functionalities for my specific application.
Is it fine to leave the config files (and corresponding .env settings) untouched, or is it better to delete those files?
I am talking about performance, execution correctness, and security, but not code readability. In other words, I am talking about real effects on my application.
Short answer:
deleting them is not recommended if you don't want to step outside Laravel's ecosystem.
TL;DR:
Since Laravel 5.1 there have been some general changes to how configuration is handled; they apply to your v8 as well.
Every modern Laravel project has a bootstrap/cache folder, which includes a bootstrap/cache/.gitignore file that ignores the other files in it. Those ignored (3-4) files are cached files that are created automatically; out of the box, you can't do anything about that.
The bootstrap/cache/config.php file is responsible for all the configurations. It is built from the config/*.php files as well as from the config files of vendored dependencies. That means every time you use your Laravel app, that file is created automatically using
all the project-specific configurations
all the vendored configurations
environment variables from .env.
Note: there might be a case where some in-box package (like the one behind Laravel's default mail.php config file) or some 3rd-party package (unlikely, but possible) does not ship its own config file, and the vendor code assumes it can read its configs from the appropriate config/*.php file; in that case it's not recommended to delete that file. For example, if you delete config/mail.php, you'll end up with approximately the same cached config apart from some small mail config entries (the saving is only about 60 human-readable lines), but you won't be able to use Laravel's mailing functionality.
The idea is that when you want to override some configs, you just create your own config file(s) (generally with the same name as the one in the vendor's location), so that Laravel caches from config/*.php rather than from vendor/username/package/path/to/config.php.
So, for optimization, Laravel does that caching process once, and from then on it retrieves configs only from the bootstrapped bootstrap/cache/config.php.
That's why, every time you change something in a config/*.php file and/or in the .env file, you need to clear the cache manually; it will then be created again automatically. Or just use the built-in commands:
# this will rebuild the application caches
php artisan optimize
# or only for cleaning and recreating config
# this is an old-school version, but still used
php artisan config:cache
All this means you don't need to delete config files from config/*.php: Laravel checks whether the cached file exists (if not, it creates the cache again, pulling from vendor configs anyway), and it always reads from the cached file.
Conclusion: All this means that
it is not related to security, and there's nothing to be done about it there
if you feel that shrinking the cached bootstrap/cache/config.php from 1566 lines to 1500 lines is an optimization, you're free to do that, but you should know that it could cause problems in the future.
Subjective recommendation: in plain language, I don't recommend doing this. You can still do it, and maybe you'll improve something that other contributors haven't managed yet, but it will not have a real effect on your app.

How to include config files for the Google Ops-agent

I want to set up some configuration for the Google Cloud Ops-Agent in order to deploy it via Ansible.
For example /etc/google-cloud-ops-agent/kafka.yaml
How to include *.yaml configs?
If I use /etc/google-cloud-ops-agent/config.yaml, I'm worried the configuration will be overwritten.
There are two ways I can think of to do this.
The easiest (and least precise): use the copy module to recursively copy the directory content to the target. Of course, if there are files other than *.yaml, you'll get those as well.
The more complex way (and I have not tested this): use the find module, executed locally on the control node, to get a list of the .yaml files, register their locations, and then copy them up. There's probably a simpler way.

How can I use named pipes to stream a GCP Cloud Storage object to an executable that wants input files?

I have a third-party executable that takes a directory path as an argument and in turn looks there for a collection of .db files. I have said collection of files stored in a Google Cloud Storage bucket and would like to stream the content of those files into some local named pipes that can be used as input to the executable.
I'm writing an application to perform the above in Go and am using the "cloud.google.com/go/storage" package to work with cloud storage objects.
As a note, I need all pipes/files to be available for reading at the time I run the executable.
What is the best way to go about this? I'm looking to essentially use the named pipes as a proxy of sorts to make remote files look local to this executable. Is that possible?
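A hedged sketch of the named-pipe approach (untested, Unix-only; the bucket and object names are placeholders): create a FIFO per object with syscall.Mkfifo up front, then have a goroutine copy each object's content in once the executable opens the pipe for reading. One caveat: FIFOs are one-shot, non-seekable streams, so if the executable needs random access to the .db files (as SQLite-style readers typically do), named pipes won't work and you'll need real temporary files instead.

package main

import (
	"context"
	"io"
	"log"
	"os"
	"path/filepath"
	"syscall"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	bucket := client.Bucket("my-bucket")    // placeholder bucket name
	objects := []string{"one.db", "two.db"} // placeholder object names

	dir, err := os.MkdirTemp("", "dbs")
	if err != nil {
		log.Fatal(err)
	}

	// Create every FIFO synchronously so all "files" exist in dir
	// before the executable starts.
	for _, name := range objects {
		pipe := filepath.Join(dir, name)
		if err := syscall.Mkfifo(pipe, 0o600); err != nil {
			log.Fatal(err)
		}
		go func(object, pipe string) {
			// Opening a FIFO for writing blocks until the executable
			// opens it for reading.
			w, err := os.OpenFile(pipe, os.O_WRONLY, 0)
			if err != nil {
				log.Print(err)
				return
			}
			defer w.Close()
			r, err := bucket.Object(object).NewReader(ctx)
			if err != nil {
				log.Print(err)
				return
			}
			defer r.Close()
			if _, err := io.Copy(w, r); err != nil {
				log.Print(err)
			}
		}(name, pipe)
	}

	// Now run the third-party tool against dir; each pipe can be read
	// exactly once, in any order, but it is not seekable.
	// exec.Command("/path/to/tool", dir).Run()
}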

Get path to file in storage

I saved a file to storage using:
$request->file('avatar')->store('avatars');
Which saved it to:
storage/app/avatars/avatar.png
How can I get the path to this file/folder (not the URL)? What is the correct way to do this using Laravel's Filesystem?
There is no correct way to do this, because it should not be done. Storage is an opaque system for talking to different storage backends; as such, there is no API to get the backing file path. As an example, a local path wouldn't exist with Amazon S3. The only path your application knows about is the string you send to the Storage facade to work with the file; there is no guarantee that this string is used to generate the filename when the storage system stores the file.
There are some hacks that work for the local disk, but those are not available for the Storage system in general. Using these solutions means you limit yourself to the local disk only; this will cause you trouble when you need to scale out and add another server. You'd then have two servers with two separate local disks holding separate content.
The correct way to work with the files, which will work for all configurations, is to get the file content (Storage::get), make the modifications (including storing them in a temporary file), and then write back the new file content (Storage::put).
If you're really sure that you will only ever use the local filesystem, use the File facade instead of the Storage facade. I'm unable to find any documentation for this, only the interface it exposes.
Reference: https://github.com/laravel/framework/issues/13610
Try this:
storage_path('app/avatars/avatar.png');
You can only get the storage folder path from this Laravel helper, but you can pass a nested folder path after it and it will prepend the base path as well:
storage_path('folder1/folder2/.../file.png');

Spring API to unzip files

I know Spring has the MultipartFile component.
I am wondering if there is any API to unzip files, or to read zip files, in order to do some processing?
I have a zip file that follows a certain format:
photos\
audio\
report.xml
When the user uploads it via the web, I wish to scan the zip file and do some processing.
Is there a solution for this?
I do not know of any such API in Spring, but you can use other APIs to zip or unzip files:
1) http://commons.apache.org/compress/
2) java.util.zip
See also: What is a good Java library to zip/unzip files?
There are a couple of Java SE APIs for reading ZIP files:
java.util.zip.ZipInputStream - gives you a one-pass reader
java.util.zip.ZipFile - gives you a reader that allows you to read the entries and the files in any order.
You should be able to use one or the other of these, depending on the nature of your processing.
If the processing requires the images to be in actual files, you would have to create the directories and write the files yourself. In this case, it would probably be simpler to use an external command to do the ZIP extraction.
