Purpose of /var/resource_config.json - Magento

I'm trying to figure out the purpose of the file /var/resource_config.json in Magento. It appears to be a cache of some configuration, but I can't see where in the source code it is being created and/or updated.
I'm in the process of setting up local/dev/staging/prod environments for an EE 1.12 build and want to figure out whether I can safely exclude it from my repo or whether I need to script some updates to it for deploys.
Maybe the flash image uploader in admin creates it?
Any ideas or directions to look?

This is a configuration cache file for the "alternative media store" system. This is a system where requests for media files are routed through get.php, which allows you to store media in the database instead of the file system. (That may be a gross oversimplification, as I've never used the feature myself.)
You can (and should) safely exclude this file from deployments/source control, as it's a cache file and will be auto-generated as needed. See the following code block in the root-level get.php for more information.
if (!$mediaDirectory) {
    $config = Mage_Core_Model_File_Storage::getScriptConfig();
    $mediaDirectory = str_replace($bp . $ds, '', $config['media_directory']);
    $allowedResources = array_merge($allowedResources, $config['allowed_resources']);
    $relativeFilename = str_replace($mediaDirectory . '/', '', $pathInfo);

    $fp = fopen($configCacheFile, 'w');
    if (flock($fp, LOCK_EX | LOCK_NB)) {
        ftruncate($fp, 0);
        fwrite($fp, json_encode($config));
    }
    flock($fp, LOCK_UN);
    fclose($fp);

    checkResource($relativeFilename, $allowedResources);
}
Speaking in general terms, Magento's var folder serves the same purpose as the *nix /var folder:
Variable files—files whose content is expected to continually change during normal operation of the system—such as logs, spool files, and temporary e-mail files.
and it should be kept local to each particular system (i.e. not be a part of deployments).
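For example, if the project is tracked in Git, the file can be kept out of the repository with an ignore rule (a sketch, assuming the repository root is the Magento base directory):
# .gitignore: keep Magento's generated cache file out of version control
/var/resource_config.json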

Related

Spring Integration SFTP - issue with filters and number of messages emitted

I started using Spring Integration SFTP and I have some questions.
Filters are not working. I have this example configuration:
Sftp.inboundAdapter(ftpFileSessionFactory())
        .preserveTimestamp(true)
        .deleteRemoteFiles(false)
        .remoteDirectory(integrationProperties.getRemoteDirectory())
        .filter(sftpFileListFilter()) // doesn't work
        .patternFilter("*.xlsx") // doesn't work
And my ChainFileListFilter:
private ChainFileListFilter<ChannelSftp.LsEntry> sftpFileListFilter() {
    ChainFileListFilter<ChannelSftp.LsEntry> chainFileListFilter = new ChainFileListFilter<>();
    chainFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(metadataStore(), "INT"));
    chainFileListFilter.addFilter(new SftpSimplePatternFileListFilter("*.xlsx"));
    return chainFileListFilter;
}
If I understand correctly, only XLSX files should be saved in the local directory. If so, it doesn't work with this configuration. Am I doing something wrong, or have I misunderstood it?
How can I configure SFTP so that each downloaded file emits a message? I see two params in the docs, max-messages-per-poll and max-fetch-size, but I don't know how to set them so that every file emits a message. I would like to sync files once every 24 hours and produce a batch job queue. Maybe there is a workaround?
Is there a built-in filter which allows me to fetch only files with changed content? The best solution would be to check the checksums of the files.
I will be grateful for your help and explanations.
You cannot combine filter() and patternFilter(). Only one of them can be used: the last one overrides whatever you used before. In other words: either filter() or patternFilter(), not both. By default the logic is like this:
public SftpInboundChannelAdapterSpec patternFilter(String pattern) {
    return filter(composeFilters(new SftpSimplePatternFileListFilter(pattern)));
}

private CompositeFileListFilter<ChannelSftp.LsEntry> composeFilters(FileListFilter<ChannelSftp.LsEntry> fileListFilter) {
    CompositeFileListFilter<ChannelSftp.LsEntry> compositeFileListFilter = new CompositeFileListFilter<>();
    compositeFileListFilter.addFilters(fileListFilter,
            new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "sftpMessageSource"));
    return compositeFileListFilter;
}
So, technically you don't need your custom one if you don't use an external persistent MetadataStore. But if you do, think about flipping SftpSimplePatternFileListFilter with SftpPersistentAcceptOnceFileListFilter, since it is better to check for the pattern before storing the file into the MetadataStore.
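A minimal sketch of that reordering (reusing the metadataStore() helper and the "INT" prefix from the question):
private ChainFileListFilter<ChannelSftp.LsEntry> sftpFileListFilter() {
    ChainFileListFilter<ChannelSftp.LsEntry> chainFileListFilter = new ChainFileListFilter<>();
    // Check the file name first, so only *.xlsx files reach the persistent filter...
    chainFileListFilter.addFilter(new SftpSimplePatternFileListFilter("*.xlsx"));
    // ...and only those are recorded as "already seen" in the MetadataStore.
    chainFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(metadataStore(), "INT"));
    return chainFileListFilter;
}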
It is a fact that every synched remote file that passed those filters is stored into the local dir, and the message for that local file is emitted immediately when the poller does a request.
The maxFetchSize plays its role when we load remote files into the local dir. The maxMessagesPerPoll is used by the poller, but those messages are already built from the local files. The message is emitted per local file, not as a batch for all of them; that's not what messaging is designed for.
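As an illustration only (not from the original answer; the local directory and flow name are placeholders, while ftpFileSessionFactory(), integrationProperties and sftpFileListFilter() are the beans from the question), a Java DSL flow along these lines polls once every 24 hours, fetches all matching remote files, and still emits one message per local file:
@Bean
public IntegrationFlow sftpInboundFlow() {
    return IntegrationFlows
            .from(Sftp.inboundAdapter(ftpFileSessionFactory())
                            .preserveTimestamp(true)
                            .remoteDirectory(integrationProperties.getRemoteDirectory())
                            .filter(sftpFileListFilter())
                            .localDirectory(new File("/tmp/sftp-local"))
                            .maxFetchSize(-1),                    // fetch every matching remote file per poll
                    e -> e.poller(Pollers.fixedDelay(Duration.ofHours(24))
                            .maxMessagesPerPoll(-1)))             // then drain all local files, one message each
            .handle(message -> System.out.println("Downloaded: " + message.getPayload()))
            .get();
}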
Please share more info about what does not work with the files. The SftpPersistentAcceptOnceFileListFilter checks not only the file name, but also the mtime of the file. So it's not about any checksum, but rather about the last-modified timestamp of the file.

Yocto PREMIRROR/SOURCE_MIRROR_URL with url arguments (SAS_TOKEN) possible?

I successfully created a premirror for our Yocto builds on an Azure Storage Blob;
that works if I set the access level to "Blob (Anonymous read)..".
Now I want to keep the blob completely private, with access only via SAS tokens.
SAS_TOKEN = "?sv=2019-12-12&ss=bf&srt=co&sp=rdl&se=2020-08-19T17:38:27Z&st=2020-08-19T09:38:27Z&spr=https&sig=abcdef_TEST"
INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "https://somewhere.blob.core.windows.net/our-mirror/downloads/BASENAME${SAS_TOKEN}"
BB_FETCH_PREMIRRORONLY = "1"
In general this works, but Yocto (or to be exact, the BitBake fetch module) will then try to fetch from https://somewhere.blob.core.windows.net/our-mirror/downloads/bash-5.0.tar.gz%3Fsv%3D2019-12-12%26ss%3Dbf%26srt%3Dco%26sp%3Drdl%26se%3D2020-08-19T17%3A38%3A27Z%26st%3D2020-08-19T09%3A38%3A27Z%26spr%3Dhttps%26sig%3Dabcdef_TEST/bash-5.0.tar.gz
which URL-encodes the special characters of the parameters, so of course the fetch will fail.
Has anybody already solved this or a similar issue?
Or is it possible to patch files inside the poky layer (namely in ./layers/poky/bitbake/lib/bb/fetch2) without changing them, so I can roll my own encodeurl function there?

Possible to stream an mp3/mp4 from Dropbox API v2 with PHP?

Yesterday I set it up so I can serve MP3 files stored in my Dropbox using https://github.com/spatie/dropbox-api and Laravel. However, this only works for smallish files because, the way it's working now, it has to load the entire file first and then serve it from Laravel. This doesn't work at all for movies or long tracks, as it takes forever and runs out of memory.
Here's the code I'm currently using:
$authorizationToken = 'my-api-token';
$client = new \Spatie\Dropbox\Client($authorizationToken);

$path = "/offline/a-very-long-song.mp3"; // path in dropbox
$stream = $client->download($path);
$file = stream_get_contents($stream);
fclose($stream);
unset($stream);

$file_info = new \finfo(FILEINFO_MIME_TYPE);

return response($file, 200)->withHeaders([
    'Content-Type' => $file_info->buffer($file),
    'Content-Disposition' => 'inline; filename="' . basename($path) . '"',
]);
I was wondering if there's a way to stream it so it doesn't have to load the entire file first. I guess this happens naturally when you load a media file in the browser, but since there are no direct links to the physical file with Dropbox, I'm not sure if it's possible.
The Dropbox API does offer the ability to retrieve temporary direct links that can be used for streaming files like this, via the /2/files/get_temporary_link endpoint:
https://www.dropbox.com/developers/documentation/http/documentation#files-get_temporary_link
In the library you're using, that appears to be available as the getTemporaryLink method, as shown in the example here:
https://github.com/spatie/dropbox-api#a-minimal-implementation-of-dropbox-api-v2
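A minimal sketch of that approach (assuming the same spatie/dropbox-api client as above; the token and path are placeholders), which redirects the player to the temporary link so Dropbox serves the bytes instead of buffering them through PHP:
$authorizationToken = 'my-api-token';
$client = new \Spatie\Dropbox\Client($authorizationToken);

$path = "/offline/a-very-long-song.mp3"; // path in dropbox

// Ask Dropbox for a short-lived direct URL to the file
// (the link expires after a few hours, so request a fresh one each time).
$temporaryLink = $client->getTemporaryLink($path);

// Let the browser/player stream straight from Dropbox.
return redirect()->away($temporaryLink);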

Access the Android Special Folder Path by using Environment

I want to save my logs to a folder which I can access with Windows Explorer. For example, I want to create my log in the following path:
This PC\Galaxy A5 (2017)\Phone\Android\data\MyApp\files
So I tried to use the Environment variables... I get paths such as
/data/user/...
But here I cannot see the file that I created (using code I can access the path, but I want to see it in Explorer).
How can I create a path like the above with code?
When I tried this code
var finalPath2 = Android.OS.Environment.GetExternalStoragePublicDirectory(
    Android.OS.Environment.DataDirectory.AbsolutePath);
I get the path "/storage/emulated/0/data"
and
If I use the code
var logDirectory = Path.Combine(System.Environment.GetFolderPath(
    System.Environment.SpecialFolder.ApplicationData), "logs");
I get the following path:
/data/user/0/MyApp/files/.config/logs
and
var logDirectory = Path.Combine(System.Environment.GetFolderPath(
    System.Environment.SpecialFolder.MyDocuments), "logs");
"/data/user/0/IM.OneApp.Presentation.Android/files/logs"
but unfortunately I cannot access this folder with Explorer either:
This PC\Galaxy A5 (2017)\Phone\Android\data\MyApp\files
So how do I find this path in C# by using the environment folders?
Update:
When I hardcode the following path, it creates the file where I want it:
logDirectory = "/storage/emulated/0/Android/data/MyApp/files/logs";
Is there any environment value that produces this path? I could combine two environment folders and do some string processing to build it, but maybe there is an easier way?
You are looking for the root of GetExternalFilesDir, just pass a null:
Example:
var externalAppPathNoSec = GetExternalFilesDir(string.Empty).Path;
Note: this is a Context-based instance method; you can access it via the Android application context, an Activity, etc. (see the link to the Android Context docs below).
Shared storage may not always be available, since removable media can be ejected by the user. Media state can be checked using Environment.getExternalStorageState(File).
There is no security enforced with these files. For example, any application holding Manifest.permission.WRITE_EXTERNAL_STORAGE can write to these files.
re: https://developer.android.com/reference/android/content/Context#getExternalFilesDir(java.lang.String)
string docFolder = Path.Combine(System.Environment.GetFolderPath(
    System.Environment.SpecialFolder.MyDocuments), "logs");
// Note: because the second argument is an absolute path, Path.Combine
// returns it as-is, so docFolder is effectively ignored here.
string libFolder = Path.Combine(docFolder, "/storage/emulated/0/Android/data/MyApp/files/logs");
if (!Directory.Exists(libFolder))
{
    Directory.CreateDirectory(libFolder);
}
string destinationDatabasePath = Path.Combine(libFolder, "temp.db3");
db.Backup(destinationDatabasePath, "main");
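A rough sketch of building the logs folder from GetExternalFilesDir instead of a hardcoded path (assuming this runs inside an Activity or another Context; on most devices this resolves to /storage/emulated/0/Android/data/<package-name>/files/logs):
// Root of the app-specific external storage: Android/data/<package-name>/files
var externalRoot = GetExternalFilesDir(null).AbsolutePath;

// Append the "logs" subfolder and make sure it exists.
var logDirectory = Path.Combine(externalRoot, "logs");
if (!Directory.Exists(logDirectory))
{
    Directory.CreateDirectory(logDirectory);
}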

Getting the filename/path from MvvmCross Plugins.DownloadCache

I'm currently using MvvmCross DownloadCache -- and it's working alright -- especially nice when I just need to drop in an Image URL and it automagically downloads / caches the image and serves up a UIImage.
I was hoping to leverage the code for one other use case -- which is that I'd like to grab source images from URLs and cache the files on the local file system, but what I really want for this other use case is the image path on the local file system instead of the UIImage itself.
What would help me most is an example of how I might accomplish that. Is it possible to make that happen in a PCL, or does it need to go into the platform-specific code?
Thanks -- that works, but just in case anyone else is following along, I wanted to document how I got the Mvx.Resolve<IMvxFileDownloadCache>() to work. In my setup.cs (in the touch project), I had:
protected override void InitializeLastChance()
{
    Cirrious.MvvmCross.Plugins.DownloadCache.PluginLoader.Instance.EnsureLoaded();
    Cirrious.MvvmCross.Plugins.File.PluginLoader.Instance.EnsureLoaded();
    Cirrious.MvvmCross.Plugins.Json.PluginLoader.Instance.EnsureLoaded();
    ...
}
But that wasn't enough, because nothing actually registers IMvxFileDownloadCache inside the DownloadCache plugin (which I was expecting, but it's just not the case).
So then I tried adding this line here:
Mvx.LazyConstructAndRegisterSingleton<IMvxFileDownloadCache, MvxFileDownloadCache>();
But that failed because the MvxFileDownloadCache constructor takes a few arguments. So I ended up with this:
protected override void InitializeLastChance()
{
    ...
    var configuration = MvxDownloadCacheConfiguration.Default;
    var fileDownloadCache = new MvxFileDownloadCache(
        configuration.CacheName,
        configuration.CacheFolderPath,
        configuration.MaxFiles,
        configuration.MaxFileAge);
    Mvx.RegisterSingleton<IMvxFileDownloadCache>(fileDownloadCache);
    ...
}
And the resolve works okay now.
Question:
I do wonder whether two MvxFileDownloadCache objects that are configured in exactly the same way will cause issues by stepping on each other. I could avoid that question by changing the cache name on the one I'm constructing by hand, but I do want it to be a single cache (the assets will be the same).
If you look at the source for the plugin, you'll find https://github.com/MvvmCross/MvvmCross/blob/3.2/Plugins/Cirrious/DownloadCache/Cirrious.MvvmCross.Plugins.DownloadCache/IMvxFileDownloadCache.cs - that will give you a local file path for a cached file:
public interface IMvxFileDownloadCache
{
    void RequestLocalFilePath(string httpSource, Action<string> success, Action<Exception> error);
}
You can get hold of a service implementing this interface using Mvx.Resolve<IMvxFileDownloadCache>()
To then convert that into a system-wide file path, try NativePath in https://github.com/MvvmCross/MvvmCross/blob/3.2/Plugins/Cirrious/File/Cirrious.MvvmCross.Plugins.File/IMvxFileStore.cs#L27
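Putting the two together, a rough sketch (assuming the DownloadCache and File plugins are loaded and IMvxFileDownloadCache has been registered as described above; the image URL is just a placeholder):
var downloadCache = Mvx.Resolve<IMvxFileDownloadCache>();
var fileStore = Mvx.Resolve<IMvxFileStore>();

downloadCache.RequestLocalFilePath(
    "http://example.com/images/picture.png", // placeholder source URL
    localPath =>
    {
        // localPath is relative to the MvvmCross file store; NativePath
        // turns it into a full system path usable outside the plugin.
        var nativePath = fileStore.NativePath(localPath);
        System.Diagnostics.Debug.WriteLine("Cached file at: " + nativePath);
    },
    error => System.Diagnostics.Debug.WriteLine("Download failed: " + error.Message));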
