Using Redis for Symfony var/cache and var/logs

I'm running Symfony 3.* instances in Homestead with PHP 7.1, and I recently moved the cache and logs directories out of my main (NFS-synced) folder, as the constant syncing was greatly diminishing the performance of the whole installation.
Is there a way, through configuration or a workaround, to redirect the logging and caching that usually go to ./var/ entirely to Redis?

You can turn logging off or down, or elect not to write it to a file and instead send it to Redis or other targets. Monolog supports many optional targets, usually with a support library and some configuration.
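For the logging half, a minimal sketch of pointing Monolog at Redis in Symfony 3 might look like this (the service ids, the list key, and the connection details are illustrative; it assumes the phpredis extension and a Monolog version that ships Monolog\Handler\RedisHandler, which pushes each formatted record onto a Redis list):

# app/config/config_dev.yml
services:
    # A phpredis client; connection details are illustrative
    app.redis_client:
        class: Redis
        calls:
            - [connect, ['127.0.0.1', 6379]]

    # Monolog's stock Redis handler: RPUSHes each record onto the 'app:logs' list
    app.monolog.redis_handler:
        class: Monolog\Handler\RedisHandler
        arguments: ['@app.redis_client', 'app:logs']

monolog:
    handlers:
        redis:
            type: service
            id: app.monolog.redis_handler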
The cache files, however, are not designed to be written elsewhere. Because they are written to disk as PHP, they can then be cached by OPcache.
That doesn't mean var/* has to live on a real disk though. If you have shared memory available as a RAM disk (tmpfs, typically mounted at /dev/shm), an app can quite easily be altered to use it for cache and/or log files:
class AppKernel extends Kernel
{
    // ...

    public function getCacheDir()
    {
        if (in_array($this->environment, array('dev', 'test'))) {
            return '/dev/shm/appname/cache/' . $this->environment;
        }

        return parent::getCacheDir();
    }

    public function getLogDir()
    {
        if (in_array($this->environment, array('dev', 'test'))) {
            return '/dev/shm/appname/logs';
        }

        return parent::getLogDir();
    }
}
Source: http://www.whitewashing.de/2013/08/19/speedup_symfony2_on_vagrant_boxes.html via https://stackoverflow.com/a/10784563

Related

Preact PWA, configure maximumFileSizeToCacheInBytes to change this limit

I'm creating a PWA using the Preact CLI.
When compiling I get the warning:
/bundle.js is 2.15 MB, and won't be precached. Configure
maximumFileSizeToCacheInBytes to change this limit.
Since Preact CLI generates a service worker at build time, I've created a custom "sw.js" that uses the Preact CLI high-level API to customize it.
The question is how to use this API to change the "maximumFileSizeToCacheInBytes" (if possible).
This is definitely a situation where you need to be careful that you're not accidentally precaching large items, but if you do need to raise the limit, you can use your preact.config.js file to do so. All of this is handled by Workbox's InjectManifest plugin, so editing the service worker itself would do nothing.
preact.config.js
export default {
    webpack(config, env, helpers, options) {
        const manifestPlugin = helpers.getPluginsByName(config, 'InjectManifest')[0];
        if (manifestPlugin) {
            // 5 MB, or however much you need
            manifestPlugin.plugin.config.maximumFileSizeToCacheInBytes = 5 * 1024 * 1024;
        }
    },
};

How do I clear local images from Electron's cache when using a preload script?

Edit: While I still do not understand the differences between the session and webFrame caches, webFrame.clearCache() can simply be called from within the preload script.
Problem
I have an Electron application which involves renaming and reordering images on the local filesystem. This often results in files swapping their filenames which causes caching issues that are not resolved until the window is reloaded. I am unable to clear or disable the cache.
Methods that work (unsatisfactory)
Calling require("electron").webFrame.clearCache(); from within the renderer process. This is not a satisfactory solution as it requires nodeIntegration to be enabled. (The WebFrameMain class available to the main process does not have a clearCache method).
Checking "Disable cache" from Chrome DevTools. Obviously this is not a solution for production.
Methods that don't work
Clearing the session cache. I noted that the session cache size was always 0.
mainWindow.webContents.session.clearCache();
Clearing the session storage data.
mainWindow.webContents.session.clearStorageData();
Adding the following command line switches to the main process.
app.commandLine.appendSwitch("disk-cache-size", "0");
app.commandLine.appendSwitch("disable-http-cache");
Providing a session object with cache disabled when creating the window.
webPreferences: {
    preload: path.join(__dirname, "preload.js"),
    session: session.fromPartition("example", { cache: false })
}
There are clearly components to the caching system I do not understand. It seems that the session cache and webFrame cache must be two different things, and I cannot find a way to access the latter from the main process or without nodeIntegration.
A minimal project which shows this issue can be found here: https://github.com/jacob-c-bickel/electron-clear-cache-test. Clicking the button swaps the filenames of the two images, however no visible change occurs until the window is reloaded.
I am using Electron 13.1.4 and Windows 10 2004.
You can create a function that calls require("electron").webFrame.clearCache(); and attach it to the window object in your preload script, then use that in your isolated renderer.
Google "electron preload"
Yeah, webFrame is meant to be used inside the browser, so that's usually going to mean preload.js. You can then use Electron's contextBridge to expose that method globally on the window object, allowing use elsewhere in your app!
It's like you're carving out a path through each level of your app for access to the right functions.
Altogether, that will look like this:
preload.js:
// preload.js runs in a CommonJS context, so use require() rather than ESM imports
const { contextBridge, webFrame } = require("electron");

const rendererApi = {
    clearCache() {
        webFrame.clearCache();
    },
};

// Expose the API as window.api inside the isolated renderer
contextBridge.exposeInMainWorld("api", rendererApi);
And then anywhere inside your rendering process:
window.api.clearCache();

How to set FiddlerCore up to monitor all system traffic?

We are evaluating FiddlerCore for a use case. Basically, for now we just want to catch all of the endpoints/URLs being requested on a system. This works fine in Fiddler, no issues. But we only want to catch them while a certain vendor's software is open, so we want to write a plugin for that software that will start capturing when it launches and stop when it exits. Hence, using FiddlerCore (hopefully).
As proof of concept, I made a simple app, one form with a textbox, that should just append each URL to the textbox. Simple as simple can be. However, it's not doing anything. I run the app, then refresh a page in my browser, and ... nothing.
Here is the entire (non-generated) code of my program...
using Fiddler;
using System;
using System.Windows.Forms;

namespace ScratchCSharp {
    public partial class Form1 : Form {
        public Form1() {
            InitializeComponent();
            FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
            FiddlerApplication.Startup(8888, FiddlerCoreStartupFlags.Default);
        }

        private void FiddlerApplication_AfterSessionComplete(Session s) {
            textBox1.Invoke((Action)delegate () {
                AddText(s.fullUrl);
            });
        }

        public void AddText(string text) {
            textBox1.Text += $"{text}\n";
        }
    }
}
After a little more poking around, I see that FiddlerApplication.IsSystemProxy returns false. It seems the Startup flag to register as the system proxy is no longer honored, and you are now told to use Telerik.NetworkConnections.NetworkConnectionsManager to set FiddlerCore as the system proxy. But I can't find anything that actually says how to do that. The closest thing I could find is this thread, which seems to be their official answer to this question. However, it only goes into a lot of talk about WHY they deprecated the flag and what their thinking was in designing its replacement, but not into HOW TO USE the replacement. The demo app also does NOT use these libraries (probably why it doesn't catch anything either).
The biggest problem, though, is that the NetworkConnectionsManager class has no public constructor, so you can't create an instance. It is not inheritable, so you can't make a subclass instance. All of the methods on it are instance methods, not static/shared. And there seems to be no method in the libraries that will create an instance of NetworkConnectionsManager for you.
So while the class is clearly designed to be used as an instance (hence the methods not being static/shared), there doesn't actually seem to be any way to create one.
Any help on how to set this thing up to catch all the outgoing URLs on the system?
You can use the following code to start FiddlerCore and register it as the system proxy:
FiddlerCoreStartupSettings startupSettings =
    new FiddlerCoreStartupSettingsBuilder()
        .ListenOnPort(fiddlerCoreListenPort)
        .RegisterAsSystemProxy()
        .ChainToUpstreamGateway()
        .DecryptSSL()
        .OptimizeThreadPool()
        .Build();

FiddlerApplication.Startup(startupSettings);
Some of the methods are obsolete for now, but I would recommend sticking with them until the NetworkConnectionsManager API is improved and finalized.
Also, there is a sample application (which the FiddlerCore installer places on the Desktop) that is a useful starting point for development.
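One detail worth adding, since you want capture to stop when the vendor software exits: shut FiddlerCore down explicitly so the system proxy settings are restored. A minimal sketch (hooking your form's closing event is just one place to do it):

// Unregister the proxy and release the port when your plugin/app exits
private void Form1_FormClosing(object sender, FormClosingEventArgs e) {
    FiddlerApplication.Shutdown();
}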

How to delete cached element manually in Play Framework 2.3.X?

I have a web application using Play 2.3.3. Here is a method in my controller:
@Cached(key = "categories", duration = 3600)
public static Result index() throws ApiException {
    // Stuff read from database here
    return ok(MyView.render(data));
}
My question is: how can I delete the categories element from the cache manually, to force it to be regenerated? For example, when I add a category in my database, I don't want to wait for the cache to expire to make it visible to my users.
I couldn't find anything in the documentation : https://playframework.com/documentation/2.3.x/JavaCache
Also, is the cache stored in RAM or somewhere in the filesystem?
Thanks.
EDIT: Ideally, I'd like to connect to my server, browse to a directory and delete the file. I suppose that's not possible with EhCache?
You would use Cache.remove(key) from the docs you have linked. Since your controller is Java, use the Java cache API rather than the Scala one:
import play.cache.Cache;

Cache.remove("categories");
Play uses EhCache by default, which is an in-memory cache really only suited for single-machine deployments. You can replace it with another cache plugin though, play2-memcached for example.
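For example, a write action could evict the entry immediately so the next request regenerates it. A rough sketch, where the Category model, the form binding, and the route are assumptions about your app:

public static Result addCategory() {
    Category category = Form.form(Category.class).bindFromRequest().get();
    category.save(); // persist the new category (Ebean-style model assumed)

    // Evict the cached result so the next request to index() regenerates it
    Cache.remove("categories");
    return redirect(routes.Application.index());
}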

How to speed up Azure deployment from Visual Studio 2010

I have Visual Studio 2010 solution with an Azure Service and an ASP.NET MVC 3 solution that serves as a Web Role for the Azure service. No other roles attached to the service other than that.
Every deployment to the Azure staging (or production, for that matter) environment takes up to 20 minutes to complete, from the moment I click Publish in Visual Studio until all instances (2) are started.
As you can imagine this makes it a PITA to publish often, or to quick-fix some bugs. Is there a way to speed the process up? Would it be faster to upload the package to Blob storage and upgrade from there? How would I go about achieving that?
I feel the online docs on Azure leave a lot to be desired, particularly when it comes to troubleshooting.
Thanks.
One idea for reducing the need (and frequency) for redeploying is to move static content into blob storage, external to the package. For instance, move your css and javascript to blob storage, along with images. Once this is done, you'd only have to recompile / redeploy for .NET code changes. You can upload updated css, at any time, to blob storage. If you want to test this in staging first, you could always have a staging vs. production container name for your static content and store that container name in a config setting.
This doesn't change the deployment time when you do need to redeploy, but at least you can reduce how often you go through that process...
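To make the staging vs. production container idea concrete, here is a rough sketch against the classic StorageClient library from the 2010-era SDK (the connection-string name, container names, and file paths are all assumptions):

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

// Parse the storage account from a service configuration setting
var account = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
var blobClient = account.CreateCloudBlobClient();

// Pick "static-staging" vs. "static-production" from a config setting
var container = blobClient.GetContainerReference("static-staging");
container.CreateIfNotExist();

// Upload the updated stylesheet without touching the deployed package
var blob = container.GetBlobReference("css/site.css");
blob.Properties.ContentType = "text/css";
blob.UploadFile(@"Content\site.css");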
You should enable Web Deploy in your Azure project. It works this way:
1/ Create an RDP account (don't forget, you need to upload a certificate with its private key so that Azure can decipher the password). That is hidden in the Deploy dialog box for your Azure deployment project.
2/ Enable Web Deployment - same place
Once you've published the app that way, right-click the web application (not the Azure deployment project) and select Publish. The pop-up has everything defined except the password; enter that as well and you'll upload your changes to Azure in a matter of seconds.
CAVEAT: this is meant for single-instance web apps, and is definitely not the way to go for a production upgrade strategy; the Blob storage answer already mentioned is the best option in that case.
Pierre
My solution to this problem is only to push a new package when I am changing code in the RoleEntryPoint or with the Service Definition. In Azure 1.3 you now have the ability to use Remote Desktop Connection. Using RDC, I will compile my code locally and use copy/paste to place it on the Azure server in the appropriate directory. Once the production code is running correctly, I can then push the fully tested version to staging and then do a VIP swap. This limits the number of times I actually have to deploy a package.
You actually have quite a long window in which you can keep modifying your code in Azure before you have to publish a new package. The new package is only really needed for those cases where Azure has to shutdown/restart your role instance.
It's a nice idea to try uploading your project to blob storage first, but unfortunately this is what Visual Studio is doing for you behind the scenes anyway. As has been pointed out elsewhere, most of the time in doing the deploy is not the upload itself, but the stopping and starting of all of your update domains.
If you're just running this site in a development environment, then the only way I know to speed it up is to run just one instance. If this is the live environment, then... sorry, I think you're out of luck.
So that I don't have to deploy to the cloud to test minor changes, what I've found works quite well is to engineer the site so that it works when running in local IIS just like any other MVC site.
The biggest barrier to this working is the settings you have in the cloud config. The way we get around this is to make a copy of all of the settings in your cloud config and put them in your web.config appSettings. Then, rather than using RoleEnvironment.GetConfigurationSettingValue(), create a wrapper class that you call instead. This wrapper class checks RoleEnvironment.IsAvailable to see if it is running in the Azure fabric; if it is, it calls the usual config function above, and if not, it calls WebConfigurationManager.AppSettings[].
There are a few other things that you'll want to do around getting the config setting change events which hopefully you can figure out from the code below:
using System;
using System.Linq;
using System.Web.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public class SmartConfigurationManager
{
    private static bool _addConfigChangeEvents;
    private static string _configName;
    private static Func<string, bool> _configSetter;

    public static bool AddConfigChangeEvents
    {
        get { return _addConfigChangeEvents; }
        set
        {
            _addConfigChangeEvents = value;
            if (value)
            {
                RoleEnvironment.Changing += RoleEnvironmentChanging;
            }
            else
            {
                RoleEnvironment.Changing -= RoleEnvironmentChanging;
            }
        }
    }

    public static string Setting(string configName)
    {
        if (RoleEnvironment.IsAvailable)
        {
            return RoleEnvironment.GetConfigurationSettingValue(configName);
        }
        return WebConfigurationManager.AppSettings[configName];
    }

    public static Action<string, Func<string, bool>> GetConfigurationSettingPublisher()
    {
        if (RoleEnvironment.IsAvailable)
        {
            return AzureSettingsGet;
        }
        return WebAppSettingsGet;
    }

    public static void WebAppSettingsGet(string configName, Func<string, bool> configSetter)
    {
        configSetter(WebConfigurationManager.AppSettings[configName]);
    }

    public static void AzureSettingsGet(string configName, Func<string, bool> configSetter)
    {
        // We have to store these to be used in the RoleEnvironment Changed handler
        _configName = configName;
        _configSetter = configSetter;

        // Provide the configSetter with the initial value
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));

        if (AddConfigChangeEvents)
        {
            RoleEnvironment.Changed += RoleEnvironmentChanged;
        }
    }

    private static void RoleEnvironmentChanged(object anotherSender, RoleEnvironmentChangedEventArgs arg)
    {
        if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any(change => change.ConfigurationSettingName == _configName))
        {
            if (_configSetter(RoleEnvironment.GetConfigurationSettingValue(_configName)))
            {
                RoleEnvironment.RequestRecycle();
            }
        }
    }

    private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        // If a configuration setting is changing
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            // Set e.Cancel to true to restart this role instance
            e.Cancel = true;
        }
    }
}
The upload itself takes a bit more than a minute most of the time; it's the starting up of the instances that takes most of the time.
What you can do is deploy your fixes to staging first (note that it costs money, so don't leave it there too long). Swapping from staging to production only takes a couple of seconds. So while your application is still running, you can upload the patched version, let your testers test it on staging, and when they give the go-ahead simply swap it to production.
I haven't tested your possible alternative approach of uploading to blob storage first, but I think that's overhead, as it doesn't speed up starting the instances.
