I'm creating a PWA using the Preact CLI.
When compiling I get the warning:
/bundle.js is 2.15 MB, and won't be precached. Configure
maximumFileSizeToCacheInBytes to change this limit.
Since Preact CLI generates a service worker at build time, I've created a custom "sw.js" that uses the Preact CLI high-level API in order to customize the service worker.
The question is how to use this API to change maximumFileSizeToCacheInBytes (if that's possible).
This is definitely a situation where you need to be careful that you're not accidentally precaching large items, but if you do need to raise the limit, you can use your preact.config.js file to do so. Precaching is handled by Workbox's InjectManifest plugin, so editing the service worker itself would do nothing.
preact.config.js
export default {
    webpack(config, env, helpers, options) {
        const manifestPlugin = helpers.getPluginsByName(config, 'InjectManifest')[0];
        if (manifestPlugin) {
            manifestPlugin.plugin.config.maximumFileSizeToCacheInBytes = 5 * 1024 * 1024; // 5 MB, or however much you need
        }
    }
}
I am using curl to publish to Maven:
const curlOptions = [
    '--silent',
    '--output', '/dev/stderr',
    '--write-out', '"%{http_code}"',
    '--upload-file', fileLocation,
    '--noproxy', options.noproxy ? options.noproxy : '127.0.0.1',
    '--fail'
];
const curlCmd = ['curl', curlOptions.join(' '), targetUri].join(' ');
const childProcess = exec(curlCmd, execOptions, function (error) {
    if (error) {
        console.log(chalk.red(error));
    }
});
This works for the upload, but the artifact gets cached, and I cannot fetch the artifact with curl without going into Nexus and running rebuild metadata on the affected artifact.
Can I programmatically invalidate the cache?
To answer your question directly, it should be possible to invalidate the cache in later versions of NXRM3 using the REST API and the "/beta/repositories/{repositoryName}/invalidate-cache" endpoint.
It should also be possible to do the same by running the scheduled task (rebuild metadata; the /v1/tasks/{id}/run endpoint), though that seems a less desirable route, as it is generally used for repair.
You can see more on the REST API in the NXRM3 documentation, though the intent is partly that it's self-documenting via the Swagger UI in the application. Note that at the time of this answer, only those with the nx-admin privilege can access the Swagger UI (though people with the proper permissions can use the endpoints). You can find the Swagger UI in the Admin section under System -> API.
That being said, I think there's likely something else going on. I don't think it should be necessary to invalidate your cache like that each time. I didn't want to go too far from the question however. I encourage you to look at community.sonatype.com for answers to what else might be going on and ask there if you don't see one.
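For instance, building on the same curl-via-exec approach as the upload script, invalidating a proxy repository's cache could look roughly like the sketch below. This is not a tested recipe: the host, credentials, repository name, and the /service/rest base path are assumptions to adjust for your NXRM3 installation.

const { exec } = require('child_process');

// Hypothetical values; substitute your own Nexus host, credentials and repository.
const nexusBase = 'http://localhost:8081/service/rest';
const repositoryName = 'maven-proxy';

const invalidateCmd = [
    'curl',
    '--silent',
    '--fail',
    '--request', 'POST',
    '--user', 'admin:admin123',
    `${nexusBase}/beta/repositories/${repositoryName}/invalidate-cache`
].join(' ');

exec(invalidateCmd, function (error) {
    if (error) {
        console.log('Cache invalidation failed:', error);
    }
});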
I'm doing a tiny offline-able web app / PWA. It's meant to be opened from a home screen icon and mimic a regular app by loading entirely from a cache when offline.
The app is written using Vue and to accomplish the above I'm just using their PWA template and whatever it generates. To the best of my knowledge, what this does is set up Workbox using the GenerateSW plugin to precache everything in the webpack build, and register it using register-service-worker. That is, I have fairly little control over the fine details out of the box; it's meant to be a turnkey solution.
That said, I'm not sure how to actually load a new build of the application when it's available. The above can detect this - the generated SW registration file with my changes looks like this:
import debug from 'debug';
import { register } from 'register-service-worker';

// 'log' comes from the debug package; the namespace is arbitrary.
const log = debug('app:sw');

if (process.env.NODE_ENV === 'production') {
    register(`${process.env.BASE_URL}service-worker.js`, {
        ready(...args) {
            log('App is being served from cache by a service worker.\n', ...args);
        },
        cached(...args) {
            log('Content has been cached for offline use.', ...args);
        },
        updated(...args) {
            log('New content is available; please refresh.', ...args);
        },
        offline(...args) {
            log('No internet connection found. App is running in offline mode.', ...args);
        },
        error(error, ...args) {
            log('Error during service worker registration:', error, ...args);
        }
    });
}
When I make a new build of the application and refresh the app in a browser, the updated() callback is executed, but nothing else happens. When I tried adding:
window.location.reload(true);
which should be a forced refresh, I just get a refresh loop. I'm assuming this is because the service worker cache is completely independent of the browser cache and unaffected by things like the above or Ctrl+F5. (Which makes the "please refresh" message rather misleading.)
Since this is going to mimic a native app, and it's supposed to be a simple line-of-business tool, I don't really need to do anything more complicated than immediately reload to the new version of the app when an update is available. How can I achieve this?
Okay, so the behaviour I've observed is that the update does happen automatically; it's just not obvious what the exact sequence of events is. I'll try to describe my best understanding of how the generated service worker works in the installed-PWA scenario. I'll speak in terms of "app versions" for simplicity, because the mental model behind this is closer to how apps work than how webpages work:
You deploy v1 of your application to a server, install / precache it on a device, and run it for the first time.
When you suspend and resume your app, it does not hit your servers at all.
The app will check for an update when it's either cold-started or you reload the page, e.g. using the pull-down gesture on Android.
(Possibly also periodically as the cached version goes stale, but I haven't checked this.)
Say you've deployed v2 of your app in the meantime. Reloading an instance of v1 of the app will find this update, and precache it.
(One reason why prompting the user to reload doesn't seem to make sense. Whatever the reloading is meant to accomplish has already happened.)
Reloading an instance of v1 again does absolutely nothing. The app remains running between reloads, and you'll just get v1 afterwards.
(Reason number two why prompting the user to reload is pointless - it's not what causes a new version of an app to load.)
However, next time you cold start your app - e.g. nuke it from the task switcher and reopen - v2 of your app will be loaded and I'm guessing v1 will get cleaned out. That is, your app must fully shut down so an update will load.
In short, for an application to be updated from v1 to v2, the following steps need to occur:
Deploy v2 to server
Refresh instance of v1 on device, or shut down and reopen the app.
Shut down and reopen the app (again).
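If the goal is to load the new version immediately rather than waiting for the next cold start, the commonly used pattern is to tell the waiting service worker to activate from the updated() hook and reload the page once it takes control. The sketch below assumes a Workbox-generated service worker: recent GenerateSW builds include a handler for the SKIP_WAITING message, while older ones may need the plugin's skipWaiting option enabled instead.

import { register } from 'register-service-worker';

register(`${process.env.BASE_URL}service-worker.js`, {
    updated(registration) {
        // Ask the freshly installed (waiting) service worker to activate now.
        if (registration && registration.waiting) {
            registration.waiting.postMessage({ type: 'SKIP_WAITING' });
        }
    }
});

// Once the new service worker takes control, reload the page a single time
// so the running app matches the newly precached assets. The guard prevents
// the reload loop described in the question.
let refreshing = false;
if ('serviceWorker' in navigator) {
    navigator.serviceWorker.addEventListener('controllerchange', () => {
        if (refreshing) return;
        refreshing = true;
        window.location.reload();
    });
}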
I am utilizing Syncfusion's PdfViewerControl and PdfLoadedDocument classes to generate thumbnail images of a PDF. However, once I moved the project to an Azure App Service, the PdfViewerControl throws an exception when being initialized. I am curious whether it is attempting to use system memory and Azure is blocking this. Below is the GenerateThumbnails method I've created; the exception is thrown when creating a new PdfViewerControl. If anyone has a workaround for this or has experienced something similar when moving to Azure, any assistance would be greatly appreciated.
Along with that, if someone knows of another tool to create thumbnails from a PDF in this manner that'd be very helpful as well. Thanks!
Exception:
System.AccessViolationException: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.'
Method:
public static List<Byte[]> GenerateThumbnails(Byte[] file)
{
    Int32 resizedHeight;
    Int32 resizedWidth;
    List<Byte[]> thumbnails = new List<Byte[]>();

    using (PdfViewerControl pdfViewerControl = new PdfViewerControl())
    using (PdfLoadedDocument pdfLoadedDocument = new PdfLoadedDocument(file, true))
    {
        // The PDF Viewer Control must load the PDF from a PdfLoadedDocument, rather than directly
        // from the filename, because when loaded from the filename it is not disposed correctly
        // and causes a file lock.
        pdfViewerControl.Load(pdfLoadedDocument);

        for (Int32 i = 0; i < pdfViewerControl.PageCount; ++i)
        {
            using (Bitmap originalBitmap = pdfViewerControl.ExportAsImage(i))
            {
                if (pdfViewerControl.LoadedDocument.Pages[i].Size.Width > pdfViewerControl.LoadedDocument.Pages[i].Size.Height)
                {
                    // Landscape page: fix the thumbnail width and scale the height proportionally.
                    resizedHeight = (PdfUtility.TARGET_THUMBNAIL_WIDTH_LANDSCAPE * originalBitmap.Height) / originalBitmap.Width;
                    resizedWidth = PdfUtility.TARGET_THUMBNAIL_WIDTH_LANDSCAPE;
                }
                else
                {
                    // Portrait page: fix the thumbnail height and scale the width proportionally.
                    resizedHeight = PdfUtility.TARGET_THUMBNAIL_HEIGHT_PORTRAIT;
                    resizedWidth = (PdfUtility.TARGET_THUMBNAIL_HEIGHT_PORTRAIT * originalBitmap.Width) / originalBitmap.Height;
                }

                using (Bitmap resizedBitmap = new Bitmap(originalBitmap, new Size(resizedWidth, resizedHeight)))
                using (MemoryStream memoryStream = new MemoryStream())
                {
                    resizedBitmap.Save(memoryStream, ImageFormat.Jpeg);
                    thumbnails.Add(memoryStream.ToArray());
                }
            }
        }
    }

    return thumbnails;
}
Update
Web App for Containers on Windows is now supported. This allows you to bring your own docker container that runs outside of the sandbox, so the restrictions described below won't affect your application.
There are restrictions in the sandbox that the app is running in that prevent certain API calls.
Here is a list of frameworks and scenarios that have been found to not be usable due to one or more of the restrictions above. It's conceivable that some will be supported in the future as the sandbox evolves.
PDF generators failing due to the restriction mentioned above:
Syncfusion
Siberix
Spire.PDF
The following PDF generators are supported:
SQL Reporting framework: requires the site to run in Basic or higher (note that this currently does not work in Functions apps in Consumption mode)
EVOPDF: see http://www.evopdf.com/azure-html-to-pdf-converter.aspx for the vendor solution
Telerik reporting: requires the site to run in Basic or higher. More info here
Rotativa / wkhtmltopdf: requires the site to run in Basic or higher
NReco PdfGenerator (wkhtmltopdf): requires subscription plan Basic or higher
Known issue for all PDF generators based on wkhtmltopdf or phantomjs: custom fonts are not rendered (a system-installed font is used instead) because of sandbox GDI API limitations that are present even in VM-based Azure App Service plans (Basic or higher).
Other scenarios that are not supported:
PhantomJS/Selenium: tries to connect to a local address, and also uses GDI+.
https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox
I'm running Symfony 3.* instances in Homestead with PHP 7.1 and recently switched the cache and logs directory away from my main folder as the NFS syncing was going crazy and greatly diminished the performance of the whole installation.
I was wondering if I could fully offload the logging and caching that usually go to ./var/ to Redis, either through configuration or some workaround?
You can configure the logging to turn it off or down, or elect not to write it to a file and instead send it to Redis or other targets. Monolog supports many optional targets, usually with a support library and some configuration.
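For illustration, with MonologBundle a handler can be pointed at a service of your own. The following is a hedged sketch, not a drop-in config: the service ids and the Redis key are invented, and it assumes Monolog's RedisHandler plus a Redis client such as Predis are installed.

# app/config/config.yml (sketch)
monolog:
    handlers:
        redis:
            type: service
            id: app.monolog.redis_handler

# app/config/services.yml (sketch)
services:
    app.monolog.redis_handler:
        class: Monolog\Handler\RedisHandler
        arguments: ['@app.redis_client', 'app:logs']
    app.redis_client:
        class: Predis\Client
        arguments: ['tcp://127.0.0.1:6379']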
The cached files are not designed to be written elsewhere though. Because they are being written to disk, they can then be cached by OpCache.
That doesn't mean var/* has to be written to a real disk, however. If you have shared memory that can be used as a RAM disk (also known as tmpfs), the app can quite easily be altered to use that for the cache and/or log files:
class AppKernel extends Kernel
{
    // ...

    public function getCacheDir()
    {
        if (in_array($this->environment, array('dev', 'test'))) {
            return '/dev/shm/appname/cache/' . $this->environment;
        }

        return parent::getCacheDir();
    }

    public function getLogDir()
    {
        if (in_array($this->environment, array('dev', 'test'))) {
            return '/dev/shm/appname/logs';
        }

        return parent::getLogDir();
    }
}
Source: http://www.whitewashing.de/2013/08/19/speedup_symfony2_on_vagrant_boxes.html via https://stackoverflow.com/a/10784563
I have Visual Studio 2010 solution with an Azure Service and an ASP.NET MVC 3 solution that serves as a Web Role for the Azure service. No other roles attached to the service other than that.
Every deployment to the Azure staging (or production, for that matter) environment takes up to 20 minutes to complete, from the moment I click Publish in Visual Studio until all instances (2) are started.
As you can imagine, this makes it a PITA to publish often or to quick-fix some bugs. Is there a way to speed the process up? Would it be faster to upload the package to Blob storage and upgrade from there? How would I go about achieving that?
I feel the online docs on Azure leave a lot to be desired, particularly when it comes to troubleshooting, by the way.
Thanks.
One idea for reducing the need (and frequency) for redeploying is to move static content into blob storage, external to the package. For instance, move your CSS and JavaScript to blob storage, along with images. Once this is done, you'd only have to recompile / redeploy for .NET code changes. You can upload updated CSS at any time to blob storage. If you want to test this in staging first, you could always have a staging vs. production container name for your static content and store that container name in a config setting (sketched below).
This doesn't change the deployment time when you do need to redeploy, but at least you can reduce how often you go through that process...
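For example, the container-per-environment idea could look something like this; the storage account URL and the "StaticContentContainer" setting name are invented for illustration.

using Microsoft.WindowsAzure.ServiceRuntime;

public static class StaticContent
{
    // Hypothetical storage account; replace with your own.
    private const string AccountUrl = "https://myaccount.blob.core.windows.net";

    public static string UrlFor(string fileName)
    {
        // "StaticContentContainer" holds e.g. "static-staging" or "static-production",
        // so each environment can point at a different container without a redeploy.
        string container = RoleEnvironment.GetConfigurationSettingValue("StaticContentContainer");
        return string.Format("{0}/{1}/{2}", AccountUrl, container, fileName);
    }
}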
You should enable Web Deploy in your Azure project. It works this way:
1/ Create an RDP account (don't forget, you need to upload a certificate with its private key so that Azure can decipher the password). That is hidden in the Deploy dialog box for your Azure deployment project.
2/ Enable Web Deployment - same place
Once you've published the app that way, right-click the web application (not the Azure deployment project) and select Publish. The pop-up has everything defined except the password; enter that as well and you'll upload your changes to Azure in a matter of seconds.
CAVEAT: this is meant for single-instance web apps. It is definitely not the way to go for a production upgrade strategy, and the blob storage answer already mentioned is the best option in that case.
Pierre
My solution to this problem is to push a new package only when I am changing code in the RoleEntryPoint or in the Service Definition. In Azure 1.3 you now have the ability to use Remote Desktop Connection. Using RDC, I will compile my code locally and use copy/paste to place it on the Azure server in the appropriate directory. Once the production code is running correctly, I can then push the fully tested version to staging and then do a VIP swap. This limits the number of times I actually have to deploy a package.
You actually have quite a long window in which you can keep modifying your code in Azure before you have to publish a new package. The new package is only really needed for those cases where Azure has to shutdown/restart your role instance.
It's a nice idea to try uploading your project to blob storage first, but unfortunately this is what Visual Studio is doing for you behind the scenes anyway. As has been pointed out elsewhere, most of the deployment time is spent not on the upload itself, but on stopping and starting all of your update domains.
If you're just running this site in a development environment, then the only way I know to speed it up is to run just one instance. If this is the live environment, then... sorry, I think you're out of luck.
So that I don't have to deploy to the cloud to test minor changes, what I've found works quite well is to engineer the site so that it works when running in local IIS just like any other MVC site.
The biggest barrier to this working is the settings that you have in the cloud config. The way we get around this is to make a copy of all of the settings in your cloud config and put them in the appSettings of your web.config. Then, rather than using RoleEnvironment.GetConfigurationSettingValue(), create a wrapper class that you call instead. This wrapper class checks RoleEnvironment.IsAvailable to see if it is running in the Azure fabric; if it is, it calls the usual config function above, and if not, it calls WebConfigurationManager.AppSettings[].
There are a few other things that you'll want to do around getting the config setting change events which hopefully you can figure out from the code below:
public class SmartConfigurationManager
{
    private static bool _addConfigChangeEvents;
    private static string _configName;
    private static Func<string, bool> _configSetter;

    public static bool AddConfigChangeEvents
    {
        get { return _addConfigChangeEvents; }
        set
        {
            _addConfigChangeEvents = value;
            if (value)
            {
                RoleEnvironment.Changing += RoleEnvironmentChanging;
            }
            else
            {
                RoleEnvironment.Changing -= RoleEnvironmentChanging;
            }
        }
    }

    public static string Setting(string configName)
    {
        if (RoleEnvironment.IsAvailable)
        {
            return RoleEnvironment.GetConfigurationSettingValue(configName);
        }

        return WebConfigurationManager.AppSettings[configName];
    }

    public static Action<string, Func<string, bool>> GetConfigurationSettingPublisher()
    {
        if (RoleEnvironment.IsAvailable)
        {
            return AzureSettingsGet;
        }

        return WebAppSettingsGet;
    }

    public static void WebAppSettingsGet(string configName, Func<string, bool> configSetter)
    {
        configSetter(WebConfigurationManager.AppSettings[configName]);
    }

    public static void AzureSettingsGet(string configName, Func<string, bool> configSetter)
    {
        // We have to store these to be used in the RoleEnvironment Changed handler
        _configName = configName;
        _configSetter = configSetter;

        // Provide the configSetter with the initial value
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));

        if (AddConfigChangeEvents)
        {
            RoleEnvironment.Changed += RoleEnvironmentChanged;
        }
    }

    private static void RoleEnvironmentChanged(object anotherSender, RoleEnvironmentChangedEventArgs arg)
    {
        if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any(change => change.ConfigurationSettingName == _configName))
        {
            if (_configSetter(RoleEnvironment.GetConfigurationSettingValue(_configName)))
            {
                RoleEnvironment.RequestRecycle();
            }
        }
    }

    private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        // If a configuration setting is changing
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            // Set e.Cancel to true to restart this role instance
            e.Cancel = true;
        }
    }
}
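To make the usage concrete, call sites change like this (the "StorageConnectionString" setting name is hypothetical):

// Before: only works when running in the Azure fabric.
// string conn = RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString");

// After: works both in Azure and in local IIS, falling back to web.config appSettings.
string conn = SmartConfigurationManager.Setting("StorageConnectionString");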
The uploading itself takes a bit more than a minute most of the time. It's the starting up of the instances that takes up most of the time.
What you can do is deploy your fixes to staging first (note that it costs money, so don't leave it there for too long). Swapping from staging to production only takes a couple of seconds. So while your application is still running, you can upload the patched version, let your testers verify it on staging, and when they give the go-ahead, simply swap it to production.
I haven't tested your possible alternative approach of uploading to blob storage first. But I think that's overhead, as it doesn't speed up starting the instances.