In eo.editor.js > storageManager, what does the "type: 'local'" property mean?

storageManager: {
  type: 'local',
},

With type: 'local' (the default), GrapesJS persists your project data in the browser's localStorage. Switching to remote storage is very simple: it's just a matter of specifying your endpoints for storing and loading, which might well be the same (if you rely on HTTP methods).
const editor = grapesjs.init({
  // ...
  storageManager: {
    type: 'remote',
    stepsBeforeSave: 3,
    urlStore: 'http://endpoint/store-template/some-id-123',
    urlLoad: 'http://endpoint/load-template/some-id-123',
    // For custom parameters/headers on requests
    params: { _some_token: '....' },
    headers: { Authorization: 'Basic ...' },
  }
});
As you can see, we've left some default options unchanged, increased the number of changes required to trigger an autosave (stepsBeforeSave), and passed in the remote endpoints.
In short, type can be set to either 'local' or 'remote'.
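For completeness, here is a minimal sketch of the local setup (the autosave options shown are optional; option names follow the GrapesJS Storage Manager docs, so verify them against your version):

const editor = grapesjs.init({
  // ...
  storageManager: {
    type: 'local',       // persist project data in the browser's localStorage
    autosave: true,      // save automatically once enough changes accumulate
    stepsBeforeSave: 1,  // number of changes required before an autosave
  }
});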

Related

Improve Nuxt TTFB

I'm building a large application using Nuxt and Vuetify. Everything is good and working fine, but unfortunately the Lighthouse performance score is only 42.
I already improved a few things, like:
Better font loading from Google;
Moving async code from nuxtServerInit to the layout;
Removing unnecessary third-party services;
It went from 42 to 54 but I'm still not very happy about the result.
Unfortunately I'm not the best at making these improvements because I lack the knowledge.
I can see the TTFB is not optimal at all, but I don't really know what I can improve... So I hope you can help me boost my application with hints and suggestions.
Here I will paste my nuxt.config.js so that you're aware of what I'm using and how:
const path = require('path')
const colors = require('vuetify/es5/util/colors').default
const bodyParser = require('body-parser')

const maxAge = 60 * 60 * 24 * 365 // one year
const prefix = process.env.NODE_ENV === 'production' ? 'example.' : 'exampledev.'
const description = 'description...'

let domain
if (
  process.env.NODE_ENV === 'production' &&
  process.env.ENV_SLOT === 'staging'
) {
  domain = 'example.azurewebsites.net'
} else if (
  process.env.NODE_ENV === 'production' &&
  process.env.ENV_SLOT !== 'staging'
) {
  domain = 'example.com'
} else {
  domain = ''
}

module.exports = {
  mode: 'universal',
  /*
  ** Disabled telemetry
  */
  telemetry: false,
  /*
  ** Server options
  */
  server: {
    port: process.env.PORT || 3030
  },
  serverMiddleware: [
    bodyParser.json({ limit: '25mb' }),
    '~/proxy',
    '~/servermiddlewares/www.js'
  ],
  router: {
    middleware: 'maintenance'
  },
  env: {
    baseUrl:
      process.env.NODE_ENV === 'production'
        ? 'https://example.com'
        : 'https://localhost:3030',
    apiBaseUrl: process.env.API_BASE_URL || 'https://example.azurewebsites.net'
  },
  /*
  ** Headers of the page
  */
  head: {
    title: 'example',
    meta: [
      { charset: 'utf-8' },
      { name: 'viewport', content: 'width=device-width, initial-scale=1' },
      { hid: 'description', name: 'description', content: description },
      {
        hid: 'fb:app_id',
        property: 'fb:app_id',
        content: process.env.FACEBOOK_APP_ID || 'example'
      },
      { hid: 'fb:type', property: 'fb:type', content: 'website' },
      { hid: 'og:site_name', property: 'og:site_name', content: 'example' },
      { hid: 'og:url', property: 'og:url', content: 'https://example.com' },
      { hid: 'og:title', property: 'og:title', content: 'example' },
      { hid: 'og:description', property: 'og:description', content: description },
      {
        hid: 'og:image',
        property: 'og:image',
        content: 'https://example.com/images/ogimage.jpg'
      },
      { hid: 'robots', name: 'robots', content: 'index, follow' },
      { name: 'msapplication-TileColor', content: '#ffffff' },
      { name: 'theme-color', content: '#ffffff' }
    ],
    link: [
      {
        rel: 'apple-touch-icon',
        sizes: '180x180',
        href: '/apple-touch-icon.png?v=GvbAg4xwqL'
      },
      {
        rel: 'icon',
        type: 'image/png',
        sizes: '32x32',
        href: '/favicon-32x32.png?v=GvbAg4xwqL'
      },
      {
        rel: 'icon',
        type: 'image/png',
        sizes: '16x16',
        href: '/favicon-16x16.png?v=GvbAg4xwqL'
      },
      { rel: 'manifest', href: '/site.webmanifest?v=GvbAg4xwqL' },
      {
        rel: 'mask-icon',
        href: '/safari-pinned-tab.svg?v=GvbAg4xwqL',
        color: '#777777'
      },
      { rel: 'shortcut icon', href: '/favicon.ico?v=GvbAg4xwqL' },
      {
        rel: 'stylesheet',
        href: 'https://fonts.googleapis.com/css?family=Abril+Fatface|Raleway:300,400,700&display=swap'
      }
    ]
  },
  /*
  ** Customize the page loading
  */
  loading: '~/components/loading.vue',
  /*
  ** Global CSS
  */
  css: ['~/assets/style/app.scss', 'swiper/dist/css/swiper.css'],
  /*
  ** Plugins to load before mounting the App
  */
  plugins: [
    '@/plugins/axios',
    '@/plugins/vue-swal',
    '@/plugins/example',
    { src: '@/plugins/vue-infinite-scroll', ssr: false },
    { src: '@/plugins/croppa', ssr: false },
    { src: '@/plugins/vue-debounce', ssr: false },
    { src: '@/plugins/vue-awesome-swiper', ssr: false },
    { src: '@/plugins/vue-html2canvas', ssr: false },
    { src: '@/plugins/vue-goodshare', ssr: false }
  ],
  /*
  ** Nuxt.js modules
  */
  modules: [
    '@/modules/static',
    '@/modules/crawler',
    '@nuxtjs/axios',
    '@nuxtjs/auth',
    '@nuxtjs/device',
    '@nuxtjs/prismic',
    '@dansmaculotte/nuxt-security',
    '@nuxtjs/sitemap',
    [
      '@nuxtjs/google-analytics',
      {
        id: 'example',
        debug: {
          sendHitTask: process.env.NODE_ENV === 'production'
        }
      }
    ],
    ['cookie-universal-nuxt', { parseJSON: false }],
    'nuxt-clipboard2'
  ],
  /*
  ** Security configuration
  */
  security: {
    dev: process.env.NODE_ENV !== 'production',
    hsts: {
      maxAge: 15552000,
      includeSubDomains: true,
      preload: true
    },
    csp: {
      directives: {
        // removed contents
      }
    },
    referrer: 'same-origin',
    additionalHeaders: true
  },
  /*
  ** Prismic configuration
  */
  prismic: {
    endpoint: 'https://example.cdn.prismic.io/api/v2',
    preview: false,
    linkResolver: '@/plugins/link-resolver',
    htmlSerializer: '@/plugins/html-serializer'
  },
  /*
  ** Auth module configuration
  */
  auth: {
    resetOnError: true,
    localStorage: false,
    cookie: {
      prefix,
      options: {
        maxAge,
        secure: true,
        domain
      }
    },
    redirect: {
      callback: '/callback',
      home: false
    },
    strategies: {
      local: {
        endpoints: {
          login: {
            url: '/auth/local',
            method: 'POST',
            propertyName: 'token'
          },
          logout: { url: '/auth/logout', method: 'POST' },
          user: { url: '/me', method: 'GET', propertyName: false }
        },
        tokenRequired: true,
        tokenType: 'Bearer'
      },
      google: {
        client_id: process.env.GOOGLE_CLIENT_ID || 'example'
      },
      facebook: {
        client_id: process.env.FACEBOOK_APP_ID || 'example',
        userinfo_endpoint:
          'https://graph.facebook.com/v2.12/me?fields=about,name,picture{url},email',
        scope: ['public_profile', 'email']
      }
    }
  },
  /*
  ** Vuetify Module initialization
  */
  buildModules: [
    ['@nuxtjs/pwa', { meta: false, oneSignal: false }],
    '@nuxtjs/vuetify'
  ],
  /*
  ** Vuetify configuration
  */
  vuetify: {
    customVariables: ['~/assets/style/variables.scss'],
    treeShake: true,
    rtl: false,
    defaultAssets: {
      font: false,
      icons: 'fa'
    }
  },
  /*
  ** Vue Loader configuration
  */
  chainWebpack: config => {
    config.plugin('VuetifyLoaderPlugin').tap(() => [
      {
        progressiveImages: true
      }
    ])
  },
  /*
  ** Build configuration
  */
  build: {
    analyze: true,
    optimizeCSS: true,
    /*
    ** You can extend webpack config here
    */
    extend(config, ctx) {
      config.resolve.alias.vue = 'vue/dist/vue.common'
      // Run ESLint on save
      if (ctx.isDev && ctx.isClient) {
        config.devtool = 'cheap-module-source-map'
        config.module.rules.push({
          enforce: 'pre',
          test: /\.(js|vue)$/,
          loader: 'eslint-loader',
          exclude: /(node_modules)/,
          options: {
            fix: true
          }
        })
      }
      if (ctx.isServer) {
        config.resolve.alias['~'] = path.resolve(__dirname)
        config.resolve.alias['@'] = path.resolve(__dirname)
      }
    }
  }
}
A few pieces of possibly useful information:
I use only scoped styles for each page and component, and the amount of custom style is very small since I'm using almost everything from Vuetify as-is;
When I do "view page source" in my browser, I see a very long, unminified CSS block inlined in the page, which I don't like...
I don't load anything using fetch or asyncData; I prefer to load data once the component is mounted;
Everything is deployed on Azure and I consume a .NET Core API.
What would be nice to know are the best practices, with some examples, for improving performance, in particular the TTFB.
In Lighthouse I see "Remove unused JavaScript" with a list of /_nuxt/.. files... But I don't think these files are unused, so I would like to know why they are flagged that way...
Maybe Azure should clean the project on each deploy? I don't know...
I use the az Azure CLI and I deploy just by doing git push azure master, so nothing special.
"Reduce initial server response time"... How? The plan the production app runs on is the fastest Azure offers; what should I improve, and how?
"Minimize main-thread work": What does it mean?
"Reduce JavaScript execution time": How?
I hope you can help me to understand and boost everything.
I will keep this post updated with your requests, maybe you wish to see something more about the project. Thanks
I've recently had to go through this process with a rather large Nuxt application, so I can share some of the insights and solutions we came up with. We managed to bump ours up by about 40 points before we were happy.
My number one piece of advice for anyone reading: ditch the frameworks. By design, they are bloated so they can handle as many common use cases as possible and make application development as easy as possible, at the expense of size. In the realm of browsers, where size and speed are everything, each new framework (Nuxt, Vue, Vuetify) adds another layer of abstraction that negatively impacts size and speed.
Anyways, with that out of the way, here are some other pieces of advice for those who cannot ditch the frameworks.
Lighthouse can often be misleading
We found that the "Remove unused JavaScript" warnings were basically impossible to fix with Vue. The problem is that Lighthouse can only inspect the code that is actually run during the test, and has no idea that code for error handling or onclick handling in the Vue runtime is necessary, until of course it is.
Unfortunately, it's not possible to know ahead of time what code in the runtime is going to be necessary, so it all needs to be sent. However, as the developer, you at least have control over what 3rd party libraries, modules, and plugins are needed during the initial load of the application. It's up to you to ensure only the necessary pieces are sent and used.
So in Lighthouse's eyes, there's lots of useless, unused code. However, the second the application needs to do anything, it's no longer useless. Hence why it is somewhat misleading.
Always keep this in mind, because there are a lot of "problems" these tools will report that are just a fact of how JavaScript applications work. To me, it seems the developers of these frameworks still have a few more hurdles to overcome in making JavaScript apps truly accessible and performant in the eyes of Google.
Keep your plugins and modules short
Each plugin you add to your application in nuxt.config.js increases the size of the main JS bundle included in each page. This inevitably leads to lots of unused code, huge JS files, and of course, longer load times.
It's perfectly valid to instead register plugins only on the pages where they're needed:
// inside the SocialSharing.vue component
import Vue from 'vue'
import VueGoodshare from 'vue-goodshare'
Vue.use(VueGoodshare)
export default { ... }
A reminder though: the page where this import happens will still include all of the code from vue-goodshare. It's much better to import only the components from these libraries that you actually need, as sketched below.
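For example, here is a sketch of registering just one provider component locally instead of installing the whole plugin (the import path follows the vue-goodshare README, so treat it as illustrative and check it against your installed version):

// SocialSharing.vue — pull in only the Facebook share button
import VueGoodshareFacebook from 'vue-goodshare/src/providers/Facebook.vue'

export default {
  components: { VueGoodshareFacebook }
}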
A good way to check this is running your build with the analyze property set to true. (It may be helpful for you to share your analysis here)
Reduce Initial Server Response Time
If you're already running the best server, there are still a few things you can do to help speed things up.
Leverage caching for your pages, so that there's no need to render them server-side on every request (see the sketch after this list). However, some of these tests (like Lighthouse) specifically disable caching, leading to poor results.
Reduce the amount of work required to render pages. Ensure there are no blocking API calls happening, keep pages simple and small, and ensure that the server is not overloaded.
Utilize edge caching, or edge deployments, so that your application is closer to your users. For example, if your application is deployed in USWEST, and Lighthouse is being tested in Dubai, you're likely going to see a lot of latency in that request, which will drive up the server response time.
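To illustrate the caching point above, here is a minimal sketch of in-memory page caching for a Nuxt 2 app, assuming the lru-cache package (v5-style options); the cache size, TTL, and absence of a route filter are illustrative, and a real app would likely use a dedicated caching module or a CDN instead:

// nuxt.config.js
const LRU = require('lru-cache')
const pageCache = new LRU({ max: 100, maxAge: 60 * 1000 }) // up to 100 pages, 1 minute each

module.exports = {
  // Serve cached HTML before Nuxt re-renders the route
  serverMiddleware: [
    (req, res, next) => {
      const html = pageCache.get(req.url)
      if (html) {
        res.setHeader('Content-Type', 'text/html')
        return res.end(html)
      }
      next()
    }
  ],
  hooks: {
    // Store the rendered HTML after each server-side render
    'render:route'(url, result) {
      pageCache.set(url, result.html)
    }
  }
}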
You may need to follow this up with the specific server you're running, and where it's located, to get more help. However, the points I outlined would almost certainly get your TTFB to a green score.
Minimize Main Thread Work
In browsers, the main thread is where all the action happens. It is solely responsible for handling user interactions, updating the page, and in essence, turning a document of HTML into a living application. A main thread that is too busy can lead to performance problems, especially noticeable by users when they're trying to interact with your page.
Often, when seeing this, it's because you're running too much JavaScript. Specifically, you're running too much JavaScript all at once, which ends up blocking the main thread. JavaScript-heavy applications are notorious for this, and it can be a really challenging problem to solve.
The single biggest helper for our app was delaying the loading of unimportant scripts. For example, we run Rollbar and Google Analytics on all our pages. Instead of loading the scripts at app start, we just load their small command queues and delay the loading of the big scripts by ~5s (see the Intercom example at the end of this answer). This frees up the main thread to focus on more important things, like rehydrating the Vue application.
You'll also find significant savings by simply reducing the amount of JS there is to process. Each line of code returned to the client is another line that has to be sent, parsed, and executed. I would definitely take a look at your modules and plugins first to see if there's some low-hanging fruit.
Reduce JavaScript Execution Time
This is another unfortunate metric, which in our tests often just meant "the app is still doing something". I say it's unfortunate because, in our experience, it did not impact the performance or user experience of the application.
We frequently saw our third-party services, like Intercom, Rollbar, GA, etc., extending their execution times well past 10s, and with third-party code, there's nothing you can do besides not using it.
My advice: focus on optimizing the application using everything else I've highlighted. This metric is incredibly difficult to fix directly, and is usually just a symptom of other things, such as the main thread being too busy or third-party code being slow.
One Last Piece Of Advice
If all else fails, you may be able to "trick" some of the tests in your favour. We did this by delaying the load of our GA and Rollbar scripts until after the test has completed. Remember, this tool is looking at certain metrics in a certain timeframe and scoring you based on that. You may be able to leverage simple techniques, like lazy loading below the fold, to see a noticeable difference in performance.
Anyways, this is quite a complicated task, and by no means is there a "3 step guide to success" here. You'll find plenty of guides online claiming they've brought their Vue app from 30 to 100 with a few simple changes, but they all ignore the fact that real apps have a lot of code and do a lot of things, and balancing that with speed and performance is an art form.
You may want to take a look at resources such as the app shell model, or service workers.
If you need any clarification on this post, feel free to ask away. But keep in mind, the question you're asking is broad and doesn't have a single "right" approach. It's ultimately up to you to take the important bits here and apply them as you can.
Update with examples
Most of what I've talked about has been quite hard to show examples for, as I've covered topics that are either overly simplistic and don't need an explanation, or are vague concepts to begin with. However, one method we used that had some good results can be shown.
Here's an example of a modified script we use to load Intercom:
var APP_ID = "your_app_id_here";

window.intercomSettings = {
  app_id: APP_ID,
  hide_default_launcher: !0,
  session_duration: 36e5
},
function() {
  var n,
    e,
    t = window,
    o = t.Intercom;
  "function" == typeof o ? (o("reattach_activator"), o("update", t.intercomSettings)) : (n = document, (e = function() {
    e.c(arguments)
  }).q = [], e.c = function(t) {
    e.q.push(t)
  }, t.Intercom = e, o = function() {
    // Don't load the full Intercom script until after 10s
    setTimeout(function() {
      var t = n.createElement("script");
      t.type = "text/javascript",
      t.crossOrigin = "anonymous",
      t.async = !0,
      t.src = "https://widget.intercom.io/widget/" + APP_ID;
      var e = n.getElementsByTagName("script")[0];
      e.parentNode.insertBefore(t, e)
    }, 1e4)
  }, "complete" === document.readyState ? o() : t.attachEvent ? t.attachEvent("onload", o) : t.addEventListener("load", o, !1))
}()
This is a custom version of the script they give you to place in your app's <head> tag. However, you'll notice we've added a setTimeout call that delays the loading of the full Intercom script. This gives your application a chance to load everything else without competing for network or CPU time.
However, as Intercom is no longer guaranteed to be available, you'll need to use greater caution when interacting with it.
This exact same concept can be applied to just about every third-party script you might load. We also use it with Google Analytics, where we initialize the command queue but defer loading the actual script. Obviously, this can cause tracking issues with short sessions, but that is the trade-off you need to make if performance is your primary goal.
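For reference, here is a sketch of the same pattern applied to Google Analytics, using the standard analytics.js queue stub (the tracking ID and the 5s delay are placeholders):

// Set up the GA command queue immediately so early calls are buffered...
window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments) }
ga.l = +new Date()
ga('create', 'UA-XXXXX-Y', 'auto')
ga('send', 'pageview')

// ...but fetch the real analytics.js only well after page load
window.addEventListener('load', function () {
  setTimeout(function () {
    var s = document.createElement('script')
    s.async = true
    s.src = 'https://www.google-analytics.com/analytics.js'
    document.head.appendChild(s)
  }, 5000)
})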

Posting object to Google Cloud Storage doesn't replace ${filename} variable

As described at https://cloud.google.com/storage/docs/xml-api/post-object, I can use ${filename} as part of the key when uploading a file to GCS from the browser (using a signed request).
That's great, because it's the exact same variable S3 uses. But there's a problem: it doesn't actually replace ${filename} with the name of the file when I upload. This works perfectly on S3, but with GCS I literally get $%7Bfilename%7D as the key in the response; clearly it's not replacing the placeholder with the correct value.
I'm building the request like so:
export function uploadVideo(video) {
  return new Promise((resolve, reject) => {
    if (!video) {
      resolve(null);
      return;
    }
    // this fetches a presigned post object from the server:
    request.get('/presigned_post')
      .query({ format: 'json' })
      .end((err, response) => {
        if (!err && response.ok) {
          request
            .post(`https://${response.body.url.host}/`)
            .field(response.body.fields)
            .field('Content-Type', video.type)
            .attach('file', video, video.name)
            .end((err, response) => {
              if (err) {
                console.log("Error uploading video: ", err);
                reject(err);
              } else {
                resolve({
                  location: response.text.match('<Location>' + '(.*?)' + '</Location>')[1],
                  bucket: response.text.match('<Bucket>' + '(.*?)' + '</Bucket>')[1],
                  key: response.text.match('<Key>' + '(.*?)' + '</Key>')[1],
                  checksum: response.text.match('<ETag>"' + '(.*?)' + '"</ETag>')[1]
                });
              }
            });
        }
      });
  });
}
The key is some_folder/${filename} and this is included in the form data sent to Google (in response.fields).
You can see I am providing the filename with .attach('file', video, video.name). The upload works correctly; it just doesn't replace the ${filename} variable.
Edit
I have narrowed the issue down further. We query S3 and GCS to get the fields for the presigned post. With S3, if I give a key of ${filename}, then I get exactly that back in the fields:
def presigned_post(bucket_name:, key:, **opts)
  response = s3_resource.
    bucket(bucket_name).
    presigned_post(key: key, content_type_starts_with: '', **opts)

  {
    fields: response.fields,
    url: { host: URI.parse(response.url).host }
  }
end
Fields here will contain "key"=>"${filename}".
However in the GCS case, the following code returns :key=>"$%7Bfilename%7D" in the fields:
# https://cloud.google.com/storage/docs/xml-api/post-object#usage_and_examples
# https://www.rubydoc.info/gems/google-cloud-storage/1.0.1/Google/Cloud/Storage/Bucket:post_object
def presigned_post(bucket_name:, key:, acl:, success_action_status:, expiration: nil)
  expiration ||= (Time.now + 1.hour).iso8601
  policy = {
    expiration: expiration,
    conditions: [
      ["starts-with", "$key", ""],
      ["starts-with", "$Content-Type", ""],
      { acl: acl },
      { success_action_status: success_action_status }
    ]
  }
  post_obj = get_bucket!(bucket_name).post_object(key, policy: policy)
  url_obj = { host: URI.parse(post_obj.url).host }
  # Have to manually merge in these fields
  fields = post_obj.fields.merge(
    acl: acl,
    success_action_status: success_action_status
  )
  return { fields: fields, url: url_obj }
end
If I manually change the key in the GCS request fields, then it works. Is that really what I'm supposed to do?
A fix for this issue was released in google-cloud-storage v1.24.0.
I figured out that the issue is caused by:
def ext_path
  URI.escape "/#{@bucket}/#{@path}"
end
in lib/google/cloud/storage/file.rb of the google-cloud-ruby gem.
This is called by def post_object in that same file.
I am going to raise an issue on their GitHub page.
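In the meantime, substituting the placeholder on the client (as the question already hints) works as a stopgap. A sketch, assuming the superagent-style presigned response used above:

// Fill in the file name ourselves, mirroring what S3 does server-side
const fields = Object.assign({}, response.body.fields);
fields.key = fields.key.replace('${filename}', video.name);
// ...then post `fields` instead of response.body.fields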

Can I get FilePond to show previews of loaded local images?

I use FilePond to show previously uploaded images via the load functionality. The files are visible; however, I don't get a preview (which I do get when uploading a file).
Should it be possible to show previews for files loaded through load?
files: [{
  source: " . $profile->profileImage->id . ",
  options: {
    type: 'local',
  }
}],
First you have to install and register the File Poster and Image Preview plugins. Here is an example of how to register them in your code:
import * as FilePond from 'filepond';
import FilePondPluginImagePreview from 'filepond-plugin-image-preview';
import FilePondPluginFilePoster from 'filepond-plugin-file-poster';
FilePond.registerPlugin(
FilePondPluginImagePreview,
FilePondPluginFilePoster,
);
Then you have to set the server.load property to your server endpoint and add a metadata property to your files object with the link to your image on the server:
const pond = FilePond.create(document.querySelector('input[type="file"]'));

pond.server = {
  url: '127.0.0.1:3000/',
  process: 'upload-file',
  revert: null,
  // this is the property you should set in order to render your file using the Poster plugin
  load: 'get-file/',
  restore: null,
  fetch: null
};
pond.files = [
  {
    source: iconId,
    options: {
      type: 'local',
      metadata: {
        poster: '127.0.0.1:3000/images/test.jpeg'
      }
    }
  }
];
The source property is the value you want to send to your endpoint, which in my case is appended to /get-file/, so the request goes to /get-file/{imageDbId}.
In this case it doesn't matter much what the endpoint behind the load property returns, but my guess is that we have to return a file object.
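For reference, a minimal sketch of what such a load endpoint could look like (Express; the route and file lookup are hypothetical, and FilePond just needs the raw file back, ideally with a Content-Disposition header carrying the file name):

const express = require('express');
const path = require('path');
const app = express();

// FilePond calls GET <server.url><server.load><source> for type: 'local'
app.get('/get-file/:id', (req, res) => {
  // a real app would look the file up by req.params.id
  const filePath = path.join(__dirname, 'images', 'test.jpeg');
  res.setHeader('Content-Disposition', 'inline; filename="test.jpeg"');
  res.sendFile(filePath);
});

app.listen(3000);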

Why isn't fineUploader sending an x-amz-credential property among the request conditions?

My server-side policy signing code is failing on this line:
credentialCondition = conditions[i]["x-amz-credential"];
(Note that this code is taken from the Node example authored by the FineUploader maintainer. I have only changed it by forcing it to use version 4 signing without checking for a version parameter.)
So it's looking for an x-amz-credential parameter in the request body, among the other conditions, but it isn't there. I checked the request in the dev tools and the conditions look like this:
0: {acl: "private"}
1: {bucket: "menu-translator"}
2: {Content-Type: "image/jpeg"}
3: {success_action_status: "200"}
4: {key: "4cb34913-f9dc-40db-aecc-a9fdf518a334.jpg"}
5: {x-amz-meta-qqfilename: "f86d03fb-1b62-4073-9458-17e1dfd8b3ae.jpg"}
As you can see, no credentials. Here is my client-side options code:
var uploader = new qq.s3.FineUploader({
  debug: true,
  element: document.getElementById('uploader'),
  request: {
    endpoint: 'menu-translator.s3.amazonaws.com',
    accessKey: 'mykey'
  },
  signature: {
    endpoint: '/s3signaturehandler'
  },
  iframeSupport: {
    localBlankPagePath: '/views/blankForIE9Support.html'
  },
  cors: {
    expected: true,
    sendCredentials: true
  },
  uploadSuccess: {
    endpoint: 'success.html'
  }
});
What am I missing here?
I fixed this by altering my options code in one small way:
signature: {
  endpoint: '/s3signaturehandler',
  version: 4
},
I specified version: 4 in the signature section. Not that this is documented anywhere, but apparently the client-side code uses this as a flag for whether or not to send along the key information needed by the server.

Cannot fire Bigcommerce webhooks

So far I've managed to create two webhooks using their official gem (https://github.com/bigcommerce/bigcommerce-api-ruby) with the following events:
store/order/statusUpdated
store/app/uninstalled
The destination URL is a localhost tunnel managed by ngrok (the https version).
status_update_hook = Bigcommerce::Webhook.create(
  connection: connection,
  headers: { is_active: true },
  scope: 'store/order/statusUpdated',
  destination: 'https://myapp.ngrok.io/bigcommerce/notifications'
)
uninstall_hook = Bigcommerce::Webhook.create(
  connection: connection,
  headers: { is_active: true },
  scope: 'store/app/uninstalled',
  destination: 'https://myapp.ngrok.io/bigcommerce/notifications'
)
The webhooks seem to be active and correctly created, as I can retrieve and list them:
Bigcommerce::Webhook.all(connection: connection)
I manually created an order in my store dashboard, but no matter which status I change it to, or how many times, no notification is fired. Am I missing something?
The exception that I'm seeing in the logs is:
ExceptionMessage: true is not a valid header value
The "is-active" flag should be sent as part of the request body--your headers, if you choose to include them, would be an arbitrary key value pair that you can check at runtime to verify the hook's origin.
Here's an example request body:
{
  "scope": "store/order/*",
  "headers": {
    "X-Custom-Auth-Header": "{secret_auth_password}"
  },
  "destination": "https://app.example.com/orders",
  "is_active": true
}
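Applied to the question's snippet, the fix is to move is_active out of headers and into the body. If you hit the REST API directly, the request looks something like this (a sketch; the store hash, token, and v3 hooks path are placeholders to adapt to your setup):

// Create the webhook with is_active in the request body, not in headers
fetch('https://api.bigcommerce.com/stores/STORE_HASH/v3/hooks', {
  method: 'POST',
  headers: {
    'X-Auth-Token': 'ACCESS_TOKEN', // placeholder API credentials
    'Content-Type': 'application/json',
    Accept: 'application/json'
  },
  body: JSON.stringify({
    scope: 'store/order/statusUpdated',
    destination: 'https://myapp.ngrok.io/bigcommerce/notifications',
    is_active: true,
    // optional custom header echoed back so you can verify the sender
    headers: { 'X-Custom-Auth-Header': 'secret_auth_password' }
  })
});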
Hope this helps!
