I'm using Strapi together with a static site generator (Gatsby), and I'm trying to automate the "rebuild" process whenever any content is modified in the CMS.
I'm trying to use the lifecycle callbacks mentioned in the Strapi documentation to do this: https://strapi.io/documentation/3.x.x/guides/webhooks.html
The problem is that these callbacks are being called multiple times across my different models. For example, the "afterUpdate" callback gets called 5 times, once for each of the 5 models I have.
I want to execute the build trigger function only once per change. Is there a way to do that?
This seems to be the correct behavior of Strapi lifecycle callbacks: https://github.com/strapi/strapi/issues/1153
Actually there is no issue here. In fact, when you create an entry, we first create the entry and then update it to handle relations. That's why many events are triggered when creating an entry.
The documentation is misleading and I don't think lifecycle methods should be used to trigger the SSG builds.
A better choice I found is to use the ContentManager.js controller; it's located at plugins/content-manager/controllers/ContentManager.js.
The create, update and delete functions get called only once per request, so this is a better place to trigger an SSG build:
  delete: async ctx => {
    ctx.body = await strapi.plugins['content-manager'].services['contentmanager'].delete(ctx.params, ctx.request.query);

    // This is just a request to another service
    // that triggers the SSG build.
    await build.triggerSSGBuild();
  },
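The build helper above isn't part of Strapi; it's just a custom module that calls out to whatever rebuilds your site. A minimal sketch, assuming your host exposes a build-hook URL (Netlify, Gatsby Cloud, etc.) and that axios is available as a dependency, might look like this:

// services/build.js (hypothetical module; adjust the path and hook URL to your setup)
const axios = require('axios'); // assumption: axios is installed

module.exports = {
  async triggerSSGBuild() {
    // Most static hosts expose a build-hook URL that starts a new build
    // when it receives an empty POST request.
    const hookUrl = process.env.BUILD_HOOK_URL;
    if (!hookUrl) return;
    await axios.post(hookUrl, {});
  },
};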
I am using Laravel and Vue.js for a project. When a page is rendered, API requests are sent to Laravel and I get some data back as a response. How can I save this response data in order to avoid sending requests every time I refresh the page?
I want to send the request only once, save the response somewhere, and have no further API requests go out when the page is refreshed.
Local storage is not very safe, and the Vuex store gets emptied when the page refreshes. Is there some other way?
Though your question might be a duplicate, you can use the vuex-persistedstate plugin. This plugin offers a paths option which provides a whitelist of variable paths which you want to be persistent.
See the example below; you could try adding 'setCount' to the paths option, which will then be persisted.
Example:
import createPersistedState from 'vuex-persistedstate';

const store = new Vuex.Store({
  // ...
  plugins: [createPersistedState({
    paths: ['setCount'],
  })],
});
The paths option is an array of any paths to partially persist the state. If no paths are given, the complete state is persisted. If an empty array is given, no state is persisted. Paths must be specified using dot notation. There are other options too; check the vuex-persistedstate documentation.
You can install a Vuex plugin called 'vuex-persistedstate', which can be installed via npm. Then in your Vuex index.js file you need to import createPersistedState from 'vuex-persistedstate';
and in the store definition add plugins: [createPersistedState()],
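Putting those two steps together, a minimal store file could look roughly like this (a sketch; the state and mutations are placeholders):

// store/index.js — minimal sketch of the setup described above
import Vue from 'vue';
import Vuex from 'vuex';
import createPersistedState from 'vuex-persistedstate';

Vue.use(Vuex);

export default new Vuex.Store({
  state: {
    // ...your state...
  },
  mutations: {
    // ...your mutations...
  },
  // persists the whole store to localStorage by default
  plugins: [createPersistedState()],
});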
You can also try Service Workers. For that, you may need to look into a progressive web application implementation. If your application has a lot of APIs which need to be cached, with timely updates, you can go with service workers.
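For illustration, a simple cache-first service worker for API responses could look roughly like this (a sketch, assuming your API routes live under /api/):

// sw.js — minimal sketch of caching API responses in a service worker
const CACHE_NAME = 'api-cache-v1';

self.addEventListener('fetch', (event) => {
  // assumption: only cache calls to your own API
  if (!event.request.url.includes('/api/')) return;

  event.respondWith(
    caches.open(CACHE_NAME).then(async (cache) => {
      const cached = await cache.match(event.request);
      if (cached) return cached;                    // serve from cache if present
      const response = await fetch(event.request);  // otherwise hit the network
      cache.put(event.request, response.clone());   // and store a copy for next time
      return response;
    })
  );
});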
I'm looking for documentation on whether Bolt [@slack/bolt] can be configured for use with multiple workspaces. I am currently using the Slack Bolt NPM package SDK to develop a Slack app, and I have it working for a single workspace, but now I want to make it available for multiple workspaces. I currently have it configured in the API settings to allow distribution, but I don't understand what needs to be done inside the app so that it picks up events triggered across the multiple workspaces it's installed in.
I only seem to be able to make it work with a single workspace at a time.
I have scoured all available sources but can't find anything that points me in the right direction. I feel like there is something I am not understanding, or maybe it isn't possible with Bolt...
Development environment (NodeJS), working with the [Slack Bolt SDK from Slack]
Does the way you configure the client change for a distributable app (no longer passing in the signingSecret and token), and does it have to be initialized in a different way? I feel like I am missing something. Can anyone point me to documentation on how to make the app work for multiple workspaces with the Bolt framework (if it is possible), or to an existing project that implements this as an example?
Thank you, I appreciate any help/feedback.
Does that require that new individual instances of the app be set up and running for each workspace (i.e. one app can't handle all workspaces)? Also, does that mean a new app has to be set up for each workspace so it can be configured with where to point its event handlers? I was initially thinking that a single app could be the central app handling multiple workspaces, and that additional workspaces would be able to add the app from the 'Add Slack App' button in the distribution settings of that single app.
I currently have it set up to pull in and store the additional workspaces' IDs and tokens and to initialize them through the App constructor values (but I have to set each one up as its own individual running app per workspace). The big issue I am having is that the distribution settings in the API only seem to let you specify one app endpoint to direct events to, but each app is set up as its own app in this pattern (so I would need one for each).
If you build using the OAuth examples here, then it will work across multiple workspaces using a single instance of the app.
Instantiating the app would look like this in your code -
const { App } = require('@slack/bolt');

const app = new App({
  signingSecret: process.env.SLACK_SIGNING_SECRET,
  clientId: process.env.SLACK_CLIENT_ID,
  clientSecret: process.env.SLACK_CLIENT_SECRET,
  stateSecret: 'my-state-secret',
  scopes: ['channels:read', 'groups:read', 'channels:manage', 'chat:write', 'incoming-webhook'],
  installationStore: {
    storeInstallation: async (installation) => {
      // change the line below so it saves to your database
      return await database.set(installation.team.id, installation);
    },
    fetchInstallation: async (installQuery) => {
      // change the line below so it fetches from your database
      return await database.get(installQuery.teamId);
    },
    storeOrgInstallation: async (installation) => {
      // include this method if you want your app to support org wide installations
      // change the line below so it saves to your database
      return await database.set(installation.enterprise.id, installation);
    },
    fetchOrgInstallation: async (installQuery) => {
      // include this method if you want your app to support org wide installations
      // change the line below so it fetches from your database
      return await database.get(installQuery.enterpriseId);
    },
  },
});
One App ID === One instance, which can be installed on multiple workspaces once you activate public distribution.
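For completeness, starting the app is the same as in the single-workspace case; with the OAuth options above, Bolt's built-in receiver also serves the default /slack/install and /slack/oauth_redirect routes used during installation (a sketch):

(async () => {
  // With OAuth configured, starting the app also exposes the default
  // /slack/install and /slack/oauth_redirect routes.
  await app.start(process.env.PORT || 3000);
  console.log('⚡️ Bolt app is running');
})();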
I profiled the performance of my application using React Redux by following this article by Ben Schwarz.
In the User Timing section, I get these warnings (with a no-entry sign):
There are two messages:
(Committing Changes) Warning: Lifecycle hook scheduled a cascading update
Connect(MyComponent).componentDidUpdate Warning: Scheduled a cascading update
I did some searching but found nothing specific. It seems related to the componentDidUpdate function of the connect HOC of react-redux.
What do these messages mean?
The messages mean that componentDidUpdate is receiving changed props or setting state, so the update will cascade (happen right after the last update), because componentDidUpdate is the last lifecycle method called during an update. Basically, React has determined that another update needs to happen before it has even finished its current update. I'm not sure whether this is a problem with react-redux or with your application.
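For illustration, this is roughly the pattern the profiler flags, together with the usual guard; the component and prop names here are made up, not taken from your app:

import React from 'react';

// Minimal sketch of the pattern React is warning about (names are illustrative).
class MyComponent extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
  }

  componentDidUpdate(prevProps) {
    // setState here schedules another update right after the current one
    // finishes — that is the "cascading update" the profiler flags.
    // Guarding with a comparison keeps it from firing on every update.
    if (prevProps.items !== this.props.items) {
      this.setState({ count: this.props.items.length });
    }
  }

  render() {
    return <span>{this.state.count}</span>;
  }
}

export default MyComponent;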
I want to add "Weather: 24C" to the rental-listing component of the super-rentals tutorial app.
Where would be the "best-practices" place to put this ajax request?
Ember.$.getJSON(`http://api.openweathermap.org/data/2.5/weather?q=${location}&APPID=${apiKey}`)
  .then(function(json) {
    // $.getJSON already parses the response, so no JSON.parse is needed
    return json.main.temp;
  });
Do I need to add a component, add a model, add a service, add a second adapter, modify the existing adapter? Something else? All of these? Is the problem that the tutorial uses Mirage? I ask this because when I think I'm getting close, I get an error like this:
Mirage: Your Ember app tried to GET
'http://api.openweathermap.org/data/2.5/weather?q=london&APPID=5432',
but there was no route defined to handle this request.
Define a route that matches this path in your
mirage/config.js file. Did you forget to add your namespace?
You need to configure Mirage to allow calls to the outside when Mirage is active; what I mean is using the this.passthrough function within mirage/config.js, which is explained quite well in the API documentation.
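For example, the config change could look roughly like this (a sketch; keep your existing route definitions):

// mirage/config.js
export default function() {
  // ...your existing Mirage routes...

  // Let requests to the weather API hit the real network instead of Mirage.
  this.passthrough('http://api.openweathermap.org/**');
}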
Regarding your question about where to make the remote call: it depends.
If you need the data from the server when a route is about to open, you should prefer putting the call in the model hook of the corresponding route (see the sketch at the end of this answer).
If you intend to develop a component that is to be reused from different routes, or even from different applications, making the same remote call over and over again, you can consider putting the AJAX call in the component. Even though that is not a very common case, it might be that a component should itself be responsible for fetching and displaying its data so it can be reused in different places; there is nothing preventing you from doing so. However, by the usual data-down, actions-up principle, remote calls generally belong in routes or controllers.
Whether to use an ember-data model is another thing to consider. If you intend to use ember-data, you should not use Ember.$.ajax directly but rather the store provided by ember-data, perhaps with a custom adapter/serializer to convert the data into the format ember-data accepts, in case the server response does not already match it. In summary, you do not need models if you use plain AJAX as you do in this question.
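As a concrete sketch of the model-hook approach mentioned above (the route name, location and API key below are illustrative; in practice you would combine the weather call with whatever the route already loads, e.g. via Ember.RSVP.hash):

// app/routes/rentals.js — illustrative; adjust to the route that needs the weather
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    const location = 'london';                // illustrative value
    const apiKey = 'YOUR_OPENWEATHERMAP_KEY'; // illustrative value
    return Ember.$.getJSON(
      `http://api.openweathermap.org/data/2.5/weather?q=${location}&APPID=${apiKey}`
    ).then(json => json.main.temp);           // $.getJSON already parses the response
  }
});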
I am working on designing a lengthy approval system in CRM using a combination of OOB workflows (designed using the CRM UI Workflow Designer) and custom actions (written using .NET code). The idea is to keep the branching/simpler logic in the OOB workflow and call custom Actions wherever necessary. However, I have a few questions about this approach:
How can I handle run-time errors generated in the action code?
For example, one of my Actions contain the code to push data to an external system via web service. In case this web service call fails, I need to perform some steps in the parent workflow.
How can I handle 'if' conditions which can't be handled by the 'Check Condition' step? For example, suppose that before performing a certain workflow step I need to check some data which can't be queried within CRM. I can create an Action which returns true/false based on custom logic, which can then be checked in the parent workflow.
An alternate approach would be to use plugins but I am inclined towards using OOB functionalities as much as possible. Any inputs would be helpful.
First of all, let's clear up the semantics, because I'm not sure you understand what you are talking about: there are Actions (you can refer to them as custom actions, but then you should refer to every workflow you create as custom as well; I gathered from your post that you describe your workflows as OOB, which is also semantically wrong, since every workflow you create is a custom workflow, perhaps built from OOB steps, but that's a different story) and Custom Workflow Activities. I'm assuming that you want to use Custom Workflow Activities, because they are better suited for what you are trying to achieve here. Also, you tagged your question as CRM 2011 and CRM 2013; I'm not sure what you meant, because Actions were not available in CRM 2011.
So basically, Custom Workflow Activities can have input and output parameters. Output parameters are the answer to both your questions, because you can use them to get the error message after your custom processing, or use them in conditional statements later in your workflow. An output parameter can be defined like this:
[Output("Error message")]
public OutArgument<string> ErrorMessage { get; set; }
You can find more examples here:
https://technet.microsoft.com/en-us/library/gg327842.aspx
You can of course set these properties simply by calling:
ErrorMessage.Set(executionContext, messageText);
So now, when you define your workflow, wherever you need something not configurable with OOB blocks, you can put in your custom block; after it's done, simply check its output for the error (this is just an example; you can extend it by adding additional output parameters to make it more generic). If the output is empty then do one thing, if not then do something else, for example send an email with the error message. It all depends on what you are trying to achieve.
Actions serve different purposes: they are useful for creating logic that can easily be called through a plugin or JavaScript (Web API), and they also let you register a plugin on them while doing everything within one transaction. Maybe that will be useful somewhere in your workflow, but as far as I remember, in CRM 2013 Actions could not be called from a workflow...
UPDATE:
OK, so if we are dealing with CRM 2016, we can call an Action from a workflow. What is best in this situation really depends on the scenario and what we are trying to achieve, but to make the decision easier, let me highlight the main differences:
1) Activities are simply blocks of code that can be put inside your workflow. Actions by themselves are not code; they are custom messages that you can call. Of course you can register a plugin on such a custom message and do any custom logic you want there, but that is another step to take.
2) Actions can run in a transaction; Activities cannot (but you can run Activities inside Actions, so in that case they can run in a transaction).
3) Actions can be called directly from JavaScript, plugins and workflows. That's a great thing, but if you create, let's say, 10 custom Actions which you will use ONLY inside your one workflow, they will all be visible when registering plugins (and any JS developer will be able to call them from JS).
So basically Actions are a big, fat feature that can serve many purposes (including running Activities on their own!), while Activities are much simpler, but in your case they will also do the job. So you should ask yourself two questions:
Do I need my logic to run inside a transaction?
And
Do I need to call this logic from somewhere other than my workflow?
If the answer to either is "Yes", then go for Actions; if not, go for Activities, because otherwise you would be overcomplicating things without any good reason.