Migrate Salesforce knowledge articles including all historical versions from one environment to another - heroku

I'm trying to migrate knowledge articles from one Salesforce production environment to a new Salesforce production environment. I need to migrate ALL articles (published, archived, and draft) and all versions of each. I've tried to use the Heroku tool at the URL below:
https://kbapps2.herokuapp.com/exportk2k/submit
However, it seems it is not bringing over the historical versions. For example, if article #12345 has v1, v2, and the currently published v3, when I export the published articles, I only receive v3.
It is very important that we migrate all versions of each knowledge article to the new Salesforce environment that we have implemented. Can anyone confirm whether this tool is able to export the historical versions of the articles, or suggest another solution for doing this?
I've tried exporting the articles using the Heroku app, selecting each of published, archived, and draft. I only receive one article record for each article, not the historical versions.
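For what it's worth, here is a minimal sketch of how I would expect to pull every version directly over the API with the simple-salesforce Python library. This is untested; the PublishStatus/IsLatestVersion combinations reflect my understanding of the Knowledge data model, not verified behavior:

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...",
                security_token="...")

# KnowledgeArticleVersion queries are restricted: you must filter on
# PublishStatus (some orgs also require a Language filter), and archived
# history seems to need IsLatestVersion = false explicitly.
soql = ("SELECT Id, KnowledgeArticleId, Title, VersionNumber, PublishStatus "
        "FROM KnowledgeArticleVersion "
        "WHERE PublishStatus = '{status}' AND IsLatestVersion = {latest}")

records = []
for status, latest in [("Online", "true"), ("Draft", "true"),
                       ("Archived", "true"), ("Archived", "false")]:
    records.extend(sf.query_all(soql.format(status=status,
                                            latest=latest))["records"])

for r in records:
    print(r["KnowledgeArticleId"], r["VersionNumber"], r["PublishStatus"])
```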

Related

Using custom metrics in self-hosted sentry

I have started using Sentry within my org and I'm loving it so far.
I've been trying to use its performance monitoring tool with custom metrics added.
While I can add custom metrics to the transactions I'm generating in sentry_sdk (for Python), I can't get access to them on the dashboard of our self-hosted installation of Sentry.
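For reference, this is roughly how I'm attaching a custom metric to a transaction (a minimal sketch; the DSN, transaction name, and measurement name are placeholders):

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@sentry.internal.example.com/1",
    traces_sample_rate=1.0,
)

# Attach a custom measurement to the current transaction; on SaaS plans
# this shows up under the transaction in the Performance view.
with sentry_sdk.start_transaction(op="task", name="nightly-import"):
    rows = 512  # ... the actual work ...
    sentry_sdk.set_measurement("rows_processed", rows, "none")
```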
After a lot of digging, I came across this paragraph here, which states that:
This feature is only available to organizations on our latest plans, which include Dynamic Sampling. Customers on legacy plans must move to one of these plans in order to access custom metrics.
From what I gather, their plan in general is to run Sentry on their own servers, unless you opt in to the self-hosted code that can be downloaded from GitHub here.
This is absolutely a bummer because I know my org will not consider moving internal data to third-party servers.
I'm wondering if someone knows of a solution to this problem: do the Sentry folks offer (paid) options that enable this feature on the self-hosted version, or has someone patched it into the open-source code?
I'd also love to hear any out-of-the-box suggestion you folks might have.

How do I manage microservices with DevOps?

Say I have a front-end node and three back-end nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but I cannot work out how a DevOps pipeline would work for microservices.
Would each micro-service get its own GitHub repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3, but blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync regarding which versions they are supposed to rely on?
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each micro-service get its own GitHub repo and CI/CD pipeline?
From my experience, you can do both. I've seen some teams putting multiple micro-services in one repository. We were putting each micro-service in a separate repository, as the Jenkins pipeline was built in a generic way to build them that way. This included having some configuration files in specific directories like "/Scripts/microserviceConf.json". This helped us in some cases. In general, you should also consider the cost, as GitHub has a pricing model which takes into account how many private repositories you have.
How do I keep the versions in sync? Let's say the tools micro-service uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync regarding which versions they are supposed to rely on?
You need to be backwards compatible. That means if your blog's 2.4 version is not compatible with tools version 2.3, you have high dependency and coupling, which goes against one of the key benefits of micro-services. There are many ways to get around this. You can introduce a versioning system for your micro-services. If you have a breaking change to, let's say, an API, you need to keep supporting the old version for some time and create a new v2 of that API. POST "blogs/api/blog" would then get a new counterpart POST "blogs/api/v2/blog" with the new features, and the tools micro-service gets some bridge time in which you support both APIs so it can migrate to v2 (see the sketch below).
Also take a look at Semantic versioning here.
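As a sketch of what serving both versions side by side during the bridge period could look like (hypothetical Flask handlers, not the actual blog service):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/blogs/api/blog", methods=["POST"])
def create_blog_v1():
    # Old contract: kept alive until consumers such as tools have migrated.
    data = request.get_json()
    return jsonify({"id": 1, "title": data["title"]}), 201

@app.route("/blogs/api/v2/blog", methods=["POST"])
def create_blog_v2():
    # New contract: breaking changes live here; v1 stays untouched.
    data = request.get_json()
    blog = {"id": 1, "title": data["title"], "tags": data.get("tags", [])}
    return jsonify(blog), 201

if __name__ == "__main__":
    app.run(port=5000)
```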
If I'm deploying the service tools to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of micro-service orchestration and service discovery. Usually your cloud provider has specific services to deal with this. You can take a look at AWS ECS and/or the AWS EKS Kubernetes service and how they do it.
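Not a definitive answer, but a tiny sketch of the underlying idea: inside a Docker network or Kubernetes cluster, the platform's DNS resolves a stable service name to the current IP(s), so callers never hard-code addresses (the service name "tools" and the port are assumptions):

```python
import socket

# Resolve the current address(es) of the "tools" service via the
# platform's DNS instead of hard-coding IPs. This only resolves inside
# a network where service names are registered (Docker, Kubernetes).
infos = socket.getaddrinfo("tools", 80, proto=socket.IPPROTO_TCP)
addresses = {info[4][0] for info in infos}
print("tools is currently reachable at:", addresses)
```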
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
I would suggest using Docker and docker-compose to create your development setup. You would create a local development network of Docker containers which represents your whole system. This would include your micro-services, infrastructure (database, cache, helpers), and others. You can read more about it in this answer here; it is described in the section "Considering the Development Setup".
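A minimal docker-compose.yml for a system like the one in the question might look like this (service names, build paths, and the Postgres image are assumptions, not a definitive setup):

```yaml
version: "3.8"
services:
  frontend:
    build: ./frontend        # each service keeps its own Dockerfile
    ports:
      - "8080:8080"
    depends_on: [tools, blog, store]
  tools:
    build: ./tools
  blog:
    build: ./blog
  store:
    build: ./store
  db:                        # shared infrastructure for local development
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker-compose up` then brings the whole system up locally, which gets you close to the one-command experience of a monolith.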
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose

Store RASA chat history in local file system

I am using Rasa to create a chat bot. I want to store the chat history as a .txt file in the local file system. I found that Rasa supports chat history storage for MongoDB, Redis, and SQL. However, I want to store the chat history in the local file system. Any help would be highly appreciated.
This is currently not supported out of the box by Rasa. You could however implement your own custom tracker store.
I would actually recommend using the SQLTrackerStore with SQLite. SQLite is also file-based, which means you don't have to run anything in the background, and it would probably be faster and less implementation effort than writing your own tracker store. Note that the SQLTrackerStore is only supported since Rasa 1.0.
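A minimal sketch of the corresponding endpoints.yml (the database filename is an assumption; check the Rasa docs for your exact version):

```yaml
# Point Rasa's tracker store at a local SQLite file; no external
# database server needs to run.
tracker_store:
  type: SQL
  dialect: "sqlite"
  db: "rasa_tracker.db"
```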

What is Snaplogic?

As per Wikipedia:
SnapLogic is a commercial software company that provides Integration Platform as a Service (iPaaS) tools for connecting Cloud data sources, SaaS applications and on-premises business software applications.
It is surely a competitor to Informatica, but it doesn't seem to be just another ETL tool. I have a rough understanding that it is used for data integration, but that's about it.
Is it merely an ETL tool or does it have any other functionality? Also, what are iPaaS tools in general?
Well, the best place to learn about SnapLogic is their website, https://www.snaplogic.com/
Here is a video of SnapLogic: https://www.youtube.com/watch?v=KYJK7bjOlA0
A simple developer friendly example:
Let's say I want to search for tweets posted with a particular hashtag by a particular person and write that data into a database of my choice or into Amazon S3. SnapLogic allows me to do that without learning the Twitter API or AWS. SnapLogic takes care of the abstraction for the user so that they can focus on the business logic of things.
The demos are available in the blog: http://www.snaplogic.com/blog
A look at SnapLogic on Crunchbase is not a bad idea, and you can also find their competitors there.
Your other question, what iPaaS (integration Platform as a Service) tools are in general, is too basic and can simply be googled. Basically, there are two types of cloud integration: iPaaS and dPaaS.
Basically, SnapLogic is an iPaaS (Integration Platform as a Service) tool, one of the growing cloud-based integration/ETL tools.
As per the Gartner report, it is one of the leaders in "Enterprise Integration Platform as a Service"; refer to the link below:
https://www.snaplogic.com/press-releases/gartner-names-snaplogic-as-a-leader-in-the-magic-quadrant
About the tool, its three main components:
1. Designer: a canvas area to play with and develop your integration pipelines.
2. Manager: the main place to manage all projects, pipelines, assets, and accounts, to create users and grant permissions, and also to import and export projects.
3. Dashboard: for monitoring pipeline logs and information on running pipelines and their history.
Useful links:
1. Main site: https://www.snaplogic.com
2. SnapLogic free trial: https://www.snaplogic.com/free-trial
3. SnapLogic documentation: http://doc.snaplogic.com/
4. SnapLogic community: https://community.snaplogic.com/
5. SnapLogic blog: https://www.snaplogic.com/blog
Also, they recently released SnapLogic eXtreme; please have a look:
https://www.snaplogic.com/press-releases/introducing-snaplogic-extreme-to-help-data-engineers-operationalize-cloud-based-big-data-integrations

CodenameOne plan for the cloud storage API

Since CodenameOne doesn't support "the cloud storage API" any more, and parse.com is going to retire soon as well, does CodenameOne have any plan to release a new Cloud Storage API, or to provide suggestions/guidelines to help developers deal with the parse4cn1 library code, cloud code, database structure, and data in parse.com?
That is something you will have to figure out yourself, as parse4cn1 was initially contributed by a community member and wasn't developed by the CodenameOne team.
You can use simple web services created in PHP, Python, or Java, hosted along with your content at any ISP.
You may also have a look at Amazon AWS, which is promising; they provide a cloud solution, but their SDKs are not yet integrated with CodenameOne.
I made the parse4cn1 lib and I'm also wondering what's smartest to do. With the announcement of Parse.com's imminent shutdown, there's been a lot of discussion around alternatives. My feeling is that "the dust is yet to settle" as to which options are best and reliable for the longer term (it would be a pity to migrate to another service only for it to be shut down soon). So I personally plan to wait till sometime in Q2 to do a proper evaluation of the alternatives. Hopefully, there'll be more clarity then.
The option to host one's own Parse server (e.g. on AWS or Heroku) is getting interesting. They recently announced support for push notifications on iOS and Android. If (when?) they open source the Parse.com dashboard code, I think that option would be much more interesting.
At some point in the coming months, I plan to make a parse4cn1 release that exposes an option to set the server path. With that, anyone migrating to the Parse server option should, in principle, be able to continue using the cn1lib, at least for features that are supported by the open-source Parse server.
PS: Here are pointers to some of such discussions on Parse alternatives:
https://github.com/relatedcode/ParseAlternatives
http://www.slant.co/topics/5219/compare/~firebase_vs_kumulos_vs_kinvey
