Resolving @salesforce modules on Heroku

My company has an app on the Salesforce Platform that we are planning to expand. I'm researching the pros and cons of hosting the expanded features on Heroku instead of building more on Salesforce. One of the biggest drawbacks I currently see is not being able to access @salesforce modules, but I cannot find documentation on that. Would you know if @salesforce modules can be imported?
ref: https://developer.salesforce.com/docs/component-library/documentation/en/lwc/lwc.reference_salesforce_modules

Some, but not all, base Lightning Web Components can be used in open source LWC-based applications built on the Heroku platform or elsewhere. Learn more about LWC OSS at Trailhead.
However, other parts of the Lightning Web Component ecosystem are specific to the Salesforce platform. For example, in LWC OSS you cannot import from @salesforce/apex. You also won't be able to import from modules like @salesforce/schema, which provides schema details of your org, because you're not deploying code and metadata to your org.
What you'll be able to use are the portions of LWC that are built on standard JavaScript, but not the interaction with your Salesforce org. If you need to interact with the org, you'd have to establish your own API connection and make all of the calls yourself.
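As a minimal sketch of what "making the calls yourself" could look like from a Node.js app on Heroku, using the standard Salesforce REST API (the instance URL, API version, and token handling are placeholders; the OAuth 2.0 flow that produces the access token is omitted):

    // Query a Salesforce org over its REST API instead of importing @salesforce/apex.
    const fetch = require('node-fetch');

    async function queryContacts(instanceUrl, accessToken) {
      // accessToken would come from an OAuth 2.0 flow you implement yourself.
      const soql = encodeURIComponent('SELECT Id, Name FROM Contact LIMIT 5');
      const res = await fetch(
        `${instanceUrl}/services/data/v57.0/query?q=${soql}`,
        { headers: { Authorization: `Bearer ${accessToken}` } }
      );
      if (!res.ok) throw new Error(`Salesforce API error: ${res.status}`);
      return (await res.json()).records;
    }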

Related

How do I manage micro services with DevOps?

Say I have a front-end node and three backend nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but cannot work out how a DevOps pipeline would work for microservices.
Would each micro-service get its own GitHub repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools micro-service uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync as to which version they are supposed to rely on?
If I'm deploying the tools service to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each micro-service get its own GitHub repo and CI/CD pipeline?
From my experience you can do both. I have seen some teams putting multiple micro-services in one repository. We were putting each micro-service in a separate repository, as the Jenkins pipeline was built in a generic way to handle them like that. This included having some configuration files in specific directories, like "/Scripts/microserviceConf.json". This helped us in some cases. In general you should also consider cost, as GitHub has a pricing model that takes into account how many private repositories you have.
How do I keep the versions in sync? Let's say the tools micro-service uses blog version 2.3. But blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync as to which version they are supposed to rely on?
You need to be backwards compatible. This means that if your blog 2.4 version is not compatible with tools version 2.3, you have high dependency and coupling, which goes against one of the key benefits of micro-services. There are many ways to get around this. You can introduce a versioning system for your micro-services. If you have a breaking change to, let's say, an API, you need to keep supporting the old version for some time and create a new v2 of the API. For example, POST "blogs/api/blog" would get a new counterpart POST "blogs/api/v2/blog" with the new features, and the tools micro-service would get some bridge time during which you support both APIs so it can migrate to v2 (see the sketch below).
Also take a look at Semantic Versioning here.
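A minimal sketch of that bridge period (assuming Node.js with Express, which the answer does not prescribe; routes and payload fields are hypothetical), with v1 and v2 of the endpoint running side by side in the blog service:

    const express = require('express');
    const app = express();
    app.use(express.json());

    // v1: the old contract that the tools service still depends on.
    app.post('/blogs/api/blog', (req, res) => {
      res.status(201).json({ id: 1, title: req.body.title });
    });

    // v2: the breaking change lives under /v2 until all clients migrate.
    app.post('/blogs/api/v2/blog', (req, res) => {
      res.status(201).json({ id: 1, title: req.body.title, tags: req.body.tags || [] });
    });

    app.listen(3000);

Once no client calls v1 anymore, the old route can be removed.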
If I'm deploying the tools service to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of micro-service orchestration and service discovery. Usually your cloud provider's specific service has tools to deal with this. You can take a look at AWS ECS and/or the AWS EKS Kubernetes service and how they do it. The common idea is that services address each other by a stable name rather than a fixed IP (see the sketch below).
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices
for developing locally with several different services?
I would suggest using Docker and docker-compose to create your development setup. You would create a local development network of Docker containers representing your whole system. This would include your micro-services, infrastructure (database, cache, helpers) and others (see the sketch below). You can read about it more in this answer here. It is described in the section "Considering the Development Setup".
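A minimal docker-compose.yml sketch for the services named in the question (the directory names and the database image are assumptions, not from the answer):

    version: "3.8"
    services:
      frontend:
        build: ./frontend      # each directory holds that service's own Dockerfile
        ports:
          - "8080:8080"
        depends_on: [tools, blog, store]
      tools:
        build: ./tools
      blog:
        build: ./blog
      store:
        build: ./store
      db:
        image: postgres:15     # shared infrastructure for local development
        environment:
          POSTGRES_PASSWORD: dev

With this in place, "docker-compose up" brings the whole system up with one command, mirroring the one-command workflow of a monolith.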
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose

OpenWhisk support of websockets and static websites

I'm choosing a serverless platform for my projects. I have explored AWS and found it excessively complicated: it provides an enormous number of settings, but some basic scenarios turn out to be too hard to implement.
The other platform that looks promising to me is IBM Cloud with its OpenWhisk, and I'd like to check whether the necessary capabilities are either implemented or planned for the near future.
Questions
Can I use websockets as a trigger for my functions on connect, message and disconnect? I found only a half-year-old discussion and nothing more, but this feature is in demand for real-time applications.
Can I serve static websites both on my custom domain and under a subpath? I saw recipes where a Docker container and Lambda functions were employed, but writing my own implementation of Nginx seems like nonsense. This feature is also strongly in demand for single-page applications (SPAs), and there can be multiple such SPAs on one domain.
This blog post with an IBM Cloud Functions overview has links and answers to your second question. There are tutorials on how to use custom domains with IBM Cloud Functions as a backend for applications (see this tutorial with a static page / SPA on a custom domain, and recipes for Express and Flask).
IBM Cloud Functions also has a package to post to websockets. AFAIK there is no functionality to listen to websockets. My understanding is that serverless is incompatible with the "always on" nature of websockets, and the serverless runtime would need an API gateway or similar to manage the communication. If something is received, the action would be invoked.
Support for websockets for the ActionLoop proxy (used by Go, Swift, Python, PHP, Rust and Java) is here: https://github.com/sciabarracom/incubator-openwhisk-runtime-go/tree/websocket-support.
It can be used to build runtimes that support websockets, but you need to deploy the runtime yourself using Kubernetes. The support has been postponed, as an integration of OpenWhisk with Knative is a better path to include it in OpenWhisk.
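To make the "gateway invokes the action" model above concrete, here is a minimal sketch (not from the answers; the message field is hypothetical) of a stateless OpenWhisk Node.js action that a gateway could invoke with each received websocket message:

    // OpenWhisk Node.js actions export a main(params) function and
    // return a result object; the action holds no connection state.
    function main(params) {
      const text = params.message || '';
      return { reply: 'received ' + text.length + ' characters' };
    }

    exports.main = main;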

What is SnapLogic?

As per Wikipedia:
SnapLogic is a commercial software company that provides Integration Platform as a Service (iPaaS) tools for connecting Cloud data sources, SaaS applications and on-premises business software applications.
It is surely a competitor to Informatica, but it doesn't seem to be just another ETL tool. I have a rough understanding that it is used for data integration, but that's about it.
Is it merely an ETL tool or does it have any other functionality? Also, what are iPaaS tools in general?
Well, the best place to learn about SnapLogic is their website, https://www.snaplogic.com/
Here is a video of SnapLogic: https://www.youtube.com/watch?v=KYJK7bjOlA0
A simple, developer-friendly example:
Let's say I want to search for tweets posted with a particular hashtag by a particular person and write that data into a database of my choice or into Amazon S3. SnapLogic allows me to do that without learning the Twitter API or AWS. SnapLogic takes care of the abstraction for the user so that they can focus on the business logic of things.
The demos are available in the blog: http://www.snaplogic.com/blog
A look at SnapLogic on Crunchbase is not a bad idea, and you can also find their competitors there.
Your other question, what iPaaS (integration Platform as a Service) tools are in general, is quite basic and can simply be googled. Basically, there are two types of cloud integration: iPaaS and dPaaS.
Basically, SnapLogic is an iPaaS (Integration Platform as a Service) tool and one of the growing cloud-based integration/ETL tools.
As per the Gartner report, it is one of the leaders in "Enterprise Integration Platform as a Service"; refer to the link below:
https://www.snaplogic.com/press-releases/gartner-names-snaplogic-as-a-leader-in-the-magic-quadrant
About the tool, and a few useful links:
1. Designer: a canvas area to design and develop your integration pipelines.
2. Manager: the main place to manage all the projects, pipelines, assets and accounts, to create users and grant permissions, and also to import and export projects.
3. Dashboard: for monitoring pipeline logs and information on running pipelines and their history.
Useful links:
1. Main site: https://www.snaplogic.com
2. SnapLogic free trial: https://www.snaplogic.com/free-trial
3. SnapLogic documentation: http://doc.snaplogic.com/
4. SnapLogic community: https://community.snaplogic.com/
5. SnapLogic blog: https://www.snaplogic.com/blog
Also, they recently released SnapLogic eXtreme; please have a look:
https://www.snaplogic.com/press-releases/introducing-snaplogic-extreme-to-help-data-engineers-operationalize-cloud-based-big-data-integrations

Is Parse an adequate solution here?

I'm contemplating using Parse as a platform for my app, as I'm trying to avoid creating and managing the cloud infrastructure myself.
For the sake of simplicity let's say that my app will hook into an Exchange Server and will need to leverage some hosted Machine Learning service to categorize my e-mail and report on insights found.
I'm assuming that Parse would store my core data, while the hosted ML will store the "Big Data" associated with processing for insights.
I'm also expecting my app to receive push notifications generated by the hosted ML service.
Does this sound like a plausible way to go about it and leverage Parse, or am I better off developing the backend myself?
I think parse.com is the right place for your requirements, because they have everything you need: storage of core data, push notifications, a cloud-code module which can be integrated with Heroku, social integration, and user-management functionality.
They also have a large set of client libraries for desktop and mobile apps (Node, Java, .NET, etc.), as well as libraries for embedded devices.
The biggest advantage is that everything is already set up, so you are focused on software development, not on infrastructure. This is my opinion.
I've been experimenting with the above stack and so far have been really impressed. It seems like a viable path forward. The Cloud Code capability of Parse is very solid and easy to work with. If you want to call services outside of Parse, this is also possible: just issue REST calls (see the sketch below).
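A minimal sketch of issuing such a REST call from Cloud Code (the function name and ML endpoint are hypothetical; classic parse.com Cloud Code style):

    // Forward an e-mail to a hosted ML service and return its category.
    Parse.Cloud.define("categorizeEmail", function(request, response) {
      Parse.Cloud.httpRequest({
        method: "POST",
        url: "https://ml.example.com/categorize", // hypothetical ML endpoint
        headers: { "Content-Type": "application/json" },
        body: { subject: request.params.subject, text: request.params.text }
      }).then(function(httpResponse) {
        response.success(httpResponse.data.category);
      }, function(error) {
        response.error("ML service call failed: " + error.status);
      });
    });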

UML Modeling of Client/Server Systems in Regard to Mobile Applications

I need some advice on how to go about developing the model for a Client/Server system using UML.
In a short explanation the system consists of a Mobile client which runs on mobile phones. As is common with most mobile applications, the mobile application connects to a server in order to carry out some processing, logging for backups, and connectivity to third-party applications.
Where I need advice is that, envisaging the whole system, almost all the classes in the mobile application are replicated in the server application, with the exception of a few classes. Likewise, the server application contains most of the same classes as the mobile application, plus some others and some extra functionality.
To give an example, the mobile application has a User class that consists of the actor's personal details and login details. Likewise, the server application has a User class with the same members as the mobile application's User class, except that it has some functionality/methods that are not in the mobile application.
The server application also has a class that connects to a third-party application to carry out its billing functionality. This class is obviously replicated in the mobile application too, but without the mobile application's billing class having the method to connect to the third party.
OK, to the issue at hand: I feel that if I am going to follow the principles of UML modeling, I should not replicate these classes but rather make use of reuse in the modeling. As I am using packages to separate the mobile application from the server application, I guess it would involve:
1. Having the basic classes that do the same thing (methods and members) in both the mobile and server applications.
2. For classes with extra members and functionality in either the mobile or server application, using inheritance dependencies to build extra classes to take care of them.
3. Using an <<include>> dependency to add the classes generated in #2 to the mobile and server packages, or using an <<include>> dependency to add the classes generated in #1 to the mobile and server packages, as the case may be.
Is my line of thought on how to implement the modeling correct? I feel replicating the same classes would be against the ideals of UML modeling, yet the fact that there is a separation between the mobile and server applications sort of pushes me to think along the lines of modeling the mobile application totally separately and then modeling the server application separately.
Again, please tell me whether my line of thought is correct.
It seems to me you just have one model with three packages:
a commonComponents package containing the classes which are used in the mobile and server application
a mobile package containing the classes used in the mobile application
a server package containing the classes used in the server application
The mobile and server packages import (an <<import>> relationship) the elements contained in the commonComponents package. For instance, the commonComponents::User class is imported into the server package, where it is extended by the server::User class. Note that, as packages are namespaces, you can have classes with the same name (see the code sketch below).
I hope this might help you
http://lowcoupling.com/post/47802411601/uml-diagrams-and-models-with-papyrus
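As a minimal code sketch of that package layout (not from the answer; JavaScript modules stand in for the UML packages, and the class bodies are hypothetical):

    // commonComponents/user.js -- shared by both applications
    class User {
      constructor(name, login) { this.name = name; this.login = login; }
    }
    module.exports = { User };

    // server/user.js -- the server package imports and extends the common class
    const { User } = require('../commonComponents/user');
    class ServerUser extends User {
      persistToDatabase() { /* server-only functionality */ }
    }
    module.exports = { User: ServerUser }; // same name, different namespace

The mobile package would import commonComponents/user.js directly, or extend it in the same way where it needs mobile-only behavior.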
