Using custom metrics in self-hosted Sentry - performance

I have started using Sentry within my org and am loving it so far.
I've been trying to use its performance monitoring tool with custom metrics added.
While I can add custom metrics to the transactions I'm generating with sentry_sdk (for Python), I can't access them on the dashboard of our self-hosted installation of Sentry.
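For reference, this is roughly what I'm doing in the SDK (the DSN, transaction name, and metric names are just placeholders from my app):

```python
import time

import sentry_sdk

# Standard SDK init; the DSN below is a placeholder for our self-hosted instance.
sentry_sdk.init(
    dsn="https://examplePublicKey@sentry.example.internal/1",
    traces_sample_rate=1.0,  # send every transaction while testing
)

# Attach custom measurements to a transaction.
with sentry_sdk.start_transaction(op="task", name="process-report") as transaction:
    start = time.monotonic()
    rows = 10_000
    total = sum(range(rows))  # stand-in for the real work being measured
    transaction.set_measurement("rows_processed", rows, "none")
    transaction.set_measurement("processing_time", time.monotonic() - start, "second")
```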
After a lot of digging, I came across this paragraph here, which states:
This feature is only available to organizations on our latest plans, which include Dynamic Sampling. Customers on legacy plans must move to one of these plans in order to access custom metrics.
From what I gather, their plans in general involve running Sentry on their servers, unless you opt in to the self-hosted code that can be downloaded from GitHub here.
This is absolutely a bummer because I know my org will not consider moving internal data to third-party servers.
Wondering if someone knows of a solution to this problem: do the Sentry folks know of (paid) options that enable this feature on the self-hosted version, or has someone patched it into their open-source code?
I'd also love to hear any out-of-the-box suggestion you folks might have.

Related

Salesforce Commerce Cloud: intercept user and order data

I'm not very familiar with the Commerce Cloud product, but I need to clarify one point and I hope the community can help me.
I need to implement a feature for a customer who uses SF Commerce Cloud, and I would like to know whether it is possible or not. The customer wants to send some data, such as orders and users, to additional storage. This is a requirement of local law, and they have to implement it to do business.
Is it possible to intercept actions like order placement, modification, and deletion, as well as e-store customer creation, modification, and deletion? It would be great if you could point me to where I can find additional information, because after several attempts I can't get access to a trial version of Commerce Cloud.
Thank you!
Yes, it is possible to do this in various ways. One way might be to implement a JavaScript tracking integration that runs in the customer's browser and is referenced by the Storefront application that is running on SFCC. Another way would be to implement what is known as an Integration Cartridge, which would implement several export jobs and/or service connections to your third-party storage solution.
There is no trial version of the platform. In order to access an instance for development purposes, you will need to work through your customer's sandbox instances or become a Salesforce Partner.
Please review the Getting Started documentation. See also: Demandware/SFCC prerequisites

How do I manage microservices with DevOps?

Say I have a frontend node and three backend nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but cannot work out how a DevOps pipeline would work for microservices.
Would each microservice get its own GitHub repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3, but blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync with respect to which version they are supposed to rely on?
If I'm deploying the tools service to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each microservice get its own GitHub repo and CI/CD pipeline?
From my experience you can do both. I have seen some teams put multiple microservices in one repository.
We were putting each microservice in a separate repository, since the Jenkins pipeline was built in a generic way that let it build every service the same way. This included having some configuration files in specific directories, like "/Scripts/microserviceConf.json".
This helped us in some cases. In general you should also consider cost, as GitHub has a pricing model that takes into account how many private repositories you have.
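To illustrate, such a per-service configuration file might look like the sketch below; the fields are hypothetical, since only the file path comes from our setup as described above:

```json
{
  "serviceName": "tools",
  "dockerfilePath": "./Dockerfile",
  "testCommand": "npm test",
  "artifactRegistry": "registry.example.com/tools",
  "deployTargets": ["staging", "production"]
}
```

The generic Jenkins pipeline would read this file from "/Scripts/microserviceConf.json" in each repository and derive its build, test, and deploy steps from it.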
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3, but blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync with respect to which version they are supposed to rely on?
You need to be backwards compatible. If your blog 2.4 version is not compatible with tools version 2.3, you have high dependency and coupling, which goes against one of the key benefits of microservices. There are many ways to get around this.
You can introduce a versioning scheme for your microservices' APIs. If you have a breaking change to, say, an API, you need to keep supporting the old version for some time and create a new v2 of the API. For example, POST "blogs/api/blog" would get a new counterpart POST "blogs/api/v2/blog" with the new behavior, and the tools microservice would have some bridge time during which you support both APIs so it can migrate to v2.
Also take a look at Semantic Versioning here.
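A minimal sketch of that versioning idea in Python with Flask (the routes come from the example above; the payload fields are made up):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# v1 stays available while consumers such as the tools service migrate.
@app.route("/blogs/api/blog", methods=["POST"])
def create_blog_v1():
    data = request.get_json()
    # old contract: only a title field is accepted
    return jsonify({"id": 1, "title": data["title"]}), 201

# v2 carries the breaking change under a new path.
@app.route("/blogs/api/v2/blog", methods=["POST"])
def create_blog_v2():
    data = request.get_json()
    # new contract: title plus tags (the hypothetical breaking addition)
    blog = {"id": 1, "title": data["title"], "tags": data.get("tags", [])}
    return jsonify(blog), 201

if __name__ == "__main__":
    app.run(port=5000)
```

Once all consumers have migrated to v2, you can retire the v1 route.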
If I'm deploying the tools service to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of microservice orchestration. Usually your cloud provider's services have tools to deal with this; take a look at AWS ECS and/or the AWS EKS Kubernetes service and how they do it.
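As a small illustration of the idea (the hostname below is hypothetical): with ECS Service Discovery or Kubernetes, each service gets a stable DNS name, so callers resolve that name instead of hard-coding IPs:

```python
import socket

# With the orchestrator's service discovery, "tools" is reachable under a
# stable DNS name even though the container IPs behind it change on redeploy.
TOOLS_HOST = "tools.internal.example"  # hypothetical service-discovery name

def tools_endpoint(port: int = 8080) -> str:
    # The resolver returns whichever healthy instance the platform registered.
    ip = socket.gethostbyname(TOOLS_HOST)
    return f"http://{ip}:{port}"

# In practice you usually pass the DNS name straight to your HTTP client and
# let resolution and load balancing happen per request.
```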
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
I would suggest using Docker and docker-compose to create your development setup. You would create a local development network of Docker containers representing your whole system, including your microservices, infrastructure (database, cache, helpers), and others. You can read more about it in this answer here, in the section "Considering the Development Setup".
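As an example, a minimal docker-compose.yml for the system from the question might look like this sketch (the service names come from the question; the build paths, ports, and database are assumptions):

```yaml
version: "3.8"

services:
  frontend:
    build: ./frontend          # each node keeps its own Dockerfile
    ports:
      - "8080:8080"            # the one URL you navigate to locally
    depends_on: [tools, blog, store]

  tools:
    build: ./tools
  blog:
    build: ./blog
  store:
    build: ./store

  # shared infrastructure for local development
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev-only-password
```

With this in place, "docker compose up" gets you close to the one-command experience you have with a monolith.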
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose

What is SnapLogic?

As per Wikipedia:
SnapLogic is a commercial software company that provides Integration Platform as a Service (iPaaS) tools for connecting Cloud data sources, SaaS applications and on-premises business software applications.
It is surely a competitor to Informatica, but it doesn't seem to be just another ETL tool. I have a rough understanding that it is used for data integration, but that's about it.
Is it merely an ETL tool or does it have any other functionality? Also, what are iPaaS tools in general?
Well, the best place to learn about SnapLogic is their website: https://www.snaplogic.com/
Here is a video of SnapLogic: https://www.youtube.com/watch?v=KYJK7bjOlA0
A simple, developer-friendly example:
Let's say I want to search for tweets posted with a particular hashtag by a particular person and write that data into a database of my choice or into Amazon S3. SnapLogic allows me to do that without learning the Twitter API or AWS. SnapLogic takes care of the abstraction for the user so that they can focus on the business logic.
The demos are available in the blog: http://www.snaplogic.com/blog
A look at SnapLogic on Crunchbase is not a bad idea, and you can also find their competitors there.
Your other question, what iPaaS tools are in general, is too basic and can simply be googled.
Basically, there are two types of cloud integration: iPaaS and dPaaS.
SnapLogic is an iPaaS (Integration Platform as a Service) tool, one of the growing cloud-based integration/ETL tools. Per the Gartner report it is one of the Leaders in "Enterprise Integration Platform as a Service"; see the link below:
https://www.snaplogic.com/press-releases/gartner-names-snaplogic-as-a-leader-in-the-magic-quadrant
About the tool:
1. Designer: a canvas area where you develop and experiment with your integration pipelines.
2. Manager: the main place to manage all projects, pipelines, assets, and accounts, to create users and grant permissions, and to import and export projects.
3. Dashboard: monitoring of pipeline logs, information on running pipelines, and execution history.
Useful links:
1. Main site: https://www.snaplogic.com
2. SnapLogic free trial: https://www.snaplogic.com/free-trial
3. SnapLogic documentation: http://doc.snaplogic.com/
4. SnapLogic community: https://community.snaplogic.com/
5. SnapLogic blog: https://www.snaplogic.com/blog
Also, they recently released SnapLogic eXtreme; please have a look:
https://www.snaplogic.com/press-releases/introducing-snaplogic-extreme-to-help-data-engineers-operationalize-cloud-based-big-data-integrations

Codename One's plan for the cloud storage API

Since Codename One doesn't support the cloud storage API any more, and parse.com is going to retire soon as well, does Codename One have any plan to release a new cloud storage API, or to provide suggestions/guidelines to help developers deal with the parse4cn1 library code, cloud code, database structure, and data in parse.com?
That is something you will have to figure out yourself, as parse4cn1 was initially contributed by a community member and wasn't developed by the Codename One team.
You can use simple web services created in PHP, Python, or Java, hosted along with your content at any ISP.
You may also have a look at Amazon AWS, which is promising; they provide a cloud solution, but their SDKs are not yet integrated with Codename One.
I made the parse4cn1 lib and I'm also wondering what's smartest to do. With the announcement of Parse.com's imminent shutdown, there's been a lot of discussion around alternatives. My feeling is that "the dust is yet to settle" as to which options are best and reliable for the longer term (it would be a pity to migrate to another service only for it to be shut down soon). So I personally plan to wait till sometime in Q2 to do a proper evaluation of the alternatives. Hopefully, there'll be more clarity then.
The option to host one's own Parse server (e.g. on AWS or Heroku) is getting interesting. They recently announced support for push notifications on iOS and Android. If (when?) they open source the Parse.com dashboard code, I think that option would be much more interesting.
At some point in the coming months, I plan to make a parse4cn1 release that exposes an option to set the server path. With that, anyone migrating to the Parse Server option should, in principle, be able to continue to use the cn1lib, at least for the features that are supported by the open-source Parse Server.
PS: Here are pointers to some of the discussions on Parse alternatives:
https://github.com/relatedcode/ParseAlternatives
http://www.slant.co/topics/5219/compare/~firebase_vs_kumulos_vs_kinvey

Why use AppHarbor add-ons?

Why should I use AppHarbor add-ons when I can get an account directly from the provider and have additional benefits (like multiple users or projects per account)? I know having add-ons per application centralizes configuration, but it also means you have to go through AppHarbor.
In addition, AppHarbor adds its header to the websites of some providers (notably Airbrake), which ruins the design (it looks out of place and has massive margins). For some providers, pricing directly through the provider is much more flexible than the add-on pricing (again, Airbrake is a good example - no idea what those plans offer!).
Provisioning add-ons through AppHarbor gives you the advantages of automatic application configuration, consolidated billing, and being able to manage everything from AppHarbor (not having to remember X logins, and not having to remember to keep X credit cards updated at various service providers).
We've tried to make the header as inconspicuous as possible, and it seems to work well on most of our add-on partners' sites. Please drop us a line if it causes breakage anywhere.
We're also continuously working with our add-on partners to keep their add-on plan offerings up to date, and I've just shot the Airbrake guys an email. Thanks for alerting us to the problem!
