Does Apache Aurora have an API?

Is the command-line client the only way to submit jobs? Is there a REST or language-specific API?

This is a great and frequently asked question.
Apache Aurora does not currently have a first-class scheduler HTTP/JSON API, but one is planned (AURORA-987), and an API is one of several ideas that have been discussed as potential projects to work on during a June Aurora community hackathon.
An open call for ideas and feedback was sent to the project mailing list a few months ago; if you're interested in being part of that discussion, I'd suggest joining #aurora on irc.freenode.net and subscribing to the project mailing list by emailing dev-subscribe@aurora.apache.org.
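In the meantime, the supported way to submit jobs is the command-line client plus an .aurora configuration file (a Python-based DSL); under the hood the client talks to the scheduler over Thrift, but that is not a documented public API today. A minimal sketch of a job definition, with the cluster, role, and job names as placeholder assumptions:

```python
# hello_world.aurora -- minimal job definition sketch. Aurora config files are a
# Python-based DSL; Process, SequentialTask, Resources, Service, and MB come from Aurora.
hello = Process(
    name = 'hello',
    cmdline = 'echo "hello world"; sleep 60')

task = SequentialTask(
    processes = [hello],
    resources = Resources(cpu = 1.0, ram = 128*MB, disk = 64*MB))

jobs = [Service(
    task = task,
    cluster = 'devcluster',    # placeholder cluster name
    role = 'www-data',         # placeholder role
    environment = 'prod',
    name = 'hello_world')]
```

You would then submit it with something like `aurora job create devcluster/www-data/prod/hello_world hello_world.aurora`.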

Related

Using custom metrics in self-hosted sentry

I have started using Sentry within my org and I'm loving it so far.
I've been trying to use its performance monitoring tool with custom metrics added.
While I can add custom metrics to the transactions I'm generating in sentry_sdk (for Python), I can't get access to them on the dashboard of our self-hosted installation of sentry.
After a lot of digging, I came across this paragraph here which states that
This feature is only available to organizations on our latest plans, which include Dynamic Sampling. Customers on legacy plans must move to one of these plans in order to access custom metrics.
From what I gather, "plans" here refers to running Sentry on their servers, unless you opt in to the self-hosted code that can be downloaded from GitHub here.
This is a real bummer, because I know my org will not consider moving internal data to third-party servers.
I'm wondering if someone knows of a solution to this problem: do the Sentry folks offer (paid) options that enable this feature on the self-hosted version, or has someone patched it into the open-source code?
I'd also love to hear any out-of-the-box suggestion you folks might have.
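For context, here is roughly what the question describes: attaching a custom measurement to a transaction with sentry_sdk for Python. This is only a sketch (the DSN, transaction name, and metric name are placeholders); the issue is that values sent this way are not surfaced on the self-hosted dashboard.

```python
# Sketch: attach a custom measurement to a transaction with sentry_sdk (Python).
# DSN, transaction name, and metric name are placeholders.
import sentry_sdk

sentry_sdk.init(
    dsn="https://<key>@sentry.example.internal/1",  # self-hosted DSN (placeholder)
    traces_sample_rate=1.0,                         # sample every transaction for the demo
)

with sentry_sdk.start_transaction(op="task", name="nightly-report") as transaction:
    # ... do the actual work here ...
    transaction.set_measurement("rows_processed", 1234, "none")
```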

Migrate Salesforce knowledge articles including all historical versions from one environment to another

I'm trying to migrate knowledge articles from one Salesforce production environment to a new Salesforce production environment. I need to migrate ALL articles (published, archived, and draft) and all versions of each. I've tried to use the Heroku tool at the URL below:
https://kbapps2.herokuapp.com/exportk2k/submit
However, it seems it is not bringing over the historical versions. For example, if article #12345 has v1, v2, and the currently published v3, when I export the published articles, I only receive v3.
It is very important that we migrate all versions of each knowledge article to the new Salesforce environment that we have implemented. Can anyone confirm if this tool should be able to export the historical versions of the articles or is there another solution that you are aware of to perform this?
I've tried exporting the articles using the Heroku app, selecting each of published, archived, and draft. I only receive one article record for each article; I do not receive the historical versions.
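For what it's worth, you can inspect what the API itself hands back before blaming the tool. A hedged sketch with simple_salesforce (credentials and language are placeholders, the object/field names follow the standard KnowledgeArticleVersion schema, and Salesforce restricts some filter combinations, so treat this as a starting point rather than a complete export):

```python
# Hedged sketch: list knowledge article versions through the Salesforce API.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")

# Latest version per publish status.
for status in ("Online", "Archived", "Draft"):
    soql = (
        "SELECT Id, KnowledgeArticleId, Title, VersionNumber, PublishStatus "
        "FROM KnowledgeArticleVersion "
        f"WHERE PublishStatus = '{status}' AND Language = 'en_US'"
    )
    for rec in sf.query_all(soql)["records"]:
        print(status, rec["KnowledgeArticleId"], rec["VersionNumber"], rec["Title"])

# Superseded versions are kept as archived versions; querying them needs
# IsLatestVersion = false on the Archived status.
older = sf.query_all(
    "SELECT Id, KnowledgeArticleId, Title, VersionNumber "
    "FROM KnowledgeArticleVersion "
    "WHERE PublishStatus = 'Archived' AND Language = 'en_US' AND IsLatestVersion = false"
)
for rec in older["records"]:
    print("historical", rec["KnowledgeArticleId"], rec["VersionNumber"], rec["Title"])
```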

What is Snaplogic?

As per Wikipedia:
SnapLogic is a commercial software company that provides Integration Platform as a Service (iPaaS) tools for connecting Cloud data sources, SaaS applications and on-premises business software applications.
It is surely a competitor to Informatica, but it doesn't seem to be just another ETL tool. I have a rough understanding that it is used for data integration, but that's about it.
Is it merely an ETL tool or does it have any other functionality? Also, what are iPaaS tools in general?
Well, the best place to learn about SnapLogic will be their website, https://www.snaplogic.com/
Here is a video of SnapLogic: https://www.youtube.com/watch?v=KYJK7bjOlA0
A simple, developer-friendly example:
Let's say I want to search for tweets posted with a particular hashtag by a particular person and write that data into a database of my choice or into Amazon S3. SnapLogic lets me do that without learning the Twitter API or AWS; it takes care of the abstraction so the user can focus on the business logic.
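To make the abstraction concrete, here is roughly what one piece of that pipeline looks like when hand-coded. This is purely illustrative: tweepy and boto3 are just examples of the client libraries you would otherwise have to learn, and the credentials, hashtag, user, and bucket name are placeholders. This is the plumbing SnapLogic hides behind its Snaps.

```python
# Hand-rolled sketch of the "tweets with a hashtag -> S3" pipeline that SnapLogic
# abstracts away. tweepy/boto3 are illustrative; all names and credentials are placeholders.
import json
import boto3
import tweepy

client = tweepy.Client(bearer_token="...")   # Twitter API credentials (placeholder)
s3 = boto3.client("s3")                      # AWS credentials taken from the environment

resp = client.search_recent_tweets(query="#example from:someuser", max_results=100)
tweets = [t.data for t in (resp.data or [])]

s3.put_object(
    Bucket="my-tweet-archive",               # placeholder bucket
    Key="tweets/example.json",
    Body=json.dumps(tweets).encode("utf-8"),
)
```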
The demos are available in the blog: http://www.snaplogic.com/blog
A look at SnapLogic on Crunchbase is not a bad idea, and you can also find their competitors there.
Your other question, what integration platform as a service (iPaaS) tools are in general, is quite basic and can easily be googled.
Basically, there are two types of cloud integration: iPaaS and dPaaS.
Basically, SnapLogic is an iPaaS (Integration Platform as a Service) tool, one of the growing cloud-based integration/ETL tools.
As per the Gartner report, it is one of the leaders in "Enterprise Integration Platform as a Service"; refer to the link below:
https://www.snaplogic.com/press-releases/gartner-names-snaplogic-as-a-leader-in-the-magic-quadrant
About the tool, and a few useful links:
1. Designer
A canvas area to play with and develop your integration pipelines.
2. Manager
The main place to manage all projects, pipelines, assets, and accounts, to create users and assign permissions, and to import and export projects.
3. Dashboard
Monitors pipeline logs and shows information on running pipelines and their history.
Useful links:
1. Main site: https://www.snaplogic.com
2. SnapLogic free trial: https://www.snaplogic.com/free-trial
3. SnapLogic documentation: http://doc.snaplogic.com/
4. SnapLogic community: https://community.snaplogic.com/
5. SnapLogic blog: https://www.snaplogic.com/blog
They also recently released SnapLogic eXtreme; please have a look:
https://www.snaplogic.com/press-releases/introducing-snaplogic-extreme-to-help-data-engineers-operationalize-cloud-based-big-data-integrations

Scheduled mapreduce job on Google Cloud Platform

I'm developing a node.js application that basically stores user event logs in a database and shows insights about user actions.
To achieve this, the event logs must be analyzed by a MapReduce job that runs automatically once a day (every night).
I've found lots of tutorials about MapReduce on the Google Cloud website, but I'm totally lost because there are several technologies, I can't find a way to do it without using the command line, and there is no information about scheduling (I want the whole analysis process to be entirely automated).
Could you please advise me on which Google technologies I should use, or where I can find a good tutorial?
Thank you
You want to be looking at:
Dataproc (run Hadoop/Spark jobs out of the box)
Dataflow (develop 'pipelines' using the Dataflow/Beam programming model)
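For the scheduling side, one common pattern is to submit the job programmatically and trigger that code nightly from Cloud Scheduler or a cron job rather than from the command line. A hedged Python sketch against an existing Dataproc cluster (project, region, cluster name, and jar path are placeholders):

```python
# Hedged sketch: submit a Hadoop job to an existing Dataproc cluster from code,
# using the google-cloud-dataproc client. Run this nightly from Cloud Scheduler
# or cron to automate the whole analysis.
from google.cloud import dataproc_v1

project_id = "my-project"           # placeholder
region = "europe-west1"             # placeholder
cluster_name = "analytics-cluster"  # placeholder

client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": cluster_name},
    "hadoop_job": {"main_jar_file_uri": "gs://my-bucket/jobs/event-log-analysis.jar"},
}

operation = client.submit_job_as_operation(
    request={"project_id": project_id, "region": region, "job": job}
)
result = operation.result()  # block until the job finishes
print("Job finished with state:", result.status.state.name)
```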

CodenameOne plan for the cloud storage API

Since CodenameOne no longer supports "the cloud storage API" and parse.com is going to retire soon as well, does CodenameOne have any plan to release a new Cloud Storage API, or to provide suggestions/guidelines to help developers deal with the parse4cn1 library code, cloud code, database structure, and data in parse.com?
That is something you will have to figure out yourself, as parse4cn1 was initially contributed by a community member and wasn't developed by the Codename One team.
You can use simple web services created in PHP, Python, or Java, hosted along with your content at any ISP.
You may also have a look at Amazon AWS, which looks promising; they provide a cloud solution, but their SDKs are not yet integrated with Codename One.
I made the parse4cn1 lib and I'm also wondering what's smartest to do. With the announcement of Parse.com's imminent shutdown, there's been a lot of discussion around alternatives. My feeling is that "the dust is yet to settle" as to which options are best and reliable for the longer term (it would be a pity to migrate to another service only for it to be shut down soon). So I personally plan to wait until sometime in Q2 to do a proper evaluation of the alternatives. Hopefully, there'll be more clarity then.
The option to host one's own Parse server (e.g. on AWS or Heroku) is getting interesting. They recently announced support for push notifications on iOS and Android. If (when?) they open-source the Parse.com dashboard code, I think that option will be much more interesting.
At some point in the coming months, I plan to make a parse4cn1 release that exposes an option to set the server path. With that, anyone migrating to the Parse server option should, in principle, be able to continue to use the cn1lib, at least for the features that are supported by the open-source Parse server.
PS: Here are pointers to some of such discussions on Parse alternatives:
https://github.com/relatedcode/ParseAlternatives
http://www.slant.co/topics/5219/compare/~firebase_vs_kumulos_vs_kinvey
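If you do end up on the self-hosted Parse Server route, note that the data stays reachable over its standard REST interface, so even without parse4cn1 you can talk to it from any HTTP client. A minimal sketch (server URL, application ID, and class name are placeholders):

```python
# Minimal sketch: talk to a self-hosted Parse Server over its REST API.
import requests

PARSE_URL = "https://my-parse-server.example.com/parse"  # placeholder mount point
HEADERS = {
    "X-Parse-Application-Id": "myAppId",                 # placeholder app id
    "Content-Type": "application/json",
}

# Create an object in a hypothetical "GameScore" class.
resp = requests.post(
    f"{PARSE_URL}/classes/GameScore",
    headers=HEADERS,
    json={"playerName": "alice", "score": 1337},
)
resp.raise_for_status()
print("Created object:", resp.json()["objectId"])

# Query it back.
resp = requests.get(f"{PARSE_URL}/classes/GameScore", headers=HEADERS)
print(resp.json()["results"])
```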
