Managing and storing an artifact/library (e.g. a jar) used by a Heroku/Cloud Foundry application

I have read the 12-factor app manifesto and I have the following question: where and how can I store an in-house library, such as a jar, that is going to be consumed by several of my Heroku or Cloud Foundry apps?
All of this bearing in mind that the codebase for that library/jar is versioned on GitHub.

I would use a file store such as OpenStack Swift (it has an S3-compatible API as well). You could also use CouchDB and store the jar as an attachment on a DB record.
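For illustration, here is a rough Java sketch of fetching such a jar from an S3-compatible store (as the answer above suggests), using the AWS SDK for Java v1 pointed at a custom endpoint. The endpoint, region, bucket, key, and environment variable names are all placeholders, not anything Heroku- or Swift-specific:

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.client.builder.AwsClientBuilder;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.S3Object;

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class LibraryFetcher {
        public static void main(String[] args) throws Exception {
            // Endpoint, region, bucket, key and env var names are placeholders;
            // point them at your Swift/S3-compatible store and your uploaded jar.
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                            "https://swift.example.com", "us-east-1"))
                    .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(
                            System.getenv("STORE_ACCESS_KEY"),
                            System.getenv("STORE_SECRET_KEY"))))
                    .build();

            // Download the jar next to the app so a build/startup step can
            // put it on the classpath.
            try (S3Object object = s3.getObject("libs", "in-house-lib-1.0.0.jar");
                 InputStream in = object.getObjectContent()) {
                Files.copy(in, Paths.get("lib/in-house-lib-1.0.0.jar"),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

The same client code works against real S3 or any other S3-compatible store, since only the endpoint configuration changes.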

I haven't really seen anything like that in Java yet. You can't have dynamic dependency injection in Java; it's not something the platform supports.

Convert a reusable error-handling flow into a connector/component in Mule 4

I'm using the Mule 4.2.2 runtime. We use the error handling generated by APIkit, and we customized it according to the customer's requirements; it is quite standard across all the upcoming APIs.
We are thinking of converting this into a connector so that it appears as a component/connector in the palette and can be reused across all the APIs instead of being copy-pasted every time.
Something like REST Connect for API specifications, which automatically converts a specification into a connector as soon as it is published in Exchange (https://help.mulesoft.com/s/article/How-to-generate-a-connector-for-a-REST-API-for-Mule-3-x-and-4-x).
Is there an option like the above for publishing a common Mule flow so that it is converted into a component/connector?
If not, which of the following suits my scenario best?
1) Using the SDK:
https://dzone.com/articles/mulesoft-custom-connector-using-mule-sdk-for-mule, or
2) Creating a jar as mentioned in this page:
https://www.linkedin.com/pulse/flow-reusability-mule-4-nagaraju-kshathriya
Please suggest which is the best and easiest way in this case. Thanks in advance.
Using the Mule SDK (1) is useful for creating a connector or module in Java. Your question wasn't fully clear about what you want to encapsulate in a connector. I understand that what you want is to share parts of a flow as a connector in the palette, which is different. The XML SDK seems to be more in line with that. You will need to make some changes to encapsulate the flow elements, as described in the documentation. That's actually very similar to how REST Connect works.
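For reference, option (1) with the Java Mule SDK looks roughly like the sketch below: an extension class plus an operations class whose public methods show up as operations in the palette. This is only a minimal sketch; the module name and the operation are invented for the example, and it encapsulates Java logic rather than existing flow XML:

    import org.mule.runtime.extension.api.annotation.Extension;
    import org.mule.runtime.extension.api.annotation.Operations;
    import org.mule.runtime.extension.api.annotation.param.Content;

    // The extension class registers the module under the given name in the palette.
    @Extension(name = "error-handling-commons")
    @Operations(ErrorHandlingOperations.class)
    public class ErrorHandlingExtension {
    }

    // In a separate file: each public method becomes an operation in the palette.
    public class ErrorHandlingOperations {

        // Hypothetical operation that wraps a payload in a standard error envelope.
        public String buildErrorResponse(@Content String payload, String errorType) {
            return String.format("{\"errorType\":\"%s\",\"detail\":%s}", errorType, payload);
        }
    }

That difference is exactly why the XML SDK is the closer fit for sharing existing flow fragments.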
Method (2) is about importing XML flows from a JAR file, but the approach described in that link is actually incorrect for Mule 4. The right way to share flows through a library is the one described at https://help.mulesoft.com/s/article/How-to-add-a-call-to-an-external-flow-in-Mule-4. Note that this method doesn't create a connector that can be used from the Anypoint Studio palette.
From personal experience: use a common flow, put it in a repository, and include it as a dependency in the pom file. An even better solution is to include it as a flow in the domain app and use it along with your shared HTTPS connector.
I wrote a lot of Java-based custom components. I liked them a lot and was proud of them, but the transition from Mule 3 to Mule 4 killed most of them. Even in Mule 4, MuleSoft periodically makes changes that render components incompatible with the runtime.

Organization of protobuf files in a microservice architecture

In my company, we have a system organized as microservices, with a dedicated Git repository per service. We would like to introduce gRPC, and we were wondering how to share protobuf files and build libs for our various languages. Based on some examples we collected, we decided in the end to go with a single repository containing all our protobuf files; it seems to be the most common way of doing it, and it seems easier to maintain and use.
I would like to know if you have some examples on your side?
Do you have any counterexamples of companies doing the exact opposite, i.e. hosting protobuf files in a distributed way?
We have a separate repo for the proto files (called schema) and one repo per microservice. Also, we never store generated code: server and client files are generated from scratch by protoc on every CI build.
This approach works and fits our needs well, but there are two potential pitfalls:
Inconsistency between the schema and microservice repositories. Commits to two different Git repos are not atomic, so whenever the schema is updated there is always a short period during which the schema repo has changed but the microservice's repo has not yet caught up.
If you use Go, there is a potential problem with the move to Go modules introduced in Go 1.11. We haven't researched it comprehensively yet.
Each of our microservices has its own API (one or several protobuf files), and for each API we have a separate repository. We also have a CI job that builds the proto classes into a jar (not only for Java but for the other languages too) and publishes it to our central repository. Then you just add a dependency on the API you need.
For example, for microservice A we also have a repository a-api (containing only proto files), which the job builds into a jar (and into the other languages) published as com.api.a-service.<version>
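To make the consumer side concrete, a Java service would then depend on the published a-api artifact and call the generated stubs directly. The class names below (AServiceGrpc, GetThingRequest, GetThingResponse) are hypothetical; they stand in for whatever protoc and the gRPC plugin generate from your actual proto files:

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class AServiceClient {
        public static void main(String[] args) {
            // Host and port are placeholders for wherever microservice A runs.
            ManagedChannel channel = ManagedChannelBuilder
                    .forAddress("a-service.internal", 50051)
                    .usePlaintext() // assuming plaintext traffic inside the cluster
                    .build();

            // The stub and message types come from the published a-api jar.
            AServiceGrpc.AServiceBlockingStub stub = AServiceGrpc.newBlockingStub(channel);
            GetThingResponse response = stub.getThing(
                    GetThingRequest.newBuilder().setId("42").build());
            System.out.println(response);

            channel.shutdown();
        }
    }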

Any way to call MLCP from Java apps

I'm new to MarkLogic and MLCP. I'm working on MarkLogic 9.0-8. I want to use MLCP to load content, but since some parameters may need to be built dynamically based on the content, does anyone know if it is possible to call MLCP from a Java application?
Thanks a lot,
Helen
MarkLogic provides two Java-based ways to load content: MLCP and DMSDK. MLCP is intended to be used as a command-line tool (and I believe that's the only supported use).
The Data Movement SDK, on the other hand, is specifically intended to offer very similar functionality in the form of a JAR, making it easy to access from a Java application. I encourage you to look into using that instead; see the sketch after the links below.
tutorial
JavaDoc
Asynchronous Multi-Document Operations
12-minute video intro to DMSDK
common tasks made easier through ml-gradle
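As a minimal sketch of the DMSDK approach, using the MarkLogic Java Client API (which contains the com.marklogic.client.datamovement package); the host, port, and credentials are placeholders:

    import com.marklogic.client.DatabaseClient;
    import com.marklogic.client.DatabaseClientFactory;
    import com.marklogic.client.datamovement.DataMovementManager;
    import com.marklogic.client.datamovement.WriteBatcher;
    import com.marklogic.client.io.Format;
    import com.marklogic.client.io.StringHandle;

    public class DmsdkLoader {
        public static void main(String[] args) {
            // Connection details are placeholders; adjust to your environment.
            DatabaseClient client = DatabaseClientFactory.newClient(
                    "localhost", 8000,
                    new DatabaseClientFactory.DigestAuthContext("admin", "admin"));

            DataMovementManager dmm = client.newDataMovementManager();
            WriteBatcher batcher = dmm.newWriteBatcher()
                    .withBatchSize(100)
                    .withThreadCount(4);
            dmm.startJob(batcher);

            // Because this runs inside your application, URIs and content can be
            // computed dynamically, which was the motivation for not shelling out to MLCP.
            for (int i = 0; i < 1000; i++) {
                String uri = "/load/doc-" + i + ".json";
                batcher.add(uri, new StringHandle("{\"n\":" + i + "}").withFormat(Format.JSON));
            }

            batcher.flushAndWait();
            dmm.stopJob(batcher);
            client.release();
        }
    }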

How to handle different versions of my REST services in Smart Devices?

We launched an application in the Apple App Store and Google Play Store, and now we need to launch an update. But this update will change the server-side code (i.e. the API).
Does GeneXus handle multiple API versions? I mean, how can we prevent an app on version 1.0 from breaking when we launch version 1.1?
There are several considerations when publishing a new version of your application.
If you need both versions to be available at the same time, then the best option is to publish the new version's services to another URL. Say, for instance, you had version 1.0's services at https://example.com/myapp10, then create a new "virtual directory" https://example.com/myapp11 and make the new version point there.
A special consideration is needed if there are also changes in the database. If you only have new tables and/or attributes (and the new attributes are nullable), then you don't need to do anything else.
However, if you remove or change existing attributes, then the "old" services may not work with the new database schema. In that case, you'll also need to keep both versions of the database, and consider some replication mechanism to keep them in sync.
You may find this article about the Pesobook application's deployment process interesting (Spanish only).
Here you will find more detailed information about versioning an SD app with GeneXus.
And this article explains how to do it in the Knowledge Base.
You could also create modules to manage your service versions. Instead of creating a new virtual directory with all the objects, you can move the new (or updated, via Save As) services to a new module.
Example:
webapp/wsv1/rest/myservice
webapp/wsv2/rest/myservice
webapp/wsv3/rest/myservice
You must duplicate "myservice"; however, the other objects in the KB will not be duplicated.
Your apps can then consume whichever version of "myservice" they need.
I use this approach to serve some native apps that were not made with GeneXus but consume GeneXus REST web services.
Hope it will be useful :)

Caching objects locally for reuse (Parse.com / Backbone)

Currently I'm using a setup built on Backbone, Parse, Require, and Marionette.
I've found that in my application I often need to reuse objects I've already pulled down from Parse.
Parse already does this through Parse.User.current(); however, it would be great to store other entities locally rather than retrieving them over and over again.
Does anyone have any suggestions in terms of good practices or libraries for caching these objects locally, or would having global variables that hold the information while the application runs be enough?
The Parse JavaScript SDK is open source, so you could look at the implementation of Parse.User.current and Parse.User._saveCurrentUser. Maybe you could do something similar. http://www.parsecdn.com/js/parse-1.1.11.js
