One-click OpenAPI/Swagger build architecture for backend server code - Spring

I use Swagger to generate both client and server API code. On my frontend (React/TS/Axios), integrating the generated code is very easy: I make a change to my spec, consume the new version through NPM, and the new API methods are immediately available.
On the server side, however, the process is a lot more janky. I am using Java, and I have to copy and paste specific directories over (such as the routes, data types, etc.), and a lot of the generated code doesn't play nicely with my existing application (in terms of imports, etc.). I am having the same experience with a Flask instance that I have.
I know that comparing client to server is apples to oranges, but is there a better way to construct a build architecture so that I don't have to go through this error-prone, tedious process every time? Any pointers here?
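For reference, the kind of setup I'm hoping for would look roughly like the sketch below (assuming something like the openapi-generator Maven/Gradle plugin with the spring generator and interfaceOnly enabled, so interfaces are regenerated into generated-sources on every build; PetsApi and Pet are placeholder names for whatever the spec actually defines):

```java
// Hypothetical sketch: assumes the OpenAPI generator (spring generator,
// interfaceOnly=true) emits a PetsApi interface and a Pet model into
// target/generated-sources as part of the build. Nothing is copied by hand;
// only this implementation lives in the application codebase.
import java.util.List;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PetsController implements PetsApi { // PetsApi is generated, never edited

    @Override
    public ResponseEntity<List<Pet>> listPets(Integer limit) {
        // Domain logic goes here; Pet is the generated model type.
        return ResponseEntity.ok(List.of());
    }
}
```

That way a spec change would only require re-running the build, and the compiler would point out any endpoints whose signatures changed.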

Related

Do you need copies of protobufs in both client and server in web applications?

I'm not sure if this is the right forum to post this question, but I'm trying to learn gRPC/protobufs in the context of a web application. I am building the UI in Flutter and the backend in Go with MongoDB. I was able to get a simple Go service running and query it using Kreya; however, my question now is: how do I integrate the UI with the backend? In order to make the Kreya call, I needed to import the protobufs. Do I need to maintain identical protobufs in both the frontend and the backend? Meaning, do I literally have to copy all of my protobufs from the backend into my UI codebase and compile locally there as well? This seems like a nightmare to maintain, as the protobufs now have to be maintained in two places instead of one.
What is the best way to maintain the protobufs?
Yes, but think of the protos as a shared contract between your clients and servers.
The protos define the interface by which the client is able to communicate with the server. In order for this to be effective, the client and server need to implement the same interface.
One way to do this is to store your protos in a repo that you share across any clients and servers that implement it. This provides a single source of truth for the protos. I also generate compiled copies of the protos (via protoc) for the languages I will use, e.g. Golang, Dart, etc., in this shared protos repo and import them from the repo where needed.
Then, in your case, the client imports the Dart-generated sources and the Golang server imports the Golang-generated sources from the shared repo.
Alternatively, your client and your server could compile the appropriate sources with protoc when they need them, on the fly, usually as part of an automated build process.
Try not to duplicate the protos across clients and servers; it becomes very difficult to ensure every copy remains synchronized.
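To make the "same interface" point concrete, here is a minimal sketch of the server side in Java (the idea is identical in Go and Dart); GreeterGrpc, HelloRequest, and HelloReply are placeholder names for whatever protoc generates from the service defined in your shared repo:

```java
// Hypothetical sketch: GreeterGrpc, HelloRequest and HelloReply are assumed to be
// generated (by protoc) from a Greeter service defined in the shared proto repo.
// The server depends on that generated artifact instead of copying .proto files.
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;

public class GreeterServer {

    // Implement the generated service base class; the contract itself lives in the shared repo.
    static class GreeterImpl extends GreeterGrpc.GreeterImplBase {
        @Override
        public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
            HelloReply reply = HelloReply.newBuilder()
                    .setMessage("Hello, " + request.getName())
                    .build();
            responseObserver.onNext(reply);
            responseObserver.onCompleted();
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(50051)
                .addService(new GreeterImpl())
                .build()
                .start();
        server.awaitTermination();
    }
}
```

The client is the mirror image: it imports the generated stubs for its own language from the same shared source and calls SayHello through them.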

Can I have two or more web processes using Heroku

I'm trying to implement a reasonably complex architecture using Heroku. I have a Java application that reads/writes data from one source using REST and puts results onto a queue using RabbitMQ. A Django application then reads from this queue, parses the collected data, and saves it to its database. The Django application feeds Android and iOS apps through GraphQL. The problem I have is that Heroku only seems to let me define one web process in my Procfile, where in fact I need two: one for the Java application and one for the Django application. Is there any way I can make this work?
There is no good, un-hacky solution to this, and as the comments stated, it is a bad idea to combine the codebases here.
Following Heroku's ideas here, you would split these into separate applications/services that communicate with each other via HTTP or the queue.
Many add-ons can be attached to multiple applications if they are shared, so both apps can use the same queue.
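To illustrate the shared-queue part, here is a minimal sketch of the Java application publishing to the add-on's broker (CLOUDAMQP_URL and the queue name "results" are assumptions; use whatever config var and queue name your add-on and consumer actually use):

```java
// Hypothetical sketch: both Heroku apps attach the same RabbitMQ add-on, so both
// read the broker URL from the add-on's config var (CLOUDAMQP_URL is assumed here)
// and talk to the same queue. The Django app would consume from the same queue name.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class ResultPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setUri(System.getenv("CLOUDAMQP_URL")); // set by the shared add-on

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // "results" is a hypothetical queue name shared by both applications.
            channel.queueDeclare("results", true, false, false, null);
            channel.basicPublish("", "results", null,
                    "{\"payload\": \"...\"}".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

The Django app attached to the same add-on reads the same config var and consumes from the same queue.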

Removing all database access capability from a Rails 5.x client app

I am still fairly new to Ruby and the Ruby on Rails framework.
A Rails --api-only project is great for getting into bare-bones REST API development without any of the unneeded extras.
I would always build a web API app to handle my data access and domain logic.
I would create a separate Rails app for the front-end web client. Maybe this separation of tiers is not always necessary for small or 'pet' projects, but I prefer this architecture.
I am trying to find the best way to remove the data-access-related code and gems from my client project, as it will never access the database.
Is there a clean way to do this automatically, in a similar way that the --api switch includes only the things relevant to an API app?
It's not the end of the world doing this manually, but it would be nice if I could write a bash script that I can pass a project name and have it build me two shell projects - one API, one client - with only the things I would usually need at each level.
Any suggestions about this are greatly appreciated.
Thanks in advance!

How to connect to MongoDB from a single page ClojureScript / React.js application using Ajax?

Consider a ClojureScript web application using reagent where the reagent components are subscribed to a single db atom containing a vector of maps. The contents of this vector are different for each user and have to be queried from a Mongo database (which is updated at regular intervals). The database might be hosted by a third party. Considering that CongoMongo, Karras, and Monger are Clojure (not ClojureScript) libraries, what would be the best way to connect to MongoDB from a single-page ClojureScript/React.js application using Ajax?
This “answer” is more of a comment but here goes.
If you don't absolutely need a Clojure backend, I'd recommend having a ClojureScript-only single-page app without any Clojure wrapper to Mongo (so no need for Sente either). As Timothy Baldridge (of Cognitect, so he knows a thing or two about this 😛) pointed out, your ClojureScript app can just make HTTP REST requests to the database.
cljs-http is a ClojureScript project that uses Clojure's core.async library to make HTTP requests and is perfect for interacting with REST APIs if you know or can learn core.async.
A more conventional (i.e., callbacks) approach, but still very ClojureScript-friendly, is to use Google Closure's goog.net.XhrIo library. I have an example here of connecting to a public REST API using XhrIo and re-frame (built on top of reagent, and highly recommended) that may help show how to get started.
Using either of these ClojureScript/JS libs, you can make requests directly from the ClojureScript browser app to the database, get replies, parse the JSON with (js->clj (js/JSON.parse json-string)) or with transit-cljs, and do something with the result.
Since Mongo has a fairly simple REST interface (https://docs.mongodb.org/ecosystem/tools/http-interfaces/#simple-rest-api), I'd be tempted to just write my own CLJS code that calls the Mongo server. Depends on your security requirements. But writing the CLJS code would be no different than any other remote request. Just a bit of string concatenation and parameter serialization.
You could use sente to get communication going between the Reagent application and your web server. This SO answer references an example client/server application that consists of a web server with browser access, giving you some buttons to press that return information from the server. It is not Reagent, but you can substitute what they use. It is a starting-point example that works out of the box.
Then build up the example's web server so that it communicates with MongoDB via one of the three Clojure libraries mentioned, rather than just returning static text as it does.

Lightweight Real-time Ajax, WebSocket, or Similar for Scala

Our requirements for a real-time web framework include:
lightweight framework
Scala support on the server side
flexible on the communication mechanism: may be Ajax, Server-Sent Events, or WebSockets.
relatively few changes required to the client HTML.
E.g. using the WebSockets JS library is fine
introducing significant compile-time/server-side page processing is not. E.g. Play routing annotations are not acceptable
must have working examples for both:
web clients
server to server communications
fully functional build. Preferably sbt, but Maven may be acceptable
I have evaluated the following frameworks, and each one of them has one or more drawbacks that make usage within our application less than desirable.
Play: somewhat heavy, but more importantly it introduces custom annotations/processing into the HTML page. We need VANILLA HTML pages.
Spray: closer to the mark, but although I found a number of example applications, the actor-based communication is not working in those examples. The SimpleServer example has a built-in "cases" counter (from SimpleClient) that does not work as given; it could certainly be made to work... eventually.
Atmosphere: lacking examples.
Jetty, Netty: lacked fully functional examples buildable within sbt or Maven.
Socko: the markdown essentially stipulates using Eclipse/Scala IDE for running tests/doing development. That is a non-starter for us (an IntelliJ shop). It was unclear how to run the examples and/or start their servers from sbt / the command line.
I ended up writing a fair amount of custom code wrapped around Netty. After it is in better shape I may drop it on GitHub.
http://xitrum-framework.github.io/ is actively developed and contains SockJS support. It is rather lightweight; you can directly annotate routes on actors and they become exposed on the web.