I would like to know if it is possible to create isolated modules with Go. The application always starts from a single file; can I separate several REST modules and load them depending on conditions?
For example: I have a products module with REST and a database, and I have a users module with REST and a database. Can I create a way to choose dynamically whether to run only one or both, without having to import everything into a main.go file?
Go is a compiled language and imports happen at build time, not at run time. Each module's source is imported and compiled into the final binary, and that binary is what you then run.
To achieve what you are asking for, you'd probably need to split your program into multiple binaries, each containing only the portion of code relevant to its responsibility. You would then also need to provide some communication between them (e.g. a gateway that intercepts incoming requests and makes the relevant internal request(s) to the Users service, the Products service, etc.). This is often called a microservice architecture.
For a smaller project I recommend just putting everything together first; as it grows (and causes problems, if ever), eventually refactor to microservices.
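If all you need is to enable or disable modules per run (rather than per build), a single binary can still do that: import every module, but only mount the ones selected by command-line flags. A minimal sketch, assuming hypothetical users and products packages that each expose a Handler() returning an http.Handler:

package main

import (
    "flag"
    "log"
    "net/http"

    // Hypothetical module packages; both are compiled into the binary
    // regardless, only the mounting below is dynamic.
    "example.com/app/products"
    "example.com/app/users"
)

func main() {
    enableUsers := flag.Bool("users", false, "serve the users module")
    enableProducts := flag.Bool("products", false, "serve the products module")
    flag.Parse()

    mux := http.NewServeMux()
    if *enableUsers {
        mux.Handle("/users/", users.Handler())
    }
    if *enableProducts {
        mux.Handle("/products/", products.Handler())
    }
    log.Fatal(http.ListenAndServe(":8080", mux))
}

Note that this does not reduce the binary size; consistent with the point above, both modules are always compiled in.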
I use Swagger to generate both client and server API code. On my frontend (React/TS/axios), integrating the generated code is very easy: I make a change to my spec, consume the new version through NPM, and immediately the new API methods are available.
On the server side, however, the process is a lot more janky. I am using Java, and I have to copy and paste specific directories over (such as the routes, data types, etc.), and a lot of the generated code doesn't play nicely with my existing application (in terms of imports, etc.). I am having the same experience with a Flask instance that I have.
I know that comparing client to server is apples to oranges, but is there a better way to construct a build architecture so that I don't have to go through this error-prone, tedious process every time? Any pointers here?
I'm not sure if this is the right forum to post this question, but I'm trying to learn gRPC/protobufs in the context of a web application. I am building the UI in Flutter and the backend in Go with MongoDB. I was able to get a simple Go service running and query it using Kreya; however, my question now is: how do I integrate the UI with the backend? In order to make the Kreya call, I needed to import the protobufs. Do I need to maintain identical protobufs in both the frontend and the backend? Meaning, do I literally have to copy all of my protobufs in the backend into my UI codebase and compile locally there as well? This seems like a nightmare to maintain, as the protobufs would now have to be maintained in two places instead of one.
What is the best way to maintain the protobufs?
Yes, but think of the protos as a shared contract between your clients and servers.
The protos define the interface by which the client is able to communicate with the server. In order for this to be effective, the client and server need to implement the same interface.
One way to do this is to store your protos in a repo that you share with any clients and servers that implement it. This provides a single source of truth for the protos. In this shared protos repo I also keep copies of the protos compiled (with protoc) to the languages I will use, e.g. Golang, Dart, etc., and import them from the repo where needed.
Then, in your case, the client imports the Dart-generated sources and the Golang server imports the Golang-generated sources from the shared repo.
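For example, if the shared repo publishes the Go sources generated by protoc, the server imports them like any other Go module. A minimal sketch; the module path and the generated names are hypothetical, since they depend on the options used when protoc was run in the shared repo:

// Package server wires in the Go sources generated from the shared protos repo.
package server

import (
    "context"

    // Hypothetical import path of the generated code in the shared repo.
    userspb "example.com/protos/gen/go/users/v1"
)

// UserServer implements the service interface generated from the shared protos.
type UserServer struct {
    userspb.UnimplementedUserServiceServer
}

func (s *UserServer) GetUser(ctx context.Context, req *userspb.GetUserRequest) (*userspb.GetUserResponse, error) {
    // Real lookup logic goes here.
    return &userspb.GetUserResponse{}, nil
}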
Alternatively, your client and your server could protoc-compile the appropriate sources when they need them, on the fly, usually as part of an automated build process.
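In Go, a go:generate directive is one way to hook that compilation step into the build. The file layout and package name below are assumptions; the flags are the standard protoc-gen-go ones:

// Package protos regenerates the Go sources from the shared .proto files.
// Run `go generate ./...` before building.
//
//go:generate protoc --go_out=. --go_opt=paths=source_relative users/v1/users.proto
package protos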
Try not to duplicate the protos across clients and servers, because that makes consistency hard to maintain: it is challenging to ensure every copy remains synchronized.
I am working on a CLI in Go that scrapes a webpage to collect the href attributes of all the links on the page into a slice. I want to store this slice in memory for some time so that the scraper is not being called on every execution of the CLI command. Ideally, the scraper would only be called after the cache expires or the user provides some sort of --update flag.
I came across the library go-cache and other similar libraries, but from what I could tell they only work for something that is continuously running, like a server.
I thought about writing the links to a file, but then how would I expire the results after a specific duration? Would it make sense to create a small server in the background that shuts down after a while in order to use a library like go-cache? Any help is appreciated.
There are two main approaches in these scenarios:
Create a daemon, service or background application that acts as your data repository. You can run it as an HTTP server / RPC server depending on your requirements. Your CLI application then interacts with this daemon as required;
Implement a persistence mechanism that allows data to be written and read across multiple CLI application executions. You may use plain text files, databases, or even Go's encoding/gob to write and read your slice (a map would probably be better) to and from a binary file.
You can timestamp entries and simply remove them after their TTL expires, either by explicitly deleting them or by simply not rewriting them during subsequent executions, according to the strategy/approach selected above; see the sketch below.
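As a minimal sketch of the second approach (the file name and TTL here are arbitrary assumptions), the whole result set can be gob-encoded together with a timestamp and discarded once it is older than the TTL; handling the --update flag would simply skip the load call:

package main

import (
    "encoding/gob"
    "fmt"
    "os"
    "time"
)

const cacheFile = "links.gob" // assumption: cache file lives in the working directory

type cache struct {
    FetchedAt time.Time
    Links     []string
}

// load returns the cached links if the file exists and is younger than ttl.
func load(ttl time.Duration) ([]string, bool) {
    f, err := os.Open(cacheFile)
    if err != nil {
        return nil, false
    }
    defer f.Close()

    var c cache
    if err := gob.NewDecoder(f).Decode(&c); err != nil {
        return nil, false
    }
    if time.Since(c.FetchedAt) > ttl {
        return nil, false // expired: the caller should re-scrape
    }
    return c.Links, true
}

// save overwrites the cache with freshly scraped links.
func save(links []string) error {
    f, err := os.Create(cacheFile)
    if err != nil {
        return err
    }
    defer f.Close()
    return gob.NewEncoder(f).Encode(cache{FetchedAt: time.Now(), Links: links})
}

func main() {
    if links, ok := load(24 * time.Hour); ok {
        fmt.Println("cache hit:", len(links), "links")
        return
    }
    // Cache miss: run the scraper here, then persist its output.
    _ = save([]string{"https://example.com"})
}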
The scope and number of possible examples for such an open-ended question are too broad to cover in a single answer, and the details will most likely require multiple, more specific questions.
Use a database and store as much detail as you can (fetched_at, host, path, title, meta_desc, anchors, etc.). You'll be able to query over the data later, and it will be useful to have it in a structured format. If you don't want to deal with a DB dependency, you could embed something like boltdb (pure Go) or SQLite (cgo).
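For instance, with bbolt (the maintained fork of boltdb); the database file, bucket and key names below are just illustrative:

package main

import (
    "log"
    "time"

    bolt "go.etcd.io/bbolt"
)

func main() {
    db, err := bolt.Open("scrape.db", 0600, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Store each href in a "links" bucket, keyed by URL,
    // with the fetch time as the value.
    err = db.Update(func(tx *bolt.Tx) error {
        b, err := tx.CreateBucketIfNotExists([]byte("links"))
        if err != nil {
            return err
        }
        return b.Put(
            []byte("https://example.com/a"),
            []byte(time.Now().Format(time.RFC3339)),
        )
    })
    if err != nil {
        log.Fatal(err)
    }
}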
I need to provide pluggable functionality to a Go program.
The idea is that a 3rd party can add functionality for a given path, i.e.
/alive maps to http://localhost:9876, or
/branding maps to http://localhost:9877 and so on.
I first tried to think of it as adding a JSON config file, where each such plugin would have an entry, e.g.:
{
    "Uri": "alive",
    "Address": "http://localhost:9876",
    "Handler": "github.com/user/repo/path/to/implementation"
},
This, though, blatantly reveals Java thinking, and feels utterly inadequate for Go: there is no notion of class loaders in Go, and loading this would mean having to use the loader package from Go's tools.
Any proposals on how to do this in a more idiomatic Go way? In the end, I just need to be able to map a URI to a port and to an implementation.
Compile-time configuration
If you can live with compile-time configuration, then there is no need for a JSON (or any other) configuration file.
Your main package can import all the involved "plugins" and map their handlers to the appropriate paths. There is also no need to create multiple servers, although you may do so if that fits your (or the modules') needs better.
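A common Go pattern for this is a small registry that each module populates from its init function, so main only needs blank imports. A sketch; all package paths here are hypothetical:

// registry/registry.go
package registry

import "net/http"

var handlers = map[string]http.Handler{}

// Register is called by each module's init function.
func Register(path string, h http.Handler) { handlers[path] = h }

// Handlers returns the path-to-handler map for main to mount.
func Handlers() map[string]http.Handler { return handlers }

// main.go
package main

import (
    "log"
    "net/http"

    "example.com/app/registry"

    _ "example.com/app/plugins/alive"    // registers /alive in its init()
    _ "example.com/app/plugins/branding" // registers /branding in its init()
)

func main() {
    mux := http.NewServeMux()
    for path, h := range registry.Handlers() {
        mux.Handle(path, h)
    }
    log.Fatal(http.ListenAndServe(":8080", mux))
}

Adding a module is then a one-line change in main: a new blank import.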
Run-time configuration
Run-time configuration and plugging in a new module require code to be loaded at run time. This is supported by the plugin package, but currently only under Linux.
For this you may use a JSON config file in which you list the compiled plugins (the paths to the compiled plugin files) along with the URL paths you want to map them to.
In the main package you can read the config file and load the plugins, each of which should expose a variable or a function that returns the handler that will handle the traffic (requests). This is preferred over the plugins firing up their own HTTP servers, for performance reasons, but both can work (plugins returning a handler for you to register, or plugins launching their own servers).
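A minimal sketch of the loading side, assuming each plugin is built with go build -buildmode=plugin and exports a NewHandler function (the exported name and the config shape are assumptions):

package main

import (
    "log"
    "net/http"
    "plugin"
)

// pluginConfig mirrors one entry of the JSON config file described above.
type pluginConfig struct {
    URI  string // e.g. "/alive"
    File string // e.g. "./plugins/alive.so"
}

func main() {
    // In practice these entries would be read from the JSON config file.
    configs := []pluginConfig{{URI: "/alive", File: "./plugins/alive.so"}}

    mux := http.NewServeMux()
    for _, cfg := range configs {
        p, err := plugin.Open(cfg.File)
        if err != nil {
            log.Fatal(err)
        }
        sym, err := p.Lookup("NewHandler")
        if err != nil {
            log.Fatal(err)
        }
        newHandler, ok := sym.(func() http.Handler)
        if !ok {
            log.Fatalf("%s: NewHandler has unexpected type", cfg.File)
        }
        mux.Handle(cfg.URI, newHandler())
    }
    log.Fatal(http.ListenAndServe(":8080", mux))
}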
Note that the configuration does not need to be "static" either: the main app could also receive and load new modules at runtime (e.g. via a dedicated handler which receives the (file) path to the new module and the path to map it to, optionally maybe even the binary plugin code itself; but don't forget about security!).
Note that while you can load plugins at runtime, there is no way to "unload" them. Once a plugin is loaded, it will stay in memory until the app exits.
Separate, multi-apps
There is a third solution, in which your main app acts as a proxy. Here you start the additional "modules" as separate apps, listening on localhost at specific ports, and the main app forwards incoming requests to the independent apps listening on the different localhost ports (or even on other hosts).
The standard library provides httputil.ReverseProxy, which does just this.
This does not require runtime code loading, as the "modules" are separate apps which can be launched separately. Still, it gives you runtime configuration flexibility, and this solution works on all platforms. Moreover, this setup supports taking modules down at runtime, as you can just as easily un-map / shut down the apps of the independent modules.
The separate apps can be launched on their own or from / by the main app; both solutions are viable.
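A minimal sketch of such a proxy, using the example mappings from the question (the proxy's own listen port is an assumption):

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

// mount forwards everything arriving at path to the module listening at target.
func mount(mux *http.ServeMux, path, target string) {
    u, err := url.Parse(target)
    if err != nil {
        log.Fatal(err)
    }
    mux.Handle(path, httputil.NewSingleHostReverseProxy(u))
}

func main() {
    mux := http.NewServeMux()
    mount(mux, "/alive", "http://localhost:9876")
    mount(mux, "/branding", "http://localhost:9877")
    log.Fatal(http.ListenAndServe(":8080", mux))
}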
I have an application which has a core website, an API and an admin area. I wanted to know: is it a bad idea to have everything in one app, or should I create separate Symfony2 projects, or split them into different kernels?
I'm not sure whether adding lots of bundles to the same kernel will affect performance a lot, or just a little bit, which wouldn't matter.
The options are:
keep everything on the same kernel; it won't make much difference
have multiple kernels for different parts of the application (API, admin and core website)
create a separate Symfony2 project for the admin area and the API.
or your wise words :)
You can define more "environments".
For example, in AppKernel.php:
class AppKernel extends Kernel
{
    public function registerBundles()
    {
        $bundles = array(
            new Symfony\Bundle\FrameworkBundle\FrameworkBundle(),
            new Symfony\Bundle\SecurityBundle\SecurityBundle(),
            new Symfony\Bundle\TwigBundle\TwigBundle(),
            new Symfony\Bundle\MonologBundle\MonologBundle(),
            new Symfony\Bundle\SwiftmailerBundle\SwiftmailerBundle(),
            new Doctrine\Bundle\DoctrineBundle\DoctrineBundle(),
            new Sensio\Bundle\FrameworkExtraBundle\SensioFrameworkExtraBundle(),
            //new AppBundle\AppBundle()
        );

        if (in_array($this->getEnvironment(), array('api'), true)) {
            $bundles[] = new ApiBundle\ApiBundle();
            //-- Other bundles
        }

        //-- Other environments

        return $bundles;
    }
}
It mostly depends on the quality of the bundles, and on how tightly connected they are.
I would reject option 3 (create a separate Symfony2 project for the admin area and the API) right at the start, as you are probably not building two separate applications.
Have multiple kernels for different parts of the application (API, admin and core website)
A common problem is created by listeners and services in the container, especially when a listener should work in only one of the app contexts (API/frontend/backend). Even if you remember to check the context at the very beginning of the listener method (and do the magic only in the wanted context), the listener can still depend on injected services which need to be constructed and injected anyway. A good example here is FOS/RestBundle: even if you configure zones, the view_handler is still initialized and injected into the listener on the frontend (where view_listener is activated for the API); see https://github.com/FriendsOfSymfony/FOSRestBundle/blob/master/Resources/config/view_response_listener.xml#L11. I'm not 100% sure here, but disabling translations, Twig, etc. for the API (most APIs don't need them) should also speed things up.
Creating a separate kernel for the API context would solve that issue (in our project we use one kernel, and we had to disable that listener; blackfire.io profiles told us that doing so saves ~15ms on every frontend request).
Creating a new kernel for the API would make sure that no API-only services/listeners interfere with frontend/backend rendering (and it works both ways). But it creates the additional work of extracting shared components used by bundles in different kernels; in a world with Composer, though, that's not a huge task anymore.
But this is a case only for people who measure every millisecond of response time, and it depends on the quality of your and third-party bundles. If everything there is perfectly OK, then you don't need to mess with kernels.
It's a personal choice, but I have a similar project and I have a publicBundle, adminBundle and apiBundle all within the same project.
The extra performance hit is negligible, but organisation is key ... that is why we're using an MVC package (Symfony) in the first place, is it not? :)
NB: Your terminology is a little confusing; I think by "kernel" you mean "bundle".
Having several kernels would not necessarily help.
Split your application into bundles and keep all the advantages of sharing your entities (and so on) across the different parts of your application.
You can define separate routing/controllers/configuration that are loaded depending on the host/URL.
Note:
If you are going to separate your app into two big bundles (i.e. Admin & Api) that share the same entities, you will surely have to make a choice. That choice may mean that one of your bundles contains too much (and unrelated) logic and will need to be refactored into several bundles later.
Create a bundle per section of your application corresponding to a set of related resources, and differentiate the two parts through different contexts in the configuration.
Also, name your classes/namespaces sensibly.