How to manage user access in JanusGraph version 0.5.3 - janusgraph

I am using JanusGraph version 0.5.3 with Cassandra and Elasticsearch, which ships with Gremlin 3.4.6.
I am able to create two graphs (dev and test) using ConfiguredGraphFactory.
What I want to achieve now is to create users (hopefully using the credentials DB, where I am currently facing issues) and then, after I create dev users and test users, restrict each of them to a specific graph rather than all graphs.
What I mean is that ConfiguredGraphFactory.getGraphNames() should return only the graphs that belong to that user, not every graph in Cassandra.
Thanks,
Atul.

There are no authorization functions built into ConfiguredGraphFactory. There may be ways to do what you are asking, but none that are easily implemented without detailed knowledge of Gremlin Server internals. In TinkerPop 3.5.0 (not yet released as of the time of my answering, but expected within weeks), this situation changes, as authorization features have been exposed. The work was done under TINKERPOP-2389 if you'd like to learn more.
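To give a rough idea of what that enables, here is a minimal sketch of a custom Authorizer against the 3.5.0 interface. The PerUserGraphAuthorizer class and its hard-coded user-to-graph mapping are my own invention for illustration, not something that ships with TinkerPop; only the Authorizer interface and the classes it references come from 3.5.0:

```java
import java.util.Map;
import java.util.Set;

import org.apache.tinkerpop.gremlin.driver.message.RequestMessage;
import org.apache.tinkerpop.gremlin.process.traversal.Bytecode;
import org.apache.tinkerpop.gremlin.server.auth.AuthenticatedUser;
import org.apache.tinkerpop.gremlin.server.authz.AuthorizationException;
import org.apache.tinkerpop.gremlin.server.authz.Authorizer;

// Hypothetical authorizer: each user may only touch the graphs mapped to them.
public class PerUserGraphAuthorizer implements Authorizer {

    // Illustrative hard-coded mapping; a real deployment would load this
    // from the authorizer's configuration or from a credentials store.
    private static final Map<String, Set<String>> USER_GRAPHS = Map.of(
            "devUser", Set.of("dev"),
            "testUser", Set.of("test"));

    @Override
    public void setup(final Map<String, Object> config) {
        // Read a user-to-graph mapping from the config here instead, if preferred.
    }

    // Bytecode-based requests: check every graph alias the traversal binds to.
    @Override
    public Bytecode authorize(final AuthenticatedUser user, final Bytecode bytecode,
                              final Map<String, String> aliases) throws AuthorizationException {
        final Set<String> allowed = USER_GRAPHS.getOrDefault(user.getName(), Set.of());
        for (final String graphName : aliases.values()) {
            if (!allowed.contains(graphName)) {
                throw new AuthorizationException("user " + user.getName()
                        + " is not authorized for graph " + graphName);
            }
        }
        return bytecode;
    }

    // String-based (script) requests can reach any graph by name, so the
    // simplest safe policy in this sketch is to reject them outright.
    @Override
    public void authorize(final AuthenticatedUser user, final RequestMessage msg)
            throws AuthorizationException {
        throw new AuthorizationException("string-based requests are not authorized");
    }
}
```

The authorizer would then be wired in via the authorization section of the Gremlin Server yaml (see the 3.5.0 docs). Note that this gates requests at the server; ConfiguredGraphFactory.getGraphNames() itself would still list every graph.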

Related

Tracking api/event changes between different microservice versions before deployment

I work in DevOps for a fairly large company that is in the process of transitioning to microservices. This is a new area for most people involved, and some of the governing requests seem like bad practice to me, but I don't have the expertise to convince them otherwise.
The request is to generate a report before deploying that would list any new api/events (Kafka is our messaging service) in a microservice.
The path that's being recommended is for devs to follow a style guide and then scrape the source code during CI/CD pipeline to generate a report that can be compared to previous reports and identify any new apis.
This seems backwards and unsustainable but I've been unable to find another solution that would satisfy their requests. I've recommended deploying to dev first, then using a tracing tool to identify any api changes, or event subscriptions, but they insist on having the report before deploying.
I'm hoping for any advice on best practice to accomplish this.
Tracing and detecting version changes is definitely over-engineering. What's simpler, as @zenwraight has mentioned, is to version your APIs. While tracing through services to explore the different versions and schemas could be a potential solution, it requires a lot more investment upfront, and if that's not the bread and butter of the company, I would rather use a vendor product that might support something like this.
If discovery is a mechanism that is needed, I would recommend something that publishes internal API docs using a tool like Swagger so that you can search if there's an API you can consume.
And finally, to support moving between versions, I would recommend an API onboarding process for the services, so that teams can notify other teams when specific versions of their services are reaching the end of their lifecycle and consumers need to migrate to newer ones.
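For instance, here is a minimal JAX-RS sketch of URL-based versioning (the resource classes and the added createdAt field are made-up examples). Keeping /v1 intact while /v2 ships the change makes "what is new in this deploy" visible both in code review and in any generated API report:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// v1 keeps serving the old contract untouched.
@Path("/v1/orders")
public class OrdersV1Resource {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String list() {
        return "[{\"id\": 1, \"status\": \"OPEN\"}]";
    }
}

// v2 carries the new field; existing v1 consumers are unaffected.
@Path("/v2/orders")
class OrdersV2Resource {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String list() {
        return "[{\"id\": 1, \"status\": \"OPEN\", \"createdAt\": \"2021-01-01\"}]";
    }
}
```

Diffing the Swagger/OpenAPI documents generated from such resources between builds would also give them their pre-deployment report, without scraping source code.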

How to monitor Elastic Stack without X-Pack?

Can we monitor Elastic Stack 6.0 and above (Elasticsearch and friends) without using X-Pack? As we know, many of the features such as security, machine learning, and the graph APIs are not supported under the Basic (free) licence.
So I want to know whether there are any APIs, without licence limitations, that can be used to implement the functionality mentioned above?
All the information should be in the cluster APIs; you'll just lack the visualizations.
Monitoring (of the local cluster) is actually included in X-Pack Basic, unlike the other features. Any reason you don't want to use it?
Alternatives include Kopf, Cerebro, ... though you'll need to run them as a separate process and watch out for version compatibility.
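For what it's worth, here is a small sketch of polling those cluster APIs directly (the endpoints are standard Elasticsearch ones; the localhost address is an assumption):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Polls two licence-free monitoring endpoints and prints the raw JSON.
public class ClusterHealthCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (String endpoint : new String[]{"/_cluster/health", "/_nodes/stats/jvm,os"}) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:9200" + endpoint))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(endpoint + " -> " + response.body());
        }
    }
}
```

Feed the output into whatever dashboarding you already have and you've covered the basics of what the monitoring UI shows.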
We've had success with ElasticHQ for monitoring (requires Python):
https://github.com/ElasticHQ/elasticsearch-HQ
And Sentinl for setting up alerts/watchers (it is a plugin for Kibana):
https://github.com/sirensolutions/sentinl/wiki
We have set up a reverse proxy to enable SSL/TLS and use Ubuntu user management to create logins; however, we do not limit access within Kibana itself.
We have little need for graph/machine learning, so I am unaware of free alternatives for those.
The company I work for is heavily Open Source, so these projects suit us.

Using OpenDaylight to document networks

I am facing the task of analyzing, documenting, and visualizing a rather large global network (more than 200 sites, various technologies) and wonder if OpenDaylight might help me with this. The documentation should mainly focus on Layer 3 and Layer 4, but may partially also include the L2 topology. There is no need to actually use a controller to configure devices through southbound APIs, but I would like to use the benefits of a structured, consistent description (e.g. YANG) and the visualization (DLUX, NeXt) which ODL provides.
So here's my question: Is there any way to manually add a topology (nodes, links) through a graphical editor to ODL?
From the general project description and from the last 20 seconds of this video, NeXt in general seems to be able to add/modify topology information. How far does the NeXt integration with ODL go? Would I need to write my own app (which could use NeXt), which would then use the RESTCONF APIs to add information to the topology? Or should I maybe create a virtual topology of the real network using mininet? Any other ideas?
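To make it concrete, below is the kind of RESTCONF call I imagine for pushing a hand-built topology into the config datastore, using the ietf network-topology model (this is just my rough sketch; the URL, the admin credentials, and the manual-doc topology id are placeholders on my side):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Writes two nodes and one link into ODL's config datastore via RESTCONF.
public class PushTopology {
    public static void main(String[] args) throws Exception {
        String body = """
                {"topology": [{
                    "topology-id": "manual-doc",
                    "node": [{"node-id": "site-a"}, {"node-id": "site-b"}],
                    "link": [{"link-id": "a-to-b",
                              "source": {"source-node": "site-a"},
                              "destination": {"dest-node": "site-b"}}]}]}""";
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8181/restconf/config/"
                        + "network-topology:network-topology/topology/manual-doc"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
    }
}
```

But that still leaves the question of whether a graphical editor could generate such calls for me.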
I understand that an SDN controller is probably not the right kind of tool for this task, but the alternatives (e.g. Net2Plan, Visio, yEd) are basically local solutions, a bit too complicated, or provide no standardized DSL for topologies. Besides that, the documentation covering the ODL and NeXt integration is very limited - I couldn't get a grasp of how that integration is supposed to work (I'll try harder).

Heroku and Elasticsearch - which add-on to use?

I plan to use Elasticsearch on Heroku.
I was looking for the best Elasticsearch add-on option to use.
Found was my first choice, for the following reasons:
It is now part of Elastic.
When using Elasticsearch on Heroku it will be open to the world, so a secure wrapper for the transport client was introduced - https://github.com/foundit/elasticsearch-transport-module/
But it looks like this repository is not actively maintained, and Elasticsearch 1.5 is the latest version that is supported.
What is the recommended add-on then?
If I want to use the latest version of Elasticsearch, am I doomed to use an insecure connection?
Maybe use the official Java client?
Nick with Bonsai here. Based on your question, and my own obvious bias, I'll suggest Bonsai for the following reasons:
All of our clusters have SSL with basic auth to secure the connection. We feel pretty strongly that security comes as a standard feature.
We were the first hosted Elasticsearch provider, ever. (And one of the first add-on providers on Heroku, ever, with our first search add-on, Websolr.) So we've got plenty of experience hosting search, and thousands of other happy Heroku customers.
One definite tradeoff with using Bonsai is that we're generally going to lag a bit behind the latest version of ES. As of this posting we're still running ES 1.7, but updates to ES 2.2 are just around the corner.
This is probably going to be true in the future as well. Part of the reason for this is that we're a small, bootstrapped company, and we have to be pragmatic about where we focus our engineering efforts. Plus, as an operations company serving thousands of businesses, we like to let major new upgrades spend a few months in the wild before we commit to supporting them.
We also work hard on providing managed upgrades, at least for versions that are sufficiently backwards compatible. Everyone has their tools for helping to manage upgrades, but I don't think any of the other providers do actual in-place upgrades.
Unless you have a hard requirement for a specific feature in 2.x (and if you do, please let me know), you may do fine on 1.7 until our 2.x support is fully baked. Drop us a line at info@bonsai.io to get whitelisted for the first release of that in the coming weeks.

Should cluster support be at the application or framework level?

Let's say you're starting a new web project that requires the website to run on an MVC framework on Mono. A couple of major requirements are that it has to scale easily, be stable, and work across multiple servers that may or may not be in the same place or even on the same local network.
The first thing I thought of was a sort of cluster communication between servers. Each server would act as a node and be its own standalone application and would query other nodes in a known list for session information and things like that.
But one major design question I have is: should this functionality be built into the supporting framework, or should the application handle the synchronization of the data?
Or am I just way off and this would never work?
Normally, clustering belongs to some kind of middleware layer, and thus at your framework level. However, it can also be implemented at the application level.
It depends on your exact use case: whether you want load balancing, scalability, etc.
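Either way, a pattern that avoids nodes querying each other for sessions is to externalize the session state to a shared store, so every node stays stateless. A rough sketch (not any specific framework's API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Contract the application codes against; where the data lives is a
// middleware decision, which keeps the cluster concern out of the app.
interface SessionStore {
    void put(String sessionId, Map<String, Object> data);
    Map<String, Object> get(String sessionId);
}

// In-memory stand-in for local development; a real deployment would back
// this with Redis, memcached, or a database reachable from every node.
class InMemorySessionStore implements SessionStore {
    private final Map<String, Map<String, Object>> sessions = new ConcurrentHashMap<>();

    @Override
    public void put(String sessionId, Map<String, Object> data) {
        sessions.put(sessionId, data);
    }

    @Override
    public Map<String, Object> get(String sessionId) {
        return sessions.get(sessionId);
    }
}
```

Swapping the in-memory implementation for a networked one changes nothing in the application, which is exactly the separation you're asking about.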
