check and create amqp virtual host (vhost) in python (pika) - amqp

I am new to pika (https://github.com/pika/pika). I wonder whether there are APIs to check whether one virtual host exists or not, and whether there are APIs to create virtual host. I know that vhost can be created by rabbitmqctl, but I did not find them in pika... Did I miss anything?

We can use the RabbitMQ web management plugin (https://www.rabbitmq.com/management.html), which provides REST APIs. What we need to do is to write a program to do PUT and GET.
There are some good examples:
(1) a good client: pyrabbit, https://github.com/bkjones/pyrabbit/blob/master/pyrabbit/api.py;
(2) some code based on requests. https://github.com/numenta/numenta-apps/blob/1ff572a21a5c27fd290822e572ce33f42e1ee19e/nta.utils/nta/utils/test_utils/amqp_test_utils.py#L145-L160
(3) good examples based on urllib2: https://github.com/jasonmcintosh/rabbitmq-zabbix/blob/master/scripts/rabbitmq/api.py
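
An added example (not part of the original answer or of the linked projects): checking for and creating a vhost through the management plugin's HTTP API with only the Python 3 standard library. The function names, the injectable opener parameter and the base URL are assumptions made for illustration; GET and PUT on /api/vhosts/<name> are the management API's endpoints.

```python
import base64
import urllib.error
import urllib.request
from urllib.parse import quote


def vhost_url(base_url, vhost):
    """Build /api/vhosts/<name>. The name is a single path segment, so it is
    fully percent-encoded: the default vhost "/" becomes %2F."""
    return f"{base_url.rstrip('/')}/api/vhosts/{quote(vhost, safe='')}"


def _call(method, url, user, password, opener):
    # Basic auth is only base64-encoded, so base_url should be the HTTPS
    # listener of the management plugin, not plain HTTP.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, method=method, headers={
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    })
    try:
        with opener(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # 4xx/5xx arrive as exceptions in urllib


def vhost_exists(base_url, vhost, user, password, opener=urllib.request.urlopen):
    status = _call("GET", vhost_url(base_url, vhost), user, password, opener)
    if status == 200:
        return True
    if status == 404:
        return False
    # Never treat an auth failure (401) as "vhost missing".
    raise RuntimeError(f"unexpected HTTP {status} while checking vhost")


def ensure_vhost(base_url, vhost, user, password, opener=urllib.request.urlopen):
    """Create the vhost if it is missing; returns True only if it was created."""
    if vhost_exists(base_url, vhost, user, password, opener):
        return False
    status = _call("PUT", vhost_url(base_url, vhost), user, password, opener)
    if status not in (201, 204):
        raise RuntimeError(f"vhost creation failed: HTTP {status}")
    return True
```

The account used here needs management access with rights to create vhosts; a dedicated provisioning user keeps those credentials out of the application's own AMQP connection settings.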

Related

How to remotely connect to a local elasticsearch server - in a secure way ofc

I have been playing around with creating a webapp that uses elasticsearch to perform queries. Currently, everything is in production, thus on the localhost, let's say elasticsearch runs at 123.123.123.123:9200. All fun and games, but once the web application (react) is finished, the webapp should be able to send the queries to the current local elastic search db.
I have been reading around on how to get this done in a proper and most of all secure way. Summary of this all is currently:
"First off, exposing an Elasticsearch node directly to the internet without protections in front of it is usually bad, bad news." (see here: Accessing elasticsearch from a public domain name or IP).
Another interesting blog I found: https://code972.com/blog/2017/01/dont-be-ransacked-securing-your-elasticsearch-cluster-properly-107.
The problem with the above-mentioned sources is that they are a bit older, and thus I am not sure whether they are up to date.
Therefore the following questions:
Is nginx sufficient to act as a secure middleman, passing the queries from the end-users to elastic?
What is the difference at that point with writing a backend into the react application (e.g. using node and express)?
What is the added value taking into account the built-in security from elasticsearch (usernames, password, apikey, certificates, https,...)?
I am reading a lot about using a VPN or tunneling. I have the impression that these solutions are more geared towards a corporate-collaborative approach. Let's say I am running my front-end on a live server, I can use tunneling to show my work to colleagues, my employer. VPN would be more realistic for allowing employees -wish I had them, just a cs student here- to access e.g. the database within my private network (let's say an employee needs to access kibana to adapt something, let's say an API-key - just making something up here), he/she could use a VPN connection for that.
Thank you so much for helping me clarify the above-mentioned points!
TLS, authorisation and access control are free for the Elastic Stack, and have been for a while. I'd start by looking at the docs, as it's an easy way to natively secure your cluster.
For nginx, it can be useful for rate limiting, or blocking specific queries, for example. However, it's another thing to configure and maintain.
From a client POV it would really only matter if you are using the official Elasticsearch clients, and you use nginx and make changes to the way the API would respond to the client (e.g. path rewrites, rate limiting).
It's free, it's native, it's easy to manage via Kibana.
I'd follow the docs to secure Elasticsearch and then see if you need this at some point in the future. This would be handled outside Elasticsearch anyway, and you'd still want to secure Elasticsearch.
The point in exposing Elasticsearch nodes directly to the internet is a higher vulnerability in principle. You should follow the rule of the least "surface" of your system on the internet.
A good practice is to hide from the internet whatever doesn't need to be there, although well protected. It takes ~20mins to get cyber attacks on any exposed service (see a showcase).
So I suggest you install a private network, such as a traditional VPN or an SDP product such as Shieldoo Mesh.

Using spring-boot-admin for a non spring-boot project

tl;dr
Requesting suggestions, guidelines or examples for possibilities to extend spring-boot-admin to use methods other than HTTP requests for health monitoring of non-spring projects like MariaDB.
Full version
There is a requirement to set up a monitoring application using spring-boot-admin. Several of the clients are spring-boot applications and are easily implemented. There are however a couple of non spring-boot projects like the database server MariaDB.
The question is therefore formulated thusly: Is it possible to extend SBA to monitor the database status by methods other than HTTP requests? One possible approach, for example, might be to check if it is possible to connect to the application specific TCP port to verify if the db server is still running. However, other possibilities can be exploited too.
One post I found similar to my question was this:
https://github.com/codecentric/spring-boot-admin/issues/504. The key difference here though is that the provided answer still suggests an HTTP approach. The reference guide also does not suggest an alternative.
Should such a possibility exist, a brief outline of the approach or an example implementation would be most welcome.
SBA currently only supports checking health via http. But your DB should be implicitly monitored if you have an according health indicator on your business application.
It should be possible to extend the StatusUpdater#queryStatus() doing a tcp connect if it encounters a health-url beginning with tcp:// instead of http://...
And in case you accomplish that a PR is appreciated :)

ElasticSearch replication home/server

I am running a local ElasticSearch server from my own home, but would like access to the content from outside. Since I am on a dynamic IP and besides that do not feel comfortable opening up ports to the outside, I would like to rent a VPS somewhere, setup ElasticSearch and let this server be a read only copy of the one I have at home.
As I understand it, this should be possible - however I have been unsuccessful at creating any usable version that lets another server be a read-only version of my home ES-server.
Can anyone point me to a piece of information or create a guide, that would help me to set this up? I am rather known to ES-usage, however my setup-skills are still vague.
"As I understand it, this should be possible"
It might be possible with some workarounds, but it's definitely not built for that:
One cluster needs to be in one physical region; mainly because of latency and the stability of the network connection.
There are no read-only versions. You could only allow read access to a node (via a reverse proxy or the security plugin), but that's only a workaround.

How can I embed NetLimiter in my application

I have a C# client application that connects to multiple servers. I noticed that it is necessary to use NetLimiter activated rules in order to make my client connect correctly with higher priority when there is so much traffic on the client computer.
I did not find any documents about how I can embed and make rules programmatically in this application. However, I read here that someone tried to use the NetLimiter API but failed.
I read somewhere that I can write my own application that uses the TC API of Windows in here and marks DSCP to make priorities. But I ran into this problem before setting the flow options of my C# application.
Please guide me with this issue.
Look here. Connect() and SetRule() are the only APIs available.
NetLimiter seems to be a COM object, so to use it from C# you need something like this:
dynamic myownlimiter = Activator.CreateInstance(Type.GetTypeFromProgID("NetLimiter.VirtualClient"));
myownlimiter.Connect("host", "port");
and then use SetRule() as described in the first link.

Inter-Gear Communication for Openshift?

I'm trying to create an app such that gear 2 according to this model can be accessed by gear 3,4...n when using the --scaling option.
The idea being for this structure is the head of a chain of relays. I'm trying to find where the relevant information is so all the following gears have the same behavior. It would look like this:
I've found no documentation that describes how to reach gear 2 (The Primary DNAS) with a url (internal/external ip:port) or otherwise, so I'm a little lost as to how to let the app scale properly.
I should mention so far I've only used bash scripting, but I'm not worried about starting the program in other languages, but so long as it follows that structure in openshift I'm not worried.
The end result is to hopefully create a scalable instance of shoutcast on openshift.
To Be Clear:
I'm developing a cartridge, not using the diy, all I understand of openshift is in this guide but of course I'm limited because I'm new.
I'm stuck trying to figure out how to have the cartridge handle having additional gears use the first gear as a relay. I am not confused about how Openshift routes requests externally to the gears and load balances them. I'm not lost how to use port-forwarding to connect to my app, the goal would be to design the cartridge so this wouldn't be a requirement at all, to only use external routes.
The problem as described above is that additional gears need some extra configuration, they need an available source (what better than the first gear?). In fact the solution to my issue might be to somehow set up this cartridge to bypass haproxy with an external route that only goes to the first gear.
Github for those interested, pass it around, it'll remain public. Currently this works only as a standalone, scaling it (what I'd like to fix) causes issues. I've been working on this too long by myself, so have at it :)
There's a great KB article that explains how the routing works on OpenShift gears here https://help.openshift.com/hc/en-us/articles/203263674-What-external-ports-are-available-on-OpenShift-.
On a scalable application, haproxy handles all the traffic routing to your gears. The only way to access your gears is through the ports mentioned in the article above. rhc does however provide a port-forwarding option that would allow you to access things like mysql directly from your local machine.
Please note: We don't allow arbitrary binding of ports on the externally accessible IP address.
It is possible to bind to the internal IP with port range: 15000 - 35530. All other ports are reserved for specific processes to avoid conflicts. Since we're binding to the internal IP, you will need to use port forwarding to access it: https://openshift.redhat.com/community/blogs/getting-started-with-port-forwarding-on-openshift
