How does the Titan (not backend storage) clustering work?

Context
I'm using Titan v1.0.0 on AWS infrastructure and want to support failover/fault tolerance. AWS will take care of the DynamoDB storage backend, but it seems necessary to have several Titan instances serviced by an (ELB) load balancer.
I'm using a Node.js library to get to Gremlin, and Gremlin to access Titan.
Question
So, how does the Titan (not backend storage) clustering work? If at all.
To be clear, I'm not talking about any backend storage clustering, as I'm using DynamoDB on AWS. The documentation on transaction locking suggests to me that a Titan cluster must exist, as other Titan nodes wouldn't know about the locking without some sort of inter-communication. But I don't see any configuration options that support this.
If clustering is possible on Titan, does anyone have any information on how to set this up in a production high-availability configuration?
An illustration of the server-side architecture:
[NodeJsA]\               /[TitanA]\
          \             /          \
           [ELB (AWS)]               [DynamoDb (AWS)]
          /             \           /
[NodeJsB]/               \[TitanB]/
Further, if there is no clustering of the Titan nodes, then a change made via the TitanA node (above) could take the following amount of time to be seen on the TitanB node (worst case):
(AWS eventual consistency convergence time (~1 sec) + TitanB cache timeout + poll time from the NodeJs nodes to Titan)
Another consequence of the lack of clustering would be that sessions would have to be pinned in the ELB, or else a read request issued after an update could be served by a different node holding stale information.

Titan doesn't do any "clustering" outside of what is supported by the selected backend. You referred to "locking" as something that would indicate that clustering is supported, but if you read about the locking providers in the link you supplied, you'll see that locking isn't doing anything terribly fancy at the Titan level and that it is backend dependent. So Titan instances really don't have any external clustering capabilities or knowledge of each other, and you need to take that into account in your architecture.
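Regarding the staleness window mentioned in the question, one knob worth noting is Titan's instance-local database cache: shortening or disabling it trades read latency for fresher reads across instances. A hedged sketch of the relevant graph properties (illustrative values only; verify the exact names and defaults against the Titan 1.0.0 configuration reference):

```
# titan-dynamodb.properties (illustrative values, not a recommendation)

# Disable the instance-local database cache so every read goes to the backend:
cache.db-cache=false

# ...or keep the cache but bound staleness to a few seconds:
# cache.db-cache=true
# cache.db-cache-time=5000     # cache expiry in milliseconds
# cache.db-cache-size=0.25     # fraction of heap given to the cache
```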

Related

Amazon EMR vs EC2 for Off loading BI & Analytics anno 2018

I looked at some posts on this topic, but they are a bit older. I have read the AWS and other blogs as well, but ...
My simple non-programming question for AWS in today's environment is:
If we have a DWH of say, 20+TB and growing, that we want to off-load to the Cloud as many are doing, then
if we have a regular daily DWH feed with some mutations, then
should we in the case of AWS, use EMR or EC2?
Moreover, it is a complete batch environment, no Streaming or KAFKA requirements. Usage of SPARK for sure.
EMR seems great, but I have the impression it is for Data Scientists to do whatever they want whenever they want. For more regular ETL I am wondering if this is suited. The appeal of less management is certainly a boon.
In the docs on AWS I cannot find a definitive answer, hence this question.
My impression is that with AMI and bootstrapping own services, that EMR is certainly one way to go, and, that EC2 would be more for a KAFKA Cluster or if you really want to control your own environment and tooling completely based on say Cloudera's distribution per se.
So, the answer here is for others who may need to assess which options apply for off-loading. It is actually not so hard in hindsight. Note that Azure and other non-AWS vendors are not considered here. In a nutshell, then:
EMR is a (PaaS) AWS-managed Hadoop service
EMR provides tools that Amazon feels will do the job for Data Science, Analytics, etc., but you can "bootstrap" your own requirements / software if needed.
EMR clusters comprise short-running EC2 instances, and provisioning happens behind the scenes, as it were. You get patches applied easily this way, and you can scale up and down very easily as well. Compute and storage are decoupled, which allows this scaling to occur easily.
Elasticity obviously applies more to compute; data needs to be there as long as you need it. EMR relies on S3 to save results to for the longer term. After saving, one terminates the EMR cluster and, when required, starts a new EMR cluster and attaches the saved S3 results - if applicable - to the new cluster. EMRFS allows S3 to look like part of HDFS and provides easy access. EBS-backed storage also exists, which allows saving of results to storage tied to the EC2 instance for the duration of that instance.
It's a new way of doing things. One has access to "spot" instances at spot prices. Billing is less predictable as it depends on what you do, but it could well be cheaper overall, provided it is governed correctly; an example of this is Expedia's management of its EMR clusters.
Ad-hoc querying is not well served by S3, so you will need another AWS managed service such as Presto / Athena or Redshift (Spectrum), which is an additional set of services and cost. I mention this because of the slower S3 performance.
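As a concrete illustration of the transient-cluster pattern described above, here is a minimal boto3 sketch (cluster name, region, instance types and S3 paths are placeholders) that launches an EMR cluster, runs a single Spark step and terminates itself when the step finishes:

```python
import boto3

emr = boto3.client("emr", region_name="eu-west-1")   # region is an assumption

response = emr.run_job_flow(
    Name="nightly-dwh-load",                          # hypothetical cluster name
    ReleaseLabel="emr-5.30.0",                        # pick the release you need
    Applications=[{"Name": "Spark"}],
    LogUri="s3://my-bucket/emr-logs/",                # placeholder bucket
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,         # terminate when the steps finish
    },
    Steps=[{
        "Name": "spark-etl",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/etl.py"],   # placeholder job
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    VisibleToAllUsers=True,
)
print(response["JobFlowId"])
```

Results land in S3, so the cluster itself can be thrown away after the step completes, which is the elasticity point made above.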
EC2 (IaaS) is more "traditional"
You elect to take this path if you want to provision EC2 instances yourself, because you want control of the software and of what goes into your Hadoop environment.
EC2 instances - VMs - have compute power, memory and EBS-backed temporary storage, and use EFS for file systems for HDFS or, say, KUDU, plus S3. S3 is not as easy to access as it is under EMRFS with EMR.
You install and maintain the Hadoop software yourself and apply patches, etc. Management of Hadoop on these EC2 instances is of course less of a big deal with Cloudera and Cloudbreak.
Billing is arguably more predictable, being based on the up-time of an EC2 instance, and billing applies continuously for any persisted storage.
An important point: one can combine an EC2 approach for, say, DWH loading on Hadoop - if "off-loading" - with EMR clusters for Data Science.
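By contrast, on the EC2 path you provision and tag the instances yourself and then install Hadoop on them (or hand them to Cloudera / Cloudbreak). A minimal boto3 sketch, with the AMI ID, key pair and security group as placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")    # region is an assumption

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI for your Hadoop base image
    InstanceType="m5.2xlarge",
    MinCount=4,
    MaxCount=4,
    KeyName="hadoop-admin",             # hypothetical key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvdb",
        "Ebs": {"VolumeSize": 500, "VolumeType": "gp2"},   # persistent data volume
    }],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "hadoop-worker"}],
    }],
)
print([i["InstanceId"] for i in resp["Instances"]])
```

Everything after launch (Hadoop install, patching, decommissioning) is then your responsibility, which is exactly the trade-off described above.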
MR Data Locality
This is not adhered to in either approach unless bare-metal options are used, but then the elasticity - the E, which is what enables the cost savings - is harder to achieve for both parties.
Data locality seems to be assumed by most, but it has in fact gone away with cloud computing, as expected, and performance still seems quite OK for Data Science etc.
For ad-hoc querying, Amazon themselves say they are not so sure about S3; in my experience, using EFS for HDFS/Parquet or KUDU works pretty quickly, to say the least.

Redis vs dynamoDb geolocation tracking

I am currently a bit confused about which database to use for geolocation tracking. What I want to do is update the location of a group of people every 30 seconds. The data is sent to the server using web-sockets. Each user has an id in the database and I would like to update the location of that user every 30 seconds. After doing so, I would like to query these locations and show them in real time to another group of users. My question is: what are the advantages and disadvantages of DynamoDB and Redis? Which one is faster and scales more easily? I am expecting almost 2 million QPS.
Both can scale fairly well, but this depends heavily on your use case and architecture.
DynamoDB is a cloud-based NoSQL storage system, and Redis is an in-memory data structure store. This means that queries to DynamoDB involve making a round trip to Amazon's servers, while queries to Redis are served from RAM (so much, much lower latency).
As a consequence of the above, the amount of data you can store in Redis is limited by the RAM available on your hardware. Also, in the event of Redis or your hardware crashing for some reason, you would have to accept some level of data loss. You can mitigate this somewhat by configuring Redis persistence so that Redis writes to disk regularly (either every N seconds or by manually triggering a write in your code), and mitigate it further by then copying those writes to S3 or elsewhere. This trades some performance (depending on your scale) for data safety, due to I/O latency. See the documentation for Redis persistence and this blog post by the GitHub engineering team mentioning their decision to remove Redis persistence for performance reasons.
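As a rough sketch of that persistence trade-off (using redis-py; the snapshot thresholds are arbitrary example values, not recommendations):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# RDB snapshotting: persist to disk if at least 1000 keys changed in 60 seconds.
# Equivalent to "save 60 1000" in redis.conf.
r.config_set("save", "60 1000")

# Or trigger a snapshot manually from application code (runs in the background).
r.bgsave()

# Append-only file: finer-grained durability at a higher I/O cost.
r.config_set("appendonly", "yes")
```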
Meanwhile all of the issues above are abstracted away for you by DynamoDB since AWS handles availability for you behind the scenes. You are really only limited by how much you can afford and usage (read/write per second) limits.
DynamoDB does not have native support for querying and inserting geospatial data (there is a library for it, but it seems to be unmaintained), whereas Redis does. You could also write your own code for this.
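To make the geospatial point concrete for the tracking use case, here is a minimal sketch using Redis' GEO commands via redis-py (the key and member names are made up, and the geoadd call shown is the redis-py 4.x form):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Update a user's position every ~30 seconds: (longitude, latitude, member).
r.geoadd("user:locations", (30.5234, 50.4501, "user:42"))
r.geoadd("user:locations", (30.5300, 50.4520, "user:43"))

# Query everyone within 2 km of a point for the real-time view.
nearby = r.georadius("user:locations", 30.5234, 50.4501, 2,
                     unit="km", withdist=True)
print(nearby)   # -> list of [member, distance-in-km] pairs
```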
DynamoDB does not have support for namespacing, or rather, DynamoDB is namespaced by your AWS account meaning you would not be able to maintain a separate DynamoDB instance with the same table names (say for production vs dev data) on the same AWS account. Redis doesn't either, but you can trivially spin up a separate Redis instance for this.
See also Redis MEMORY USAGE command and Redis memory optimization docs.

Consul infrastructure footprint and performance

Is there any documentation on Consul's requirements as they pertain to infrastructure footprint (e.g. memory/disk/CPU requirements and typical usage of the Consul agents and the masters themselves)? How does this compare with other, similar service discovery solutions?
I know of no official documentation on this. It will depend in part on the total number of nodes you are running (i.e. server agents plus client agents on each machine running services).
This thread includes comments (and a good link to a presentation by Darron at Datadog) from production users. They indicate using AWS m3.medium to m3.large instances in their production situations (you can find the specs for those instance types here). Darron's presentation includes some information on the number of nodes in their scenario as well as comments on how they scaled as the number of nodes grew.

Docker for Elasticsearch multi-tenancy SaaS or single instance and proxy?

I am trying to build a prototype of Elasticsearch as a Service. I have thought of two different approaches and I'd like to get opinions on one or the other implementation.
1. A single installation of Elasticsearch, with a proxy layer on top to add user validation (HTTP basic authentication plus a user account to validate the usage).
This approach would be relatively straightforward; the main challenge would be configuring the cluster properly to handle the load, as well as the permissions, so that there are no data leaks and the users don't have access to the cluster management APIs.
2. Use Docker as a container and have one instance of Elasticsearch for each user. In this case I would be providing the isolation via the Linux container (Docker). I'd still need to manage authentication.
It probably would be good to implement both, play around and see how things behave. Any opinions about the pros and cons of each approach?
Thanks!
Disclaimer: I am the founder of the Elasticsearch service provider Facetflow, which currently offers shared clusters.
I think that both approaches have merit, but maybe suited for different types of customers.
Looking at other SaaS providers, like MongoDB provider MongoLab, they essentially ended up offering both setups (although not using Docker).
So, pros and cons as I see them:
Shared Cluster
Most Elasticsearch as a Service providers operate this way.
Pros:
Far more affordable for the majority of users just looking for good search and analytics.
Simpler maintenance; fewer clusters for you to monitor.
Potentially fewer versions of Elasticsearch to integrate with. If you need to communicate with other systems (which you do) or write your own plugins (we did, for authentication, silos, entitlements, stats etc.), fewer versions will be far easier to maintain.
Cons:
Noisy neighbours have to be monitored and you have to scale and relocate indices to handle this.
Users have to choose from a limited list of versions of Elasticsearch, usually a single version.
Users don't get full cluster admin control.
Private Clusters using Docker
One provider that works this way is Found.
Pros:
Users could potentially be able to deploy a variety of versions of Elasticsearch
Users can have complete cluster admin access
Noisy neighbours don't affect their cluster, less manual intervention from you
Cons:
Complex monitoring and support. If people can do whatever they want (shut down the cluster over the api), you have to be clear where your responsibility as a provider ends, and what wakes you up at night.
Complex integration with multiple versions, see shared cluster pros.
More expensive since you have to allocate resources that might not always be used.
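For what it's worth, a minimal sketch of the per-tenant container idea using the Python Docker SDK; the image tag, port scheme and memory limit are placeholder choices, not recommendations:

```python
import docker

client = docker.from_env()

def start_tenant_cluster(tenant_id: str, host_port: int):
    """Start one single-node Elasticsearch container for a tenant (illustrative only)."""
    return client.containers.run(
        "docker.elastic.co/elasticsearch/elasticsearch:7.10.2",  # placeholder version
        name=f"es-{tenant_id}",
        detach=True,
        environment={
            "discovery.type": "single-node",          # single-node cluster per tenant
            "ES_JAVA_OPTS": "-Xms512m -Xmx512m",      # cap the heap per tenant
        },
        ports={"9200/tcp": host_port},                # one host port per tenant
        mem_limit="1g",
        restart_policy={"Name": "unless-stopped"},
    )

container = start_tenant_cluster("acme", 9201)
print(container.id)
```

Authentication, monitoring and resource accounting per tenant still have to be built around this, which is where the support complexity mentioned above comes from.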

How do you distribute your app across multiple servers using EC2?

For the first time I am developing an app that requires quite a bit of scaling; I have never had an application that needed to run on multiple instances before.
How is this normally achieved? Do I cluster SQL servers then mirror the programming across all servers and use load balancing?
Or do I separate out the functionality to run some on one server some on another?
Also how do I push out code to all my EC2 windows instances?
This will depend on the requirements you have, but as a general guideline (I am assuming a website) I would separate the DB, webserver, caching server, etc. onto different instance(s) and use S3 (+ CloudFront) for static assets. I would also make sure that some proper rate limiting is in place so that only legitimate load hits the infrastructure.
For the RDBMS server I might set up a master-slave DB setup (RDS makes this easier), use DB sharding, etc. DB cluster solutions also exist, which are more complex to set up but simplify database access for the application programmer. I would also check all the DB queries and tune the DB/SQL queries accordingly. In some cases pure NoSQL-type databases might be better than an RDBMS, or a mix of both where the application switches between them depending on the data required.
For the webserver I would set up a load balancer and then use autoscaling on the webserver instance(s) behind the load balancer. Something similar applies to the app server, if any. I would also tune the web server's settings.
The caching server would also be separated into its own cluster of instance(s). ElastiCache seems like a nice service. Redis has comparable performance to memcached but has more features (like lists, sets, etc.) which might come in handy when scaling.
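A minimal boto3 sketch of the load balancer + autoscaling part described above (classic ELB plus a launch configuration; all names, the region and the AMI are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # region assumed

# Launch configuration describing the webserver instances.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",
    ImageId="ami-0123456789abcdef0",          # placeholder webserver AMI
    InstanceType="t3.small",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Auto Scaling group registered with a classic ELB named "web-elb" (assumed to exist).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v1",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    LoadBalancerNames=["web-elb"],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```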
Disclaimer - I'm not going to mention any Windows specifics because I have always worked on Unix machines. These guidelines are fairly generic.
This is a subjective question and everyone would tailor one's own system in a unique style. Here are a few guidelines I follow.
If it's a web application, separate the presentation (front-end), middleware (APIs) and database layers. A sliced architecture scales far better than a monolithic application.
Database - Amazon provides excellent and highly available services (unless you are in the us-east availability zone) for SQL and NoSQL data stores. You might want to check out RDS for relational databases and DynamoDB for NoSQL. Both scale well, and you need not worry about managing and load-sharding/clustering your data stores once you launch them.
Middleware APIs - This is a crucial part. It is important to have a set of APIs (preferably REST, but you could pretty much use anything here) which expose your back-end functionality as a service. A service-oriented architecture can be scaled very easily to cater to multiple front-facing clients such as web, mobile, desktop, third-party widgets, etc. Middleware APIs should typically NOT be where your business logic is processed; most of it (or all of it) should be translated to database lookups/queries for higher performance. These services could be load balanced for high availability. Amazon's Elastic Load Balancers (ELB) are good for starters. If you want some more customization, like blocking traffic for certain sets of IP addresses or performing blue/green deployments, then maybe you should consider HAProxy load balancers deployed to separate instances.
Front-end - This is where your presentation layer should reside. It should avoid any direct database queries except for the ones which are limited to the scope of the front-end, e.g. a simple Redis call to get the latest cache keys for front-end fragments. This is where you could perform a lot of caching, right from the service calls to the front-end fragments. You could use AWS CloudFront for static asset delivery and AWS ElastiCache for your cache store. ElastiCache is nothing but a managed memcached cluster. You should even consider load balancing the front-end nodes behind an ELB.
All this can be bundled and deployed with AutoScaling using AWS Elastic Beanstalk. It currently supports ASP.NET, PHP, Python, Java and Ruby containers. AWS Elastic Beanstalk still has its own limitations but is a very cool way to manage your infrastructure with the least hassle for monitoring, scaling and load balancing.
Tip: Identifying the read- and write-intensive areas of your application helps a lot. You could then go ahead and slice your infrastructure accordingly and perform the required optimizations with a read or write focus, one at a time.
To sum it all up, Amazon AWS has pretty much everything you could possibly use to craft your server topology. It's up to you to choose the components.
Hope this helps!
The way I would do it would be to have one server as the DB server with MySQL running on it, all my data on memcached (which can span multiple servers), and my clients using a simple "if not on memcached, read from db, put it on memcached and return" approach.
Memcached is very easy to scale compared to a DB. Scaling a DB takes a lot of administrative effort; it's a pain to get it right and working. So I choose memcached. In fact I have extra memcached servers up, just to manage downtime (if any of my memcached servers go down).
My data is mostly reads with few writes, and when writes happen, I push the data to memcached too. All in all this works better for me in terms of code, administration, fallback, failover and load balancing. All win. You just need to code a "little" bit better.
Clustering MySQL is more tempting, as it seems easier to code, deploy, maintain and keep up and performing. But remember that MySQL is hard-disk based and memcached is memory based, so by nature memcached is much faster (10 times at least). And since it takes over all the read load from the DB, your DB config can be REALLY simple.
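The read path described above ("if not on memcached, read from db, put it on memcached and return") is the classic cache-aside pattern. A minimal sketch using pymemcache, with load_from_db()/save_to_db() as hypothetical stand-ins for the real MySQL access code:

```python
import json
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))

def load_from_db(key):
    """Placeholder for the real MySQL read."""
    raise NotImplementedError

def save_to_db(key, value):
    """Placeholder for the real MySQL write."""
    raise NotImplementedError

def get_record(key, ttl=300):
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: never touches MySQL
    value = load_from_db(key)                # cache miss: fall back to the DB
    cache.set(key, json.dumps(value), expire=ttl)
    return value

def update_record(key, value, ttl=300):
    save_to_db(key, value)                   # write to the DB first
    cache.set(key, json.dumps(value), expire=ttl)   # push writes to the cache too
```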
I really hope someone points to a contrary argument here, I would love to hear it.
