Does TIBCO rvcache still exist?

A few years ago I worked on a project where we used TIBCO Rendezvous Cache (rvcache) with the TIBCO messaging framework. It cached messages by topic/subject and sent the cached data when requested. The project I'm currently on is looking to use TIBCO as a messaging system again. I was trying to explain the rvcache I used years ago, but now I'm unable to find much information on it. I was curious whether anyone knows if it is still being used, or if it has perhaps been replaced by something new under a different name.

rvcache is still part of TIBCO Rendezvous and can usually be found at /tibrv//bin/rvcache. You can find more information about rvcache in the TIBCO Rendezvous documentation.

Related

How to integrate Oracle and Kafka

I've been trying to find the most efficient/effective way to capture change notifications in a single Oracle 11g R2 instance and deliver those events to an Apache Kafka queue, but I haven't been able to find any simple examples or tutorials along these lines.
I've seen some possibilities on the Oracle side (Streams, Change Data Capture, triggers (yuck), etc.), but I'm still not sure which would be best to pursue.
There is a project on GitHub called mypipe that does this for MySQL and Kafka; I just haven't seen anything similar for Oracle. I'm not sure whether it would be best to write an Oracle package for this, a layer similar to the mypipe project, or something else entirely.
Any recommendations, suggestions or examples would be greatly appreciated. Thank you.
There is currently just one tool that is open source and has minimal impact on the database: OpenLogReplicator.
the license is GPL, so it is fully open source
it has very low impact on the source database: it requires no licensing options, just turning on supplemental logging on the source (like all other replication tools)
it is written completely in C++, so it has very low latency and high throughput
it works completely in memory
it supports all Oracle database versions since 11.2.0.1 (11.2, 12.1, 12.2, 18, 19)
It reads the binary format of the Oracle redo logs and sends the changes to Kafka. It can run on the database host, but you can also configure it to read the redo logs over sshfs from another host, putting minimal load on the database.
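Whichever tool produces the change events, the Kafka side of the pipeline is ordinary consumer code. Below is a minimal sketch in Java; the topic name "ORCL.USERS" and the assumption that the payload is a JSON string are illustrative, since the actual topic names and format depend on how the replication tool is configured:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ChangeEventConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "oracle-cdc-test");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("ORCL.USERS")); // hypothetical topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // Each record is one change event; here we just print it.
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }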
disclaimer #1: I am the author of this solution
disclaimer #2: to other StackOverflow users: please do not delete this answer. This question has a lot of duplicates, but this is the original question; the duplicates should be redirected here and marked as such, not the other way around. I have deleted my answers from the other questions, leaving this one as the primary answer.
I think one approach might be to utilize Oracle GoldenGate for Big Data (I'm researching this myself); obviously it is most likely a costly solution ($).
https://blogs.oracle.com/dataintegration/entry/introducing_oracle_goldengate_for_big
Let me know if you got anywhere with this, good luck ...

Redis Vs RabbitMQ as a data broker/messaging system in between Logstash and elasticsearch [closed]

We are defining an architecture that collects log information via Logstash shippers installed on various machines, indexes the data centrally in one Elasticsearch server, and uses Kibana as the graphical layer. We need a reliable messaging system between the Logstash shippers and Elasticsearch to guarantee delivery. What factors should be considered when selecting Redis over RabbitMQ as a data broker/messaging system between the Logstash shippers and Elasticsearch, or vice versa?
After evaluating both Redis and RabbitMQ I chose RabbitMQ as our broker for the following reasons:
RabbitMQ gives you a built-in layer of security: you can use SSL certificates to encrypt the data you send to the broker, which means no one can sniff your traffic and gain access to your vital organizational data.
RabbitMQ is a very stable product that can handle large amounts of events per second and many connections without becoming the bottleneck.
In our organization we already used RabbitMQ, had good internal knowledge about it, and had an integration with Chef already prepared.
Regarding scaling, RabbitMQ has a built-in cluster implementation that you can combine with a load balancer to implement a redundant broker environment.
See: Is my RabbitMQ cluster Active Active or Active Passive?
Now for the weaker points of using RabbitMQ:
Most Logstash shippers do not support RabbitMQ; on the other hand, the best one, named Beaver, has an implementation that will send data to RabbitMQ without a problem.
Beaver's RabbitMQ implementation in its current version is a little slow on performance (for my purposes): it was not able to handle a rate of 3000 events/sec from one server, and from time to time the service crashed.
Right now I am working on a fix that will solve the performance problem for RabbitMQ and make the Beaver shipper more stable. The first solution adds more processes that can run simultaneously, giving the shipper more power. The second changes Beaver to send data to RabbitMQ asynchronously, which should theoretically be much faster. I hope to finish implementing both solutions by the end of this week.
You can follow the issue here:
https://github.com/josegonzalez/python-beaver/issues/323
And check the pull request here:
https://github.com/josegonzalez/python-beaver/pull/324
If you have more questions feel free to leave a comment.
Redis was created as a key-value data store, despite having some basic message broker capabilities.
RabbitMQ was created as a message broker, so it naturally has lots of message broker capabilities.
I have been doing some research on this topic. If performance is important and persistence is not, RabbitMQ is a perfect choice. Redis is a technology developed with a different intent.
Following is a list of pros for using RabbitMQ over Redis:
RabbitMQ uses the Advanced Message Queuing Protocol (AMQP), which can be configured to use SSL, an additional layer of security.
RabbitMQ takes approximately 75% of the time Redis takes in accepting messages.
RabbitMQ supports priorities for messages, which workers can use to consume high-priority messages first.
There is no chance of losing a message if a worker crashes after consuming it, because with manual acknowledgements unacknowledged messages are requeued and redelivered (see the sketch after this list); this is not the case with Redis.
RabbitMQ has a good routing system to direct messages to different queues.
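The "no loss on worker crash" point above rests on RabbitMQ's manual acknowledgements: with autoAck disabled, the broker only removes a message once the consumer explicitly acks it, and requeues it if the consumer dies first. A minimal sketch in Java using the amqp-client library; the host and the queue name "logs" are made up:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class AckingWorker {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            // durable = true: the queue definition survives a broker restart.
            channel.queueDeclare("logs", true, false, false, null);
            DeliverCallback onDeliver = (consumerTag, delivery) -> {
                String body = new String(delivery.getBody(), "UTF-8");
                System.out.println("processing: " + body); // your work goes here
                // Ack only after the work is done; if the worker crashes before
                // this line, the broker redelivers the message to another consumer.
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            // autoAck = false switches on manual acknowledgements.
            channel.basicConsume("logs", false, onDeliver, consumerTag -> { });
        }
    }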
A few cons for using RabbitMQ:
RabbitMQ might be a little hard to maintain, and crashes are hard to debug.
Node-name or node-IP fluctuations can cause data loss, but if managed well, durable messages can solve the problem.
I have been wondering the same thing. Earlier recommendations by the Logstash folks favored Redis over RabbitMQ (http://logstash.net/docs/1.1.1/tutorials/getting-started-centralized); however, that section no longer exists in the current documentation, although there are generic notes on using a broker to deal with spikes here: https://www.elastic.co/guide/en/logstash/current/deploying-and-scaling.html.
While I am also using RabbitMQ quite happily, I'm currently exploring a Redis broker, since the AMQP protocol is likely overkill for my logging use case.
If you specifically want to send logs from Logstash to Elasticsearch, you might want to use Filebeat instead of either Redis or RabbitMQ. Personally, I use fluent-bit to collect logs to send to Elasticsearch.
However, the other answers on this page have a lot of out-of-date information regarding Redis's capabilities. Redis has supported:
publish/subscribe since version 2.0
clustering since version 3.0
streams since version 5.0
SSL/TLS since version 6.0
a copy-on-write append-only log (AOF) for persistence to disk, which works best with Redis 7.0 or newer and is useful for recovering from a crash
But there are some limitations:
Redis is still not as focused as RabbitMQ when it comes to message durability and crash recovery.
Redis pub/sub is not as scalable as RabbitMQ: pub/sub messages were not sharded across Redis cluster nodes until relatively recently. Redis Streams are a newer, more scalable API.
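For reference, this is roughly what Redis pub/sub looks like from Java with the Jedis client (the channel name is invented). Note the fire-and-forget semantics: only subscribers connected at publish time receive the message, which is why the durability caveats above still apply:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class RedisPubSubDemo {
        public static void main(String[] args) throws InterruptedException {
            // subscribe() blocks, so run the subscriber on its own thread.
            new Thread(() -> {
                try (Jedis subscriber = new Jedis("localhost", 6379)) {
                    subscriber.subscribe(new JedisPubSub() {
                        @Override
                        public void onMessage(String channel, String message) {
                            System.out.println(channel + ": " + message);
                        }
                    }, "log-events");
                }
            }).start();
            Thread.sleep(500); // crude wait for the subscription (demo only)
            try (Jedis publisher = new Jedis("localhost", 6379)) {
                // Subscribers that are offline right now will never see this.
                publisher.publish("log-events", "hello");
            }
        }
    }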
Quick questions to ask:
why do you need a broker? If you're using logstash or logstash-forwarder to read files from these servers, they both will slow down if the pipeline gets congested.
do you have any experience with administering rabbit or redis? All things being equal, the tool you know how to use is the better tool.
In the realm of opinions, I've run redis as a broker, and hated it. Of course, that could have been my inexperience with redis (not a problem with the product itself), but it was the weakest link in the pipeline and always failed when we needed it most.

CometD connection on iOS 7

I was asked to establish a connection using the CometD library going from iOS 7 to a server. After some research I came to the conclusion that my two options were Dave Duncan's DDComet and Paul Crawford's FayeObjC. I tried using DDComet, but when I opened the GitHub project it came with 30+ errors. They were mainly ARC errors, which I attempted to fix, but that only ended up crashing the application. I then took a look at the FayeObjC documentation and quickly realized that it had very little/nothing to do with CometD.
My question is: is CometD an outdated library? If so, what should I use as a replacement? If not, how would I implement it on iOS 7?
CometD is not an outdated library. The latest release of CometD is barely one month old.
The CometD project does not have an ObjC client.
I know of companies that have written one and maintain it themselves; it is fully compatible with CometD 2.x and 3.x.
It may happen that their implementation is open sourced in the future.
Faye uses the Bayeux protocol, defined by the CometD project, so in theory they should be able to interoperate. However, I don't know the exact status of either Faye or FayeObjC.
As the CometD project leader I'd love to have an ObjC client in the project, but it has not happened yet.

Do you think OSGi has a solid future in enterprise apps, or is it going to fade away like the whole ESB thing appears to be?

As per title. I don't know if this is the right place or way to ask this, admins feel free to edit/move/close the question if appropriate.
I'd like to get pointers to recent material clarifying the market trends, as well as real life examples. Even pseudo-pundit, Gartner-like stuff is OK. Thanks.
I am curious about the second part of the question. What is the basis of your statement that 'the ESB thing' appears to be fading? I don't believe it is.
The problem with ESBs, however, is that some vendors call their product an ESB when it is actually much more than that. In some companies this happened to their integration product simply because Gartner or some other analyst firm said that ESB was hot: the marketing strategy changed, the product was renamed an ESB, and perhaps some things were added that are expected in an ESB.
Paul Fremantle of WSO2 wrote a very good article about what an ESB really is [1].
As for OSGi: the first company I saw using it in their middleware was WSO2. I have heard that TIBCO, another middleware vendor, is also moving, or has moved, towards using it in their ActiveMatrix platform.
OSGi may help in various ways. The most important is that it reduces the effort of installing the platform: install a minimum on each system used to deploy the application, and during deployment the components required to run the application are added. You do not have to worry about having installed the right plug-ins, add-ons and whatnot. This is what both WSO2 and TIBCO are doing.
With some vendors you find that you need to install an awful lot of software, of which you may in the end use just a small part (e.g. IBM WebSphere). Because of this, you may have to use over-dimensioned systems, which adds extra cost.
OSGi may prevent this.
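To make the modularity concrete: the unit of deployment in OSGi is a bundle, a jar with extra manifest headers and, typically, an activator that the framework calls when the bundle starts or stops. A minimal illustrative sketch (the class and service are invented; the manifest with the Bundle-Activator header is omitted):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class GreeterActivator implements BundleActivator {
        @Override
        public void start(BundleContext context) {
            // Register a service by interface; other bundles can look it up
            // without any compile-time coupling to this implementation.
            context.registerService(Runnable.class,
                    () -> System.out.println("hello from the bundle"), null);
        }

        @Override
        public void stop(BundleContext context) {
            // Services registered through this context are unregistered
            // automatically when the bundle stops.
        }
    }

Because the framework can install, start and stop bundles at runtime, new components can be added to a running platform without reinstalling it, which is the benefit described above.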
Have a look at the presentation of WSO2 about the WSO2 Carbon platform [2].
The statement at the end of the presentation says it all:
Adapt the middleware to your architecture, not the architecture to the middleware
So yes, I think OSGi has a future in enterprise apps.
[1] http://wso2.org/library/2913
[2] http://www.slideshare.net/wso2.org/the-carbon-story-presentation-855666
Disclaimer:
I am in no way affiliated with WSO2, TIBCO or IBM. I am a certified TIBCO BusinessWorks Developer and have been developing applications for the IBM WebSphere Process Server platform. Above all, I am a WSO2 Enthusiast.
I would say yes. WSO2 is proof of that. Check the following links:
http://osgi.dzone.com/articles/carbon-osgi-and-soa
http://www.infoworld.com/d/developer-world/wso2-upgrades-osgi-middleware-695

Tool for posting test messages onto a JMS queue? [closed]

Can anyone recommend a tool for quickly posting test messages onto a JMS queue?
Description:
The tool should allow the user to enter some data, perhaps an XML payload, and then submit it to a queue.
I should be able to test the consumer without the producer.
This answer doesn't apply to all JMS brokers, but if you happen to be using Apache ActiveMQ, the web-based admin console (by default at http://localhost:8161/admin) allows you to manually send text messages to topics or queues. It's handy for debugging.
HermesJMS seems to be a rather powerful client for interacting with JMS providers. In my opinion, it is pretty unintuitive and hard to set up, though. (At least I'm mostly failing at it...)
Other, more user-friendly clients are often vendor-specific. Sonic Message Manager is a very nice and simple-to-use open-source JMS client for SonicMQ. It would be great to have a client like that working with different providers.
ActiveMQ's web-based admin console has a big deficiency: you cannot specify headers or custom properties when posting a message.
I came across a neat FOSS tool that can post a message and also specify headers/properties:
http://sourceforge.net/projects/activemqbrowser/
HTH
Apache JMeter is a tool (written for the Java platform) which allows:
sending messages to a queue ( point to point)
publishing/subscribing to a topic
sending both persistent and non persistent messages
sending text, map and object messages
Apache ActiveMQ includes ProducerTool and ConsumerTool example sources (Java) with many command-line configuration options. As they are based on the JMS API, using them with other message brokers should be easy with minor modifications.
IBM provides a free, powerful command-line tool called PerfHarness.
Although aimed at benchmarking JMS providers, it's really good at generating (and consuming) test messages. You can use data either generated randomly or taken from a file.
The power features include sending and consuming messages at a fixed rate, using a specific number of threads, using either JMS or native MQ, etc. It generates statistics telling you exactly how fast your queue is performing (hence the name).
The only down side is that it's not super intuitive, given the number of operations it supports.
I recommend the approach of @Will: use the Web Console of ActiveMQ, which lets you post messages, browse queues, and delete messages easily.
Another approach I often use is to keep a directory of files as sample data and use a Camel route to move the messages from the directory to a JMS queue, or to take them from a queue and save them to disk, etc.
e.g.
from("file://someDirectory").
to("activemq:MyQueue");
This would move all the files from someDirectory and send them to an ActiveMQ queue called MyQueue. If you'd rather leave the files in place you can use the URI "file://someDirectory?noop=true".
For more details see
the file endpoint in Camel
a sample Camel example routing from files to JMS
the various enterprise integration patterns Camel supports
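In case it helps, here is roughly how the snippet above could be wrapped into a runnable standalone program, using the Camel 2.x style API and the ActiveMQ component (broker URL and directory are just examples):

    import org.apache.activemq.camel.component.ActiveMQComponent;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class FileToJmsDemo {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addComponent("activemq",
                    ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // noop=true leaves the files in place instead of moving them.
                    from("file://someDirectory?noop=true")
                        .to("activemq:MyQueue");
                }
            });
            context.start();
            Thread.sleep(10000); // let the route run for a while, then shut down
            context.stop();
        }
    }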
Also, if the JMS broker supports JMX like ActiveMQ does, you can use JConsole to post messages and do a lot more.
ActiveMQ has a web console for sending test messages (as mentioned above), but if your provider doesn't have one, it might be easiest to just write a console app or web page that posts test messages. Sending a message in JMS isn't too hard; you might get the most benefit from just writing your own test client, along the lines of the sketch below.
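As a starting point, a minimal JMS test sender might look like the following. The ActiveMQ ConnectionFactory, broker URL and queue name are placeholders; swap in your provider's equivalents:

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class TestMessageSender {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("TEST.QUEUE");
                MessageProducer producer = session.createProducer(queue);
                // An XML payload as a plain JMS TextMessage.
                TextMessage message = session.createTextMessage("<order id=\"1\"/>");
                producer.send(message);
                System.out.println("sent");
            } finally {
                connection.close();
            }
        }
    }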
If you can use Spring in Java, it has some really powerful utilities; check out JmsTemplate.
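For comparison, with JmsTemplate the same send collapses to a couple of lines (again, the broker URL and queue name are just examples):

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.jms.core.JmsTemplate;

    public class JmsTemplateDemo {
        public static void main(String[] args) {
            JmsTemplate template = new JmsTemplate(
                    new ActiveMQConnectionFactory("tcp://localhost:61616"));
            // convertAndSend wraps the String in a TextMessage for you.
            template.convertAndSend("TEST.QUEUE", "<order id=\"1\"/>");
        }
    }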
I'm not aware of a simple client. I remember looking for one a long time ago when I researched different queue systems and tried JMS; I couldn't find one then, and I can't find one now. One thing, though: there are a ton of tutorials that get you started, and you could write a simple form to achieve this.
Sorry I can't be more helpful.
I have built a GUI tool for administering open-source JMS servers (currently ActiveMQ and HornetQ). It can send and receive messages and do most of the usual stuff, as well as aggregate queues and topics into logical "groups".
It's a commercial product, but the BETA is free and fully functional.
Try it out at http://www.rockeyesoftware.com/
For ActiveMQ the examples directory holds scripts. For Rubyists, look at example/ruby/stompcat.rb and catstomp.rb for subscribing and publishing.
I'm a Brazilian developer and I made a Java program for posting HTTP and JMS messages. It's available for download at: https://sites.google.com/site/felipeglino/softwares/posttool
On that page you can find English instructions.
