NuoDB: Very good at technologying, not so good at communitying

Lately, I've been very excited about a new commercial database offering called NuoDB. NuoDB takes a fresh approach to a very old problem and, I think, solves it spectacularly well, especially as it pertains to web-scale systems.

The NuoDB company clearly has its head on straight with regard to the technology; it's a brilliant and scalable approach. Alas, the same cannot be said for their support community.

I hope, in the strongest possible terms, that NuoDB management understands that in order to succeed, NuoDB must create a vibrant community in a spirit of openness and honesty. Absent this, they will certainly not get my business, and I suspect the same may be true of others.

Below is a post that I attempted to submit to the NuoDB forums, but which has thus far been rejected or ignored by the moderators. Suffice it to say, this kind of moderation is a surefire way to kill the community, and my interest in the product along with it.

Greetings, NuoDBicans,

I'm facing an interesting architectural challenge, and I'm in the process of evaluating NuoDB as a possible part of the solution.
Perhaps someone might be able to point me in the right direction, as I'm a bit of a Nuo-Noob.

Our application requires that we propagate a high volume of change data out to various UIs and other third-party systems. These items are time sensitive, and as such, we have opted to use a reactive programming approach. Our application requires ACID and a high level of redundancy/reliability, so we very much like NuoDB as a migration path away from the management nightmare that comes with a MySQL system-of-record approach.

At present, we are effectively using a single large MySQL master for our persistence layer, and RabbitMQ pub/sub for our messaging layer.
Every time we commit to MySQL, we immediately publish messages to RabbitMQ detailing what was changed. The various consumers then propagate those changes in a reactive fashion to other systems. This generally provides the high degree of performance that we require (sub-500 ms in many cases). We can tolerate higher latencies occasionally, but it must be fast and efficient overall; otherwise we will encounter significant message-queue backlogs, and response times will suffer.
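For concreteness, here is a minimal sketch of that commit-then-publish flow. All of the hosts, credentials, table names, and the "changes" exchange are hypothetical, and this uses MySQL Connector/Python plus the pika RabbitMQ client; it's an illustration of the pattern, not our actual code:

```python
import json

import mysql.connector  # MySQL Connector/Python
import pika              # RabbitMQ client

# Hypothetical hosts, credentials, and schema throughout.
db = mysql.connector.connect(host="db1", user="app", password="secret",
                             database="app")
mq = pika.BlockingConnection(pika.ConnectionParameters(host="mq1"))
channel = mq.channel()

def update_and_notify(order_id, status, user_id):
    cur = db.cursor()
    cur.execute("UPDATE orders SET status = %s WHERE id = %s",
                (status, order_id))
    db.commit()  # step 1: the MySQL commit succeeds...

    # step 2: ...and only then do we publish the semantic change event.
    # The gap between these two steps is the failure mode discussed below.
    event = json.dumps({"order_id": order_id, "status": status,
                        "user_id": user_id})
    channel.basic_publish(exchange="changes",
                          routing_key="orders.updated",
                          body=event)
```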

One challenge with this approach is that it's possible for the MySQL commit to succeed and the RabbitMQ publish to fail. This is no good.
One possible alternative would be to have an agent consume the MySQL replication stream, grok any useful change data, and feed that into the pub/sub messaging system. There are a few technical challenges here too, as we're interested in publishing semantic change data (such as the application user_id that made the change) rather than the somewhat-lower-level DB record edits.
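A sketch of what such a replication-stream agent might look like, assuming the python-mysql-replication (pymysqlreplication) library and hypothetical connection details. Note how the row events carry raw column values rather than the semantic context (like user_id) we actually want to publish:

```python
import json

import pika
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (DeleteRowsEvent, UpdateRowsEvent,
                                          WriteRowsEvent)

# Tail the MySQL binlog as if we were a replica (hypothetical settings).
stream = BinLogStreamReader(
    connection_settings={"host": "db1", "port": 3306,
                         "user": "repl", "passwd": "secret"},
    server_id=100,          # must be unique among replicas
    blocking=True,          # wait for new events instead of exiting
    resume_stream=True,
    only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
)

mq = pika.BlockingConnection(pika.ConnectionParameters(host="mq1"))
channel = mq.channel()

for event in stream:
    for row in event.rows:
        # Row events expose before/after column values, but no application
        # context (e.g. which user_id made the change) -- that semantic
        # layer has to be reconstructed or recorded elsewhere.
        channel.basic_publish(exchange="changes",
                              routing_key=f"{event.schema}.{event.table}",
                              body=json.dumps(row, default=str))
```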

So, I've read the white papers and consumed as many NuoDB-under-the-hood articles as I can, but I must confess that I'm a bit stumped here. I don't mind maintaining an out-of-band message-passing layer if necessary, but I'm having a tough time conceiving of a way to ensure that the message doesn't arrive before the data shows up in the NuoDB transaction engine on the other end. What I guess I'm really asking is whether there's a way to perform basic message passing *through* NuoDB, in such a way that the messages are correlated to each commit. This is important to us because we generally have to read multiple tables to render the final representation of the data to be transmitted to each third party. It would therefore be a bad thing if, when we looked up the supplemental data in NuoDB in response to receiving such a message, it hadn't shown up yet.
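To make the hazard concrete, here's a sketch of a consumer that re-reads supplemental data upon receiving a message (hypothetical schema and queue names; send_to_third_party is a stand-in, and the MySQL driver is used for illustration). Against our single master this read is safe, since the publish happens after the commit; but point it at a read replica, or at a second NuoDB transaction engine that hasn't yet seen the originating commit, and the join can come up empty:

```python
import json

import pika
import pymysql  # illustrative driver; the race is the same idea on NuoDB

db = pymysql.connect(host="db1", user="app", password="secret", database="app")
mq = pika.BlockingConnection(pika.ConnectionParameters(host="mq1"))
channel = mq.channel()

def send_to_third_party(row):
    print("would transmit:", row)  # stand-in for the real delivery code

def on_change(ch, method, properties, body):
    event = json.loads(body)
    cur = db.cursor()
    # Rendering the final representation requires joining several tables.
    cur.execute("SELECT o.id, o.status, c.name FROM orders o "
                "JOIN customers c ON c.id = o.customer_id "
                "WHERE o.id = %s", (event["order_id"],))
    row = cur.fetchone()
    if row is None:
        # The message beat the data: the commit isn't visible here yet, so
        # requeue and hope it shows up -- exactly the situation we'd like
        # to rule out by construction.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
        return
    send_to_third_party(row)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="orders.updated", on_message_callback=on_change)
channel.start_consuming()
```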

Without a doubt, we could simply use a big blobby table as the initial part of the message-passing system, and use a single polling process to perform the fanout. We'd insert a large blob of JSON or Protobuf containing the aforementioned semantic edit data just before each commit. We could also employ triggers to do something similar. The issue is that we very much wish to avoid having to poll for new messages on the receiving end; if we went with this approach, we'd have to poll the table many times per second. Certainly, this could work, but aside from the very serious yuck-factor of high-frequency polling, it could also break quite miserably in the face of high traffic.
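Here's roughly what that blobby-table approach could look like (hypothetical schema throughout): the JSON blob is inserted inside the same transaction as the change it describes, so the dual-write problem goes away, and a single poller performs the fanout:

```python
import json
import time

import mysql.connector
import pika

db = mysql.connector.connect(host="db1", user="app", password="secret",
                             database="app")
mq = pika.BlockingConnection(pika.ConnectionParameters(host="mq1"))
channel = mq.channel()

def write_with_outbox(order_id, status, user_id):
    cur = db.cursor()
    cur.execute("UPDATE orders SET status = %s WHERE id = %s",
                (status, order_id))
    # The semantic event rides in the SAME transaction, so it commits (or
    # rolls back) atomically with the change it describes.
    event = json.dumps({"order_id": order_id, "status": status,
                        "user_id": user_id})
    cur.execute("INSERT INTO outbox (payload) VALUES (%s)", (event,))
    db.commit()

def fanout_poller(last_id=0):
    # The single polling process, hammering the table many times a second:
    # workable, but this is the high-frequency yuck described above.
    while True:
        cur = db.cursor()
        cur.execute("SELECT id, payload FROM outbox WHERE id > %s ORDER BY id",
                    (last_id,))
        for row_id, payload in cur.fetchall():
            channel.basic_publish(exchange="changes", routing_key="outbox",
                                  body=payload)
            last_id = row_id
        time.sleep(0.05)  # ~20 polls per second
```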

Is there at present, or could there ever be, a way to efficiently pass messages to a very simple consumer (presumably one per transaction engine) such that they are guaranteed to be consistent with the transaction, even when more than one transaction engine is involved? Might this sort of thing be possible, or already buried within the guts of NuoDB?

Cheers, and Happy holidays!

Daniel Norman