Friday May 28, 2010

[Time] Name Message
[00:56] jugg are there any online logs of this channel?
[12:27] Guest1 hi there, I saw the impressive throughput using TCP. On my Mac I'm getting over 1 million 1-byte messages per second. Sounds nuts. What is the difference in terms of architecture between ZeroMQ and something like RabbitMQ, which can only do a few thousand?
[12:31] sjampoo 0MQ is lighter, not only the protocol (binary instead of AMQP) but it also has less machinery overhead.
[12:31] sustrik sjampoo is right
[12:32] sustrik however, the actual algorithm that allows this thing is called 'message batching'
[12:32] Guest1 ah is it because the size of the payload is upfront
[12:32] sustrik it means that you send many messages using a single OS call
[12:32] Guest1 message batching is like socket nodelay = false on sockets?
[12:32] sustrik exactly
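[Editor's note: the analogy above is to Nagle's algorithm, which TCP_NODELAY turns off. A minimal stdlib-Python sketch of toggling that option on an ordinary socket; as a hedged aside, 0MQ is generally understood to disable Nagle and do its own batching in user space, so this is an illustration of the kernel-level analogue, not of 0MQ's code.]

```python
import socket

# A plain TCP socket; by default Nagle's algorithm is active
# (TCP_NODELAY off) and the kernel may coalesce small writes
# into larger segments -- kernel-level message batching.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disabling Nagle (TCP_NODELAY = 1) pushes each send() out as soon
# as possible: better single-message latency, worse small-message
# throughput -- the trade-off batching is meant to manage.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay_on = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY set:", bool(nodelay_on))
sock.close()
```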
[12:32] Guest1 so do the messages get released after a timeout?
[12:32] Guest1 I think for sockets it's 200ms
[12:32] sustrik what exactly do you mean by released?
[12:33] Guest1 if messages are batched, and a listener is waiting for a message
[12:33] sustrik aha
[12:34] sustrik it's answered in faq
[12:34] sustrik but here you go:
[12:34] sustrik When sending messages in batches you have to wait for the last one to send the whole batch. This would make the latency of the first message in the batch much worse, wouldn't it?
[12:34] sustrik ØMQ batches messages in an opportunistic manner. Rather than waiting for a predefined number of messages and/or a predefined time interval, it sends all the messages available at the moment in one go. Imagine the network interface card is busy sending data. Once it is ready to send more data it asks ØMQ for new messages. ØMQ sends all the messages available at the moment. Does it harm the latency of the first message in the batch? No.
[12:34] sustrik The message won't be sent earlier anyway because the networking card was busy. On the contrary, the latency of subsequent messages will be improved because sending a single batch to the card is faster than sending lots of small messages. On the other hand, if the network card isn't busy, the message is sent straight away without waiting for following messages. Thus it'll have the best possible latency.
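[Editor's note: the opportunistic-batching behaviour sustrik describes can be sketched in a few lines of plain Python. This is a conceptual simulation, not 0MQ's actual code: each "tick" is one moment when the network card becomes ready, and everything queued by then leaves in a single batch.]

```python
from collections import deque

def opportunistic_batches(messages_by_tick):
    """Simulate opportunistic batching: messages arrive over time, and
    whenever the 'network card' is ready we send everything queued in
    one go, rather than waiting for a size or time threshold."""
    queue = deque()
    batches = []
    for arrivals in messages_by_tick:    # one tick = card ready once
        queue.extend(arrivals)           # messages queued meanwhile
        if queue:
            batches.append(list(queue))  # one "send" call per batch
            queue.clear()
    return batches

# If one message arrives per tick, each goes out alone (best possible
# latency); if three pile up while the card is busy, they leave together.
result = opportunistic_batches([["a"], ["b", "c", "d"], [], ["e"]])
print(result)  # → [['a'], ['b', 'c', 'd'], ['e']]
```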
[12:35] Guest1 ah makes sense!
[12:35] Guest1 it's true that sending each message ASAP is slow
[12:35] Guest1 batching makes sense
[12:35] Guest1 nice
[12:35] sustrik :)
[12:36] Guest1 last question, I know you guys must be busy, do you use something like libevent? I quickly scanned the code and it looks like you may be using regular sockets
[12:37] Guest1 I was then thinking you might be using a thread per socket connection
[12:37] sustrik no libevent, 0mq has almost zero dependencies to make it as portable as possible
[12:38] sustrik as for the threading, there's a thread pool
[12:38] sustrik you can assign one thread to one connection
[12:38] sustrik but in default case the connections are simply load balanced evenly within the thread pool
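[Editor's note: the thread-pool arrangement sustrik describes can be sketched as round-robin assignment of connections to a fixed pool of I/O threads, with an optional pin to a specific thread (0MQ exposes pinning via its ZMQ_AFFINITY socket option). The class and names below are illustrative, not 0MQ's internals.]

```python
from itertools import cycle

class IOThreadPool:
    """Conceptual sketch: connections load-balanced evenly across a
    fixed pool of I/O threads (hypothetical names, not 0MQ code)."""
    def __init__(self, n_threads):
        self.assignments = {i: [] for i in range(n_threads)}
        self._next = cycle(range(n_threads))

    def assign(self, conn, affinity=None):
        # An explicit affinity pins the connection to one thread;
        # otherwise connections are spread round-robin, which is the
        # "load balanced evenly" default described above.
        tid = affinity if affinity is not None else next(self._next)
        self.assignments[tid].append(conn)
        return tid

pool = IOThreadPool(4)
tids = [pool.assign(f"conn{i}") for i in range(8)]
print(tids)  # → [0, 1, 2, 3, 0, 1, 2, 3]
```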
[12:39] Guest1 were you able to benchmark number of possible connections, should I look in performance section?
[12:39] Guest1 reason being I'm an architect/engineer for a mobile application
[12:39] Guest1 we have an application that requires 30k concurrent connections
[12:39] Guest1 with a throughput of 10k messages per second
[12:40] sustrik for each connection?
[12:40] Guest1 not for each connection
[12:40] sustrik altogether
[12:40] sustrik ok
[12:40] Guest1 yes
[12:40] sustrik it should work, but we haven't really tested with 10k's of connections
[12:40] sustrik what's done however, is:
[12:41] Guest1 thanks for being honest, I will have a look at doing this
[12:41] sustrik 1. underlying polling mechanism is epoll/devpoll/kqueue -- if available on the platform
[12:41] Guest1 at the moment we're playing with libevent, but getting a max throughput of 5k messages per second (non-batched) over a 1Gbps network for 9k connections
[12:42] sustrik 2. scheduling algorithms are O(1) with respect to the number of connections
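[Editor's note: Python's standard selectors module makes the same platform choice sustrik describes in point 1, picking epoll, kqueue, or devpoll when the platform offers it. A small self-contained sketch using a socketpair:]

```python
import selectors
import socket

# DefaultSelector picks the most efficient polling mechanism available
# (epoll on Linux, kqueue on BSD/macOS, devpoll on Solaris), falling
# back to poll/select otherwise -- the same strategy described above.
sel = selectors.DefaultSelector()
print(type(sel).__name__)  # e.g. EpollSelector on Linux

# With epoll/kqueue, registering a socket is O(1) in the number of
# already-registered connections, unlike select()'s linear scan.
a, b = socket.socketpair()
sel.register(a, selectors.EVENT_READ, data="conn-1")

b.send(b"ping")                 # make 'a' readable
events = sel.select(timeout=1)  # -> list of (SelectorKey, mask)
print([key.data for key, mask in events])

sel.unregister(a)
a.close(); b.close(); sel.close()
```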
[12:42] Guest1 ah cool using kqueue if available
[12:42] Guest1 nice
[12:42] Guest1 are you guys the developers?
[12:43] sustrik yup
[12:43] Guest1 neato
[12:44] sustrik Guest1: if you do the test with 30k connections it would be great if you could share the results
[12:45] Guest1 shall do
[12:45] Guest1 do you have a roadmap btw?
[12:46] Guest1 for example are you planning to go towards being an ESB?
[12:46] Guest1 or being the best multi-platform & language queuing platform?
[12:46] sustrik it can be used in ESB way
[12:47] sustrik the functionality is somewhat constrained when compared to enterprise solutions
[12:47] sustrik but that's by design
[12:47] Guest1 just the queue management, orchestration and message mapping would be great
[12:47] sustrik "get rid of bells and whistles"
[12:47] Guest1 yep
[12:47] Guest1 just a thought for you, if you were bored
[12:48] sustrik queue management = monitoring number of messages in queues?
[12:48] Guest1 yes
[12:48] Guest1 and configuring such as security
[12:49] sustrik what about message mapping?
[12:49] Guest1 also massive bonus if you built connectors to mobile phone stacks like J2ME and iPhone/Objective-C
[12:49] Guest1 message mapping is taking the raw message such as JSON/XML and mapping it to another structure which is JSON/XML
[12:49] sustrik ah transformation
[12:49] Guest1 aye
[12:49] sustrik can be built on top
[12:49] Guest1 yep
[12:49] sustrik not my concern :)
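[Editor's note: the message mapping Guest1 describes, and sustrik agrees "can be built on top", is a thin layer over any transport. A hedged stdlib-only sketch (the function and field names are illustrative):]

```python
import json

def remap(raw, mapping):
    """Map a flat JSON message into another structure. 'mapping' gives
    target field -> source field; a tiny illustration of transformation
    built on top of the messaging layer, not part of 0MQ itself."""
    src = json.loads(raw)
    return json.dumps({dst: src[field] for dst, field in mapping.items()})

inbound = '{"uid": 42, "msg": "hello"}'
outbound = remap(inbound, {"user_id": "uid", "body": "msg"})
print(outbound)
```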
[12:50] sustrik i am not a java developer, but shouldn't the java binding work for j2me?
[12:50] Guest1 no, not J2SE standard
[12:51] Guest1 you have to use the J2ME socket library
[12:51] Guest1 also, when you detect BlackBerry you have to do a special switch
[12:51] sustrik the java binding is just a wrapper on top of c library
[12:51] sustrik all you need from JDK is JNI
[12:51] Guest1 no C library access on J2ME phone
[12:51] Guest1 you can't access it, sandboxed
[12:51] sustrik aha, then no luck
[12:52] Guest1 so we have ported to 13 Nokia models and BlackBerry. Luckily it's 99% the same socket code
[12:52] Guest1 just a suggestion for you
[12:52] Guest1 I will leave you in peace, thanks for talking to me
[12:52] sustrik no problem
[12:52] sustrik bye
[12:52] Guest1 cya
[20:59] truelyky1e help