Monday September 12, 2011

[Time] Name  Message
[07:36] pouete Hello all
[07:47] sustrik hi
[08:01] eintr 'morning all
[09:30] mickem Hello, is there any way to get around "only one call to zmq_init per process" ? (I have a modular application (so/dll) and the zeromq stuff is inside a module not the central part of the application so the various modules would preferably have their own "context"...
[09:30] sustrik mickem: yes
[09:30] sustrik that's exactly what contexts are for
[09:31] mickem sustrik: humm... but it asserts when I create the second context?
[09:31] sustrik does it?
[09:31] sustrik what does it say?
[09:31] mickem sustrik: so I assumed it was "not possible"
[09:32] mickem sustrik: Assertion failed: !pgm_supported () (zmq.cpp:240)
[09:33] sustrik do you need PGM?
[09:33] mickem sustrik: given that I dont know what it is... I doubt it :)
[09:33] sustrik :)
[09:33] sustrik it's a protocol for reliable multicast
[09:33] mickem aha, no... not currently anyway...
[09:34] sustrik the problem here is that 0mq uses OpenPGM underneath
[09:34] sustrik old versions of OpenPGM couldn't be initialised multiple times
[09:34] sustrik the problem was fixed in the meantime
[09:34] sustrik what version of 0mq are you using?
[09:37] mickem not really sure actually... using a stock kubuntu...
[09:37] mickem it is "old" as a lot of the consts I use on windows (has latest version) was not there...
[09:38] sustrik i didn't even know it was distributed with kubuntu
[09:38] mickem could be 2.10 unless they renumber...
[09:39] sustrik anyway, using newer version would fix the problem
[09:39] mickem err... 2.0.10 even...
[09:39] sustrik optionally, you can recompile the old version without the --with-pgm flag set
[09:39] mickem ok...
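Editor's sketch of the point above: contexts are per-module, not per-process, so a build without the PGM problem (no OpenPGM, or a recent enough version) lets each plugin own its own context. A minimal pyzmq illustration, with invented names:

```python
import zmq

# Each "module" (so/dll) can own an independent context; the
# one-context-per-process limit only bit old builds compiled with
# an OpenPGM that could not be initialised twice.
ctx_a = zmq.Context()          # context for module A
ctx_b = zmq.Context()          # second context, same process

# sockets belong to the context that created them
req = ctx_a.socket(zmq.REQ)
req.close()

both_created = isinstance(ctx_a, zmq.Context) and isinstance(ctx_b, zmq.Context)

ctx_a.term()
ctx_b.term()
```

Note that inproc:// endpoints do not cross context boundaries, so modules with separate contexts must talk over tcp:// or ipc://.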
[12:02] CIA-121 jzmq: Lucas Johnson master * rb9a109e / (16 files in 4 dirs): Fixed the spec file for rpmbuild and included autogen generated files. ...
[12:03] CIA-121 jzmq: Lucas Johnson master * r7963d0c / (.gitignore README config/ax_jni_include_dir.m4): Updated java checks in ...
[12:03] CIA-121 jzmq: Gonzalo Diethelm master * rf8842e8 / (18 files in 4 dirs): Merge pull request #71 from gui81/build-updates ...
[16:12] shales I'm doing non blocking sends to integrate 0mq with another event loop. Is the following true? When sending a multipart message, only the first call to send can fail and set EAGAIN. If the first send call succeeds then the rest of the message will be accepted immediately.
[16:15] shales Or can the subsequent calls to send fail, and then you need to send all of the message parts again when you later try to resend the message?
[16:22] sustrik shales: only the first send could fail
[16:27] shales sustrik: ok, that's what I thought. Is there a maximum message size then?
[16:28] sustrik you can set one
[16:28] sustrik ZMQ_MAXMSGSIZE option
[16:31] shales that's for received messages. When 0mq accepts a send with ZMQ_SNDMORE, the lib has no idea how big the message is going to be or how many parts. If the entire message can't be sent at once, will 0mq allocate as much memory as it needs to buffer the rest of the message?
[16:40] sustrik it will
[16:40] sustrik unless you run out of memory
[16:41] shales ok, thanks
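Editor's sketch of sustrik's answer: only the first frame of a multipart message can fail with EAGAIN on a non-blocking send; once it is accepted, the remaining SNDMORE frames are queued as well. A minimal pyzmq illustration (endpoint name invented):

```python
import zmq

ctx = zmq.Context()
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://parts")
push = ctx.socket(zmq.PUSH)
push.connect("inproc://parts")

# Only the first frame can raise EAGAIN (zmq.Again in pyzmq).
# Once it succeeds, the rest of the message is accepted too, so on
# failure you retry the whole message from the first frame.
try:
    push.send(b"header", zmq.NOBLOCK | zmq.SNDMORE)
    push.send(b"body", zmq.NOBLOCK)     # cannot fail after the first frame
    parts = pull.recv_multipart()
except zmq.Again:
    parts = []                          # retry the full message later

push.close()
pull.close()
ctx.term()
```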
[16:45] WouterVH Question: is it possible to setup a many-to-one connection with PUSH/PULL, multiple pushers to one puller?
[16:54] guido_g sure
[16:55] WouterVH guido_g, was that a reply to my question?
[16:56] guido_g yes
[17:04] WouterVH if you have a PUSH-PULL pipeline, how then can you add a second PUSH on in a different process/server, using the same port?
[17:05] guido_g just connect to the pull
[17:11] WouterVH in that case the PULL only sees the messages sent from the first PUSHER, the one that binds
[17:14] guido_g *sigh*
[17:14] guido_g bind pull, connect push
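Editor's sketch of guido_g's "bind pull, connect push": for many-to-one fan-in, the single PULL side binds and every PUSH side connects. A minimal pyzmq illustration using inproc for brevity; over tcp:// the pushers can live in other processes or on other servers:

```python
import zmq

ctx = zmq.Context()

# One collector: the PULL side binds...
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://sink")

# ...and any number of pushers connect to the same endpoint.
pushers = [ctx.socket(zmq.PUSH) for _ in range(3)]
for i, p in enumerate(pushers):
    p.connect("inproc://sink")
    p.send_string("from pusher %d" % i)

# the collector sees messages from every connected pusher
received = sorted(pull.recv_string() for _ in range(3))

for p in pushers:
    p.close()
pull.close()
ctx.term()
```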
[17:17] michelp moin
[17:17] pieterh hi guys
[17:17] pieterh guido_g: how's stuff?
[17:17] guido_g hi pieterh
[17:18] michelp anyone from the 0mq community going to this?
[17:18] guido_g pieterh: recovered enough to look for a new gig
[17:18] pieterh was just looking at for random reasons
[17:18] guido_g hehe
[17:19] guido_g learning scala now
[17:19] guido_g will do something with ømq and scala for sure
[17:19] pieterh nice
[17:23] guido_g and now, after hacking nio and scala for nearly the whole day, relaxing w/ friends
[17:23] guido_g cu
[17:24] pieterh ciao
[22:01] dcolish Are there any stats for how much cpu a zmq polling loop might take?
[22:02] dcolish I know its partially based on what happens in the loop
[22:03] dcolish just wondering for the case of a very simple forwarder
[22:29] mikko dcolish: it's not a busy loop
[22:30] mikko dcolish: so if you poll with -1 timeout the cpu usage should be close to 0
[22:30] dcolish thats what i was expecting, let me double check the timeouts
[22:31] mikko even with small timeout the cpu usage should be fairly minimal
[22:33] dcolish yeah we're seeing like 5%, but we are doing some work in that loop
[22:33] dcolish hmm my timeout is 1
[22:33] dcolish maybe i should reduce that
[22:34] mikko 1?
[22:35] mikko 1 as in one millisecond?
[22:35] dcolish 1 ms
[22:35] dcolish sorry
[22:35] dcolish units do help :)
[22:35] mikko why do you need so short timeout?
[22:35] mikko even if you use -1 (eternal) the poll call will return immediately when there is activity
[22:36] dcolish right, I was trying to guard against the peer we're sending to dying
[22:36] dcolish i think there are better patterns for this although
[22:40] dcolish i guess my use isnt really a forwarder but its similar
[22:41] dcolish i'm receiving over an ipc PAIR from another thread and accumulating those messages for sending over a tcp PUB socket
[22:41] dcolish I also heartbeat on that PUB socket and didnt want to block, hence the low timeout
[22:42] dcolish however it might be too low and honestly 1ms is a little overkill for that use
[22:51] mikko PUB won't block
[22:51] mikko it will drop messages if there are no receivers
[22:51] mikko or do you poll the pub to see if it's ok to send?
[22:51] dcolish yeah thats fine, i'm polling on the PAIR to see if i got anything
[22:52] mikko why dont you block recv on the pair?
[22:52] mikko if you don't have anything to send you don't have anything to send
[22:52] dcolish but that would block the pub from sending no?
[22:52] mikko but if there is nothing to send you can block
[22:53] dcolish i want to send heartbeats even if i didnt receive anything over the pair
[22:53] mikko ok
[22:53] mikko 1ms heartbeat sounds a bit frequent
[22:53] dcolish the pair is just for additional instrumentation
[22:53] dcolish yeah i'm thinking thats right
[22:54] mikko why dont you ditch the pair
[22:54] mikko move the heartbeat to another thread
[22:54] mikko and treat it as any other socket?
[22:54] mikko use PUSH PULL let's say
[22:54] eydaimon so can I set up zeromq in a distributed fashion for redundancy? iow, a load balancer in front of it, and if one of the nodes crashes, I still have access to the messages?
[22:54] mikko the publisher pulls on left side
[22:54] mikko and pubs on right side
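Editor's sketch of the pattern under discussion: a forwarder that pulls instrumentation messages on one side, publishes them on the other, and emits a heartbeat whenever the poll times out. Endpoints, the 100 ms interval, and the two-iteration loop are invented for illustration; a real deployment would run the loop forever and, as mikko suggests, use something far longer than 1 ms:

```python
import zmq

HEARTBEAT_MS = 100   # demo value; ~1000 ms is more realistic than 1 ms

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://out")     # PUB never blocks; it drops with no subscribers
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://in")     # instrumentation arrives here

poller = zmq.Poller()
poller.register(pull, zmq.POLLIN)

heartbeats = 0
for _ in range(2):                      # a real loop would run forever
    events = dict(poller.poll(HEARTBEAT_MS))
    if pull in events:
        pub.send(pull.recv())           # forward instrumentation data
    else:
        pub.send(b"HEARTBEAT")          # poll timed out: emit a heartbeat
        heartbeats += 1

pub.close()
pull.close()
ctx.term()
```

With a -1 timeout the poll would sleep until data arrives, so the timeout here exists only to bound the gap between heartbeats; CPU usage stays near zero either way.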
[22:55] mikko eydaimon: i think the guide contains some redundancy patterns
[22:55] mikko eydaimon: it's very hard to do redundancy if you need to guarantee ordering or deliver only once
[22:56] dcolish mikko: hmm that might be ok, threads start to get expensive in python
[22:56] eydaimon mikko: I read the comparison to activemq and it talks about being distributed like git. What does it mean then?
[22:56] dcolish we're already using three
[22:56] eydaimon mikko: that's precisely what I'm going for
[22:56] eydaimon mikko: i.e. deliver only once and guarantee ordering
[22:56] mikko eydaimon: that is a very complicated problem
[22:57] mikko like let's say
[22:57] mikko you send a message
[22:57] mikko the other peer sends ACK
[22:57] mikko but how does that peer know that the ACK has been received?
[22:57] mikko maybe the server needs to ACK the ACK
[22:57] mikko but how do you know the client received ACK for the ACK ?
[22:57] mikko etc
[22:58] eydaimon mikko: no concept of a slave then? mysql pulls it off with binary logs etc, right?
[22:58] mikko eydaimon: different problem there
[22:58] eydaimon yeah true
[22:58] eydaimon slave is read only
[22:59] eydaimon :/
[22:59] mikko yes
[22:59] eydaimon not much help here
[22:59] eydaimon just realized after I wrote
[22:59] mikko it's somewhat interesting but also fairly complex domain
[23:00] mikko guaranteed ordering is also very complicated with multiple publisher nodes
[23:00] mikko if you have multiple active ones
[23:01] mikko i think it would be beneficial to start from defining exactly what is that you are after
[23:01] mikko and what are the error scenarios you want / can recover from
[23:01] mikko let's say you lose network connectivity between 'master' and a 'slave'
[23:02] mikko but not with master -> clients and slave -> clients
[23:02] mikko how do you guarantee ordering and deliver only once in that scenario?
[23:04] mikko on the other hand you have application constraints to think about
[23:04] mikko which is worse: not delivering at all or delivering twice?
[23:05] eydaimon In my case I'd only need the connectivity of a 'master'
[23:05] eydaimon the second node is for redundancy only
[23:05] mikko but the second node needs to know somehow what has been delivered
[23:05] mikko if the master dies it needs to pick up
[23:06] mikko and if you have hard deliver once and guaranteed ordering then it needs some serious thought
[23:08] mikko i gotta sleep, long day ahead tomorrow
[23:08] mikko good night
[23:08] eydaimon maybe it could just assume that if it's accessed
[23:08] eydaimon it's now the master
[23:08] eydaimon thanks :) good night
[23:09] mikko just a quick one: what i meant is that when the slave takes over it needs to know what the previous master has delivered this far so that you don't get double delivery
[23:09] mikko maybe you can query that from the client
[23:09] mikko or a database of some sort
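Editor's sketch of mikko's takeover point: when the backup broker becomes master, it must learn what the previous master already delivered (queried from the client or a database) and resume from there, or it will deliver twice. Everything here is hypothetical, plain-Python illustration, not a 0MQ API:

```python
# (seqno, payload) pairs the dead master had queued for delivery
pending = [(1, b"a"), (2, b"b"), (3, b"c"), (4, b"d")]

def resume_after_takeover(pending, last_delivered_seqno):
    """Drop everything the old master already delivered, so the new
    master delivers each remaining message exactly once, in order."""
    return [(n, m) for n, m in pending if n > last_delivered_seqno]

# the client (or a shared database) reports it last saw message 2,
# so delivery resumes at sequence number 3
todo = resume_after_takeover(pending, 2)
```

This only handles the ordering/double-delivery side; as noted above, deciding that the master is actually dead (rather than merely partitioned from the slave) is a separate and harder problem.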
[23:11] eydaimon off home too. sleep well :)
[23:12] dcolish what do you think about punctuating over that stream and rollback (after a reasonable t/o) if you never get the punctuation from the first master?
[23:14] dcolish although that doesnt deal with a client recv failure does it?
[23:14] dcolish oh well