Friday July 16, 2010

[Time] Name Message
[00:24] sd88g93 ok, fixed one problem, but still doesn't work with the forwarder and the threading:
[00:24] sd88g93 even just one thread
[02:05] fruminator hey all, I sent a post to the mailing list but I don't see it in the archives. did it get through?
[02:09] sd88g93 just got it, frum
[02:09] sd88g93 i'm having a similar problem, but i'm using the forwarder programmatically, the zmq_device() function
[02:11] sd88g93 fruminator: in my case, i'm trying to publish a message from multiple threads at one socket point, using inproc communication; i can only get it to work when i dispense with the forwarder device and use it in one thread
[02:11] sd88g93 i have the forwarder device to forward from each thread to the main socket in the main thread
[02:12] sd88g93 there's not much documentation on the forwarder,
[02:13] sd88g93
[02:14] sd88g93 fruminator: did you try using "tcp://lo:5555" instead of ?
[02:16] fruminator I will try that. I have to use the forwarder because this is between 2 totally different systems
[02:18] sd88g93 all it is is an infinite loop that receives at one end and sends on the other
[02:18] fruminator don't follow
[02:18] sd88g93 ok lol
[02:18] fruminator what is?
[02:20] sd88g93 actually, i can get it to work if i keep it to one thread doing the publishing, I will probably have to code my own queuer, to queue the messages up before publishing
[02:20] fruminator seems like a different use case than I'm facing.
[02:21] fruminator the out of the box behavior is perfect for me; I just need it to work offline
[02:21] sd88g93 oh ok
[02:23] fruminator gotta go for now, hope to see some replies on my email. thanks!
[05:27] ak47 I'm having a bit of a problem with ZMQ_RCVMORE
[08:51] jugg sustrik: it looks like you've removed issue support on your zeromq2 github page. How would you like patches submitted to you?
[08:53] jugg anyway, here is a patch that adds ZMQ_EVENTS to zmq_getsockopt() ->
[08:58] feroz Hello !
[09:00] jugg sustrik: in any case with your latest code base zmq_term never returns.
[09:03] feroz I'm reading some docs about 0mq, and looking into the Pub/Sub pattern. From the examples I see, when subscribing to a channel you only match against the beginning of a string; is there any way to do more?
[09:08] guido_g feroz: no
[09:14] feroz Okay, do you think it would be easy to extend that in the actual source code?
[09:15] guido_g you mean as a patch to ØMQ itself?
[09:26] jugg feroz: what "more" do you want to do?
[10:10] feroz I'm thinking of serializing objects and sending them through Pub/Sub
[10:10] feroz So it would be nice if i could filter against some attribute
[10:14] jugg just filter it yourself post-recv() then.
[10:15] jugg I suppose the subscription interface could be modified to be able to register a callback for custom publication filtering, but in your case the entire message content would have to be passed along so it could be unserialized, at which point you've gained nothing.
[10:18] feroz Okay, thanks !
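To make the subscription semantics guido_g describes concrete: a ZMQ_SUBSCRIBE filter is a plain prefix match on the bytes at the start of the message, nothing more. The equivalent check in plain C (no 0MQ calls; the `matches_subscription` helper name is made up) looks like this:

```c
#include <string.h>

/* Returns 1 if a message would pass a subscription for `topic`,
   mirroring 0MQ's SUB-side behavior: a byte-wise prefix match. */
int matches_subscription (const char *msg, size_t msg_len,
                          const char *topic, size_t topic_len)
{
    return msg_len >= topic_len && memcmp (msg, topic, topic_len) == 0;
}
```

Note that an empty subscription (length 0) matches every message, which is why `zmq_setsockopt(s, ZMQ_SUBSCRIBE, "", 0)` means "subscribe to everything". Anything fancier, such as filtering on an attribute of a serialized object, has to happen after recv(), as jugg says.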
[10:49] sustrik jugg: yes, I am aware of that
[10:49] sustrik sustrik/zeromq2 happens to have shutdown broken
[10:50] jugg ok
[10:52] sustrik jugg: what was that about the erlang binding?
[10:54] jugg Well, I have it compiling, and it can send data, but when I receive that data in a non-erlang app, it appears to be corrupted.
[10:54] jugg I have not yet been able to get the erlang recv functionality to work either.
[10:54] jugg it just blocks
[10:55] sustrik i see
[10:57] jugg not sure if it is interesting or not, but the general term() blocking issue can be reproduced with (lua code): - remove the connect line: no issue, change REQ to REP: no issue, change connect to bind: no issue.
[10:58] jugg however, if I have both a REQ and a REP and they send/recv to each other just once, both will lock on term()
[11:03] sustrik jugg: yes, the term code got broken during work on socket migration between threads
[11:04] sustrik i have to fix it
[11:05] jugg yah. I'm just trying to reduce the known problems while debugging this erlang binding and ran across that termination issue while testing outside of erlang.
[11:07] jugg I realize these issues are from your socket migration work - so just fyi in case you hadn't noticed it yet. Are there any other major issues you've found from the migration work?
[11:19] sustrik no, i don't think so
[11:19] sustrik shutdown is the only problem i am aware of
[11:22] jugg sustrik: on the erlang bindings, I'm not getting this ZMQ_FD thing. I get the fd and wait on it; when it signals, I can use ZMQ_EVENTS. However, from my reading of has_in/out, those won't return true until the socket's process_commands() is called - in which case, it would seem a mechanism is needed to call that.
[11:32] sustrik jugg: it can be called from getsockopt IMO
[11:32] sustrik ZMQ_EVENTS
[11:33] jugg Is that correct that it needs to be called then?
[11:34] sustrik yes
[11:34] sustrik process_commands gets the socket state up to date
[11:35] sustrik if it is not called, getsockopt would report some historical state
[11:38] jugg yay, that fixed the bindings.
[11:39] jugg Now to figure out why sending messages external to erlang appears to corrupt them.
[11:41] jugg sustrik: here is an updated ZMQ_EVENTS then ->
[11:43] jugg sustrik: what happens if pending events are not handled, and the fd is waited upon again?
[11:45] sustrik it'll return immediately
[11:48] sustrik jugg: your patch looks ok
[11:48] sustrik would you like to contribute it?
[11:50] jugg It was originally written by either Dhammika or Serge who wrote the bindings... I only moved it to socket_base and cleaned it up...
[11:51] jugg hmm, actually looking at their original code, I've fairly well redone it... so yes, I'll contribute it.
[11:52] sustrik thanks
[11:52] sustrik can you send the patch to the mailing list saying it's submitted under MIT license?
[11:52] jugg ok
[11:58] feroz hey, is there any paper explaining how pub/sub was implemented in zeromq?
[12:00] Lazesharp hi guys, are there any docs/articles on streamer devices?
[12:00] jugg feroz: not sure exactly what you mean, but:
[12:01] feroz Thanks jugg
[12:05] sustrik Lazesharp: there's none
[12:05] jugg sustrik: Would you like ZMQ_EVENTS to be stubbed out in the doc/zmq_getsockopt.txt file with this patch as well, or keep doc update in a separate patch?
[12:05] sustrik but it's pretty obvious
[12:06] sustrik it gets messages from an UPSTREAM socket and passes them to a DOWNSTREAM socket
[12:06] sustrik jugg: single patch
[12:07] Lazesharp sustrik: any queuing if there are no downstream sockets connected?
[12:07] Lazesharp or are messages just discarded
[12:08] Lazesharp actually, these are just building blocks - there's nothing stopping me from writing my own "queueing streamer", is there
[12:11] sustrik yes, it queues messages
[12:12] Lazesharp oh right, awesome
[12:13] jugg sustrik: is the document line wrap at 78 chars?
[12:23] jugg sustrik: this is the text I came up with. Sufficient for the patch?
[12:24] sustrik 80 chars
[12:26] jugg it should probably mention some relation to ZMQ_FD... but that isn't documented yet.
[12:27] sustrik i'll do that
[12:27] sustrik there's an in parameter to getsockopt (ZMQ_EVENTS)
[12:27] sustrik ?
[12:27] sustrik that's kind of strange
[12:28] sustrik i would say it should check both IN & OUT
[12:28] jugg oh?
[12:29] sustrik once the events were processed (process_commands), actual has_in & has_out are pretty lightweight, so there's no real performance penalty
[12:30] sustrik it just seems wrong the getsockopt would accept an in parameter
[12:31] jugg ok, it was a carry over from how it was being used in the erlang binding. I can change the binding's use of it of course.
[12:32] jugg I'll update the patch then.
[12:33] sustrik thx
[12:35] jugg ok, this is what I have now:
[12:42] sustrik nice & simple
[12:42] sustrik post it to the ml and I'll patch the codebase
[12:43] jugg sent
[12:45] sustrik thx
[14:19] jugg meh, these erlang bindings are worthless for sending messages to a non-erlang application. The bindings pack the data so that the receiving end must also be an erlang application to unpack them.
[14:20] jugg what was the point of ever creating these bindings if not for the use case of interacting with non-erlang applications?!
[16:49] erickt good morning #zeromq. question about up/downstreaming sockets. I was playing around with my own version of the butterfly example, and I found that when I killed one of the parallel workers, the other workers didn't pick up the leftover work. Is there a common pattern for dealing with problems like this?
[16:54] dirtmcgirt i've got a PUB sending at a fast rate over an IPC socket, but messages aren't returned by zmq_recv in the SUB until the PUB process exits
[17:03] sustrik dirtmcgirt: do you have a test program?
[17:03] dirtmcgirt sustrik: i'll see if i can write one up
[17:04] sustrik good, please report the issue using the bug tracker then
[17:04] dirtmcgirt noticed it in production last night - the memory on the SUB process swelled, but zmq_recv didn't deliver
[17:04] dirtmcgirt is this something that's been seen, or new?
[17:04] sustrik no, i haven't seen that yet
[17:04] cremes erickt: just a guess, but i think the upstream socket you killed had a bunch of "work" messages queued up
[17:05] erickt yeah, that's what I was thinking
[17:05] cremes erickt: you might want to set the HWM for the upstream socket to 1 so the downstream socket knows not to send it more work than it can handle before dying
[17:05] cremes or so you don't lose work msgs
[17:06] erickt oh that's a good idea
[17:10] erickt oh, but if the component had already started working, there's still a chance it could die halfway through processing that message. I guess in that case it'd make sense to have another socket to the upstream node to send back heartbeat status updates
[17:10] erickt or, just log the state in some database
[18:36] cremes erickt: agreed; PUB and DOWNSTREAM are fire-and-forget so if you need guaranteed delivery you need to build that on top
[19:08] erickt thanks cremes
[19:36] cremes you are welcome
[20:20] erickt does zeromq support dns-sd yet? I've seen a couple emails mentioning using that for service discovery
[20:20] erickt support directly I mean, as opposed to doing the dns queries myself
[20:35] sustrik erickt: not yet
[20:36] sustrik it's a research issue
[20:36] erickt thanks again
[21:39] erickt I'm sorry to keep asking questions, but is there a public roadmap? Or if not, is there an estimated time when the failover for streaming or REP/REQ will be implemented?