Wednesday December 15, 2010

[Time] Name Message
[02:51] brandyn does the new xpub/xsub allow for PUB side filtering?
[02:51] brandyn I haven't seen any documentation on it, just a few mailing list posts
[02:56] Steve-o that is what the design is for
[02:56] Steve-o the spec is on the mailing list
[04:18] EricL Is anyone around?
[04:20] Steve-o all sleeping
[04:21] EricL I am trying to figure out what the best approach for my setup is to use 0MQ
[04:22] Steve-o to do what?
[04:22] EricL I have a log server and a few clients that are pushing a lot of data to the server. I think I want something similar to pub/sub that is a queue on the client.
[04:23] EricL This way any one of multiple log servers can grab a line from the client and it pops off the queue.
[04:24] EricL from any client rather.
[04:25] Steve-o as in polling the logger
[04:25] EricL Kind of. The way the logging happens is that there is a Ruby Resque job that does a few things and dumps the output to a log.
[04:26] Steve-o so, REQ/REP sockets then
[04:26] Steve-o it's not ideal to have the chatter back and forth though
[04:27] Steve-o you end up slowing the communication down to the RTT
[04:28] EricL That's what I am curious about.
[04:30] Steve-o which is why PUB is preferred,
[04:30] Steve-o so what do you not like about PUB/SUB implementation?
[04:31] EricL Because then multiple log servers will be receiving the same msgs, no?
[04:32] Steve-o you can use different topics for each server
[04:33] EricL Log server?
[04:34] Steve-o yes, or only have the client connect to one log server
[04:34] Steve-o you can partition by connection or topic/subject
[04:35] EricL The problem there is that there is more data coming from the client than can be written to disk (on a MB/s basis)
[04:36] EricL That's why I am interested in something like 0MQ. I just am not sure how to segment the write load across multiple servers.
[04:36] Steve-o so the messages will queue up on the log clients
[04:36] Steve-o you will probably want to use the zmq_swap function to spool onto disk too
[04:38] Steve-o log client: send messages as soon as available, zmq queues up messages if log server is congested receiving
[04:38] EricL That won't solve the issue since it's constantly going to be backing up.
[04:39] brandyn any know the state of pub side filtering?
[04:40] Steve-o EricL: zmq is managing the queue and congestion for you, so it sounds ideal
[04:40] brandyn that is a really essential feature for my application and I have to emulate it now
[04:40] Steve-o brandyn: have you checked the source in git/master?
[04:41] brandyn Steve-o, yeah I got it this morning but it looks to be a copy of pub/sub (the XPUB/XSUB, that is)
[04:42] EricL It's only ideal if I can avoid duplication of msg logging on the server side.
[04:43] Steve-o duplication? As in re-transmits?
[04:44] EricL duplication meaning that if I have more than 1 logserver and 3 clients pushing out msgs, I don't care which logserver the msgs go to as long as there are not any duplicate msgs.
[04:44] Steve-o brandyn: I guess you have to ask Gerard Toonstra, he was submitting the patches and RFC
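[Editor's note: the pub-side filtering brandyn is asking about is what XPUB eventually provides. This sketch uses the XPUB API as it later shipped, which may differ from the 2010 git master being discussed; the endpoint and topic are invented. Unlike plain PUB, an XPUB socket receives each subscription as a message, so the publisher can see, and filter on, what downstream peers want.]

```python
import zmq

ctx = zmq.Context()
xpub = ctx.socket(zmq.XPUB)
xpub.bind("inproc://xfilter")

sub = ctx.socket(zmq.SUB)
sub.connect("inproc://xfilter")
sub.setsockopt(zmq.SUBSCRIBE, b"sensors.")

# The subscription arrives at the publisher as a message:
# first byte 0x01 = subscribe (0x00 = unsubscribe), then the topic.
assert xpub.poll(2000)
frame = xpub.recv()
```

With this, a publisher can avoid serialising or sending data nobody has subscribed to, instead of emulating filtering in application code.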
[04:44] EricL (from logserver to logserver)
[04:45] Steve-o EricL: so you also have a question of which logserver is being targeted
[04:47] EricL I don't care which logserver is being targeted as long as it gets there.
[04:52] Steve-o k, the link for loggly's notes has disappeared,
[04:52] EricL Looking.
[04:55] EricL Yea, that doesn't tell much.
[04:57] Steve-o maybe PUSH/PULL sockets, they implement load sharing, not sure about guarantees on once-only delivery
[05:06] EricL Hmm...I guess I am not sure of the best approach in general.
[05:08] EricL So you think I should set up a PUSH socket on the client and then write everything from the Resque processes to that socket (which will then do the buffering).
[05:10] EricL Then on the server(s), I should do a PULL and hope that I am not receiving duplicates.
[05:13] Steve-o yes, although I'd check with the devs on the mailing list about duplicate delivery
[05:15] Steve-o otherwise you need req/rep and you suffer RTT time
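[Editor's note: a hedged sketch of the PUSH/PULL approach, using pyzmq over `inproc` with invented names. PUSH load-balances across connected peers, and each message goes to exactly one PULL socket, which is the no-duplicates behaviour EricL wants (though, as advised above, the exact delivery guarantees are worth confirming with the devs).]

```python
import time
import zmq

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)       # the log client
push.bind("inproc://loglines")

pulls = []                        # two "log servers"
for _ in range(2):
    s = ctx.socket(zmq.PULL)
    s.connect("inproc://loglines")
    pulls.append(s)

time.sleep(0.2)                   # ensure both pullers are connected

sent = [b"line %d" % i for i in range(10)]
for m in sent:
    push.send(m)                  # round-robined across the pullers

received = []
for s in pulls:
    while s.poll(200):            # drain whatever this puller got
        received.append(s.recv())
```

Each message lands on exactly one server, so the write load is split without duplicate log lines.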
[05:20] EricL Alright. I think I have some reading to do in order to understand how to implement all this.
[05:21] Steve-o the main devs should be online in a few hours time
[05:21] EricL They in EST?
[05:22] Steve-o MET I think
[05:22] Steve-o I only look after PGM stuff, all my middleware knowledge is TIBCO stuff
[05:23] EricL Gotchya.
[05:25] Steve-o In TIBCO you would use a Rendezvous Distributed Queue (RVDQ), but the implementation is very layered and not as efficient as 0MQ
[05:28] EricL I think I need the efficiency of 0MQ because of the amount of data I am dealing with.
[07:01] EricL Steve-o: Thanks.
[11:09] gb hello
[11:22] sustrik hello
[11:26] mikko hi
[11:35] sustrik finally i'm getting to your patch
[11:35] sustrik sorry for the delay
[11:35] mikko i'm patching jmeter at the moment
[11:36] mikko only works master-slave if it's in the same network / no firewalls
[12:04] mikko sustrik: it seems that swap_t::full() is maybe left over from an old implementation somewhere
[12:04] mikko sustrik: it seems unlikely that buffer_space () == 1 in any case
[12:06] sustrik mikko: the swap isn't my code
[12:06] sustrik i'm trying to make sense of it
[12:07] mikko the problem I was hitting was that the swap was not full but there wasn't enough space for the given message
[12:07] mikko hence swap_t::fits(..
[12:11] Steve-o why does swap_t call GetCurrentThreadId on Windows but getpid on POSIX? brain fart.
[12:12] sustrik it just needs some unique number
[12:13] sustrik i think the code was written before 0mq was linked with libuuid
[12:13] Steve-o but it's not the same on both, it's thread id on one and process id on the other
[12:15] Steve-o meh, also how many people use octets as a counter? old people :P
[12:21] sustrik well, it's always one of them afaics
[12:21] sustrik tid on windows
[12:21] sustrik pid on posix
[12:21] sustrik but, presumably, uuid would be better
[13:05] sustrik mikko: still there?
[13:29] mikko yes
[13:32] sustrik mikko: i've sent a modified swap patch
[13:32] sustrik basically i've just removed some dead code
[13:33] mikko sustrik: you still have the implementation of full there
[13:33] sustrik oh my
[13:33] mikko no you dont
[13:33] mikko misread the patch
[13:33] sustrik it's commented out
[13:33] mikko yes, looks good to me. i can test it after i get a haircut
[13:33] sustrik goodo
[13:33] sustrik just ping me then
[13:33] sustrik and i'll apply it
[13:34] mikko i'll give you ping
[13:34] mikko i added freebsd 8.1 to build cluster
[13:34] mikko linux, solaris, win, freebsd now
[13:36] sustrik it's getting pretty complete :)
[13:36] mikko Bad file descriptor
[13:36] mikko nbytes != -1 (tcp_socket.cpp:197)
[13:36] mikko Abort trap (core dumped)
[13:36] mikko shutdown_stress on freebsd 8.1
[13:41] CIA-20 zeromq2: Mikko Koppanen master * ra46980b / (4 files in 4 dirs):
[13:41] CIA-20 zeromq2: Remove assertions from devices
[13:41] CIA-20 zeromq2: Signed-off-by: Mikko Koppanen <> -
[13:42] sustrik mikko: that's presumably the bug dhammika fixed
[13:42] sustrik i still have to look at the one you've reported yesterday
[14:58] mikko sustrik: you mean the HWM one?
[15:07] Guthur sustrik: I just noticed that clrzmq2 issue there now
[15:07] Guthur I've replied
[15:07] sustrik Guthur: good
[15:07] Guthur all I can say is... at least he finally got it built
[15:08] sustrik problem with clrzmq or problem with the user?
[15:08] Guthur user
[15:08] sustrik :)
[15:08] Guthur He's wondering why passing in a String encoding object only returns strings
[15:09] Guthur he wanted binary
[15:09] Guthur which is the standard method, returning a byte array
[15:10] Guthur I need to check the comments again, I'm sure I was pretty explicit
[15:10] sustrik depends on the user
[15:10] sustrik however simply you explain it, there's still someone who's not going to understand
[15:11] Guthur hehe comment -> Listen for message, and return it in string format
[15:12] Guthur actually to be fair there could be some improvement in the comment for the method he wanted
[15:12] Guthur I'll look into updating later
[15:15] sustrik Guthur: there has been something else wrt clrzmq discussed on the ML