Thursday April 21, 2011

[Time] Name Message
[04:30] MK_FG Hi. I'm trying to use dealer-to-dealer sockets to implement async event processing (events are emitted from one socket group, and replies are sent only when it's necessary to take action on it)
[04:31] MK_FG But I've hit a problem: if processing is slow, emitters start eating all the ram available
[04:32] MK_FG And setting HWM to, say 5, everywhere doesn't seem to have any effect
[04:32] MK_FG Yet docs seem to indicate that sending dealer sockets should block after not being able to dispose of 5 msg...
[04:33] MK_FG Where am I doing it wrong? ;)
[05:01] MK_FG Hm... my mistake in this case was setting hwm after connecting the socket (doesn't seem to be documented), although setting it to 5 with a single emitter and single collector (on both sides) still allows the former to send 1k msgs while the latter blocks on the first one
[05:10] guido_g <- see top of the page
[05:11] MK_FG Indeed, I seem to be ignoring stuff in fancy frames, thanks
[05:12] guido_g ok
[05:12] guido_g for the other problem just post a minimal example showing this behaviour to a pastebin
[05:13] MK_FG Sure, a moment...
[05:13] MK_FG <-- emitter, <-- collector
[05:14] MK_FG libzmq 2.1.0, pyzmq 2.1.4
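A runnable sketch of the gotcha MK_FG found (the inproc endpoint name and HWM of 5 are just for illustration): HWM-type options only apply to connects/binds made after the option is set, so they must be configured before connect()/bind(). The 2.x era discussed here had a single HWM option; modern pyzmq splits it into sndhwm/rcvhwm.

```python
import zmq

ctx = zmq.Context()
collector = ctx.socket(zmq.DEALER)
emitter = ctx.socket(zmq.DEALER)

# The gotcha from the log: HWM-type options only affect connects/binds
# made *after* the option is set, so set them before connect()/bind().
collector.rcvhwm = 5
emitter.sndhwm = 5

collector.bind("inproc://events")      # endpoint name is illustrative
emitter.connect("inproc://events")

# With nobody reading, non-blocking sends hit EAGAIN once the small
# queues fill up (roughly sndhwm + rcvhwm for an inproc pipe).
sent = 0
try:
    while True:
        emitter.send(b"event", zmq.NOBLOCK)
        sent += 1
except zmq.Again:
    pass

print("sent before hitting HWM:", sent)   # a handful, not ~1000

emitter.close(0)
collector.close(0)
ctx.term()
```

Set the options after connect() instead and the emitter can queue far more before blocking, which is the behaviour described above.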
[05:14] guido_g ahhh... a language i know... :)
[05:14] MK_FG Everyone knows python ;_)
[05:14] guido_g luckily this is not the case
[05:15] MK_FG Well, I've yet to see a person IRL who can't read it, at least ;)
[05:16] guido_g so true
[05:16] MK_FG Starting these two, emitter gets to "Sent: 1052" here before seeing the first reply
[05:17] MK_FG I'd have expected 10 or 5, but 1k...
[05:18] guido_g i think you got the socket types wrong
[05:19] guido_g did you read the guide?
[05:19] MK_FG Yes, I did
[05:20] guido_g <- re-read carefully
[05:23] guido_g dealer sockets (xreq) are not meant to talk to each other
[05:24] MK_FG What about then?
[05:24] guido_g and from the reference: "When a ZMQ_DEALER socket is connected to a ZMQ_REP socket each message sent must consist of an empty message part, the delimiter, followed by one or more body parts."
[05:24] MK_FG "is connected to a ZMQ_REP socket" is not true in my case!
[05:25] guido_g and?
[05:25] guido_g it's about the message format required
[05:26] MK_FG Hmm...
[05:27] guido_g you got the envelope wrong
[05:27] guido_g and btw, "fanout" is the term to use for a 1:1 link
[05:28] guido_g *is not the
[05:28] MK_FG Thanks, guess I'll re-read the envelope sections there
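A sketch of the envelope rule quoted from the reference above, assuming a hypothetical inproc endpoint: a DEALER talking to a REP must prepend the empty delimiter frame by hand.

```python
import zmq

ctx = zmq.Context()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://svc")               # endpoint name is illustrative
dealer = ctx.socket(zmq.DEALER)
dealer.connect("inproc://svc")

# DEALER -> REP: send an empty delimiter frame, then the body.
# REP strips the delimiter before handing the body to the application.
dealer.send_multipart([b"", b"ping"])
request = rep.recv()
rep.send(b"pong")

# The reply arrives with the empty delimiter restored.
empty, body = dealer.recv_multipart()
print(request, empty, body)

dealer.close(0)
rep.close(0)
ctx.term()
```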
[05:28] MK_FG In my case it should be "few emitters" and "lots of collectors"
[05:28] guido_g or describe the use-case and hope for some help :)
[05:28] guido_g it's called pub-sub
[05:29] MK_FG "[10:30:35]<MK_FG> Hi. I'm trying to use dealer-to-dealer sockets to implement async event processing (events are emitted from one socket group, and replies are sent only when it's necessary to take action on it)"
[05:29] guido_g this is not a use-case
[05:29] MK_FG Not really, I don't need to send the same message to _all_ collectors, only to a single one
[05:29] guido_g then you build a router
[05:30] guido_g your naming really needs an upgrade
[05:30] MK_FG Guess so
[05:32] guido_g <- might be the thing you want
[05:33] MK_FG REQ is flip-flop afair, so if it'll get a request, it must send a reply
[05:33] MK_FG And I don't need replies 99% of the time
[05:34] MK_FG I thought about splitting replies into a separate channel, but dealer-to-dealer seemed to handle the case nicely...
[05:34] MK_FG Guess I'll test it a bit more and check envelopes, then re-evaluate that idea
[05:36] guido_g why not push socket to distribute the requests?
[05:37] guido_g the back channel could also be built w/ push, this time on the worker side
[05:42] MK_FG Yes, guess that's how I'll rebuild it, but I still can't quite see why dealer-to-dealer shouldn't work
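A minimal sketch of the design guido_g suggests (inproc endpoints and message bodies are made up): PUSH distributes work to PULL workers, and a second PUSH->PULL pipe, pushed from the worker side, serves as the back channel for the rare replies.

```python
import zmq

ctx = zmq.Context()

# Distributor: PUSH round-robins work to whichever PULL workers connect.
work_out = ctx.socket(zmq.PUSH)
work_out.bind("inproc://work")

# Back channel: a second PUSH->PULL pipe, pushed from the worker side,
# so a worker only sends something when a reply is actually needed.
results_in = ctx.socket(zmq.PULL)
results_in.bind("inproc://results")

# One worker, inline for the sketch; really each worker is its own process.
work_in = ctx.socket(zmq.PULL)
work_in.connect("inproc://work")
results_out = ctx.socket(zmq.PUSH)
results_out.connect("inproc://results")

work_out.send(b"event-1")
task = work_in.recv()
results_out.send(b"handled " + task)
result = results_in.recv()
print(result)

for s in (work_out, work_in, results_out, results_in):
    s.close(0)
ctx.term()
```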
[07:18] saurabh hi
[07:19] saurabh had a question regarding zeromq durable sockets ...
[07:20] saurabh did the following:
[07:20] saurabh 1) created connection between xrep(bind) and xreq(connect) sockets
[07:21] saurabh 2) sent messages from xrep to xreq sockets ( this is working properly)
[07:21] saurabh 3) now I kill xrep socket ...
[07:22] saurabh heres the problem ...
[07:23] saurabh since I don't explicitly set identity on xrep, I am expecting xreq to start dropping messages to xrep. but xreq keeps adding those messages to its queue and caching them.
[07:28] saurabh what I want is that all messages sent to xreq socket should either be dropped or sent to xrep if connected.
[07:29] saurabh this queuing behaviour causes problems on 2 levels
[07:29] saurabh 1) consumes memory xreq side
[07:30] saurabh 2) blasts messages on xrep side when it comes back up
[07:32] MK_FG saurabh_, send with xreq should block upon reaching hwm, in my (quite limited yet) understanding
[07:32] sustrik saurabh_: how are you killing the xrep side?
[07:33] sustrik Ctrl+C? Power-downing the machine?
[07:33] saurabh kill -9
[07:33] saurabh yep
[07:33] sustrik hm, not sure whether TCP reports the connection disruption then
[07:33] sustrik have you tried with Ctrl+C?
[07:34] saurabh i'll give it a try
[07:36] headzone tcp won't know until the far end comes back up and sends an RST in response to the next segment sent
[07:38] saurabh tried something else ... received 10 msgs on xrep end & then explicitly called socket.close followed by context.term & ~2sec of wait before normal program termination
[07:38] saurabh still xreq cached those messages
[07:40] sustrik saurabh_: ok, it looks like a bug then
[07:40] saurabh seems ok
[07:40] saurabh 1-30 i received that way
[07:40] sustrik can you report it in the bug tracker?
[07:40] saurabh & then i received 90-
[07:40] saurabh so its fine ...
[07:40] saurabh ok ... explaining
[07:41] saurabh req end was doing non stop sending after some sleep
[07:41] saurabh just nos (1, 2, etc.)
[07:41] guido_g may i dare to ask for a minimal code example?
[07:42] saurabh sure ...
[07:42] sustrik guido_g: afaiu what saurabh_ is saying is that it actually works as expected
[07:42] guido_g into a paste-bin, please
[07:42] sustrik right?
[07:42] guido_g sustrik: i'm not sure what he is saying
[07:43] saurabh will paste the code link ... 2 mins
[07:43] sustrik thx
[07:44] saurabh
[07:45] saurabh its java code ... it has some other oops stuff to create messages & all
[07:45] saurabh to explain xreq was sending messages numbered 1, 2, 3, etc.
[07:45] saurabh xrep used to read 10 messages & then shutdown
[07:46] saurabh but still xrep read 1-10, <shutdown>, 11-20, <shutdown>, 21-30
[07:46] saurabh after this xrep read msg no 96
[07:48] saurabh so i think some messages added to the queue were persisted even through xrep shutdown & restart cycles
[07:48] saurabh ok?
[07:49] sustrik they may have been already passed to the peer when the shutdown happened
[07:50] saurabh sry .. didn't get u
[07:51] sustrik forget it, the last line i wrote is nonsense
[07:52] sustrik ok, i see
[07:53] sustrik MK_FG was right
[07:53] sustrik xreq socket blocks when there's noone to send the message to
[07:53] sustrik so when xrep is killed
[07:53] sustrik the sending application just waits doing nothing
[07:53] saurabh i hadn't set hwm
[07:53] saurabh that happens only in case hwm is set
[07:54] sustrik hwm is per-peer
[07:54] sustrik if there's no peer, there's no queue to store messages in
[07:54] sustrik so it simply blocks
[07:54] sustrik this can be implemented better, but that's the way it is for now
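A sketch of sustrik's point that hwm is per-peer, so a DEALER (xreq) with no peer at all has no queue to put messages in; using NOBLOCK makes the would-be block visible as an immediate EAGAIN.

```python
import zmq

ctx = zmq.Context()
dealer = ctx.socket(zmq.DEALER)   # XREQ in the 2.x naming

# No connect()/bind() yet: no peer, hence no per-peer queue to store
# messages in. A plain send() would block forever; NOBLOCK turns that
# into an immediate EAGAIN instead.
try:
    dealer.send(b"hello", zmq.NOBLOCK)
    queued = True
except zmq.Again:
    queued = False

print("queued without a peer:", queued)

dealer.close(0)
ctx.term()
```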
[07:55] saurabh the xreq sending code was running throughout ... it didn't block
[07:55] saurabh u see i sends one msg & then increments
[07:55] saurabh *it
[07:55] sustrik and no message is dropped in the same test?
[07:55] saurabh so the sending code reached 90s in terms of msg sent
[07:56] saurabh messages were dropped - the ones in between 30 & 96
[07:56] saurabh saurabh_> but still xrep read 1-10, <shutdown>, 11-20, <shutdown>, 21-30
[07:56] saurabh <saurabh_> after this xrep read msg no 96
[07:56] sustrik sounds strange
[07:57] sustrik please, create an issue in the bug tracker
[07:57] sustrik i'll have a look later on
[07:57] saurabh ok
[08:04] mikko pieterh: 2-2 builds get stuck occasionally
[08:04] mikko /home/jenkins/build/jobs/libzmq2-2_clang-debian/workspace/tests/.libs/lt-test_pair_inproc
[08:04] mikko Process 4830 attached - interrupt to quit
[08:04] mikko recv(4,
[08:04] mikko blocking on recv
[08:14] djc mikko: I'm back now, if you have time to investigate the --with-system-pgm issues
[08:15] cwb I'm trying a minimal test case using pyzmq and 0MQ 2.1.5 on Ubuntu 10.04 and am getting a "Bad address" on the socket.recv() command. I've opened my firewalls up fully so I don't think that is the problem. What might "Bad address" mean?
[08:18] cwb I'm using pyzmq 2.1.4
[08:19] th cwb: that was a mistake in the 2.1.5 release - that's why it's no longer available for download
[08:20] cwb Aha, thanks! Am I better off using 2.1.4 or trunk?
[08:21] guido_g 2.1.4
[08:21] th there is a fix in git already, but asking that question yesterday i got the answer that there is no recommended git revision in the 2.1 branch > 2.1.4, meaning that i should use 2.1.4
[08:22] cwb Super, thanks for your help.
[08:35] benoitc hum, not sure i understand: reading the code of mongrel2 it seems to bind to a PUSH socket where i thought we could just connect
[08:36] benoitc in which case can you connect to PUSH or bind ?
[08:37] MK_FG benoitc, Why can't it be both? ;)
[08:38] benoitc well pyzmq tells me "unknown error Operation not supported by device"
[08:38] benoitc and i don't understand the bind here then
[08:39] benoitc how could I get connections on a PUSH? if i can't recv
[08:39] MK_FG Same error for me
[08:40] MK_FG But I don't really understand the limitation - .connect() and socket operations seem to be orthogonal
[08:41] guido_g benoitc: huh? push is send only
[08:41] guido_g what do you do?
[08:41] benoitc if i read the doc it tells "zmq_bind - accept connections on a socket" (like a socket), since you can't recv on PUSH it sounds logical that you can't bind
[08:41] guido_g minimal example -> pastebin
[08:41] benoitc guido_g: that's what I said
[08:41] djc mikko: I used objdump -T instead of nm, that gives symbols
[08:41] djc (our way of building openpgm strips libraries)
[08:41] benoitc i don't understand how you could bind them
[08:41] benoitc then
[08:41] djc but pgm_send isn't in there (a bunch of other pgm_* functions are)
[08:42] MK_FG benoitc, You can bind XREQ socket and then .send() on it
[08:42] benoitc guido_g: in mongrel2 code I have void *handler_socket = mqsocket(ZMQ_PUSH);
[08:42] benoitc rc = zmq_bind(handler_socket, send_spec);
[08:42] benoitc not sure how you could bind on a PUSH here
[08:42] guido_g and?
[08:43] benoitc and I can't do it with pyzmq.
[08:43] benoitc binding on something you can't recv on is illogical for me.
[08:43] MK_FG benoitc, Not that I advise to use XREQ, just want to point out that .connect doesn't seem to conflict with high-level queueing operations
[08:43] benoitc so pyzmq looks right
[08:44] benoitc MK_FG: I agree, you can obviously send on a bound socket, just trying to understand if normally you can bind on a PUSH and if in this case there isn't a bug in pyzmq
[08:45] guido_g benoitc: works here
[08:45] guido_g tcp 0 0* LISTEN 15755/python
[08:45] guido_g netstat says
[08:47] MK_FG Erm... indeed it works, as I'd expect, and my "same error for me" before was yet another copy-pasting bug of mine, sorry
[08:47] guido_g tz tz tz
[08:47] MK_FG Just pasted push into code that does .recv() and saw the same error
[08:47] guido_g omg
[08:48] MK_FG ...w/o looking at the line where the error actually happens
[08:52] benoitc hum, definitely have something weird in my code then
[08:52] benoitc and now wondering how i can manage things simply on a multiprocess tcp->zeromq thing
[08:53] guido_g whatever that might be
[08:53] benoitc .
[08:54] benoitc but problem is that i can have only one handler behind.
[08:54] benoitc so i should change the way I bind/connect to sockets
[08:55] benoitc since obviously i can only have one bind,
[08:57] guido_g i _guess_ that you want to take on arbitrary tcp connections and forward the data via ømq push socket, right?
[08:58] benoitc yup
[08:59] guido_g ok
[08:59] guido_g next assumption would be that you have "worker" that fetch data from the ømq push socket
[09:00] guido_g s/"worker"/a bunch of "workers"/
[09:00] benoitc well i have one tcp socket shared between workers on different cpus
[09:00] guido_g what you have is not necessarily what you need
[09:00] benoitc the tcp request is handled by one of the workers depending on how the system load-balanced it
[09:01] benoitc and yes, the second operation is that
[09:01] benoitc guido_g: any idea ?
[09:02] guido_g you read the data from the tcp sockets and send them to the push socket
[09:02] guido_g on the worker side you fetch the data via a pull socket
[09:02] benoitc mmm no, i read the data from the tcp socket and send them via the push socket
[09:03] guido_g what?
[09:03] benoitc and have a pull handler
[09:03] benoitc that what I have now
[09:03] benoitc the push socket is sending them to the pull socket
[09:03] guido_g of couse
[09:03] guido_g *course
[09:03] benoitc then i resend them to tcp using a PUB/SUB
[09:04] guido_g what?
[09:04] guido_g lets make clear, tcp means raw tcp sockets, os level
[09:04] guido_g pub/sub is ømq
[09:04] benoitc yes
[09:05] guido_g then "<benoitc> tehn i resend them to tcp using a PUB/SUB" doesn't make sense
[09:05] benoitc the pull worker publishes the message, and the tcp worker subscribing to this message resends it
[09:05] benoitc to the tcp
[09:05] guido_g ahhh... much better
[09:06] guido_g you're using mongrel2, right?
[09:07] guido_g so this push and pub/sub mixture is inevitable
[09:08] benoitc i have
[09:08] benoitc mongrel2 does it differently and isn't multicore, so that's why it works
[09:08] benoitc mongrel2 bind to a PUSH socket where I connect
[09:09] benoitc and connect to a PULL where i bind
[09:09] benoitc the same is true for PUB/SUB
[09:09] benoitc the advantage for mongrel2 is that it can balance a worker to multiple handler, where I can have only one
[09:10] benoitc which is better imo
[09:10] benoitc balance a tcp request sorry
[09:10] guido_g i simply don't get what you want
[09:10] benoitc i can only have one zmq handler accepting forwarded tcp requests currently
[09:11] benoitc and I would like to have more than one to remove and add them dynamically
[09:12] guido_g you can add as many handlers as you want to a push socket
[09:12] benoitc the problem is that I can't do it right now in my scheme
[09:12] benoitc only if i bind it
[09:12] guido_g ??
[09:15] benoitc you need to bind the PUSH and connect to a PULL; PUSH load-balances to PULL
[09:16] guido_g you bind the push socket
[09:16] guido_g you connect the pulls from the workers to the bound address of the push
[09:16] guido_g where is the problem?
[09:17] benoitc yes, so that's something I can't do in my design, since i'm multicore and read the route depending on the data (the conf is not loaded at first)
[09:17] mikko djc: i installed upstream openpgm
[09:17] mikko djc: and --with-system-pgm seems to link ok
[09:17] djc mikko: what version did you get?
[09:17] guido_g benoitc: what has multicore to do with that?
[09:18] guido_g benoitc: and if you don't have the config, then you'd better make sure the config is there when you need it
[09:18] mikko hmm, spotify just rickrolled me
[09:18] benoitc cause I know that I have to use a PUSH socket only in the worker.
[09:18] mikko random music playing
[09:18] mikko djc: 5.1.115
[09:18] djc did you see my messages before, about objdump?
[09:18] benoitc and I can't bind to a PUSH socket in each workers.
[09:18] mikko djc: yep
[09:18] guido_g benoitc: then don't do it
[09:18] mikko djc: not sure why the symbol isn't there
[09:18] mikko djc: does your build break it?
[09:19] mikko as in visibility flags foobarred or something
[09:19] djc mikko: so does your build of upstream openpgm show symbols for nm?
[09:19] djc and if you do objdump -T, does it show pgm_send?
[09:19] mikko djc: give me a second, just ran to swap starting a virtual machine
[09:19] benoitc jaja, i like such answers. anyway thanks, will try to revisit the way i can handle it, or just trash this support. thanks for the help
[09:20] guido_g benoitc: hey, don't blame me if you're not capable of explaining what you want
[09:20] mikko djc: # nm /usr/local/lib/ | grep pgm_send
[09:20] mikko 000000000002d555 T pgm_send
[09:21] djc right, and now with objdump -T?
[09:22] mikko 000000000002d555 g DF .text 0000000000000241 Base pgm_send
[09:22] djc ugh
[09:22] djc that's pretty weird
[09:22] mikko try taking the upstream tarball and building to --prefix
[09:23] mikko is the gentoo .so built with scons or autotools?
[09:23] mikko can you build unstripped version?
[09:23] djc autotools, I think
[09:23] benoitc guido_g: i wouldn't say that, also i didn't put any blame in my sentence.
[09:24] djc mikko: building without stripping now, let's see what happens
[09:24] guido_g benoitc: "<benoitc> jaja i like such answer..." <- reads like someone is unhappy
[09:25] guido_g benoitc: try to describe what you want to achieve, not what you have (and what obviously is not working)
[09:26] djc mikko: when building with "nostrip", I see pgm_send in nm but not in objdump -T
[09:26] djc so something is fishy there
[09:27] benoitc to describe what i want to achieve i needed to describe what I had, so i can make it perfectly clear that the route is handled on the worker level, and until this moment i don't know if I have to forward to a zmq thing or simply forward tcp. so i can not pre-bind
[09:29] mikko djc: can you try with upstream tarball?
[09:29] mikko do you apply weird patches on gentoo side?
[09:29] djc mikko: this is from the upstream tarball
[09:29] djc no openpgm patches
[09:29] mikko 5.1.115?
[09:30] djc yup
[09:30] guido_g benoitc: sorry, can't parse that
[09:34] djc mikko: so are you building with autotools or cmake?
[09:34] mikko djc: interesting
[09:34] mikko djc: autotools
[09:34] mikko i've been using cmake lately
[09:34] mikko and it makes autotools look pretty cumbersome
[09:35] mikko off-topic
[09:35] djc so where is visibility controlled with autotools?
[09:35] djc or s/where/how/
[09:35] mikko djc: what do you mean?
[09:35] mikko -fvisibility to compiler
[09:35] mikko it's a compiler thing rather than autotools
[09:35] mikko although i don't see why the symbol isn't there at all
[09:36] mikko even if the symbol was hidden it should still be there (just hidden)
[09:36] djc for example, I'm passing -DCONFIG_HAVE_DSO_VISIBILITY
[09:36] djc could that have anything to do with it?
[09:36] mikko shouldn't
[09:37] mikko can you compile without optimizations as well?
[09:37] mikko -O0
[09:37] mikko just to be sure that the symbol doesn't accidentally get optimised out
[09:38] djc I'll give it a shot
[09:39] djc with -O0 it's the same
[09:39] benoitc guido_g: drawing a schema
[09:40] mikko djc: that is just strange
[09:41] djc this is the compiler invocation on source.c (which defines pgm_send, I think):
[09:41] guido_g benoitc: good idea
[09:42] guido_g benoitc: you might want to try
[09:43] mikko djc: i just downloaded 5.1.116
[09:44] mikko and the symbol is there after fresh compile
[09:44] mikko what gcc version are you using?
[09:44] mikko # nm /opt/test-pgm/lib/ | grep pgm_send
[09:44] mikko 00020910 T pgm_send
[09:44] mikko gcc version 4.4.5 (Debian 4.4.5-8)
[09:44] benoitc guido_g: thanks
[09:44] mikko ia32
[09:45] djc gcc (Gentoo 4.4.5 p1.2, pie-0.4.5) 4.4.5
[09:45] djc let me try .116
[09:45] djc (on amd64)
[09:46] mikko just compiled on amd64 as well
[09:46] mikko the symbol is still there
[09:46] djc are you familiar with the openpgm community?
[09:47] mikko yes
[09:47] mikko well, steven
[09:48] djc ah! objdump -T on the has pgm_send
[09:48] djc so would it be safe to use zeromq-2.1.5 (or .5.1 or whatever will be released soon) with openpgm-5.1.116?
[09:50] djc ah, it's probably this:
[09:50] djc and it looks like the changes between .115 and .116 are fairly minor
[09:50] djc (there's only one other change)
[09:51] mikko should be ok
[09:52] mikko can you comment on the ticket as well?
[09:52] mikko on the progress
[09:52] djc will do
[09:52] mikko so that other people possibly facing the same issue can benefit
[09:52] mikko pieterh: have you merged message fixes to 2.2 ?
[09:54] djc mikko: so we like to remove bundled libraries from the process to make sure they don't get built
[09:54] djc but that seems to fail
[09:55] djc ah wait, is pgm still optional with zeromq-2.1?
[10:02] mikko yes
[10:03] djc it would be nice if the foreign/openpgm/ was optional
[10:03] mikko what do you mean "seems to fail" ?
[10:03] mikko autotools doesn't support that
[10:03] djc mm, ok
[10:03] mikko i had a long fight with autotools to get it even this flexible :)
[10:04] mikko
[10:04] djc ah, nice, I made it remove the libpgm tarball in there
[10:04] djc and leave the
[10:04] djc and that seems to work fine
[10:04] mikko it should
[10:04] mikko it would be nice to be able to make it fully conditional but that doesn't seem to be possible
[10:05] djc okay, so that was the without-pgm build, let's check the with-system-pgm build once more
[10:05] djc yup, works
[10:05] djc I think we're done here
[10:05] djc thanks a lot for the help!
[10:06] mikko no problem
[10:06] mikko hopefully debian and others follow soon
[10:07] mikko if debian packages zeromq 2.1 it should trickle down to ubuntu and kubuntu pretty soon(ish)
[10:07] djc so now that this is fixed, will be released soon?
[10:07] djc i.e. has the other 2.1.5 problem been fixed?
[10:07] mikko yes,
[10:08] mikko this is this morning's build
[10:08] mikko tests seem to pass at least
[10:14] benoitc guido_g: is what I'm trying to achieve
[10:15] guido_g ouch
[10:15] guido_g why accepts in multiple processes?
[10:16] benoitc to load balance the connection
[10:16] mikko you can accept and pass down the accepted handle
[10:16] guido_g accept in one process and send the fd of the accepted socket to a child
[10:16] mikko makes it easier architecturally to accept in one process
[10:16] mikko guido_g: >)
[10:17] benoitc well not really , here we let the os load balance the connection, he knows its job
[10:17] benoitc s/its/his
[10:17] guido_g ic
[10:17] mikko that's what i've been wondering
[10:18] mikko do you get a race condition when you have multiple acceptors?
[10:18] mikko or does the os invoke just one?
[10:18] benoitc the os invoke just one
[10:18] guido_g it damn complicated to get right, see apache code
[10:18] guido_g *its
[10:18] benoitc you share the socket handle between all the workers
[10:19] guido_g why should one do that?
[10:19] guido_g what os is that?
[10:19] benoitc well this is designed for architectures where you can't spawn os threads
[10:19] benoitc between cpus
[10:19] benoitc eg any python programs.
[10:20] benoitc any UNIX/POSIX system is designed for that
[10:20] guido_g huh?
[10:20] benoitc 1 python vm is locked to once CPU.
[10:20] benoitc one
[10:20] guido_g it'S not locked
[10:20] benoitc it is.
[10:21] benoitc gil and such
[10:21] guido_g threads are synchronized by the gil
[10:21] guido_g which is something completely different
[10:21] guido_g but with processes it would work
[10:21] benoitc well by process you mean os forks ?
[10:22] guido_g and what i suggested was exactly that: one process accepts the tcp connection and forwards the socket fd to a child process
[10:22] benoitc and what is your policy to load balance ?
[10:22] benoitc also that exactly what does teh os here
[10:22] benoitc you share the socket fd between forks
[10:22] benoitc and only one will accept
[10:23] guido_g no, between parent and child processes, for is system call
[10:23] guido_g *fork is a
[10:24] benoitc yes sure. so what are you calling process ?
[10:24] guido_g your diagram states that you do the accept in the workers, which is -- as some of us know -- a very very bad idea
[10:24] benoitc could be anything , os threads, threads or green threads or os processes
[10:24] benoitc not really that's unixish
[10:24] benoitc share a socket
[10:24] benoitc accept on it
[10:24] benoitc the os will load balance
[10:25] benoitc that's what nginx, gunicorn, unicorn and such do
[10:25] guido_g benoitc:
[10:25] guido_g benoitc: no, the os will not "loadbalance"
[10:25] benoitc yes it will.
[10:25] benoitc that what we use in gunicorn
[10:25] guido_g it will just allow the processes to run as they need according to the os's scheduling policy
[10:26] guido_g benoitc: ok, show me the load-balancing in the linux kernel, please
[10:26] benoitc and if it can.
[10:27] guido_g to distribute load is not load balancing
[10:27] benoitc if your cpu is busy or the process is it won't pass to it. that what we call load balancing
[10:28] benoitc that part works perfectly anyway. that's not the part i'm trying to achieve.
[10:29] guido_g load balancing is a process where the load is sent to a specific processing unit according to some algorithm
[10:29] guido_g the kernel and your code don't do that
[10:31] benoitc oh dear. load balancing only means that it will balance depending on the load, ie if smth is busy or not. that could be just a try/fail system, which is done by the os.
[10:31] benoitc anyway you're just here saying a common pattern in unix doesn't work.
[10:32] guido_g what?
[10:57] benoitc to be clear, all workers share the same listener and do a non-blocking accept. the kernel decides which worker process will get the socket and the others sleep if there is nothing to accept. this model works well. it's more the second part i have problems achieving
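A stdlib-only sketch of the shared-listener pre-fork model described here (worker count and message bodies are made up; assumes a POSIX system with fork()): one listening socket is created before forking, every worker blocks in accept() on the inherited fd, and the kernel hands each incoming connection to exactly one worker.

```python
import os
import socket

# One listening socket, created before fork(), shared by every worker;
# each worker calls accept() and the kernel wakes one of them per
# incoming connection - that is all the "balancing" there is.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", 0))        # port 0: let the OS pick one
listener.listen(8)
port = listener.getsockname()[1]

pids = []
for _ in range(2):                     # two workers for the sketch
    pid = os.fork()
    if pid == 0:                       # child inherits the listener fd
        conn, _ = listener.accept()
        conn.sendall(b"worker %d" % os.getpid())
        conn.close()
        os._exit(0)
    pids.append(pid)

replies = []
for _ in range(2):                     # act as two clients
    c = socket.create_connection(("", port))
    replies.append(c.recv(64))
    c.close()

for pid in pids:
    os.waitpid(pid, 0)
listener.close()
print(replies)
```

This is the pattern gunicorn/unicorn-style servers use; each accepted connection is served by whichever worker the kernel happened to wake.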
[12:17] th anyone interested in looking into an c++ example where multipart messages get mixed?
[12:21] th i collected information here: client ( and server ( with output.txt
[12:24] th every part of the multipart message is prefixed when sending. and the receiving side checks that all parts of the received mp have the same prefix. which fails
[12:25] th it only fails under high load
[12:25] th (meaning: while ./ ; do ; done)
[12:25] th aehh.. not ".cc" of course
[12:27] th the server also prints the identity (XREP) received as first part in hex
[12:33] pieterh th: can you post to the list?
[12:33] pieterh I'm not at a pc now but will look at it later
[12:48] th pieterh_: done.
[13:02] Balistic How can one do proper message routing (based on the contents of a message) in zmq?
[13:11] guido_g from the channel topic: Before asking for advice here, Please Read the Guide -
[13:12] Balistic guido_g: i read that a few months ago, had nothing on routing
[13:12] guido_g yeah
[13:13] guido_g but now is now
[13:14] Balistic i see, the guide is more than double in length compared to what it was
[13:18] mpales Balistic: would you like to use zeromq for stateful services?
[13:20] Balistic mpales: yes
[13:28] mpales zeromq is not quite there yet
[13:29] mpales and it's not quite its job to do application level routing
[13:30] mpales it would really help if zeromq had an "email" type socket
[13:31] mpales atm you need to do the address mapping yourself
[16:04] coopernurse if you have a REQ connected to REP and doing a send/recv loop
[16:04] coopernurse will the REQ socket reconnect automatically if the REP endpoint dies and is restarted on the same port?
[16:05] coopernurse I'm using 2.1 with the Python bindings
[16:06] coopernurse the behavior I'm seeing is the REQ process is still running, but idle after I restarted the REP process
[16:53] Toba that depends on whether you use a timeout on the recv portion. if you don't use pollitem_t to avoid calling recv() until you know there is a result, a "lost result" can hang a looping requestor.
[16:53] coopernurse Toba: ah, cool, thank you I'll read about that more
[17:01] Toba note to self: add timeout on the recv portion of my req/rep systems
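A sketch of the timeout pattern Toba describes, in pyzmq rather than the C pollitem_t API (the inproc endpoint is illustrative, and the dead REP peer is simulated here by a REP socket that simply never answers):

```python
import zmq

ctx = zmq.Context()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://svc")       # a REP that, in this sketch, never answers
req = ctx.socket(zmq.REQ)
req.connect("inproc://svc")

req.send(b"ping")

# Poll with a timeout instead of a bare recv(): if the reply is lost
# (dead/restarted REP), the loop regains control instead of hanging.
poller = zmq.Poller()
poller.register(req, zmq.POLLIN)
events = dict(poller.poll(timeout=200))   # milliseconds

reply = req.recv() if req in events else None
print("reply:", reply)   # None: time to discard and recreate the REQ socket

req.close(0)
rep.close(0)
ctx.term()
```

On timeout the REQ socket is stuck in its send/recv state machine, so the usual recovery is to close it and create a fresh one before retrying.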
[17:01] coopernurse :-)
[19:20] else- hm, anyone knows why my libzapi git checkout fails during compilation of zsocket.c with "zsocket.c:116:28: error: ‘ZMQ_XSUB’ undeclared"? i'm at commit 2c1bd38bfd1220226c04c03a39a28e62adf4c631
[20:24] Bwen I was looking thru the irc archive and I was wondering why there are 3 "2010-March"? O.o
[20:24] lt_schmidt_jr 3 subscribers consuming a published event :)
[21:28] hardwire ahoy.
[21:29] hardwire I don't suppose there is a way to subscribe to published data and filter it when the publisher sends json, is there?
[21:30] hardwire unless the json is ordered.
[21:42] lt_schmidt_jr hardwire, I don't understand the question
[21:43] hardwire subscription filters pass strings that start with the filter
[21:43] lt_schmidt_jr but I am using PUB/SUB with json with multipart with the first part of the message containing the data (bytes) used for routing
[21:43] hardwire since item order in json isn't static in most cases, it's hard to filter unless you post-process and subscribe to everything
[21:44] hardwire lt_schmidt_jr: so you're concatenating a json string and a routing string?
[21:45] lt_schmidt_jr no - ZMQ allows for multipart messages
[21:45] lt_schmidt_jr I use 2 parts
[21:46] lt_schmidt_jr 1st is the routing data
[21:46] lt_schmidt_jr 2nd is the json
[21:46] lt_schmidt_jr string
[21:48] lt_schmidt_jr you can just use the 2nd part for whatever it is you intend to do with it
[21:52] hardwire what api are you using?
[21:52] lt_schmidt_jr i am using jzmq
[21:53] lt_schmidt_jr java, but it's available in the c api as well, should be in others
[21:53] hardwire <- python
[21:56] hardwire ah.. spiffers
[21:56] hardwire danke
[21:57] lt_schmidt_jr did you find it?
[22:08] hardwire you're a genus.
[22:09] hardwire send + SNDMORE flag
[22:09] lt_schmidt_jr yes
[22:09] hardwire send_json w/o flag
[22:09] hardwire and recv_multipart
[22:09] lt_schmidt_jr correct
[22:09] hardwire works for me.
[22:09] lt_schmidt_jr I am also a specie
[22:09] lt_schmidt_jr :)
[22:10] hardwire I could do the json using the json module.. but I like the zmq module handing it for now :)
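A pyzmq sketch of the two-frame pattern lt_schmidt_jr describes (topic names, payloads, and the inproc endpoint are made up; the short sleep is a crude guard against the pub/sub slow-joiner race): the first frame carries the routing bytes, the second the JSON, so subscription filtering never touches the JSON at all.

```python
import json
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://feed")
sub = ctx.socket(zmq.SUB)
sub.connect("inproc://feed")

# Filter on the first (routing) frame only; key order inside the
# JSON second frame no longer matters.
sub.setsockopt(zmq.SUBSCRIBE, b"sensors.temp")
time.sleep(0.1)                # let the subscription propagate

pub.send_multipart([b"sensors.temp", json.dumps({"c": 21.5}).encode()])
pub.send_multipart([b"sensors.hum", json.dumps({"rh": 40}).encode()])  # filtered out

topic, body = sub.recv_multipart()
print(topic, body)

sub.close(0)
pub.close(0)
ctx.term()
```

The publisher side is the `send` with the SNDMORE flag (or `send_multipart`/`send_json` as above) mentioned a few lines later; the subscriber reads both frames with `recv_multipart`.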
[22:10] else- do you guys know whether there are prebuilt debian packages for libzmq and libzapi somewhere?
[22:10] lt_schmidt_jr not me
[22:10] hardwire I do
[22:11] hardwire else-: which debian?
[22:11] else- squeeze
[22:11] hardwire you can install the experimental packages directly.
[22:11] hardwire I backported them from experimental
[22:11] else- i didn't find libzapi though
[22:11] hardwire ah.. I'm not familiar with it.
[22:12] else- well i heard about 0mq just recently and would like to try out the examples, and as far as i've understood they use libzapi
[22:13] hardwire else-: I don't see their existence.
[22:13] hardwire are you familiar with building libzapi yourself?
[22:13] else- i tried, but it fails (see my msg at 09:20pm)
[22:14] hardwire I wasn't here then.. dunno if I could help either
[22:14] hardwire maybe.. however
[22:14] hardwire do you have build-essentials installed?
[22:14] else- ok, wait
[22:14] else- yes
[22:15] hardwire did you build libzmq from source?
[22:15] else- my libzapi git checkout fails during compilation of zsocket.c with "zsocket.c:116:28: error: ‘ZMQ_XSUB’ undeclared"
[22:15] else- hardwire: yes
[22:15] hardwire what was the make error?
[22:19] else- hardwire:
[22:20] else- and configure completes without any errors
[22:20] hardwire looks like your zmq build didn't install headers for some reason
[22:21] hardwire or there is a version problem.. I'm not sure
[22:22] else- i have installed zmq 2.0.11
[22:22] lt_schmidt_jr else-: you should avoid using 2.0 - many issues resolved in 2,1
[22:22] lt_schmidt_jr 2.1
[22:23] else- ok, i'll try that
[22:23] else- thanks
[22:24] hardwire else-: I'm guessing zapi wants 2.1
[22:24] else- let's see :)
[22:25] hardwire I'm integrating zmq with some django apps... right now I just need message queueing per django thread so this is gonna be spiffers.
[22:29] else- awesome, works now :) thanks guys
[22:30] lt_schmidt_jr NP
[22:30] lt_schmidt_jr the more the merrire
[22:30] lt_schmidt_jr merrier