[00:06] CIA-79: pyzmq: Min Ragan-Kelley master * r7921097 / (zmq/tests/__init__.py zmq/tests/test_poll.py): relax timeouts in tests ...
[00:54] <grantr> hey all
[00:55] <grantr> anybody using zmqmachine? having some issues with it and was wondering if it's still maintained
[00:55] <grantr> can't get the examples to run properly
[01:17] <grantr> never mind, it was a conflict with an old version of the zeromq library i installed a year ago
[02:51] <bettsp> Can anyone enlighten a newb on how to use ZeroMQ in a non-blocking way?
[02:51] <bettsp> It seems as if there is no way to be notified when data is available without blocking or polling
[02:52] <bettsp> i.e. zmq_poll blocks, and zmq_recv with ZMQ_NOBLOCK means I have to poll
[02:52] <minrk> if you use poll with a timeout, it will block for a finite time
[02:53] <bettsp> But I still have to have a thread parked in a loop
[02:54] <minrk> What do you want to happen?
[02:54] <bettsp> I'm more used to the NT model, where you have a threadpool and all of the requests come in on that pool
[02:55] <bettsp> I suppose it's alright if I can watch > 1 socket with poll (which it looks like I can)
[02:56] <minrk> you can watch any/all sockets with poll
[02:58] <bettsp> I think I've got an idea of how I want to write this - thanks for the hints (and letting me think out loud)
[02:59] <minrk> sure
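The poll-with-timeout approach minrk suggests can be sketched with pyzmq's `zmq.Poller`. This is a minimal illustration, not anyone's actual code; the PUSH/PULL pair and the `inproc` endpoint are just scaffolding so there is something to poll:

```python
import zmq

ctx = zmq.Context()
receiver = ctx.socket(zmq.PULL)
receiver.bind("inproc://poll-demo")
sender = ctx.socket(zmq.PUSH)
sender.connect("inproc://poll-demo")
sender.send(b"hello")

# A single Poller can watch any number of sockets at once.
poller = zmq.Poller()
poller.register(receiver, zmq.POLLIN)

# poll() blocks for at most `timeout` milliseconds, so a loop built
# around it regains control periodically even when no message arrives.
events = dict(poller.poll(timeout=100))
msg = None
if receiver in events and events[receiver] & zmq.POLLIN:
    msg = receiver.recv()

sender.close()
receiver.close()
ctx.term()
```

In a real server the `poll()` call sits inside the loop bettsp describes, with the timeout bounding how long each iteration can park the thread.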
[14:23] <mikko> good afternoon
[14:23] <mikko> been a quiet day it seems
[14:31] <pieter_hintjens> hi mikko
[14:32] <mikko> hi
[14:33] <pieter_hintjens> I gave a 0MQ presentation yesterday at softshake, in Geneva
[14:33] <pieter_hintjens> after about 20 minutes of bottom-up explanation of why we made 0MQ and what it does...
[14:34] <pieter_hintjens> some young guy raises his hand to ask, "so what does 0MQ do?"
[14:34] <calvin> haha oh gosh, how did you answer?
[14:37] <pieter_hintjens> calvin: I think I just laughed at him...
[14:38] <pieter_hintjens> but in any audience like that there's only going to be 10% who really get it
[14:40] <pieter_hintjens> anyhow, so I wrote a blog entry to kind of mess with the notion that you always have to test everything you write
[14:40] <pieter_hintjens> http://unprotocols.org/blog:17
[14:55] <cremes> pieter_hintjens: i saw that blog post go up the other day
[14:56] <cremes> i was going to comment, but then i saw a tweet (on another matter) that summed up my feelings
[14:56] <cremes> "you don't argue with crazy or ridiculous, you just laugh at it"
[14:56] <cremes> :)
[15:19] <pieter_hintjens> cremes: well, I guess almost no-one got the point
[15:27] <errordeveloper> pieter_hintjens: should I reblog you .. hm
[15:27] <errordeveloper> s/you/unprotocols.org\/blog:17/
[15:28] <errordeveloper> I mean, I quite like it :)
[17:29] <mikko> pieter_hintjens: i'll look at the builds soon
[17:29] <mikko> i already got access to new hardware
[17:29] <mikko> just need to figure out routing etc
[17:34] CIA-79: pyzmq: MinRK master * r3c0b196 / (zmq/core/error.pyx zmq/core/socket.pyx): force char* coercion of addresses outside nogil block ...
[17:34] CIA-79: pyzmq: MinRK master * red63650 / zmq/devices/monitoredqueue.pxd: tweak monqueue to avoid Cython warnings - http://git.io/Y7g2pw
[19:51] <jond> minrk: hi, further question about yr mux.py for 0mq 4.0
[19:52] <minrk> sure
[19:53] <jond> essentially it's a mapping scheme that allows messages in (from, to) to traverse back and forth
[19:53] <minrk> sure
[19:54] <jond> in yr case, do the workers essentially produce a single response, or could they over time emit multiple responses to participants who'd submitted earlier requests
[19:55] <minrk> In my case, there are 0-1 replies per request at the worker
[19:56] <minrk> but the client may send 100 requests and never ask for a reply
[19:58] <jond> ok and that's directed back to the original requester. I'm looking at a model where the workers are stateful and a request would initially get a response, but a subsequent request may result in a response to the many earlier requesters
[19:59] <minrk> Yes, in this case each reply corresponds to a request, and propagates to the requester
[19:59] <minrk> It should work the same as a pair of ROUTER sockets facing each other directly
[20:00] <minrk> where all clients are connected to all workers
[20:00] <minrk> (2.x-style identity-based ROUTERs, that is)
[20:00] <minrk> As long as you keep track of the label-prefix associated with a given reply, the ordering should not matter
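A minimal sketch of the identity-based ROUTER-to-ROUTER pattern minrk describes. The socket identities, the `inproc` endpoint, and the message bodies are all made up for illustration; the receive timeouts and the short sleep guard against the usual ROUTER race where a message is sent before the peers have exchanged identities:

```python
import time
import zmq

ctx = zmq.Context()

worker = ctx.socket(zmq.ROUTER)
worker.setsockopt(zmq.IDENTITY, b"worker-1")
worker.setsockopt(zmq.RCVTIMEO, 2000)  # fail instead of hanging forever
worker.bind("inproc://mux-demo")

client = ctx.socket(zmq.ROUTER)
client.setsockopt(zmq.IDENTITY, b"client-1")
client.setsockopt(zmq.RCVTIMEO, 2000)
client.connect("inproc://mux-demo")
time.sleep(0.2)  # let the peers exchange identities before routing

# The client addresses the worker by its identity in the first frame.
client.send_multipart([b"worker-1", b"request"])

# The worker sees [sender-identity, payload]; that identity is the
# routing prefix it must keep in order to reply.
prefix, payload = worker.recv_multipart()

# Having seen one request, the worker can send any number of replies,
# in any order, using the saved prefix.
for body in (b"reply-0", b"reply-1"):
    worker.send_multipart([prefix, body])

replies = [client.recv_multipart()[1] for _ in range(2)]
```

The point from the conversation is visible here: only the first request is needed to establish the routing prefix; after that, replies per request can be zero, one, or many.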
[20:01] <jond> so it can be done with the mux but could also be done with multiple pub/sub. The advantage of the mux is that i can ensure that only one worker offers each service, whereas with pub/sub multiple subscribers could join a pattern.
[20:01] <minrk> sure
[20:02] <minrk> this is for single-endpoint messaging
[20:02] <minrk> that single endpoint could, of course, map itself to multiple workers
[20:02] <minrk> My use case is essentially stateful multiplexed RPC
[20:03] <minrk> a collection of labeled remote namespaces, where code can be executed
[20:03] <jond> this all said, compared to the old queue-based xrep-xrep it's a lot more awkward to code, but i take Martin's point that it doesn't really fit well with the lower layers
[20:04] <jond> my case is similar except you may get a subsequent response later....
[20:05] <minrk> The MUX shouldn't care about ordering at all
[20:05] <minrk> once the worker receives *one* request from a client, it should be able to send as many replies back as it likes, in any order
[20:05] <minrk> it just needs that first request in order to establish the routing prefix
[20:06] <minrk> The actual IPython application is a good deal more complex, with PUB/SUB, load-balanced requests, multiplexed requests, and heartbeating
[20:07] <minrk> http://ipython.org/ipython-doc/dev/development/parallel_connections.html
[20:07] <jond> exactly, ordering to the worker is important in my case but that is automatically handled already
[20:08] <jond> now the devices have gone, there doesn't appear to be any easy way to get this back in the core though.
[20:08] <minrk> (unless you use pyzmq)
[20:09] <minrk> devices are removed from the *core* of libzmq, with the idea that they should be implemented in more nimble wrapper libraries or bindings
[20:10] <minrk> pyzmq includes the device source from 2.1, so you can still use devices with libzmq 3/4
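The devices minrk mentions live in pyzmq's `zmq.devices` module. A sketch of the classic queue device, assuming pyzmq is installed; the TCP ports are arbitrary and the timeouts are only there so a failure surfaces as an error rather than a hang:

```python
import zmq
from zmq.devices import ThreadDevice

# A QUEUE device shuttling between REQ clients (ROUTER frontend)
# and REP workers (DEALER backend), running in a background thread.
dev = ThreadDevice(zmq.QUEUE, zmq.ROUTER, zmq.DEALER)
dev.bind_in("tcp://127.0.0.1:5559")
dev.bind_out("tcp://127.0.0.1:5560")
dev.start()

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.setsockopt(zmq.RCVTIMEO, 2000)
req.connect("tcp://127.0.0.1:5559")
rep = ctx.socket(zmq.REP)
rep.setsockopt(zmq.RCVTIMEO, 2000)
rep.connect("tcp://127.0.0.1:5560")

# Round-trip a request through the device.
req.send(b"ping")
request = rep.recv()
rep.send(b"pong")
reply = req.recv()
```

Both ends *connect* here; the device owns both bind points, which is what makes it easy to add or remove clients and workers without reconfiguring anything.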
[20:18] <jond> minrk: cheers. that's an interesting link.
[20:21] <jond> minrk: what are the workers computing; simulations?
[20:22] <minrk> arbitrary Python code
[20:22] <jond> you send the code in the message?
[20:22] <minrk> yes
[20:23] <jond> so why does it need targeting to a specific worker?
[20:24] <minrk> depends on the workload - it can run load-balanced or multiplexed
[20:25] <minrk> but since it targets interactive use, things like 'import numpy' might be in one message, while 'a=numpy.zeros(…)' might be in a subsequent one
[20:26] <jond> better hope it's not 'exit()' as the message
[20:27] <minrk> that's fine, too - you are allowed to shutdown and startup engines all you want
[20:27] <minrk> since it's Python, it could be os.system('rm -rf /')
[20:29] <jond> this is why lisp has *read-eval*; you can set it to false so the expressions are read but not evaluated. then you can walk the code....
[20:29] <minrk> We absolutely can parse the code if we want, but we have no interest in protecting users from themselves
[20:30] <jond> so if a worker dies, it has to be restarted manually somehow
[20:31] <minrk> if it is still needed
[20:33] <jond> minrk: cheers, gotta shoot
[20:43] <mikko> pieter_hintjens: there?
[20:53] <mattbillenstein> hi all
[20:53] <mattbillenstein> anyone have experience with pyzmq + gevent?
[20:55] <minrk> I don't use gevent myself, but I've helped a couple people with gevent issues in the past, so I might be of some use
[20:56] <mattbillenstein> cool, I'm writing an rpc server and I'm getting deadlocked somewhere
[20:58] <minrk> hm, that might be a bit beyond me
[20:59] <minrk> Are you using straight pyzmq, or traviscline's gevent-zeromq wrapper?
[20:59] <mattbillenstein> the latter
[21:00] <mattbillenstein> I basically bind an xrep socket then go into an infinite loop doing sock.recv_multipart()
[21:01] <mattbillenstein> for each message, I spawn a greenlet to do some work, and link that to a function in the current greenlet to pass the result back through the socket
[21:02] <mattbillenstein> at some point, I get blocked on the recv() forever
[21:03] <mattbillenstein> with some number of greenlets running
[21:04] <mattbillenstein> if I connect from another process and send another message to unblock the recv(), it'll resume for a while
[21:04] <deam> I'm trying to convert my application which uses AMQP to ZMQ; what it does right now is pub/sub and rpc calls (which are similar to req/rep calls in ZMQ). I want data to be transferred over one socket; what socket type should I choose? Is there a way to combine types?
[21:08] <cremes> deam: http://zero.mq/zg if you haven't read it yet
[21:09] <cremes> covers lots of patterns
[21:09] <cremes> there is no way to combine pub/sub, req/rep and other types over a single socket
[21:09] <minrk> mattbillenstein: Could it have to do with the zmq FD being edge-triggered?
[21:10] <deam> I have read some parts of that.. ah, that was my conclusion.. I need two sockets then :-(
[21:12] <mattbillenstein> minrk: how do you mean?
[21:13] <mattbillenstein> hmm
[21:14] <Steve-o> anyone have an idea about the present status of jzmq?
[21:14] <mattbillenstein> maybe this is simpler than I thought — gevent-zeromq doesn't override send_multipart/recv_multipart — just send/recv
[21:14] <deam> minrk: can I abuse the rep/req socket to simulate a sub/pub scenario?
[21:15] <minrk> deam: no, not unless your sub/pub is not really sub/pub at all
[21:15] <deam> minrk: well can I fire-and-forget from one endpoint?
[21:15] <minrk> matt: true, but the _multipart methods do call the overridden send/recv
[21:15] <minrk> req/rep enforce a strict send/recv alternation
[21:16] <minrk> so once you send, you have to recv before you send again
[21:16] <deam> ah
[21:16] <minrk> Other sockets do not enforce this ordering
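The strict alternation minrk describes is enforced by the REQ socket's state machine: a second consecutive send fails with EFSM instead of being queued. A minimal sketch over `inproc` (the endpoint name is illustrative):

```python
import zmq

ctx = zmq.Context()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://reqrep-demo")
req = ctx.socket(zmq.REQ)
req.connect("inproc://reqrep-demo")

req.send(b"first")
state_error = None
try:
    req.send(b"second")   # violates the send/recv alternation
except zmq.ZMQError as e:
    state_error = e.errno  # EFSM: operation not valid in current state

rep.send(rep.recv())       # REP echoes the request back
reply = req.recv()         # after a recv, REQ may send again
```

This is why deam's fire-and-forget idea doesn't fit REQ/REP: without a matching recv, the socket simply refuses the next send.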
[21:16] <deam> so I must work with acks then
[21:16] <deam> thing is, two sockets is a big problem.. the design enforces a single tcp port
[21:16] <mattbillenstein> minrk: cool, I thought that would probably be the case
[21:18] <minrk> matt: can you post a simple test case that locks up?
[21:20] <minrk> recv *should* block itself, waiting on the next message to arrive, but presumably gevent should continue while the recv waits on the read_event
[21:20] <mattbillenstein> minrk: Yup, I'll see if I can package something up
[21:21] <minrk> but I get confused by concurrent tools, so simple cases help me out
[21:26] <mattbillenstein> hmm
[21:27] <mattbillenstein> I think I might know what's happening — I'm using the same greenlet to send/recv
[21:27] <mattbillenstein> so when I'm blocked on recv, I can't send
[21:27] <minrk> ah
[21:27] <minrk> that makes sense
[21:27] <mattbillenstein> and since I can't send a response, my client is deadlocked essentially
[21:27] <mattbillenstein> i.e., it's doing a .join() on a bunch of concurrent requests
[21:30] <mattbillenstein> so the io loop isn't blocked, which is why I can connect from another client, send a msg, and get the whole thing started again
[21:52] <mattbillenstein> seems like a hack, but if I recv(zmq.NOBLOCK) and put a small sleep in the exception handler I can get around this
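mattbillenstein's workaround is a poll loop in disguise: try a non-blocking recv, and sleep briefly on EAGAIN so other tasks get a turn. Sketched here with plain pyzmq and `time.sleep`; in a gevent app the sleep would be `gevent.sleep`, which is what actually yields to other greenlets:

```python
import time
import zmq

def recv_yielding(sock, interval=0.01, max_attempts=200):
    """Non-blocking recv that sleeps between attempts, so other
    tasks can run while this one waits for a message."""
    for _ in range(max_attempts):
        try:
            return sock.recv(zmq.NOBLOCK)
        except zmq.Again:        # EAGAIN: no message available yet
            time.sleep(interval)
    return None                  # gave up waiting

ctx = zmq.Context()
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://hack-demo")
push = ctx.socket(zmq.PUSH)
push.connect("inproc://hack-demo")
push.send(b"work")

msg = recv_yielding(pull)
```

The cleaner fix, per the diagnosis above, is to keep send and recv in separate greenlets so a blocked recv can never starve the replies.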
[23:49] <mikko> steve jobs died