IRC Log

Saturday September 10, 2011

[Time] Name Message
[00:00] jcase guide confirms: [ØMQ] queues messages automatically when needed. It does this intelligently, pushing messages as close as possible to the receiver before queuing them.
[00:06] ninjas that's not exactly explicit confirmation, though. I also see this line in the guide: 'If you're using TCP, and a subscriber is slow, messages will queue up on the publisher. We'll look at how to protect publishers against this, using the "high-water mark" later.'
[00:08] ninjas the reason I ask is because I'm seeing behavior consistent with messages getting queued up on the sender side of things -- just wanted some confirmation one way or another, is all
[00:35] mikko ninjas: on the other hand those two aren't contradictory
[00:35] mikko it's still as close as possible
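The "high-water mark" the guide mentions is what bounds that sender-side queuing. A minimal pyzmq sketch, assuming libzmq 3.x or later (where the option is split into SNDHWM/RCVHWM); the endpoint name and limit are illustrative, not from the log:

```python
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.set_hwm(1000)                     # bound per-subscriber queuing
pub.bind("inproc://hwm-demo")
snd_hwm = pub.getsockopt(zmq.SNDHWM)  # read the option back → 1000
# Once a slow subscriber's queue reaches the HWM, a PUB socket drops
# further messages for that subscriber instead of queuing without bound.
pub.close()
ctx.term()
```

This matches the behavior ninjas observed: with TCP, queuing happens on the publisher side, and the HWM is the knob that caps it.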
[00:53] errordeveloper .. ..
[02:28] Alec I'm trying to connect two computers. I want to send a message to all the computers in the segment. Only one computer will be running the service on the specified port and when it gets the message it responds with another message containing the necessary data to establish a "permanent" connection. Is this possible?
[04:54] strich Quick question - Does anyone know of any decent python RPC wrappers for zeromq?
[06:04] guido_g why not pyro? it's made for rpc and object distribution
[06:32] strich How do you receive message parts in python?
[06:33] strich I know how to do it in C++
[06:35] guido_g http://zeromq.github.com/pyzmq/api/generated/zmq.core.socket.html#zmq.core.socket.Socket.recv_multipart
[06:35] guido_g or the standard way like in c/c++
[06:36] guido_g http://zeromq.github.com/pyzmq/api/generated/zmq.core.socket.html#zmq.core.socket.Socket.rcvmore <- short-cut for the check is also built in
[06:50] strich Cheers
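Both approaches guido_g links work the same way; a minimal sketch over an inproc PAIR pair (the endpoint name is illustrative):

```python
import zmq

ctx = zmq.Context()
a = ctx.socket(zmq.PAIR)
b = ctx.socket(zmq.PAIR)
a.bind("inproc://parts")
b.connect("inproc://parts")

a.send(b"part1", zmq.SNDMORE)
a.send(b"part2")

# The shortcut: all parts in one call.
parts = b.recv_multipart()            # → [b"part1", b"part2"]

# The C/C++-style loop, checking RCVMORE after each part.
a.send(b"x", zmq.SNDMORE)
a.send(b"y")
looped = []
while True:
    looped.append(b.recv())
    if not b.rcvmore:                 # same check as getsockopt(RCVMORE)
        break                         # last part received

a.close()
b.close()
ctx.term()
```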
[06:54] strich Is there API doco for C++ too btw?
[06:55] strich nm found it
[06:56] strich Anyone had any experience with zmq3 C++?
[10:29] mikko sustrik: there?
[15:57] mikko sustrik: there?
[16:20] timonato1 good day
[16:21] timonato1 i'm wondering about the router/dealer hello world example in the zeromq guide. the C implementation uses a while loop that does SNDMORE until getsockopt for RCVMORE isn't set any more
[16:21] timonato1 the python code, however, does not have any such loop
[16:22] timonato1 is that a problem with the code or is that a major difference in the zeromq api for python?
[16:22] timonato1 http://zguide.zeromq.org/py:rrbroker ← this is the example i'm talking about
[17:17] timonato1 huh. how come i send "hello" and a json-serialized message with SNDMORE once and then no flags, but get a bytestring, an empty message and then the two messages i sent from the other end? is that the "normal" behavior for XREP/ROUTER?
[17:17] timonato1 oh, it's even documented. silly me.
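The envelope timonato1 ran into can be seen directly: a ROUTER (XREP) socket prepends the peer's identity frame, and the REQ socket itself inserts the empty delimiter. A sketch, with the endpoint name and payload as illustrative stand-ins:

```python
import zmq

ctx = zmq.Context()
router = ctx.socket(zmq.ROUTER)
router.bind("inproc://envelope")
req = ctx.socket(zmq.REQ)
req.connect("inproc://envelope")

req.send(b"hello", zmq.SNDMORE)
req.send(b'{"k": 1}')                 # stand-in for the JSON payload

frames = router.recv_multipart()
identity = frames[0]                  # added by ROUTER: the peer identity
delimiter = frames[1]                 # added by REQ: empty delimiter frame
body = frames[2:]                     # the two parts actually sent

router.close()
req.close()
ctx.term()
```

So receiving "a bytestring, an empty message and then the two messages" on the ROUTER end is exactly the documented behavior.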
[20:12] MickeM is there any way to "signal" all my worker threads that it is time to "die" (closing the socket seems not to work)? it seems like a bother to have an additional control socket just for that...
[20:15] sustrik zmq_term()
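sustrik's answer in pyzmq terms: terminating the context makes every blocking call on its sockets fail with ETERM (raised as `zmq.ContextTerminated` in pyzmq), so workers can shut down without a separate control socket. A sketch; the endpoint name and the `sleep` (demo-only, to let the worker block before termination) are illustrative:

```python
import threading
import time
import zmq

ctx = zmq.Context()
sink = ctx.socket(zmq.PUSH)
sink.bind("inproc://work")

def worker():
    sock = ctx.socket(zmq.PULL)
    sock.connect("inproc://work")
    try:
        while True:
            sock.recv()               # blocks until work or termination
    except zmq.ContextTerminated:
        pass                          # ETERM: time to die
    finally:
        sock.close()                  # term() waits for open sockets

t = threading.Thread(target=worker)
t.start()
time.sleep(0.2)                       # demo only: let the worker block
sink.close()
ctx.term()                            # unblocks the worker's recv()
t.join()
```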
[20:16] sustrik mikko: hi
[20:31] mikko hi
[20:32] mikko sustrik: what do you think about LIBZMQ-252?
[20:32] sustrik been out of town consulting for past few days
[20:32] sustrik let me have a look
[20:32] mikko nice, long way?
[20:32] sustrik nope, paris
[20:33] mikko my condolences
[20:33] sustrik :)
[20:35] mikko i wonder if the invalid message should be discarded silently
[20:35] mikko rather than returning error
[20:35] mikko but then again, error tells the user that something went wrong
[20:35] sustrik it's in recv() function, right?
[20:36] sustrik if so, it should be discarded silently
[20:36] mikko yes
[20:36] mikko req socket recv
[20:36] sustrik something went wrong on the other side
[20:36] mikko how do you discard silently from there?
[20:36] mikko let me open the code
[20:36] sustrik i mean
[20:37] sustrik the assert happens when you get an invalid message from a peer
[20:37] mikko yes
[20:37] sustrik no point in letting the user know as it was not *his* error
[20:37] sustrik it's someone else's error
[20:38] sustrik the other side should be notified about the error by closing the connection imo
[20:38] sustrik so, for example
[20:38] sustrik if you connect by telnet
[20:38] sustrik and type some nonsense
[20:38] sustrik the telnet connection will be closed by the 0mq application on the other side
[20:39] mikko this usually happens when you send multipart message with incorrect parts
[20:39] mikko which is very easy to do by accident
[20:39] sustrik that means you've violated the protocol specs
[20:39] mikko hence i returned ENOCOMPATPROTO in req.cpp
[20:39] mikko but it might be wrong place for that
[20:40] mikko i thought of it from the perspective that req usually connects to a server
[20:40] sustrik the thing is that you should pass the error to the peer
[20:40] mikko and you want to know if the server is sending rubbish
[20:40] sustrik because it was the peer who sent the incompatible message
[20:40] mikko is there a way to do that inside the protocol?
[20:40] mikko like "remote error"
[20:40] sustrik understood
[20:41] sustrik this is a recurring theme
[20:41] sustrik there's a need for specialised "client" library IMO
[20:41] sustrik just one connection
[20:41] sustrik explicit disconnection notifications
[20:41] sustrik explicit errors
[20:41] sustrik etc.
[20:41] sustrik in 0mq as is
[20:42] mikko but for this specific case, should i just return 0 and not set errno?
[20:42] sustrik the problem is that you can have multiple connections even from req socket
[20:42] mikko that should cause the message to be discarded
[20:42] mikko but
[20:42] sustrik yes + you should close the connection
[20:42] sustrik as there's some strange, possibly malevolent, application on the other side
[20:42] mikko in that case error would be better
[20:43] mikko because in case of REQ socket
[20:43] mikko it would block on recv forever
[20:43] sustrik i have to implement auto-resend in REQ socket :|
[20:43] sustrik in such case the request could be resent to a different peer
[20:44] sustrik which is hopefully ok
[20:44] mikko that is probably the cleanest solution
[20:44] sustrik i think so
[20:44] sustrik but i still believe we need special "client" library
[20:45] sustrik which makes a hard assumption about having exactly 1 connection underneath
[20:45] sustrik that way it can, say, connect synchronously
[20:45] sustrik report the disconnection etc.
[20:45] sustrik maybe it can be built on top of 0mq as is
[20:46] sustrik but the API has to be different
[20:46] sustrik more TCP-like
[20:48] mikko how do you close a specific connection where the message came from?
[20:49] sustrik that's the hard part :)
[20:49] mikko does any of the sockets do that?
[20:49] sustrik it's done in a few places iirc
[20:49] sustrik the problem is that it's done in the I/O thread atm
[20:49] sustrik while the assert is in the client thread
[20:51] mikko discarding the message seems to be relatively straightforward
[20:51] sustrik mikko: yes, we can do that for now
[20:51] sustrik just make a mental note about closing the connection
[20:52] mikko i'll add a note in the code
[20:52] sustrik ok
[20:52] mikko is it ok to return ENOCOMPATPROTO
[20:52] mikko and check that in xreq.cpp
[20:52] mikko and retry recv in that case
[20:52] mikko might cause recv to block for a very long time on a stream of invalid messages
[20:53] mikko but that seems to be problem in sub as well
[20:53] sustrik yes, that's the consequence of not closing the connection
[20:53] sustrik malevolent client can block the socket
[20:53] sustrik never mind, we'll fix that later
[20:54] sustrik also note that there's fair queueing
[20:54] sustrik so if there's a client doing DoS
[20:54] sustrik and a decent one
[20:54] sustrik both get 50% of the time
[20:55] sustrik btw, use EAGAIN instead of ENOCOMPATPROTO
[20:56] mikko should the connection be closed in the future in req or xreq?
[20:56] mikko i guess EAGAIN makes sense
[20:56] sustrik req is a specialisation of xreq
[20:56] sustrik so if it's closed in xreq it's closed in req as well
[20:57] sustrik req can have its own failures though
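Why EAGAIN fits here: it is the errno callers already handle when a non-blocking recv has nothing to deliver, so "invalid message silently discarded, try again" looks identical to them. A user-level illustration in pyzmq (endpoint name is illustrative); pyzmq surfaces EAGAIN as the `zmq.Again` exception:

```python
import zmq

ctx = zmq.Context()
s = ctx.socket(zmq.PULL)
s.bind("inproc://empty")

got_again = False
try:
    s.recv(zmq.NOBLOCK)               # nothing queued yet
except zmq.Again:                     # pyzmq's wrapper for EAGAIN
    got_again = True

s.close()
ctx.term()
```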
[20:59] mikko i'll reset my git head later and update the patch
[21:00] sustrik thanks