[Time] Name | Message |
[08:00] sjampoo
|
morning, are the pollset and waitfd as discussed on the mailing list going to be part of 2.0.7, or is this something which will be implemented somewhere in the future? I would love to have an estimate on that, i.e. a month / a year.
|
[08:15] sustrik
|
jugg: yes, see zmq_tcp(7)
|
[08:21] jugg
|
thanks
|
[08:22] sustrik
|
sjampoo: i'll add it after 2.0.7 release
|
[08:22] sustrik
|
however, the API would still be unstable
|
[08:23] sustrik
|
presumably, it can be done in a better way
|
[08:23] sustrik
|
everything depends on performance impact
|
[11:43] mato
|
sustrik: are you there?
|
[12:03] sustrik
|
mato: hi
|
[12:03] mato
|
sustrik: you wrote that ZMQ_HWM is per-peer; what happens when there are no peers?
|
[12:04] sustrik
|
exceptional behaviour is triggered
|
[12:04] sustrik
|
drop/block
|
[12:04] mato
|
yes, but when?
|
[12:04] sustrik
|
on send
|
[12:05] mato
|
ok, but immediately, right?
|
[12:05] sustrik
|
right
|
[12:53] mato
|
sustrik: the problem i'm having is that it's impossible to explain flow control accurately without dealing with the *actual* peers connected on a socket
|
[12:54] mato
|
sustrik: this is further complicated by the fact that the term "connect" is overloaded
|
[12:55] mato
|
sustrik: for example, it's really hard to explain coherently the state when a socket "has no peers"
|
[12:56] mato
|
sustrik: which means that it has been connected with zmq_connect(), but no peers exist
|
[13:05] sustrik
|
mato: don't even try to explain that
|
[13:05] sustrik
|
what has to be documented is
|
[13:05] sustrik
|
1. dealing with overloads (drop vs. block)
|
[13:06] sustrik
|
2. HWM socket option - this is used to prevent out-of-memory rather than for flow control
|
[13:07] mato
|
sustrik: it's impossible to explain accurately what HWM *means* without dealing with the existence of the individual message queues associated with a socket!
|
[13:07] mato
|
same for drop/blocking behaviour
|
[13:08] sustrik
|
mato: yes
|
[13:08] mato
|
yes?
|
[13:08] sustrik
|
HWM is the maximal number of messages destined for one peer that the socket may hold
|
[13:09] sustrik
|
the connection doesn't have to exist
|
[13:09] sustrik
|
even the peer may not be running at the moment
|
[13:10] mato
|
the problem is "the connection" != "the socket connection"
|
[13:10] sustrik
|
i prefer to say "there's no concept of connections in 0mq"
|
[13:11] mato
|
then we need to rename the concept of connections to something else :-)
|
[13:11] sustrik
|
what do you mean by connection?
|
[13:11] mato
|
sctp got around this by using the term "association" IIRC
|
[13:12] mato
|
sustrik: what i mean by connection is the "something" created by zmq_connect() or zmq_bind() :-)
|
[13:13] sustrik
|
bind creates nothing, just an endpoint
|
[13:13] sustrik
|
a name
|
[13:13] sustrik
|
do you mean message pipe/queue?
|
[13:13] mato
|
and connect? does what? :-)
|
[13:13] sustrik
|
attaches to endpoint + creates a queue
|
[13:14] sustrik
|
bind just creates the endpoint, creation of associated queue is async
|
[13:14] sustrik
|
queues*
|
[13:14] sustrik
|
but that's invisible to user
|
[13:14] sustrik
|
and - imo - should not be mentioned in documentation
|
[13:14] sustrik
|
what user needs to know is:
|
[13:15] sustrik
|
1. socket may speak to multiple peers
|
[13:15] sustrik
|
2. socket has a routing algorithm
|
[13:15] sustrik
|
(which decides which peer gets which message)
|
[13:15] sustrik
|
3. HWM limits number of messages routed to particular peer
|
[13:16] sustrik
|
messages in memory*
|
[13:22] mato
|
see, with 1. the terminology problem kicks in again:
|
[13:22] mato
|
current doc:
|
[13:22] mato
|
A
|
[13:22] mato
|
'ZMQ_REQ' socket may be connected to multiple peers; each request sent is
|
[13:22] mato
|
load-balanced among all connected peers.
|
[13:24] sustrik
|
well, what's the problem with that?
|
[13:24] sustrik
|
"connected" word?
|
[13:24] mato
|
i think so
|
[13:25] sustrik
|
just drop it
|
[13:25] sustrik
|
"load-balanced among all peers."
|
[13:25] sustrik
|
that's more precise actually
|
[13:37] mato
|
crap, i still don't know how to explain the relationship between socket <-> endpoints <-> queues <-> peers
|
[13:40] sustrik
|
:)
|
[13:40] mato
|
sustrik: let me try and put my problem another way
|
[13:40] mato
|
you keep saying that "there are no connections", and that we should not talk about them in the documentation
|
[13:41] mato
|
but the explanation of ZMQ_HWM is directly dependent on the individual connections to peers
|
[13:41] mato
|
or at least the queues created by those connections
|
[13:42] sustrik
|
there's a peer
|
[13:42] sustrik
|
there's a queue associated with the peer
|
[13:43] sustrik
|
actual connection to the peer is invisible to the user
|
[13:43] mato
|
so i do need to talk about the queues associated with a socket's peers
|
[13:43] sustrik
|
what's wrong with the explanation above:
|
[13:43] sustrik
|
<sustrik> 1. socket may speak to multiple peers
|
[13:43] sustrik
|
<sustrik> 2. socket has a routing algorithm
|
[13:43] sustrik
|
<sustrik> (which decides which peer gets which message)
|
[13:43] sustrik
|
<sustrik> 3. HWM limits number of messages routed to particular peer
|
[13:44] mato
|
it's not an explanation
|
[13:44] sustrik
|
?
|
[13:45] mato
|
it needs to be uncompressed :-)
|
[13:47] sustrik
|
ah, ok
|
[13:47] mato
|
anyway, i'm trying
|
[13:47] mato
|
maybe it'd help if you were sitting next to me ...
|
[13:51] sustrik
|
hm
|
[13:51] sustrik
|
let's give it one more try
|
[13:52] sustrik
|
socket may speak to multiple peers
|
[13:52] mato
|
if we get it right you won't have to answer nearly as many emails :-)
|
[13:52] sustrik
|
yes, i know
|
[13:52] sustrik
|
actually, it's not even peers
|
[13:53] sustrik
|
because the peer may not even be running at the moment
|
[13:53] mato
|
see, there is a concept missing
|
[13:53] sustrik
|
what about endpoint
|
[13:53] sustrik
|
we use that. no?
|
[13:53] sustrik
|
so, there are endpoints on the network
|
[13:53] mato
|
yes, but at the moment we talk about connecting to endpoints, which overloads the word connect
|
[13:54] sustrik
|
they are pure virtual entities
|
[13:54] sustrik
|
such as multicast group
|
[13:54] sustrik
|
there's no real thing corresponding to multicast group, right?
|
[13:54] mato
|
not really
|
[13:54] sustrik
|
it's just a name
|
[13:54] sustrik
|
ideal concept
|
[13:55] sustrik
|
the 0MQ endpoint is the same kind of thing
|
[13:55] sustrik
|
now, a socket can "implement" the endpoint
|
[13:55] sustrik
|
by binding to it
|
[13:56] sustrik
|
another socket may announce its desire to speak to a particular endpoint
|
[13:56] sustrik
|
(same thing as joining a multicast group)
|
[13:56] sustrik
|
that's "connect"
|
[13:57] sustrik
|
the whole point is that there may be no application associated with a particular endpoint
|
[13:57] sustrik
|
endpoint = name
|
[13:58] sustrik
|
each socket is speaking to multiple peers
|
[13:58] sustrik
|
it does so either by binding to an endpoint or connecting to an endpoint
|
[13:58] mato
|
"speaking to" is not a technical term :-)
|
[13:58] sustrik
|
communicating
|
[14:00] mato
|
how does one refer to a peer a socket is communicating with?
|
[14:00] sustrik
|
peer?
|
[14:01] sustrik
|
anyway, the problem is that peers may not be alive, online and reading messages
|
[14:01] sustrik
|
in such case we may limit number of messages in memory destined for a single peer by setting HWM
|
[14:02] sustrik
|
i know, the whole explanation is creaky, but at least it gives some intuitive insight
|
[14:03] mato
|
creaky, yes :-(
|
[14:07] mato
|
sustrik: so you don't want to talk about message queues associated with sockets at all?
|
[14:08] sustrik
|
do we need to?
|
[14:08] sustrik
|
i am a fan of occam's razor
|
[14:09] mato
|
ok, let me try and see if i can explain everything by removing all references to "message queue associated with socket ..." :-)
|
[14:09] sustrik
|
;)
|
[14:29] mato
|
sustrik: ok, read this:
|
[14:30] mato
|
The 'ZMQ_HWM' option shall set the high water mark for the specified 'socket'.
|
[14:30] mato
|
The high water mark is a hard limit on the maximum number of outstanding
|
[14:30] mato
|
messages 0MQ shall queue in memory for any single peer that the specified
|
[14:30] mato
|
'socket' is communicating with.
|
[14:30] mato
|
If this limit has been reached for all peers then the socket shall enter an
|
[14:30] mato
|
exceptional state, and depending on the socket type 0MQ shall take appropriate
|
[14:30] mato
|
action such as blocking or dropping newly sent messages; refer to
|
[14:30] mato
|
linkzmq:zmq_socket[3] for details. The exceptional state shall persist until
|
[14:30] mato
|
the number of outstanding messages for at least one peer falls below the low
|
[14:30] mato
|
water mark; the low water mark shall be computed automatically by 0MQ.
|
[14:30] mato
|
avoids using the term "connected to" anywhere
|
[14:30] mato
|
or mentioning message queues
|
[14:31] mato
|
i can even change "shall queue in memory" to "shall hold in memory" if queue is a no-no :-)
|
[14:31] sustrik
|
doesn't matter
|
[14:31] mato
|
sustrik: further, then related to this, the following paragraph for ZMQ_REQ:
|
[14:31] mato
|
in zmq_socket(3)
|
[14:31] sustrik
|
the problem is that it's not true
|
[14:31] mato
|
what is not true?
|
[14:32] sustrik
|
the part about the exceptional state
|
[14:32] sustrik
|
what you've written applies to, say, REQ/REP
|
[14:32] sustrik
|
doesn't apply to PUB/SUB
|
[14:32] mato
|
hmm, crap, ok
|
[14:32] sustrik
|
:)
|
[14:32] sustrik
|
try to express it in a simple manner
|
[14:33] mato
|
so how about we keep the 1st paragraph only?
|
[14:33] mato
|
and then refer to the individual socket types?
|
[14:33] sustrik
|
yes
|
[14:33] mato
|
1st paragraph being this bit:
|
[14:33] mato
|
The 'ZMQ_HWM' option shall set the high water mark for the specified 'socket'.
|
[14:33] mato
|
The high water mark is a hard limit on the maximum number of outstanding
|
[14:33] mato
|
messages 0MQ shall queue in memory for any single peer that the specified
|
[14:33] mato
|
'socket' is communicating with.
|
[14:33] sustrik
|
yes
|
[14:33] mato
|
ok, so then the relevant paragraph for REQ sockets:
|
[14:33] mato
|
When a 'ZMQ_REQ' socket enters an exceptional state due to having reached the
|
[14:33] mato
|
high water mark for all peers, or if there are no peers at all, then any
|
[14:33] mato
|
_zmq_send()_ operations on the socket shall block until the exceptional state
|
[14:33] mato
|
ends or at least one peer is connected; messages are not discarded.
|
[14:33] mato
|
makes sense?
|
[14:34] mato
|
i still have a problem there with "at least one peer is connected"
|
[14:34] mato
|
note the careful absence of "connecting" anywhere else :-)
|
[14:35] sustrik
|
"until at least one peer becomes available for sending"
|
[14:36] mato
|
do i want to talk about when exactly the exceptional state ends?
|
[14:36] mato
|
or just be vague and mysterious?
|
[14:36] sustrik
|
be vague
|
[14:36] sustrik
|
it depends on lwm
|
[14:36] sustrik
|
which is invisible
|
[14:36] mato
|
:-(
|
[14:36] sustrik
|
what's wrong with that?
|
[14:36] sustrik
|
you have the same thing with TCP
|
[14:37] mato
|
ok
|
[14:37] sustrik
|
and you don't even realise it
|
[14:38] mato
|
sustrik: can i use the following summary table for sockets:
|
[14:38] mato
|
.Summary of ZMQ_REQ characteristics
|
[14:38] mato
|
Compatible peer sockets:: 'ZMQ_REP'
|
[14:38] mato
|
Direction:: Bidirectional
|
[14:38] mato
|
Send/receive pattern:: Send, Receive, Send, Receive, ...
|
[14:38] mato
|
Queuing strategy:: Load-balanced
|
[14:39] mato
|
Flow control:: Block
|
[14:39] mato
|
maybe queueing strategy should be "Routing strategy" ?
|
[14:40] mato
|
and Flow control should be called something else, "ZMQ_HWM behaviour" ?
|
[14:41] lvh
|
Is it just me, or did 0MQ involve a broker some time ago?
|
[14:41] lvh
|
I'm seeing all this brokerless stuff and I don't get it.
|
[14:41] mato
|
lvh: no, it never did... there was a zmq_locator in 1.x but that did something else
|
[14:42] mato
|
sustrik: halo?
|
[14:42] sustrik
|
mato: here i am
|
[14:42] mato
|
sustrik: see above
|
[14:42] sustrik
|
yes
|
[14:42] sustrik
|
send/receive pattern is kind of funny
|
[14:43] sustrik
|
but i don't care
|
[14:43] mato
|
brian suggested that
|
[14:43] sustrik
|
it's ok
|
[14:43] mato
|
and the others?
|
[14:43] sustrik
|
there are actually two strategies
|
[14:43] sustrik
|
one for outgoing messages, one for incoming messages
|
[14:44] sustrik
|
say REQ load balances outgoing messages
|
[14:44] mato
|
yeah, that makes sense
|
[14:44] mato
|
what it does with incoming messages you don't really care
|
[14:44] sustrik
|
and receives only from the peer it sent the last request to
|
[14:45] mato
|
hmm, how can I put that in one sentence...
|
[14:45] sustrik
|
well, for say REP socket, the incoming stratege is fair queueing
|
[14:45] sustrik
|
strategy*
|
[14:46] sustrik
|
wouldn't it be better to explain what the socket does in one sentence
|
[14:46] mato
|
I do that already
|
[14:46] mato
|
but I wanted a summary table in there
|
[14:46] sustrik
|
instead of splitting it into 4 different rows in a table
|
[14:46] mato
|
sustrik: if you were here you would see what i have on my screen :-(
|
[14:46] sustrik
|
ok, let's go through the table
|
[14:47] sustrik
|
compatible sockets is OK
|
[14:47] mato
|
do we want a table or not?
|
[14:47] mato
|
I thought it would be good to explain in prose first, and then have a table at the end
|
[14:47] lvh
|
Okay. We have a bunch of entry points for mobile devices that speak JSON-over-HTTPS and Thrift-over-SSL. They send stuff to an AMQP broker (RabbitMQ): a queue for persisting, which roundrobins stuff to persisters that write to a database, and then a pubsub thing for a live web interface. I don't understand how you do the load balancing bit in ZMQ. How do persisters register? How do they know *where* to register if there is no central
|
[14:47] lvh
|
broker?
|
[14:48] sustrik
|
mato: i would omit the strategies from the table, they are explained in the text already
|
[14:49] mato
|
sustrik: well, the table is nice for a quick look
|
[14:49] mato
|
sustrik: if it can be done for all sockets correctly
|
[14:49] sustrik
|
lvh: so devices are sending messages to a central node (broker) which then load-balances them among persisters, right?
|
[14:49] lvh
|
sustrik: Yes, that's one part of the behavior
|
[14:49] lvh
|
sustrik: Let's focus on that now :)
|
[14:50] lvh
|
sustrik: That's a single AMQP queue.
|
[14:50] sustrik
|
mato: you can try but strategies need textual description anyway
|
[14:50] sustrik
|
lvh: is that req/rep scenario?
|
[14:50] lvh
|
What does the failure mode look like? What happens to messages when a persister grabs a message and then blows up?
|
[14:50] sustrik
|
are the persisters sending replies back to the devices?
|
[14:51] sustrik
|
lvh: unreliable
|
[14:51] lvh
|
sustrik: Ideally yes, to confirm "hey I wrote this thing and the db says its okay"
|
[14:51] sustrik
|
you can run zmq_queue in the middle
|
[14:51] sustrik
|
it's like an AMQP queue
|
[14:52] sustrik
|
then both devices and persisters can connect to it
|
[14:52] lvh
|
Okay, cool :-) I'll read up on that one.
|
[14:52] lvh
|
sustrik: How about the pubsubbing? Is that with zmq_queue too?
|
[14:52] sustrik
|
no, that's zmq_forwarder
|
[14:53] lvh
|
Aha! Okay, I'll read the docs for those two and try to figure it out. Thanks! :-)
|
[14:54] mato
|
sustrik: so what's the incoming strategy for ZMQ_SUB?
|
[14:56] sustrik
|
fair-queuing
|
[14:59] mato
|
does this work:
|
[15:00] mato
|
REQ:
|
[15:00] mato
|
Compatible peer sockets:: 'ZMQ_REP'
|
[15:00] mato
|
Direction:: Bidirectional
|
[15:00] mato
|
Send/receive pattern:: Send, Receive, Send, Receive, ...
|
[15:00] mato
|
Outgoing routing strategy:: Load-balanced
|
[15:00] mato
|
Incoming routing strategy:: Last peer
|
[15:00] mato
|
Flow control:: Block
|
[15:00] mato
|
REP:
|
[15:00] mato
|
Compatible peer sockets:: 'ZMQ_REQ'
|
[15:00] mato
|
Direction:: Bidirectional
|
[15:00] mato
|
Send/receive pattern:: Receive, Send, Receive, Send, ...
|
[15:00] mato
|
Incoming routing strategy:: Fair-queued
|
[15:00] mato
|
Outgoing routing strategy:: Last peer
|
[15:00] mato
|
Flow control:: Drop
|
[15:02] sustrik
|
you have it mixed up
|
[15:02] sustrik
|
the upper is ZMQ_REQ
|
[15:03] sustrik
|
the lower is ZMQ_REP
|
[15:03] mato
|
that's what i wrote
|
[15:03] mato
|
sustrik: what i'm asking is whether that kind of table makes sense
|
[15:03] sustrik
|
oh, orry
|
[15:04] sustrik
|
sorry
|
[15:04] sustrik
|
yes, it makes sense
|
[15:04] mato
|
the reason i want to put it in there beside the prose is brian wrote:
|
[15:04] mato
|
In my understanding, 0MQ sockets
|
[15:04] mato
|
can be described in a uniform manner by specifying things like:
|
[15:04] mato
|
* Bidirectional or unidirectional
|
[15:04] mato
|
* send/recv pattern (ssssss, rrrrrrr, srsrsrsr, etc.)
|
[15:04] mato
|
* Outbound/inbound message queuing pattern (load balanced, fair queued)
|
[15:04] mato
|
* Number of allowed clients.
|
[15:05] mato
|
* How multiple clients are handled.
|
[15:05] mato
|
* Number of allowed in flight messages (synch, async)
|
[15:05] mato
|
* Algorithm used when the queue fills.
|
[15:05] mato
|
* Allowed peer socket types.
|
[15:05] mato
|
* I may be missing certain things.
|
[15:05] mato
|
* How identities are used in message routing.
|
[15:05] mato
|
The main problem that I see right now is that some of these things are
|
[15:05] mato
|
not clearly documented. Making a list of all these things and
|
[15:05] mato
|
documenting them for each socket type would be immensely helpful and
|
[15:05] mato
|
clarify the abstraction of a 0MQ socket.
|
[15:05] mato
|
...
|
[15:05] mato
|
so it seems that a table would help
|
[15:05] sustrik
|
yes, sure
|
[15:05] mato
|
sustrik: agree?
|
[15:05] sustrik
|
yes
|
[15:05] sustrik
|
let's use your current table
|
[15:05] sustrik
|
if there are more things to specify
|
[15:05] sustrik
|
we can add it later
|
[15:06] mato
|
ok, good
|
[15:06] mato
|
it makes PAIR look scary:
|
[15:06] mato
|
Compatible peer sockets:: 'ZMQ_PAIR'
|
[15:06] mato
|
Direction:: Bidirectional
|
[15:06] mato
|
Send/receive pattern:: Unrestricted
|
[15:06] mato
|
Incoming routing strategy:: N/A
|
[15:06] mato
|
Outgoing routing strategy:: N/A
|
[15:06] mato
|
Flow control:: Block
|
[15:06] mato
|
lots of N/A :)
|
[15:07] mato
|
sustrik: one more thing needs to go in that table
|
[15:07] sustrik
|
:)
|
[15:07] mato
|
sustrik: which is what brian calls "number of allowed clients", and what he means is whether a socket can be many-to-many/many-to-one/etc
|
[15:07] mato
|
what is that "thing" called? :-)
|
[15:08] sustrik
|
arity
|
[15:08] sustrik
|
cardinality
|
[15:08] sustrik
|
dunno
|
[15:08] mato
|
hmm, neither of those really work
|
[15:08] sustrik
|
any normal 0mq socket allows many peers
|
[15:09] sustrik
|
pair is pathological case
|
[15:09] sustrik
|
just note in the text
|
[15:09] mato
|
already noted
|
[15:09] sustrik
|
word 'pathological' should be definitely present :)
|
[15:18] lvh
|
sustrik: So, in a typical setup, would these brokers and forwarders have dedicated machines? I'm assuming that you *DO* actually need to know where your forwarder/queue is.
|
[15:18] sustrik
|
yes, you have to connect to it
|
[15:19] sustrik
|
having a dedicated machine seems like overkill
|
[15:19] sustrik
|
depends on what you are doing
|
[15:19] sustrik
|
if you are NASDAQ, you probably want a dedicated machine...
|
[15:19] lvh
|
well, we're currently on EC2
|
[15:20] lvh
|
I suppose we could move to RC and use a tiny 256M box, and even that's overkill.
|
[15:20] sustrik
|
then use a single box :)
|
[15:21] mato
|
%^#$^
|
[15:22] mato
|
sustrik: so how do i translate this without mentioning message queues/pipes:
|
[15:22] mato
|
REP:
|
[15:22] mato
|
If the connection designated for sending the reply is full, then ZMQ
|
[15:22] mato
|
drops the message and send completes successfully.
|
[15:23] sustrik
|
that's the flow control bit?
|
[15:23] mato
|
yes
|
[15:24] sustrik
|
if there's not enough space to store the message, it'll be dropped
|
[15:25] mato
|
sustrik: it needs to be explained in the context of REP and HWM
|
[15:25] mato
|
help me out here
|
[15:26] sustrik
|
if the requester is not receiving replies and the number of outstanding replies reaches the HWM, any further replies will be dropped
|
[15:30] mato
|
sustrik: how about this?
|
[15:30] mato
|
When a 'ZMQ_REP' socket enters an exceptional state due to having reached the
|
[15:30] mato
|
high water mark for a _client_, then any replies sent to the _client_ in
|
[15:30] mato
|
question shall be dropped until the exceptional state ends.
|
[15:30] sustrik
|
yes, why not
|
[16:05] mato
|
sustrik: are you still there?
|
[16:11] sustrik
|
mato: yes
|
[16:12] mato
|
sustrik: in a moment i will commit my changes
|
[16:12] mato
|
sustrik: i've also added some general text to zmq_socket in an attempt to explain the interesting bits
|
[16:12] mato
|
sustrik: once i commit it can you review this please?
|
[16:12] sustrik
|
sure, i will
|
[16:12] sustrik
|
one more question, btw
|
[16:12] sustrik
|
there's a "zmqd" thing in the trunk
|
[16:13] sustrik
|
should we drop the devices in favour of zmqd
|
[16:13] sustrik
|
or keep both for the time being
|
[16:13] sustrik
|
?
|
[16:13] mato
|
sustrik: oh, btw, since we're removing app_threads, zmq_socket will no longer return EMTHREAD ever?
|
[16:13] mato
|
sustrik: as for zmqd, i'm not sure, i've not had a chance to review it
|
[16:13] sustrik
|
actually, it will
|
[16:14] sustrik
|
when it reaches max socket count
|
[16:14] sustrik
|
as with POSIX
|
[16:14] mato
|
sustrik: for 2.0.7 i would ignore zmqd, and keep packaging the current devices
|
[16:14] sustrik
|
ok
|
[16:14] mato
|
sustrik: since the devices/zmqd really need a proper review
|
[16:15] sustrik
|
fine
|
[16:15] sustrik
|
what about renaming EMTHREAD to EMFILE?
|
[16:15] sustrik
|
probably not
|
[16:15] sustrik
|
just an idea
|
[16:15] mato
|
what does this value have to do with it?
|
[16:16] mato
|
// Maximal number of OS threads that can own 0MQ sockets
|
[16:16] mato
|
// at the same time.
|
[16:16] mato
|
max_app_threads = 512,
|
[16:16] mato
|
from config.hpp
|
[16:16] sustrik
|
yes, that's max thread count
|
[16:16] sustrik
|
kind of like max socket count in POSIX OS
|
[16:16] sustrik
|
max fd count i meant
|
[16:17] mato
|
ok
|
[16:17] mato
|
in that case I'll change the error explanation to just say:
|
[16:18] mato
|
*EMTHREAD*::
|
[16:18] mato
|
The number of application threads using sockets within this 'context' has been
|
[16:18] mato
|
exceeded.
|
[16:18] mato
|
and nothing else
|
[16:18] sustrik
|
there's no concept of application thread now
|
[16:18] sustrik
|
i would just say
|
[16:18] sustrik
|
"maximal number of sockets exceeded"
|
[16:18] sustrik
|
it's not precise
|
[16:18] mato
|
sorry, yeah
|
[16:18] mato
|
Maximum btw
|
[16:19] mato
|
I don't know where you keep getting maximal from :-)
|
[16:19] sustrik
|
but nobody going to experience the error
|
[16:19] sustrik
|
slovak language
|
[16:19] mato
|
oh, you never know, there may be people using more than 512 threads
|
[16:19] sustrik
|
those are doomed anyway
|
[16:19] sustrik
|
:)
|
[16:19] mato
|
not if you imagine that they're running on some freaky 48-core box :-)
|
[16:20] sustrik
|
supercomputing use cases
|
[16:20] sustrik
|
guys on blue gene should be smart enough to figure it out
|
[16:20] cremes
|
mato: is zmq_init still taking a threads parameter? if so, does your latest doc update explain its meaning better?
|
[16:21] mato
|
cremes: it is, but won't be for 2.0.7
|
[16:21] mato
|
hence the doc update
|
[16:21] cremes
|
so in 2.0.7 the call to zmq_init will no longer take any arguments?
|
[16:21] mato
|
correct
|
[16:22] cremes
|
good
|
[16:22] sustrik
|
not correct
|
[16:22] cremes
|
oops
|
[16:22] sustrik
|
there's still io_threads parameter
|
[16:22] mato
|
yeah, that's right, sorry
|
[16:22] sustrik
|
size of working thread pool
|
[16:22] sustrik
|
app_threads and flags are dropped
|
[16:22] cremes
|
ah, ok
|
[16:23] cremes
|
i never understood what app threads were for...
|
[16:23] cremes
|
but now it doesn't matter
|
[16:23] mato
|
:-)
|
[16:23] sustrik
|
that's why we removed them
|
[16:23] mato
|
cremes: anything else you don't know what it's for?
|
[16:23] mato
|
cremes: maybe we can remove that too :-)
|
[16:23] cremes
|
ha!
|
[16:24] sustrik
|
seriously
|
[16:24] cremes
|
let me see... you may want to clarify that zmq_poll can return when *any* event occurs inside the library even if no sockets are readable/writable
|
[16:25] cremes
|
and the usec delay value is the maximum it may block and will likely return earlier
|
[16:25] sustrik
|
yeah, that's kind of confusing
|
[16:25] cremes
|
i know :)
|
[16:25] sustrik
|
i would like to block till the timeout expires but people seem to be so concerned about zmq_poll performance...
|
[16:27] cremes
|
sustrik: that behavior is fine as long as it is documented
|
[16:27] cremes
|
particularly the part that it's possible that no sockets are readable/writable
|
[16:27] sustrik
|
mato: can you add one sentence explaining that?
|
[16:28] cremes
|
i'd like to keep 0mq as fast as possible so that my choice of a slow language (ruby) doesn't hurt as much
|
[16:28] lvh
|
you should probably measure your bottlenecks first
|
[16:28] lvh
|
IO tends to be it but ZMQ is probably a tiny bit of that
|
[16:28] mato
|
sustrik: working on it now
|
[16:29] sustrik
|
lvh: that's true wrt latency
|
[16:29] sustrik
|
as for throughput it should be as fast as possible
|
[16:29] sustrik
|
because network stack is called only once in a while
|
[16:30] sustrik
|
so in most cases 0mq overhead is the only overhead there is
|
[16:30] sustrik
|
exact timeouts on zmq_poll require one call to gettimeofday per invocation
|
[16:30] sustrik
|
that can slow the whole thing down
|
[16:32] mato
|
sustrik: so poll returns any time, even if there are no events and timeout has not yet expired?
|
[16:33] sustrik
|
yes
|
[16:33] sustrik
|
the only guarantee is that it won't return _after_ the timeout has expired
|
[16:34] mato
|
IMPORTANT: The _zmq_poll()_ function may return *before* the 'timeout' period
|
[16:34] mato
|
has expired even if no events have been signaled.
|
[16:34] mato
|
this should do?
|
[16:34] mato
|
right after the RETURN value section
|
[16:34] sustrik
|
cremes?
|
[16:35] sustrik
|
looks like he's away; i, for my part, am happy with the wording
|
[16:35] mato
|
ok
|
[16:36] mato
|
poll is a hack anyway and needs to be redone :)
|
[16:36] sustrik
|
yeah, but the exact timeout would be a problem anyway :|
|
[16:38] CIA-17
|
zeromq2: Martin Lucina master * r7c9b09b / (7 files):
|
[16:38] CIA-17
|
zeromq2: Documentation: Flow control, zmq_socket(3)
|
[16:38] CIA-17
|
zeromq2: Mostly Flow control and additions to zmq_socket(3)
|
[16:38] CIA-17
|
zeromq2: Removed/changed lots of text regarding message queues
|
[16:38] CIA-17
|
zeromq2: More fixes for 2.0.7 changes - http://bit.ly/d24DAg
|
[16:38] mato
|
sustrik: ok, committed
|
[16:38] mato
|
sustrik: please take a look at the beginning of zmq_socket(3) and tell me if the added text is correct and helpful or not
|
[16:39] sustrik
|
lemme see
|
[16:44] sustrik
|
very nice
|
[16:44] mato
|
thanks
|
[16:44] mato
|
i wanted to emphasize the many-to-many and multiple endpoints
|
[16:44] sustrik
|
yes, that's good
|
[16:44] sustrik
|
it should be pointed out explicitly
|
[16:45] mato
|
great, then this should help
|
[16:45] mato
|
sustrik: beer & pizza?
|
[16:45] sustrik
|
i've just eaten
|
[16:45] sustrik
|
beer?
|
[16:46] mato
|
beer is good, but i need food too
|
[16:46] sustrik
|
where do you want to go?
|
[16:46] mato
|
somewhere outside
|
[16:46] sustrik
|
it's *COLD*
|
[16:46] mato
|
is it? i thought not really
|
[16:47] sustrik
|
14 degrees or something
|
[16:47] mato
|
sustrik: ok, how about randal? that has beer and pizza
|
[16:47] mato
|
sustrik: and no concert tonight, i just checked
|
[16:47] sustrik
|
is it open now?
|
[16:47] mato
|
yes, every day
|
[16:47] sustrik
|
ok then
|
[16:47] mato
|
see you there at half past seven then?
|
[16:48] sustrik
|
more or less
|
[16:48] sustrik
|
i have few emails to answer still
|
[16:48] mato
|
ok
|
[16:55] cremes
|
mato, sustrik: the zmq_poll rewording is fine
|
[16:55] sustrik
|
ack
|
[19:03] lvh
|
What's the recommended thing for debian stable packages?
|