[04:20] <Remoun> "If you need to know the identity of the peer you got a message from, only the XREP socket does this for you automatically. For any other socket type you must send the address explicitly, as a message part."
[04:21] <Remoun> So how _do_ I get the address/identity?
[04:29] <the_hulk> what is the type identifier for a socket, and for a context, in the C API? or should i just declare them as void?
[04:44] <Remoun> the_hulk: they're opaque handles, void*
[04:54] <the_hulk> ok
[07:16] <sustrik> Remoun: just write it into the message
[07:17] <Remoun> I was/am looking for the value to write into the message :)
[07:17] <sustrik> think of something :)
[07:19] <Remoun> Relatedly, can I use semi-durable sockets such that I can actually address individual workers, but not have them eat memory when they're gone?
[07:19] <Remoun> I'm basically trying to distribute the 'broker' in the last example in the guide, while also adding a layer of authentication
[07:22] <sustrik> hm, that works only with the REQ/REP pattern
[07:22] <sustrik> when you don't set an identity, one is generated for you
[07:22] <sustrik> but the connections are still transient
[07:22] <Remoun> And it's generated by the REP side, right?
[07:23] <sustrik> the identity?
[07:23] <Remoun> Meaning if the worker actually talks to more than one broker, they'd have different IDs for that worker?
[07:23] <sustrik> yes
[07:23] <Remoun> Therein lies the catch...
[07:24] <Remoun> I need to avoid single points of failure, particularly with something as involved as the broker
[07:24] <Remoun> Yet synchronizing the 'availability' across more than one node (thread, process, etc.) is nigh impossible
[07:30] <Remoun> sustrik: any ideas?
[07:30] <sustrik> i don't follow
[07:30] <sustrik> what's the problem?
[07:32] <Remoun> Splitting the load-balancing across several brokers; I don't know how to approach that
[07:33] <sustrik> connect the client to several brokers?
[07:33] <Remoun> But then more than one broker would dispatch to the same worker
[07:33] <Remoun> simultaneously, that is
[07:35] <sustrik> you have to decide which pattern you're going to use
[07:36] <sustrik> load balancing makes sense only with REQ/REP and PUSH/PULL
[07:36] <sustrik> which one are you using?
[07:37] <Remoun> I was going for REQ/REP, but now that I think about it, it could work better with PUSH/PULL
[07:37] <sustrik> what does "more than one broker would dispatch to the same worker" mean with REQ/REP or PUSH/PULL?
[07:38] <Remoun> Well, a broker can only really service one request/pull at a time, right?
[07:39] <sustrik> right -- unless it dispatches it further, to a separate worker thread or somesuch
[07:41] <Remoun> I might just need to RTFM; I'll pore over the guide again
[07:46] <Remoun> Where can I read more about PUSH/PULL sockets/patterns? The guide doesn't talk much about them
[07:47] <sustrik> i think there's a chapter about it
[07:47] <sustrik> the one with the "ventilator"
[12:00] <the_hulk> How do i know that the server is down, from the client side?
[17:12] <ptrb> if I have a push/pull set up, with one pusher and multiple pullers, is there some way to have the "push" action target a specific "puller", absent some other out-of-band communication?
[17:13] <mikko> ptrb: have each "puller" subscribe to a generic and a puller-specific topic
[17:13] <mikko> and use the puller-specific topic to communicate with a specific puller
[17:15] <ptrb> so pub/sub instead
[17:16] <mikko> yes
[17:16] <mikko> PUSH/PULL load-balances the messages as well
[17:16] <mikko> i'm not sure if there is a way to message based on the identity of the client using push/pull
[17:16] <mikko> sustrik might know better
[17:17] <sustrik> ptrb: the PUSH socket does load balancing
[17:17] <sustrik> thus it decides itself which peer to send the message to
[17:18] <mikko> sustrik: i got solaris10 running as a build slave
[17:18] <mikko> running first tests now
[17:18] <mikko> will try installing windows later on
[17:18] <sustrik> wow!
[17:35] <mikko> mato: there?
[17:36] <sustrik> mikko: i think he's travelling atm
[17:39] <mikko> i just noticed that the way we unpack the pgm sources doesn't seem to be portable
[17:39] <mikko> the -C option to tar
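The portability concern here is that tar's -C option is a GNU/BSD extension rather than something POSIX guarantees; the portable spelling is to cd in a subshell before extracting. A sketch with made-up file names (the real build unpacks the pgm sources):

```shell
mkdir -p src unpacked
echo "hello" > src/file.txt
( cd src && tar -cf ../demo.tar file.txt )   # build a demo archive
# Instead of the non-portable: tar -C unpacked -xf demo.tar
( cd unpacked && tar -xf ../demo.tar )
cat unpacked/file.txt
```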
[17:40] <sustrik> shrug
[17:41] <sustrik> no idea myself
[18:47] <ptrb> is it possible to change the HWM behavior of a socket?
[18:48] <ptrb> or, failing that, poll to see the current, uh, water level?
[18:53] <mikko> watermark
[18:53] <mikko> yes, you can poll
[18:54] <mikko> it should not come back as writable if the hwm is reached
[18:57] <ptrb> am I stupid and missing what that function is?
[18:58] <mikko> what function?
[18:58] <ptrb> oh, you getsockopt on ZMQ_HWM?
[18:59] <mikko> zmq_poll
[18:59] <mikko> you cannot get the current number of messages in transit
[18:59] <mikko> but zmq_poll should not return the socket as writable if the hwm has been reached
[19:00] <ptrb> oh, okay. and that will work that way no matter what type of socket(s) you poll on?
[19:01] <mikko> i've only tested on push sockets
[19:02] <ptrb> hmm.
[19:03] <mikko> not sure about the pub socket
[19:03] <mikko> as the behavior of a pub socket when hwm is reached is to discard messages
[19:03] <ptrb> right, which I'm trying to work around
[19:04] <ptrb> looks like the "correct" solution here is to manually manage N ZMQ_PUSH sockets... which is what I was hoping to avoid... but..
[19:06] <ptrb> OK, thanks for the tip.. if anything else strikes you in the night, feel free to let me know :)
[19:06] <mikko> you could easily test zmq_poll + a pub socket
[19:06] <ptrb> yeah but that is more work than I can rightly manage at 8pm on a Friday :)
[19:08] <ptrb> cheers
[19:09] <mikko> http://zguide.zeromq.org/chapter:all
[19:09] <mikko> there is an example for zmq_poll there
[19:09] <mikko> you should be able to mod that with ease