[Time] Name | Message |
[07:38] CIA-17
|
zeromq2: Dhammika Pathirana master * r9a1d4df / src/session.cpp :
|
[07:38] CIA-17
|
zeromq2: fix typo, destroy new engine
|
[07:38] CIA-17
|
zeromq2: Signed-off-by: Dhammika Pathirana <dhammika@gmail.com> - http://bit.ly/bHakfe
|
[13:11] Steve-o
|
Any interest in a broadcast IPC bus? Just playing about with a design today
|
[13:27] sustrik
|
Steve-o: hi
|
[13:27] sustrik
|
you mean using multicast on loopback?
|
[13:27] Steve-o
|
no, a dedicated shared memory broadcast bus
|
[13:28] Steve-o
|
similar to 29 West's new broadcast IPC bus I guess
|
[13:28] sustrik
|
how would that work?
|
[13:28] sustrik
|
shared memory guarded by semaphores?
|
[13:28] Steve-o
|
no semaphores or locks
|
[13:29] Steve-o
|
one segment is a transmit window, with a trailing advance window similar to the PGM spec
|
[13:29] Steve-o
|
another segment is a list of receive window lead sequences and sleeping flags
|
[13:29] sustrik
|
i see
|
[13:30] Steve-o
|
when all the receive window sequences advance beyond the advance window lead, the trail can advance
|
[13:30] Steve-o
|
when a receiver indicates it is in a sleep state the sender needs to send a notification of a new sequence via a local socket
|
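For illustration, a rough C sketch of the two segments Steve-o describes; every name below is hypothetical and nothing here is an existing API, it just restates the design in struct form.

    #include <stdint.h>

    #define MAX_RECEIVERS 64

    /* Segment 1: the transmit window.  The sender writes at `lead` and may
       advance `trail` only once every receiver has moved past it. */
    struct tx_window {
        volatile uint64_t lead;      /* next sequence number to be written     */
        volatile uint64_t trail;     /* oldest sequence still held in the ring */
        /* a ring of fixed-size message slots follows in the same segment      */
    };

    /* Segment 2: one entry per attached receiver. */
    struct rx_state {
        volatile uint64_t lead;      /* last sequence this receiver consumed   */
        volatile uint32_t sleeping;  /* non-zero: wake via a local socket
                                        instead of busy-polling the segment    */
    };

    struct rx_table {
        uint32_t count;
        struct rx_state receivers [MAX_RECEIVERS];
    };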
[13:31] sustrik
|
there's already an ipc transport on 0mq level which more or less works this way
|
[13:31] sustrik
|
it's less efficient obviously as it uses named pipes rather than shared mem
|
[13:32] Steve-o
|
it's a design for big users with 8-core Nehalems really though
|
[13:33] sustrik
|
so the point is to improve perf over what named pipes can provide, right?
|
[13:33] Steve-o
|
yup, no fan out copying
|
[13:33] sustrik
|
makes sense
|
[13:33] Steve-o
|
I think I mean 8-socket Nehalems,
|
[13:33] sustrik
|
steve-o: on a different issue
|
[13:34] sustrik
|
the one we've discussed on the ML
|
[13:34] sustrik
|
i am thinking of how to make that work
|
[13:34] Steve-o
|
which issue?
|
[13:34] sustrik
|
multiple multicast groups
|
[13:34] sustrik
|
subscribed from different apps
|
[13:34] Steve-o
|
group/port overlap, ok
|
[13:35] sustrik
|
it's pretty dangerous as it is now
|
[13:35] sustrik
|
as deploying a new app on a box can crash other apps
|
[13:35] sustrik
|
that have so far worked well
|
[13:35] sustrik
|
so is it possible to get a multicast group address from openpgm along with a packet?
|
[13:36] sustrik
|
if so, it can be filtered on 0mq level
|
[13:36] Steve-o
|
I used to have the src & dst addresses per SKB, but they're very big
|
[13:36] sustrik
|
big?
|
[13:37] sustrik
|
isn't it 4 bytes for IPv4?
|
[13:37] Steve-o
|
ipv6 sockaddr struct
|
[13:39] Steve-o
|
that's what I'm passing around inside
|
[13:40] Steve-o
|
note that sequences regenerated by FEC will not have a destination address
|
[13:41] Steve-o
|
The preferable option is to allow the application developer to specify the data-destination port separately from the UDP encapsulation port
|
[13:42] Steve-o
|
Note that the network parameter allows you to subscribe to multiple multicast groups, so the destination address can be bogus
|
[13:42] Steve-o
|
e.g. "epgm://;239.192.0.1,239.192.0.2;239.192.0.3:7500"
|
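Purely as an illustration, a sketch of handing that example endpoint to a 0MQ subscriber, assuming the 2.x C API; whether zmq_connect accepts the full multi-group OpenPGM network syntax verbatim is an assumption here.

    #include <assert.h>
    #include <zmq.h>

    int main (void)
    {
        void *ctx = zmq_init (1);
        void *sub = zmq_socket (ctx, ZMQ_SUB);

        /* the example network string quoted above, passed through verbatim */
        int rc = zmq_connect (sub,
            "epgm://;239.192.0.1,239.192.0.2;239.192.0.3:7500");
        assert (rc == 0);

        rc = zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "", 0);
        assert (rc == 0);

        /* ... receive loop ... */

        zmq_close (sub);
        zmq_term (ctx);
        return 0;
    }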
[13:52] sustrik
|
sorry, was on phone
|
[13:54] sustrik
|
so FEC repairs cannot be filtered?
|
[13:54] sustrik
|
they have to have some value set in dest field though
|
[14:06] Steve-o
|
FEC repairs regenerate the payload and not the IP header
|
[14:07] Steve-o
|
Plus, not forgetting that the RFC states several times that the PGM protocol is not tied to a multicast address.
|
[14:08] sustrik
|
one thing i don't get is how it is supposed to be used then
|
[14:09] Steve-o
|
the only thing you can use is the PGM data-destination port
|
[14:09] sustrik
|
so all packets with the same destination port are considered part of a single "feed"
|
[14:09] Steve-o
|
you use multiple multicast groups for fancy scaling of large distribution networks
|
[14:09] Steve-o
|
correct
|
[14:10] sustrik
|
but that means that the application has to join *all* multicast groups
|
[14:10] sustrik
|
to be able to get the feed
|
[14:10] Steve-o
|
well, all the groups that are part of the feed.
|
[14:11] sustrik
|
aha, so it's not "one feed per port"
|
[14:11] sustrik
|
but rather "one feed per group of multicast addresses and a port"
|
[14:11] Steve-o
|
sounds better, yes
|
[14:12] Steve-o
|
so you can have an IPv4 and an IPv6 multicast group in the same feed if you really wanted to
|
[14:12] Steve-o
|
it's certainly coded that way
|
[14:13] sustrik
|
ok, so how can multiple groups per feed be used to scale the deployment up?
|
[14:13] Steve-o
|
that is a good question
|
[14:13] Steve-o
|
I do not know the answer
|
[14:14] sustrik
|
ok, i see
|
[14:15] sustrik
|
hopefully we'll figure out the best way to use the groups as we go on
|
[14:15] Steve-o
|
maybe one of the other PGM vendors uses it somehow; all the ones I have seen don't
|
[14:15] sustrik
|
thanks for the info
|
[14:15] Steve-o
|
honestly I've only ever seen multiple groups at LIFFE which kind of pioneered deploying asymmetric multicast
|
[14:16] sustrik
|
ack
|
[14:18] Steve-o
|
and then the entire system was canned before even half got online, lol
|
[14:21] sustrik
|
well, we should at least compile a guide of best practices as we get more experienced with the deployment
|
[14:21] sustrik
|
right now it's not obvious how to deploy the whole thing
|
[14:22] sustrik
|
wrong deployment can literally crash the system
|
[14:22] Steve-o
|
These are the notes I have currently, http://code.google.com/p/openpgm/wiki/OpenPgmConceptsTransport
|
[14:23] sustrik
|
great, i'll have a read
|
[14:23] Steve-o
|
Every site likes to do something different or push things to extremes (Fido)
|
[15:45] Lizito
|
Hey folks. I'm planning a parallel pipeline design where the ventilator and the sink are on separate threads. My noob question is....
|
[15:45] Lizito
|
Do I have to call zmq_init on each thread, or can I share the context cross-thread? I suspect the former is true.
|
[15:49] cremes
|
Lizito: you can share contexts across threads; create the socket in the thread where you will use it
|
[15:49] Lizito
|
Awesome, thanks!
|
[15:49] cremes
|
it's sharing sockets across threads where things get a little more dicey
|
[15:49] Lizito
|
They'll be separate sockets by design, so easy enough to create them in their own thread. 'preciate it.
|
[16:12] ptrb
|
a context is thread-safe, too
|
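A minimal sketch of the pattern cremes and ptrb describe, assuming the 0MQ 2.x C API and POSIX threads; the endpoint, socket types and thread bodies are placeholders. Only the context is shared; each thread creates and uses its own socket.

    #include <assert.h>
    #include <pthread.h>
    #include <zmq.h>

    /* One context for the whole process; each thread creates its own socket.
       tcp is used so the bind/connect order between the threads doesn't matter. */

    static void *ventilator (void *ctx)
    {
        void *push = zmq_socket (ctx, ZMQ_PUSH);
        assert (zmq_bind (push, "tcp://127.0.0.1:5557") == 0);
        /* ... push work out ... */
        zmq_close (push);
        return NULL;
    }

    static void *sink (void *ctx)
    {
        void *pull = zmq_socket (ctx, ZMQ_PULL);
        assert (zmq_connect (pull, "tcp://127.0.0.1:5557") == 0);
        /* ... collect results ... */
        zmq_close (pull);
        return NULL;
    }

    int main (void)
    {
        void *ctx = zmq_init (1);            /* zmq_init is called once */
        pthread_t t1, t2;
        pthread_create (&t1, NULL, ventilator, ctx);
        pthread_create (&t2, NULL, sink, ctx);
        pthread_join (t1, NULL);
        pthread_join (t2, NULL);
        zmq_term (ctx);
        return 0;
    }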
[16:27] mikko
|
another day, another list of bugs to fix
|
[16:30] sustrik
|
mikko: it's not that bad afaics
|
[16:30] sustrik
|
just ZeroMQPerl-master_ZeroMQ2-master_GCC failing afaics
|
[16:50] mikko
|
sustrik: yeah
|
[16:50] mikko
|
looks pretty good
|
[17:55] mato
|
account off
|
[18:15] starkdg
|
hello !
|
[20:24] starkdg
|
ding
|
[21:47] sustrik
|
starkdg: hi
|
[21:54] MattJ100
|
Does ZeroMQ support non-blocking client sockets? I haven't found it in the docs yet...
|
[22:01] starkdg
|
MattJ: use the ZMQ_NOBLOCK flag in the recv function
|
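A sketch of that call in the 0MQ 2.x C API, where zmq_recv takes a zmq_msg_t and ZMQ_NOBLOCK makes it return -1 with errno set to EAGAIN when nothing is queued; the helper name is made up.

    #include <errno.h>
    #include <stdio.h>
    #include <zmq.h>

    /* Returns 1 if a message was read, 0 if none was waiting, -1 on error. */
    static int try_recv (void *socket)
    {
        zmq_msg_t msg;
        zmq_msg_init (&msg);
        if (zmq_recv (socket, &msg, ZMQ_NOBLOCK) == 0) {
            printf ("got %zu bytes\n", zmq_msg_size (&msg));
            zmq_msg_close (&msg);
            return 1;
        }
        zmq_msg_close (&msg);
        return errno == EAGAIN ? 0 : -1;
    }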
[22:01] MattJ
|
Ah, thanks
|
[22:02] starkdg
|
sustrik: are there any plans to make zmq sockets interoperable with regular sockets? if so, when do you think that could happen?
|
[22:03] MattJ
|
Hmm, that would make it instantly return EAGAIN it seems - polling? :)
|
[22:04] starkdg
|
oh yeah, there's a zmq_poll function too
|
[22:04] MattJ
|
Aha, thanks
|
[22:04] starkdg
|
yeah, it returns EAGAIN if there's no message; you have to put it in a loop, or just use the zmq_poll function if you want to monitor more than one socket at a time, or do other things while polling
|
[22:05] MattJ
|
Got it, thanks - zmq_poll() seems to support "normal" sockets too
|
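A sketch of polling a 0MQ socket alongside an ordinary file descriptor, assuming the 2.x C API (where the zmq_poll timeout is in microseconds); the function and parameter names are made up.

    #include <assert.h>
    #include <zmq.h>

    static void wait_for_input (void *zsock, int plain_fd)
    {
        zmq_pollitem_t items [2];
        items [0].socket = zsock;  items [0].fd = 0;
        items [0].events = ZMQ_POLLIN;
        items [1].socket = NULL;   items [1].fd = plain_fd;
        items [1].events = ZMQ_POLLIN;

        int rc = zmq_poll (items, 2, 1000000);   /* wait up to one second */
        assert (rc >= 0);

        if (items [0].revents & ZMQ_POLLIN)
            { /* message waiting on the 0MQ socket */ }
        if (items [1].revents & ZMQ_POLLIN)
            { /* data waiting on the regular socket fd */ }
    }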
[22:05] starkdg
|
i've used the no_block option when i want to time out
|
[22:05] starkdg
|
i put it in a loop until the current time reaches a time set in the future
|
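A sketch of the timeout loop starkdg describes, assuming the 2.x C API; the helper name, the deadline handling and the 1 ms back-off are made up, and msg must already have been initialised with zmq_msg_init.

    #include <errno.h>
    #include <time.h>
    #include <zmq.h>

    /* Returns 0 when a message arrives, -1 on error or when the deadline
       passes (errno is left as EAGAIN in the timeout case). */
    static int recv_with_deadline (void *socket, zmq_msg_t *msg, int seconds)
    {
        time_t deadline = time (NULL) + seconds;
        while (time (NULL) < deadline) {
            if (zmq_recv (socket, msg, ZMQ_NOBLOCK) == 0)
                return 0;
            if (errno != EAGAIN)
                return -1;
            struct timespec ts = { 0, 1000000 };   /* back off for 1 ms */
            nanosleep (&ts, NULL);
        }
        errno = EAGAIN;
        return -1;
    }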
[23:00] sustrik
|
starkdg: no chance
|
[23:00] sustrik
|
there's no way to simulate real file descriptors from the user space
|