[00:51] <travlr> Just noticed Pieter will be on FLOSS weekly Dec. 28 ...+1
[01:01] <lusis> indeed
[01:01] <lusis> looking forward to that one
[01:01] <lusis> stopped listening a while ago since the interviews got rather weak
[01:11] <travlr> lusis: yeah sometimes they can be lame. But Twit in general is pretty cool. I'm usually watching.
[01:12] <lusis> I guess I got tired of some of the banal questions from Merlyn
[01:12] <lusis> randall...whatever he goes by ;)
[01:12] <lusis> Maybe the target audience isn't me
[01:12] <lusis> heh
[01:13] <travlr> i wish they got more into the programming aspects.. they need a linux today as well for hackers.
[01:14] <tarcieri> sup there lusis
[01:14] <lusis> tarcieri: hola!
[01:15] <lusis> tarcieri: was reading the backlog. Looking forward to seeing what you come up with
[01:15] <tarcieri> I'm dabbling with plugging Celluloid into 0mq
[01:15] <tarcieri> yeah we'll see
[01:15] <lusis> tarcieri: wish I had more time to help =/
[01:15] <tarcieri> heh
[01:15] <tarcieri> I might have a bit of time on my hands here
[01:16] <lusis> good or bad "bit of time"?
[01:17] <tarcieri> haha
[01:17] <tarcieri> depends how you look at it I guess
[01:17] <lusis> heh
[01:36] <tarcieri> okay, so I'm confused
[01:36] <tarcieri> when I use explicit pairs
[01:36] <tarcieri> is that 1:1 for a particular TCP port?
[01:45] <tarcieri> maybe I really want push and pull
[01:46] <tarcieri> each node is a message sink
[01:46] <tarcieri> that can have N people sending it messages
[02:00] <tarcieri> so a pull sock can bind
[02:01] <tarcieri> and N clients can push
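
What tarcieri is describing is the PUSH/PULL fan-in pattern: the sink binds a PULL socket and any number of PUSH sockets connect to it. A minimal sketch against the 2.1-era C API (endpoint and message contents are illustrative):

    #include <string.h>
    #include <zmq.h>

    int main (void)
    {
        void *ctx = zmq_init (1);

        /* the sink: one PULL socket binds and collects from everyone */
        void *sink = zmq_socket (ctx, ZMQ_PULL);
        zmq_bind (sink, "tcp://*:5555");

        /* each of the N senders: a PUSH socket that connects to the sink */
        void *sender = zmq_socket (ctx, ZMQ_PUSH);
        zmq_connect (sender, "tcp://127.0.0.1:5555");

        zmq_msg_t out;
        zmq_msg_init_size (&out, 5);
        memcpy (zmq_msg_data (&out), "hello", 5);
        zmq_send (sender, &out, 0);          /* 2.x signature: (socket, msg, flags) */
        zmq_msg_close (&out);

        zmq_msg_t in;
        zmq_msg_init (&in);
        zmq_recv (sink, &in, 0);             /* the sink pulls from its local queue */
        zmq_msg_close (&in);

        zmq_close (sender);
        zmq_close (sink);
        zmq_term (ctx);
        return 0;
    }
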
[05:56] <sustrik> cremes: there?
[07:43] <CIA-79> libzmq: Martin Sustrik master * rb3cda2a / src/kqueue.cpp : Bug in kqueue poller fixed (issue 261) ...
[09:50] <CIA-79> libzmq: Paul Betts master * r1b706ac / (src/err.cpp src/err.hpp): Enable exceptions raising on assert on Win32 ...
[09:51] <CIA-79> libzmq: Martin Sustrik master * r68ab5f8 / AUTHORS : Paul Betts added to the AUTHORS file ...
[11:04] <mikko> sustrik: gj on the kqueue stuff
[11:04] <mikko> sustrik: looking at the UApycon push/pull convo
[11:04] <mikko> does this tie in with libzmq-160
[11:05] <sustrik> thanks
[11:05] <sustrik> yes, it's basically the same thing
[11:05] <sustrik> however, decent shutdown requires some API changes as well
[11:06] <sustrik> some kind of 2-phase shutdown
[11:06] <sustrik> 1. ask socket to shutdown
[11:06] <sustrik> 2. read the remaining queued messages
[11:06] <sustrik> 3. exit
[11:06] <sustrik> so, i guess, first we should decide on what exactly should be done and how
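
Step 1 had no API at this point; that missing call is exactly what the discussion is about. Steps 2 and 3 can be sketched with the existing 2.1-era calls, assuming sock is the socket being shut down and that something else has already told the application to stop accepting new work:

    /* steps 2-3 only: drain whatever is still queued, then close.
       step 1 (asking the socket itself to shut down) is the piece
       with no API yet -- the subject of this conversation. */
    zmq_msg_t msg;
    for (;;) {
        zmq_msg_init (&msg);
        if (zmq_recv (sock, &msg, ZMQ_NOBLOCK) != 0) {   /* queue drained (or error) */
            zmq_msg_close (&msg);
            break;
        }
        /* ... handle the remaining queued message ... */
        zmq_msg_close (&msg);
    }
    zmq_close (sock);   /* step 3 */
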
[11:12] <mikko> sustrik: this shutdown business might tie in with the dealer lost messages case
[11:13] <sustrik> yes, it does
[11:13] <sustrik> it doesn't solve the case of byzantine failure
[11:13] <sustrik> but no messages should be lost on decent shut down
[11:20] <mikko> sustrik: this is true
[11:34] <jond> sustrik: hi, what was the underlying issue with the kqueue? the patch seems to protect against multiple add/rm of fd?
[11:46] <cremes> sustrik: just got up... i'll be around in about 90 minutes if u still need me.
[11:56] <sustrik> jond: yes
[11:56] <sustrik> there was a case where a fd was removed *twice* from the pollset
[11:56] <sustrik> which caused the error
[11:57] <sustrik> cremes: everything's done already
[11:57] <sustrik> thanks for providing the osx box
[12:00] <cremes> sustrik: you are welcome; glad i could be of service
[12:00] <cremes> now i don't have to compile on osx with FORCE_POLL or whatever the macro is!
[12:01] <sustrik> great
[12:01] <sustrik> if there's another problem with osx in the future i'll ping you
[12:02] <CIA-79> libzmq: Ben Gray master * r9e000c8 / (src/dist.cpp src/msg.cpp src/msg.hpp): Patch for issue LIBZMQ-275. Dealing with VSM in distribution when pipes fail to write. ...
[12:03] <CIA-79> libzmq: Martin Sustrik master * r9b3e61a / AUTHORS : Ben Gray added to the AUTHORS file ...
[12:06] <cremes> ok
[14:00] <nbx> hello everyone, i've a problem. i installed 0MQ 2.1.10 on a debian linux distribution, however after everything's done and installed - i cannot run any of the tests bundled in the /perf directory. programs just don't respond after i run them, they hang. how to verify if i did something wrong while installing 0MQ?
[14:03] <sustrik> nbx: what command lines are you using to run them?
[14:03] <cremes> nbx: there should be a 'make test' or similar target for make that will try to run the tests
[14:07] <nbx> sustrik: i change to the /perf directory and run the command from the example with ./local_lat tcp://eth0:5555 1 100000
[14:07] <sustrik> what about the other peer?
[14:07] <nbx> cremes, the installation says that the performance test executables will be available in the /perf directory after building and they are..
[14:08] <cremes> nbx: you were asking how to make sure the lib built correctly; there is a make target that will run some
[14:08] <cremes> tests that *use* the library and confirm it was built correctly
[14:08] <nbx> sustrik: it says this is the local latency test, i'm not sure what other peer.. i literally copy-pasted the example from the 0mq website
[14:08] <cremes> the perf tests are a separate issue
[14:08] <nbx> aha i see
[14:08] <cremes> nbx: give us the url that you copied the example from
[14:09] <cremes> there is usually a local and a remote (2 peers)
[14:09] <nbx> http://www.zeromq.org/results:perf-howto
[14:09] <nbx> i ran the 1st command that tests local latency
[14:09] <cremes> wrong
[14:09] <sustrik> have a look two lines below that :)
[14:09] <cremes> read the text there... it tells you exactly what to do
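
For the record, the latency test needs both halves running: local_lat binds and waits, while remote_lat connects from a second shell or machine and drives the test. Something along these lines, with illustrative addresses (the argument order is address, message size, roundtrip count):

    # shell 1
    ./local_lat tcp://eth0:5555 1 100000
    # shell 2
    ./remote_lat tcp://192.168.0.1:5555 1 100000
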
[14:11] <nbx> now i feel dumb :) thanks for the help
[14:12] <cremes> nbx: don't feel dumb!
[14:13] <nbx> now the second question, is there any reason i should be getting segmentation fault errors when trying to connect a client to 0mq server? again, i used PHP examples from the introduction article
[14:13] <cremes> we are kind of "tough" on people to read the docs thoroughly, that's all
[14:13] <nbx> and how to determine why i got them?
[14:13] <cremes> segfaults are bad
[14:13] <cremes> you should not get them
[14:13] <nbx> well, i've been into message queues for about 2 weeks now, i do read everything usually but after the whole day of reading.. you miss things :)
[14:13] <mikko> nbx: does php segfault?
[14:14] <nbx> php script reports segfault, but the 0mq client fails to connect to 0mq server
[14:14] <mikko> hmm, sounds odd
[14:14] <mikko> can you try this:
[14:14] <mikko> gdb --args php script.php
[14:14] <mikko> then type run
[14:14] <mikko> and when it crashes do 'bt'
[14:15] <mikko> and put the backtrace to gist.github.com
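
Putting mikko's instructions together, the debugging sequence is:

    gdb --args php script.php
    (gdb) run
    ... reproduce the crash ...
    (gdb) bt

and the output of bt is what goes on gist.github.com.
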
[14:15] <mikko> also, what version of PHP and which OS?
[14:15] <nbx> php 5.3.8 and debian 6
[14:15] <mikko> have you got imagick or uuid extensions installed?
[14:16] <nbx> i got imagick
[14:16] <mikko> ok, thats it
[14:16] <nbx> alright, killing the imagick then
[14:16] <mikko> this is an old bug in glibc
[14:17] <mikko> you can get it to work by loading imagick after zmq
[14:17] <nbx> alright, going to try it, thanks a lot guys :) (i'll be back probably, this 0mq really owns)
[14:17] <mikko> this is a bug in initialising thread-local storage which causes a crash in libuuid
[14:17] <mikko> np
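
In practice mikko's workaround means changing the extension load order in the PHP configuration so zmq is loaded before imagick. Assuming the stock extension file names, the ini entries would look roughly like:

    extension=zmq.so
    extension=imagick.so
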
[14:23] <nbx> mikko that took care of it, thanks
[15:12] <CIA-79> libzmq: Bernd Prager master * r52bab42 / src/zmq.cpp : Missing bracket added ...
[15:32] <cremes> i need someone to explain to me the functions zmsg_wrap and zmsg_unwrap in the czmq library
[15:32] <cremes> is the intention for these calls to add and remove envelope information for XREP sockets?
[15:33] <mikko> pieter is your man
[15:33] <mikko> i would guess
[15:33] <cremes> he's in seoul... i hope someone else is using the czmq lib and can help me :)
[15:34] <cremes> i'm not confused by the code... i'm confused by the api as in "why does it exist?"
[16:09] <sustrik> cremes: i guess asking on list could help
[16:09] <cremes> writing a message now :)
[16:09] <sustrik> pieter may get to the list even in seoul
[16:09] <sustrik> :)
[16:12] <mikko> cremes: did you look at the source?
[16:12] <cremes> mikko: i did; i understand the source completely
[16:12] <cremes> it's not the mechanics of it that have me confused
[16:12] <cremes> it's the purpose of it
[17:08] <cremes> sustrik: in 3.x, would it make sense for the zmq_msg_t to be modified so it also carries a flag indicating label true/false?
[17:09] <cremes> e.g. int zmq_msg_label(zmq_msg_t *msg)
[17:09] <cremes> returns 0 for false, 1 for true
[17:09] <cremes> that would be instead of using getsockopt to test whether the last message received was a label
[17:10] <cremes> that makes more sense to me; thoughts?
[18:08] <indygreg> cremes: I dig your recent 3.x msg API change proposal. +100 from me
[18:09] <cremes> ha! are you gregory szorc?
[18:09] <indygreg> yup
[18:09] <cremes> i'm just reading your reply to the other thread... the idea of returning an int and testing the bits
[18:09] <cremes> is interesting
[18:09] <cremes> i hadn't considered that
[18:09] <indygreg> ... very similar to flagmatch
[18:10] <indygreg> except more C like
[18:10] <cremes> yes
[18:10] <cremes> i like the "less C like" version :)
[18:10] <cremes> but i could be happy with something like:
[18:10] <indygreg> well, that's why you have high-level languages or multiple APIs to be more explicit
[18:10] <cremes> int zmq_msg_flags(zmq_msg_t *msg)
[18:10] <cremes> and testing the result
[18:11] <cremes> right
[18:11] <indygreg> that works
[18:12] <cremes> looking through socket_base.cpp, i think something like zmq_msg_flags() would be a pretty easy replacement
[18:12] <cremes> for the current methodology
[18:14] <cremes> indygreg: do you like the name zmq_msg_flags() or is there something more idiomatic for C?
[18:21] <indygreg> cremes: zmq_msg_flags() sounds good to me! zmq_msg_fields() would be another choice. but flags is better IMO since I think "flag" in C implies bits
[18:22] <cremes> ok
[18:22] <cremes> i will reply to my own thread and modify the proposal
[18:30] <indygreg> I also think separate int zmq_msg_flags(zmq_msg_t *msg) and int zmq_msg_flag(zmq_msg_t *msg, int flag) would be a decent addition - at the risk of having some redundancy in the API
[18:31] <indygreg> cremes: ^^^
[18:32] <cremes> hmmm... you won me over to the "flags" api... why add this second one?
[18:32] <LodeRunner> hello - does creating an inproc socket allocate a file descriptor?
[18:32] <indygreg> LodeRunner: AFAIK inproc sockets are basically in-memory data structures
[18:32] <indygreg> so no
[18:33] <LodeRunner> indygreg: that's what I thought, but I'm getting "too many open files" and I see tons of 'unix' sockets in lsof when I raise the number of threads in my app
[18:34] <indygreg> cremes: I would prefer zmq_msg_flags() only, but some may not like the low-level nature of it. do you force people to do "if (zmq_msg_flags(msg) & ZMQ_MSG_MORE)"? do you write macros: #define TEST_ZMQ_MSG_MORE(flags) (flags & ZMQ_MSG_MORE) ?
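
Nothing below shipped in libzmq at the time; zmq_msg_flags() and ZMQ_MSG_MORE are the names being proposed in this conversation, not real calls. A sketch of the two styles indygreg contrasts:

    /* hypothetical API from the proposal above */

    /* style 1: raw bit test */
    if (zmq_msg_flags (&msg) & ZMQ_MSG_MORE) {
        /* another message part follows */
    }

    /* style 2: the same test wrapped in a macro */
    #define TEST_ZMQ_MSG_MORE(flags) ((flags) & ZMQ_MSG_MORE)

    if (TEST_ZMQ_MSG_MORE (zmq_msg_flags (&msg))) {
        /* another message part follows */
    }

Later libzmq releases ended up exposing essentially this through zmq_msg_more() and zmq_msg_get(msg, ZMQ_MORE).
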
[18:36] <indygreg> LodeRunner: most interesting. I'm not too familiar with the internals. maybe cremes, sustrik, or mikko can help you
[18:36] <cremes> indygreg: good point; let's see what folks on the ML have to say about it
[18:36] <cremes> LodeRunner: are you using ipc or inproc transport?
[18:37] <LodeRunner> cremes: inproc
[18:37] <cremes> LodeRunner: what version of 0mq?
[18:38] <cremes> and what os?
[18:38] <LodeRunner> cremes: currently 2.1.7, linux-x86
[18:38] <cremes> ok
[18:38] <cremes> in versions < 2.1.10, the internal commands were passed around using a "mailbox" structure
[18:39] <cremes> based off of unix sockets
[18:39] <cremes> in 2.1.10, that was changed
[18:39] <cremes> any chance you can upgrade and try again?
[18:39] <LodeRunner> cremes: oh, thanks for the info. time to upgrade, then!
[18:39] <LodeRunner> sure!
[18:42] <cremes> let us know if that fixes your issue; it could still be something else
[19:07] <LodeRunner> cremes: unfortunately, I'm getting the same behavior with 2.1.10
[19:08] <LodeRunner> on each new thread I'm creating 5 inproc sockets (1 SUB, 2 PUSH, 2 REP); for each new thread I add, lsof counts 12 more descriptors
[19:12] <LodeRunner> (it may also be relevant that one of the REP sockets is connected to a DEALER in a QUEUE zmq_device)
[19:37] <mikko> LodeRunner: which os?
[19:38] <LodeRunner> mikko: linux-x86
[19:38] <mikko> 12 sounds a bit excessive
[20:02] <LodeRunner> mikko: my mistake, I'm creating 6 inproc sockets (1 PUB, 1 SUB, 2 PUSH, 2 REP) per thread
[20:02] <LodeRunner> and there are 12 unix sockets being created, two per zmq_socket (I see the numbers go down as I comment out the zmq_socket's)
[20:15] <indygreg> LodeRunner: it might help if you post sample code
[20:16] <LodeRunner> indygreg: ok, I'll try to come up with a sample that isolates the issue
[20:16] <mikko> LodeRunner: why so many?
[20:17] <mikko> two fds per inproc socket doesn't sound that bad
[20:17] <mikko> one for each end
[20:17] <mikko> i don't know what the new implementation looks like under the hood
[20:23] <LodeRunner> mikko: you mean 6 zmq_sockets per thread is too many? I've been using them liberally, for various purposes (one to report to the logger thread, one to get tasks from the listener thread, one to get commands from a console thread, etc.)
[20:24] <mikko> LodeRunner: doesn't large amount of sockets add complexity?
[20:25] <LodeRunner> mikko: well, before that I was using threads+shared memory+locks to handle all this communication, so converting the code to threads+zeromq+messages simplified things
[20:26] <mikko> LodeRunner: i mean do you find it easier to handle than one socket per thread and using routing for messages?
[20:26] <mikko> if you do i guess its fine
[20:26] <mikko> there should be very little overhead
[22:11] <LodeRunner> mikko, cremes, indygreg: I just solved my issue of "Too many open files" by raising the value of max_sockets in config.hpp and recompiling zeromq
[22:11] <amoffatw> hi. i'm trying to understand how pub-sub works. when a subscriber sets its subscribe filter, does this filter register on the publisher side? or does every published message get sent to all subscribers, which filter them on their own?
[22:12] <amoffatw> the former makes the most sense
[22:12] <amoffatw> but i just want to be sure
[22:13] <minrk> amoffatw - depends on what version of zeromq
[22:13] <minrk> one of the major improvements in zeromq-3 (current beta) is publisher-side filtering
[22:13] <amoffatw> minrk, ah ok, so if i'm not using the beta, the filtering is subscriber side then
[22:13] <minrk> correct
[22:13] <amoffatw> ok that answers it, thanks minrk
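
For reference, the subscription filter being discussed is set per SUB socket with ZMQ_SUBSCRIBE; in 2.x the prefix match runs in the subscriber process, while 3.x moves it to the publisher side. A minimal 2.1-era sketch (endpoint and topic are illustrative):

    #include <zmq.h>

    int main (void)
    {
        void *ctx = zmq_init (1);
        void *sub = zmq_socket (ctx, ZMQ_SUB);
        zmq_connect (sub, "tcp://127.0.0.1:5556");

        /* deliver only messages whose body starts with "weather." */
        zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "weather.", 8);

        zmq_msg_t msg;
        zmq_msg_init (&msg);
        zmq_recv (sub, &msg, 0);     /* 2.x signature */
        zmq_msg_close (&msg);

        zmq_close (sub);
        zmq_term (ctx);
        return 0;
    }
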
[22:15] <mikko> LodeRunner: how many sockets do you have?
[22:17] <cremes> LodeRunner: i forgot about that! i always recompile my copy of 0mq to use 51200 sockets. :)
[22:18] <LodeRunner> mikko: my process now has 1374 sockets, with 100 threads, but that's including a bunch of TCP ports I'm listening as well.
[22:18] <mikko> LodeRunner: thats a fair amount of sockets
[22:19] <LodeRunner> cremes: I found this solution in one of the IRC logs, through Google, so I figured I'd better mention the solution of my problem here as well
[22:19] <mikko> hmm
[22:19] <mikko> i wonder if max sockets should be configurable in ./configure
[22:19] <mikko> fairly simple to change
[22:22] <LodeRunner> mikko: no idea how this number is used internally, but I think the only reason not to make this easy to tweak is if it breaks ABI
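
The constant LodeRunner changed lives in src/config.hpp of the 2.1.x sources; it is a compile-time enum along these lines (the exact default may differ between releases), so raising it and rebuilding libzmq lifts the per-context socket limit:

    /* src/config.hpp (2.1.x) -- sketch of the compile-time limit */
    enum
    {
        max_sockets = 512   /* raise this and rebuild to allow more 0MQ sockets */
    };
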
[23:21] <mbj> PULL sockets do not really pull from the server? So a busy worker executing a long job cloud easily have many unprocessed messages in its mailbox that are blocked for a long time?
[23:21] <mbj> s/cloud/could/
[23:27] <minrk> yes, in push/pull, the push is really a more accurate descriptor than pull, which only pulls from its local queue
[23:29] <mbj> minrk: thx for clarification.
[23:30] <mbj> minrk: Exactly what I understood, but since this is a key reason for one of my design decisions I had to double check this ;)
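
A related knob for the scenario mbj describes: in the 2.1-era API the depth of a worker's local queue can be capped with the ZMQ_HWM socket option, so a busy worker stops accumulating more than N unprocessed messages (whether further pushes block or get redistributed to other workers depends on version and topology). A minimal sketch with an illustrative endpoint:

    #include <stdint.h>
    #include <zmq.h>

    int main (void)
    {
        void *ctx  = zmq_init (1);
        void *pull = zmq_socket (ctx, ZMQ_PULL);

        /* cap the local queue at 10 messages; must be set before
           connect/bind to take effect (ZMQ_HWM is uint64_t in 2.x) */
        uint64_t hwm = 10;
        zmq_setsockopt (pull, ZMQ_HWM, &hwm, sizeof hwm);

        zmq_connect (pull, "tcp://127.0.0.1:5557");   /* the job source */

        /* ... receive and process jobs here ... */

        zmq_close (pull);
        zmq_term (ctx);
        return 0;
    }
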