IRC Log


Thursday October 21, 2010

[Time] Name  Message
[05:14] mikko sustrik: are you this early bird?
[05:24] sustrik mikko: hi
[11:38] pieterh mikko, ping
[11:58] mikko pong
[14:41] CIA-14 jzmq: Gonzalo Diethelm master * r5a221a5 / (src/Socket.cpp src/org/zeromq/ZMQ.java): Added support for [gs]etLinger, when using 0MQ 2.1.0 or newer. - http://bit.ly/9bu3X9
[16:45] mikko pieterh: adding zfl
[17:36] pieterh mikko: thanks! :-)
[17:38] pieterh mikko: in fact we don't use the zfl maint branch, it's all on master
[19:04] zomg Hi, I'm trying to install pyzmq but I'm having a hard time figuring out how exactly I'm supposed to install latest zmq dev
[19:05] zomg The readme in git just points out I should read INSTALL which doesn't exist =)
[19:06] zomg No idea what I should be doing, so I just peeked at configure.in and tried running autoconf but it throws up some errors on undefined macros and generates a non-working configure (complains about missing install-sh)
[19:06] zomg Any pointers would be appreciated. This is on Ubuntu 10.04
[19:12] pieterh hi zomg
[19:12] pieterh you're trying to install 0MQ first?
[19:12] zomg Yes, as instructed here http://www.zeromq.org/bindings:python
[19:13] pieterh so which git, zeromq2 or pyzmq?
[19:13] zomg zeromq2
[19:13] pieterh right
[19:14] pieterh so after you've cloned the git, run 'sh autogen.sh'
[19:14] pieterh or just ./autogen.sh
[19:14] zomg okay, seemed to work. Anything else?
[19:15] pieterh The full procedure is here: http://www.zeromq.org/docs:procedures
[19:15] pieterh it's './autogen.sh; ./configure; make; sudo make install; sudo ldconfig'
[19:15] pieterh once you have the necessary packages
[19:16] pieterh sorry that this isn't more clearly explained, I'm not sure why the pyzmq page says you need the development master of zeromq
[19:16] pieterh should work with the release package and IMO should refer to that
[19:17] zomg Yeah might be easier, or at least should point to that procedures page for us who can't find it on our own =)
[19:17] zomg Thanks
[19:17] pieterh am fixing that now, happily it's a wiki and trivial to change
[19:17] pieterh feel free BTW to do that if you find things that could be improved
[19:18] zomg Maybe I will once I get the hang of this
[19:20] pieterh enjoy, anyhow :-)
[19:58] zomg pieterh: something worth pointing out regarding pyzmq might be it requires cython 0.13
[19:58] zomg You just get an obscure error about some file missing when trying to install otherwise
[20:01] mikko pieterh: gcc is there
[20:01] mikko pieterh: will add other compilers later
[20:01] mikko pieterh: it's compiling zfl against master and maint of zeromq
[20:01] mikko might need to come up with a better naming convention for the builds
[20:04] mikko zomg: take a github checkout of the maint branch
[20:04] mikko of zeromq
[20:04] mikko pyzmq is not working well with the current master branch
[20:09] zomg mikko: well after these initial hiccups I did manage to get it installed
[20:10] mikko zomg: at least the tests get stuck with master
[20:10] mikko http://build.valokuva.org/job/pyzmq_master/
[20:10] mikko gray ones being canceled builds due to endless blocking
[20:12] zomg I'll keep that in mind if I run into any issues :) Thanks for the tip
[20:30] pieterh i fixed the pyzmq bindings page to link to both stable and github downloads
[20:30] pieterh zomg: normally you only need cython if you want to modify pyzmq
[20:31] pieterh mikko: thanks for the clarification about the zfl builds, that makes sense
[20:31] zomg pieterh: I couldn't get it to install without it
[20:32] pieterh zomg: you weren't doing the 'Development' install from the pyzmq github readme?
[20:33] zomg Nope
[20:33] zomg just python setup.py install as said in the first part of it
[20:33] pieterh ok, what was the error message you got?
[20:34] pieterh might be some other package that cython also brings in...
[20:34] mikko pieterh: sent you login details in case i don't happen to be around and something needs doing
[20:34] pieterh mikko: i appreciate it but my hash table is full
[20:34] mikko pieterh: you can also add your email to receive notifications of failed builds
[20:34] pieterh i'd have to kick out... facebook or twitter to make space
[20:34] zomg pieterh: it was something about some-zmq-file.pxd 'cpython.pxd' not found
[20:35] pieterh mikko: ok, that sounds fun, I'll do that
[20:35] mikko maybe even a separate mailing list at some point
[20:35] mikko so that people can subscribe to build notifications
[20:35] pieterh mikko: hmm, yes, something like that
[20:36] pieterh it could magically email the last person doing a commit on that git
[20:36] pieterh "FOOL! YOU BROKE IT!"
[20:37] mikko yes, there is "Mail person who broke the build"
[20:38] pieterh that'd be ideal
[20:38] mikko "Send separate e-mails to individuals who broke the build"
[20:38] mikko is the option
[20:38] pieterh zomg: I hesitate to document that as a dependency because it sounds like breakage somewhere
[20:38] pieterh mikko: ok, lemme take a peek
[20:38] mikko first icc build for zfl going
[20:38] mikko http://build.valokuva.org/job/zfl_master_ICC/1/console
[20:38] zomg pieterh: okay, perhaps it's something about what mikko said about the current dev not working properly with master
[20:38] mikko zfl_blob.c(124): error #186: pointless comparison of unsigned integer with zero: assert (size >= 0);
[20:41] pieterh zomg: maybe but I'd not assume so
[20:41] pieterh anyhow, not a biggie
[20:41] pieterh mikko: thanks, I'm fixing that warning
[20:42] pieterh mikko: I can't see where to configure email notifications...
[20:42] mikko go to build
[20:42] mikko configure
[20:43] mikko at bottom you have E-mail Notification
[20:43] mikko so for example http://build.valokuva.org/job/zlf_maint/ -> Configure -> bottommost option
[20:44] pieterh hmm, no configure link I can see
[20:44] mikko sec
[20:45] mikko permission issue prolly
[20:45] mikko just a sec
[20:45] pieterh yeah, looks like
[20:45] mikko retry
[20:45] pieterh aight!
[20:45] mikko should have full perms now
[20:46] pieterh done, excellent, thanks
[20:46] pieterh mikko, this is a really cool thing you've put together here
[20:47] mikko it was incredibly easy to get running
[20:47] mikko and it already resulted in several icc / sun studio bugs being fixed
[20:47] mikko which is a good thing
[20:47] pieterh yes, it's great to build with different compilers like this
[20:50] pieterh ok, g'nite to everyone, time to head home
[20:51] mikko nite
[21:10] rbraley how much faster is inproc vs, say, tcp across process boundaries?
[21:10] cremes rbraley: probably an order of magnitude faster
[21:10] cremes inproc just changes some internal memory structures; no copying or anything
[21:10] rbraley That was my guess too.
[21:10] cremes tcp transport requires the data go through the kernel buffers
[21:10] rbraley right
[21:13] rbraley do you know of any metrics for that? I am wondering if the serialization and deserialization of Protobufs will make the time difference between inproc and tcp negligible.
[21:16] rbraley I need to know if 12 hops across process boundaries with protocol buffers serializing and deserializing at each can be done in 30ms
[21:16] cremes no need to serialize/deserialize using inproc
[21:16] rbraley but using tcp, I am asking
[21:16] cremes unless you are on windows, check out ipc
[21:17] rbraley we are on windows :(
[21:17] cremes rbraley: ah, oh well. give it a shot and let us know how well it works
[21:17] cremes i haven't seen any benchmarks so yours will be the first
[21:19] rbraley That doesn't inspire confidence :) I don't want to let my client down.
[21:19] cremes why doesn't it inspire confidence?
[21:19] rbraley I guess I will have to do the benchmarks before I build the programs then
[21:20] rbraley to test the infrastructure
[21:20] cremes the only benchmark that matters is your own; i might publish one with protocol buffers but my machine is different, code paths are more/less complex, etc
[21:21] rbraley right, I am just concerned about the time that 12 hops of 0MQ + Protobufs take so I can know how much the infrastructure in my program costs.
[21:22] rbraley if there was such a benchmark already do you think you would have seen it cremes?
[21:23] cremes yes, probably
[21:23] cremes so what you really care about is a protocol buffers benchmark
[21:23] cremes those are easy to find
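(A rough pyzmq timing sketch for the question above: the 30ms budget over 12 hops leaves 2.5ms per hop, so even a crude round-trip measurement settles it. The endpoint, payload size, and round count below are illustrative assumptions, not figures from the channel.)

    import time
    import zmq

    ENDPOINT = "tcp://127.0.0.1:5555"  # try "inproc://bench" to compare transports
    MSG = b"x" * (200 * 1024)          # assumed ~200Kb payload
    ROUNDS = 1000

    ctx = zmq.Context()
    rep = ctx.socket(zmq.REP)
    rep.bind(ENDPOINT)                 # bind before connect, required for inproc
    req = ctx.socket(zmq.REQ)
    req.connect(ENDPOINT)

    # Each REQ/REP round trip is two hops; average over many to smooth noise.
    start = time.time()
    for _ in range(ROUNDS):
        req.send(MSG)
        rep.send(rep.recv())           # echo the message back
        req.recv()
    elapsed = time.time() - start
    print("%.3f ms per round trip" % (elapsed / ROUNDS * 1000.0))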
[21:23] mikko and with tcp transport you often have to marshal the data
[21:23] mikko not sure if that was mentioned
[21:23] cremes what?
[21:23] rbraley how do you mean mikko?
[21:23] mikko with inproc you can just send a pointer to the data
[21:24] cremes mikko: yes, the marshalling is the serialization/deserialization via protocol buffers
[21:24] mikko indeed
[21:24] cremes he can't use inproc; the data is crossing process boundaries
[21:24] mikko ah yes, it was mentioned
[21:24] mikko i didn't get all the way back in the backlog
[21:25] mikko sorry for the noise
[21:25] cremes rbraley: if protocol buffers are too slow, check out msgpack or another data serialization library
[21:25] cremes there are *lots* of fast ones out there
[21:25] rbraley I could potentially use inproc, I just don't want long compile times or lack of modularity.
[21:26] cremes rbraley: http://msgpack.org/
[21:26] cremes check out the benches
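(For flavor, a minimal msgpack round trip in Python, assuming the msgpack-python package; the sample record is invented.)

    import msgpack

    record = {"id": 42, "name": "sensor-a", "values": [1.5, 2.5]}

    # Pack to a compact byte string, ready to travel as a 0MQ message body.
    packed = msgpack.packb(record)

    # Unpack on the receiving side; the structure round-trips intact.
    restored = msgpack.unpackb(packed)
    assert restored == record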
[21:26] cremes mikko: no worries
[21:26] cremes i thought you knew something i didn't :)
[21:27] mikko unlikely
[21:27] mikko :)
[21:28] cremes :)
[21:28] rbraley ooh msgpack looks nice
[21:28] rbraley would you use it over protobufs cremes?
[21:29] cremes i would test it first... honestly i have no first-hand experience with it... i do *all* of my testing with json and optimize this stuff last
[21:30] cremes btw, i would seriously consider inproc
[21:30] cremes it doesn't reduce modularity *unless* you already plan to have timeouts, process restarts, process recovery, etc
[21:30] cremes otherwise one component in your distributed system (via tcp) can fail and take down everything else
[21:30] cremes if that is acceptable then inproc is a better choice
[21:31] cremes see what i mean?
[21:31] rbraley trying to :)
[21:32] cremes my point is related to component failure in a distributed system
[21:32] cremes unless you are already building in robust recovery, inproc does not *hurt* you
[21:33] cremes e.g. if a bug in your code that uses inproc takes down the system you are in *exactly* the same position as when one of N distributed components fails
[21:33] cremes that is, the system doesn't work
[21:33] rbraley right
[21:33] cremes so if perf is important, avoid the serialization penalty and just use inproc
[21:34] cremes if you need to scale to multiple machines, adding that in *where necessary* won't cost much
[21:34] cremes that's one of the great things about 0mq; you can scale up or down as your needs change without changing code other than the transport string
[21:34] cremes (and serialize/deserialize where necessary)
[21:35] rbraley One of my requirements is that we can replace components between different runs easily without recompiling
[21:36] cremes still doable
[21:36] cremes put the transport strings in a config file
[21:36] cremes in your code, choose your code path (serialize or skip it) depending on the transport mechanism
[21:36] cremes if inproc then send data
[21:36] cremes else
[21:37] cremes serialize data and send it
[21:37] cremes end
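(In Python, that sketch might look roughly like the following; the PUSH socket, the endpoint names, and the send_item helper are all invented for illustration. Note that pyzmq still sends bytes even over inproc; the pass-a-pointer shortcut belongs to the C API.)

    import msgpack
    import zmq

    ctx = zmq.Context()

    def open_push(endpoint):
        # endpoint comes from the config file, e.g. "inproc://stage1"
        # or "tcp://10.0.0.5:5555" (both made-up examples)
        sock = ctx.socket(zmq.PUSH)
        sock.connect(endpoint)
        return sock

    def send_item(sock, endpoint, item):
        if endpoint.startswith("inproc://"):
            # Same process: a cheap pickle handoff is enough here.
            sock.send_pyobj(item)
        else:
            # Crossing a process boundary: marshal explicitly first.
            sock.send(msgpack.packb(item))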
[21:39] rbraley *thinking about this*
[21:45] rbraley I guess I could do a strategy pattern or something, and have which concrete strategy to use for each component stored in the same config file as the transport string.
[21:45] rbraley then I could have a testing config file and a production config file
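(One way that idea could be wired up, with every file name, key, and endpoint invented for the example: the config names both the transport and the concrete serialization strategy, and swapping testing.json for production.json switches the whole setup.)

    import json
    import pickle
    import msgpack

    # Registry of concrete strategies: (encode, decode) pairs.
    STRATEGIES = {
        "pickle": (pickle.dumps, pickle.loads),
        "msgpack": (msgpack.packb, msgpack.unpackb),
    }

    # testing.json / production.json pick endpoints and strategies per
    # component, e.g.:
    #   {"stage1": {"endpoint": "tcp://127.0.0.1:5555", "serializer": "msgpack"}}
    with open("testing.json") as handle:
        stage = json.load(handle)["stage1"]

    encode, decode = STRATEGIES[stage["serializer"]]
    payload = encode({"reading": 1.5})   # bytes, ready for socket.send()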
[21:46] rbraley sorry if there is too little context to understand what I am talking about :P
[21:50] cremes rbraley: np; i get the gist
[21:50] cremes your proposal would work too
[21:50] cremes lots of ways to skin this cat
[21:52] rbraley "each component is a separate operating system process and the experimenter interface launches them with command-line arguments to say how to connect to each other" that was my first idea
[21:52] rbraley but I don't know if it can be done in under 30ms
[21:53] cremes is this running on an ancient PC? are you moving several megabytes with each transmission?
[21:54] rbraley no it should be fairly modern hardware and the messages should probably be a maximum of 200Kb
[21:55] cremes ok, then unless you are writing this in a *very* slow language you have all the time in the world; 30ms is an eternity
[21:55] rbraley even with crossing *windows* process boundaries?
[21:56] rbraley at least one part will be in python
[21:56] cremes only one way to know; you have to try it
[21:59] rbraley well, if it is too slow, I guess there are ways to work around it.