IRC Log

Monday August 29, 2011

[Time] Name Message
[00:19] mikko crazed: doesn't matter
[00:19] mikko crazed: it can be either way around
[00:19] mikko it usually depends on the use case
[00:20] crazed hmm
[00:20] crazed mikko: what would you recommend in a distributed monitoring application, namely collectd
[00:21] crazed slowly working my way through the guide, but it would seem that each host would want to be a publisher
[00:21] crazed and then have subscribers in each datacenter
[00:21] mikko in a way it makes sense for the host to connect
[00:21] mikko that way the host only needs to know where to connect
[00:21] mikko and the monitoring server doesnt need to know about all clients
[00:22] crazed ah yeah that does make sense
[00:22] mikko makes it more flexible i guess
[00:22] mikko especially if you have 'elastic scaling'
[00:22] crazed what about push/pull
[00:22] mikko hosts can come and go
[00:22] mikko what about push/pull?
[00:22] crazed collectd also has that capability
[00:22] crazed and was wondering if that method made any sense
[00:23] mikko do you need to do filtering?
[00:23] crazed hm unlikely
[00:23] mikko push pull might make sense in that case
[00:24] mikko push pull load balances between connected peers
[00:24] crazed i think each datacenter would have a "collector" and then those collectors could push to a single point where alerting/graphing can take place
[00:24] mikko where pub sub sends to all
[00:24] mikko that is the biggest difference
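A rough pyzmq sketch of the difference being discussed (endpoints and metric format are made up): a datacenter collector binds a SUB socket and each monitored host connects a PUB socket to it, so hosts only need the collector's address and can come and go. Swapping the pair for PUSH/PULL would load-balance each message to exactly one connected peer instead of fanning it out to every subscriber.

    import zmq

    ctx = zmq.Context()

    # Collector side: bind a SUB socket and accept metrics from any
    # host that connects. bind/connect direction is independent of
    # socket type in 0MQ, so the "server" can be the subscriber.
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b"")      # no filtering needed here
    sub.bind("tcp://*:5556")

    # Host side: connect a PUB socket to the collector and publish.
    pub = ctx.socket(zmq.PUB)
    pub.connect("tcp://collector.dc1.example:5556")
    pub.send(b"host1.cpu.load 0.42")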
[00:24] mikko are you anywhere near london?
[00:25] mikko there is a hands on conference coming up in london
[00:25] mikko which might be interesting
[00:25] crazed ah i wish
[00:25] crazed i'm in nyc
[00:26] mikko you can also use a forwarder device to funnel the messages to a single location
[00:26] mikko i wrote a small store and forward device some time ago which uses kyoto cabinet for persistence
[00:26] Plouj sustrik: I just don't understand what's going on. The behavior I'm seeing doesn't seem to match the text in the guide.
[00:27] mikko Plouj: if you believe that the behaviour doesn't match the guide can you raise that as an issue?
[00:27] mikko in jira
[00:27] mikko because in either case something needs fixing
[00:27] mikko or clarification
[00:27] Plouj ok
[00:29] crazed mikko: hm the forwarder might be a pretty cool idea, i'm not quite sure how to handle sending data down to one place for graphing/alerting, but i'm thinking each subscriber would be generating RRD files, and possibly forwarding their info down to the single point
[00:29] mikko crazed: you can probably distribute that work to multiple nodes
[00:29] mikko if the graphing is a heavy operation
[00:30] crazed yes it is
[00:30] crazed disk intensive over time
[00:30] crazed so it would make sense to have that workload spread out
[00:30] mikko so you could have
[00:30] mikko pub(s) -> sub -> workers
[00:31] mikko where the middle node gets messages from pubs and distributes them to worker nodes
[00:31] mikko based on criterion N
[00:31] mikko whatever that might be in this scenario
[00:31] crazed hm that makes a lot of sense
[00:31] crazed so it would be push/pull between the subscriber and the workers, right?
[00:32] mikko well, you might want to use ROUTER/DEALER
[00:32] mikko and do custom routing
[00:32] mikko i assume the worker nodes graphing would need to receive certain kind of data
[00:32] mikko there is a chapter in the guide about custom routing
[00:32] crazed ah yeah they would need the same host likely
[00:32] crazed same hosts being monitored sent to the same workers
[00:33] mikko or you could have certain graph types go to same workers
[00:33] mikko rather than per host
[00:33] crazed hm
[00:33] crazed so many possibilities
[00:33] mikko that way you dont need to configure hosts on workers / routers
[00:34] mikko let's say workers A-C deal with CPU and memory graphs
[00:34] mikko D-F could be generating a heatmap of something (requests or so)
[00:34] mikko etc
[00:34] mikko that way you can add more workers to certain graph types if needed
[00:35] mikko and the routers dont need to care about hosts
[00:35] mikko this also makes the setup slightly more resilient to complete failure
[00:36] mikko as in, if you got worker A for host1,host2 and host3 then if A dies those hosts are invisible
[00:36] mikko whereas in the earlier scenario certain graphs would be unavailable
[00:37] mikko but it's hard to say anything as absolute truth as i dont know the scenario very well
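One possible shape for the middle node (a sketch only; the worker identities, routing table, and endpoints are invented, and "criterion N" here is the graph type): a SUB socket facing the publishers and a ROUTER socket facing the workers. Workers would be DEALER sockets that set a matching IDENTITY and connect before routing starts; a message routed to an unknown identity is silently dropped.

    import zmq

    ctx = zmq.Context()

    sub = ctx.socket(zmq.SUB)           # messages in from the pubs
    sub.setsockopt(zmq.SUBSCRIBE, b"")
    sub.bind("tcp://*:5556")

    router = ctx.socket(zmq.ROUTER)     # custom routing out to workers
    router.bind("tcp://*:5557")

    # Hypothetical rule: route by graph type rather than by host, so
    # capacity can be added per graph type and a dead worker costs one
    # class of graphs instead of making whole hosts invisible.
    route = {b"cpu": b"worker-A", b"mem": b"worker-B", b"heatmap": b"worker-D"}

    while True:
        msg = sub.recv()                # e.g. b"cpu host1 0.42"
        identity = route[msg.split()[0]]
        router.send_multipart([identity, msg])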
[00:47] crazed mikko: thanks for the insight, i'll have to explore how things will behave a bit
[00:53] Plouj mikko: done: https://zeromq.jira.com/browse/CZMQ-8
[00:58] mikko Plouj: thanks
[00:58] mikko someone will pick it up soon
[01:01] mikko Plouj: zmq_send (requester, &request2, 0);
[01:02] mikko what is the return value of that?
[01:05] Plouj oh yeah, it returns -1 on the second message
[01:07] mikko i resolved the issue
[01:07] mikko does the hwserver in guide have any error handling?
[01:07] mikko no
[01:08] Plouj nope, it's almost the same as I pasted it
[01:08] mikko http://zguide.zeromq.org/page:all#toc30
[01:08] mikko the error handling comes a bit later
[01:09] Plouj oh
[01:10] Plouj maybe that should be linked in the sentence that I found confusing
[01:10] mikko The REQ-REP socket pair is lockstep. The client does zmq_send(3) and then zmq_recv(3), in a loop (or once if that's all it needs). Doing any other sequence (e.g. sending two messages in a row) will cause an error.
[01:10] mikko this part?
[01:10] Plouj I would link the text "an error" to http://zguide.zeromq.org/page:all#toc30
[01:10] Plouj yes
[01:11] mikko let me see if i can edit the page
[01:13] mikko added a note to pieter
[01:14] mikko thanks
[01:14] Plouj thank you ;)
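The lockstep behaviour Plouj hit, reduced to a sketch (endpoint invented): the second send on a REQ socket fails because the socket is still waiting for a reply. In C this is zmq_send returning -1 with errno set to EFSM; pyzmq raises the same condition as an exception.

    import zmq

    ctx = zmq.Context()
    req = ctx.socket(zmq.REQ)
    req.connect("tcp://localhost:5555")

    req.send(b"Hello")                 # fine: REQ now expects a reply
    try:
        req.send(b"Hello again")       # send-send is out of sequence
    except zmq.ZMQError as e:
        print(e)                       # EFSM: "Operation cannot be
                                       # accomplished in current state"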
[04:10] lusis hola. Got a quick question. Been beating my head against this and figured I'd ask before I end up with more yak hair
[04:11] lusis I've got a PUB bound to localhost:5555 and a sub connected to that
[04:12] lusis I'm attempting to mimic an external publisher
[04:12] lusis using a PUB socket but connect instead of bind
[04:12] lusis both python and ruby tests don't seem to work
[05:36] jyfl987 hi, i have checked out the new version of zeromq, and wrote a simple server in c. the problem is that the server runs without any error but doesn't bind to the target point. i use tcp://*.20120 as an endpoint
[05:38] jyfl987 so is there anything i might have missed?
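A guess from the pasted endpoint: tcp://*.20120 has a dot where the interface:port separator should be a colon, and in C this only shows up if the return value of zmq_bind is checked. A pyzmq sketch of the same mistake, which fails loudly:

    import zmq

    ctx = zmq.Context()
    server = ctx.socket(zmq.REP)

    try:
        server.bind("tcp://*.20120")   # dot instead of colon: invalid
    except zmq.ZMQError as e:
        print(e)                       # "Invalid argument"

    server.bind("tcp://*:20120")       # tcp://interface:port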
[07:56] iFire should I use zeromq as an inter-programming-language ipc/thread FFI interface?
[07:57] iFire yes
[07:57] iFire it's what it's designed for D:
[08:08] iFire I wonder which language would be a good language to design gnarly nasty highly compressed network binary structures
[08:08] iFire I was tempted to use haskell
[08:08] iFire hmm
[08:08] iFire to prototype at least
[08:15] iFire oh ABNF is an official standard
[10:37] pieter_hintjens hi y'all
[10:43] mikko hello
[10:43] pieter_hintjens hi mikko!
[10:43] pieter_hintjens how's everything?
[10:45] mikko last week of work before some holidays
[10:45] mikko current contract ends on thursday
[10:45] pieter_hintjens nice... holidays... what's that again? :)
[10:46] mikko i'm not sure how much of a real holiday it will be
[10:46] mikko just some time off work before finding a new contract
[10:48] pieter_hintjens you going to the rabbitmq/zeromq event in London?
[10:49] mikko yeah
[10:49] mikko 23rd
[10:53] pieter_hintjens I won't be able to come but it should be fun
[10:53] mikko yeah, hopefully we can fill the room
[10:54] mikko are you still working in the US?
[10:56] pieter_hintjens yeah, I'm in Dallas for the next few months
[10:56] pieter_hintjens am in Belgium this week
[10:56] mikko let me know if you need a hand in dallas
[10:57] mikko wouldn't mind some sunshine
[10:57] mikko looking out the window at a very gray day
[10:57] pieter_hintjens that's a good idea, in fact
[12:58] cremes lusis: want to show us the code? put it into gist.github.com or pastie.org so we can see what you are doing.
[14:46] mikko back
[15:12] mikko i added a simple store and forward device using kyoto cabinet https://github.com/mkoppanen/pzq
[15:12] mikko it's a bit raw
[15:16] sustrik mikko: that's nice
[15:17] sustrik can you link it from somewhere so that it's accessible to everyone?
[15:17] mikko i need to clean up a bit
[15:17] mikko but i can later
[15:17] mikko i use a separate socket for ACKs
[15:17] mikko and use two databases for message tracking
[15:17] mikko one persistent database for storing and one in-memory database for in-flight messages
[15:24] sustrik i see
[15:25] mikko probably not the most effective design
[15:25] mikko but seems to work ok this far
[15:25] mikko store and forward with hard sync to disk isn't going to be a speed demon in any case
[15:26] sustrik right
[15:26] sustrik so you forward the message without sending ack
[15:27] mikko i forward the message and wait for ACK on another socket
[15:27] sustrik and send the ack only once the message is persisted?
[15:27] mikko yes
[15:27] sustrik looks like a sound design
[15:27] mikko XREQ <-> store <-> PUSH
[15:28] mikko and on the right-hand side wait for ACKs on PULL
[15:28] mikko the message is only deleted after client sends ACK
[15:28] mikko should refactor the code to use XREQ <-> store <-> XREP
[15:28] mikko but i was lazy
[15:29] sustrik and ack is sent from the store only when the message is stored
[15:29] mikko yes
[15:29] sustrik nice
[15:29] mikko and receiver on other side needs to send separate ACK
[15:29] mikko so that we know that the message was delivered
[15:29] mikko message is only deleted after the receiver ACKs it
[15:30] sustrik yep, ack on each link
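The flow mikko describes, compressed into a sketch (pzq itself is C++ on kyoto cabinet; here a single dict stands in for both the persistent and the in-flight databases, producers are assumed to be XREQ sockets sending [msg_id, body], and all endpoints are invented): persist first, ACK the producer, forward to a consumer, and delete only once the consumer's ACK arrives on the separate socket.

    import zmq

    ctx = zmq.Context()

    frontend = ctx.socket(zmq.XREP)    # producers connect with XREQ
    frontend.bind("tcp://*:5555")
    backend = ctx.socket(zmq.PUSH)     # deliveries out to consumers
    backend.bind("tcp://*:5556")
    acks = ctx.socket(zmq.PULL)        # consumer ACKs, separate socket
    acks.bind("tcp://*:5557")

    store = {}                         # stand-in for the databases

    poller = zmq.Poller()
    poller.register(frontend, zmq.POLLIN)
    poller.register(acks, zmq.POLLIN)

    while True:
        for sock, _ in poller.poll():
            if sock is frontend:
                sender, msg_id, body = frontend.recv_multipart()
                store[msg_id] = body                       # persist...
                frontend.send_multipart([sender, msg_id])  # ...then ACK
                backend.send_multipart([msg_id, body])     # and forward
            else:
                msg_id = acks.recv()
                store.pop(msg_id, None)  # delete only on receiver ACK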
[15:30] mikko there is a configurable sync divisor as well
[15:30] mikko if (m_divisor == 0 || (rand () % m_divisor) == 0)
[15:30] mikko if that matches it does a hard sync to disk
[15:30] mikko so user can configure consistency level
[15:30] sustrik yes, right
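The divisor check above in sketch form (assumed semantics, matching the quoted condition): 0 means sync on every write, N means sync on roughly one write in N, trading durability for throughput.

    import random

    def should_sync(divisor):
        # divisor 0 syncs every write; divisor N syncs ~1 write in N
        return divisor == 0 or random.randrange(divisor) == 0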
[15:31] sustrik i'll have a look at the code later on
[15:32] mikko what i plan to do is to move the ACK socket for the back side onto a separate thread
[15:32] mikko as the database does record level locking
[15:32] mikko so that way it's possible to handle ACKs separately from the main flow
[15:33] sustrik right
[15:33] mikko soon i will have more time to hack on this stuff
[15:34] mikko friday will be first holiday day in long time
[15:34] sustrik enjoy it!
[15:41] mikko i sure will :)
[15:55] lusis cremes: ping
[15:56] cremes lusis: pong
[15:57] lusis cremes: finding the gist I did last night now
[15:57] lusis one sec
[15:57] lusis cremes: https://gist.github.com/dcc7474625ab60bd0388
[15:58] lusis cremes: I think I may have realized WHY it won't work
[15:58] lusis or rather why it's not intended to work
[15:59] cremes lusis: which code should i be looking at here? have 3 snippets... which 2 are important?
[15:59] lusis cremes: all of them. The idea is a process that binds as a PUB, another process that acts as a SUB and a third process that connects to the PUB to send messages
[16:01] lusis it's more of an exercise than anything but I have a similar use case. I think the solution in the zmq thought process is another socket that handles incoming messages to the PUB and sends them out.
[16:01] cremes yeah, this code can't ever work
[16:01] cremes what you *want* is for your publisher.rb to be a FORWARDER device
[16:02] cremes the sender would connect to it on 6382, and the subscriber would connect to it on 6383
[16:02] cremes and the forwarder just passes the messages through
[16:02] lusis got it. I thought that might be the case
[16:02] lusis just needed to mold my thinking
[16:02] cremes take a look at the man page for zmq_device and search the guide for "forwarder"
[16:02] lusis yep, looking now ;)
[16:04] lusis cremes: many thanks. I think I came to the same conclusion before I posted the question
[16:04] lusis cremes: just needed some verification
[16:05] cremes lusis: you're welcome
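What cremes suggests, sketched with the port numbers from the discussion: the middle process is just a forwarder. Senders connect PUB sockets to 6382, subscribers connect SUB sockets to 6383, and the device shovels messages through.

    import zmq

    ctx = zmq.Context()

    frontend = ctx.socket(zmq.SUB)     # senders connect PUB here
    frontend.setsockopt(zmq.SUBSCRIBE, b"")
    frontend.bind("tcp://*:6382")

    backend = ctx.socket(zmq.PUB)      # subscribers connect SUB here
    backend.bind("tcp://*:6383")

    # Blocks forever, passing everything from frontend to backend.
    zmq.device(zmq.FORWARDER, frontend, backend)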
[17:30] i0n hi, does zeromq work well with Camel?
[17:35] michelp i0n, is that a language?
[17:36] michelp Caml?
[17:38] michelp i0n, there are OCaml bindings, if that's what you mean http://www.zeromq.org/bindings:ocaml
[20:33] grzesieq I think I need help understanding durable sockets… Let's say I have a streamer device, and many workers connected to it. Is there any way to queue only N messages on the workers, and let the rest build up on the streamer? I want to minimize the loss if any worker decides to die.
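The question goes unanswered in the log, but the relevant knob looks like the high-water mark: capping HWM on each worker's PULL socket bounds what can sit on a worker, and once a worker's pipe is full new messages back up on the streamer instead. A sketch with an invented endpoint and N (0MQ 2.x single-option form; later versions split it into SNDHWM/RCVHWM):

    import zmq

    ctx = zmq.Context()

    worker = ctx.socket(zmq.PULL)
    worker.setsockopt(zmq.HWM, 10)     # hypothetical N; set before connect
    worker.connect("tcp://streamer.example:5558")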
[21:33] pieter_hintjens OK, folks, zeromq/2.1.9 stable is now out
[21:34] CIA-32 pyzmq: Zbigniew Jędrzejewski-Szmek master * rdd4a414 / zmq/core/message.pyx : threading._Event was renamed to Event in 3.3 ...
[21:34] CIA-32 pyzmq: Min RK master * r1f61b34 / zmq/core/message.pyx : Merge pull request #133 from keszybz/master ...
[21:42] michelp woot!
[21:45] pieter_hintjens anyone here done any android work?
[22:35] CIA-32 pyzmq: MinRK master * rad9de3c / README.rst : update README for 2.1.9 ...
[22:35] CIA-32 pyzmq: MinRK v2.1.9 * r8cdd53f / zmq/core/version.pyx : bump version to 2.1.9 - http://git.io/KhlmAw
[22:44] CIA-32 pyzmq: MinRK gh-pages * r1346c7a / (46 files in 5 dirs): doc release 2.1.9 - http://git.io/fzNU2g
[22:53] CIA-32 pyzmq: MinRK master * r3b36d30 / README.rst : update README for 2.1.9 ...
[22:54] CIA-32 pyzmq: MinRK v2.1.9 * r12920eb / zmq/core/version.pyx : bump version to 2.1.9 - http://git.io/W3TOYw
[23:18] mikko pieter_hintjens: congrats