IRC Log


Thursday May 5, 2011

[Time] Name Message
[06:53] djc Bartzy: IIRC you shouldn't share sockets across threads
[06:53] djc (zmq sockets)
[06:54] guido_g fork <- processes
[06:54] guido_g but the statement holds
[06:54] djc yeah
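
A minimal sketch of the point djc and guido_g are making, assuming pyzmq: share one context across the process, but give each thread its own socket, since 0MQ sockets are not thread-safe. The inproc endpoint name is hypothetical.

    import threading
    import zmq

    ctx = zmq.Context()  # one context per process; safe to share across threads

    def worker():
        # each thread creates its own socket; never pass a socket between threads
        sock = ctx.socket(zmq.PULL)
        sock.connect("inproc://work")  # hypothetical endpoint
        print(sock.recv())
        sock.close()

    push = ctx.socket(zmq.PUSH)
    push.bind("inproc://work")  # bind before the thread connects (inproc needs this)
    t = threading.Thread(target=worker)
    t.start()
    push.send(b"hello")
    t.join()
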
[06:54] djc hmmm, I'm using a device to forward a tcp publisher over inproc
[06:55] djc but it seems to leak like hell
[06:55] djc even though I set HWM on the inproc subs
[06:55] djc presumably if there are no inproc subs the device will just lose the messages, right?
[06:57] djc also is it possible somehow to inspect the size of the queue for a subscriber?
[06:58] djc I don't see anything in the getsockopt docs, so probably not...
[06:58] guido_g hwm should be set on both sides for inproc
[06:58] guido_g it's then the sum of both sides, i've heard here
[06:59] djc why's that?
[06:59] guido_g i didn't implement it
[06:59] sustrik the reason is to make it work same for tcp and inproc
[06:59] sustrik so, in tcp you have a buffer on send side and buffer on recv side
[06:59] djc ah, ok
[06:59] djc that makes sense
[07:00] djc so for a forwarder device, if I only have HWM on the pub side, should that suffice?
[07:00] djc or does that just make it clog up on the sub side?
[07:00] sustrik the latter
[07:00] sustrik if you want to bound memory usage
[07:01] sustrik set hwm on every socket you are using
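
A sketch of this advice applied to the tcp-to-inproc forwarder djc describes, assuming the 0MQ 2.x API from this era, where a single ZMQ_HWM option bounds a socket's queues (later versions split it into ZMQ_SNDHWM and ZMQ_RCVHWM). The addresses and the HWM value are hypothetical; note that HWM only affects connections made after it is set, the gotcha djc mentions below.

    import zmq

    ctx = zmq.Context()

    # upstream side of the device: subscribe to the external tcp publisher
    frontend = ctx.socket(zmq.SUB)
    frontend.setsockopt(zmq.HWM, 1000)               # set *before* connect/bind
    frontend.setsockopt(zmq.SUBSCRIBE, b"")
    frontend.connect("tcp://feed.example.com:5556")  # hypothetical address

    # downstream side: republish over inproc to local subscribers
    backend = ctx.socket(zmq.PUB)
    backend.setsockopt(zmq.HWM, 1000)
    backend.bind("inproc://ticks")                   # hypothetical endpoint

    # each local subscriber sets its own HWM as well
    # (in a real program this would live in its own thread)
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.HWM, 1000)
    sub.setsockopt(zmq.SUBSCRIBE, b"")
    sub.connect("inproc://ticks")

    zmq.device(zmq.FORWARDER, frontend, backend)     # blocks, forwarding forever
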
[07:01] djc I had a server completely fail last night because the OOM killer for some reason killed everything except the pyzmq-using process :(
[07:02] sustrik yes, you should use HWM consistently
[07:02] sustrik you can also have a look at MAXMSGSIZE option
[07:02] djc sustrik: if the forwarder device doesn't have any clients on the pub side, it will still throw messages away, right?
[07:02] djc like a normal publisher?
[07:02] sustrik yes, the same
[07:03] djc and would it be hard to add a sock opt to get the queue size?
[07:03] sustrik which queue?
[07:03] sustrik there are many queues beneath each socket
[07:03] sustrik one per connection basically
[07:04] djc yeah
[07:04] djc I suppose, on a sub socket, the total of all messages queued from each publisher
[07:04] sustrik also, you can't account for data stored in TCP buffers, in the NIC etc.
[07:04] sustrik what would you use that for?
[07:05] sustrik memory usage management?
[07:25] guido_g some information would be better than no information
[07:25] guido_g one could use the information to see trends and steer the components
[07:26] sustrik you mean in terms of memory management, right?
[07:26] sustrik like "this component is using too much memory so let's ask it to pause for a while"
[07:26] guido_g no, I mean in terms of overall management during system runtime
[07:27] sustrik management of what then, network bandwidth?
[07:27] guido_g the whole system
[07:27] sustrik CPU usage?
[07:27] guido_g the whole system
[07:27] sustrik what's that? :)
[07:27] guido_g omfg
[07:28] guido_g sometimes it's really hard
[07:28] sustrik well, you want to manage something, but you don't say what you want to manage
[07:28] sustrik hard to help you then
[07:28] guido_g the application
[07:28] sustrik can you give an example?
[07:28] guido_g i build an application and need to keep track of what it does
[07:29] guido_g messaging is part of this
[07:29] guido_g i don't need exact numbers
[07:29] guido_g it's more important to have measurement-points all over the app
[07:29] guido_g so there is evidence if something goes wrong
[07:30] mikko deja vu
[07:30] mikko good morning
[07:30] guido_g in modern times, this information might even be used to alert ops if something looks fishy
[07:30] sustrik morning
[07:30] sustrik right
[07:30] guido_g hi mikko
[07:30] sustrik do you monitor tcp buffers?
[07:30] guido_g i knew that would come
[07:30] guido_g if i have to, i'd do it
[07:30] sustrik there's actually a way to do that as i found out yesterday
[07:31] guido_g but messages are a way better overall indicator
[07:31] guido_g because these link directly to the application's actions
[07:31] mikko i think the number of messages in zeromq buffers is a useful metric
[07:31] guido_g and therefore produce a "value" for logging/monitoring
[07:32] guido_g mikko: thanks, i owe you a beer
[07:32] mikko guido_g: we've had this discussion before
[07:32] guido_g mikko: you too? ;)
[07:32] sustrik let's maybe discuss it in bxl?
[07:32] guido_g YES!
[07:32] mikko i think the best you can do is to give real-world example how you use this metric
[07:32] sustrik yes, please
[07:32] mikko what do you gain out of this metric, not just "logging" as a whole
[07:32] sustrik without the use cases the discussion is kind of pointless
[07:33] guido_g no
[07:33] mikko once there is a real-world example sustrik can relate to it
[07:33] guido_g it's a service the library should provide
[07:33] djc mikko: for example, you might have found out sooner you erroneously set your HWM after the connect
[07:33] guido_g going to read the internets then...
[07:33] djc I would definitely have found out my single HWM was not enough
[07:33] mikko djc: i wouldn't, because it was the receiver side hwm
[07:33] djc even though I still don't fully understand why that is
[07:33] mikko i wouldve looked at sender side
[07:34] mikko djc: inproc?
[07:34] djc yeah, inproc
[07:34] mikko set sender and receiver hwm
[07:34] djc yeah, sure, WHY, though?
[07:34] mikko because the messages get queued
[07:34] djc the docs say for PUB and SUB sockets, both have action = Drop for HWM
[07:34] mikko in my case i publish frames from kinect
[07:35] mikko sensor gives 30fps
[07:35] mikko and my gui occasionally lags behind a bit
[07:35] djc so it seems like, if I set HWM on the sub, the sub should drop if it gets more stuff from the PUB
[07:35] mikko and then the memory usage starts to sky rocket as the frames get buffered
[07:35] mikko djc: yes
[07:35] mikko that's what i wanted in my case
[07:35] djc that's similar to what I have, I'm pushing JSON messages over a WebSocket and the browser has to update a web page
[07:35] mikko as the sensor feed is real-time
[07:36] mikko this is only a problem if you publish faster than you consume
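
A toy version of mikko's kinect case, under the same 2.x single-HWM assumption and with a hypothetical port: the publisher sends as fast as possible, and with HWM set on both sockets the excess messages are dropped rather than buffered without bound.

    import time
    import zmq

    ctx = zmq.Context()

    pub = ctx.socket(zmq.PUB)
    pub.setsockopt(zmq.HWM, 10)        # bound the send-side queue
    pub.bind("tcp://127.0.0.1:5557")   # hypothetical port

    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.HWM, 10)        # bound the receive-side queue
    sub.setsockopt(zmq.SUBSCRIBE, b"")
    sub.connect("tcp://127.0.0.1:5557")
    time.sleep(0.5)                    # give the connection time to establish

    for i in range(1000):
        pub.send(str(i).encode())      # publish faster than anyone consumes

    time.sleep(0.1)
    while True:                        # drain: only the undropped messages arrive
        try:
            print(sub.recv(zmq.NOBLOCK))
        except zmq.ZMQError:
            break
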
[07:36] mikko guido_g: there is no question in my mind that zeromq should provide this metric
[07:36] mikko sustrik: if you can monitor the size of tcp buffers why not zeromq as well?
[07:37] djc also, does someone here have experience tuning the Linux OOM killer? ;)
[07:38] djc mikko: but the thing is, I'm forwarding from tcp to inproc, and I think the inproc side of the forwarder had no clients at all
[07:38] djc so in that case I don't understand why I need a HWM set
[07:38] djc because the pub side of the forwarder should just drop everything
[07:38] mikko you don't necessarily
[07:38] mikko are you having issues?
[07:39] djc the OOM killer killed one of my servers last night
[07:39] djc because it didn't understand what process it had to kill
[07:40] sustrik mikko: my fear is that people would abuse it to drive application logic
[07:40] guido_g mikko: http://highscalability.com/log-everything-all-time <- my statement in the form of an article (note the date!)
[07:40] mikko terminate called after throwing an instance of 'zmq::error_t'
[07:40] sustrik i've had that discussion several times
[07:40] mikko shouldn't this be catchable?
[07:40] sustrik guy asks for monitoring the queue size
[07:40] sustrik says it's for "monitoring" purposes
[07:40] guido_g you can't protect the world
[07:41] djc sustrik: the solution for that would be good docs
[07:41] sustrik further questioning reveals that he actually wants it to drive the business logic
[07:41] djc which say DON'T USE THIS FOR APP LOGIC
[07:41] djc and then when they complain you can just point to the docs
[07:41] guido_g if ømq should be used for real 24/7 distributed applications, it has to have a way to monitor its behaviour
[07:41] sustrik well, the REQ/REP is meant to do REQ/REP
[07:41] guido_g directly and related to the messages it processes
[07:41] sustrik what you see now is everyone using it to do custom routing
[07:42] guido_g for now it's a black box
[07:42] pieterh sustrik: you're not being fair here
[07:42] pieterh 0MQ lacks any other way to do routing
[07:42] guido_g and ask ops guys what they think of those things
[07:42] guido_g hi pieterh
[07:42] sustrik morning
[07:42] pieterh guido_g: hi
[07:42] pieterh sustrik: if you keep complaining that people 'misuse' your lovely designs
[07:42] guido_g welcome to the pre-warmup for brussels :)
[07:43] pieterh you really start to sound silly
[07:43] pieterh since you don't provide alternatives
[07:43] pieterh and you try to claim that the use cases are invalid
[07:43] pieterh or people are just too stupid
[07:43] pieterh or something
[07:43] sustrik well, if i added every possible feature to 0mq, we'd be back to AMQP-style design by now
[07:43] pieterh it's really annoying to have this discussion
[07:43] pieterh no-one is asking for "every possible feature"
[07:43] pieterh that is a straw man
[07:43] guido_g sustrik: resistance is good, up to a certain amount
[07:43] djc sustrik: it's not even a feature, just exposing some more data you already have somewhere
[07:43] sustrik one at a time :)
[07:44] guido_g sustrik: if you refuse too many features, you fail to attract users
[07:44] pieterh stop telling your users they are stupid, it's really bad politics
[07:44] pieterh you don't actually use 0MQ
[07:44] sustrik ok
[07:44] pieterh you are actually incompetent when it comes to knowing about use cases
[07:44] pieterh if you don't trust your users, what do you have?
[07:44] guido_g ahhh
[07:44] djc pieterh: harsh words, man
[07:44] pieterh true words
[07:44] guido_g one gear down would help more, i think
[07:45] sustrik i would love to hear the use case
[07:45] pieterh either we design 0MQ collectively, or it fails to work
[07:45] djc sustrik: the use case is actual monitoring
[07:45] sustrik what's that?
[07:45] sustrik what it is used for?
[07:45] sustrik etc.
[07:45] djc making sure the message queues don't balloon out because of my failing to set enough HWMs
[07:45] djc and displaying some values about this on a status page
[07:46] djc so I can correct my code by setting more HWMs
[07:46] sustrik ah, so it's for debugging, right?
[07:46] guido_g pieterh: is it ok to have a page on the wiki for such things? and if so, where?
[07:46] djc sustrik: debugging and monitoring go together
[07:46] pieterh guido_g: what such things?
[07:46] djc some code that worked fine may not work once I start to move more messages through the socket
[07:46] sustrik so what's the monitoring part about?
[07:46] guido_g pieterh: features to be beaten^W discussed over
[07:47] pieterh guido_g: for sure, yes... I sometimes make 'topic' pages for that stuff
[07:47] sustrik that would be good
[07:47] sustrik especially if people post use cases there
[07:47] djc sustrik: the monitoring part is just being able to ascertain that something is not going wrong... it's not just debugging
[07:48] guido_g pieterh: i want to present the monitoring use-case, lots of links and some meagre words from me
[07:48] djc for example I'd put it into SNMP and integrate it with my other diagnostics
[07:48] guido_g monitoring is much more, yes
[07:48] pieterh start a topic: page, write what you like, then post link to email list and discuss there
[07:48] djc (for example couchdb replication state, redis uptime, whatever)
[07:49] guido_g pieterh: ah ok
[07:50] pieterh guido_g: it's a wiki, you enter 'http://www.zeromq.org/topic:monitoring-something' in the URL, then click to create the page
[07:50] guido_g pieterh: thanks
[07:50] pieterh you can edit the community page to add to the list of topics already shown
[07:51] guido_g i'm not allowed to do that
[07:51] guido_g the story of my life
[07:52] djc sustrik: so in my pub-sub-forwarded-over-inproc scenario, how many HWMs do I need to set to be safe? in my mental model, if I set the HWM on the inproc subscriber (at the end of the chain), I'd be safe
[07:52] pieterh guido_g: join the wiki, there should be a button somewhere
[07:52] djc but obviously that's not the case, and I don't understand why, from the docs
[07:52] guido_g pieterh: i'm logged in
[07:53] sustrik djc: on every socket
[07:53] djc yeah, I really don't understand why that is
[07:53] pieterh guido_g: did you join the site? You have to be a member to edit pages on it
[07:53] sustrik actually, my feeling is that the whole monitoring discussion would be solved by setting the default HWM to 1000 or somesuch :)
[07:54] pieterh sustrik: yeah, or default in context
[07:54] djc ugh, that would be really bad
[07:54] guido_g pieterh: i could edit the brussels page, but i'm not allowed to create a page (url close to the one you gave)
[07:54] sustrik djc: why so?
[07:54] sustrik infinity is a pretty lousy default
[07:55] djc sustrik: because I don't want it to lose messages by default
[07:55] guido_g pieterh: i requested http://www.zeromq.org/topic:monitoring-support and clicked on create page
[07:55] pieterh guido_g: sorry, it's 'topics' not 'topic'...
[07:55] pieterh I can make a better interface for creating topics but you're pretty much user number 1 here
[07:55] guido_g pieterh: see, and it works :)
[07:55] guido_g pieterh: thanks again
[07:55] sustrik djc: you mean in pub/sub?
[07:56] djc sustrik: but explain this to me some more... in my mind, a message incoming over tcp gets to the forwarder sub, is then copied onto the inproc pub, then it immediately gets reference-copied to each inproc sub incoming queue, right?
[07:56] djc sustrik: yeah, pub/sub
[07:56] sustrik yes
[07:57] djc sustrik: so if I set HWM on the inproc sub, it should drop things that come in when it's full, so how is that not a good enough solution?
[07:57] djc there doesn't seem to be any opportunity for the other buffers to get clogged up
[07:58] sustrik but it's an either/or problem; either you lose messages or run out of memory
[07:58] sustrik there's no way out of that
[07:58] djc sustrik: yes, and that's exactly the choice zeromq shouldn't make for me
[07:58] sustrik ah
[07:58] sustrik i was speaking about default behaviour
[07:58] sustrik not mandatory behaviour
[07:59] sustrik the point was that losing messages is probably more acceptable than the process being killed by the OOM killer
[08:00] djc well, I prefer fail fast, and I'll notice the OOM killer sooner than I'll notice messages being dropped
[08:00] djc (actually before the OOM killer I noticed the memory use before it became really problematic and killed the right process myself, and then set the HWM)
[08:00] pieterh djc: if 0MQ told you when it was dropping messages, would that help?
[08:00] djc but you still haven't explained why my single HWM won't work
[08:01] djc pieterh: that might help some, but I'm not sure what the appropriate channel for that would be
[08:01] pieterh well, there's already sys://log for such things
[08:01] pieterh it can be extraordinarily annoying to e.g. have messages dropped silently when they can't be routed
[08:02] pieterh sustrik: you realize that the only way to debug this is to add printfs to libzmq?
[08:02] sustrik yes
[08:02] pieterh i vaguely recall submitting a patch for this, which got rejected
[08:02] sustrik two problems
[08:03] sustrik 1. if there are several publishers it's not clear which message sequence is broken
[08:03] sustrik 2. there should be a way to propagate the "breaking points" down the pub/sub tree
[08:04] sustrik djc: if there's an unbounded queue anywhere in the system it'll become the narrow point where the messages get accumulated
[08:04] djc sustrik: that doesn't make sense if the HWM behavior is to drop
[08:04] sustrik so, in congestion situation, that queue will grow and ultimately run out of memory
[08:04] sustrik what should it be instead?
[08:04] djc the docs say the subscriber drops messages, it doesn't say the subscriber blocks messages
[08:05] sustrik yes, for pub/sub it drops
[08:05] sustrik you would prefer blocking?
[08:05] djc no, I prefer dropping, but you're saying that it actually blocks
[08:05] djc the unbounded queues can only grow if the downstream queues block instead of drop
[08:05] sustrik did i say that?
[08:06] sustrik ah
[08:06] djc if the downstream queues dropped instead of blocked, the upstream queues wouldn't accumulate
[08:06] sustrik there are more queues there than you are seeing
[08:06] sustrik there's TCP involved which has a blocking behaviour
[08:07] sustrik so, if the network is not keeping up with the feed, it causes TCP to block
[08:07] djc but that would only be if the subscriber side doesn't get a chance to recv its messages and then drop them, right?
[08:07] sustrik if it doesn't catch up
[08:08] sustrik slow hardware, limited bandwidth or whatever
[08:08] djc yeah, we don't have slow hardware or limited bandwidth
[08:09] sustrik ok, so what's the problem once again?
[08:09] sustrik i understand the topology you have
[08:09] sustrik what's the thing you believe is broken?
[08:10] djc okay, the problem is, I set the HWM on the downstream inproc subscribers yesterday afternoon, and yesterday evening the process completely blew up, invoking the OOM killer until the server had to be restarted
[08:10] sustrik there's hwm missing somewhere i would guess
[08:11] djc yes, that's what you've been saying, but it still seems weird to me
[08:11] djc particularly because I *suspect* (but can't be sure at this point) that there were no inproc subscribers at all
[08:11] sustrik aha
[08:12] sustrik maybe there's a bug
[08:12] sustrik hard to say
[08:12] sustrik what was the message load?
[08:12] djc yeah, so for debugging it would help if I could inspect the queue size ;)
[08:12] sustrik very high?
[08:13] djc I don't have hard numbers about the message load yet, I'll make some today
[08:13] sustrik i mean, was it "publish as fast as possible" scenario?
[08:13] sustrik or rather something like "send a message each millisecond"
[08:13] djc the former
[08:13] sustrik ok
[08:14] djc it's tick data from stock exchanges all over the world
[08:14] djc for some 500 different markets
[08:14] sustrik in such cases you are basically testing congestion behaviour of the system
[08:15] sustrik the congestion happens at points where two "pipes" get connected
[08:15] sustrik and the downstream one is slower than upstream one
[08:15] sustrik for example a TCP and 0mq
[08:15] sustrik the only real solution is to set HWM everywhere
[08:16] sustrik that would guarantee that if the congestion happens at any point of the path
[08:16] djc yeah, I just think the docs don't make it clear enough why, in this kind of scenario, messages might not actually get dropped but instead block upstream
[08:16] sustrik it won't run out of memory
[08:17] sustrik hm, well, it's not really visible to the user
[08:17] djc it's visible because the server runs out of memory :P
[08:17] sustrik right
[08:18] sustrik feel free to submit a documentation patch
[08:18] sustrik hm
[08:19] sustrik "if the downstream is not able to keep with the message feed, you should expect the memory usage to grow"
[08:19] sustrik "if the HWMs are not set, this can eventually lead to memory exhaustion"
[08:19] sustrik would that do?
[08:20] djc you should emphasize that a single HWM at the downstream end might not suffice
[08:21] sustrik would you mind wording it yourself?
[08:22] sustrik for me it's kind of obvious so i am probably not the right person to write it down
[08:24] skm i have a question on using epgm
[08:24] skm i have two servers connected to 10.80.11.X
[08:24] skm and one broadcasts via epgm://eth0;239.192.1.1:8000
[08:24] skm the other receives on the same
[08:24] skm the messages aren't getting through
[08:25] skm is it because of that address? is there another address i should be using?
[08:26] sustrik the address looks ok
[08:26] guido_g a) try the ip instead of the interface
[08:26] sustrik and do you have multicast traffic enabled on your network?
[08:26] guido_g b) check if multicast routes are working
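
A sketch of skm's setup with suggestion (a) applied, i.e. using the interface's IP address instead of "eth0" in the endpoint. The specific 10.80.11.x addresses are hypothetical stand-ins following skm's description, and libzmq must be built with PGM support for epgm to work at all.

    import zmq

    ctx = zmq.Context()

    # on the sending host (hypothetically 10.80.11.1)
    pub = ctx.socket(zmq.PUB)
    pub.connect("epgm://10.80.11.1;239.192.1.1:8000")

    # on the receiving host (hypothetically 10.80.11.2)
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b"")
    sub.connect("epgm://10.80.11.2;239.192.1.1:8000")

    pub.send(b"tick")
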
[08:27] skm multicast routes - how do i check that/what do i do? *goes to google*
[08:31] skm iptables
[08:31] skm sigh
[08:47] mikko Bartzy|work: yes
[08:48] guido_g good advice :)
[08:50] guido_g python is fine for these things
[08:50] mikko PHP should be good enough for that if you respawn the workers within certain intervals
[08:50] guido_g right
[08:50] guido_g Bartzy|work: i'm mostly a python guy (for ~10 years)
[08:51] guido_g but if you need something fast, use the things you know best
[08:54] guido_g Bartzy|work: just exit the process after a certain amount of requests
[08:54] guido_g a so called watchdog
[08:59] mikko no
[08:59] mikko just create the socket in the fork
[09:08] mikko you have a process running somewhere in the background
[09:08] mikko and you just send from your web script
[09:11] mikko yes
[09:39] mikko create them after fork
[09:40] mikko i don't think they survive fork
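
A minimal sketch of mikko's point, shown in Python for consistency (the question was about PHP workers): 0MQ contexts and sockets don't survive fork(), so the child must create its own after forking. The endpoint is hypothetical.

    import os
    import zmq

    pid = os.fork()
    if pid == 0:
        # child: create the context and socket *after* the fork;
        # anything inherited from the parent is unusable
        ctx = zmq.Context()
        sock = ctx.socket(zmq.PULL)
        sock.connect("tcp://127.0.0.1:5558")  # hypothetical endpoint
        print(sock.recv())
        os._exit(0)
    else:
        ctx = zmq.Context()
        push = ctx.socket(zmq.PUSH)
        push.bind("tcp://127.0.0.1:5558")
        push.send(b"job")
        os.wait()
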
[10:09] guido_g mikko: please see http://www.zeromq.org/topics:monitoring-support and contribute, if you like
[10:21] mikko Bartzy|work: have you considered running a device?
[10:21] mikko run a device that listens for incoming jobs
[10:21] mikko and the php scripts connect to the device locally
[10:22] mikko your php scripts can be single threaded
[10:22] mikko no need to fork anything
[10:22] mikko if the script dies the device won't send anything to it anymore
[10:26] mikko have you read the zguide at all?
[10:26] mikko devices are small daemons that forward requests
[10:27] mikko the workers connect to the device
[10:27] mikko and clients connect to the device as well
[10:27] mikko the client doesn't know about individual workers, how many or so
[10:27] mikko Bartzy|work: why would you need only one worker?
[10:28] mikko isn't that exactly what you don't want?
[10:28] mikko if you have the workers as simple individual scripts you can respawn them periodically
[10:28] mikko what is the issue with multiple workers?
[10:28] mikko it simplifies your code hugely
[10:29] mikko it would be without the device
[10:30] mikko but the device maintains the connections with your clients
[10:30] mikko workers can come and go in the background
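
A sketch of the device mikko describes (he links a PHP forwarder example further down), here in Python with hypothetical ports: a small long-running daemon that clients push jobs into and workers pull jobs out of.

    import zmq

    ctx = zmq.Context()

    # transient clients (e.g. web scripts) connect here and push jobs
    frontend = ctx.socket(zmq.PULL)
    frontend.bind("tcp://127.0.0.1:5559")  # hypothetical port

    # workers connect here; jobs are load-balanced among them, and a
    # worker that dies simply stops being sent anything
    backend = ctx.socket(zmq.PUSH)
    backend.bind("tcp://127.0.0.1:5560")

    zmq.device(zmq.STREAMER, frontend, backend)  # runs forever
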
[10:33] sustrik round-robin
[10:36] sustrik it's unspecified
[10:36] sustrik in reality the order is established as peers connect
[10:36] sustrik however it changes as they disconnect or reach HWMs
[10:38] mikko the guide has examples for custom routing as well
[10:41] sustrik so you have 8 connections to load-balance among
[10:41] sustrik what happens is that messages are sent in this way: 1,2,3,4,5,6,7,8,1,2,3...
[10:42] sustrik so you'll have the tasks fairly balanced between the two machines
[10:42] mikko between machines it's 1,2,1,2,1,2,1,2,
[10:42] mikko and between workers in a machine its 1,2,3,4,1,2,3,4
[10:43] sustrik mikko: ah, there's a device on each box?
[10:43] mikko sustrik: yes
[10:43] sustrik then yes
[10:43] mikko Bartzy|work: each machine has a device
[10:43] mikko your web scripts balance between 1,2,1,2,1,2
[10:43] mikko and the devices round-robin between workers
[10:46] mikko sorry?
[10:46] mikko "so I need a device to handle the 2 devices ?"
[10:46] mikko you can start a device from php script
[10:47] mikko the php script is just used to start the device, all the processing happens inside C code
[10:47] mikko https://github.com/mkoppanen/php-zmq/blob/master/examples/forwarder-device.php
[10:47] mikko there is an example of running a device
[10:54] mikko Bartzy|work: to the device
[11:05] mikko both
[11:05] mikko you can connect zeromq socket to multiple endpoints
[11:15] sustrik the web code is transient, right?
[11:15] sustrik it opens a socket, sends a message and closes the socket?
[11:16] sustrik but then it exits, right?
[11:16] sustrik ok, so, you can run one device on the same box as webserver and make all the webscripts connect to that device
[11:17] sustrik that device in turn would connect to all devices on the worker boxes
[11:17] sustrik yes
[11:18] sustrik or, alternatively
[11:18] sustrik you can do it other way round
[11:18] sustrik the devices on the worker boxes can connect to the central device
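
A sketch of the first topology sustrik outlines, with hypothetical addresses: web scripts connect to one device on the web box, and that device's outbound socket connects to the device on every worker box; a single 0MQ socket may connect to multiple endpoints, as mikko noted above.

    import zmq

    ctx = zmq.Context()

    # device on the web box: transient web scripts push their jobs here
    frontend = ctx.socket(zmq.PULL)
    frontend.bind("ipc:///tmp/jobs")  # hypothetical local endpoint

    # outbound side: one socket connected to every worker-box device;
    # messages round-robin 1,2,1,2,... between the boxes
    backend = ctx.socket(zmq.PUSH)
    backend.connect("tcp://worker1.example.com:5559")  # hypothetical hosts
    backend.connect("tcp://worker2.example.com:5559")

    zmq.device(zmq.STREAMER, frontend, backend)
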
[13:26] Guthur I'm planning to use a queue device to develop a 'test farm' where the workers are making a number of potential long running http requests, best case maybe ~7secs, I'm wondering is it reasonable to have a client socket per test case
[13:26] Guthur or is there a better approach
[13:43] djc pyzmq is segfaulting on me :(
[13:51] djc okay, I think this is a buglet: in 2.1, I can't connect a SUB to an inproc publisher before the publisher has bound to it
[13:51] djc this worked in 2.0
[14:05] pieterh djc: that didn't work in 2.0, pretty sure of it
[14:05] pieterh it's a known issue with inproc sockets, in all versions of zmq
[14:07] djc hmmm, it worked for me, although I guess some part of the timing setup could have changed between versions
[14:08] pieterh yes, it'll be timing related
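
A sketch of the ordering that sidesteps the inproc issue pieterh describes, with a hypothetical endpoint name: unlike tcp, inproc has no reconnect machinery, so the bind must happen before any connect.

    import zmq

    ctx = zmq.Context()

    pub = ctx.socket(zmq.PUB)
    pub.bind("inproc://feed")     # bind first...

    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b"")
    sub.connect("inproc://feed")  # ...then connect; the reverse order fails
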
[14:17] travlr it just happened again.. doesn't everybody already know that 0mq will make their lives so much easier ;-}
[14:18] travlr i'm watching FLOSS Weekly (http://twit.tv/floss164) and he should be using 0mq for sure!
[14:25] travlr ha! he just mentioned 0mq, go figure.
[14:33] guido_g http://www.infoq.com/presentations/Large-Scale-Integration-in-Financial-Services <- nice definition of guaranteed messaging and mentions a well known low-level open-source messaging product
[14:36] djc btw, I got some numbers from what we have running here
[14:37] djc apparently we push up to 8000 msg/s, with a msg on average being 150 bytes
[14:37] djc so it's probably not that impressive
[14:44] Guthur guido, that MPI thing?
[14:44] Guthur AMQP
[14:44] guido_g below amqp
[14:44] guido_g on the chart
[14:44] guido_g at minute 9 something
[14:46] Guthur guido_g: hehe, I was trying to be facetious
[14:46] Guthur it does have a question mark after it though, 0MQ?
[14:47] mile is it legal to bind to a SUB socket?
[14:48] mile I need a server listening on 2 sockets, one for each communication direction
[14:48] guido_g Guthur: at least it's mentioned
[14:48] mile but I get assertion failed
[14:49] djc mile: that sounds confused
[14:49] djc listening on two sockets, for each direction?
[14:49] mile I am forwarding messages from a web socket to the backend
[14:49] djc generally in at least one of the possible directions you wouldn't bind, you'd connect
[14:49] mile so I need backend to connect to the gateway
[14:50] Guthur guido_g: I just got and email there 20 mins ago from a colleague wanting to investigate using 0MQ for real-time flow in FX processing
[14:50] Guthur Forex*
[14:50] mile so gateway is binding on a PUB and on a SUB socket
[14:50] mile instead of SUB may be PULL
[14:50] mile there I'm confused
[14:50] Guthur I had placed it on a company-wide cool tech page a couple of months ago
[14:51] djc mile: you don't "bind" on a SUB
[14:51] djc you "connect" on a SUB
[14:51] djc you "bind" on a PUB
[14:51] djc if it's the inverse thing of what you want to do, don't use PUB/SUB
[14:51] guido_g Guthur: would be nice to see something like that
[14:51] mile djc, I need exactly the inverse :)
[14:52] djc mile: so use PUSH/PULL
[14:52] mile ah, thanks! :)
[14:57] pieterh mile: you can bind a SUB and connect a PUB, it's useful in some cases
[14:58] pieterh imagine you have one stable subscriber and publishers that come and go
[14:58] pieterh djc: sorry to correct you here
[14:58] mile that is my scenario
[14:58] pieterh push-pull and pub-sub do have different semantics
[14:58] djc pieterh: that's fine, that way I get to learn something
[14:58] mile they do the load balancing?
[14:59] pieterh the main difference is if you (accidentally or on purpose) plug in a second subscriber
[14:59] pieterh with pubsub, each sub will still get each message
[14:59] pieterh with pushpull, messages will be load balanced
[14:59] mile so that is semantically the main difference between those two?
[14:59] pieterh the bind/connect direction is independent from the data flow and the pattern
[15:00] pieterh yes
[15:00] pieterh the semantic difference is loadbalancing with pushpull, fanout with pubsub
[15:00] pieterh there are *some* edge cases where the bind/connect direction makes a difference
[15:00] djc pieterh: so could I also bind REQ and connect REP, for example?
[15:00] pieterh djc: yes
[15:00] djc ah, edge cases
[15:00] mile :)
[15:00] pieterh the reason we generally don't do this is that it is confusing
[15:00] djc tell me more about the edge cases ;)
[15:01] pieterh djc: there was one issue reported on the list a while back, but I don't recall the details
[15:01] pieterh could have been HWM processing
[15:01] pieterh it worked differently depending on which sides did the bind & connect
[15:02] pieterh that seems to be a bug, IMO, since formally the bind/connect direction should be irrelevant to message flow
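
A sketch of the pattern pieterh describes for mile's gateway, with a hypothetical port: the one stable subscriber binds and the publishers that come and go connect to it. Swapping in PUSH/PULL here would load-balance each message to exactly one receiver instead of fanning it out.

    import time
    import zmq

    ctx = zmq.Context()

    # the stable gateway binds its SUB socket...
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b"")
    sub.bind("tcp://*:5561")             # hypothetical port

    # ...and transient publishers connect their PUB sockets to it
    pub = ctx.socket(zmq.PUB)
    pub.connect("tcp://127.0.0.1:5561")
    time.sleep(0.5)                      # give the connection time to establish
    pub.send(b"event from a transient publisher")

    print(sub.recv())
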
[15:02] Guthur guido_g: I'll certainly try to help this guy realise his proof of concept
[15:04] guido_g Guthur: good luck, i would be happy to have such a project :)
[17:15] cremes i have a non-0mq question for the channel
[17:15] cremes it seems like a lot of folks are using 0mq for market data dissemination for trading systems
[17:16] cremes how do you solve the issue of delivering market data (tick level details) to a web-browser client?
[17:16] cremes are you bridging 0mq to websockets, using flash, or something proprietary?
[17:20] guido_g seems no-one delivers the accumulated data :)
[17:22] pieterh anyone want a PATCH socket? :-)
[17:22] pieterh Like a PUB and PULL combined. Grab it at http://zero.mq/patch.
[17:23] pieterh cremes: the only sane way I know of (but no-one is contributing) is an HTTP bridge that turns 0MQ patterns into HTTP ones
[17:23] pieterh at some point I'll make the thing myself
[17:23] cremes pieterh: interesting...
[17:23] pieterh if you have a paying client who needs this, that'd be excellent of course
[17:23] cremes of course! if there were such a client, it would be me
[17:24] pieterh you can make HTTP patterns from the simple to the insanely complex
[17:24] cremes i'm just doing some research in advance... i'm not going to get to it for several months at least
[17:24] pieterh I'd do something RESTful with long polling, nothing more weird
[17:24] pieterh doesn't need websockets
[17:24] pieterh does need a reasonably well built web server
[17:25] cremes pieterh: updating FAQ now with steps for building 0mq on windows mingw
[17:25] cremes how do i make the 0 with a slash in it?
[17:25] cremes what's the key combo?
[17:25] pieterh cremes: you copy paste from somewhere else on the wiki :-)
[17:25] cremes cheater
[17:25] pieterh you can reprogram your keyboard if you're smart
[17:26] cremes clearly, i cannot do that ;)
[17:28] pieterh me neither, I just copy/paste all the time...
[17:31] cremes okay, FAQ updated with mingw instructions
[17:32] pieterh nice
[17:32] pieterh there're also pages per platform afair
[17:36] cremes pieterh: aren't those pages for "tuning"?
[17:37] pieterh there should be a page for Windows
[17:37] pieterh I guess it's this one: http://www.zeromq.org/docs:windows-installations
[17:38] pieterh anyhow, the Search function immediately returns the FAQ page, so it's all good...
[19:27] Guthur pieterh, the use of CLisp as the Common Lisp code link in the guide can cause a little confusion
[19:28] Guthur CLisp is an implementation of Common Lisp and so can sometimes seem like it might use specific features of that implementation
[19:29] Guthur it's only a minor point, but a CL user today was making that assumption
[19:34] pieterh Guthur: I didn't know that
[19:35] pieterh I shortened Common Lisp to CLisp for pragmatic reasons, didn't realize it was a specific implementation
[19:35] pieterh I'll abbreviate it to CL then
[19:36] Guthur pieterh, understandable
[19:36] Guthur CL would be fine
[19:36] Guthur clisp is the GNU implementation actually
[19:36] pieterh ok
[19:38] pieterh Guthur: ok, fixed, and new version published
[19:38] pieterh sorry for the delay... :)
[19:43] Guthur hehe, super service as usual
[20:38] QQ_Nick Hi, I'm fairly new to 0mq but I've encountered a problem when using a PUSH/PULL pattern when using pyzmq.
[20:40] QQ_Nick I've set a HWM on the Push side, and when no client is running this seems to work OK.
[20:42] QQ_Nick But if I start a Pull client with a HWM (say set to 10), all messages from the server are immediately sent to the client
[20:47] djc yes, so?
[20:54] QQ_Nick The server sends 1000 messages to my client which has a HWM of 10 messages set.
[21:01] djc yeah
[21:01] djc this is how it works, the client will try to receive messages as fast as possible
[21:06] QQ_Nick Does the HWM not limit the number of messages that the client queues? My client takes 1/20th second to process each message but still the server sends all 1000 messages to it immediately.
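
A sketch of QQ_Nick's scenario, again assuming the 2.x single-HWM API and hypothetical ports. The client's HWM only bounds its own incoming queue; to stop a fast server handing everything straight to a slow client, HWM must also be set on the PUSH side (before bind), and even then data sitting in TCP buffers is outside 0MQ's control, as sustrik noted earlier.

    import time
    import zmq

    ctx = zmq.Context()

    push = ctx.socket(zmq.PUSH)
    push.setsockopt(zmq.HWM, 10)       # bound the server-side queue too
    push.bind("tcp://127.0.0.1:5562")  # hypothetical port

    pull = ctx.socket(zmq.PULL)
    pull.setsockopt(zmq.HWM, 10)       # set *before* connect
    pull.connect("tcp://127.0.0.1:5562")

    for i in range(1000):
        try:
            push.send(str(i).encode(), zmq.NOBLOCK)
        except zmq.ZMQError:
            break  # queues are full; a blocking send would wait here

    while True:
        print(pull.recv())             # slow consumer; blocks once drained
        time.sleep(0.05)
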
[21:27] mikko sustrik: there?