IRC Log


Wednesday December 8, 2010

[Time] Name  Message
[03:25] PeterTork Hello. I have a rather odd model that I want to build in 0MQ and was looking for some advice there. We have web servers running presentation code, and they are being designed to talk to one or more processes (which we are jokingly calling the MCP) that will do various validation tests on the RPC calls they are making before handing them to another layer of servers that will handle actually doing stuff. We would like to h
[03:38] Steve-o like to h...?
[03:39] Steve-o Are you looking for Mongrel2? http://mongrel2.org/home
[03:46] Steve-o PeterTork: IRC limits line length, try the mailing list if you want to post a longer question
[03:54] bsiemon hello all, a quick question about an example from the guide. In examples/C/lruqueue.c, if one removes the id assignment from the client threads, the clients seem to no longer receive messages.
[03:54] bsiemon on os x, zmq build 2.0.10
[03:56] bsiemon it seems to only happen if the client is running within its own process rather than in a thread with the other example code
[04:04] Steve-o bsiemon: does this also occur with 2.1, or could it be a permission problem with the unix socket endpoint?
[04:04] bsiemon it did also occur in 2.1
[04:05] bsiemon but I think I am just being dumb
[04:05] bsiemon sorry to bother you
[04:06] Steve-o from what I can see from the source code I would move the endpoints to /tmp
[04:06] Steve-o it doesn't look like the code will work out of the box
[04:09] bsiemon Steve-o: I see
[04:09] bsiemon Steve-o: Thanks
[04:09] Steve-o look at zmq_ipc for example usage, http://api.zeromq.org/zmq_ipc.html
[04:13] Steve-o well try a different path, the example path should be the current directory
[04:13] Steve-o so you should see a frontend.ipc and backend.ipc file in the directory listing
[04:14] bsiemon yes I see that now
[04:15] Steve-o You might have issues with multiple users, discussion can be found on the list: http://www.mail-archive.com/zeromq-dev@lists.zeromq.org/msg03151.html
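For reference, a minimal sketch of the change Steve-o suggests, assuming the lruqueue example's XREP frontend/backend sockets; the /tmp paths are illustrative:

    #include <zmq.h>
    /* Bind the queue endpoints under /tmp rather than the current
       directory, so working-directory permissions can't interfere. */
    void *context  = zmq_init (1);
    void *frontend = zmq_socket (context, ZMQ_XREP);
    void *backend  = zmq_socket (context, ZMQ_XREP);
    zmq_bind (frontend, "ipc:///tmp/frontend.ipc");
    zmq_bind (backend,  "ipc:///tmp/backend.ipc");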
[04:16] bsiemon Steve-o: If I put the call to s_set_id back into the client code
[04:16] bsiemon Steve-o: It works without a hitch
[04:17] Steve-o oh ok
[04:18] bsiemon Is there anything special to do with uuid generation on OS X?
[04:18] bsiemon from what I have read the lib calls seem to be different from OS X to Linux
[04:19] bsiemon I am sure I will slog through it, thanks for the help!
[04:21] Steve-o bsiemon: pass, some Linux versions have another library with the same name
[04:22] Steve-o it does look the same from the documentation, http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man3/uuid.3.html
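For context, the s_set_id call discussed above boils down to setting ZMQ_IDENTITY before connecting; a sketch of roughly what the guide's helper does, assuming a client socket named client:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zmq.h>
    /* Give the socket an explicit, printable identity before connecting,
       so the queue can route replies back to it deterministically. */
    char identity [16];
    snprintf (identity, sizeof identity, "%04X-%04X",
              rand () % 0x10000, rand () % 0x10000);
    zmq_setsockopt (client, ZMQ_IDENTITY, identity, strlen (identity));
    zmq_connect (client, "ipc:///tmp/frontend.ipc");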
[04:42] Steve-o lol, the lruqueue example aborts on Linux
[08:38] zedas sustrik: looks like debian included mongrel2, but included the old 2.0.6 version before the API change for zmq_init
[08:39] sustrik hm
[08:39] sustrik 0mq is in debian-unstable
[08:40] sustrik we deliberately haven't pushed it to the stable as we weren't 100% sure about the API by then
[08:47] Guthur re: my question yesterday, can anyone confirm that poll timeouts are working with sub nodes?
[08:50] sustrik Guthur: are they not?
[08:51] Guthur it doesn't seem to be working for me, the code works fine when I set -1
[08:51] Guthur but if I actually set a timeout it seems to time out quickly and I receive nothing
[08:56] sustrik what version are you using?
[09:00] Guthur 2.0.10
[09:00] sustrik ah, that's a documented behaviour
[09:00] Guthur sustrik: ^
[09:00] Guthur oh, you've seen it, sorry
[09:00] sustrik zmq_poll waits for "up to" the given number of microseconds
[09:01] Guthur Yeah, I saw the "up to"
[09:01] sustrik anyway, this has been changed in 2.1
[09:01] Guthur but it's not actually 'working' in any sense
[09:01] sustrik it honours timeouts precisely
[09:01] sustrik is there a bug?
[09:01] Guthur ok, but surely as it stands in 2.0.10 it's pretty much broken
[09:02] sustrik what's the problem?
[09:02] Guthur Well, it times out so quickly that I don't receive anything
[09:02] Guthur and this isn't exactly a slow connection
[09:03] Guthur It's local TCP
[09:03] sustrik well, it's documented behaviour
[09:03] sustrik you have to restart the poll in such case
[09:03] sustrik or switch to 2.1 :)
[09:03] Guthur hehe, ok
[09:03] sustrik actually, it's a performance issue
[09:04] sustrik the poll that can exit prematurely is faster than the one honouring the timeout
[09:04] sustrik what we've done in 2.1 was adding ZMQ_FD
[09:04] Guthur so I suppose in 2.0.10 one would have to engineer some timeout-honouring mechanism on top
[09:04] sustrik which allows you to poll and handle timeouts yourself
[09:04] sustrik efficiently
[09:05] sustrik zmq_poll is less efficient but honours timeouts
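A hedged sketch of the ZMQ_FD approach sustrik describes, assuming a 2.1 socket named socket and a millisecond timeout: the OS-level poll honours the timeout, and ZMQ_EVENTS tells you whether a message is actually ready:

    #include <poll.h>
    #include <stdint.h>
    #include <zmq.h>
    int fd;
    size_t fd_size = sizeof fd;
    zmq_getsockopt (socket, ZMQ_FD, &fd, &fd_size);

    struct pollfd pfd = { fd, POLLIN, 0 };
    poll (&pfd, 1, timeout_ms);    /* OS poll, timeout honoured precisely */

    uint32_t events;
    size_t events_size = sizeof events;
    zmq_getsockopt (socket, ZMQ_EVENTS, &events, &events_size);
    if (events & ZMQ_POLLIN) {
        /* a whole message can now be received without blocking */
    }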
[09:05] Guthur so really it's just a non-blocking poll
[09:05] Guthur essentially
[09:05] Guthur the 2.0.10 version that is
[09:05] sustrik Guthur: it blocks
[09:05] sustrik but can exit at times
[09:05] sustrik basically, it exits when "something happens"
[09:06] sustrik say a new connection is established or somesuch
[09:06] Guthur the behaviour I see is that it times out in under a second
[09:06] Guthur no matter what I set
[09:06] sustrik possibly the connection being established
[09:06] Guthur I'll try 2.1
[09:07] sustrik ok
[09:07] Guthur my only worry with 2.1 is that it is not yet very mature
[09:07] Guthur i.e. there has been very little room for feedback
[09:09] sustrik it's up to you
[09:09] sustrik in 2.0 you can simply add a wrapper function on top of zmq_poll
[09:09] sustrik that would check whether the timeout was reached
[09:10] sustrik and restart the poll if it was not
[09:10] Guthur it depends on how soon we roll out to our app
[09:10] Guthur our rollout certainly wont be until some time in 2011 so it might be ok
[09:11] sustrik that's three weeks :)
[09:11] sustrik anyway, even the wrapper function is viable
[09:11] sustrik it's 10 lines of code
[09:13] Guthur hehe, I suppose writing a wrapper will give me some coding to do this morning
[09:14] sustrik now = gettimeofday ();
[09:14] sustrik while (true) {
[09:14] sustrik events = zmq_poll ();
[09:14] sustrik if (!events && now + timeout > gettimeofday ())
[09:15] sustrik continue;
[09:15] sustrik break;
[09:15] sustrik }
[09:15] sustrik that's it
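A compilable reading of that sketch, assuming the 2.0 API (where zmq_poll takes a timeout in microseconds); the function name is made up:

    #include <stdint.h>
    #include <sys/time.h>
    #include <zmq.h>

    static int64_t now_us (void)
    {
        struct timeval tv;
        gettimeofday (&tv, NULL);
        return (int64_t) tv.tv_sec * 1000000 + tv.tv_usec;
    }

    /* Retry zmq_poll until it reports events, fails, or the full
       timeout has genuinely elapsed. */
    static int poll_honouring_timeout (zmq_pollitem_t *items, int nitems,
                                       int64_t timeout_us)
    {
        int64_t deadline = now_us () + timeout_us;
        while (1) {
            int64_t remaining = deadline - now_us ();
            if (remaining < 0)
                remaining = 0;    /* negative would mean "wait forever" */
            int events = zmq_poll (items, nitems, (long) remaining);
            if (events != 0 || now_us () >= deadline)
                return events;    /* events, an error (-1), or a real timeout */
        }
    }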
[09:15] Guthur man you just sucked the fun out of it, hehe
[09:15] sustrik oops
[09:20] Guthur ah that's better
[09:20] Guthur three lines of C# code
[09:20] sustrik !
[09:21] Guthur Stopwatch timer = new Stopwatch(); timer.Start(); while (timer.ElapsedMilliseconds < timeout && ctx.Poll(items, timeout) == 0);
[09:21] Guthur oh, it didn't do new lines, sorry about that
[09:21] sustrik np
[09:21] sustrik i see it
[09:21] sustrik yes, that was the idea with 2.0.x
[09:21] sustrik the performance issue i've mentioned before is the need to measure time at the beginning of the function
[09:22] sustrik if you don't care about honouring timeouts precisely you can avoid that overhead
[09:23] Guthur no problem, I had read the non-honouring part, I should have heeded it more
[09:24] Guthur I kind of assumed it was more a precision thing and that it would just be out by a few micros
[09:24] Guthur damn assumptions
[09:26] Guthur cheers for clearing that up sustrik
[09:27] sustrik you are welcome
[09:32] Guthur minor bug in mine there, I assumed milliseconds
[09:32] Guthur just a small change though; microseconds is really high res
[17:41] mikko howdy
[17:58] jhawk28 sustrik: has the zmq team thought about providing a durable pub/sub socket?
[18:00] jhawk28 durable - req/rep for requesting off the queue, fully persistent on disk for when the publisher goes down
[18:01] jhawk28 http://sna-projects.com/kafka/design.php has some interesting approaches to get it to be fast
[18:02] sustrik what if the disk on the middle node fails :)
[18:02] jhawk28 no, not broker style
[18:03] sustrik saving data to disk on endpoints?
[18:03] sustrik you can do that in your app
[18:04] sustrik this is what most applications with reliability requirements do anyway
[18:04] sustrik they have data on disk (in DB or something)
[18:05] mikko i thought about this issue at some point as well
[18:05] mikko came to the conclusion that it is a lot easier to do at the app level
[18:05] mikko as you can choose different kinds of backends (such as bdb etc.) without having to bake that requirement into the core
[18:05] sustrik right
[18:05] sustrik and it's kind of impossible to push reliability into the network
[18:06] sustrik the networks are notoriously unreliable
[18:06] mikko maybe we could provide an example of durable device
[18:06] mikko rather than incorporate into core
[18:07] jhawk28 I was thinking that I would start with the device route
[18:07] mikko maybe you could do a generic "callback" device
[18:07] jhawk28 but, I was thinking that it might be cleaner if it ended up as a socket
[18:08] mikko have function pointers called when a message is received and sent
[18:08] mikko jhawk28_: cleaner from what point of view?
[18:08] jhawk28 cleaner API
[18:08] jhawk28 the end user API
[18:08] mikko from the application code's point of view yes, but it does mean a) designing durable storage and b) adding a large dependency to the core
[18:09] jhawk28 a)
[18:09] mikko designing persistent storage for this kind of task is not a trivial exercise
[18:09] sustrik well, i've seen how the AMQP working group struggled for years to meld networking and persistence together into a single spec
[18:09] jhawk28 base it off kafka or bitcask
[18:09] sustrik and it doesn't really lead to anything sane
[18:09] mikko if you look, for example, at how long it has taken Varnish to nail it down
[18:10] jhawk28 kafka is probably closer since it just needs a queue mechanism, but they are both kinda similar
[18:11] jhawk28 biggest problem I've seen with message queue persistence is that they can't scale to a huge number of messages
[18:11] mikko you possibly want a cleanup mechanism, expiration, maybe you want the store to be accessible from other programs as well
[18:12] jhawk28 kafka basically uses a directory as the message queue
[18:12] jhawk28 files are created in an ordered fashion
[18:13] jhawk28 the client is what determines what message id to start from
[18:13] mikko is there a concept of expiration?
[18:13] jhawk28 the cleanup is by default handled by time or size
[18:14] jhawk28 other mechanisms can be plugged in I believe
[18:15] jhawk28 the client effectively controls its own state so it can restart if needed
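A toy sketch of the device route being discussed, assuming the 2.0 message API, single-part messages, and frontend/backend sockets already set up; it journals each message to an append-only file before forwarding, with the expiration/cleanup questions above left open:

    #include <stdio.h>
    #include <zmq.h>
    /* Hypothetical durable device: append each payload to a journal
       before passing it from the frontend to the backend. */
    FILE *journal = fopen ("/tmp/messages.journal", "ab");
    while (1) {
        zmq_msg_t msg;
        zmq_msg_init (&msg);
        if (zmq_recv (frontend, &msg, 0) == 0) {
            fwrite (zmq_msg_data (&msg), 1, zmq_msg_size (&msg), journal);
            fflush (journal);
            zmq_send (backend, &msg, 0);
        }
        zmq_msg_close (&msg);
    }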
[18:16] mikko bbl, need to commute home ->
[19:28] mikko back
[19:47] wayneeseguin nice
[19:48] cremes welcome
[19:49] wayneeseguin I am new to this and I am trying to determine which socket type to use
[19:50] wayneeseguin I am playing around with an example just to see if I can get it working.
[19:50] cremes wayneeseguin: definitely read the guide at the zeromq site
[19:50] cremes it covers a LOT of novice questions
[19:50] wayneeseguin I had, but perhaps I should do it again as that was a month or so ago
[19:53] cremes alternately, ask your question
[19:54] cremes if it's answered in the guide, i'll point you to it
[19:54] cremes otherwise, i'll try to answer it
[19:57] wayneeseguin I am fine reading docs :)
[19:57] wayneeseguin I want to have a client-server relationship where the client sends updates to the server on an interval and the server can request information from the client at random
[19:57] wayneeseguin I am unsure if there is one socket type that can facilitate that or if I should be using two distinct ones.
[19:58] sustrik it seems there are two different flows there
[19:59] sustrik PUB/SUB for updates
[19:59] sustrik REQ/REP for requests
[19:59] wayneeseguin ok
[20:01] cremes exactly
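Roughly, the client side of those two flows might be wired like this (2.0-era API; the endpoints are made up): a PUB socket pushes the periodic updates out, and a REP socket answers the server's ad-hoc REQ requests:

    #include <zmq.h>
    void *context  = zmq_init (1);

    /* Flow 1: periodic updates, client PUB -> server SUB */
    void *updates  = zmq_socket (context, ZMQ_PUB);
    zmq_connect (updates, "tcp://server:5556");

    /* Flow 2: ad-hoc queries, server REQ -> client REP */
    void *requests = zmq_socket (context, ZMQ_REP);
    zmq_connect (requests, "tcp://server:5557");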
[21:48] toni hey there. I have an issue with an XREQ-client / XREP-server architecture, where the client should resend a message to another server in case the one he first sent the message to dies. I described it in detail here: https://gist.github.com/733965
[22:19] cremes toni__: you are understanding the problem correctly and 0mq is doing exactly what it should
[22:19] cremes when your server dies, the 0mq socket on the client should remove it from its list of usable endpoints
[22:20] toni cremes: exactly
[22:20] cremes you are likely simulating a server death by putting it to sleep or something; make sure that when the server dies it goes away completely
[22:20] cremes that way the socket will disconnect and you will get the behavior that you want
[22:20] cremes if it hangs forever, as far as 0mq is concerned it is still a valid endpoint
[22:22] toni cremes: Yes, I am simulating a server death...
[22:22] toni I should make use of socket.close()
[22:23] cremes right; then you'll get what you want
[22:23] toni cremes: Thanks for your hint!
[22:23] cremes np
[22:51] toni cremes: Okay, I tried it with socket.close() but this does not seem to have any effect. I posted my 2 little code-snippets on gist: https://gist.github.com/734071
[22:54] toni It seems as if the client were still connected to the server that died
[22:55] cremes toni__: how many of these servers are you starting up?
[22:55] toni I start 3 servers, and then kill two of them
[22:56] cremes and after they print "socket closed" and die, your client is still trying to send new messages to them?
[22:56] toni yes it is
[22:56] cremes try this...
[22:56] cremes 1. have your client start and connect to 3 addresses *first*
[22:56] cremes 2. start one server
[22:56] cremes see what your client does
[22:57] cremes 3. start up the other 2 servers
[22:57] cremes see what your client does
[22:57] cremes 4. kill 2 of your servers
[22:57] cremes see what your client does
[22:57] toni cremes: I'll try it right now
[23:12] toni cremes: I connected the client first. Then I started the first server. The client tries to send also to the other servers, which are not running yet; that's why it's very slow at first. Then I start the next server and it goes faster. Starting the third server, the client doesn't get any timeouts and it's fast. Stopping the servers, it gets slow again...
[23:14] toni seems like the socket still tries to send to the addresses that are not available yet
[23:35] cremes toni__: weird; why don't you ask on the mailing list and include a gist of your code along with the results you have seen
[23:35] cremes maybe someone else can give a hint
[23:36] toni cremes: I will post my snippets and the testcase. Thanks for your help.
[23:43] oxff is there any central message broker for PUB-SUB in the src distribution?
[23:43] oxff basically I'm looking for something like rabbitmq
[23:43] oxff with an asynchronous i/o compatible c client library
[23:43] oxff so i want to have a central server
[23:43] oxff (cluster)
[23:43] oxff which publishers as well as subscribers can connect to
[23:47] oxff ah, reading the doc, I get that this is not in the scope of zeromq
[23:47] oxff too bad :/
[23:54] oxff ah, devices seem to be what I want
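For the record, a sketch of a central PUB/SUB broker built from the device oxff found, assuming the 2.1-era zmq_device call; the ports are illustrative. Publishers connect to the SUB side, subscribers to the PUB side:

    #include <zmq.h>
    void *context  = zmq_init (1);

    /* publishers connect here */
    void *frontend = zmq_socket (context, ZMQ_SUB);
    zmq_setsockopt (frontend, ZMQ_SUBSCRIBE, "", 0);
    zmq_bind (frontend, "tcp://*:5559");

    /* subscribers connect here */
    void *backend  = zmq_socket (context, ZMQ_PUB);
    zmq_bind (backend, "tcp://*:5560");

    zmq_device (ZMQ_FORWARDER, frontend, backend);    /* blocks forever */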