Monday October 24, 2011

[Time] Name Message
[03:36] jsia has anyone tried receiving binary messages in zmq using the C library?
[03:36] jsia protobuf message in particular
[03:38] minrk yes, I believe people are doing that
[03:40] minrk see:
[03:41] minrk all zmq messages are binary messages
[03:41] minrk I don't use protobuf from C, but I have used protobuf from Python, and more regularly use msgpack
[03:46] mattbillenstein +1 on msgpack
[03:57] nag I'm employing ZMQ in a setup involving PHP -> C daemon and seeing that non-blocking recv's from PHP are requiring several hundred attempts to get messages
[03:57] nag when I benchmark the C daemon, however, it's able to fulfill each request in a matter of milliseconds
[03:58] nag the packets are going over a LAN - what are the standard recommendations to increase throughput in this scenario?
[03:58] mattbillenstein nag: so the php receive loop takes tens of microseconds per iteration?
[03:59] nag mattbillenstein: the php receive loop takes between .2 and 2.0 seconds
[04:00] nag whereas a separate benchmark of the C routines with the same data and conditions (as best as I'm able to reproduce) takes between 0 and .1 seconds
[04:10] mattbillenstein so you're saying there is a message in flight or in a queue somewhere while the php receive loop is still spinning?
[04:10] mattbillenstein ie, the server has responded
[04:12] nag I'm not certain *where* the message is; yes, I assume that the server has responded
[04:14] nag it's an assumption because the server responds quickly when PHP and apache are not in the picture, and not so quickly otherwise
[04:19] mattbillenstein hmm, thorny problem
[04:20] nag if I can get the retry count on PHP down to <100 consistently, I'd have some very, very impressive throughput
[04:25] mattbillenstein so this is request/response? what sort of throughput are you seeing now?
[04:27] nag depends upon the input parameters, but generally about... 1400 writes/sec, 45 reads/sec
[04:28] nag the reads should be at least on par with writes; a "write" means that PHP is making a send call, a "read" means that PHP is making the recv call
[04:28] nag in both cases it's to/from a C daemon (two different processes on two different machines)
[04:28] nag one difference is that the writes are to a local daemon, whereas the reads are across the network
[04:29] nag but is the network responsible for such an extreme degradation or are there other factors (within ZMQ)?
[04:30] mattbillenstein 1.0/45 = 22ms
[04:31] mattbillenstein which isn't good, but isn't terrible for rpc requests in a LAN
[04:31] mattbillenstein on the same box, you're going to get much much lower latencies
[04:32] mattbillenstein your reads are synchronous right? like you can't have more than one in flight at a time?
[04:33] nag no; the server can serve more than one read at a time
[04:34] nag though the client (PHP) can only consume one response from the server, per PHP instance (pageview)
[04:35] mattbillenstein yeah, so the 45 reads/sec is from one process right?
[04:36] mattbillenstein php process
[04:36] nag yes
[04:40] nag though, that's with a concurrency of 8-32 simultaneous requests
[04:40] nag it degrades as concurrency increases
[04:41] nag does that encourage the suspicion that this is caused largely by network latency?
[04:42] nag approx. 45/s with 8 simultaneous, approx. 36/s with 32 simultaneous
[04:47] mattbillenstein what is the size of the messages
[04:47] mattbillenstein ?
[04:47] mattbillenstein I wouldn't think you're saturating the network link, so the rate should remain constant with the number of clients to a point
[04:49] nag the messages are between 80 bytes and 20K
[04:49] nag the latency increases with the size
[04:50] nag I imagine that ZMQ is better able to batch messages together on the inserts, could be one factor
[04:50] nag inserts = writes, sorry
[04:53] mattbillenstein are your reads dependent?
[04:53] mattbillenstein on each other?
[04:54] nag nope, a key is specified via ZMQ to the C daemon, it responds with a discrete message
[04:55] nag to clarify: a key, and optionally a second parameter in the message, delimited by a colon
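Editor's note: the request format nag describes — a key, optionally followed by a colon and a second parameter — can be sketched as a small parser. The function name and the exact format are inferred from the chat, not taken from real code:

```cpp
#include <string>
#include <utility>

// Hypothetical parse of the request format described above:
// "key" or "key:bookmark", split on the first colon.
std::pair<std::string, std::string> parse_request(const std::string& msg) {
    std::string::size_type colon = msg.find(':');
    if (colon == std::string::npos)
        return {msg, ""};                        // key only, no second parameter
    return {msg.substr(0, colon), msg.substr(colon + 1)};
}
```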
[04:59] mattbillenstein but the php client blocks until it gets a response?
[04:59] mattbillenstein req-rep / req-rep / repeat?
[05:00] nag mattbillenstein: it blocks up to a point -- it makes several non-blocking recv calls, but exits after 1000 empty retries
[05:01] mattbillenstein I mean the client doesn't have more than one request in flight at a time?
[05:01] mattbillenstein so you're sensitive to latency insofar as request rate
[05:02] nag correct, the client has only one request in flight at a time
[05:03] nag though there are multiple clients
[05:04] mattbillenstein yeah
[05:05] mattbillenstein so request rate per client is inversely proportional to latency
[05:06] nag yes, they seem to be correlated in that way
[05:06] nag :-\
[05:06] mattbillenstein I don't think anything is really broken, just going over the lan is slower than not …
[05:06] mattbillenstein so if you do async rpc, you can probably get a lot higher performance
[05:06] mattbillenstein or perhaps make a read bulk
[05:07] nag right
[05:07] nag unfortunately, the client use case doesn't really permit that
[05:07] mattbillenstein instead of get('value') — get([ .. values ..])
[05:07] mattbillenstein so you pack more than one read into a message
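Editor's note: the batched-read idea mattbillenstein sketches — pack several keys into one request so a single round trip amortizes the network latency — could look like this. The comma delimiter and function names are illustrative only, not the daemon's actual protocol:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Pack several keys into one request message (hypothetical format).
std::string pack_keys(const std::vector<std::string>& keys) {
    std::ostringstream out;
    for (std::size_t i = 0; i < keys.size(); ++i) {
        if (i) out << ',';
        out << keys[i];
    }
    return out.str();
}

// Split the packed message back into individual keys on the server.
std::vector<std::string> unpack_keys(const std::string& msg) {
    std::vector<std::string> keys;
    std::istringstream in(msg);
    std::string key;
    while (std::getline(in, key, ','))
        keys.push_back(key);
    return keys;
}
```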
[05:09] nag the second parameter is akin to a bookmark in the dataset, a "start from here" indicator
[05:09] nag so I think the best solution given the latency revelation is to allow for a higher retry count on the first request, since it's likely to take substantially longer
[05:10] mattbillenstein first request probably involves a handshake
[05:10] mattbillenstein which will take longer
[05:10] mattbillenstein are you creating a new context each time?
[05:12] nag on the client side, yes
[05:12] mattbillenstein hmm
[05:12] mattbillenstein that might explain it
[05:12] nag ha
[05:12] mattbillenstein —or part of it
[05:13] mattbillenstein the context I think manages connections
[05:13] nag dear, dear... I suppose this demonstrates why I shouldn't be working on a Sunday
[05:15] mattbillenstein well, if you could keep a context around in the process
[05:16] mattbillenstein that might help — this isn't really my area of expertise tho — I've only used zmq a bit for some python server/client stuff
[05:16] mattbillenstein the server and client being long-lived processes
[05:16] mattbillenstein that create a context in a module global
[05:17] nag yes, and likewise, I'm unsure of the php/apache behavior with regard to memory alloc/dealloc across requests
[05:17] nag I'll find out shortly
[05:17] nag thanks for your input, mattbillenstein
[05:17] mattbillenstein np — apache recycles the php processes every so many requests right?
[05:18] nag I believe so, and if that's true then persistent context/sockets should exhibit a performance increase
[05:21] mattbillenstein yeah, the handshake isn't insignificant
[05:22] mattbillenstein you're better reusing connections
[07:57] mikko nag: you might be better off with polling rather than non-blocking loop
[07:58] nag mikko: I switched to non-blocking with a limit on retries because I found that apache would hang in certain circumstances and stop serving all requests
[07:58] mikko nag: with poll + timeout?
[07:58] nag it also makes more sense to retry with a sensible limit since it's not mission-critical that every single message reach its destination in time to be useful
[08:00] nag that could work, as well, though I'd attempt to reproduce the apache hang in that circumstance
[08:00] nag and hopefully find myself unsuccessful
[08:00] mikko does the apache hang with blocking recv/send + poll or without poll?
[08:00] mikko as in did the poll mark it readable or writable and it still hung?
[08:00] nag I'm going to benchmark a multi-part approach as well, since it fits well into my use case
[08:02] mikko 1000 retries with noblock is busy waiting
[08:02] mikko so it's going to use a lot more resources than poll
[08:04] nag I hadn't considered poll on the clientside until now
[08:05] nag I'm going to perform some benchmarks using this and other ideas
[08:08] jsia hi is it possible to pass protobuf data to zmq using C api
[08:08] jsia however the payload has a null byte within it
[08:08] jsia that is part of the protobuf message
[08:18] mikko jsia: whats the problem?
[08:18] mikko you just send .c_str (), .size () + 1
[08:19] mikko dont use strlen to calculate the length
[08:19] mikko when you receive it, use zmq_msg_size ()
[08:19] mikko and when you send it use some binary safe way to calculate length
[08:29] jsia hi
[08:30] jsia my problem yesterday was solved
[08:30] jsia its with the identity
[08:30] jsia i was getting the old identity
[08:30] jsia however when I try to pass a protobuf message
[08:30] jsia with a null terminate
[08:30] jsia it cuts the message
[08:30] jsia I used .c_str()
[08:30] jsia this is the string that I was sending
[08:31] jsia \n\021localhost-32099-1\020\000\032\060\n\026tcp://\022\bWANDERER\032\ftestconsumer
[08:31] jsia when I do message.c_str()
[08:31] jsia it stops here \n\021localhost-32099-1\020
[08:31] jsia so I had the message corrupted
[08:37] amitc1 I have a requirement that a PHP web application write messages to a non-blocking queue and other process(es) dequeue them. My current design is PHP app create a ZMQ.PUSH socket, do a connect to the destination address and send the message. While at the destination, a process (Java) create a ZMQ.PULL socket, do a bind on the same address and receive the message. However, when the dequeuer process is down (or not started), the messages that the PHP app sent are lost
[08:37] amitc1 This has the same problem of lost messages if the dequeuing process does not start by the time the above processes finishes. But adding a while(true) {} to the end of the above method body does not result in any lost messages - all the messages are delivered when the dequeuer starts. So am I correct in the assumption that the ZMQ.Context object being garbage collected causes the problem here? If yes, then how to solve this problem in a PHP web application?
[08:57] mile is it possible to run multiple instances of an app created with rebar?
[08:57] mile generated scripts seem to break
[08:58] mile once you start multiple nodes, since they rely on the app_name@host for issuing commands
[09:19] mbj amitc1: Try to close your socket, it will block until the message is sent.
[09:22] amitc1 @mbj: actually what I want is a non-blocking queue
[09:23] mikko amitc1: what should be the behaviour if you get EAGAIN?
[09:23] mikko just not send the message?
[09:24] mbj amitc1: So you have to create the zmq context so that it survives the request; I don't know how to do that in PHP. In my ruby apps the zmq_context is created on boot and closed on stop, not for each request.
[09:25] amitc1 yes, on EAGAIN don't send the message (not send the message on the socket, but maybe log somewhere and create an alert)
[09:25] mikko amitc1: are you using zeromq 2.1?
[09:26] amitc1 yes
[09:26] amitc1 @mikko: yes
[09:26] mikko amitc1: so you run dequeuer for short bursts?
[09:27] amitc1 @mikko: The dequeuer is supposed to run always, like wait in an infinite loop to listen for messages
[09:27] mikko so what is the actual problem here?
[09:27] mikko your first paragraph cuts off
[09:27] amitc1 @mikko: but if it is down for sometime (process crash), I don't want the messages to be lost
[09:27] mikko amitc1: you don't want to block but you don't want hte messages to be lost either?
[09:27] mikko then spool them locally
[09:28] amitc1 yes, that's the behaviour I was looking for, and I am able to get that if I use Java
[09:29] mikko there is no way to do that reliably in php
[09:29] amitc1 but since a php application creates a new ZMQ.Context everytime, I don't get the same behaviour
[09:29] mikko the best option is to run a device locally and have that do the spooling
[09:29] amitc1 okay, so I will have to rely on local spooling
[09:30] mikko the context is persistent by default in php
[09:31] amitc1 ok, thanks for the help
[11:51] nag are multipart messages in zmq generally regarded as a way to increase throughput, or as a way to concatenate several disparately-created datums, or other?
[13:49] jsia anyone tried protobuf serialization and passing the serialized string to zmq using C?
[13:55] mikko jsia: how are you handling the message?
[13:55] mikko are you trying to print it using c_str ()?
[13:59] cremes nag: multipart lets you do "scatter-gather" when sending, so it can help perf a little by avoiding unnecessary memcpy()
[14:00] cremes but it's not really there specifically for performance
[14:00] jsia yup
[14:01] nag cremes: understood, thanks
[14:01] mikko jsia: you need to handle the buffer with null safe functions
[14:01] jsia hmmm im using c_str right now
[14:01] mikko jsia: if you need to print it, do it char by char
[14:01] jsia yeah I did that
[14:01] jsia it's ok when I'm printing it one by one in a loop
[14:02] mikko where do you get the std::string ?
[14:02] mikko you receive a message and copy to string?
[14:02] jsia but when I try assigning each character in the character pointer
[14:02] jsia that's where the problem occurs
[14:02] jsia so when I send it to the queue the data is already corrupted
[14:02] jsia I tried modifying the enum to remove the value 0 and start at 1
[14:02] jsia it worked
[14:02] jsia it only encodes an enum with a value of 0 as \0
[14:03] mikko does the protobuf give you std::string containing the serialized data?
[14:04] jsia I need to store them in a character pointer for sending to the queue
[14:04] mikko jsia: you don't
[14:04] mikko does protobuf give you std::string?
[14:04] mikko or const char * ?
[14:05] jsia based on the c++ library it is returning string
[14:06] mikko ok
[14:06] mikko so you create zmq message:
[14:06] mikko zmq_msg_init_size (&m, str.size ());
[14:06] mikko copy the data to the message:
[14:06] mikko memcpy (zmq_msg_data (m), str.c_str (), str.size ());
[14:06] mikko and send the message
[14:07] mikko on the other side you use zmq_msg_size when you copy the data back to a buffer
[14:07] mikko which you can then unserialize
[14:07] jsia I used gdb to trace the actual memory
[14:07] jsia str.c_str ()
[14:07] jsia whenever I call this
[14:07] jsia it already chops off the message
[14:08] mikko you mean when you print it?
[14:08] jsia yup in gdb
[14:08] jsia Im already viewing the memory content
[14:08] jsia I also tried sending it coz I thought that it was just a printing problem
[14:09] jsia but the receiving end is complaining that it is not a valid protobuf message
[14:09] mikko show me the code that sends the message
[14:10] jsia zmq_msg_t message; zmq_msg_init_size (&message, strlen (string)); memcpy (zmq_msg_data (&message), string, strlen (string)); rc = zmq_send (socket, &message, 0);
[14:10] mikko no
[14:10] mikko strlen is the problem
[14:11] mikko strlen will cut on first \0
[14:12] jsia hmmm ok ill try to check if .size() will work
[14:13] mikko what is string there ?
[14:13] mikko const char * ?
[14:14] jsia string
[14:14] jsia zmq_msg_t message; zmq_msg_init_size (&message, strlen (string)); memcpy (zmq_msg_data (&message), string, strlen (string)); rc = zmq_send (socket, &message, 0); zmq_msg_close (&message);
[14:14] jsia same problem
[14:15] jsia const char * is the same as char * ?
[14:15] jsia right
[14:15] jsia its just const so u cant change it
[14:15] mikko the code you pasted still uses strlen
[14:16] jsia oops
[14:16] jsia wait
[14:16] mikko put the whole function into
[14:16] jsia int rc; zmq_msg_t message; zmq_msg_init_size (&message, string.size()); memcpy (zmq_msg_data (&message), (const char *) string.c_str(), string.size()); rc = zmq_send (socket, &message, 0); zmq_msg_close (&message); return (rc);
[14:16] mikko that should be fine
[14:16] jsia ok
[14:16] mikko what about the other end?
[14:17] jsia
[14:17] jsia the other end of the queue is already a python code
[14:17] jsia so it does not have this problem
[14:18] jsia I printed the received message at the other end
[14:18] jsia and its already cropped
[14:18] jsia i think I have the same problem with this guy
[14:19] mikko how do you init the 'string string' ?
[14:19] cremes jsia: i recommend you write some code to just focus on the serialization/deserialization aspects
[14:19] cremes and pull all of the 0mq code out
[14:19] cremes 0mq isn't at fault here but it may be obfuscating the problem
[14:19] jsia yeah I did that a while ago
[14:20] cremes write a small program that serialized some data and then immediately deserializes it within the same program
[14:20] jsia i think my problem is with the null terminated string
[14:20] cremes ok
[14:22] mikko void * memcpy ( void * destination, const void * source, size_t num );
[14:22] jsia hmmm
[14:23] jsia let me try casting a void * on it
[14:23] mikko doesnt make a difference
[14:23] mikko jsia: how do you init the string shown here:
[14:24] jsia string message
[14:24] mikko how do you initialise the string?
[14:25] mikko is the string correct size when it comes to s_send ?
[14:25] mikko you can dump the contents char by char
[14:25] jsia yup
[14:25] jsia I did that
[14:25] jsia I can display the entire string correctly
[14:25] mikko and it's correct when it comes to s_send ?
[14:25] jsia yup
[14:26] mikko in that case your problem is most likely on the consumer side
[14:26] jsia before this line
[14:26] jsia memcpy (zmq_msg_data (&message), (void *) string.c_str(), string.size());
[14:26] jsia after this line passed I checked the thing that was copied to the zmq_msg_data
[14:27] mikko and it was still the same?
[14:27] jsia when I checked the data that was copied
[14:27] jsia it was already cropped
[14:27] mikko how did you check?
[14:27] mikko strlen again?
[14:29] jsia gdb
[14:29] jsia gdb display
[14:31] cremes jsia: have you gotten this to work without using any 0mq functions yet?
[14:31] mikko jsia: gdb print breaks on null char
[14:31] jsia I was not using the print command
[14:31] jsia it shows the data with the memory address
[14:31] jsia nope
[14:32] jsia not yet
[14:32] jsia I already removed the zeromq part
[14:32] jsia its still the same
[14:32] jsia when Im using the string data type I got no issues
[14:32] mikko jsia:
[14:32] jsia the issue comes in when I need to pass char *
[14:32] mikko test this code
[14:32] mikko (gdb) display test.c_str ()
[14:32] mikko 1: test.c_str () = 0x100100098 "abc"
[14:33] jsia ok
[14:33] mikko yet iterating it with for shows the contents
[14:33] mikko jsia:
[14:33] mikko see the comment
[14:36] jsia when you added 4 what did it do?
[14:38] mikko it prints starting from fourth
[14:38] mikko your s_send is now good
[14:38] mikko most likely your issue is on the other end
[14:39] jsia ok ill work with the code that you provided
[14:39] jsia thnx
[14:39] mikko you can print the message char by char
[14:39] mikko if you want to be sure
[14:41] jsia ok
[14:41] jsia ill do that
[14:42] mbj Is it possible to block io on a socket while consuming it? To make sure no message arrives while you are reading the messages from the mailbox.
[14:42] mikko mbj: nope
[14:42] mikko mbj: why would this be necessary?
[14:43] mikko during shutdown?
[14:43] mbj no
[14:43] mikko mbj: what is the use-case?
[14:45] mbj When a worker polls for jobs and no job arrives at the middleman, the workers should receive a noop signal to requeue their work request. To make sure you're not reading the requeues it would be cool if a socket could not receive messages during this operation.
[14:46] mikko i'm not sure i fully comprehend that. what do you count as consuming?
[14:47] mikko does consume also include the work that worker does?
[14:48] mbj mikko: It is different. But I'm unable to express it in my English
[14:48] mbj mikko: But thx for your answer.
[14:49] mbj But anyway an idea that could not be expressed is not done yet :)
[14:51] jsia when you print test.c_str() it only shows the first 3 char
[14:51] jsia you need to add an <offset> to show the rest of the characters
[14:51] jsia so how do I do that with memcpy
[14:51] jsia since all I'm passing to memcpy is test.c_str()
[14:52] mikko jsia: you dont need to know the offset
[14:52] jsia even if I set the size to a big number, still when I print test.c_str() it's still cropped
[14:52] mikko jsia: you are passing .size () to memcpy
[14:52] mikko so it will copy .size () characters starting from .c_str ()
[14:52] mikko jsia: how do you print it?
[14:53] mikko
[14:53] mikko do something like that
[14:54] jsia
[14:57] mikko jsia: look at the first line
[14:57] mikko it looks ok
[14:57] jsia yeah that was the original message
[14:57] jsia that was not altered
[14:57] jsia it was still a string data type
[15:10] mikko jsia: are you still having issues with this code?
[16:33] hoppy have a problem with a long-running pub/sub. Eventually SUBs stop getting messages while the app issuing them continues to run.
[16:33] hoppy when I restart the receiver, it reestablishes.
[16:36] hoppy centos 5.2, zmq 2.1.9, C++
[16:36] cremes hoppy_: you might try posting your issue to the mailing list as well; not everyone hangs out on irc
[16:37] hoppy I will try that. thanks
[16:37] mikko hoppy_: did you manage to create a reproduction?
[16:37] cremes also, without a set of code that reproduces the issue *or* a bunch of thread backtraces (core dump maybe?) it will
[16:37] mikko or backtrace?
[16:37] cremes be pretty damn hard to help out
[16:37] cremes :)
[16:37] hoppy figures, it's pretty damn hard to reproduce.
[16:38] cremes hoppy_: if you can get gdb to produce a core dump, it might be possible for someone else to load
[16:39] cremes it on their system, poke around and figure out what's happening
[16:39] cremes if it's windows, the same idea should work too (but i don't know how)
[16:43] jond hoppy: is it similar to this one
[19:20] jmslagle Hi guys - has anyone here seen a memory leak issue using xreq/xrep with the java bindings under linux?
[19:20] jmslagle There seems to be a potential leak in zmq_msg_init_size
[19:21] cremes i don't see an open JIRA regarding a java binding leak, so i guess no one is seeing it :)
[19:22] jmslagle Ok
[19:22] cremes do you have a repro?
[19:22] jmslagle We're working on a case to reproduce it
[19:23] cremes cool
[19:33] jmslagle I wonder if this is pthreads fault.
[19:50] jmslagle Aha
[19:50] jmslagle We think we have it
[19:50] jmslagle (It's not java's fault - it's a bug in xrep it looks like)
[19:50] jmslagle Other dev working on it is going to put a bug in
[20:40] jmslagle Yeah - xrep leaks memory in certain cases
[20:40] jmslagle We're testing a patch
[21:36] freakazoid Someone recommended a C++ template library in one of the zeromq howto videos, but I can't remember the name of the library or find the video again. Anyone have any idea what I'm talking about?
[21:37] freakazoid It was kind of like Loki but I'm pretty sure it was not Loki.
[21:41] indygreg I started playing around with ZMQ 3.0 and am a little confused about identities and label messages. I was utilizing socket identity header messages to perform message routing before. now, instead of an empty message I see a 4 byte message with the label flag set
[21:41] indygreg discussion on mailing list seems to indicate that you get either the old format (empty message, no label) or the new one (label messages then data messages)
[21:42] indygreg I'm seeing a combination of the two. should I not be using ROUTER/DEALER with REQ?
[21:42] indygreg (it also hurts that czmq doesn't seem to support labels yet, so I don't have a reference implementation to learn from)
[21:48] cremes indygreg: it *is* confusing
[21:49] cremes in 3.0, ROUTER/DEALER are *no longer* synonyms for XREP/XREQ
[21:49] cremes so don't use them together
[21:49] cremes XREQ/XREP have the old style envelope and use the IDENTITY that you set via zmq_setsockopt()
[21:49] indygreg cremes: oh, I thought the docs said ROUTER/DEALER were the ones that didn't change?
[21:49] cremes ROUTER/DEALER use a 4-byte label generated by the library and the new style label wire format
[21:50] cremes if the docs do, then i think they are in error
[21:50] cremes it's easy enough to check...
[21:51] indygreg
[21:51] indygreg or does ROUTER/DEALER detect the socket type and do things automagically?
[21:52] cremes indygreg: wow, i don't know... !
[21:53] cremes i didn't realize that ROUTER was compatible with *both* DEALER and REQ
[21:53] indygreg I also find it... frustrating... that now there are effectively 2 multi-part message formats and your application needs to be aware of the differences
[21:53] mattbillenstein router / dealer are the same as xreq/xrep no?
[21:54] cremes mattbillenstein: in 2.1.x they are; in 3.0 they are *not* the same
[21:54] indygreg so now you need a send_multipart_x() and send_multipart_dealer() API instead of a unified one
[21:54] cremes indygreg: this is a good topic to take to the mailing list
[21:54] mattbillenstein ah, word
[21:54] cremes perhaps we can get some other input (not everyone is on irc all of the time)
[21:54] indygreg cremes: will do. I just wanted to make sure I wasn't on crack before I sent an email that was invalid
[21:55] cremes no, you aren't on crack... 0mq is! :)
[21:55] cremes indygreg: it's also possible that the man page in 3.0 is incorrect
[21:56] cremes the source code is the truth
[21:57] indygreg that would make me a sad panda, especially since there have been a few mailing list threads on this in the last few months
[21:57] indygreg one of which I now see was started by cremes :)
[21:58] cremes :)
[22:38] dirtmcgirt czmq is overwriting my existing SIGTERM / SIGINT handlers
[22:38] dirtmcgirt anyway to provide hooks, etc?
[22:39] dirtmcgirt or disable and then call the czmq functionality from within my own handler?
[22:46] arkaitzj Hi
[22:46] arkaitzj I've been looking at the C++ implementation of the pub/sub internals
[22:47] arkaitzj how a trie is used to check against the start of each packet
[22:47] arkaitzj message
[22:47] arkaitzj I was wondering if there is any plan to make that pluggable or configurable in any way
[22:48] arkaitzj a tag based subscription would suit my needs but I don't see it as pluggable
[22:48] arkaitzj has anything been researched on this?
[23:15] indygreg cremes: see mailing list :)
[23:30] jmslagle Ug
[23:30] jmslagle I'm really troubled by the comment at decoder.cpp:64
[23:30] jmslagle Which indicates in_progress should be a 0 byte message
[23:31] jmslagle Which is odd since I'm sitting in gdb with size = 44 at that line