[Time] Name | Message |
[01:25] falconair
|
hi all, a few newbie questions: can zeromq api be used to do raw tcp/ip programming?
|
[02:21] xpromache1
|
hello guys, I'm considering zmq for replacing corba in an application
|
[02:21] xpromache1
|
the typical rpc stuff I think I know how to implement
|
[02:22] xpromache1
|
but at some point a client can send a request to the server that will have to send a large amount of data back to the client
|
[02:22] xpromache1
|
how is this best done?
|
[02:22] xpromache1
|
I guess I need to answer the request with "OK, here it comes", and then to open another socket for pushing the data?
|
[02:22] xpromache1
|
what sort of socket?
|
[04:53] cremes
|
falconair: no
|
[04:53] cremes
|
it's a higher level api than that with a few abstractions
|
[04:54] cremes
|
make sure to read the guide linked off of the main page
|
[04:54] cremes
|
xpromache1: open a socket for each kind of messaging pattern you plan to use
|
[04:54] cremes
|
if you are waiting for a request/reply to start a large data download, use the REQ/REP sockets for that purpose
|
[04:55] cremes
|
for the big data download, consider PUB/SUB or PUSH/PULL depending on what you need
|
[04:55] cremes
|
you may even want to look at PAIR; it really depends on your app requirement
|
[04:56] cremes
|
make sure to read the guide linked off of the main page; it may help you pick the right pattern and its associated sockets
|
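A minimal sketch of the two-socket pattern cremes describes, assuming pyzmq; the inproc endpoint names and row data are illustrative, and over tcp each socket would get its own port:

```python
import zmq

ctx = zmq.Context()

# "server" side: a REP socket for the query, a PUSH socket for the bulk data
rep = ctx.socket(zmq.REP)
rep.bind("inproc://query")
push = ctx.socket(zmq.PUSH)
push.bind("inproc://data")

# "client" side: REQ for the query, PULL for the download
req = ctx.socket(zmq.REQ)
req.connect("inproc://query")
pull = ctx.socket(zmq.PULL)
pull.connect("inproc://data")

req.send(b"SELECT * FROM t")            # the small query
query = rep.recv()
rep.send(b"OK, here it comes")          # answer on the req/rep pair
for row in (b"row1", b"row2", b"row3"):
    push.send(row)                      # stream the rows on the data pair

ack = req.recv()
rows = [pull.recv() for _ in range(3)]
```

Each pattern keeps its own socket pair, so the strict request/reply lockstep never has to carry the bulk transfer.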
[04:56] xpromache1
|
ok, thanks for the answer
|
[04:56] xpromache1
|
is there a way to use all these over a limited number of ports
|
[04:57] xpromache1
|
like one for req/rep
|
[04:57] xpromache1
|
and another one for the data transfer
|
[05:00] cremes
|
absolutely; each socket should get its own transport (tcp/pgm/ipc/inproc) and port (not necessary for transports other than tcp)
|
[05:01] cremes
|
0mq doesn't use "random" ports; you specifically connect or bind to whatever you need
|
[05:02] xpromache1
|
yes but I want to use the same port for all clients
|
[05:02] xpromache1
|
or maybe this doesn't make sense
|
[05:02] xpromache1
|
I mean it only makes sense when there are firewalls in between
|
[05:03] xpromache1
|
but this is not the use case where zeromq should be used, right?
|
[05:03] cremes
|
xpromache1: not really sure what you are trying to do
|
[05:04] cremes
|
if you have a complex use-case, you may consider posting it to the ML for comment
|
[05:04] xpromache1
|
not really complex
|
[05:05] xpromache1
|
it's like a database driver
|
[05:05] xpromache1
|
you execute some small queries from time to time
|
[05:05] xpromache1
|
but occasionally you want to retrieve a lot of data
|
[05:05] xpromache1
|
and I prefer not to have a request for each row
|
[05:07] cremes
|
then you would probably use the pattern i suggested above; REQ/REP for the query, PUB/SUB for the data transmission
|
[05:08] cremes
|
but if this is over tcp, then the req/rep would get its own port, as would the pub/sub
|
[05:08] cremes
|
they cannot share the same port simultaneously
|
[05:08] xpromache1
|
so if I have 10 clients connected simultaneously, how many ports do I need?
|
[05:09] cremes
|
2, one for the req/rep and 1 for the pub/sub
|
[05:10] xpromache1
|
but then all the clients will get all the data even from the requests of the other clients?
|
[05:10] cremes
|
they could ignore it if the published topic is not something they are interested in; alternately, use the XREQ/XREP socket types
|
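The topic-filtering idea cremes mentions could look roughly like this, assuming pyzmq; the `client-7.` topic prefix is an illustrative convention, not anything 0MQ itself defines:

```python
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://feed")

sub = ctx.socket(zmq.SUB)
sub.connect("inproc://feed")
sub.setsockopt(zmq.SUBSCRIBE, b"client-7.")     # only our own topic prefix

pub.send(b"client-3.someone else's rows")       # dropped by the filter
pub.send(b"client-7.rows meant for this client")
msg = sub.recv()
```

Every subscriber still shares the one published stream; the subscription prefix is what keeps each client from seeing the others' data.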
[05:10] cremes
|
have you read the guide yet?
|
[05:11] xpromache1
|
I did partly but this is not clear to me
|
[05:11] xpromache1
|
exactly what you can multiplex on a server socket
|
[05:12] cremes
|
i may not be able to clarify it completely for you :)
|
[05:12] cremes
|
a 0mq socket is a smart socket; it can connect/bind to multiple endpoints simultaneously
|
[05:13] cremes
|
so in your situation the server would likely bind a REP or XREP socket
|
[05:13] xpromache1
|
yes, that's clear for the response reply pattern
|
[05:13] cremes
|
the clients would come in with REQ (or XREQ) sockets via "connect" call; 0mq handles the multiplexing for you
|
[05:13] xpromache1
|
what is not clear is for the data transfer
|
[05:14] cremes
|
well, i suggested pub/sub but maybe that's not appropriate
|
[05:14] xpromache1
|
so a client connects and sends a request for the data
|
[05:14] cremes
|
you could use the xreq/xrep sockets for the data transfer
|
[05:14] cremes
|
go on
|
[05:14] xpromache1
|
I can send a reply saying connect here to get the data
|
[05:14] cremes
|
sure
|
[05:14] xpromache1
|
but what does it mean here
|
[05:15] cremes
|
"here" where? explain.
|
[05:15] xpromache1
|
that's my question
|
[05:15] xpromache1
|
what do I answer to the client
|
[05:15] xpromache1
|
where to connect to get its data
|
[05:15] xpromache1
|
do I have to start a new thread, create a new socket on a random port, and start pushing data once a client is connected
|
[05:16] xpromache1
|
or there is a smarter way
|
[05:16] cremes
|
you could send in your reply something like: :transport => :tcp, :address => '10.10.4.48', :port => 1800
|
[05:16] cremes
|
then the client would know where to go for getting the data
|
[05:16] cremes
|
you can't do random ports without informing the client what it will be
|
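One way to carry that endpoint information in the reply, sketched with plain JSON; the address and port are the illustrative values from cremes' example, and the field names are an assumption:

```python
import json

# server side: tell the client where the data socket lives
reply = json.dumps({"transport": "tcp", "address": "10.10.4.48", "port": 1800})

# client side: parse the reply and build the 0MQ endpoint string to connect to
info = json.loads(reply)
endpoint = "%s://%s:%d" % (info["transport"], info["address"], info["port"])
```

The client would then pass `endpoint` straight to its data socket's `connect` call.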
[05:17] xpromache1
|
yes, I realize that
|
[05:17] xpromache1
|
I was hoping that it is somehow possible to do without random ports
|
[05:17] cremes
|
you *could* use another thread or use the poll api to do it all within one
|
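The single-threaded option cremes mentions is the poll API: one thread watches several sockets for readiness. A minimal sketch, assuming pyzmq and an illustrative inproc endpoint:

```python
import zmq

ctx = zmq.Context()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://control")
req = ctx.socket(zmq.REQ)
req.connect("inproc://control")

# register the sockets you want one thread to service
poller = zmq.Poller()
poller.register(rep, zmq.POLLIN)

req.send(b"ping")                          # something arrives ...
events = dict(poller.poll(timeout=1000))   # ... and poll reports it (ms)
ready = events.get(rep) == zmq.POLLIN
msg = rep.recv() if ready else None
```

In a real server the data-transfer socket would be registered with the same poller, so both patterns are serviced from one loop.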
[05:17] xpromache1
|
just one fixed port
|
[05:17] xpromache1
|
like I have port x for request/reply
|
[05:17] xpromache1
|
and port y for data transfer
|
[05:17] cremes
|
sure, you could use 1 fixed port but you could run into issues if you have lots of concurrent requests/replies
|
[05:18] cremes
|
you would probably want to use xreq/xrep sockets so your data transfer could target the right client
|
[05:18] cremes
|
take a look at the "advanced routing" section of the guide; i'm pretty sure they wrote about that
|
[05:19] xpromache1
|
but xreq/xrep needs one message in both ways
|
[05:19] cremes
|
not true
|
[05:19] xpromache1
|
I would need something like xpush xpull
|
[05:19] cremes
|
not true
|
[05:19] cremes
|
req/rep enforces a strict request/reply/request/reply pattern
|
[05:19] cremes
|
the xreq/xrep sockets do *not* enforce that
|
[05:20] cremes
|
so, using xreq/xrep, you could send a single request and get a multiple message reply
|
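A sketch of the one-request, many-replies exchange, assuming pyzmq; `ROUTER`/`DEALER` are the later names for the XREP/XREQ socket types, and the endpoint and row data are illustrative:

```python
import zmq

ctx = zmq.Context()

# XREP/XREQ are called ROUTER/DEALER in later 0MQ releases
router = ctx.socket(zmq.ROUTER)
router.bind("inproc://rpc")
dealer = ctx.socket(zmq.DEALER)
dealer.connect("inproc://rpc")

dealer.send(b"get rows")                     # one request ...
identity, request = router.recv_multipart()  # ROUTER sees who asked

for row in (b"row1", b"row2", b"row3"):      # ... several replies,
    router.send_multipart([identity, row])   # routed back to that client

rows = [dealer.recv() for _ in range(3)]
```

The identity frame the ROUTER prepends is what lets the data transfer target the right client over a single fixed port.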
[05:21] xpromache1
|
ok, I didn't know that
|
[05:22] xpromache1
|
I will look further into it, thanks for the hints
|
[05:22] cremes
|
no prob...
|
[05:22] cremes
|
time for bed!
|
[05:26] xpromache1
|
me too, I have to go to work in 2 hours :(
|
[06:15] bkad
|
anyone know how to get the zmq_server binary? cant find it in the installation instructions
|
[06:22] guido_g
|
what server?
|
[06:23] bkad
|
zmq_server?
|
[06:23] bkad
|
there's lots of references to it in the zmq docs
|
[06:24] bkad
|
but the ./configure, make, make install didn't build that binary
|
[06:24] guido_g
|
the docs are a bit outdated i fear
|
[06:25] bkad
|
in http://www.zeromq.org/code:examples-exchange
|
[06:25] bkad
|
under Running It
|
[06:25] guido_g
|
there is a devices directory w/ some executables
|
[06:25] guido_g
|
ouch
|
[06:25] guido_g
|
this example is way outdated
|
[06:25] sustrik
|
see the top of the page
|
[06:25] sustrik
|
WARNING: This text is deprecated and refers to an old version of ØMQ. It remains here for historical interest. DO NOT USE THIS TO LEARN ØMQ.
|
[06:25] guido_g
|
it's even pre zeromq2 afair
|
[06:26] bkad
|
ahhh, oops
|
[08:34] sustrik
|
mikko: there's a build patch on the mailing list
|
[08:34] sustrik
|
can you give it a look?
|
[08:35] sustrik
|
mato is on his way to new zealand for next few days
|
[08:56] mikko
|
sustrik: sure
|
[08:56] mikko
|
it looks like my flight is departing today
|
[08:56] mikko
|
cautiously excited
|
[08:57] sustrik
|
good luck :)
|
[09:28] mikko
|
reviewed
|
[09:29] sustrik
|
thanks
|
[09:56] mikko
|
Steve-o: hi
|
[09:57] mikko
|
what does the scons build produce?
|
[09:57] mikko
|
Makefiles?
|
[09:57] Steve-o
|
No it runs commands in Python directly
|
[09:58] Steve-o
|
CMake is the Make wrapper
|
[09:58] mikko
|
ok
|
[09:58] Steve-o
|
you are supposed to be able to see the commands by "scons -Q" but I haven't seen that working on my Scons files in a long time
|
[09:59] mikko
|
would've been nice if it produced something that could be directly used
|
[09:59] Steve-o
|
well, all the other build systems kinda suck though :P
|
[10:00] Steve-o
|
Although I think I had to patch Scons again for Intel C on Linux, urgh
|
[10:00] Steve-o
|
CMake just doesn't support anything interesting
|
[10:02] mikko
|
interesting development, my airline sends twitter DMs to inform about flights
|
[10:04] Steve-o
|
Cathay likes to send emails after the flight has left
|
[10:04] mikko
|
im wondering if we should split OS specific builds to separate automake files in zmq
|
[10:04] mikko
|
it might be more maintainable than conditionals in one file
|
[10:04] Steve-o
|
Makes patching a bit cleaner
|
[10:06] Steve-o
|
unless you want to write several new autoconf rules
|
[10:06] mikko
|
yeah
|
[10:08] Steve-o
|
only ~24 tests required :P
|
[10:09] Steve-o
|
many should already have autoconf macros available
|
[10:11] mikko
|
it feels like duplicating work
|
[10:11] mikko
|
hmm
|
[10:11] mikko
|
would be nicer to launch scons from zmq build
|
[10:11] mikko
|
and then just link
|
[10:12] Steve-o
|
I've looked for a long time for building Debian packages
|
[10:13] Steve-o
|
there isn't much out there despite all the big packages using SCons (Blender, Quake 3, etc)
|
[10:14] Steve-o
|
as in, for nice hooks for the deb-buildpackager
|
[10:14] Steve-o
|
you can always build a basic scons command based off uname otherwise
|
[10:15] mikko
|
autoconf might be slightly dreadful but it's fairly well supported
|
[10:19] Steve-o
|
I don't think you can easily get multiple build environments in autoconf/automake though
|
[10:19] Steve-o
|
Scons is killing me on crap compiler dependencies for unit tests
|
[10:29] Steve-o
|
autoconf is a bit too cumbersome for testing different compilers on one platform
|
[10:29] Steve-o
|
I'm sure there has to be a better way \:D/
|
[10:30] mikko
|
it is slightly cumbersome yes
|
[14:15] ptrb
|
correct me if I'm wrong, but if I'm sitting in a blocking recv() on a socket in one thread, and I zmq_term() the context in another thread, the recv() should return (in C++, should throw) -- right?
|
[14:22] sustrik
|
ptrb: right
|
[14:22] sustrik
|
ETERM
|
[14:26] ptrb
|
hmm... the zmq_term() goes through OK, but I'm still blocking in the other thread... time for a minimal test case I guess :\
|
[14:28] sustrik
|
yes, please
|
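The minimal test case ptrb mentions could look roughly like this, assuming pyzmq; the expected behavior is the one sustrik confirms, namely that the blocked recv fails with ETERM:

```python
import threading
import time
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("inproc://blocked")

errnos = []

def worker():
    try:
        sock.recv()                    # blocks: nothing is ever sent
    except zmq.ZMQError as e:
        errnos.append(e.errno)         # expect zmq.ETERM
    sock.close()

t = threading.Thread(target=worker)
t.start()
time.sleep(0.2)                        # let the worker block in recv()
ctx.term()                             # should unblock the recv with ETERM
t.join()
```

Note that `ctx.term()` itself blocks until the worker has closed its socket, which is why the close happens inside the worker.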
[15:48] mikko
|
sustrik: zmq_term(ZMQ_NOBLOCK) would be useful
|
[15:48] mikko
|
or another function call
|
[15:51] cremes
|
mikko: something similar has been discussed on the ML; sustrik says it isn't possible
|
[15:51] sustrik
|
you mean, instead setting all the sockets to ZMQ_LINGER=0
|
[15:51] sustrik
|
right?
|
[15:54] mikko
|
sustrik: no
|
[15:54] mikko
|
sustrik: i mean instead of manually closing all sockets
|
[15:54] mikko
|
because currently it's not possible to force closing of sockets
|
[15:55] sustrik
|
hm, how would you migrate all the sockets to the thread calling zmq_term()?
|
[15:55] mikko
|
do they need to be migrated to obtain information whether zmq_term would block?
|
[15:56] mikko
|
i meant that zmq_term() would return EGAIN if given a flag
|
[15:56] mikko
|
EAGAIN*
|
[15:56] sustrik
|
how would the main thread know that they are not used in the moment otherwise?
|
[15:56] sustrik
|
at the moment*
|
[15:57] sustrik
|
you cannot grab a socket from underneath the feet of another thread
|
[15:58] mikko
|
a socket is removed from the sockets array in the context when it's closed?
|
[15:59] sustrik
|
yes
|
[15:59] mikko
|
slightly laggy connection at the moment
|
[15:59] mikko
|
sockets.size() > 0 would block, yes?
|
[15:59] sustrik
|
yes
|
[16:00] mikko
|
so zmq_term_nb would return EAGAIN if sockets.size() > 0
|
[16:00] sustrik
|
aha
|
[16:00] mikko
|
no need to migrate anything (?)
|
[16:00] sustrik
|
i thought you wanted zmq_term to close the sockets
|
[16:01] mikko
|
im after info whether call to zmq_term will block
|
[16:01] mikko
|
not to close sockets
|
[16:01] sustrik
|
yes, that should be doable
|
[16:02] sustrik
|
if you have a look at ctx.cpp
|
[16:02] sustrik
|
::terminate()
|
[16:02] mikko
|
around line 118?
|
[16:02] sustrik
|
the function begins at 107
|
[16:03] sustrik
|
you have 2 waits there
|
[16:03] sustrik
|
line 130:
|
[16:03] sustrik
|
no_sockets_sync.wait ();
|
[16:03] sustrik
|
and line 152:
|
[16:03] sustrik
|
usleep (1000);
|
[16:04] sustrik
|
the first wait waits for all sockets being closed (zmq_close)
|
[16:04] sustrik
|
the second one wait for all pending data to be sent
|
[16:04] mikko
|
ok
|
[16:08] cremes
|
i like this EAGAIN idea for zmq_term()
|
[16:27] Skaag
|
Hello!
|
[16:27] Skaag
|
(again)
|
[16:27] mikko
|
i
|
[16:27] mikko
|
hi
|
[16:27] Skaag
|
hi mikko :)
|
[16:28] Skaag
|
we're about to perform a POC for ZMQ, and if all goes well, convert a system to ZMQ that uses RabbitMQ at the moment.
|
[16:29] Skaag
|
but I wanted to ask, in general, about the theoretical potential of ZMQ to scale to thousands of servers...
|
[16:29] mikko
|
Skaag: can you describe your specific use-case?
|
[16:29] mikko
|
it would give some context
|
[16:29] Skaag
|
And also how practical it is to use the multicast functionality - does it require special network equipment? is it IP Multicast? Or is it just a name really, and the multicast is just synonymous for sending the message at once to many hosts?
|
[16:29] Skaag
|
sure!
|
[16:30] Skaag
|
I have at the moment only 16 nodes, distributed around the world
|
[16:30] Skaag
|
they send once per second a small block of data, statistical data about themselves
|
[16:30] Skaag
|
less than 512 bytes
|
[16:31] Skaag
|
so it's "all to all", and all of them need to collect this data, and produce historical graphs, etc.
|
[16:31] Skaag
|
so i'm asking myself what will happen when this scales, to 1000 nodes
|
[16:32] mikko
|
so every node needs to know about all others?
|
[16:35] cremes
|
sounds like a perfect use-case for pub/sub with a forwarder device
|
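The forwarder-device idea could be sketched like this, assuming pyzmq and the XSUB/XPUB socket types from later 0MQ releases; the endpoint names and the stats payload are illustrative:

```python
import threading
import time
import zmq

ctx = zmq.Context()

# the forwarder device: publishers connect to the XSUB side,
# subscribers to the XPUB side
xsub = ctx.socket(zmq.XSUB)
xsub.bind("inproc://nodes-in")
xpub = ctx.socket(zmq.XPUB)
xpub.bind("inproc://nodes-out")

def forwarder(ins, outs):
    try:
        zmq.proxy(ins, outs)           # shuttles messages both ways
    except zmq.ZMQError:
        pass                           # ETERM: the context was terminated
    ins.close()
    outs.close()

t = threading.Thread(target=forwarder, args=(xsub, xpub))
t.start()

pub = ctx.socket(zmq.PUB)              # one publishing node of many
pub.connect("inproc://nodes-in")
sub = ctx.socket(zmq.SUB)              # one subscribing node of many
sub.connect("inproc://nodes-out")
sub.setsockopt(zmq.SUBSCRIBE, b"")

time.sleep(0.2)                        # let the subscription propagate
pub.send(b"node1: cpu=0.4 conns=117")
msg = sub.recv()

pub.close()
sub.close()
ctx.term()
t.join()
```

Every node connects to the one device instead of to every other node, which avoids the all-to-all connection mesh.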
[16:35] cremes
|
Skaag: need more info; do you have particular latency requirements? how frequent is the data generated?
|
[16:36] mikko
|
yeah, if latency is not a problem you could batch updates
|
[16:36] Skaag
|
every 1 second, the nodes send to all other nodes a less than 512 byte data structure (json or binary, I don't mind)
|
[16:36] Skaag
|
could also be every 5 seconds. still fine.
|
[16:36] Skaag
|
but while the cluster is small, it would be cool to have fast updates.
|
[16:36] mikko
|
sure
|
[16:37] mikko
|
you could have quick local updates
|
[16:37] Skaag
|
the latency between the nodes is a maximum of 100ms.
|
[16:37] mikko
|
and batch the packets to remote locations
|
[16:37] Skaag
|
there are at the moment 6 locations around the world
|
[16:37] Skaag
|
with a bunch of machines in each location
|
[16:37] mikko
|
update every second locally and send out every 5 seconds
|
[16:37] mikko
|
so every five seconds you would send five updates to remote locations
|
[16:37] cremes
|
multicast/pgm transport won't help you if this is over a public network (internet)
|
[16:38] Skaag
|
ok so I was right to assume it is IP Multicast...
|
[16:38] Skaag
|
and not some internal terminology
|
[16:38] mikko
|
i would probably look into something like zookeeper for service discovery
|
[16:38] Skaag
|
sounds interesting!
|
[16:38] mikko
|
create ephemeral nodes of available endpoints
|
[16:38] mikko
|
i used zookeeper ages ago for service discovery
|
[16:39] mikko
|
wrote a small daemon called 'myservices' and each node on cluster had a configuration file of services it has
|
[16:39] mikko
|
so as long as a node was up the ephemeral node existed in zookeeper
|
[16:39] mikko
|
when a node died the ephemeral node was automatically removed by zookeeper
|
[16:40] mikko
|
allows 'automatic' graceful failure handling
|
[16:40] Skaag
|
that's absolutely awesome
|
[16:41] mikko
|
sadly the project never went live so i cant really tell how well it would've worked in production
|
[16:44] mikko
|
im gonna go get some coffee
|
[16:44] mikko
|
bbl
|
[16:45] ngerakines
|
is there a list of large companies that use zmq on the site?
|
[16:52] Skaag
|
mikko: Would love to take a look at your 'myservices' code ;)
|
[16:54] cremes
|
ngerakines: i don't think so
|
[16:54] ngerakines
|
cremes: ok, thanks
|
[16:58] sustrik
|
Skaag: one thing to understand is that "all to all" model is inherently unscalable
|
[16:59] Skaag
|
I thought about creating 'bubbles'
|
[16:59] Skaag
|
so machines in the same DC would update quickly, internally
|
[16:59] sustrik
|
at some point the number of nodes would be so high that the amount of messages will overload any receiver
|
[16:59] Skaag
|
with less frequent updates being sent between such bubbles, aggregated
|
[16:59] sustrik
|
yes, aggregation solves the problem
|
[17:00] Skaag
|
we should probably do that from the start
|
[17:00] sustrik
|
depends on the requirements
|
[17:00] sustrik
|
if the goal is to have 1000 nodes
|
[17:00] sustrik
|
with 1 msg per 5 secs
|
[17:00] sustrik
|
you probably don't need it
|
[17:13] Skaag
|
yes that's more or less the goal.
|
[17:13] Skaag
|
it will take some time to get there, too...
|
[17:14] Skaag
|
for now we have just 25 machines.
|
[17:14] Skaag
|
I expect in 6 month it will maybe grow to 150.
|
[17:14] Skaag
|
so there is time to ameliorate the system.
|
[17:15] Skaag
|
uh, to improve the system.
|
[17:15] Skaag
|
(I think maybe I used a french word?)
|
[17:25] mikko
|
Skaag: what is this system doing?
|
[17:25] mikko
|
what kind of data are you pushing through?
|
[17:25] mikko
|
just out of interest
|
[17:28] Skaag
|
I can tell you exactly..!
|
[17:28] Skaag
|
it is a bunch of media streamers
|
[17:28] Skaag
|
the important information is cpu load, bandwidth usage, and number of connections
|
[17:28] Skaag
|
and in the future, cost per traffic (actual cost, in dollars)
|
[17:29] Skaag
|
or some combination of cost, and remaining quota
|
[17:29] Skaag
|
but that's "phase 2" stuff :)
|
[17:29] Skaag
|
and this will be used to make smart decisions about where to send new traffic
|
[17:29] Skaag
|
and indirectly, will also be used to generate graphs for me and other admins, to view what's going on in the network
|
[17:31] mikko
|
how do you do the routing on global level?
|
[18:05] Skaag
|
Level3 fibers, connect all the DC's
|
[18:06] Skaag
|
you can see our map here: http://bams.iptp.net/cacti/plugins/weathermap/output/weathermap_7.png
|
[18:07] Skaag
|
right now, I am near m9.msk.ru :-)
|
[18:07] Skaag
|
(physically speaking)
|
[18:11] Skaag
|
and this means that this year, I will celebrate xmas in moscow \o/ yey!
|