Commit f94cca05 authored by Jordan Sissel

add zmq for internal messaging from harvesters to the concentrator/emitter

parent 21473dab
CFLAGS+=-Ibuild/include -std=c99 -Wall -pipe -O2
CFLAGS+=-Ibuild/include -std=c99 -Wall -pipe -g
LDFLAGS+=-Lbuild/lib -Wl,-rpath='$$ORIGIN/../lib'
@@ -24,11 +25,12 @@ rpm deb:
#unixsock.c: build/include/insist.h
backoff.c: backoff.h
harvester.c: harvester.h
emitter.c: emitter.h
lumberjack.c: build/include/insist.h build/include/zeromq.h
lumberjack.c: backoff.h harvester.h
lumberjack.c: backoff.h harvester.h emitter.h
build/bin/lumberjack: | build/bin build/lib/libzmq.$(LIBEXT)
build/bin/lumberjack: lumberjack.o backoff.o harvester.o
build/bin/lumberjack: lumberjack.o backoff.o harvester.o emitter.o
$(CC) $(LDFLAGS) -o $@ $^ $(LIBS)
@echo " => Build complete: $@"
@echo " => Run 'make rpm' to build an rpm (or deb or tarball)"
@@ -9,6 +9,7 @@ Goal: Something small, fast, and light-weight to ship local logs externally.
## Requirements
* minimal resources
* configurable event data
Simple inputs only:
@@ -18,3 +19,14 @@ Simple inputs only:
Simple outputs only:
* custom wire event protocol (TBD)
## Tentative idea:
    # Ship apache logs in real time to somehost:12345
    ./lumberjack --target somehost:12345 /var/log/apache/access.log ...

    # Ship apache logs with additional log fields:
    ./lumberjack --target foo:12345 --field host=$HOSTNAME --field role=apt-repo /mnt/apt/access.log
The wire protocol will be msgpack, for parsing speed, unless I find something
faster that's easy to use in as many languages.
#ifndef _EMITTER_H_
#define _EMITTER_H_
struct emitter_config {
void *zmq; /* zmq context */
  char *zmq_endpoint; /* inproc://whatever */
};

void *emitter(void *arg);
#endif /* _EMITTER_H_ */
#ifndef _HARVESTER_H_
#define _HARVESTER_H_
struct harvest_config {
char *path; /* the path to harvest */
void *zmq; /* zmq context */
  char *zmq_endpoint; /* inproc://whatever */
};

void *harvest(void *arg);
#endif /* _HARVESTER_H_ */