# Sneakers
A high-performance RabbitMQ background processing framework for Ruby. Sneakers is used in production for both I/O- and CPU-intensive workloads, and has achieved its design goals of high performance and zero maintenance. Visit the wiki for complete docs.
## Installation

Add this line to your application's Gemfile:

```ruby
gem 'sneakers'
```

And then execute:

```
$ bundle
```

Or install it yourself as:

```
$ gem install sneakers
```
## Quick start

Set up a Gemfile:

```ruby
source 'https://rubygems.org'

gem 'sneakers'
gem 'json'
gem 'redis'
```
Now let's add a worker. First, create a file named `boot.rb`:

```
$ touch boot.rb
```

Then create a worker named `Processor` inside it:

```ruby
require 'sneakers'
require 'redis'
require 'json'

$redis = Redis.new

class Processor
  include Sneakers::Worker
  from_queue :logs

  def work(msg)
    err = JSON.parse(msg)
    if err["type"] == "error"
      $redis.incr "processor:#{err["error"]}"
    end
    ack!
  end
end
```
Let's test it out quickly from the command line:

```
$ sneakers work Processor --require boot.rb
```

We just told Sneakers to spawn a worker named `Processor`, but first to `--require` a file that we dedicate to setting up the environment, including workers and what-not.
If you go to your RabbitMQ admin now, you'll see a new queue named `logs` was created. Push a couple of messages like the one below:

```json
{ "type": "error", "message": "HALP!", "error": "CODE001" }
```
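If you'd rather push the message from Ruby than from the admin UI, Sneakers ships a `Sneakers::Publisher` that, with the default configuration, routes to the queue by name. Here is a minimal sketch; the `AMQP_URL` guard is our own convention so the payload helper can be loaded without a running broker:

```ruby
require 'json'

# Build the JSON payload the Processor worker above expects.
def error_payload(code, message)
  JSON.generate("type" => "error", "message" => message, "error" => code)
end

# Only attempt to publish when a broker URL is provided, e.g.
#   AMQP_URL=amqp://guest:guest@localhost:5672 ruby publish.rb
if ENV['AMQP_URL']
  require 'sneakers'
  Sneakers.configure(:amqp => ENV['AMQP_URL'])
  Sneakers::Publisher.new.publish(error_payload('CODE001', 'HALP!'),
                                  :to_queue => 'logs')
end
```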
And this is the output you should see in your terminal:

```
2013-10-11T19:26:36Z p-4718 t-ovqgyb31o DEBUG: [worker-logs:1:213mmy][#<Thread:0x007fae6b05cc58>][logs][{:prefetch=>10, :durable=>true, :ack=>true, :heartbeat_interval=>2, :exchange=>"sneakers"}] Working off: log log
2013-10-11T19:26:36Z p-4718 t-ovqgyrxu4 INFO: log log
2013-10-11T19:26:40Z p-4719 t-ovqgyb364 DEBUG: [worker-logs:1:h23iit][#<Thread:0x007fae6b05cd98>][logs][{:prefetch=>10, :durable=>true, :ack=>true, :heartbeat_interval=>2, :exchange=>"sneakers"}] Working off: log log
2013-10-11T19:26:40Z p-4719 t-ovqgyrx8g INFO: log log
```
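The options printed in that debug line (`prefetch`, `durable`, `ack`, `exchange`) come from Sneakers' global configuration and can be tuned via `Sneakers.configure`. A sketch, using the values the log shows (treat the broker URL as an assumption for a local setup):

```ruby
require 'sneakers'

# Tune the options visible in the worker's debug log line above.
Sneakers.configure(
  :amqp     => 'amqp://guest:guest@localhost:5672',  # broker URL (assumed local)
  :exchange => 'sneakers',  # exchange name
  :prefetch => 10,          # messages fetched per worker at a time
  :durable  => true,        # declare queues/exchanges as durable
  :ack      => true         # workers must explicitly ack!/reject!
)
```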
We'll count errors and error types with Redis:

```
$ redis-cli monitor
1381520329.888581 [0 127.0.0.1:49182] "incr" "processor:CODE001"
```
We're basically done with the ceremonies, and all that's left is to do some real work. Let's use the `logging_metrics` provider, just for the fun of seeing the metrics as they happen:

```ruby
# boot.rb
require 'sneakers'
require 'redis'
require 'json'
require 'sneakers/metrics/logging_metrics'

Sneakers.configure :metrics => Sneakers::Metrics::LoggingMetrics.new

# ... rest of code
```
Now push a message again and you'll see:

```
2013-10-11T19:44:37Z p-9219 t-oxh8owywg INFO: INC: work.Processor.started
2013-10-11T19:44:37Z p-9219 t-oxh8owywg INFO: TIME: work.Processor.time 0.00242
2013-10-11T19:44:37Z p-9219 t-oxh8owywg INFO: INC: work.Processor.handled.ack
```

Which increments `started` and `handled.ack`, and times the work unit.
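Judging from the output above, a metrics provider only needs to answer `increment(metric)` and `timing(metric, &block)`, so you can swap in your own. A hypothetical in-memory sketch (the class name and storage are our own, not part of Sneakers):

```ruby
# A hypothetical in-memory metrics provider exposing the same two calls
# LoggingMetrics logs through: increment(metric) for counters, and
# timing(metric) { ... } wrapped around each work unit.
class CounterMetrics
  attr_reader :counts, :timings

  def initialize
    @counts  = Hash.new(0)
    @timings = {}
  end

  def increment(metric)
    @counts[metric] += 1
  end

  def timing(metric, &block)
    start  = Time.now
    result = block.call
    @timings[metric] = Time.now - start  # seconds elapsed for this unit
    result
  end
end

# Plugged in the same way as LoggingMetrics:
# Sneakers.configure :metrics => CounterMetrics.new
```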
From here, you can continue over to the Wiki.
## Docker

If you use Docker, there are some benefits to be had: you can use both
`docker` and `docker-compose` with this project to run tests, integration
tests, or a sample worker, without setting up RabbitMQ or the needed
environment locally on your development box.
- To run tests in Docker, build the image and run it:

  ```
  $ docker build .
  $ docker run --rm sneakers_sneakers:latest
  ```

- Run the integration tests with `scripts/local_integration`, which will
  use docker-compose to orchestrate the topology and the Sneakers Docker image
  to run the tests.

- Run the `TitleScraper` example by running `script/local_worker`. This will
  use docker-compose as well, and will also help you get a feeling for how to
  run Sneakers in a Docker-based production environment.

- Use `Dockerfile.slim` instead of `Dockerfile` for production Docker builds.
  It generates a more compact image, while the "regular" `Dockerfile`
  generates a fatter image that is faster to iterate with when developing.

## Contributing

Fork, implement, add tests, pull request, get my everlasting thanks and a respectable place here :).
To all Sneakers Contributors - you make this happen, thanks!
Copyright (c) 2015 Dotan Nahum @jondot. See LICENSE for further details.