ruby on rails - Work with two separate redis instances with sidekiq?
Good afternoon,

I have two separate but related apps. They should both have their own background queues (read: separate Sidekiq and Redis processes). However, I'd like to be able to push jobs onto app2's queue from app1.

From a simple queue/push perspective, this would be easy to do if app1 did not have its own existing Sidekiq/Redis stack:
# In a process, far far away

# Configure the client
Sidekiq.configure_client do |config|
  config.redis = { :url => 'redis://redis.example.com:7372/12', :namespace => 'mynamespace' }
end

# Push jobs without a class definition
Sidekiq::Client.push('class' => 'Example::Workers::Trace', 'args' => ['hello!'])

# Push jobs overriding the defaults
Sidekiq::Client.push('queue' => 'example', 'retry' => 3, 'class' => 'Example::Workers::Trace', 'args' => ['hello!'])
However, given that app1 has already called Sidekiq.configure_client and Sidekiq.configure_server, there's a step in between here that needs to happen.

Obviously I could just take the serialization and normalization code straight out of Sidekiq and manually push it onto app2's Redis queue, but that seems like a brittle solution. I'd like to be able to use the Client.push functionality.
I suppose my ideal solution would be something like:

SidekiqTWO.configure_client { remote connection..... }
SidekiqTWO::Client.push(job....)

Or even:

$redis_remote = remote_connection.....
Sidekiq::Client.push(job, $redis_remote)

Obviously that's a bit facetious, but that's my ideal use case.

Thanks!
So one thing is that, according to the FAQ, "The Sidekiq message format is quite simple and stable: it's just a Hash in JSON format." Emphasis mine -- I don't think sending JSON to Sidekiq is too brittle a thing to do. Especially when you want fine-grained control over which Redis instance you send jobs to, as in the OP's situation, I'd just write a little wrapper that lets me indicate a Redis instance along with the job being enqueued.
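A minimal sketch of such a wrapper, under my own assumptions: the class name is hypothetical, and it hand-builds the job hash and LPUSHes the JSON onto whatever Redis connection you pass in, using the "queues" set and "queue:<name>" list keys that Sidekiq reads from.

```ruby
require "json"
require "securerandom"

# Hypothetical wrapper: enqueue a Sidekiq-format job onto an arbitrary
# Redis connection (e.g. app2's Redis, from inside app1), without touching
# the local Sidekiq.configure_client/configure_server setup.
class RemoteSidekiqClient
  # redis: any object responding to sadd/lpush (a Redis client instance);
  # namespace: optional key prefix, mimicking Redis::Namespace by hand.
  def initialize(redis, namespace: nil)
    @redis = redis
    @prefix = namespace ? "#{namespace}:" : ""
  end

  # item: 'class' => 'Example::Workers::Trace', 'args' => [...], 'queue' => ...
  # Returns the generated job id.
  def push(item)
    job = {
      "class"      => item.fetch("class"),
      "args"       => item.fetch("args", []),
      "queue"      => item.fetch("queue", "default"),
      "retry"      => item.fetch("retry", true),
      "jid"        => SecureRandom.hex(12),
      "created_at" => Time.now.to_f,
    }
    # Register the queue, then push the serialized job onto it.
    @redis.sadd("#{@prefix}queues", job["queue"])
    @redis.lpush("#{@prefix}queue:#{job['queue']}", JSON.generate(job))
    job["jid"]
  end
end

# Usage (connection details are placeholders):
#   app2_redis = Redis.new(url: "redis://redis.example.com:7372/12")
#   client = RemoteSidekiqClient.new(app2_redis, namespace: "mynamespace")
#   client.push("class" => "Example::Workers::Trace", "args" => ["hello!"])
```

Because the payload is "just a Hash in JSON format," app2's workers pick these jobs up exactly as if app2 had enqueued them itself.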
As for Kevin Bedell's more general situation of round-robining jobs across Redis instances, I'd imagine you don't want control of which Redis instance is used -- you just want to enqueue and have the distribution managed automatically. It looks like only one person has requested this so far, and they came up with a solution that uses Redis::Distributed:
datastore_config = YAML.load(ERB.new(File.read(File.join(Rails.root, "config", "redis.yml"))).result)
datastore_config = datastore_config["defaults"].merge(datastore_config[::Rails.env])

if datastore_config[:host].is_a?(Array)
  if datastore_config[:host].length == 1
    datastore_config[:host] = datastore_config[:host].first
  else
    datastore_config = datastore_config[:host].map do |host|
      host_has_port = host =~ /:\d+\z/
      if host_has_port
        "redis://#{host}/#{datastore_config[:db] || 0}"
      else
        "redis://#{host}:#{datastore_config[:port] || 6379}/#{datastore_config[:db] || 0}"
      end
    end
  end
end

Sidekiq.configure_server do |config|
  config.redis = ::ConnectionPool.new(:size => Sidekiq.options[:concurrency] + 2, :timeout => 2) do
    redis = if datastore_config.is_a? Array
      Redis::Distributed.new(datastore_config)
    else
      Redis.new(datastore_config)
    end
    Redis::Namespace.new('resque', :redis => redis)
  end
end
Another thing to consider in your quest for high availability and fail-over is that Sidekiq Pro includes reliability features: "The Sidekiq Pro client can withstand transient Redis outages. It will enqueue jobs locally upon error and attempt to deliver those jobs once connectivity is restored." Since Sidekiq is for background processing anyway, a short delay if a Redis instance goes down should not impact your application. And if one of your two Redis instances goes down while you're using round robin, you've still lost jobs unless you're using this feature.
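To make the quoted behavior concrete, here's a rough sketch of the "enqueue locally upon error, deliver once connectivity is restored" idea. This is an illustration of the pattern only, not Sidekiq Pro's actual implementation; the class name and in-memory buffer are my own assumptions (a real client would persist the buffer and retry on a timer).

```ruby
require "json"

# Sketch of an outage-tolerant pusher: if Redis is unreachable, keep the
# serialized job in a local buffer and retry the whole buffer, in order,
# before delivering the next job.
class BufferedPusher
  def initialize(redis)
    @redis = redis
    @buffer = []  # [queue_name, json_payload] pairs not yet delivered
  end

  # Returns true if everything (this job and any backlog) was delivered.
  def push(queue, payload)
    @buffer << [queue, JSON.generate(payload)]
    flush
  end

  # Deliver buffered jobs oldest-first; stop at the first failure and
  # keep the remainder buffered for the next attempt.
  def flush
    until @buffer.empty?
      queue, json = @buffer.first
      begin
        @redis.lpush("queue:#{queue}", json)
      rescue StandardError
        return false  # Redis still down; jobs stay buffered locally
      end
      @buffer.shift   # delivered; drop it from the buffer
    end
    true
  end

  def pending
    @buffer.size
  end
end
```

With round robin and no such buffering, a push to a downed instance simply raises and the job is gone, which is the failure mode the answer warns about.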
ruby-on-rails redis queue sidekiq