
I am trying to push some jobs onto a Sidekiq queue, which has worked fine so far. The error only occurs in production: Redis EXECABORT Transaction discarded because of previous errors. (Redis::CommandError)
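For context, the enqueue call looks roughly like this (a minimal sketch; ListingWorker and its argument are hypothetical stand-ins, since the actual worker class in listings_feed_parser.rb is not shown in the question):

require 'sidekiq'

# Hypothetical worker; the real class lives somewhere under lib/listings_feed/.
class ListingWorker
  include Sidekiq::Worker

  def perform(listing_id)
    # process the listing ...
  end
end

# Sidekiq::Client wraps the Redis commands for this push in MULTI/EXEC;
# the EXECABORT in the trace below is raised when that EXEC runs.
ListingWorker.perform_async(42)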

Stack trace:

2013-12-13T20:35:04Z 22616 TID-amwho INFO: Sidekiq client with redis options {} 
/home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/redis-3.0.6/lib/redis/pipeline.rb:79:in `finish': EXECABORT Transaction discarded because of previous errors. (Redis::CommandError) 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/redis-3.0.6/lib/redis/client.rb:121:in `block in call_pipeline' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/redis-3.0.6/lib/redis/client.rb:245:in `with_reconnect' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/redis-3.0.6/lib/redis/client.rb:119:in `call_pipeline' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/redis-3.0.6/lib/redis.rb:2093:in `block in multi' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/redis-3.0.6/lib/redis.rb:36:in `block in synchronize' 
    from /home/avishai/.rvm/rubies/ruby-1.9.3-p429/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/redis-3.0.6/lib/redis.rb:36:in `synchronize' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/redis-3.0.6/lib/redis.rb:2085:in `multi' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/sidekiq-2.17.0/lib/sidekiq/client.rb:159:in `block in raw_push' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/connection_pool-1.2.0/lib/connection_pool.rb:55:in `with' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/sidekiq-2.17.0/lib/sidekiq.rb:67:in `redis' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/sidekiq-2.17.0/lib/sidekiq/client.rb:150:in `raw_push' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/sidekiq-2.17.0/lib/sidekiq/client.rb:50:in `push' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/sidekiq-2.17.0/lib/sidekiq/client.rb:98:in `push' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/sidekiq-2.17.0/lib/sidekiq/worker.rb:83:in `client_push' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/sidekiq-2.17.0/lib/sidekiq/worker.rb:40:in `perform_async' 
    from /home/avishai/apps/XXX/releases/20131213194843/lib/listings_feed/listings_feed_parser.rb:79:in `send_to_sidekiq' 
    from /home/avishai/apps/XXX/releases/20131213194843/lib/listings_feed/listings_feed_parser.rb:74:in `enqueue' 
    from /home/avishai/apps/XXX/releases/20131213194843/lib/listings_feed/parsers/listings.rb:235:in `block in parse' 
    from /home/avishai/apps/XXX/releases/20131213194843/lib/listings_feed/parsers/listings.rb:24:in `block (2 levels) in each_listing' 
    from /home/avishai/apps/XXX/shared/bundle/ruby/1.9.1/gems/nokogiri-1.6.0/lib/nokogiri/xml/reader.rb:107:in `each' 
    from /home/avishai/apps/XXX/releases/20131213194843/lib/listings_feed/parsers/listings.rb:22:in `block in each_listing' 
    from /home/avishai/apps/XXX/releases/20131213194843/lib/listings_feed/parsers/listings.rb:21:in `open' 
    from /home/avishai/apps/XXX/releases/20131213194843/lib/listings_feed/parsers/listings.rb:21:in `each_listing' 
    from /home/avishai/apps/XXX/releases/20131213194843/lib/listings_feed/parsers/listings.rb:36:in `parse' 
    from /home/avishai/apps/XXX/releases/20131213194843/lib/listings_feed/listings_feed.rb:153:in `process!' 
    from run_feed.rb:74:in `block in <main>' 
    from run_feed.rb:71:in `each' 
    from run_feed.rb:71:in `<main>' 

Answers

Answer (score 3)

I resolved this by running FLUSHALL, which cleared all data from the Redis instance. It seems the data had become corrupted somewhere along the way, and that fixed it.

Comment (score 0):

FLUSHDB is a bit less drastic if you can connect to your db ... but bummer about the data loss :-)
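If you can reach the instance, both options are a one-liner with the redis-rb client that already appears in the stack trace (a sketch assuming a default localhost connection; FLUSHDB clears only the currently selected database, FLUSHALL wipes every database on the instance):

require 'redis'

redis = Redis.new  # defaults to redis://127.0.0.1:6379, db 0

redis.flushdb    # clear only the current database (less drastic)
# redis.flushall # clear every database on this Redis instance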

Answer (score 5)

I also ran into this error when the Redis instance hit its maxmemory limit.

You can reproduce it on your development box by setting maxmemory (e.g. in /usr/local/etc/redis.conf) to something small, e.g.:

maxmemory 3MB 

Then fill your Redis with a bunch of data, start Sidekiq, and watch the fireworks:

2014-02-03T14:32:30Z 59365 TID-ov2cb18yw WARN: {"retry"=>true, "queue"=>"default", "backtrace"=>true, "class"=>"MailWorker", "args"=>["Profile::Created", 868], "jid"=>"d7b0589736818c2ffcd58c90", "enqueued_at"=>1391437950.4245608} 
    2014-02-03T14:32:30Z 59365 TID-ov2cb18yw WARN: EXECABORT Transaction discarded because of previous errors. 
    2014-02-03T14:32:30Z 59365 TID-ov2cb18yw WARN: /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-3.0.4/lib/redis/pipeline.rb:76:in `finish' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-3.0.4/lib/redis/client.rb:121:in `block in call_pipeline' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-3.0.4/lib/redis/client.rb:243:in `with_reconnect' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-3.0.4/lib/redis/client.rb:119:in `call_pipeline' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-3.0.4/lib/redis.rb:2077:in `block in multi' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-3.0.4/lib/redis.rb:36:in `block in synchronize' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-3.0.4/lib/redis.rb:36:in `synchronize' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-3.0.4/lib/redis.rb:2069:in `multi' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-namespace-1.3.1/lib/redis/namespace.rb:337:in `namespaced_block' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/redis-namespace-1.3.1/lib/redis/namespace.rb:224:in `multi' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/sidekiq-2.13.1/lib/sidekiq/processor.rb:93:in `block in stats' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/connection_pool-1.1.0/lib/connection_pool.rb:49:in `with' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/sidekiq-2.13.1/lib/sidekiq.rb:67:in `redis' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/sidekiq-2.13.1/lib/sidekiq/util.rb:25:in `redis' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/sidekiq-2.13.1/lib/sidekiq/processor.rb:92:in `stats' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/sidekiq-2.13.1/lib/sidekiq/processor.rb:46:in `block in process' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/celluloid-0.14.1/lib/celluloid/calls.rb:25:in `call' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/celluloid-0.14.1/lib/celluloid/calls.rb:25:in `public_send' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/celluloid-0.14.1/lib/celluloid/calls.rb:25:in `dispatch' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/celluloid-0.14.1/lib/celluloid/calls.rb:67:in `dispatch' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/celluloid-0.14.1/lib/celluloid/future.rb:15:in `block in new' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/celluloid-0.14.1/lib/celluloid/internal_pool.rb:59:in `call' 
    /usr/local/opt/rbenv/versions/1.9.3-p392/lib/ruby/gems/1.9.1/gems/celluloid-0.14.1/lib/celluloid/internal_pool.rb:59:in `block in create' 
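To see whether this is what is happening, you can compare Redis's memory usage against the configured limit (a sketch using the redis-rb client; a maxmemory of 0 means no limit is set):

require 'redis'

redis = Redis.new
used  = redis.info('memory')['used_memory'].to_i
limit = redis.config(:get, 'maxmemory')['maxmemory'].to_i

puts "used_memory=#{used} maxmemory=#{limit}"
puts 'close to maxmemory, writes may start failing' if limit > 0 && used > limit * 0.9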
Comment (score 3):

Is there any way to keep that from happening? – Avishai

Answer (score 0)

Another possible cause is broken master-slave replication, for example with the min-slaves-to-write configuration option set:

127.0.0.1:6379> set foo 1 
(error) NOREPLICAS Not enough good slaves to write. 

To fix it temporarily, just run:

CONFIG SET min-slaves-to-write 0
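The same temporary override can be applied and verified from Ruby; note that CONFIG SET does not survive a restart, so the lasting fix is to repair the replication link or change min-slaves-to-write in redis.conf (a sketch with the redis-rb client):

require 'redis'

redis = Redis.new

# Temporarily stop requiring connected slaves before accepting writes.
redis.config(:set, 'min-slaves-to-write', 0)

# Verify the running configuration.
puts redis.config(:get, 'min-slaves-to-write')  # => {"min-slaves-to-write"=>"0"}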