2016-03-31

I have a big problem with ES :( When I bulk index a lot of documents into ES (bulk size = 20), the ES server throws the exceptions below: failed to merge indices in Elasticsearch.

I found many topics discussing this, but nothing helped. Can anybody tell me what actually happened??? Thanks so much.

Sorry for my bad English.

I am using ES 2.3 with transport client 2.2.1.

Server configuration

http.port: 9200 
http.max_content_length: 100mb 
node.name: "es_test" 
node.master: true 
node.data: true 
index.store.type: niofs 
index.number_of_shards: 5 
index.number_of_replicas: 0 
discovery.zen.ping.multicast.enabled: false 
script.inline: on 
script.indexed: on 
bootstrap.mlockall: true 

Error 1

[2016-03-31 07:45:02,601][ERROR][index.engine    ] [es_test] [my_index][1] failed to merge 
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm") 
    at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336) 
    at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54) 
    at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41) 
    at org.apache.lucene.store.DataInput.readInt(DataInput.java:101) 
    at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195) 
    at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256) 
    at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:115) 
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:99) 
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65) 
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145) 
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233) 
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664) 
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588) 
    at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94) 
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626) 
    Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm"))) 
     at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371) 
     at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:164) 
     ... 8 more 
[2016-03-31 07:45:02,608][WARN ][index.engine    ] [es_test] [my_index][1] failed engine [already closed by tragic event on the index writer] 
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm") 
    at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336) 
    at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54) 
    at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41) 
    at org.apache.lucene.store.DataInput.readInt(DataInput.java:101) 
    at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195) 
    at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256) 
    at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:115) 
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:99) 
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65) 
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145) 
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233) 
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664) 
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588) 
    at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94) 
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626) 
    Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/1/index/_190.fnm"))) 
     at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371) 
     at org.apache.lucene.codecs.lucene50.Lucene50FieldInfosFormat.read(Lucene50FieldInfosFormat.java:164) 
     ... 8 more 
[2016-03-31 07:45:02,609][ERROR][index.engine    ] [es_test] [my_index][4] failed to merge 
java.io.EOFException: read past EOF: NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/4/index/_190.fdx") 
    at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:336) 
    at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54) 
    at org.apache.lucene.store.BufferedChecksumIndexInput.readByte(BufferedChecksumIndexInput.java:41) 
    at org.apache.lucene.store.DataInput.readInt(DataInput.java:101) 
    at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:195) 
    at org.apache.lucene.codecs.CodecUtil.checkIndexHeader(CodecUtil.java:256) 
    at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:133) 
    at org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(CompressingStoredFieldsFormat.java:121) 
    at org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsReader(Lucene50StoredFieldsFormat.java:173) 
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:117) 
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65) 
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145) 
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4233) 
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664) 
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588) 
    at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:94) 
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626) 
    Suppressed: org.apache.lucene.index.CorruptIndexException: checksum status indeterminate: remaining=0, please run checkindex for more details (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/data/es_test/data/es_test/nodes/0/indices/my_index/4/index/_190.fdx"))) 
     at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:371) 
     at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:140) 
     ... 10 more 
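The suppressed `CorruptIndexException` above says "please run checkindex", i.e. Lucene's `CheckIndex` tool. A hedged sketch of that invocation follows; the shard path is taken from the log, but the jar location and Lucene version are assumptions — use the `lucene-core` jar that your Elasticsearch distribution actually ships in its `lib/` directory (ES 2.3 bundles Lucene 5.5):

```shell
# Stop the node first: CheckIndex must not run against an index that is
# still open by a live Elasticsearch process.
# The jar path and version below are assumptions -- point -cp at the
# lucene-core jar inside your own Elasticsearch lib/ directory.
java -cp /usr/share/elasticsearch/lib/lucene-core-5.5.0.jar \
  org.apache.lucene.index.CheckIndex \
  /data/es_test/data/es_test/nodes/0/indices/my_index/1/index
```

Without flags this only reports corruption. Adding `-fix` removes unrecoverable corrupt segments, which loses the documents in them, so back up the shard directory at the filesystem level before using it.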

Error 2

[2016-03-31 20:04:07,419][DEBUG][action.admin.cluster.node.stats] [es_test] failed to execute on node [mplUA6JET92RPgmNx-DPMA] 
RemoteTransportException[[es_test][ip:9300][cluster:monitor/nodes/stats[n]]]; nested: AlreadyClosedException[this IndexReader is closed]; 
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed 
     at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274) 
     at org.apache.lucene.index.CompositeReader.getContext(CompositeReader.java:101) 
     at org.apache.lucene.index.CompositeReader.getContext(CompositeReader.java:55) 
     at org.apache.lucene.index.IndexReader.leaves(IndexReader.java:438) 
     at org.elasticsearch.search.suggest.completion.Completion090PostingsFormat.completionStats(Completion090PostingsFormat.java:330) 
     at org.elasticsearch.index.shard.IndexShard.completionStats(IndexShard.java:765) 
     at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:164) 
     at org.elasticsearch.indices.IndicesService.stats(IndicesService.java:253) 
     at org.elasticsearch.node.service.NodeService.stats(NodeService.java:157) 
     at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:82) 
     at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:44) 
     at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:92) 
     at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:230) 
     at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:226) 
     at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75) 
     at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376) 
     at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
     at java.lang.Thread.run(Thread.java:745) 

Answer


May I suggest that you upgrade your transport client to 2.3, to match your ES version?

The cause is most likely that your transport client version is older than your cluster version.

When the transport client is used, the versions must be compatible.
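To make the compatibility point concrete: with the 2.x transport client, client and cluster should agree on the major version, and ideally on major.minor as well (here the client is 2.2.1 against a 2.3 cluster). The class and method below are purely illustrative names, not an Elasticsearch API — just a stdlib-only sketch of the check being described:

```java
// Illustrative only: VersionCheck/sameMajorMinor are hypothetical names,
// not part of Elasticsearch. The rule sketched is the one from the answer:
// transport client and cluster should share the same major.minor version.
public class VersionCheck {

    // Returns true when both version strings share the same major.minor pair.
    public static boolean sameMajorMinor(String client, String cluster) {
        String[] c = client.split("\\.");
        String[] s = cluster.split("\\.");
        return c[0].equals(s[0]) && c[1].equals(s[1]);
    }

    public static void main(String[] args) {
        // The situation in the question: client 2.2.1 against a 2.3 cluster.
        System.out.println(sameMajorMinor("2.2.1", "2.3.0")); // false: mismatch
        // After upgrading the client to 2.3.x the versions line up.
        System.out.println(sameMajorMinor("2.3.1", "2.3.0")); // true: match
    }
}
```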


Before ES 2.3 I tried ES 2.1, but it was the same problem, and ES was freshly deployed :) –


But did you use the same transport client version then? If the versions do not match, you will have the same problem. –


I don't think the client version is the problem; for example, when I tried ES 2.1, I used the same client version, but I still got the error. :( –
