java.lang.NoSuchMethodError: edu.stanford.nlp.util.Generics.newHashMap()Ljava/util/Map;

I have the following line of code, which initializes the Stanford lexicalized parser.
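(The exact snippet was not preserved here; a typical initialization, shown for illustration with the standard English PCFG model path, looks like this:)

import edu.stanford.nlp.parser.lexparser.LexicalizedParser;

// Illustrative only: loads the English PCFG grammar that ships in the models jar.
LexicalizedParser lp = LexicalizedParser.loadModel(
        "edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");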

I only get the exception below when I move the code from a Java SE application into a Java EE application.

Caused by: java.lang.NoSuchMethodError: edu.stanford.nlp.util.Generics.newHashMap()Ljava/util/Map; 
    at edu.stanford.nlp.parser.lexparser.BinaryGrammar.init(BinaryGrammar.java:223) 
    at edu.stanford.nlp.parser.lexparser.BinaryGrammar.readObject(BinaryGrammar.java:211) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 

How is this caused and how can I solve it?

Please post more of the stack trace, at least up to the "Caused by ..." –

I updated the question @KhalilM –

"NoSuchMethodError": make sure you have the correct version of the NLP library. I would assume you used one version when compiling and a different one when running. –

Answer

Please refer to the FAQ: http://nlp.stanford.edu/software/corenlp-faq.shtml#nosuchmethoderror. In short, if you see an exception stack trace like this:

Caused by: java.lang.NoSuchMethodError: edu.stanford.nlp.util.Generics.newHashMap()Ljava/util/Map; 
    at edu.stanford.nlp.pipeline.AnnotatorPool.<clinit>(AnnotatorPool.java:27) 
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.getDefaultAnnotatorPool(StanfordCoreNLP.java:305) 

then this isn't caused by the shiny new Stanford NLP tools that you've just downloaded. It is because you also have old versions of one or more Stanford NLP tools on your classpath.

The straightforward case is if you have an older version of a Stanford NLP tool. For example, you may still have a version of Stanford NER on your classpath that was released in 2009. In this case, you should upgrade, or at least use matching versions. For any releases from 2011 on, just use tools released at the same time -- such as the most recent version of everything :) -- and they will all be compatible and play nicely together.
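One quick way to see which jar is actually supplying the Stanford classes at run time is to ask the class where it was loaded from (a standard JDK diagnostic; the class name below is just a throwaway example):

import edu.stanford.nlp.util.Generics;

public class WhichJar {
    public static void main(String[] args) {
        // Prints the URL of the jar (or directory) the classloader resolved
        // Generics from; if it is not the CoreNLP jar you expect, an older
        // copy is shadowing it on the classpath.
        System.out.println(Generics.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}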

The tricky case of this is when people distribute jar files that hide other people's classes inside them. People think this will make it easy for users, since they can distribute one jar that has everything you need, but, in practice, as soon as people are building applications using multiple components, this results in a particularly bad form of jar hell. People just shouldn't do this. The only way to check that other jar files do not contain conflicting versions of Stanford tools is to look at what is inside them (for example, with the jar -tf command, as shown below).
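For example (the jar name here is a placeholder):

jar -tf some-third-party-bundle.jar | grep edu/stanford/nlp

Any edu/stanford/nlp entries inside a jar that is not an official Stanford release are a red flag.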

In practice, if you're having problems, the most common cause (in 2013-2014) is that you have ark-tweet-nlp on your classpath. The jar file in their GitHub download hides old versions of many other people's jar files, including Apache commons-codec (v1.4), commons-lang, commons-math, commons-io, Lucene; Twitter commons; Google Guava (v10); Jackson; Berkeley NLP code; Percy Liang's fig; GNU trove; and an outdated version of the Stanford POS tagger (from 2011). You should complain to them for causing you and us grief. But you can then fix the problem by using their jar file from Maven Central. It doesn't have all those other libraries stuffed inside.
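If you do need ark-tweet-nlp, its Maven Central artifact does not bundle those foreign classes. To the best of my knowledge the coordinates are as follows (verify the group id and current version on Maven Central):

<dependency> 
    <groupId>edu.cmu.cs</groupId> 
    <artifactId>ark-tweet-nlp</artifactId> 
    <version>0.3.2</version> 
</dependency> 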

Thanks @Frédéric Henri, the problem was an old version of the Stanford Segmenter. After deleting that library, it works now. –

As Frédéric said, the best solution is to delete all the dependencies that cause this mismatch between compile time and run time, then add the library back and build again. If you're using Maven:

<dependency> 
    <groupId>edu.stanford.nlp</groupId> 
    <artifactId>stanford-corenlp</artifactId> 
    <version>3.6.0</version> 
</dependency> 
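Note that the models ship separately from the code. If you load grammars or taggers from the classpath, you will typically also need the companion models artifact (same version, models classifier):

<dependency> 
    <groupId>edu.stanford.nlp</groupId> 
    <artifactId>stanford-corenlp</artifactId> 
    <version>3.6.0</version> 
    <classifier>models</classifier> 
</dependency> 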