2016-06-05

Android continuous speech recognition returns ERROR_NO_MATCH too quickly

I have tried to implement a continuous SpeechRecognizer mechanism. When I start speech recognition, I get the following messages in logcat:

06-05 12:22:32.892 11753-11753/com.aaa.bbb D/SpeechManager: startSpeechRecognition: 
06-05 12:22:33.022 11753-11753/com.aaa.bbb D/SpeechManager: onError: Error 7 
06-05 12:22:33.352 11753-11753/com.aaa.bbb D/SpeechManager: onReadyForSpeech: 
06-05 12:22:33.792 11753-11753/com.aaa.bbb D/SpeechManager: onBeginningOfSpeech: Beginning 
06-05 12:22:34.492 11753-11753/com.aaa.bbb D/SpeechManager: onEndOfSpeech: Ending 
06-05 12:22:34.612 11753-11753/com.aaa.bbb D/SpeechManager: onError: Error 7 

That error 7 means ERROR_NO_MATCH. As you can see, it is raised almost immediately. Isn't that unreasonable?

Here are the full logs between startSpeechRecognition and the first error 7:

06-05 12:22:32.892 11753-11753/com.aaa.bbb D/SpeechManager: startSpeechRecognition: 
06-05 12:22:32.932 4600-4600/? I/GRecognitionServiceImpl: #startListening [en-US] 

                 --------- beginning of system 
06-05 12:22:32.932 3510-7335/? V/AlarmManager: remove PendingIntent] PendingIntent{6307291: PendingIntentRecord{2af25f6 com.google.android.googlequicksearchbox startService}} 
06-05 12:22:32.932 4600-4600/? W/LocationOracle: Best location was null 
06-05 12:22:32.932 3510-4511/? D/AudioService: getStreamVolume 3 index 90 
06-05 12:22:32.942 3510-7335/? D/SensorService: SensorEventConnection::SocketBufferSize, SystemSocketBufferSize - 102400, 2097152 
06-05 12:22:32.942 3510-7360/? D/Sensors: requested delay = 66667000, modified delay = 0 
06-05 12:22:32.942 3510-7360/? I/Sensors: Proximity old sensor_state 16384, new sensor_state : 16512 en : 1 
06-05 12:22:32.952 4600-4600/? D/SensorManager: registerListener :: 5, TMD4903 Proximity Sensor, 66667, 0, 
06-05 12:22:32.952 4600-11932/? D/SensorManager: Proximity, val = 8.0 [far] 
06-05 12:22:32.952 3510-5478/? I/Sensors: Acc old sensor_state 16512, new sensor_state : 16513 en : 1 
06-05 12:22:32.952 3510-4705/? I/Sensors: Mag old sensor_state 16513, new sensor_state : 16529 en : 1 
06-05 12:22:32.952 3510-4037/? I/AppOps: sendInfoToFLP, code=41 , uid=10068 , packageName=com.google.android.googlequicksearchbox , type=startOp 
06-05 12:22:32.962 3510-4511/? D/SensorService: GravitySensor2 setDelay ns = 66667000 mindelay = 66667000 
06-05 12:22:32.962 3510-4511/? I/Sensors: RotationVectorSensor old sensor_state 16529, new sensor_state : 147601 en : 1 
06-05 12:22:32.972 3510-3617/? V/BroadcastQueue: [background] Process cur broadcast BroadcastRecord{f9fab82 u0 com.google.android.apps.gsa.search.core.location.GMS_CORE_LOCATION qIdx=4}, state= (APP_RECEIVE) DELIVERED for app ProcessRecord{cb66323 4600:com.google.android.googlequicksearchbox:search/u0a68} 
06-05 12:22:32.972 3510-4040/? D/NetworkPolicy: isUidForegroundLocked: 10068, mScreenOn: true, uidstate: 2, mProxSensorScreenOff: false 
06-05 12:22:32.982 3510-7360/? D/AudioService: getStreamVolume 3 index 90 
06-05 12:22:32.982 3510-3971/? I/Sensors: ProximitySensor - 8(cm) 
06-05 12:22:32.992 4600-11315/? I/MicrophoneInputStream: mic_starting [email protected] 
06-05 12:22:32.992 3140-3989/? I/APM::AudioPolicyManager: getInputForAttr() source 6, samplingRate 16000, format 1, channelMask 10,session 84, flags 0 
06-05 12:22:32.992 3140-3989/? V/audio_hw_primary: adev_open_input_stream: request sample_rate:16000 
06-05 12:22:32.992 3140-3989/? V/audio_hw_primary: in->requested_rate:16000, pcm_config_in.rate:48000 in->config.channels=2 
06-05 12:22:32.992 3140-3989/? D/audio_hw_primary: adev_open_input_stream: call echoReference_init(12) 
06-05 12:22:32.992 3140-3989/? V/echo_reference_processing: echoReference_init + 
06-05 12:22:32.992 3140-3989/? I/audio_hw_primary: adev_open_input_stream: input is null, set new input stream 
06-05 12:22:32.992 4600-11932/? D/SensorManager: Proximity, val = 8.0 [far] 
06-05 12:22:32.992 3510-3555/? I/MediaFocusControl: AudioFocus requestAudioFocus() from android.media.AudioManager$8c7dfbdcom.google.android.apps.gsa.speech.audio.c.a$1$c7409b2 req=4flags=0x0 
06-05 12:22:32.992 3140-11937/? I/AudioFlinger: AudioFlinger's thread 0xecac0000 ready to run 
06-05 12:22:33.012 4600-11317/? W/CronetAsyncHttpEngine: Upload request without a content type. 
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() 
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() : Execute directly (BG thread) 
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() 
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() : Execute directly (BG thread) 
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() 
06-05 12:22:33.012 3510-4533/? D/BatteryService: [email protected] : batteryPropertiesChanged! 
06-05 12:22:33.012 4600-12335/? I/FavoriteContactNamesSup: get() : Execute directly (BG thread) 
06-05 12:22:33.012 3510-4533/? D/BatteryService: level:80, scale:100, status:2, health:2, present:true, voltage: 4093, temperature: 337, technology: Li-ion, AC powered:false, USB powered:true, POGO powered:false, Wireless powered:false, icon:17303446, invalid charger:0, maxChargingCurrent:0 
06-05 12:22:33.012 3510-4533/? D/BatteryService: online:4, current avg:48, charge type:1, power sharing:false, high voltage charger:false, capacity:280000, batterySWSelfDischarging:false, current_now:240 
06-05 12:22:33.012 3510-3510/? D/BatteryService: Sending ACTION_BATTERY_CHANGED. 
06-05 12:22:33.022 11753-11753/com.aaa.bbb D/SpeechManager: onError: Error 7 

And here is my code:

public class SpeechManager { 

private static final String TAG = "SpeechManager"; 
private final MainActivity mActivity; 
private final SpeechRecognizer mSpeechRecognizer; 
private boolean mTurnedOn = false; 
private final Intent mRecognitionIntent; 
private final Handler mHandler; 

public SpeechManager(@NonNull MainActivity activity) { 
    mActivity = activity; 
    mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(mActivity.getApplicationContext()); 
    mSpeechRecognizer.setRecognitionListener(new MySpeechRecognizer()); 

    mHandler = new Handler(Looper.getMainLooper()); 

    mRecognitionIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH); 
// mRecognitionIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM); 
    mRecognitionIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, false); 
    mRecognitionIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US"); 
} 

public void startSpeechRecognition() { 
    Log.d(TAG, "startSpeechRecognition: "); 
    mTurnedOn = true; 
    mSpeechRecognizer.startListening(mRecognitionIntent); 
} 

public void stopSpeechRecognition() { 
    Log.d(TAG, "stopSpeechRecognition: "); 
    if (mTurnedOn) { 
     mTurnedOn = false; 
     mSpeechRecognizer.stopListening(); 
    } 
} 

public void destroy() { 
    Log.d(TAG, "destroy: "); 
    mSpeechRecognizer.destroy(); 
} 

private class MySpeechRecognizer implements RecognitionListener { 
    @Override 
    public void onReadyForSpeech(Bundle params) { 
     Log.d(TAG, "onReadyForSpeech: "); 
    } 

    @Override 
    public void onBeginningOfSpeech() { 
     Log.d(TAG, "onBeginningOfSpeech: Beginning"); 
    } 

    @Override 
    public void onRmsChanged(float rmsdB) { 
    } 

    @Override 
    public void onBufferReceived(byte[] buffer) { 
     Log.d(TAG, "onBufferReceived: "); 
    } 

    @Override 
    public void onEndOfSpeech() { 
     Log.d(TAG, "onEndOfSpeech: Ending"); 
    } 

    @Override 
    public void onError(int error) { 
     Log.d(TAG, "onError: Error " + error); 
     if (error == SpeechRecognizer.ERROR_NETWORK || error == SpeechRecognizer.ERROR_CLIENT) { 
      mTurnedOn = false; 
      return; 
     } 

     if (mTurnedOn) 
      mHandler.postDelayed(new Runnable() { 
       @Override 
       public void run() { 
//     mSpeechRecognizer.cancel(); 
        startSpeechRecognition(); 
       } 
      }, 100); 
    } 

    @Override 
    public void onResults(Bundle results) { 
     Log.d(TAG, "onResults: "); 
     ArrayList<String> partialResults = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION); 
     if (partialResults != null && partialResults.size() > 0) { 
      for (String str : partialResults) { 
       Log.d(TAG, "onResults: " + str); 
       if (str.equalsIgnoreCase(mActivity.getString(R.string.turn_off_recognition))) { 
        FlashManager.getInstance().turnOff(); 
        mTurnedOn = false; 
        return; 
       } 
      } 
     } 
     mHandler.postDelayed(new Runnable() { 
      @Override 
      public void run() { 
       startSpeechRecognition(); 
      } 
     }, 100); 
    } 

    @Override 
    public void onPartialResults(Bundle partialResults) { 
     Log.d(TAG, "onPartialResults: "); 
    } 

    @Override 
    public void onEvent(int eventType, Bundle params) { 
     Log.d(TAG, "onEvent: " + eventType); 
    } 
} 
} 

My device is a Samsung Note 5. Does anyone know how I can fix this?

Answer

This is a known bug, for which I filed a report. You can reproduce the issue using this simple gist.

The only workaround is to recreate the SpeechRecognizer object each time; see the edit below. This leads to other problems, which are mentioned in the report, but none that will cause an issue for your app.
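A minimal sketch of that recreate-each-time workaround, assuming the `mSpeechRecognizer` field in your class is made non-final so it can be replaced (the `restartRecognizer()` helper is hypothetical, not part of the linked gist):

```java
// Hypothetical helper inside SpeechManager: discard the old recognizer and
// build a fresh one before every restart, instead of reusing the instance.
private void restartRecognizer() {
    if (mSpeechRecognizer != null) {
        mSpeechRecognizer.destroy();   // release the old, now-broken instance
    }
    mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(
            mActivity.getApplicationContext());
    mSpeechRecognizer.setRecognitionListener(new MySpeechRecognizer());
    mSpeechRecognizer.startListening(mRecognitionIntent);
}
```

You would then call `restartRecognizer()` from the delayed `Runnable` in `onError` and `onResults` instead of `startSpeechRecognition()`.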

Google will eventually find a way to prevent continuous listening, since that is not what the API was designed for. You are better off considering PocketSphinx as a long-term option.

EDIT 22.06.16 - With the latest Google release, the behaviour has gotten worse. A new workaround is linked in the gist, which subclasses the RecognitionListener so that it only reacts to "real" callbacks.
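The core of that "react only to real callbacks" idea can be sketched without any Android dependency: an ERROR_NO_MATCH that fires almost immediately after `startListening()` cannot reflect real speech, so it is treated as spurious. The class below is an illustrative sketch, not the gist's code; the 500 ms threshold and the class name are assumptions.

```java
import java.util.function.LongSupplier;

// Sketch of a spurious-error filter: an error arriving sooner than
// thresholdMs after listening started is considered bogus and ignored.
class SpuriousErrorFilter {
    private final LongSupplier clock;   // injectable clock, for testing
    private final long thresholdMs;     // assumed cutoff, e.g. 500 ms
    private long startedAt;

    SpuriousErrorFilter(LongSupplier clock, long thresholdMs) {
        this.clock = clock;
        this.thresholdMs = thresholdMs;
    }

    // Call when startListening() is invoked.
    void onStartListening() {
        startedAt = clock.getAsLong();
    }

    // Returns true if an error fired so soon it should be ignored.
    boolean isSpurious() {
        return clock.getAsLong() - startedAt < thresholdMs;
    }
}
```

In `onError`, a spurious error would simply restart listening without counting as a failure, while a genuine one would follow the normal error path.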

EDIT 01.07.16 - See this question for another new bug.
