IBM Bluemix set_hadoop_config error
Whenever I try the setup procedure for Apache Spark data analytics, I get this error.
In
def set_hadoop_config(credentials):
    prefix = "fs.swift.service." + credentials['name']
    hconf = sc._jsc.hadoopConfiguration()
    hconf.set(prefix + ".auth.url", credentials['auth_url'] + '/v3/auth/tokens')
    hconf.set(prefix + ".auth.endpoint.prefix", "endpoints")
    hconf.set(prefix + ".tenant", credentials['project_id'])
    hconf.set(prefix + ".username", credentials['user_id'])
    hconf.set(prefix + ".password", credentials['password'])
    hconf.setInt(prefix + ".http.port", 8080)
    hconf.set(prefix + ".region", credentials['region'])
    hconf.setBoolean(prefix + ".public", True)
In
credentials['name'] = 'keystone'
set_hadoop_config(credentials)
Out
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-6-976c35e1d85e> in <module>()
----> 1 credentials['name'] = 'keystone'
2 set_hadoop_config(credentials)
NameError: name 'credentials' is not defined
Does anyone know how to solve this problem? I'm stuck.
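The NameError indicates that the `credentials` dictionary was never defined in the notebook session before the cell that assigns `credentials['name']` ran. One way to fix it is to create the dictionary first, filled with the Object Storage credentials from your own Bluemix service instance. A minimal sketch, where all values below are hypothetical placeholders:

```python
# Hypothetical placeholder values -- replace them with the credentials
# from your own Bluemix Object Storage service instance.
credentials = {
    'auth_url':   'https://identity.open.softlayer.com',
    'project_id': 'YOUR_PROJECT_ID',
    'user_id':    'YOUR_USER_ID',
    'password':   'YOUR_PASSWORD',
    'region':     'dallas',
}

# Now this assignment succeeds, because `credentials` exists.
credentials['name'] = 'keystone'

# set_hadoop_config(credentials)  # call this inside the notebook,
                                  # where `sc` (the SparkContext) is defined
```

In the Bluemix notebooks, this dictionary is typically produced by the "Insert to code" option on the data source panel, which pastes the credentials cell for you; the cell must be executed before any cell that references `credentials`.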
Thanks! You helped me solve the problem. –