    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val topics = "test"
    val zkQuorum = "localhost:2181"
    val group = "test-consumer-group"
    val numThreads = "1" // consumer threads per topic

    val sparkConf = new SparkConf()
      .setAppName("XXXXX")
      .setMaster("local[*]")
      .set("spark.cassandra.connection.host", "127.0.0.1")
      .set("spark.cassandra.connection.port", "9042")

    val ssc = new StreamingContext(sparkConf, Seconds(2))
    ssc.checkpoint("checkpoint")
    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap

    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)

I receive a DStream of JSON like the one below. How do I store DStream data (JSON) in Cassandra?

[{"id":100,"firstName":"Beulah","lastName":"Fleming","gender":"female","ethnicity":"SpEd","height":167,"address":27,"createdDate":1494489672243,"lastUpdatedDate":1494489672244,"isDeleted":0},{"id":101,"firstName":"Traci","lastName":"Summers","gender":"female","ethnicity":"Frp","height":181,"address":544,"createdDate":1494510639611,"lastUpdatedDate":1494510639611,"isDeleted":0}] 

With the program above I am getting JSON data in a DStream. How do I process this DStream data and store it in Cassandra or Elasticsearch? In other words, how do I take the data out of the DStream (in JSON format) and save it to Cassandra?

Answer


You need to convert the elements of the stream into a matching case class:

    case class Record(id: String, firstName: String, ...)
    val columns = SomeColumns("id", "first_name", ...)
    val mapped = lines.map(whateverDataYouHave => functionThatReturnsARecordObject)

and then save them using the implicit saveToCassandra function, after importing com.datastax.spark.connector._:

    mapped.saveToCassandra(KEYSPACE_NAME, TABLE_NAME, columns)
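Putting the two steps together, here is a minimal sketch of parsing the JSON arrays from the question and writing them to Cassandra. It assumes json4s is on the classpath, that the keyspace and table (the names "test_ks" and "records" below are hypothetical placeholders) already exist, and only maps three of the fields for brevity:

```scala
import org.json4s._
import org.json4s.jackson.JsonMethods.parse
import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._ // saveToCassandra on DStreams

// Case class fields must be extractable from the JSON keys.
case class Record(id: Int, firstName: String, lastName: String)

// lines is the DStream[String] from the question; each element is a JSON array.
val records = lines.flatMap { json =>
  implicit val formats: Formats = DefaultFormats
  parse(json).extract[List[Record]] // one JSON array -> many Record objects
}

// Column names must match the Cassandra table's column names.
records.saveToCassandra("test_ks", "records",
  SomeColumns("id", "first_name", "last_name"))
```

Note that flatMap is used rather than map because each Kafka message here contains a JSON array of several records, each of which should become its own row.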

You can find more information in the documentation: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md