I have deployed a DataStax Cassandra cluster on Google Cloud and can load data and run queries from cqlsh, but I am not able to connect from Java code. The following error is shown when connecting to the Cassandra cluster from the Java application.

Cassandra Version

3.0.7 

Error message

<searchResultResponse><error><errorCode>200</errorCode><errorMessage>All host(s) tried for query failed (tried: /104.155.229.139:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))</errorMessage></error></searchResultResponse> 

nodetool status

Datacenter: asia-east1-a
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns  Host ID                               Rack
UN  xx.xxx.x.4  974.53 MB  64      ?     e7974879-647f-460a-ac2e-0828bcefe7cb  asia-east1-a
UN  xx.xxx.x.2  832.5 MB   64      ?     4d152508-d9ea-4fea-89a6-ef3e86b036ac  asia-east1-a
UN  xx.xxx.x.3  942.64 MB  64      ?     de4798b7-2a74-4104-be0b-1ed093183276  asia-east1-a

Datacenter: europe-west1-b
==========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns  Host ID                               Rack
UN  xx.xxx.x.4  849.3 MB   64      ?     a9af8255-8f09-4d41-a9a5-5ce769b47cd6  europe-west1-b
UN  xx.xxx.x.2  906.62 MB  64      ?     3389e168-cf8e-4bd2-8947-cbfd42187a64  europe-west1-b
UN  xx.xxx.x.3  945.59 MB  64      ?     c2a561fc-6fa1-440d-8f42-e85a866ed48a  europe-west1-b

Datacenter: us-east1-b
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns  Host ID                               Rack
UN  xx.xxx.x.4  904.41 MB  64      ?     43b49588-841b-4925-bf3f-ab59ca227186  us-east1-b
UN  xx.xxx.x.2  953.32 MB  64      ?     d658b8c8-ee24-4e15-9240-7c4aac92f723  us-east1-b
UN  xx.xxx.x.3  843.16 MB  64      ?     1ee956b8-3823-4324-ac8f-582d312851b3  us-east1-b

cassandra.yaml from one of the nodes

cluster_name: 'Test Cluster' 

num_tokens: 64 



hinted_handoff_enabled: true 
max_hint_window_in_ms: 10800000 # 3 hours 

hinted_handoff_throttle_in_kb: 1024 

max_hints_delivery_threads: 2 

hints_directory: /var/lib/cassandra/hints 

hints_flush_period_in_ms: 10000 

max_hints_file_size_in_mb: 128 


batchlog_replay_throttle_in_kb: 1024 

authenticator: AllowAllAuthenticator 

authorizer: AllowAllAuthorizer 

role_manager: com.datastax.bdp.cassandra.auth.DseRoleManager 

roles_validity_in_ms: 2000 


permissions_validity_in_ms: 2000 


partitioner: org.apache.cassandra.dht.Murmur3Partitioner 

data_file_directories: 
    - /mnt/data 

commitlog_directory: /mnt/commitlog 

disk_failure_policy: stop 

commit_failure_policy: stop 

key_cache_size_in_mb: 

key_cache_save_period: 14400 

row_cache_size_in_mb: 0 

row_cache_save_period: 0 


counter_cache_size_in_mb: 

counter_cache_save_period: 7200 


saved_caches_directory: /mnt/saved_caches 

commitlog_sync: periodic 
commitlog_sync_period_in_ms: 10000 

commitlog_segment_size_in_mb: 32 


seed_provider: 
    # Addresses of hosts that are deemed contact points. 
    # Cassandra nodes use this list of hosts to find each other and learn 
    # the topology of the ring. You must change this if you are running 
    # multiple nodes! 
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider 
      parameters: 
          # seeds is actually a comma-delimited list of addresses. 
          # Ex: "<ip1>,<ip2>,<ip3>" 
          - seeds: "10.142.0.4" 

concurrent_reads: 32 
concurrent_writes: 32 
concurrent_counter_writes: 32 

concurrent_materialized_view_writes: 32 







memtable_allocation_type: heap_buffers 



index_summary_capacity_in_mb: 

index_summary_resize_interval_in_minutes: 60 

trickle_fsync: true 

trickle_fsync_interval_in_kb: 10240 

storage_port: 7000 

ssl_storage_port: 7001 

listen_address: 10.140.0.2 

broadcast_address: 10.140.0.2 



start_native_transport: true 
native_transport_port: 9042 



start_rpc: true 

rpc_address: 0.0.0.0 

rpc_port: 9160 

broadcast_rpc_address: 10.140.0.2 

rpc_keepalive: true 

rpc_server_type: sync 




thrift_framed_transport_size_in_mb: 15 

incremental_backups: false 

snapshot_before_compaction: false 

auto_snapshot: true 

tombstone_warn_threshold: 1000 
tombstone_failure_threshold: 100000 

column_index_size_in_kb: 64 


batch_size_warn_threshold_in_kb: 64 

batch_size_fail_threshold_in_kb: 640 

unlogged_batch_across_partitions_warn_threshold: 10 


compaction_throughput_mb_per_sec: 16 

compaction_large_partition_warning_threshold_mb: 100 

sstable_preemptive_open_interval_in_mb: 50 



read_request_timeout_in_ms: 5000 
range_request_timeout_in_ms: 10000 
write_request_timeout_in_ms: 2000 
counter_write_request_timeout_in_ms: 5000 
cas_contention_timeout_in_ms: 1000 
truncate_request_timeout_in_ms: 60000 
request_timeout_in_ms: 10000 

cross_node_timeout: false 


phi_convict_threshold: 12 

endpoint_snitch: GossipingPropertyFileSnitch 

dynamic_snitch_update_interval_in_ms: 100 
dynamic_snitch_reset_interval_in_ms: 600000 
dynamic_snitch_badness_threshold: 0.1 

request_scheduler: org.apache.cassandra.scheduler.NoScheduler 



server_encryption_options: 
    internode_encryption: none 
    keystore: resources/dse/conf/.keystore 
    keystore_password: cassandra 
    truststore: resources/dse/conf/.truststore 
    truststore_password: cassandra 
    # More advanced defaults below: 
    # protocol: TLS 
    # algorithm: SunX509 
    # store_type: JKS 
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] 
    # require_client_auth: false 

client_encryption_options: 
    enabled: false 
    # If enabled and optional is set to true encrypted and unencrypted connections are handled. 
    optional: false 
    keystore: resources/dse/conf/.keystore 
    keystore_password: cassandra 
    # require_client_auth: false 
    # Set trustore and truststore_password if require_client_auth is true 
    # truststore: resources/dse/conf/.truststore 
    # truststore_password: cassandra 
    # More advanced defaults below: 
    # protocol: TLS 
    # algorithm: SunX509 
    # store_type: JKS 
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] 

internode_compression: dc 

inter_dc_tcp_nodelay: false 

tracetype_query_ttl: 86400 
tracetype_repair_ttl: 604800 

gc_warn_threshold_in_ms: 1000 

enable_user_defined_functions: false 

enable_scripted_user_defined_functions: false 

windows_timer_interval: 1 

auto_bootstrap: false 

Thanks,


The Java code you wrote is needed to figure out the problem. Please post the code. The YAML file is most likely not the issue. – Sreekar

Answer

Based on the table name it is trying to read, you are probably using a version of the DataStax Java driver older than 3.0. Cassandra 3.0 changed the way schema tables are laid out, so you need version 3.0 or later of the Java driver (3.1.1 is the latest release).
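For reference, a minimal connection sketch against the 3.x driver (the com.datastax.cassandra:cassandra-driver-core:3.1.1 artifact) might look like the following; the class name and query are illustrative, and the contact point is simply the public IP taken from the error message above:

// Hypothetical minimal connection test with the DataStax Java driver 3.x.
// Add cassandra-driver-core 3.1.1 to the build; older 2.x drivers query the
// pre-3.0 schema_keyspaces table and fail with "unconfigured table schema_keyspaces".
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ConnectTest {
    public static void main(String[] args) {
        // Contact point and port: the node's public address and the native transport port 9042.
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("104.155.229.139")   // IP from the error message; replace with your node
                .withPort(9042)
                .build();
             Session session = cluster.connect()) {

            // With a 3.x driver the schema metadata is read from the system_schema keyspace,
            // so this simple query should succeed against Cassandra 3.0.7.
            ResultSet rs = session.execute("SELECT release_version FROM system.local");
            Row row = rs.one();
            System.out.println("Connected, Cassandra version: " + row.getString("release_version"));
        }
    }
}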


Thanks. The problem was resolved by pointing to the correct driver version. – user374374
