2017-10-26

I am trying to start Hazelcast in a Kubernetes/Docker cluster. After digging around on the Internet, I found that someone has already looked into this: "Error starting Hazelcast in Kubernetes".

Currently I am trying the Hazelcast Kubernetes library:

    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-kubernetes</artifactId>
        <version>1.0.0</version>
    </dependency>

Here is the Hazelcast config I am using:

<?xml version="1.0" encoding="UTF-8"?> 
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.9.xsd" 
      xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> 
    <group> 
     <name>GROUP_NAME</name> 
     <password>GROUP_NAME_PASSWORD</password> 
    </group> 
    <network> 
     <port auto-increment="true">5701</port> 
     <join> 
      <multicast enabled="false"> 
       <multicast-group>224.2.2.3</multicast-group> 
       <multicast-port>54327</multicast-port> 
      </multicast> 
<!--   <tcp-ip enabled="false"> 
       <interface>127.0.0.1</interface> 
      </tcp-ip>--> 
      <!-- activate the Kubernetes plugin --> 
      <discovery-strategies> 
       <discovery-strategy enabled="true" class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy"> 
        <properties> 
         <!-- configure discovery service API lookup --> 
         <property name="service-name">service-name</property> 
         <property name="service-label-name">label-name</property> 
         <property name="service-label-value">true</property> 
         <property name="namespace">default</property> 
        </properties> 
       </discovery-strategy> 
      </discovery-strategies> 
     </join>  
     <interfaces enabled="false"> 
      <interface>10.10.1.*</interface> 
     </interfaces> 
<!--  <symmetric-encryption enabled="true"> 

       encryption algorithm such as 
       DES/ECB/PKCS5Padding, 
       PBEWithMD5AndDES, 
       AES/CBC/PKCS5Padding, 
       Blowfish, 
       DESede 

      <algorithm>PBEWithMD5AndDES</algorithm> 
      salt value to use when generating the secret key 
      <salt>4oItUqH</salt> 
      pass phrase to use when generating the secret key 
      <password>gctuSBc5bKZrSwXk+</password> 
      iteration count to use when generating the secret key 
      <iteration-count>19</iteration-count> 
     </symmetric-encryption>    --> 
    </network> 
    <executor-service> 
     <pool-size>16</pool-size> 
<!--  <max-pool-size>64</max-pool-size>--> 
     <queue-capacity>64</queue-capacity> 
     <statistics-enabled>true</statistics-enabled> 
<!--  <keep-alive-seconds>60</keep-alive-seconds>--> 
    </executor-service> 
    <queue name="default"> 
     <!-- 
      Maximum size of the queue. When a JVM's local queue size reaches the maximum, 
      all put/offer operations will get blocked until the queue size 
      of the JVM goes down below the maximum. 
      Any integer between 0 and Integer.MAX_VALUE. 0 means 
      Integer.MAX_VALUE. Default is 0. 
     --> 
     <max-size>0</max-size> 
     <!-- 
      Maximum number of seconds for each item to stay in the queue. Items that are 
      not consumed in <time-to-live-seconds> will automatically 
      get evicted from the queue. 
      Any integer between 0 and Integer.MAX_VALUE. 0 means 
      infinite. Default is 0. 
     --> 
<!--  <time-to-live-seconds>0</time-to-live-seconds>--> 
    </queue> 
    <map name="default"> 
     <!-- 
      Number of backups. If 1 is set as the backup-count for example, 
      then all entries of the map will be copied to another JVM for 
      fail-safety. Valid numbers are 0 (no backup), 1, 2, 3. 
     --> 
     <backup-count>4</backup-count> 
     <!-- 
      Valid values are: 
      NONE (no eviction), 
      LRU (Least Recently Used), 
      LFU (Least Frequently Used). 
      NONE is the default. 
     --> 
     <eviction-policy>NONE</eviction-policy> 
     <!-- 
      Maximum size of the map. When max size is reached, 
      map is evicted based on the policy defined. 
      Any integer between 0 and Integer.MAX_VALUE. 0 means 
      Integer.MAX_VALUE. Default is 0. 
     --> 
     <max-size>0</max-size> 
     <!-- 
      When max. size is reached, specified percentage of 
      the map will be evicted. Any integer between 0 and 100. 
      If 25 is set for example, 25% of the entries will 
      get evicted. 
     --> 
     <eviction-percentage>25</eviction-percentage> 
     <!-- 
      While recovering from split-brain (network partitioning), 
      map entries in the small cluster will merge into the bigger cluster 
      based on the policy set here. When an entry merge into the 
      cluster, there might an existing entry with the same key already. 
      Values of these entries might be different for that same key. 
      Which value should be set for the key? Conflict is resolved by 
      the policy set here. Default policy is hz.ADD_NEW_ENTRY 

      There are built-in merge policies such as 
      hz.NO_MERGE  ; no entry will merge. 
      hz.ADD_NEW_ENTRY ; entry will be added if the merging entry's key 
           doesn't exist in the cluster. 
      hz.HIGHER_HITS ; entry with the higher hits wins. 
      hz.LATEST_UPDATE ; entry with the latest update wins. 
     --> 
     <merge-policy>hz.ADD_NEW_ENTRY</merge-policy> 
    </map> 
    <!-- Add your own map merge policy implementations here:  
     <merge-policies><map-merge-policy name="MY_MERGE_POLICY"><class-name>com.acme.MyOwnMergePolicy</class-name></map-merge-policy></merge-policies> 
    --> 
</hazelcast> 

When I try to start the program, Hazelcast does not start and raises the following exception:

2017-10-25 15:44:34,849 INFO [main] DiscoveryService:65 - [192.168.1.83]:5701 [dev] [3.9] Kubernetes Discovery: Bearer Token { null } 
2017-10-25 15:44:34,888 ERROR [main] Launcher:97 - Unable to start EventEngineManager 
java.lang.RuntimeException: Failed to configure discovery strategies 
     at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.loadDiscoveryStrategies(DefaultDiscoveryService.java:153) 
     at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.<init>(DefaultDiscoveryService.java:60) 
     at com.hazelcast.spi.discovery.impl.DefaultDiscoveryServiceProvider.newDiscoveryService(DefaultDiscoveryServiceProvider.java:29) 
     at com.hazelcast.instance.Node.createDiscoveryService(Node.java:265) 
     at com.hazelcast.instance.Node.<init>(Node.java:220) 
     at com.hazelcast.instance.HazelcastInstanceImpl.createNode(HazelcastInstanceImpl.java:160) 
     at com.hazelcast.instance.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:128) 
     at com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:195) 
     at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:174) 
     at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:124) 
     at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:58) 
     at com.nsn.monitor.eva.eem.engine.state.EventEngineManagerContext.startup(EventEngineManagerContext.java:131) 
     at com.nsn.monitor.eva.eem.EventEngineManager.init(EventEngineManager.java:59) 
     at com.nsn.monitor.eva.eem.Launcher.initialize(Launcher.java:74) 
     at com.nsn.monitor.eva.eem.Launcher.main(Launcher.java:57) 
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred. 
     at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:53) 
     at io.fabric8.kubernetes.client.utils.HttpClientUtils.createHttpClient(HttpClientUtils.java:144) 
     at io.fabric8.kubernetes.client.BaseClient.<init>(BaseClient.java:41) 
     at io.fabric8.kubernetes.client.DefaultKubernetesClient.<init>(DefaultKubernetesClient.java:90) 
     at com.hazelcast.kubernetes.ServiceEndpointResolver.buildKubernetesClient(ServiceEndpointResolver.java:74) 
     at com.hazelcast.kubernetes.ServiceEndpointResolver.<init>(ServiceEndpointResolver.java:64) 
     at com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy.<init>(HazelcastKubernetesDiscoveryStrategy.java:75) 
     at com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategyFactory.newDiscoveryStrategy(HazelcastKubernetesDiscoveryStrategyFactory.java:56) 
     at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.buildDiscoveryStrategy(DefaultDiscoveryService.java:185) 
     at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.loadDiscoveryStrategies(DefaultDiscoveryService.java:145) 
     ... 14 more 
Caused by: java.security.cert.CertificateParsingException: no more data allowed for version 1 certificate 
     at sun.security.x509.X509CertInfo.parse(X509CertInfo.java:672) 
     at sun.security.x509.X509CertInfo.<init>(X509CertInfo.java:167) 
     at sun.security.x509.X509CertImpl.parse(X509CertImpl.java:1804) 
     at sun.security.x509.X509CertImpl.<init>(X509CertImpl.java:195) 
     at sun.security.provider.X509Factory.engineGenerateCertificate(X509Factory.java:102) 
     at java.security.cert.CertificateFactory.generateCertificate(CertificateFactory.java:339) 
     at io.fabric8.kubernetes.client.internal.CertUtils.createTrustStore(CertUtils.java:68) 
     at io.fabric8.kubernetes.client.internal.CertUtils.createTrustStore(CertUtils.java:62) 
     at io.fabric8.kubernetes.client.internal.SSLUtils.trustManagers(SSLUtils.java:110) 
     at io.fabric8.kubernetes.client.internal.SSLUtils.trustManagers(SSLUtils.java:104) 
     at io.fabric8.kubernetes.client.utils.HttpClientUtils.createHttpClient(HttpClientUtils.java:68) 
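The root cause in the trace is `CertificateParsingException: no more data allowed for version 1 certificate`, thrown while the fabric8 client builds its trust store from the cluster CA certificate. A way to check whether the CA file itself is malformed, independently of Hazelcast, is to parse it with the same JDK API the client uses (`CertificateFactory`). This is a diagnostic sketch, not part of the original question; the path is the standard service-account mount point inside a pod, so adjust it if your setup differs.

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class CertCheck {

    /** Tries to parse an X.509 certificate (PEM or DER) and reports the result. */
    static String describe(InputStream in) {
        try {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            return "ok: v" + cert.getVersion()
                    + " subject=" + cert.getSubjectX500Principal();
        } catch (Exception e) {
            // A broken CA file produces the same kind of error as in the stack trace.
            return "unparseable: " + e.getMessage();
        }
    }

    public static void main(String[] args) throws Exception {
        // Default in-pod location of the cluster CA; pass another path as args[0].
        String path = args.length > 0 ? args[0]
                : "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt";
        try (InputStream in = new FileInputStream(path)) {
            System.out.println(describe(in));
        }
    }
}
```

If this prints `unparseable: …`, the problem is the certificate, not the Hazelcast configuration.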

Since I don't know where to look, could someone give me some guidance? I don't know whether this is a problem with the certificate in Docker or a problem with my Hazelcast configuration.

Since I am no expert in this area, I am completely lost.

Answer


Basically it was a problem with the Docker certificate. After changing it, everything worked as it should.
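For others who land here: besides the CA file, the log line `Kubernetes Discovery: Bearer Token { null }` is also worth checking, since the discovery plugin authenticates against the Kubernetes API with the pod's service-account token. A small pure-JDK check of the mounted secret files (paths are the Kubernetes defaults, adjust as needed; this is my own sketch, not from the original answer):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ServiceAccountCheck {

    /** Reports whether a mounted service-account file exists and is non-empty. */
    static String status(Path file) throws Exception {
        if (!Files.exists(file)) {
            return file + ": MISSING";
        }
        long size = Files.size(file);
        return file + ": " + (size > 0 ? "present (" + size + " bytes)" : "EMPTY");
    }

    public static void main(String[] args) throws Exception {
        // Default mount point of the service-account secret inside a pod.
        Path base = Paths.get(args.length > 0 ? args[0]
                : "/var/run/secrets/kubernetes.io/serviceaccount");
        System.out.println(status(base.resolve("token")));  // bearer token for the API
        System.out.println(status(base.resolve("ca.crt"))); // CA the trust store is built from
    }
}
```

A missing or empty `token` explains the `Bearer Token { null }` line; a present but unparseable `ca.crt` explains the `CertificateParsingException`.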
