2017-05-15

I am planning to upgrade from DSE 4.8 to DSE 5.0 and then to DSE 5.1. Currently I am running DSE 4.8 in a single data center with 3 nodes. Please suggest a step-by-step DSE upgrade procedure for me. I reviewed the DSE 5.0 upgrade articles on DataStax, which are a little too complex to understand: Upgrade from DSE 4.8 to DSE 5.0

I have Java 1.8 installed on all 3 of my nodes, and I am using Linux. Thanks in advance.

Answer

We just did this activity (actually went from 4.8.13 to 5.1.7, but had to "hop" through 5.0.X first). I'm not sure if you can attach a document to these, so this may seem sort of messy. Here are the 4.8-to-5.0 upgrade components. It, of course, has some assumptions (it's on Linux, you're using RPMs, etc.). Let me know if you have questions outside this forum and I'll gladly help. 

Prerequisites 

1. Ensure you have Java version 1.8.0_40 or higher 
a. $ java -version 
2. Ensure you have a backup of the data before starting 
a. Backing up the data ensures there is a rollback plan. While there may be Talena backups available, it is recommended to issue a snapshot as well. 
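For example, a minimal snapshot across all keyspaces (the tag name here is just an illustration; the same command with a specific tag appears again in the upgrade steps below): 
$ nodetool -u <username> -pw <password> snapshot -t pre_upgrade_backup 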


General Restrictions During Upgrade 
Below are restrictions to follow until each node is completely upgraded (an upgrade is not considered complete until the SSTables are upgraded). 
1. Do not enable new features 
2. Do not run “nodetool repair” 
3. Do not bootstrap nor decommission nodes 
4. Do not issue DDL or TRUNCATE statements 

SOLR Restrictions During Upgrade 
Below is a list of SOLR restrictions to follow until each node is completely upgraded. To determine which nodes are SOLR nodes, issue the below command and look for any node that has a “Search” workload. 
$ dsetool status 

1. Do not update schemas (schema.xml) 
2. Do not reindex 
3. Do not issue DDL or TRUNCATE statements 

Other Restrictions 
Below is a list of other general restrictions to follow during the upgrade: 
1. Do not change security credentials or permissions until after the upgrade is complete 

Driver Restrictions 
Check the below URL for driver compatibility. Depending on the driver version, you may need to recompile your client application code. 
http://docs.datastax.com/en/developer/driver-matrix/doc/common/driverMatrix.html 

Preparing To Upgrade 
1. Ensure each node has ample free disk space. The amount of space required depends on multiple factors. 
2. Upgrade all SSTables to ensure they are on the current version. This is required for all major Cassandra version changes. If the tables are already upgraded to the latest version, the command returns immediately with no action taken. If there are a lot of SSTables to check, provide the “--jobs” option to set the number of SSTables that upgrade simultaneously (the default is 2). 
a. $ nodetool -u <username> -pw <password> upgradesstables 
3. Run “nodetool repair” on each node to ensure they are all in sync. 
a. $ nodetool -u <username> -pw <password> repair 
4. Back up all configuration files on each node (see the example after this list): 
a. /etc/default/dse 
b. /etc/dse/dse.yaml 
c. /etc/dse/cassandra/cassandra.yaml 
d. /etc/dse/cassandra/cassandra-env.sh 
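A quick way to capture these files on each node (a minimal sketch; the archive path and name are just examples): 
$ sudo tar czf /tmp/dse_config_backup_$(hostname).tar.gz /etc/default/dse /etc/dse/dse.yaml /etc/dse/cassandra/cassandra.yaml /etc/dse/cassandra/cassandra-env.sh 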


Upgrade From 4.8 to 5.0 
Follow the below steps on each node to upgrade. Some warning messages may be displayed during and after the upgrade. 
1. Upgrade order matters. If there are multiple datacenters, upgrade every node in one datacenter before upgrading another datacenter. 
2. Within a datacenter, upgrade workloads in this order: 
a. DSE Analytics 
b. Transactional/DSE Graph 
c. DSE Search/SOLR 
3. Take a backup of each node 
a. $ nodetool -u <username> -pw <password> snapshot -t db_pre_50_upgrade 

ONE NODE AT A TIME 
4. Run nodetool drain (this will effectively disable this node for further connections/use until restarted) 
a. $ nodetool -u <username> -pw <password> drain 
5. Stop the node 
a. # service dse stop 
6. Install the new product (assuming local RPMs rather than a Satellite repository) 
a. # yum install *.rpm 
Note: 
1) I noticed when I tried to upgrade with the above command, I received an error regarding dependencies with dse-demos (dse-4.8.13, at least in the lab, had the dse-demos module installed, but dse-5.0 didn’t ship this RPM, so it flagged an error). To get around the problem, I did a “yum remove dse-demo*” first (which also removed dse-full) before I did the “yum install *.rpm”. 
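In concrete terms, the workaround looked like this (the staging directory below is hypothetical; use wherever you placed the DSE 5.0 RPMs): 
# yum remove dse-demo* 
# cd /opt/staging/dse-5.0-rpms 
# yum install *.rpm 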

2) Lab config files that were modified are listed below: 
warning: /etc/dse/cassandra/cassandra-env.sh created as /etc/dse/cassandra/cassandra-env.sh.rpmnew 
warning: /etc/dse/cassandra/cassandra.yaml created as /etc/dse/cassandra/cassandra.yaml.rpmnew 

It is advised to start from the supplied config files and reconcile your old values into them, rather than the other way around. 

3) I noticed that even though there wasn’t a warning that this file changed, it did: cassandra-rackdc.properties 



7. Compare and reconcile old configuration files with new ones 
a. diff cassandra.yaml cassandra.yaml.rpmnew | grep -v "^>[ ]*#" | grep -v "^<[ ]*#" > diff.out 
8. Start up Cassandra 
a. # service dse start 
9. Check /var/log/cassandra/system.log for issues 
a. You will see messages about Cassandra running in degraded mode. This will be the case until all nodes are upgraded. 
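One simple way to scan the log for problems (a sketch; adjust the path if your logs live elsewhere): 
$ grep -iE "error|warn" /var/log/cassandra/system.log | less 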

AFTER ALL NODES RUNNING WITH NEW SOFTWARE 
After all of the nodes have been restarted with DSE 5.0, the SSTables need to be upgraded. How long the process takes depends on how many SSTables exist. 

On Each Node: 
1) Drop legacy tables: 
a. system_auth.users 
b. system_auth.credentials 
c. system_auth.permissions 
Users in Cassandra 3.0 no longer reside in the above tables; instead, the information is populated in the newer tables (mainly system_auth.roles, system_auth.role_members and system_auth.role_permissions). 
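A minimal sketch of the corresponding drop statements via cqlsh (credentials are placeholders, matching the nodetool examples above): 
$ cqlsh -u <username> -p <password> 
cqlsh> DROP TABLE system_auth.users; 
cqlsh> DROP TABLE system_auth.credentials; 
cqlsh> DROP TABLE system_auth.permissions; 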

2) Upgrade SSTables 
a. $ nodetool -u <username> -pw <password> upgradesstables --jobs <#> 
i. Where <#> is the number of concurrent jobs you’d like to have running to upgrade the SSTables; the default is 2. This number can’t be larger than the “concurrent_compactors” parameter in the cassandra.yaml file. If it is, you’ll see warnings and the number of jobs will be reduced to the concurrent_compactors value. 
ii. All nodes can be done simultaneously 
iii. You can “tail” the system.log file or run “nodetool compactionstats” to get an idea of where things are at (see the examples below) 
iv. When completed, I noticed a 25% reduction in “data space” consumption 
v. The lab took about 5 minutes to complete 
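For example, to check the configured ceiling and watch progress (paths and commands as referenced above): 
$ grep concurrent_compactors /etc/dse/cassandra/cassandra.yaml 
$ tail -f /var/log/cassandra/system.log 
$ nodetool -u <username> -pw <password> compactionstats 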

OPTIMIZATION 
Create /etc/sysctl.d/cassandra.conf and put the following in it: 
net.ipv4.tcp_keepalive_time=60 
net.ipv4.tcp_keepalive_probes=3 
net.ipv4.tcp_keepalive_intvl=10 
net.core.rmem_max=16777216 
net.core.wmem_max=16777216 
net.core.rmem_default=16777216 
net.core.wmem_default=16777216 
net.core.optmem_max=40960 
net.ipv4.tcp_rmem=4096 87380 16777216 
net.ipv4.tcp_wmem=4096 65536 16777216 

# sysctl -p /etc/sysctl.d/cassandra.conf 
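To verify the settings took effect, a quick spot check of two of the values above: 
$ sysctl net.ipv4.tcp_keepalive_time net.core.rmem_max 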