
I am trying to get flume-ng-sql-source working with Apache Flume so that I can stream an Oracle database into Kafka, but the Flume agent cannot load the class oracle.jdbc.OracleDriver.

There is a basic tutorial here: https://www.toadworld.com/platforms/oracle/w/wiki/11524.streaming-oracle-database-table-data-to-apache-kafka

I am using the following versions: Flume 1.8.0, flume-ng-sql-source 1.4.4, and ojdbc7.jar.

Now, when I try to launch the agent, I get the error org.hibernate.boot.registry.classloading.spi.ClassLoadingException: Unable to load class [oracle.jdbc.OracleDriver].

Here is my flume.conf:

agent.channels = ch1 
agent.sinks = kafkaSink 
agent.sources = sql-source 

agent.channels.ch1.type = memory 
agent.channels.ch1.capacity = 1000000 
agent.sources.sql-source.channels = ch1 

agent.sources.sql-source.type = org.keedio.flume.source.SQLSource 

# URL to connect to database 
agent.sources.sql-source.hibernate.connection.url = jdbc:oracle:thin:@myoracleserver:1521:mydb 

# Database connection properties 
agent.sources.sql-source.hibernate.connection.user = user 
agent.sources.sql-source.hibernate.connection.password = pass 

agent.sources.sql-source.hibernate.dialect = org.hibernate.dialect.Oracle12cDialect 
agent.sources.sql-source.hibernate.connection.driver_class = oracle.jdbc.OracleDriver 


agent.sources.sql-source.table = SCHEMA.TABLE 


agent.sources.sql-source.columns.to.select = * 

agent.sources.sql-source.status.file.name = sql-source.status 

# Increment column properties 
agent.sources.sql-source.incremental.column.name = id 
# Increment value from which to start taking data from the table (0 will import the entire table) 
agent.sources.sql-source.incremental.value = 0 

# Query delay: the query will be run every configured number of milliseconds 
agent.sources.sql-source.run.query.delay=10000 

# Status file is used to save the last read row 
agent.sources.sql-source.status.file.path = /var/lib/flume 
agent.sources.sql-source.status.file.name = sql-source.status 


agent.sinks.kafkaSink.type=org.apache.flume.sink.kafka.KafkaSink 
agent.sinks.kafkaSink.brokerList=localhost:9092 
agent.sinks.kafkaSink.topic=oradb 
agent.sinks.kafkaSink.channel=ch1 
agent.sinks.kafkaSink.batchSize=10 
Here is my flume-env.sh:

FLUME_CLASSPATH="/home/mache84/flume/apache-flume-1.8.0-bin/lib" 
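As a variant (just a sketch, since I am not sure whether a bare directory on the Java classpath is enough for Flume to pick up jars inside it), flume-env.sh could also point at the driver jar explicitly; the path and jar name below are simply the ones from my install:

# flume-env.sh - sketch: reference the Oracle JDBC driver jar directly
# (path assumed from the install layout shown above)
FLUME_CLASSPATH="/home/mache84/flume/apache-flume-1.8.0-bin/lib/ojdbc7.jar"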

Here is the output when I start Flume:

[[email protected] flume]$ apache-flume-1.8.0-bin/bin/flume-ng agent --conf /home/mache84/flume/apache-flume-1.8.0-bin/conf -f /home/mache84/flume/apache-flume-1.8.0-bin/conf/flume.conf -n agent -Dflume.root.logger=INFO,console 
Info: Sourcing environment configuration script /home/mache84/flume/apache-flume-1.8.0-bin/conf/flume-env.sh 
Info: Including Hive libraries found via() for Hive access 
+ exec /usr/lib/jvm/jre-1.8.0-openjdk/bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/home/mache84/flume/apache-flume-1.8.0-bin/conf:/home/mache84/flume/apache-flume-1.8.0-bin/lib/*:/home/mache84/flume/apache-flume-1.8.0-bin/lib:/home/mache84/flume/apache-flume-1.8.0-bin/plugins.d/sql-source/lib/*:/home/mache84/flume/apache-flume-1.8.0-bin/plugins.d/sql-source/libext/*:/lib/*' -Djava.library.path= org.apache.flume.node.Application -f /home/mache84/flume/apache-flume-1.8.0-bin/conf/flume.conf -n agent 
2017-11-10 10:54:16,220 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:62)] Configuration provider starting 
2017-11-10 10:54:16,223 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:134)] Reloading configuration file:/home/mache84/flume/apache-flume-1.8.0-bin/conf/flume.conf 
2017-11-10 10:54:16,228 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink 
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink 
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:930)] Added sinks: kafkaSink Agent: agent 
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink 
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink 
2017-11-10 10:54:16,229 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:kafkaSink 
2017-11-10 10:54:16,236 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration for agents: [agent] 
2017-11-10 10:54:16,236 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:147)] Creating channels 
2017-11-10 10:54:16,242 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel ch1 type memory 
2017-11-10 10:54:16,247 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:201)] Created channel ch1 
2017-11-10 10:54:16,248 (conf-file-poller-0) [INFO - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:41)] Creating instance of source sql-source, type org.keedio.flume.source.SQLSource 
2017-11-10 10:54:16,250 (conf-file-poller-0) [INFO - org.keedio.flume.source.SQLSource.configure(SQLSource.java:63)] Reading and processing configuration values for source sql-source 
2017-11-10 10:54:16,365 (conf-file-poller-0) [INFO - org.hibernate.annotations.common.reflection.java.JavaReflectionManager.<clinit>(JavaReflectionManager.java:66)] HCANN000001: Hibernate Commons Annotations {4.0.5.Final} 
2017-11-10 10:54:16,370 (conf-file-poller-0) [INFO - org.hibernate.Version.logVersion(Version.java:54)] HHH000412: Hibernate Core {4.3.10.Final} 
2017-11-10 10:54:16,372 (conf-file-poller-0) [INFO - org.hibernate.cfg.Environment.<clinit>(Environment.java:239)] HHH000206: hibernate.properties not found 
2017-11-10 10:54:16,373 (conf-file-poller-0) [INFO - org.hibernate.cfg.Environment.buildBytecodeProvider(Environment.java:346)] HHH000021: Bytecode provider name : javassist 
2017-11-10 10:54:16,393 (conf-file-poller-0) [INFO - org.keedio.flume.source.HibernateHelper.establishSession(HibernateHelper.java:63)] Opening hibernate session 
2017-11-10 10:54:16,472 (conf-file-poller-0) [WARN - org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.configure(DriverManagerConnectionProviderImpl.java:93)] HHH000402: Using Hibernate built-in connection pool (not for production use!) 
2017-11-10 10:54:16,473 (conf-file-poller-0) [ERROR - org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:361)] Source sql-source has been removed due to an error during configuration 
org.hibernate.boot.registry.classloading.spi.ClassLoadingException: Unable to load class [oracle.jdbc.OracleDriver] 
     at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.classForName(ClassLoaderServiceImpl.java:245) 
     at org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.loadDriverIfPossible(DriverManagerConnectionProviderImpl.java:200) 
     at org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.buildCreator(DriverManagerConnectionProviderImpl.java:156) 
     at org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.configure(DriverManagerConnectionProviderImpl.java:95) 
     at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:111) 
     at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:234) 
     at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:206) 
     at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.buildJdbcConnectionAccess(JdbcServicesImpl.java:260) 
     at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.configure(JdbcServicesImpl.java:94) 
     at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:111) 
     at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:234) 
     at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:206) 
     at org.hibernate.cfg.Configuration.buildTypeRegistrations(Configuration.java:1887) 
     at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1845) 
     at org.keedio.flume.source.HibernateHelper.establishSession(HibernateHelper.java:67) 
     at org.keedio.flume.source.SQLSource.configure(SQLSource.java:73) 
     at org.apache.flume.conf.Configurables.configure(Configurables.java:41) 
     at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:326) 
     at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:101) 
     at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:141) 
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) 
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
     at java.lang.Thread.run(Thread.java:748) 
Caused by: java.lang.ClassNotFoundException: Could not load requested class : oracle.jdbc.OracleDriver 
     at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl$AggregatedClassLoader.findClass(ClassLoaderServiceImpl.java:230) 
     at java.lang.ClassLoader.loadClass(ClassLoader.java:424) 
     at java.lang.ClassLoader.loadClass(ClassLoader.java:357) 
     at java.lang.Class.forName0(Native Method) 
     at java.lang.Class.forName(Class.java:348) 
     at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.classForName(ClassLoaderServiceImpl.java:242) 
     ... 26 more 
2017-11-10 10:54:16,475 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: kafkaSink, type: org.apache.flume.sink.kafka.KafkaSink 
2017-11-10 10:54:16,479 (conf-file-poller-0) [WARN - org.apache.flume.sink.kafka.KafkaSink.translateOldProps(KafkaSink.java:363)] topic is deprecated. Please use the parameter kafka.topic 
2017-11-10 10:54:16,479 (conf-file-poller-0) [WARN - org.apache.flume.sink.kafka.KafkaSink.translateOldProps(KafkaSink.java:374)] brokerList is deprecated. Please use the parameter kafka.bootstrap.servers 
2017-11-10 10:54:16,479 (conf-file-poller-0) [WARN - org.apache.flume.sink.kafka.KafkaSink.translateOldProps(KafkaSink.java:384)] batchSize is deprecated. Please use the parameter flumeBatchSize 
2017-11-10 10:54:16,480 (conf-file-poller-0) [INFO - org.apache.flume.sink.kafka.KafkaSink.configure(KafkaSink.java:314)] Using the static topic oradb. This may be overridden by event headers 
2017-11-10 10:54:16,484 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:116)] Channel ch1 connected to [kafkaSink] 
2017-11-10 10:54:16,488 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:137)] Starting new configuration:{ sourceRunners:{} sinkRunners:{kafkaSink=SinkRunner: { policy:[email protected] counterGroup:{ name:null counters:{} } }} channels:{ch1=org.apache.flume.channel.MemoryChannel{name: ch1}} } 
2017-11-10 10:54:16,488 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:144)] Starting Channel ch1 
2017-11-10 10:54:16,489 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:159)] Waiting for channel: ch1 to start. Sleeping for 500 ms 
2017-11-10 10:54:16,524 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: CHANNEL, name: ch1: Successfully registered new MBean. 
2017-11-10 10:54:16,525 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: CHANNEL, name: ch1 started 
2017-11-10 10:54:16,990 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:171)] Starting Sink kafkaSink 
2017-11-10 10:54:17,033 (lifecycleSupervisor-1-1) [INFO - org.apache.kafka.common.config.AbstractConfig.logAll(AbstractConfig.java:165)] ProducerConfig values: 
     compression.type = none 
     metric.reporters = [] 
     metadata.max.age.ms = 300000 
     metadata.fetch.timeout.ms = 60000 
     reconnect.backoff.ms = 50 
     sasl.kerberos.ticket.renew.window.factor = 0.8 
     bootstrap.servers = [localhost:9092] 
     retry.backoff.ms = 100 
     sasl.kerberos.kinit.cmd = /usr/bin/kinit 
     buffer.memory = 33554432 
     timeout.ms = 30000 
     key.serializer = class org.apache.kafka.common.serialization.StringSerializer 
     sasl.kerberos.service.name = null 
     sasl.kerberos.ticket.renew.jitter = 0.05 
     ssl.keystore.type = JKS 
     ssl.trustmanager.algorithm = PKIX 
     block.on.buffer.full = false 
     ssl.key.password = null 
     max.block.ms = 60000 
     sasl.kerberos.min.time.before.relogin = 60000 
     connections.max.idle.ms = 540000 
     ssl.truststore.password = null 
     max.in.flight.requests.per.connection = 5 
     metrics.num.samples = 2 
     client.id = 
     ssl.endpoint.identification.algorithm = null 
     ssl.protocol = TLS 
     request.timeout.ms = 30000 
     ssl.provider = null 
     ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] 
     acks = 1 
     batch.size = 16384 
     ssl.keystore.location = null 
     receive.buffer.bytes = 32768 
     ssl.cipher.suites = null 
     ssl.truststore.type = JKS 
     security.protocol = PLAINTEXT 
     retries = 0 
     max.request.size = 1048576 
     value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 
     ssl.truststore.location = null 
     ssl.keystore.password = null 
     ssl.keymanager.algorithm = SunX509 
     metrics.sample.window.ms = 30000 
     partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner 
     send.buffer.bytes = 131072 
     linger.ms = 0 

2017-11-10 10:54:17,065 (lifecycleSupervisor-1-1) [INFO - org.apache.kafka.common.utils.AppInfoParser$AppInfo.<init>(AppInfoParser.java:82)] Kafka version : 0.9.0.1 
2017-11-10 10:54:17,065 (lifecycleSupervisor-1-1) [INFO - org.apache.kafka.common.utils.AppInfoParser$AppInfo.<init>(AppInfoParser.java:83)] Kafka commitId : 23c69d62a0cabf06 
2017-11-10 10:54:17,066 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: SINK, name: kafkaSink: Successfully registered new MBean. 
2017-11-10 10:54:17,066 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SINK, name: kafkaSink started 
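Unrelated to the driver error, the KafkaSink warnings above mention newer property names; a sketch of the same sink section rewritten with the names taken from those warnings (same values, only the keys renamed) would presumably be:

agent.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink 
agent.sinks.kafkaSink.kafka.bootstrap.servers = localhost:9092 
agent.sinks.kafkaSink.kafka.topic = oradb 
agent.sinks.kafkaSink.channel = ch1 
agent.sinks.kafkaSink.flumeBatchSize = 10 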

I have copied the ojdbc7.jar library into both apache-flume-1.8.0-bin/lib and apache-flume-1.8.0-bin/plugins.d/sql-source/libext/.
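For reference, a quick way to double-check that the jar is in place and really contains the driver class would be something like this (just a sketch; the paths are the ones from my install, and the jar tool ships with the JDK):

# confirm the jar was copied into the plugin directory
ls -l /home/mache84/flume/apache-flume-1.8.0-bin/plugins.d/sql-source/libext/

# confirm ojdbc7.jar actually contains oracle.jdbc.OracleDriver
jar tf /home/mache84/flume/apache-flume-1.8.0-bin/plugins.d/sql-source/libext/ojdbc7.jar | grep -i OracleDriver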

This is probably a very basic problem, since I am new to all of this.

Thanks in advance.
