Intercepting the Linux pthread_create function leads to a JVM/SSH crash

I tried to intercept pthread_create on Ubuntu 14.04; the code is as follows:
#include <pthread.h>
#include <dlfcn.h>
#include <stddef.h>

struct thread_param {
    void *args;
    void *(*start_routine)(void *);
};

typedef int (*P_CREATE)(pthread_t *thread, const pthread_attr_t *attr,
                        void *(*start_routine)(void *), void *arg);

// Trampoline: unpack the original start routine and argument, then run them.
void *intermedia(void *arg)
{
    struct thread_param *temp = (struct thread_param *)arg;
    // do some other things
    return temp->start_routine(temp->args);
}

int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg)
{
    static void *handle = NULL;
    static P_CREATE old_create = NULL;
    if (!handle) {
        // Look up the real pthread_create in libpthread.
        handle = dlopen("libpthread.so.0", RTLD_LAZY);
        old_create = (P_CREATE)dlsym(handle, "pthread_create");
    }
    struct thread_param temp;
    temp.args = arg;
    temp.start_routine = start_routine;
    int result = old_create(thread, attr, intermedia, (void *)&temp);
    // int result = old_create(thread, attr, start_routine, arg);
    return result;
}
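The shim is compiled as a shared object and injected with LD_PRELOAD; presumably something like the following (the names intercept.c, libintercept.so, and test_program are placeholders, not taken from the original setup):

# build the interposing library; -ldl is needed for dlopen/dlsym
gcc -shared -fPIC -o libintercept.so intercept.c -ldl
# run a program with the shim preloaded
LD_PRELOAD=$PWD/libintercept.so ./test_program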
It works with my own pthread_create test case (written in C), which is roughly the minimal program sketched below.
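A minimal reconstruction of such a test case (a sketch; the worker routine and the value passed to it are hypothetical, not from the original file):

#include <pthread.h>
#include <stdio.h>

// Hypothetical worker: prints the argument it received through
// the interposed pthread_create.
static void *worker(void *arg)
{
    printf("worker got %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int value = 42;
    // Goes through the interposing pthread_create when the shim is preloaded.
    if (pthread_create(&tid, NULL, worker, &value) != 0)
        return 1;
    pthread_join(tid, NULL);
    return 0;
}

But when I used it with Hadoop on the JVM, it gave me the following error: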
Starting namenodes on [ubuntu]
ubuntu: starting namenode, logging to /home/yangyong/work/hadooptrace/hadoop-2.6.5/logs/hadoop-yangyong-namenode-ubuntu.out
ubuntu: starting datanode, logging to /home/yangyong/work/hadooptrace/hadoop-2.6.5/logs/hadoop-yangyong-datanode-ubuntu.out
ubuntu: /home/yangyong/work/hadooptrace/hadoop-2.6.5/sbin/hadoop-daemon.sh: line 131: 7545 Aborted (core dumped) nohup nice -n
$HADOOP_NICENESS $hdfsScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null
Starting secondary namenodes [0.0.0.0
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000000000000, pid=7585, tid=140445258151680
#
# JRE version: OpenJDK Runtime Environment (7.0_121) (build 1.7.0_121-b00)
# Java VM: OpenJDK 64-Bit Server VM (24.121-b00 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 2.6.8
# Distribution: Ubuntu 14.04 LTS, package 7u121-2.6.8-1ubuntu0.14.04.1
# Problematic frame:
# C 0x0000000000000000
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/yangyong/work/hadooptrace/hadoop-2.6.5/hs_err_pid7585.log
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
# http://icedtea.classpath.org/bugzilla
#]
A: ssh: Could not resolve hostname a: Name or service not known
#: ssh: Could not resolve hostname #: Name or service not known
fatal: ssh: Could not resolve hostname fatal: Name or service not known
been: ssh: Could not resolve hostname been: Name or service not known
#: ssh: Could not resolve hostname #: Name or service not known
#: ssh: Could not resolve hostname #: Name or service not known
#: ssh: Could not resolve hostname #: Name or service not known
^COpenJDK: ssh: Could not resolve hostname openjdk: Name or service not known
detected: ssh: Could not resolve hostname detected: Name or service not known
version:: ssh: Could not resolve hostname version:: Name or service not known
JRE: ssh: Could not resolve hostname jre: Name or service not known
Is there something wrong with my code? Or is it caused by something else, such as a protection mechanism in the JVM or SSH? Thanks.
There is another, similar error example: https://sourceware.org/ml/glibc-linux/2001-q1/msg00048.html – Chalex