uwsgi broken pipe - django, nginx

I am randomly (and consistently) getting a broken pipe in uwsgi, shown below. Any idea what could be causing this, or how I can debug it?

I am running Django (tastypie), uwsgi, and nginx on an m3.medium on AWS (Ubuntu 14.04).

[pid: 1516|app: 0|req: 548/1149] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:11:11 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 20 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1517|app: 0|req: 594/1150] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:11:12 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 15 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1516|app: 0|req: 549/1151] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:11:13 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 15 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1516|app: 0|req: 550/1152] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:11:13 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 14 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1517|app: 0|req: 595/1153] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:11:14 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 15 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1516|app: 0|req: 551/1154] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:11:14 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 14 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1517|app: 0|req: 596/1155] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:11:15 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 12 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1516|app: 0|req: 552/1156] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:11:15 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 12 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
Wed Jun 18 16:11:17 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /api/v1/clock/?format=json (10.0.0.204) 
IOError: write error 
[pid: 1512|app: 0|req: 1/1157] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:11:16 2014] GET /api/v1/clock/?format=json => generated 0 bytes in 1460 msecs (HTTP/1.1 200) 4 headers in 0 bytes (0 switches on core 0) 
announcing my loyalty to the Emperor... 
Wed Jun 18 20:11:17 2014 - [emperor] vassal api.ini is now loyal 
[pid: 1516|app: 0|req: 553/1158] 10.0.0.159() {42 vars in 1039 bytes} [Wed Jun 18 16:11:33 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 14 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1516|app: 0|req: 554/1159] 10.0.0.204() {46 vars in 908 bytes} [Wed Jun 18 16:11:41 2014] GET /api/v1/clock/ => generated 1298 bytes in 14 msecs (HTTP/1.0 200) 4 headers in 119 bytes (1 switches on core 0) 

I notice that the request counter sometimes drops to a very low value. Note the second request here - 2/1303: in the uWSGI log the first number is the request count for that particular worker and the second is the total across all workers, so this request landed on a worker (pid 1512) that has barely served anything. That request timed out.

[pid: 1516|app: 0|req: 624/1302] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:41:09 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 12 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1512|app: 0|req: 2/1303] 10.0.0.204() {42 vars in 1039 bytes} [Wed Jun 18 16:41:10 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1516|app: 0|req: 625/1304] 10.0.0.159() {42 vars in 1039 bytes} [Wed Jun 18 16:41:29 2014] GET /api/v1/clock/?format=json => generated 1298 bytes in 17 msecs (HTTP/1.1 200) 4 headers in 119 bytes (1 switches on core 0) 
[pid: 1517|app: 0|req: 668/1305] 10.0.0.204() {46 vars in 908 bytes} [Wed Jun 18 16:41:31 2014] GET /api/v1/clock/ => generated 1298 bytes in 18 msecs (HTTP/1.0 200) 4 headers in 119 bytes (1 switches on core 0) 

UPDATED: nginx.conf

user www-data; 
worker_processes 1; 
pid /run/nginx.pid; 

events { 
    worker_connections 1024; 
    # multi_accept on; 
} 

http { 

    client_body_timeout 12; 
    client_header_timeout 12; 
    keepalive_timeout 15; 
    send_timeout 10; 
    client_max_body_size 8m; 

    ## 
    # Basic Settings 
    ## 

    sendfile on; 
    tcp_nopush on; 
    tcp_nodelay on; 
    types_hash_max_size 2048; 
    # server_tokens off; 

    # server_names_hash_bucket_size 64; 
    # server_name_in_redirect off; 

    include /etc/nginx/mime.types; 
    default_type application/octet-stream; 

    ## 
    # Logging Settings 
    ## 

    #access_log off; 
    access_log /var/log/nginx/access.log; 
    error_log /var/log/nginx/error.log; 

    ## 
    # Gzip Settings 
    ## 

    gzip on; 
    gzip_disable "msie6"; 

    # gzip_vary on; 
    # gzip_proxied any; 
    # gzip_comp_level 6; 
    # gzip_buffers 16 8k; 
    # gzip_http_version 1.1; 
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; 

    ## 
    # nginx-naxsi config 
    ## 
    # Uncomment it if you installed nginx-naxsi 
    ## 

    #include /etc/nginx/naxsi_core.rules; 

    ## 
    # nginx-passenger config 
    ## 
    # Uncomment it if you installed nginx-passenger 
    ## 

    #passenger_root /usr; 
    #passenger_ruby /usr/bin/ruby; 

    ## 
    # Virtual Host Configs 
    ## 

    include /etc/nginx/conf.d/*.conf; 
    include /etc/nginx/sites-enabled/*; 
} 

and the virtual host config for this specific site:

upstream django { 
    server unix:/tmp/domain.sock; 
} 

server { 
    listen 80; 

    server_name domain.com; 

    return 301 https://$host$request_uri; 
} 

server { 
    listen 443; 
    server_name domain.com; 

    location /static { 
     alias /home/ubuntu/domain/static; 
    } 

    location / { 
     proxy_set_header X-Forwarded-Proto https; 

     uwsgi_pass django; 
     include /etc/nginx/uwsgi_params; 
    } 
} 

uwsgi config (vassal)

[uwsgi] 
env    = DEBUG=False 
env    = DB_ENVIRONMENT=production 
env    = NEW_RELIC_CONFIG_FILE=config/newrelic.ini 
env    = NEW_RELIC_ENVIRONMENT=production 
chdir   = /home/ubuntu/domain 
home   = /home/ubuntu/domain/venv 
module   = domain.wsgi 
processes  = 20 
uid    = www-data 
gid    = www-data 
chmod-socket = 666 
socket   = /tmp/domain.sock 
stats   = /tmp/domain.stats.sock 

kicked off from /etc/rc.local:

#!/bin/sh -e 
/usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals --logto /var/log/uwsgi/emperor.log 
exit 0 
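
To make the broken pipes easier to pin down, a few standard uWSGI options could be added to the vassal config above. This is only a minimal sketch with illustrative values, not something from the original setup:

[uwsgi] 
# abort and log any request that runs longer than 30 seconds 
harakiri         = 30 
harakiri-verbose = true 
# log every request that takes more than 1000 msecs 
log-slow         = 1000 

The stats socket that is already configured (/tmp/domain.stats.sock) can also be watched live with uwsgitop to see which workers are busy when the errors appear.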

Answer


Unless they appear while the uwsgi process is booting, you can safely ignore them; they are triggered by the client (or nginx) disconnecting in the middle of the request. Since the response times are very low, it is most likely a client disconnect. BTW, post your nginx and uWSGI configuration just to be safe.
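
If the goal is just to keep these tracebacks out of the logs, recent uWSGI versions have options for exactly this case. A minimal sketch of what could go into the vassal ini, assuming these options are available in the installed uWSGI build:

[uwsgi] 
# do not treat a peer closing the connection mid-response as an error 
ignore-sigpipe          = true 
ignore-write-errors     = true 
# do not raise an IOError on failed response writes 
disable-write-exception = true 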


Thanks - I updated the post with the config. Can you let me know if anything looks wrong? Still seeing very strange/intermittent connection problems with this server. –
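
If nginx itself is the side dropping the connection, the ngx_http_uwsgi_module timeouts in the location / block are worth checking; when nginx gives up waiting on the upstream it closes the socket and uWSGI logs a broken pipe. A sketch with illustrative values, not taken from the config above:

    location / { 
        uwsgi_pass django; 
        include /etc/nginx/uwsgi_params; 

        # how long nginx waits on the uwsgi socket before closing it 
        uwsgi_connect_timeout 30s; 
        uwsgi_send_timeout    60s; 
        uwsgi_read_timeout    300s; 

        # optionally finish the request even if the client goes away 
        # uwsgi_ignore_client_abort on; 
    } 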


It was a bad internet connection. Thank you. –
