Input
$ cat infile
Location Group Device#
--------------------------
location1 group01 10
location2 group10 8
location2 groupxx 7
location3 groupAA 11
Output, reading the file once
$ awk 'NR==1{print $0,"Total_Device#"}NR==2{print $0"--"}NR>2{a[$1]+=$NF; b[n++]=$0}END{for(j=0;j<n;j++){split(b[j],d);print b[j],a[d[1]]}}' infile
Location Group Device# Total_Device#
----------------------------
location1 group01 10 10
location2 group10 8 15
location2 groupxx 7 15
location3 groupAA 11 11
Explanation
awk '                        # invoke awk
NR==1{                       # first record: the header line
print $0,"Total_Device#"     # print it with an extra column name
}
NR==2{                       # second record: the dashed separator
print $0"--"                 # print it, extended to cover the new column
}
NR>2{                        # all data records
a[$1]+=$NF;                  # sum the last field per location (field 1) in array a
b[n++] = $0                  # save the row, in input order, in array b
}
END{
for(j=0;j<n;j++){            # loop over the saved rows by numeric index, which
                             # preserves input order (a for-in loop would visit
                             # them in unspecified order)
split(b[j],d);               # split the saved row on the field separator
print b[j],a[d[1]]           # print the row plus its location total
}
}' infile
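The one-pass approach above can be reproduced end to end with a small script; the here-document recreates the sample infile under /tmp (the path is my own choice):

```shell
#!/bin/sh
# Recreate the sample input file from the question
cat > /tmp/infile <<'EOF'
Location Group Device#
--------------------------
location1 group01 10
location2 group10 8
location2 groupxx 7
location3 groupAA 11
EOF

# One-pass solution: buffer the data rows, then print them with the
# per-location totals once the whole file has been read
awk '
NR==1 { print $0, "Total_Device#" }
NR==2 { print $0 "--" }
NR>2  { a[$1] += $NF; b[n++] = $0 }
END   { for (j = 0; j < n; j++) { split(b[j], d); print b[j], a[d[1]] } }
' /tmp/infile
```

The trade-off versus the two-pass version below: the file is read only once, but every data row is held in memory until END.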
Output, reading the same file twice
$ awk 'FNR==NR{if(NR>2){loc[$1]+=$NF};next}FNR==1{print $0,"Total_Device#";next}{print $0,loc[$1]}' infile infile
Location Group Device# Total_Device#
--------------------------
location1 group01 10 10
location2 group10 8 15
location2 groupxx 7 15
location3 groupAA 11 11
Explanation
awk '                        # invoke awk
FNR==NR{                     # true only while awk reads the first file
if(NR>2){                    # skip the two header lines
loc[$1]+=$NF                 # sum the last field per location (field 1)
}
next                         # jump to the next record; because of this keyword
                             # the rest of the program is skipped on the first pass
}
# from here on awk is reading the same file a second time
FNR==1{                      # first record of the current (second) file
print $0,"Total_Device#";    # print the header row with an extra column name
next                         # go to the next record
}
{
print $0,loc[$1];            # print the row plus its total from array loc
}
' infile infile
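The FNR==NR guard is a general awk idiom for "am I still reading the first file?": NR counts records across all input files, while FNR resets to 1 at the start of each file, so the two are equal only during the first file. A minimal illustration with two throwaway files (the /tmp paths are my own choice):

```shell
#!/bin/sh
# FNR resets at each new input file, NR does not, so FNR==NR
# holds exactly while awk is reading the first file
printf 'a\nb\n' > /tmp/f1
printf 'c\nd\n' > /tmp/f2
awk '{ print FILENAME, "NR=" NR, "FNR=" FNR, (FNR==NR ? "first file" : "second file") }' /tmp/f1 /tmp/f2
# /tmp/f1 NR=1 FNR=1 first file
# /tmp/f1 NR=2 FNR=2 first file
# /tmp/f2 NR=3 FNR=1 second file
# /tmp/f2 NR=4 FNR=2 second file
```

One caveat: if the first file is empty, FNR==NR is already true for the second file, so the idiom assumes a non-empty first input.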
Are you asking for a pure `awk` solution, or would something else (`bash`, `sqlite`, .. .) also be acceptable? – Blaf
Let's say `bash` would also be interesting, but I doubt it could be done as elegantly as your `awk` answer below: –
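For comparison, a plain bash sketch of the same two-pass idea is possible with an associative array (requires bash 4+; the array name `totals` and the /tmp path are my own choices), though it is noticeably more verbose than the awk versions:

```shell
#!/bin/bash
# Sample input from the question
cat > /tmp/infile <<'EOF'
Location Group Device#
--------------------------
location1 group01 10
location2 group10 8
location2 groupxx 7
location3 groupAA 11
EOF

# Pass 1: sum the last column per location, skipping the two header lines.
# Process substitution keeps the while loop in the current shell, so the
# associative array survives the loop (a pipe would run it in a subshell).
declare -A totals
while read -r loc _ count; do
  (( totals[$loc] += count ))
done < <(tail -n +3 /tmp/infile)

# Pass 2: re-read the file, echoing the header lines and appending the totals
{
  read -r header && echo "$header Total_Device#"
  read -r dashes && echo "$dashes"
  while read -r loc group count; do
    echo "$loc $group $count ${totals[$loc]}"
  done
} < /tmp/infile
```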