I have a data frame with two columns. The first column contains categories such as "First", "Second", "Third", and the second column has numbers that represent the number of times I saw the specific groups from "Category".
For example:
Category Frequency
First 10
First 15
First 5
Second 2
Third 14
Third 20
Second 3
I would like to sort the data by Category and sum all the Frequencies:
Category Frequency
First 30
Second 5
Third 34
How would I do this in R?
rowsum.
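As a runnable sketch of that one-word suggestion, using the question's data:

```r
# Sketch of the rowsum() suggestion, using the question's data
x <- data.frame(Category = c("First", "First", "First", "Second", "Third", "Third", "Second"),
                Frequency = c(10, 15, 5, 2, 14, 20, 3))
# rowsum() returns a matrix of group sums, with the group names as row names
res <- rowsum(x$Frequency, x$Category)
res
#        [,1]
# First    30
# Second    5
# Third    34
```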
Using aggregate:
aggregate(x$Frequency, by=list(Category=x$Category), FUN=sum)
Category x
1 First 30
2 Second 5
3 Third 34
In the example above, multiple dimensions could be specified in the list. Multiple aggregated metrics of the same data type can be incorporated via cbind:
aggregate(cbind(x$Frequency, x$Metric2, x$Metric3) ...
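As a complete, runnable sketch of that truncated call (Metric2 and Metric3 are hypothetical extra numeric columns, not part of the original question's data):

```r
# Hypothetical data frame with two extra metric columns; Metric2 and
# Metric3 are illustrative names, not from the original question
x <- data.frame(Category = c("First", "First", "Second"),
                Frequency = c(10, 15, 2),
                Metric2 = c(1, 2, 3),
                Metric3 = c(4, 5, 6))
# Aggregate all three numeric columns by Category in one call
agg <- aggregate(cbind(Frequency = x$Frequency, Metric2 = x$Metric2, Metric3 = x$Metric3),
                 by = list(Category = x$Category), FUN = sum)
agg
#   Category Frequency Metric2 Metric3
# 1    First        25       3       9
# 2   Second         2       3       6
```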
(Embedding @thelatemail's comment,) aggregate also has a formula interface:
aggregate(Frequency ~ Category, x, sum)
Or if you want to aggregate multiple columns, you can use the . notation (works for one column too):
aggregate(. ~ Category, x, sum)
Or tapply:
tapply(x$Frequency, x$Category, FUN=sum)
First Second Third
30 5 34
Using this data:
x <- data.frame(Category=factor(c("First", "First", "First", "Second",
"Third", "Third", "Second")),
Frequency=c(10,15,5,2,14,20,3))
You can also use the dplyr package for that purpose:
library(dplyr)
x %>%
group_by(Category) %>%
summarise(Frequency = sum(Frequency))
#Source: local data frame [3 x 2]
#
# Category Frequency
#1 First 30
#2 Second 5
#3 Third 34
Or, for multiple summary columns (works with one column too):
x %>%
group_by(Category) %>%
summarise(across(everything(), sum))
Here are some more examples of how to summarise data by group using dplyr functions, with the built-in dataset mtcars:
# several summary columns with arbitrary names
mtcars %>%
group_by(cyl, gear) %>% # multiple group columns
summarise(max_hp = max(hp), mean_mpg = mean(mpg)) # multiple summary columns
# summarise all columns except grouping columns using "sum"
mtcars %>%
group_by(cyl) %>%
summarise(across(everything(), sum))
# summarise all columns except grouping columns using "sum" and "mean"
mtcars %>%
group_by(cyl) %>%
summarise(across(everything(), list(mean = mean, sum = sum)))
# multiple grouping columns
mtcars %>%
group_by(cyl, gear) %>%
summarise(across(everything(), list(mean = mean, sum = sum)))
# summarise specific variables, not all
mtcars %>%
group_by(cyl, gear) %>%
summarise(across(c(qsec, mpg, wt), list(mean = mean, sum = sum)))
# summarise specific variables (numeric columns except grouping columns)
mtcars %>%
group_by(gear) %>%
summarise(across(where(is.numeric), list(mean = mean, sum = sum)))
For more information, including the %>% operator, see the introduction to dplyr. You can also specify the functions to apply as summaries in the funs() argument of summarise_all and its related functions (summarise_at, summarise_if).
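As a short sketch of one of those scoped verbs (summarise_at still works in current dplyr but is superseded by across()), again using the built-in mtcars data:

```r
library(dplyr)
# Sum selected columns per group with the superseded scoped verb summarise_at()
res <- mtcars %>%
  group_by(cyl) %>%
  summarise_at(vars(mpg, hp), sum)
res
```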
The answer provided by rcs works and is simple. However, if you are handling larger datasets and need a performance boost, there is a faster alternative:
library(data.table)
data = data.table(Category=c("First","First","First","Second","Third", "Third", "Second"),
Frequency=c(10,15,5,2,14,20,3))
data[, sum(Frequency), by = Category]
# Category V1
# 1: First 30
# 2: Second 5
# 3: Third 34
system.time(data[, sum(Frequency), by = Category] )
# user system elapsed
# 0.008 0.001 0.009
Let's compare that to the same thing using a data.frame and the above:
data = data.frame(Category=c("First","First","First","Second","Third", "Third", "Second"),
Frequency=c(10,15,5,2,14,20,3))
system.time(aggregate(data$Frequency, by=list(Category=data$Category), FUN=sum))
# user system elapsed
# 0.008 0.000 0.015
In case you want to keep the column name, this is the syntax:
data[,list(Frequency=sum(Frequency)),by=Category]
# Category Frequency
# 1: First 30
# 2: Second 5
# 3: Third 34
This difference will become more noticeable with larger datasets, as the code below demonstrates:
data = data.table(Category=rep(c("First", "Second", "Third"), 100000),
Frequency=rnorm(100000))
system.time( data[,sum(Frequency),by=Category] )
# user system elapsed
# 0.055 0.004 0.059
data = data.frame(Category=rep(c("First", "Second", "Third"), 100000),
Frequency=rnorm(100000))
system.time( aggregate(data$Frequency, by=list(Category=data$Category), FUN=sum) )
# user system elapsed
# 0.287 0.010 0.296
For multiple aggregations, you can combine lapply and .SD as follows:
data[, lapply(.SD, sum), by = Category]
# Category Frequency
# 1: First 30
# 2: Second 5
# 3: Third 34
data[, sum(Frequency), by = Category]. You can use .N instead of the sum() function to count rows per group: data[, .N, by = Category]. Here is a useful cheat sheet: s3.amazonaws.com/assets.datacamp.com/img/blog/…
You can also use the by() function:
x2 <- by(x$Frequency, x$Category, sum)
do.call(rbind,as.list(x2))
Those other packages (plyr, reshape) have the benefit of returning a data.frame, but it's worth being familiar with by() since it's a base function.
Several years later, just to add another simple base R solution that for some reason isn't represented here: xtabs.
xtabs(Frequency ~ Category, df)
# Category
# First Second Third
# 30 5 34
Or if you want a data.frame back:
as.data.frame(xtabs(Frequency ~ Category, df))
# Category Freq
# 1 First 30
# 2 Second 5
# 3 Third 34
library(plyr)
ddply(tbl, .(Category), summarise, sum = sum(Frequency))
If x is a data frame with your data, then the following will do what you want:
require(reshape)
recast(x, Category ~ ., fun.aggregate=sum)
While I have recently become a convert to dplyr for most of these types of operations, the sqldf package is still really nice (and IMHO more readable) for some things. Here is an example of how this question can be answered with sqldf:
x <- data.frame(Category=factor(c("First", "First", "First", "Second",
"Third", "Third", "Second")),
Frequency=c(10,15,5,2,14,20,3))
sqldf("select
Category
,sum(Frequency) as Frequency
from x
group by
Category")
## Category Frequency
## 1 First 30
## 2 Second 5
## 3 Third 34
Just to add a third option:
require(doBy)
summaryBy(Frequency~Category, data=yourdataframe, FUN=sum)
EDIT: This is a very old answer. Now I would recommend using group_by and summarise from dplyr, as in @docendo's answer.
Another solution that returns sums by group in a matrix or a data frame, and is short and fast:
rowsum(x$Frequency, x$Category)
I find ave very helpful (and efficient) when you need to apply different aggregation functions on different columns (and you must/want to stick with base R). For example, given this input:
DF <-
data.frame(Categ1=factor(c('A','A','B','B','A','B','A')),
Categ2=factor(c('X','Y','X','X','X','Y','Y')),
Samples=c(1,2,4,3,5,6,7),
Freq=c(10,30,45,55,80,65,50))
> DF
Categ1 Categ2 Samples Freq
1 A X 1 10
2 A Y 2 30
3 B X 4 45
4 B X 3 55
5 A X 5 80
6 B Y 6 65
7 A Y 7 50
we want to group by Categ1 and Categ2 and compute the sum of Samples and the mean of Freq.
Here's a possible solution using ave:
# create a copy of DF (only the grouping columns)
DF2 <- DF[,c('Categ1','Categ2')]
# add sum of Samples by Categ1,Categ2 to DF2
# (ave repeats the sum of the group for each row in the same group)
DF2$GroupTotSamples <- ave(DF$Samples,DF2,FUN=sum)
# add mean of Freq by Categ1,Categ2 to DF2
# (ave repeats the mean of the group for each row in the same group)
DF2$GroupAvgFreq <- ave(DF$Freq,DF2,FUN=mean)
# remove the duplicates (keep only one row for each group)
DF2 <- DF2[!duplicated(DF2),]
Result:
> DF2
Categ1 Categ2 GroupTotSamples GroupAvgFreq
1 A X 6 45
2 A Y 9 40
3 B X 7 50
6 B Y 6 65
You can use the function group.sum from the package Rfast.
Category <- Rfast::as_integer(Category, result.sort = FALSE) # convert character to integer codes; R's as.numeric would produce NAs
result <- Rfast::group.sum(Frequency, Category)
names(result) <- Rfast::Sort(unique(Category))
# 30 5 34
Rfast has many group functions, and group.sum is one of them.
Since dplyr 1.0.0, the across() function can be used:
df %>%
group_by(Category) %>%
summarise(across(Frequency, sum))
Category Frequency
<chr> <int>
1 First 30
2 Second 5
3 Third 34
If interested in multiple variables:
df %>%
group_by(Category) %>%
summarise(across(c(Frequency, Frequency2), sum))
Category Frequency Frequency2
<chr> <int> <int>
1 First 30 55
2 Second 5 29
3 Third 34 190
And the selection of variables using select helpers:
df %>%
group_by(Category) %>%
summarise(across(starts_with("Freq"), sum))
Category Frequency Frequency2 Frequency3
<chr> <int> <int> <dbl>
1 First 30 55 110
2 Second 5 29 58
3 Third 34 190 380
Sample data:
df <- read.table(text = "Category Frequency Frequency2 Frequency3
1 First 10 10 20
2 First 15 30 60
3 First 5 15 30
4 Second 2 8 16
5 Third 14 70 140
6 Third 20 120 240
7 Second 3 21 42",
header = TRUE,
stringsAsFactors = FALSE)
Using cast instead of recast (note that 'Frequency' is now 'value'):
df <- data.frame(Category = c("First","First","First","Second","Third","Third","Second")
, value = c(10,15,5,2,14,20,3))
install.packages("reshape")
library(reshape)
result <- cast(df, Category ~ ., fun.aggregate = sum)
to get:
Category (all)
First 30
Second 5
Third 34
library(tidyverse)
x <- data.frame(Category= c('First', 'First', 'First', 'Second', 'Third', 'Third', 'Second'),
Frequency = c(10, 15, 5, 2, 14, 20, 3))
count(x, Category, wt = Frequency)
A nice way to sum a variable by group is rowsum(numericToBeSummedUp, groups) from base. Here only collapse::fsum and Rfast::group.sum have been faster.
Regarding speed and memory consumption, collapse::fsum(numericToBeSummedUp, groups) was the best in the given example, which could be sped up further when using a grouped data frame:
GDF <- collapse::fgroup_by(DF, g) #Create a grouped data.frame with group g
#GDF <- collapse::gby(DF, g) #Alternative
collapse::fsum(GDF) #Calculate sum per group
This comes close to the timings when the dataset was split into sub-datasets per group.
A benchmark of the different methods shows that for summing up a single column, collapse::fsum was twice as fast as Rfast::group.sum and 7 times faster than rowsum. They were followed by tapply, data.table, by, and dplyr. xtabs and aggregate were the slowest.
Aggregating two columns, collapse::fsum was again the fastest, 3 times faster than Rfast::group.sum and 5 times faster than rowsum. They were followed by data.table, tapply, by, and dplyr. Again, xtabs and aggregate were the slowest.
Benchmark:
set.seed(42)
n <- 1e5
DF <- data.frame(g = as.factor(sample(letters, n, TRUE))
, x = rnorm(n), y = rnorm(n) )
library(magrittr)
Some methods allow tasks to be performed up front that may help to speed up the aggregation.
DT <- data.table::as.data.table(DF)
data.table::setkey(DT, g)
DFG <- collapse::gby(DF, g)
DFG1 <- collapse::gby(DF[c("g", "x")], g)
# Optimized dataset for this aggregation task
# This will also consume time!
DFS <- lapply(split(DF[c("x", "y")], DF["g"]), as.matrix)
DFS1 <- lapply(split(DF["x"], DF["g"]), as.matrix)
Summing up one column:
bench::mark(check = FALSE
, "aggregate" = aggregate(DF$x, DF["g"], sum)
, "tapply" = tapply(DF$x, DF$g, sum)
, "dplyr" = DF %>% dplyr::group_by(g) %>% dplyr::summarise(sum = sum(x))
, "data.table" = data.table::as.data.table(DF)[, sum(x), by = g]
, "data.table2" = DT[, sum(x), by = g]
, "by" = by(DF$x, DF$g, sum)
, "xtabs" = xtabs(x ~ g, DF)
, "rowsum" = rowsum(DF$x, DF$g)
, "Rfast" = Rfast::group.sum(DF$x, DF$g)
, "base Split" = lapply(DFS1, colSums)
, "base Split Rfast" = lapply(DFS1, Rfast::colsums)
, "collapse" = collapse::fsum(DF$x, DF$g)
, "collapse2" = collapse::fsum(DFG1)
)
# expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
# <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
# 1 aggregate 20.43ms 21.88ms 45.7 16.07MB 59.4 10 13
# 2 tapply 1.24ms 1.39ms 687. 1.53MB 30.1 228 10
# 3 dplyr 3.28ms 4.81ms 209. 2.42MB 13.1 96 6
# 4 data.table 1.59ms 2.47ms 410. 4.69MB 87.7 145 31
# 5 data.table2 1.52ms 1.93ms 514. 2.38MB 40.5 190 15
# 6 by 2.15ms 2.31ms 396. 2.29MB 26.7 148 10
# 7 xtabs 7.78ms 8.91ms 111. 10.54MB 50.0 31 14
# 8 rowsum 951.36µs 1.07ms 830. 1.15MB 24.1 378 11
# 9 Rfast 431.06µs 434.53µs 2268. 2.74KB 0 1134 0
#10 base Split 213.42µs 219.66µs 4342. 256B 12.4 2105 6
#11 base Split Rfast 76.88µs 81.48µs 10923. 65.05KB 16.7 5232 8
#12 collapse 121.03µs 122.92µs 7965. 256B 2.01 3961 1
#13 collapse2 85.97µs 88.67µs 10749. 256B 4.03 5328 2
Summing up two columns:
bench::mark(check = FALSE
, "aggregate" = aggregate(DF[c("x", "y")], DF["g"], sum)
, "tapply" = list2DF(lapply(DF[c("x", "y")], tapply, list(DF$g), sum))
, "dplyr" = DF %>% dplyr::group_by(g) %>% dplyr::summarise(x = sum(x), y = sum(y))
, "data.table" = data.table::as.data.table(DF)[,.(sum(x),sum(y)), by = g]
, "data.table2" = DT[,.(sum(x),sum(y)), by = g]
, "by" = lapply(DF[c("x", "y")], by, list(DF$g), sum)
, "xtabs" = xtabs(cbind(x, y) ~ g, DF)
, "rowsum" = rowsum(DF[c("x", "y")], DF$g)
, "Rfast" = list2DF(lapply(DF[c("x", "y")], Rfast::group.sum, DF$g))
, "base Split" = lapply(DFS, colSums)
, "base Split Rfast" = lapply(DFS, Rfast::colsums)
, "collapse" = collapse::fsum(DF[c("x", "y")], DF$g)
, "collapse2" = collapse::fsum(DFG)
)
# expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
# <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
# 1 aggregate 25.87ms 26.36ms 37.7 20.89MB 132. 4 14
# 2 tapply 2.65ms 3.23ms 312. 3.06MB 22.5 97 7
# 3 dplyr 4.27ms 6.02ms 164. 3.19MB 13.3 74 6
# 4 data.table 2.33ms 3.19ms 309. 4.72MB 57.0 114 21
# 5 data.table2 2.22ms 2.81ms 355. 2.41MB 19.8 161 9
# 6 by 4.45ms 5.23ms 190. 4.59MB 22.5 59 7
# 7 xtabs 10.71ms 13.14ms 76.1 19.7MB 145. 11 21
# 8 rowsum 1.02ms 1.07ms 850. 1.15MB 23.8 393 11
# 9 Rfast 841.57µs 846.88µs 1150. 5.48KB 0 575 0
#10 base Split 360.24µs 368.28µs 2652. 256B 8.16 1300 4
#11 base Split Rfast 113.95µs 119.81µs 7540. 65.05KB 10.3 3661 5
#12 collapse 201.31µs 204.83µs 4724. 512B 2.01 2350 1
#13 collapse2 156.95µs 161.79µs 5408. 512B 2.02 2683 1
Raised n to 1e7 and re-ran the benchmark for the best performers. Mostly the same order; rowsum is unbeatable, with data.table2 in second and dplyr not far behind. On data that big, dplyr actually beats data.table with the class conversion included in the benchmark.
collapse::fsum is also fast, at least on larger data with more groups: set.seed(42); n <- 1e7; DF <- data.frame(g = as.factor(sample(1e4, n, TRUE)), x = rnorm(n), y = rnorm(n)); system.time(group.sum(DF$x, DF$g)); system.time(fsum(DF$x, DF$g)). With grouped data: gr <- GRP(DF, ~ g); fsum(DF, gr).
With an id column 1:nrow(df), is it possible to keep the starting position of each category after aggregating? So the ID column would collapse to, e.g., 1, 3, 4, 7 after aggregating. In my case I like aggregate because it handles many columns automatically.
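In reply to the comment above, one possible sketch: aggregate as usual, then look up the first row index of each category with match() (the exact ids depend on your data; with this question's data they are 1, 4, 5, and the column name first_id is illustrative):

```r
df <- data.frame(Category = c("First", "First", "First", "Second", "Third", "Third", "Second"),
                 Frequency = c(10, 15, 5, 2, 14, 20, 3))
res <- aggregate(Frequency ~ Category, df, sum)
# match() returns the position of the first occurrence of each category
res$first_id <- match(res$Category, df$Category)
res
#   Category Frequency first_id
# 1    First        30        1
# 2   Second         5        4
# 3    Third        34        5
```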