DataFrame.drop_duplicates(self, subset=None, keep='first', inplace=False)
Return a DataFrame with duplicate rows removed, optionally only considering certain columns. Indexes, including the time index, are ignored.
Parameters:
subset : column label or sequence of labels, optional
    Only consider certain columns for identifying duplicates; by default all columns are used.
keep : {'first', 'last', False}, default 'first'
    'first' : drop duplicates except for the first occurrence.
    'last' : drop duplicates except for the last occurrence.
    False : drop all duplicates.
inplace : boolean, default False
    Whether to drop the duplicates in place or return a copy.
Returns:
    DataFrame
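The effect of subset and keep is easiest to see on a small throwaway DataFrame. The frame below is purely illustrative and not part of the wonhero.csv example that follows; its column names are made up for demonstration.

import pandas as pd

# a tiny illustrative DataFrame with deliberate duplicates
df = pd.DataFrame({
    'brand': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
    'style': ['cup', 'cup', 'cup', 'pack', 'pack'],
    'rating': [4.0, 4.0, 3.5, 15.0, 5.0],
})

# default: compare all columns and keep the first occurrence of each duplicate
print(df.drop_duplicates())

# subset: only the 'brand' column decides what counts as a duplicate
print(df.drop_duplicates(subset=['brand']))

# keep='last': keep the last occurrence instead of the first
print(df.drop_duplicates(subset=['brand', 'style'], keep='last'))

# keep=False: drop every row that has any duplicate
print(df.drop_duplicates(subset=['brand', 'style'], keep=False))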
Examples
Duplicated data
import pandas as pd
from pandas import DataFrame, Series

df = pd.read_csv('c:/wonhero.csv', index_col=0)
df.sort_index(inplace=True)
# df.drop_duplicates(inplace=True)   # left commented out so the duplicate rows remain visible
print(df.head(5))
Output:
pre open high low close change_price \
time
1990/12/19 96.05 96.05 99.98 95.79 99.98 3.93
1990/12/19 96.05 96.05 99.98 95.79 99.98 3.93
1990/12/19 96.05 96.05 99.98 95.79 99.98 3.93
1990/12/20 99.98 104.30 104.39 99.98 104.39 4.41
1990/12/20 99.98 104.30 104.39 99.98 104.39 4.41
change_percent volume amount
time
1990/12/19 4.0916 1260.0 494000.0
1990/12/19 4.0916 1260.0 494000.0
1990/12/19 4.0916 1260.0 494000.0
1990/12/20 4.4109 197.0 84000.0
1990/12/20 4.4109 197.0 84000.0
Removing duplicated data
import pandas as pd
from pandas import DataFrame, Series

df = pd.read_csv('c:/wonhero.csv', index_col=0)
df.sort_index(inplace=True)
df.drop_duplicates(inplace=True)   # drop duplicate rows in place, keeping the first occurrence
print(df.head(5))
Output:
pre open high low close change_price \
time
1990/12/19 96.05 96.05 99.98 95.79 99.98 3.93
1990/12/20 99.98 104.30 104.39 99.98 104.39 4.41
1990/12/21 104.39 109.07 109.13 103.73 109.13 4.74
1990/12/21 96.05 96.05 109.13 95.79 109.13 13.08
1990/12/24 109.13 113.57 114.55 109.13 114.55 5.42
change_percent volume amount
time
1990/12/19 4.0916 1260.0 494000.0
1990/12/20 4.4109 197.0 84000.0
1990/12/21 4.5407 28.0 16000.0
1990/12/21 13.6179 1485.0 594000.0
1990/12/24 4.9666 32.0 31000.0
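Note that the two 1990/12/21 rows both survive the call above: drop_duplicates compares only the column values (the index is ignored, as stated earlier), and those two rows differ in open, low, change_price and so on. To treat rows as duplicates based on a few columns only, pass subset. The sketch below assumes the same wonhero.csv file and column names as the example above, and returns a new DataFrame instead of modifying df in place (the default inplace=False).

import pandas as pd

df = pd.read_csv('c:/wonhero.csv', index_col=0)
df.sort_index(inplace=True)

# rows count as duplicates when 'open' and 'close' both match;
# keep='last' retains the last occurrence; a new DataFrame is returned
deduped = df.drop_duplicates(subset=['open', 'close'], keep='last')
print(deduped.head(5))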