Pandas in 10 Minutes (official walkthrough + personal experiments)

Tags: pandas, python

Pandas in 10 Minutes


Code download: http://download.csdn.net/download/sirwill/10043185

In [19]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Object Creation

In [20]:
s= pd.Series([1,2,3,np.nan,5,6,])   # a Series: a one-dimensional labeled array
s
Out[20]:
0    1.0
1    2.0
2    3.0
3    NaN
4    5.0
5    6.0
dtype: float64
In [21]:
dates= pd.date_range("20170112",periods=6) # a DatetimeIndex, used below as the index of a DataFrame
dates
Out[21]:
DatetimeIndex(['2017-01-12', '2017-01-13', '2017-01-14', '2017-01-15',
               '2017-01-16', '2017-01-17'],
              dtype='datetime64[ns]', freq='D')
In [22]:
list(dates)  # evaluated but not shown; only the last expression in a cell is displayed
dates.date
Out[22]:
array([datetime.date(2017, 1, 12), datetime.date(2017, 1, 13),
       datetime.date(2017, 1, 14), datetime.date(2017, 1, 15),
       datetime.date(2017, 1, 16), datetime.date(2017, 1, 17)], dtype=object)
In [23]:
list(dates.date)
Out[23]:
[datetime.date(2017, 1, 12),
 datetime.date(2017, 1, 13),
 datetime.date(2017, 1, 14),
 datetime.date(2017, 1, 15),
 datetime.date(2017, 1, 16),
 datetime.date(2017, 1, 17)]
In [24]:
dates.year
Out[24]:
Int64Index([2017, 2017, 2017, 2017, 2017, 2017], dtype='int64')
In [25]:
list(dates.year)
Out[25]:
[2017, 2017, 2017, 2017, 2017, 2017]
In [26]:
list(dates.day)
Out[26]:
[12, 13, 14, 15, 16, 17]
In [27]:
str(dates.date)
Out[27]:
'[datetime.date(2017, 1, 12) datetime.date(2017, 1, 13)\n datetime.date(2017, 1, 14) datetime.date(2017, 1, 15)\n datetime.date(2017, 1, 16) datetime.date(2017, 1, 17)]'
In [28]:
df=pd.DataFrame(np.random.randn(6,4),index=dates,columns=list("ABCD")) # Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns
df
Out[28]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [29]:
df2 = pd.DataFrame({
         'A' : 1.,
                    'B' : pd.Timestamp('20130102'),
                    'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
                    'D' : np.array([3] * 4,dtype='int32'),
                    'E' : pd.Categorical(["test","train","test","train"]),
                    'F' : 'foo' }) #Creating a DataFrame by passing a dict of objects that can be converted to series-like.
df2
Out[29]:
  A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
In [30]:
df2.dtypes
Out[30]:
A           float64
B    datetime64[ns]
C           float32
D             int32
E          category
F            object
dtype: object
In [31]:
df.dtypes
Out[31]:
A    float64
B    float64
C    float64
D    float64
dtype: object
In [32]:
df2.<TAB> # In IPython/Jupyter, tab completion for column names (as well as public attributes) is automatically enabled. <TAB> stands for pressing the Tab key and is not valid Python, hence the SyntaxError below.
  File "<ipython-input-32-9c4c8dafe199>", line 1
    df2.<TAB> #If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled.
        ^
SyntaxError: invalid syntax

Viewing Data

In [36]:
df.head()
Out[36]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
In [37]:
df.index
Out[37]:
DatetimeIndex(['2017-01-12', '2017-01-13', '2017-01-14', '2017-01-15',
               '2017-01-16', '2017-01-17'],
              dtype='datetime64[ns]', freq='D')
In [38]:
df.columns
Out[38]:
Index(['A', 'B', 'C', 'D'], dtype='object')
In [39]:
df.values
Out[39]:
array([[-2.2581213 ,  2.45619592,  0.77856734, -2.030407  ],
       [-0.65834822,  0.62249451,  0.38862467,  0.07358728],
       [ 0.58921899,  1.39279193,  0.60554535,  1.23153815],
       [-0.1519579 , -0.65524863, -2.1147252 , -0.66983949],
       [-1.32330447,  3.14365936,  0.63899562,  0.89868346],
       [-0.02493461,  0.3858107 , -1.57718486, -0.0214603 ]])
In [40]:
df.describe()
Out[40]:
  A B C D
count 6.000000 6.000000 6.000000 6.000000
mean -0.637908 1.224284 -0.213363 -0.086316
std 1.021078 1.401987 1.282079 1.171045
min -2.258121 -0.655249 -2.114725 -2.030407
25% -1.157065 0.444982 -1.085732 -0.507745
50% -0.405153 1.007643 0.497085 0.026063
75% -0.056690 2.190345 0.630633 0.692409
max 0.589219 3.143659 0.778567 1.231538
In [41]:
df
Out[41]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [42]:
df.T
Out[42]:
  2017-01-12 00:00:00 2017-01-13 00:00:00 2017-01-14 00:00:00 2017-01-15 00:00:00 2017-01-16 00:00:00 2017-01-17 00:00:00
A -2.258121 -0.658348 0.589219 -0.151958 -1.323304 -0.024935
B 2.456196 0.622495 1.392792 -0.655249 3.143659 0.385811
C 0.778567 0.388625 0.605545 -2.114725 0.638996 -1.577185
D -2.030407 0.073587 1.231538 -0.669839 0.898683 -0.021460
In [43]:
df.sort_index(axis=1,ascending=False) #Sorting by an axis
Out[43]:
  D C B A
2017-01-12 -2.030407 0.778567 2.456196 -2.258121
2017-01-13 0.073587 0.388625 0.622495 -0.658348
2017-01-14 1.231538 0.605545 1.392792 0.589219
2017-01-15 -0.669839 -2.114725 -0.655249 -0.151958
2017-01-16 0.898683 0.638996 3.143659 -1.323304
2017-01-17 -0.021460 -1.577185 0.385811 -0.024935
In [44]:
df.sort_values(by="B") #Sorting by values
Out[44]:
  A B C D
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-16 -1.323304 3.143659 0.638996 0.898683
In [45]:
df
Out[45]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460

Selection

Getting

In [46]:
df["A"]# Selecting a single column, which yields a Series, equivalent to df.A
Out[46]:
2017-01-12   -2.258121
2017-01-13   -0.658348
2017-01-14    0.589219
2017-01-15   -0.151958
2017-01-16   -1.323304
2017-01-17   -0.024935
Freq: D, Name: A, dtype: float64
In [47]:
df.A
Out[47]:
2017-01-12   -2.258121
2017-01-13   -0.658348
2017-01-14    0.589219
2017-01-15   -0.151958
2017-01-16   -1.323304
2017-01-17   -0.024935
Freq: D, Name: A, dtype: float64
In [48]:
df[0:3]  #Selecting via [], which slices the rows.
Out[48]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
In [49]:
df["2017-01-13":"2017-01-17"]
Out[49]:
  A B C D
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
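
Worth noting (a small aside, not in the original notebook): with a DatetimeIndex you can also select with partial date strings, which pandas expands to the matching span:

df.loc["2017-01"]                    # every row in January 2017
df.loc["2017-01-13":"2017-01-15"]    # label slice; both endpoints included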

Selection by Label

In [50]:
dates
Out[50]:
DatetimeIndex(['2017-01-12', '2017-01-13', '2017-01-14', '2017-01-15',
               '2017-01-16', '2017-01-17'],
              dtype='datetime64[ns]', freq='D')
In [51]:
df.loc[dates[0]] #For getting a cross section using a label
Out[51]:
A   -2.258121
B    2.456196
C    0.778567
D   -2.030407
Name: 2017-01-12 00:00:00, dtype: float64
In [52]:
df.loc[:,["A","B"]]
Out[52]:
  A B
2017-01-12 -2.258121 2.456196
2017-01-13 -0.658348 0.622495
2017-01-14 0.589219 1.392792
2017-01-15 -0.151958 -0.655249
2017-01-16 -1.323304 3.143659
2017-01-17 -0.024935 0.385811
In [53]:
df.loc['20170112':'20170116',['A','B']] #Showing label slicing, both endpoints are included
Out[53]:
  A B
2017-01-12 -2.258121 2.456196
2017-01-13 -0.658348 0.622495
2017-01-14 0.589219 1.392792
2017-01-15 -0.151958 -0.655249
2017-01-16 -1.323304 3.143659
In [54]:
df.loc["20170115",["A","B"]] 
Out[54]:
A   -0.151958
B   -0.655249
Name: 2017-01-15 00:00:00, dtype: float64
In [55]:
df.loc[dates[3],"D"] #For getting a scalar value
Out[55]:
-0.6698394854437093
In [56]:
df.at[dates[3],"D"] #For getting fast access to a scalar (equiv to the prior method)
Out[56]:
-0.6698394854437093

Selection by Position

In [57]:
df.iloc[3] #Select via the position of the passed integers
Out[57]:
A   -0.151958
B   -0.655249
C   -2.114725
D   -0.669839
Name: 2017-01-15 00:00:00, dtype: float64
In [58]:
df.iloc[2:5,0:2] # By integer slices, acting similar to numpy/python
Out[58]:
  A B
2017-01-14 0.589219 1.392792
2017-01-15 -0.151958 -0.655249
2017-01-16 -1.323304 3.143659
In [59]:
df.iloc[[1,3,4],[0,2]] #By lists of integer position locations, similar to the numpy/python style
Out[59]:
  A C
2017-01-13 -0.658348 0.388625
2017-01-15 -0.151958 -2.114725
2017-01-16 -1.323304 0.638996
In [60]:
df.iloc[1:3,:]
Out[60]:
  A B C D
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
In [61]:
df.iloc[:,1:3]
Out[61]:
  B C
2017-01-12 2.456196 0.778567
2017-01-13 0.622495 0.388625
2017-01-14 1.392792 0.605545
2017-01-15 -0.655249 -2.114725
2017-01-16 3.143659 0.638996
2017-01-17 0.385811 -1.577185
In [62]:
df.iloc[1,1] #For getting a value explicitly
Out[62]:
0.62249451281708756
In [63]:
df.iat[1,1] #For getting fast access to a scalar (equiv to the prior method)
Out[63]:
0.62249451281708756

Boolean Indexing

In [64]:
df[df.A>0] #Using a single column’s values to select data
Out[64]:
  A B C D
2017-01-14 0.589219 1.392792 0.605545 1.231538
In [65]:
df[df>0] #Selecting values from a DataFrame where a boolean condition is met
Out[65]:
  A B C D
2017-01-12 NaN 2.456196 0.778567 NaN
2017-01-13 NaN 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 NaN NaN NaN NaN
2017-01-16 NaN 3.143659 0.638996 0.898683
2017-01-17 NaN 0.385811 NaN NaN
In [66]:
df2
Out[66]:
  A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
In [67]:
df
Out[67]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [68]:
df2=df.copy()
df2
Out[68]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [69]:
df.equals(df2)
Out[69]:
True
In [70]:
df==df2
Out[70]:
  A B C D
2017-01-12 True True True True
2017-01-13 True True True True
2017-01-14 True True True True
2017-01-15 True True True True
2017-01-16 True True True True
2017-01-17 True True True True
In [71]:
df is df2
Out[71]:
False
In [72]:
df2["E"]=["one","one","two","three","four","three"]
df2
Out[72]:
  A B C D E
2017-01-12 -2.258121 2.456196 0.778567 -2.030407 one
2017-01-13 -0.658348 0.622495 0.388625 0.073587 one
2017-01-14 0.589219 1.392792 0.605545 1.231538 two
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839 three
2017-01-16 -1.323304 3.143659 0.638996 0.898683 four
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460 three
In [73]:
df2[df2.E.isin(["two","four"])]
Out[73]:
  A B C D E
2017-01-14 0.589219 1.392792 0.605545 1.231538 two
2017-01-16 -1.323304 3.143659 0.638996 0.898683 four
In [74]:
df2[df2["E"].isin(["two","four"])]
Out[74]:
  A B C D E
2017-01-14 0.589219 1.392792 0.605545 1.231538 two
2017-01-16 -1.323304 3.143659 0.638996 0.898683 four

Setting

In [75]:
s1= pd.Series([1,2,3,4,5,6],index=pd.date_range("20171016",periods=6)) #Setting a new column automatically aligns the data by the indexes
s1
Out[75]:
2017-10-16    1
2017-10-17    2
2017-10-18    3
2017-10-19    4
2017-10-20    5
2017-10-21    6
Freq: D, dtype: int64
In [76]:
df.at[dates[0],"A"]=0 #Setting values by label
In [77]:
df
Out[77]:
  A B C D
2017-01-12 0.000000 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [78]:
df.iat[0,1]=0
df
Out[78]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [79]:
df.loc[:,"D"]=np.array([5]*len(df)) #Setting by assigning with a numpy array
df
Out[79]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [80]:
df2=df.copy()
df2
Out[80]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [81]:
df2[df2>0]=-df2
df2
Out[81]:
  A B C D
2017-01-12 0.000000 0.000000 -0.778567 -5
2017-01-13 -0.658348 -0.622495 -0.388625 -5
2017-01-14 -0.589219 -1.392792 -0.605545 -5
2017-01-15 -0.151958 -0.655249 -2.114725 -5
2017-01-16 -1.323304 -3.143659 -0.638996 -5
2017-01-17 -0.024935 -0.385811 -1.577185 -5

Missing Data

In [83]:
df
Out[83]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [84]:
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1.loc[dates[0]:dates[1],'E'] = 1
df1
Out[84]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
2017-01-14 0.589219 1.392792 0.605545 5 NaN
2017-01-15 -0.151958 -0.655249 -2.114725 5 NaN
In [85]:
df1.dropna(how="any") #To drop any rows that have missing data
Out[85]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
In [86]:
df1.fillna(value=5)  # Filling missing data
Out[86]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
2017-01-14 0.589219 1.392792 0.605545 5 5.0
2017-01-15 -0.151958 -0.655249 -2.114725 5 5.0
In [87]:
df1
Out[87]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
2017-01-14 0.589219 1.392792 0.605545 5 NaN
2017-01-15 -0.151958 -0.655249 -2.114725 5 NaN
In [88]:
pd.isnull(df1)
Out[88]:
  A B C D E
2017-01-12 False False False False False
2017-01-13 False False False False False
2017-01-14 False False False False True
2017-01-15 False False False False True
In [89]:
df1.isnull()
Out[89]:
  A B C D E
2017-01-12 False False False False False
2017-01-13 False False False False False
2017-01-14 False False False False True
2017-01-15 False False False False True
In [90]:
df1.isna()  # fails: this pandas version has no isna() method
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-90-9dd6d031e095> in <module>()
----> 1 df1.isna()  # fails: this pandas version has no isna() method

D:\Users\asus\Anaconda3\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
   2968             if name in self._info_axis:
   2969                 return self[name]
-> 2970             return object.__getattribute__(self, name)
   2971 
   2972     def __setattr__(self, name, value):

AttributeError: 'DataFrame' object has no attribute 'isna'
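
For the record, isna() was added in pandas 0.21.0 as an alias of isnull() (together with notna()/notnull()), so the cell above fails only because this notebook runs an older version. On a current install:

df1.isna()    # same result as df1.isnull()
df1.notna()   # complement; alias of notnull()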

Stats

Operations in general exclude missing data. Performing a descriptive statistic:

In [91]:
df
Out[91]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [92]:
df.mean()
Out[92]:
A   -0.261554
B    0.814918
C   -0.213363
D    5.000000
dtype: float64
In [93]:
df.mean(1)  #Same operation on the other axis
Out[93]:
2017-01-12    1.444642
2017-01-13    1.338193
2017-01-14    1.896889
2017-01-15    0.519517
2017-01-16    1.864838
2017-01-17    0.945923
Freq: D, dtype: float64
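
The claim that operations exclude missing data is easy to verify; a minimal sketch with a throwaway Series:

s_na = pd.Series([1.0, np.nan, 3.0])
s_na.mean()              # 2.0 -- the NaN is skipped
s_na.mean(skipna=False)  # nan -- force NaN propagation instead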
In [94]:
s= pd.Series([1,2,3,np.nan,4,5],index=dates).shift(2) 
# Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.
s
Out[94]:
2017-01-12    NaN
2017-01-13    NaN
2017-01-14    1.0
2017-01-15    2.0
2017-01-16    3.0
2017-01-17    NaN
Freq: D, dtype: float64
In [95]:
df
Out[95]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [96]:
df.sub(s,axis="index")  # subtract the Series from the DataFrame, aligned on the index
Out[96]:
  A B C D
2017-01-12 NaN NaN NaN NaN
2017-01-13 NaN NaN NaN NaN
2017-01-14 -0.410781 0.392792 -0.394455 4.0
2017-01-15 -2.151958 -2.655249 -4.114725 3.0
2017-01-16 -4.323304 0.143659 -2.361004 2.0
2017-01-17 NaN NaN NaN NaN

Apply

In [97]:
df
Out[97]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [98]:
df.apply(np.cumsum)  # running sum down each column
Out[98]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 1.167192 10
2017-01-14 -0.069129 2.015286 1.772737 15
2017-01-15 -0.221087 1.360038 -0.341988 20
2017-01-16 -1.544392 4.503697 0.297008 25
2017-01-17 -1.569326 4.889508 -1.280177 30
In [99]:
df.apply(lambda x: x.max()-x.min())
Out[99]:
A    1.912523
B    3.798908
C    2.893293
D    0.000000
dtype: float64
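
apply works column-by-column by default; passing axis=1 applies the function to each row instead (an extra illustration on the same frame):

df.apply(lambda x: x.max() - x.min(), axis=1)  # per-row range rather than per-column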

Histogramming

In [100]:
s= pd.Series(np.random.randint(0,7,size=10))
s
Out[100]:
0    4
1    5
2    2
3    0
4    5
5    3
6    4
7    3
8    3
9    0
dtype: int32
In [101]:
s.value_counts()
Out[101]:
3    3
5    2
4    2
0    2
2    1
dtype: int64
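
value_counts can also report relative frequencies rather than raw counts (a small aside):

s.value_counts(normalize=True)  # fractions that sum to 1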

String Methods

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.

In [102]:
s= pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
Out[102]:
0       a
1       b
2       c
3    aaba
4    baca
5     NaN
6    caba
7     dog
8     cat
dtype: object
In [103]:
s
Out[103]:
0       A
1       B
2       C
3    Aaba
4    Baca
5     NaN
6    CABA
7     dog
8     cat
dtype: object
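
As the note above says, pattern matching in str defaults to regular expressions; a couple of extra examples on the same Series:

s.str.contains("^[AB]")   # regex match per element; missing values stay NaN
s.str.len()               # length of each string, NaN for the missing entry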

Merge

Concat

pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations. See the Merging section. Concatenating pandas objects together with concat():

In [104]:
df
Out[104]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [105]:
df=pd.DataFrame(np.random.randn(10,4))
df
Out[105]:
  0 1 2 3
0 0.111766 -0.505125 2.156029 0.419152
1 1.068870 1.180587 0.361345 1.090554
2 0.488997 0.281507 -0.738345 -0.242974
3 -1.846709 1.686173 -0.202319 -1.151983
4 0.573012 -1.979189 1.544768 1.594595
5 -0.954571 -0.696788 0.270959 -2.296720
6 -1.511946 1.796113 0.399493 0.412664
7 0.089844 -0.545153 -0.315653 -0.235828
8 -0.747140 1.222900 -1.650812 0.292432
9 0.659855 0.501265 0.363978 1.722914
In [106]:
# break it into pieces
pieces=[df[:3],df[3:7],df[7:]]
pd.concat(pieces)
Out[106]:
  0 1 2 3
0 0.111766 -0.505125 2.156029 0.419152
1 1.068870 1.180587 0.361345 1.090554
2 0.488997 0.281507 -0.738345 -0.242974
3 -1.846709 1.686173 -0.202319 -1.151983
4 0.573012 -1.979189 1.544768 1.594595
5 -0.954571 -0.696788 0.270959 -2.296720
6 -1.511946 1.796113 0.399493 0.412664
7 0.089844 -0.545153 -0.315653 -0.235828
8 -0.747140 1.222900 -1.650812 0.292432
9 0.659855 0.501265 0.363978 1.722914
In [107]:
pieces
Out[107]:
[          0         1         2         3
 0  0.111766 -0.505125  2.156029  0.419152
 1  1.068870  1.180587  0.361345  1.090554
 2  0.488997  0.281507 -0.738345 -0.242974,
           0         1         2         3
 3 -1.846709  1.686173 -0.202319 -1.151983
 4  0.573012 -1.979189  1.544768  1.594595
 5 -0.954571 -0.696788  0.270959 -2.296720
 6 -1.511946  1.796113  0.399493  0.412664,
           0         1         2         3
 7  0.089844 -0.545153 -0.315653 -0.235828
 8 -0.747140  1.222900 -1.650812  0.292432
 9  0.659855  0.501265  0.363978  1.722914]
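
concat can also tag each piece with a key, producing a MultiIndex that records where every row came from (an extra illustration; the key names here are made up):

pd.concat(pieces, keys=["p1", "p2", "p3"])  # outer index level identifies the source piece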

Join

SQL style merges. See the Database style joining section.

In [108]:
left=pd.DataFrame({
        "key":["foo","foo"],"lval":[1,2]})
right = pd.DataFrame({
        'key': ['foo', 'foo'], 'rval': [4, 5]})
In [109]:
left
Out[109]:
  key lval
0 foo 1
1 foo 2
In [110]:
right
Out[110]:
  key rval
0 foo 4
1 foo 5
In [111]:
pd.merge(left,right,on="key")
Out[111]:
  key lval rval
0 foo 1 4
1 foo 1 5
2 foo 2 4
3 foo 2 5
In [112]:
left = pd.DataFrame({
        'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({
        'key': ['foo', 'bar'], 'rval': [4, 5]})
In [113]:
left
Out[113]:
  key lval
0 foo 1
1 bar 2
In [114]:
right
Out[114]:
  key rval
0 foo 4
1 bar 5
In [115]:
pd.merge(left,right,on="key")
Out[115]:
  key lval rval
0 foo 1 4
1 bar 2 5
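
merge performs an inner join by default; the how parameter selects the other SQL-style joins (extra illustration):

pd.merge(left, right, on="key", how="left")   # keep every row of `left`
pd.merge(left, right, on="key", how="outer")  # union of keys from both sides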

Append

In [116]:
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df
Out[116]:
  A B C D
0 -0.852451 1.074357 -0.591892 0.950982
1 -0.977580 1.656374 0.693657 0.718832
2 0.303269 -0.881728 -1.509321 1.219849
3 0.655751 1.235660 1.729038 1.074948
4 0.658413 -1.215348 -1.139623 0.753772
5 1.345115 1.420212 -0.124543 -0.099265
6 1.129623 0.597484 -0.804759 -0.568266
7 -0.770570 0.540917 -0.261607 -0.083751
In [117]:
s=df.iloc[3]
s
Out[117]:
A    0.655751
B    1.235660
C    1.729038
D    1.074948
Name: 3, dtype: float64
In [118]:
df.append(s,ignore_index=True)
Out[118]:
  A B C D
0 -0.852451 1.074357 -0.591892 0.950982
1 -0.977580 1.656374 0.693657 0.718832
2 0.303269 -0.881728 -1.509321 1.219849
3 0.655751 1.235660 1.729038 1.074948
4 0.658413 -1.215348 -1.139623 0.753772
5 1.345115 1.420212 -0.124543 -0.099265
6 1.129623 0.597484 -0.804759 -0.568266
7 -0.770570 0.540917 -0.261607 -0.083751
8 0.655751 1.235660 1.729038 1.074948
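
Worth knowing if you rerun this today: DataFrame.append was deprecated in pandas 1.4 and removed in 2.0; pd.concat is the replacement:

pd.concat([df, s.to_frame().T], ignore_index=True)  # same result as df.append(s, ignore_index=True)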

Grouping

By “group by” we are referring to a process involving one or more of the following steps:

- Splitting the data into groups based on some criteria
- Applying a function to each group independently
- Combining the results into a data structure

In [119]:
df = pd.DataFrame({
        'A' : ['foo', 'bar', 'foo', 'bar',
                          'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three',
                          'two', 'two', 'one', 'three'],
                   'C' : np.random.randn(8),
                   'D' : np.random.randn(8)})
df
Out[119]:
  A B C D
0 foo one -0.523738 -1.363519
1 bar one -0.071920 -2.618027
2 foo two -2.712421 -0.407372
3 bar three -0.635898 -1.942854
4 foo two 0.952073 -0.546110
5 bar two 1.474296 -0.982238
6 foo one -0.529788 -0.213397
7 foo three 0.877394 -0.791663
In [120]:
df.groupby("A").sum()
Out[120]:
  C D
A    
bar 0.766479 -5.543120
foo -1.936480 -3.322062
In [121]:
df.groupby(["A","B"]).sum()  #Grouping by multiple columns forms a hierarchical index, which we then apply the function.
Out[121]:
    C D
A B    
bar one -0.071920 -2.618027
three -0.635898 -1.942854
two 1.474296 -0.982238
foo one -1.053527 -1.576917
three 0.877394 -0.791663
two -1.760347 -0.953482
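
Beyond sum, groupby supports a different reduction per column through agg (an extra illustration on the same frame):

df.groupby("A").agg({"C": "sum", "D": "mean"})  # sum column C, average column D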

Reshaping

Stack

In [122]:
tuples = list(zip([['bar', 'bar', 'baz', 'baz',
                     'foo', 'foo', 'qux', 'qux'],
                    ['one', 'two', 'one', 'two',
                     'one', 'two', 'one', 'two']]))  # oops -- without * the two lists reach zip as a single argument
tuples
Out[122]:
[(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],),
 (['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'],)]
In [123]:
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
                     'foo', 'foo', 'qux', 'qux'],
                    ['one', 'two', 'one', 'two',
                     'one', 'two', 'one', 'two']]))  # the * unpacks the two lists, so zip pairs them element-wise
tuples
Out[123]:
[('bar', 'one'),
 ('bar', 'two'),
 ('baz', 'one'),
 ('baz', 'two'),
 ('foo', 'one'),
 ('foo', 'two'),
 ('qux', 'one'),
 ('qux', 'two')]
In [124]:
index=pd.MultiIndex.from_tuples(tuples,names=["first","second"])
index
Out[124]:
MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
           labels=[[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]],
           names=['first', 'second'])
In [125]:
df=pd.DataFrame(np.random.randn(8,2),index=index,columns=['A', 'B'])
df
Out[125]:
    A B
first second    
bar one -1.101051 -1.126231
two -0.395652 -0.313567
baz one 1.378579 -1.637869
two 0.665960 -0.259749
foo one -0.256181 1.260131
two -0.994720 0.506272
qux one -0.422649 0.191402
two -0.102085 0.975210
In [126]:
df2=df[:4]
df2
Out[126]:
    A B
first second    
bar one -1.101051 -1.126231
two -0.395652 -0.313567
baz one 1.378579 -1.637869
two 0.665960 -0.259749
In [127]:
stacked= df2.stack()
stacked
Out[127]:
first  second   
bar    one     A   -1.101051
               B   -1.126231
       two     A   -0.395652
               B   -0.313567
baz    one     A    1.378579
               B   -1.637869
       two     A    0.665960
               B   -0.259749
dtype: float64

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:

In [128]:
stacked.unstack()
Out[128]:
    A B
first second    
bar one -1.101051 -1.126231
two -0.395652 -0.313567
baz one 1.378579 -1.637869
two 0.665960 -0.259749
In [129]:
stacked.unstack(1)
Out[129]:
  second one two
first      
bar A -1.101051 -0.395652
B -1.126231 -0.313567
baz A 1.378579 0.665960
B -1.637869 -0.259749
In [130]:
stacked.unstack(0)
Out[130]:
  first bar baz
second      
one A -1.101051 1.378579
B -1.126231 -1.637869
two A -0.395652 0.665960
B -0.313567 -0.259749

Pivot Tables

In [131]:
df = pd.DataFrame({
        'A' : ['one', 'one', 'two', 'three'] * 3,
                   'B' : ['A', 'B', 'C'] * 4,
                   'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
                   'D' : np.random.randn(12),
                   'E' : np.random.randn(12)})
df
Out[131]:
  A B C D E
0 one A foo 0.039230 0.134261
1 one B foo 0.952890 -0.499183
2 two C foo -0.778814 -0.655735
3 three A bar 0.798864 0.025109
4 one B bar -0.580050 -1.711672
5 one C bar 0.004300 -0.433591
6 two A foo 0.229248 -2.648814
7 three B foo 0.506488 0.630373
8 one C foo -0.315667 0.031764
9 one A bar -1.547410 0.743825
10 two B bar -0.480958 0.365255
11 three C bar 1.742948 0.692884
In [4]:
pd.pivot_table(df,values="D",index=["A","B"],columns=["C"])
Out[4]:
  C bar foo
A B    
one A 0.932814 -1.440079
B 0.060252 1.071877
C 2.879779 0.355274
three A -0.328442 NaN
B NaN -2.544812
C -1.879058 NaN
two A NaN -1.987377
B 0.220517 NaN
C NaN -0.082820
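
pivot_table aggregates with the mean by default; aggfunc and fill_value control the reduction and the placeholder for empty cells (extra illustration):

pd.pivot_table(df, values="D", index=["A", "B"], columns=["C"],
               aggfunc=np.sum, fill_value=0)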

Time Series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications.

In [132]:
rng=pd.date_range("1/2/2017",periods=100,freq="S")
rng
Out[132]:
DatetimeIndex(['2017-01-02 00:00:00', '2017-01-02 00:00:01',
               '2017-01-02 00:00:02', '2017-01-02 00:00:03',
               '2017-01-02 00:00:04', '2017-01-02 00:00:05',
               '2017-01-02 00:00:06', '2017-01-02 00:00:07',
               '2017-01-02 00:00:08', '2017-01-02 00:00:09',
               '2017-01-02 00:00:10', '2017-01-02 00:00:11',
               '2017-01-02 00:00:12', '2017-01-02 00:00:13',
               '2017-01-02 00:00:14', '2017-01-02 00:00:15',
               '2017-01-02 00:00:16', '2017-01-02 00:00:17',
               '2017-01-02 00:00:18', '2017-01-02 00:00:19',
               '2017-01-02 00:00:20', '2017-01-02 00:00:21',
               '2017-01-02 00:00:22', '2017-01-02 00:00:23',
               '2017-01-02 00:00:24', '2017-01-02 00:00:25',
               '2017-01-02 00:00:26', '2017-01-02 00:00:27',
               '2017-01-02 00:00:28', '2017-01-02 00:00:29',
               '2017-01-02 00:00:30', '2017-01-02 00:00:31',
               '2017-01-02 00:00:32', '2017-01-02 00:00:33',
               '2017-01-02 00:00:34', '2017-01-02 00:00:35',
               '2017-01-02 00:00:36', '2017-01-02 00:00:37',
               '2017-01-02 00:00:38', '2017-01-02 00:00:39',
               '2017-01-02 00:00:40', '2017-01-02 00:00:41',
               '2017-01-02 00:00:42', '2017-01-02 00:00:43',
               '2017-01-02 00:00:44', '2017-01-02 00:00:45',
               '2017-01-02 00:00:46', '2017-01-02 00:00:47',
               '2017-01-02 00:00:48', '2017-01-02 00:00:49',
               '2017-01-02 00:00:50', '2017-01-02 00:00:51',
               '2017-01-02 00:00:52', '2017-01-02 00:00:53',
               '2017-01-02 00:00:54', '2017-01-02 00:00:55',
               '2017-01-02 00:00:56', '2017-01-02 00:00:57',
               '2017-01-02 00:00:58', '2017-01-02 00:00:59',
               '2017-01-02 00:01:00', '2017-01-02 00:01:01',
               '2017-01-02 00:01:02', '2017-01-02 00:01:03',
               '2017-01-02 00:01:04', '2017-01-02 00:01:05',
               '2017-01-02 00:01:06', '2017-01-02 00:01:07',
               '2017-01-02 00:01:08', '2017-01-02 00:01:09',
               '2017-01-02 00:01:10', '2017-01-02 00:01:11',
               '2017-01-02 00:01:12', '2017-01-02 00:01:13',
               '2017-01-02 00:01:14', '2017-01-02 00:01:15',
               '2017-01-02 00:01:16', '2017-01-02 00:01:17',
               '2017-01-02 00:01:18', '2017-01-02 00:01:19',
               '2017-01-02 00:01:20', '2017-01-02 00:01:21',
               '2017-01-02 00:01:22', '2017-01-02 00:01:23',
               '2017-01-02 00:01:24', '2017-01-02 00:01:25',
               '2017-01-02 00:01:26', '2017-01-02 00:01:27',
               '2017-01-02 00:01:28', '2017-01-02 00:01:29',
               '2017-01-02 00:01:30', '2017-01-02 00:01:31',
               '2017-01-02 00:01:32', '2017-01-02 00:01:33',
               '2017-01-02 00:01:34', '2017-01-02 00:01:35',
               '2017-01-02 00:01:36', '2017-01-02 00:01:37',
               '2017-01-02 00:01:38', '2017-01-02 00:01:39'],
              dtype='datetime64[ns]', freq='S')
In [133]:
ts =pd.Series(np.random.randint(0,500,len(rng)),index=rng)
ts
Out[133]:
2017-01-02 00:00:00    251
2017-01-02 00:00:01     63
2017-01-02 00:00:02    108
2017-01-02 00:00:03    288
2017-01-02 00:00:04    491
2017-01-02 00:00:05    490
2017-01-02 00:00:06    343
2017-01-02 00:00:07    357
2017-01-02 00:00:08     72
2017-01-02 00:00:09    171
2017-01-02 00:00:10    324
2017-01-02 00:00:11    281
2017-01-02 00:00:12    176
2017-01-02 00:00:13     14
2017-01-02 00:00:14    495
2017-01-02 00:00:15    150
2017-01-02 00:00:16     69
2017-01-02 00:00:17    144
2017-01-02 00:00:18    126
2017-01-02 00:00:19    368
2017-01-02 00:00:20    129
2017-01-02 00:00:21    386
2017-01-02 00:00:22    228
2017-01-02 00:00:23    458
2017-01-02 00:00:24     98
2017-01-02 00:00:25    244
2017-01-02 00:00:26    206
2017-01-02 00:00:27     98
2017-01-02 00:00:28     92
2017-01-02 00:00:29    259
                      ... 
2017-01-02 00:01:10    127
2017-01-02 00:01:11    342
2017-01-02 00:01:12    185
2017-01-02 00:01:13    123
2017-01-02 00:01:14     73
2017-01-02 00:01:15    132
2017-01-02 00:01:16    462
2017-01-02 00:01:17    317
2017-01-02 00:01:18    180
2017-01-02 00:01:19    247
2017-01-02 00:01:20     97
2017-01-02 00:01:21    401
2017-01-02 00:01:22    342
2017-01-02 00:01:23    382
2017-01-02 00:01:24    304
2017-01-02 00:01:25     47
2017-01-02 00:01:26    193
2017-01-02 00:01:27    334
2017-01-02 00:01:28    196
2017-01-02 00:01:29    297
2017-01-02 00:01:30    195
2017-01-02 00:01:31    236
2017-01-02 00:01:32    200
2017-01-02 00:01:33    490
2017-01-02 00:01:34    196
2017-01-02 00:01:35    201
2017-01-02 00:01:36    397
2017-01-02 00:01:37    494
2017-01-02 00:01:38    482
2017-01-02 00:01:39    267
Freq: S, Length: 100, dtype: int32
In [7]:
ts.resample("5Min").sum()
Out[7]:
2017-01-02    22939
Freq: 5T, dtype: int32
In [9]:
ts.resample("1Min").sum()
Out[9]:
2017-01-02 00:00:00    13896
2017-01-02 00:01:00     9043
Freq: T, dtype: int32
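
resample returns a groupby-like object, so other reductions work the same way (extra illustration):

ts.resample("30S").mean()   # average per 30-second bin
ts.resample("1Min").ohlc()  # open/high/low/close summary per minute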

Time zone representation: localizing to UTC.

In [10]:
rng= pd.date_range("2/1/2017 00:00",periods=5,freq="D")
rng
Out[10]:
DatetimeIndex(['2017-02-01', '2017-02-02', '2017-02-03', '2017-02-04',
               '2017-02-05'],
              dtype='datetime64[ns]', freq='D')
In [12]:
ts=pd.Series(np.random.randn(len(rng)),index=rng)
ts
Out[12]:
2017-02-01    0.329594
2017-02-02    2.097319
2017-02-03    1.852023
2017-02-04   -0.213452
2017-02-05    0.160873
Freq: D, dtype: float64
In [13]:
tsUtc=ts.tz_localize("UTC")
tsUtc
Out[13]:
2017-02-01 00:00:00+00:00    0.329594
2017-02-02 00:00:00+00:00    2.097319
2017-02-03 00:00:00+00:00    1.852023
2017-02-04 00:00:00+00:00   -0.213452
2017-02-05 00:00:00+00:00    0.160873
Freq: D, dtype: float64

Converting to another time zone.

In [14]:
tsUtc.tz_convert("US/Eastern")
Out[14]:
2017-01-31 19:00:00-05:00    0.329594
2017-02-01 19:00:00-05:00    2.097319
2017-02-02 19:00:00-05:00    1.852023
2017-02-03 19:00:00-05:00   -0.213452
2017-02-04 19:00:00-05:00    0.160873
Freq: D, dtype: float64
In [15]:
tsUtc
Out[15]:
2017-02-01 00:00:00+00:00    0.329594
2017-02-02 00:00:00+00:00    2.097319
2017-02-03 00:00:00+00:00    1.852023
2017-02-04 00:00:00+00:00   -0.213452
2017-02-05 00:00:00+00:00    0.160873
Freq: D, dtype: float64

Converting between time span representations

In [16]:
rng=pd.date_range("1/8/2017",periods=5,freq="M")
rng
Out[16]:
DatetimeIndex(['2017-01-31', '2017-02-28', '2017-03-31', '2017-04-30',
               '2017-05-31'],
              dtype='datetime64[ns]', freq='M')
In [18]:
ts=pd.Series(np.random.randn(len(rng)),rng)
ts
Out[18]:
2017-01-31    0.904523
2017-02-28   -0.470144
2017-03-31   -0.373244
2017-04-30    0.860448
2017-05-31    0.176226
Freq: M, dtype: float64
In [20]:
ps=ts.to_period()
ps
Out[20]:
2017-01    0.904523
2017-02   -0.470144
2017-03   -0.373244
2017-04    0.860448
2017-05    0.176226
Freq: M, dtype: float64
In [21]:
ps.to_timestamp()
Out[21]:
2017-01-01    0.904523
2017-02-01   -0.470144
2017-03-01   -0.373244
2017-04-01    0.860448
2017-05-01    0.176226
Freq: MS, dtype: float64
In [22]:
ps
Out[22]:
2017-01    0.904523
2017-02   -0.470144
2017-03   -0.373244
2017-04    0.860448
2017-05    0.176226
Freq: M, dtype: float64

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:

In [23]:
prng=pd.period_range("1990Q1","2017Q4",freq="Q-NOV")
prng
Out[23]:
PeriodIndex(['1990Q1', '1990Q2', '1990Q3', '1990Q4', '1991Q1', '1991Q2',
             '1991Q3', '1991Q4', '1992Q1', '1992Q2',
             ...
             '2015Q3', '2015Q4', '2016Q1', '2016Q2', '2016Q3', '2016Q4',
             '2017Q1', '2017Q2', '2017Q3', '2017Q4'],
            dtype='period[Q-NOV]', length=112, freq='Q-NOV')
In [25]:
ts= pd.Series(np.random.randn(len(prng)),prng)
ts.head()
Out[25]:
1990Q1    1.193031
1990Q2    0.621627
1990Q3   -0.235553
1990Q4    0.642938
1991Q1    0.247024
Freq: Q-NOV, dtype: float64
In [26]:
ts.index=(prng.asfreq("M","e")+1).asfreq("H","s")+9
ts.head()
Out[26]:
1990-03-01 09:00    1.193031
1990-06-01 09:00    0.621627
1990-09-01 09:00   -0.235553
1990-12-01 09:00    0.642938
1991-03-01 09:00    0.247024
Freq: H, dtype: float64
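
Breaking that index expression into steps (same logic as In [26], just spelled out):

q_end = prng.asfreq("M", "e")    # last month of each quarter
m = q_end + 1                    # the month following the quarter end
h = m.asfreq("H", "s")           # midnight at the start of that month
ts.index = h + 9                 # shift forward to 09:00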

Categoricals

In [34]:
df = pd.DataFrame({
        "id":[1,2,3,4,5,6],"raw_grade":["a","a","c","b","b","f"]})
df
Out[34]:
  id raw_grade
0 1 a
1 2 a
2 3 c
3 4 b
4 5 b
5 6 f

Convert the raw grades to a categorical data type.

In [35]:
df["grade"]=df.raw_grade.astype("category")
df
Out[35]:
  id raw_grade grade
0 1 a a
1 2 a a
2 3 c c
3 4 b b
4 5 b b
5 6 f f
In [36]:
df.grade #Convert the raw grades to a categorical data type
Out[36]:
0    a
1    a
2    c
3    b
4    b
5    f
Name: grade, dtype: category
Categories (4, object): [a, b, c, f]
In [37]:
# Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!)

df.grade.cat.categories=["very good","good","normal","bad"]
df
Out[37]:
  id raw_grade grade
0 1 a very good
1 2 a very good
2 3 c normal
3 4 b good
4 5 b good
5 6 f bad
In [38]:
# Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default).

df.grade=df.grade.cat.set_categories(["very bad", "bad", "medium","good", "very good"])
df.grade
Out[38]:
0    very good
1    very good
2          NaN
3         good
4         good
5          bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
In [39]:
df
Out[39]:
  id raw_grade grade
0 1 a very good
1 2 a very good
2 3 c NaN
3 4 b good
4 5 b good
5 6 f bad

Sorting is per order in the categories, not lexical order.

In [40]:
df.sort_values(by="grade")
Out[40]:
  id raw_grade grade
2 3 c NaN
5 6 f bad
3 4 b good
4 5 b good
0 1 a very good
1 2 a very good

Grouping by a categorical column also shows empty categories.

In [41]:
df.groupby("grade").size()
Out[41]:
grade
very bad     0
bad          1
medium       0
good         2
very good    2
dtype: int64

Plotting

In [43]:
ts=pd.Series(np.random.randn(1000),index=pd.date_range("1/1/2017",periods=1000))
ts.head()
Out[43]:
2017-01-01   -0.745067
2017-01-02   -0.070895
2017-01-03    0.233542
2017-01-04   -0.206597
2017-01-05    0.891064
Freq: D, dtype: float64
In [45]:
ts=ts.cumsum()
ts.head()
Out[45]:
2017-01-01   -0.745067
2017-01-02   -1.561029
2017-01-03   -2.143449
2017-01-04   -2.932466
2017-01-05   -2.830418
Freq: D, dtype: float64
In [48]:
ts.plot()
Out[48]:
<matplotlib.axes._subplots.AxesSubplot at 0x19bf6a6e278>
In [50]:
df=pd.DataFrame(np.random.randn(1000,4),index=ts.index,columns=["A","B","C","D"])
df.head()
Out[50]:
  A B C D
2017-01-01 -1.940139 -0.476590 -0.154066 1.692812
2017-01-02 0.399891 0.268976 0.596209 -0.484979
2017-01-03 0.814519 -0.142193 -0.084394 -0.687342
2017-01-04 0.385848 -1.230059 -0.093327 -0.096652
2017-01-05 0.407435 -0.849347 0.379192 0.172933
In [51]:
df=df.cumsum()
In [53]:
plt.figure()
df.plot()
plt.legend(loc="best")
plt.show()
<matplotlib.figure.Figure at 0x19bf8855da0>
<matplotlib.figure.Figure at 0x19bf897dc88>

Getting Data In/Out

CSV

In [ ]:
df.to_csv("foo.csv")
In [ ]:
pd.read_csv("foo.csv")

HDF5

In [ ]:
df.to_hdf("foo.h5","df")
In [ ]:
pd.read_hdf("foo.h5","df")

Excel

In [ ]:
df.to_excel('foo.xlsx', sheet_name='Sheet1')
In [ ]:
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
Copyright notice: this is an original article by the blogger, licensed under CC 4.0 BY-SA; include the original source link and this notice when reposting.
Original article: https://blog.csdn.net/sirwill/article/details/78379813
