pandas notes: ch06 Data Loading, Storage, and File Formats
ch06 Data Loading, Storage, and File Formats¶
In :
from __future__ import division
from numpy.random import randn
import numpy as np
import os
import sys
import matplotlib.pyplot as plt
np.random.seed(12345)
plt.rc('figure', figsize=(10, 6))
from pandas import Series, DataFrame
import pandas as pd
np.set_printoptions(precision=4)
In :
%pwd
Out:
'D:\\zwPython\\py35\\notebooks\\Python for Data Analysis'
Reading and writing data in text format¶
Table 6-1: Parsing functions in pandas¶
read_csv Load delimited data from a file, URL, or file-like object. The default delimiter is a comma
read_table Load delimited data from a file, URL, or file-like object. The default delimiter is a tab ('\t')
read_fwf Read data in fixed-width column format (that is, with no delimiters)
read_clipboard Read data from the clipboard; a clipboard version of read_table. Useful when converting web pages to tables
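Of the functions in Table 6-1, read_fwf is the only one not demonstrated in this chapter. A minimal sketch on hypothetical in-memory data (none of the ch06 files are fixed-width, so the data here is made up):

```python
# read_fwf parses columns by character position instead of a delimiter.
import io
import pandas as pd

text = ("id   name    score\n"
        "1    alice   90.5\n"
        "2    bob     77.0\n")
# colspecs lists the (start, end) half-open character span of each column
df = pd.read_fwf(io.StringIO(text), colspecs=[(0, 4), (5, 12), (13, 18)])
print(df)
```

Alternatively, read_fwf accepts a widths= list instead of explicit colspecs.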
In :
!type ch06\ex1.csv
a,b,c,d,message
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo
In :
df = pd.read_csv('ch06/ex1.csv')
df
Out:
   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo
In :
'read_table defaults to sep="\t", so reading this CSV requires specifying sep=","'
pd.read_table('ch06/ex1.csv', sep=',')
Out:
   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo
In :
'A file without a header row'
!type ch06\ex2.csv
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo
In :
'File without a header row: let pandas assign default column names'
pd.read_csv('ch06/ex2.csv', header=None)
Out:
   0   1   2   3      4
0  1   2   3   4  hello
1  5   6   7   8  world
2  9  10  11  12    foo
In :
'File without a header row: specify the column names yourself'
pd.read_csv('ch06/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
Out:
   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo
In :
'Use a specific column as the index'
names = ['a', 'b', 'c', 'd', 'message']
pd.read_csv('ch06/ex2.csv', names=names, index_col='message')
Out:
         a   b   c   d
message
hello    1   2   3   4
world    5   6   7   8
foo      9  10  11  12
In :
!type ch06\csv_mindex.csv
key1,key2,value1,value2
one,a,1,2
one,b,3,4
one,c,5,6
one,d,7,8
two,a,9,10
two,b,11,12
two,c,13,14
two,d,15,16
In :
'Pass multiple columns to form a hierarchical index'
parsed = pd.read_csv('ch06/csv_mindex.csv', index_col=['key1', 'key2'])
parsed
Out:
           value1  value2
key1 key2
one  a          1       2
     b          3       4
     c          5       6
     d          7       8
two  a          9      10
     b         11      12
     c         13      14
     d         15      16
In :
list(open('ch06/ex3.txt'))
Out:
['            A         B         C\n',
 'aaa -0.264438 -1.026059 -0.619500\n',
 'bbb  0.927272  0.302904 -0.032399\n',
 'ccc -0.264273 -0.386314 -0.217601\n',
 'ddd -0.871858 -0.348382  1.100491\n']
In :
'The fields are separated by a variable amount of whitespace, so pass the regular expression \s+ as the separator'
result = pd.read_table('ch06/ex3.txt', sep='\s+')
result
'There are 3 column names (A, B, C) but 4 data columns, so pandas infers that the first column is the index'
Out:
            A         B         C
aaa -0.264438 -1.026059 -0.619500
bbb  0.927272  0.302904 -0.032399
ccc -0.264273 -0.386314 -0.217601
ddd -0.871858 -0.348382  1.100491
In :
'skiprows: skip the first, third, and fourth lines'
!type ch06\ex4.csv
pd.read_csv('ch06/ex4.csv', skiprows=[0, 2, 3])
# hey!
a,b,c,d,message
# just wanted to make things more difficult for you
# who reads CSV files with computers, anyway?
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo
Out:
   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo
In :
!type ch06\ex5.csv
result = pd.read_csv('ch06/ex5.csv')
result
something,a,b,c,d,message
one,1,2,3,4,NA
two,5,6,,8,world
three,9,10,11,12,foo
Out:
  something  a   b    c   d message
0       one  1   2    3   4     NaN
1       two  5   6  NaN   8   world
2     three  9  10   11  12     foo
In :
'isnull: test whether each value is NaN'
pd.isnull(result)
Out:
  something      a      b      c      d message
0     False  False  False  False  False    True
1     False  False  False   True  False   False
2     False  False  False  False  False   False
In :
'na_values=: pass a list -- example 1'
result = pd.read_csv('ch06/ex5.csv', na_values=['NULL'])
result
Out:
  something  a   b    c   d message
0       one  1   2    3   4     NaN
1       two  5   6  NaN   8   world
2     three  9  10   11  12     foo
In :
'na_values=: pass a list -- example 2'
result = pd.read_csv('ch06/ex5.csv', na_values=['world','foo'])
result
Out:
  something  a   b    c   d message
0       one  1   2    3   4     NaN
1       two  5   6  NaN   8     NaN
2     three  9  10   11  12     NaN
In :
'na_values=: pass a dict to specify different NA sentinel values per column'
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv('ch06/ex5.csv', na_values=sentinels)
Out:
  something  a   b    c   d message
0       one  1   2    3   4     NaN
1       NaN  5   6  NaN   8   world
2     three  9  10   11  12     NaN
Table 6-2: read_csv/read_table function arguments¶
path String indicating filesystem location, URL, or file-like object
sep, delimiter Character sequence or regular expression used to split each row into fields
header Row number to use as the column names. Defaults to 0 (the first row); should be None if the file has no header row
index_col Column number(s) or name(s) to use as the row index. Can be a single name/number or a list of names/numbers (for a hierarchical index)
names List of column names for the result; combine with header=None (passing names implicitly sets header=None)
skiprows Number of rows at the beginning of the file to ignore, or a list of row numbers to skip (starting from 0)
na_values Sequence of values to replace with NA
comment Character(s) used to split comments off the end of lines
parse_dates Attempt to parse data to datetime; False by default. If True, attempt to parse all columns. Otherwise, you can specify a list of column numbers or names to parse. If an element of the list is itself a list or tuple, multiple columns are combined together and parsed as a date (for example, if date and time are split across two columns)
keep_date_col If joining columns to parse dates, keep the joined columns. Defaults to False
converters Dict mapping column numbers or names to functions. For example, {'foo': f} applies the function f to all values in the 'foo' column
dayfirst When parsing potentially ambiguous dates, treat them as international format (for example, 7/6/2012 → June 7, 2012). Defaults to False
date_parser Function to use for parsing dates
nrows Number of rows to read from the beginning of the file
iterator Return a TextParser object for reading the file piecemeal
chunksize For iteration, the size of the file chunks
skip_footer Number of lines to ignore at the end of the file
verbose Print various parser output information, such as the number of missing values in non-numeric columns
encoding Text encoding for Unicode. For example, 'utf-8' for UTF-8 encoded text
squeeze If the parsed data contains only one column, return a Series
thousands Thousands separator, such as ',' or '.'
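The comment= argument from Table 6-2 gives an alternative to counting skiprows line numbers for files like ex4.csv above. A sketch on an in-memory copy of that file (not read from disk here):

```python
# Lines after the comment character are discarded, so comment='#' drops
# the three '#' lines without needing their row numbers.
import io
import pandas as pd

raw = """# hey!
a,b,c,d,message
# just wanted to make things more difficult for you
1,2,3,4,hello
5,6,7,8,world
"""
df = pd.read_csv(io.StringIO(raw), comment='#')
print(df)
```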
Reading text files in pieces¶
In :
result = pd.read_csv('ch06/ex6.csv')
result.head(10)
Out:
        one       two     three      four key
0  0.467976 -0.038649 -0.295344 -1.824726   L
1 -0.358893  1.404453  0.704965 -0.200638   B
2 -0.501840  0.659254 -0.421691 -0.057688   G
3  0.204886  1.074134  1.388361 -0.982404   R
4  0.354628 -0.133116  0.283763 -0.837063   Q
5  1.817480  0.742273  0.419395 -2.251035   Q
6 -0.776764  0.935518 -0.332872 -1.875641   U
7 -0.913135  1.530624 -0.572657  0.477252   K
8  0.358480 -0.497572 -0.367016  0.507702   S
9 -1.740877 -1.160417 -1.637830  2.172201   G
In :
'Pass nrows to read only a given number of rows'
pd.read_csv('ch06/ex6.csv', nrows=5)
Out:
        one       two     three      four key
0  0.467976 -0.038649 -0.295344 -1.824726   L
1 -0.358893  1.404453  0.704965 -0.200638   B
2 -0.501840  0.659254 -0.421691 -0.057688   G
3  0.204886  1.074134  1.388361 -0.982404   R
4  0.354628 -0.133116  0.283763 -0.837063   Q
In :
'Pass chunksize to read the file in chunks: this returns an iterator over the chunks'
chunker = pd.read_csv('ch06/ex6.csv', chunksize=1000)
chunker
Out:
<pandas.io.parsers.TextFileReader at 0x55ec240>
In :
chunker = pd.read_csv('ch06/ex6.csv', chunksize=573)
i = 0
'Each pass through the loop resumes where the previous chunk left off'
for each in chunker:
    i += 1
    print(i, end=' ')
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
In :
chunker = pd.read_csv('ch06/ex6.csv', chunksize=1000)
tot = Series([])
for piece in chunker:
    tot = tot.add(piece['key'].value_counts(), fill_value=0)
tot = tot.sort_values(ascending=False)
In :
piece['key'].value_counts()
Out:
K 42
M 41
U 39
Y 38
P 38
E 35
R 34
Q 34
T 33
B 33
F 33
I 33
H 33
A 32
Z 31
G 31
L 30
N 29
J 29
S 29
C 28
W 28
X 28
3 26
D 25
V 23
6 22
4 21
O 19
0 17
2 15
8 15
9 15
5 15
7 14
1 12
Name: key, dtype: int64
In :
tot[:10]
Out:
E 368
X 364
L 346
O 343
Q 340
M 338
J 337
F 335
K 334
H 330
dtype: float64
Writing data out to text format¶
In :
data = pd.read_csv('ch06/ex5.csv')
data
Out:
  something  a   b    c   d message
0       one  1   2    3   4     NaN
1       two  5   6  NaN   8   world
2     three  9  10   11  12     foo
In :
data.to_csv('ch06/out.csv')
!type ch06\out.csv
,something,a,b,c,d,message
0,one,1,2,3.0,4,
1,two,5,6,,8,world
2,three,9,10,11.0,12,foo
In :
'sys.stdout: write the output to the screen'
'sep= changes the delimiter between fields'
data.to_csv(sys.stdout, sep='|')
|something|a|b|c|d|message
0|one|1|2|3.0|4|
1|two|5|6||8|world
2|three|9|10|11.0|12|foo
In :
'na_rep= changes how missing values are represented'
data.to_csv(sys.stdout, na_rep='NULL')
,something,a,b,c,d,message
0,one,1,2,3.0,4,NULL
1,two,5,6,NULL,8,world
2,three,9,10,11.0,12,foo
In :
'Omit the row and column labels'
data.to_csv(sys.stdout, index=False, header=False)
one,1,2,3.0,4,
two,5,6,,8,world
three,9,10,11.0,12,foo
In :
'Write only a subset of the columns'
data.to_csv(sys.stdout, index=False, columns=['a', 'b', 'c'])
a,b,c
1,2,3.0
5,6,
9,10,11.0
In :
'The to_csv method of Series:'
dates = pd.date_range('1/1/2000', periods=7)
ts = Series(np.arange(7), index=dates)
ts.to_csv('ch06/tseries.csv')
!type ch06\tseries.csv
2000-01-01,0
2000-01-02,1
2000-01-03,2
2000-01-04,3
2000-01-05,4
2000-01-06,5
2000-01-07,6
In :
'The from_csv method of Series'
'read_csv returns a DataFrame; to get a Series back, use from_csv'
Series.from_csv('ch06/tseries.csv', parse_dates=True)
Out:
2000-01-01 0
2000-01-02 1
2000-01-03 2
2000-01-04 3
2000-01-05 4
2000-01-06 5
2000-01-07 6
dtype: int64
In :
'squeeze=True can also make pd.read_csv() return a Series (when the result has a single column)'
pd.read_csv('ch06/tseries.csv',squeeze=True)
Out:
   2000-01-01  0
0  2000-01-02  1
1  2000-01-03  2
2  2000-01-04  3
3  2000-01-05  4
4  2000-01-06  5
5  2000-01-07  6
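Note that the call above does not actually produce a Series: tseries.csv has no header row, so read_csv treats the first data row as the header and the result still has two columns. A sketch of getting a real Series by also passing header=None and index_col=0 (in-memory data standing in for tseries.csv; .squeeze("columns") is the method equivalent of the squeeze=True keyword):

```python
import io
import pandas as pd

raw = "2000-01-01,0\n2000-01-02,1\n2000-01-03,2\n"
# index_col=0 makes the dates the index; squeezing the single remaining
# column along the columns axis yields a Series
s = pd.read_csv(io.StringIO(raw), header=None, index_col=0).squeeze("columns")
print(type(s).__name__)  # Series
```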
Working with delimited formats manually¶
The csv module
In :
!type ch06\ex7.csv
"a","b","c"
"1","2","3"
"1","2","3","4"
In :
'Pass an open file or file-like object to csv.reader'
import csv
f = open('ch06/ex7.csv')
'csv.reader returns an iterator'
reader = csv.reader(f)
In :
for line in reader:
print(line)
['a', 'b', 'c']
['1', '2', '3']
['1', '2', '3', '4']
In :
'Massaging the data into the desired format'
lines = list(csv.reader(open('ch06/ex7.csv')))
header, values = lines[0], lines[1:]
data_dict = {h: v for h, v in zip(header, zip(*values))}
data_dict
Out:
{'a': ('1', '1'), 'b': ('2', '2'), 'c': ('3', '3')}
In :
'Define a custom dialect for delimiter handling by subclassing csv.Dialect'
class my_dialect(csv.Dialect):
    lineterminator = '\n'
    delimiter = ';'
    quotechar = '"'
    quoting = csv.QUOTE_MINIMAL
!type ch06\ex7.csv
"a","b","c"
"1","2","3"
"1","2","3","4"
In :
f = open('ch06/ex7.csv')
reader = csv.reader(f,dialect=my_dialect)
list(reader)
Out:
[['a,"b","c"'], ['1,"2","3"'], ['1,"2","3","4"']]
In :
'You can also pass the options as keyword arguments to csv.reader, without defining a subclass'
f = open('ch06/ex7.csv')
reader = csv.reader(f, delimiter='|')
list(reader)
Out:
[['a,"b","c"'], ['1,"2","3"'], ['1,"2","3","4"']]
Table 6-3: CSV dialect options¶
delimiter One-character string used to separate fields. Defaults to ','
lineterminator Line terminator used for writing; defaults to '\r\n'. The reader ignores this option and recognizes cross-platform line terminators
quotechar Quote character for fields containing special characters (such as the delimiter). Default is '"'
quoting Quoting convention. Options include csv.QUOTE_ALL (quote all fields), csv.QUOTE_MINIMAL (only quote fields containing special characters such as the delimiter), csv.QUOTE_NONNUMERIC, and csv.QUOTE_NONE (no quoting). See Python's documentation for full details. Defaults to csv.QUOTE_MINIMAL
skipinitialspace Ignore whitespace after each delimiter. Defaults to False
doublequote How to handle the quote character inside a field. If True, it is doubled. See the online documentation for full details
escapechar String used to escape the delimiter if quoting is set to csv.QUOTE_NONE. Disabled by default
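As with csv.reader above, these options can be passed directly to csv.writer as keyword arguments instead of via a Dialect subclass. A sketch writing into an in-memory buffer (the StringIO stands in for a real file):

```python
import csv
import io

buf = io.StringIO()
# QUOTE_ALL forces quotes around every field, not just ones that need them
writer = csv.writer(buf, delimiter=';', quoting=csv.QUOTE_ALL,
                    lineterminator='\n')
writer.writerow(('one', 'two', 'three'))
writer.writerow(('1', '2', '3'))
print(buf.getvalue())
```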
In :
with open('mydata.csv', 'w') as f:
    writer = csv.writer(f, dialect=my_dialect)
    writer.writerow(('one', 'two', 'three'))
    writer.writerow(('1', '2', '3'))
    writer.writerow(('4', '5', '6'))
    writer.writerow(('7', '8', '9'))
In :
!type mydata.csv
one;two;three
1;2;3
4;5;6
7;8;9
JSON data¶
JSON (JavaScript Object Notation) is very close to valid Python code. Its basic types are objects (dicts), arrays (lists), strings, numbers, booleans, and null. All keys in an object must be strings.
In :
obj = """
{"name": "Wes",
"places_lived": ["United States", "Spain", "Germany"],
"pet": null,
"siblings": [{"name": "Scott", "age": 25, "pet": "Zuko"},
{"name": "Katie", "age": 33, "pet": "Cisco"}]
}
"""
In :
'loads: convert a JSON string to Python objects'
import json
result = json.loads(obj)
result
Out:
{'name': 'Wes',
'pet': None,
'places_lived': ['United States', 'Spain', 'Germany'],
'siblings': [{'age': 25, 'name': 'Scott', 'pet': 'Zuko'},
{'age': 33, 'name': 'Katie', 'pet': 'Cisco'}]}
In :
'dumps: convert a Python object to JSON'
asjson = json.dumps(result)
In :
'How you convert a JSON object to a DataFrame or another structure for analysis is up to you:'
siblings = DataFrame(result['siblings'], columns=['name', 'age'])
siblings
Out:
    name  age
0  Scott   25
1  Katie   33
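pandas can also parse JSON directly with read_json; a sketch on a list-of-records string like the 'siblings' data above (this relies on read_json's default handling of record-oriented JSON):

```python
import io
import pandas as pd

records = '[{"name": "Scott", "age": 25}, {"name": "Katie", "age": 33}]'
# wrapping in StringIO marks the argument as literal JSON, not a path
df = pd.read_json(io.StringIO(records))
print(df)
```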
XML and HTML, Web scraping (skipped)¶
NB. The Yahoo! Finance API has changed and this example no longer works
In [ ]:
from lxml.html import parse
from urllib.request import urlopen
parsed = parse(urlopen('http://finance.yahoo.com/q/op?s=AAPL+Options'))
doc = parsed.getroot()
In [ ]:
links = doc.findall('.//a')
links
In [ ]:
lnk = links[28]
lnk
lnk.get('href')
lnk.text_content()
In [ ]:
urls = [lnk.get('href') for lnk in doc.findall('.//a')]
urls[-10:]
In [ ]:
tables = doc.findall('.//table')
calls = tables[9]
puts = tables[13]
In [ ]:
rows = calls.findall('.//tr')
In [ ]:
def _unpack(row, kind='td'):
    elts = row.findall('.//%s' % kind)
    return [val.text_content() for val in elts]
In [ ]:
_unpack(rows[0], kind='th')
_unpack(rows[1], kind='td')
In [ ]:
from pandas.io.parsers import TextParser

def parse_options_data(table):
    rows = table.findall('.//tr')
    header = _unpack(rows[0], kind='th')
    data = [_unpack(r) for r in rows[1:]]
    return TextParser(data, names=header).get_chunk()
In [ ]:
call_data = parse_options_data(calls)
put_data = parse_options_data(puts)
call_data[:10]
Parsing XML with lxml.objectify (skipped)¶
In [ ]:
%cd ch06/mta_perf/Performance_XML_Data
In [ ]:
!head -21 Performance_MNR.xml
In [ ]:
from lxml import objectify
path = 'Performance_MNR.xml'
parsed = objectify.parse(open(path))
root = parsed.getroot()
In [ ]:
data = []
skip_fields = ['PARENT_SEQ', 'INDICATOR_SEQ',
'DESIRED_CHANGE', 'DECIMAL_PLACES']
for elt in root.INDICATOR:
    el_data = {}
    for child in elt.getchildren():
        if child.tag in skip_fields:
            continue
        el_data[child.tag] = child.pyval
    data.append(el_data)
In [ ]:
perf = DataFrame(data)
perf
In [ ]:
root
In [ ]:
root.get('href')
In [ ]:
root.text
Binary data formats¶
In :
frame = pd.read_csv('ch06/ex1.csv')
frame
Out:
   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo
In :
'to_pickle: save the data to disk in pickle format'
frame.to_pickle('ch06/frame_pickle')
In :
'read_pickle: read pickled data back from disk'
pd.read_pickle('ch06/frame_pickle')
Out:
   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo
Using HDF5 format (skipped)¶
In [ ]:
store = pd.HDFStore('mydata.h5')
store['obj1'] = frame
store['obj1_col'] = frame['a']
store
In [ ]:
store['obj1']
In [ ]:
store.close()
os.remove('mydata.h5')
Interacting with HTML and Web APIs (skipped)¶
In [ ]:
import requests
url = 'https://api.github.com/repos/pydata/pandas/milestones/28/labels'
resp = requests.get(url)
resp
In [ ]:
data = json.loads(resp.text)
data[:5]
In [ ]:
issue_labels = DataFrame(data)
issue_labels
Interacting with databases (skipped)¶
In [ ]:
import sqlite3
query = """
CREATE TABLE test
(a VARCHAR(20), b VARCHAR(20),
c REAL, d INTEGER
);"""
con = sqlite3.connect(':memory:')
con.execute(query)
con.commit()
In [ ]:
data = [('Atlanta', 'Georgia', 1.25, 6),
('Tallahassee', 'Florida', 2.6, 3),
('Sacramento', 'California', 1.7, 5)]
stmt = "INSERT INTO test VALUES(?, ?, ?, ?)"
con.executemany(stmt, data)
con.commit()
In [ ]:
cursor = con.execute('select * from test')
rows = cursor.fetchall()
rows
In [ ]:
cursor.description
In [ ]:
DataFrame(rows, columns=[x[0] for x in cursor.description])
In [ ]:
import pandas.io.sql as sql
sql.read_sql('select * from test', con)