
pandas notes, ch06: Data Loading, Storage, and File Formats

Posted on 2017-7-9 19:07:34
  ch06














Data Loading, Storage, and File Formats¶






In [1]:






from __future__ import division
from numpy.random import randn
import numpy as np
import os
import sys
import matplotlib.pyplot as plt
np.random.seed(12345)
plt.rc('figure', figsize=(10, 6))
from pandas import Series, DataFrame
import pandas as pd
np.set_printoptions(precision=4)







In [2]:






%pwd









Out[2]:


'D:\\zwPython\\py35\\notebooks\\Python for Data Analysis'









Reading and Writing Data in Text Format¶










Table 6-1: Parsing functions in pandas¶

read_csv        Load delimited data from a file, URL, or file-like object. Default delimiter is the comma
read_table      Load delimited data from a file, URL, or file-like object. Default delimiter is the tab ('\t')
read_fwf        Read data in fixed-width column format (that is, with no delimiters)
read_clipboard  Read data from the clipboard; a clipboard version of read_table. Useful when converting web pages to tables






In [19]:






!type ch06\ex1.csv












a,b,c,d,message
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo







In [20]:






df = pd.read_csv('ch06/ex1.csv')
df









Out[20]:




   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo







In [21]:






# read_table defaults to sep='\t', so reading a CSV with it requires sep=','
pd.read_table('ch06/ex1.csv', sep=',')









Out[21]:




   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo







In [23]:






# a file without a header row
!type ch06\ex2.csv












1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo







In [24]:






# headerless file: let pandas assign default column names
pd.read_csv('ch06/ex2.csv', header=None)









Out[24]:




   0   1   2   3      4
0  1   2   3   4  hello
1  5   6   7   8  world
2  9  10  11  12    foo







In [25]:






# headerless file: supply your own column names
pd.read_csv('ch06/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])









Out[25]:




   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo







In [26]:






# designate a column as the row index
names = ['a', 'b', 'c', 'd', 'message']
pd.read_csv('ch06/ex2.csv', names=names, index_col='message')









Out[26]:




         a   b   c   d
message
hello    1   2   3   4
world    5   6   7   8
foo      9  10  11  12







In [27]:






!type ch06\csv_mindex.csv












key1,key2,value1,value2
one,a,1,2
one,b,3,4
one,c,5,6
one,d,7,8
two,a,9,10
two,b,11,12
two,c,13,14
two,d,15,16







In [28]:






# pass multiple columns to form a hierarchical index
parsed = pd.read_csv('ch06/csv_mindex.csv', index_col=['key1', 'key2'])
parsed









Out[28]:




           value1  value2
key1 key2
one  a          1       2
     b          3       4
     c          5       6
     d          7       8
two  a          9      10
     b         11      12
     c         13      14
     d         15      16







In [29]:






list(open('ch06/ex3.txt'))









Out[29]:


['            A         B         C\n',
'aaa -0.264438 -1.026059 -0.619500\n',
'bbb  0.927272  0.302904 -0.032399\n',
'ccc -0.264273 -0.386314 -0.217601\n',
'ddd -0.871858 -0.348382  1.100491\n']






In [32]:






# the fields are separated by variable whitespace, so pass the regular
# expression \s+ as the separator
result = pd.read_table('ch06/ex3.txt', sep='\s+')
result
# there are 3 header names (A, B, C) but 4 data columns, so pandas infers
# that the first column is the index









Out[32]:




            A         B         C
aaa -0.264438 -1.026059 -0.619500
bbb  0.927272  0.302904 -0.032399
ccc -0.264273 -0.386314 -0.217601
ddd -0.871858 -0.348382  1.100491







In [31]:






# skiprows: skip the first, third, and fourth lines of the file
!type ch06\ex4.csv
pd.read_csv('ch06/ex4.csv', skiprows=[0, 2, 3])












# hey!
a,b,c,d,message
# just wanted to make things more difficult for you
# who reads CSV files with computers, anyway?
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo




Out[31]:




   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo







In [33]:






!type ch06\ex5.csv
result = pd.read_csv('ch06/ex5.csv')
result












something,a,b,c,d,message
one,1,2,3,4,NA
two,5,6,,8,world
three,9,10,11,12,foo




Out[33]:




  something  a   b     c   d message
0       one  1   2   3.0   4     NaN
1       two  5   6   NaN   8   world
2     three  9  10  11.0  12     foo







In [34]:






# isnull: test which values are NaN
pd.isnull(result)









Out[34]:




   something      a      b      c      d message
0      False  False  False  False  False    True
1      False  False  False   True  False   False
2      False  False  False  False  False   False







In [35]:






# na_values accepts a list of sentinel strings -- example 1
result = pd.read_csv('ch06/ex5.csv', na_values=['NULL'])
result









Out[35]:




  something  a   b     c   d message
0       one  1   2   3.0   4     NaN
1       two  5   6   NaN   8   world
2     three  9  10  11.0  12     foo







In [40]:






# na_values accepts a list -- example 2: treat 'world' and 'foo' as NA
result = pd.read_csv('ch06/ex5.csv', na_values=['world', 'foo'])
result









Out[40]:




  something  a   b     c   d message
0       one  1   2   3.0   4     NaN
1       two  5   6   NaN   8     NaN
2     three  9  10  11.0  12     NaN







In [37]:






# na_values can also take a dict mapping each column to its own NA sentinels
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv('ch06/ex5.csv', na_values=sentinels)









Out[37]:




  something  a   b     c   d message
0       one  1   2   3.0   4     NaN
1       NaN  5   6   NaN   8   world
2     three  9  10  11.0  12     NaN











Table 6-2: read_csv / read_table function arguments¶

path           String indicating filesystem location, URL, or file-like object
sep, delimiter Character sequence or regular expression used to split each row into fields
header         Row number to use as the column names. Defaults to 0 (the first row); set to None if the file has no header row
index_col      Column number(s) or name(s) to use as the row index. Can be a single name/number or a list of them (for a hierarchical index)
names          List of column names for the result; combine with header=None
skiprows       Number of rows at the beginning of the file to ignore, or a list of row numbers (starting from 0) to skip
na_values      Sequence of values to replace with NA
comment        Character(s) used to split comments off the end of lines
parse_dates    Attempt to parse data to datetime; False by default. If True, attempt to parse all columns. Otherwise, specify a list of column numbers or names to parse. If an element of the list is itself a list or tuple, the multiple columns are combined and parsed together as a date (useful when date and time are split across two columns)
keep_date_col  If joining columns to parse a date, keep the joined columns. Defaults to False
converters     Dict mapping column number/name to a function. For example, {'foo': f} applies the function f to all values in the 'foo' column
dayfirst       When parsing potentially ambiguous dates, treat them as international format (e.g., 7/6/2012 -> June 7, 2012). Defaults to False
date_parser    Function to use to parse dates
nrows          Number of rows to read from the beginning of the file
iterator       Return a TextParser object for reading the file piecemeal
chunksize      Size of file chunks (for iteration)
skip_footer    Number of lines to ignore at the end of the file
verbose        Print various parser output information, such as the number of missing values placed in non-numeric columns
encoding       Text encoding for Unicode, e.g., 'utf-8' for UTF-8 encoded text
squeeze        If the parsed data contains only one column, return a Series
thousands      Separator for thousands, e.g., ',' or '.'
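Several of these arguments compose naturally. As a quick sketch (the inline CSV text is made up for the example, not one of the book's files), converters applies a function per column while nrows caps how many rows are parsed:

```python
import io
import pandas as pd

# Hypothetical inline CSV standing in for a file on disk.
csv_text = "a,b,message\n1,2,hello\n3,4,world\n"
df = pd.read_csv(io.StringIO(csv_text),
                 converters={'message': str.upper},  # applied to every value in 'message'
                 nrows=2)                            # parse at most 2 data rows
print(df['message'].tolist())
```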









Reading Text Files in Pieces¶






In [4]:






result = pd.read_csv('ch06/ex6.csv')
result.head(10)









Out[4]:




        one       two     three      four key
0  0.467976 -0.038649 -0.295344 -1.824726   L
1 -0.358893  1.404453  0.704965 -0.200638   B
2 -0.501840  0.659254 -0.421691 -0.057688   G
3  0.204886  1.074134  1.388361 -0.982404   R
4  0.354628 -0.133116  0.283763 -0.837063   Q
5  1.817480  0.742273  0.419395 -2.251035   Q
6 -0.776764  0.935518 -0.332872 -1.875641   U
7 -0.913135  1.530624 -0.572657  0.477252   K
8  0.358480 -0.497572 -0.367016  0.507702   S
9 -1.740877 -1.160417 -1.637830  2.172201   G







In [42]:






# nrows: read only the specified number of rows
pd.read_csv('ch06/ex6.csv', nrows=5)









Out[42]:




        one       two     three      four key
0  0.467976 -0.038649 -0.295344 -1.824726   L
1 -0.358893  1.404453  0.704965 -0.200638   B
2 -0.501840  0.659254 -0.421691 -0.057688   G
3  0.204886  1.074134  1.388361 -0.982404   R
4  0.354628 -0.133116  0.283763 -0.837063   Q







In [43]:






# chunksize: read the file in pieces; this returns an iterator over the chunks
chunker = pd.read_csv('ch06/ex6.csv', chunksize=1000)
chunker









Out[43]:


<pandas.io.parsers.TextFileReader at 0x55ec240>






In [69]:






chunker = pd.read_csv('ch06/ex6.csv', chunksize=573)
i = 0
# each iteration resumes where the previous chunk left off in the file
for each in chunker:
    i += 1
    print(i, end=' ')












1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
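The chunk count above follows directly from the file size: assuming ex6.csv has 10,000 data rows (consistent with the 18 chunks shown), the number of chunks is the ceiling of rows divided by chunksize:

```python
import math

# 10,000 rows split into chunks of 573 rows each
print(math.ceil(10000 / 573))  # -> 18
```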






In [58]:






chunker = pd.read_csv('ch06/ex6.csv', chunksize=1000)
tot = Series([])
# accumulate the value counts of the 'key' column across all chunks
for piece in chunker:
    tot = tot.add(piece['key'].value_counts(), fill_value=0)
tot = tot.sort_values(ascending=False)







In [70]:






piece['key'].value_counts()









Out[70]:


K    42
M    41
U    39
Y    38
P    38
E    35
R    34
Q    34
T    33
B    33
F    33
I    33
H    33
A    32
Z    31
G    31
L    30
N    29
J    29
S    29
C    28
W    28
X    28
3    26
D    25
V    23
6    22
4    21
O    19
0    17
2    15
8    15
9    15
5    15
7    14
1    12
Name: key, dtype: int64






In [60]:






tot[:10]









Out[60]:


E    368
X    364
L    346
O    343
Q    340
M    338
J    337
F    335
K    334
H    330
dtype: float64
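The same chunked-aggregation pattern works on any iterator of DataFrames. A self-contained sketch, using a small made-up stand-in for ex6.csv:

```python
import io
import pandas as pd

# Made-up stand-in for ex6.csv: a single 'key' column, 20 rows (12 A's, 8 B's).
csv_text = "key\n" + "\n".join(list("AABAB" * 4))
tot = pd.Series(dtype=float)
for piece in pd.read_csv(io.StringIO(csv_text), chunksize=7):
    # fold this chunk's counts into the running total
    tot = tot.add(piece['key'].value_counts(), fill_value=0)
tot = tot.sort_values(ascending=False)
print(tot)
```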









Writing Data Out to Text Format¶






In [71]:






data = pd.read_csv('ch06/ex5.csv')
data









Out[71]:




  something  a   b     c   d message
0       one  1   2   3.0   4     NaN
1       two  5   6   NaN   8   world
2     three  9  10  11.0  12     foo







In [72]:






data.to_csv('ch06/out.csv')
!type ch06\out.csv












,something,a,b,c,d,message
0,one,1,2,3.0,4,
1,two,5,6,,8,world
2,three,9,10,11.0,12,foo







In [73]:






# sys.stdout: write the output to the screen instead of a file
# sep= changes the delimiter between fields
data.to_csv(sys.stdout, sep='|')












|something|a|b|c|d|message
0|one|1|2|3.0|4|
1|two|5|6||8|world
2|three|9|10|11.0|12|foo







In [74]:






# na_rep= sets the string used to represent missing values
data.to_csv(sys.stdout, na_rep='NULL')












,something,a,b,c,d,message
0,one,1,2,3.0,4,NULL
1,two,5,6,NULL,8,world
2,three,9,10,11.0,12,foo







In [75]:






# suppress the row and column labels
data.to_csv(sys.stdout, index=False, header=False)












one,1,2,3.0,4,
two,5,6,,8,world
three,9,10,11.0,12,foo







In [76]:






# write only a subset of the columns
data.to_csv(sys.stdout, index=False, columns=['a', 'b', 'c'])












a,b,c
1,2,3.0
5,6,
9,10,11.0
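On the reading side, usecols plays the same role that columns= plays for to_csv: it restricts which columns are parsed. A small sketch on inline data:

```python
import io
import pandas as pd

# Inline copy of ex1.csv's shape; usecols keeps only the named columns.
csv_text = "a,b,c,d,message\n1,2,3,4,hello\n5,6,7,8,world\n"
df = pd.read_csv(io.StringIO(csv_text), usecols=['a', 'c', 'message'])
print(df.columns.tolist())
```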







In [77]:






# Series also has a to_csv method:
dates = pd.date_range('1/1/2000', periods=7)
ts = Series(np.arange(7), index=dates)
ts.to_csv('ch06/tseries.csv')
!type ch06\tseries.csv












2000-01-01,0
2000-01-02,1
2000-01-03,2
2000-01-04,3
2000-01-05,4
2000-01-06,5
2000-01-07,6







In [78]:






# Series.from_csv: read_csv returns a DataFrame; to get a Series directly,
# use Series.from_csv (note: deprecated in later pandas versions)
Series.from_csv('ch06/tseries.csv', parse_dates=True)









Out[78]:


2000-01-01    0
2000-01-02    1
2000-01-03    2
2000-01-04    3
2000-01-05    4
2000-01-06    5
2000-01-07    6
dtype: int64
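Since Series.from_csv was deprecated and later removed from pandas, the same result can be obtained from read_csv. A sketch on inline data standing in for tseries.csv:

```python
import io
import pandas as pd

# Inline stand-in for tseries.csv: no header, date then value on each line.
csv_text = "2000-01-01,0\n2000-01-02,1\n2000-01-03,2\n"
ts = pd.read_csv(io.StringIO(csv_text), header=None,
                 index_col=0, parse_dates=True).iloc[:, 0]  # squeeze to a Series
print(ts)
```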






In [5]:






# squeeze=True makes pd.read_csv return a Series when the parsed data has only
# one column; here, without header=None and index_col=0, the first row becomes
# the header and a two-column DataFrame comes back instead
pd.read_csv('ch06/tseries.csv', squeeze=True)









Out[5]:




   2000-01-01  0
0  2000-01-02  1
1  2000-01-03  2
2  2000-01-04  3
3  2000-01-05  4
4  2000-01-06  5
5  2000-01-07  6










Manually Working with Delimited Formats¶
  the csv module






In [79]:






!type ch06\ex7.csv












"a","b","c"
"1","2","3"
"1","2","3","4"







In [7]:






# pass any open file or file-like object to csv.reader
import csv
f = open('ch06/ex7.csv')
# csv.reader is essentially an iterator over the parsed rows
reader = csv.reader(f)







In [8]:






for line in reader:
    print(line)












['a', 'b', 'c']
['1', '2', '3']
['1', '2', '3', '4']







In [9]:






# wrangle the rows into a usable form
lines = list(csv.reader(open('ch06/ex7.csv')))
header, values = lines[0], lines[1:]
data_dict = {h: v for h, v in zip(header, zip(*values))}
data_dict









Out[9]:


{'a': ('1', '1'), 'b': ('2', '2'), 'c': ('3', '3')}
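Note that zip(*values) truncates at the shortest row, which is why the stray fourth field on the last line of ex7.csv disappears from the result. The same wrangling on an inline copy of the file:

```python
import csv
import io

# Inline copy of ex7.csv: note the last row has an extra fourth field.
text = '"a","b","c"\n"1","2","3"\n"1","2","3","4"\n'
lines = list(csv.reader(io.StringIO(text)))
header, values = lines[0], lines[1:]
# zip(*values) stops at the shortest row, silently dropping the extra "4"
data_dict = {h: v for h, v in zip(header, zip(*values))}
print(data_dict)
```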






In [11]:






# define a custom delimiter-handling class by subclassing csv.Dialect
class my_dialect(csv.Dialect):
    lineterminator = '\n'
    delimiter = ';'
    quotechar = '"'
    quoting = csv.QUOTE_MINIMAL
!type ch06\ex7.csv












"a","b","c"
"1","2","3"
"1","2","3","4"







In [12]:






f = open('ch06/ex7.csv')
reader = csv.reader(f,dialect=my_dialect)
list(reader)









Out[12]:


[['a,"b","c"'], ['1,"2","3"'], ['1,"2","3","4"']]






In [88]:






# keyword arguments to csv.reader work too, without defining a subclass
f = open('ch06/ex7.csv')
reader = csv.reader(f, delimiter='|')
list(reader)









Out[88]:


[['a,"b","c"'], ['1,"2","3"'], ['1,"2","3","4"']]










Table 6-3: CSV dialect options¶

delimiter        One-character string to separate fields; defaults to ','
lineterminator   Line terminator for writing; defaults to '\r\n'. The reader ignores this and recognizes cross-platform line terminators
quotechar        Quote character for fields with special characters (like a delimiter); default is '"'
quoting          Quoting convention. Options include csv.QUOTE_ALL (quote all fields), csv.QUOTE_MINIMAL (only fields with special characters like the delimiter), csv.QUOTE_NONNUMERIC, and csv.QUOTE_NONE (no quoting). See Python's documentation for full details. Defaults to csv.QUOTE_MINIMAL
skipinitialspace Ignore whitespace after each delimiter; defaults to False
doublequote      How to handle the quoting character inside a field; if True, it is doubled. See the online documentation for full detail and behavior
escapechar       String to escape the delimiter if quoting is set to csv.QUOTE_NONE; disabled by default
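These options can also be passed directly as keyword arguments to csv.writer, just as with csv.reader above. A sketch writing to an in-memory buffer:

```python
import csv
import io

# Write with explicit dialect options instead of a Dialect subclass.
buf = io.StringIO()
writer = csv.writer(buf, delimiter=';', quotechar='"',
                    quoting=csv.QUOTE_MINIMAL, lineterminator='\n')
writer.writerow(['one', 'two', 'three'])
writer.writerow(['1', '2;2', '3'])  # the ';' inside a field forces quoting
print(buf.getvalue())
```

With QUOTE_MINIMAL only the field containing the delimiter gets quoted, so the second row comes out as `1;"2;2";3`.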







In [89]:






with open('mydata.csv', 'w') as f:
writer = csv.writer(f, dialect=my_dialect)
writer.writerow(('one', 'two', 'three'))
writer.writerow(('1', '2', '3'))
writer.writerow(('4', '5', '6'))
writer.writerow(('7', '8', '9'))







In [91]:






!type mydata.csv












one;two;three
1;2;3
4;5;6
7;8;9










JSON Data¶
  JSON (JavaScript Object Notation) is very nearly valid Python code: the basic types are objects (dicts), arrays (lists), strings, numbers, booleans, and null. All of the keys in an object must be strings.






In [93]:






obj = """
{"name": "Wes",
"places_lived": ["United States", "Spain", "Germany"],
"pet": null,
"siblings": [{"name": "Scott", "age": 25, "pet": "Zuko"},
              {"name": "Katie", "age": 33, "pet": "Cisco"}]
}
"""







In [94]:






# json.loads: convert a JSON string to Python objects
import json
result = json.loads(obj)
result
result









Out[94]:


{'name': 'Wes',
'pet': None,
'places_lived': ['United States', 'Spain', 'Germany'],
'siblings': [{'age': 25, 'name': 'Scott', 'pet': 'Zuko'},
{'age': 33, 'name': 'Katie', 'pet': 'Cisco'}]}






In [95]:






# json.dumps: convert a Python object back to JSON
asjson = json.dumps(result)







In [96]:






# how you convert a JSON object into a DataFrame or other structure
# for analysis is up to you:
siblings = DataFrame(result['siblings'], columns=['name', 'age'])
siblings









Out[96]:




    name  age
0  Scott   25
1  Katie   33
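pandas also ships a JSON reader of its own; pd.read_json with orient='records' handles a list of objects directly (a sketch — exact read_json behavior varies across pandas versions):

```python
import io
import pandas as pd

# A list of JSON objects, one per row.
json_text = '[{"name": "Scott", "age": 25}, {"name": "Katie", "age": 33}]'
siblings = pd.read_json(io.StringIO(json_text), orient='records')
print(siblings)
```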










XML and HTML, Web scraping (skipped)¶
  NB. The Yahoo! Finance API has changed and this example no longer works






In [ ]:






from lxml.html import parse
from urllib.request import urlopen  # urllib2 in the book's Python 2 code
parsed = parse(urlopen('http://finance.yahoo.com/q/op?s=AAPL+Options'))
doc = parsed.getroot()







In [ ]:






links = doc.findall('.//a')
links[15:20]







In [ ]:






lnk = links[28]
lnk
lnk.get('href')
lnk.text_content()







In [ ]:






urls = [lnk.get('href') for lnk in doc.findall('.//a')]
urls[-10:]







In [ ]:






tables = doc.findall('.//table')
calls = tables[9]
puts = tables[13]







In [ ]:






rows = calls.findall('.//tr')







In [ ]:






def _unpack(row, kind='td'):
    elts = row.findall('.//%s' % kind)
    return [val.text_content() for val in elts]







In [ ]:






_unpack(rows[0], kind='th')
_unpack(rows[1], kind='td')







In [ ]:






from pandas.io.parsers import TextParser

def parse_options_data(table):
    rows = table.findall('.//tr')
    header = _unpack(rows[0], kind='th')
    data = [_unpack(r) for r in rows[1:]]
    return TextParser(data, names=header).get_chunk()







In [ ]:






call_data = parse_options_data(calls)
put_data = parse_options_data(puts)
call_data[:10]










Parsing XML with lxml.objectify (skipped)¶






In [ ]:






%cd ch06/mta_perf/Performance_XML_Data







In [ ]:






!head -21 Performance_MNR.xml







In [ ]:






from lxml import objectify
path = 'Performance_MNR.xml'
parsed = objectify.parse(open(path))
root = parsed.getroot()







In [ ]:






data = []
skip_fields = ['PARENT_SEQ', 'INDICATOR_SEQ',
               'DESIRED_CHANGE', 'DECIMAL_PLACES']
for elt in root.INDICATOR:
    el_data = {}
    for child in elt.getchildren():
        if child.tag in skip_fields:
            continue
        el_data[child.tag] = child.pyval
    data.append(el_data)







In [ ]:






perf = DataFrame(data)
perf







In [ ]:






root







In [ ]:






root.get('href')







In [ ]:






root.text










Binary Data Formats¶






In [97]:






frame = pd.read_csv('ch06/ex1.csv')
frame









Out[97]:




   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo







In [100]:






# to_pickle: save the data to disk in pickle format
frame.to_pickle('ch06/frame_pickle')







In [101]:






# read_pickle: load pickled data back from disk
pd.read_pickle('ch06/frame_pickle')









Out[101]:




   a   b   c   d message
0  1   2   3   4   hello
1  5   6   7   8   world
2  9  10  11  12     foo
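A self-contained round trip through a temporary file (sketch); keep in mind that pickle files are only reliably readable by compatible pandas and Python versions, so pickle suits short-term storage rather than archival:

```python
import os
import tempfile
import pandas as pd

# Round-trip a small frame through pickle on disk.
frame = pd.DataFrame({'a': [1, 5, 9], 'message': ['hello', 'world', 'foo']})
path = os.path.join(tempfile.mkdtemp(), 'frame_pickle')
frame.to_pickle(path)
restored = pd.read_pickle(path)
print(restored.equals(frame))
```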










Using HDF5 format (skipped)¶






In [ ]:






store = pd.HDFStore('mydata.h5')
store['obj1'] = frame
store['obj1_col'] = frame['a']
store







In [ ]:






store['obj1']







In [ ]:






store.close()
os.remove('mydata.h5')










Interacting with HTML and Web APIs (skipped)¶






In [ ]:






import requests
url = 'https://api.github.com/repos/pydata/pandas/milestones/28/labels'
resp = requests.get(url)
resp







In [ ]:






data = resp.json()  # decode the JSON response body into Python objects
data[:5]







In [ ]:






issue_labels = DataFrame(data)
issue_labels










Interacting with databases (skipped)¶






In [ ]:






import sqlite3
query = """
CREATE TABLE test
(a VARCHAR(20), b VARCHAR(20),
c REAL,        d INTEGER
);"""
con = sqlite3.connect(':memory:')
con.execute(query)
con.commit()







In [ ]:






data = [('Atlanta', 'Georgia', 1.25, 6),
        ('Tallahassee', 'Florida', 2.6, 3),
        ('Sacramento', 'California', 1.7, 5)]
stmt = "INSERT INTO test VALUES(?, ?, ?, ?)"
con.executemany(stmt, data)
con.commit()







In [ ]:






cursor = con.execute('select * from test')
rows = cursor.fetchall()
rows







In [ ]:






cursor.description







In [ ]:






# zip(*cursor.description)[0] only works on Python 2; index the tuples instead
DataFrame(rows, columns=[x[0] for x in cursor.description])







In [ ]:






import pandas.io.sql as sql
sql.read_sql('select * from test', con)
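The reverse direction is DataFrame.to_sql, which writes a frame out as a database table that read_sql can then load back. A round-trip sketch against an in-memory SQLite database (table name made up for the example):

```python
import sqlite3
import pandas as pd

con = sqlite3.connect(':memory:')
frame = pd.DataFrame({'city': ['Atlanta', 'Sacramento'], 'pop': [1.25, 1.7]})
frame.to_sql('test2', con, index=False)   # create and fill the table
out = pd.read_sql('select * from test2', con)
print(out)
```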
