I can no longer remember when I signed up for Tencent Weibo; it was probably even before I registered on Sina Weibo. Back then Tencent Weibo would by default auto-sync my Qzone status posts (shuoshuo), so even after I had deleted the shuoshuo themselves, the so-called embarrassing history was still sitting on Tencent Weibo. I had been meaning to clear it out for ages, but every time I saw the count of several hundred posts I immediately lost the motivation. This time I finally wrote a script to batch-delete them all.
These are notes for my own use, so the code may not be very general-purpose; if you hit errors, please adapt it yourself.
- The simplest option is plain JavaScript: copy it straight into the browser console and run it, although it is fairly slow. A short usage note follows the snippet.
var count = 0;

// Click the "delete" button of the first post on the page.
function clickdelbtn() {
    console.log(count + '-1');
    document.getElementsByClassName('delBtn')[0].click();
    setTimeout(clickdelchose, 3000);
}

// Click the "confirm" button inside the delete dialog, then queue the next round.
function clickdelchose() {
    console.log(count + '-2');
    document.getElementsByClassName('delChose')[0].children[2].children[0].click();
    count += 1;
    setTimeout(clickdelbtn, 3000);
}
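After pasting the two functions above into the console, start the loop by calling `clickdelbtn();` once; the two functions then keep invoking each other every 3 seconds, and refreshing the page stops the loop. Note that the `delBtn` and `delChose` class names are simply whatever the Tencent Weibo page used at the time, so they may need adjusting if the markup has changed.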
- I then wrote a Python version. It needs your cookie and is much faster, but after deleting a few pages the API starts returning a captcha, so you have to delete one post by hand and fill in the captcha before the script can keep running. A rough sketch of how that interruption could be handled follows the script.
import requests
import json
import re
import csv
import time

# Column order for the CSV backup; the names mirror the keys returned by the API.
talk_headers = ['bkname', 'contAdd', 'content', 'count', 'counts', 'eventId', 'flag', 'from', 'fromIco', 'fromTxt',
                'fromid', 'gender', 'height', 'height', 'icon', 'id', 'image', 'imageInfo', 'img', 'media', 'miniMedia',
                'name', 'nick', 'node', 'node', 'passCert', 'phoneCert', 'pic', 'qinfo', 'realtime', 'rich', 'sign',
                'signIcon', 'signSubType', 'source', 'status', 'syncQzone', 'tid', 'time', 'timestamp', 'tran', 'tv',
                'tvs', 'type', 'type', 'type', 'videos', 'width']

t_cookie = ''  # paste your own t.qq.com cookie string here

talksCount = 0
lastTimestamp = ''
lastId = ''
startTime = time.time()
def delTalk(talk):
    # Delete a single post through the same endpoint the web UI uses.
    print('- - Delete Id:', talk['id'])
    print('- - - Content:', talk['content'])
    print('- - - Time:', talk['time'])
    url = 'http://api.t.qq.com/old/delete.php'
    payload = {
        'id': talk['id'],
        'apiType': 14,
        'apiHost': 'http://api.t.qq.com'
    }
    headers = {
        'Referer': 'http://api.t.qq.com/proxy.html',
        'Cookie': t_cookie
    }
    dsTime = time.time()
    r = requests.post(url, data=payload, headers=headers)
    # The response is a JS object literal rather than valid JSON, so quote the keys before parsing.
    data = json.loads(
        r.text.replace('result', '"result"').replace('msg', '"msg"').replace('info', '"info"').replace('\'', '"'))
    if data['result'] != 0:
        raise RuntimeError('Delete Failed.')
    print('- - Msg:', data['msg'], '', time.time() - dsTime, 'seconds.')
def handleTalks(talks):
    # Back up one page of posts to CSV, remember the paging cursor, then delete them one by one.
    global talksCount, lastTimestamp, lastId
    if len(talks) == 0:
        raise RuntimeError('No talks.')
    talksCount = talksCount + len(talks)
    print('- Got', len(talks), 'Talks. Total:', talksCount)
    # The id and timestamp of the last post on this page are needed to request the next page.
    lastId = talks[len(talks) - 1]['id']
    lastTimestamp = talks[len(talks) - 1]['timestamp']
    with open('talks.csv', 'a') as f:
        f_csv = csv.DictWriter(f, talk_headers)
        f_csv.writerows(talks)
    for talk in talks:
        delTalk(talk)
def getTalks(page):
    # Fetch one page of the timeline, hand it to handleTalks(), and recurse while more pages remain.
    global lastTimestamp, lastId
    print('Page:', page)
    url = 'http://api.t.qq.com/asyn/index.php'
    if page > 1:
        params = {
            'id': lastId,
            'time': lastTimestamp,
            'page': page,
            'isrecom': 0,
            'apiType': 14,
            'apiHost': 'http://api.t.qq.com'
        }
    else:
        params = {
            'page': page,
            'isrecom': 0,
            'apiType': 14,
            'apiHost': 'http://api.t.qq.com'
        }
    headers = {
        'Referer': 'http://api.t.qq.com/proxy.html',
        'Cookie': t_cookie
    }
    rsTime = time.time()
    r = requests.get(url, params=params, headers=headers)
    # Again the response uses single quotes and unquoted keys; patch it into valid JSON before loading.
    t = re.sub(r"(msg:)\'(.*?)\'", r'\1"\2"', r.text)
    t = re.sub(r"(\'user\':)\'(.*?)\'", r'\1"\2"', t)
    data = json.loads(t.replace('result', '"result"').replace('msg', '"msg"').replace('\'info\'', '"info"')
                      .replace('\'user\'', '"user"').replace('\'hasNext\'', '"hasNext"').replace('\'time\'', '"time"')
                      .replace('\'talk\'', '"talk"').replace('\'noSign\'', '"noSign"')
                      .replace('\'signuserinfo\'', '"signuserinfo"'))
    print('- Connected:', data['msg'], '', time.time() - rsTime, 'seconds.')
    if data['result'] != 0:
        raise RuntimeError('No result.')
    info = data['info']
    print('- User:', info['user'])
    handleTalks(info['talk'])
    if info['hasNext']:
        getTalks(page + 1)
if __name__ == '__main__':
    print('Project Start.')
    # Write the CSV header once, then walk the timeline page by page starting from page 1.
    with open('talks.csv', 'a') as f:
        f_csv = csv.DictWriter(f, talk_headers)
        f_csv.writeheader()
    try:
        getTalks(1)
        print('---------------------------')
        print('Deleted All', talksCount, 'Talks.')
        print('Runtime:', time.time() - startTime, 'seconds.')
    except RuntimeError as e:
        for i in e.args:
            print(i)
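Since the delete endpoint starts demanding a captcha after a few pages, one way to avoid restarting the whole run is to pause instead of aborting when a delete fails. The sketch below is only an idea and not part of the script above: delTalkWithRetry is a made-up wrapper name, and it assumes the failure really is the captcha and that clearing it manually in the browser is enough to continue.

def delTalkWithRetry(talk, max_retries=3):
    # Hypothetical wrapper around delTalk(): instead of letting the RuntimeError
    # abort the whole run, wait for the captcha to be cleared manually, then retry.
    for attempt in range(max_retries):
        try:
            delTalk(talk)
            return
        except RuntimeError:
            print('Delete failed (probably a captcha).')
            input('Delete one post manually in the browser, then press Enter to retry...')
    raise RuntimeError('Still failing after %d retries.' % max_retries)

With something like this in place, the loop at the end of handleTalks() would call delTalkWithRetry(talk) instead of delTalk(talk).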
Either way, everything was deleted pretty quickly; a script like this only ever gets used once anyway ╮(╯▽╰)╭.