Table of contents:
- 1. How do I create users on Linux with Python?
- 2. Design a Python function for member registration: the username must be at least 3 characters, the password at least 6, and the two password entries must match
- 3. How do I write a registration program in Python?
- 4. Will batch-registering accounts with Python crash the database?
- 5. How do I scrape Jianshu usernames with Python?
How do I create users on Linux with Python?

Hi,

Taking the creation of users user1 through user10 as an example (the original answer's code and test screenshots are not reproduced here):

1. When user1-user10 do not yet exist on the system:

Run the script. It reports 10 users added successfully and 0 failures, and the corresponding entries appear in /etc/shadow.

2. Now delete user0, user1, and user2:

/etc/shadow no longer contains entries for user0-user2. Run the same script again: the output shows 3 users created successfully (the three we just deleted) and 7 failures, because those users already exist.

Hope this helps; feel free to follow up.
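The script itself is missing from the post along with the screenshots. As a rough sketch of what a script with the described behavior could look like (assuming root privileges and the standard useradd command; everything here is illustrative, not the original author's code):

# A sketch: create user1-user10 with useradd and report success/failure counts.
# Assumes this runs as root on a Linux system where useradd is available.
import subprocess

ok, failed = 0, 0
for i in range(1, 11):
    name = "user%d" % i
    # useradd exits non-zero if the user already exists
    ret = subprocess.call(["useradd", name])
    if ret == 0:
        ok += 1
    else:
        failed += 1
print("added: %d, failed: %d" % (ok, failed))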
Design a Python function for member registration: the username must be at least 3 characters, the password at least 6, and the two password entries must match

def log_in():
    username = input("Enter a username (at least 3 characters): ")
    if len(username) >= 3:
        password = input("Enter a password (at least 6 characters): ")
        if len(password) >= 6:
            pass_1 = input("Enter the password again: ")
            if password == pass_1:
                print("Registration successful")
            else:
                print("The two passwords do not match")
                log_in()
        else:
            print("The password is too short")
            log_in()
    else:
        print("The username is too short")
        log_in()

log_in()
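One caveat on the solution above: every failed attempt calls log_in() again, so a long run of bad input will eventually hit Python's recursion limit. A loop-based sketch of the same checks avoids that:

def log_in():
    while True:
        username = input("Enter a username (at least 3 characters): ")
        if len(username) < 3:
            print("The username is too short")
            continue
        password = input("Enter a password (at least 6 characters): ")
        if len(password) < 6:
            print("The password is too short")
            continue
        if input("Enter the password again: ") == password:
            print("Registration successful")
            return
        print("The two passwords do not match")

log_in()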
How do I write a registration program in Python?

import re

username_pattern = re.compile(r'_.{2,29}')
password_pattern = re.compile(r'(?=.*_)(?=.*\d)(?=.*[A-Za-z]).{6,18}')

Then match the user's input against the corresponding pattern.
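Parts of the two patterns above appear to have been stripped when the post was published (the bare underscores), so the exact character classes cannot be recovered. Purely as an illustration, here is one plausible reading: a 3-30 character username starting with a letter, and a 6-18 character password containing at least one digit and one letter. Both patterns are assumptions, not the original author's:

import re

# Hypothetical reconstructions; the original patterns were garbled.
username_pattern = re.compile(r'[A-Za-z]\w{2,29}')
password_pattern = re.compile(r'(?=.*\d)(?=.*[A-Za-z]).{6,18}')

def validate(username, password):
    # fullmatch ensures the whole string satisfies the pattern,
    # not just a prefix of it
    if not username_pattern.fullmatch(username):
        return "invalid username"
    if not password_pattern.fullmatch(password):
        return "invalid password"
    return "ok"

print(validate("alice", "abc123xyz"))   # ok
print(validate("al", "abc123xyz"))      # invalid username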
Will batch-registering accounts with Python crash the database?

It won't. You generally won't be running that many accounts at once, and a database can take far more load than this, so there is no need to worry. The code for batch registration is as follows:

# -*- coding:utf-8 -*-
# Python 2 code (uses urllib2)
import random, urllib, urllib2
import re, time

x = input("How many accounts to register? ")
# x = raw_input()  # reads the value as a string instead

def h(i, y):
    user = str(random.randrange(10000000, 99999999))
    QQ = str(random.randrange(10001, 999999999999))
    pwd = str(random.randrange(100000, 99999999))
    url = ""  # the target URL was removed from the original post
    data = {"username": user,
            "password": pwd,
            "repassword": pwd,
            "email": QQ + "@qq.com",
            "qq": QQ,
            "sex": "0",
            "action": "newuser",
            "submit": ""}
    data = urllib.urlencode(data)
    req = urllib2.Request(url, data=data)
    print data
    # html = urllib2.urlopen(req).read()
    # print(html)
    html = urllib2.urlopen(req).read().decode('gbk')
    # print(type(html))
    reg = u'您已成功註冊成為本站用戶'  # the site's "registration successful" message
    reg = re.compile(reg)
    r = re.findall(reg, html)
    if r != []:
        print("Registered: account %s, password %s; this is number %s, %s remaining" % (user, pwd, i + 1, y - i - 1))
        f = open("c:\user.txt", "a")
        f.write("%s----%s----%s@qq.com----%s\n" % (user, pwd, QQ, QQ))
        # f.write("qq----123456")
        f.close()

for i in range(x):
    h(i, x)
    # delay between requests
    time.sleep(2)
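The snippet above is Python 2 (urllib2 no longer exists in Python 3). Purely as a sketch of the same flow in Python 3 using the requests library (an assumed dependency; the target URL is still left blank, as in the original):

# -*- coding: utf-8 -*-
# Python 3 sketch using requests; not a drop-in replacement.
import random
import time
import requests

URL = ""  # target URL, left blank as in the original post

def register_one():
    user = str(random.randrange(10000000, 99999999))
    qq = str(random.randrange(10001, 999999999999))
    pwd = str(random.randrange(100000, 99999999))
    data = {"username": user, "password": pwd, "repassword": pwd,
            "email": qq + "@qq.com", "qq": qq, "sex": "0",
            "action": "newuser", "submit": ""}
    resp = requests.post(URL, data=data)
    resp.encoding = "gbk"  # the original site responded in GBK
    if u"您已成功註冊成為本站用戶" in resp.text:  # success message
        print("registered %s / %s" % (user, pwd))

count = int(input("How many accounts to register? "))
for _ in range(count):
    register_one()
    time.sleep(2)  # pause between requests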
How do I scrape Jianshu usernames with Python?

Initial approach

While writing the Scrapy code today I worked out a rough analysis of the page structure; combining that with 羅羅攀's earlier idea, my first route in is through the topic (專題) entry pages:

Hot topics
Topic editors (their follower, article, word-count, and likes-received numbers are generally all impressive)
The data items marked in the post's screenshots (not reproduced here) are the fields I need to scrape.

This approach has one problem, though: some Jianshu users are not editors of any hot topic yet still have a large following, and this route would fail to pick them up.
Refined approach

Hot topics
The users following each topic
Those followers' activity feeds
Recommended authors and their follower info
Advantages:
The data set is large and nearly complete, covering perhaps 99% of users (my own guess, not rigorous).
Drawbacks:
Many users follow more than one topic, the data contains a lot of newly registered users (many fields are empty), and there is a large amount of duplicate data to remove, as the sketch below illustrates.
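Since every Jianshu profile carries a unique slug (the identifier the spider below already collects), deduplication can be a simple set lookup. A minimal sketch, assuming the record layout matches the spider's dicts:

def dedupe_users(users):
    # users: list of dicts like {"nickname": ..., "slug": ...}
    # Keep the first record seen for each unique slug.
    seen = set()
    unique = []
    for u in users:
        if u["slug"] not in seen:
            seen.add(u["slug"])
            unique.append(u)
    return unique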
Code:

jianshu.py is still being debugged; to be updated…
# -*- coding: utf-8 -*-
# Python 2 code: reload(sys)/sys.setdefaultencoding do not exist in Python 3
import sys
import json
import requests
import scrapy
import re
from lxml import etree
from scrapy.http import Request

reload(sys)
sys.path.append('..')
sys.setdefaultencoding('utf-8')

class jianshu(scrapy.Spider):
    name = 'jianshu'
    # topic_category = ['city']
    topic_category = ['recommend', 'hot', 'city']
    # the URL prefix was stripped when the post was published
    base_url = 'lections?page=%sorder_by=%s'
    cookies = {
        'UM_distinctid': '15b89d53a930-02ab95f11ccae2-51462d15-1aeaa0-15b89d53a9489b',
        'CNZZDATA1258679142': '1544557204-1492664886-%7C1493280769',
        '_session_id': 'Q3RteU9BeTA3UVh1bHp1d24ydmZJaGdkRDZJblE3SWg3dTlNR2J1WmJ5NS9HNlpOZVg4ZUk0TnNObE5wYXc3SjhYcU5WR0NKZ3RhcE9veFVDU2RNWkpqNE44MWxuVmtoR1ZDVXBFQ29Kc1kzZmd4SVNZakJyWVN4c1RFQXZNTFhmUUtxemVDVWlVU1l3VW92NFpTeEE2Q0ppUVN0QVFEMUpLZjFHdHViR21zZko2b1lFTW9DR08yNDh5Z0pvd0VJRzc4aFBqRnZYbGt6QXlmSzMxdU1QTVFwUVcxdUViaElqZzh2Y1RwcENtSWxWbW5PMUVGZ2UrZ2xVcm1NTlpMK2x2UTdOWlZjUVNPK1dCTERpMnd6U3ZxbXlROENML2VseTRHUTBqbFE1ZUlqN1FqazJJK0tsV1htdEt1bnl5MkhCbHNJTmh1ejFLTW9pYVcrVmx0bit1blNXV1VCQ3JNbHAvK1Z5T1ZvUk5IMVMzR1dUNHBlWFZBamcwYjQxSzBjZVRvMGRZSDRmV0xtTGZHekF1M3V6dGcwMHhpQ24zdmVKelV5eDRFSWZ4QT0tLW1uSXNLakp6SW54SUo0QU16a2dFSkE9PQ%3D%3D--0849c37208f8c573960d857029c7d6a15145c419',
        'remember_user_token': 'W1szNDgxMjU3XSwiJDJhJDEwJDlSS3VLcFFWMlZzNFJuOFFNS1JQR3UiLCIxNDk0MjEzNDQ3LjYwODEwNzgiXQ%3D%3D--9241542a4e44d55acaf8736a1d57dd0e96ad4e7a',
        '_ga': 'GA1.2.2016948485.1492666105',
        '_gid': 'GA1.2.382495.1494550475',
        'Hm_lpvt_0c0e9d9b1e7d617b3e6842e85b9fb068': '1494550475',
        'Hm_lvt_0c0e9d9b1e7d617b3e6842e85b9fb068': '1494213432,1494213612,1494321303,1494387194'
    }
    headers = {
        'Accept-Encoding': 'gzip, deflate, sdch',
        'Accept-Language': 'zh-CN,zh;q=0.8',
        'Connection': 'close',
        'Cookie': 'UM_distinctid=15b89d53a930-02ab95f11ccae2-51462d15-1aeaa0-15b89d53a9489b; CNZZDATA1258679142=1544557204-1492664886-%7C1493280769; remember_user_token=W1szNDgxMjU3XSwiJDJhJDEwJDlSS3VLcFFWMlZzNFJuOFFNS1JQR3UiLCIxNDk0MjEzNDQ3LjYwODEwNzgiXQ%3D%3D--9241542a4e44d55acaf8736a1d57dd0e96ad4e7a; _ga=GA1.2.2016948485.1492666105; _gid=GA1.2.824702661.1494486429; _gat=1; Hm_lvt_0c0e9d9b1e7d617b3e6842e85b9fb068=1494213432,1494213612,1494321303,1494387194; Hm_lpvt_0c0e9d9b1e7d617b3e6842e85b9fb068=1494486429; _session_id=czl6dzVOeXdYaEplRVdndGxWWHQzdVBGTll6TVg5ZXFDTTI5cmN2RUsvS2Y2d3l6YlkrazZkZWdVcmZDSjFuM2tpMHpFVHRTcnRUVnAyeXhRSnU5UEdhaGMrNGgyMTRkeEJYOE9ydmZ4N1prN1NyekFibkQ5K0VrT3paUWE1bnlOdzJrRHRrM0Z2N3d3d3hCcFRhTWdWU0lLVGpWWjNRdjArZkx1V2J0bGJHRjZ1RVBvV25TYnBQZmhiYzNzOXE3VWNBc25YSS93WUdsTEJFSHVIck4wbVI5aWJrUXFaMkJYdW41WktJUDl6OVNqZ2k0NWpGL2dhSWx0S2FpNzhHcFZvNGdQY012QlducWgxNVhoUEN0dUpCeUI4bEd3OXhiMEE2WEplRmtaYlR6VTdlZXFsaFFZMU56M2xXcWwwbmlZeWhVb0dXKzhxdEtJaFZKaUxoZVpUZEZPSnBGWmF3anFJaFZpTU9Icm4wcllqUFhWSzFpYWF4bTZmSEZ1QXdwRWs3SHNEYmNZelA4VG5zK0wvR0MwZDdodlhZakZ6OWRVbUFmaE5JMTIwOD0tLXVyVEVSeVdOLy9Cak9nVG0zV0hueVE9PQ%3D%3D--ea401e8c501e7b749d593e1627dbaa88ab4befc2',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36',
        'Host': '',  # value stripped in the original
        'X-Requested-With': 'XMLHttpRequest'
    }

    def get_total_page(self):
        # Get the total page count per topic category; returns a list of
        # 3 dicts, e.g. [{"hot": xx}, {"recommend": xx}, {"city": xx}]
        total_page_list = []
        for order in self.topic_category:
            order = order.decode('utf-8')
            total_page = 100
            dict = {}
            for page in range(1, total_page):
                url = self.base_url % (page, order)
                html = requests.get(url, headers=self.headers).content
                selector = etree.HTML(html)
                # print html
                try:
                    elements = selector.xpath('//*[@id="list-container"]/div[1]/div/h4/a/text()')[0]
                    if elements is not Exception:
                        continue
                except Exception:
                    # the xpath lookup failed on an empty page, so the
                    # previous page was the last one
                    dict['total_page'] = page - 1
                    dict['category'] = order
                    break
            total_page_list.append(dict)
        return total_page_list

    def get_topic_info(self):
        # Get topic info
        topic_info_list = []
        total_page_list = self.get_total_page()
        base_url = self.base_url
        for dict in total_page_list:
            category = dict['category']
            total_page = int(dict['total_page'])
            for page in range(1, total_page + 1):
                url = base_url % (page, category)
                html = requests.get(url, headers=self.headers, cookies=self.cookies).content
                selector = etree.HTML(html)
                topic_href = selector.xpath('//*[@id="list-container"]')[0]
                for href in topic_href:
                    dict = {}
                    topic_name = href.xpath('./div/h4/a/text()')[0]
                    # the site prefix was stripped in the original
                    topic_url = "" + href.xpath('./div/h4/a/@href')[0]
                    topic_img_url = href.xpath('./div/a/img/@src')[0]
                    img_num = topic_img_url.split("/")[5]
                    dict['topic_name'] = topic_name
                    dict['topic_url'] = topic_url
                    #
                    dict['img_num'] = img_num
                    topic_info_list.append(dict)
        return topic_info_list

    def get_topic_admin_info(self):
        # Get topic editor (admin) info
        topic_admin_info_list = []
        topic_info_list = self.get_topic_info()
        for d in topic_info_list:
            img_num = str(d['img_num'])
            # the URL prefix (with its %s placeholder) was stripped in the original
            base_url = "s/editors_and_subscribers" % img_num
            base_url_response = requests.get(base_url, headers=self.headers, cookies=self.cookies)
            json_data_base = json.loads(base_url_response.text.decode('utf-8'))
            editors_total_pages = json_data_base['editors_total_pages']
            for page in range(1, int(editors_total_pages) + 1):
                if page == 1:
                    editors = json_data_base['editors']
                    for editor in editors:
                        dict = {}
                        dict['nickname'] = editor['nickname']
                        dict['slug'] = editor['slug']
                        topic_admin_info_list.append(dict)
                else:
                    try:
                        # URL truncated in the original
                        url = "}/editors?page={}".format(img_num, page)
                        response = requests.get(url, headers=self.headers, cookies=self.cookies)
                        json_data = json.loads(response.text.decode('utf-8'))
                        editors = json_data['editors']
                        for editor in editors:
                            dict = {}
                            dict['nickname'] = editor['nickname']
                            dict['slug'] = editor['slug']
                            topic_admin_info_list.append(dict)
                    except Exception:
                        pass
        return topic_admin_info_list

    def get_followers_following_list(self):
        # Get the editors' follower lists
        followers_list = []
        topic_admin_list = self.get_topic_admin_info()
        followers_base_url = "s/%s/followers"  # URL prefix stripped in the original
        for dict in topic_admin_list:
            url = followers_base_url % dict['slug']
            headers = self.headers
            headers['Referer'] = url
            headers['DNT'] = '1'
            response = requests.get(url, headers=headers, cookies=self.cookies).content
            total_followers = re.fi  # the post is cut off mid-line here
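The post breaks off in the middle of that last method. Judging by the fragment, the line probably continued with re.findall pulling the follower count out of the page. A hypothetical completion, purely for illustration (the regex and the page structure it assumes are guesses, not the author's code):

            # Hypothetical continuation: pull a number labelled as the
            # follower count out of the raw HTML.
            matches = re.findall(r'followers[^0-9]*(\d+)', response)
            if matches:
                followers_list.append({'slug': dict['slug'],
                                       'total_followers': int(matches[0])})
        return followers_list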