Python is a popular programming language whose approachable syntax has made it a favourite for natural language processing (NLP). This article walks through ways to compute text similarity in Python, covering text similarity calculation, document similarity, text matching, similarity analysis, and progress bars for text processing.
1. Python Text Similarity Calculation
Text similarity calculation is a common NLP task. In Python it can be handled with libraries such as NLTK (Natural Language Toolkit) together with scikit-learn. The following example computes the similarity of two texts:
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem import WordNetLemmatizer
from sklearn.metrics.pairwise import cosine_similarity
# On first use, NLTK may need its data packages:
# nltk.download('punkt'); nltk.download('stopwords'); nltk.download('wordnet')
# Extract word features from a text
def extract_features(text):
    # Tokenize, keep alphabetic tokens, and remove stop words
    stop_words = set(stopwords.words('english'))
    words = [word for word in word_tokenize(text.lower()) if word.isalpha() and word not in stop_words]
    # Stemming
    stemmer = PorterStemmer()
    words = [stemmer.stem(word) for word in words]
    # Lemmatization (applied after stemming here; in practice you would usually pick one or the other)
    lemmatizer = WordNetLemmatizer()
    words = [lemmatizer.lemmatize(word) for word in words]
    # Return the list of processed words
    return words
# Compute the similarity of two texts
def calculate_similarity(text1, text2):
    # Extract word features
    text1_words = extract_features(text1)
    text2_words = extract_features(text2)
    # Turn the word lists into count vectors over the shared vocabulary
    all_words = list(set(text1_words + text2_words))
    text1_vector = [text1_words.count(word) for word in all_words]
    text2_vector = [text2_words.count(word) for word in all_words]
    # Compute the cosine similarity of the two vectors
    cosine_sim = cosine_similarity([text1_vector], [text2_vector])[0][0]
    return cosine_sim
# Test
text1 = "Python is a popular programming language."
text2 = "Python is commonly used in natural language processing."
print("Similarity score:", calculate_similarity(text1, text2))
2. Python Document Similarity
Document similarity means computing how similar two or more documents are to each other. In Python, libraries such as gensim can be used for this. The following example computes pairwise document similarities:
import gensim
from gensim import corpora, models, similarities
# Preprocess the documents and convert them into TF-IDF vectors
def prepare_corpus(docs):
    stoplist = set('for a of the and to in'.split())
    texts = [[word for word in doc.lower().split() if word not in stoplist] for doc in docs]
    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(text) for text in texts]
    # TF-IDF weighting
    tfidf = models.TfidfModel(corpus)
    # Vectorize the documents
    corpus_tfidf = tfidf[corpus]
    return dictionary, corpus_tfidf
# Compute document similarities
def calculate_similarity(docs):
    dictionary, corpus_tfidf = prepare_corpus(docs)
    # Project the TF-IDF corpus into an LSI (latent semantic indexing) space
    lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
    # Build a similarity index over the LSI vectors
    index = similarities.MatrixSimilarity(lsi[corpus_tfidf])
    # Return the pairwise similarity matrix
    return index[lsi[corpus_tfidf]]
# Test
docs = ["Python is a popular programming language used in natural language processing",
        "Natural language processing is a subfield of computer science and artificial intelligence",
        "Python is useful for web development and scientific computing"]
sims = calculate_similarity(docs)
print(list(enumerate(sims)))
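To match a new query document against the same corpus, gensim converts the query into the LSI space and looks it up in the similarity index. A minimal sketch, assuming the docs list and prepare_corpus above, and rebuilding the models so the query step has access to them; the query string is made up for illustration:
# Rebuild the models so they can be queried directly
dictionary, corpus_tfidf = prepare_corpus(docs)
lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
index = similarities.MatrixSimilarity(lsi[corpus_tfidf])

# Convert the query into a bag-of-words vector, then into the LSI space
# (for a fully consistent pipeline the query would also go through the TfidfModel built in prepare_corpus)
query = "python for natural language processing"
query_lsi = lsi[dictionary.doc2bow(query.lower().split())]

# Similarity of the query against every document in the corpus, best match first
sims = sorted(enumerate(index[query_lsi]), key=lambda item: -item[1])
print(sims)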
3. Python Text Similarity Matching
Text matching means that, given a collection of texts and a query text, we find the text most similar to the query. In Python, libraries such as fuzzywuzzy can be used for this. The following example computes a matching score between two strings:
from fuzzywuzzy import fuzz
# Compute the similarity of two strings
def calculate_similarity(str1, str2):
    # fuzzywuzzy's fuzz module provides Levenshtein-based scorers that return values from 0 to 100;
    # token_set_ratio is robust to word order and extra words
    return fuzz.token_set_ratio(str1, str2)
# Test
str1 = "Python is a popular programming language."
str2 = "Python is used for web development and scientific computing."
print("Similarity score:", calculate_similarity(str1, str2))
4. Python Similarity Analysis
Similarity analysis means using text similarity to analyse a collection of texts and extract meaningful information from it. In Python, libraries such as pandas can be used for this. The following example finds pairs of articles whose similarity reaches a given threshold:
import pandas as pd
import numpy as np
# Analyze pairwise similarity between articles
def analyze_similarity(df):
    # Build the symmetric similarity matrix; calculate_similarity here is the
    # cosine-similarity function from section 1, which returns values in [0, 1]
    similarity_matrix = np.zeros([len(df), len(df)])
    for i in range(len(df)):
        for j in range(i, len(df)):
            similarity_matrix[i][j] = calculate_similarity(df.iloc[i]['text'], df.iloc[j]['text'])
            similarity_matrix[j][i] = similarity_matrix[i][j]
    # Convert to a DataFrame indexed by article id
    df_similarity = pd.DataFrame(similarity_matrix, index=df['id'], columns=df['id'])
    # Keep article pairs whose similarity is at least 0.7
    mask = df_similarity >= 0.7
    stacked = df_similarity[mask].stack()
    df_pairs = pd.DataFrame({'id1': stacked.index.get_level_values(0),
                             'id2': stacked.index.get_level_values(1)})
    # Drop self-comparisons and keep each pair only once
    df_pairs = df_pairs[df_pairs['id1'] < df_pairs['id2']].drop_duplicates()
    # Return the qualifying article pairs
    return df_pairs
# Test
df = pd.DataFrame({'id': [1, 2, 3],
                   'text': ["Python is a popular programming language.",
                            "Python is used for web development and scientific computing.",
                            "Natural language processing is a subfield of computer science and artificial intelligence."]})
df_pairs = analyze_similarity(df)
print(df_pairs)
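Beyond thresholding, the similarity matrix supports other quick analyses. A minimal sketch, assuming df_similarity is the intermediate DataFrame built inside analyze_similarity (it would have to be returned or exposed for this to run) and that article ids are unique:
# For each article, find the most similar other article and its score
most_similar = {}
for doc_id in df_similarity.index:
    row = df_similarity.loc[doc_id].drop(doc_id)  # ignore the article itself
    most_similar[doc_id] = (row.idxmax(), row.max())
print(most_similar)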
5. Python Text Progress Bars
In Python, libraries such as tqdm can add a progress bar to long-running tasks. The following example adds a progress bar to the text vectorization (feature extraction) step:
from tqdm import tqdm
# Vectorize the texts and show a progress bar
def vectorize_text(docs):
    vectors = []
    for doc in tqdm(docs):
        # extract_features is the feature-extraction function from section 1
        vector = extract_features(doc)
        vectors.append(vector)
    return vectors
# Test
docs = ["Python is a popular programming language.",
        "Natural language processing is a subfield of computer science and artificial intelligence.",
        "Python is used for web development and scientific computing."]
vectors = vectorize_text(docs)
print(vectors)
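tqdm also integrates with pandas, which fits the DataFrame-based workflow from section 4. A minimal sketch, assuming extract_features from section 1 is in scope; progress_apply is tqdm's drop-in replacement for apply once the integration is registered:
from tqdm import tqdm
import pandas as pd

# Register the pandas integration so progress_apply shows a progress bar
tqdm.pandas(desc="Extracting features")

df = pd.DataFrame({'text': ["Python is a popular programming language.",
                            "Python is used for web development and scientific computing."]})
# Apply extract_features to every row, with a progress bar
df['features'] = df['text'].progress_apply(extract_features)
print(df)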
Original article by 小藍. If you republish it, please credit the source: https://www.506064.com/zh-hk/n/181659.html