
FBRef homepage
Moving on, our goal is to build a time-series dataset containing the information available on this page across several dates, plus the information contained in the match report links, which hold more detailed statistics about each match. The image below shows an example of a match report.
FBRef match report
Looking at the site and its structure, it is clear that we don't need to deal with JavaScript-rendered content, which would make the scraping task a bit more complex, so from here on we use BeautifulSoup. We should now plan the structure of the scraper around the information we need, since it works linearly to capture the data we want. The code lives in a class called scrapper, and all of its functionality is implemented inside it.
import os
import re
import time
from datetime import timedelta
from urllib.request import urlopen

import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

class scrapper:
    """
    Class used to scrape football data
    :param path: The chromedriver path on your computer. Only used to get today's matches information.
    :def getMatches(): Gets past match information from the chosen leagues in a given period.
                       Uses the BeautifulSoup framework.
    :def getMatchesToday(): Gets predicted lineups and odds for matches to be played today.
                            Uses the Selenium framework.
    """
    def __init__(self, path='D:/chromedriver_win32/chromedriver.exe'):
        self.originLink = 'https://fbref.com'
        self.path = path
        self.baseFolder = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
        self.dataFolder = os.path.join(self.baseFolder, 'data')
        # accumulators filled day by day while scraping
        self.homeTeams = []
        self.awayTeams = []
        self.scoresHome = []
        self.scoresAway = []
        self.dates = []
        self.homeXG = []
        self.awayXG = []
So, let's walk through the steps I followed:
1. On the matches page, reach the desired date
yearNow, monthNow, dayNow = self._getDate(day)
urlDay = self.originLink + "/en/matches/{year}-{month}-{day}".format(year=yearNow, month=monthNow, day=dayNow)
print(urlDay)
html = urlopen(urlDay)
bs = BeautifulSoup(html.read(), 'html.parser')
def _getDate(self, date):
    """
    Helper function used to format the url with the desired date in getMatches()
    :param date: datetime.date object
    :return: The formatted year, month and day of the date object
    """
    year = str(date.year)
    month = str(date.month) if date.month >= 10 else '0' + str(date.month)
    day = str(date.day) if date.day >= 10 else '0' + str(date.day)
    return year, month, day
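As a quick illustration (this call is hypothetical, assuming an instantiated scrapper object), the helper zero-pads single-digit months and days so the URL segment matches FBRef's YYYY-MM-DD format:

from datetime import date

s = scrapper()
print(s._getDate(date(2023, 3, 5)))    # ('2023', '03', '05')
print(s._getDate(date(2023, 11, 23)))  # ('2023', '11', '23')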
This step and all the ones below run once per day over a user-defined iteration range. The getMatches() function takes a start date and an end date, which set the boundaries of the scraper's run.
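The article never shows the getMatches() body in one piece, so here is a minimal sketch of how the day-by-day loop could be wired together; the parameter names (startDate, endDate, leagues) and the empty-dataframe setup are my assumptions, not the article's exact code:

def getMatches(self, startDate, endDate, leagues):
    # assumption: dfMatchStats starts empty and gets one row appended per scraped match
    dfMatchStats = pd.DataFrame()
    day = startDate
    while day <= endDate:
        # steps 1-4 below run here, once per day
        ...
        day = day + timedelta(days=1)
    # final formatting shown at the end of the article
    return dfMatchStats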
2. Get each competition table
championshipTables = bs.find_all('div', {'class': 'table_wrapper'})
errorList = []
for i in range(len(championshipTables)):
    try:
        championshipTables[i].find('a', {'href': re.compile('^/en/comps/')}).get_text()
    except AttributeError:
        errorList.append(i)
# delete from the end so earlier deletions don't shift the remaining indexes
for error in sorted(errorList, reverse=True):
    del championshipTables[error]
desiredTables = [ch for ch in championshipTables if ch.find('a', {'href': re.compile('^/en/comps/')}).get_text() in leagues]
Following the pattern of the first step, the leagues variable can be supplied by the user, so they choose which leagues they want to scrape. You can also see a try-except clause in the code, which handles structural quirks such as dummy tables that occasionally appear on the site.
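For reference, leagues is simply a list of competition names exactly as they appear in the table headers of FBRef's daily matches page; the strings below are my own example and should be checked against the site:

# hypothetical values - must match FBRef's competition header text exactly
leagues = ["Premier League", "La Liga", "Serie A", "Bundesliga", "Ligue 1"]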
3. From each competition table, get the information from the match rows
for table in desiredTables:
    time.sleep(4)
    matchesLinks = []
    homeTeams = table.find_all('td', {'data-stat': 'home_team'})
    for team in homeTeams:
        self.homeTeams.append(team.get_text())
        self.dates.append(day)
    awayTeams = table.find_all('td', {'data-stat': 'away_team'})
    for team in awayTeams:
        self.awayTeams.append(team.get_text())
    scores = table.find_all('td', {'data-stat': 'score'})
    for score in scores:
        scoreHome, scoreAway = self._getScore(score.get_text())
        self.scoresHome.append(scoreHome)
        self.scoresAway.append(scoreAway)
        matchesLinks.append(score.find('a', {'href': re.compile('^/')})['href'])
    if table.find_all('td', {'data-stat': 'home_xg'}):
        homeXG = table.find_all('td', {'data-stat': 'home_xg'})
        awayXG = table.find_all('td', {'data-stat': 'away_xg'})
        for xg in homeXG:
            self.homeXG.append(xg.get_text())
        for xg in awayXG:
            self.awayXG.append(xg.get_text())
    else:
        for team in homeTeams:
            self.homeXG.append(np.nan)
            self.awayXG.append(np.nan)
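One thing the snippet relies on is a _getScore() helper that the article never shows. A plausible sketch, under my own assumptions (including the en dash FBRef places between the goals), could look like this:

def _getScore(self, scoreText):
    """
    Hypothetical helper: splits a score string such as '2–1' into home and away
    goals; returns NaN for empty scores (postponed or not yet played matches).
    """
    if not scoreText or '–' not in scoreText:
        return np.nan, np.nan
    home, away = scoreText.split('–')
    return int(home), int(away)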
Here, besides appending the information we originally wanted to our lists, I want to highlight the sleep call, which throttles how many requests we make in a given time window and keeps our IP from being banned. Also worth noting is how each match report link is stored: it is taken from the score cell. By grabbing the link from the score cell rather than from the "Match Report" column, we avoid keeping links for postponed or cancelled matches in memory. This leads us to the next step:
4. Get each match report and retrieve its information
for link in matchesLinks:
    dfMatchStats.loc[len(dfMatchStats)] = self._getMatchStats(link)

def _getMatchStats(self, url):
    """
    Helper function to extract the match stats for each match in getMatches()
    :param url: The match report url - is extracted in getMatches()
    :return: List with match stats
    """
    stats = {"Fouls": [np.nan, np.nan], "Corners": [np.nan, np.nan], "Crosses": [np.nan, np.nan],
             "Touches": [np.nan, np.nan], "Tackles": [np.nan, np.nan], "Interceptions": [np.nan, np.nan],
             "Aerials Won": [np.nan, np.nan], "Clearances": [np.nan, np.nan], "Offsides": [np.nan, np.nan],
             "Goal Kicks": [np.nan, np.nan], "Throw Ins": [np.nan, np.nan], "Long Balls": [np.nan, np.nan]}
    matchStatsList = []
    htmlMatch = urlopen(self.originLink + url)
    bsMatch = BeautifulSoup(htmlMatch.read(), 'html.parser')
    homeLineup = bsMatch.find('div', {'class': 'lineup', 'id': 'a'})
    if not homeLineup:
        # no lineup section on the report: fill everything with NaN and return early
        homePlayers = []
        awayPlayers = []
        for i in range(0, 11):
            homePlayers.append(np.nan)
            awayPlayers.append(np.nan)
        yellowCardsHome = np.nan
        redCardsHome = np.nan
        yellowCardsAway = np.nan
        redCardsAway = np.nan
        matchStatsList.extend([yellowCardsHome, redCardsHome, yellowCardsAway, redCardsAway])
        for key, value in stats.items():
            matchStatsList.extend(value)
        return homePlayers + awayPlayers + matchStatsList
    homePlayers = homeLineup.find_all('a', {'href': re.compile('^/en/players')})[0:11]
    homePlayers = [player.get_text() for player in homePlayers]
    awayLineup = bsMatch.find('div', {'class': 'lineup', 'id': 'b'})
    awayPlayers = awayLineup.find_all('a', {'href': re.compile('^/en/players')})[0:11]
    awayPlayers = [player.get_text() for player in awayPlayers]
    matchCards = bsMatch.find_all('div', {'class': 'cards'})
    # second yellows (yellow_red_card) count towards both the yellow and the red totals
    yellowCardsHome = len(matchCards[0].find_all('span', {'class': 'yellow_card'})) + len(matchCards[0].find_all('span', {'class': 'yellow_red_card'}))
    redCardsHome = len(matchCards[0].find_all('span', {'class': 'red_card'})) + len(matchCards[0].find_all('span', {'class': 'yellow_red_card'}))
    yellowCardsAway = len(matchCards[1].find_all('span', {'class': 'yellow_card'})) + len(matchCards[1].find_all('span', {'class': 'yellow_red_card'}))
    redCardsAway = len(matchCards[1].find_all('span', {'class': 'red_card'})) + len(matchCards[1].find_all('span', {'class': 'yellow_red_card'}))
    matchStatsList.extend([yellowCardsHome, redCardsHome, yellowCardsAway, redCardsAway])
    extraStatsPanel = bsMatch.find("div", {"id": "team_stats_extra"})
    for statColumn in extraStatsPanel.find_all("div", recursive=False):
        column = statColumn.find_all("div")
        columnValues = [value.get_text() for value in column]
        for index, value in enumerate(columnValues):
            if not value.isdigit() and value in stats:
                stats[value] = [int(columnValues[index - 1]), int(columnValues[index + 1])]
    for key, value in stats.items():
        matchStatsList.extend(value)
    return homePlayers + awayPlayers + matchStatsList
As you can see, this one is a bit tricky, so let's go through a quick explanation. Yellow and red cards are obtained by counting the card objects of the yellow or red class (second yellows count towards both totals). The other statistics come from the extra stats panel (team_stats_extra), where each value sits right next to its label:
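To make that parsing concrete, each column of the panel flattens into triplets of home value, label, away value, so a hypothetical flattened column would be parsed like this (mirroring the loop above):

# hypothetical flattened column from the team_stats_extra panel:
# [home value, label, away value, home value, label, away value, ...]
columnValues = ["12", "Fouls", "9", "5", "Corners", "7"]

stats = {"Fouls": [None, None], "Corners": [None, None]}
for index, value in enumerate(columnValues):
    if not value.isdigit() and value in stats:
        # the home value sits just before the label, the away value just after it
        stats[value] = [int(columnValues[index - 1]), int(columnValues[index + 1])]

print(stats)  # {'Fouls': [12, 9], 'Corners': [5, 7]}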
As an extra step, I realized I needed a checkpoint trigger, because the crawler can hit unexpected errors, or FBRef may stop accepting new requests from your IP, and either scenario would mean a lot of wasted time. So on the first day of every month we save the scraper's work so far, and if something unexpected happens we have a safe checkpoint to fall back to.
That's about it. At the bottom of the code below you can see the date-update iterator and the operations needed to format the final dataframe.
if day.day == 1:
    # if the process crashes, we have a checkpoint at the start of every month
    dfCheckpoint = dfMatchStats.copy()
    dfCheckpoint["homeTeam"] = self.homeTeams
    dfCheckpoint["awayTeam"] = self.awayTeams
    dfCheckpoint["scoreHome"] = self.scoresHome
    dfCheckpoint["scoreAway"] = self.scoresAway
    dfCheckpoint["homeXG"] = self.homeXG
    dfCheckpoint["awayXG"] = self.awayXG
    dfCheckpoint["date"] = self.dates
    dfCheckpoint.to_pickle(os.path.join(self.dataFolder, 'checkPoint.pkl'))
day = day + timedelta(days=1)

dfMatchStats["homeTeam"] = self.homeTeams
dfMatchStats["awayTeam"] = self.awayTeams
dfMatchStats["scoreHome"] = self.scoresHome
dfMatchStats["scoreAway"] = self.scoresAway
dfMatchStats["homeXG"] = self.homeXG
dfMatchStats["awayXG"] = self.awayXG
dfMatchStats["date"] = self.dates
return dfMatchStats
Dataframe preview
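Putting the first scraper together, a usage sketch (the exact getMatches() signature is my assumption based on the snippets above):

from datetime import date

s = scrapper()
df = s.getMatches(startDate=date(2022, 8, 1),
                  endDate=date(2022, 8, 31),
                  leagues=["Premier League", "La Liga"])
df.to_pickle(os.path.join(s.dataFolder, "matches_aug_2022.pkl"))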
This whole process lets us scrape the data needed to build a model that predicts football matches, but we still need to scrape data about upcoming matches so we can do something useful with what we have collected. The best source I found for this is SofaScore, an app that also collects and stores information about matches and players and, on top of that, shows the current Bet365 odds for every match.
SofaScore, however, relies on JavaScript, which means the HTML is not fully available to us through BeautifulSoup, so we need another framework to scrape its information. I chose the widely used Selenium package, which lets us browse the web from Python code just like a human user would. You can actually watch the webdriver clicking and navigating in the browser of your choice; I picked Chrome.
In the image below you can see the SofaScore home page with matches in progress or about to start, and on the right you can see what happens when you click a specific match and then click "LINEUPS".
SofaScore interface
Since Selenium, as explained, works like a human user surfing the web, you might expect the process to be a bit slower, and that is true. We therefore have to be more careful at each step so we don't click buttons that don't exist yet, because the JavaScript content is only rendered after the user performs certain actions. For example, when we click a specific match, the server takes some time to render the side panel shown in the second image, and if the code tries to click the lineups button during that time it throws an error. Now, let's look at the code.
def _getDriver(self, path='D:/chromedriver_win32/chromedriver.exe'):
    chrome_options = Options()
    return webdriver.Chrome(executable_path=path, options=chrome_options)

def getMatchesToday(self):
    self.driver = self._getDriver(path=self.path)
    self.driver.get("https://www.sofascore.com/")
    WebDriverWait(self.driver, 20).until(EC.element_to_be_clickable((By.CLASS_NAME, "slider")))
    oddsButton = self.driver.find_element(By.CLASS_NAME, "slider")
    oddsButton.click()
    homeTeam = []
    awayTeam = []
    odds = []
    homeOdds = []
    drawOdds = []
    awayOdds = []
As I mentioned, after starting the driver and navigating to the SofaScore URL we need to wait until the odds toggle is rendered before clicking it. We also create the lists that will store the scraped information.
2. Store the main match information
WebDriverWait(self.driver, 5).until(EC.visibility_of_element_located((By.CLASS_NAME, 'fvgWCd')))
matches = self.driver.find_elements(By.CLASS_NAME, 'js-list-cell-target')
for match in matches:
    if self._checkExistsByClass('blXay'):
        homeTeam.append(match.find_element(By.CLASS_NAME, 'blXay').text)
        awayTeam.append(match.find_element(By.CLASS_NAME, 'crsngN').text)
        if match.find_element(By.CLASS_NAME, 'haEAMa').text == '-':
            oddsObject = match.find_elements(By.CLASS_NAME, 'fvgWCd')
            for odd in oddsObject:
                odds.append(odd.text)
while(len(odds) > 0):
    homeOdds.append(odds.pop(0))
    drawOdds.append(odds.pop(0))
    awayOdds.append(odds.pop(0))
Nothing special here, but note that the check on the score placeholder ('-') keeps only matches that haven't started yet. I did this because handling matches already in progress makes the odds trickier, it's still unclear how the future betting simulator will work, and it would probably not behave correctly with live results.
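As a concrete illustration of the odds handling above, the flat odds list is consumed in groups of three (home, draw, away) until it is empty; the values here are made up:

# hypothetical odds scraped in order: home, draw, away for each match
odds = ["2.10", "3.40", "3.60", "1.55", "4.20", "6.00"]
homeOdds, drawOdds, awayOdds = [], [], []

while len(odds) > 0:
    homeOdds.append(odds.pop(0))
    drawOdds.append(odds.pop(0))
    awayOdds.append(odds.pop(0))

print(homeOdds)  # ['2.10', '1.55']
print(drawOdds)  # ['3.40', '4.20']
print(awayOdds)  # ['3.60', '6.00']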
3. Get the lineups
df = pd.DataFrame({"homeTeam": homeTeam, "awayTeam": awayTeam, "homeOdds": homeOdds, "drawOdds": drawOdds, "awayOdds": awayOdds})
lineups = self._getLineups()
df = pd.concat([df, lineups], axis=1).iloc[:, :-1]
return df

def _getLineups(self):
    matches = self.driver.find_elements(By.CLASS_NAME, "kusmLq")
    nameInPanel = ""
    df = pd.DataFrame(columns=["{team}Player{i}".format(team="home" if i <= 10 else "away", i=i+1 if i <= 10 else i-10) for i in range(0, 22)])
    df["homeTeam"] = []
    for match in matches:
        self.driver.execute_script("arguments[0].click()", match)
        # wait until the side panel is refreshed with the clicked match
        waiter = WebDriverWait(driver=self.driver, timeout=10, poll_frequency=1)
        waiter.until(lambda drv: drv.find_element(By.CLASS_NAME, "dsMMht").text != nameInPanel)
        nameInPanel = self.driver.find_element(By.CLASS_NAME, "dsMMht").text
        if self._checkExistsByClass("jwanNG") and self.driver.find_element(By.CLASS_NAME, "jwanNG").text == "LINEUPS":
            lineupButton = self.driver.find_element(By.CLASS_NAME, "jwanNG")
            lineupButton.click()
            # wait until the players are available
            WebDriverWait(self.driver, 20).until(EC.visibility_of_element_located((By.CLASS_NAME, "kDQXnl")))
            players = self.driver.find_elements(By.CLASS_NAME, "kDQXnl")
            playerNames = []
            for player in players:
                playerNames.append(player.find_elements(By.CLASS_NAME, "sc-eDWCr")[2].accessible_name)
            playerNames = [self._isCaptain(playerName) for playerName in playerNames]
            playerNames.append(nameInPanel)
            df.loc[len(df)] = playerNames
        else:
            df.loc[len(df), "homeTeam"] = nameInPanel
    return df

def _isCaptain(self, name):
    if name.startswith("(c) "):
        name = name[4:]
    return name
Dataframe preview
To sum up the code block above: we wait until the match's side panel has loaded, click the lineups button and grab the player names. We need to be a bit careful here because each team's captain has his name specially formatted on the site (a "(c) " prefix), so we created a helper function to handle it. We then store each match's player names in a dataframe and, at the end of the whole process, concatenate the match information with the predicted lineups.
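One last note: _checkExistsByClass() is used in both Selenium steps but never shown in the article. A plausible sketch, under my own assumptions, would be:

from selenium.common.exceptions import NoSuchElementException

def _checkExistsByClass(self, className):
    """
    Hypothetical helper: True if at least one element with the given
    class name is currently present on the page, False otherwise.
    """
    try:
        self.driver.find_element(By.CLASS_NAME, className)
        return True
    except NoSuchElementException:
        return False

And, for reference, a short usage sketch of the second scraper (the chromedriver path is whatever matches your local install):

s = scrapper(path='D:/chromedriver_win32/chromedriver.exe')
todayDf = s.getMatchesToday()
print(todayDf[["homeTeam", "awayTeam", "homeOdds", "drawOdds", "awayOdds"]].head())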
That's it for today. In this post we built two scrapers that collect information about past football matches as well as upcoming ones. This is only the start of the project, so you can expect new posts about building a dataset with player information, modelling the predictor and, finally, the betting strategy simulator.