PHP cURL crawler: how to get all the links from a web page
This post builds on the previous two; the example below calls the two functions defined there to build a simple URL collector. PHP scraping is usually done with file_get_contents, file, or cURL, and cURL is generally faster and more fully featured than the other two, which makes it the better fit for crawling. Here we use cURL to fetch every link on a page. Example:
<?php
/*
 * Use cURL to collect all links on hao123.com.
 */
include_once('function.php');

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.hao123.com/');
// Include the HTTP headers in the output
curl_setopt($ch, CURLOPT_HEADER, 1);
// We do need the page body to extract links, so leave CURLOPT_NOBODY off
// curl_setopt($ch, CURLOPT_NOBODY, 1);
// Return the result instead of printing it
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$html = curl_exec($ch);
$info = curl_getinfo($ch);
if ($html === false) {
    echo 'cURL Error: ' . curl_error($ch);
}
curl_close($ch);

$linkarr = _striplinks($html);
// Host part, used to expand relative links
$host = 'http://www.hao123.com/';
$linkresult = array();
if (is_array($linkarr)) {
    foreach ($linkarr as $k => $v) {
        $linkresult[$k] = _expandlinks($v, $host);
    }
}
printf("<p>All links on this page:</p><pre>%s</pre>\n", var_export($linkresult, true));
?>
function.php contains the two functions from the previous two posts:
<?php
function _striplinks($document) {
    preg_match_all("'<\s*a\s.*?href\s*=\s*([\"\'])?(?(1)(.*?)\\1|([^\s\>]+))'isx", $document, $links);
    // Collect the non-empty matches from the conditional subpattern:
    // $links[2] holds quoted href values, $links[3] unquoted ones
    $match = array();
    foreach ($links[2] as $val) {
        if (!empty($val))
            $match[] = $val;
    }
    foreach ($links[3] as $val) {
        if (!empty($val))
            $match[] = $val;
    }
    // return the links
    return $match;
}
/*===================================================================*
Function: _expandlinks
Purpose: expand each link into a fully qualified URL
Input: $links the links to qualify
$URI the full URI to get the base from
Output: $expandedLinks the expanded links
*===================================================================*/
function _expandlinks($links, $URI)
{
    $URI_PARTS = parse_url($URI);
    $host = $URI_PARTS["host"];
    preg_match("/^[^\?]+/", $URI, $match);
    $match = preg_replace("|/[^\/\.]+\.[^\/\.]+$|", "", $match[0]);
    $match = preg_replace("|/$|", "", $match);
    $match_part = parse_url($match);
    $match_root = $match_part["scheme"] . "://" . $match_part["host"];
    $search = array(
        "|^http://" . preg_quote($host) . "|i",
        "|^(\/)|i",
        "|^(?!http://)(?!mailto:)|i",
        "|/\./|",
        "|/[^\/]+/\.\./|"
    );
    $replace = array(
        "",
        $match_root . "/",
        $match . "/",
        "/",
        "/"
    );
    $expandedLinks = preg_replace($search, $replace, $links);
    return $expandedLinks;
}
?>
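As an aside, on PHP installations with the DOM extension enabled, parsing links with DOMDocument is more robust than the regex above when the markup is messy. A minimal sketch of that alternative (the function name is mine, not from the original posts):
<?php
// A DOMDocument-based alternative to _striplinks().
// Assumes $html already holds the fetched page, e.g. from the cURL call above.
function striplinks_dom($html) {
    $doc = new DOMDocument();
    // Suppress warnings caused by real-world, non-well-formed HTML
    @$doc->loadHTML($html);
    $match = array();
    foreach ($doc->getElementsByTagName('a') as $a) {
        $href = $a->getAttribute('href');
        if ($href !== '') {
            $match[] = $href;
        }
    }
    return $match;
}
?>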
PHP cURL: errors when submitting header information
CURLOPT_HTTPHEADER:
An array of HTTP header fields to set, passed in the following form: array('Content-type: text/plain', 'Content-length: 100')
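Note that each element must be a complete 'Name: value' string. If you pass an associative array of name => value pairs instead, the keys are discarded and the bare values are not valid header lines, which is the usual cause of this error. A minimal sketch of the difference (example.com is a placeholder):
<?php
// Wrong: an associative array; the key ('Content-Type') is discarded,
// and 'text/plain' alone is not a valid header line.
$bad = array('Content-Type' => 'text/plain');

// Right: a plain indexed array of complete "Name: value" strings.
$good = array('Content-Type: text/plain', 'Content-Length: 100');

$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_HTTPHEADER, $good);
?>
Applying the correct format to the code in question: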
<?php
function getwebcontent($url) {
    $ch = curl_init();
    $data = array(
        'ap'     => '2',
        'c1'     => '4',
        'c2'     => '4',
        'g_w'    => '0100',
        'dd'     => '0',
        'h'      => '8',
        'iasign' => 'bedvkt2gyd9vkgrx',
        'pp'     => '200',
    );
    // Each header is a complete "Name: value" string
    $headers[] = 'X-rvt: IA401004bedvkt2gyd9vkgrx82lIsT';
    $headers[] = 'Referer: ;c2=4g_w=0100h=1';
    $headers[] = 'Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3';
    $headers[] = 'Cookie: iasign=bedvkt2gyd9vkgrx;';
    $timeout = 20;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_VERBOSE, 0);
    curl_setopt($ch, CURLOPT_HEADER, 1);
    // Return the response so the function can hand it back to the caller
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
    $contents = curl_exec($ch);
    curl_close($ch);
    return $contents;
}
$c = getwebcontent('');
print($c);
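If your headers naturally live in a name => value map, converting them to the required format is straightforward. A small sketch (the $map contents are just examples):
<?php
// Convert an associative name => value map into the indexed
// "Name: value" list that CURLOPT_HTTPHEADER requires.
$map = array(
    'Accept-Language' => 'zh-CN,zh;q=0.8',
    'Cookie'          => 'iasign=bedvkt2gyd9vkgrx;',
);
$headers = array();
foreach ($map as $name => $value) {
    $headers[] = $name . ': ' . $value;
}
// $ch is assumed to be an existing cURL handle from curl_init()
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
?>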
How do I solve PHP cURL running out of memory?
cURL can write downloaded content straight to a file instead of holding it in memory; set these options:
$fp = fopen('temp.jpg', 'w');
curl_setopt($c, CURLOPT_RETURNTRANSFER, false);
curl_setopt($c, CURLOPT_FILE, $fp);
You can also raise the maximum memory usage by increasing memory_limit:
ini_set('memory_limit', '1024M');
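Putting those options together, a minimal end-to-end sketch of streaming a download to disk (the URL and filename are placeholders):
<?php
// Stream a remote file straight to disk without buffering it in memory.
$ch = curl_init('http://example.com/large-file.zip');
$fp = fopen('large-file.zip', 'w');

curl_setopt($ch, CURLOPT_RETURNTRANSFER, false);
curl_setopt($ch, CURLOPT_FILE, $fp); // write the response body to $fp
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);

if (curl_exec($ch) === false) {
    echo 'cURL Error: ' . curl_error($ch);
}

curl_close($ch);
fclose($fp);
?>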