

A Python example of scanning proxies and collecting usable proxy IPs

Below is a worked example of scanning a public proxy list with Python and collecting the proxy IPs that actually work. It makes a handy reference; follow along below.

The core parsing step pulls the IP, port, and protocol type out of each row of the proxy-list table:

r = requests.get(url = url,headers = headers)
soup = bs(r.content,"html.parser")
data = soup.find_all(name = 'tr',attrs = {'class':re.compile('|[^odd]')})
for i in data:
 soup = bs(str(i),'html.parser')
 data2 = soup.find_all(name = 'td')
 ip = str(data2[1].string)
 port = str(data2[2].string)
 types = str(data2[5].string).lower()

 proxy = {}
 proxy[types] = '%s:%s'%(ip,port)
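
The td indices above follow the layout of the xicidaili listing table. Roughly, each row looks like the simplified sketch below (reconstructed from the indices the code uses, not copied from the site):

<tr class="odd">
 <td><img src="..."></td>
 <td>123.45.67.89</td> <!-- data2[1]: IP -->
 <td>8080</td>         <!-- data2[2]: port -->
 <td>...</td>
 <td>...</td>
 <td>HTTP</td>         <!-- data2[5]: type -->
</tr>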
Each candidate proxy is then verified by fetching an IP-echo page through it:

url = 'http://1212.ip138.com/ic.asp'
r = requests.get(url = url,proxies = proxy,timeout = 6)
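
requests only routes a request through a proxy whose key matches the URL scheme, which is why the dict is keyed on the type column scraped from the listing. A minimal sketch (the address is a made-up placeholder; newer requests versions prefer a full 'http://host:port' proxy URL):

import requests

proxy = {'http': '123.45.67.89:8080'} # hypothetical proxy, for illustration only
# the 'http' key only applies to http:// URLs; an https:// URL needs an 'https' key
r = requests.get('http://1212.ip138.com/ic.asp', proxies = proxy, timeout = 6)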
The check page responds with a small HTML document that echoes the caller's IP (您的IP是 means "your IP is"):

<html>
<head>
<meta xxxxxxxxxxxxxxxxxx>
<title> 您的IP地址 </title>
</head>
<body style="margin:0px"><center>您的IP是:[xxx.xxx.xxx.xxx] 来自:xxxxxxxx</center></body></html>
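
The script later recovers the bracketed address from this page with a regular expression. A minimal sketch of that extraction, run against a made-up body of the same shape:

# -*- coding: utf-8 -*-
import re

html = u'<center>您的IP是:[123.45.67.89] 来自:xxxxxxxx</center>' # made-up response body
ip = re.findall(r'\[(.*?)\]',html)[0]
print ip # -> 123.45.67.89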
#coding=utf-8

import requests
import re
from bs4 import BeautifulSoup as bs
import Queue
import threading 

class proxyPick(threading.Thread):
 def __init__(self,queue):
  threading.Thread.__init__(self)
  self._queue = queue

 def run(self):
  # keep pulling listing-page URLs until the shared queue is drained;
  # get(block = False) avoids blocking forever if another thread empties
  # the queue between the empty() check and the get()
  while True:
   try:
    url = self._queue.get(block = False)
   except Queue.Empty:
    break
   proxy_spider(url)

def proxy_spider(url):
 headers = {
   # request headers elided in the original post (e.g. a browser User-Agent)
  }

 r = requests.get(url = url,headers = headers)
 soup = bs(r.content,"html.parser")
 data = soup.find_all(name = 'tr',attrs = {'class':re.compile('|[^odd]')}) # the empty branch of '|[^odd]' matches anything, so every row is kept

 for i in data:

  soup = bs(str(i),'html.parser')
  data2 = soup.find_all(name = 'td')
  # xicidaili table columns: td[1] = IP, td[2] = port, td[5] = type (http/https)
  ip = str(data2[1].string)
  port = str(data2[2].string)
  types = str(data2[5].string).lower() 


  proxy = {}
  proxy[types] = '%s:%s'%(ip,port)
  try:
   proxy_check(proxy,ip)
  except Exception,e:
   print e
   pass

def proxy_check(proxy,ip):
 url = 'http://1212.ip138.com/ic.asp'
 r = requests.get(url = url,proxies = proxy,timeout = 6)

 f = open('E:/url/ip_proxy.txt','a+')

 soup = bs(r.text,'html.parser')
 data = soup.find_all(name = 'center')
 for i in data:
  a = re.findall(r'\[(.*?)\]',i.string)
  if a[0] == ip: # the page echoes the proxy's own IP, so the request really went through it
   #print proxy
   f.write('%s'%proxy+'\n')
   print 'write down'
   
 f.close()

#proxy_spider()

def main():
 queue = Queue.Queue()
 for i in range(1,2288):
  queue.put('http://www.xicidaili.com/nn/'+str(i))

 threads = []
 thread_count = 10

 for i in range(thread_count):
  spider = proxyPick(queue)
  threads.append(spider)

 for i in threads:
  i.start()

 for i in threads:
  i.join()

 print "It's down,sir!"

if __name__ == '__main__':
 main() 
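
The listing above is Python 2 (print statements, the Queue module, except Exception,e). Running it under Python 3 only needs a few mechanical renames; a quick sketch, not part of the original:

from queue import Queue # Python 2's "import Queue" was renamed in Python 3
q = Queue()
# print 'write down'    ->  print('write down')
# except Exception, e   ->  except Exception as e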

With that, every usable proxy IP the site offers ends up written to the file ip_proxy.txt.
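
Each line of ip_proxy.txt is simply the repr of the proxy dict, so an entry looks like this (hypothetical address):

{'http': '123.45.67.89:8080'}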

That is the full example of scanning proxies and collecting usable proxy IPs with Python.

