MySQL -- Part 2 (Deletion)

MySQL usually deletes data with the delete, truncate, and drop keywords; all three can remove data, but they are meant for different scenarios.

1. In terms of execution speed

drop>truncate>delete
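
As a quick side-by-side reference, here is a minimal sketch (the table name t_user is made up for illustration):

-- delete: DML, removes rows (optionally filtered by a where clause), keeps the table structure
delete from t_user where id > 100;

-- truncate: DDL, empties the whole table and resets the auto-increment counter
truncate table t_user;

-- drop: DDL, removes the table structure together with its data
drop table t_user;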

2. In terms of how they work

1. delete

        delete is a DML (data manipulation) statement; it removes only the data and leaves the table structure in place;

        In InnoDB, delete does not actually remove the data: MySQL just marks the deleted rows with an invisible flag, so after a delete the table file on disk keeps the same size and the storage is not released. Although the disk space is not freed, later inserts can still reuse (i.e. overwrite) that space;

        While delete executes, the deleted rows are first written to the rollback segment, and the change takes effect once the transaction commits;

        delete from table_name removes all rows of the table; MyISAM releases the disk space, InnoDB does not;

        For a conditional delete (delete from table_name where ...), neither MyISAM nor InnoDB releases disk space;

        Running optimize table table_name after the delete reclaims the disk space immediately, for both MyISAM and InnoDB;

        delete removes rows one at a time and records each row operation in the tablespace so it can be rolled back or redone; the large amount of log this generates also takes up disk space.
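
For example, a small sketch of the space-reclamation point above (the table name t_user and the column created_at are made up):

-- The deleted rows are only marked as deleted; the table file keeps its size on disk
delete from t_user where created_at < '2022-01-01';

-- Rebuild the table so the freed space is actually returned to the file system
optimize table t_user;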

2. truncate

        truncate is a DDL (data definition) statement; it does not go through the transaction mechanism, so it cannot be rolled back;

        truncate table table_name releases the disk space immediately, for both MyISAM and InnoDB. truncate table behaves roughly like drop table followed by create;

        truncate quickly empties a table and resets its AUTO_INCREMENT counter;
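
A short sketch of the counter reset (assuming a hypothetical table t_user with an auto-increment id column):

insert into t_user (name) values ('a'), ('b'), ('c');   -- rows get id 1, 2, 3
truncate table t_user;                                   -- empties the table; cannot be rolled back
insert into t_user (name) values ('d');                  -- the new row gets id 1 again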

3. drop

        drop is also a DDL (data definition) statement, just like truncate;

        drop table table_name releases the disk space immediately;

You can think of it like a book: delete tears out the table of contents, truncate tears the pages out and burns them, and drop burns the whole book!!!

MySQL -- Part 1 (MyISAM and InnoDB)

Clustered index and secondary index

Clustered index: the actual row data is stored in the leaf nodes.

Secondary index: the leaf nodes do not store the actual rows; they store the key value of the clustered index.

        Both clustered and secondary indexes are B+ tree structures; the only difference is whether the leaf nodes hold the real data. Why make this distinction? It is easy to understand: every index in MySQL is stored as a B+ tree, and if the leaf nodes of every B+ tree stored a full copy of the data, the space cost would be enormous. Alternatively, no index stores the real data and every index only stores a pointer to it, which is exactly what the MyISAM storage engine does.

        In the InnoDB storage engine, DBAs usually ask for an auto-incrementing integer column as the primary-key index. If the table has no suitable unique index, MySQL maintains one itself: it adds a hidden column to the table and builds the clustered index on it. Every table has exactly one such clustered index, and it stores the real data.

        By contrast, the MyISAM storage engine uses only secondary indexes: no B+ tree stores the real data, every index just points to it. This is also why a table that uses the MyISAM storage engine produces three files: one for the table structure, one for the real data, and one for the indexes. A table that uses the InnoDB storage engine produces only two: one for the table structure, and another that contains both the indexes and the real data.
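
As a rough sketch of what this means in SQL (table and column names are made up): the clustered index is built on the primary key and stores the rows, while the secondary index stores primary-key values in its leaves, so a query through it needs a second lookup back into the clustered index.

create table t_user (
    id   int not null auto_increment,   -- the clustered index is built on this primary key
    name varchar(50),
    age  int,
    primary key (id),
    key idx_name (name)                 -- secondary index: its leaf nodes store id values
) engine = InnoDB;

-- Goes through idx_name to find the matching id, then back to the clustered index to read age
select age from t_user where name = 'tom';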


Differences between MyISAM and InnoDB

        1. InnoDB supports transactions, MyISAM does not. InnoDB wraps every single SQL statement in its own transaction and commits it automatically, which costs performance, so it is best to group several statements between begin and commit so they form one transaction (see the sketch after this list);

        2. InnoDB supports foreign keys, MyISAM does not. Converting an InnoDB table that contains foreign keys to MyISAM will fail;

        3. InnoDB uses a clustered index with a B+ tree as the index structure: the data file is tied to the (primary-key) index, i.e. the table's data file is itself organized as one big B+ tree, so a primary key is required and lookups by primary key are very efficient. A lookup through a secondary index, however, needs two steps: first find the primary-key value, then fetch the row through the primary key. For this reason the primary key should not be too large, because a large primary key makes every other index large as well. MyISAM uses non-clustered indexes, also built as B+ trees, but the index and the data file are separate; the index stores pointers into the data file, and the primary-key index and the secondary indexes are independent of each other.

        In other words: in InnoDB the leaf nodes of the primary-key B+ tree hold the data itself and the leaf nodes of a secondary index hold primary-key values, while in MyISAM the leaf nodes of both the primary-key index and the secondary indexes hold pointers into the data file.

        4. InnoDB does not store the exact row count of a table, so select count(*) from table (with no where condition) requires a full table scan. MyISAM keeps the row count in a variable, so the same statement simply reads that variable and returns very quickly.

        5. An InnoDB table must have a unique index such as a primary key (if the user does not define one, InnoDB picks or generates a hidden Row_id column to serve as the default primary key), while a MyISAM table can do without one.

        6. InnoDB stores a table in frm and ibd files, while MyISAM uses frm, MYD and MYI files

        InnoDB: frm is the table definition file, ibd is the data file

        MyISAM: frm is the table definition file, myd is the data file, myi is the index file
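
To make points 1, 4 and 6 concrete, a small sketch (the table names are made up):

-- Point 6: t_inno is stored as t_inno.frm + t_inno.ibd, t_myisam as .frm + .MYD + .MYI
create table t_inno   (id int primary key, name varchar(50)) engine = InnoDB;
create table t_myisam (id int primary key, name varchar(50)) engine = MyISAM;

-- Point 1: group several statements into one explicit transaction on InnoDB
begin;
insert into t_inno values (1, 'a');
insert into t_inno values (2, 'b');
commit;

-- Point 4: full table scan on InnoDB, a stored row counter on MyISAM
select count(*) from t_inno;
select count(*) from t_myisam;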



Python crawler + IP proxies

"""
Created on Wed Nov 23 00:03:30 2022

@author: fch
"""
import time

'''
Site URL: https://m.7160.top/rentiyishu/
'''
import random
from pyquery import PyQuery as pq
import requests
import os
from joblib import Parallel, delayed
import socket
socket.setdefaulttimeout(10)


# Suppress the certificate warnings raised by requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

requests.DEFAULT_RETRIES = 5  # increase the number of connection retries

IP_list=[]
with open('./IP代理池.txt','r') as f:
    for line in f:
        line = line.strip('\n')  # strip the trailing newline
        IP_list.append(line)

'''
random.choice returns the proxy as a string, so eval converts it back into a dict
'''
# proxie = eval(random.choice(IP_list))

user_agent_list = [
        'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
        'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
        'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0',
        'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; InfoPath.3; rv:11.0) like Gecko',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
        'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
        'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
        'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11',
        'Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Avant Browser)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
        'Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5',
        'Mozilla/5.0 (iPod; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5',
        'Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5',
        'Mozilla/5.0 (Linux; U; Android 2.3.7; en-us; Nexus One Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1',
        'MQQBrowser/26 Mozilla/5.0 (Linux; U; Android 2.3.7; zh-cn; MB200 Build/GRJ22; CyanogenMod-7) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1',
        'Opera/9.80 (Android 2.3.4; Linux; Opera Mobi/build-1107180945; U; en-GB) Presto/2.8.149 Version/11.10',
        'Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13',
        'Mozilla/5.0 (BlackBerry; U; BlackBerry 9800; en) AppleWebKit/534.1+ (KHTML, like Gecko) Version/6.0.0.337 Mobile Safari/534.1+',
        'Mozilla/5.0 (hp-tablet; Linux; hpwOS/3.0.0; U; en-US) AppleWebKit/534.6 (KHTML, like Gecko) wOSBrowser/233.70 Safari/534.6 TouchPad/1.0',
        'Mozilla/5.0 (SymbianOS/9.4; Series60/5.0 NokiaN97-1/20.0.019; Profile/MIDP-2.1 Configuration/CLDC-1.1) AppleWebKit/525 (KHTML, like Gecko) BrowserNG/7.1.18124',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0; HTC; Titan)',
        'UCWEB7.0.2.37/28/999',
        'NOKIA5700/ UCWEB7.0.2.37/28/999',
        'Openwave/ UCWEB7.0.2.37/28/999',
        'Mozilla/4.0 (compatible; MSIE 6.0; ) Opera/UCWEB7.0.2.37/28/999',
        'Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25',
    ]

def Res_url(url):
    requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
    headers = {'User-Agent': random.choice(user_agent_list),
               'Referer': 'https://m.7160.top/rentiyishu/',
               "Connection": "close"}

    try:
        s=requests.session()
        s.keep_alive = False
        r = s.get(url, headers=headers, proxies=eval(random.choice(IP_list)), timeout=(5, 10),verify=False,allow_redirects=False)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        print(url)
        return 'over'

'''
Scrape the link and name of every picture page from the index page and return them as lists
'''
def Parse_page(html):
    page_url_list,fil_name_list=[],[]
    doc = pq(html)
    items = doc('.listUll2 li').items()
    for item in items:
        page_url_list.append(item('div a').attr('href'))
        fil_name_list.append(item('p').text())
    return page_url_list,fil_name_list


def Get_img(page_url, file_name):
    num = 1
    base_url = page_url.split('.')
    while True:
        if num == 1:
            url = page_url
        else:
            url = base_url[0] + '.' + base_url[1] + '.' + base_url[2] + '_' + str(num) + '.' + base_url[3]
        time.sleep(1)
        html = Res_url(url)
        if html != 'over':
            doc = pq(html)
            img_url = doc('.ArticleBox img').attr('src')
            img_basename = doc('.ArticleBox img').attr('alt')
            if img_basename is None:
                # Fall back to the page counter when the <img> tag has no alt text
                img_name = str(num)
            else:
                img_name = img_basename.replace('/', '')
            if img_url is None:
                num += 1
            else:
                Save_img(img_url, file_name, img_name)
                num += 1
        else:
            break   # exit the while loop


def Save_img(img_url, file_name, img_name):
    time.sleep(1)
    requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
    headers = {'User-Agent': random.choice(user_agent_list),
               'Referer': 'https://m.7160.top/rentiyishu/',
               "Connection": "close"}
    houzui = img_url.split('.')[-1]
    addr = './images_3/' + file_name
    if not os.path.isdir(addr):
        os.makedirs(addr)
    # Keep the Response object so the status code can be checked before writing the file
    r = requests.get(img_url, headers=headers, proxies=eval(random.choice(IP_list)), verify=False)
    if r.status_code == 200:
        with open('./images_3/{}/{}.{}'.format(file_name, img_name, houzui), 'wb') as f:
            f.write(r.content)
    else:
        print('Image URL does not exist')


def main():
    page_num = 1
    base_url = 'https://m.7160.top/rentiyishu/index'
    while True:
        if page_num == 1:
            url = base_url + '.html'
        else:
            url = base_url + '_' + str(page_num) + '.html'
        html = Res_url(url)
        if html != 'over':
            print('Saving page', page_num)
            page_url_list, file_name_list=Parse_page(html)
            '''
            Parallel execution: n_jobs=5 means 5 download jobs run at the same time
            '''
            Parallel(n_jobs=5)(delayed(Get_img)(page_url,file_name) for page_url,file_name in zip(page_url_list,file_name_list))
        else:
            print('Finished')
            return 'Finished'
        page_num += 1
main()

IP proxy pool

import requests
from bs4 import BeautifulSoup
import time
'''
Number of pages to crawl on the proxy-list site
'''
num=51


list_ip = []
list_port = []
list_type=[]
list_headers_ip = []


def check_ip(list_ip):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36 Edg/91.0.864.71',
        'Connection': 'close',
        'Referer': 'https://m.7160.top/rentiyishu/'
    }
    # url = 'https://www.baidu.com'  # use Baidu as the target to check whether an IP works
    url = 'https://pic.99ym.cn/d/file/202009/mz1kqud4v2i.jpg'

    can_use = []
    for ip in list_ip:
        try:
            response = requests.get(url=url, headers=headers, proxies=ip, timeout=3, verify=False)  # give each proxy 3 seconds to fetch the test URL
            if response.status_code == 200:
                can_use.append(ip)
        except Exception as e:
            print(e)

    return can_use


for start in range(1, num):

    url = 'https://www.kuaidaili.com/free/inha/{}/'.format(start)  # 15 entries per page; pages 1 to num-1 are crawled
    print("Processing url: ", url)

    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36 Edg/91.0.864.71'}
    response = requests.get(url=url, headers=headers)

    soup = BeautifulSoup(response.text, 'html.parser')

    ip = soup.select('#list > table > tbody > tr > td:nth-child(1)')
    port = soup.select('#list > table > tbody > tr > td:nth-child(2)')
    type = soup.select('#list > table > tbody > tr > td:nth-child(4)')
    for i in ip:
        list_ip.append(i.get_text())
    for i in port:
        list_port.append(i.get_text())
    for i in type:
        list_type.append(i.get_text())
    time.sleep(0.5)  # slow down so no page is fetched incompletely

# Format of a proxy entry:        'http':'http://119.14.253.128:8088'

for i in range(len(list_ip)):
    # requests expects lowercase scheme keys ('http'/'https') in the proxies dict
    if list_type[i]=='HTTP':
        proxies = {
            'http':'http://'+list_ip[i]+':'+list_port[i]
        }
    else:
        proxies = {
            'https': 'https://' + list_ip[i] + ':' + list_port[i]
        }
    list_headers_ip.append(proxies)
can_use = check_ip(list_headers_ip)
print('Usable proxy IPs:', can_use)
print('Number of usable proxy IPs:', len(can_use))

with open('./IP代理池.txt','w') as f:
    for i in can_use:
        f.write(str(i)+'\n')

Image crawler (minor changes)

'''
Site URL: http://www.meinuzi.com/
'''

import random
from pyquery import PyQuery as pq
import requests

# requests expects lowercase scheme keys ('http'/'https') in the proxies dict,
# otherwise the proxies are silently ignored
proxies=[
         {'http':'http://122.140.5.115:9999'},
         {'http':'http://113.101.96.66:8080'},
         {'http':'http://113.124.86.24:9999'},
         {'http':'http://121.13.252.58:41564'},
         {'http':'http://61.216.185.88:60808'},
         {'http':'http://58.20.184.187:9091'},
         {'http':'http://183.236.123.242:8060'},
         {'http':'http://116.9.163.205:58080'},
         {'http':'http://222.74.73.202:42055'},
         {'http':'http://183.247.202.208:30001'},
         {'http':'http://39.108.101.55:1080'},
         {'http':'http://47.105.91.226:8118'},
         ]
proxie=random.choice(proxies)
user_agent_list = ['Chrome/86.0.4240.198',
                   'Chrome/101.0.4951.64',
                   'Chrome/96.0.4664.45',
                   'Chrome/94.0.4606.41'
                   ]
headers = {'User-Agent': random.choice(user_agent_list),
           'Referer':'http://www.meinuzi.com'}

'''
Request the url and return the HTML
'''
def Req_url(url):
    try:
        r=requests.get(url,headers=headers,proxies=proxie)
        r.raise_for_status()
        r.encoding=r.apparent_encoding
        return r.text
    except:
        # print('ERROR!')
        return 'over'

'''
Parse the index page and collect the URLs of the individual picture pages
'''
def Parse_page(html):
    doc=pq(html)
    items=doc('.m-list-main li').items()
    i=1
    for item in items:
        page_url=item('.u-img a').attr('href')
        name=item('.u-img img').attr('alt')
        print("保存第",i,'张图')
        Get_img(page_url,name)
        i+=1

'''
Parse a picture page and find the url of the image on it
'''
def Get_img(page_url,name):
    html=Req_url(page_url)
    doc=pq(html)
    img_baseurl=doc('.g-mn .m-list-content img').attr('src')
    Save_img(img_baseurl,name)

'''
Save the image to a folder using the image url obtained above
'''
def Save_img(img_baseurl,name):
    img_url=headers['Referer']+img_baseurl
    data=requests.get(img_url,headers=headers,proxies=proxie,timeout=15).content
    houzui=img_url.split('.')[-1]
    with open('./images_1/{}.{}'.format(name,houzui),'wb') as f:
        f.write(data)


def main():

    num = 7  # start crawling from page num

    '''
    When the return value is 'over', the page does not exist, i.e. every page has been crawled
    '''
    while True:
        page_url='http://www.meinuzi.com/index_'+str(num)+'.html'

        html = Req_url(page_url)
        if html != 'over':
            '''
            The page exists but has no image data (the tag contains only a newline)
            '''
            doc=pq(html)
            item = doc('.m-list-main ul').text()
            if item != '':
                print('********** Page', num, '**********')
                Parse_page(html)
                num += 1
            else:
                print('Finished')
                return 'Finished'
        else:
            print('Finished')
            return 'Finished'
main()






Image crawler (proxy IPs)

# -*- coding: utf-8 -*-
"""
Created on Sat Nov 19 22:44:37 2022

@author: fch
"""

import random
from pyquery import PyQuery as pq
import requests
import os
import time
import json


# requests expects lowercase scheme keys ('http'/'https') in the proxies dict,
# otherwise the proxies are silently ignored
proxies=[
         {'http':'http://122.140.5.115:9999'},
         {'http':'http://113.101.96.66:8080'},
         {'http':'http://113.124.86.24:9999'},
         {'http':'http://121.13.252.58:41564'},
         {'http':'http://61.216.185.88:60808'},
         {'http':'http://58.20.184.187:9091'},
         {'http':'http://183.236.123.242:8060'},
         {'http':'http://116.9.163.205:58080'},
         {'http':'http://222.74.73.202:42055'},
         {'http':'http://183.247.202.208:30001'},
         {'http':'http://39.108.101.55:1080'},
         {'http':'http://47.105.91.226:8118'},
         ]
proxie=random.choice(proxies)
user_agent_list = ['Chrome/86.0.4240.198',
                   'Chrome/101.0.4951.64',
                   'Chrome/96.0.4664.45',
                   'Chrome/94.0.4606.41'
                   ]
headers = {'User-Agent': random.choice(user_agent_list)}
list1 = [
        "Chrome/68.0.3440.106",
        "Chrome/67.0.3396.99",
        "Chrome/64.0.3282.186",
        "Chrome/62.0.3202.62",
        "Chrome/45.0.2454.101"
    ]


header={'User-Agent': random.choice(list1),
        'Referer':'http://81.68.202.74/',
        'sec-ch-ua': 'Google Chrome',
        'sec-ch-ua-platform':'Windows'
        }
base_url = 'http://81.68.202.74/datu/page/'

# verify=False ignores SSL certificate warnings
def Base_page(Page_num):
    url = base_url + str(Page_num)
    try:
        r = requests.get(url, headers=headers,proxies=proxie)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        print('ERROR!!!')


def Page_url(fir_html):
    doc = pq(fir_html)
    items = doc('.posts-wrapper .entry-media').items()
    num = 1
    for item in items:
        page_url=item('a').attr('href')
        fil_name=item('.lazyload').attr('alt').replace(":","")
        # print(fil_name,page_url)
        print('------ Count:', num, '------')
        Parse_img(page_url, fil_name)
        num+=1


def Parse_img(page_url,fil_name):
    time.sleep(0.25)
    i=1
    r = requests.get(page_url, headers=headers,proxies=proxie)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    sec_html=r.text
    doc=pq(sec_html)
    items=doc('.entry-content p img').items()
    for item in items:
        info = {                      # avoid shadowing the built-in name `dict`
            'url': item.attr('src'),
            'name': item.attr('title')
        }
        print('Saving image', i)
        i += 1
        time.sleep(1.25)
        Save_img(info, fil_name)

def Save_img(dic,fil_name):
    url=dic['url']
    # time.sleep(0.25)
    data = requests.get(url,headers=header,proxies=proxie).content
    name=dic['name']
    houzui=url.split('.')[-1]
    addr='./images_2/'+fil_name
    if not os.path.isdir(addr):
        os.makedirs(addr)
    with open('./images_2/{}/{}.{}'.format(fil_name,name,houzui),'wb') as f:
        f.write(data)

def main():
    # range is half-open on the right, so this covers pages 1 to 21
    for Page_num in range(1, 22):
        print('************* Page', Page_num, '*************')
        fir_html = Base_page(Page_num)
        Page_url(fir_html)
    print('Done!')

main()