

Oracle 11g RAC Setup: Detailed Steps



Prerequisites

Database: 11.2.0.4

OS: CentOS 6.8

IP allocation:

#public ip

192.168.180.2 rac1

192.168.180.3 rac2

#private ip

10.10.10.2  rac1-priv

10.10.10.3  rac2-priv

#vip

192.168.180.4 rac1-vip

192.168.180.5 rac2-vip

#scan ip

192.168.180.6 rac-scan

RAC storage configuration:

   OCR_VOTING   3 x 4 GB

   DATA         1 x 50 GB

   FRA_ARC      1 x 20 GB

1. Configure udev (all nodes)

For background on RAC shared storage, see: http://blog.csdn.net/shiyu1157758655/article/details/56837550

Use the following script to generate the udev binding rules. Note that c d e f g are the disks to be used as shared storage; adjust them to match your own environment.

for i in c d e f g ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done

Reload the rules by running start_udev.
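To confirm the rules took effect, a minimal check (assuming the disk letters c-g used above) is to reload udev and list the generated devices:

start_udev
ls -l /dev/asm-disk*
# expect block devices owned by grid:asmadmin with mode 0660, e.g. /dev/asm-diskc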

2. Add groups and users (all nodes)

groupadd -g 1000 oinstall

groupadd -g 1200 asmadmin

groupadd -g 1201 asmdba

groupadd -g 1202 asmoper

groupadd -g 1300 dba

groupadd -g 1301 oper

useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid -s /bin/bash grid

useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash oracle

-- Add the grid user to the dba group:

[root@rac1 app]# gpasswd -a grid dba

Adding user grid to group dba

-- Confirm the user information (the numeric IDs in this sample output will vary with your environment):

[root@rac1 ~]# id oracle

uid=502(oracle) gid=507(oinstall) groups=507(oinstall),502(dba),503(oper),506(asmdba)

[root@rac1 ~]# id grid

uid=1100(grid) gid=507(oinstall) groups=507(oinstall),504(asmadmin),506(asmdba),505(asmoper)

-- Set the passwords:

passwd oracle

passwd grid

3. Disable the firewall and SELinux (all nodes)

Stop the firewall:

service iptables status

service iptables stop

chkconfig iptables off

chkconfig iptables --list

Edit the /etc/selinux/config file and set SELINUX to disabled.

[root@rac1 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#     enforcing - SELinux security policy is enforced.

#     permissive - SELinux prints warnings instead of enforcing.

#     disabled - No SELinux policy is loaded.

SELINUX=disabled

# SELINUXTYPE= can take one of these two values:

#     targeted - Targeted processes are protected,

#     mls - Multi Level Security protection.

SELINUXTYPE=targeted
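If you prefer not to edit the file by hand, a quick equivalent (a sketch; setenforce 0 only switches the running system to permissive mode, while the sed edit makes the change permanent at the next reboot) is:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0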

4. Configure time synchronization (all nodes)

Here we use CTSS (Cluster Time Synchronization Service), so stop the NTP service, remove it from the startup sequence, and move the ntp.conf file out of the way. Run the following commands as root on both Oracle RAC nodes:

[root@rac1 ~]# /sbin/service ntpd stop
Shutting down ntpd: [ OK ]
[root@rac1 ~]# chkconfig ntpd off
[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original
[root@rac1 ~]# chkconfig ntpd --list
ntpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off

Also remove the following file:

rm /var/run/ntpd.pid

This file stores the PID of the NTP daemon.

5. Create the directory structure (all nodes)

Inventory directory: /u01/app/oraInventory

ORACLE_BASE directory: /u01/app/oracle

Grid Infrastructure home: /u01/app/grid/product/11.2.0/grid_1

RDBMS home: /u01/app/oracle/product/11.2.0/db_1

Set the owner and permissions of these directories as follows:

Inventory directory: owner grid:oinstall, permissions 775

ORACLE_BASE directory: owner oracle:oinstall, permissions 775

Grid Infrastructure home: owner grid:oinstall, permissions 775

RDBMS home: owner oracle:oinstall, permissions 775

mkdir -p /u01/app/oraInventory

mkdir -p /u01/app/oracle

mkdir -p /u01/app/grid/product/11.2.0/grid_1

mkdir -p /u01/app/oracle/product/11.2.0/db_1

[root@rac1 ~]# chown -R grid:oinstall /u01/app/oraInventory/

[root@rac1 ~]# chown -R oracle:oinstall /u01/app/oracle/

[root@rac1 ~]# chown -R grid:oinstall /u01/app/grid/product/11.2.0/grid_1/

[root@rac1 ~]# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1/
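The 775 permissions listed above still need to be applied; one way to do it on both nodes (using the paths defined in this step) is:

chmod -R 775 /u01/app/oraInventory /u01/app/oracle /u01/app/grid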

6. Configure /etc/hosts name resolution (all nodes)

If you do not plan to use a DNS server, configure the hostname-to-IP mappings locally on each server. The main configuration file involved is:

/etc/hosts

The recommended contents of this file are as follows:

    127.0.0.1   localhost   # be sure to keep this entry

#public ip

192.168.180.2  rac1

192.168.180.3  rac2

#private ip

10.10.10.2     rac1-priv

10.10.10.3     rac2-priv

#vip

192.168.180.4  rac1-vip

192.168.180.5  rac2-vip

#scan ip

192.168.180.6  rac-scan

Note: apart from the entry for the local hostname, the /etc/hosts files on both nodes must be identical. In the entries above, the public IP, virtual IP, and SCAN IP provide client services, while the private IP carries the interconnect heartbeat.
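A quick connectivity check after editing the file (only the public and private names are pinged here, since the VIPs and SCAN do not respond until Grid Infrastructure is running):

for h in rac1 rac2 rac1-priv rac2-priv; do ping -c 1 $h; done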

7. Configure SSH user equivalence between the nodes (all nodes)

Configure passwordless trust between the cluster nodes for both the grid and oracle users. SSH is the recommended protocol. The main file involved is:

$HOME/.ssh/authorized_keys

Note: this file must be created manually.

For details, see: http://blog.csdn.net/shiyu1157758655/article/details/56838603
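As a rough sketch of the usual procedure (not the exact steps from the linked post): run the following as the grid user on rac1, repeat as the oracle user, and then repeat the whole sequence from rac2; finish by confirming that ssh to every node works without a password prompt.

ssh-keygen -t rsa            # accept the defaults, empty passphrase
ssh-copy-id grid@rac1
ssh-copy-id grid@rac2
ssh rac2 date                # should return the date without prompting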

8. Configure environment variables (all nodes)

Configure environment variables for both the grid and oracle users. The values below are for node 1; on node 2, set ORACLE_SID to +ASM2 for the grid user and to rac2 for the oracle user.

Recommended environment variables for the grid user:

umask 022

PS1='$ORACLE_SID'":"'$PWD'"@"`hostname`">"

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=/u01/app/grid/product/11.2.0/grid_1; export ORACLE_HOME

ORACLE_SID=+ASM1; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

TMPDIR=/var/tmp; export TMPDIR

NLS_DATE_FORMAT="YYYY/MM/DD hh24:mi:ss"; export NLS_DATE_FORMAT

ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN DISPLAY

LIBPATH=$ORACLE_HOME/lib; export LIBPATH

PATH=$PATH:$ORACLE_HOME/bin:/usr/sbin; export PATH

Recommended environment variables for the oracle user:

umask 022

PS1='$ORACLE_SID'":"'$PWD'"@"`hostname`">"

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME

ORACLE_SID=rac1; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

TMPDIR=/var/tmp; export TMPDIR

NLS_DATE_FORMAT="YYYY/MM/DD hh24:mi:ss"; export NLS_DATE_FORMAT

ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN DISPLAY

LIBPATH=$ORACLE_HOME/lib; export LIBPATH

PATH=$PATH:$ORACLE_HOME/bin:/usr/sbin; export PATH

9. Modify /etc/security/limits.conf (all nodes)

As root, on each Oracle RAC node, add the following entries to /etc/security/limits.conf, or run the command below:

[root@rac1 ~]# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

10. Modify /etc/pam.d/login (all nodes)

On each Oracle RAC node, add or edit the following line in /etc/pam.d/login:

[root@rac1 ~]# cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF

11. Configure the system default profile (all nodes)

Make sure the following settings are loaded by the system default profile (typically /etc/profile):

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

if [ $SHELL = "/bin/ksh" ]; then

  ulimit -p 16384

  ulimit -n 65536

else

  ulimit -u 16384 -n 65536

fi

 umask 022

fi

12. Modify /etc/sysctl.conf (all nodes)

#vi /etc/sysctl.conf

kernel.shmmax = 4294967295

kernel.shmall = 2097152

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default=262144

net.core.rmem_max=4194304

net.core.wmem_default=262144

net.core.wmem_max=1048576

fs.aio-max-nr=1048576

kernel.panic_on_oops=1

Apply the changed parameters:

[root@rac1 ~]# sysctl -p

13. Pre-installation checks

+ASM1:/src/oracle/grid@rac1> ./runcluvfy.sh stage -pre crsinst -n rac1,rac2

Performing pre-checks for cluster services setup

Checking node reachability...

Node reachability check passed from node "rac1"

Checking user equivalence...

User equivalence check passed for user "grid"

 

Checking node connectivity...

 

Checking hosts config file...

 

Verification of the hosts config file successful

 

Node connectivity passed for subnet "192.168.180.0" with node(s)rac2,rac1

TCP connectivity check passed for subnet "192.168.180.0"

 

Node connectivity passed for subnet "10.10.10.0" with node(s)rac2,rac1

TCP connectivity check passed for subnet "10.10.10.0"

 

 

Interfaces found on subnet "192.168.180.0" that are likelycandidates for VIP are:

rac2 eth0:192.168.180.3

rac1 eth0:192.168.180.2

 

Interfaces found on subnet "10.10.10.0" that are likelycandidates for a private interconnect are:

rac2 eth1:10.10.10.3

rac1 eth1:10.10.10.2

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "192.168.180.0".

Subnet mask consistency check passed for subnet "10.10.10.0".

Subnet mask consistency check passed.

 

Node connectivity check passed

 

Checking multicast communication...

 

Checking subnet "192.168.180.0" for multicast communication withmulticast group "230.0.1.0"...

Check of subnet "192.168.180.0" for multicast communication withmulticast group "230.0.1.0" passed.

 

Checking subnet "10.10.10.0" for multicast communication withmulticast group "230.0.1.0"...

Check of subnet "10.10.10.0" for multicast communication withmulticast group "230.0.1.0" passed.

 

Check of multicast communication passed.

 

Checking ASMLib configuration.

Check for ASMLib configuration passed.

Total memory check passed

Available memory check passed

Swap space check passed

Free disk space check passed for "rac2:/var/tmp"

Free disk space check passed for "rac1:/var/tmp"

Check for multiple users with UID value 1100 passed

User existence check passed for "grid"

Group existence check passed for "oinstall"

Group existence check passed for "dba"

Membership check for user "grid" in group "oinstall"[as Primary] passed

Membership check for user "grid" in group "dba" passed

Run level check passed

Hard limits check passed for "maximum open file descriptors"

Soft limits check passed for "maximum open file descriptors"

Hard limits check passed for "maximum user processes"

Soft limits check passed for "maximum user processes"

System architecture check passed

Kernel version check passed

Kernel parameter check passed for "semmsl"

Kernel parameter check passed for "semmns"

Kernel parameter check passed for "semopm"

Kernel parameter check passed for "semmni"

Kernel parameter check passed for "shmmax"

Kernel parameter check passed for "shmmni"

Kernel parameter check passed for "shmall"

Kernel parameter check passed for "file-max"

Kernel parameter check passed for "ip_local_port_range"

Kernel parameter check passed for "rmem_default"

Kernel parameter check passed for "rmem_max"

Kernel parameter check passed for "wmem_default"

Kernel parameter check passed for "wmem_max"

Kernel parameter check passed for "aio-max-nr"

Package existence check passed for "make"

Package existence check passed for "binutils"

Package existence check passed for "gcc(x86_64)"

Package existence check passed for "libaio(x86_64)"

Package existence check passed for "glibc(x86_64)"

Package existence check passed for "compat-libstdc++-33(x86_64)"

Package existence check passed for "elfutils-libelf(x86_64)"

Package existence check failed for "elfutils-libelf-devel"

Check failed on nodes:

rac2,rac1

Package existence check passed for "glibc-common"

Package existence check passed for "glibc-devel(x86_64)"

Package existence check passed for "glibc-headers"

Package existence check passed for "gcc-c++(x86_64)"

Package existence check passed for "libaio-devel(x86_64)"

Package existence check passed for "libgcc(x86_64)"

Package existence check passed for "libstdc++(x86_64)"

Package existence check passed for "libstdc++-devel(x86_64)"

Package existence check passed for "sysstat"

Package existence check passed for "pdksh"

Package existence check passed for "expat(x86_64)"

Check for multiple users with UID value 0 passed

Current group ID check passed

 

Starting check for consistency of primary group of root user

 

Check for consistency of root user's primary group passed

 

Starting Clock synchronization checks using Network Time Protocol(NTP)...

 

NTP Configuration file check started...

No NTP Daemons or Services were found to be running

 

Clock synchronization check using Network Time Protocol(NTP) passed

 

Core file name pattern consistency check passed.

 

User "grid" is not part of "root" group. Check passed

Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

 

File "/etc/resolv.conf" does not have both domain and searchentries defined

domain entry in file "/etc/resolv.conf" is consistent acrossnodes

search entry in file "/etc/resolv.conf" is consistent acrossnodes

PRVF-5636 : The DNS response time for an unreachable nodeexceeded "15000" ms on following nodes: rac2,rac1

 

File "/etc/resolv.conf" is not consistent across nodes

 

Time zone consistency check passed

 

Pre-check for cluster services setup was unsuccessful on all the nodes.

Work through and resolve the issues reported above. If DNS is not configured, the following error can be ignored:

PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac2,rac1
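The only real failure above is the missing elfutils-libelf-devel package; assuming a configured yum repository (or mounted installation media), it can be installed on both nodes with:

yum install -y elfutils-libelf-devel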

14. Install Grid Infrastructure

On node 1, run runInstaller as the grid user.

 

[root@rac1 app]# chown -R grid:oinstall oracle   # run this on both nodes

 

 

The two warnings shown above (in the installer screenshots) can be ignored.

 

 

 

Run on node 1:

[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

 

Run on node 2:

[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

 

Run on node 1:

[root@rac1 ~]# /u01/app/grid/product/11.2.0/grid_1/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/grid/product/11.2.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/grid/product/11.2.0/grid_1/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

Failed to create keys in the OLR, rc = 127, Message:

 shared object file: No such file or directory

 

Failed to create keys in the OLR at /u01/app/grid/product/11.2.0/grid_1/crs/install/crsconfig_lib.pm line 7660.

/u01/app/grid/product/11.2.0/grid_1/crs/install/rootcrs.pl execution failed

 

This error is caused by a missing package; installing it on both nodes resolves the problem. For details, see: http://blog.csdn.net/shiyu1157758655/article/details/59486625

 

[root@rac1 ~]# rpm -ivh /os/Packages/compat-libcap1-1.10-1.x86_64.rpm

warning: /os/Packages/compat-libcap1-1.10-1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID c105b9de: NOKEY

Preparing...               ########################################### [100%]

   1:compat-libcap1        ########################################### [100%]

Run root.sh again:

[root@rac1 ~]# /u01/app/grid/product/11.2.0/grid_1/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/grid/product/11.2.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/grid/product/11.2.0/grid_1/crs/install/crsconfig_params

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

  root wallet

  root wallet cert

  root cert export

  peer wallet

  profile reader wallet

  pa wallet

  peer wallet keys

  pa wallet keys

  peer cert request

  pa cert request

  peer cert

  pa cert

  peer root cert TP

  profile reader root cert TP

  pa root cert TP

  peer pa cert TP

  pa peer cert TP

  profile reader pa cert TP

  profile reader peer cert TP

  peer user cert

  pa user cert

Adding Clusterware entries to upstart

CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'

CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'

CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'

CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.

Disk Group OCR_VOTING created successfully.

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-4256: Updating the profile

Successful addition of voting disk 5ddc4ea587f04f18bfb066d1d3ff07d9.

Successful addition of voting disk 64611da725794f0ebf206204283eff9a.

Successful addition of voting disk 46a2f2c1a5a14f4fbf58e6505f889674.

Successfully replaced voting disk group with +OCR_VOTING.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   5ddc4ea587f04f18bfb066d1d3ff07d9 (/dev/asm-diskc) [OCR_VOTING]

 2. ONLINE   64611da725794f0ebf206204283eff9a (/dev/asm-diskd) [OCR_VOTING]

 3. ONLINE   46a2f2c1a5a14f4fbf58e6505f889674 (/dev/asm-diske) [OCR_VOTING]

Located 3 voting disk(s).

CRS-2672: Attempting to start 'ora.asm' on 'rac1'

CRS-2676: Start of 'ora.asm' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.OCR_VOTING.dg' on 'rac1'

CRS-2676: Start of 'ora.OCR_VOTING.dg' on 'rac1' succeeded

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run on node 2:

[root@rac2 ~]# /u01/app/grid/product/11.2.0/grid_1/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/grid/product/11.2.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/grid/product/11.2.0/grid_1/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

Adding Clusterware entries to upstart

 terminating

An active cluster was found during exclusive startup, restarting to join the cluster

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 

Click OK to continue the installation.

An error is reported here; I chose to ignore it, and it appears to have no impact.

At this point, Grid Infrastructure has been installed successfully.
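Before moving on, it is worth confirming that the clusterware stack is healthy on both nodes; the standard 11.2 checks are:

crsctl check cluster -all     # CRS, CSS and EVM should be online on rac1 and rac2
crsctl stat res -t            # lists ASM, listeners, SCAN and other cluster resources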

 

15. Create the ASM disk groups

Log in as the grid user and run asmca.

Choose Create.
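If you prefer the command line to the asmca GUI, roughly equivalent SQL can be run against the ASM instance. This is only a sketch: it assumes external redundancy and that /dev/asm-diskf and /dev/asm-diskg are the remaining 50 GB and 20 GB disks from step 1 (c, d and e were consumed by OCR_VOTING).

# as the grid user on node 1 (ORACLE_SID=+ASM1 from step 8)
sqlplus -S / as sysasm <<EOF
CREATE DISKGROUP DATA    EXTERNAL REDUNDANCY DISK '/dev/asm-diskf';
CREATE DISKGROUP FRA_ARC EXTERNAL REDUNDANCY DISK '/dev/asm-diskg';
EOF

After creating the disk groups on node 1, mount them on node 2 as well (ALTER DISKGROUP DATA MOUNT; ALTER DISKGROUP FRA_ARC MOUNT;), or let asmca handle both nodes for you.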

 

16. Install the Oracle RDBMS software

After the disk groups have been created, the RDBMS software can be installed.

Note: the RDBMS software must be installed as the oracle user.

./runInstaller

Choose a software-only installation here.

Ignore the two errors shown above.

Click Install to proceed.

17. Create the database

Run dbca as the oracle user.

Click through the wizard to complete the database creation.
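Once dbca finishes, a quick sanity check is possible with srvctl; the database name rac below is an assumption matching the ORACLE_SID values rac1/rac2 from step 8, so substitute the name you actually gave dbca:

srvctl status database -d rac    # should show one instance running on each node
srvctl config scan               # shows the SCAN listener configuration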

18. Summary

With this, the Oracle 11g RAC installation is complete. The various problems encountered along the way have to be worked through patiently, one at a time.

 

