kubeadm reached GA with the Kubernetes v1.13 release, so it is now suitable for production use, and deploying Kubernetes clusters with kubeadm is clearly the direction things are heading. The Kubernetes image repositories are now also mirrored in China on Alibaba Cloud, which makes deploying a cluster with kubeadm much simpler and easier. This article walks you through a quick deployment of Kubernetes v1.13.2 with kubeadm.
Note: do not focus only on the deployment itself. If you are new to Kubernetes, I recommend first getting familiar with a binary-based deployment before learning kubeadm. See my other blog posts for binary deployments.
OS: CentOS 7.6
Kernel: 3.10.0-957.el7.x86_64
Kubernetes: v1.13.2
Docker-ce: 18.06
Recommended hardware: 2 CPU cores, 2 GB RAM
Keepalived keeps the apiserver IP (VIP) highly available.
Haproxy load-balances the apiserver.
To keep the number of servers down, haproxy and keepalived are deployed on node-01 and node-02.
Node name | Role | IP | Installed software |
---|---|---|---|
Load VIP | VIP | 10.31.90.200 | |
node-01 | master | 10.31.90.201 | kubeadm, kubelet, kubectl, docker, haproxy, keepalived |
node-02 | master | 10.31.90.202 | kubeadm, kubelet, kubectl, docker, haproxy, keepalived |
node-03 | master | 10.31.90.203 | kubeadm, kubelet, kubectl, docker |
node-04 | node | 10.31.90.204 | kubeadm, kubelet, kubectl, docker |
node-05 | node | 10.31.90.205 | kubeadm, kubelet, kubectl, docker |
node-06 | node | 10.31.90.206 | kubeadm, kubelet, kubectl, docker |

Service CIDR: 10.245.0.0/16
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
swapoff -a
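Note that swapoff -a only disables swap until the next reboot. To keep swap off permanently you can also comment out the swap entry in /etc/fstab, for example:
# Comment out the swap line so swap stays disabled after a reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab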
cat >>/etc/hosts<<EOF
10.31.90.201 node-01
10.31.90.202 node-02
10.31.90.203 node-03
10.31.90.204 node-04
10.31.90.205 node-05
10.31.90.206 node-06
EOF
Create an SSH key pair on node-01.
[root@node-01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:26z6DcUarn7wP70dqOZA28td+K/erv7NlaJPLVE1BTA root@node-01
The key's randomart image is:
+---[RSA 2048]----+
| E..o+|
| . o|
| . |
| . . |
| S o . |
| .o X oo .|
| oB +.o+oo.|
| .o*o+++o+o|
| .++o+Bo+=B*B|
+----[SHA256]-----+
Distribute node-01's public key so it can log in to the other servers without a password:
for n in `seq -w 01 06`;do ssh-copy-id node-$n;done
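After distributing the key, a quick loop (mirroring the one above) confirms that passwordless login works for every node:
# Each node should print its hostname without asking for a password
for n in `seq -w 01 06`;do ssh node-$n hostname;done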
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
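The net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded. If sysctl --system reports them as missing, load the module first and re-apply (a small extra step, standard on CentOS 7):
# Load br_netfilter now and on every boot, then re-apply the sysctl settings
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system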
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
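With the repo files in place, refreshing the yum cache makes sure the Aliyun mirrors are actually used:
# Rebuild the yum metadata cache against the new mirrors
yum clean all && yum makecache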
Install keepalived and haproxy on node-01 and node-02:
yum install -y keepalived haproxy
Keepalived configuration: node-01 uses priority 100 and node-02 uses priority 90; everything else is identical.
[root@node-01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     feng110498@163.com
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 88
    advert_int 1
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.31.90.200/24
    }
}
Haproxy configuration: the haproxy configuration on node-01 and node-02 is identical. We listen on port 8443 of 10.31.90.200 here because haproxy runs on the same servers as the k8s apiserver, and using 6443 for both would conflict.
global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    user haproxy
    log 127.0.0.1:514 local0 warning
    pidfile /var/lib/haproxy.pid
    maxconn 20000
    spread-checks 3
    nbproc 8

defaults
    log     global
    mode    tcp
    retries 3
    option redispatch

listen https-apiserver
    bind 10.31.90.200:8443
    mode tcp
    balance roundrobin
    timeout server 900s
    timeout connect 15s
    server apiserver01 10.31.90.201:6443 check port 6443 inter 5000 fall 5
    server apiserver02 10.31.90.202:6443 check port 6443 inter 5000 fall 5
    server apiserver03 10.31.90.203:6443 check port 6443 inter 5000 fall 5
systemctl enable keepalived && systemctl start keepalived
systemctl enable haproxy && systemctl start haproxy
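Before moving on, it is worth checking that the VIP is bound on the MASTER node and that haproxy is listening on 8443 (assuming the eth0 interface used in the keepalived config):
# On node-01, the VIP 10.31.90.200 should show up on eth0
ip addr show eth0 | grep 10.31.90.200
# haproxy should be listening on port 8443
ss -lnt | grep 8443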
kubeadm has specific Docker version requirements, so install a Docker version that matches kubeadm.
Because releases change frequently, pin the exact version numbers; this article uses v1.13.2 and other versions have not been tested.
yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2 ipvsadm ipset docker-ce-18.06.1.ce
# Start docker
systemctl enable docker && systemctl start docker
# Enable kubelet to start on boot
systemctl enable kubelet
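A quick sanity check that the pinned versions were actually installed:
# Verify the versions before initializing the cluster
kubeadm version -o short
kubelet --version
docker --version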
Use `kubeadm config print init-defaults > kubeadm-init.yaml` to dump the default configuration, then adjust it for your own environment.
[root@node-01 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.90.201
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.31.90.200:8443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.13.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: "10.245.0.0/16"
scheduler: {}
controllerManager: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
[root@node-01 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
[root@node-01 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.245.0.1 10.31.90.201 10.31.90.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503955 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node-01" as an annotation
[mark-control-plane] Marking the node node-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1
kubeadm init performs the following main steps:
[init]: initializes the cluster with the specified version
[preflight]: runs pre-flight checks and pulls the required Docker images
[kubelet-start]: generates the kubelet configuration file "/var/lib/kubelet/config.yaml"; the kubelet cannot start without this file, so the kubelet actually fails to start before initialization.
[certificates]: generates the certificates used by Kubernetes and stores them in /etc/kubernetes/pki.
[kubeconfig]: generates the kubeconfig files in /etc/kubernetes; the components use these files to communicate with each other.
[control-plane]: installs the master components from the YAML files in the /etc/kubernetes/manifests directory.
[etcd]: installs the etcd service using /etc/kubernetes/manifests/etcd.yaml.
[wait-control-plane]: waits for the master components deployed as static Pods to start.
[apiclient]: checks the health of the master components.
[uploadconfig]: uploads the configuration that was used into a ConfigMap.
[kubelet]: configures the kubelet via a ConfigMap.
[patchnode]: records the CRI socket information on the Node object as an annotation.
[mark-control-plane]: labels the current node with the master role and adds the NoSchedule taint, so master nodes are not used to run ordinary Pods by default.
[bootstrap-token]: generates the token; record it, as it is needed later when adding nodes to the cluster with kubeadm join.
By default kubectl looks for a config file under the .kube directory in the home directory of the user running it. Here we copy the admin.conf generated during the [kubeconfig] step of initialization to .kube/config.
[root@node-01 ~]# mkdir -p $HOME/.kube
[root@node-01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node-01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
This file records the API server's address, so subsequent kubectl commands can connect to the API server directly.
[root@node-01 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
[root@node-01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
node-01 NotReady master 14m v1.13.2
At this point there is only one node; its role is master and its status is NotReady.
On node-01, copy the certificate files to the other master nodes:
USER=root
CONTROL_PLANE_IPS="node-02 node-03"
for host in ${CONTROL_PLANE_IPS}; do
ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
Run the following on the other masters; note the --experimental-control-plane flag:
kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1 --experimental-control-plane
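After the join succeeds, node-02 and node-03 can also use kubectl locally by pointing it at the admin.conf that was copied over earlier (same steps as on node-01):
# Run on node-02 and node-03 after they have joined as control-plane nodes
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config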
Note: the token is only valid for a limited time. If the old token has expired, you can create a new one and print the matching join command with
kubeadm token create --print-join-command
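If you only need to recompute the --discovery-token-ca-cert-hash value (for example to assemble a join command by hand), it can be derived from the cluster CA certificate:
# Print the sha256 hash of the cluster CA public key used by --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'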
Run the following on node-04, node-05, and node-06; note that there is no --experimental-control-plane flag:
kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1
The master node is NotReady because no network plugin has been deployed yet, so the connection between the nodes and the master is not fully up. The most popular Kubernetes network plugins are Flannel, Calico, Canal, and Weave; here we use flannel.
[root@node-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
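Note that this flannel manifest defaults to the 10.244.0.0/16 pod network (the Network field in its kube-flannel-cfg ConfigMap); for flannel to hand out pod CIDRs, podSubnet in kubeadm-init.yaml should normally be set to the same value. You can check what the manifest actually configured with:
# Inspect the pod network configured in the flannel ConfigMap
kubectl -n kube-system get cm kube-flannel-cfg -o yaml | grep -A 3 net-conf.json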
All nodes are now in the Ready state.
[root@node-01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
node-01 Ready master 35m v1.13.2
node-02 Ready master 36m v1.13.2
node-03 Ready master 36m v1.13.2
node-04 Ready <none> 40m v1.13.2
node-05 Ready <none> 40m v1.13.2
node-06 Ready <none> 40m v1.13.2
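The worker nodes show ROLES as <none> because kubeadm only labels control-plane nodes. If you want them to display a node role, you can add the label yourself (purely cosmetic):
# Optional: give the worker nodes a visible "node" role
for n in node-04 node-05 node-06;do kubectl label node $n node-role.kubernetes.io/node= ;done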
Check the pods:
[root@node-01 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-89cc84847-j8mmg 1/1 Running 0 1d
coredns-89cc84847-rbjxs 1/1 Running 0 1d
etcd-node-01 1/1 Running 1 1d
etcd-node-02 1/1 Running 0 1d
etcd-node-03 1/1 Running 0 1d
kube-apiserver-node-01 1/1 Running 0 1d
kube-apiserver-node-02 1/1 Running 0 1d
kube-apiserver-node-03 1/1 Running 0 1d
kube-controller-manager-node-01 1/1 Running 2 1d
kube-controller-manager-node-02 1/1 Running 0 1d
kube-controller-manager-node-03 1/1 Running 0 1d
kube-proxy-jfbmv 1/1 Running 0 1d
kube-proxy-lvkms 1/1 Running 0 1d
kube-proxy-qx7kh 1/1 Running 0 1d
kube-proxy-xst5v 1/1 Running 0 1d
kube-proxy-zfwrk 1/1 Running 0 1d
kube-proxy-ztg6j 1/1 Running 0 1d
kube-scheduler-node-01 1/1 Running 1 1d
kube-scheduler-node-02 1/1 Running 1 1d
kube-scheduler-node-03 1/1 Running 1 1d
kube-flannel-ds-amd64-87wzj 1/1 Running 0 1d
kube-flannel-ds-amd64-lczwm 1/1 Running 0 1d
kube-flannel-ds-amd64-lwc2j 1/1 Running 0 1d
kube-flannel-ds-amd64-mwlfq 1/1 Running 0 1d
kube-flannel-ds-amd64-nj2mk 1/1 Running 0 1d
kube-flannel-ds-amd64-wx7vd 1/1 Running 0 1d
Check the IPVS state:
[root@node-01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.245.0.1:443 rr
-> 10.31.90.201:6443 Masq 1 2 0
-> 10.31.90.202:6443 Masq 1 0 0
-> 10.31.90.203:6443 Masq 1 2 0
TCP 10.245.0.10:53 rr
-> 10.32.0.3:53 Masq 1 0 0
-> 10.32.0.4:53 Masq 1 0 0
TCP 10.245.90.161:80 rr
-> 10.45.0.1:80 Masq 1 0 0
TCP 10.245.90.161:443 rr
-> 10.45.0.1:443 Masq 1 0 0
TCP 10.245.149.227:1 rr
-> 10.31.90.204:1 Masq 1 0 0
-> 10.31.90.205:1 Masq 1 0 0
-> 10.31.90.206:1 Masq 1 0 0
TCP 10.245.181.126:80 rr
-> 10.34.0.2:80 Masq 1 0 0
-> 10.45.0.0:80 Masq 1 0 0
-> 10.46.0.0:80 Masq 1 0 0
UDP 10.245.0.10:53 rr
-> 10.32.0.3:53 Masq 1 0 0
-> 10.32.0.4:53 Masq 1 0 0
The Kubernetes cluster deployment is now complete. If you run into any problems, feel free to leave a comment below. Thanks for reading!