Linux Environment Setup: Compiled Notes
[Computer Science] Environment Setup Manual
Operating Systems Lab Course Environment Setup

1. Using VMware
Note: for details on installing VMware and installing Linux inside VMware, see the instructor-provided "Operating System Principles Course Design Lab Manual".
This document uses VMware Workstation 6.5 (release build); the Linux system is Red Hat Linux.
1.1 What is a virtual machine?
A virtual machine is an application that runs on a Windows or Linux computer and "emulates" the environment of a standard x86 PC.
Like an ordinary computer, this environment has a chipset, CPU, memory, graphics card, sound card, network card, floppy drive, hard disk, optical drive, serial and parallel ports, USB controller, SCSI controller, and so on; the window presented by the application serves as the virtual machine's monitor.
In use, a virtual machine differs little from a real physical host: it likewise needs partitioning, formatting, an operating system, and application software. In short, it behaves just like a real computer.
Virtual machines make it easy to emulate multiple system environments, reproduce hardware configurations at low cost, and simulate a variety of network environments.
1.2 Installing the VMware software
Run the installer. The screens appear in this order (screenshots omitted): choose custom or default installation, choose the install path, choose whether to add a desktop shortcut, start the installation, installation complete, accept the license agreement, VMware main window.
1.3 Creating a new virtual machine
The wizard walks through (screenshots omitted): selecting an ISO file to install from (new in 6.5), choosing an installable Windows version, choosing an installable Linux version, choosing the virtual machine's path, choosing the virtual disk size, reviewing the new VM's settings, finishing creation, editing the VM settings, editing the virtual CD-ROM settings, creating a snapshot (restore point), naming and describing the snapshot, restoring the VM from a snapshot, and managing the VM's snapshots.
2. Using Linux
2.1 Powering on
First start VMware Workstation and enter its main window.
Click the power-on button in the toolbar.
After a moment the login screen appears. Click inside the window and enter the username: root. (Note: once you click into the guest, the mouse is captured by the VMware window; press Ctrl+Alt to release it and switch between the two systems.)
Press [Enter]; the next screen prompts for the password.
Linux Basics

Linux Basics

Contents

1. Foreword
Using Linux differs considerably from using Windows, and during development you will mostly use the text interface, perhaps never the GUI (Source Insight excepted, of course).
Developing and debugging under Linux also differs in places from the Win32 environment; you need to master a number of common commands and tools, many of which are CLI-only.
We hope that by studying the Linux basics below you will quickly become familiar with the Linux development environment and lay a solid foundation for future work.
When you find the CLI more convenient than the GUI (and it really is), you will have acquired a decent Linux foundation.
So when using Linux, we suggest staying off the GUI.
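As a first taste of the CLI, here is a short self-contained session you can run in any shell; the directory and file names are made up purely for illustration.

```shell
# Create a scratch directory and a couple of files, then exercise
# some everyday commands on them.
mkdir -p /tmp/cli-demo && cd /tmp/cli-demo
printf 'hello\nworld\n' > a.txt
printf 'hello again\n' > b.txt
ls                       # list the files just created
grep -l hello *.txt      # which files mention "hello"?
grep -c hello a.txt      # how many matching lines in a.txt? prints 1
wc -l a.txt              # count lines in a.txt
cd / && rm -rf /tmp/cli-demo   # clean up
```

Piping and combining such commands is where the CLI starts to outrun the GUI.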
2. Installation and configuration
2.1 Installing and configuring Linux
This part walks you through installing Linux in a virtual machine and setting up a basic development environment.
We use a virtual machine rather than a Linux system installed directly on disk purely for convenience.
As mentioned earlier, we edit code with Source Insight but compile it inside Linux, so we frequently switch between the two systems.
2.2 Installation
Install Linux from the RHEL4 WS installation images on our department's server.
Suggested parameters: in VMware, allocate a virtual disk of 10 GB or more and 384 MB or more of memory, and choose bridged or NAT networking.
Tip: you do not need to download all four ISOs locally; you can map a network drive in XP, and in VMware point the virtual CD-ROM directly at an ISO.
During installation we suggest choosing "full installation"; if you are interested, you can later install just the packages you need.
We leave installation details, such as partitioning, for you to explore on your own.
A full installation takes about an hour, depending on machine speed.
2.3 Configuration
After installation, configure the following items for convenience.
Enable some services: run ntsysv in a terminal and select smb and sshd.
Configure Samba file sharing so the virtual machine acts as a file server and the host and guest systems can exchange files.
Add an smb account:

```
smbpasswd -a root
```

Edit the smb configuration file (vi /etc/samba/smb.conf) and append the following:

```
[root]
path = /
valid users = root
create mask = 0600
directory mask = 0700
writeable = yes
```

Edit the firewall configuration file (/etc/sysconfig/iptables) to open the smb service port: after the line `-A RH-Firewall-1-INPUT -p 51 -j ACCEPT`, add

```
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 445 -j ACCEPT
```

If there is no `-A RH-Firewall-1-INPUT -p 51 -j ACCEPT` line, adding it before `-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited` also works.
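Before restarting Samba it is worth checking that the share stanza really landed in the file. This sketch rehearses the append-and-verify step against a temporary file instead of the real /etc/samba/smb.conf, so it is safe to run anywhere.

```shell
# Append the share section to a stand-in config file and verify it is present.
conf=$(mktemp)
cat >> "$conf" << 'EOF'
[root]
path = /
valid users = root
create mask = 0600
directory mask = 0700
writeable = yes
EOF
# Confirm the [root] share stanza exists before restarting smb.
grep -q '^\[root\]' "$conf" && echo "share configured"
rm -f "$conf"
```

On the real system, run the same grep against /etc/samba/smb.conf and then restart the smb service.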
Kubernetes (k8s) Environment Setup: A Beginner-Friendly Guide

1. Initial setup

1) Configure hosts:

```
[root@k8s0x ~]# vi /etc/hosts
192.168.1.111 k8s01
192.168.1.112 k8s02
192.168.1.113 k8s03
```

2) Disable the firewall:

```
[root@k8s0x ~]# systemctl stop firewalld
[root@k8s0x ~]# systemctl disable firewalld
```

3) Disable SELinux:

```
[root@k8s0x ~]# setenforce 0
[root@k8s0x ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
```

4) Disable swap:

```
[root@k8s0x ~]# swapoff -a
[root@k8s0x ~]# sysctl -w vm.swappiness=0
[root@k8s0x ~]# sed -i /swap/s/^/#/g /etc/fstab
```

5) Configure kernel parameters:

```
[root@k8s0x ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s0x ~]# sysctl -p /etc/sysctl.d/k8s.conf
```

6) Install Docker:

```
# Configure the Docker install repository
[root@k8s0x ~]# cat > /etc/yum.repos.d/docker-ce.repo << EOF
[docker-ce]
name=Kubernetes
baseurl=https:///docker-ce/linux/centos/7/x86_64/stable/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https:///docker-ce/linux/centos/gpg
EOF
# Install and start Docker
[root@k8s0x ~]# yum install -y docker-ce
[root@k8s0x ~]# systemctl start docker
[root@k8s0x ~]# systemctl enable docker
```

7) Set up passwordless SSH:

```
[root@k8s01 ~]# ssh-keygen
[root@k8s01 ~]# ssh-copy-id k8s01
[root@k8s01 ~]# ssh-copy-id k8s02
[root@k8s01 ~]# ssh-copy-id k8s03
```

8) Fetch the Kubernetes packages:

```
# Download the package and copy it to the other nodes
[root@k8s01 ~]# wget /Kubernetes/kubernetes-v1.11.1.tgz
[root@k8s01 ~]# scp kubernetes-v1.11.1.tgz k8s02:
[root@k8s01 ~]# scp kubernetes-v1.11.1.tgz k8s03:
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
# Install hyperkube
ssh $host "tar xvf kubernetes-v1.11.1.tgz -C /tmp; cp /tmp/hyperkube /usr/bin; cp /tmp/etcdctl /usr/bin; ln -sf /usr/bin/hyperkube /usr/bin/kubelet; ln -sf /usr/bin/hyperkube /usr/bin/kubectl"
# Load the Kubernetes images
ssh $host "docker load < /tmp/k8s-img.tgz"
# Install the Kubernetes CNI plugins
ssh $host "mkdir -p /opt/cni/bin; tar xvf /tmp/cni-plugins-v0.7.0.tgz -C /opt/cni/bin/"
done
# Install CFSSL (only on node k8s01)
[root@k8s01 ~]# tar xvf /tmp/cfssl-v1.2.tgz -C /usr/bin
```
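Step 3 above rewrites /etc/selinux/config with sed. If you want to rehearse that edit before touching the real file, this sketch applies the exact same substitution to a throwaway copy; nothing system-level is modified.

```shell
# Make a stand-in selinux config, apply the same sed as step 3, and check it.
f=$(mktemp)
echo 'SELINUX=enforcing' > "$f"
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$f"
grep '^SELINUX=' "$f"    # prints SELINUX=disabled
rm -f "$f"
```

Note that setenforce 0 only switches SELinux to permissive for the running session; the config edit is what makes the change survive a reboot.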
9) Environment cleanup script (only for reinstalling):

```
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
ssh $host systemctl stop kubelet
ssh $host systemctl stop docker
ssh $host rm -rf /usr/bin/hyperkube
ssh $host rm -rf /usr/bin/kubectl
ssh $host rm -rf /usr/bin/kubelet
ssh $host rm -rf /usr/local/bin/*
ssh $host rm -rf /etc/etcd
ssh $host rm -rf /etc/kubernetes
ssh $host rm -rf /var/lib/docker/*
ssh $host rm -rf /var/lib/etcd/*
ssh $host rm -rf /var/lib/kubelet/pki/*
ssh $host rm -rf /usr/lib/systemd/system/kubelet.service
ssh $host rm -rf /etc/systemd/system/kubelet.service.d/10-kubelet.conf
ssh $host systemctl daemon-reload
ssh $host systemctl start docker
done
```

2. Configure HAProxy

1) Create the haproxy configuration file:

```
[root@k8s01 ~]# cat << EOF > /tmp/haproxy.cfg
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    tune.ssl.default-dh-param 2048
defaults
    log global
    mode http
    option dontlognull
    timeout connect 5000ms
    timeout client 1800000ms
    timeout server 1800000ms
listen stats
    bind :9090
    mode http
    balance
    stats uri /haproxy_stats
    stats auth admin:admin123
    stats admin if TRUE
frontend api-https
    mode tcp
    bind :6443
    default_backend api-backend
backend api-backend
    mode tcp
    server k8s01 192.168.1.111:5443 check
    server k8s02 192.168.1.112:5443 check
    server k8s03 192.168.1.113:5443 check
EOF
```

2) Install the haproxy service:

```
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
ssh $host yum install -y haproxy
done
```

3) Distribute the configuration and start the service:

```
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
scp /tmp/haproxy.cfg ${host}:/etc/haproxy/haproxy.cfg
ssh $host systemctl restart haproxy
ssh $host systemctl enable haproxy
done
```

3. Configure etcd

1)
Create certificates:

```
# Certificate configuration
[root@k8s01 ~]# mkdir /tmp/etcd && cd /tmp/etcd
[root@k8s01 etcd]# cat << EOF > ca-config.json
{"signing":{"default":{"expiry":"87600h"},"profiles":{"kubernetes":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"87600h"}}}}
EOF
[root@k8s01 etcd]# cat << EOF > etcd-ca-csr.json
{"CN":"etcd","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","ST":"Shanghai","L":"Shanghai","O":"etcd","OU":"Etcd Security"}]}
EOF
[root@k8s01 etcd]# cat << EOF > etcd-csr.json
{"CN":"etcd","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","ST":"Shanghai","L":"Shanghai","O":"etcd","OU":"Etcd Security"}]}
EOF
# Create the certificates
[root@k8s01 etcd]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare ca
[root@k8s01 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=127.0.0.1,192.168.1.111,192.168.1.112,192.168.1.113 -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
```

2) Create the static pod:

```
[root@k8s01 ~]# grep k8s /etc/hosts | while read host_ip host_name
do
ETCD_NAME=$host_name
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${host_ip}:2380"
ETCD_LISTEN_PEER_URLS="https://${host_ip}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${host_ip}:2379,http://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="https://${host_ip}:2379"
ETCD_INITIAL_CLUSTER="k8s01=https://192.168.1.111:2380,k8s02=https://192.168.1.112:2380,k8s03=https://192.168.1.113:2380"
ETCD_ENDPOINTS="https://${host_ip}:2379"
cat << EOF > /tmp/etcd/etcd-${host_name}.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --name=$ETCD_NAME
    - --data-dir=/var/lib/etcd
    - --initial-cluster-state=new
    - --initial-cluster-token=etcd-cluster-0
    - --initial-advertise-peer-urls=$ETCD_INITIAL_ADVERTISE_PEER_URLS
    - --listen-peer-urls=$ETCD_LISTEN_PEER_URLS
    - --listen-client-urls=$ETCD_LISTEN_CLIENT_URLS
    - --advertise-client-urls=$ETCD_ADVERTISE_CLIENT_URLS
    - --initial-cluster=$ETCD_INITIAL_CLUSTER
    - --client-cert-auth=true
    - --cert-file=/etc/kubernetes/pki/etcd/etcd.pem
    - --key-file=/etc/kubernetes/pki/etcd/etcd-key.pem
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --peer-client-cert-auth=true
    - --peer-cert-file=/etc/kubernetes/pki/etcd/etcd.pem
    - --peer-key-file=/etc/kubernetes/pki/etcd/etcd-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    image: k8s.gcr.io/etcd:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=$ETCD_ENDPOINTS --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: etcd-certs
status: {}
EOF
done
```

3) Distribute the configuration:

```
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
ssh $host mkdir -p /etc/kubernetes/pki/etcd /etc/kubernetes/manifests/
scp /tmp/etcd/{ca.pem,etcd.pem,etcd-key.pem} ${host}:/etc/kubernetes/pki/etcd
scp /tmp/etcd/etcd-${host}.yaml ${host}:/etc/kubernetes/manifests/etcd.yaml
done
```

4. Configure kube-apiserver

1)
Create certificates:

```
# Certificate configuration
[root@k8s01 ~]# mkdir /tmp/kubernetes && cd /tmp/kubernetes
[root@k8s01 kubernetes]# cat << EOF > ca-config.json
{"signing":{"default":{"expiry":"87600h"},"profiles":{"kubernetes":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"87600h"}}}}
EOF
[root@k8s01 kubernetes]# cat << EOF > ca-csr.json
{"CN":"kubernetes","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","ST":"Shanghai","L":"Shanghai","O":"Kubernetes","OU":"Kubernetes-manual"}]}
EOF
[root@k8s01 kubernetes]# cat << EOF > apiserver-csr.json
{"CN":"kube-apiserver","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","ST":"Shanghai","L":"Shanghai","O":"Kubernetes","OU":"Kubernetes-manual"}]}
EOF
[root@k8s01 kubernetes]# cat << EOF > front-proxy-ca-csr.json
{"CN":"kubernetes","key":{"algo":"rsa","size":2048}}
EOF
[root@k8s01 kubernetes]# cat << EOF > front-proxy-client-csr.json
{"CN":"front-proxy-client","key":{"algo":"rsa","size":2048}}
EOF
# Create the certificates
[root@k8s01 pki]# openssl genrsa -out sa.key 2048
[root@k8s01 pki]# openssl rsa -in sa.key -pubout -out sa.pub
[root@k8s01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@k8s01 pki]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.1.111,192.168.1.112,192.168.1.113,127.0.0.1,kubernetes.default -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver
[root@k8s01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca
[root@k8s01 pki]# cfssl gencert -ca=front-proxy-ca.pem -ca-key=front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare front-proxy-client
```

2)
Create the static pod:

```
[root@k8s01 ~]# grep k8s /etc/hosts | while read host_ip host_name
do
ETCD_SERVERS="https://192.168.1.111:2379,https://192.168.1.112:2379,https://192.168.1.113:2379"
cat << EOF > /tmp/kubernetes/kube-apiserver-${host_name}.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=$host_ip
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.pem
    - --disable-admission-plugins=PersistentVolumeLabel
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem
    - --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem
    - --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem
    - --etcd-servers=$ETCD_SERVERS
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=5443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
    image: k8s.gcr.io/kube-apiserver:v1.11.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: $host_ip
        path: /healthz
        port: 5443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
status: {}
EOF
done
```

3) Distribute the configuration:

```
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
ssh $host mkdir -p /etc/kubernetes/pki/ /etc/kubernetes/manifests/
scp /tmp/kubernetes/{apiserver-key.pem,apiserver.pem,ca.pem,front-proxy-ca.pem,front-proxy-client-key.pem,front-proxy-client.pem,sa.pub} ${host}:/etc/kubernetes/pki/
scp /tmp/kubernetes/kube-apiserver-${host}.yaml ${host}:/etc/kubernetes/manifests/kube-apiserver.yaml
done
```

5. Configure kube-controller-manager

1) Create certificates:

```
# Certificate configuration
[root@k8s01 ~]# cd /tmp/kubernetes
[root@k8s01 kubernetes]# cat << EOF > manager-csr.json
{"CN":"system:kube-controller-manager","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","ST":"Shanghai","L":"Shanghai","O":"system:kube-controller-manager","OU":"Kubernetes-manual"}]}
EOF
# Create the certificate
[root@k8s01 kubernetes]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes manager-csr.json | cfssljson -bare controller-manager
```

2)
Generate the controller-manager kubeconfig:

```
[root@k8s01 kubernetes]# export KUBE_APISERVER=https://127.0.0.1:6443
[root@k8s01 kubernetes]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=/tmp/kubernetes/controller-manager.conf
[root@k8s01 kubernetes]# kubectl config set-credentials system:kube-controller-manager --client-certificate=controller-manager.pem --client-key=controller-manager-key.pem --embed-certs=true --kubeconfig=/tmp/kubernetes/controller-manager.conf
[root@k8s01 kubernetes]# kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/tmp/kubernetes/controller-manager.conf
[root@k8s01 kubernetes]# kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=/tmp/kubernetes/controller-manager.conf
```

3) Create the static pod:

```
[root@k8s01 ~]# cat << EOF > /tmp/kubernetes/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --address=127.0.0.1
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --root-ca-file=/etc/kubernetes/pki/ca.pem
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    image: k8s.gcr.io/kube-controller-manager:v1.11.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
status: {}
EOF
```

4) Distribute the configuration:

```
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
scp /tmp/kubernetes/{ca.pem,ca-key.pem,sa.key} ${host}:/etc/kubernetes/pki/
scp /tmp/kubernetes/controller-manager.conf ${host}:/etc/kubernetes/controller-manager.conf
scp /tmp/kubernetes/kube-controller-manager.yaml ${host}:/etc/kubernetes/manifests/kube-controller-manager.yaml
done
```

6. Configure kube-scheduler

1) Create certificates:

```
# Certificate configuration
[root@k8s01 ~]# cd /tmp/kubernetes
[root@k8s01 kubernetes]# cat << EOF > scheduler-csr.json
{"CN":"system:kube-scheduler","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","ST":"Shanghai","L":"Shanghai","O":"system:kube-scheduler","OU":"Kubernetes-manual"}]}
EOF
# Create the certificate
[root@k8s01 kubernetes]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare scheduler
```

2)
Generate the scheduler kubeconfig:

```
[root@k8s01 kubernetes]# export KUBE_APISERVER=https://127.0.0.1:6443
[root@k8s01 kubernetes]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=/tmp/kubernetes/scheduler.conf
[root@k8s01 kubernetes]# kubectl config set-credentials system:kube-scheduler --client-certificate=scheduler.pem --client-key=scheduler-key.pem --embed-certs=true --kubeconfig=/tmp/kubernetes/scheduler.conf
[root@k8s01 kubernetes]# kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/tmp/kubernetes/scheduler.conf
[root@k8s01 kubernetes]# kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=/tmp/kubernetes/scheduler.conf
```

3) Create the static pod:

```
[root@k8s01 kubernetes]# cat << EOF > /tmp/kubernetes/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: k8s.gcr.io/kube-scheduler:v1.11.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
EOF
```

4) Distribute the configuration:

```
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
scp /tmp/kubernetes/scheduler.conf ${host}:/etc/kubernetes/scheduler.conf
scp /tmp/kubernetes/kube-scheduler.yaml ${host}:/etc/kubernetes/manifests/kube-scheduler.yaml
done
```

7. Configure kube-proxy

1)
Create certificates:

```
# Certificate configuration
[root@k8s01 ~]# cd /tmp/kubernetes
[root@k8s01 kubernetes]# cat << EOF > proxy-csr.json
{"CN":"system:kube-proxy","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","ST":"Shanghai","L":"Shanghai","O":"system:kube-proxy","OU":"Kubernetes-manual"}]}
EOF
# Create the certificate
[root@k8s01 kubernetes]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes proxy-csr.json | cfssljson -bare proxy
```

2) Generate the kube-proxy kubeconfig:

```
[root@k8s01 kubernetes]# export KUBE_APISERVER=https://127.0.0.1:6443
[root@k8s01 kubernetes]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=/tmp/kubernetes/proxy.conf
[root@k8s01 kubernetes]# kubectl config set-credentials system:kube-proxy --client-certificate=proxy.pem --client-key=proxy-key.pem --embed-certs=true --kubeconfig=/tmp/kubernetes/proxy.conf
[root@k8s01 kubernetes]# kubectl config set-context system:kube-proxy@kubernetes --cluster=kubernetes --user=system:kube-proxy --kubeconfig=/tmp/kubernetes/proxy.conf
[root@k8s01 kubernetes]# kubectl config use-context system:kube-proxy@kubernetes --kubeconfig=/tmp/kubernetes/proxy.conf
```

3)
Create the static pod:

```
[root@k8s01 kubernetes]# cat << EOF > /tmp/kubernetes/kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: k8s.gcr.io/kube-proxy:v1.11.1
    imagePullPolicy: IfNotPresent
    command:
    - kube-proxy
    - --kubeconfig=/etc/kubernetes/proxy.conf
    - --cluster-cidr=10.244.0.0/16
    - --v=2
    - --masquerade-all
    - --alsologtostderr=true
    - --logtostderr=false
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /var/log/kubernetes/proxy
      readOnly: false
      name: kubeproxylog
    - mountPath: /etc/kubernetes/proxy.conf
      readOnly: true
      name: kubeconfig
    - mountPath: /lib/modules
      name: lib-modules
      readOnly: true
    - mountPath: /var/run/dbus/system_bus_socket
      name: system-bus-socket
      readOnly: true
  volumes:
  - hostPath:
      path: /var/log/kubernetes/proxy
    name: kubeproxylog
  - hostPath:
      path: /etc/kubernetes/proxy.conf
    name: kubeconfig
  - hostPath:
      path: /lib/modules
    name: lib-modules
  - hostPath:
      path: /var/run/dbus/system_bus_socket
    name: system-bus-socket
EOF
```

4) Distribute the configuration:

```
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
scp /tmp/kubernetes/proxy.conf ${host}:/etc/kubernetes/proxy.conf
scp /tmp/kubernetes/kube-proxy.yaml ${host}:/etc/kubernetes/manifests/kube-proxy.yaml
done
```

8. Configure kubelet

1) Create certificates:

```
# Certificate configuration
[root@k8s01 ~]# cd /tmp/kubernetes
[root@k8s01 kubernetes]# cat << EOF > kubelet-csr.json
{"CN":"system:node:\$NODE","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","L":"Shanghai","ST":"Shanghai","O":"system:nodes","OU":"Kubernetes-manual"}]}
EOF
# Create the certificates
[root@k8s01 kubernetes]# for NODE in k8s01 k8s02 k8s03
do
cp kubelet-csr.json kubelet-$NODE-csr.json
sed -i "s/\$NODE/$NODE/g" kubelet-$NODE-csr.json
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=$NODE -profile=kubernetes kubelet-$NODE-csr.json | cfssljson -bare kubelet-$NODE
done
```

2)
Generate the kubelet kubeconfigs:

```
[root@k8s01 kubernetes]# export KUBE_APISERVER=https://127.0.0.1:6443
[root@k8s01 kubernetes]# for NODE in k8s01 k8s02 k8s03
do
hyperkube kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=/tmp/kubernetes/kubelet-${NODE}.conf
hyperkube kubectl config set-credentials system:node:${NODE} --client-certificate=kubelet-${NODE}.pem --client-key=kubelet-${NODE}-key.pem --embed-certs=true --kubeconfig=/tmp/kubernetes/kubelet-${NODE}.conf
hyperkube kubectl config set-context system:node:${NODE}@kubernetes --cluster=kubernetes --user=system:node:${NODE} --kubeconfig=/tmp/kubernetes/kubelet-${NODE}.conf
hyperkube kubectl config use-context system:node:${NODE}@kubernetes --kubeconfig=/tmp/kubernetes/kubelet-${NODE}.conf
done
```

3) Create the systemd unit files (if you are not configuring networking, simply drop the KUBELET_NETWORK_ARGS parameter):

```
# systemd unit
[root@k8s01 kubernetes]# cat << EOF > /tmp/kubernetes/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
# systemd environment variables
[root@k8s01 kubernetes]# cat << EOF > /tmp/kubernetes/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.pem"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node-role.kubernetes.io/master='' --logtostderr=true --v=0"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_SYSTEM_PODS_ARGS \$KUBELET_NETWORK_ARGS \$KUBELET_DNS_ARGS \$KUBELET_AUTHZ_ARGS \$KUBELET_CADVISOR_ARGS \$KUBELET_CERTIFICATE_ARGS \$KUBELET_EXTRA_ARGS
EOF
```

4) Distribute the configuration:

```
[root@k8s01 ~]# for host in k8s01 k8s02 k8s03
do
ssh $host mkdir -p /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
scp /tmp/kubernetes/kubelet-${host}.conf ${host}:/etc/kubernetes/kubelet.conf
scp /tmp/kubernetes/kubelet.service ${host}:/usr/lib/systemd/system/kubelet.service
scp /tmp/kubernetes/10-kubelet.conf ${host}:/etc/systemd/system/kubelet.service.d/10-kubelet.conf
ssh $host systemctl daemon-reload
done
```

9. Start services and initial configuration

1) Create certificates:

```
# Certificate configuration
[root@k8s01 ~]# cd /tmp/kubernetes
[root@k8s01 kubernetes]# cat << EOF > admin-csr.json
{"CN":"admin","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","ST":"Shanghai","L":"Shanghai","O":"system:masters","OU":"Kubernetes-manual"}]}
EOF
# Create the certificate
[root@k8s01 kubernetes]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
```
A Simple Guide to Setting Up the LuCI Framework on Ordinary Linux

LuCI is an open-source framework for building network management interfaces.
It is the default web interface of the OpenWrt router operating system, letting users configure and manage a router through a graphical interface.
In this article we describe in detail how to set up the LuCI framework on Linux.
Step 1: Install the OpenWrt router operating system. To set up LuCI, we first need to install OpenWrt.
You can download a firmware image from the OpenWrt official website; once the download completes, you need to flash the image onto your router device.
The exact flashing procedure varies by device; consult your device's documentation or the detailed tutorials on the OpenWrt official website.
Step 2: Install the required software. Before installing LuCI, we need to make sure some required software is present on the system.
These packages are: the LuCI libraries and applications, and uhttpd (a lightweight HTTP server). On most Linux distributions you can install them with the package manager.
For example, on Debian/Ubuntu you can run the following commands to install the required software:

```
sudo apt-get update
sudo apt-get install luci uhttpd
```

Step 3: Configure uhttpd. In the LuCI framework, uhttpd is the lightweight HTTP server that provides the web service.
We need some configuration to enable uhttpd and make it work with LuCI.
First, you need to edit uhttpd's configuration file.
On most Linux distributions, uhttpd's configuration file is located at `/etc/config/uhttpd`.
Open this file with your favorite text editor.
In the configuration file, make sure the following options are set correctly:
- `option listen_http '0.0.0.0:80'`: this option specifies the IP address and port uhttpd listens on.
By default, LuCI listens on the router's port 80, so make sure this option is set correctly.
- `option home '/www'`: this option specifies the web server's document root.
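Putting the two options together, a minimal uhttpd section might look like the sketch below. The values mirror the defaults described above; the `'main'` section name is the conventional one on OpenWrt, but check your own file, since existing sections and any HTTPS-related options are left out here.

```
config uhttpd 'main'
    option listen_http '0.0.0.0:80'
    option home '/www'
```

After editing, restart uhttpd (for example with `/etc/init.d/uhttpd restart`) so the changes take effect.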
Linux Environment Deployment Document

Linux Environment Deployment Document

1. Log in to the Linux environment with the root account and open a shell terminal.
2. Install the JDK, MySQL, and the Tomcat server with the following commands (press Enter after each line). Precondition: the Linux installation packages have already been copied to /Linux/setup.

```
[root@localhost /]# cd /usr/
# Create a temporary install directory named nationz
[root@localhost usr]# mkdir nationz
[root@localhost usr]# cd /Linux/setup
[root@localhost setup]# cp apache-tomcat-5.5.31.tar.gz /usr/nationz/
[root@localhost setup]# cp jdk-6u23-linux-i586.rpm /usr/nationz/
[root@localhost setup]# cp MySQL-server-5.5.8-1.rhel5.i386.rpm /usr/nationz/
[root@localhost setup]# cp MySQL-client-5.5.8-1.rhel5.i386.rpm /usr/nationz/
[root@localhost setup]# cd /usr/nationz/
# Install the JDK
[root@localhost nationz]# rpm -ivh jdk-6u23-linux-i586.rpm
# After running java -version, output like the following means the JDK
# installed successfully (the JDK installs to /usr/java/ by default):
#   java version "1.6.0_23"
#   Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
#   Java HotSpot(TM) Server VM (build 19.0-b09, mixed mode)
# After installing the JDK, decompress Tomcat's .gz archive:
[root@localhost nationz]# gunzip apache-tomcat-5.5.31.tar.gz
# At this point the original apache-tomcat-5.5.31.tar.gz has been
# decompressed into apache-tomcat-5.5.31.tar
```
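The gunzip step above leaves a plain .tar file, so one more extraction is needed. This sketch demonstrates that tar round-trip on a throwaway archive so it runs anywhere; on the real server the archive is /usr/nationz/apache-tomcat-5.5.31.tar, and the JAVA_HOME path shown in the comment is an assumed JDK install directory, so confirm it on your machine.

```shell
# Rehearse the unpack step on a scratch archive of the same name.
cd "$(mktemp -d)"
mkdir -p apache-tomcat-5.5.31/bin
tar cf apache-tomcat-5.5.31.tar apache-tomcat-5.5.31
rm -r apache-tomcat-5.5.31
tar xf apache-tomcat-5.5.31.tar        # same step as on the server
ls -d apache-tomcat-5.5.31             # directory restored from the archive
# Then, in /etc/profile or ~/.bash_profile on the server (paths assumed):
#   export JAVA_HOME=/usr/java/jdk1.6.0_23
#   export CATALINA_HOME=/usr/nationz/apache-tomcat-5.5.31
```

Tomcat can then be started from $CATALINA_HOME/bin.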
Setting Up a RADIUS Server on Linux

Installation environment: the following server was used for this RADIUS installation. Server: CentOS 7, kernel version 3.10.0-1062.el7.x86_64. Software versions installed: freeradius-utils-3.0.4-8.el7_3.x86_64, freeradius-3.0.4-8.el7_3.x86_64, freeradius-mysql-3.0.4-8.el7_3.x86_64.

1. Installing RADIUS

```
uname -a                      # 1.1 check server information (optional)
yum update                    # 1.2 update the yum repositories (optional)
yum list | grep freeradius    # 1.3 list the available packages
yum install freeradius freeradius-utils   # 1.4 install the packages
rpm -qa | grep freeradius     # 1.5 verify the packages are installed
```

2. Configuring FreeRADIUS

2.1 Edit the client configuration file: vi /etc/raddb/clients.conf.
Uncomment lines 241-244 of the file to open RADIUS to the current network segment, and change the shared secret (this document uses the secret songchen). Setting ipaddr to 0.0.0.0/0 means clients from any network segment may connect.
2.2 Edit the users configuration file: vi /etc/raddb/users and add user entries.
Uncomment lines 87-88 of the file, or add your own entry (song is the test username, chen the test password).
2.3 Start the RADIUS service and enable it at boot:

```
systemctl start radiusd     # start the service
systemctl enable radiusd    # enable at boot
systemctl status radiusd    # check the service status
```

Note: if you cannot connect, stop the firewall. The firewall must be stopped before installing:

```
systemctl stop firewalld.service      # 1. stop the firewall
systemctl disable firewalld.service   # 2. prevent firewalld from starting at boot
firewall-cmd --state                  # 3. check the firewall's current state
```
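To check the test account end to end, it helps to confirm the users entry is in place and then query the server locally. The sketch below writes an entry like the one on lines 87-88 into a temporary copy (so it is safe to run anywhere); the radtest invocation in the comment uses FreeRADIUS's bundled test client with the user, password, and shared secret from this document.

```shell
# Add a test user entry to a stand-in users file and verify it.
users=$(mktemp)
cat >> "$users" << 'EOF'
song Cleartext-Password := "chen"
EOF
grep -c 'Cleartext-Password' "$users"   # prints 1: entry is present
rm -f "$users"
# On the server itself, exercise the account with the bundled test client:
#   radtest song chen 127.0.0.1 0 songchen
# An Access-Accept reply means the user and shared secret are working.
```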
Setting Up an at91sam9x25 Embedded Software Development and Test Environment on Linux

Hardware environment; software environment.

1. Install a virtual machine
1.1 Choosing the VM: use Ubuntu 11.10 or later (easier to upgrade).
1.2 Configuring and upgrading the VM; the common apt commands are:

```
apt-cache search package            # search for a package
apt-cache show package              # show package details (description, size, version, ...)
sudo apt-get install package        # install a package
sudo apt-get install package --reinstall   # reinstall a package
sudo apt-get -f install             # repair installation ("-f" = "--fix-missing")
sudo apt-get remove package         # remove a package
sudo apt-get remove package --purge # remove a package together with its config files
sudo apt-get update                 # refresh the package lists
sudo apt-get upgrade                # upgrade installed packages
sudo apt-get dist-upgrade           # upgrade the system
sudo apt-get dselect-upgrade        # upgrade using dselect
apt-cache depends package           # show the package's dependencies
apt-cache rdepends package          # show which packages depend on it
sudo apt-get build-dep package      # install its build environment
apt-get source package              # download the package's source code
sudo apt-get clean && sudo apt-get autoclean   # clean out unused packages
sudo apt-get check                  # check for broken dependencies
```

2. Install the cross-compilation environment under Linux
2.1 Installation steps:
1) Download arm-2011.03-42-arm-none-eabi-i686-pc-linux- (file name truncated in the source).
2) Install from the command line:

```
# tar xvzf arm-2011.03-42-arm-none-eabi-i686-pc-linux-
# cd arm-2011.03
```

3. Install the ARM device programming tool SAM Boot Assistant (SAM-BA)
3.1 Installing on Windows:
1) Install sam-ba_ (file name truncated in the source);
2) Install the USB CDC driver. (Figures 3.1 through 3.9 in the original show the installation screens; open SAM-BA 2 when done.)
3.2 Installing on Linux:
1) Unpack sam-ba_;
2) Install the USB CDC driver:

```
1/ Login with administrator rights
2/ Unload usbserial module if it is already running
# rmmod usbserial
3/ Load usbserial kernel module
# modprobe usbserial vendor=0x03eb product=0x6124
4/ Verify that the USB connection is established
# lsusb -d 03eb:6124
Bus 004 Device 006: ID 03eb:6124 Atmel Corp
5/ Know which USB connection is established
# dmesg
kernel: usb 4-2: new full speed USB device using uhci_hcd and address 5
kernel: usb 4-2: configuration #1 chosen from 1 choice
kernel: usbserial_generic 4-2:1.0: generic converter detected
kernel: usbserial_generic: probe of 4-2:1.0 failed with error -5
kernel: usbserial_generic 4-2:1.1: generic converter detected
kernel: usb 4-2: generic converter now attached to ttyUSBx
=> you will have to use /dev/ttyUSBx to connect to your board
```

Running the SAM-BA CDC Serial version: launch the sam-ba_cdc_ file (name truncated in the source) and select your board and the /dev/ttyUSBx device your board is mounted on.

```
- Update the kernel:
# apt-get install linux-image-generic linux-headers-generic
- On 64 bits version install 32 bits libraries:
# apt-get install ia32-libs
- Give sam-ba execute permission if needed:
$ chmod +x sam-ba
- Connect the board
- Create a symlink on /dev/ttyACM0
# ln -s /dev/ttyACM0 /dev/ttyUSB0
- Launch sam-ba
```

Tested on: Ubuntu 10.04 64 bits, Ubuntu 10.10 32 bits, Ubuntu 10.10 64 bits, Ubuntu 11.10 64 bits alpha3.

How to check if your kernel is up to date? Run `dmesg`. If you see something like this (not exactly the same), it is OK:

```
[227274.230016] usb 5-1: new full speed USB device using uhci_hcd and address 5
[227274.395739] cdc_acm 5-1:1.0: This device cannot do calls on its own. It is not a modem.
[227274.395768] cdc_acm 5-1:1.0: ttyACM0: USB ACM device
```

If the part 'This device cannot do calls on its own. It is not a modem.' is missing, your kernel is probably not up to date, or the cdc_acm patch has not been backported.

4. Example
4.1 Download the AT91Bootstrap source
1) Obtain the source;
2) Unpack it:

```
# tar xvzf AT91Bootstrap-5series_
# cd AT91Bootstrap-5series_1.2
```

4.2 Configure AT91Bootstrap and choose the boot medium
1) To boot from NAND flash:

```
# make at91sam9xnf_defconfig
```

2) Add the environment variables:

```
# vi .profile
PATH="$PATH:/root/Public/arm-2011.03/bin"
export PATH
# source .profile
```

3) Configure AT91Bootstrap:

```
# make menuconfig
```

4.3 Build AT91Bootstrap:

```
# export CROSS_COMPILE="arm-none-eabi-"
# make clean
# make
```

This produces at91sam9x5ek-nandflashboot- (name truncated in the source) under ../AT91Bootstrap-5series_1.2/binaries.

4.4 Using the AT91Bootstrap binary
1) Boot AT91Bootstrap from NAND flash (figure 4.1): with NAND and SPI disabled, start SAM-BA and burn AT91Bootstrap into NAND flash, as shown in figure 4.1:
(1) In the SAM-BA GUI, select the NandFlash media tab;
(2) With NAND enabled, choose "Enable NandFlash" from the Scripts drop-down list, then click "Execute" to initialize the NAND flash, as shown in figure 4.2;
2) Erase the data previously burned into the chip (figure 4.2.2). The result is shown in figure 4.5.
A Simple Guide to Setting Up the LuCI Framework on a Plain Linux System

The LuCI framework is an open-source web-based management tool for embedded devices.
Developed as part of the OpenWrt project, it provides a simple, easy-to-use management interface that lets users manage and configure embedded devices from a web browser.
This document describes in detail how to set up the LuCI framework on a plain Linux system.
Step 1: Install OpenWrt
First, we need to install the OpenWrt system. Open a terminal and refresh the package lists:

sudo apt update

Then install the packages OpenWrt needs. Run the following command to install the OpenWrt package onto the system:

sudo apt install openwrt

Once the installation completes, we can start configuring OpenWrt.
Step 2: Configure OpenWrt
Before configuring OpenWrt, some basic networking knowledge is required. First, determine the name of your network adapter; you can check it with:

ifconfig

Then edit the network configuration file:

sudo nano /etc/config/network

In this file you set your network adapter name and IP address.
For example, if your network adapter is named "eth0" and its IP address is "192.168.1.1", change the file contents to:

config interface 'lan'
    option ifname 'eth0'
    option proto 'static'
    option ipaddr '192.168.1.1'
    option netmask '255.255.255.0'
    option gateway '192.168.1.1'

Save and close the file.
Then restart the networking service so the configuration takes effect:

sudo /etc/init.d/network restart

With the network configured, we can move on to installing the LuCI framework.
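The edit in step 2 can also be scripted. Here is a small sketch that writes the same 'lan' stanza from variables; the helper name is invented for this example, and you should point it at a scratch file first rather than the live /etc/config/network.

```shell
#!/bin/sh
# Sketch: generate the 'lan' stanza shown above from variables.
# Arguments: output file, interface name, IP address, gateway.
write_lan_config() {
    out="$1" ifname="$2" addr="$3" gw="$4"
    cat > "$out" <<EOF
config interface 'lan'
    option ifname '$ifname'
    option proto 'static'
    option ipaddr '$addr'
    option netmask '255.255.255.0'
    option gateway '$gw'
EOF
}
```

For example, `write_lan_config /tmp/network.test eth0 192.168.1.1 192.168.1.1` produces the exact configuration from step 2 in a file you can inspect before copying it into place.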
Step 3: Install the LuCI framework
First, we need to download the LuCI framework source code.
Hunan Risk Management Project: Linux Environment Setup

Contents
I. Oracle installation and configuration
   1. Create the user
   2. Install the prerequisite packages
   3. Edit the configuration files
   4. Install Oracle
   5. Problems during the Oracle installation and their solutions
II. WebLogic installation and configuration
   1. Install the JDK
   2. Edit the configuration file to switch to the graphical interface for installation
   3. Install WebLogic
   4. Create a domain
   5. Deploy the application
   6. Problems encountered during WebLogic installation and configuration, with solutions
III. SVN installation and configuration
   1. Installation
   2. Configuration
   3. Start SVN
   4. Basic tests
   5. Import the project
IV. Deploying the risk management project

I. Oracle installation and configuration
1. Create the user
The user's home directory is /home/oracle.
useradd -d /home/oracle oracle
passwd oracle        (set the password, e.g. oracle10)
2. Install the prerequisite packages
Insert the Linux installation disc and mount it on a directory:

mount /dev/sdr0 /mnt/cdrom

If you do not know the device path of the disc, check it with df -hl. Then cd /mnt/cdrom/Server and install the packages found there. They are all in RPM format, so install them with the rpm command:

# rpm -Uvh setarch-2*
# rpm -Uvh make-3*
# rpm -Uvh glibc-2*
# rpm -Uvh libaio-0*
# rpm -Uvh compat-libstdc++-33-3*
# rpm -Uvh compat-gcc-34-3*
# rpm -Uvh compat-gcc-34-c++-3*
# rpm -Uvh gcc-4*
# rpm -Uvh libXp-1*
# rpm -Uvh openmotif22-*
# rpm -Uvh compat-db-4*

(libXp-1 provides graphical-interface support; without it, running runInstaller fails with "libawt.so: libXp.so.6: cannot open shared object file: No such file or directory".)

If installing package B fails with "A is needed by B", package A must be installed first. If package A is unavailable, force the installation, for example:

rpm -i compat-db-4* --force --nodeps

Packages that still refuse to install can be skipped for now.
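When scripting this step, the "A is needed by B" messages can be parsed mechanically so a script knows which package to install first. A minimal sketch follows; the helper name and the sample package names are invented, and the message format is assumed to match the rpm output quoted above.

```shell
#!/bin/sh
# Sketch: pull the missing dependency's name out of an rpm
# "A is needed by B" error line, so a script can install A first.
missing_dep() {
    printf '%s\n' "$1" | sed -n 's/^\(.*\) is needed by .*$/\1/p'
}
```

For example, `missing_dep "compat-db-4.1 is needed by some-package"` prints `compat-db-4.1`, and prints nothing when the line is not a dependency error.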
3. Edit the configuration files
A series of configuration files must be edited before the installation.

vi /etc/redhat-release

Change the Red Hat version number to 4 (assuming the current system is version 4 or later). After the installation completes, change it back.
Edit the kernel parameters:

# vi /etc/sysctl.conf

This is really an addition rather than a modification: append all of the following to the end of the file:

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144

Afterwards run sysctl -p so the changes take effect immediately.
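Before running sysctl -p it can be worth double-checking what actually landed in the file. A throwaway sketch for reading one key back (the helper name is invented for this example; the "key = value" line format matches the entries above):

```shell
#!/bin/sh
# Sketch: read the last value assigned to a key in a sysctl.conf-style
# file; the last assignment wins, mirroring how sysctl -p applies them.
sysctl_value() {
    file="$1" key="$2"
    sed -n "s/^$key[[:space:]]*=[[:space:]]*//p" "$file" | tail -n 1
}
```

For example, `sysctl_value /etc/sysctl.conf kernel.shmmax` should print 2147483648 after the edit above.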
vi /etc/security/limits.conf

Append the following four lines (they raise the shell limits to improve Oracle's performance on Linux):

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

Next edit /etc/pam.d/login so that the shell limits take effect, adding the following line:

session required pam_limits.so

Set the IP address (this can also be done with the graphical tool that ships with Linux):

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=static
HWADDR=00:0C:29:4B:17:C4
ONBOOT=yes
IPADDR=192.168.68.98        (the IP address)
NETMASK=255.255.255.0       (the netmask)
GATEWAY=192.168.68.10       (the gateway)

To add DNS servers, edit /etc/resolv.conf; each nameserver line holds one DNS address:

vi /etc/resolv.conf
search
nameserver 210.34.0.14
nameserver 210.34.0.2

Configure Oracle's environment variables. Before that, create the required directories (including the installation directory), otherwise switching to the oracle user later reports an error:

mkdir -p /oracle/product/10.2.0/db_1

Then hand the directory to the oracle user and set its permissions:

chown -R oracle /oracle
chmod -R 775 /oracle
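When automating this step it helps to read values back out of the ifcfg file before restarting the network. A sketch follows; the helper name is invented, and the KEY=value format is the one used in the file above.

```shell
#!/bin/sh
# Sketch: read one KEY=value entry from an ifcfg-style file, so a
# script can confirm IPADDR/GATEWAY before bringing the interface up.
ifcfg_get() {
    file="$1" key="$2"
    sed -n "s/^$key=//p" "$file" | tail -n 1
}
```

For example, `ifcfg_get /etc/sysconfig/network-scripts/ifcfg-eth0 IPADDR` should print 192.168.68.98 after the edit above.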
Finally, edit .bash_profile. Remember to do this in the user's home directory, because that is where the file lives:

vi .bash_profile

Add the following:

export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORACLE_SID=orcl
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export CLASSPATH

ORACLE_BASE is the Oracle installation directory; putting it under the filesystem root keeps it separate from the user's home directory. Set ORACLE_SID to whatever SID you need; for example, change orcl to PDBQZ. Run source .bash_profile for the settings to take effect. Note that the entries in these variables are separated by colons, not semicolons; this differs from Windows.

Then open /etc/selinux/config:

vi /etc/selinux/config

and set SELINUX=disabled.

4. Install Oracle
Copy the installation files to /oracle or /home/oracle, unpack them, enter the directory, and run ./runInstaller as the oracle user. If it reports that DISPLAY is not set, run xdpyinfo as root and check the "name of display" value; it should be 127.0.0.1:1.0, 127.0.0.1:0.0, :0.0, or :1.0.
Then run xhost +, switch back to the oracle user, and run export DISPLAY=:0.0 (use the exact value reported under root). Run xdpyinfo again; if its output matches what you saw as root, you are ready. Finally run export LANG=us to change the installer's character set, which avoids garbled text during installation.
After the installation completes, restore the language setting with export LC_CTYPE=zh_CN.UTF-8.

5. Problems during the Oracle installation and their solutions
If running the installer as the oracle user fails with an Xlib error, enter at a root command line:

# xhost local:oracle
non-network local connections being added to access control list

During the installation you may see an error saying that /tmp is not writable or needs at least 60 MB of free space. This can mean the filesystem is incompatible with Oracle. Either change the filesystem format of the logical extended partition, or create a directory oratmp under /tmp and add the following to Oracle's environment variables:

TMP=/tmp
TMPDIR=$TMP

II. WebLogic installation and configuration
This document covers installing the weblogic.jar archive in a Linux environment; with the .bin package the JDK installation can be skipped.
Environment: Red Hat 5.8 Linux for 64-bit
JDK: jdk-6u24-linux-x64.bin
WebLogic: wls1034_generic.jar

1. Install the JDK
The JDK is needed so that .jar files can be executed under Linux.
1.1 Create a jdk folder under the root directory: $ mkdir /jdk
1.2 Copy the jdk-6u24-linux-x64.bin file into the jdk folder.
1.3 As root, create the weblogic user: $ useradd -d /usr/weblogic -m weblogic (-d specifies the home directory, -m creates it)
1.4 Give the jdk folder to the weblogic user: $ chown -R weblogic:weblogic /jdk
1.5 Install the JDK as the weblogic user:
1.5.1 Switch to the weblogic user: $ su - weblogic
1.5.2 Enter the jdk directory: $ cd /jdk
1.5.3 Install the JDK: $ ./jdk-6u24-linux-x64.bin
During installation the license text appears; keep pressing Enter, and when the confirmation prompt appears, type "yes" and wait for the installation to finish.
1.6 Configure the JDK environment variables: vi .bash_profile, appending at the end of the file:

JAVA_HOME=/usr/share/jdk1.5.0_05
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH

Then log in again.
2. Edit the configuration file to switch to the graphical interface for installation
2.1 Open the configuration file: vi /etc/inittab. Its contents look like this:

# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11      (with this, the system presents a graphical login screen)
#   6 - reboot (Do NOT set initdefault to this)
id:3:initdefault:

If the runlevel on the id line is 3, the system boots to a text login prompt; if it is 5, the system boots to the graphical interface.
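The runlevel check above can be automated. A sketch that reads the default runlevel from an inittab-style file (the helper name is invented for this example):

```shell
#!/bin/sh
# Sketch: extract the default runlevel from an /etc/inittab-style file,
# so a script can tell text mode (3) from graphical mode (5).
initdefault() {
    sed -n 's/^id:\([0-6]\):initdefault:.*/\1/p' "$1"
}
```

On a SysV-init system, `initdefault /etc/inittab` prints 3 or 5; change the line to id:5:initdefault: and reboot to get the graphical environment for the WebLogic installer.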