Steps to deploy OpenStack Juno on Ubuntu 14.04
1. Set Network
Install three Ubuntu 14.04 virtual machines in VMware and set their IP addresses with the graphical network tool, or edit /etc/network/interfaces directly with vi.
2. hostname
vi /etc/hosts
# controller
192.168.128.100 controller
# compute
192.168.128.101 compute
# network
192.168.128.102 network
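Each /etc/hosts line maps an address to a name, so every node can reach the others by hostname. A small sketch of that lookup, parsing a copy of the three entries above with awk:

```shell
# Hypothetical check: given the three /etc/hosts entries from this guide,
# find the address that the name "compute" maps to.
hosts='192.168.128.100 controller
192.168.128.101 compute
192.168.128.102 network'
ip=$(printf '%s\n' "$hosts" | awk '$2 == "compute" {print $1}')
echo "$ip"   # → 192.168.128.101
```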
3. update system
sudo apt-get update && sudo apt-get update --fix-missing && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y
4. ntp
Install on all nodes:
apt-get install ntp
controller:
Edit /etc/ntp.conf:
server ntp.ubuntu.com iburst   (the controller should sync from an upstream server, not from itself)
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify
network:
server controller iburst
compute:
server controller iburst
Verify the time synchronization: ntpq -p
Note: comment out the redundant default NTP server lines in /etc/ntp.conf.
4.1 Controller: install MySQL
Installing MySQL 5.5 via apt-get was problematic, so MySQL 5.6 was downloaded manually; the archive unpacks into .deb files:
dpkg -i *.deb
apt-get install python-mysqldb
apt-get install phpmyadmin
Edit the [mysqld] section of /etc/mysql/my.cnf to enable InnoDB and the UTF-8 character set and collation:
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
bind-address = 0.0.0.0
5. Install the OpenStack Juno packages
All nodes:
Note (configure prerequisites): install the python-software-properties package to ease repository management; apt-get install python-software-properties reports that it has been replaced by software-properties-common, so:
apt-get install software-properties-common
Note (enable the OpenStack repository): enable the Ubuntu Cloud Archive repository:
add-apt-repository cloud-archive:juno
Note (finalize installation): upgrade the packages on your system:
apt-get update && apt-get dist-upgrade
6. Controller: RabbitMQ
apt-get install rabbitmq-server
rabbitmqctl change_password guest guest
7. Install the Keystone identity service
1. Install the OpenStack identity service on the controller node:
# apt-get install keystone
2. Edit /etc/keystone/keystone.conf and change the [database] section:
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
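The connection string packs driver, user, password, host, and database into one URL. As an illustrative sketch (values from this guide; this parsing is for demonstration only, not what SQLAlchemy actually does internally):

```shell
conn='mysql://keystone:KEYSTONE_DBPASS@controller/keystone'
rest=${conn#mysql://}        # keystone:KEYSTONE_DBPASS@controller/keystone
user=${rest%%:*}             # part before the first ":"  -> keystone
host=${rest#*@}              # part after "@"             -> controller/keystone
host=${host%%/*}             #                            -> controller
db=${rest##*/}               # part after the last "/"    -> keystone
echo "$user $host $db"       # → keystone controller keystone
```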
3. By default, the Ubuntu package creates an SQLite database. Delete the keystone.db file under /var/lib/keystone/ so that it is not used by mistake:
# rm /var/lib/keystone/keystone.db
4. Create the keystone database and a database user, logging in with the root password you set earlier:
$ mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'feimeng';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'feimeng';
mysql> exit
5. Create the database tables for the identity service:
su -s /bin/sh -c "keystone-manage db_sync" keystone
rm -f /var/lib/keystone/keystone.db
service keystone restart
Schedule an hourly token-flush job, logging to /var/log/keystone/keystone-tokenflush.log:
(crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/crontabs/keystone
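The one-liner above is idempotent: the crontab entry is appended only when no token_flush line exists yet, so re-running this step cannot create duplicates. A minimal sketch of the same guard, using a temporary file instead of the real crontab (paths here are illustrative):

```shell
tab=$(mktemp)
add_flush_job() {
    # append only when the marker is absent (same pattern as the crontab one-liner)
    grep -q token_flush "$tab" || echo '@hourly /usr/bin/keystone-manage token_flush' >> "$tab"
}
add_flush_job
add_flush_job                  # second call is a no-op
grep -c token_flush "$tab"     # → 1
```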
8. Controller: default keystone users
Environment variables (these can also be written to /etc/profile and loaded with source /etc/profile):
export OS_SERVICE_TOKEN=feimeng
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
Create the users and roles:
keystone tenant-create --name admin --description "Admin Tenant"
keystone user-create --name admin --pass feimeng --email admin@example.com
keystone role-create --name admin
keystone user-role-add --tenant admin --user admin --role admin
keystone role-create --name _member_
keystone user-role-add --tenant admin --user admin --role _member_
keystone tenant-create --name demo --description "Demo Tenant"
keystone user-create --name demo --pass feimeng --email demo@example.com
keystone user-role-add --tenant demo --user demo --role _member_
keystone tenant-create --name service --description "Service Tenant"
Controller keystone endpoint:
Create a service entry for the identity service and specify its API endpoints:
keystone service-create --name keystone --type identity --description "OpenStack Identity"
keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') --publicurl http://controller:5000/v2.0 --internalurl http://controller:5000/v2.0 --adminurl http://controller:35357/v2.0 --region regionOne
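The $(keystone service-list | awk '/ identity / {print $2}') substitution picks the ID column out of the table that keystone service-list prints. A sketch with a canned table (the ID below is made up):

```shell
sample='+----------------------------------+----------+----------+
| 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d | keystone | identity |
+----------------------------------+----------+----------+'
# awk splits on whitespace: $1 is the leading "|", $2 is the id column
id=$(printf '%s\n' "$sample" | awk '/ identity / {print $2}')
echo "$id"   # → 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d
```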
Verify the identity service installation:
unset OS_SERVICE_TOKEN
unset OS_SERVICE_ENDPOINT
keystone --os-username=admin --os-password=feimeng --os-auth-url=http://controller:35357/v2.0 token-get
keystone --os-username=admin --os-password=feimeng --os-auth-url=http://controller:35357/v2.0 tenant-list
root@controller:/home/mengfei# keystone --os-tenant-name admin --os-username admin --os-password feimeng --os-auth-url http://controller:35357/v2.0 token-get
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2014-11-14T03:59:34Z |
| id | 151ee3f9f0eb4168bd145c2d4ae5c2fb |
| tenant_id | 0d8f52e0c974485da995ed6964c84fdc |
| user_id | 72bfd7738cc64decb88e7b5b2080fa43 |
+-----------+----------------------------------+
root@controller:/home/mengfei#
root@controller:/home/mengfei# keystone --os-tenant-name admin --os-username admin --os-password feimeng --os-auth-url http://controller:35357/v2.0 tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 0d8f52e0c974485da995ed6964c84fdc | admin | True |
| 4b7bab4524194ffabc2df4227abf0189 | demo | True |
| 3221533ffd0e4e55b55d696ec7a2a047 | service | True |
+----------------------------------+---------+---------+
root@controller:/home/mengfei#
To simplify command-line usage in your environment (avoiding the --os-* options), create an admin-openrc.sh file to hold the administrative credentials and endpoint:
export OS_USERNAME=admin
export OS_PASSWORD=feimeng
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
Load the environment variables:
source admin-openrc.sh
root@controller:/home/mengfei# keystone token-get
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2014-11-14T04:05:12Z |
| id | dfb9fe7d8adf41ad9a0e261175ad9823 |
| tenant_id | 0d8f52e0c974485da995ed6964c84fdc |
| user_id | 72bfd7738cc64decb88e7b5b2080fa43 |
+-----------+----------------------------------+
root@controller:/home/mengfei# keystone user-list
+----------------------------------+-------+---------+--------------+
| id | name | enabled | email |
+----------------------------------+-------+---------+--------------+
| 72bfd7738cc64decb88e7b5b2080fa43 | admin | True    | admin@example.com |
| 02f2136a31d84bee98989eb5b3263ac3 | demo  | True    | demo@example.com  |
+----------------------------------+-------+---------+--------------+
root@controller:/home/mengfei# keystone user-role-list --user admin --tenant admin
+----------------------------------+----------+----------------------------------+----------------------------------+
| id | name | user_id | tenant_id |
+----------------------------------+----------+----------------------------------+----------------------------------+
| cc4cdeca07cf473abe092a4e4b578e29 | _member_ | 72bfd7738cc64decb88e7b5b2080fa43 | 0d8f52e0c974485da995ed6964c84fdc |
| df6fecd283d0430397e4404051d0c032 | admin | 72bfd7738cc64decb88e7b5b2080fa43 | 0d8f52e0c974485da995ed6964c84fdc |
+----------------------------------+----------+----------------------------------+----------------------------------+
root@controller:/home/mengfei#
9. Controller: install the Glance image service
Create the glance database on the controller node:
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'feimeng';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'feimeng';
rm /var/lib/glance/glance.sqlite
Install the packages:
apt-get install glance python-glanceclient
Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:
[database]
connection = mysql://glance:feimeng@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = feimeng
[paste_deploy]
flavor = keystone
In /etc/glance/glance-api.conf only, also set:
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
Create the database tables for the image service:
su -s /bin/sh -c "glance-manage db_sync" glance
Create the glance user with an email address; use the service tenant and give the user the admin role:
keystone user-create --name=glance --pass=feimeng --email=glance@example.com
keystone user-role-add --user=glance --tenant=service --role=admin
Register the service and create the endpoint:
keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://controller:9292 \
--internalurl=http://controller:9292 \
--adminurl=http://controller:9292
Restart the glance services:
service glance-registry restart
service glance-api restart
Upload an image:
curl -O http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress
Download an image:
glance image-download "cirros-0.3.3-i386-disk" > cirros.img
cd /home/mengfei/soft
root@controller:/home/mengfei/soft# glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum | 133eae9fb1c98f45894a4e60d8736619 |
| container_format | bare |
| created_at | 2014-11-14T03:42:12 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | 62f996a1-a507-4d24-91f7-ec35433e4a32 |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-0.3.3-x86_64 |
| owner | 0d8f52e0c974485da995ed6964c84fdc |
| protected | False |
| size | 13200896 |
| status | active |
| updated_at | 2014-11-14T03:42:13 |
| virtual_size | None |
+------------------+--------------------------------------+
root@controller:/home/mengfei/soft#
root@controller:/home/mengfei/soft# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 62f996a1-a507-4d24-91f7-ec35433e4a32 | cirros-0.3.3-x86_64 | qcow2 | bare | 13200896 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
root@controller:/home/mengfei/soft#
10. Controller: install the Nova compute service
apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
Create the nova database on the controller node:
mysql -u root -p
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'feimeng';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'feimeng';
Create the nova user; use the service tenant and give the user the admin role:
keystone user-create --name=nova --pass=feimeng --email=nova@example.com
keystone user-role-add --user=nova --tenant=service --role=admin
Edit /etc/nova/nova.conf:
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
enabled_apis=ec2,osapi_compute,metadata
auth_strategy = keystone
#compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
#osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
#allow_admin_api=true
#s3_host=controller
#cc_host=controller
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
my_ip = 192.168.128.100
vncserver_listen = 192.168.128.100
vncserver_proxyclient_address = 192.168.128.100
[database]
connection = mysql://nova:feimeng@controller/nova
[keystone_authtoken]
auth_uri = http://controller:5000
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = feimeng
[glance]
host = controller
Register the service and specify the endpoint:
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://controller:8774/v2/%\(tenant_id\)s \
--internalurl=http://controller:8774/v2/%\(tenant_id\)s \
--adminurl=http://controller:8774/v2/%\(tenant_id\)s
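The backslashes in %\(tenant_id\)s are shell escaping: unquoted parentheses inside a word are a syntax error in bash, and the string %(tenant_id)s must reach keystone literally so nova can substitute each request's tenant ID. A quick check of what the shell actually passes through:

```shell
# the escaped parentheses survive as literal characters in the stored value
url=http://controller:8774/v2/%\(tenant_id\)s
echo "$url"   # → http://controller:8774/v2/%(tenant_id)s
```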
su -s /bin/sh -c "nova-manage db sync" nova
rm /var/lib/nova/nova.sqlite
Restart the compute services:
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
Verify your configuration by listing the available images and services:
nova image-list
nova service-list
nova-manage service list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 62f996a1-a507-4d24-91f7-ec35433e4a32 | cirros-0.3.3-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
root@controller:/var/log/nova#
root@controller:/var/log/nova# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-cert | controller | internal | enabled | up | 2014-11-14T07:19:20.000000 | - |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2014-11-14T07:19:21.000000 | - |
| 3 | nova-scheduler | controller | internal | enabled | up | 2014-11-14T07:19:21.000000 | - |
| 4 | nova-conductor | controller | internal | enabled | up | 2014-11-14T07:19:21.000000 | - |
| 5 | nova-compute | compute | nova | enabled | down | - | - |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
root@controller:/var/log/nova#
root@controller:/var/log/nova# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert controller internal enabled :-) 2014-11-14 07:20:00
nova-consoleauth controller internal enabled :-) 2014-11-14 07:20:01
nova-scheduler controller internal enabled :-) 2014-11-14 07:20:01
nova-conductor controller internal enabled :-) 2014-11-14 07:20:01
nova-compute compute nova enabled XXX None
root@controller:/var/log/nova#
11. Compute node: install the nova service
apt-get install nova-compute
apt-get install python-novaclient
Edit /etc/nova/nova.conf:
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
my_ip = 192.168.128.101
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address =192.168.128.101
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = feimeng
[glance]
host = controller
Edit /etc/nova/nova-compute.conf:
[libvirt]
virt_type = qemu
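virt_type = qemu is needed here because the compute node is itself a VMware guest and (unless nested virtualization is enabled) exposes no hardware virtualization to nova. The usual check is to count vmx/svm flags in /proc/cpuinfo; zero matches means no VT-x/AMD-V, so fall back to qemu:

```shell
# count hardware-virtualization CPU flags; 0 -> plain qemu, otherwise kvm works
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo)
if [ "$count" -eq 0 ]; then mode=qemu; else mode=kvm; fi
echo "$mode"
```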
service nova-compute restart
rm -f /var/lib/nova/nova.sqlite
Verify the nova-compute service:
source admin-openrc.sh
nova service-list
root@compute:/home/mengfei# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-cert | controller | internal | enabled | up | 2014-11-14T09:43:22.000000 | - |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2014-11-14T09:43:22.000000 | - |
| 3 | nova-scheduler | controller | internal | enabled | up | 2014-11-14T09:43:21.000000 | - |
| 4 | nova-conductor | controller | internal | enabled | up | 2014-11-14T09:43:21.000000 | - |
| 5 | nova-compute | compute | nova | enabled | up | 2014-11-14T09:43:13.000000 | - |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
root@compute:/home/mengfei#
Note: installing nova-compute on the compute node took quite a while; the nova-compute service refused to start, and the problem was only resolved after repeatedly reinstalling nova-compute.
Sometimes a direct apt-get install nova-compute installs incompletely, possibly because of repository problems. After apt-get update/upgrade, running apt-get install nova-compute again pulled in many related packages (notably nova-common), and after a few attempts the installation succeeded.
apt-get install python-novaclient is installed so that nova commands can be run on the compute node.
12. Controller node – install Neutron
Create the neutron database:
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'feimeng';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'feimeng';
Create identity service credentials for networking.
Create the neutron user:
keystone user-create --name neutron --pass feimeng --email neutron@example.com
keystone user-role-add --user neutron --tenant service --role admin
Create the neutron service:
keystone service-create --name neutron --type network --description "OpenStack Networking"
Create the service endpoint:
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696
Install the networking components:
apt-get install neutron-server neutron-plugin-ml2 python-neutronclient
Configure the networking server component.
Edit /etc/neutron/neutron.conf:
[DEFAULT]
verbose = True
auth_strategy = keystone
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_auth_url = http://controller:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = 3221533ffd0e4e55b55d696ec7a2a047
nova_admin_password = feimeng
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = feimeng
[database]
connection = mysql://neutron:feimeng@controller/neutron
Configure the Modular Layer 2 (ML2) plug-in.
The ML2 plug-in uses the Open vSwitch (OVS) mechanism to build virtual networks. The controller node does not need the OVS agent or service, however, because it does not handle instance network traffic.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
tenant_network_types = vlan
[ml2_type_vlan]
network_vlan_ranges = physnet2:1000:2999
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
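network_vlan_ranges = physnet2:1000:2999 means: on the physical network labelled physnet2, tenant networks may allocate VLAN IDs 1000 through 2999 (physnet2 is only a label, and must match the bridge_mappings configured on the network and compute nodes). A sketch of how the label:min:max value splits:

```shell
range='physnet2:1000:2999'
# split the value on ":" the same way the option is structured: label, min, max
IFS=: read -r label vlan_min vlan_max <<EOF
$range
EOF
echo "$label $vlan_min $vlan_max"   # → physnet2 1000 2999
```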
Configure Compute to use Networking.
Edit /etc/nova/nova.conf:
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = feimeng
Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
root@controller:/home/mengfei# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
INFO [alembic.migration] Context impl MySQLImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade None -> havana, havana_initial
INFO [alembic.migration] Running upgrade havana -> e197124d4b9, add unique constraint to members
INFO [alembic.migration] Running upgrade e197124d4b9 -> 1fcfc149aca4, Add a unique constraint on (agent_type, host) columns to prevent a race
condition when an agent entry is 'upserted'.
INFO [alembic.migration] Running upgrade 1fcfc149aca4 -> 50e86cb2637a, nsx_mappings
INFO [alembic.migration] Running upgrade 50e86cb2637a -> 1421183d533f, NSX DHCP/metadata support
INFO [alembic.migration] Running upgrade 1421183d533f -> 3d3cb89d84ee, nsx_switch_mappings
INFO [alembic.migration] Running upgrade 3d3cb89d84ee -> 4ca36cfc898c, nsx_router_mappings
INFO [alembic.migration] Running upgrade 4ca36cfc898c -> 27cc183af192, ml2_vnic_type
INFO [alembic.migration] Running upgrade 27cc183af192 -> 50d5ba354c23, ml2 binding:vif_details
INFO [alembic.migration] Running upgrade 50d5ba354c23 -> 157a5d299379, ml2 binding:profile
INFO [alembic.migration] Running upgrade 157a5d299379 -> 3d2585038b95, VMware NSX rebranding
INFO [alembic.migration] Running upgrade 3d2585038b95 -> abc88c33f74f, lb stats
INFO [alembic.migration] Running upgrade abc88c33f74f -> 1b2580001654, nsx_sec_group_mapping
INFO [alembic.migration] Running upgrade 1b2580001654 -> e766b19a3bb, nuage_initial
INFO [alembic.migration] Running upgrade e766b19a3bb -> 2eeaf963a447, floatingip_status
INFO [alembic.migration] Running upgrade 2eeaf963a447 -> 492a106273f8, Brocade ML2 Mech. Driver
INFO [alembic.migration] Running upgrade 492a106273f8 -> 24c7ea5160d7, Cisco CSR VPNaaS
INFO [alembic.migration] Running upgrade 24c7ea5160d7 -> 81c553f3776c, bsn_consistencyhashes
INFO [alembic.migration] Running upgrade 81c553f3776c -> 117643811bca, nec: delete old ofc mapping tables
INFO [alembic.migration] Running upgrade 117643811bca -> 19180cf98af6, nsx_gw_devices
INFO [alembic.migration] Running upgrade 19180cf98af6 -> 33dd0a9fa487, embrane_lbaas_driver
INFO [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051, n1kv segment allocs for cisco n1kv plugin
INFO [alembic.migration] Running upgrade 5ac1c354a051 -> icehouse, icehouse
INFO [alembic.migration] Running upgrade icehouse -> 54f7549a0e5f, set_not_null_peer_address
INFO [alembic.migration] Running upgrade 54f7549a0e5f -> 1e5dd1d09b22, set_not_null_fields_lb_stats
INFO [alembic.migration] Running upgrade 1e5dd1d09b22 -> b65aa907aec, set_length_of_protocol_field
INFO [alembic.migration] Running upgrade b65aa907aec -> 33c3db036fe4, set_length_of_description_field_metering
INFO [alembic.migration] Running upgrade 33c3db036fe4 -> 4eca4a84f08a, Remove ML2 Cisco Credentials DB
INFO [alembic.migration] Running upgrade 4eca4a84f08a -> d06e871c0d5, set_admin_state_up_not_null_ml2
INFO [alembic.migration] Running upgrade d06e871c0d5 -> 6be312499f9, set_not_null_vlan_id_cisco
INFO [alembic.migration] Running upgrade 6be312499f9 -> 1b837a7125a9, Cisco APIC Mechanism Driver
INFO [alembic.migration] Running upgrade 1b837a7125a9 -> 10cd28e692e9, nuage_extraroute
INFO [alembic.migration] Running upgrade 10cd28e692e9 -> 2db5203cb7a9, nuage_floatingip
INFO [alembic.migration] Running upgrade 2db5203cb7a9 -> 5446f2a45467, set_server_default
INFO [alembic.migration] Running upgrade 5446f2a45467 -> db_healing, Include all tables and make migrations unconditional.
INFO [alembic.migration] Context impl MySQLImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.autogenerate.compare] Detected server default on column 'cisco_ml2_apic_epgs.provider'
INFO [alembic.autogenerate.compare] Detected removed index 'cisco_n1kv_vlan_allocations_ibfk_1' on 'cisco_n1kv_vlan_allocations'
INFO [alembic.autogenerate.compare] Detected server default on column 'cisco_n1kv_vxlan_allocations.allocated'
INFO [alembic.autogenerate.compare] Detected removed index 'cisco_n1kv_vxlan_allocations_ibfk_1' on 'cisco_n1kv_vxlan_allocations'
INFO [alembic.autogenerate.compare] Detected removed index 'embrane_pool_port_ibfk_2' on 'embrane_pool_port'
INFO [alembic.autogenerate.compare] Detected removed index 'firewall_rules_ibfk_1' on 'firewall_rules'
INFO [alembic.autogenerate.compare] Detected removed index 'firewalls_ibfk_1' on 'firewalls'
INFO [alembic.autogenerate.compare] Detected server default on column 'meteringlabelrules.excluded'
INFO [alembic.autogenerate.compare] Detected server default on column 'ml2_port_bindings.host'
INFO [alembic.autogenerate.compare] Detected added column 'nuage_routerroutes_mapping.destination'
INFO [alembic.autogenerate.compare] Detected added column 'nuage_routerroutes_mapping.nexthop'
INFO [alembic.autogenerate.compare] Detected server default on column 'poolmonitorassociations.status'
INFO [alembic.autogenerate.compare] Detected added index 'ix_quotas_tenant_id' on '['tenant_id']'
INFO [alembic.autogenerate.compare] Detected NULL on column 'tz_network_bindings.phy_uuid'
INFO [alembic.autogenerate.compare] Detected NULL on column 'tz_network_bindings.vlan_id'
INFO [neutron.db.migration.alembic_migrations.heal_script] Detected removed foreign key u'nuage_floatingip_pool_mapping_ibfk_2' on table u'nuage_floatingip_pool_mapping'
INFO [alembic.migration] Running upgrade db_healing -> 3927f7f7c456, L3 extension distributed mode
INFO [alembic.migration] Running upgrade 3927f7f7c456 -> 2026156eab2f, L2 models to support DVR
INFO [alembic.migration] Running upgrade 2026156eab2f -> 37f322991f59, removing_mapping_tables
INFO [alembic.migration] Running upgrade 37f322991f59 -> 31d7f831a591, add constraint for routerid
INFO [alembic.migration] Running upgrade 31d7f831a591 -> 5589aa32bf80, L3 scheduler additions to support DVR
INFO [alembic.migration] Running upgrade 5589aa32bf80 -> 884573acbf1c, Drop NSX table in favor of the extra_attributes one
INFO [alembic.migration] Running upgrade 884573acbf1c -> 4eba2f05c2f4, correct Vxlan Endpoint primary key
INFO [alembic.migration] Running upgrade 4eba2f05c2f4 -> 327ee5fde2c7, set_innodb_engine
INFO [alembic.migration] Running upgrade 327ee5fde2c7 -> 3b85b693a95f, Drop unused servicedefinitions and servicetypes tables.
INFO [alembic.migration] Running upgrade 3b85b693a95f -> aae5706a396, nuage_provider_networks
INFO [alembic.migration] Running upgrade aae5706a396 -> 32f3915891fd, cisco_apic_driver_update
INFO [alembic.migration] Running upgrade 32f3915891fd -> 58fe87a01143, cisco_csr_routing
INFO [alembic.migration] Running upgrade 58fe87a01143 -> 236b90af57ab, ml2_type_driver_refactor_dynamic_segments
INFO [alembic.migration] Running upgrade 236b90af57ab -> 86d6d9776e2b, Cisco APIC Mechanism Driver
INFO [alembic.migration] Running upgrade 86d6d9776e2b -> 16a27a58e093, ext_l3_ha_mode
INFO [alembic.migration] Running upgrade 16a27a58e093 -> 3c346828361e, metering_label_shared
INFO [alembic.migration] Running upgrade 3c346828361e -> 1680e1f0c4dc, Remove Cisco Nexus Monolithic Plugin
INFO [alembic.migration] Running upgrade 1680e1f0c4dc -> 544673ac99ab, add router port relationship
INFO [alembic.migration] Running upgrade 544673ac99ab -> juno, juno
Restart the compute services:
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
Restart the networking service:
service neutron-server restart
Verify the networking service:
neutron ext-list
root@controller:/home/mengfei# neutron ext-list
+-----------------------+-----------------------------------------------+
| alias | name |
+-----------------------+-----------------------------------------------+
| security-group | security-group |
| l3_agent_scheduler | L3 Agent Scheduler |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| provider | Provider Network |
| agent | agent |
| quotas | Quota management support |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| l3-ha | HA Router extension |
| multi-provider | Multi Provider Network |
| external-net | Neutron external network |
| router | Neutron L3 Router |
| allowed-address-pairs | Allowed Address Pairs |
| extraroute | Neutron Extra Route |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| dvr | Distributed Virtual Router |
+-----------------------+-----------------------------------------------+
root@controller:/home/mengfei#
13. Network node – install Neutron
To configure OpenStack networking, certain kernel networking parameters must be enabled.
Edit /etc/sysctl.conf:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Apply the changes:
sysctl -p
Install the networking components:
apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent openvswitch-datapath-dkms
#apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent ipset
Configure the common networking components.
The common component configuration includes the authentication mechanism, message broker, and plug-in.
Edit /etc/neutron/neutron.conf:
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = feimeng
Configure the Modular Layer 2 (ML2) plug-in.
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
tenant_network_types = vlan
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge
[ml2_type_vlan]
network_vlan_ranges = physnet2:1000:2999
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.10.10.102
tenant_network_type = vlan
integration_bridge = br-int
network_vlan_ranges = physnet2:1000:2999
bridge_mappings = physnet2:br-eth1
Configure the Layer 3 (L3) agent.
The L3 agent provides routing services for virtual networks.
vi /etc/neutron/l3_agent.ini
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
Configure the DHCP agent, which provides DHCP services for virtual networks.
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
Configure the metadata agent.
The metadata agent provides configuration information, such as credentials, to instances.
vi /etc/neutron/metadata_agent.ini
[DEFAULT]
verbose = True
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = feimeng
nova_metadata_ip = controller
metadata_proxy_shared_secret = feimeng
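metadata_proxy_shared_secret must be identical here and in nova.conf on the controller: the metadata agent signs each proxied request with an HMAC of the instance ID under this secret, and nova-api recomputes the signature to verify the request. A sketch of that kind of signing with openssl (the secret and instance ID are the values used in this guide; the exact header format is not shown):

```shell
secret=feimeng
instance_id=62f996a1-a507-4d24-91f7-ec35433e4a32
# hex HMAC-SHA256 of the instance id under the shared secret
sig=$(printf '%s' "$instance_id" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "$sig"
```

If the two secrets differ, the recomputed HMAC will not match and the metadata request is rejected.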
Perform the following two steps on the controller node.
On the controller node, set the metadata proxy shared secret (METADATA_SECRET) to the value you chose:
vi /etc/nova/nova.conf
[DEFAULT]
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = feimeng
On the controller node, restart the Compute API service:
service nova-api restart
Configure the Open vSwitch (OVS) service.
Restart the OVS service:
service openvswitch-switch restart
Add the bridge for the physical (VLAN) network:
ovs-vsctl add-br br-eth1
Add a port to the bridge that connects to the internal physical network interface:
ovs-vsctl add-port br-eth1 eth1
vi /etc/network/interfaces   (this step was left unresolved in the original notes)
To finish the installation, restart the networking services:
service neutron-plugin-openvswitch-agent restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
Verify the networking service:
neutron agent-list
root@network:/var/log/neutron# neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 8c399f6a-ce19-4eb9-b245-dff9d662d577 | Metadata agent | network | :-) | True | neutron-metadata-agent |
| a9f73657-bb65-491f-b63e-3cb720e6d7bd | DHCP agent | network | :-) | True | neutron-dhcp-agent |
| bbee1b57-6062-40ce-b20a-e2dd60ef4d7e | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| ebaf331c-efff-43b6-b37f-099897f1cee3 | L3 agent | network | :-) | True | neutron-l3-agent |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
root@network:/var/log/neutron#
14. Compute node – install Neutron
Configure the compute node.
To configure OpenStack networking, certain kernel networking parameters must be enabled:
vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Apply the changes:
sysctl -p
Install the networking components.
Per the Chinese-translated guide: apt-get install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms
Per the official online guide: apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent ipset
The common component configuration includes the authentication mechanism, message broker, and plug-in.
vi /etc/neutron/neutron.conf
[DEFAULT]
verbose = True
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = guest
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = feimeng
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances:
vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
tenant_network_types = vlan
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,linuxbridge
[ml2_type_vlan]
network_vlan_ranges = physnet2:1000:2999
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.10.10.101
tenant_network_type = vlan
integration_bridge = br-int
network_vlan_ranges = physnet2:1000:2999
bridge_mappings = physnet2:br-eth1
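The `network_vlan_ranges = physnet2:1000:2999` option above tells ML2 which VLAN IDs it may allocate on each physical network. A small sketch of how such an entry decomposes (the helper is mine, written to mirror the `physnet[:vlan_min:vlan_max]` format the option uses):

```python
def parse_vlan_range(entry):
    """Parse one network_vlan_ranges entry, e.g. 'physnet2:1000:2999'.
    A bare physnet name means any VLAN is allowed on that network."""
    parts = entry.split(":")
    if len(parts) == 1:
        return parts[0], None
    physnet, vmin, vmax = parts
    vmin, vmax = int(vmin), int(vmax)
    if not (1 <= vmin <= vmax <= 4094):  # valid 802.1Q VLAN IDs
        raise ValueError("VLAN range outside 1-4094: %s" % entry)
    return physnet, (vmin, vmax)

print(parse_vlan_range("physnet2:1000:2999"))  # ('physnet2', (1000, 2999))
```

Note that `bridge_mappings = physnet2:br-eth1` must name the same physnet, so that VLANs allocated from this range are trunked onto br-eth1.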
Configure the Open vSwitch (OVS) service
Restart the OVS service:
service openvswitch-switch restart
Add the bridge for the physical (provider) network and attach the interface:
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
Configure Compute to use Networking
vi /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = feimeng
Complete the installation
Restart the Compute service and start the Open vSwitch (OVS) agent:
service nova-compute restart
service neutron-plugin-openvswitch-agent restart
十五.Horizon Dashboard – install the web management interface
apt-get install openstack-dashboard apache2 libapache2-mod-wsgi memcached python-memcache
Note for Ubuntu users:
Remove the openstack-dashboard-ubuntu-theme package:
apt-get remove --purge openstack-dashboard-ubuntu-theme
Edit the dashboard configuration
In /etc/openstack-dashboard/local_settings.py, change the value of CACHES['default']['LOCATION'] to match the address configured in /etc/memcached.conf:
vi /etc/openstack-dashboard/local_settings.py
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
#TIME_ZONE = "Asia/Shanghai"
TIME_ZONE = "UTC"
Note: when set to "Asia/Shanghai", the apache2 log reported a time-zone error, so UTC is used here.
Start the Apache web server and memcached:
service apache2 restart
service memcached restart
Log in to the dashboard at http://192.168.128.100/horizon as user admin with password feimeng (the password given when the admin user was created in keystone).
十六:Verify all services
source /home/mengfei/admin-openrc.sh
keystone user-list
keystone role-list
keystone tenant-list
keystone service-list
keystone endpoint-list
nova-manage service list
neutron agent-list
nova image-list
root@controller:/home/mengfei# keystone user-list
+----------------------------------+---------+---------+---------------------+
| id | name | enabled | email |
+----------------------------------+---------+---------+---------------------+
| 72bfd7738cc64decb88e7b5b2080fa43 | admin | True | admin@example.com |
| 02f2136a31d84bee98989eb5b3263ac3 | demo | True | demo@example.com |
| 9516f705e48d4c2aa2947192342f0fd0 | glance | True | glance@example.com |
| 27d35c685ba14f9886893609c002340e | neutron | True | neutron@example.com |
| e31227b975084f07a7cd6140893b574d | nova | True | nova@example.com |
+----------------------------------+---------+---------+---------------------+
root@controller:/home/mengfei# keystone role-list
+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| cc4cdeca07cf473abe092a4e4b578e29 | _member_ |
| df6fecd283d0430397e4404051d0c032 | admin |
+----------------------------------+----------+
root@controller:/home/mengfei# keystone tenant-list
+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
| 0d8f52e0c974485da995ed6964c84fdc | admin | True |
| 4b7bab4524194ffabc2df4227abf0189 | demo | True |
| 3221533ffd0e4e55b55d696ec7a2a047 | service | True |
+----------------------------------+---------+---------+
root@controller:/home/mengfei# keystone service-list
+----------------------------------+----------+----------+-------------------------+
| id | name | type | description |
+----------------------------------+----------+----------+-------------------------+
| 9d8c28615887487983ba4c087af310c3 | glance | image | OpenStack Image Service |
| ac3f2d9c44434fbb828afc3ffb2027de | keystone | identity | OpenStack Identity |
| 77e89331ddaf478494b00215e10deab3 | neutron | network | OpenStack Networking |
| 77f1d1de8afe486f8f3a5d32b03e1326 | nova | compute | OpenStack Compute |
+----------------------------------+----------+----------+-------------------------+
root@controller:/home/mengfei# keystone endpoint-list
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
| 2c11bdfeeaf64c619b8e90e7ca96e237 | regionOne | http://controller:5000/v2.0 | http://controller:5000/v2.0 | http://controller:35357/v2.0 | ac3f2d9c44434fbb828afc3ffb2027de |
| 4ee7554e398248f8bc687386c89f9f4a | regionOne | http://controller:9292 | http://controller:9292 | http://controller:9292 | 9d8c28615887487983ba4c087af310c3 |
| 721f58c74f014dfca03eb68290194933 | regionOne | http://controller:9696 | http://controller:9696 | http://controller:9696 | 77e89331ddaf478494b00215e10deab3 |
| 7b4f59b3935a4b308da05aa711fad35b | regionOne | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | 77f1d1de8afe486f8f3a5d32b03e1326 |
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
root@controller:/home/mengfei#
root@controller:/home/mengfei# nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert controller internal enabled :-) 2014-11-14 14:06:59
nova-consoleauth controller internal enabled :-) 2014-11-14 14:07:03
nova-scheduler controller internal enabled :-) 2014-11-14 14:07:01
nova-conductor controller internal enabled :-) 2014-11-14 14:06:58
nova-compute compute nova enabled :-) 2014-11-14 14:07:01
root@controller:/home/mengfei# neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 3828c1df-bb60-4dbf-b064-6d45287ed21b | Open vSwitch agent | compute | :-) | True | neutron-openvswitch-agent |
| 8c399f6a-ce19-4eb9-b245-dff9d662d577 | Metadata agent | network | :-) | True | neutron-metadata-agent |
| a9f73657-bb65-491f-b63e-3cb720e6d7bd | DHCP agent | network | :-) | True | neutron-dhcp-agent |
| bbee1b57-6062-40ce-b20a-e2dd60ef4d7e | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| ebaf331c-efff-43b6-b37f-099897f1cee3 | L3 agent | network | :-) | True | neutron-l3-agent |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
root@controller:/home/mengfei#
root@controller:/home/mengfei# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 62f996a1-a507-4d24-91f7-ec35433e4a32 | cirros-0.3.3-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
root@controller:/home/mengfei#
Create the external network (run on the controller node):
source /home/mengfei/admin-openrc.sh
neutron net-create demo-ex --shared --router:external=True
neutron subnet-create demo-ex --name demo-ex-subnet \
--allocation-pool start=192.168.128.200,end=192.168.128.230 \
--disable-dhcp --gateway 192.168.128.1 192.168.128.0/24 --dns-nameserver 8.8.8.8
Note: sometimes after creation the router's internal interface stays down and the network is unreachable. Occasionally using 192.168.0.0/16 as the external subnet worked, but this was observed only a few times and is not conclusive. To delete and recreate:
neutron net-delete demo-ex
neutron subnet-delete demo-ex-subnet
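The subnet parameters used above can be sanity-checked offline with Python's ipaddress module before handing them to neutron; a sketch using exactly the values from this guide:

```python
import ipaddress

net = ipaddress.ip_network("192.168.128.0/24")
gateway = ipaddress.ip_address("192.168.128.1")
pool_start = ipaddress.ip_address("192.168.128.200")
pool_end = ipaddress.ip_address("192.168.128.230")

# Gateway and allocation pool must both lie inside the CIDR.
assert gateway in net
assert pool_start in net and pool_end in net and pool_start <= pool_end
# The gateway must not fall inside the allocation pool,
# or Neutron could hand it out as a floating IP.
assert not (pool_start <= gateway <= pool_end)

pool_size = int(pool_end) - int(pool_start) + 1
print(pool_size)  # 31 addresses available for floating IPs
```

Keeping the pool outside the range your LAN's DHCP server hands out (and away from the controller/network/compute addresses .100-.102) avoids address conflicts on the shared 192.168.128.0/24 network.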
root@controller:/home/mengfei# neutron net-create demo-ex --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | a72cfd21-8604-431e-ba42-65bd9fc3b114 |
| name | demo-ex |
| provider:network_type | vlan |
| provider:physical_network | physnet2 |
| provider:segmentation_id | 1002 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 0d8f52e0c974485da995ed6964c84fdc |
+---------------------------+--------------------------------------+
root@controller:/home/mengfei#
root@controller:/home/mengfei# neutron subnet-create demo-ex --name demo-ex-subnet \
> --allocation-pool start=192.168.128.200,end=192.168.128.230 \
> --disable-dhcp --gateway 192.168.128.1 192.168.128.0/24 --dns-nameserver 8.8.8.8
Created a new subnet:
+-------------------+--------------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------------+
| allocation_pools | {"start": "192.168.128.200", "end": "192.168.128.230"} |
| cidr | 192.168.128.0/24 |
| dns_nameservers | 8.8.8.8 |
| enable_dhcp | False |
| gateway_ip | 192.168.128.1 |
| host_routes | |
| id | de3ff776-8dc3-4d41-86a8-d0f1e14c0e97 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | demo-ex-subnet |
| network_id | 7741a549-f6fb-483d-b1d9-f8dc191b5f4a |
| tenant_id | 0d8f52e0c974485da995ed6964c84fdc |
+-------------------+--------------------------------------------------------+
root@controller:/home/mengfei#