OpenStack (Part 2)
Continued from the previous post.
Compute service (nova):
Install and configure the controller node:
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler
At this point one package is missing: python-pygments has to be downloaded and installed manually.
1. Source the admin credentials to gain access to admin-only CLI commands:
# . admin-openrc
2. To create the service credentials, complete these steps:
Create the nova user:
openstack user create --domain default \
--password-prompt nova
Add the admin role to the nova user:
openstack role add --project service --user nova admin
Create the nova service entity:
openstack service create --name nova \
--description "OpenStack Compute" compute
Create the Compute service API endpoints:
# openstack endpoint create --region RegionOne \
> compute public http://172.25.33.10:8774/v2.1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne compute internal http://172.25.33.10:8774/v2.1/%\(tenant_id\)s
+--------------+---------------------------------------------+
| Field | Value |
+--------------+---------------------------------------------+
| enabled | True |
| id | 44b3adb6ce2348908abbf4d3f9a52f2b |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a394a2c40c144d6fb9db567a1105c44a |
| service_name | nova |
| service_type | compute |
| url | http://172.25.33.10:8774/v2.1/%(tenant_id)s |
+--------------+---------------------------------------------+
# openstack endpoint create --region RegionOne compute admin http://172.25.33.10:8774/v2.1/%\(tenant_id\)s
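To double-check that all three compute endpoints were registered, you can list them (an optional verification, not part of the official steps):
# openstack endpoint list --service compute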
Edit the /etc/nova/nova.conf file and complete the following actions:
1. In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
enabled_apis = osapi_compute,metadata
In the [api_database] and [database] sections, configure database access:
[api_database]
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api
[database]
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
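These connection strings assume the nova_api and nova databases already exist and that the nova database user can reach them. If they were not created in the previous post, a minimal sketch (assuming the MariaDB root account and the password nova used in the URLs above) looks like this:
# mysql -u root -p <<'EOF'
CREATE DATABASE IF NOT EXISTS nova_api;
CREATE DATABASE IF NOT EXISTS nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
EOF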
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
In the [DEFAULT] section, set my_ip to the management interface IP address of the controller node (172.25.33.10 in this deployment):
[DEFAULT]
my_ip = 172.25.33.10
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
By default, Compute uses an internal firewall service. Since the Networking service includes a firewall service, you must disable the Compute firewall by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
In the [glance] section, configure the location of the Image service API:
[glance]
api_servers = http://172.25.33.10:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Populate the Compute databases:
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
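As a quick sanity check (assuming the nova/nova credentials from the connection strings above), you can confirm that the sync created tables in both databases:
# mysql -u nova -pnova -h 172.25.33.10 -e 'SHOW TABLES;' nova_api | wc -l
# mysql -u nova -pnova -h 172.25.33.10 -e 'SHOW TABLES;' nova | wc -l
Then enable and start the Compute services so they come up on boot: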
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
The resulting non-comment settings in /etc/nova/nova.conf:
# grep ^[a-Z] /etc/nova/nova.conf
rpc_backend = rabbit
enabled_apis = osapi_compute,metadata
auth_strategy = keystone
my_ip = 172.25.33.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
debug=true
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
api_servers = http://172.25.33.10:9292
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
lock_path = /var/lib/nova/tmp
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
Install and configure the compute node:
minion2: 172.25.33.11
Install the packages:
# yum install openstack-nova-compute
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure the RabbitMQ message queue connection:
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
In the [DEFAULT] section, configure the my_ip option:
[DEFAULT]
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on the compute node:
my_ip = 172.25.33.11
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
In the [vnc] section, enable and configure remote console access:
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://172.25.33.10:6080/vnc_auto.html
In the [glance] section, configure the location of the Image service API:
[glance]
api_servers = http://172.25.33.10:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Configuration the official documentation omits; without it the compute service fails with errors such as:
oslo_service.service [-] Error starting thread.
or: PlacementNotConfigured: This compute is not configured to talk to the placement service
[placement]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
os_region_name = RegionOne
Finalize the installation
1. Determine whether your compute node supports hardware acceleration for virtual machines:
# egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration and no additional configuration is needed.
If this command returns zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM:
# egrep -c '(vmx|svm)' /proc/cpuinfo
0
In the [libvirt] section of /etc/nova/nova.conf, make the following edit:
[libvirt]
virt_type = qemu
2. Start the Compute service and its dependencies, and configure them to start automatically when the system boots:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
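If openstack-nova-compute does not stay up, the placement-related errors mentioned above are usually visible in its log. A quick check (assuming the standard RDO log location):
# systemctl status openstack-nova-compute.service
# grep -iE 'error|placement' /var/log/nova/nova-compute.log | tail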
Verify operation (on the controller node, 172.25.33.10):
Source the admin credentials to gain access to admin-only CLI commands:
# . admin-openrc
List the service components to verify that each process started and registered successfully:
# openstack compute service list
+----+------------------+----------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                 | Zone     | Status  | State | Updated At                 |
+----+------------------+----------------------+----------+---------+-------+----------------------------+
| 1  | nova-conductor   | server10.example     | internal | enabled | up    | 2017-04-04T14:07:49.000000 |
| 2  | nova-scheduler   | server10.example     | internal | enabled | up    | 2017-04-04T14:07:51.000000 |
| 3  | nova-consoleauth | server10.example     | internal | enabled | up    | 2017-04-04T14:07:50.000000 |
| 6  | nova-compute     | server11.example.com | nova     | enabled | up    | 2017-04-04T14:07:51.000000 |
+----+------------------+----------------------+----------+---------+-------+----------------------------+
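The compute node can also be checked from the hypervisor side (an optional extra verification):
# openstack hypervisor list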
Networking service:
Controller node:
OpenStack Networking (neutron) manages all of the virtual networking infrastructure (VNI) and the access-layer portions of the physical networking infrastructure (PNI) in an OpenStack environment. OpenStack Networking lets tenants create advanced virtual network topologies that include services such as a firewall, a load balancer, and a virtual private network (VPN).
Configuration:
1. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
2. To create the service credentials, complete these steps:
Create the neutron user:
openstack user create --domain default --password-prompt neutron
Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
Create the neutron service entity:
# openstack service create --name neutron \
> --description "OpenStack Networking" network
Create the Networking service API endpoints:
# openstack endpoint create --region RegionOne \
> network public http://172.25.33.10:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0092457b66b84d869d710e84c715219c |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a33565b8fdfa4531963fdbb74245d960 |
| service_name | neutron |
| service_type | network |
| url | http://172.25.33.10:9696 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne network internal http://172.25.33.10:9696
# openstack endpoint create --region RegionOne network admin http://172.25.33.10:9696
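As with the compute endpoints, the three network endpoints can be listed to confirm they were created (optional):
# openstack endpoint list --service network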
This deployment uses the provider (public) network option:
Option 1 deploys the simplest possible architecture and only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only the admin user or other privileged users can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching instances to self-service (private) networks. The demo user or other unprivileged users can manage their own private networks, including routers that connect them to provider networks. Additionally, floating IP addresses let instances on private networks reach external networks such as the Internet.
yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
Configure the server component
The Networking server component configuration includes the database, authentication mechanism, message queue, topology change notifications, and plug-in.
Edit the /etc/neutron/neutron.conf file and complete the following actions:
In the [database] section, configure database access:
[database]
connection = mysql+pymysql://neutron:neutron@172.25.33.10/neutron
In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins:
[DEFAULT]
core_plugin = ml2
service_plugins =
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure the RabbitMQ message queue connection:
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[nova]
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
In the [ml2] section, enable flat and VLAN networks:
[ml2]
type_drivers = flat,vlan
In the [ml2] section, disable self-service (private) networks:
[ml2]
tenant_network_types =
In the [ml2] section, enable the Linux bridge mechanism:
[ml2]
mechanism_drivers = linuxbridge
In the [ml2] section, enable the port security extension driver:
[ml2]
extension_drivers = port_security
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
flat_networks = provider
In the [securitygroup] section, enable ipset to increase the efficiency of security group rules:
[securitygroup]
enable_ipset = True
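Putting the fragments above together, the relevant parts of /etc/neutron/plugins/ml2/ml2_conf.ini end up looking roughly like this (sections merged here only for readability):
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = True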
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
In the [linux_bridge] section, map the public virtual network to the public physical network interface:
[linux_bridge]
physical_interface_mappings = public:eth0
Replace PUBLIC_INTERFACE_NAME with the name of the underlying physical public network interface (eth0 here). The name on the left of the mapping (public) is the physical network label; keep it consistent with the names allowed by flat_networks in ml2_conf.ini and with the provider:physical_network used when creating the provider network later.
In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = False
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
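Likewise, the effective /etc/neutron/plugins/ml2/linuxbridge_agent.ini settings from the steps above should look roughly like this:
[linux_bridge]
physical_interface_mappings = public:eth0
[vxlan]
enable_vxlan = False
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver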
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configure the metadata agent
Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the metadata host and the shared secret:
[DEFAULT]
nova_metadata_ip = 172.25.33.10
metadata_proxy_shared_secret = redhat
Configure the Compute service to use the Networking service
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the shared secret:
[neutron]
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = redhat
Finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the link does not exist, create it with the following command:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
The sync is successful when it finishes with OK.
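A quick way to confirm the schema landed in the database (assuming the neutron/neutron credentials from neutron.conf):
# mysql -u neutron -pneutron -h 172.25.33.10 -e 'SHOW TABLES;' neutron | head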
Restart the Compute API service:
# systemctl restart openstack-nova-api.service
Enable and start the Networking services so they come up on boot:
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
For networking option 2, also enable and start the layer-3 service:
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
Compute node:
# yum install openstack-neutron-linuxbridge ebtables ipset
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.
Edit the /etc/neutron/neutron.conf file and complete the following actions:
In the [database] section, comment out any connection options, because compute nodes do not access the database directly.
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure the RabbitMQ message queue connection:
[DEFAULT]
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Choose the provider (public) network option (the configuration from minion1 can simply be copied over):
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
In the [linux_bridge] section, map the public virtual network to the public physical network interface:
[linux_bridge]
physical_interface_mappings = public:eth0
In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = False
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [neutron] section, configure access parameters:
[neutron]
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the Compute service:
# systemctl restart openstack-nova-compute.service
Enable and start the Linux bridge agent so it comes up on boot:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
Verification:
List the loaded extensions to verify that the neutron-server process launched successfully:
# neutron ext-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+---------------------------+--------------------------------------------------+
| alias | name |
+---------------------------+--------------------------------------------------+
| default-subnetpools | Default Subnetpools |
| availability_zone | Availability Zone |
| network_availability_zone | Network Availability Zone |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| tag | Tag support |
| external-net | Neutron external network |
| flavors | Neutron Service Flavors |
| net-mtu | Network MTU |
| network-ip-availability | Network IP Availability |
| quotas | Quota management support |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| address-scope | Address scope |
| subnet-service-types | Subnet service types |
| standard-attr-timestamp | Resource timestamps |
| service-type | Neutron Service Type Management |
| tag-ext | Tag support for resources: subnet, subnetpool, |
| | port, router |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| standard-attr-revisions | Resource revision numbers |
| pagination | Pagination support |
| sorting | Sorting support |
| security-group | security-group |
| rbac-policies | RBAC Policies |
| standard-attr-description | standard-attr-description |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
| project-id | project_id field enabled |
+---------------------------+--------------------------------------------------+
List the agents to verify that the neutron agents started successfully:
# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+----------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                 | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------+-------------------+-------+----------------+---------------------------+
| 0d135b32-f115-4d2f-8296-27c6590ca08c | DHCP agent         | server10.example     | nova              | :-)   | True           | neutron-dhcp-agent        |
| 6c603475-571a-4bde-a414-b65319388508 | Metadata agent     | server10.example     |                   | :-)   | True           | neutron-metadata-agent    |
| b8667984-0d75-47bf-958b-c886244ff1f7 | Linux bridge agent | server11.example.com |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+----------------------+-------------------+-------+----------------+---------------------------+
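Since the neutron CLI prints a deprecation warning, the same check can be done with the openstack client if your python-openstackclient is recent enough:
# openstack network agent list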
Configuration files at a glance:
Controller node:
# cat /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
core_plugin = ml2
service_plugins =
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[database]
connection = mysql+pymysql://neutron:neutron@172.25.33.10/neutron
[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova