
[Administration Part 6] Building OpenStack on VMware: Converting the FlatDHCP Network to GRE (Implementation)

Earlier posts briefly introduced the various network models. Unlike other blogs, I will not walk through the underlying network theory, but by now you should have a rough picture of the network modes OpenStack supports. Since the deployment described at the start of this series used FlatDHCP, this post converts that setup to GRE.


For OpenStack, and for cloud computing and virtualization in general, the core idea is software-defined infrastructure; for networking, that is software-defined networking (SDN). Switching between network models therefore requires no changes to the installed environment itself, only to the configuration files. Below I show the configuration for each node; if you follow it, you should be able to convert an existing FlatDHCP deployment to a GRE network.


Note: this document modifies an existing deployment; it is not a guide for setting up GRE from scratch.


Controller node


In /etc/neutron/neutron.conf, verify the following settings:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True


The final configuration after the change:
root@controller:~# grep ^[a-z] /etc/neutron/neutron.conf
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
dhcp_agent_notification = True
allow_overlapping_ips = True
rpc_backend = neutron.openstack.common.rpc.impl_kombu
control_exchange = neutron
rabbit_host = 192.168.3.180
rabbit_password = mq4smtest
rabbit_port = 5672
rabbit_userid = guest
notification_driver = neutron.openstack.common.notifier.rpc_notifier
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.168.3.180:8774/v2
nova_admin_username =nova
nova_admin_tenant_id =2b71b7f509124584a0a891aae2d58f78
nova_admin_password =nova4smtest
nova_admin_auth_url = http://192.168.3.180:35357/v2.0
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
auth_uri=http://192.168.3.180:5000
auth_host = 192.168.3.180
auth_port = 35357
auth_protocol = http
signing_dir = $state_path/keystone-signing
admin_tenant_name = service
admin_user = neutron
admin_password = neutron4smtest
signing_dir = $state_path/keystone-signing
connection = mysql://neutrondbadmin:neutron4smtest@192.168.3.180/neutron
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

Configure the Modular Layer 2 (ML2) plug-in. ML2 uses the Open vSwitch (OVS) mechanism driver to build the virtual networking framework for instances. In /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

The updated values:
root@controller:~# grep ^[a-z] /etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = gre
tenant_network_types = gre
mechanism_drivers =openvswitch
tunnel_id_ranges = 1:1000


Configure Nova in /etc/nova/nova.conf:

[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = NEUTRON_PASS
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
The final configuration:
root@controller:~# grep ^[a-z] /etc/nova/nova.conf
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
rabbit_host = 192.168.3.180
rabbit_userid = guest
rabbit_password = mq4smtest
rabbit_port = 5672
my_ip = 192.168.3.180
vncserver_listen = 192.168.3.180
vncserver_proxyclient_address = 192.168.3.180
auth_strategy = keystone
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://192.168.3.180:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = neutron4smtest
neutron_admin_auth_url = http://192.168.3.180:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = neutron4smtest
auth_uri = http://192.168.3.180:5000
auth_host = 192.168.3.180
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova4smtest
connection = mysql://novadbadmin:nova4smtest@192.168.3.180/nova
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Note: my file may look longer than the template above; that is because configuring the network node also updates some settings on the controller.



Then simply restart the services:

# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart
# service neutron-server restart
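
As an optional sanity check (my own habit, not a step from the original install), confirm that neutron-server came back up with the ML2 plug-in loaded; listing the API extensions exercises the server, and the log path assumes the default Ubuntu packaging:

# neutron ext-list
# tail -n 50 /var/log/neutron/server.log

If ext-list prints a table of extensions instead of an error, the server is answering requests again.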


=============================================================

Network node

1. Edit /etc/neutron/neutron.conf, adding or changing the following:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

If you want more detailed log output, add verbose = True under the [DEFAULT] section; this setting is optional.

The full configuration:

sm@network:~$ sudo grep ^[a-z] /etc/neutron/neutron.conf
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 192.168.3.180
rabbit_password = mq4smtest
rabbit_port = 5672
rabbit_userid = guest
notification_driver = neutron.openstack.common.notifier.rpc_notifier
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
auth_host = 192.168.3.180
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.3.180:5000
admin_tenant_name = service
admin_user = neutron
admin_password = neutron4smtest
signing_dir = $state_path/keystone-signing
connection = sqlite:var/lib/neutron/neutron.sqlite


2. For the L3 routing layer, edit /etc/neutron/l3_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

The full file:
sm@network:~$ sudo cat /etc/neutron/l3_agent.ini
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
debug = True
# L3 requires that an interface driver be set. Choose the one that best
# matches your plugin.
# interface_driver =
# Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC)
# that supports L3 agent
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Use veth for an OVS interface or not.
# Support kernels with limited namespace support
# (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
# ovs_use_veth = False
# Example of interface_driver option for LinuxBridge
# interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
# Allow overlapping IP (Must have kernel build with CONFIG_NET_NS=y and
# iproute2 package that supports namespaces).
use_namespaces = True
# If use_namespaces is set as False then the agent can only configure one router.
# This is done by setting the specific router_id.
# router_id =
# When external_network_bridge is set, each L3 agent can be associated
# with no more than one external network. This value should be set to the UUID
# of that external network. To allow L3 agent support multiple external
# networks, both the external_network_bridge and gateway_external_network_id
# must be left empty.
# gateway_external_network_id =
# Indicates that this L3 agent should also handle routers that do not have
# an external network gateway configured. This option should be True only
# for a single agent in a Neutron deployment, and may be False for all agents
# if all routers must have an external network gateway
# handle_internal_only_routers = True
# Name of bridge used for external network traffic. This should be set to
# empty value for the linux bridge. when this parameter is set, each L3 agent
# can be associated with no more than one external network.
external_network_bridge = br-ex
# TCP Port used by Neutron metadata server
# metadata_port = 9697
# Send this many gratuitous ARPs for HA setup. Set it below or equal to 0
# to disable this feature.
# send_arp_for_ha = 0
# seconds between re-sync routers' data if needed
# periodic_interval = 40
# seconds to start to sync routers' data after
# starting agent
# periodic_fuzzy_delay = 5
# enable_metadata_proxy, which is true by default, can be set to False
# if the Nova metadata server is not available
# enable_metadata_proxy = True
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy
# router_delete_namespaces, which is false by default, can be set to True if
# namespaces can be deleted cleanly on the host running the L3 agent.
# Do not enable this until you understand the problem with the Linux iproute
# utility mentioned in https://bugs.launchpad.net/neutron/+bug/1052535 and
# you are sure that your version of iproute does not suffer from the problem.
# If True, namespaces will be deleted when a router is destroyed.
router_delete_namespaces = True
# Timeout for ovs-vsctl commands.
# If the timeout expires, ovs commands will fail with ALARMCLOCK error.
# ovs_vsctl_timeout = 10


3. Configure the DHCP agent in /etc/neutron/dhcp_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True

The full file:
sm@network:~$ sudo cat /etc/neutron/dhcp_agent.ini
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = False
verbose = True
# The DHCP agent will resync its state with Neutron to recover from any
# transient notification or rpc errors. The interval is number of
# seconds between attempts.
# resync_interval = 5
# The DHCP agent requires an interface driver be set. Choose the one that best
# matches your plugin.
# interface_driver =
# Example of interface_driver option for OVS based plugins(OVS, Ryu, NEC, NVP,
# BigSwitch/Floodlight)
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Name of Open vSwitch bridge to use
# ovs_integration_bridge = br-int
# Use veth for an OVS interface or not.
# Support kernels with limited namespace support
# (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
# ovs_use_veth = False
# Example of interface_driver option for LinuxBridge
# interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
# The agent can use other DHCP drivers. Dnsmasq is the simplest and requires
# no additional setup of the DHCP server.
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Allow overlapping IP (Must have kernel build with CONFIG_NET_NS=y and
# iproute2 package that supports namespaces).
use_namespaces = True
# The DHCP server can assist with providing metadata support on isolated
# networks. Setting this value to True will cause the DHCP server to append
# specific host routes to the DHCP request. The metadata service will only
# be activated when the subnet does not contain any router port. The guest
# instance must be configured to request host routes via DHCP (Option 121).
# enable_isolated_metadata = False
# Allows for serving metadata requests coming from a dedicated metadata
# access network whose cidr is 169.254.169.254/16 (or larger prefix), and
# is connected to a Neutron router from which the VMs send metadata
# request. In this case DHCP Option 121 will not be injected in VMs, as
# they will be able to reach 169.254.169.254 through a router.
# This option requires enable_isolated_metadata = True
# enable_metadata_network = False
# Number of threads to use during sync process. Should not exceed connection
# pool size configured on server.
# num_sync_threads = 4
# Location to store DHCP server config files
# dhcp_confs = $state_path/dhcp
# Domain to use for building the hostnames
# dhcp_domain = openstacklocal
# Override the default dnsmasq settings with this file
# dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
# Comma-separated list of DNS servers which will be used by dnsmasq
# as forwarders.
# dnsmasq_dns_servers =
# Limit number of leases to prevent a denial-of-service.
# dnsmasq_lease_max = 16777216
# Location to DHCP lease relay UNIX domain socket
# dhcp_lease_relay_socket = $state_path/dhcp/lease_relay
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy
# dhcp_delete_namespaces, which is false by default, can be set to True if
# namespaces can be deleted cleanly on the host running the dhcp agent.
# Do not enable this until you understand the problem with the Linux iproute
# utility mentioned in https://bugs.launchpad.net/neutron/+bug/1052535 and
# you are sure that your version of iproute does not suffer from the problem.
# If True, namespaces will be deleted when a dhcp server is disabled.
# dhcp_delete_namespaces = True
# Timeout for ovs-vsctl commands.
# If the timeout expires, ovs commands will fail with ALARMCLOCK error.
# ovs_vsctl_timeout = 10
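
One optional tweak worth mentioning here (my addition; the file above leaves dnsmasq_config_file commented out): GRE encapsulation consumes part of each Ethernet frame, so the upstream install guides usually push a smaller MTU to instances through a custom dnsmasq config. A sketch, reusing the commented-out path from the file above:

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

and then in /etc/neutron/dnsmasq-neutron.conf:

# DHCP option 26 sets the instance MTU; 1454 leaves room for the GRE headers
dhcp-option-force=26,1454

Restarting neutron-dhcp-agent afterwards picks the file up. If instances later get DHCP leases but large transfers hang, this is the first thing to check.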


4. Configure the metadata agent in /etc/neutron/metadata_agent.ini:

[DEFAULT]
...
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

The full configuration on my network node:

sm@network:~$ sudo cat /etc/neutron/metadata_agent.ini
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = True
verbose=True
# The Neutron user information for accessing the Neutron API.
auth_url = http://192.168.3.180:5000/v2.0
auth_region = RegionOne
# Turn off verification of the certificate for ssl
# auth_insecure = False
# Certificate Authority public key (CA cert) file for ssl
# auth_ca_cert =
admin_tenant_name = service
admin_user = neutron
admin_password = neutron4smtest
# Network service endpoint type to pull from the keystone catalog
# endpoint_type = adminURL
# IP address used by Nova metadata server
nova_metadata_ip = 192.168.3.180
# TCP Port used by Nova metadata server
# nova_metadata_port = 8775
# When proxying metadata requests, Neutron signs the Instance-ID header with a
# shared secret to prevent spoofing. You may select any string for a secret,
# but it must match here and in the configuration used by the Nova Metadata
# Server. NOTE: Nova uses a different key: neutron_metadata_proxy_shared_secret
metadata_proxy_shared_secret = neutron4smtest
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy
# Number of separate worker processes for metadata server
# metadata_workers = 0
# Number of backlog requests to configure the metadata server socket with
# metadata_backlog = 128
# URL to connect to the cache backend.
# Example of URL using memory caching backend
# with ttl set to 5 seconds: cache_url = memory://?default_ttl=5
# default_ttl=0 parameter will cause cache entries to never expire.
# Otherwise default_ttl specifies time in seconds a cache entry is valid for.
# No cache is used in case no value is passed.
# cache_url =


At this point, go back to the controller node and add the following to the Nova configuration file:

[DEFAULT]
...
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = METADATA_SECRET

The two metadata secrets must be identical.
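
A quick way to confirm they match, assuming the default config paths. On the controller:

# grep neutron_metadata_proxy_shared_secret /etc/nova/nova.conf

On the network node:

# grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini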

Then restart nova-api on the controller:

# service nova-api restart


5. Edit /etc/neutron/plugins/ml2/ml2_conf.ini on the network node:

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
Pay particular attention to local_ip: it must be set to the GRE tunnel endpoint, i.e. the data IP mentioned earlier, which is the IP of eth1 on the network node, 10.0.1.21. If the [ovs] section does not exist, add it yourself.
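
Before restarting anything, it is worth double-checking that the address you put in local_ip really sits on eth1 (a minimal check; the interface name and the /24 prefix follow my setup, so the output will look roughly like this):

$ ip addr show eth1 | grep "inet "
    inet 10.0.1.21/24 brd 10.0.1.255 scope global eth1

If the tunnel endpoint does not match a local address, the OVS agent cannot build the GRE mesh.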

As the ovs-vsctl output further below shows, the compute node and the network node are connected precisely through this GRE tunnel.



The full file on my machine:

sm@network:~$ sudo cat /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
type_drivers =gre
# Example: type_drivers = flat,vlan,gre,vxlan
# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
tenant_network_types = gre
# Example: tenant_network_types = vlan,gre,vxlan
# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
mechanism_drivers = openvswitch
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade
[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
#flat_networks = external
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *
[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2
[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =
# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1
[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.1.21
tunnel_type = gre
enable_tunneling = True


6. Add the external bridge:

ovs-vsctl add-br br-ex

Add the interface that backs the external bridge:

ovs-vsctl add-port br-ex eth2

The external bridge is the path by which the outside world reaches the virtual machines, so the port added to it is eth2 on the network node, which was configured earlier in promiscuous mode.
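
If the port refuses to pass traffic, make sure eth2 itself is up and carries no IP address of its own; a quick sketch (plain iproute2, nothing OpenStack-specific, and whether the promisc flag is needed also depends on your VMware port-group settings):

# ip link set eth2 up
# ip link set eth2 promisc on
# ip addr show eth2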


You can also inspect the bridges and their ports with the command below (the output includes virtual machines that have already been created):

sm@network:~$ sudo ovs-vsctl show
[sudo] password for sm:
1dbfa213-3eed-4aa2-accc-c429ec586687
    Bridge br-ex
        Port "qg-a50c5be1-6d"
            Interface "qg-a50c5be1-6d"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port phy-br-ex
            Interface phy-br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
        Port "tap8595720b-76"
            tag: 3
            Interface "tap8595720b-76"
                type: internal
        Port "qr-1d4d38d4-25"
            tag: 3
            Interface "qr-1d4d38d4-25"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Port "gre-c0a80db5"
            Interface "gre-c0a80db5"
                type: gre
                options: {in_key=flow, local_ip="10.0.1.21", out_key=flow, remote_ip="192.168.13.181"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-c0a803b5"
            Interface "gre-c0a803b5"
                type: gre
                options: {in_key=flow, local_ip="10.0.1.21", out_key=flow, remote_ip="192.168.3.181"}
        Port "gre-0a00011f"
            Interface "gre-0a00011f"
                type: gre
                options: {in_key=flow, local_ip="10.0.1.21", out_key=flow, remote_ip="10.0.1.31"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-c0a803b6"
            Interface "gre-c0a803b6"
                type: gre
                options: {in_key=flow, local_ip="10.0.1.21", out_key=flow, remote_ip="192.168.3.182"}
    ovs_version: "2.0.2"
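
A side note on the gre-xxxxxxxx port names above: the OVS agent derives them from the hex-encoded remote tunnel IP, so the output is easy to read once you know the trick. A throwaway one-liner to decode them (plain sed/printf, nothing OpenStack-specific):

$ echo c0a803b5 | sed 's/../0x& /g' | xargs printf '%d.%d.%d.%d\n'
192.168.3.181

So gre-0a00011f is the tunnel to the compute node at 10.0.1.31, which is exactly the GRE link between the network and compute nodes mentioned earlier.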

Restart the services:

 service neutron-plugin-openvswitch-agent restart
 service neutron-l3-agent restart
 service neutron-dhcp-agent restart
 service neutron-metadata-agent restart
 service openvswitch-switch restart

=============================================================

Compute node


1. Edit /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

The full configuration:
sm@computer:~$ sudo grep ^[a-z] /etc/neutron/neutron.conf
[sudo] password for sm:
verbose = True
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 192.168.3.180
rabbit_password = mq4smtest
rabbit_port = 5672
rabbit_userid = guest
notification_driver = neutron.openstack.common.notifier.rpc_notifier
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
auth_host = 192.168.3.180
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutron4smtest
signing_dir = $state_path/keystone-signing
auth_uri=http://192.168.3.180:5000
connection = sqlite:var/lib/neutron/neutron.sqlite
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default


2. Edit /etc/neutron/plugins/ml2/ml2_conf.ini on the compute node, much as on the network node:

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
Again, pay particular attention to local_ip: it is the GRE tunnel endpoint, i.e. the data IP mentioned earlier, which on the compute node is the IP of eth1, 10.0.1.31. If the [ovs] section does not exist, add it yourself.

The full file:

sm@computer:~$ sudo cat /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
type_drivers = gre
# Example: type_drivers = flat,vlan,gre,vxlan
# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
tenant_network_types = gre
# Example: tenant_network_types = vlan,gre,vxlan
# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
mechanism_drivers = openvswitch
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade
[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *
[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2
[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
tunnel_id_ranges =1:1000
[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =
# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1
[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 10.0.1.31
tunnel_type = gre
enable_tunneling = True

3. Configure /etc/nova/nova.conf on the compute node:

[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = NEUTRON_PASS
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

The full file:
sm@computer:~$ sudo cat /etc/nova/nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
rabbit_host = 192.168.3.180
rabbit_userid = guest
rabbit_password = mq4smtest
rabbit_port = 5672
my_ip = 192.168.3.181
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.3.181
vnc_enabled = True
novncproxy_base_url = http://192.168.3.180:6080/vnc_auto.html
vif_plugging_timeout = 10
vif_plugging_is_fatal = False
auth_strategy = keystone
glance_host = 192.168.3.180
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://192.168.3.180:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = neutron4smtest
neutron_admin_auth_url = http://192.168.3.180:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
[keystone_authtoken]
auth_uri = http://192.168.3.180:5000
auth_host = 192.168.3.180
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova4smtest
[database]
connection = mysql://novadbadmin:nova4smtest@192.168.3.180/nova


4. Restart the services:

service nova-compute restart
service neutron-plugin-openvswitch-agent restart
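
Finally (my addition, not part of the original write-up), verify from the controller that every agent has checked back in before booting a test instance:

# neutron agent-list

The L3, DHCP, metadata, and Open vSwitch agents should all report alive as :-); if one shows xxx, check its log on the node where it runs.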

