"ubuntu tutorial": install and configure the openstack network service



http://hj192837.blog.51cto.com/655995/1419795

based on ubuntu 14.04 lts x86_64

configure neutron controller node:

1. on keystone node

mysql -uroot -p

mysql> create database neutron;

mysql> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'neutron-dbpass';

mysql> grant all privileges on neutron.* to 'neutron'@'%' identified by 'neutron-dbpass';

mysql> flush privileges;
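the interactive session above can also be scripted. a sketch that assembles the same statements with a heredoc — here we only print them for review; on the keystone node you would pipe them into `mysql -uroot -p`:

```shell
# assemble the SQL from the session above; this sketch only prints it
sql=$(cat <<'SQL'
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron-dbpass';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron-dbpass';
FLUSH PRIVILEGES;
SQL
)
echo "$sql"    # review, then run: echo "$sql" | mysql -uroot -p
```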

# create a neutron user

keystone user-create --tenant service --name neutron --pass neutron-user-password

# add role to the neutron user

keystone user-role-add --user neutron --tenant service --role admin

# create the neutron service

keystone service-create --name=neutron --type=network --description="neutron network service"

# create a networking endpoint

keystone endpoint-create --region RegionOne --service neutron --publicurl=http://neutron-server:9696 --internalurl=http://neutron-server:9696 --adminurl=http://neutron-server:9696

2. on the neutron server node; here we run it on the keystone node

for using neutron networking

aptitude update

aptitude -y install linux-image-generic-lts-trusty linux-headers-generic-lts-trusty

reboot

3. aptitude -y install neutron-server neutron-plugin-ml2

4. vi /etc/neutron/neutron.conf

[database]

connection=mysql://neutron:neutron-dbpass@mysql-server/neutron

[DEFAULT]

auth_strategy=keystone

rpc_backend=neutron.openstack.common.rpc.impl_kombu

rabbit_host = controller

rabbit_password = guest-pass

notify_nova_on_port_status_changes=true

notify_nova_on_port_data_changes=true

nova_url=http://controller:8774/v2

nova_admin_username=nova

# config files are not shell-expanded: set this to the id printed by: keystone tenant-list | awk '/ service / { print $2 }'

nova_admin_tenant_id=service_tenant_id

nova_admin_password=nova-user-password

nova_admin_auth_url=http://controller:35357/v2.0

core_plugin=ml2

service_plugins=router

allow_overlapping_ips = true

verbose = true

[keystone_authtoken]

auth_host=controller

auth_port = 35357

auth_protocol = http

auth_uri=http://controller:5000

admin_tenant_name=service

admin_user=neutron

admin_password=neutron-user-password

comment out any lines in the [service_providers] section
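the `nova_admin_tenant_id` value is the id column of the `service` row in `keystone tenant-list`; run the pipeline on the shell and paste the result into neutron.conf. a self-contained illustration of how that awk pipeline picks the id, using made-up ids in the CLI's tabular format:

```shell
# sample `keystone tenant-list` output (ids are made up for illustration)
sample='+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 0c1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c | admin   |   True  |
| 9f8e7d6c5b4a39281706f5e4d3c2b1a0 | service |   True  |
+----------------------------------+---------+---------+'
# awk matches the service row; field 2 is the id between the first two pipes
echo "$sample" | awk '/ service / { print $2 }'
# → 9f8e7d6c5b4a39281706f5e4d3c2b1a0
```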

5. vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers=gre

tenant_network_types=gre

mechanism_drivers=openvswitch

[ml2_type_gre]

tunnel_id_ranges=1:1000

[securitygroup]

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group=true

6. on nova controller node

vi /etc/nova/nova.conf

[DEFAULT]

network_api_class=nova.network.neutronv2.api.API

neutron_url=http://neutron-server:9696

neutron_auth_strategy=keystone

neutron_admin_tenant_name=service

neutron_admin_username=neutron

neutron_admin_password=neutron-user-password

neutron_admin_auth_url=http://controller:35357/v2.0

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver=nova.virt.firewall.NoopFirewallDriver

security_group_api=neutron

vif_plugging_is_fatal=false

vif_plugging_timeout=0

7. service nova-api restart

service nova-scheduler restart

service nova-conductor restart

8. chown -R neutron:neutron /etc/neutron /var/log/neutron

service neutron-server restart

neutron network node

1. eth0 for management/public/floating (192.168.1.0/24), eth1 for internal/tunnel (192.168.30.0/24), eth2 for the external bridge; it is recommended to use a separate nic for the management network

vi /etc/network/interfaces

auto eth2

iface eth2 inet manual

up ip link set dev $IFACE up

down ip link set dev $IFACE down

2. vi /etc/hosts

# remove or comment the line beginning with 127.0.1.1

192.168.1.10 controller

192.168.1.11 node1

192.168.1.12 neutronnet

3. aptitude -y install ntp

vi /etc/ntp.conf

server 192.168.1.10

restrict 192.168.1.10

service ntp restart

4. aptitude -y install python-mysqldb

5. for using neutron networking

aptitude update

aptitude -y install linux-image-generic-lts-trusty linux-headers-generic-lts-trusty

reboot

6. enable packet forwarding and disable packet destination filtering

vi /etc/sysctl.conf

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

sysctl -p
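rp_filter must be disabled because the network node forwards traffic whose return path differs from its ingress interface. a small sketch (the helper name and demo file are ours) to confirm the three keys above actually landed in a sysctl file before relying on `sysctl -p`:

```shell
# check that the three settings above are present in a sysctl config file
check_sysctl_keys() {
  for key in net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter; do
    grep -q "^${key}=" "$1" || { echo "missing: $key"; return 1; }
  done
  echo "all keys present"
}
# demo against a scratch file; on the node pass /etc/sysctl.conf instead
printf '%s\n' 'net.ipv4.ip_forward=1' \
              'net.ipv4.conf.all.rp_filter=0' \
              'net.ipv4.conf.default.rp_filter=0' > /tmp/sysctl.demo
check_sysctl_keys /tmp/sysctl.demo
# → all keys present
```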

7. aptitude -y install neutron-plugin-ml2 neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent

8. vi /etc/neutron/neutron.conf

[DEFAULT]

auth_strategy=keystone

rpc_backend=neutron.openstack.common.rpc.impl_kombu

rabbit_host = controller

rabbit_password = guest-pass

core_plugin=ml2

service_plugins=router

allow_overlapping_ips = true

verbose = true

[keystone_authtoken]

auth_host=controller

auth_port = 35357

auth_protocol = http

auth_uri=http://controller:5000

admin_tenant_name=service

admin_user=neutron

admin_password=neutron-user-password

comment out any lines in the [service_providers] section

9. vi /etc/neutron/l3_agent.ini

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces=true

verbose = true

vi /etc/neutron/dhcp_agent.ini

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver=neutron.agent.linux.dhcp.dnsmasq

use_namespaces=true

verbose = true

10. vi /etc/neutron/metadata_agent.ini

auth_url = http://controller:5000/v2.0

auth_region = RegionOne

admin_tenant_name = service

admin_user = neutron

admin_password = neutron-user-password

nova_metadata_ip = controller

metadata_proxy_shared_secret = metadata-password

verbose = true

11. on nova controller node

vi /etc/nova/nova.conf

neutron_metadata_proxy_shared_secret=metadata-password

service_neutron_metadata_proxy=true

service nova-api restart

12. vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers=gre

tenant_network_types=gre

mechanism_drivers=openvswitch

[ml2_type_gre]

tunnel_id_ranges=1:1000

[ovs]

local_ip = 192.168.30.12   # the instance-tunnels (eth1) interface address of this node

tunnel_type = gre

enable_tunneling = true

[securitygroup]

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group=true

13. service openvswitch-switch restart

ovs-vsctl add-br br-int

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth2

ethtool -K eth2 gro off

ethtool -k eth2

vi /etc/network/interfaces

iface eth2 inet manual

post-up /sbin/ethtool -K eth2 gro off

14. chown -R neutron:neutron /etc/neutron /var/log/neutron

service neutron-plugin-openvswitch-agent restart

service neutron-l3-agent restart

service neutron-dhcp-agent restart

service neutron-metadata-agent restart

neutron compute node setup

1. eth0 for management/public/floating (192.168.1.0/24), eth1 for internal/tunnel (192.168.30.0/24, matching the local_ip below); it is recommended to use a separate nic for the management network

2. vi /etc/hosts

# remove or comment the line beginning with 127.0.1.1

192.168.1.10 controller

192.168.1.11 node1

192.168.1.12 neutronnet

3. aptitude -y install qemu-kvm libvirt-bin virtinst bridge-utils

modprobe vhost_net

echo vhost_net >> /etc/modules

4. aptitude -y install ntp

vi /etc/ntp.conf

server 192.168.1.10

restrict 192.168.1.10

service ntp restart

5. aptitude -y install python-mysqldb

6. aptitude -y install nova-compute-kvm python-guestfs

7. dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)

vi /etc/kernel/postinst.d/statoverride

#!/bin/sh

version="$1"

# passing the kernel version is required

[ -z "${version}" ] && exit 0

dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}

chmod +x /etc/kernel/postinst.d/statoverride
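the guard at the top of the hook matters: kernel postinst hooks can be invoked without a version argument, and the hook must then exit quietly. a sketch of that behavior with `dpkg-statoverride` stubbed out by `echo` (the stub, demo path, and example kernel version are ours):

```shell
# write a stubbed copy of the hook so its guard can be exercised safely
cat > /tmp/statoverride.demo <<'EOF'
#!/bin/sh
version="$1"
# no version passed: nothing to do, exit successfully
[ -z "${version}" ] && exit 0
echo "dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}"
EOF
chmod +x /tmp/statoverride.demo
/tmp/statoverride.demo                     # prints nothing, exits 0
/tmp/statoverride.demo 3.13.0-24-generic   # prints the stubbed command
```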

8. vi /etc/nova/nova.conf

[DEFAULT]

auth_strategy=keystone

rpc_backend = rabbit

rabbit_host = controller

rabbit_password = guest-pass

my_ip=192.168.1.11

vnc_enabled=true

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=192.168.1.11

novncproxy_base_url=http://controller:6080/vnc_auto.html

glance_host=controller

[keystone_authtoken]

auth_host=controller

auth_port=35357

auth_protocol=http

auth_uri=http://controller:5000

admin_user=nova

admin_password=nova-user-password

admin_tenant_name=service

[database]

connection=mysql://nova:nova-database-password@mysql-server/nova

rm -rf /var/lib/nova/nova.sqlite

9. chown -R nova:nova /etc/nova /var/log/nova

service nova-compute restart

now for neutron plugin agent:

10. disable packet destination filtering

vi /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

sysctl -p

11. for using neutron networking

aptitude update

aptitude -y install linux-image-generic-lts-trusty linux-headers-generic-lts-trusty

reboot

12. aptitude -y install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent

13. vi /etc/neutron/neutron.conf

[DEFAULT]

auth_strategy=keystone

rpc_backend=neutron.openstack.common.rpc.impl_kombu

rabbit_host = controller

rabbit_password = guest-pass

core_plugin=ml2

service_plugins=router

allow_overlapping_ips = true

verbose = true

[keystone_authtoken]

auth_host=controller

auth_port = 35357

auth_protocol = http

auth_uri=http://controller:5000

admin_tenant_name=service

admin_user=neutron

admin_password=neutron-user-password

comment out any lines in the [service_providers] section

14. vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers=gre

tenant_network_types=gre

mechanism_drivers=openvswitch

[ml2_type_gre]

tunnel_id_ranges=1:1000

[ovs]

local_ip = 192.168.30.11   # the instance-tunnels (eth1) interface address of this node

tunnel_type = gre

enable_tunneling = true

[securitygroup]

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group=true

15. service openvswitch-switch restart

ovs-vsctl add-br br-int

16. vi /etc/nova/nova.conf

network_api_class=nova.network.neutronv2.api.API

neutron_url=http://neutron-server:9696

neutron_auth_strategy=keystone

neutron_admin_tenant_name=service

neutron_admin_username=neutron

neutron_admin_password=neutron-user-password

neutron_admin_auth_url=http://controller:35357/v2.0

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver=nova.virt.firewall.NoopFirewallDriver

security_group_api=neutron

vif_plugging_is_fatal=false

vif_plugging_timeout=0

17. service nova-compute restart

18. chown -R neutron:neutron /etc/neutron /var/log/neutron

service neutron-plugin-openvswitch-agent restart

creating neutron network

on controller node:

1. check that neutron-server is communicating with its agents

neutron agent-list

source ~/adminrc (used for steps 1~2)

# create external network

neutron net-create ext-net --shared --router:external=true [ --provider:network_type gre --provider:segmentation_id seg_id ]

note: seg_id is the tunnel id.

2. # create subnet on external network

neutron subnet-create ext-net --name ext-subnet --allocation-pool start=floating_ip_start,end=floating_ip_end --disable-dhcp --gateway external_network_gateway external_network_cidr

neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.1.200,end=192.168.1.210 --disable-dhcp --dns-nameserver 210.22.84.3 --dns-nameserver 210.22.70.3 --gateway 192.168.1.1 192.168.1.0/24
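quick arithmetic on the pool above: the range 192.168.1.200~210 holds 11 addresses, and since the router gateway set in step 7 takes the lowest one, 10 floating ips remain for instances:

```shell
start=200; end=210   # last octets of the allocation pool above
# inclusive range size, minus one address consumed by the router gateway
echo "pool size: $(( end - start + 1 )), left for instances: $(( end - start ))"
# → pool size: 11, left for instances: 10
```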

3. # create tenant network

source ~/demo1rc (used for steps 3~7)

neutron net-create demo-net

4. # create subnet on tenant network

neutron subnet-create demo-net --name demo-subnet --gateway tenant_network_gateway tenant_network_cidr

neutron subnet-create demo-net --name demo-subnet --dns-nameserver x.x.x.x --gateway 10.10.10.1 10.10.10.0/24

5. # create virtual router to connect external and tenant network

neutron router-create demo-router

6. # attach the router to the tenant subnet

neutron router-interface-add demo-router demo-subnet

7. # attach the router to the external network by setting it as the gateway

neutron router-gateway-set demo-router ext-net

note: the tenant router gateway should occupy the lowest ip address in the floating ip address range -- 192.168.1.200

neutron net-list

neutron subnet-list

neutron router-port-list demo-router

launch instances

for demo1 tenant:

source ~/demo1rc

neutron security-group-create --description "test security group" test-sec

# permit icmp

neutron security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 test-sec

# permit ssh

neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 test-sec

neutron security-group-rule-list

nova keypair-add demokey > demokey.pem

chmod 600 demokey.pem   # ssh refuses private keys readable by other users

nova keypair-list

nova flavor-list

nova image-list

neutron net-list

neutron subnet-list

demonet=`neutron net-list | grep demo-net | awk '{ print $2 }'`

nova boot --flavor 1 --image "cirros 0.3.2" --key-name demokey --security-groups test-sec --nic net-id=$demonet cirros

notes: you need enough memory on the kvm nodes, otherwise instances will fail to spawn.

1. you can use vmware workstation to build images, then upload to glance using dashboard

ubuntu

1). vi /etc/hosts to remove the 127.0.1.1 entry

2). enable ssh login

3). enable dhcp client on interface

4). enable normal username/password

5). set root password

centos/redhat

1). rm -rf /etc/ssh/ssh_host_*

2). vi /etc/sysconfig/network-scripts/ifcfg-ethX to remove the HWADDR and UUID entries

3). rm -rf /etc/udev/rules.d/70-persistent-net.rules

4). enable ssh login

5). enable dhcp client on interface (also vi /etc/sysconfig/network, /etc/resolv.conf)

6). enable normal username/password

7). set root password

2. launch instance without keypair

nova commands:

nova list; nova show cirros

nova stop cirros

nova start cirros

# get vnc console address via web browser like below:

nova get-vnc-console cirros novnc

# create a floating ip address on the ext-net external network

neutron floatingip-create ext-net

neutron floatingip-list

# associate the floating ip address with your instance, even while it is running

nova floating-ip-associate cirros 192.168.1.201

( nova floating-ip-disassociate cirros 192.168.1.201 )

nova list

ping 192.168.1.201 (floating ip)

use xshell, putty, or any ssh client: ssh -i demokey.pem cirros@192.168.1.201 (username: cirros, password: cubswin:))

[ for ubuntu cloud image: username is ubuntu, for fedora cloud image: username is fedora ]

now we can ping and ssh to 192.168.1.201, and the cirros instance can access the internet.

notes: you need enough space in /var/lib/nova/instances to store vms; you can mount a partition there (using local or shared storage).

fixed ip addresses with openstack neutron for tenant networks

neutron subnet-list

neutron subnet-show demo-subnet

neutron port-create demo-net --fixed-ip ip_address=10.10.10.10 --name vm-name

nova boot --flavor 1 --image "cirros 0.3.2" --key-name demokey --security-groups test-sec --nic port-id=xxx vm-name

access novnc console from the internet - method 1

1. add another interface facing the internet on the nova controller (normally the keystone+dashboard node)

2. assign a public ip address

3. on the compute node, vi /etc/nova/nova.conf

novncproxy_base_url=http://public_ip_address_of_nova_controller:6080/vnc_auto.html

service nova-compute restart

4. nova get-vnc-console cirros novnc

http://public_ip_address_of_nova_controller:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673

access novnc console from the internet - method 2

1. you can publish dashboard web site to internet (normally keystone+dashboard node)

2. on the compute node, vi /etc/nova/nova.conf

novncproxy_base_url=http://public_ip_address_of_firewall:6080/vnc_auto.html

service nova-compute restart

3. nova get-vnc-console cirros novnc

http://public_ip_address_of_firewall:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673
