WHEREIS

ㅁ Core components

    keystone 

           ㄴ centralized authentication (identity) for every project

    glance

           ㄴ manages server (VM) images

    nova   

           ㄴ provides server virtualization (compute)

    neutron

           ㄴ provides SDN (Software Defined Networking)

ㅁ Additional components

    horizon

           ㄴ GUI-based management dashboard

    Swift

           ㄴ object storage: files are managed through a REST API; comparable to Amazon S3

ㅁ How to install OpenStack

 

     1. Installation with packstack (an automated installer)

           -> Very easy to install, but recommended only for small environments (per the instructor)

           -> https://www.rdoproject.org/install/packstack/

     2. Manual installation

           -> https://docs.openstack.org/ko_KR/install-guide/

           You configure each package above one by one, so the process is very complex and cumbersome.

 

[root@controller ~]# vi /root/openstack.txt        # packstack answer file
CONFIG_DEFAULT_PASSWORD=abc123
CONFIG_CEILOMETER_INSTALL=n
CONFIG_AODH_INSTALL=n
CONFIG_KEYSTONE_ADMIN_PW=abc123
CONFIG_PROVISION_DEMO=n
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:ens33

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

(admin password: ADMIN_PASS)

https://cloud-images.ubuntu.com/minimal/releases/xenial/release/

https://cloud-images.ubuntu.com/minimal/releases/xenial/release/ubuntu-16.04-minimal-cloudimg-amd64-disk1.img
http://192.168.1.78/share/cirros-0.3.4-x86_64-disk.img

ss -nlp | grep 9292        # confirm the glance API is listening

openstack image create --container-format bare --disk-format qcow2 --file cirros-0.3.4-x86_64-disk.img --public cirros-0.3.4

yum install -y openstack-utils
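These notes edit config files by jumping to vi line numbers; openstack-utils also ships crudini, which sets ini keys non-interactively. A minimal awk stand-in sketches what `crudini --set FILE SECTION KEY VALUE` does (`set_ini` is a hypothetical helper written here for illustration, not a real openstack-utils command; the sample file path is made up):

```shell
# Sketch of what "crudini --set FILE SECTION KEY VALUE" does.
# set_ini is a hypothetical helper, not part of openstack-utils.
set_ini() {
  awk -v s="[$2]" -v k="$3" -v v="$4" '
    $0 == s          { insec = 1; print; next }   # entering the target section
    /^\[/            { insec = 0 }                # entering some other section
    insec && $1 == k { print k " = " v; next }    # rewrite the key in place
                     { print }
  ' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

cat > /tmp/nova.conf.sample <<'EOF'
[DEFAULT]
my_ip = 10.0.0.11
EOF
set_ini /tmp/nova.conf.sample DEFAULT my_ip 10.0.0.101
grep my_ip /tmp/nova.conf.sample    # my_ip = 10.0.0.101
```

This avoids hunting for line numbers, which shift between package versions.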

---------------------------------------------------------------
Openstack Rocky CentOS7 based
------------------------------------------------------------
yum repolist 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: data.aonenetworks.kr
 * centos-qemu-ev: data.aonenetworks.kr
 * extras: data.aonenetworks.kr
 * updates: data.aonenetworks.kr
repo id                                        repo name                                 status
base/7/x86_64                                  CentOS-7 - Base                            10,019
centos-ceph-luminous/7/x86_64                  CentOS-7 - Ceph Luminous                      224
centos-openstack-rocky/7/x86_64                CentOS-7 - OpenStack rocky                2,343+2
centos-qemu-ev/7/x86_64                        CentOS-7 - QEMU EV                             75
extras/7/x86_64                                CentOS-7 - Extras                             419
updates/7/x86_64                               CentOS-7 - Updates                          2,236

vi /etc/hosts
10.0.0.11 controller
10.0.0.31 compute1
10.0.0.41 block1
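The three entries above give every node a stable name; all the service URLs in these notes (http://controller:5000, :9292, and so on) depend on them resolving. A quick sanity sketch, run against a throwaway copy of the file (the /tmp path is made up):

```shell
# Verify each node name maps to exactly one IP (sample copy of /etc/hosts).
cat > /tmp/hosts.sample <<'EOF'
10.0.0.11 controller
10.0.0.31 compute1
10.0.0.41 block1
EOF
for h in controller compute1 block1; do
  awk -v n="$h" '$2 == n { print n, "->", $1 }' /tmp/hosts.sample
done
# controller -> 10.0.0.11
# compute1 -> 10.0.0.31
# block1 -> 10.0.0.41
```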

SQL database
---------------------------------------------------
Install and configure components
Install the packages:

# yum install mariadb mariadb-server python2-PyMySQL -y
vi /etc/my.cnf.d/openstack.cnf 
[mysqld]
bind-address = 10.0.0.11

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

# systemctl enable mariadb.service
# systemctl start mariadb.service

# mysql_secure_installation

Message queue
-------------------------------------------------------------
Install and configure components
Install the package:

# yum install rabbitmq-server
Start the message queue service and configure it to start when the system boots:

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
Add the openstack user:

# rabbitmqctl add_user openstack RABBIT_PASS

Creating user "openstack" ...
Replace RABBIT_PASS with a suitable password.

Permit configuration, write, and read access for the openstack user:

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
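The three ".*" arguments are regular expressions for configure, write, and read permissions, matched against resource (queue and exchange) names; ".*" matches every name, so the openstack user gets full access. A sketch of how such a pattern matches (the sample queue names below are made up):

```shell
# Each permission is a regex tested against resource names; ".*" matches all.
for q in nova compute.scheduler reply_abc123; do
  echo "$q" | grep -Eq '.*' && echo "openstack user may access: $q"
done
# openstack user may access: nova
# openstack user may access: compute.scheduler
# openstack user may access: reply_abc123
```

A more restrictive deployment would substitute narrower patterns, e.g. '^nova.*' for only nova-prefixed resources.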

Memcached
--------------------------------------------------------------------------
Install and configure components
Install the packages:

# yum install memcached python-memcached
vi  /etc/sysconfig/memcached 

OPTIONS="-l 127.0.0.1,::1,controller"

# systemctl enable memcached.service
# systemctl start memcached.service

Etcd
--------------------------------------------------------------------------
Install and configure components
Install the package:

# yum install etcd

vi /etc/etcd/etcd.conf

#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.0.0.11:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

Finalize installation
Enable and start the etcd service:

# systemctl enable etcd
# systemctl start etcd

--------------------------------------------------------------------------------
Minimal deployment for Rocky
At a minimum, you need to install the following services. Install the services in the order specified below:

Identity service – keystone installation for Rocky

Image service – glance installation for Rocky

Compute service – nova installation for Rocky

Networking service – neutron installation for Rocky

We also advise installing the following components after the minimal deployment services:

Dashboard – horizon installation for Rocky

Block Storage service – cinder installation for Rocky
-------------------------------------------------------------
Keystone 설치
------------------------------------------------------------- 
Keystone Installation Tutorial for Red Hat Enterprise Linux and CentOS
Abstract
Contents
Identity service overview
Install and configure
Create a domain, projects, users, and roles
Verify operation
Create OpenStack client environment scripts

Install and configure
Prerequisites
Use the database access client to connect to the database server as the root user:

$ mysql -u root -p
Create the keystone database:

MariaDB [(none)]> CREATE DATABASE keystone;
Grant proper access to the keystone database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';

Install and configure components
----------------------------------------------------------------
yum install openstack-keystone httpd mod_wsgi
vi /etc/keystone/keystone.conf
742 connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
2828 provider = fernet

su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Configure the Apache HTTP server
-----------------------------------------------------------------
vi /etc/httpd/conf/httpd.conf
 96 ServerName controller

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
-----------------------------------------------------------------------

openstack domain create --description "An Example Domain" example
openstack project create --domain default \
  --description "Service Project" service
openstack project create --domain default \
  --description "Demo Project" myproject
openstack user create --domain default \
  --password abc123 myuser                
openstack role create myrole
openstack role add --project myproject --user myuser myrole

Verify operation
------------------------------------------------------------------------
unset OS_AUTH_URL OS_PASSWORD
 openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

(enter ADMIN_PASS at the password prompt)

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
(enter abc123 at the password prompt)
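Under the hood, `openstack token issue` sends a password-authentication request to the Keystone v3 API. A sketch of that payload, printed only and never sent to a server (values taken from the myuser example above):

```shell
# What the CLI POSTs to http://controller:5000/v3/auth/tokens (sketch, not sent).
cat <<'EOF' > /tmp/payload.json
{"auth": {
  "identity": {"methods": ["password"],
    "password": {"user": {"name": "myuser",
      "domain": {"name": "Default"}, "password": "abc123"}}},
  "scope": {"project": {"name": "myproject",
    "domain": {"name": "Default"}}}}
EOF
cat /tmp/payload.json
```

Keystone answers with an X-Subject-Token header; that token value is what the `token issue` output shows in its `id` field.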

Create OpenStack client environment scripts
----------------------------------------------------------
Creating the scripts
vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

vi demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=abc123
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
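Sourcing an openrc file simply exports these variables into the current shell, where every subsequent openstack command reads them. A self-contained sketch that writes a throwaway copy to /tmp rather than touching the real files:

```shell
# Demonstrate that ". file" exports the variables into this shell.
cat > /tmp/demo-openrc <<'EOF'
export OS_USERNAME=myuser
export OS_PROJECT_NAME=myproject
export OS_AUTH_URL=http://controller:5000/v3
EOF
. /tmp/demo-openrc
echo "$OS_USERNAME@$OS_PROJECT_NAME via $OS_AUTH_URL"
# myuser@myproject via http://controller:5000/v3
```

Because the variables live in the shell's environment, switching identities is just sourcing a different file (. admin-openrc vs . demo-openrc).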


Using the scripts
-------------------------------------------------------------
. admin-openrc
openstack token issue


Glance (Image service) installation
-------------------------------------------------------------
Prerequisites

mysql -u root -pabc123
Create the glance database:

MariaDB [(none)]> CREATE DATABASE glance;
Grant proper access to the glance database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';

. admin-openrc
To create the service credentials, complete these steps:

Create the glance user:

$ openstack user create --domain default --password GLANCE_PASS glance

 openstack role add --project service --user glance admin
Create the glance service entity:

$ openstack service create --name glance \
  --description "OpenStack Image" image

Create the Image service API endpoints:

openstack endpoint create --region RegionOne \
  image public http://controller:9292
openstack endpoint create --region RegionOne \
  image internal http://controller:9292
openstack endpoint create --region RegionOne \
  image admin http://controller:9292
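Every service registers the same three endpoint interfaces (public, internal, admin); only the interface name changes between the three commands above. A loop can generate them for review (echoed only, as a sketch; run the printed commands with admin credentials loaded):

```shell
# Print the three endpoint-create commands for the image service.
for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne image $iface http://controller:9292"
done
```

The same pattern repeats later for compute (:8774/v2.1), placement (:8778), and network (:9696).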

Install and configure components
----------------------------------------------------------------
yum install openstack-glance
vi /etc/glance/glance-api.conf

1901 connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
3471 [keystone_authtoken]

www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
4422 flavor = keystone

[glance_store]
# ...
2042 stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

----------------------------------------------------------------------
vi /etc/glance/glance-registry.conf

[database]
# ...
1146 connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

1253 [keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

2178 [paste_deploy]
# ...
flavor = keystone
-------------------------------------------------------
su -s /bin/sh -c "glance-manage db_sync" glance

systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
systemctl start openstack-glance-api.service \
  openstack-glance-registry.service

Verify operation
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
 openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
 openstack image list

-----------------------------------------------------------
Nova Controller
--------------------------------------------------------------

 mysql -u root -pabc123
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;

Grant proper access to the databases:

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';

 . admin-openrc
openstack user create --domain default --password NOVA_PASS nova
openstack role add --project service --user nova admin
openstack service create --name nova \
  --description "OpenStack Compute" compute

 openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1

openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1

openstack user create --domain default --password PLACEMENT_PASS  placement
openstack role add --project service --user placement admin

openstack service create --name placement \
  --description "Placement API" placement

openstack endpoint create --region RegionOne \
  placement public http://controller:8778
openstack endpoint create --region RegionOne \
  placement internal http://controller:8778
openstack endpoint create --region RegionOne \
  placement admin http://controller:8778

 yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y

------------------------------------------------------------------------------
vi /etc/nova/nova.conf 

In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip=10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

3475 [api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

4560 [database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

8953 [placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

3182 [api]
# ...
auth_strategy = keystone

6063 [keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
10710 enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
# ...
5265 api_servers = http://controller:9292

7998 [oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
9033|8820 region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
------------------------------------------------------
Due to a packaging bug, you must enable access to the Placement API by adding the following
configuration to

vi /etc/httpd/conf.d/00-nova-placement-api.conf
Add at the end of the file:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

 systemctl restart httpd
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

openstack-status
-------------------------------------------------------------------------------------------
Install and configure a compute node

yum install openstack-nova-compute -y
vi /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[libvirt]
# ...
6367 virt_type = qemu

systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

Add the compute node to the cell database
controller> . admin-openrc

$ openstack compute service list --service nova-compute

Discover compute hosts:

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

vi /etc/nova/nova.conf
9602 [scheduler]
discover_hosts_in_cells_interval = 300

Verify operation
. admin-openrc
openstack compute service list
openstack catalog list
 nova-status upgrade check
-----------------------------------------------------------------------------------------
Adding nova (a compute node) to the all-in-one deployment

1. NTP client : chrony.conf
2. yum install -y openstack-selinux python2-openstackclient
3. /etc/hosts
10.0.0.100 controller
10.0.0.101 compute1

4. yum install openstack-nova-compute -y
cp /etc/nova/nova.conf /etc/nova/nova.conf.old
5. scp 10.0.0.100:/etc/nova/nova.conf /etc/nova
  ls -l /etc/nova/nova.conf        (owner root, group nova)
  vi /etc/nova/nova.conf
  my_ip = 10.0.0.101
  server_proxyclient_address = 10.0.0.101    ([vnc] section)
6. systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service 

controller
------------------------------------------------------------------------------
vi /etc/sysconfig/iptables
Add below line 13:
-A INPUT -s 10.0.0.101/32 -p tcp -m multiport --dports 5671,5672 -m comment --comment "001 amqp incoming amqp_10.0.0.101" -j ACCEPT
-A INPUT -s 10.0.0.100/32 -p tcp -m multiport --dports 5671,5672 -j ACCEPT

systemctl reload iptables
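Each node needs the same AMQP accept rule (ports 5671 and 5672 are RabbitMQ's TLS and plain listeners) with only the source IP changing. A loop can print the rule set for review before pasting it into /etc/sysconfig/iptables (the IP list is the one used in this lab):

```shell
# Print one AMQP accept rule per node IP (review, then paste into the file).
for ip in 10.0.0.100 10.0.0.101; do
  echo "-A INPUT -s ${ip}/32 -p tcp -m multiport --dports 5671,5672 -j ACCEPT"
done
```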
-----------------------------------------------------------------------------------


On the compute node:
# systemctl stop openstack-nova-compute
# systemctl start openstack-nova-compute

7. controller> . keystonerc_admin

$ openstack compute service list --service nova-compute

Discover compute hosts:

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

vi /etc/nova/nova.conf
9602 [scheduler]
discover_hosts_in_cells_interval = 300

Verify operation
. keystonerc_admin
openstack compute service list
openstack catalog list
 nova-status upgrade check
------------------------------------------------------------------------------------------

Networking service
Install and configure controller node

Prerequisites
mysql -uroot -pabc123
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
 GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';

 . admin-openrc
 openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron \
  --description "OpenStack Networking" network
 openstack endpoint create --region RegionOne \
  network public http://controller:9696
openstack endpoint create --region RegionOne \
  network internal http://controller:9696
openstack endpoint create --region RegionOne \
  network admin http://controller:9696
-------------------------------------------------------------------
Configure networking options

Networking Option 1: Provider networks
Networking Option 2: Self-service networks

Networking Option 2: Self-service networks
--------------------------------------------------------
yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables -y

vi /etc/neutron/neutron.conf
744 connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
# ...
1231 lock_path = /var/lib/neutron/tmp

-------------------------------------------------------------
vi  /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
# ...
flat_networks = provider
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true
-------------------------------------------------------------
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini 

[linux_bridge]
157 physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.11
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver


[root@controller ~]# lsmod|grep  br_netfilter
[root@controller ~]# modprobe  br_netfilter
[root@controller ~]# lsmod|grep  br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
[root@controller ~]# vi /etc/sysctl.conf 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
---------------------------------------------------------
Configure the layer-3 agent

vi /etc/neutron/l3_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge
-------------------------------------------------------
vi /etc/neutron/dhcp_agent.ini 
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
-----------------------------------------------------------------
vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
-------------------------------------------------------------------

vi /etc/nova/nova.conf

[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
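The reason metadata_proxy_shared_secret must be identical in metadata_agent.ini and nova.conf: neutron's metadata proxy signs each request's instance ID with an HMAC of the secret, and nova recomputes the signature to verify the request. A sketch of that signing step (the instance ID below is made up; requires openssl):

```shell
# Both sides must derive the same HMAC-SHA256 signature from the shared secret.
secret="METADATA_SECRET"
instance_id="0c481437-example"            # hypothetical instance UUID
sig=$(printf '%s' "$instance_id" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "X-Instance-ID-Signature: $sig"      # 64 hex chars; nova recomputes and compares
```

If the secrets differ, the signatures never match and instances get 403/500 errors fetching their metadata.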

Finalize installation

 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
--------------------------------------------------------------------------------
For networking option 2, also enable and start the layer-3 service:

systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
---------------------------------------------------------------------------------


Install and configure compute node

 yum install openstack-neutron-linuxbridge ebtables ipset -y
vi /etc/neutron/neutron.conf

In the [database] section, comment out any connection options because compute nodes do not directly access the database.

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

vi /etc/nova/nova.conf
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
----------------------------------------------------------------
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[root@compute1 ~]# lsmod|grep  br_netfilter
[root@compute1 ~]# modprobe  br_netfilter
[root@compute1 ~]# lsmod|grep  br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
[root@compute1 ~]# vi /etc/sysctl.conf 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

[root@compute1 ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
---------------------------------------------------------

Finalize installation
Restart the Compute service:

# systemctl restart openstack-nova-compute.service
Start the Linux bridge agent and configure it to start when the system boots:

# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service

[root@controller ~]# . admin-openrc 
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 03df6417-fff8-4e1f-b63c-2166b2856dde | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| 2b4558e8-6e17-43e6-9615-3c0b833b38cb | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 2e25a640-dc03-4cdc-862a-dfb7a4146ec1 | Linux bridge agent | compute1   | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 3838b84f-fdca-440a-a1b7-c9cab543ac5a | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 75733779-679b-4a20-9ec7-5dcadc009012 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

----------------------------------------------------------
Adding neutron (the linuxbridge agent) for the compute node to the all-in-one deployment

Work on the 10.0.0.31 system:
 yum install openstack-neutron-linuxbridge ebtables ipset -y
[root@compute1 ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.old
[root@compute1 ~]# vi /etc/neutron/neutron.conf
[root@compute1 ~]# scp 10.0.0.100:/etc/neutron/neutron.conf /etc/neutron
neutron.conf                                                          100%   71KB  27.4MB/s   00:00    
[root@compute1 ~]# ls -l /etc/neutron/neutron.conf
-rw-r----- 1 root neutron 72511  7월 25 14:59 /etc/neutron/neutron.conf

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = true
local_ip = 10.0.0.101
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[root@compute1 ~]# lsmod|grep  br_netfilter
[root@compute1 ~]# modprobe  br_netfilter
[root@compute1 ~]# lsmod|grep  br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
[root@compute1 ~]# vi /etc/sysctl.conf 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

[root@compute1 ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
---------------------------------------------------------
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service

[root@controller ~]# . keystonerc_admin
[root@controller ~(keystone_admin)]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 188c780c-f4ed-44ad-8433-4e6eb486ee4c | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 452d66a5-a374-4322-8a74-109126fc2c6c | Linux bridge agent | compute1   | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 747faa58-bd78-4cba-920c-b7a5458d82ec | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 7809b85b-69aa-454b-ab50-787bb2ff73c1 | Open vSwitch agent | controller | None              | :-)   | UP    | neutron-openvswitch-agent |
| 9f5ca34b-d967-41cc-9afa-b145e5c082b2 | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| aebbbb33-31f3-457d-9fc1-45f8d40a1ec3 | Metering agent     | controller | None              | :-)   | UP    | neutron-metering-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

-------------------------------------------------------------------------
Dashboard

Install and configure components
yum install openstack-dashboard -y

vi /etc/openstack-dashboard/local_settings

185 OPENSTACK_HOST = "controller"
       ALLOWED_HOSTS = ['*', 'localhost']
Replace at line 162:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

156 CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
77 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}  (add at line 4)
systemctl restart httpd.service memcached.service

http://10.0.0.11/dashboard  
Log in as default/admin/ADMIN_PASS or default/myuser/abc123
Settings -> change the timezone

-------------------------------------------------------------------
Launching an instance

https://docs.openstack.org/install-guide/launch-instance.html

