 netplan generate
 netplan apply

=== IPv4 ===

Bender is globally accessible through a public, static IPv4 address.

Sonic assigned a /28 network to Sugar Labs. IP assignments are managed in our DNS configuration. Search for "Sonico IP pool" in <code>masters/sugarlabs.org.zone</code>.
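For illustration, an assignment from the pool would appear as an ordinary A record in that zone; the name and address below are hypothetical placeholders, not real allocations:

 ; Sonico IP pool (hypothetical example entry)
 examplevm      IN  A   203.0.113.18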
=== IPv6 ===
 
IPv6 configuration is being discussed with Sonic net admins.

=== Bridges ===
 
The br0 bridge is created at startup and shared with the virtual machines hosted on Bender. It gives the VMs unfiltered access to the external network. There is no DHCP; every machine must define a static IP configuration, taking care not to collide with the addresses already assigned to others.
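A minimal host-side netplan definition for such a bridge could look like the sketch below; the NIC name, addresses, and gateway are placeholders, not Bender's actual values:

 # Illustrative sketch only; interface name and addresses are assumptions
 network:
   version: 2
   ethernets:
     eno1:
       dhcp4: false
   bridges:
     br0:
       interfaces: [eno1]
       addresses: [203.0.113.17/28]   # placeholder address from the documentation range
       routes:
         - to: default
           via: 203.0.113.30          # placeholder gateway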
Guests simply need to be configured to accept IPv6 router advertisements. DNS servers must be assigned manually.
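On the guest side, a netplan snippet along these lines would do; the interface name and all addresses are placeholders:

 # Illustrative guest sketch, not a real configuration
 network:
   version: 2
   ethernets:
     enp1s0:
       dhcp4: false
       addresses: [203.0.113.20/28]   # static IPv4, coordinated manually
       routes:
         - to: default
           via: 203.0.113.30          # placeholder IPv4 gateway
       accept-ra: true                # take the IPv6 prefix and route from router advertisements
       nameservers:
         addresses: ["2001:db8::53"]  # DNS is not advertised, so set it explicitly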
== Virtualization ==
Virtual machines are KVM guests managed with libvirt.
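The usual libvirt tooling applies; for example, the guests defined on the host can be listed with:

 virsh list --all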
=== Storage ===

All virtual disks come from a libvirt storage pool backed by the host's main LVM volume group (<code>nvme-pool</code>):
 virsh # pool-define-as nvme-pool logical --source-name nvme-pool
 Pool nvme-pool defined
 
 virsh # pool-start nvme-pool
 Pool nvme-pool started
 
 virsh # pool-autostart nvme-pool
 Pool nvme-pool marked as autostarted
 
 virsh # pool-info nvme-pool
 Name:           nvme-pool
 UUID:           5812819f-b8bf-484e-98fb-2e100fe83df2
 State:          running
 Persistent:     yes
 Autostart:      yes
 Capacity:       1.64 TiB
 Allocation:     250.00 GiB
 Available:      1.40 TiB
Disks assigned to VMs will appear here:
 virsh # vol-list nvme-pool
  Name         Path
 ------------------------------------------
  aslo1-root   /dev/nvme-pool/aslo1-root
  aslo1-srv    /dev/nvme-pool/aslo1-srv
  backup       /dev/nvme-pool/backup
Disks can be created, listed and deleted using the vol-* commands:
 virsh # vol-create-as nvme-pool testvm-root 20G
 Vol testvm-root created
 
 virsh # vol-info testvm-root --pool nvme-pool
 Name:           testvm-root
 Type:           block
 Capacity:       20.00 GiB
 Allocation:     20.00 GiB
 
 virsh # vol-delete testvm-root --pool nvme-pool
 Vol testvm-root deleted
Please avoid allocating large VM volumes as image files within the host's root filesystem: they are slow and hard to manage. It is fine to use image files for test VMs.
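For reference, a guest that uses such an LVM volume directly (rather than an image file) could be created with a virt-install invocation along these lines; the VM name, resources, installer path, and OS setting are placeholders, not a record of how the existing guests were created:

 virt-install \
   --name testvm \
   --memory 4096 --vcpus 2 \
   --disk /dev/nvme-pool/testvm-root \
   --network bridge=br0 \
   --cdrom /path/to/installer.iso \
   --osinfo generic \
   --noautoconsole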
