* 2TB NVMe SSD
* Ubuntu 20.04.2 LTS

[[Image:BenderRacked.jpg|thumb|320px]]

== Info ==

* [[User:MrBIOS|Alex Perez]], MrBIOS on #sugar Libera.chat IRC network
* [[User:Bernie|Bernie Innocenti]], bernie on #sugar Libera.chat IRC network

== Hosting ==

[[Image:SonicColo.jpg|thumb|320px]]

Hosted by Sonic in Santa Rosa, CA.
== Network configuration ==

=== IPv6 ===

IPv6 configuration is a bit unusual:

* Public block: 2001:5a8:601:f::/64
* Sonic gateway: 2001:5a8:5:3a::15:0/127
* Transport IP: 2001:5a8:5:3a::15:1/127

The gateway is configured to route all traffic for our netblock to the transport IP, which is currently assigned to bender.
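As a rough sketch, assuming the uplink interface is named eno1 (a hypothetical name, the real one may differ), the equivalent manual configuration looks roughly like this; the persistent version belongs in the host's network configuration rather than in an ad-hoc script:

  # Transport address and default route towards the Sonic gateway
  ip -6 addr add 2001:5a8:5:3a::15:1/127 dev eno1
  ip -6 route add default via 2001:5a8:5:3a::15:0
  # bender forwards traffic for 2001:5a8:601:f::/64 on to the guests,
  # so IPv6 forwarding must be enabled
  sysctl -w net.ipv6.conf.all.forwarding=1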
=== Bridges ===

Guests simply need to be configured to accept IPv6 router advertisements. The DNS must be assigned manually.
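On a Linux guest, that amounts to roughly the following (eth0 is a placeholder for the guest's actual interface name):

  # Accept router advertisements and autoconfigure an address from the /64
  sysctl -w net.ipv6.conf.eth0.accept_ra=1
  sysctl -w net.ipv6.conf.eth0.autoconf=1
  # Verify that an address and a default route were picked up
  ip -6 addr show dev eth0
  ip -6 route show default
  # DNS is not advertised, so add a resolver to /etc/resolv.conf (or the
  # guest's own network configuration) by hand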
== Virtualization ==

Virtual machines are KVM guests managed with libvirt.
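Day-to-day operations use the standard virsh commands, for example (aslo1 is used here just as an example guest name):

  virsh list --all           # all defined guests and their current state
  virsh start aslo1          # boot a guest
  virsh console aslo1        # attach to its serial console (exit with ^])
  virsh shutdown aslo1       # ask the guest to power down cleanly
  virsh autostart aslo1      # start the guest automatically when the host boots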
=== Storage ===

All virtual disks come from a pool backed by the main LVM VG:
  virsh # pool-define-as nvme-pool logical --source-name nvme-pool
  Pool nvme-pool defined
  
  virsh # pool-start nvme-pool
  Pool nvme-pool started
  
  virsh # pool-autostart nvme-pool
  Pool nvme-pool marked as autostarted
  
  virsh # pool-info nvme-pool
  Name:           nvme-pool
  UUID:           5812819f-b8bf-484e-98fb-2e100fe83df2
  State:          running
  Persistent:     yes
  Autostart:      yes
  Capacity:       1.64 TiB
  Allocation:     250.00 GiB
  Available:      1.40 TiB
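Since the pool sits on top of the nvme-pool volume group (the --source-name argument above), the same storage can also be inspected with the plain LVM tools on the host:

  # Each libvirt volume in the pool is an ordinary logical volume in the VG
  vgs nvme-pool              # VG size and free space
  lvs nvme-pool              # one LV per virtual disk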
Disks assigned to VMs will appear here:

  virsh # vol-list nvme-pool
   Name         Path
  -----------------------------------------
   aslo1-root   /dev/nvme-pool/aslo1-root
   aslo1-srv    /dev/nvme-pool/aslo1-srv
   backup       /dev/nvme-pool/backup
Disks can be created, listed and deleted using the vol-* commands:

  virsh # vol-create-as nvme-pool testvm-root 20G
  Vol testvm-root created
  
  virsh # vol-info testvm-root --pool nvme-pool
  Name:           testvm-root
  Type:           block
  Capacity:       20.00 GiB
  Allocation:     20.00 GiB
  
  virsh # vol-delete testvm-root --pool nvme-pool
  Vol testvm-root deleted
Please avoid allocating large VM volumes as image files within the host's root filesystem: they are slow and hard to manage. It's OK to use image files for test VMs.
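As a sketch, an extra pool-backed disk can instead be created and attached like this (testvm, testvm-data and vdb are placeholders for the actual guest name, volume name and target device):

  # Create a 20G logical volume in the pool and attach it to the guest
  virsh vol-create-as nvme-pool testvm-data 20G
  virsh attach-disk testvm /dev/nvme-pool/testvm-data vdb --persistent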
