<noinclude>{{TOCright}}</noinclude>
== Info ==
Bender and [[Machine/papert | Papert]] are twin blade servers donated to Sugar Labs in 2021. Both machines are owned by Sugar Labs, Inc.
 
Bender is our primary KVM host, while Papert is a hot standby and [[Service/backup|backup]] machine.


== Hostnames ==
* bender.sugarlabs.org
* papert.sugarlabs.org
== Machines ==
* [[Machine/lightwave]], with [[Service/Nameservers]]
* [[Machine/weblate]], with [[Service/Weblate]], at https://weblate.sugarlabs.org and https://translate.sugarlabs.org


== Hardware ==
* 64GB RAM
* 2TB NVMe SSD
* Ubuntu 20.04.2 LTS

[[Image:BenderRacked.jpg|thumb|320px]]


== Admins ==
* [[User:MrBIOS|Alex Perez]], MrBIOS on #sugar Libera.chat IRC network
* [[User:Bernie|Bernie Innocenti]], @bernie:matrix.org on Sugar Systems
 
== Hosting ==
 
[[Image:SonicColo.jpg|thumb|320px]]
 
Hosted by Sonic in Santa Rosa, CA.


== Network configuration ==
The hosts are configured with netplan. After editing the configuration under <code>/etc/netplan/</code>, apply it with:
  netplan apply
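Because Bender is administered remotely, it may be safer to test risky changes with <code>netplan try</code>, which reverts the new configuration automatically unless it is confirmed within a timeout. A minimal example:
  netplan try --timeout 120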


=== IPv4 ===
 
Bender and Papert are each globally accessible through a public, static IPv4 address.
 
Sonic assigned a /28 network to Sugar Labs. IP assignments are managed in our DNS configuration. Search for "Sonic IP pool" in <code>masters/sugarlabs.org.zone</code>.
 
* Usable IPv4 addresses: 192.184.220.210 ~ 192.184.220.222 (13 addresses)
* Subnet Mask: 255.255.255.240
* Default Gateway: 192.184.220.209
* DNS: 8.8.8.8, 8.8.4.4
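For illustration, an assignment from the pool in the zone file is just a normal A record; the hostname and address below are hypothetical:
  ; Sonic IP pool
  example-vm      IN      A       192.184.220.215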
 
=== IPv6 ===
IPv6 configuration is a bit unusual: Sonic routes our public /64 block through a point-to-point /127 transport link.
 
* Public block: 2001:5a8:601:f::/64
* Sonic gateway: 2001:5a8:5:3a::15:0/127
* Transport IP: 2001:5a8:5:3a::15:1/127


The gateway is configured to route all traffic for our netblock to the transport IP, which is currently assigned to bender.
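The current IPv6 addresses and routes on bender can be checked with the standard iproute2 tools, for example:
  ip -6 addr show
  ip -6 route show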


=== Bridges ===
The br0 bridge is created at startup and shared with the virtual machines hosted on Bender. It gives the VMs unfiltered access to the external network. There's no DHCP; all machines must define a static IP configuration, taking care not to stomp on each other.
 
There is also a virbr0 bridge created by libvirt on startup from <code>/etc/libvirt/qemu/networks/default.xml</code>. This is a NAT interface and is not meant for VMs directly serving on the Internet.
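As a sketch, a guest that serves directly on the Internet would attach to br0 (rather than the NAT'd virbr0) with an interface stanza like this in its libvirt domain XML:
  <interface type='bridge'>
    <source bridge='br0'/>
    <model type='virtio'/>
  </interface>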


Guests simply need to be configured to accept IPv6 router advertisements. The DNS must be assigned manually.
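Putting this together, a minimal netplan configuration for a guest attached to br0 might look like the sketch below; the file name, interface name, and address are hypothetical:
  # /etc/netplan/01-netcfg.yaml (hypothetical example)
  network:
    version: 2
    ethernets:
      enp1s0:
        addresses: [192.184.220.215/28]
        gateway4: 192.184.220.209
        nameservers:
          addresses: [8.8.8.8, 8.8.4.4]
        accept-ra: true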
== Virtualization ==
Virtual machines are KVM guests managed with libvirt.
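Day-to-day guest management is done with <code>virsh</code>; a few common commands (the guest name is a placeholder):
  virsh list --all        # show all guests and their state
  virsh start testvm      # boot a guest
  virsh console testvm    # attach to the guest's serial console
  virsh autostart testvm  # start the guest automatically when the host boots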
=== Storage ===
All virtual disks come from a pool backed by the main LVM VG:
  virsh # pool-define-as nvme-pool logical --source-name nvme-pool
  Pool nvme-pool defined
 
  virsh # pool-start nvme-pool
  Pool nvme-pool started
 
  virsh # pool-autostart nvme-pool
  Pool nvme-pool marked as autostarted
 
  virsh # pool-info nvme-pool
  Name:           nvme-pool
  UUID:           5812819f-b8bf-484e-98fb-2e100fe83df2
  State:          running
  Persistent:     yes
  Autostart:      yes
  Capacity:       1.64 TiB
  Allocation:     250.00 GiB
  Available:      1.40 TiB
Disks assigned to VMs will appear here:
  virsh # vol-list nvme-pool
   Name         Path
  -----------------------------------------
   aslo1-root   /dev/nvme-pool/aslo1-root
   aslo1-srv    /dev/nvme-pool/aslo1-srv
   backup       /dev/nvme-pool/backup
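Because the pool is backed by an LVM volume group, the same volumes can also be inspected with the standard LVM tools on the host, for example:
  vgs nvme-pool
  lvs nvme-pool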
Disks can be created, listed and deleted using the vol-* commands:
  virsh # vol-create-as nvme-pool testvm-root 20G
  Vol testvm-root created
 
  virsh # vol-info testvm-root --pool nvme-pool
  Name:          testvm-root
  Type:          block
  Capacity:      20.00 GiB
  Allocation:    20.00 GiB
 
  virsh # vol-delete testvm-root --pool nvme-pool
  Vol testvm-root deleted
'''NOTE''': avoid allocating large VM volumes as image files within the host's root filesystem, as they're slow and hard to manage. It's OK to use image files for test VMs.
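As a rough sketch of creating a new guest from a pool volume (the name, sizes, installer image, and OS variant below are hypothetical):
  virsh vol-create-as nvme-pool testvm-root 20G
  virt-install --name testvm --memory 4096 --vcpus 2 \
      --disk path=/dev/nvme-pool/testvm-root \
      --network bridge=br0 \
      --cdrom /var/tmp/installer.iso --os-variant ubuntu20.04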