<noinclude>{{TOCright}}</noinclude>

== Hostnames ==
* bender.sugarlabs.org

== Hardware ==
* HPE ProLiant DL360 Gen10 1RU server
** Dual socket; the current configuration has one CPU installed
* [https://ark.intel.com/content/www/us/en/ark/products/199342/intel-xeon-gold-5218r-processor-27-5m-cache-2-10-ghz.html Xeon Gold 5218R]
** 20 cores / 40 threads
** 2.1GHz base frequency, 4.0GHz max turbo frequency
** 27.5MB of cache
* 64GB RAM
* 2TB NVMe SSD
* Ubuntu 20.04.2 LTS

[[Image:BenderRacked.jpg|thumb|320px]]

== Info ==
Owned by Sugar Labs, Inc.

Bender and Papert are twin KVM hosts bought by Sugar Labs in 2021.

== Admins ==
* [[User:MrBIOS|Alex Perez]], MrBIOS on #sugar Libera.chat IRC network
* [[User:Bernie|Bernie Innocenti]], bernie on #sugar Libera.chat IRC network

== Hosting ==
[[Image:SonicColo.jpg|thumb|320px]]
Hosted by Sonic in Santa Rosa, CA.

== Network configuration ==

Network configuration is managed via [https://netplan.io/ netplan]. To modify it, do:

 vi /etc/netplan/bender.yaml
 netplan generate
 netplan apply
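
Since this host is administered remotely, it may be safer to stage changes with <code>netplan try</code>, which rolls the configuration back automatically unless it is confirmed within the timeout:

 netplan try --timeout 120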

=== IPv4 ===
Bender is globally accessible through a public, static IPv4 address.

Sonic assigned a /28 network to Sugar Labs. IP assignments are managed in our DNS configuration. Search for "Sonico IP pool" in <code>masters/sugarlabs.org.zone</code>.

=== IPv6 ===
IPv6 configuration is a bit unusual:

* Public block: 2001:5a8:601:f::/64
* Sonic gateway: 2001:5a8:5:3a::15:0/127
* Transport IP: 2001:5a8:5:3a::15:1/127

The gateway routes all traffic for our netblock to the transport IP, which is currently assigned to bender.
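
To illustrate, a minimal sketch of what <code>/etc/netplan/bender.yaml</code> could look like given the addressing above. The NIC name, the IPv4 addresses, and the resolver are placeholders, not the live configuration; only the IPv6 transport IP and gateway are taken from this page:

 # Hypothetical sketch of /etc/netplan/bender.yaml -- not the live file
 network:
   version: 2
   ethernets:
     eno1:                                # assumed NIC name
       dhcp4: false
   bridges:
     br0:                                 # bridge shared with the VMs (see Bridges)
       interfaces: [eno1]
       addresses:
         - 192.0.2.10/28                  # placeholder for the public IPv4 from the Sonic /28
         - "2001:5a8:5:3a::15:1/127"      # transport IP (from this page)
       gateway4: 192.0.2.1                # placeholder IPv4 gateway
       gateway6: "2001:5a8:5:3a::15:0"    # Sonic IPv6 gateway (from this page)
       nameservers:
         addresses: [9.9.9.9]             # placeholder resolver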

=== Bridges ===
The br0 bridge is created at startup and shared with the virtual machines hosted on Bender. It gives the VMs unfiltered access to the external network. There is no DHCP on this bridge, so each machine must define a static IP configuration, taking care not to stomp on the others. An existing guest can be attached to br0 as shown below.
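
A hedged example of attaching a guest NIC to br0 with virsh; the domain name <code>testvm</code> is hypothetical:

 virsh attach-interface testvm bridge br0 --model virtio --config

The <code>--config</code> flag makes the change persistent across guest restarts.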

There is also a virbr0 bridge, created by libvirt at startup from <code>/etc/libvirt/qemu/networks/default.xml</code>. This is a NAT interface and is not meant for VMs serving directly on the Internet.
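
For reference, the stock libvirt default network definition typically looks like the sketch below (UUID and MAC lines omitted; the subnet may differ on this host):

 <network>
   <name>default</name>
   <forward mode='nat'/>
   <bridge name='virbr0' stp='on' delay='0'/>
   <ip address='192.168.122.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.168.122.2' end='192.168.122.254'/>
     </dhcp>
   </ip>
 </network>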

For IPv6, guests simply need to be configured to accept router advertisements; DNS servers must be assigned manually. A minimal guest-side sketch follows.
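
A minimal sketch of a guest's netplan file under these assumptions; the NIC name, IPv4 addresses, and resolver are placeholders:

 # Hypothetical /etc/netplan/guest.yaml
 network:
   version: 2
   ethernets:
     enp1s0:                        # assumed guest NIC name
       dhcp4: false
       accept-ra: true              # IPv6 address and route come from router advertisements
       addresses:
         - 192.0.2.11/28            # placeholder static IPv4; must not collide with other VMs
       gateway4: 192.0.2.1          # placeholder IPv4 gateway
       nameservers:
         addresses: [9.9.9.9]       # DNS assigned manually; placeholder resolver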

== Virtualization ==
Virtual machines are KVM guests managed with libvirt; defined guests can be inspected as sketched below.
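
For example (the domain name <code>aslo1</code> is an assumption, inferred from the volume names further down):

 virsh list --all          # all defined guests, running or shut off
 virsh dominfo aslo1       # details for a single guest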
 
=== Storage ===
All virtual disks come from a storage pool backed by the main LVM volume group:

 virsh # pool-define-as nvme-pool logical --source-name nvme-pool
 Pool nvme-pool defined
 
 virsh # pool-start nvme-pool
 Pool nvme-pool started
 
 virsh # pool-autostart nvme-pool
 Pool nvme-pool marked as autostarted
 
 virsh # pool-info nvme-pool
 Name:           nvme-pool
 UUID:           5812819f-b8bf-484e-98fb-2e100fe83df2
 State:          running
 Persistent:     yes
 Autostart:      yes
 Capacity:       1.64 TiB
 Allocation:     250.00 GiB
 Available:      1.40 TiB

Disks assigned to VMs appear in the pool's volume list:

 virsh # vol-list nvme-pool
  Name         Path
 -----------------------------------------
  aslo1-root   /dev/nvme-pool/aslo1-root
  aslo1-srv    /dev/nvme-pool/aslo1-srv
  backup       /dev/nvme-pool/backup

Disks can be created, listed, and deleted using the <code>vol-*</code> commands:

 virsh # vol-create-as nvme-pool testvm-root 20G
 Vol testvm-root created
 
 virsh # vol-info testvm-root --pool nvme-pool
 Name:           testvm-root
 Type:           block
 Capacity:       20.00 GiB
 Allocation:     20.00 GiB
 
 virsh # vol-delete testvm-root --pool nvme-pool
 Vol testvm-root deleted

Please avoid allocating large VM volumes as image files within the host's root filesystem: they are slow and hard to manage. It's OK to use image files for test VMs.
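
Putting it together, a hedged sketch of creating a guest backed by a pool volume; the guest name, sizes, and install media are hypothetical:

 # Create an LVM-backed volume, then install a guest onto it
 virsh vol-create-as nvme-pool testvm-root 20G
 virt-install \
   --name testvm \
   --memory 4096 --vcpus 2 \
   --disk path=/dev/nvme-pool/testvm-root \
   --network bridge=br0,model=virtio \
   --os-variant ubuntu20.04 \
   --cdrom /var/tmp/ubuntu-20.04-live-server-amd64.iso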
