Machine/bender

== Info ==
Bender and [[Machine/papert | Papert]] are two twin blade servers donated to Sugar Labs in 2021.


Bender is our primary KVM host, while Papert is a hot standby and [[Service/backup|backup]] machine.

== Hostnames ==
* bender.sugarlabs.org
* papert.sugarlabs.org


== Machines ==
* [[Machine/lightwave]], running [[Service/Nameservers]]
* [[Machine/weblate]], running [[Service/Weblate]], reachable at https://weblate.sugarlabs.org and https://translate.sugarlabs.org

== Hardware ==
* HPE ProLiant DL360 Gen10 1RU server
** Dual socket; the current configuration has one CPU populated
* [https://ark.intel.com/content/www/us/en/ark/products/199342/intel-xeon-gold-5218r-processor-27-5m-cache-2-10-ghz.html Xeon Gold 5218R]
** 20 cores/40 threads
** 2.1GHz base frequency, 4.0GHz max turbo frequency
** 27.5MB of cache
* 64GB RAM
* 2TB NVMe SSD
 
[[Image:BenderRacked.jpg|thumb|320px]]
 
== Admins ==
* [[User:MrBIOS|Alex Perez]], MrBIOS on #sugar Libera.chat IRC network
* [[User:Bernie|Bernie Innocenti]], @bernie:matrix.org on Sugar Systems
 
== Hosting ==
 
[[Image:SonicColo.jpg|thumb|320px]]
 
Hosted by [https://www.sonic.com/ Sonic] in Santa Rosa, CA.
 
== Network configuration ==
 
Network configuration is managed via [https://netplan.io/ netplan]. To modify, do:
 
 vi /etc/netplan/bender.yaml
 netplan generate
 netplan apply
 
=== IPv4 ===
 
Bender and Papert are globally accessible through public, static IPv4 addresses.
 
Sonic assigned a /28 network to Sugar Labs. IP assignments are managed in our DNS configuration. Search for "Sonic IP pool" in <code>masters/sugarlabs.org.zone</code>.
 
* Usable IPv4 addresses: 192.184.220.210 to 192.184.220.222 (13 addresses)
* Subnet mask: 255.255.255.240 (/28)
* Default gateway: 192.184.220.209
* DNS: 8.8.8.8, 8.8.4.4
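
For reference, a minimal <code>/etc/netplan/bender.yaml</code> covering the IPv4 side might look like the sketch below. The interface name <code>eno1</code> and the choice of <code>.210</code> as the host address are illustrative assumptions, not the live configuration:

 # Sketch only -- interface name and host address are assumptions
 network:
   version: 2
   ethernets:
     eno1:
       addresses:
         - 192.184.220.210/28
       routes:
         - to: default
           via: 192.184.220.209
       nameservers:
         addresses: [8.8.8.8, 8.8.4.4]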
 
=== IPv6 ===
IPv6 configuration is a bit weird.
 
* Public block: 2001:5a8:601:f::/64
* Sonic gateway: 2001:5a8:5:3a::15:0/127
* Transport IP: 2001:5a8:5:3a::15:1/127
 
The gateway is configured to route all traffic for our netblock to the transport IP, which is currently assigned to bender.
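
In netplan terms, the transport address and the upstream route would look roughly like this (same caveats as the IPv4 sketch above; <code>eno1</code> is a hypothetical interface name):

 # Sketch only -- the IPv6 side of the uplink interface
 network:
   version: 2
   ethernets:
     eno1:
       addresses:
         - 2001:5a8:5:3a::15:1/127
       routes:
         - to: default
           via: 2001:5a8:5:3a::15:0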
 
=== Bridges ===
The br0 bridge is created at startup and shared with the virtual machines hosted on Bender. It gives the VMs unfiltered access to the external network. There's no DHCP; all machines must define a static IP configuration, taking care not to stomp on each other's addresses.
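
To attach a guest to br0, its libvirt domain XML carries a bridge-type interface along these lines:

 <interface type='bridge'>
   <source bridge='br0'/>
   <model type='virtio'/>
 </interface>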
 
There is also a virbr0 bridge created by libvirt on startup from <code>/etc/libvirt/qemu/networks/default.xml</code>. This is a NAT interface and is not meant for VMs directly serving on the Internet.
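
For comparison, a stock libvirt default network definition looks roughly like this (typical upstream defaults; the actual file on Bender may differ):

 <network>
   <name>default</name>
   <bridge name='virbr0'/>
   <forward mode='nat'/>
   <ip address='192.168.122.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.168.122.2' end='192.168.122.254'/>
     </dhcp>
   </ip>
 </network>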
 
For IPv6, guests simply need to be configured to accept router advertisements. DNS servers must be assigned manually.
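
On a netplan-based guest, that amounts to something like the following sketch (the interface name and the specific pool address are assumptions; substitute any working resolvers):

 # Guest sketch -- pick an unused address from the Sonic IPv4 pool
 network:
   version: 2
   ethernets:
     enp1s0:
       accept-ra: true          # IPv6 address and route come from RAs
       addresses:
         - 192.184.220.215/28
       routes:
         - to: default
           via: 192.184.220.209
       nameservers:
         addresses: [8.8.8.8, 8.8.4.4]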
 
== Virtualization ==
 
Virtual machines are KVM guests managed with libvirt.
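
Day-to-day guest management is done with the usual virsh commands, for example (the guest name <code>aslo1</code> matches the volumes listed under Storage below):

 virsh list --all        # all defined guests and their state
 virsh start aslo1       # boot a guest
 virsh shutdown aslo1    # request a clean ACPI shutdown
 virsh console aslo1     # attach to the guest's serial console
 virsh autostart aslo1   # start the guest automatically at host boot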
 
=== Storage ===
 
All virtual disks come from a pool backed by the main LVM VG:
 
  virsh # pool-define-as nvme-pool logical --source-name nvme-pool
  Pool nvme-pool defined
 
  virsh # pool-start nvme-pool
  Pool nvme-pool started
 
  virsh # pool-autostart nvme-pool
  Pool nvme-pool marked as autostarted
 
  virsh # pool-info nvme-pool
  Name:          nvme-pool
  UUID:          5812819f-b8bf-484e-98fb-2e100fe83df2
  State:          running
  Persistent:    yes
  Autostart:      yes
  Capacity:      1.64 TiB
  Allocation:    250.00 GiB
  Available:      1.40 TiB
 
Disks assigned to VMs will appear here:
 
  virsh # vol-list nvme-pool
   Name         Path
  -----------------------------------------
   aslo1-root   /dev/nvme-pool/aslo1-root
   aslo1-srv    /dev/nvme-pool/aslo1-srv
   backup       /dev/nvme-pool/backup
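
Because the pool is LVM-backed, the same volumes are visible with the ordinary LVM tools:

 vgs nvme-pool    # the backing volume group
 lvs nvme-pool    # one logical volume per virsh volume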
 
Disks can be created, listed, and deleted using the <code>vol-*</code> commands:
 
  virsh # vol-create-as nvme-pool testvm-root 20G
  Vol testvm-root created
 
  virsh # vol-info testvm-root --pool nvme-pool
  Name:          testvm-root
  Type:          block
  Capacity:      20.00 GiB
  Allocation:    20.00 GiB
 
  virsh # vol-delete testvm-root --pool nvme-pool
  Vol testvm-root deleted
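
Putting it together, a new guest can be installed onto a freshly created volume with virt-install. This is a sketch only: the guest name, sizing, and ISO path are assumptions:

 virsh vol-create-as nvme-pool testvm-root 20G
 virt-install --name testvm --memory 4096 --vcpus 2 \
   --disk vol=nvme-pool/testvm-root \
   --network bridge=br0 \
   --osinfo ubuntu22.04 \
   --cdrom /var/tmp/ubuntu-22.04-live-server-amd64.iso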
 
'''NOTE''': avoid allocating large VM volumes as image files within the host's root filesystem; they're slow and hard to manage. It's OK to use image files for test VMs.