Difference between revisions of "Machine/bender"

From Sugar Labs

Latest revision as of 17:12, 1 July 2024

Info

Bender and Papert are two twin blade servers donated to Sugar Labs in 2021.

Bender is our primary KVM host, while Papert is a hot standby and backup machine.

Hostnames

  • bender.sugarlabs.org
  • papert.sugarlabs.org

Hardware

  • HPE ProLiant DL360 Gen10 1RU server
    • Dual socket, current configuration has one CPU
  • Xeon Gold 5218R
    • 20 cores/40 threads
    • 2.1GHz base frequency, 4.0GHz max turbo frequency
    • 27.5MB of cache
  • 64GB RAM
  • 2TB NVMe SSD
[Image: BenderRacked.jpg]

Admins

  • Alex Perez, MrBIOS on #sugar Libera.chat IRC network
  • Bernie Innocenti, @bernie:matrix.org on Sugar Systems

Hosting

[Image: SonicColo.jpg]

Hosted by Sonic in Santa Rosa, CA.

Network configuration

Network configuration is managed via netplan. To modify, do:

vi /etc/netplan/bender.yaml
netplan generate
netplan apply
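The actual contents of bender.yaml are not reproduced here; the following is only a minimal sketch of what a netplan configuration for this setup might look like, assuming the host's static address sits on the br0 bridge described under "Bridges" below. The NIC name eno1 and the specific pool address are assumptions.

```yaml
# Hypothetical sketch of /etc/netplan/bender.yaml -- the real file may differ.
# Assumes the physical NIC (eno1, an assumption) is enslaved to the br0
# bridge, which carries a static address from the Sonic /28 pool.
network:
  version: 2
  ethernets:
    eno1: {}
  bridges:
    br0:
      interfaces: [eno1]
      addresses: [192.184.220.210/28]    # example address from the pool
      routes:
        - to: default
          via: 192.184.220.209           # Sonic gateway
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
```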

IPv4

Bender and Papert are each globally accessible through a public, static IPv4 address.

Sonic assigned a /28 network to Sugar Labs. IP assignments are managed in our DNS configuration. Search for "Sonic IP pool" in masters/sugarlabs.org.zone.

  • Usable IPv4 addresses: 192.184.220.210 ~ 192.184.220.222 (13 addresses)
  • Subnet Mask: 255.255.255.240
  • Default Gateway: 192.184.220.209
  • DNS: 8.8.8.8, 8.8.4.4
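The address arithmetic above can be double-checked with Python's standard ipaddress module; the network base 192.184.220.208/28 is inferred here from the mask and gateway, as it is not stated explicitly in the assignment.

```python
import ipaddress

# The /28 implied by gateway 192.184.220.209 and mask 255.255.255.240.
net = ipaddress.ip_network("192.184.220.208/28")

hosts = list(net.hosts())             # .209 through .222 (14 host addresses)
usable = [str(h) for h in hosts[1:]]  # exclude the gateway at .209

print(str(net.netmask))               # 255.255.255.240
print(usable[0], "~", usable[-1])     # 192.184.220.210 ~ 192.184.220.222
print(len(usable))                    # 13
```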

IPv6

IPv6 is set up differently from IPv4: instead of putting our block directly on the wire, Sonic routes it over a point-to-point transport link.

  • Public block: 2001:5a8:601:f::/64
  • Sonic gateway: 2001:5a8:5:3a::15:0/127
  • Transport IP: 2001:5a8:5:3a::15:1/127

The gateway is configured to route all traffic for our netblock to the transport IP, which is currently assigned to bender.
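A /127 point-to-point link holds exactly two addresses, the gateway and Bender's transport IP, which can be confirmed with the ipaddress module:

```python
import ipaddress

# The point-to-point transport link between Sonic's gateway and bender.
link = ipaddress.ip_network("2001:5a8:5:3a::15:0/127")
addrs = [str(a) for a in link]
print(addrs)  # ['2001:5a8:5:3a::15:0', '2001:5a8:5:3a::15:1']

# The public /64 is routed via the second address (bender).
block = ipaddress.ip_network("2001:5a8:601:f::/64")
print(block.num_addresses)  # 2**64 addresses available for guests
```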

Bridges

The br0 bridge is created at startup and shared with the virtual machines hosted on Bender. It gives the VMs unfiltered access to the external network. There is no DHCP; every machine must define a static IP configuration, taking care not to collide with the others.

There is also a virbr0 bridge created by libvirt on startup from /etc/libvirt/qemu/networks/default.xml. This is a NAT interface and is not meant for VMs directly serving on the Internet.
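For reference, the stock libvirt "default" network definition from which virbr0 is created typically looks like the following; the exact contents of Bender's default.xml are an assumption and may differ.

```xml
<!-- Typical stock libvirt "default" NAT network; Bender's actual
     /etc/libvirt/qemu/networks/default.xml may differ in details. -->
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```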

Guests simply need to be configured to accept IPv6 router advertisements. DNS must be configured manually.
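On a Linux guest this usually comes down to a sysctl setting; the interface name enp1s0 below is an assumption, and accept_ra=2 keeps router advertisements honored even if IPv6 forwarding is enabled on the guest.

```shell
# On a guest: honor IPv6 router advertisements from the upstream router.
# accept_ra=2 accepts RAs even when IPv6 forwarding is enabled.
sysctl -w net.ipv6.conf.enp1s0.accept_ra=2

# Persist across reboots:
echo 'net.ipv6.conf.enp1s0.accept_ra = 2' > /etc/sysctl.d/90-ipv6-ra.conf
```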

Virtualization

Virtual machines are KVM guests managed with libvirt.

Storage

All virtual disks are allocated from a storage pool backed by the main LVM volume group (VG):

 virsh # pool-define-as nvme-pool logical --source-name nvme-pool 
 Pool nvme-pool defined
 
 virsh # pool-start nvme-pool
 Pool nvme-pool started
 
 virsh # pool-autostart nvme-pool
 Pool nvme-pool marked as autostarted
 
 virsh # pool-info nvme-pool
 Name:           nvme-pool
 UUID:           5812819f-b8bf-484e-98fb-2e100fe83df2
 State:          running
 Persistent:     yes
 Autostart:      yes
 Capacity:       1.64 TiB
 Allocation:     250.00 GiB
 Available:      1.40 TiB

Disks assigned to VMs will appear here:

virsh # vol-list nvme-pool
 Name         Path
-----------------------------------------
 aslo1-root   /dev/nvme-pool/aslo1-root
 aslo1-srv    /dev/nvme-pool/aslo1-srv
 backup       /dev/nvme-pool/backup

Disks can be created, listed and deleted using the vol-* commands:

 virsh # vol-create-as nvme-pool testvm-root 20G
 Vol testvm-root created
 
 virsh # vol-info testvm-root --pool nvme-pool
 Name:           testvm-root
 Type:           block
 Capacity:       20.00 GiB
 Allocation:     20.00 GiB
 
 virsh # vol-delete testvm-root --pool nvme-pool
 Vol testvm-root deleted

NOTE: avoid allocating large VM volumes as image files within the host's root filesystem, as they are slow and hard to manage. It's OK to use image files for test VMs.
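A volume created as above can then be attached to a new guest. The following virt-install invocation is a hypothetical sketch, not a recorded command: the VM name, sizing, and install ISO path are all assumptions.

```shell
# Hypothetical example: create a guest backed by an nvme-pool volume
# and attached to the br0 bridge. Names, sizes, and the ISO path are
# assumptions for illustration only.
virt-install \
  --name testvm \
  --memory 2048 --vcpus 2 \
  --disk vol=nvme-pool/testvm-root \
  --network bridge=br0 \
  --cdrom /srv/iso/installer.iso \
  --graphics none --console pty,target_type=serial
```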