== Hostnames ==
* bender.sugarlabs.org

== Hardware ==
* HPE ProLiant DL360 Gen10 1RU server
** Dual socket, current configuration has one CPU
* [https://ark.intel.com/content/www/us/en/ark/products/199342/intel-xeon-gold-5218r-processor-27-5m-cache-2-10-ghz.html Xeon Gold 5218R]
** 20 cores/40 threads
** 2.1GHz base frequency, 4.0GHz max turbo frequency
** 27.5MB of cache
* 64GB RAM
* 2TB NVMe SSD
* Ubuntu 20.04.2 LTS

== Info ==
Owned by Sugar Labs, Inc.

Hosted by Sonic in Santa Rosa, CA.

Bender and Papert are two twin KVM hosts bought by Sugar Labs in 2021.

== Admins ==
* [[User:MrBIOS|Alex Perez]], MrBIOS on #sugar Libera.chat IRC network
* [[User:Bernie|Bernie Innocenti]], bernie on #sugar Libera.chat IRC network

== Network configuration ==
Network configuration is managed via [https://netplan.io/ netplan]. To modify it, edit the config and regenerate:

 vi /etc/netplan/bender.yaml
 netplan generate
 netplan apply
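As a rough illustration of what <code>/etc/netplan/bender.yaml</code> might contain for a bridged static setup, here is a sketch; the interface name, addresses, and gateway below are placeholders (RFC 5737 documentation addresses), not Bender's actual values:

```yaml
# Hypothetical netplan sketch: a static address on a bridge.
# All names and addresses are placeholders, not Bender's real config.
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
  bridges:
    br0:
      interfaces: [eno1]
      addresses: [192.0.2.10/28]   # placeholder; real address is Sonic-assigned
      gateway4: 192.0.2.1          # placeholder gateway
      nameservers:
        addresses: [192.0.2.2]     # placeholder resolver
```

After editing, <code>netplan generate</code> validates and renders the backend config, and <code>netplan apply</code> activates it.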

=== IPv4 ===
Bender is globally accessible through a public, static IPv4 address.

Sonic assigned a /28 network to Sugar Labs. IP assignments are managed in our DNS configuration. Search for "Sonico IP pool" in <code>masters/sugarlabs.org.zone</code>.
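For context, a /28 is a small block: 16 addresses, 14 of them usable for hosts. A quick sanity check, using the RFC 5737 documentation prefix as a stand-in since the actual Sonic prefix is recorded only in the zone file:

```shell
# Count total and usable addresses in a /28 (documentation prefix as stand-in).
python3 -c "import ipaddress; n = ipaddress.ip_network('192.0.2.0/28'); print(n.num_addresses, n.num_addresses - 2)"
# prints: 16 14
```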

=== IPv6 ===
IPv6 configuration is being discussed with Sonic net admins.

=== Bridges ===
The br0 bridge is created at startup and shared with the virtual machines hosted on Bender. It gives the VMs unfiltered access to the external network. There is no DHCP; all machines must define a static IP configuration, taking care not to stomp on each other.

There is also a virbr0 bridge, created by libvirt on startup from <code>/etc/libvirt/qemu/networks/default.xml</code>. This is a NAT interface and is not meant for VMs directly serving on the Internet.

Guests simply need to be configured to accept IPv6 routing advertisements. DNS must be configured manually.
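As a sketch of how a guest attaches to br0, its libvirt domain XML would include an interface element along these lines (the MAC address is a placeholder; libvirt generates one if it is omitted):

```xml
<!-- Hypothetical interface stanza for a guest NIC bridged onto br0. -->
<interface type='bridge'>
  <mac address='52:54:00:00:00:01'/>  <!-- placeholder MAC -->
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```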

== Virtualization ==
Virtual machines are KVM guests managed with libvirt.

=== Storage ===
All virtual disks come from a pool backed by the main LVM VG:

 virsh # pool-define-as nvme-pool logical --source-name nvme-pool
 Pool nvme-pool defined
 
 virsh # pool-start nvme-pool
 Pool nvme-pool started
 
 virsh # pool-autostart nvme-pool
 Pool nvme-pool marked as autostarted
 
 virsh # pool-info nvme-pool
 Name:           nvme-pool
 UUID:           5812819f-b8bf-484e-98fb-2e100fe83df2
 State:          running
 Persistent:     yes
 Autostart:      yes
 Capacity:       1.64 TiB
 Allocation:     250.00 GiB
 Available:      1.40 TiB

Disks assigned to VMs will appear here:

 virsh # vol-list nvme-pool
  Name         Path
 ------------------------------------------
  aslo1-root   /dev/nvme-pool/aslo1-root
  aslo1-srv    /dev/nvme-pool/aslo1-srv
  backup       /dev/nvme-pool/backup

Disks can be created, listed and deleted using the <code>vol-*</code> commands:

 virsh # vol-create-as nvme-pool testvm-root 20G
 Vol testvm-root created
 
 virsh # vol-info testvm-root --pool nvme-pool
 Name:           testvm-root
 Type:           block
 Capacity:       20.00 GiB
 Allocation:     20.00 GiB
 
 virsh # vol-delete testvm-root --pool nvme-pool
 Vol testvm-root deleted

Please avoid allocating large VM volumes as image files within the host's root filesystem: they're slow and hard to manage. It's OK to use image files for test VMs.

{{Special:PrefixIndex/{{PAGENAME}}/}}

[[Category:Machine]]