Tag Archives: FreeBSD

Mirroring OmniOS: The Complete Guide; Part One

Chapter Ⅰ

I know that “Complete Guide” and “Part One” are a bit of an oxymoron, but hey, be happy that I’m publishing in parts, otherwise I’d never get this blog post out at all.

Two weeks ago I decided to play with illumos again. I was speaking with a friend and we were sharing our frustrations regarding Open-Source contribution. We write the code, we submit, we get feedback, we submit again, and then we’re ghosted. It’s like the LinkedIn or Tinder version of Software Engineering.

Then I asked him about his best open-source experience and he told me “illumos of course!”.

I was amazed. I thought you had to be very technical in order to even build illumos, but it turns out there is amazing documentation on building illumos, and OmniOS (an illumos distribution) has done work to make sure that the system can be self-hosted (i.e. the OS can build itself).

So, I decided to fire up OmniOS on our hackerspace server running FreeBSD inside a bhyve VM.

The installation went smoothly, but the IPS packages were slow to download, and I might be wrong (please correct me if I am) but IPS doesn’t seem to be keeping a local copy of the files. It always downloads. Is that configurable?

Regardless, I thought that the best way to contribute is to advocate. In order to do that, I needed to make sure that IPS servers are fast in Armenia. Hence, the mirroring project was born.

Obey!

Requirements

Here is some terminology that I will use in this blog post, just so we are on the same page.

  • OmniOS: an illumos distribution
  • Origin: OmniOS’s IPS servers at pkg.omnios.org
  • Local: A copy of the Origin
  • Repository: A collection of software
  • Core: The Core Repository of OmniOS
  • Extras: The Extra Repository of OmniOS
  • IPS or PKG: The Image Packaging System and its utility, pkg
  • Zone: an illumos Zone (similar to FreeBSD Jails, Linux Containers, chroot) running on OmniOS

Now that we are on the same page, let’s talk about our setup and what we need.

  • An internet connection: duh!
  • A domain name: I decided to use pkg.omnios.illumos.am. Yes, I’m lucky like that.
  • A publicly accessible IP address.
  • A server: I am running OmniOS Stable (r151048) inside a VM. You can use bare-metal or a cloud VM if you want.
  • Storage: I am currently using around 50GB of storage; expect that to grow to around 300GB by the time we get to Part Three

Pre-Mirroring Setup

Before we set up our mirror, let’s make sure that we have a good infrastructure that we can maintain.

Here’s what we’ll create:

  • A Zone that will act as the HTTP(s) server using nginx at IP address 10.10.0.80
  • A Zone that will do the mirroring using IPS tools at 10.10.0.51
  • A virtual dumb switch (etherstub) that will connect the Zones and the Global Zone (a.k.a. the Host) together. The GZ will have an address of 10.10.0.1
  • ZFS datasets for each Core and Extras Repository (for each release)

Please note that there are many ways to do this, for example, having everything in a Global Zone, running IPS mirroring and nginx in a single Zone, not using etherstub at all, etc. But I like this setup as it will allow us to “grow” in the future.

From now on, omnios# means that we’re in the Global Zone and zone0# means we’re inside a Zone named zone0.

Let’s start with setting up our etherstub and connecting our Global Zone to it

omnios# dladm create-etherstub switch0
omnios# dladm create-vnic -l switch0 vnic0
omnios# ipadm create-if vnic0
omnios# ipadm create-addr -T static -a 10.10.0.1/24 vnic0/switch0

Done!
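If you want to sanity-check the plumbing, the usual show commands should list the new etherstub, VNIC and address:

omnios# dladm show-etherstub
omnios# dladm show-vnic vnic0
omnios# ipadm show-addr vnic0/switch0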

Now, we will set up our Zones using the zadm utility. Install zadm by running

omnios# pkg install zadm

After installing zadm, we’ll create a dataset for our Zones

omnios# zfs create -o mountpoint=/zones rpool/zones

This assumes that your ZFS pool is named rpool.

Finally, we can create our Zones. Running

omnios# zadm create -b pkgsrc www0

will open your $EDITOR, where you need to modify some JSON, here’s what mine looks like!

{
   "autoboot" : "true",
   "brand" : "pkgsrc",
   "ip-type" : "exclusive",
"dns-domain" : "omnios.illumos.am", "net" : [ { "allowed-address" : "10.10.0.80/24", "defrouter" : "10.10.0.1", "global-nic" : "switch0", "physical" : "www0" } ], "pool" : "", "scheduling-class" : "", "zonename" : "www0", "zonepath" : "/zones/www0" }

After saving the file, zadm will install the Zone.

Now let’s set up our mirroring Zone. Do the same, but change the Zone name to repo, the brand to lipkg (i.e. -b lipkg), and set the IP address to 10.10.0.51/24.
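For reference, here’s roughly what the repo Zone configuration ends up looking like (a sketch based on the www0 config above; I’m assuming the physical NIC is simply named repo):

{
   "autoboot" : "true",
   "brand" : "lipkg",
   "ip-type" : "exclusive",
   "dns-domain" : "omnios.illumos.am",
   "net" : [
      {
         "allowed-address" : "10.10.0.51/24",
         "defrouter" : "10.10.0.1",
         "global-nic" : "switch0",
         "physical" : "repo"
      }
   ],
   "pool" : "",
   "scheduling-class" : "",
   "zonename" : "repo",
   "zonepath" : "/zones/repo"
}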

All we need now is to forward the HTTP/HTTPS traffic to www0 Zone and allow all Zones to access the internet using NAT.

Create and edit the IPFilter’s NAT file at /etc/ipf/ipnat.conf, here’s an example configuration

map vioif0 10.10.0.0/24 -> 212.34.250.10

rdr vioif0 212.34.250.10/32 port 80 -> 10.10.0.80 port 80 tcp
rdr vioif0 212.34.250.10/32 port 443 -> 10.10.0.80 port 443 tcp

Make sure you set the correct interface name and the correct external IP address.
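If IPFilter isn’t already enabled on your Global Zone, you will most likely also need to turn on IPv4 forwarding and the ipfilter service for the NAT rules to take effect. Something along these lines should do it:

omnios# routeadm -u -e ipv4-forwarding
omnios# svcadm enable network/ipfilter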

Finally, we can boot our Zones!

omnios# zadm boot www0
omnios# zadm boot repo

You should see the following output when you run zadm again

omnios# zadm
NAME              STATUS     BRAND       RAM    CPUS  SHARES
global            running    ipkg        56G      12       1
repo              running    lipkg         -       -       1
www0              running    pkgsrc        -       -       1

Great! Let’s set up the mirroring process.

Mirroring Setup

Let’s create a ZFS dataset for repos for each release

repo# zfs create -o mountpoint=/repo rpool/zones/repo/ROOT/repo      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048      
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/core 
repo# zfs create rpool/zones/repo/ROOT/repo/r151048/extra

And then we use the pkgrepo command to create a repository

repo# pkgrepo create /repo/r151048/core
repo# pkgrepo create /repo/r151048/extra

And finally, we can start receiving the packages from Origin to Local

repo# pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core  '*'
repo# pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra '*'

This will take a while depending on your internet connection speed and the load on OmniOS’s Origin. It’s like a good investment: we spend bandwidth and time now so we save traffic and time later 🙂

After it’s done, we need to set the publisher of these repos to match the Origin.

repo# pkgrepo set -s /repo/r151048/core   publisher/prefix=omnios
repo# pkgrepo set -s /repo/r151048/extra/ publisher/prefix=extra.omnios
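If you want to double-check the publisher settings, pkgrepo info should now report the right prefixes:

repo# pkgrepo info -s /repo/r151048/core
repo# pkgrepo info -s /repo/r151048/extra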

And we’re done!

Now we need to serve these repos using IPS’s depot server.

We will create two instances of the depotd server, one for core and one for extra.

  • r151048/core will run on 5148
  • r151048/extra will run on 1148
  • (in the future) r151050/core will run on 5150
  • (in the future) r151050/extra will run on 1150

We start with core

repo# svccfg -s pkg/server add r151048_core
repo# svccfg -s pkg/server:r151048_core addpg pkg application
repo# svccfg -s pkg/server:r151048_core setprop pkg/inst_root = /repo/r151048/core/
repo# svccfg -s pkg/server:r151048_core setprop pkg/port = 5148
repo# svccfg -s pkg/server:r151048_core setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/core

And we do the same for extra

repo# svccfg -s pkg/server add r151048_extra
repo# svccfg -s pkg/server:r151048_extra addpg pkg application
repo# svccfg -s pkg/server:r151048_extra setprop pkg/inst_root = /repo/r151048/extra/
repo# svccfg -s pkg/server:r151048_extra setprop pkg/port = 1148
repo# svccfg -s pkg/server:r151048_extra setprop pkg/proxy_base = https://pkg.omnios.illumos.am/r151048/extra

Finally, we enable the services

repo# svcadm enable  pkg/server:r151048_core pkg/server:r151048_extra
repo# svcadm restart pkg/server:r151048_core pkg/server:r151048_extra

Let’s check!
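A quick way to check is to make sure both depot instances came up online:

repo# svcs pkg/server:r151048_core pkg/server:r151048_extra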

We’re good! Now let’s set up Nginx 🙂

The Web Server

This part is pretty easy: we log in to www0, install nginx, and set up some paths. I will be posting a copy-pasta of my configs; I assume you can do the rest 🙂

www0# pkgin update
www0# pkgin install nginx

Thank you SmartOS! 🧡

In my nginx.conf, I added

include vhosts/*.conf;

and then in /opt/local/etc/nginx/vhosts I created a file
named pkg.omnios.illumos.am.conf, which looks like this

server {
        listen 80;
        server_name pkg.omnios.illumos.am;

        location /.well-known/acme-challenge/ {
          alias /opt/local/www/acme/.well-known/acme-challenge/;
        }

        location / {
            return 301 "https://pkg.omnios.illumos.am";
        }
}

server {
    listen       443 ssl;
    server_name  pkg.omnios.illumos.am;

    ssl_certificate      /etc/ssl/pkg.omnios.illumos.am/fullchain.pem;
    ssl_certificate_key  /etc/ssl/pkg.omnios.illumos.am/key.pem;
    location /r151048/core/ {
                proxy_pass http://10.10.0.51:5148/;
    }

    location /r151048/extra/ {
                proxy_pass http://10.10.0.51:1148/;
    }

    location / {
        # This needs to be changed, later...
        add_header Content-Type text/plain;
        return 200 "ok...";
    }
}

Finally, we just need to enable nginx

www0# svcadm enable pkgsrc/nginx

and check!
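From any machine that can reach the mirror (and has curl installed), something like this should show the depots answering behind nginx:

$ curl -I https://pkg.omnios.illumos.am/r151048/core/
$ curl -I https://pkg.omnios.illumos.am/r151048/extra/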

Using the Local Repos

This part is actually pretty easy. We just need to remove all existing origins and mirrors and add our own. I will be running this on a computer named dna0.

dna0# pkg set-publisher -M '*' -G '*' omnios
dna0# pkg set-publisher -M '*' -G '*' extra.omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/core omnios
dna0# pkg set-publisher -O https://pkg.omnios.illumos.am/r151048/extra extra.omnios
dna0# pkg publisher
PUBLISHER        TYPE   STATUS P LOCATION
extra.omnios     origin online F https://pkg.omnios.illumos.am/r151048/extra/
omnios           origin online F https://pkg.omnios.illumos.am/r151048/core/

We’re good! 🙂

Fetching Updates

By the time I wanted to publish this I noticed that there’s a new OmniOS Weekly Update, so I thought, hey, maybe I should try updating the Local Repo as well… how do we do that?

Turns out I just need to pkgrecv again, and then run a refresh command.

pkgrecv -v -s https://pkg.omnios.org/r151048/core/ -d /repo/r151048/core/ '*'
pkgrepo -s /repo/r151048/core refresh

And it looks like we’re good! Maybe we can set up a simple cron job 🙂
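For example, something like this in root’s crontab on the repo Zone (crontab -e) would do the trick; the schedule is just my choice, and you’d add a pair of lines per release:

0 2 * * * pkgrecv -s https://pkg.omnios.org/r151048/core/  -d /repo/r151048/core/  '*' && pkgrepo -s /repo/r151048/core  refresh
0 3 * * * pkgrecv -s https://pkg.omnios.org/r151048/extra/ -d /repo/r151048/extra/ '*' && pkgrepo -s /repo/r151048/extra refresh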

Final Notes

This has been an amazing experience. Since I started using OmniOS two weeks ago, I’ve set up the mirror, installed two production OmniOS deployments for two organizations, and talked about it during our Armenian Hackers Radio Podcast. With this mirror completely set up, I can advocate even more!

I’d like to send my thanks (and later, my money) to the OmniOS team for the amazing work they’re doing, special thanks to andyf for answering all of my questions, neirac for pushing me to try more illumos in my life and everyone who contributed to the docs and blog posts that I used. I’ll leave some links below.

Finally, for the coming (two) posts I will talk about mirroring downloads.OmniOS.org (for ISO/USB/ZFS images) and the pkgsrc repository run by SmartOS/MNX.

Thank you for reading and thank you, illumos-community for being so nice ^_^

That’s all folks…

Links


bhyve CPU Allocation Test for 256 core machine

During the last bhyve weekly call, Michael Dexter asked me to run the bhyve CPU Allocation Test that he wrote in order to see whether the number of CPUs in the guest makes the system take longer to boot.

Here’s a post with the details of the test and my findings.

The host machine runs the following

# uname -a
FreeBSD genomic.abi.am 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64

# sysctl hw.model hw.ncpu
hw.model: AMD EPYC 7702 64-Core Processor
hw.ncpu: 256

# dmidecode -t processor | grep 'Socket Designation'
        Socket Designation: CPU1
        Socket Designation: CPU2

# sysctl hw.physmem hw.realmem hw.usermem
hw.physmem: 2185602236416
hw.realmem: 2200361238528
hw.usermem: 2091107983360

Basically, it’s FreeBSD 13.2 with 2TB of RAM and 2 CPUs with 64 cores each, 2 threads per core, totaling 256 logical cores.

The test runs a bhyve VM with a minimal FreeBSD that’s built with OccamBSD. The main changes are the following:

  • /boot/loader.conf has the line autoboot_delay="0"
  • No services are enabled
  • /etc/rc.local has the line shutdown -p now

The machine boots and then it shuts down.
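I haven’t reproduced Michael’s actual script here, but the idea is essentially a timing loop like the sketch below (it reuses the bhyve invocation shown later in this post; flags and paths are illustrative):

#!/bin/sh
# Boot the same minimal VM with 1..hw.ncpu vCPUs and log how long each boot takes.
echo "Host CPUs: $(sysctl -n hw.ncpu)"
for n in $(seq 1 "$(sysctl -n hw.ncpu)"); do
    start=$(date +%s)
    bhyve -c "${n}" -m 1024 -H -A \
        -l com1,stdio \
        -l bootrom,BHYVE_UEFI.fd \
        -s 0,hostbridge \
        -s 2,virtio-blk,vm.raw \
        -s 31,lpc \
        vm0
    end=$(date +%s)
    echo "${n} booted in $((end - start)) seconds"
    bhyvectl --destroy --vm=vm0
done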

Here’s what I’ve got in the log file →

Host CPUs: 256
1 booted in 9 seconds
2 booted in 9 seconds
3 booted in 9 seconds
4 booted in 9 seconds
5 booted in 9 seconds
6 booted in 9 seconds
7 booted in 9 seconds
8 booted in 9 seconds
9 booted in 10 seconds
10 booted in 10 seconds
11 booted in 10 seconds
12 booted in 11 seconds
13 booted in 10 seconds
14 booted in 11 seconds
15 booted in 12 seconds
16 booted in 9 seconds
17 booted in 12 seconds
18 booted in 18 seconds
19 booted in 14 seconds
20 booted in 15 seconds
21 booted in 22 seconds
22 booted in 17 seconds
23 booted in 23 seconds
24 booted in 10 seconds
25 booted in 10 seconds
26 booted in 17 seconds
27 booted in 14 seconds
28 booted in 15 seconds
29 booted in 12 seconds
30 booted in 15 seconds
31 booted in 31 seconds
32 booted in 19 seconds
33 booted in 15 seconds
34 booted in 32 seconds
35 booted in 18 seconds
36 booted in 22 seconds
37 booted in 24 seconds
38 booted in 17 seconds
39 booted in 24 seconds
40 booted in 13 seconds
41 booted in 15 seconds
42 booted in 23 seconds
43 booted in 37 seconds
44 booted in 21 seconds
45 booted in 19 seconds
46 booted in 12 seconds
47 booted in 17 seconds
48 booted in 19 seconds
49 booted in 17 seconds
50 booted in 18 seconds
51 booted in 15 seconds
52 booted in 20 seconds
53 booted in 14 seconds
54 booted in 22 seconds
55 booted in 18 seconds
56 booted in 17 seconds
57 booted in 92 seconds
58 booted in 15 seconds
59 booted in 15 seconds
60 booted in 17 seconds
61 booted in 16 seconds
62 booted in 22 seconds
63 booted in 17 seconds
64 booted in 12 seconds
65 booted in 17 seconds

At the 66th core, bhyve crashes, with the following line

Booting the VM with 66 vCPUs
Assertion failed: (curaddr - startaddr < SMBIOS_MAX_LENGTH), function smbios_build, file /usr/src/usr.sbin/bhyve/smbiostbl.c, line 936.
Abort trap (core dumped)    

From this point on, bhyve crashed at every subsequent vCPU count, so I had to stop the loop from running.

I had to look into the topology of the CPUs, which FreeBSD can report using

sysctl -n kern.sched.topology_spec

<groups>
 <group level="1" cache-level="0">
  <cpu count="256" mask="ffffffffffffffff,ffffffffffffffff,ffffffffffffffff,ffffffffffffffff">0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255</cpu>
  <children>
   <group level="2" cache-level="0">

[...]

   </group>
  </children>
 </group>
</groups>

You can find the whole output here: kern.sched.topology_spec.xml.txt

The system that we need for production requires 240 vCores. This topology gave me the idea to run that manually, using the sockets, cores and threads options →

bhyve -c 240,sockets=2,cores=60,threads=2 -m 1024 -H -A \
    -l com1,stdio \
    -l bootrom,BHYVE_UEFI.fd \
    -s 0,hostbridge \
    -s 2,virtio-blk,vm.raw \
    -s 31,lpc \
    vm0

And it booted all fine! 🙂

240 booted in 33 seconds

For production, however, I use vm-bhyve, so I’ve added the following to my configuration →

cpu="240"
cpu_sockets="2"
cpu_cores="60"
cpu_threads="2"
memory="1856G"

And yes, for those who are wondering, bhyve can virtualize 1.8T of vDRAM all fine 🙂

For my debugging nerds, I’ve also uploaded the bhyve.core file to my server, you may get it at bhyve-cpu-allocation–256.tgz

As long as this is helpful for someone out there, I’ll be happy. Sometimes I forget that not everyone runs massive clusters like we do.

That’s all folks…


FreeBSD Jail booting & running Devuan GNU+Linux with OpenRC

Two years ago I wrote a blog post named VoidLinux in FreeBSD Jail; with init, where we installed and “booted” VoidLinux in a FreeBSD Jail. I think it’s time to revise that post.

This time we will be using Devuan GNU+Linux, boot things using OpenRC and put some native FreeBSD binaries inside the Linux Jail.

Here’s what I’m running at the moment

root@srv0:~ # uname -v
FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC

To bootstrap the Devuan system, we need debootstrap. Specifically, the debootstrap that ships with Devuan Chimaera. We can start by installing debootstrap from ports/packages, and then we can modify the rest.

pkg install -y debootstrap

Now we need to fetch Devuan’s debootstrap, extract it, put some files into our debootstrap and set some symbolic links.

# Path might change over time, check https://pkginfo.devuan.org/ for the exact link
fetch http://deb.devuan.org/merged/pool/DEVUAN/main/d/debootstrap/debootstrap_1.0.123+devuan3_all.deb

# .deb files are messy, make a directory
mkdir debootstrap_devuan
mv debootstrap_1.0.123+devuan3_all.deb debootstrap_devuan/
cd debootstrap_devuan/
tar xf debootstrap_1.0.123+devuan3_all.deb
tar xf data.tar.gz

# We need chimaera (latest, symlink) and ceres (origin)
cp usr/share/debootstrap/scripts/ceres usr/share/debootstrap/scripts/chimaera /usr/local/share/debootstrap/scripts/

Now we can bootstrap our system. I will be using a ZFS filesystem, but this can be done without ZFS as well.

Keep in mind that my Jail’s path is going to be /usr/local/jails/devuan0, modify this path as needed 🙂

zfs create zroot/jails/devuan0

debootstrap --no-check-gpg --arch=amd64 chimaera /usr/local/jails/devuan0/ http://pkgmaster.devuan.org/merged/

The installation should start now, but at some point we’ll get the following error:

I: Configuring libpam-runtime...
I: Configuring login...
I: Configuring util-linux...
I: Configuring mount...
I: Configuring sysvinit-core...
W: Failure while configuring required packages.
W: See /usr/local/jails/devuan0/debootstrap/debootstrap.log for details (possibly the package package is at fault)

DON’T PANIC! This is fine 🙂 We just need to chroot inside, fix this manually and install OpenRC


chroot /usr/local/jails/devuan0 /bin/bash
# Fix base packages
dpkg --force-depends -i /var/cache/apt/archives/*.deb
# Set Cache-Start
echo "APT::Cache-Start 251658240;" > /etc/apt/apt.conf.d/00chroot
# Install OpenRC
apt update
apt install openrc

We have almost everything ready. We just need to create a password database file that the jail(8) command uses internally.

cd /usr/local/jails/devuan0/etc/
echo "root::0:0::0:0:Charlie &:/root:/bin/bash" > master.passwd
pwd_mkdb -d ./ -p master.passwd
# Restore the Linux passwd file
cp passwd- passwd

We can also copy our statically linked FreeBSD binaries into the Linux Jail so we can use them when needed

cp -a /rescue /usr/local/jails/devuan0/native

Now we just need our Jail configuration file. We can put that at /etc/jail.conf.d/devuan0.conf

(This assumes that your network is configured similarly to “VNET Jail HowTo Part 2: Networking”.)

# vim: set syntax=sh:
exec.clean;
allow.raw_sockets;
mount.devfs;

devuan0 {
  # ID == epair index :)
  $id             = "0";
  $bridge         = "bridge0";
  # Set a domain :)
  $domain         = "bsd.am";
  vnet;
  vnet.interface = "epair${id}b";

  mount.fstab     = "/etc/jail.conf.d/${name}.fstab";

  exec.prestart   = "ifconfig epair${id} create up";
  exec.prestart  += "ifconfig epair${id}a up descr vnet-${name}";
  exec.prestart  += "ifconfig ${bridge} addm epair${id}a up";

  exec.start      = "/sbin/openrc default";

  exec.stop       = "/sbin/openrc shutdown";

  exec.poststop   = "ifconfig ${bridge} deletem epair${id}a";
  exec.poststop  += "ifconfig epair${id}a destroy";

  host.hostname   = "${name}.${domain}";
  path            = "/usr/local/jails/devuan0";

  # Maybe mkdir this path :)
  exec.consolelog = "/var/log/jail/${name}.log";

  persist;
  allow.socket_af;
}

As you may have guessed, we also need an fstab file, which should go into /etc/jail.conf.d/devuan0.fstab

devfs       /usr/local/jails/devuan0/dev      devfs     rw                   0 0
tmpfs       /usr/local/jails/devuan0/dev/shm  tmpfs     rw,size=1g,mode=1777 0 0
fdescfs     /usr/local/jails/devuan0/dev/fd   fdescfs   rw,linrdlnk          0 0
linprocfs   /usr/local/jails/devuan0/proc     linprocfs rw                   0 0
linsysfs    /usr/local/jails/devuan0/sys      linsysfs  rw                   0 0
tmpfs       /usr/local/jails/devuan0/tmp      tmpfs     rw,mode=1777         0 0

Finally, let’s load some kernel modules (in case they haven’t been loaded yet)

service linux enable
service linux start
kldload netlink

Let’s start our Jail!

jail -c -f /etc/jail.conf.d/devuan0.conf

Is it running?

 # jls -N
 JID             IP Address      Hostname                      Path
 devuan0                         devuan0.bsd.am                /usr/local/jails/devuan0

Yes it is!

Now we can jexec into it and run things!

root@srv0:~ # jexec -l devuan0 /bin/bash
root@devuan0:~# uname -a
Linux devuan0.bsd.am 4.4.0 FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC x86_64 GNU/Linux

The process tree looks neat as well!

root@devuan0:~# ps f
  PID TTY      STAT   TIME COMMAND
74682 pts/1    S      0:00 /bin/bash
78212 pts/1    R+     0:00  \_ ps f
48412 ?        Ss     0:00 /usr/sbin/cron
41190 ?        Ss     0:00 /usr/sbin/rsyslogd

Let’s do some networking things! Let’s set up networking and install OpenSSH.
(This assumes that your network is configured similarly to “VNET Jail HowTo Part 2: Networking”.)

# Setup network interfaces
/native/ifconfig lo0 inet 127.0.0.1/8 up
/native/ifconfig epair0b inet 10.0.0.10/24 up
/native/route add default 10.0.0.1

# Install and start OpenSSH server
apt-get --no-install-recommends install openssh-server
rc-service ssh start

You should be able to ping things now

~# ping -n -c 1 bsd.am
ping: WARNING: setsockopt(ICMP_FILTER): Protocol not available
PING  (37.252.73.34) 56(84) bytes of data.
64 bytes from 37.252.73.34: icmp_seq=1 ttl=55 time=2.60 ms

---  ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.603/2.603/2.603/0.000 ms

To make the networking configuration persistent, we can use the rc.local file that OpenRC executes at boot.

chmod +x /etc/rc.local
echo '/native/ifconfig lo0 inet 127.0.0.1/8 up' >> /etc/rc.local
echo '/native/ifconfig epair0b inet 10.0.0.10/24 up' >> /etc/rc.local
echo '/native/route add default 10.0.0.1' >> /etc/rc.local

Do you know what this means? It means that now you can have proper ZFS, DTrace and pf firewalling with Linux. Congrats, now you have clean waters.

That’s all folks…

P.S. I would like to thank my mentor, norayr, for showing me how to start/stop OpenRC manually, and the awesome folks at #devuan for their help.


Incident Postmortem: BSD.am home server @ 3-4 July 2023

Incident Information

Between the hours of Mon Jul 3 03:05:59 2023 and Tue Jul 4 01:10:15 2023 the home server named BSD.am (also known as pingvinashen.am) was completely down.

The event was triggered by a battery issue due to high temperature at the apartment where the home server resides.

A swollen battery caused the computer to shut down as it introduced higher-than-normal heat into the system.

The event was detected by the monitoring system at mon.bsd.am which notified the operators using email and chat systems (XMPP).

This incident affected 100% of the users of the following services:

  • jabber.am public XMPP server
  • conference.jabber.am public XMPP MUC server
  • օրագիր.հայ public WriteFreely instance
  • սարեան.ցանցառներ.հայ public Lobste.rs instance
  • BIND.am public DNS server and its zones
  • Multiple hosted blogs, including this one you’re reading.
  • A private ZNC server for Armenian Hackers Community
  • git.bsd.am public Gitea server
  • A matterbridge instance connecting multiple communities
  • A Huginn instance automating tasks (such as RSS to Telegram, RSS to newsletter) for Armenian Hackers Communities
  • A newsletter instance running listmonk.app
  • A private Miniflux.app server for Armenian Hackers Community
  • FreeBSD Jail users’ meetup website

Multiple community members contacted the operator (yours truly) asking for an ETA.

Response

After receiving an email at Mon Jul 3 03:06:49 2023, the Chief Debugging Officer (yours truly) started analyzing the possible issue. According to Monit (mon.bsd.am) all the services were unavailable and the server was not reachable by IP (based on ICMP).

The usual possibility, network failure at the ISP level, was ruled out, as the second home server (arnet.am) was functioning properly.

The person physically closest to the server was the operator’s sibling (lucy.vartanian.am); however, she did not have a background in Unix system administration or hardware maintenance. Also, she was asleep.

Hours later, the siblings (yours truly included) organized a FaceTime call to debug the issue remotely.

The system did boot the kernel properly; however, it would shut down before the services could complete their startup.

Clearly, the machine needed to be shipped to the operator (yours truly) to be debugged on the spot.

So that’s what the team did.

[Image omitted; precise addresses are removed for privacy]

Recovery

At the operator’s (yours truly) location, the BIOS logs showed that the system suffered from an ASF2 Force Off. This usually means a thermal problem.

The operator (yours truly) disassembled the laptop, hoping the system needed only a little dust cleanup and fresh thermal paste.

Turns out the problem was actually a swollen battery.


After removing the battery, the system booted fine. Just to be sure that the swollen battery was the root cause, a complete system stress test was run. No issues were detected (well, except “Missing Battery”).

The system was returned to its residence, connected to the internet, and all services were accessible again.

[Image omitted; precise addresses are removed for privacy]

Next Steps

  • Install a new battery in the future, as the laptop is not connected to a UPS
  • Make sure to test the hardware during environmental changes (too cold, too hot, etc)
  • Run a simple status page with an RSS feed in a separate environment and notify users

If you’re new here, then first of all I’d like to thank you for reading this IR Postmortem article.

Yes, this was an IR Postmortem of a home server of a tiny community in a tiny country. This was not about Amazon, Google, Netflix, etc.

I wrote this for two reasons.

First, I wanted to show you how awesome the actual internet is. You see, when Amazon dies, everything dies with it. Your startup infra, your website, your hobby projects, everything.

When my server dies, only my server dies. And that’s the beauty of the internet. If you can, please, keep that beauty going.

Second, I run a small security company, illuria, Inc., where we help companies harden their environment and recover from incidents. It’s been years since I wrote an IR postmortem personally (my team members who do that are way smarter than me!), and I thought it would be a nice exercise to write it all by myself 🙂

I hope you liked this.

That’s all folks…


Antranig Vartanian

July 1, 2023

A customer asked me to help them set up a tiny lab with many open-source tools. They are planning to move from corporate services to open-source alternatives such as NextCloud, Gitea, etc.

Unfortunately, they run only Linux, Ubuntu to be more specific, and as a UNIX gentleman, I didn’t want to put everything into a single host, so I decided to use containers, in this case LXC, a.k.a. Linux Containers.

How hard could it be?

Oh god, layers of abstraction within the system that have no idea about each other.

Like, who would assume that LXC would automatically download and install dnsmasq and assign IP addresses without my knowledge, or that it would push rules into the firewall?

The more I use Linux Containers, the more I understand why FreeBSD Jails / illumos Zones didn’t win.

People don’t want automation or control, they want “please do this for me as I don’t wanna do it myself” tools.

I’d expect at least a message post-installation that says “We have installed and configured dnsmasq, reconfigured some systemd things, modified the following file (which is not mentioned in any man page, so you can use Google instead of man/apropos) and will use IP address ranges that you didn’t approve”

Is this why Docker won? Is it because people DIDN’T want to learn how to do software packaging? I hope not. I wanna believe it’s because developers wanted to “think operationally”.

Oh, and from a FreeBSD perspective, what’s even more weird is that

  1. there are no proper manual pages.
  2. the documentation is weird. It talks about a utility named lxc but I’m using 20 utilities named lxc-*, and I still cannot find the proper documentation for that
  3. it’s very much segmented. For example, on FreeBSD we talk about which is better: jail.conf, BastilleBSD, pot, AppJail, or Jailer. Here, the same utility (lxc) has multiple config files with no proper versioning, pretty complex manual pages, and not even proper examples or HowTos.

I’m looking at this and thinking ”oh well, if we build a proper tool, I bet we can win some of the market” until you realize, of course, that when people hear FreeBSD, they will be thinking ”it’s not Linux? maybe it’s not worth it, otherwise I would’ve heard about it”

I’m just angry here. Please ignore my rants.

Cheers y’all.


FreeBSD package repo with specific versions

illuria’s ProfilerX runs on LureOS, which is our custom operating system based on FreeBSD.

To update the operating system we rely on two tools, pkg(8) for packages and freebsd-update for the base.

Initially, I set up our poudriere and package repo in the FreeBSD way, so our URLs look like /FreeBSD:13:amd64/devel and /FreeBSD:13:amd64/prod. This is done by expanding the ${ABI} variable, similar to what FreeBSD does in FreeBSD.conf.

This worked fine at first, but now that there’s a new FreeBSD out there (13.2), I didn’t want to put the new packages in the old URL, but rather have a URL for each major.minor version. This is mostly for the enterprises that take their time upgrading software.

Turns out the easiest way to do this (after reading the pkg.conf(5) manual page) is to use the VERSION_MAJOR and VERSION_MINOR variables.

The new LureOS will use /${ABI}/${VERSION_MINOR}/repo, which will expand to /FreeBSD:13:amd64/1/devel, making it easier for us to extend life after a new release.
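As an illustration, a client-side repository configuration could then look something like this (a sketch; the hostname and repository name are placeholders, not our real ones):

# /usr/local/etc/pkg/repos/LureOS.conf
LureOS: {
  url: "pkg+https://pkg.example.org/${ABI}/${VERSION_MINOR}/prod",
  enabled: yes
}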

That’s all folks…


libucl wrapper in Oberon-2 for Vishap Oberon Compiler

Like I said in my previous post, this is a long project and it relies on a lot of things 🙂

Wrapping libxo was fun, but wrapping libucl was way more complicated. However, it is done. It’s not a complete port; however, it has the basics to get started. The goal is to have all wrappers match their libraries.

The source is at antranigv/voclibucl and here’s a screenshot of what it can do.

[Screenshot omitted]

Next, I will be improving these wrappers and then work on lzc, a.k.a. Lib_ZFS_Core 😉

See you soon 🙂


libxo wrapper in Oberon-2 for Vishap Oberon Compiler

I’m working on a new project, which is still only 10% done. For that project I chose to use the Oberon-2 programming language and the Vishap Oberon Compiler.

After seeing libxo on FreeBSD, I’m not sure I can go back to write or printf, so I decided to write an Oberon wrapper for it.

I just finished the basics but it’s already usable for day-to-day outputs, containers/lists/instances and exit codes.

The source is at antranigv/voclibxo and here’s a screenshot of what it can do.

[Screenshot omitted]

Next, I will be wrapping libucl in Oberon.

See you soon 🙂


Antranig Vartanian

March 29, 2023

After weeks of thinking, I decided that I need to fork Jailer. Yes, I want to fork my own code. There are two reasons to do this.

  1. Keep the promise of Jailer being “very compatible with FreeBSD”
  2. Have a new version that pushes the limits of compatibility.

The fork is going to be named bant, which is Armenian for jail. I think we’re all tired of Greek names at this point 🙂

I’ll share the details of bant as soon as I have a prototype, which means at least a couple of weeks.

Meanwhile, Jailer will be the very-compatible-with-FreeBSD version that doesn’t break things and allows new users to use Jails with ease.

Fingers crossed…


Design Guidelines vs Pushing The Limits

One of the design guidelines of Jailer is don’t break FreeBSD. As in, if someone installed and used Jailer, and then deleted the Jailer binary and libraries, their Jails would still run without any issues. We do this with minimal intervention; for example, jailer init patches FreeBSD’s /etc/rc.d/jail, but in a way that you wouldn’t feel the difference much. We don’t create new rc.conf variables, we just change a couple of loops. In a way, you can keep these changes even if you delete Jailer, and your system would still be much improved. Obviously, we do send these patches to FreeBSD src.

But I’m facing an issue right now. On one side, I want to keep these guidelines; on the other, pushing the limits will allow me to improve Jailer way more than I expected.

These are the things that I think about before sleep, or during the shower. I gave a promise, that I will not break the Jail ecosystem. But what if, just what if, the ecosystem was broken in the first place?

Some of you might know that we’ve been working on integrating libucl with Jail. The experiments have been going so well that I feel I want to integrate them with Jailer already, even before they get into FreeBSD (and they might not get in at all).

My dream of Jailer and its ecosystem is complex. I feel that these integrations would do good in the long term, but I want to keep the short term alive as well.

One idea is to fork Jailer and keep two versions of it: one that’s FreeBSD compliant, and another that pushes the limits.

This is going to be an interesting week…

That’s all folks…
