January 12, 2021


Ensuring the security of PostgreSQL and all open source database systems is critical, as many learned with the PgMiner botnet attacks in December 2020. Understanding and having visibility into how these attacks happen, and following standard best practices, is the best way to make sure that your data is not at risk.

This blog details the latest security issue with PostgreSQL, how to fix or prevent these attacks, and how to ensure the security of your PostgreSQL database instances.

Overview and prevention of the PgMiner botnet attacks

Attacks like the PgMiner botnet attack essentially scrape across the Internet looking for misconfigured PostgreSQL servers. This process involves scanning blocks of IP addresses, identifying Postgres servers and then attempting to brute-force the authentication on those servers.

The good news for Ubuntu users is that Ubuntu Server provides a secure Postgres experience out of the box, which is well documented in Getting Started with PostgreSQL in the Ubuntu Server Guide.

The postgres user on Ubuntu systems does not have a password by default, preventing attackers from accessing the system account via SSH. Only users who already have superuser access to the system can run su postgres to authenticate as that system user. From there, a unique password can be created for connecting to the Postgres service.

By default, these connections are not exposed to the outside network. As outlined in the server guide, the postgresql.conf file would need to be edited by the user to allow the service to listen on a network interface available outside the host. 

The Postgres service on Ubuntu is designed to limit connections via the pg_hba.conf file, enabling a security best practice: in order to permit a client to authenticate to the Postgres server, the client's account, database and IP address must be allowed in the pg_hba.conf file.

It is recommended that users keep the permitted clients as explicit and narrow as possible (see the sketch after this list), and to:

  1. Only grant permissions on the particular databases each specific user should have access to
  2. Only allow those users to connect from an allowed list of network addresses
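
As a minimal sketch of what that looks like in practice (the database name, role and subnet below are hypothetical, and the paths assume the PostgreSQL 12 packages on Ubuntu 20.04):

# /etc/postgresql/12/main/postgresql.conf -- keep the default unless remote access is needed
listen_addresses = 'localhost'

# /etc/postgresql/12/main/pg_hba.conf -- one role, one database, one subnet
# TYPE  DATABASE  USER      ADDRESS         METHOD
host    appdb     app_user  192.0.2.0/24    scram-sha-256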

Securing open source databases

With PostgreSQL’s install base increasing by 52 percent in 2020, and with open source database adoption increasing year on year, securing the technology that stores company and customer data is critical. Access controls and authentication measures are key concerns when managing the security of databases, but as with any software, unidentified and unpatched vulnerabilities should also be a key concern. If vulnerabilities go undetected and updates are not applied, insecure applications and systems can lead to unauthorised access, leakage and corruption of data.

When assessing your database security, consider where gaps are most likely to appear. For example, with the increase in multi-cloud use, security best practices may not yet be applied in the public cloud, or vulnerability remediation may be delayed due to a lack of visibility and accountability across an organisation.

CVE patching for PostgreSQL on AWS, Azure

Vulnerability patching for open source databases and applications like PostgreSQL running in public clouds is a key concern for security and infrastructure teams. Ubuntu’s open source security extends to systems and applications on AWS and Azure through a comprehensive, secure and compliant image – Ubuntu Pro.

Ubuntu Pro is a premium Ubuntu OS image that allows enterprises to benefit from extended maintenance, broader security coverage and critical compliance features by simply selecting and running an image on a public cloud, with no contract required.

Key features of Ubuntu Pro include:

  • 10 years of stability,  with extended security maintenance and CVE patching backported to the existing version of the application
  • Security coverage for hundreds of open source applications like PostgreSQL, Apache Kafka, NGINX, MongoDB and Redis.
  • Kernel Livepatch, which allows for continuous security patching and higher uptime and availability by allowing kernel security updates to be applied without a reboot
  • Customised FIPS and Common Criteria EAL-compliant components for use in environments under compliance regimes such as FedRAMP, PCI, HIPAA and ISO
  • Optional phone support, up to 24/7

Additional PostgreSQL support

With IT teams using diverse technologies across different platforms, becoming an expert on each piece of the puzzle is neither likely nor scalable. Additionally, 40% of respondents in a 2019 Percona survey cited ‘Lack of support’ as a top concern with open source data management. Depending on team capacity and an organisation’s reliance on a technology, additional support services may be needed to give teams access to open source database experts.

Canonical provides 24/7, enterprise-grade support for PostgreSQL through Ubuntu Advantage for Applications. Ubuntu Advantage is a single, per-node package of the most comprehensive enterprise security and support for open source infrastructure and applications, with managed service offerings available.

Full-stack application support includes PostgreSQL and other open source database technologies, like MySQL, Redis and ElasticSearch, with response time guaranteed through subscription SLAs. See which applications are covered, and contact us with any questions you may have.

Offloading PostgreSQL security and operations

Open source is ubiquitous in applications, and more than 80 percent of all cyberattacks specifically target applications. Application attacks are both harder to detect and more difficult to contain compared to network attacks. Hackers take the easiest path when determining exploits and target applications with the best attack surface opportunities. 

More and more enterprises are realising that managing their PostgreSQL databases and overall open source estate entails significant investments of time, resources and budget, impacting both developer productivity and the overall software development lifecycle. Cyberattacks such as the PgMiner botnet are a stark reminder of the need for active security monitoring and timely issue resolution by application-management and security teams. The 2020 Open Source Security and Risk Analysis report from Synopsys highlights that 99% of analysed enterprise application codebases contain open source software. Given the large number of open source applications and databases in enterprises, it is difficult to have dedicated teams with the relevant experience to manage each open source application and keep it secure.

Enterprises now have the option of offloading the complexity of managing open source applications like PostgreSQL to managed service providers such as Canonical. Canonical’s engineers ensure that open source databases and apps remain secure and performant at all times with active monitoring and full life-cycle management. 

With Canonical’s fully managed PostgreSQL service, engineers will keep Postgres and open source apps secure and updated with real-time issue resolution and patching wherever they run – on Kubernetes, in the public or private cloud.

Get in touch for a PostgreSQL deployment assessment > 

on January 12, 2021 07:54 PM
I have a bunch of Ubuntu machines on my local network at home. They all periodically need to check for updates then download & install them. Rather than have them all reach out to the official mirrors externally to my network, I decided to run my own mirror internally. This post is just a set of notes for anyone else who might be looking to do something similar. I also do a lot of software building, and re-building, which pulls all kinds of random libraries, compilers and other packages from the archive.
on January 12, 2021 12:00 PM

January 11, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 665 for the week of January 3 – 9, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on January 11, 2021 11:53 PM

TL;DR

I'm grateful for translations by translators. But translating everything causes icons to break. Ubuntu MATE 20.04 has several broken icons and most of them are fixed in Ubuntu MATE 20.10 already.

Advice: Please do NOT translate the 'Icon' text, just leave that translation blank (""). Copy/pasting the English text will cause superfluous lines in .desktop files and might cause additional work later (if the original name is updated, you will need to copy and paste that string again). So getting a 100% translation score might even be non-optimal.

Ubuntu MATE 20.04.1 with broken icons

You probably know the feeling of being the IT guy for your family (in this specific case, my mother-in-law). Her Linux laptop needed to be upgraded to the latest LTS, so I did that for her.

Back when she got the laptop, I installed a non-LTS release. That was required; otherwise her brand spanking new hardware wouldn't have worked correctly.

I tried using the GUI to upgrade the system, but that didn't work. Usually I live in the terminal, so I quickly went back to my comfort zone. I noticed the repositories were not available anymore; of course, this was not an LTS. That also meant that 'do-release-upgrade' did not work. Fortunately I was around before that tool existed, so I knew to manually modify the apt sources files and run apt-get by hand. The upgrade was a success, of course. But, what is that, why am I missing icons here? I also run Ubuntu MATE on some of my other systems and the icons never broke before. The upgrade seemed to have been flawless, but still something went wrong? No, that couldn't be... and it wasn't.

Switching her desktop to English, instead of Dutch (Nederlands), "fixed" the icons. That is strange, but it provides the user of the laptop with a workaround. Luckily my mother-in-law is proficient in English, but prefers Dutch. And there are enough people (I know some of them) who cannot read/write/speak English and are dependent on translations. So I thought I'd go and fix the issue (or at least, so I thought).

Screenshots

Ubuntu MATE 20.04 (screenshot)

Ubuntu MATE 20.10 (screenshot)

The .desktop file

Checking the .desktop file (I'm going to use /usr/share/applications/mate-screensaver-preferences.desktop here as an example), I noticed the following lines:

# Translators: Do NOT translate or transliterate this text (this is an icon file name)!
Icon[ca]=preferences-desktop-screensaver
Icon[cs]=preferences-desktop-screensaver
Icon[da]=preferences-desktop-screensaver
Icon[es]=preferences-desktop-screensaver
Icon[gl]=preferences-desktop-screensaver
Icon[it]=preferences-desktop-screensaver
Icon[lt]=preferences-desktop-screensaver
Icon[ms]=preferences-desktop-screensaver
Icon[nb]=preferences-desktop-screensaver
Icon[nl]=voorkeuren-bureaublad-schermbeveiliging
Icon[uk]=preferences-desktop-screensaver
Icon[zh_TW]=preferences-desktop-screensaver
Icon=preferences-desktop-screensaver

Hmmm, apparently several translations exist (which generally have been kept identical to the original English text).

Note: if a localized translation exists, that will be used. If no localized translation exists, the original English one will be used.

Let me have a look at the source code on their GitHub. It contains several .po files, which contain the translations. So it's only a matter of cloning the repository and submitting a pull request... wrong. I had already made the repository fork when I noticed the commit log. It shows that the translations are being synced from Transifex.

P.S. I should've checked the ubuntu-mate website first, since they have an entire section about translations.

Transifex

Transifex seems to be a proprietary system for doing translations, but I need to go there to fix this issue, so let's get this fixed. Apparently I need to 'join' the team to even see the strings and translations, and also to fix them. It would be nice if guest access (read only) were enabled, because then I could at least check whether I would be bothering the correct team. And once you send a request to join, there is no way to track it, or see the team members etc. (unless you are part of that team perhaps). But never mind, let's continue.

Clicking the 'Join team' button, I assumed I would automagically be added to that team. Somehow that did not happen immediately (i.e. it requires human intervention). And I thought this was just going to be a quick 'go in, fix it, leave' thing...

Current status

My translator membership was declined, which I don't mind actually, since I don't want to become a full-fledged translator (I just want to fix this specific bug). A helpful translator (with access) checked it out and is working on it.

Joining all the teams for each item/language is quite a hassle (and, once declined, sending them all messages to ask them for fixes etc.), so I'm "only" going to scratch my own itch here. But it seems prudent to give all translators a heads-up about this, so they might fix it in their translations (if applicable), hence this blog post.

If anyone could eventually get the updated translations into Ubuntu 20.04, that would be much appreciated ;-)

Advice: Please do NOT translate the 'Icon' text, just leave that translation blank (""). Copy/pasting the English text will cause superfluous lines in .desktop files and might cause additional work later (if the original name is updated, you will need to copy and paste that string again). So getting a 100% translation score might even be non-optimal.

on January 11, 2021 07:00 PM
Another in a series of “I have identified a problem here!”. I appear to have quite a few video games. More than I can probably play in my time left on Earth. Let’s set aside all the retro games I have for a moment, and consider only the ones that run on my primary computer, a PC. To be clear, I’m only talking about ‘native’ games. Aside: I hate the word ‘native’ in this context, because what’s native?
on January 11, 2021 12:00 PM

January 10, 2021

OpenUK Honours

Stuart Langridge

So, I was awarded a medal.

OpenUK, who are a non-profit organisation supporting open source software, hardware, and data, and are run by Amanda Brock, have published the honours list for 2021 of what they call “100 top influencers across the UK’s open technology communities”. One of them is me, which is rather nice. One’s not supposed to blow one’s own trumpet at a time like this, but to borrow a line from Edmund Blackadder it’s nice to let people know that you have a trumpet.

There are a bunch of names on this list that I suspect anyone in a position to read this might recognise. Andrew Wafaa at ARM, Neil McGovern of GNOME, Ben Everard the journalist and Chris Lamb the DPL and Jonathan Riddell at KDE. Jeni Tennison and Jimmy Wales and Simon Wardley. There are people I’ve worked with or spoken alongside or had a pint with or all of those things — Mark Shuttleworth, Rob McQueen, Simon Phipps, Michael Meeks. And those I know as friends, which makes them doubly worthy: Alan Pope, Laura Czajkowski, Dave Walker, Joe Ressington, Martin Wimpress. And down near the bottom of the alphabetical list, there’s me, slotted in between Terence Eden and Sir Tim Berners-Lee. I’ll take that position and those neighbours, thank you very much, that’s lovely.

I like working on open source things. It’s been a strange quarter-of-a-century, and my views have changed a lot in that time, but I’m typing this right now on an open source desktop and you’re probably viewing it in an open source web rendering engine. Earlier this very week Alan Pope suggested an app idea to me and two days later we’d made Hushboard. It’s a trivial app, but the process of having made it is sorta emblematic in my head — I really like that we can go from idea to published Ubuntu app in a couple of days, and it’s all open-source while I’m doing it. I like that I got to go and have a curry with Colin Watson a little while ago, the bloke who introduced me to and inspired me with free software all those years ago, and he’s still doing it and inspiring me and I’m still doing it too. I crossed over some sort of Rubicon relatively recently where I’ve been doing open source for more of my life than I haven’t been doing it. I like that as well.

There are a lot of problems with the open source community. I spoke about divisiveness over “distros” in Linux a while back. It’s still not clear how to make open source software financially sustainable for developers of it. The open source development community is distinctly unwelcoming at best and actively harassing and toxic at worst to a lot of people who don’t look like me, because they don’t look like me. There’s way too much of a culture of opposing popularity because it is popularity and we don’t know how to not be underdogs who reflexively bite at the cool kids. Startups take venture capital and make a billion dollars when the bottom 90% of their stack is open source that they didn’t write, and then give none of it back. Products built with open source, especially on the web, assume (to use Bruce Lawson’s excellent phrasing) that you’re on the Wealthy Western Web. The list goes on and on and on and these are only the first few things on it. To the extent that I have any influence as one of the one hundred top influencers in open source in the UK, those are the sort of things I’d like to see change. I don’t know whether having a medal helps with that, but last year, 2020, was an extremely tough year for almost everyone. 2021 has started even worse: we’ve still got a pandemic, the fascism has gone from ten to eleven, and none of the problems I mentioned are close to being fixed. But I’m on a list with Tim Berners-Lee, so I feel a little bit warmer than I did. Thank you for that, OpenUK. I’ll try to share the warmth with others.

Yr hmbl crspndnt, wearing his medal

on January 10, 2021 03:30 PM

January 09, 2021

The Community Council has concluded that we need a new evaluation of the Ubuntu Local Communities project itself and this should be done by a Local Communities Research Committee.

You can read the thoughts behind this call and what we are looking for on the Community Hub:
https://discourse.ubuntu.com/t/local-communities-research-committee/20186

If you think you can and want to make a contribution to Ubuntu here, please send your nomination to community-council at lists.ubuntu.com.

Nominations are now open and will close on Saturday, January 23, 2021 at 23:59 UTC. After that, the Community Council will review the submissions and appoint the Local Communities Research Committee.

Originally posted to the loco-contacts mailing list on Fri Jan 8 20:55:02 UTC 2021 by Torsten Franz

on January 09, 2021 05:27 PM

January 07, 2021

Ep 124 – Especial 2021

Podcast Ubuntu Portugal

In the first episode recorded in 2021, we make our personal predictions for the year for Ubuntu, free software and related technology, and we also discuss the predictions sent in by listeners!

You know the drill: listen, subscribe and share!

  • https://ansol.org/dominio-publico-2021
  • https://www.humblebundle.com/books/linux-apress-books?partner=PUP
  • https://www.humblebundle.com/books/cybersecurity-cryptography-wiley-books?partner=PUP
  • http://keychronwireless.refr.cc/tiagocarrondo
  • https://shop.nitrokey.com/de_DE/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/de_DE/shop?aff_ref=3

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1, or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, and is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on January 07, 2021 10:45 PM

Time to Branch Out

Ubuntu Blog

Branches are an under-used but important feature of the Snap Store publishing capabilities. Indeed as I’m writing this post, I’ve never had a need to use the feature, and I’ve been publishing snaps for four and a half years. Let’s fix that!

Start with acorns

The rationale for branches is simple. Each snap in the Snap Store has a default track called ‘latest’ in which there are four channels named ‘stable’, ‘beta’, ‘candidate’ and ‘edge’. These are all typical buckets in which snaps are published for an extended period, perhaps months or maybe even years. Branches on the other hand are short-lived silos for publishing snaps. 

As a developer you may have a published application which has bugs users experience but you cannot reproduce. A temporary branch can be used to hold a test build of the application you’re working on to solve a bug.

If you’re tracking and fixing multiple bugs in parallel, each can have its own separate branch under the same snap name in the Snap Store. Branches are ‘hidden’, so unless someone guesses the name of one, users aren’t going to stumble upon potentially broken bug-fix builds of your application.

Branches only live for 30 days, after which they’re deleted, and any user with the snap will be moved to the latest track for the channel. So a user who tested the branch latest/stable/fix-bug-12 and didn’t switch to another channel within 30 days, will be moved to the latest/stable channel on their next refresh.
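
If the fix lands sooner than that, the branch doesn't have to linger; as far as I know it can also be closed explicitly with snapcraft, with something like:

$ snapcraft close my-snap latest/stable/fix-bug-12

(Here my-snap stands in for your snap's name, and the branch is the hypothetical one from above.)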

Germinate

Let’s take an example. A user filed an issue on the Atom snap under the snapcrafters GitHub and provided a pull request. We can grab the pull request, build the snap with their fixes, test and publish to the store in a branch so they can try it out.

This could be automated with tools like GitHub Actions, but in lieu of that setup, let’s explain it with the manual steps.

$ git clone https://github.com/aminya/atom-2.git
$ cd atom-2
$ git checkout -b aminya-libstdc++6 master
$ git pull https://github.com/aminya/atom-2.git libstdc++6
$ snapcraft --use-lxd

Building atom
Building launcher
Staging atom
Staging launcher
Priming atom
Priming launcher
Snapping
Snapped atom_1.53.0_amd64.snap

Install the application locally to make sure we didn’t completely break it.

$ snap install atom_1.53.0_amd64.snap --dangerous

Upload to the Snap Store and release it to a branch. I selected the latest track as it’s the only track this snap uses. Other snaps may use different tracks for each supported release (e.g. node) or have separate tracks for stable and insider builds (e.g. Skype). We’re fixing the stable release, so I’m using a branch off the stable channel.

$ snapcraft upload atom*.snap --release=latest/stable/fix-65
Preparing to upload 'atom_1.53.0_amd64.snap'.
After uploading, the resulting snap revision will be released to 'latest/stable/fix-65' when it passes the Snap Store review.
Install the review-tools from the Snap Store for enhanced checks before uploading this snap.
Pushing 'atom_1.53.0_amd64.snap' [============================] 100%
released
Revision 269 of 'atom' created.
Track   Arch   Channel        Version  Revision  Expires at
latest  amd64  stable         1.53.0   265
               candidate      ↑        ↑
               beta           ↑        ↑
               edge           1.53.0   268
               stable/fix-65  1.53.0   269       2021-02-05T10:34:51Z

We can already see the branch exists under the latest track, stable branch, but we may want to confirm this – especially if the upload happened in GitHub Actions, a CI or other remote system where we can’t easily see the above output.

$ snapcraft status atom
Track   Arch   Channel        Version  Revision  Expires at
latest  amd64  stable         1.53.0   265
               candidate      ↑        ↑
               beta           ↑        ↑
               edge           1.53.0   268
               stable/fix-65  1.53.0   269       2021-02-05T10:34:51Z

Note, as mentioned earlier, only we as publishers can see the new branch. If a non-publisher looked at the channel map they wouldn’t see it. Non-publishers don’t have access to the snapcraft status command for this snap, and snap info just doesn’t show branches.

$ snap info atom 

channels:
  latest/stable:    1.53.0 2020-11-10 (265) 242MB classic
  latest/candidate: ↑
  latest/beta:      ↑
  latest/edge:      1.53.0 2020-12-09 (268) 224MB classic
installed:          1.53.0            (x2)  224MB classic

I’m still currently tracking the build I “side loaded” onto my machine, which you can see with the “x” prefixed revision on the last line. We can refresh to the branch hosted in the store. Note that we can optionally omit the ‘latest’ track name, because it’s the default (and only) track. This also allows us to test the instructions we can provide to the author of the pull request.

$ snap refresh atom --amend --channel stable/fix-65
atom (stable/fix-65) 1.53.0 from Snapcrafters refreshed

Note: The --amend option is only required for us because we’re switching from a locally installed revision to one from the store. Users who only installed from the store won’t need that.

Now we have the fix published, we can let the contributor know via a comment on the pull request. Something like this will do nicely:

“Thanks very much for the pull request. I don’t have the ability to reproduce the issue right now. I have published a build of the snap incorporating your fix in a branch. Please could you install the build on a clean system, or if you have the snap already installed, refresh to this branch, and test it?

snap install atom --channel stable/fix-65
or:
snap refresh atom --channel stable/fix-65

If you’re happy with the fix, I’ll land this PR.
Thanks again!”

Once the user replies that this fixes their issue, we can land the PR and roll this into the next stable release. If it doesn’t, well, that’s more software engineering on the to-do list!

Get planting

Of course it’s not just bug fixes which can use branches. Perhaps you have a new feature to soft-launch in the application, or design changes you’d like to experiment with. Having a short-lived branch which is only known by a limited set of testers can be advantageous.

Branches are one of those features that set the Snap Store apart from some other distribution methods for Linux. It’s not something most publishers will use, but once you know it’s there, it can be quite handy, with only a small learning curve.

Join us over on the snapcraft forums if you’d like to discuss this or other features of snapcraft.

Photo by Colin Watts on Unsplash

on January 07, 2021 01:42 PM

January 04, 2021

Over the past year there has been focused work on improving the test coverage of the Linux Kernel with stress-ng.  Increased test coverage exercises more kernel code and hence improves the breadth of testing, allowing us to be more confident that more corner cases are being handled correctly.

The test coverage has been improved in several ways:

  1. testing more system calls; most system calls are now being exercised
  2. adding more ioctl() command tests
  3. exercising system call error handling paths
  4. exercising more system call options and flags
  5. keeping track of new features added to recent kernels and adding stress test cases for them
  6. adding support for new architectures (RISC-V, for example)

Each stress-ng release is run with various stressor options against the latest kernel (built with gcov enabled).  The gcov data is processed with lcov to produce human readable kernel source code containing coverage annotations to help inform where to add more test coverage for the next release cycle of stress-ng. 
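
For reference, a rough sketch of that measurement loop (assuming a kernel built with CONFIG_GCOV_KERNEL=y and CONFIG_GCOV_PROFILE_ALL=y, and the lcov tools installed) could look like this:

# reset the kernel gcov counters, run the stressors, then collect coverage
sudo lcov --zerocounters
sudo stress-ng --seq 0 -t 15      # run every stressor in turn, 15 seconds each, one instance per CPU
sudo lcov -c -o kernel.info       # without --directory, lcov captures data for the running kernel
genhtml kernel.info -o html       # produce the annotated, human readable report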

The Linux Foundation sponsored Piyush Goyal for 3 months to add test cases that exercise system call failure paths, and I appreciate this help in improving stress-ng. I finally completed this tedious task at the end of 2020 with the release of stress-ng 0.12.00.

Below is a chart showing how the kernel coverage generated by stress-ng has been increasing since 2015. The amber line shows lines of code exercised and the green line shows kernel functions exercised.

[Chart: kernel source lines (amber) and kernel functions (green) exercised by stress-ng, 2015 to 2020]

One can see that there was a large increase in kernel test coverage in the latter half of 2020 with stress-ng. In all, 2020 saw a ~20% increase in kernel coverage; most of this was driven by the gcov analysis, however there is more to do.

What next?  Apart from continuing to add support for new kernel system calls and features I hope to improve the kernel coverage test script to exercise more file systems; it will be interesting to see what kind of bugs get found. I'll also be keeping the stress-ng project page refreshed as this tracks bugs that stress-ng has found in the Linux kernel.

As it stands, release 0.12.00 was a major milestone for stress-ng as it marks the completion of the major work items to improve kernel test coverage.

on January 04, 2021 04:44 PM

Full Circle Weekly News #195

Full Circle Magazine


Ubuntu’s Snap Theming Will See Changes for the Better
https://ubuntu.com//blog/snaps-and-themes-on-the-path-to-seamless-desktop-integration

GTK4 Is Available After 4 Years In Development
https://blog.gtk.org/2020/12/16/gtk-4-0/

Linux Mint 20.1 Ulyssa Beta Out
https://blog.linuxmint.com/?p=3989

Rescuezilla 2.1.2 Out
https://github.com/rescuezilla/rescuezilla/releases/tag/2.1.2

Manjaro ARM 20.12 Out
https://forum.manjaro.org/t/manjaro-arm-20-12-released/43709

Linux Kernel 5.11 rc1 Out
https://www.lkml.org/lkml/2020/12/27/180

Bash 5.1 Out
https://lists.gnu.org/archive/html/info-gnu/2020-12/msg00003.html

Darktable 3.4 Out
https://github.com/darktable-org/darktable/releases/tag/release-3.4.0

Thunderbird 78.6.0 Out
https://www.thunderbird.net/en-US/thunderbird/78.6.0/releasenotes/

LibreOffice 7.0.4 Out
https://9to5linux.com/libreoffice-7-0-4-office-suite-released-with-more-than-110-bug-fixes

Kdenlive 20.12 Out
https://news.itsfoss.com/kdenlive-20-12/

Anbox Cloud 1.8.2 Out
https://discourse.ubuntu.com/t/anbox-cloud-1-8-2-has-been-released/19951

on January 04, 2021 11:21 AM

January 03, 2021

Wrong About Signal

Bryan Quigley

Another update: it's been 6 months and Signal still does not let you unregister.

Update: Riot was renamed to Element. XMPP info added in a comment.

A couple years ago I was a part of a discussion about encrypted messaging.

  • I was in the Signal camp - we needed it to be quick and easy for users to get set up. Using existing phone numbers makes it easy.
  • Others were in the Matrix camp - we need to start from scratch and make it distributed so no one organization is in control. We should definitely not tie it to phone numbers.

I was wrong.

Signal has been moving in the direction of adding PINs for some time because they realize the danger of relying on the phone number system. Signal just mandated PINs for everyone as part of that switch. Good for security? I really don't think so. They did it so you could recover some bits of "profile, settings, and who you’ve blocked".

Before PIN

If you lose your phone your profile is lost and all message data is lost too. When you get a new phone and install Signal your contacts are alerted that your Safety Number has changed - and should be re-validated.

[Chart: where profile data lives (before the PIN): your devices]

After PIN

If you lost your phone you can use your PIN to recover some parts of your profile and other information. I am unsure if Safety Number still needs to be re-validated or not.

Your profile (or its encryption key) is stored on at least 5 servers, but likely more. It's protected by secure value recovery.

There are many awesome components of this setup and it's clear that Signal wanted to make this as secure as possible. They wanted to make this a distributed setup so they don't even need to be the only one hosting it. One of the key components is Intel's SGX, which has several known attacks. I simply don't see the value in this, and it means there is a new avenue of attack.

[Chart: where profile data lives (after the PIN): your devices and Signal servers]

PIN Reuse

By mandating user-chosen PINs, my guess is the great majority of users will reuse the PIN that encrypts their phone. Why? PINs are re-used a lot to start with, but here is how the PIN deployment went for a lot of Signal users:

  1. Get notification of new message
  2. Click it to open Signal
  3. Get Mandate to set a PIN before you can read the message!

That's horrible. That means people are in a rush to set a PIN to continue communicating. And now that rushed or reused PIN is stored in the cloud.

Hard to leave

They make it easy to get connections upgraded to secure, but their system to unregister when you uninstall has been down since June 28th at least (last tried on July 22nd). Without that, when you uninstall Signal it means:

  • you might be texting someone and they respond back but you never receive the messages because they only go to Signal
  • if someone you know joins Signal their messages will be automatically upgraded to Signal messages which you will never receive

Conclusion

In summary, Signal got people to hastily create or reuse PINs for minimal disclosed security benefits. There is a possibility that the push for mandatory cloud-based PINs, despite all of the pushback, is because Signal knows of active attacks that these PINs would protect against. That would likely be related to using phone numbers.

I'm trying out Element, which uses the open Matrix network. I'm not actively encouraging others to join me, but just exploring the communities that exist there. It's already more featureful and supports more platforms than Signal ever did.

Maybe I missed something? Feel free to make a PR to add comments

Comments

kousu posted

In the XMPP world, Conversations has been leading the charge to modernize XMPP, with an index of popular public groups (jabber.network) and a server validator. XMPP is mobile-battery friendly, and supports server-side logs wrapped in strong, multi-device encryption (in contrast to Signal, your keys never leave your devices!). Video calling even works now. It can interact with IRC and Riot (though the Riot bridge is less developed). There is a beautiful Windows client, a beautiful Linux client and a beautiful terminal client, two good Android clients, a beautiful web client which even supports video calling (and two others). It is easy to get an account from one of the many servers indexed here or here, or by looking through libreho.st. You can also set up your own with a little bit of reading. Snikket is building a one-click Slack-like personal-group server, with file-sharing, welcome channels and shared contacts, or you can integrate it with NextCloud. XMPP has solved a lot of problems over its long history, and might just outlast all the centralized services.

Bryan Reply

I totally forgot about XMPP, thanks for sharing!

on January 03, 2021 08:18 PM

January 02, 2021

In episode 100 of Late Night Linux I talked a little bit about trying out Pi Hole and AdGuard to replace my home grown ad blocker based on dnsmasq and a massive hosts file.

I came down in favour of Pi Hole for a couple of reasons, but the deciding factor was that Pi Hole felt a bit more open and that it was built on top of dnsmasq, which allowed me to reuse config for TFTP, which netboots some devices that needed it.

Now that I’ve been using Pi Hole for a few months I have a much better understanding of its limitations, and the big one for me is performance. Not the performance when servicing DNS requests, but performance when querying the stats data, when reloading block lists and when enabling and disabling certain lists. I suspect a lot of the problems I was having are down to flaky SD cards.

I fully expect that for most people this will never be a problem, but for me it was an itch I wanted to scratch, so here’s what I did:

Through the actually quite generous Amazon Alexa AWS Credits promotion I have free money to spend on AWS services, so I spun up a t2.micro EC2 instance (1 vCPU, 1GB RAM – approx £10 a month) running Ubuntu.

I installed Pi Hole on that instance along with Wireguard which connects it back to my local network at home. I used this guide from Linode to get Wireguard set up.

The Pi Hole running in AWS hosts the large block files and is configured with a normal upstream DNS server as its upstream (I’m using Cloudflare).

Pi Hole running in AWS configured with Cloudflare as its upstream DNS

I use three Ad block lists:

Pi Hole running on a t2.micro instance is really speedy. I can reload the block list in a matter of seconds (versus minutes on the Pi) and querying the stats database no longer locks up and crashes Pi Hole’s management engine FTL.

The Pi Hole running on my LAN is configured to use the above AWS based Pi Hole as its upstream DNS server and also has a couple of additional block lists for YouTube and TikTok.
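
For anyone recreating this split setup, pointing the LAN Pi Hole at the AWS instance can be done in the web interface or directly in its configuration; a minimal sketch, assuming Pi-hole v5 and a made-up Wireguard address of 10.10.0.1 for the AWS end:

# /etc/pihole/setupVars.conf on the LAN Pi (the address is an assumption)
PIHOLE_DNS_1=10.10.0.1

# apply the change
pihole restartdns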

This allows me to use Pi Hole on a Pi as the DHCP server on my LAN and benefit from the GUI to configure things. I can quickly and easily block YouTube when the kids have done enough and won’t listen to reason, and the heavy lifting of bulk ad blocking is done on an AWS EC2 instance. The Pi on the LAN will cache a good amount of DNS and so everything whizzes along quickly.

Pi Hole on the LAN has a block list of about 3600 hosts, whereas the version running in AWS has over 1.5 million.

All things considered I’m really happy with Pi Hole and the split-load set up I have now makes it even easier to live with. I would like to see an improved Pi Hole API for enabling and disabling specific Ad lists so that I can make it easier to automate (e.g. unblock YouTube for two hours on a Saturday morning). I think that will come in time. The split-load set up also allows for easy fallback should the AWS machine need maintenance – it would be nice to have a “DNS server of last resort” in Pi Hole to make that automatic. Perhaps it already does, I should investigate.

Why not just run Pi Hole on a more powerful computer in the first place? That would be too easy.

If you fancy trying out Pi Hole in the cloud or just playing with Wireguard you can get $100 free credit with Linode with the Late Night Linux referral code: https://linode.com/latenightlinux

on January 02, 2021 05:45 PM

Here’s a list of some Debian packaging work for December 2020.

2020-12-01: Sponsor package mangohud (0.6.1-1) for Debian unstable (mentors.debian.net request).

2020-12-01: Sponsor package spyne (2.13.16-1) for Debian unstable (Python team request).

2020-12-01: Sponsor package python-xlrd (1.2.0-1) for Debian unstable (Python team request).

2020-12-01: Sponsor package buildbot for Debian unstable (Python team request).

2020-12-08: Upload package calamares (3.2.35.1-1) to Debian unstable.

2020-12-09: Upload package btfs (2.23-1) to Debian unstable.

2020-12-09: Upload package feed2toot (0.15-1) to Debian unstable.

2020-12-09: Upload package gnome-shell-extension-harddisk-led (23-1) to Debian unstable.

2020-12-10: Upload package feed2toot (0.16-1) to Debian unstable.

2020-12-10: Upload package gnome-shell-extension-harddisk-led (24-1) to Debian unstable.

2020-12-13: Upload package xabacus (8.3.1-1) to Debian unstable.

2020-12-14: Upload package python-aniso8601 (8.1.0-1) to Debian unstable.

2020-12-19: Upload package rootskel-gtk (1.42) to Debian unstable.

2020-12-21: Sponsor package goverlay (0.4.3-1) for Debian unstable (mentors.debian.net request).

2020-12-21: Sponsor package pastel (0.2.1-1) for Debian unstable (Python team request).

2020-12-22: Sponsor package python-requests-toolbelt (0.9.1-1) for Debian unstable (Python team request).

2020-12-22: Upload kpmcore (20.12.0-1) to Debian unstable.

2020-12-26: Upload package bundlewrap (4.3.0-1) to Debian unstable.

2020-12-26: Review package python-strictyaml (1.1.1-1) (Needs some more work) (Python team request).

2020-12-26: Review package buildbot (2.9.3-1) (Needs some more work) (Python team request).

2020-12-26: Review package python-vttlib (0.9.1+dfsg-1) (Needs some more work) (Python team request).

2020-12-26: Sponsor package python-formencode (2.0.0-1) for Debian unstable (Python team request).

2020-12-26: Sponsor package pylev (1.2.0-1) for Debian unstable (Python team request).

2020-12-26: Review package python-absl (Needs some more work) (Python team request).

2020-12-26: Sponsor package python-moreorless (0.3.0-2) for Debian unstable (Python team request).

2020-12-26: Sponsor package peewee (3.14.0+dfsg-1) for Debian unstable (Python team request).

2020-12-28: Sponsor package pympler (0.9+dfsg1-1) for Debian unstable (Python team request).

2020-12-28: Sponsor package bidict (0.21.2-1) for Debian unstable (Python team request).

on January 02, 2021 07:19 AM

Start of Year: 2021

Stephen Michael Kellat

In no particular order:

  • The new year began without a civil emergency locally. After all that has happened lately that is a bit of a relief.

  • The garage studio is developing a bit of a moisture problem. The video cameras that we have for filming online church services don’t work well with such high levels of moisture. Efforts are in progress to break down the studio and move it inside the house. Where exactly this will all be set up and how it will function is frankly beyond me at the moment.

  • Editing on the third story continues. The second reader has had a chance to look at it. Apparently the ending is a wee bit abrupt, there are some story gaps, and I apparently left some plot development off-stage. More writing will be done. Some folks out there use dedicated writing programs geared towards authors but I am using Visual Studio Code and the novel package on CTAN as well as the markdown package on CTAN.

  • People forget that the Comprehensive TeX Archive Network has packages covering the use of different types of markup within LaTeX apparently.

  • As much as I would prefer to avoid the matter it looks like I have to consider relocating at some point in 2021. That’s something for another time and place, though.

  • I am getting subscription fatigue. I recognize that Substack is apparently the greatest thing since sliced bread nowadays. The cost of a monthly subscription to one newsletter is the same as the cost to get home delivery of USA TODAY. You get a wee bit more content in a weekday newspaper delivered to your front door compared to a niche e-mail newsletter. As to why I canceled my subscription to USA TODAY, that related directly to the failures of the newspaper delivery person rather than any deficiency on the part of the content itself. I prefer newsprint over digital editions anyhow.

  • Have I ever mentioned that Windows Subsystem for Linux is awesome when you’re not allowed to install Xubuntu alongside Windows or in lieu of Windows? Working within operational confines does get interesting…

on January 02, 2021 03:37 AM

January 01, 2021

On Hiatus

Simon Raffeiner

There have been no new posts on this blog for the last 20 months, so I am finally putting the site on hiatus.

The post On Hiatus appeared first on LIEBERBIBER.

on January 01, 2021 12:13 PM

As you may know, I am the Qt 5 maintainer in Debian. Maintaining Qt means not only bumping the version each time a new version is released, but also making sure Qt builds successfully on all architectures that are supported in Debian (and, for some submodules, that the automatic tests pass).

An important class of build failures is endianness-specific failures. The most widely used architectures (x86_64, aarch64) are little endian. However, Debian officially supports one big endian architecture (s390x), and unofficially a few more ports are provided, such as ppc64 and sparc64.

Unfortunately, Qt upstream does not have any big endian machine in their CI system, so endianness issues get noticed only when the packages fail to build on our build daemons. In recent years I have discovered and fixed several such issues in various parts of Qt, so I decided to write a post to illustrate how to write really cross-platform C/C++ code.

Issue 1: the WebP image format handler (code review)

The relevant code snippet is:

if (srcImage.format() != QImage::Format_ARGB32)
    srcImage = srcImage.convertToFormat(QImage::Format_ARGB32);
// ...
if (!WebPPictureImportBGRA(&picture, srcImage.bits(), srcImage.bytesPerLine())) {
    // ...
}

The code here is serializing the images into QImage::Format_ARGB32 format, and then passing the bytes into WebP’s import function. With this format, the image is stored using a 32-bit ARGB format (0xAARRGGBB). This means that the bytes will be 0xBB, 0xGG, 0xRR, 0xAA on little endian and 0xAA, 0xRR, 0xGG, 0xBB on big endian. However, WebPPictureImportBGRA expects the first layout on all architectures.

The fix was to use QImage::Format_RGBA8888. As the QImage documentation says, with this format the order of the colors is the same on any architecture if read as bytes 0xRR, 0xGG, 0xBB, 0xAA.
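
Put together, the fixed path looks roughly like this; a sketch which assumes the import call is also switched to WebPPictureImportRGBA, the libwebp importer that expects bytes in exactly that order:

if (srcImage.format() != QImage::Format_RGBA8888)
    srcImage = srcImage.convertToFormat(QImage::Format_RGBA8888);
// ...
// the bytes are now 0xRR, 0xGG, 0xBB, 0xAA on every architecture
if (!WebPPictureImportRGBA(&picture, srcImage.bits(), srcImage.bytesPerLine())) {
    // ...
}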

Issue 2: qimage_converter_map structure (code review)

The code seems to already support big endian. But maybe you can spot the error?

#if Q_BYTE_ORDER == Q_LITTLE_ENDIAN
        0,
        convert_ARGB_to_ARGB_PM,
#else
        0,
        0
#endif

It is the missing comma! It is present in the little endian block, but not in the big endian one. This was fixed trivially.

Issue 3: QHandle, part of Qt 3D module (code review)

QHandle class uses a union that is declared as follows:

struct Data {
    quint32 m_index : IndexBits;
    quint32 m_counter : CounterBits;
    quint32 m_unused : 2;
};
union {
    Data d;
    quint32 m_handle;
};

The sizes are declared such that IndexBits + CounterBits + 2 is always equal to 32 (four bytes).

Then we have a constructor that sets the values of Data struct:

QHandle(quint32 i, quint32 count)
{
    d.m_index = i;
    d.m_counter = count;
    d.m_unused = 0;
}

The value of m_handle will be different depending on endianness! So the test that was expecting a particular value with given constructor arguments was failing. I fixed it by using the following macro:

#if Q_BYTE_ORDER == Q_BIG_ENDIAN
#define GET_EXPECTED_HANDLE(qHandle) ((qHandle.index() << (qHandle.CounterBits + 2)) + (qHandle.counter() << 2))
#else /* Q_LITTLE_ENDIAN */
#define GET_EXPECTED_HANDLE(qHandle) (qHandle.index() + (qHandle.counter() << qHandle.IndexBits))
#endif

Issue 4: QML compiler (code review)

The QML compiler used a helper class named LEUInt32 (based on QLEInteger) that always stored the numbers in little endian internally. This class can be safely mixed with native quint32 on little endian systems, but not on big endian.

Usually the compiler would warn about type mismatch, but here the code used reinterpret_cast, such as:

quint32 *objectTable = reinterpret_cast<quint32*>(data + qmlUnit->offsetToObjects);

So this was not noticed at build time, but the compiler was crashing. The fix was trivial again: replacing quint32 with QLEUInt32.
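
With that replacement, the cast shown above presumably ends up as:

QLEUInt32 *objectTable = reinterpret_cast<QLEUInt32*>(data + qmlUnit->offsetToObjects);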

Issue 5: QModbusPdu, part of Qt Serial Bus module (code review)

The code snippet is simple:

QModbusPdu::FunctionCode code = QModbusPdu::Invalid;
if (stream.readRawData((char *) (&code), sizeof(quint8)) != sizeof(quint8))
    return stream;

QModbusPdu::FunctionCode is an enum, so code is a multi-byte value (even if only one byte is significant). However, (char *) (&code) returns a pointer to the first byte of it. It is the needed byte on little endian systems, but it is the wrong byte on big endian ones!

The correct fix was using a temporary one-byte variable:

quint8 codeByte = 0;
if (stream.readRawData((char *) (&codeByte), sizeof(quint8)) != sizeof(quint8))
    return stream;
QModbusPdu::FunctionCode code = (QModbusPdu::FunctionCode) codeByte;

Issue 6: qt_is_ascii (code review)

This function, as the name says, checks whether a string is ASCII. It does that by splitting the string into 4-byte chunks:

while (ptr + 4 <= end) {
    quint32 data = qFromUnaligned<quint32>(ptr);
    if (data &= 0x80808080U) {
        uint idx = qCountTrailingZeroBits(data);
        ptr += idx / 8;
        return false;
    }
    ptr += 4;
}

idx / 8 is the number of trailing zero bytes. However, the bytes which are trailing on little endian are actually leading on big endian! So we can use qCountLeadingZeroBits there.
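
A sketch of the endian-aware version of that step (the idea rather than the exact upstream patch):

#if Q_BYTE_ORDER == Q_BIG_ENDIAN
        uint idx = qCountLeadingZeroBits(data);   // the first non-ASCII byte is leading on big endian
#else
        uint idx = qCountTrailingZeroBits(data);  // ... and trailing on little endian
#endif
        ptr += idx / 8;
        return false;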

Issue 7: the bundled copy of tinycbor (upstream pull request)

Similar to issue 5, the code was reading into the wrong byte:

if (bytesNeeded <= 2) {
    read_bytes_unchecked(it, &it->extra, 1, bytesNeeded);
    if (bytesNeeded == 2)
        it->extra = cbor_ntohs(it->extra);
}

extra has type uint16_t, so it has two bytes. When we need only one byte, we read into the wrong byte, so the resulting number is 256 times higher on big endian than it should be. Adding a temporary one-byte variable fixed it.
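
A sketch of that fix, keeping the same read_bytes_unchecked call pattern as above:

if (bytesNeeded <= 2) {
    if (bytesNeeded == 1) {
        uint8_t extra8;                              /* temporary one-byte variable */
        read_bytes_unchecked(it, &extra8, 1, 1);
        it->extra = extra8;                          /* widening the value is endian-safe */
    } else {
        read_bytes_unchecked(it, &it->extra, 1, 2);
        it->extra = cbor_ntohs(it->extra);
    }
}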

Issue 8: perfparser, part of Qt Creator (code review)

Here it is not trivial to find the issue just looking at the code:

qint32 dataStreamVersion = qToLittleEndian(QDataStream::Qt_DefaultCompiledVersion);

However the linker was producing an error:

undefined reference to `QDataStream::Version qbswap(QDataStream::Version)'

On little endian systems, qToLittleEndian is a no-op, but on big endian systems, it is a template function defined for some known types. But it turns out we need to explicitly convert enum values to a simple type, so the fix was passing qint32(QDataStream::Qt_DefaultCompiledVersion) to that function.
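
So the working line is simply:

qint32 dataStreamVersion = qToLittleEndian(qint32(QDataStream::Qt_DefaultCompiledVersion));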

Issue 9: Qt Personal Information Management (code review)

The code in test was trying to represent a number as a sequence of bytes, using reinterpret_cast:

static inline QContactId makeId(const QString &managerName, uint id)
{
    return QContactId(QStringLiteral("qtcontacts:basic%1:").arg(managerName), QByteArray(reinterpret_cast<const char *>(&id), sizeof(uint)));
}

The order of bytes will be different on little endian and big endian systems! The fix was adding this line to the beginning of the function:

id = qToLittleEndian(id);

This will cause the bytes to be reversed on big endian systems.

What remains unfixed

There are still some bugs, which require deeper investigation, for example:

P.S. We are looking for new people to help with maintaining Qt 6. Join our team if you want to do some fun work like described above!

on January 01, 2021 09:35 AM

December 31, 2020

Ep 123 – Especial de fim-de-ano

Podcast Ubuntu Portugal

We turn dull stories into fantastic adventures and grey events into genuine fairy tales, or else we just talk about Ubuntu and other stuff… Here is another episode of your favourite podcast.

You know the drill: listen, subscribe and share!

  • https://events.ccc.de/
  • https://support.logi.com/hc/en-us/articles/360025903194
  • https://podcastindex.org/podcast/765561
  • https://www.podchaser.com/podcasts/podcast-ubuntu-portugal-530916
  • https://github.com/subspacecloud/subspace
  • https://www.humblebundle.com/books/hacking-101-no-starch-press-books?partner=PUP
  • https://www.humblebundle.com/books/cybersecurity-cryptography-wiley-books?partner=PUP
  • https://www.humblebundle.com/books/infrastructure-ops-oreilly-books?partner?PUP
  • http://keychronwireless.refr.cc/tiagocarrondo
  • https://shop.nitrokey.com/de_DE/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/de_DE/shop?aff_ref=3

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1, or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, and is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on December 31, 2020 10:45 PM

Ubuntu life in 2020

Torsten Franz

The year 2020 was quite extraordinary, because a lot of things developed quite differently from how they were supposed to because of the Covid-19 crisis. Even though a lot of things happen virtually at Ubuntu, it also had an impact on my Ubuntu life.

Every year I attend a few trade fairs to present Ubuntu and/or give talks. In 2020, this only took place virtually and in a very limited way for me. In March, the Chemnitzer Linuxtage were cancelled, and one fair after another followed.

In my home town I go to a Fablab where we also work on Ubuntu. After the meetings in January and February, this was also cancelled. Now and then this still took place virtually, but somehow it didn’t create the same atmosphere as when we met in real life.

With the team members of the German-speaking Ubuntu forum (ubuntuusers.de) we organise a team meeting every year, which is always a lot of fun and partly productive. In 2020 it had to be cancelled. Since I have also reduced my other contacts to help contain the virus, I have only met two people from the Ubuntu environment in real life since March.

But, of course, Ubuntu life was also progressing in 2020. The whole year I had the responsibility as project leader for ubuntuusers.de in a three man team and had some issues to deal with there. In „ubuntu Deutschland e.V.“ I am the chairman and had to take care of tax benefits again this year, which we were able to do successfully.

I also deal with translations in Ubuntu, namely into German. There are always ups and downs here and things don’t always go well. At the beginning of the year, we were at 86.71 per cent with the German Ubuntu translations. One year later, we are now at 86.33 per cent. Okay, a little bit less, but overall almost at the same level. By the way, this means that the German translation in Ubuntu was and still is number 2. Only in Ukrainian has Ubuntu been translated more so far. Perhaps becoming number 1 is once again a goal we can tackle in 2021.

In 2020, my LoCo also lost its verified status. This is mainly due to the fact that there was no longer a LoCo Council and therefore no application was written. However, there have now been a few movements, so that we can also tackle this at the beginning of 2021. I also had a hand in these movements. In October, I stood for election to the Community Council and was also elected to this board. In the last two months, I was able to move a few issues forward and clean up the mess.

In Ubuntu we say: I am because we are. This saying has been very interesting in 2020, because many of my work colleagues and friends have focused on exactly one part of it: what I do has an effect on my fellow human beings and vice versa. Perhaps we can also develop this approach socially and see this not only in the crisis, but also in life as a whole.

Now there is only one thing left for me to say: Happy New Year 2021.

on December 31, 2020 12:00 PM

The Brexit Deal

Jonathan Riddell

Now that both halves of the Brexit Deal (Withdrawal Agreement and Trade Deal) have been written the UK is finally in a position to spend some months having a discourse about their merits before having a referendum on whether to go with it or go with the status quo. Alas the broken democratic setup won’t allow that as there was a referendum over 4 years ago without the basics needed for discussion. One lesson that needs to be learnt, but I haven’t seen anyone propose, is to require referendums to have pre-written legislation or international agreement text on what is being implemented.

This on top of the occasionally discussed fixes needed to democracy around transparency of campaigning funds, proper fines when they steal data, banning or limiting online advertising, transparency around advertising and proper fines for campaigns that over-spend.

The new GB <-> EU setup will of course remove freedoms and add vast amounts of new bureaucracy. It might get three of the UK’s countries out of the properly run court of the ECJ, but to what end? To be replaced with endless committees discussing the exact same points and the threat of tariffs when standards diverge. Making predictions in this game is daft, but I’m pretty sure the UK will soon push the boundaries on which labour or environmental standards it can reduce, probably starting with the working time directive. What export tariffs or quotas will be introduced once that is changed?

The trade deal is incomplete of course and there will be endless future negotiations about services and data transfer and the like. This is only the start of the Brexit process, and politicians who claim this is the end are, as we have become used to, talking lies. The worries of a no-deal Brexit have lessened, but the new customs checks on goods leaving GB, and the ones to come in future months on goods entering GB, will cause some shortages, prices to rise, businesses to struggle, and service companies and the jobs they hold to move abroad. The rise in business-related fraud will be a hidden but very real cost.

Johnson deliberately ran down the clock and waited until the final days before making the trade deal. It’s a disgusting tactic which removes the very small democratic oversight that could be expected (the UK parliament having long since had the power removed to approve or deny any such deal). Again, I’ve not read anyone pointing out this deliberate tactic, which caused much stress to businesses and individuals by playing up the chances of a cliff-edge Brexit, but it must have been the plan all along. It means he’ll get applauded in the right-wing press for limiting democracy, and nobody will be any the wiser.

There is a new bureaucratic border from Scotland and Wales to Northern Ireland with lorry parks and checks for goods. What I haven’t seen any coverage of is increased checks for people crossing. The police have always had the power to check IDs when people crossed into or out of Northern Ireland, but that’s not much used since the violence subsided. Now that free movement remains in Ireland but is removed from Great Britain (making Northern Ireland a bit of a no-man’s land, I suppose), those checks must surely be upgraded to stop foreigners coming over here doing whatever it is the racists moaned about. This will be a new front of low-level human rights abuses that will need to be watched; I wonder if anyone is doing so.

With the new setup comes new political campaigning. The election next May will again vote in a Scottish government on a pledge to hold an independence referendum, but of course it’ll be blocked by Johnson and delegitimised by the unionists. The Scottish cringe (“too small, too poor”) was a strong factor in making people vote No in the 2014 referendum, and it’ll come into play with new force this time. Firstly with whether any referendum is legitimate. The Catalan referendum of 2014 was accompanied by a massive propaganda campaign by the Spanish Tories (the PP), with huge adverts saying it was illegal and therefore illegitimate. The same thing will happen here. Unlike in Spain there’s a small chance the legal route will be open: the UK parliament says there is a Claim of Right for Scots to choose their own form of government, so there must be some legal method for that to express itself. I doubt the Court of Session, and certainly not the UK Supreme Court, will magically give the Scottish Parliament the power to hold a decisive referendum, but maybe they’ll allow a not-quite-decisive one (which will be delegitimised all it can be by unionists) or maybe they’ll require the UK parliament to hold one (which will be rigged if it ever happens). But there’s every chance the courts will agree that we’ve had our referendum and we need to eat our cereal. In which case it’s hard to see what to do; many Scots won’t accept the Catalan method of just holding one without agreement, and there is a strong need to carry the popular will when holding a referendum. And while I’m a supporter of the Catalan method, one has to admit that it hasn’t worked: there’s been no international support for their self-determination right, as unfair and illogical as that is.

There will be new concerns in the new referendum. The new border from Scotland to Northern Ireland (and everywhere else that has flight connections to the EU) is made concrete. We can reasonably assume the new bureaucracy there will be moved to the Scotland-England border after independence. Massive new lorry parks and customs checks might be needed. Freedom of movement will remain with the common travel area, but might the English want to impose ID checks like you get going between Scotland and Northern Ireland? While I care about my freedoms Europe-wide, the border from Scotland to England holds a stronger emotional impact for all. When I first wrote to a newspaper to say the border should be closed for Covid controls, that was then taken up by the Scottish Government and many people protested. It’s now law and even the Tories support it on health grounds (except Mundell), but it will be heart-breaking to see it happen for customs as well, and it’ll be a strong issue in the debate to come.

Join us in campaigning for an independent Scotland in the EU with Yes for EU and sign the European Movement in Scotland petition.

Happy new year.

on December 31, 2020 11:08 AM

December 30, 2020

A custom and global shortcut key to mute / unmute yourself in Zoom or Google Meet

Like everyone else, I spent 2020 in more and more video-conference calls. How many times did we struggle to find the meeting window during a call, only to say “Sorry, I was on mute”? I tried to address that pain and ended up with the following setup.

xdotool

xdotool is a great automation tool for X, and it can search a window, activate it, and simulate keyboard input. That’s a perfect match for the use case here.

Here is an example command for Google Meet.

$ xdotool search --name '^Meet - .+ - Chromium$' \
    windowactivate --sync \
    key ctrl+d

Chained together, the command does the following:

  1. search windows named like Meet - <MEETING_ID> - Chromium
  2. activate the first window passed by the previous line and wait until it gets active (--sync)
  3. send a keystroke as Ctrl+D, which is the default shortcut in Meet

By the way, my main browser is Firefox, but I have to use Chromium to join Meet calls since it tends to have lower CPU utilization.

You can do something similar for Zoom with Alt+A.

$ xdotool search --name '^Zoom Meeting$' \
    windowactivate --sync \
    key alt+a

Microsoft Teams should work with xdotool and Ctrl+Shift+M too, at least for the web version.
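If you run the Teams web version in its own window, something along these lines should work; the window title pattern here is only a guess and will depend on your browser:

$ xdotool search --name 'Microsoft Teams' \
    windowactivate --sync \
    key ctrl+shift+m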

GNOME keyboard shortcuts

The commands above can be mapped to a shortcut key with GNOME.

It’s pretty simple, but a couple of tricks may be required. As far as I can see, gsd-media-keys invokes the command when the shortcut key is pressed, not released. In my case, I use Ctrl+space as the shortcut key, so Meet may see the keys pressed as Ctrl+space + Ctrl+D = Ctrl+space+D, which doesn’t actually trigger the mute/unmute behavior. Pressed keys can be cancelled with keyup, so the key command was turned into keyup space key ctrl+d in the end.

Also, I wanted to use the same shortcut key for multiple services, so I ended up with the following line, which tries Google Meet first and then falls back to Zoom if no Meet window is found. It should work in most cases unless you join multiple meetings at the same time.

sh -c "
    xdotool search --name '^Meet - .+ - Chromium$' \
        windowactivate --sync \
        keyup space key ctrl+d \
    || xdotool search --name '^Zoom Meeting$' \
        windowactivate --sync \
        keyup ctrl+space key alt+a
"

--clearmodifiers can be used to simplify the whole command, but when I tried it, it sometimes left the Ctrl key pressed depending on the timing of when I released the key.
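For reference, the shortcut can also be registered from the command line with gsettings instead of clicking through Settings. This is only a sketch: the custom0 slot, the name and the ~/bin/toggle-mute.sh script (which would contain the sh -c command above) are arbitrary choices, and setting custom-keybindings this way replaces any custom shortcuts you already have.

# Declare a single custom keybinding slot (overwrites the existing list)
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"

# Point that slot at a name, a command and the Ctrl+space binding
KB="org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/"
gsettings set "$KB" name 'Toggle mute'
gsettings set "$KB" command "$HOME/bin/toggle-mute.sh"
gsettings set "$KB" binding '<Primary>space'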

Hardware mute/unmute button

Going further, I wanted to have a dedicated button to mute/unmute myself especially for some relaxed meetings where I don’t have to keep my hands on the keyboard all the time.

Back in October, I bought a USB volume controller, which is recognized by the OS as “STMicroelectronics USB Volume Control”. It was around 15 USD.

It emits the expected events: KEY_VOLUMEUP and KEY_VOLUMEDOWN when turning the dial, and KEY_MUTE when the knob is pressed.

I created a “hwdb” file to remap the mute key to something else as follows in /etc/udev/hwdb.d/99-local-remap-usb-volume-control.hwdb.

# STMicroelectronics USB Volume Control
# Remap the click (Mute) to XF86Launch
evdev:input:b0003v0483p572D*
 KEYBOARD_KEY_c00e2=prog4

Once the hardware database is updated with systemd-hwdb update and the device is unplugged and plugged back in (or re-triggered with udevadm), I was able to map Launch4 (prog4) to the xdotool commands in GNOME successfully.
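In practice those two steps look roughly like this; udevadm trigger simply re-processes the devices if you’d rather not unplug anything:

sudo systemd-hwdb update
sudo udevadm trigger    # or just unplug and replug the device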

It looks like everyone had the same idea. There are fancier buttons out there :-)

on December 30, 2020 06:49 PM

December 28, 2020

Will a more than plausible migration from x86 to ARM be good for Linux? Will it mean the death of Linux? We believe dark times are ahead… And the Edge browser arrives on Linux. Listen to us on: Ivoox, Telegram, YouTube, and in your usual podcast client via the RSS feed.
on December 28, 2020 07:13 PM

December 25, 2020

First off, I want to wish everyone a Happy Holidays and a Merry Christmas. I know 2020 has been a hard year for so many, and I hope you and your families are healthy and making it through the year.

Over the past few years, I’ve gotten into making holiday ornaments for friends and family. In 2017, I did a snowflake PCB ornament. In 2018, I used laser cutting service Ponoko to cut acrylic fir trees with interlocking pieces. In 2019, I used my new 3D printer to print 3-dimensional snowflakes. In 2020, I’ve returned to my roots and gone with another PCB design. As a huge fan of DEFCON #badgelife, it felt appropriate to go back this way. I ended up with a touch-sensitive snowman with 6 LEDs.

Front of Ornament

The ornament features a snowman created using the black silkscreen and white soldermask. The front artwork was created by drawing it in Inkscape, exporting to a PNG, and pulling it into KiCad’s bmp2component. Of course, bmp2component wants to output this as a footprint, so I had to adjust the resulting kicad_mod file to put things on the silkscreen layer.

There are 6 LEDs. The eyes and buttons are white LEDs and the nose, befitting the typical carrot, is an orange LED. All the remaining components are on the reverse.


Back of Ornament

The back of the ornament houses all of the working bits. The main microcontroller is the Microchip ATtiny84A. It directly drives the LEDs via 6 of the I/O pins with 200Ω resistors for current limiting.

The power supply, at the lower right of the back side, is a boost converter to maintain 3.6V (necessary for the white LEDs with a bit of overhead) out of the coin cell battery. Coin cells start at 3V, which can barely run a white LED under a lot of conditions, but they drop fairly quickly. This power supply will keep things going down to at least 2.2V of input. Note that the actual chip for the power supply is a 2mm-by-2mm component – I didn’t realize just how hard that would be to actually assemble until I had them in my hands!

At the bottom left of the back is the capacitive touch sensor, the Microchip AT42QT1010. It connects to a copper area on the front of the ornament to detect a touch in that area. It produces a signal when the touch is detected, but that had to be debounced in software due to stray signals, probably from the LEDs.


Each ornament was hand assembled, leading to a limited run of 14. (15 if you count a prototype that’s wired up to a power supply instead of a battery supply.) The firmware running on the microcontroller is written in C, and was programmed onto the boards using the Tigard. I had intended to use pogo pins to program via the pads above the microcontroller, but I ended up using a chip clip to program instead.

I hope this might inspire others to give DIY PCB artwork a try. It’s quite simple if you know some basic electronics, and it’s really fun to see something you built come to life. Merry Christmas to all, and may 2021 be infinitely better than 2020.

on December 25, 2020 08:00 AM

December 24, 2020

S13E40 – Ravens

Ubuntu Podcast from the UK LoCo

This week we have been fixing network and audio noise and playing Hotshot Racing. We look back and celebrate the good things that happened in 2020, bring you some GUI love and go over all your wonderful feedback.

It’s Season 13 Episode 40 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

Normal use

mangohud /path/to/app

Steam launcher

Open Properties for a game in Steam and set this in “SET LAUNCH OPTIONS…”

mangohud %command%

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on December 24, 2020 03:00 PM

Second Half Update For December 2020

Stephen Michael Kellat

In no particular order:

  • Contribution efforts on my part are held back due to other matters requiring attention. I know LP Bug #1905548 needs attention. This is just the time of year when not much gets accomplished usually anyhow.

  • Like many at churches across the United States I have never had a desire to emulate Kenneth Copeland or other televangelists. What do you call it when it is either not prudent or not possible for a church to meet in person, which results in services having to be streamed online? Outside an outlet like EWTN, it would certainly seem like having to engage in televangelism of a sort after all. Various open source pieces of software have been used as I end up producing things in the garage. The results are presently posted to YouTube, based on surveying the served audience and what online services they utilize. It is not as if I am operating a numbers station.

  • Using Ubuntu via the Windows Subsystem for Linux has been exciting. It makes having to use a Windows 10 laptop quite bearable.

  • The website of Erie Looking Productions is offline as I am trying to figure out where I want to move its hosting to. I want to start distributing hosting of my sites across different providers. The number of odd crashes Google services have had over the past couple of months just has me twitchy.

  • It does seem like alternative education is going to be a big driver in 2021 perhaps.

  • The other odd thing to watch in 2021 is possibly going to be riscv64, I think. If somebody comes up with a mass-market laptop with performance somewhat exceeding that of a Raspberry Pi 400 but on a riscv64 base, I think I will be quite interested in buying one.

on December 24, 2020 04:23 AM

December 19, 2020

The previous post went over the planned redundancy aspect of this setup at the storage, networking and control plane level. Now let’s see how to get those systems installed and configured for this setup.

Firmware updates and configuration

First things first: whether it’s systems coming from an eBay seller or straight from the factory, the first step is always to update all firmware to the latest available.

In my case, that meant updating the SuperMicro BMC firmware and then the BIOS/UEFI firmware too. Once done, perform a factory reset of both the BMC and UEFI config and then go through the configuration to get something that suits your needs.

The main things I had to tweak other than the usual network settings and accounts were:

  • Switch the firmware to UEFI only and enable Secure Boot
    This involves flipping all option ROMs to EFI, disabling CSM and enabling Secure Boot using the default keys.
  • Enable SR-IOV/IOMMU support
    Useful if you ever want to use SR-IOV or PCI device passthrough.
  • Disable unused devices
    In my case, the only storage backplane is connected to a SAS controller with nothing plugged into the SATA controller, so I disabled it.
  • Tweak storage drive classification
    The firmware allows configuring if a drive is HDD or SSD, presumably to control spin up on boot.

Base OS install

With that done, I grabbed the Ubuntu 20.04.1 LTS server ISO, dumped it onto a USB stick and booted the servers from it.

I had all servers and their BMCs connected to my existing lab network to make things easy for the initial setup; it’s easier to do complex network configuration after the initial installation.

The main thing to get right at this step is the basic partitioning for your OS drive. My original plan was to carve off some space from the NVMe drive for the OS. Unfortunately, after an initial installation done that way, I realized that my motherboard doesn’t support NVMe booting, so I ended up reinstalling, this time carving out some space from the SATA SSD instead.

In my case, I ended up creating a 35GB root partition (ext4) and 4GB swap partition, leaving the rest of the 2TB drive unpartitioned for later use by Ceph.

With the install done, make sure you can SSH into the system, and also check that you can access the console through the BMC, both over VGA and over the IPMI text console. That last part can be done by dropping a file in /etc/default/grub.d/ that looks like:

GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} console=tty0 console=ttyS1,115200n8"
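One way to put that in place, with an arbitrary file name, and then regenerate the bootloader configuration:

cat <<'EOF' | sudo tee /etc/default/grub.d/90-serial-console.cfg
GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} console=tty0 console=ttyS1,115200n8"
EOF
sudo update-grub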

Finally, you’ll want to make sure you apply any pending updates and reboot, then check dmesg for anything suspicious coming from the kernel. Better to catch compatibility and hardware issues early on.

Networking setup

On the networking front, you may remember I’ve gotten configs with 6 NICs: two gigabit ports and four 10Gbit ports. The gigabit NICs are bonded together and go to the switch; the 10Gbit ports are used to create a mesh, with each server using a two-port bond to each of the others.

Combined with the dedicated BMC ports, this ends up looking like this:

Here we can see the switch receiving its uplink over LC fiber, each server has its BMC plugged into a separate switch port and VLAN (green cables), each server is also connected to the switch with a two port bond (black cables) and each server is connected to the other two using a two port bond (blue cables).

Ubuntu uses Netplan for its network configuration these days; the configuration on those servers looks something like this:

network:
  version: 2
  ethernets:
    enp3s0f0:
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
    enp3s0f1:
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
    enp1s0f0:
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
    enp1s0f1:
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
    ens1f0:
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
    ens1f1:
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000

  bonds:
    # Connection to first other server
    bond-mesh01:
      interfaces:
        - enp3s0f0
        - enp3s0f1
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4

    # Connection to second other server
    bond-mesh02:
      interfaces:
        - enp1s0f0
        - enp1s0f1
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 9000
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4

    # Connection to the switch
    bond-sw01:
      interfaces:
        - ens1f0
        - ens1f1
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4

  vlans:
    # WAN-HIVE
    bond-sw01.50:
      link: bond-sw01
      id: 50
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

    # INFRA-UPLINK
    bond-sw01.100:
      link: bond-sw01
      id: 100
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

    # INFRA-HOSTS
    bond-sw01.101:
      link: bond-sw01
      id: 101
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

    # INFRA-BMC
    bond-sw01.102:
      link: bond-sw01
      id: 102
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

  bridges:
    # WAN-HIVE
    br-wan-hive:
      interfaces:
        - bond-sw01.50
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

    # INFRA-UPLINK
    br-uplink:
      interfaces:
        - bond-sw01.100
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

    # INFRA-HOSTS
    br-hosts:
      interfaces:
        - bond-sw01.101
      accept-ra: true
      dhcp4: false
      dhcp6: false
      mtu: 1500
      nameservers:
        search:
          - stgraber.net
        addresses:
          - 2602:XXXX:Y:10::1

    # INFRA-BMC
    br-bmc:
      interfaces:
        - bond-sw01.102
      link-local: []
      accept-ra: false
      dhcp4: false
      dhcp6: false
      mtu: 1500

That’s the part which is common to all servers. On top of that, each server needs its own tiny bit of config to set up the right routes to its other two peers, which looks like this:

network:
  version: 2
  bonds:
    # server 2
    bond-mesh01:
      addresses:
        - 2602:XXXX:Y:ZZZ::101/64
      routes:
        - to: 2602:XXXX:Y:ZZZ::100/128
          via: fe80::ec7c:7eff:fe69:55fa

    # server 3
    bond-mesh02:
      addresses:
        - 2602:XXXX:Y:ZZZ::101/64
      routes:
        - to: 2602:XXXX:Y:ZZZ::102/128
          via: fe80::8cd6:b3ff:fe53:7cc

  bridges:
    br-hosts:
      addresses:
        - 2602:XXXX:Y:ZZZ::101/64

My setup is pretty much entirely IPv6 except for a tiny bit of IPv4 for some specific services so that’s why everything above very much relies on IPv6 addressing, but the same could certainly be done using IPv4 instead.

With this setup, I have a 2Gbit/s bond to the top of the rack switch configured to use static addressing but using the gateway provided through IPv6 router advertisements. I then have a first 20Gbit/s bond to the second server with a static route for its IP and then another identical bond to the third server.

This allows all three servers to communicate at 20Gbit/s and then at 2Gbit/s to the outside world. The fast links will almost exclusively be carrying Ceph, OVN and LXD internal traffic, the kind of traffic that’s using a lot of bandwidth and requires good latency.

To complete the network setup, OVN is installed using the ovn-central and ovn-host packages from Ubuntu and then configured to communicate using the internal mesh subnet.

This part is done by editing /etc/default/ovn-central on all 3 systems and updating OVN_CTL_OPTS to pass a number of additional parameters:

  • --db-nb-addr to the local address
  • --db-sb-addr to the local address
  • --db-nb-cluster-local-addr to the local address
  • --db-sb-cluster-local-addr to the local address
  • --db-nb-cluster-remote-addr to the first server’s address
  • --db-sb-cluster-remote-addr to the first server’s address
  • --ovn-northd-nb-db to all the addresses (port 6641)
  • --ovn-northd-sb-db to all the addresses (port 6642)

The first server shouldn’t have the remote-addr options set as it’s the bootstrap server; the others will then join that initial server’s cluster, at which point the startup argument isn’t needed anymore (but it doesn’t really hurt to keep it in the config).
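As a rough sketch rather than the exact file used here, /etc/default/ovn-central on the second server could end up looking something like the following, reusing the placeholder addressing from above (the exact bracketing of IPv6 addresses may need adjusting):

OVN_CTL_OPTS="--db-nb-addr=[2602:XXXX:Y:ZZZ::101] \
  --db-sb-addr=[2602:XXXX:Y:ZZZ::101] \
  --db-nb-cluster-local-addr=[2602:XXXX:Y:ZZZ::101] \
  --db-sb-cluster-local-addr=[2602:XXXX:Y:ZZZ::101] \
  --db-nb-cluster-remote-addr=[2602:XXXX:Y:ZZZ::100] \
  --db-sb-cluster-remote-addr=[2602:XXXX:Y:ZZZ::100] \
  --ovn-northd-nb-db=tcp:[2602:XXXX:Y:ZZZ::100]:6641,tcp:[2602:XXXX:Y:ZZZ::101]:6641,tcp:[2602:XXXX:Y:ZZZ::102]:6641 \
  --ovn-northd-sb-db=tcp:[2602:XXXX:Y:ZZZ::100]:6642,tcp:[2602:XXXX:Y:ZZZ::101]:6642,tcp:[2602:XXXX:Y:ZZZ::102]:6642"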

If OVN was running unclustered, you’ll want to reset it by wiping /var/lib/ovn and restarting ovn-central.service.

Storage setup

On the storage side, I won’t go over how to get a three-node Ceph cluster; there are many different ways to achieve that using just about every deployment/configuration management tool in existence, as well as upstream’s own ceph-deploy tool.

In short, the first step is to deploy a Ceph monitor (ceph-mon) per server, followed by a Ceph manager (ceph-mgr) and a Ceph metadata server (ceph-mds). With that done, one Ceph OSD (ceph-osd) per drive needs to be set up. In my case, both the HDDs and the NVMe SSD are consumed in full for this, while for the SATA SSD I created a partition using the space remaining after the installation and put that into Ceph.
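How exactly the OSDs get created depends on the chosen deployment tool; with a hand-rolled cluster, it could be done per drive with ceph-volume, roughly like this (device names are purely illustrative):

ceph-volume lvm create --data /dev/sda      # first HDD, consumed in full
ceph-volume lvm create --data /dev/sdb      # second HDD, consumed in full
ceph-volume lvm create --data /dev/nvme0n1  # NVMe SSD, consumed in full
ceph-volume lvm create --data /dev/sdc3     # leftover partition on the SATA SSD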

At that stage, you may want to learn about Ceph crush maps and do any tweaking that you want based on your storage setup.

In my case, I have two custom crush rules, one which targets exclusively HDDs and one which targets exclusively SSDs. I’ve also made sure that each drive has the proper device class, and I’ve tweaked the primary affinity a bit so that the faster drives are prioritized for the first replica.
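Those rules and tweaks aren’t shown in this post, but they can be expressed with standard Ceph commands along these lines; the rule names match the ones used below, while the OSD ids and affinity values are just examples mirroring the tree further down:

# One replicated rule per device class, with host-level failure domains
ceph osd crush rule create-replicated replicated_rule_hdd default host hdd
ceph osd crush rule create-replicated replicated_rule_ssd default host ssd

# Correct the device class of a drive that was misdetected
ceph osd crush rm-device-class osd.10
ceph osd crush set-device-class ssd osd.10

# De-prioritize the slower drives for the first replica
ceph osd primary-affinity osd.4 0.125
ceph osd primary-affinity osd.7 0.25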

I’ve also created an initial ceph fs filesystem for use by LXD with:

ceph osd pool create lxd-cephfs_metadata 32 32 replicated replicated_rule_ssd
ceph osd pool create lxd-cephfs_data 32 32 replicated replicated_rule_hdd
ceph fs new lxd-cephfs lxd-cephfs_metadata lxd-cephfs_data
ceph fs set lxd-cephfs allow_new_snaps true

This makes use of those custom rules, putting the metadata on SSD with the actual data on HDD.

The cluster should then look something like this:

root@langara:~# ceph status
  cluster:
    id:     dd7a8436-46ff-4017-9fcb-9ef176409fc5
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum abydos,langara,orilla (age 37m)
    mgr: langara(active, since 41m), standbys: abydos, orilla
    mds: lxd-cephfs:1 {0=abydos=up:active} 2 up:standby
    osd: 12 osds: 12 up (since 37m), 12 in (since 93m)
 
  task status:
    scrub status:
        mds.abydos: idle
 
  data:
    pools:   5 pools, 129 pgs
    objects: 16.20k objects, 53 GiB
    usage:   159 GiB used, 34 TiB / 34 TiB avail
    pgs:     129 active+clean

With the OSDs configured like so:

root@langara:~# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME         STATUS  REWEIGHT  PRI-AFF
-1         34.02979  root default                               
-3         11.34326      host abydos                            
 4    hdd   3.63869          osd.4         up   1.00000  0.12500
 7    hdd   5.45799          osd.7         up   1.00000  0.25000
 0    ssd   0.46579          osd.0         up   1.00000  1.00000
10    ssd   1.78079          osd.10        up   1.00000  0.75000
-5         11.34326      host langara                           
 5    hdd   3.63869          osd.5         up   1.00000  0.12500
 8    hdd   5.45799          osd.8         up   1.00000  0.25000
 1    ssd   0.46579          osd.1         up   1.00000  1.00000
11    ssd   1.78079          osd.11        up   1.00000  0.75000
-7         11.34326      host orilla                            
 3    hdd   3.63869          osd.3         up   1.00000  0.12500
 6    hdd   5.45799          osd.6         up   1.00000  0.25000
 2    ssd   0.46579          osd.2         up   1.00000  1.00000
 9    ssd   1.78079          osd.9         up   1.00000  0.75000

LXD setup

The last piece is building up a LXD cluster which will then be configured to consume both the OVN networking and Ceph storage.

For OVN support, using an LTS branch of LXD won’t work as 4.0 LTS predates OVN support, so instead I’ll be using the latest stable release.

Installation is as simple as: snap install lxd --channel=latest/stable

Then run lxd init on the first server: answer yes to the clustering question, make sure the hostname is correct and that the address used is the one on the mesh subnet, then create the new cluster, setting an initial password and skipping over all the storage and network questions; it’s easier to configure those by hand later on.

After that, run lxd init on the remaining two servers, this time pointing them to the first server to join the existing cluster.

With that done, you have an LXD cluster:

root@langara:~# lxc cluster list
+----------+-------------------------------------+----------+--------+-------------------+--------------+----------------+
|   NAME   |                 URL                 | DATABASE | STATE  |      MESSAGE      | ARCHITECTURE | FAILURE DOMAIN |
+----------+-------------------------------------+----------+--------+-------------------+--------------+----------------+
| server-1 | https://[2602:XXXX:Y:ZZZ::100]:8443 | YES      | ONLINE | fully operational | x86_64       | default        |
+----------+-------------------------------------+----------+--------+-------------------+--------------+----------------+
| server-2 | https://[2602:XXXX:Y:ZZZ::101]:8443 | YES      | ONLINE | fully operational | x86_64       | default        |
+----------+-------------------------------------+----------+--------+-------------------+--------------+----------------+
| server-3 | https://[2602:XXXX:Y:ZZZ::102]:8443 | YES      | ONLINE | fully operational | x86_64       | default        |
+----------+-------------------------------------+----------+--------+-------------------+--------------+----------------+

Now that cluster needs to be configured to access OVN and to use Ceph for storage.

On the OVN side, all that’s needed is: lxc config set network.ovn.northbound_connection tcp:<server1>:6641,tcp:<server2>:6641,tcp:<server3>:6641

As for Ceph, creating a Ceph RBD storage pool can be done with:

lxc storage create ssd ceph source=lxd-ssd --target server-1
lxc storage create ssd ceph source=lxd-ssd --target server-2
lxc storage create ssd ceph source=lxd-ssd --target server-3
lxc storage create ssd ceph

And for Ceph FS:

lxc storage create shared cephfs source=lxd-cephfs --target server-1
lxc storage create shared cephfs source=lxd-cephfs --target server-2
lxc storage create shared cephfs source=lxd-cephfs --target server-3
lxc storage create shared cephfs

In my case, I’ve also set up an lxd-hdd pool, resulting in a final setup of:

root@langara:~# lxc storage list
+--------+-------------+--------+---------+---------+
|  NAME  | DESCRIPTION | DRIVER |  STATE  | USED BY |
+--------+-------------+--------+---------+---------+
| hdd    |             | ceph   | CREATED | 1       |
+--------+-------------+--------+---------+---------+
| shared |             | cephfs | CREATED | 0       |
+--------+-------------+--------+---------+---------+
| ssd    |             | ceph   | CREATED | 16      |
+--------+-------------+--------+---------+---------+
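The lxd-hdd pool isn’t shown above but presumably follows the exact same pattern as the RBD pool, just pointing at the HDD-backed Ceph pool:

lxc storage create hdd ceph source=lxd-hdd --target server-1
lxc storage create hdd ceph source=lxd-hdd --target server-2
lxc storage create hdd ceph source=lxd-hdd --target server-3
lxc storage create hdd ceph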

Up next

The next post is likely to be quite network heavy, going into why I’m using dynamic routing and how I’ve got it all set up. This is the missing piece of the puzzle in what I’ve shown so far: without it, you’d need an external router with a bunch of static routes to send traffic to the OVN networks.

on December 19, 2020 12:46 AM

December 18, 2020

Like every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, 239.25 work hours were dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

In November we held the last LTS team meeting for 2020 on IRC, with the next one coming up at the end of January.
We announced a new formalized initiative for Funding Debian projects with money from Freexian’s LTS service.
Finally, we would like to remark once again that we are constantly looking for new contributors. Please contact Holger if you are interested!

We’re also glad to welcome two new sponsors, Moxa, a device manufacturer, and a French research lab (Institut des Sciences Cognitives Marc Jeannerod).

The security tracker currently lists 37 packages with a known CVE and the dla-needed.txt file has 40 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


on December 18, 2020 10:02 AM
The Lubuntu Team is pleased to announce we are running a Hirsute Hippo artwork competition, giving you, our community, the chance to submit, and get your favorite wallpapers for both the desktop and the greeter/login screen (SDDM) included in the Lubuntu 21.04 release. Show Your Artwork To enter, simply post your image into this thread on our […]
on December 18, 2020 01:15 AM

December 17, 2020

S13E39 – Walking backwards

Ubuntu Podcast from the UK LoCo

This week we’ve been playing Cyberpunk 2077 and applying for Ubuntu Membership. We round up the goings on in the Ubuntu community and also bring you our favourite news picks from the wider tech world.

It’s Season 13 Episode 39 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on December 17, 2020 03:00 PM

December 16, 2020

In the previous post I went over the reasons for switching to my own hardware and what hardware I ended up selecting for the job.

Now it’s time to look at how I intend to achieve the high availability goals of this setup, effectively limiting the number of single points of failure as much as possible.

Hardware redundancy

On the hardware front, every server has:

  • Two power supplies
  • Hot swappable storage
  • 6 network ports served by 3 separate cards
  • BMC (IPMI/redfish) for remote monitoring and control

The switch is the only real single point of failure on the hardware side of things. But it also has two power supplies and hot swappable fans. If this ever becomes a problem, I can also source a second unit and use data and power stacking along with MLAG to get rid of this single point of failure.

I mentioned that each server has four 10Gbit ports yet my switch is Gigabit. This is fine, as I’ll be using a mesh type configuration for the high-throughput part of the setup, effectively connecting each server to the other two with a dual 10Gbit bond each. Then each server will get a dual Gigabit bond to the switch for external connectivity.

Software redundancy

The software side is where things get really interesting, there are three main aspects that need to be addressed:

  • Storage
  • Networking
  • Compute

Storage

For storage, the plan is to rely on Ceph. Each server will run a total of 4 OSDs, one per physical drive; the SATA SSD also acts as the boot drive, so its OSD is a large partition instead of the full disk.

Each server will also act as MON, MGR and MDS, providing a fully redundant Ceph cluster on 3 machines capable of providing both block and filesystem storage through RBD and CephFS.

Two maps will be set up, one for HDD storage and one for SSD storage.
Storage affinity will also be configured such that the NVMe drives are used for the primary replica in the SSD map, with the SATA drives holding the secondary/tertiary replicas instead.

This makes the storage layer quite reliable: a full server can go down with only minimal impact. Should a server go offline because of a hardware failure, the on-site staff can very easily relocate the drives from the failed server to the other two servers, allowing Ceph to recover the majority of its OSDs until the defective server can be repaired.

Networking

Networking is where things get quite complex when you want something really highly available. I’ll be getting a Gigabit internet drop from the co-location facility on top of which a /27 IPv4 and a /48 IPv6 subnet will be routed.

Internally, I’ll be running many small networks grouping services together. None of those networks will have much in the way of allowed ingress/egress traffic and the majority of them will be IPv6 only.

The majority of egress will be done through a proxy server and IPv4 access will be handled through a DNS64/NAT64 setup.
Ingress when needed will be done by directly routing an additional IPv4 or IPv6 address to the instance running the external service.

At the core of all this will be OVN which will run on all 3 machines with its database clustered. Similar to Ceph for storage, this allows machines to go down with no impact on the virtual networks.

Where things get tricky is on providing a highly available uplink network for OVN. OVN draws addresses from that uplink network for its virtual routers and routes egress traffic through the default gateway on that network.

One option would be a static setup: have the switch act as the gateway on the uplink network, feed that to OVN over a VLAN and then add manual static routes for every public subnet or public address which needs routing to a virtual network. That’s easy to set up, but I don’t like the need to constantly update static routing information in my switch.

Another option is to use LXD’s l2proxy mode for OVN; this effectively makes OVN respond to ARP/NDP for any address it’s responsible for, but it then requires the entire IPv4 and IPv6 subnet to be directly routed to the one uplink subnet. This can get very noisy and just doesn’t scale well with large subnets.

The more complicated but more flexible option is to use dynamic routing.
Dynamic routing involves routers talking to each other, advertising and receiving routes. That’s the core of how the internet works but can also be used for internal networking.

My setup effectively looks like this:

  • Three containers running FRR each connected to both the direct link with the internet provider and to the OVN uplink network.
  • Each one of those will maintain BGP sessions with the internet provider’s routers AS WELL as with the internal hosts running OVN.
  • VRRP is used to provide a single highly available gateway address on the OVN uplink network.
  • I wrote lxd-bgp as a small BGP daemon that integrates with the LXD API to extract all the OVN subnets and instance addresses which need to be publicly available and announces those routes to the three routers.

This may feel overly complex, and it quite possibly is, but it gives me three routers, one on each server, only one of which needs to be running at any one time. It also gives me the ability to balance routing traffic, both ingress and egress, by tweaking the BGP or VRRP priorities.

The nice side effect of this setup is that I’m also able to use anycast for critical services, both internally and externally, effectively running three identical copies of the service, one per server, all with the exact same address. The routers will be aware of all three and will pick one as the destination. If that instance or server goes down, the route disappears and the traffic goes to one of the other two!

Compute

On the compute side, I’m obviously going to be using LXD with the majority of services running in containers and with a few more running in virtual machines.

Stateless services that I want to always be running no matter what happens will be using anycast as shown above. This also applies to critical internal services as is the case above with my internal DNS resolvers (unbound).

Other services may still run two or more instances and be placed behind a load balancing proxy (HAProxy) to spread the load as needed and handle failures.

Lastly, even services that will only run as a single instance will still benefit from the highly available environment. All their data will be stored on Ceph, meaning that in the event of server maintenance or failure, it’s a simple matter of running lxc move to relocate them to any of the other servers and bring them back online. When planned ahead of time, this means service downtime of less than 5s or so.
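With everything on Ceph, that relocation really is a one-liner; for example, with a hypothetical instance called web1:

lxc move web1 --target server-3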

Up next

In the next post, I’ll be going into more details on the host setup, setting up Ubuntu 20.04 LTS, Ceph, OVN and LXD for such a cluster.

on December 16, 2020 10:15 PM

December 14, 2020

After an unexpectedly short discussion on debian-project, we’re moving forward with this new initiative. The Debian security team submitted a project proposal requesting some improvements to tracker.debian.org, and since nobody on the security team wants to be paid to implement the project, we have opened a request for bids to find someone to implement this on a contractor basis.

If you can code in Python following test-driven development and know the Django framework, feel free to submit a bid! Ideally you have some experience with the security tracker too but that’s not a strong requirement.

About the project

If you haven’t read the discussion on debian-project, Freexian is putting aside part of the money collected for Debian LTS to use it to fund generic Debian development projects. The goal is two-fold:

  1. First, the LTS work necessarily had an impact on other Debian teams that made the project possible (security team, DSA, buildd, ftpmasters, debian-www mainly) and we wanted to be able to give back to those teams by funding improvements to their infrastructure.
  2. We have always allowed paid contributors to go beyond just preparing security updates for the LTS release. They can pick tasks that improve the LTS project at large (we try to collect such tasks here: https://salsa.debian.org/lts-team/lts-extra-tasks/-/issues), but these should not go over 25% of their allocated monthly hours, which limits their ability to tackle bigger projects. We would therefore like to be able to tackle bigger projects that can have a meaningful impact on the LTS project and/or Debian in general.

We have tried to formalize a process to follow from project submission up to its implementation  in this salsa project:
https://salsa.debian.org/freexian-team/project-funding
https://salsa.debian.org/freexian-team/project-funding/-/blob/master/Rules-LTS.md

We highly encourage the above-mentioned Debian teams to make proposals. A member of those teams can implement the project and be paid for it. Or they can decide to let someone else implement it (we expect some of the paid LTS contributors to be willing to implement such projects) and just play the reviewer role, driving the person doing the work in the right direction. Contrary to Google Summer of Code and other similar programmes, we put the focus on the results (and not on recruiting new volunteers), so we expect to work with experienced people to implement the projects. But if the reviewer is happy to be a mentor and spend more time, then that’s OK for us too. The reviewer is (usually) not a paid position.

If you’re not among those teams, but if you have a project that can have a positive impact on Debian LTS (even if only indirectly in the distant future), feel free to try your chance and to submit a proposal.


on December 14, 2020 10:52 PM

December 13, 2020

As you know from our previous post, back in 2019 the Kubuntu team set to work collaborating with MindShare Management Ltd to bring a Kubuntu dedicated laptop to the market. Recently, Chris Titus from the ‘Chris Titus Tech’ YouTube channel acquired a Kubuntu Focus M2 for the purpose of reviewing it, and he was so impressed he has decided to keep it as his daily driver. That’s right; Chris has chosen the Kubuntu Focus M2 instead of the Apple MacBook Pro M1 that he had intended to get. That is one Awesome recommendation!

Chris stated that the Kubuntu Focus was “The most unique laptop, and I am not talking about the Apple M1, and neither am I talking about AMD Ryzen.”

In the review on his channel, not only did he put our Kubuntu-based machine through its software paces, he additionally took the hardware to pieces and demonstrated the high quality build. Chris made light work of opening the laptop up and installing additional hardware, and he went on to say, “The whole build out is using branded, high quality parts, like the Samsung EVO Plus, and Crucial memory; not some cheap knock-off”.

The Kubuntu Focus team have put a lot of effort into matching the software selection and operating system to the hardware. This ensures that users get the best possible performance from the Kubuntu Focus package. As Chris says in his review video “The tools, scripts and work this team has put together has Impressed the hell out of me!”

The team has used the power optimizations available in Kubuntu, and additionally provides a GPU switcher which makes it super simple to change between the discrete Nvidia GPU and the integrated Intel GPU. This impressed Chris a lot: “I was able to squeeze 7 to 8 hours out of it on battery, absolutely amazing!” he said.

The Kubuntu Focus is an enterprise-ready machine, and arguably ‘The Ultimate Linux Laptop’. In his video, Chris goes on to demonstrate that the Kubuntu Focus includes Insync integration support for DropBox, OneDrive and GoogleDrive file sharing.

The Kubuntu Focus is designed from the get-go to be a transition device, providing Apple MacBook and Microsoft Windows users with a Cloud Native device in a laptop format which delivers desktop computing performance.

Chris ran our machine through a variety of benchmark testing tools, and the results are super impressive “Deep Learning capabilities are unparalleled, but more impressive is that it is configured for deep learning out of the box, and took just 10 minutes to be up and running. This is the best mobile solution you could possibly get.” Chris states.

To bring this article to a close it would be remiss of me not to mention Chris Titus’s experience with the support provided by the Kubuntu Focus team. Chris was able to speak directly to the engineering team, and get fast accurate answers to all his questions. Chris says “Huge shout out to the support team, I am beyond impressed”

Congratulations to the support team at MindShare Management Ltd, delivering great customers support is very challenging, and their experience and expertise is obviously coming across with their customers.

Wow! This is a monumental YouTube review of Kubuntu, and the whole Kubuntu community should congratulate themselves for creating ‘The Ultimate Linux Desktop’, which is being used to build ‘The Ultimate Linux Laptop’. Below is the YouTube review on the ‘Chris Titus Tech’ YouTube channel. Check it out, and see for yourself how impressed he is with this machine. Do remember to share this article.

About the Author:

Rick Timmis is a Kubuntu Councillor and advocate. Rick has been a user of and open contributor to Kubuntu for over 10 years, and a KDE user and contributor for 20.

on December 13, 2020 02:30 PM

December 12, 2020

Full Circle Weekly News #193

Full Circle Magazine


Linux Coming to Apple M1 Macs
https://www.patreon.com/marcan
on December 12, 2020 11:33 AM

December 10, 2020

CentOS Stream, or Debian?

Jonathan Carter

It’s the end of CentOS as we know it

Earlier this week, the CentOS project announced the shift to CentOS Stream. In a nutshell, this means that CentOS will discontinue being a close clone of RHEL with security updates, and will instead serve as a development branch of RHEL.

As you can probably imagine (or glean from the comments in that post I referenced), a lot of people are unhappy about this.

One particular quote got my attention this morning while catching up on this week’s edition of Linux Weekly News, under the distributions quotes section:

I have been doing this for 17 years and CentOS is basically my life’s work. This was (for me personally) a heart wrenching decision. However, i see no other decision as a possibility. If there was, it would have been made.

Johnny Hughes

I feel really sorry for this person and can empathize, I’ve been in similar situations in my life before where I’ve poured all my love and energy into something and then due to some corporate or organisational decisions (and usually poor ones), the project got discontinued and all that work that went into it vanishes into the ether. Also, 17 years is really long to be contributing to any one project so I can imagine that this must have been especially gutting.

Throw me a freakin’ bone here

I’m also somewhat skeptical of how successful CentOS Stream will really be in any form of a community project. It seems that Red Hat is expecting that volunteers should contribute to their product development for free, and then when these contributors actually want to use that resulting product, they’re expected to pay a corporate subscription fee to do so. This seems like a very lop-sided relationship to me, and I’m not sure it will be sustainable in the long term. In Red Hat’s announcement of CentOS Stream, they kind of throw the community a bone by saying “In the first half of 2021, we plan to introduce low- or no-cost programs for a variety of use cases”- it seems likely that this will just be for experimental purposes similar to the Windows Insider program and won’t be of much use for production users at all.

Red Hat does point out that their Universal Base Image (UBI) is free to use and that users could just use that on any system in a container, but this doesn’t add much comfort to the individuals and organisations who have contributed huge amounts of time and effort to CentOS over the years who rely on a stable, general-purpose Linux system that can be installed on bare metal.

Way forward for CentOS users

Where to from here? I suppose CentOS users could start coughing up for RHEL subscriptions. For many CentOS use cases that won’t make much sense. They could move to another distribution, or fork/restart CentOS. The latter is already happening. One of the original founders of the CentOS project, Gregory Kurtzer, is now working on Rocky Linux, which aims to be a new free system built from the RHEL sources.

Some people from Red Hat and Canonical are often a bit surprised or skeptical when I point out to them that binary licenses are also important. This whole saga is yet another data point, but it proves that yet again. If Red Hat had from the beginning released RHEL with free sources and unobfuscated patches, then none of this would’ve been necessary in the first place. And while I wish Rocky Linux all the success it aims to achieve, I do not think that working for free on a system that ultimately supports Red Hat’s selfish eco-system is really productive or helpful.

The fact is, Debian is already a free enterprise-scale system used by huge organisations like Google and many others, with stable releases, LTS support and ELTS offerings from external organisations if someone really needs them. And while RHEL clones have come and gone through the years, Debian’s mission and contract with its users is something that stays consistent, and I believe Debian and its ideals will be around for as long as people need Unixy operating systems to run anywhere (i.e. a very long time).

While we sometimes fall short of some of our technical goals in Debian, and while we don’t always agree on everything, we do tend to make great long-term progress, and usually in the right direction. We’ve proved that our method of building a system together is sustainable, that we can do so reliably and timely and that we can collectively support it. From there on it can only get even better when we join forces and work together, because when either individuals or organisations contribute to Debian, they can use the end result for both private or commercial purposes without having to pay any fee or be encumbered by legal gotchas.

Don’t get caught by greedy corporate motivations that will result in you losing years of your life’s work for absolutely no good reason. Make your time and effort count and either contribute to Debian or give your employees time to do so on company time. Many already do and reap the rewards of this, and don’t look back.

While Debian is a very container- and virtualization-friendly system, we’ve managed to remain a good general-purpose operating system that spans use cases so vast that I’d have to use a blog post longer than this one just to cover them.

And while learning a whole new package build chain, package manager, organisational culture and so on can be, uhm, really rocky at the start, I’d say that it’s a good investment with Debian and unlikely to be time that you’ll ever feel was wasted. As Debian project leader, I’m personally available to help answer any questions that someone might have if they are interested in coming over to Debian. Feel free to mail leader_AT_debian.org (replace _AT_ with @) or find me on the OFTC IRC network with the nick highvoltage. I believe that together, we can make Debian the de facto free enterprise system, and that it would be to the benefit of all its corporate users, instead of tilting all the benefit to just one or two corporations who certainly don’t have your best interests in mind.

on December 10, 2020 02:45 PM

Over the last few days I tried to get my Applied Micro Mustang running again. It looks like it is no more, like that Norwegian Blue parrot.

Tried some things

By default, the Mustang outputs information on the serial console. Mine no longer does. I checked serial cables and serial-to-USB dongles. Nothing.

I tried to load the firmware from an SD card instead of the on-board flash. Nope.

Time to put it to rest.

How it looked

When I got it in June 2014 it came in a 1U server case with several loud fans, including one on the CPU heatsink. So I took the board out and put it into a PC tower case, and also replaced the 50mm processor fan with an 80mm one:

Top view of Mustang
Side view

All that development…

I did several things on it:

Some of them were done for the first time on AArch64.

The board gave me a lot of fun. I built countless software packages on it for CentOS, Debian, Fedora and RHEL, and tested the installers of each of them.

I was running OpenStack on it since ‘Liberty’ (especially after moving from 16GB to 32GB of RAM).

What next?

I am going to frame it, along with a few other devices which helped me during my career.

Replacement?

It would be nice to replace the Mustang with some newer AArch64 hardware. From what is available on the mass market, the SolidRun HoneyComb looks closest. But I will wait for something with Armv8.4 cores, to be able to play with nested virtualization.

on December 10, 2020 11:33 AM

December 04, 2020

FOSDEM 2021 (Online) – Community DevRoom Call for Participation!

The twenty-first edition of FOSDEM will take place 6-7 February, 2021 – online, and we’re happy to announce that there will be a virtual Community DevRoom as part of the event. 

Key dates / New updates

  • Conference dates 6-7 February, 2021 (online)
  • Community DevRoom date: Sunday, 7 February, 2021 (online) 
  • Submission deadline: 22 December, 2020
  • Announcement of selected talks: 31 December, 2020
  • Submission of recorded talks: 17 January, 2021
  • Talks will be pre-recorded in advance, and streamed during the event
  • Q/A session will be taken live
  • A facility will be provided for people watching to chat between themselves
  • A facility will be provided for people watching to submit questions
  • The reference time will be Brussels local time (CET)
  • Talk submissions should be 30/40 mins – please specify the duration in your submission

IN MORE DETAIL 

The Community DevRoom will be back at FOSDEM 2021 (Online). Our goals in running this DevRoom are to:

  • Educate those who are primarily software developers on community-oriented topics that are vital in the process of software development, e.g. effective collaboration
  • Provide concrete advice on dealing with squishy human problems
  • Unpack preconceived ideas of what community is and the role it plays in human society, free software, and a corporate-dominated world in 2021

We are seeking proposals on all aspects of creating and nurturing communities for free software projects.

TALK TOPICS

Here are some topics we are interested in hearing more about this year:

  • Creating sustainable communities in the midst of a pandemic
  • Community and engagement in a virtual world
  • Virtually overworked? Burnout is real, and even more so now. 
  • Conflict resolution 
  • How are communities changing in the face of a pandemic and how has it affected them?

Again, these are just suggestions. We welcome proposals on any aspect of community building!

HOW TO SUBMIT A TALK

  • If you already have a Pentabarf account, please don’t create a new one.
  • If you forgot your password, reset it. 
  • Otherwise, follow the instructions to create an account.
  • Once logged in, select, “Create Event” and click on “Show All” in the top right corner to display the full form. 
  • Your submission must include the following information: 

  • First and last name / Nickname (optional) / Image
  • Email address
  • Mobile phone number (this is a very hard requirement as there will be no other reliable form of emergency communication on the day)
  • Title and subtitle of your talk (please be descriptive, as titles will be listed alongside ~500 others from other projects)
  • Track: select “Community DevRoom” as the track
  • Event type: Lecture (talk)
  • Persons: add yourself as the speaker with your bio
  • Description: Abstract (required) / Full Description (optional)
  • Links to related websites / blogs etc.

Beyond giving us the above, let us know if there’s anything else you’d like to share as part of your submission – Twitter handle, GitHub activity history – whatever works for you. We especially welcome videos of you speaking elsewhere, or even just a list of talks you have done previously. First time speakers are, of course, welcome!

For issues with Pentabarf, please contact community-devroom@lists.fosdem.org. Feel free to send a notification of your submission to that email. 

FOR ACCEPTED TALKS

  • Once your talk is accepted, you will be assigned a volunteer to help you with producing your pre-recorded content.
  • This volunteer will help review the content, make sure it meets the required quality, and ensure it is entered into the system and ready to broadcast.
  • You must be available for the Q/A session on the day your session is streamed. 

If you need to get in touch with the organizers or program committee of the Community DevRoom, email us at community-devroom@lists.fosdem.org

Leslie Hawthorn, Shirley Bailes and Laura Czajkowski – Community DevRoom Co-Organizers

FOSDEM website / FOSDEM code of conduct

on December 04, 2020 09:35 PM

December 01, 2020

OpenUK is looking for two charismatic and diligent individuals to be judges in the 2021 OpenUK Awards. After a successful first edition in 2020, OpenUK wants to find two judges from the community to judge the Awards alongside Katie Gamanji, our head judge for 2021.

To be considered as an OpenUK judge:

  • You will be someone who knows at least one of the Open Source Software, Open Data or Open Hardware spaces well, enjoys engaging with the communities and wants to see good projects, people and organisations recognised, and
  • You will be willing to spend some time reviewing circa 100 applications, make a fair assessment of them, be able to present your decisions to your fellow judges, and then present charismatically during the Awards ceremony.

The judges’ work requires a deep dive into the nominations and diligent investigation of all of the applications to come to a well-informed and balanced decision.

The nomination form is open now if you’d like to help or can think of someone who would be suitable.

on December 01, 2020 04:04 PM

November 26, 2020

Welcome to the 2020 edition of my Hacker Holiday Gift Guide! This has been a trying year for all of us, but I sincerely hope you and your family are happy and healthy as this year comes to an end.

Table of Contents

General Security

ProtonMail Subscription

ProtonMail is a great encrypted mail provider for those with an interest in privacy or cryptography. They offer gift cards for subscriptions to both ProtonMail and ProtonVPN, their VPN service.

Encrypted Flash Drive

Datashur Pro

I know cloud storage is all the rage, but sometimes you need a local copy. Sometimes, you even need that local copy to be protected – maybe it’s user data, maybe it’s financial data, maybe it’s medical data – and hardware encryption allows you to go from one system to another without needing any special software. Additionally, it can’t be keylogged or easily compromised from software. This Datashur Pro is my choice of encrypted flash drive, but there are a number of options out there.

Cryptographic Security Key

Yubikey 5C

These devices act as a second factor for authentication, but some of them can do so much more. The Yubikey 5 can also function as a hardware security token for encryption keys and provide one-time-password functionality. Keys from Feitian Technologies support Bluetooth Low Energy in addition to NFC and USB, allowing them to work with a variety of devices. If you or your hacker are into open source, the SoloKey keys are open source hardware implementations of the specification.

Linux Basics for Hackers

Linux Basics For Hackers

I’ve been using Linux for more than two decades, so I honestly initially just bought Linux Basics for Hackers because of the awesome hacker penguin on the cover. If you’re not already familiar with Linux, but need it to grow your skillset, this is an excellent book with a focus on the Linux you need to know as an information security professional or hacker. It has a particular focus on Kali Linux, the Linux distribution popular for penetration testing, but the lessons are more broadly applicable across different security domains.


Penetration Testers & Red Teamers

These gifts are for your pentesters, red teamers, and those learning the field.

The Pentester Blueprint

The Pentester Blueprint

The Pentester Blueprint is a guide to getting started as a professional penetration tester. It’s not very technical, and it’s not going to teach your recipient how to “hack”, but it’s great career advice for those getting started in penetration testing or looking to make a career transition. It basically just came out, so it’s up-to-date (which is, of course, a perpetual issue with technical books these days). It’s written in a very easy-reading style, so it’s great for those considering the switch to pentesting.

Online Learning Labs

I can recommend several online labs, some of which offer gift cards:

Penetration Testing: A Hands-On Introduction to Hacking

Penetration Testing

Georgia Weidman’s book, “Penetration Testing: A Hands-On Introduction to Hacking” is one of the best introductory guides to penetration testing that I have seen. Even though it’s been a few years since it was released, it remains high-quality content and a great introductory guide to the space. Available via Amazon or No Starch Press. Georgia is a great speaker and teacher and well-known for her efforts to spread knowledge within the security community.

WiFi Pineapple Mark VII

WiFi Pineapple

The WiFi Pineapple is probably the best known piece of “hacking hardware”. Now in its seventh generation, it’s used for conducting WiFi security audits, on-site penetration tests, or even as a remote implant for remote penetration tests. I’ve owned several versions of the WiFi Pineapple and found that it only gets better with each generation. Especially with dual radios, it can do things like act as a client on one radio while providing an access point on the other radio.

The WiFi Pineapple does have a bit of a learning curve, but it’s a great option for those getting into the field or learning about the various types of WiFi audits and attacks. The USB ports also allow expansion if you need to add a capability not already built-in.

PoC || GTFO

PoC||GTFO

PoC||GTFO is an online journal for offensive security and exploitation. No Starch Press has published a pair of physical journals in a beautiful biblical style. The content is very high quality, but they’re also presented in a striking style that would go well on the bookshelf of even the most discerning hacker. Check out both Volume I and Volume II, with Volume III available for pre-order to be delivered in January.


Hardware Hackers

Tigard

Tigard

Tigard is a pretty cool little hardware hacker’s universal interface that I’m super excited about. Similar to my open source project, TIMEP, it’s a universal interface for SPI, I2C, JTAG, SWD, UART, and more. It’s great for examining embedded devices and IoT, and is a really well-thought-out implementation of such a board. It supports a variety of voltages and options and is even really well documented on the back of the board, so you never have to figure out how to hook it up. This is great both for those new to hardware hacking and for experienced hackers looking for an addition to their toolkit.

Hardware Hacker: Adventures in Making and Breaking Hardware

Hardware Hacker

Andrew “Bunnie” Huang is a well-known hardware hacker with both experience in making and breaking hardware, and Hardware Hacker: Adventures in Making and Breaking Hardware is a great guide to his experiences in those fields. It’s not a super technical read, but it’s an excellent and interesting resource on the topics.

RTL-SDR Starter Kit

RTL-SDR

Software-Defined Radio allows you to examine wireless signals between devices. This is useful if you want to take a look at how wireless doorbells, toys, and other devices work. This Nooelec kit is a great starting SDR, as is this kit from rtl-sdr.com.
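
For a taste of what the recipient can do with such a kit, here is a minimal sketch that grabs a block of raw IQ samples using the pyrtlsdr Python bindings; the library choice, the 433.92 MHz center frequency (a common ISM band used by doorbells and remotes) and the sample count are illustrative assumptions rather than anything tied to a particular kit.

    # Capture a short block of raw IQ samples from an RTL-SDR dongle.
    # A sketch only: assumes pyrtlsdr (pip install pyrtlsdr) plus librtlsdr,
    # with a dongle plugged in.
    from rtlsdr import RtlSdr

    sdr = RtlSdr()
    sdr.sample_rate = 2.048e6    # 2.048 MS/s
    sdr.center_freq = 433.92e6   # common ISM frequency used by doorbells/remotes
    sdr.gain = "auto"            # let the tuner pick a gain

    samples = sdr.read_samples(256 * 1024)   # array of complex IQ samples
    print(f"Captured {len(samples)} IQ samples around 433.92 MHz")
    sdr.close()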

iFixit Pro Tech Toolkit

The iFixit Pro Tech Toolkit is probably the tool I use the most during security assessments of IoT/embedded devices. This kit can get into almost anything, and the driver set in it has bits for nearly every screw you’ll encounter: Torx, security Torx, hex, Phillips and slotted bits, in addition to many more esoteric bits. The kit also contains other opening tools for prying and pulling apart snap-together enclosures and devices. I will admit, I don’t think I’ve ever used the anti-static wrist strap, even if it would make sense to do so.


Young Hackers

imagiCharm

imagiCharm

imagiCharm by imagiLabs is a small hardware device that allows young programmers to get their first bite into programming embedded devices – or even programming in general. While I haven’t tried it myself, it looks like a great concept, and providing something hands-on seems like a clear win for encouraging students and helping them find their interest.

Mechanical Puzzles

PuzzleMaster offers a bunch of really cool mechanical puzzles and games, including things like puzzle locks, twisty puzzles, and more. When we’re all stuck inside, why not give something hands-on a try?


Friends and Family of Hackers

Bring a touch of hacking to your friends and family!

Hardware Security Keys

Yubico Security Key

A Security Key is a physical two-factor security token that makes web logins much more secure. Users touch the gold disc when signing in to verify the sign-in request, so even if a password gets stolen, the account won’t be taken over. These tokens are supported by sites like Google, GitHub, Vanguard, Dropbox, GitLab, Facebook, and more.

Unlike text-message based second factor, these tokens are impossible to phish, can’t be stolen via phone number porting attacks, and don’t depend on your phone having a charge.

Control-Alt-Hack

Control-Alt-Hack

Control-Alt-Hack is a hacking-themed card game. Don’t expect technical accuracy, but it’s a lot of fun to play. Featuring terms like “Entropy” and “Mission”, it brings the theme of hacking to the whole family. It’s an interesting take on things, and a really cool concept. If you’re a fan of independent board/card games and a fan of hacking, this would be a fun addition to your collection.

VPN Subscription

If your friends or family use open wireless networks (I know, maybe not as much this year), they should consider using a VPN. I currently use Private Internet Access when I need a commercial provider, but I have also used Ivacy before, as well as ProtonVPN.


Non-Security Tech

These are tech items that are not specific to the security industry/area. Great for hackers, friends of hackers, and more.

Raspberry Pi 4

Raspberry Pi 4

Okay, I probably could’ve put the Raspberry Pi 4 in almost any of these categories because it’s such a versatile tool. It can be a young hacker’s first Linux computer, it can be a penetration testing dropbox, it can be a great tool for hardware hackers, and it can be a project unto itself. The user can use it to run a home media server, a network-level ad blocker, or just get familiar with another operating system. While I’ve been a fan of the Raspberry Pi in various forms for years, the Pi 4 has a quad core processor and can come with enough memory for some powerful uses. There’s a bunch of configurations, like:

Keysy

Keysy

The Keysy is a small RFID duplicator. While it can be used for physical penetration testing, it’s also just super convenient if you have multiple RFID keyfobs you need to deal with (e.g., apartment, work, garage). Note that it only handles certain types of RFID cards, but most of the common standards are available and workable.

Home Automation Learning Kit

This is a really cool kit for learning about home automation with Arduino. It has sensors and inputs for learning how home automation systems work – controlling things with relays, measuring light, temperature, etc. I love the implementation as a fake laser-cut house for the purpose of learning – it’s really clever, and makes me think it would be great for anyone into tech and automation: teens and adults wanting to learn about Arduino, security practitioners who want to examine how things could go wrong (you could augment this with consumer-grade products), and more.

Boogie Board Writing Tablet

Sometimes you just want to hand write something. While I’m also a fan of Field Notes notebooks in my pocket, this Boogie Board tablet strikes me as a pretty cool option. It allows you to write on its surface overlaid over anything of your choice (it’s transparent) and then capture the written content to iOS or Android. I love to hand write for brainstorming, some forms of note taking, and more. System diagrams are so much easier in writing than in digital format, even today.


General Offers

This is my attempt to collect special offers for the holiday season that are relevant to the hacking community. These are all subject to change, but I believe them correct at the time of writing.

No Starch Press

No Starch Press is possibly the highest quality tech book publisher. Rather than focusing on the quantity of books published, they only accept books that will be high quality. I own at least a couple of dozen of their books, and they have been consistently well-written, with high-quality coverage of their topics. They are currently offering 33.7% off their entire catalog for Black Friday (through 11/29/20).

Hooligan Keys

Hooligan Keys is offering 10% off from Thanksgiving to Christmas with offer code HAPPYDAY2020.

on November 26, 2020 08:00 AM

November 24, 2020

In the last few weeks I have been asked by many people what topics we have in the Community Council and what we are doing. After a month in the Council, I want to give a first insight into what happened in the early days and what has been on my mind. Of course, these are all subjective impressions and I am not speaking here from the perspective of the Community Council, but from my own perspective.

In the beginning, of course, we had to deal with organisational issues. These included making sure that everyone is included in the Community Council’s communication channels. There are two main channels that we use. One is a team channel on IRC on Freenode, which has the advantage that you can ask the others small questions and have a relaxed chat. To reach everyone in the Council, we have also set up the mailing list: community-council at lists.ubuntu.com

No, I haven’t yet managed to read through all the documents and threads that deal with the Community Council or how to make the community more active again. But I have already read a lot in the first month on the Community Hub and on mailing lists to get different impressions. I can only encourage everyone to get involved with constructive ideas and help us to improve the community of Ubuntu.

I haven’t worked on an international board since 2017 and had completely forgotten one topic that is more complex than in national teams: the different timezones. But after a short time we managed to find a slot that basically works for all of us, and we held our public meeting of the Council. This has taken place twice, and the second time we all managed to attend. The minutes of the meetings are publicly available: 1st Meeting and 2nd Meeting. We have decided that we will hold the meeting twice a month.

The Community Council had not been active for a year, so there had been further problems with filling positions that depend on the Community Council. We had to tackle these issues as soon as possible. In the case of the Membership Board, we extended the existing memberships for a period of time after consultation with the members concerned, so that the board’s ability to work would not be affected. After that, we launched a call for new candidates to join the Board. The result of this call was that sufficient candidates were found and we can fill this board again. The new members will soon be selected and announced by us.

A somewhat more difficult issue proved to be the Local Community (LoCo) Council. Like the Community Council, it had not been staffed for some time, and as a result some local communities have fallen out of approved status, even though they applied for it. Here we have also launched a call for a new LoCo Council. But even though the pain seemed to be big there, not enough candidates were found to fill this council and bring it to life. After a discussion on how to deal with the situation, we decided to take a step back and look at why we got into this situation and what the needs of the existing local communities are (see the log of our second meeting). This will be the subject of a committee that we will set up. In this way we will discuss a basic framework for the Ubuntu community and see what new paths we as a community can take.

As further topics, we have started the discussion about our understanding of the work of the Community Council and how we want to work. One of the results was that we want to use Launchpad in the future to manage our tasks. As the memberships of another board, the Technical Board, were about to expire, we extended the terms of its members until the end of the year. This will allow us to start the process for a new election there.

All in all, there are more exciting topics to come in the Ubuntu community in the near future. Are the current structures suitable for the community? There is no community team at Canonical at the moment, and the future cooperation between Canonical and the community has not yet been clarified. These all seem to me to be very exciting topics, and I’m happy that we are able to work on them together.

If you want to get involved in discussions about the community, you can do so at the Community Hub. You can also send us an email to the mailing list above if you have a community topic on your mind. If you want to contact me: you can do so by commenting, via Twitter or sending a mail to me: torsten.franz at ubuntu.com

on November 24, 2020 09:30 PM

November 22, 2020

Kubuntu is not Free, it is Free

Kubuntu General News

Human perception has never ceased to amaze me, and in the context of this article, it is the perception of value, and the value of contribution that I want us to think about.

Photo by Andrea Piacquadio from Pexels

It is yours in title, deed and asset

A common misperception with Open Source software is the notion of free. Many people associate free in its simplest form, that of no monetary cost, and unfortunately this ultimately leads to the second conclusion of ‘cheap’ and low quality. Proprietary commercial vendors, and their corporate marketing departments, know this and use that knowledge to focus their audience on ‘perceived value’. In some ways, being free of cost is a significant disadvantage in the open source software world, because it means there are no funds available to pay for a marketing machine to generate ‘perceived value’.

Think, for a moment, how much of a disadvantage that is when trying to develop a customer/user base.

Kubuntu is completely and wholly contribution driven. It is forged from passion and enthusiasm, built with joy and, above all, love. Throughout our community, users use it because they love it, supporters help users and each other, maintainers fix issues and package improvements, developers extend functionality and add features, bloggers write articles and documentation, and YouTubers make videos and tutorials. All these people do this because they love what they’re doing and it brings them joy.

Photo by Tima Miroshnichenko from Pexels

Today Linux is cloud native, ubiquitous, and dominates the internet space. It is both general purpose and highly specialised, robust and extensive, yet focused and detailed.

Kubuntu is a general purpose operating system designed and developed by our community to be practical and intuitive for a wide audience. It is simple and non-intrusive to install, and every day it continues to grow a larger user base of people who download it, install it and, some of them, love it! Furthermore, some of those users will find their way into our community; they will see the contributions given so freely by others and be inspired to contribute themselves.

Image from Wikipedia

Anyone who has installed Windows 10 recently will attest to the extent of personal information that Microsoft asks users of its operating system to ‘contribute’. This enables the Microsoft marketing teams to further refine their messaging to resonate with your personal ‘perceived value’, and indeed to enable that across the Microsoft portfolio of ‘partners’!
The story is identical with Apple: the recently announced Apple Silicon M1 seeks not only to lock Apple users into the Apple software ecosystem and its ‘partners’, but also to lock down and tie the software to the hardware.

With this background understanding, we are able to return full circle to the subject of this article, ‘Kubuntu is not Free, it is Free’, and furthermore, Kubuntu users are free.
Free from intrusion, profiling, targeting, and marketing; Kubuntu users are free to share, modify and improve their beloved software however they choose.

Photo by RF._.studio from Pexels


Let us revisit that last sentence and add some clarity. Kubuntu users are free to share and improve ‘their’ beloved software however they choose.
The critical word here is ‘their’, and that is because Kubuntu is YOUR software, not Microsoft’s, not Apple’s, and not even Canonical’s or Ubuntu’s. It is yours in title, deed and asset, and that is the value that the GNU GPL license bequeaths to you.

This ownership also empowers you, and indeed puts you as an individual in a greater place of power than the marketeers from Microsoft or Apple. You can share, distribute, promote, highlight or low-light, Kubuntu wherever, and whenever you like. Blog about it, make YouTube videos about it, share it, change it, give it away and even sell it.

How about that for perceived value?

About the Author:

Rick Timmis is a Kubuntu Councillor and advocate. Rick has been a user of, and open contributor to, Kubuntu for over 10 years, and a KDE user and contributor for 20.

on November 22, 2020 04:32 PM