How the Cursor is Rendered

Wayland has a unique way of letting clients specify the contents of the cursor. After receiving a wl_pointer.enter event, the client must issue a wl_pointer.set_cursor request

    <request name="set_cursor">
      <arg name="serial" type="uint" summary="serial number of the enter event"/>
      <arg name="surface" type="object" interface="wl_surface" allow-null="true"
           summary="pointer surface"/>
      <arg name="hotspot_x" type="int" summary="surface-local x coordinate"/>
      <arg name="hotspot_y" type="int" summary="surface-local y coordinate"/>
    </request>

The wl_pointer.set_cursor request takes an optional wl_surface object that represents the actual contents of the cursor. That’s it, the wl_surface interface is used both by windows and by cursors! This opens up a whole lot of features that you could use with the cursor, for example using the wp_viewport protocol for fractional scaling or creating cursor surface trees using the wl_subcompositor interface. On the other hand, many compositors view the cursor as a simple image. So, let’s see how we improved kwin in this regard.

Cursor source

Currently (as of 5.26.x), kwin assumes that the cursor can show only a QImage. That’s okay for simple cases, but it falls apart once we need to show a wl_surface, e.g. we would hit problems getting a QImage out of a linux dmabuf client buffer.

So the first thing we need to do in order to move forward is to choose proper abstractions to represent what the cursor actually shows. For example, if the cursor hovers over a window, it obviously needs to present what the corresponding wl_surface contains. But sometimes the mouse cursor is not above any window surface, for example when the pointer is above a server-side decoration. The server-side decoration can be considered part of the window, but we cannot use the client’s wl_surface anymore; the compositor may choose to show a different cursor, which is a QImage.

class KWIN_EXPORT CursorSource : public QObject
{
    Q_OBJECT

public:
    explicit CursorSource(QObject *parent = nullptr);

    QImage image() const;
    QSize size() const;
    QPoint hotspot() const;

Q_SIGNALS:
    void changed();
};

The CursorSource class is the base class for all other “source” classes that can be attached to the cursor. It contains generic properties, e.g. the hotspot, and it also contains the CursorSource::changed() signal to tell the world when the image has changed. CursorSource::image() exists for compatibility and to make the transition to new abstractions easier.

Sometimes, we need to show a static image in the cursor, so let’s add an ImageCursorSource to serve that purpose

class ImageCursorSource : public CursorSource
{
    Q_OBJECT

public:
    explicit ImageCursorSource(QObject *parent = nullptr);

public Q_SLOTS:
    void update(const QImage &image, const QPoint &hotspot);
};

On the other hand, some cursors are not static. For example, the loading cursor usually contains some animation, e.g. a spinning wheel

class ShapeCursorSource : public CursorSource
{
    Q_OBJECT

public:
    explicit ShapeCursorSource(QObject *parent = nullptr);

    QByteArray shape() const;
    void setShape(const QByteArray &shape);
    void setShape(Qt::CursorShape shape);

    KXcursorTheme theme() const;
    void setTheme(const KXcursorTheme &theme);

private:
    void refresh();
    void selectNextSprite();
    void selectSprite(int index);

    KXcursorTheme m_theme;
    QByteArray m_shape;
    QVector<KXcursorSprite> m_sprites;
    QTimer m_delayTimer;
    int m_currentSprite = -1;
};

The ShapeCursorSource class represents a cursor shape from an Xcursor theme. It can be used to show both animated and static cursors. If the given cursor shape is animated, i.e. it has more than one sprite, ShapeCursorSource will start a timer with the timeout as indicated by the cursor theme. When the timer expires, ShapeCursorSource will switch to the next sprite and emit the CursorSource::changed() signal. If it’s the last sprite, it will wrap around to the first sprite.
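The sprite-advancing behavior described above can be sketched without any Qt machinery. Here is a minimal, illustrative model; Sprite, selectNextSprite, and needsTimer are invented names, not kwin’s actual API:

```cpp
#include <cassert>
#include <chrono>
#include <vector>

// Hypothetical stand-in for KXcursorSprite: an image plus its display delay.
struct Sprite {
    int imageId;                     // placeholder for the actual pixmap
    std::chrono::milliseconds delay; // how long this frame stays on screen
};

// Advance to the next sprite, wrapping around to the first one at the end.
int selectNextSprite(const std::vector<Sprite> &sprites, int current)
{
    return (current + 1) % static_cast<int>(sprites.size());
}

// A timer needs to be (re)armed only for animated shapes.
bool needsTimer(const std::vector<Sprite> &sprites)
{
    return sprites.size() > 1;
}
```

Each time the delay timer fires, the index advances and a changed() notification would be emitted; static shapes never arm the timer.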

And last but not least, we need something to represent a wl_surface attached to the cursor

class SurfaceCursorSource : public CursorSource
{
    Q_OBJECT

public:
    explicit SurfaceCursorSource(QObject *parent = nullptr);

    KWaylandServer::SurfaceInterface *surface() const;

public Q_SLOTS:
    void update(KWaylandServer::SurfaceInterface *surface, const QPoint &hotspot);
};

ImageCursorSource, ShapeCursorSource, and SurfaceCursorSource are the main types that indicate what the cursor shows.


The CursorSource classes act as data sources; they don’t actually paint anything on the screen. That is the responsibility of the scene. I’ve already written a little bit about the scene abstraction in kwin; I recommend reading my earlier blog post about it

But as a quick recap: kwin breaks up a window into smaller building blocks called items. The DecorationItem corresponds to the server-side decoration if there is one. The ShadowItem is a server-side drop shadow, for example a drop shadow cast by the decoration or the panel. The SurfaceItem represents the actual window contents. That’s the same strategy we will follow here: the cursor will be broken into smaller pieces.

If we look closely at our cursor sources, the painting code needs to handle only two cases – painting a QImage and painting an arbitrary wl_surface tree. The wl_surface case is already taken care of by SurfaceItem \o/, so we just need a new type of item to present an image in the scene graph, e.g. ImageItem

The CursorItem type ties both cases together. It monitors what source is attached to the cursor and creates either a SurfaceItem or an ImageItem according to the type of the cursor source:

  • If a SurfaceCursorSource is attached to the cursor, the CursorItem is going to destroy its child ImageItem (if there’s one) and create a SurfaceItem tree
  • If an ImageCursorSource or a ShapeCursorSource is attached to the cursor, the CursorItem is going to destroy its child SurfaceItem (if there’s one) and create an ImageItem child item
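The switching logic above can be modeled with a few lines of plain C++. This is a hedged, self-contained sketch; all types here are hypothetical stand-ins for kwin’s actual classes:

```cpp
#include <cassert>

// Hypothetical, minimal model of how CursorItem picks its child item based on
// the type of the attached cursor source. Names mirror the blog post but the
// code is illustrative, not kwin's.
struct CursorSource { virtual ~CursorSource() = default; };
struct ImageCursorSource : CursorSource {};
struct ShapeCursorSource : CursorSource {};
struct SurfaceCursorSource : CursorSource {};

enum class ChildItem { None, ImageItem, SurfaceItem };

struct CursorItem {
    ChildItem child = ChildItem::None;

    // Destroy the old child (modeled here as overwriting the tag) and create
    // the one matching the source type, as the bullet points describe.
    void setSource(const CursorSource *source)
    {
        if (dynamic_cast<const SurfaceCursorSource *>(source)) {
            child = ChildItem::SurfaceItem;
        } else {
            child = ChildItem::ImageItem; // ImageCursorSource or ShapeCursorSource
        }
    }
};
```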

Note that SurfaceItems are rendered the same way regardless of their role.

With all of this, kwin can easily handle simple cases such as displaying an image in the cursor and more esoteric things that you can do with a wl_surface.

The red square is the cursor surface; the green square is its subsurface. No idea why you would want to do this, but kwin should handle it fine now 🙂

Cursor layer

The last thing needed to make cursor handling perfect is putting the cursor on its own hardware plane. Imagine moving the mouse cursor. Ideally, you should not repaint any windows below the cursor, only update the position of the cursor (unless the windows need to repaint their contents because of the new pointer position, e.g. to add or remove button outlines). This can be achieved by painting the cursor in its own plane.

KWin already attempts to put the cursor on a hardware plane, but we would like to clean things up and unify cursor and overlay planes. This is still a work in progress. TBA.


The new cursor design is more powerful and should make it easier to add new fancy features. For example, it would be amazing if you could wire the velocity of the cursor to its scale so you could shake the pointer in order to find the cursor more easily. The new design also fixes longstanding limitations that prevented kwin from displaying cursors rendered by OpenGL or Vulkan.

Xinerama becomes hard requirement of KWin

This is going to be a rather short blog post, but I think it’s still worth mentioning. Since 5.26, kwin supports only one way of setting up X screens – Xinerama; multi-head is no longer supported. However, despite how “setup-breaking” that may sound, this will most likely not affect you, as you probably already use Xinerama.

Before diving any deeper, it’s worth providing some background. On X11, there are two ways to configure your desktop environment to run with multiple monitors – multi-head and Xinerama.

Multi-head is an old-school way to run multiple monitors: with that mode, there’s an X screen per monitor. In Xinerama mode, there’s a single virtual X screen spanning all outputs. Both modes have their advantages and disadvantages; for example, you can’t freely move windows between screens when using multi-head. Xinerama is younger than multi-head and provides the most user-friendly workflow on multi-screen setups, so it’s usually enabled by default in Linux distributions, and many desktop environments are optimized for running in this mode, including Plasma.

Technically, kwin does provide support for both multi-head and Xinerama. But multi-head support has been in a neglected and unmaintained state for many, many years: some code (primarily old code) supports multi-head, while lots of other code (mostly new code) does not, and various system settings modules and plasmashell components do not support multi-head either. It’s also safe to say that no kwin developer has tested multi-head within the last 5+ years.

So, rather than keep advertising support for a feature that we don’t maintain and have no plans to fix, we decided to drop support for multi-head mode and make Xinerama a hard requirement as of 5.26.


Does this mean that Plasma won’t support multiple monitors anymore?

No, Plasma will continue supporting setups with multiple monitors, but you will need to ensure that Xinerama is used, which is usually already the case, so you don’t need to tweak anything.

I used multi-head for some esoteric thing, what should I do now?

It’s highly recommended to give the Wayland session a try. If something’s missing, file a bug report or get in touch with us.

What’s cooking in KWin? Plasma 5.25

We’re past the soft feature freeze of the next Plasma release, so it’s a good time to step back and have a look at the work that has been done in KWin during the 5.25 development cycle.

Gesture improvements

Credits: Eric Edlund, Marco Martin, Xaver Hugl

A lot of focus has been put into improving gesture integration in the Wayland session. In 5.24, the desktop grid effect got support for real-time gestures. In 5.25, the support for real-time gestures has been expanded. Effects such as slide, window aperture (animates windows when transitioning to the “show desktop” mode), and overview now support animations that “follow fingers.”

The slide effect follows fingers when switching between virtual desktops using gestures

Merge of kwayland-server and kwin

Credits: me

That’s not a user-facing change, but it’s really important to KWin developers. Some history trivia: KWin used to contain Wayland glue code; eventually it was split into a separate KDE Frameworks library called KWayland. The idea was to provide reusable components that could be useful not only to KWin but also to other Wayland compositors.

At the beginning, things were good: if you needed to implement a Wayland protocol, you would add corresponding wrappers in KWayland and then implement the protocol in KWin. However, being part of KDE Frameworks started presenting problems. KDE Frameworks provides strong API and ABI compatibility guarantees, which is very compelling for consumers, but it can be a major source of headache for developers. For example, if a few bad design decisions were made, you cannot simply go back and correct the mistakes; you are going to live with them until the next major release, when it’s okay to make breaking changes. That’s what happened with KWin and KWayland: we made a couple of bad design choices that fired back at us and eventually left us with a load of technical debt.

As a way out of technical debt, we made a hard decision to split the server side out of KWayland in a separate library called KWaylandServer, which provided no API or ABI compatibility guarantees between minor releases, but some between patch releases. Most of the client APIs in KWayland were deprecated too.

The split of the server side from KWayland into a separate library was a huge relief, and it massively accelerated the pace of KWin development, which had user-facing effects too. The Plasma on Wayland session started receiving fewer complaints from users (around Plasma 5.18 – 5.20 or so) because we were free to change KWin and KWaylandServer the way we thought was best.

However, KWaylandServer also started showing cracks. The first problem is that it didn’t gain a strong user base: its only user was KWin, and there weren’t any signs of new users. The second problem is that we gradually switched to qtwaylandscanner, so we ended up writing wrappers for wrappers. The third problem is that wayland protocol implementations cannot exist in a vacuum; they need to communicate with other compositor components, e.g. the renderer, and because no such components were present in KWaylandServer, we had to add glue code that made things more complicated. Also, perhaps we tried to fix the wrong problem by providing a library with Qt-friendly wrappers for libwayland. Things such as the DRM backend or the scene graph are far more challenging to implement, and maybe we should have put the focus on making them reusable instead of the wrappers.

Regardless, a year or so ago we agreed that it was worth bringing the server-side wayland code back into KWin. In 5.25, we were finally able to do the merge. That allows us to simplify many wayland protocol implementations and to fix some design issues and a few known bugs.

Present Windows and Desktop Grid effects rewritten in QML

Credits: Marco Martin

We started experimenting with implementing some fullscreen effects in QML in 5.24. In order to continue that effort, the Present Windows and Desktop Grid effects were rewritten in QML. The main advantage of QML is that we will be able to build more complex scenes without significantly sacrificing maintainability; for example, blurring the desktop background takes only a couple of lines of QML, while in C++ it would be a lot more! The main focus of the rewrite was keeping feature parity between the C++ and QML versions of Present Windows and Desktop Grid.

Desktop Grid implemented in QML
Present Windows (now called “Window View”) effect implemented in QML

Compositing improvements

Credits: Xaver Hugl, me

We continue pushing forward with our ambitious goal to make KWin utilize hardware output planes better and make it more efficient. A significant amount of work in 5.25 has been put into refactoring the DRM backend and compositing abstractions. Unfortunately, we won’t be able to get everything we wanted into 5.25, but hopefully Plasma/Wayland users will start benefiting from this work in the next Plasma release, i.e. 5.26.

As a part of the scene redesign goal, we made handling of invisible windows more efficient on Wayland. For example, if an invisible window wants to be repainted for whatever reason, KWin is going to ignore that request. It’s not an issue on X11, but it was challenging to implement that behavior on Wayland the “right way.” Also, if painting code in a Wayland application is driven by frame callbacks, KWin won’t send frame callbacks anymore if the window is invisible, e.g. minimized or on a virtual desktop that is not current, thus no precious CPU or GPU resources will be wasted.
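The frame-callback throttling described above boils down to a visibility check before waking the client. A minimal sketch of that policy, with all names invented for illustration:

```cpp
#include <cassert>

// Illustrative-only model: frame callbacks are delivered to the client only
// while the window is actually visible, so hidden windows stay idle.
struct ToplevelWindow {
    bool minimized = false;
    bool onCurrentDesktop = true;
    int frameCallbacksDelivered = 0;

    bool isVisible() const { return !minimized && onCurrentDesktop; }

    // Called by the compositor when it is time to let clients paint again.
    void maybeSendFrameCallbacks()
    {
        if (isVisible()) {
            frameCallbacksDelivered++; // client wakes up and paints a frame
        }
        // otherwise the callback stays queued; no CPU or GPU work is triggered
    }
};
```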

Screencasting improvements

Credits: Aleix Pol Gonzalez

KWin/Wayland got a new screencasting mode that allows capturing a rectangular region on the screen. For example, this can be useful for building screen recording tools, etc.

New blend effect

Credits: David Edmundson

The blend effect provides an eye-candy animation when switching between dark and light themes.

Fixed Aurorae decorations having “sharp” corners

Credits: Michail Vourlakos

The blur effect is applied to the region beyond top-left corner

If you use a decoration theme powered by the Aurorae decoration engine, the decoration borders may not be as round as they are supposed to be. It’s been a long-standing bug caused by the blur effect and a lack of metadata in Aurorae decoration themes.

The blur effect is applied as expected in 5.25

Window management refactors

Credits: Nils Fenner

KWin used to have a strange window class hierarchy that always created confusion among new contributors.

KWin used to use the word “client” to refer to managed windows, i.e. the ones with frames, but the word “client” means a totally different thing in the Wayland world, where it represents the other endpoint connected to the compositor, e.g. an application. The word “toplevel” also means different things in KWin and in the xdg-shell protocol, which is used by practically all Wayland applications to create “normal” windows and popups.

The Toplevel and AbstractClient classes were merged into a base Window class, which has a far more intuitive name. That makes the class hierarchy simpler and hopefully removes an obstacle for new contributors.

fbdev backend was dropped

The fbdev backend was in a bit-rotten state. With the emergence of the simpledrm kernel driver, we decided to drop the fbdev backend in favor of the DRM backend, which is actively maintained.

Closing words

5.25 is going to be the biggest release by the scale of changes in recent years, which is both great and terrifying, so it’s more important than ever that as many people as possible give us feedback about the upcoming beta. Please join the beta review day, which is going to be held on May 26th, to help us make this release smooth and amazing. 🙂

Geometry handling in KWin/Wayland

It perhaps comes as no surprise that handling window position (or geometry, in general) is one of the most important responsibilities of a window manager. While it may look like a trivial task at first glance, unfortunately, it’s not. In this blog post, I will describe how KWin/Wayland manages window geometry, which can hopefully be useful to people wishing to understand how KWin works.

Warning: this is a somewhat technical blog post and requires some knowledge of KWin’s internals.


So, let’s start off with the easy case. On X11, if the window manager wants to move or resize a window, it simply makes an XConfigureWindow() call and that’s it; the window manager doesn’t wait for the client to repaint the window.

While things look simple and obvious from the window manager’s perspective, the compositing manager may have trouble coping with that. For example, if a window is resized, but the application hasn’t repainted it yet and the compositing manager wants to perform compositing, what should be painted in the newly exposed region of the window? Without any synchronization between the compositing manager and the application, you will most likely see noise or other types of visual artifacts. Fortunately, it’s possible to alleviate the resizing issues by using the _NET_WM_SYNC_REQUEST protocol if the window manager is also the compositing manager, which is the case with KWin.

With _NET_WM_SYNC_REQUEST, the client has to indicate that it’s willing to participate in the protocol by listing _NET_WM_SYNC_REQUEST in the WM_PROTOCOLS property and storing the id of an XSync counter, which will be used for synchronization between the window manager and the client, in the _NET_WM_SYNC_REQUEST_COUNTER property.

The window manager will send the client a _NET_WM_SYNC_REQUEST client message with a sync counter value; the client has to set that value on the XSync counter in order to indicate that it’s done handling the resize.

While it allows the window manager to avoid flooding the client window with resize requests and the compositing manager to “freeze” the window until it’s repainted, it does nothing to help with synchronizing compositor’s GPU commands with the app’s GPU commands, which can still result in “noisy” resize.
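Conceptually, the handshake is just a serial exchange. Here is a toy model of it; in reality the serial travels in an X11 client message and the client updates an XSync counter, so these names and types are purely illustrative:

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the _NET_WM_SYNC_REQUEST handshake.
struct SyncState {
    uint64_t requested = 0; // value the WM sent with the last resize
    uint64_t counter = 0;   // value of the client's XSync counter
};

// The WM picks a fresh serial and sends it along with the resize request.
void wmSendSyncRequest(SyncState &s, uint64_t serial) { s.requested = serial; }

// The client repaints at the new size, then echoes the serial back.
void clientFinishResize(SyncState &s) { s.counter = s.requested; }

// The WM keeps the window "frozen" until the counter catches up.
bool wmCanUnfreeze(const SyncState &s) { return s.counter >= s.requested; }
```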

KWin uses the _NET_WM_SYNC_REQUEST protocol only during interactive resize and to determine when it can start painting the window’s first frame. In the remaining cases, e.g. changing the window’s maximize or fullscreen mode, a ConfigureNotify event will be sent to the client window and the window will be painted on the screen without any synchronization.


Wayland has no single window type like on X11; instead, a wl_surface object is given a role that indicates how the surface should be mapped and how it can be interacted with. The surface role defines how the surface can be resized. In this blog post, I will describe how xdg-toplevel surfaces are resized because they are used to build regular application windows.

In order to resize a window, the compositor has to send the client an xdg_toplevel.configure event indicating the desired window geometry size. Depending on the window state, the client should either obey the size in the configure event or treat the size as the maximum size. For example, if the window is maximized, the window must be repainted with the same exact size as in the configure event; if the window is being interactively resized, the window can be repainted at a smaller size to account for geometry constraints, for example aspect ratio. The window will be repainted with the new size after the client acknowledges the configure event and provides a buffer with the new size.

The fact that the compositor can send only the maximum window geometry size in a configure event is unorthodox. On X11, the window manager knows for sure where the window will land after XConfigureWindow(), but on Wayland, the compositor has only a rough estimate of what the geometry will be after the configure event is acknowledged and a new buffer is provided. It takes a while to get used to such a model.

For every window, KWin maintains four geometries – move resize geometry, frame geometry, client geometry, and buffer geometry. The last three geometries have already been described in the CSD support in KWin post. The move resize geometry is used only to resize or move the window.

In order to better understand how all four geometries work together, let’s say a window needs to be resized to (400×300). The first step is to change the move resize geometry’s size to (400×300). After that, a configure event will be sent with the new size. Note that the other three geometries are left unchanged; they will be updated only after the client provides a new buffer. It’s worth pointing out that the frame geometry can be different after the resize, for example it can be (300×300) if a 1:1 aspect ratio is forced.

What if you want to maximize a window? Two things should happen: the window has to be moved to the top-left corner of the work area, and it should be resized so its size matches the work area size. The simplest way to implement this would be to move the window immediately to the target position without waiting for an acknowledgement from the client, and resize the window after a new buffer is provided. It’s easy to implement, but if there needs to be a smooth transition from the normal state to the maximized state, such an animation won’t look good. In order to fix that, the window needs to be moved and resized at the same time, when a new buffer is provided.

So, just put the window position in configure events, easy, right? Well, not really. What should happen if the user wants to move the window while there’s a pending move? If it’s not handled properly, the older window position from the configure event can override the newer window position.

That problem is solved by adding a special flag to every configure event that indicates whether it affects the window position, and by amending pending configure events (unsetting the ConfigurePosition flag) if the window needs to be moved immediately

class XdgSurfaceConfigure
{
public:
    enum ConfigureFlag {
        ConfigurePosition = 0x1,
    };
    Q_DECLARE_FLAGS(ConfigureFlags, ConfigureFlag)

    ConfigureFlags flags;
};

There are three functions to change the geometry of a window – move(), resize(), and moveResize(). The functions are not interchangeable because each has a specific semantic meaning. For example, moveResize() indicates that the window has to be moved and resized in one atomic operation, while move() says that the window has to be moved to the given position now, regardless of whether there are pending moves.
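The interaction between an immediate move() and pending configure events can be sketched as follows. This is a simplified, hypothetical model, not kwin’s implementation:

```cpp
#include <cassert>
#include <vector>

// Simplified types showing how an immediate move amends pending configure
// events so that a stale position cannot win later.
enum ConfigureFlag { ConfigurePosition = 0x1 };

struct Configure {
    int x = 0, y = 0;
    unsigned flags = 0;
};

struct ManagedWindow {
    int x = 0, y = 0;
    std::vector<Configure> pending;

    // move(): apply the position now and strip ConfigurePosition from every
    // pending configure event so it no longer carries a position.
    void moveImmediately(int nx, int ny)
    {
        x = nx;
        y = ny;
        for (Configure &c : pending) {
            c.flags &= ~static_cast<unsigned>(ConfigurePosition);
        }
    }

    // Applied when the client acknowledges a configure and commits a buffer.
    void ackConfigure(const Configure &c)
    {
        if (c.flags & ConfigurePosition) {
            x = c.x;
            y = c.y;
        }
    }
};
```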

There’s still one tiny issue. If the window is resized by dragging its top-left corner, the bottom-right corner should remain static. If the window has custom geometry constraints, it can bounce during resize when its bottom-right corner is not snapped properly. Similar to the handling of pending moves, this issue is solved by adding more information to configure events so KWin can calculate the correct frame geometry when a new buffer is committed

class XdgSurfaceConfigure
{
public:
    enum ConfigureFlag {
        ConfigurePosition = 0x1,
    };
    Q_DECLARE_FLAGS(ConfigureFlags, ConfigureFlag)

    QRect bounds;
    Gravity gravity;
    ConfigureFlags flags;
};

There are two new things in the configure event – the bounding rectangle and the gravity. The bounding rectangle is the same as the move resize geometry. The gravity indicates the direction in which the geometry changes during resize: if the window is resized by dragging its top-left corner, the gravity will point in the top-left direction, i.e. only the top-left window corner can move. The bounding rectangle is needed to compute how much the frame geometry has to move so that its bottom-right corner stays static.
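The arithmetic the gravity enables can be sketched like this. Rect, Gravity, and placeResized are illustrative names, not kwin’s API:

```cpp
#include <cassert>

// Illustrative sketch of gravity-aware placement: with top-left gravity, the
// bottom-right corner of the bounding rectangle is anchored, so the window's
// origin has to move as its size changes.
struct Rect { int x, y, w, h; };

enum class Gravity { TopLeft, BottomRight };

Rect placeResized(const Rect &bounds, int newW, int newH, Gravity gravity)
{
    if (gravity == Gravity::TopLeft) {
        // keep the bounds' bottom-right corner static
        return Rect{bounds.x + bounds.w - newW, bounds.y + bounds.h - newH, newW, newH};
    }
    // BottomRight gravity: keep the top-left corner static
    return Rect{bounds.x, bounds.y, newW, newH};
}
```

For example, shrinking a (400×300) window at (100, 100) to (300×300) with top-left gravity shifts its origin right so the bottom-right corner does not move.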

Just to recap, if a window needs to be resized, the first thing that will happen is that the move resize geometry size will be updated and a configure event will be sent to the client. After the configure event is acknowledged and a new buffer is provided, kwin will compute the new frame geometry, client geometry, and buffer geometry based on the information stored in the configure event.

If a window needs to be moved immediately, the move resize geometry position will be updated. All pending configure events will be amended so they don’t have the ConfigurePosition flag set and, finally, the position of the frame geometry, the client geometry, and the buffer geometry will be updated to the new position.

And that’s all I have for today. Hope this was useful.

More consistent font rendering in Plasma

There is one thing that annoys me a bit and that’s inconsistent font rendering in Qt and GTK applications.

kate (Qt)
gedit (GTK)

The most distinctive characteristic of font rendering in Qt applications is that glyphs look thicker. Some people may argue that macOS-style font rendering is the worst, but after using Plasma for a long time, I’m used to that style of font rendering and would like fonts to look the same regardless of the underlying toolkit.

After digging through some code, I’ve discovered that Qt enables stem darkening by default in its freetype font engine. With stem darkening, glyphs are emboldened to improve readability. And, indeed, after putting export FREETYPE_PROPERTIES="cff:no-stem-darkening=0" in my profile scripts, the glyphs look a bit thicker in non-Qt applications.

Note that fonts can still look different regardless of whether stem darkening is enabled. For example, text should be rendered with linear alpha blending and gamma correction, and not all toolkits do that properly.

To wrap this up, I ran Visual Studio Code with and without stem darkening to see if it makes any difference.

Visual Studio Code (Skia) w/o stem darkening
Visual Studio Code (Skia) w/ stem darkening

At a quick glance, both screenshots look the same. However, after taking a closer look, you can notice that the glyphs in the second screenshot are brighter and thicker than in the first one (just like they would look in a Qt application).


The fact that Qt enables stem darkening regardless of user preferences caught me by surprise. Relying fully on system and user preferences would minimize inconsistencies and give the user more control over their machine. Either way, if you regularly use applications built with Qt and GTK or any other toolkit, enabling stem darkening by setting the FREETYPE_PROPERTIES="cff:no-stem-darkening=0" environment variable is a good way to achieve slightly more consistent font rendering. Note that there can be inconsistencies even if all applications use the same freetype options, because it still matters how toolkits perform alpha blending, etc.

Scene Items in KWin

If your background includes game development, the concept of a scene should sound familiar. A scene is a way to organize the contents of the screen using a tree, where parent nodes affect their child nodes. In a game, a scene would typically consist of elements such as lights, actors, terrain, etc.

KWin also has a scene. With this blog post, I want to provide a quick glimpse at the current scene design, and the plan how it can be improved for Wayland.

Current state

Since compositing functionality in KWin predates Wayland, the scene is relatively simple, it’s just a list of windows sorted in the stacking order. After all, on X11, a compositing window manager only needs to take window buffers and compose them into a single image.

With the introduction of Wayland support, we started hitting the limitations of the current scene design. wl_surface is quite a universal thing. It can be used to represent the contents of a window, a cursor, a drag-and-drop icon, etc.

Since the scene thinks of the screen in terms of windows, it needs to have custom code paths to cover all potential usages of the wl_surface interface. But doing that has its own problems. For example, if an application renders cursors using a graphics api such as OpenGL or Vulkan, KWin won’t be able to display such cursors because the code path that renders cursors doesn’t handle hardware accelerated client buffers.

Another limitation of the current scene is that it doesn’t allow tracking damage per wl_surface, which is needed to avoid repainting areas of the screen that haven’t changed and thus keep power usage low.

Introducing scene items

The root cause of our problems is that the scene thinks of the contents of the screen in terms of windows. What if we stop viewing a window as a single, indivisible object? What if we start viewing every window as something that’s made of several other items, e.g. a surface item with window contents, a server-side decoration item, and a nine-tile patch drop shadow item?

A WindowItem is composed of several other items – a ShadowItem, a DecorationItem, and a SurfaceItem

With such a design, the scene won’t be limited only to windows; for example, we could start putting drag-and-drop icons in it. In addition to that, it will be possible to reuse the code that paints wl_surface objects and to track damage per individual surface.

Besides windows, the scene contains a drag-and-drop icon and a software cursor

Another advantage of the item-based design is that it provides a convenient path towards migrating to a scene/render graph, which is crucial for performing compositing on different threads or for a less painful transition to Vulkan.

Work done so far

At the end of March, an initial batch of changes to migrate to the item-based design was merged. We still have a lot of work ahead of us, but even with those initial changes, you will already see some improvements in the Wayland session. For example, there should be fewer visual artifacts in applications that utilize sub-surfaces, e.g. Firefox.

The end goal of the transition to the item-based design is a more flexible and extensible scene. So far, the plan is to continue refactoring and avoid rewriting the entire compositing machinery, if possible. You can find out more about the scene redesign progress by following KWin development.


In short, we still have some work to do to make rendering abstractions in KWin fit well all the cases that there are on Wayland. However, even with the work done so far, the results are very promising!

Making Firefox with Client-Side Decorations Look Good in KDE Plasma

Warning: The proposed workaround in this blog post might be rendered unnecessary in the future.

Out of the box, Firefox with client-side decorations doesn’t look good because captions in inactive tabs blend with the background.

Firefox with Breeze GTK theme

Fortunately, this issue can be easily fixed by adding a style rule to the userChrome.css file. First of all, you need to enable the userChrome.css functionality if you run Firefox 69 or later: go to the about:config page, and set toolkit.legacyUserProfileCustomizations.stylesheets to true.

Next, open the profile directory and create a userChrome.css file in a sub-directory named “chrome.” If there is no “chrome” sub-directory, create one. In case you don’t know where the profile directory is, open about:support and click the “Open Directory” button next to “Profile Directory.”

In the userChrome.css file, add the following rule

:root[tabsintitlebar] #TabsToolbar:not(:-moz-lwtheme):not(:-moz-window-inactive) {
  color: #eff0f1 !important;
}
and restart Firefox.

Firefox with the custom userChrome.css rule

Closing words

Even though the userChrome.css functionality hasn’t been officially deprecated, it’s better to avoid relying on it; as a short-term solution, though, it’s good enough. Ideally, this minor issue should be fixed upstream so everything “just works” out of the box and no hacks are needed.

Relevant links

Pacman Post Transaction PackageKit Hook

As an Arch Linux user, I find it very inconvenient that pkcon refresh must be run every time after sudo pacman -Syu in order to hide the update notifier in the system tray.

Fortunately, there is a simple solution for this problem. Create a 90-refresh-packagekit.hook file in /etc/pacman.d/hooks/ with the following contents

[Trigger]
Type = Package
Operation = Install
Operation = Upgrade
Target = *

[Action]
Description = Refresh PackageKit
Depends = packagekit
Depends = systemd
When = PostTransaction
Exec = /usr/bin/busctl call --system org.freedesktop.PackageKit /org/freedesktop/PackageKit org.freedesktop.PackageKit StateHasChanged s posttrans

Now, when you update the system using sudo pacman -Syu, the update notifier icon will be hidden automagically.

EDIT (3/14/2021): Since the relevant change was merged, you no longer need to create any post-transaction hooks.

Compositing Scheduling in KWin: Past, Present, and Future

From time to time, we receive complaints about frame scheduling. In particular, compositing not being synchronized to vblanks, missed frames, repainting monitors with different refresh rates, etc. This blog post will (hopefully) explain why these issues are present and how we plan to fix them.

Past & Present

With the current scheduling algorithm, compositing /should/ start right after a vblank. A vblank is the period between the vertical front porch and the vertical back porch; simply put, it’s the short interval right before the display starts scanning out the contents of the next frame.

One thing that’s worth point out is that buffers are not swapped after finishing a compositing cycle, they are swapped at the start of the next compositing cycle, in other words, at the next vblank

KWin assumes that glXSwapBuffers() and eglSwapBuffers() will always block until the next vblank. By delaying the buffer swap, we have more time to process input events, do some window manager things, etc. But this assumption is outdated: nowadays, it’s rare to see a GLX or an EGL implementation where a buffer swap operation blocks when rendering double buffered.

In case the buffer swap operation doesn’t block, which is typically the case with Mesa drivers, glXSwapBuffers() or eglSwapBuffers() will be called at the end of a compositing cycle. There is a catch though. Compositing won’t be synchronized to vblanks.

Since compositing is not synchronized with vblanks anymore, you may notice that animations in some applications don’t look as butter-smooth as they should. This issue can be easily verified using the black frame insertion test [1].

Another problem with our compositing scheduling algorithm is latency. Ideally, if you press a key, the corresponding symbol should show up on the screen as soon as possible. In practice, things are slightly different.

With the current compositing timing, if you press a key on the keyboard, it may take up to two frames before the corresponding symbol shows up on the screen. Same thing with videos: the audio might be playing two frames ahead of what is on the screen.

Monitors With Different Refresh Rates

Things get trickier if you have several monitors and they have different refresh rates. On X11, compositing is throttled to the lowest common refresh rate, in other words if you have two monitors with a refresh rate of 60Hz and one with a refresh rate of 120Hz, compositing will be performed at a rate of 60Hz. There is probably nothing that we can do about it.

On Wayland, it’s a completely different situation. From the technical point of view, we don’t have anything that prevents compositing being performed separately per screen at different refresh rates. But due to historical reasons, compositing on Wayland is throttled similar to the X11 case.

Future

Our main goals are to unlock true per screen rendering on Wayland and reduce latency caused by compositing (both on X11 and Wayland). Some work [2] has already been started to fix compositing timing and if things go smoothly, you should be able to enjoy improved frame timings in KDE Plasma 5.21.

If we start compositing as close as possible to the next vblank, then applications, such as video players, will be able to get their contents on the screen in the shortest amount of time without inducing any screen tearing.

The main drawback of this approach is that the compositor has to know exactly how much time it will take to render the next frame. In other words, we need a reliable way to predict the future. Easy, no problem!

The main idea behind the compositing timing rework is to introduce a new class, called RenderLoop, that notifies the compositor when it’s a good time to start painting the next frame. On X11, there is going to be only one RenderLoop. On Wayland, every output is going to have its own RenderLoop.

As it was mentioned previously, the compositor needs to predict how long it will take to render the next frame. We solve this inconvenient problem by making two guesses:

  • The first guess is based on a desired latency level that comes from a config. If the desired latency level is high, the predicted render time will be longer; on the other hand, if the desired latency level is low, the predicted render time will be shorter;
  • The second guess is based on the duration of previous compositing cycles.

The RenderLoop makes both guesses and uses the longer of the two render times for scheduling compositing for the next frame. By making two estimates rather than one, animations will hopefully be more or less stable.

There is no “silver bullet” solution for the render time prediction problem, unfortunately. In the end, it all comes down to making a trade-off between latency and stability. The config option lets the user decide what matters the most. It’s worth noting that with the default latency level, the compositor will make a compromise between frame latency and animation stability that should be good enough for most users.

The introduction of the RenderLoop helper is only half of the battle. At the moment, all compositing is done on the main thread and it can get crowded. For example, if you have several outputs with different refresh rates, some of them will have to wait until it’s their turn to get repainted. This may result in missed vblanks, and thus laggy frames. In order to address this issue, we need to put compositing on different threads. That way, monitors will be repainted independently of each other. There is no concrete milestone for compositing on different threads, but most likely, it’s going to be KDE Plasma 5.22.


Currently, the compositing infrastructure in KWin is heavily influenced by the X11 requirements, e.g. there is only one compositing clock, compositing is throttled to the lowest refresh rate, etc. Besides that, incorrect assumptions were unfortunately made about the behavior of glXSwapBuffers() and eglSwapBuffers(), which results in frame drops and other related issues. With the ongoing Wayland improvements, we hope to fix the aforementioned issues.




CSD support in KWin

If you are a long time Plasma user, you probably remember the times when most GTK applications in KDE Plasma prior to 5.18 were practically unusable due to the lack of support for _GTK_FRAME_EXTENTS. In this blog post, I would like to get a bit technical and walk you through some changes that happened during the 5.18 time frame that made it possible to support _GTK_FRAME_EXTENTS in KWin. I would also like to explain why, after so many years of resistance, we finally added support for client-side decorations. So, buckle your seat belt!

What is _GTK_FRAME_EXTENTS, anyway?

A window can be viewed as a thing that has some contents and a frame around it with a close button and so on. The “server-side decoration” term is used to refer to a window frame drawn by the window manager (KWin). If the application draws the window frame by itself, then we refer to that window frame as a “client-side decoration.”

An example of a window frame being drawn by the window manager (KWin)
An example of a window frame being drawn by the application (gedit)

A cool thing about client-side decorations is that they can be real eye candy, but they also have a lot of drawbacks; for example, if the application hangs, the user won’t be able to close the window by clicking the close button in the window frame. But the biggest issue with client-side decorations is that the window manager has to know the extents of the client-side drop shadow, otherwise things such as window snapping won’t work as desired.

_GTK_FRAME_EXTENTS is a proprietary GTK extension that describes the extents of the client-side decoration on each side (left, right, top, and bottom). From the word “proprietary” you have probably already guessed that _GTK_FRAME_EXTENTS is not in any spec. We couldn’t afford to implement a proprietary extension simply because we didn’t know whether re-designing KWin would pay off in the end. What if GTK ditched _GTK_FRAME_EXTENTS for something else and our hard work was for nothing? There were some suggestions to standardize _GTK_FRAME_EXTENTS in the NETWM spec, but it didn’t go well.

So, what did change our minds?

It might come as a surprise, but the reason why we decided to add CSD support after so many years of being reluctant was Wayland. In order to fully implement the xdg-shell protocol (the de facto protocol for creating desktop-style surfaces), we must support client-side decorated windows. Prior to that, we didn’t have any reason that could justify the changes we would have to make to code that had been battle-tested for many years.

With Wayland, we know what changes have to be made in order to add support for client-side decorated clients. Surprisingly, the geometry abstractions that we chose specifically for client-side decorated Wayland clients turned out to also work pretty well for client-side decorated X11 clients, so we decided to add support for _GTK_FRAME_EXTENTS since it didn’t require any huge changes.

It still means that we will be screwed if GTK switches to something completely different on X11, though. But let’s hope that it won’t happen.

CSD and KDE Plasma

“But Vlad,” you may say. “Does this mean that KDE is going to switch to CSD?” No, as far as I know, nothing has changed, we still use server-side decorations. Support for client-side decorations was added because it’s something that we need on Wayland and to make GTK applications usable on X11.

Frame, buffer, and client geometry

Warning: This is a brain dump. Please skip to the next section if you’re not interested in technical stuff.

For an old-school window manager such as KWin, client-side decorated windows are troublesome mainly because, due to the long history, all rendering-related code had been written with the assumption that the window frame wraps the window contents. If an application draws the window frame on its own, that’s no longer the case.

In 5.18, we tackled that problem by introducing two new geometries to separate window management from rendering – the frame geometry and the buffer geometry.

The frame geometry describes a rectangle that bounds the window frame. It doesn’t matter whether the window is client-side decorated or server-side decorated. This kind of geometry is used practically for everything, for example window snapping, resizing, etc. KWin scripts see and operate on this geometry.

The buffer geometry describes the rectangle occupied by the attached client buffer. It is used primarily during rendering, for example to build window quads, etc.

In 5.20, we introduced yet another new geometry, which existed prior to that in an implicit form – the client geometry. The client geometry indicates where the window contents inside the window frame [1] are on the screen. We use this geometry primarily for configuring windows.

It can be a bit challenging to deal with three different geometries at the same time, but there is not that much we can do about it, unfortunately. Each geometry has its own specific domain where the other geometries are inappropriate to use.


CSD is a rather controversial subject in the KDE community, but it’s here and it’s not going anywhere anytime soon.

[1] On X11, the client geometry actually corresponds to the geometry of the client window.