Classic confinement is a permissive Snap confinement level, equivalent to the full system access that traditionally packaged applications have.
It’s often used as a stop-gap measure to enable developers to publish applications that need more access than the current set of interfaces and permissions allow.
This document serves as a reference for software developers who intend or need to build their snaps as classic. It outlines the principles and implementation of classic confinement in snaps, and provides explanations and examples of what happens at build, install and run time for snaps packaged using classic confinement.
Security confinement distinguishes snaps from software distributed using the traditional repository methods.
The confinement mechanism allows for a high level of isolation and security, and prevents snaps from being affected by underlying system changes, one snap affecting another, or snaps affecting the host system. Security policy and sandboxing details how confinement is implemented.
Different confinement levels describe what type of access snap applications have once installed on the user’s system. Confinement levels can be treated as filters that define what type of system resources the application can access outside the snap.
Confinement is defined by general levels and fine-tuned using interfaces. There are three levels of confinement: strict, classic and devmode.
Strict
This confinement level uses Linux kernel security features to lock down the applications inside the snap. By default, a strictly confined application cannot access the network, the users’ home directory, any audio subsystems or webcams, and it cannot display any graphical output via X or Wayland.
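Access to blocked resources is granted by connecting interfaces. As a minimal illustration (the snap name my-strict-snap is a placeholder), the interfaces a strictly confined snap declares can be reviewed and the home interface connected manually:

# List the interfaces the snap declares and their connection state
snap connections my-strict-snap
# Grant access to the user's home directory by connecting the home interface
sudo snap connect my-strict-snap:home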
Devmode
This is a debug mode level used by developers as they iterate on the creation of their snap. With devmode, applications can access resources that would be blocked under strict confinement. However, the access to these resources will be logged, so the developers can then review the software behavior and add interfaces as required. This allows developers to troubleshoot applications, because they may behave differently when confined.
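For example, a snap under development can be installed in devmode and its logged accesses reviewed in the system journal while the application runs (the snap name is a placeholder):

# Install a snap with devmode confinement
sudo snap install --devmode my-devmode-snap
# Watch the journal for AppArmor/audit messages logged while the application runs
sudo journalctl --follow | grep -i -E 'apparmor|audit'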
Classic
This is a permissive level equivalent to the full system access that traditionally packaged applications have.
Classic confinement is often used as a stop-gap measure to enable developers to publish applications that need more access than the current set of permissions allows. The classic level should be used only when required for functionality, as it lowers the security of the application. Examples of classic snaps include development environments, terminals or build tools that need to access or execute arbitrary files on the host system.
Classically confined snaps are reviewed by the Snap Store reviewers team before they can be published. Snaps that use classic confinement may be rejected if they don’t meet the necessary requirements.
Applications can be packaged as classic snaps for a variety of reasons. Primarily, this level of confinement is used for applications that need access to arbitrary binaries on the system, which is not possible when using strict confinement.
Applications packaged as classic snaps then behave almost like software provided and installed through the system’s repository archives, using the traditional packaging mechanisms (like apt or rpm), but with some important distinctions.
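One such distinction is that installing a classic snap requires the user to explicitly acknowledge the relaxed confinement by passing the --classic flag (the snap name below is a placeholder):

# Install a classic snap; snapd refuses to install it without this flag
sudo snap install --classic my-classic-snap
# Installed classic snaps are marked with "classic" in the Notes column
snap list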
Snap daemon (snapd) actions at run time

When a classic snap is executed on the host system, the snap daemon (snapd) will perform the following actions:

- Execute snap-confine, which is responsible for creating the necessary confinement for the snap (with the rules set during the installation process).
- Apply the snap's AppArmor profile in complain mode. This can be verified by running apparmor_status and finding the relevant profiles listed in the complain section of the output, e.g.: snap.XXX.hook.configure and snap.XXX.snapcraft.
- Apply no Seccomp filtering of system calls. This can be verified with grep Seccomp /proc/PID/status, with the value of 0 indicating the process does not currently use any Seccomp filtering.
- Load dynamic libraries from /snap/<base> (with the relevant base specified in snap.yaml, derived from the developer-edited snapcraft.yaml).

Classic snaps also do not have $LD_LIBRARY_PATH configured as part of their runtime environment; it is left empty.

The differences between strict and classic confinement are summarised below:

| | Strict | Classic |
|---|---|---|
| Mount namespace | private | none |
| cgroups | Yes | No |
| AppArmor | Enforce mode | Complain mode |
| Seccomp | Strict filtering | No filtering |
| LD_LIBRARY_PATH | Depends | Empty |
| Library loading | Staged packages, Base | Staged packages, Base, Host system |
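These run-time properties can be verified on the host while the classic snap application is running; PID below stands for the application's process ID:

# List loaded AppArmor profiles; classic snap profiles appear in the complain section
sudo apparmor_status
# Inspect Seccomp status for the running process; "Seccomp: 0" means no filtering
grep Seccomp /proc/PID/status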
Since there is no isolation between classic snaps and the underlying host system, classic snaps may, at runtime, load dynamic library dependencies in a way that creates errors or conflicts, leading to application instability, unexpected behavior or crashes.
A classic snap created with Snapcraft using one of the Ubuntu bases with dynamically linked binaries will try to load the required dependencies at runtime from /snap/<base>, where the base can be core20, core22, etc. The libraries need to match the name and version of the libraries provided by the Ubuntu repository archives for the specific base, e.g.: snaps built with core20 need to use the relevant libraries (by name and version) as they are defined for Ubuntu 20.04 LTS.

Since there is no isolation between classic snaps and the underlying host system, special care needs to be taken with any pre-built binaries that have hard-coded library dependency paths, as they will skip the normal library loading order at runtime. This is outlined in the Build time section below.
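As an illustration (the snap name is a placeholder, and the example assumes the core22 base snap is installed), the base and confinement recorded for an installed classic snap can be inspected in its metadata:

# The base and confinement level are recorded in the installed snap's metadata
grep -E '^(base|confinement):' /snap/my-classic-snap/current/meta/snap.yaml
# The corresponding base snap is mounted under /snap/<base>
ls /snap/core22/current/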
Snap daemon (snapd) actions at install time

When a classic snap is installed, snapd will perform the following actions:

- Generate an AppArmor profile for the snap, stored in /var/lib/snapd/apparmor/profiles.
- Generate a Seccomp profile, stored in /var/lib/snapd/seccomp/bpf, which will contain the following entry:

@unrestricted\n
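After installation, the generated profiles can be found on the host (the snap name is a placeholder):

# AppArmor profile generated for the classic snap
ls /var/lib/snapd/apparmor/profiles/ | grep my-classic-snap
# Seccomp profile containing the @unrestricted entry
ls /var/lib/snapd/seccomp/bpf/ | grep my-classic-snap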
Build time

Snapcraft builds classic snaps differently from snaps with strict confinement.
This is because, in order to execute correctly, classically confined snap packages require dynamic executables to load shared libraries from the appropriate base snap instead of using the host's root filesystem.
To prevent incompatibilities, binaries in classic snaps must be built with appropriate linker parameters, or patched to allow loading shared libraries from their base snap. If potential dynamic linking issues remain, the snap author must be aware that their package may not run as expected.
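Before publishing, it can be useful to scan the payload for ELF binaries that still point at host locations. A minimal sketch, assuming the unpacked payload (for example Snapcraft's prime/ directory) is the current directory:

# Print the RPATH/RUNPATH and interpreter of every ELF binary in the payload
find . -type f -exec sh -c 'file -b "$1" | grep -q "^ELF" && { echo "== $1"; readelf -d "$1" | grep -E "RPATH|RUNPATH"; readelf -l "$1" | grep interpreter; }' _ {} \;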
There are multiple ways dynamic linking parameters can be manipulated:

$ORIGIN path
$ORIGIN represents the path where the binary is located, thus allowing the runtime library path to be set relative to that location (e.g.: $ORIGIN/../lib for an executable installed under bin/ with libraries in lib/).

To execute as expected, binaries in a classic snap application must be configured to look for shared libraries provided by the base snap or bundled as part of the application snap. This is achieved by setting the runtime path to shared libraries in all ELF binaries (except relocatable object files) that are present in the package payload:
- The $RPATH value must be set to reach all needed entries in the dynamic section of the ELF binary.
- If the binary already contains $RPATH entries, only those that mention $ORIGIN are kept.
- $RPATH entries that point to locations inside the payload are changed to be relative to $ORIGIN.

An ELF binary created during the parts lifecycle execution can have its RPATH value set by using appropriate linker parameters. The linker is typically invoked indirectly via a compiler driver; in the gcc case, parameters can be passed to the linker using the -Wl option:
gcc -o foo foo.o -Wl,-rpath=\$ORIGIN/lib,--disable-new-dtags -Llib -lbar
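The resulting binary can be inspected to confirm the run path was recorded; with --disable-new-dtags the linker emits a DT_RPATH entry rather than DT_RUNPATH:

# Show the dynamic section entry holding the run path
readelf -d foo | grep -E 'RPATH|RUNPATH'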
Snaps may contain pre-built ELF binaries installed from arbitrary sources (typically from the distribution repository archives, after installing stage packages). In this case RPATH
must be set by modifying the existing binary using a tool such as PatchELF:
patchelf --force-rpath --set-rpath \$ORIGIN/lib "binary file"
PatchELF can also be used to change the interpreter to a different dynamic linker:
patchelf --set-interpreter /lib64/ld-linux-x86-64.so.2 foo
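PatchELF can also print the values currently set in a binary, which is useful for verifying it before and after patching:

# Print the run path and interpreter currently recorded in the binary
patchelf --print-rpath foo
patchelf --print-interpreter foo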
Patching ELF binaries to modify RPATH or interpreter entries may fail in certain cases, such as binaries using libc variants that require a nonstandard interpreter. Additionally, patching modifies the binary and therefore changes its signature, which may have the side effect of failed validation for tools or scenarios where the software hashes were generated beforehand.
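For example, comparing checksums before and after patching a copy of the binary shows that any previously recorded hash no longer matches:

sha256sum foo
patchelf --force-rpath --set-rpath \$ORIGIN/lib foo
sha256sum foo   # the hash now differs from the one recorded before patching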