xen*415: apply upstream patches for Xen Security Advisories
XSA-439, XSA-440, XSA-442, XSA-443, XSA-444, XSA-445, XSA-446.

Bump PKGREVISIONs.
This commit is contained in:
bouyer 2023-11-15 15:59:36 +00:00
parent 7269f66e9b
commit b268e7d78a
12 changed files with 2241 additions and 11 deletions


@@ -1,9 +1,9 @@
-# $NetBSD: Makefile,v 1.11 2023/09/21 10:39:45 bouyer Exp $
+# $NetBSD: Makefile,v 1.12 2023/11/15 15:59:36 bouyer Exp $
VERSION= 4.15.5
DISTNAME= xen-${VERSION}
PKGNAME= xenkernel415-${VERSION}
-PKGREVISION= 1
+PKGREVISION= 2
CATEGORIES= sysutils
MASTER_SITES= https://downloads.xenproject.org/release/xen/${VERSION}/
DIST_SUBDIR= xen415


@@ -1,10 +1,15 @@
-$NetBSD: distinfo,v 1.10 2023/09/21 10:39:45 bouyer Exp $
+$NetBSD: distinfo,v 1.11 2023/11/15 15:59:36 bouyer Exp $
BLAKE2s (xen415/xen-4.15.5.tar.gz) = 85bef27c99fd9fd3037ec6df5e514289b650f2f073bcc543d13d5997c03332d4
SHA512 (xen415/xen-4.15.5.tar.gz) = 790f3d75df78f63f5b2ce3b99c1f2287f75ef5571d1b7a9bb9bac470bd28ccbd4816d07a1af8320eee4107626c75be029bd6dad1d99d58f3816906ed98d206d9
Size (xen415/xen-4.15.5.tar.gz) = 40835793 bytes
SHA1 (patch-Config.mk) = 9372a09efd05c9fbdbc06f8121e411fcb7c7ba65
SHA1 (patch-XSA438) = a8288bbbe8ffe799cebbf6bb184b1a2b59b59089
+SHA1 (patch-XSA439) = 5284e7801ed379aaac3c12dafc32283567bddd95
+SHA1 (patch-XSA442) = 170d94ed89a0d9ab210052fef0c8ae41a426374c
+SHA1 (patch-XSA444) = 5da1c79e811bebf5fee8416b00f76cfbc3946701
+SHA1 (patch-XSA445) = 85990f0ecd529b0c0b4cd9ab422d305bb94ae4b9
+SHA1 (patch-XSA446) = 7271a9afc134cca8c42e7a284f1761ddce2ac5ca
SHA1 (patch-xen_Makefile) = 465388d80de414ca3bb84faefa0f52d817e423a6
SHA1 (patch-xen_Rules.mk) = c743dc63f51fc280d529a7d9e08650292c171dac
SHA1 (patch-xen_arch_x86_Kconfig) = df14bfa09b9a0008ca59d53c938d43a644822dd9


@@ -0,0 +1,255 @@
$NetBSD: patch-XSA439,v 1.1 2023/11/15 15:59:36 bouyer Exp $
From d7b78041dc819efde0350f27754a61cb01a93496 Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 30 Aug 2023 20:24:25 +0100
Subject: [PATCH 1/1] x86/spec-ctrl: Mitigate the Zen1 DIV leakage
In the Zen1 microarchitecture, there is one divider in the pipeline which
services uops from both threads. In the case of #DE, the latched result from
the previous DIV to execute will be forwarded speculatively.

This is an interesting covert channel that allows two threads to communicate
without any system calls. It also allows userspace to obtain the result of
the most recent DIV instruction executed (even speculatively) in the core,
which can be from a higher privilege context.

Scrub the result from the divider by executing a non-faulting divide. This
needs performing on the exit-to-guest paths, and ist_exit-to-Xen.

Alternatives in IST context are believed safe now that they are done in NMI
context.
This is XSA-439 / CVE-2023-20588.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
(cherry picked from commit b5926c6ecf05c28ee99c6248c42d691ccbf0c315)
---
docs/misc/xen-command-line.pandoc | 6 +++-
xen/arch/x86/hvm/svm/entry.S | 1 +
xen/arch/x86/spec_ctrl.c | 49 ++++++++++++++++++++++++++++-
xen/include/asm-x86/cpufeatures.h | 3 +-
xen/include/asm-x86/spec_ctrl_asm.h | 17 ++++++++++
5 files changed, 73 insertions(+), 3 deletions(-)
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 16a61ad858..f3d1009f2d 100644
--- docs/misc/xen-command-line.pandoc.orig
+++ docs/misc/xen-command-line.pandoc
@@ -2189,7 +2189,7 @@ By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
> {msr-sc,rsb,md-clear,ibpb-entry}=<bool>|{pv,hvm}=<bool>,
> bti-thunk=retpoline|lfence|jmp, {ibrs,ibpb,ssbd,psfd,
> eager-fpu,l1d-flush,branch-harden,srb-lock,
-> unpriv-mmio,gds-mit}=<bool> ]`
+> unpriv-mmio,gds-mit,div-scrub}=<bool> ]`
Controls for speculative execution sidechannel mitigations. By default, Xen
will pick the most appropriate mitigations based on compiled in support,
@@ -2309,6 +2309,10 @@ has elected not to lock the configuration, Xen will use GDS_CTRL to mitigate
GDS with. Otherwise, Xen will mitigate by disabling AVX, which blocks the use
of the AVX2 Gather instructions.
+On all hardware, the `div-scrub=` option can be used to force or prevent Xen
+from mitigating the DIV-leakage vulnerability. By default, Xen will mitigate
+DIV-leakage on hardware believed to be vulnerable.
+
### sync_console
> `= <boolean>`
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index 0ff4008060..ad5ca50c12 100644
--- xen/arch/x86/hvm/svm/entry.S.orig
+++ xen/arch/x86/hvm/svm/entry.S
@@ -72,6 +72,7 @@ __UNLIKELY_END(nsvm_hap)
1: /* No Spectre v1 concerns. Execution will hit VMRUN imminently. */
.endm
ALTERNATIVE "", svm_vmentry_spec_ctrl, X86_FEATURE_SC_MSR_HVM
+ ALTERNATIVE "", DO_SPEC_CTRL_DIV, X86_FEATURE_SC_DIV
pop %r15
pop %r14
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index e9cc6b586a..f75124117b 100644
--- xen/arch/x86/spec_ctrl.c.orig
+++ xen/arch/x86/spec_ctrl.c
@@ -22,6 +22,7 @@
#include <xen/param.h>
#include <xen/warning.h>
+#include <asm/amd.h>
#include <asm/hvm/svm/svm.h>
#include <asm/microcode.h>
#include <asm/msr.h>
@@ -78,6 +79,7 @@ static int8_t __initdata opt_srb_lock = -1;
static bool __initdata opt_unpriv_mmio;
static bool __read_mostly opt_fb_clear_mmio;
static int8_t __initdata opt_gds_mit = -1;
+static int8_t __initdata opt_div_scrub = -1;
static int __init parse_spec_ctrl(const char *s)
{
@@ -132,6 +134,7 @@ static int __init parse_spec_ctrl(const char *s)
opt_srb_lock = 0;
opt_unpriv_mmio = false;
opt_gds_mit = 0;
+ opt_div_scrub = 0;
}
else if ( val > 0 )
rc = -EINVAL;
@@ -284,6 +287,8 @@ static int __init parse_spec_ctrl(const char *s)
opt_unpriv_mmio = val;
else if ( (val = parse_boolean("gds-mit", s, ss)) >= 0 )
opt_gds_mit = val;
+ else if ( (val = parse_boolean("div-scrub", s, ss)) >= 0 )
+ opt_div_scrub = val;
else
rc = -EINVAL;
@@ -484,7 +489,7 @@ static void __init print_details(enum ind_thunk thunk)
"\n");
/* Settings for Xen's protection, irrespective of guests. */
- printk(" Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s%s%s, Other:%s%s%s%s%s\n",
+ printk(" Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s%s%s, Other:%s%s%s%s%s%s\n",
thunk == THUNK_NONE ? "N/A" :
thunk == THUNK_RETPOLINE ? "RETPOLINE" :
thunk == THUNK_LFENCE ? "LFENCE" :
@@ -509,6 +514,7 @@ static void __init print_details(enum ind_thunk thunk)
opt_l1d_flush ? " L1D_FLUSH" : "",
opt_md_clear_pv || opt_md_clear_hvm ||
opt_fb_clear_mmio ? " VERW" : "",
+ opt_div_scrub ? " DIV" : "",
opt_branch_harden ? " BRANCH_HARDEN" : "");
/* L1TF diagnostics, printed if vulnerable or PV shadowing is in use. */
@@ -933,6 +939,45 @@ static void __init srso_calculations(bool hw_smt_enabled)
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
}
+/*
+ * The Div leakage issue is specific to the AMD Zen1 microarchitecture.
+ *
+ * However, there's no $FOO_NO bit defined, so if we're virtualised we have no
+ * hope of spotting the case where we might move to vulnerable hardware. We
+ * also can't make any useful conclusion about SMT-ness.
+ *
+ * Don't check the hypervisor bit, so at least we do the safe thing when
+ * booting on something that looks like a Zen1 CPU.
+ */
+static bool __init has_div_vuln(void)
+{
+ if ( !(boot_cpu_data.x86_vendor &
+ (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+ return false;
+
+ if ( boot_cpu_data.x86 != 0x17 && boot_cpu_data.x86 != 0x18 )
+ return false;
+
+ return is_zen1_uarch();
+}
+
+static void __init div_calculations(bool hw_smt_enabled)
+{
+ bool cpu_bug_div = has_div_vuln();
+
+ if ( opt_div_scrub == -1 )
+ opt_div_scrub = cpu_bug_div;
+
+ if ( opt_div_scrub )
+ setup_force_cpu_cap(X86_FEATURE_SC_DIV);
+
+ if ( opt_smt == -1 && !cpu_has_hypervisor && cpu_bug_div && hw_smt_enabled )
+ warning_add(
+ "Booted on leaky-DIV hardware with SMT/Hyperthreading\n"
+ "enabled. Please assess your configuration and choose an\n"
+ "explicit 'smt=<bool>' setting. See XSA-439.\n");
+}
+
static void __init ibpb_calculations(void)
{
bool def_ibpb_entry = false;
@@ -1644,6 +1689,8 @@ void __init init_speculation_mitigations(void)
ibpb_calculations();
+ div_calculations(hw_smt_enabled);
+
/* Check whether Eager FPU should be enabled by default. */
if ( opt_eager_fpu == -1 )
opt_eager_fpu = should_use_eager_fpu();
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index bdb119a34c..d993e06e4c 100644
--- xen/include/asm-x86/cpufeatures.h.orig
+++ xen/include/asm-x86/cpufeatures.h
@@ -35,7 +35,8 @@ XEN_CPUFEATURE(SC_RSB_HVM, X86_SYNTH(19)) /* RSB overwrite needed for HVM
XEN_CPUFEATURE(XEN_SELFSNOOP, X86_SYNTH(20)) /* SELFSNOOP gets used by Xen itself */
XEN_CPUFEATURE(SC_MSR_IDLE, X86_SYNTH(21)) /* Clear MSR_SPEC_CTRL on idle */
XEN_CPUFEATURE(XEN_LBR, X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
-/* Bits 23,24 unused. */
+XEN_CPUFEATURE(SC_DIV, X86_SYNTH(23)) /* DIV scrub needed */
+/* Bit 24 unused. */
XEN_CPUFEATURE(SC_VERW_IDLE, X86_SYNTH(25)) /* VERW used by Xen for idle */
XEN_CPUFEATURE(XEN_SHSTK, X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
XEN_CPUFEATURE(XEN_IBT, X86_SYNTH(27)) /* Xen uses CET Indirect Branch Tracking */
--- xen/include/asm-x86/spec_ctrl_asm.h.orig 2023-08-07 14:08:26.000000000 +0200
+++ xen/include/asm-x86/spec_ctrl_asm.h 2023-11-15 14:50:58.771057793 +0100
@@ -178,6 +178,19 @@
.L\@_verw_skip:
.endm
+.macro DO_SPEC_CTRL_DIV
+/*
+ * Requires nothing
+ * Clobbers %rax
+ *
+ * Issue a DIV for its flushing side effect (Zen1 uarch specific). Any
+ * non-faulting DIV will do; a byte DIV has least latency, and doesn't clobber
+ * %rdx.
+ */
+ mov $1, %eax
+ div %al
+.endm
+
.macro DO_SPEC_CTRL_ENTRY maybexen:req
/*
* Requires %rsp=regs (also cpuinfo if !maybexen)
@@ -231,6 +244,7 @@
wrmsr
.L\@_skip:
+ ALTERNATIVE "", DO_SPEC_CTRL_DIV, X86_FEATURE_SC_DIV
.endm
.macro DO_SPEC_CTRL_EXIT_TO_GUEST
@@ -277,7 +291,8 @@
#define SPEC_CTRL_EXIT_TO_PV \
ALTERNATIVE "", \
DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_PV; \
- DO_SPEC_CTRL_COND_VERW
+ DO_SPEC_CTRL_COND_VERW; \
+ ALTERNATIVE "", DO_SPEC_CTRL_DIV, X86_FEATURE_SC_DIV
/*
* Use in IST interrupt/exception context. May interrupt Xen or PV context.
--- xen/include/asm-x86/amd.h.orig 2023-11-15 15:16:19.642351562 +0100
+++ xen/include/asm-x86/amd.h 2023-11-15 15:17:10.878437198 +0100
@@ -140,6 +140,17 @@
AMD_MODEL_RANGE(0x11, 0x0, 0x0, 0xff, 0xf), \
AMD_MODEL_RANGE(0x12, 0x0, 0x0, 0xff, 0xf))
+/*
+ * The Zen1 and Zen2 microarchitectures are implemented by AMD (Fam17h) and
+ * Hygon (Fam18h) but without simple model number rules. Instead, use STIBP
+ * as a heuristic that distinguishes the two.
+ *
+ * The caller is required to perform the appropriate vendor/family checks
+ * first.
+ */
+#define is_zen1_uarch() (!boot_cpu_has(X86_FEATURE_AMD_STIBP))
+#define is_zen2_uarch() boot_cpu_has(X86_FEATURE_AMD_STIBP)
+
struct cpuinfo_x86;
int cpu_has_amd_erratum(const struct cpuinfo_x86 *, int, ...);


@@ -0,0 +1,187 @@
$NetBSD: patch-XSA442,v 1.1 2023/11/15 15:59:36 bouyer Exp $
From 42614970833467d8b9aaf9def9f062c6c7425dad Mon Sep 17 00:00:00 2001
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 13 Jun 2023 15:01:05 +0200
Subject: [PATCH] iommu/amd-vi: flush IOMMU TLB when flushing the DTE
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
The caching invalidation guidelines from the AMD-Vi specification (48882—Rev
3.07-PUB—Oct 2022) seem to be misleading on some hardware, as devices will
malfunction (see stale DMA mappings) if some fields of the DTE are updated but
the IOMMU TLB is not flushed. This has been observed in practice on AMD
systems. Due to the lack of guidance from the currently published
specification this patch aims to increase the flushing done in order to prevent
device malfunction.
In order to fix, issue an INVALIDATE_IOMMU_PAGES command from
amd_iommu_flush_device(), flushing all the address space. Note this requires
callers to be adjusted in order to pass the DomID on the DTE prior to the
modification.

Some call sites don't provide a valid DomID to amd_iommu_flush_device() in
order to avoid the flush. That's because the device had address translations
disabled and hence the previous DomID on the DTE is not valid. Note the
current logic relies on the entity disabling address translations to also
flush the TLB of the in-use DomID.

Device I/O TLB flushing when ATS is enabled is not covered by the current
change, as ATS usage is not security supported.
This is XSA-442 / CVE-2023-34326
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
xen/drivers/passthrough/amd/iommu.h | 3 ++-
xen/drivers/passthrough/amd/iommu_cmd.c | 10 +++++++++-
xen/drivers/passthrough/amd/iommu_guest.c | 5 +++--
xen/drivers/passthrough/amd/iommu_init.c | 6 +++++-
xen/drivers/passthrough/amd/pci_amd_iommu.c | 14 ++++++++++----
5 files changed, 29 insertions(+), 9 deletions(-)
diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index 0d9d976faaea..4e355ef4c12f 100644
--- xen/drivers/passthrough/amd/iommu.h.orig
+++ xen/drivers/passthrough/amd/iommu.h
@@ -265,7 +265,8 @@ void amd_iommu_flush_pages(struct domain *d, unsigned long dfn,
unsigned int order);
void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
uint64_t gaddr, unsigned int order);
-void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf);
+void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf,
+ domid_t domid);
void amd_iommu_flush_intremap(struct amd_iommu *iommu, uint16_t bdf);
void amd_iommu_flush_all_caches(struct amd_iommu *iommu);
diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
index dfb8b1c860d1..196e3dce3aec 100644
--- xen/drivers/passthrough/amd/iommu_cmd.c.orig
+++ xen/drivers/passthrough/amd/iommu_cmd.c
@@ -362,12 +362,20 @@ void amd_iommu_flush_pages(struct domain *d,
_amd_iommu_flush_pages(d, __dfn_to_daddr(dfn), order);
}
-void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf)
+void amd_iommu_flush_device(struct amd_iommu *iommu, uint16_t bdf,
+ domid_t domid)
{
ASSERT( spin_is_locked(&iommu->lock) );
invalidate_dev_table_entry(iommu, bdf);
flush_command_buffer(iommu, 0);
+
+ /* Also invalidate IOMMU TLB entries when flushing the DTE. */
+ if ( domid != DOMID_INVALID )
+ {
+ invalidate_iommu_pages(iommu, INV_IOMMU_ALL_PAGES_ADDRESS, domid, 0);
+ flush_command_buffer(iommu, 0);
+ }
}
void amd_iommu_flush_intremap(struct amd_iommu *iommu, uint16_t bdf)
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index 00c5ccd7b5d2..f404e382f019 100644
--- xen/drivers/passthrough/amd/iommu_guest.c.orig
+++ xen/drivers/passthrough/amd/iommu_guest.c
@@ -385,7 +385,7 @@ static int do_completion_wait(struct domain *d, cmd_entry_t *cmd)
static int do_invalidate_dte(struct domain *d, cmd_entry_t *cmd)
{
- uint16_t gbdf, mbdf, req_id, gdom_id, hdom_id;
+ uint16_t gbdf, mbdf, req_id, gdom_id, hdom_id, prev_domid;
struct amd_iommu_dte *gdte, *mdte, *dte_base;
struct amd_iommu *iommu = NULL;
struct guest_iommu *g_iommu;
@@ -445,11 +445,12 @@ static int do_invalidate_dte(struct domain *d, cmd_entry_t *cmd)
req_id = get_dma_requestor_id(iommu->seg, mbdf);
dte_base = iommu->dev_table.buffer;
mdte = &dte_base[req_id];
+ prev_domid = mdte->domain_id;
spin_lock_irqsave(&iommu->lock, flags);
dte_set_gcr3_table(mdte, hdom_id, gcr3_mfn << PAGE_SHIFT, gv, glx);
- amd_iommu_flush_device(iommu, req_id);
+ amd_iommu_flush_device(iommu, req_id, prev_domid);
spin_unlock_irqrestore(&iommu->lock, flags);
return 0;
diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index bb52c181f8cd..4a96f7fbec3c 100644
--- xen/drivers/passthrough/amd/iommu_init.c.orig
+++ xen/drivers/passthrough/amd/iommu_init.c
@@ -1554,7 +1554,11 @@ static int _invalidate_all_devices(
if ( iommu )
{
spin_lock_irqsave(&iommu->lock, flags);
- amd_iommu_flush_device(iommu, req_id);
+ /*
+ * IOMMU TLB flush performed separately (see
+ * invalidate_all_domain_pages()).
+ */
+ amd_iommu_flush_device(iommu, req_id, DOMID_INVALID);
amd_iommu_flush_intremap(iommu, req_id);
spin_unlock_irqrestore(&iommu->lock, flags);
}
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index e804fdc34fcd..872955566608 100644
--- xen/drivers/passthrough/amd/pci_amd_iommu.c.orig
+++ xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -183,10 +183,13 @@ static int __must_check amd_iommu_setup_domain_device(
iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
dte->i = ats_enabled;
- amd_iommu_flush_device(iommu, req_id);
+ /* DTE didn't have DMA translations enabled, do not flush the TLB. */
+ amd_iommu_flush_device(iommu, req_id, DOMID_INVALID);
}
else if ( dte->pt_root != mfn_x(page_to_mfn(root_pg)) )
{
+ domid_t prev_domid = dte->domain_id;
+
/*
* Strictly speaking if the device is the only one with this requestor
* ID, it could be allowed to be re-assigned regardless of unity map
@@ -240,7 +243,7 @@ static int __must_check amd_iommu_setup_domain_device(
iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
ASSERT(dte->i == ats_enabled);
- amd_iommu_flush_device(iommu, req_id);
+ amd_iommu_flush_device(iommu, req_id, prev_domid);
}
spin_unlock_irqrestore(&iommu->lock, flags);
@@ -389,6 +392,8 @@ static void amd_iommu_disable_domain_device(const struct domain *domain,
spin_lock_irqsave(&iommu->lock, flags);
if ( dte->tv || dte->v )
{
+ domid_t prev_domid = dte->domain_id;
+
/* See the comment in amd_iommu_setup_device_table(). */
dte->int_ctl = IOMMU_DEV_TABLE_INT_CONTROL_ABORTED;
smp_wmb();
@@ -405,7 +410,7 @@ static void amd_iommu_disable_domain_device(const struct domain *domain,
smp_wmb();
dte->v = true;
- amd_iommu_flush_device(iommu, req_id);
+ amd_iommu_flush_device(iommu, req_id, prev_domid);
AMD_IOMMU_DEBUG("Disable: device id = %#x, "
"domain = %d, paging mode = %d\n",
@@ -578,7 +583,8 @@ static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
iommu->dev_table.buffer + (bdf * IOMMU_DEV_TABLE_ENTRY_SIZE),
ivrs_mappings[bdf].intremap_table, iommu, iommu_intremap);
- amd_iommu_flush_device(iommu, bdf);
+ /* DTE didn't have DMA translations enabled, do not flush the TLB. */
+ amd_iommu_flush_device(iommu, bdf, DOMID_INVALID);
spin_unlock_irqrestore(&iommu->lock, flags);
}
--
2.42.0


@@ -0,0 +1,167 @@
$NetBSD: patch-XSA444,v 1.1 2023/11/15 15:59:36 bouyer Exp $
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/svm: Fix asymmetry with AMD DR MASK context switching
The handling of MSR_DR{0..3}_MASK is asymmetric between PV and HVM guests.
HVM guests context switch in based on the guest view of DBEXT, whereas PV
guests switch in based on the host capability. Both guest types leave the
context dirty for the next vCPU.
This leads to the following issue:
* PV or HVM guest has debugging active (%dr7 + mask)
* Switch-out deactivates %dr7 but leaves other state stale in hardware
* Another HVM guest with masks unavailable has debugging active
* Switch in loads %dr7 but leaves the mask MSRs alone
Now, the second guest's vCPU is operating in the context of the prior vCPU's
mask MSR, while the environment the vCPU can see says there are no mask MSRs.
As a stopgap, adjust the HVM path to switch in the masks based on host
capabilities rather than guest visibility (i.e. like the PV path). Adjustment
of the intercepts still needs to be dependent on the guest visibility of
DBEXT.
This is part of XSA-444 / CVE-2023-34327
Fixes: c097f54912d3 ("x86/SVM: support data breakpoint extension registers")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index a019d196e071..ba4069f9100a 100644
--- xen/arch/x86/hvm/svm/svm.c.orig
+++ xen/arch/x86/hvm/svm/svm.c
@@ -185,6 +185,10 @@ static void svm_save_dr(struct vcpu *v)
v->arch.hvm.flag_dr_dirty = 0;
vmcb_set_dr_intercepts(vmcb, ~0u);
+ /*
+ * The guest can only have changed the mask MSRs if we previously dropped
+ * intercepts. Re-read them from hardware.
+ */
if ( v->domain->arch.cpuid->extd.dbext )
{
svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_RW);
@@ -216,17 +220,25 @@ static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
ASSERT(v == current);
- if ( v->domain->arch.cpuid->extd.dbext )
+ /*
+ * Both the PV and HVM paths leave stale DR_MASK values in hardware on
+ * context-switch-out. If we're activating %dr7 for the guest, we must
+ * sync the DR_MASKs too, whether or not the guest can see them.
+ */
+ if ( boot_cpu_has(X86_FEATURE_DBEXT) )
{
- svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
- svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
- svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
- svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-
wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
wrmsrl(MSR_AMD64_DR2_ADDRESS_MASK, v->arch.msrs->dr_mask[2]);
wrmsrl(MSR_AMD64_DR3_ADDRESS_MASK, v->arch.msrs->dr_mask[3]);
+
+ if ( v->domain->arch.cpuid->extd.dbext )
+ {
+ svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+ svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+ svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+ svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+ }
}
write_debugreg(0, v->arch.dr[0]);
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index f7992ff230b5..a142a63dd869 100644
--- xen/arch/x86/traps.c.orig
+++ xen/arch/x86/traps.c
@@ -2314,6 +2314,11 @@ void activate_debugregs(const struct vcpu *curr)
if ( curr->arch.dr7 & DR7_ACTIVE_MASK )
write_debugreg(7, curr->arch.dr7);
+ /*
+ * Both the PV and HVM paths leave stale DR_MASK values in hardware on
+ * context-switch-out. If we're activating %dr7 for the guest, we must
+ * sync the DR_MASKs too, whether or not the guest can see them.
+ */
if ( boot_cpu_has(X86_FEATURE_DBEXT) )
{
wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, curr->arch.msrs->dr_mask[0]);
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/pv: Correct the auditing of guest breakpoint addresses
The use of access_ok() is buggy, because it permits access to the compat
translation area. 64bit PV guests don't use the XLAT area, but on AMD
hardware, the DBEXT feature allows a breakpoint to match up to a 4G aligned
region, allowing the breakpoint to reach outside of the XLAT area.
Prior to c/s cda16c1bb223 ("x86: mirror compat argument translation area for
32-bit PV"), the live GDT was within 4G of the XLAT area.
All together, this allowed a malicious 64bit PV guest on AMD hardware to place
a breakpoint over the live GDT, and trigger a #DB livelock (CVE-2015-8104).
Introduce breakpoint_addr_ok() and explain why __addr_ok() happens to be an
appropriate check in this case.
For Xen 4.14 and later, this is a latent bug because the XLAT area has moved
to be on its own with nothing interesting adjacent. For Xen 4.13 and older on
AMD hardware, this fixes a PV-trigger-able DoS.
This is part of XSA-444 / CVE-2023-34328.
Fixes: 65e355490817 ("x86/PV: support data breakpoint extension registers")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
diff --git a/xen/arch/x86/pv/misc-hypercalls.c b/xen/arch/x86/pv/misc-hypercalls.c
index 5dade2472687..681c16108fd1 100644
--- xen/arch/x86/pv/misc-hypercalls.c.orig
+++ xen/arch/x86/pv/misc-hypercalls.c
@@ -68,7 +68,7 @@ long set_debugreg(struct vcpu *v, unsigned int reg, unsigned long value)
switch ( reg )
{
case 0 ... 3:
- if ( !access_ok(value, sizeof(long)) )
+ if ( !breakpoint_addr_ok(value) )
return -EPERM;
v->arch.dr[reg] = value;
diff --git a/xen/include/asm-x86/debugreg.h b/xen/include/asm-x86/debugreg.h
index c57914efc6e8..cc298265244b 100644
--- xen/include/asm-x86/debugreg.h.orig
+++ xen/include/asm-x86/debugreg.h
@@ -77,6 +77,26 @@
asm volatile ( "mov %%db" #reg ",%0" : "=r" (__val) ); \
__val; \
})
+
+/*
+ * Architecturally, %dr{0..3} can have any arbitrary value. However, Xen
+ * can't allow the guest to breakpoint the Xen address range, so we limit the
+ * guest to the lower canonical half, or above the Xen range in the higher
+ * canonical half.
+ *
+ * Breakpoint lengths are specified to mask the low order address bits,
+ * meaning all breakpoints are naturally aligned. With %dr7, the widest
+ * breakpoint is 8 bytes. With DBEXT, the widest breakpoint is 4G. Both of
+ * the Xen boundaries have >4G alignment.
+ *
+ * In principle we should account for HYPERVISOR_COMPAT_VIRT_START(d), but
+ * 64bit Xen has never enforced this for compat guests, and there's no problem
+ * (to Xen) if the guest breakpoints its alias of the M2P. Skipping this
+ * aspect simplifies the logic, and causes us not to reject a migrating guest
+ * which operated fine on prior versions of Xen.
+ */
+#define breakpoint_addr_ok(a) __addr_ok(a)
+
long set_debugreg(struct vcpu *, unsigned int reg, unsigned long value);
void activate_debugregs(const struct vcpu *);


@@ -0,0 +1,67 @@
$NetBSD: patch-XSA445,v 1.1 2023/11/15 15:59:36 bouyer Exp $
From 9877bb3af60ef2b543742835c49de7d0108cdca9 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Wed, 11 Oct 2023 13:14:21 +0200
Subject: [PATCH] iommu/amd-vi: use correct level for quarantine domain page
tables
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
The current setup of the quarantine page tables assumes that the quarantine
domain (dom_io) has been initialized with an address width of
DEFAULT_DOMAIN_ADDRESS_WIDTH (48).
However dom_io being a PV domain gets the AMD-Vi IOMMU page tables levels based
on the maximum (hot pluggable) RAM address, and hence on systems with no RAM
above the 512GB mark only 3 page-table levels are configured in the IOMMU.
On systems without RAM above the 512GB boundary amd_iommu_quarantine_init()
will setup page tables for the scratch page with 4 levels, while the IOMMU will
be configured to use 3 levels only. The page destined to be used as level 1,
and to contain a directory of PTEs, ends up being the address in a PTE itself,
and thus the level 1 page becomes the leaf page. Without the level mismatch,
the level 0 page would be the leaf page instead.
The level 1 page won't be used as such, and hence it's not possible to use it
to gain access to other memory on the system. However that page is not cleared
in amd_iommu_quarantine_init() as part of re-initialization of the device
quarantine page tables, and hence data on the level 1 page can be leaked
between device usages.
Fix this by making sure the paging levels setup by amd_iommu_quarantine_init()
match the number configured on the IOMMUs.
Note that IVMD regions are not affected by this issue, as those areas are
mapped taking the configured paging levels into account.
This is XSA-445 / CVE-2023-46835
Fixes: ea38867831da ('x86 / iommu: set up a scratch page in the quarantine domain')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
xen/drivers/passthrough/amd/iommu_map.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index b4c182449131..3473db4c1efc 100644
--- xen/drivers/passthrough/amd/iommu_map.c.orig
+++ xen/drivers/passthrough/amd/iommu_map.c
@@ -584,9 +584,7 @@ static int fill_qpt(union amd_iommu_pte *this, unsigned int level,
int amd_iommu_quarantine_init(struct pci_dev *pdev)
{
struct domain_iommu *hd = dom_iommu(dom_io);
- unsigned long end_gfn =
- 1ul << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT);
- unsigned int level = amd_iommu_get_paging_mode(end_gfn);
+ unsigned int level = hd->arch.amd.paging_mode;
unsigned int req_id = get_dma_requestor_id(pdev->seg, pdev->sbdf.bdf);
const struct ivrs_mappings *ivrs_mappings = get_ivrs_mappings(pdev->seg);
int rc;
base-commit: 4a4daf6bddbe8a741329df5cc8768f7dec664aed
--
2.30.2


@@ -0,0 +1,117 @@
$NetBSD: patch-XSA446,v 1.1 2023/11/15 15:59:36 bouyer Exp $
From 80d5aada598c3a800a350003d5d582931545e13c Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Thu, 26 Oct 2023 14:37:38 +0100
Subject: [PATCH] x86/spec-ctrl: Remove conditional IRQs-on-ness for INT
$0x80/0x82 paths
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Before speculation defences, some paths in Xen could genuinely get away with
being IRQs-on at entry. But XPTI invalidated this property on most paths, and
attempting to maintain it on the remaining paths was a mistake.
Fast forward, and DO_SPEC_CTRL_COND_IBPB (protection for AMD BTC/SRSO) is not
IRQ-safe, running with IRQs enabled in some cases. The other actions taken on
these paths happen to be IRQ-safe.
Make entry_int82() and int80_direct_trap() unconditionally Interrupt Gates
rather than Trap Gates. Remove the conditional re-adjustment of
int80_direct_trap() in smp_prepare_cpus(), and have entry_int82() explicitly
enable interrupts when safe to do so.
In smp_prepare_cpus(), with the conditional re-adjustment removed, the
clearing of pv_cr3 is the only remaining action gated on XPTI, and it is out
of place anyway, repeating work already done by smp_prepare_boot_cpu(). Drop
the entire if() condition to avoid leaving an incorrect vestigial remnant.
Also drop comments which make incorrect statements about when it's safe to
enable interrupts.
This is XSA-446 / CVE-2023-46836
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/pv/traps.c | 4 ++--
xen/arch/x86/smpboot.c | 14 --------------
xen/arch/x86/x86_64/compat/entry.S | 2 ++
xen/arch/x86/x86_64/entry.S | 1 -
4 files changed, 4 insertions(+), 17 deletions(-)
diff --git a/xen/arch/x86/pv/traps.c b/xen/arch/x86/pv/traps.c
index 74f333da7e1c..240d1a2db7a3 100644
--- xen/arch/x86/pv/traps.c.orig
+++ xen/arch/x86/pv/traps.c
@@ -139,11 +139,11 @@ void __init pv_trap_init(void)
#ifdef CONFIG_PV32
/* The 32-on-64 hypercall vector is only accessible from ring 1. */
_set_gate(idt_table + HYPERCALL_VECTOR,
- SYS_DESC_trap_gate, 1, entry_int82);
+ SYS_DESC_irq_gate, 1, entry_int82);
#endif
/* Fast trap for int80 (faster than taking the #GP-fixup path). */
- _set_gate(idt_table + LEGACY_SYSCALL_VECTOR, SYS_DESC_trap_gate, 3,
+ _set_gate(idt_table + LEGACY_SYSCALL_VECTOR, SYS_DESC_irq_gate, 3,
&int80_direct_trap);
open_softirq(NMI_SOFTIRQ, nmi_softirq);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 3a1a659082c6..4c54ecbc91d7 100644
--- xen/arch/x86/smpboot.c.orig
+++ xen/arch/x86/smpboot.c
@@ -1158,20 +1158,6 @@ void __init smp_prepare_cpus(void)
stack_base[0] = (void *)((unsigned long)stack_start & ~(STACK_SIZE - 1));
- if ( opt_xpti_hwdom || opt_xpti_domu )
- {
- get_cpu_info()->pv_cr3 = 0;
-
-#ifdef CONFIG_PV
- /*
- * All entry points which may need to switch page tables have to start
- * with interrupts off. Re-write what pv_trap_init() has put there.
- */
- _set_gate(idt_table + LEGACY_SYSCALL_VECTOR, SYS_DESC_irq_gate, 3,
- &int80_direct_trap);
-#endif
- }
-
set_nr_sockets();
socket_cpumask = xzalloc_array(cpumask_t *, nr_sockets);
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index bd5abd8040bd..fcc3a721f147 100644
--- xen/arch/x86/x86_64/compat/entry.S.orig
+++ xen/arch/x86/x86_64/compat/entry.S
@@ -21,6 +21,8 @@ ENTRY(entry_int82)
SPEC_CTRL_ENTRY_FROM_PV /* Req: %rsp=regs/cpuinfo, %rdx=0, Clob: acd */
/* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
+ sti
+
CR4_PV32_RESTORE
GET_CURRENT(bx)
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 5ca74f5f62b2..9a7b129aa7e4 100644
--- xen/arch/x86/x86_64/entry.S.orig
+++ xen/arch/x86/x86_64/entry.S
@@ -327,7 +327,6 @@ ENTRY(sysenter_entry)
#ifdef CONFIG_XEN_SHSTK
ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
#endif
- /* sti could live here when we don't switch page tables below. */
pushq $FLAT_USER_SS
pushq $0
pushfq
base-commit: 7befef87cc9b1bb8ca15d866ce1ecd9165ccb58c
prerequisite-patch-id: 142a87c707411d49e136c3fb76f1b14963ec6dc8
--
2.30.2


@@ -1,7 +1,7 @@
-# $NetBSD: Makefile,v 1.27 2023/10/23 06:37:53 wiz Exp $
+# $NetBSD: Makefile,v 1.28 2023/11/15 15:59:36 bouyer Exp $
#
# VERSION is set in version.mk as it is shared with other packages
-PKGREVISION= 1
+PKGREVISION= 2
.include "version.mk"
PKGNAME= xentools415-${VERSION}


@@ -1,4 +1,4 @@
-@comment $NetBSD: PLIST,v 1.3 2022/01/15 17:44:35 wiz Exp $
+@comment $NetBSD: PLIST,v 1.4 2023/11/15 15:59:36 bouyer Exp $
${PYSITELIB}/grub/ExtLinuxConf.py
${PYSITELIB}/grub/ExtLinuxConf.pyc
${PYSITELIB}/grub/GrubConf.py
@@ -7,10 +7,10 @@ ${PYSITELIB}/grub/LiloConf.py
${PYSITELIB}/grub/LiloConf.pyc
${PYSITELIB}/grub/__init__.py
${PYSITELIB}/grub/__init__.pyc
-${PYSITELIB}/pygrub-0.6-py${PYVERSSUFFIX}.egg-info/PKG-INFO
-${PYSITELIB}/pygrub-0.6-py${PYVERSSUFFIX}.egg-info/SOURCES.txt
-${PYSITELIB}/pygrub-0.6-py${PYVERSSUFFIX}.egg-info/dependency_links.txt
-${PYSITELIB}/pygrub-0.6-py${PYVERSSUFFIX}.egg-info/top_level.txt
+${PYSITELIB}/pygrub-0.7-py${PYVERSSUFFIX}.egg-info/PKG-INFO
+${PYSITELIB}/pygrub-0.7-py${PYVERSSUFFIX}.egg-info/SOURCES.txt
+${PYSITELIB}/pygrub-0.7-py${PYVERSSUFFIX}.egg-info/dependency_links.txt
+${PYSITELIB}/pygrub-0.7-py${PYVERSSUFFIX}.egg-info/top_level.txt
${PYSITELIB}/xen-3.0-py${PYVERSSUFFIX}.egg-info/PKG-INFO
${PYSITELIB}/xen-3.0-py${PYVERSSUFFIX}.egg-info/SOURCES.txt
${PYSITELIB}/xen-3.0-py${PYVERSSUFFIX}.egg-info/dependency_links.txt


@@ -1,4 +1,4 @@
-$NetBSD: distinfo,v 1.13 2023/08/24 10:27:09 bouyer Exp $
+$NetBSD: distinfo,v 1.14 2023/11/15 15:59:36 bouyer Exp $
BLAKE2s (xen415/ipxe-988d2c13cdf0f0b4140685af35ced70ac5b3283c.tar.gz) = 67ded947316100f4f66fa61fe156baf1620db575450f4dc0dd8dcb323e57970b
SHA512 (xen415/ipxe-988d2c13cdf0f0b4140685af35ced70ac5b3283c.tar.gz) = d888e0e653727ee9895fa866d8895e6d23a568b4e9e8439db4c4d790996700c60b0655e3a3129e599736ec2b4f7b987ce79d625ba208f06665fced8bddf94403
@@ -12,6 +12,8 @@ Size (xen415/xen-4.15.5.tar.gz) = 40835793 bytes
SHA1 (patch-.._seabios-rel-1.16.0_src_string.c) = e82f2f16a236a3b878c07b4fb655998591717a73
SHA1 (patch-Config.mk) = d108a1743b5b5313d3ea957b02a005b49f5b3bf6
SHA1 (patch-Makefile) = 6c580cbea532d08a38cf5e54228bd0210a98da21
+SHA1 (patch-XSA440) = 92c21a9caab0292082799e357725345ac676db9e
+SHA1 (patch-XSA443) = 53ea19eb131c3a83b9ab586fc6632fa3704e4fc0
SHA1 (patch-docs_man_xl.1.pod.in) = 280a3717b9f15578d90f85392249ef97844b6765
SHA1 (patch-docs_man_xl.cfg.5.pod.in) = 5970961552f29c4536a884161a208a27a20dccf4
SHA1 (patch-docs_man_xlcpupool.cfg.5.pod) = ab3a2529cd10458948557fd7ab032e80df8b1d81


@@ -0,0 +1,60 @@
$NetBSD: patch-XSA440,v 1.1 2023/11/15 15:59:36 bouyer Exp $
From 5d8b3d1ec98e56155d9650d7f4a70cd8ba9dc27d Mon Sep 17 00:00:00 2001
From: Julien Grall <jgrall@amazon.com>
Date: Fri, 22 Sep 2023 11:32:16 +0100
Subject: tools/xenstored: domain_entry_fix(): Handle conflicting transaction
The function domain_entry_fix() will be initially called to check if the
quota is correct before attempting to commit any nodes. So it would be
possible that accounting is temporarily negative. This is the case
in the following sequence:

1) Create 50 nodes
2) Start two transactions
3) Delete all the nodes in each transaction
4) Commit the two transactions

Because the first transaction will have succeeded and updated the
accounting, there is no guarantee that 'd->nbentry + num' will still
be above 0. So the assert() would be triggered.

The assert() was introduced in dbef1f748289 ("tools/xenstore: simplify
and fix per domain node accounting") with the assumption that the
value can't be negative. As this is not true, revert to the original
check but restricted to the path where we don't update. Take the
opportunity to explain the rationale behind the check.

This is CVE-2023-34323 / XSA-440.

Reported-by: Stanislav Uschakow <suschako@amazon.de>
Fixes: dbef1f748289 ("tools/xenstore: simplify and fix per domain node accounting")
Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index aa86892fed9e..6074df210c6e 100644
--- tools/xenstore/xenstored_domain.c.orig
+++ tools/xenstore/xenstored_domain.c
@@ -1094,10 +1094,20 @@ int domain_entry_fix(unsigned int domid, int num, bool update)
}
cnt = d->nbentry + num;
- assert(cnt >= 0);
- if (update)
+ if (update) {
+ assert(cnt >= 0);
d->nbentry = cnt;
+ } else if (cnt < 0) {
+ /*
+ * In a transaction when a node is being added/removed AND
+ * the same node has been added/removed outside the
+ * transaction in parallel, the result value may be negative.
+ * This is no problem, as the transaction will fail due to
+ * the resulting conflict. So override 'cnt'.
+ */
+ cnt = 0;
+ }
return domid_is_unprivileged(domid) ? cnt : 0;
}
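To make the XSA-440 change easier to follow in isolation, here is a minimal standalone sketch of the corrected accounting logic. This is not the actual xenstored code: entry_fix() and new_nbentry are simplified stand-ins, and the real domain_entry_fix() additionally returns 0 for privileged domains via domid_is_unprivileged().

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch of the XSA-440 fix in domain_entry_fix():
 * 'nbentry' is a domain's committed node count, 'num' may be negative
 * when nodes are deleted.  On the non-update (quota check) path, a
 * conflicting parallel transaction can make the sum transiently
 * negative; the fix clamps it to 0 instead of asserting. */
static int entry_fix(int nbentry, int num, bool update, int *new_nbentry)
{
    int cnt = nbentry + num;

    if (update) {
        assert(cnt >= 0);       /* committed accounting must stay valid */
        *new_nbentry = cnt;     /* caller would store d->nbentry = cnt */
    } else if (cnt < 0) {
        /* The same nodes were removed outside the transaction in
         * parallel; the commit will fail on the conflict anyway. */
        cnt = 0;
    }
    return cnt;
}
```

On the check path the clamped value is only used for the quota comparison; the conflicting transaction is rejected later, so the temporary underflow never reaches the stored count.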

File diff suppressed because it is too large