IOMMU Updates for Linux v3.16
Merge tag 'iommu-updates-v3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu into next

Pull IOMMU updates from Joerg Roedel:
 "The changes include:

   - a new IOMMU driver for ARM Renesas SoCs

   - updates and fixes for the ARM Exynos driver to bring it closer to a
     usable state again

   - convert the AMD IOMMUv2 driver to use the mmu_notifier->release
     call-back instead of the task_exit notifier

   - random other fixes and minor improvements to a number of other IOMMU
     drivers"

* tag 'iommu-updates-v3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (54 commits)
  iommu/msm: Use devm_ioremap_resource to simplify code
  iommu/amd: Fix recently introduced compile warnings
  arm/ipmmu-vmsa: Fix compile error
  iommu/exynos: Fix checkpatch warning
  iommu/exynos: Fix trivial typo
  iommu/exynos: Remove invalid symbol dependency
  iommu: fsl_pamu.c: Fix for possible null pointer dereference
  iommu/amd: Remove duplicate checking code
  iommu/amd: Handle parallel invalidate_range_start/end calls correctly
  iommu/amd: Remove IOMMUv2 pasid_state_list
  iommu/amd: Implement mmu_notifier_release call-back
  iommu/amd: Convert IOMMUv2 state_table into state_list
  iommu/amd: Don't access IOMMUv2 state_table directly
  iommu/ipmmu-vmsa: Support clearing mappings
  iommu/ipmmu-vmsa: Remove stage 2 PTE bits definitions
  iommu/ipmmu-vmsa: Support 2MB mappings
  iommu/ipmmu-vmsa: Rewrite page table management
  iommu/ipmmu-vmsa: PMD is never folded, PUD always is
  iommu/ipmmu-vmsa: Set the PTE contiguous hint bit when possible
  iommu/ipmmu-vmsa: Define driver-specific page directory sizes
  ...
commit 2732ea9e85
14 changed files with 2107 additions and 633 deletions

70  Documentation/devicetree/bindings/iommu/samsung,sysmmu.txt  (new file)
@@ -0,0 +1,70 @@
Samsung Exynos IOMMU H/W, System MMU (System Memory Management Unit)

Samsung's Exynos architecture contains System MMUs that enables scattered
physical memory chunks visible as a contiguous region to DMA-capable peripheral
devices like MFC, FIMC, FIMD, GScaler, FIMC-IS and so forth.

System MMU is an IOMMU and supports identical translation table format to
ARMv7 translation tables with minimum set of page properties including access
permissions, shareability and security protection. In addition, System MMU has
another capabilities like L2 TLB or block-fetch buffers to minimize translation
latency.

System MMUs are in many to one relation with peripheral devices, i.e. single
peripheral device might have multiple System MMUs (usually one for each bus
master), but one System MMU can handle transactions from only one peripheral
device. The relation between a System MMU and the peripheral device needs to be
defined in device node of the peripheral device.

MFC in all Exynos SoCs and FIMD, M2M Scalers and G2D in Exynos5420 has 2 System
MMUs.
* MFC has one System MMU on its left and right bus.
* FIMD in Exynos5420 has one System MMU for window 0 and 4, the other system MMU
  for window 1, 2 and 3.
* M2M Scalers and G2D in Exynos5420 has one System MMU on the read channel and
  the other System MMU on the write channel.
The drivers must consider how to handle those System MMUs. One of the idea is
to implement child devices or sub-devices which are the client devices of the
System MMU.

Note:
The current DT binding for the Exynos System MMU is incomplete.
The following properties can be removed or changed, if found incompatible with
the "Generic IOMMU Binding" support for attaching devices to the IOMMU.

Required properties:
- compatible: Should be "samsung,exynos-sysmmu"
- reg: A tuple of base address and size of System MMU registers.
- interrupt-parent: The phandle of the interrupt controller of System MMU
- interrupts: An interrupt specifier for interrupt signal of System MMU,
              according to the format defined by a particular interrupt
              controller.
- clock-names: Should be "sysmmu" if the System MMU is needed to gate its clock.
               Optional "master" if the clock to the System MMU is gated by
               another gate clock other than "sysmmu".
               Exynos4 SoCs, there needs no "master" clock.
               Exynos5 SoCs, some System MMUs must have "master" clocks.
- clocks: Required if the System MMU is needed to gate its clock.
- samsung,power-domain: Required if the System MMU is needed to gate its power.
	  Please refer to the following document:
	  Documentation/devicetree/bindings/arm/exynos/power_domain.txt

Examples:
	gsc_0: gsc@13e00000 {
		compatible = "samsung,exynos5-gsc";
		reg = <0x13e00000 0x1000>;
		interrupts = <0 85 0>;
		samsung,power-domain = <&pd_gsc>;
		clocks = <&clock CLK_GSCL0>;
		clock-names = "gscl";
	};

	sysmmu_gsc0: sysmmu@13E80000 {
		compatible = "samsung,exynos-sysmmu";
		reg = <0x13E80000 0x1000>;
		interrupt-parent = <&combiner>;
		interrupts = <2 0>;
		clock-names = "sysmmu", "master";
		clocks = <&clock CLK_SMMU_GSCL0>, <&clock CLK_GSCL0>;
		samsung,power-domain = <&pd_gsc>;
	};
@@ -178,13 +178,13 @@ config TEGRA_IOMMU_SMMU

 config EXYNOS_IOMMU
 	bool "Exynos IOMMU Support"
-	depends on ARCH_EXYNOS && EXYNOS_DEV_SYSMMU
+	depends on ARCH_EXYNOS
 	select IOMMU_API
 	help
-	  Support for the IOMMU(System MMU) of Samsung Exynos application
-	  processor family. This enables H/W multimedia accellerators to see
-	  non-linear physical memory chunks as a linear memory in their
-	  address spaces
+	  Support for the IOMMU (System MMU) of Samsung Exynos application
+	  processor family. This enables H/W multimedia accelerators to see
+	  non-linear physical memory chunks as linear memory in their
+	  address space.

 	  If unsure, say N here.
@@ -193,9 +193,9 @@ config EXYNOS_IOMMU_DEBUG
 	depends on EXYNOS_IOMMU
 	help
 	  Select this to see the detailed log message that shows what
-	  happens in the IOMMU driver
+	  happens in the IOMMU driver.

-	  Say N unless you need kernel log message for IOMMU debugging
+	  Say N unless you need kernel log message for IOMMU debugging.

 config SHMOBILE_IPMMU
 	bool
@@ -272,6 +272,18 @@ config SHMOBILE_IOMMU_L1SIZE
 	default 256 if SHMOBILE_IOMMU_ADDRSIZE_64MB
 	default 128 if SHMOBILE_IOMMU_ADDRSIZE_32MB

+config IPMMU_VMSA
+	bool "Renesas VMSA-compatible IPMMU"
+	depends on ARM_LPAE
+	depends on ARCH_SHMOBILE || COMPILE_TEST
+	select IOMMU_API
+	select ARM_DMA_USE_IOMMU
+	help
+	  Support for the Renesas VMSA-compatible IPMMU Renesas found in the
+	  R-Mobile APE6 and R-Car H2/M2 SoCs.
+
+	  If unsure, say N.
+
 config SPAPR_TCE_IOMMU
 	bool "sPAPR TCE IOMMU Support"
 	depends on PPC_POWERNV || PPC_PSERIES
@@ -7,6 +7,7 @@ obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o
 obj-$(CONFIG_ARM_SMMU) += arm-smmu.o
 obj-$(CONFIG_DMAR_TABLE) += dmar.o
 obj-$(CONFIG_INTEL_IOMMU) += iova.o intel-iommu.o
+obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
 obj-$(CONFIG_IRQ_REMAP) += intel_irq_remapping.o irq_remapping.o
 obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o
 obj-$(CONFIG_OMAP_IOMMU) += omap-iommu2.o
@@ -3499,8 +3499,6 @@ int __init amd_iommu_init_passthrough(void)
 {
 	struct iommu_dev_data *dev_data;
 	struct pci_dev *dev = NULL;
-	struct amd_iommu *iommu;
-	u16 devid;
 	int ret;

 	ret = alloc_passthrough_domain();
@@ -3514,12 +3512,6 @@ int __init amd_iommu_init_passthrough(void)
 		dev_data = get_dev_data(&dev->dev);
 		dev_data->passthrough = true;

-		devid = get_device_id(&dev->dev);
-
-		iommu = amd_iommu_rlookup_table[devid];
-		if (!iommu)
-			continue;
-
 		attach_device(&dev->dev, pt_domain);
 	}

@@ -45,6 +45,8 @@ struct pri_queue {
 struct pasid_state {
 	struct list_head list;			/* For global state-list */
 	atomic_t count;				/* Reference count */
+	atomic_t mmu_notifier_count;		/* Counting nested mmu_notifier
+						   calls */
 	struct task_struct *task;		/* Task bound to this PASID */
 	struct mm_struct *mm;			/* mm_struct for the faults */
 	struct mmu_notifier mn;			/* mmu_otifier handle */
@@ -56,6 +58,8 @@ struct pasid_state {
 };

 struct device_state {
+	struct list_head list;
+	u16 devid;
 	atomic_t count;
 	struct pci_dev *pdev;
 	struct pasid_state **states;
@@ -81,13 +85,9 @@ struct fault {
 	u16 flags;
 };

-static struct device_state **state_table;
+static LIST_HEAD(state_list);
 static spinlock_t state_lock;

-/* List and lock for all pasid_states */
-static LIST_HEAD(pasid_state_list);
-static DEFINE_SPINLOCK(ps_lock);
-
 static struct workqueue_struct *iommu_wq;

 /*
@@ -99,7 +99,6 @@ static u64 *empty_page_table;

 static void free_pasid_states(struct device_state *dev_state);
 static void unbind_pasid(struct device_state *dev_state, int pasid);
-static int task_exit(struct notifier_block *nb, unsigned long e, void *data);

 static u16 device_id(struct pci_dev *pdev)
 {
@@ -111,13 +110,25 @@ static u16 device_id(struct pci_dev *pdev)
 	return devid;
 }

+static struct device_state *__get_device_state(u16 devid)
+{
+	struct device_state *dev_state;
+
+	list_for_each_entry(dev_state, &state_list, list) {
+		if (dev_state->devid == devid)
+			return dev_state;
+	}
+
+	return NULL;
+}
+
 static struct device_state *get_device_state(u16 devid)
 {
 	struct device_state *dev_state;
 	unsigned long flags;

 	spin_lock_irqsave(&state_lock, flags);
-	dev_state = state_table[devid];
+	dev_state = __get_device_state(devid);
 	if (dev_state != NULL)
 		atomic_inc(&dev_state->count);
 	spin_unlock_irqrestore(&state_lock, flags);
@@ -158,29 +169,6 @@ static void put_device_state_wait(struct device_state *dev_state)
 	free_device_state(dev_state);
 }

-static struct notifier_block profile_nb = {
-	.notifier_call = task_exit,
-};
-
-static void link_pasid_state(struct pasid_state *pasid_state)
-{
-	spin_lock(&ps_lock);
-	list_add_tail(&pasid_state->list, &pasid_state_list);
-	spin_unlock(&ps_lock);
-}
-
-static void __unlink_pasid_state(struct pasid_state *pasid_state)
-{
-	list_del(&pasid_state->list);
-}
-
-static void unlink_pasid_state(struct pasid_state *pasid_state)
-{
-	spin_lock(&ps_lock);
-	__unlink_pasid_state(pasid_state);
-	spin_unlock(&ps_lock);
-}
-
 /* Must be called under dev_state->lock */
 static struct pasid_state **__get_pasid_state_ptr(struct device_state *dev_state,
 						  int pasid, bool alloc)
@@ -337,7 +325,6 @@ static void unbind_pasid(struct device_state *dev_state, int pasid)
 	if (pasid_state == NULL)
 		return;

-	unlink_pasid_state(pasid_state);
 	__unbind_pasid(pasid_state);
 	put_pasid_state_wait(pasid_state); /* Reference taken in this function */
 }
@@ -379,7 +366,12 @@ static void free_pasid_states(struct device_state *dev_state)
 			continue;

 		put_pasid_state(pasid_state);
-		unbind_pasid(dev_state, i);
+
+		/*
+		 * This will call the mn_release function and
+		 * unbind the PASID
+		 */
+		mmu_notifier_unregister(&pasid_state->mn, pasid_state->mm);
 	}

 	if (dev_state->pasid_levels == 2)
@@ -443,8 +435,11 @@ static void mn_invalidate_range_start(struct mmu_notifier *mn,
 	pasid_state = mn_to_state(mn);
 	dev_state   = pasid_state->device_state;

-	amd_iommu_domain_set_gcr3(dev_state->domain, pasid_state->pasid,
-				  __pa(empty_page_table));
+	if (atomic_add_return(1, &pasid_state->mmu_notifier_count) == 1) {
+		amd_iommu_domain_set_gcr3(dev_state->domain,
+					  pasid_state->pasid,
+					  __pa(empty_page_table));
+	}
 }

 static void mn_invalidate_range_end(struct mmu_notifier *mn,
@@ -457,11 +452,31 @@ static void mn_invalidate_range_end(struct mmu_notifier *mn,
 	pasid_state = mn_to_state(mn);
 	dev_state   = pasid_state->device_state;

-	amd_iommu_domain_set_gcr3(dev_state->domain, pasid_state->pasid,
-				  __pa(pasid_state->mm->pgd));
+	if (atomic_dec_and_test(&pasid_state->mmu_notifier_count)) {
+		amd_iommu_domain_set_gcr3(dev_state->domain,
+					  pasid_state->pasid,
+					  __pa(pasid_state->mm->pgd));
+	}
+}
+
+static void mn_release(struct mmu_notifier *mn, struct mm_struct *mm)
+{
+	struct pasid_state *pasid_state;
+	struct device_state *dev_state;
+
+	might_sleep();
+
+	pasid_state = mn_to_state(mn);
+	dev_state   = pasid_state->device_state;
+
+	if (pasid_state->device_state->inv_ctx_cb)
+		dev_state->inv_ctx_cb(dev_state->pdev, pasid_state->pasid);
+
+	unbind_pasid(dev_state, pasid_state->pasid);
 }

 static struct mmu_notifier_ops iommu_mn = {
+	.release		= mn_release,
 	.clear_flush_young      = mn_clear_flush_young,
 	.change_pte             = mn_change_pte,
 	.invalidate_page        = mn_invalidate_page,
@@ -606,53 +621,6 @@ static struct notifier_block ppr_nb = {
 	.notifier_call = ppr_notifier,
 };

-static int task_exit(struct notifier_block *nb, unsigned long e, void *data)
-{
-	struct pasid_state *pasid_state;
-	struct task_struct *task;
-
-	task = data;
-
-	/*
-	 * Using this notifier is a hack - but there is no other choice
-	 * at the moment. What I really want is a sleeping notifier that
-	 * is called when an MM goes down. But such a notifier doesn't
-	 * exist yet. The notifier needs to sleep because it has to make
-	 * sure that the device does not use the PASID and the address
-	 * space anymore before it is destroyed. This includes waiting
-	 * for pending PRI requests to pass the workqueue. The
-	 * MMU-Notifiers would be a good fit, but they use RCU and so
-	 * they are not allowed to sleep. Lets see how we can solve this
-	 * in a more intelligent way in the future.
-	 */
-again:
-	spin_lock(&ps_lock);
-	list_for_each_entry(pasid_state, &pasid_state_list, list) {
-		struct device_state *dev_state;
-		int pasid;
-
-		if (pasid_state->task != task)
-			continue;
-
-		/* Drop Lock and unbind */
-		spin_unlock(&ps_lock);
-
-		dev_state = pasid_state->device_state;
-		pasid     = pasid_state->pasid;
-
-		if (pasid_state->device_state->inv_ctx_cb)
-			dev_state->inv_ctx_cb(dev_state->pdev, pasid);
-
-		unbind_pasid(dev_state, pasid);
-
-		/* Task may be in the list multiple times */
-		goto again;
-	}
-	spin_unlock(&ps_lock);
-
-	return NOTIFY_OK;
-}
-
 int amd_iommu_bind_pasid(struct pci_dev *pdev, int pasid,
 			 struct task_struct *task)
 {
@@ -682,6 +650,7 @@ int amd_iommu_bind_pasid(struct pci_dev *pdev, int pasid,
 		goto out;

 	atomic_set(&pasid_state->count, 1);
+	atomic_set(&pasid_state->mmu_notifier_count, 0);
 	init_waitqueue_head(&pasid_state->wq);
 	spin_lock_init(&pasid_state->lock);
@@ -705,8 +674,6 @@ int amd_iommu_bind_pasid(struct pci_dev *pdev, int pasid,
 	if (ret)
 		goto out_clear_state;

-	link_pasid_state(pasid_state);
-
 	return 0;

 out_clear_state:
@@ -727,6 +694,7 @@ EXPORT_SYMBOL(amd_iommu_bind_pasid);

 void amd_iommu_unbind_pasid(struct pci_dev *pdev, int pasid)
 {
+	struct pasid_state *pasid_state;
 	struct device_state *dev_state;
 	u16 devid;
@@ -743,7 +711,17 @@ void amd_iommu_unbind_pasid(struct pci_dev *pdev, int pasid)
 	if (pasid < 0 || pasid >= dev_state->max_pasids)
 		goto out;

-	unbind_pasid(dev_state, pasid);
+	pasid_state = get_pasid_state(dev_state, pasid);
+	if (pasid_state == NULL)
+		goto out;
+	/*
+	 * Drop reference taken here. We are safe because we still hold
+	 * the reference taken in the amd_iommu_bind_pasid function.
+	 */
+	put_pasid_state(pasid_state);
+
+	/* This will call the mn_release function and unbind the PASID */
+	mmu_notifier_unregister(&pasid_state->mn, pasid_state->mm);

 out:
 	put_device_state(dev_state);
@@ -773,7 +751,8 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)

 	spin_lock_init(&dev_state->lock);
 	init_waitqueue_head(&dev_state->wq);
 	dev_state->pdev = pdev;
+	dev_state->devid = devid;

 	tmp = pasids;
 	for (dev_state->pasid_levels = 0; (tmp - 1) & ~0x1ff; tmp >>= 9)
@@ -803,13 +782,13 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)

 	spin_lock_irqsave(&state_lock, flags);

-	if (state_table[devid] != NULL) {
+	if (__get_device_state(devid) != NULL) {
 		spin_unlock_irqrestore(&state_lock, flags);
 		ret = -EBUSY;
 		goto out_free_domain;
 	}

-	state_table[devid] = dev_state;
+	list_add_tail(&dev_state->list, &state_list);

 	spin_unlock_irqrestore(&state_lock, flags);
@@ -841,13 +820,13 @@ void amd_iommu_free_device(struct pci_dev *pdev)

 	spin_lock_irqsave(&state_lock, flags);

-	dev_state = state_table[devid];
+	dev_state = __get_device_state(devid);
 	if (dev_state == NULL) {
 		spin_unlock_irqrestore(&state_lock, flags);
 		return;
 	}

-	state_table[devid] = NULL;
+	list_del(&dev_state->list);

 	spin_unlock_irqrestore(&state_lock, flags);
@@ -874,7 +853,7 @@ int amd_iommu_set_invalid_ppr_cb(struct pci_dev *pdev,
 	spin_lock_irqsave(&state_lock, flags);

 	ret = -EINVAL;
-	dev_state = state_table[devid];
+	dev_state = __get_device_state(devid);
 	if (dev_state == NULL)
 		goto out_unlock;
@@ -905,7 +884,7 @@ int amd_iommu_set_invalidate_ctx_cb(struct pci_dev *pdev,
 	spin_lock_irqsave(&state_lock, flags);

 	ret = -EINVAL;
-	dev_state = state_table[devid];
+	dev_state = __get_device_state(devid);
 	if (dev_state == NULL)
 		goto out_unlock;
@@ -922,7 +901,6 @@ EXPORT_SYMBOL(amd_iommu_set_invalidate_ctx_cb);

 static int __init amd_iommu_v2_init(void)
 {
-	size_t state_table_size;
 	int ret;

 	pr_info("AMD IOMMUv2 driver by Joerg Roedel <joerg.roedel@amd.com>\n");
@@ -938,16 +916,10 @@ static int __init amd_iommu_v2_init(void)

 	spin_lock_init(&state_lock);

-	state_table_size = MAX_DEVICES * sizeof(struct device_state *);
-	state_table = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-					       get_order(state_table_size));
-	if (state_table == NULL)
-		return -ENOMEM;
-
 	ret = -ENOMEM;
 	iommu_wq = create_workqueue("amd_iommu_v2");
 	if (iommu_wq == NULL)
-		goto out_free;
+		goto out;

 	ret = -ENOMEM;
 	empty_page_table = (u64 *)get_zeroed_page(GFP_KERNEL);
@@ -955,29 +927,24 @@ static int __init amd_iommu_v2_init(void)
 		goto out_destroy_wq;

 	amd_iommu_register_ppr_notifier(&ppr_nb);
-	profile_event_register(PROFILE_TASK_EXIT, &profile_nb);

 	return 0;

 out_destroy_wq:
 	destroy_workqueue(iommu_wq);

-out_free:
-	free_pages((unsigned long)state_table, get_order(state_table_size));
-
+out:
 	return ret;
 }

 static void __exit amd_iommu_v2_exit(void)
 {
 	struct device_state *dev_state;
-	size_t state_table_size;
 	int i;

 	if (!amd_iommu_v2_supported())
 		return;

-	profile_event_unregister(PROFILE_TASK_EXIT, &profile_nb);
 	amd_iommu_unregister_ppr_notifier(&ppr_nb);

 	flush_workqueue(iommu_wq);
@@ -1000,9 +967,6 @@ static void __exit amd_iommu_v2_exit(void)

 	destroy_workqueue(iommu_wq);

-	state_table_size = MAX_DEVICES * sizeof(struct device_state *);
-	free_pages((unsigned long)state_table, get_order(state_table_size));
-
 	free_page((unsigned long)empty_page_table);
 }
@@ -1167,7 +1167,7 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
 	for (i = 0; i < master->num_streamids; ++i) {
 		u32 idx, s2cr;
 		idx = master->smrs ? master->smrs[i].idx : master->streamids[i];
-		s2cr = (S2CR_TYPE_TRANS << S2CR_TYPE_SHIFT) |
+		s2cr = S2CR_TYPE_TRANS |
 		       (smmu_domain->root_cfg.cbndx << S2CR_CBNDX_SHIFT);
 		writel_relaxed(s2cr, gr0_base + ARM_SMMU_GR0_S2CR(idx));
 	}
@@ -1804,7 +1804,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
 	 * allocation (PTRS_PER_PGD).
 	 */
 #ifdef CONFIG_64BIT
-	smmu->s1_output_size = min(39UL, size);
+	smmu->s1_output_size = min((unsigned long)VA_BITS, size);
 #else
 	smmu->s1_output_size = min(32UL, size);
 #endif

File diff suppressed because it is too large
@@ -592,8 +592,7 @@ found_cpu_node:
 		/* advance to next node in cache hierarchy */
 		node = of_find_node_by_phandle(*prop);
 		if (!node) {
-			pr_debug("Invalid node for cache hierarchy %s\n",
-				 node->full_name);
+			pr_debug("Invalid node for cache hierarchy\n");
 			return ~(u32)0;
 		}
 	}
1255  drivers/iommu/ipmmu-vmsa.c  (new file)
File diff suppressed because it is too large
@ -127,13 +127,12 @@ static void msm_iommu_reset(void __iomem *base, int ncb)
|
||||||
|
|
||||||
static int msm_iommu_probe(struct platform_device *pdev)
|
static int msm_iommu_probe(struct platform_device *pdev)
|
||||||
{
|
{
|
||||||
struct resource *r, *r2;
|
struct resource *r;
|
||||||
struct clk *iommu_clk;
|
struct clk *iommu_clk;
|
||||||
struct clk *iommu_pclk;
|
struct clk *iommu_pclk;
|
||||||
struct msm_iommu_drvdata *drvdata;
|
struct msm_iommu_drvdata *drvdata;
|
||||||
struct msm_iommu_dev *iommu_dev = pdev->dev.platform_data;
|
struct msm_iommu_dev *iommu_dev = pdev->dev.platform_data;
|
||||||
void __iomem *regs_base;
|
void __iomem *regs_base;
|
||||||
resource_size_t len;
|
|
||||||
int ret, irq, par;
|
int ret, irq, par;
|
||||||
|
|
||||||
if (pdev->id == -1) {
|
if (pdev->id == -1) {
|
||||||
|
@@ -178,35 +177,16 @@ static int msm_iommu_probe(struct platform_device *pdev)
 	iommu_clk = NULL;
 
 	r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "physbase");
-
-	if (!r) {
-		ret = -ENODEV;
+	regs_base = devm_ioremap_resource(&pdev->dev, r);
+	if (IS_ERR(regs_base)) {
+		ret = PTR_ERR(regs_base);
 		goto fail_clk;
 	}
 
-	len = resource_size(r);
-
-	r2 = request_mem_region(r->start, len, r->name);
-	if (!r2) {
-		pr_err("Could not request memory region: start=%p, len=%d\n",
-		       (void *) r->start, len);
-		ret = -EBUSY;
-		goto fail_clk;
-	}
-
-	regs_base = ioremap(r2->start, len);
-
-	if (!regs_base) {
-		pr_err("Could not ioremap: start=%p, len=%d\n",
-		       (void *) r2->start, len);
-		ret = -EBUSY;
-		goto fail_mem;
-	}
-
 	irq = platform_get_irq_byname(pdev, "secure_irq");
 	if (irq < 0) {
 		ret = -ENODEV;
-		goto fail_io;
+		goto fail_clk;
 	}
 
 	msm_iommu_reset(regs_base, iommu_dev->ncb);
@@ -222,14 +202,14 @@ static int msm_iommu_probe(struct platform_device *pdev)
 	if (!par) {
 		pr_err("%s: Invalid PAR value detected\n", iommu_dev->name);
 		ret = -ENODEV;
-		goto fail_io;
+		goto fail_clk;
 	}
 
 	ret = request_irq(irq, msm_iommu_fault_handler, 0,
 			  "msm_iommu_secure_irpt_handler", drvdata);
 	if (ret) {
 		pr_err("Request IRQ %d failed with ret=%d\n", irq, ret);
-		goto fail_io;
+		goto fail_clk;
 	}
 
 
@@ -250,10 +230,6 @@ static int msm_iommu_probe(struct platform_device *pdev)
 	clk_disable(iommu_pclk);
 
 	return 0;
-fail_io:
-	iounmap(regs_base);
-fail_mem:
-	release_mem_region(r->start, len);
 fail_clk:
 	if (iommu_clk) {
 		clk_disable(iommu_clk);
@@ -34,6 +34,9 @@
 #include "omap-iopgtable.h"
 #include "omap-iommu.h"
 
+#define to_iommu(dev)	\
+	((struct omap_iommu *)platform_get_drvdata(to_platform_device(dev)))
+
 #define for_each_iotlb_cr(obj, n, __i, cr)	\
 	for (__i = 0;	\
 	     (__i < (n)) && (cr = __iotlb_read_cr((obj), __i), true);	\
@@ -391,6 +394,7 @@ static void flush_iotlb_page(struct omap_iommu *obj, u32 da)
 				__func__, start, da, bytes);
 			iotlb_load_cr(obj, &cr);
 			iommu_write_reg(obj, 1, MMU_FLUSH_ENTRY);
+			break;
 		}
 	}
 	pm_runtime_put_sync(obj->dev);
@@ -1037,19 +1041,18 @@ static void iopte_cachep_ctor(void *iopte)
 	clean_dcache_area(iopte, IOPTE_TABLE_SIZE);
 }
 
-static u32 iotlb_init_entry(struct iotlb_entry *e, u32 da, u32 pa,
-			    u32 flags)
+static u32 iotlb_init_entry(struct iotlb_entry *e, u32 da, u32 pa, int pgsz)
 {
 	memset(e, 0, sizeof(*e));
 
 	e->da = da;
 	e->pa = pa;
-	e->valid = 1;
+	e->valid = MMU_CAM_V;
 	/* FIXME: add OMAP1 support */
-	e->pgsz = flags & MMU_CAM_PGSZ_MASK;
-	e->endian = flags & MMU_RAM_ENDIAN_MASK;
-	e->elsz = flags & MMU_RAM_ELSZ_MASK;
-	e->mixed = flags & MMU_RAM_MIXED_MASK;
+	e->pgsz = pgsz;
+	e->endian = MMU_RAM_ENDIAN_LITTLE;
+	e->elsz = MMU_RAM_ELSZ_8;
+	e->mixed = 0;
 
 	return iopgsz_to_bytes(e->pgsz);
 }
@@ -1062,9 +1065,8 @@ static int omap_iommu_map(struct iommu_domain *domain, unsigned long da,
 	struct device *dev = oiommu->dev;
 	struct iotlb_entry e;
 	int omap_pgsz;
-	u32 ret, flags;
+	u32 ret;
 
-	/* we only support mapping a single iommu page for now */
 	omap_pgsz = bytes_to_iopgsz(bytes);
 	if (omap_pgsz < 0) {
 		dev_err(dev, "invalid size to map: %d\n", bytes);
@@ -1073,9 +1075,7 @@ static int omap_iommu_map(struct iommu_domain *domain, unsigned long da,
 
 	dev_dbg(dev, "mapping da 0x%lx to pa 0x%x size 0x%x\n", da, pa, bytes);
 
-	flags = omap_pgsz | prot;
-
-	iotlb_init_entry(&e, da, pa, flags);
+	iotlb_init_entry(&e, da, pa, omap_pgsz);
 
 	ret = omap_iopgtable_store_entry(oiommu, &e);
 	if (ret)
@@ -1248,12 +1248,6 @@ static phys_addr_t omap_iommu_iova_to_phys(struct iommu_domain *domain,
 	return ret;
 }
 
-static int omap_iommu_domain_has_cap(struct iommu_domain *domain,
-				     unsigned long cap)
-{
-	return 0;
-}
-
 static int omap_iommu_add_device(struct device *dev)
 {
 	struct omap_iommu_arch_data *arch_data;
@@ -1305,7 +1299,6 @@ static struct iommu_ops omap_iommu_ops = {
 	.map = omap_iommu_map,
 	.unmap = omap_iommu_unmap,
 	.iova_to_phys = omap_iommu_iova_to_phys,
-	.domain_has_cap = omap_iommu_domain_has_cap,
 	.add_device = omap_iommu_add_device,
 	.remove_device = omap_iommu_remove_device,
 	.pgsize_bitmap = OMAP_IOMMU_PGSIZES,
@@ -93,6 +93,3 @@ static inline phys_addr_t omap_iommu_translate(u32 d, u32 va, u32 mask)
 /* to find an entry in the second-level page table. */
 #define iopte_index(da)		(((da) >> IOPTE_SHIFT) & (PTRS_PER_IOPTE - 1))
 #define iopte_offset(iopgd, da)	(iopgd_page_vaddr(iopgd) + iopte_index(da))
-
-#define to_iommu(dev) \
-	(platform_get_drvdata(to_platform_device(dev)))
@@ -94,11 +94,6 @@ static int ipmmu_probe(struct platform_device *pdev)
 	struct resource *res;
 	struct shmobile_ipmmu_platform_data *pdata = pdev->dev.platform_data;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!res) {
-		dev_err(&pdev->dev, "cannot get platform resources\n");
-		return -ENOENT;
-	}
 	ipmmu = devm_kzalloc(&pdev->dev, sizeof(*ipmmu), GFP_KERNEL);
 	if (!ipmmu) {
 		dev_err(&pdev->dev, "cannot allocate device data\n");
@@ -106,19 +101,18 @@ static int ipmmu_probe(struct platform_device *pdev)
 	}
 	spin_lock_init(&ipmmu->flush_lock);
 	ipmmu->dev = &pdev->dev;
-	ipmmu->ipmmu_base = devm_ioremap_nocache(&pdev->dev, res->start,
-						 resource_size(res));
-	if (!ipmmu->ipmmu_base) {
-		dev_err(&pdev->dev, "ioremap_nocache failed\n");
-		return -ENOMEM;
-	}
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	ipmmu->ipmmu_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(ipmmu->ipmmu_base))
+		return PTR_ERR(ipmmu->ipmmu_base);
 	ipmmu->dev_names = pdata->dev_names;
 	ipmmu->num_dev_names = pdata->num_dev_names;
 	platform_set_drvdata(pdev, ipmmu);
 	ipmmu_reg_write(ipmmu, IMCTR1, 0x0); /* disable TLB */
 	ipmmu_reg_write(ipmmu, IMCTR2, 0x0); /* disable PMB */
-	ipmmu_iommu_init(ipmmu);
-	return 0;
+	return ipmmu_iommu_init(ipmmu);
 }
 
 static struct platform_driver ipmmu_driver = {
include/linux/platform_data/ipmmu-vmsa.h (new file, 24 lines)
@@ -0,0 +1,24 @@
+/*
+ * IPMMU VMSA Platform Data
+ *
+ * Copyright (C) 2014 Renesas Electronics Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ */
+
+#ifndef __IPMMU_VMSA_H__
+#define __IPMMU_VMSA_H__
+
+struct ipmmu_vmsa_master {
+	const char *name;
+	unsigned int utlb;
+};
+
+struct ipmmu_vmsa_platform_data {
+	const struct ipmmu_vmsa_master *masters;
+	unsigned int num_masters;
+};
+
+#endif /* __IPMMU_VMSA_H__ */