Merge branch 'linus' into x86/apic

Conflicts:
	arch/x86/mach-default/setup.c

Semantic conflict resolution:
	arch/x86/kernel/setup.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>

commit fc6fc7f1b1
330 changed files with 5039 additions and 1547 deletions
CREDITS | 1

@@ -2166,7 +2166,6 @@ D: Initial implementation of VC's, pty's and select()
 
 N: Pavel Machek
 E: pavel@ucw.cz
-E: pavel@suse.cz
 D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd
 D: sun4/330 port, capabilities for elf, speedup for rm on ext2, USB,
 D: work on suspend-to-ram/disk, killing duplicates from ioctl32
@@ -1,6 +1,6 @@
 What:		/sys/firmware/memmap/
 Date:		June 2008
-Contact:	Bernhard Walle <bwalle@suse.de>
+Contact:	Bernhard Walle <bernhard.walle@gmx.de>
 Description:
 		On all platforms, the firmware provides a memory map which the
 		kernel reads. The resources from that memory map are registered
@@ -93,7 +93,7 @@ the PCI Express Port Bus driver from loading a service driver.
 
 int pcie_port_service_register(struct pcie_port_service_driver *new)
 
-This API replaces the Linux Driver Model's pci_module_init API. A
+This API replaces the Linux Driver Model's pci_register_driver API. A
 service driver should always calls pcie_port_service_register at
 module init. Note that after service driver being loaded, calls
 such as pci_enable_device(dev) and pci_set_master(dev) are no longer
@@ -252,10 +252,8 @@ cgroup file system directories.
 When a task is moved from one cgroup to another, it gets a new
 css_set pointer - if there's an already existing css_set with the
 desired collection of cgroups then that group is reused, else a new
-css_set is allocated. Note that the current implementation uses a
-linear search to locate an appropriate existing css_set, so isn't
-very efficient. A future version will use a hash table for better
-performance.
+css_set is allocated. The appropriate existing css_set is located by
+looking into a hash table.
 
 To allow access from a cgroup to the css_sets (and hence tasks)
 that comprise it, a set of cg_cgroup_link objects form a lattice;
@@ -142,7 +142,7 @@ into the rest of the kernel, none in performance critical paths:
 - in fork and exit, to attach and detach a task from its cpuset.
 - in sched_setaffinity, to mask the requested CPUs by what's
   allowed in that tasks cpuset.
-- in sched.c migrate_all_tasks(), to keep migrating tasks within
+- in sched.c migrate_live_tasks(), to keep migrating tasks within
   the CPUs allowed by their cpuset, if possible.
 - in the mbind and set_mempolicy system calls, to mask the requested
   Memory Nodes by what's allowed in that tasks cpuset.

@@ -175,6 +175,10 @@ files describing that cpuset:
 - mem_exclusive flag: is memory placement exclusive?
+- mem_hardwall flag: is memory allocation hardwalled
 - memory_pressure: measure of how much paging pressure in cpuset
+- memory_spread_page flag: if set, spread page cache evenly on allowed nodes
+- memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes
 - sched_load_balance flag: if set, load balance within CPUs on that cpuset
+- sched_relax_domain_level: the searching range when migrating tasks
 
 In addition, the root cpuset only has the following file:
 - memory_pressure_enabled flag: compute memory_pressure?

@@ -252,7 +256,7 @@ is causing.
 
 This is useful both on tightly managed systems running a wide mix of
 submitted jobs, which may choose to terminate or re-prioritize jobs that
-are trying to use more memory than allowed on the nodes assigned them,
+are trying to use more memory than allowed on the nodes assigned to them,
 and with tightly coupled, long running, massively parallel scientific
 computing jobs that will dramatically fail to meet required performance
 goals if they start to use more memory than allowed to them.

@@ -378,7 +382,7 @@ as cpusets and sched_setaffinity.
 The algorithmic cost of load balancing and its impact on key shared
 kernel data structures such as the task list increases more than
 linearly with the number of CPUs being balanced. So the scheduler
-has support to  partition the systems CPUs into a number of sched
+has support to partition the systems CPUs into a number of sched
 domains such that it only load balances within each sched domain.
 Each sched domain covers some subset of the CPUs in the system;
 no two sched domains overlap; some CPUs might not be in any sched
@@ -485,17 +489,22 @@ of CPUs allowed to a cpuset having 'sched_load_balance' enabled.
 The internal kernel cpuset to scheduler interface passes from the
 cpuset code to the scheduler code a partition of the load balanced
 CPUs in the system. This partition is a set of subsets (represented
-as an array of cpumask_t) of CPUs, pairwise disjoint, that cover all
-the CPUs that must be load balanced.
+as an array of struct cpumask) of CPUs, pairwise disjoint, that cover
+all the CPUs that must be load balanced.
 
-Whenever the 'sched_load_balance' flag changes, or CPUs come or go
-from a cpuset with this flag enabled, or a cpuset with this flag
-enabled is removed, the cpuset code builds a new such partition and
-passes it to the scheduler sched domain setup code, to have the sched
-domains rebuilt as necessary.
+The cpuset code builds a new such partition and passes it to the
+scheduler sched domain setup code, to have the sched domains rebuilt
+as necessary, whenever:
+ - the 'sched_load_balance' flag of a cpuset with non-empty CPUs changes,
+ - or CPUs come or go from a cpuset with this flag enabled,
+ - or 'sched_relax_domain_level' value of a cpuset with non-empty CPUs
+   and with this flag enabled changes,
+ - or a cpuset with non-empty CPUs and with this flag enabled is removed,
+ - or a cpu is offlined/onlined.
 
 This partition exactly defines what sched domains the scheduler should
-setup - one sched domain for each element (cpumask_t) in the partition.
+setup - one sched domain for each element (struct cpumask) in the
+partition.
 
 The scheduler remembers the currently active sched domain partitions.
 When the scheduler routine partition_sched_domains() is invoked from

@@ -559,7 +568,7 @@ domain, the largest value among those is used. Be careful, if one
 requests 0 and others are -1 then 0 is used.
 
 Note that modifying this file will have both good and bad effects,
-and whether it is acceptable or not will be depend on your situation.
+and whether it is acceptable or not depends on your situation.
 Don't modify this file if you are not sure.
 
 If your situation is:
@@ -600,19 +609,15 @@ to allocate a page of memory for that task.
 
 If a cpuset has its 'cpus' modified, then each task in that cpuset
 will have its allowed CPU placement changed immediately. Similarly,
-if a tasks pid is written to a cpusets 'tasks' file, in either its
-current cpuset or another cpuset, then its allowed CPU placement is
-changed immediately. If such a task had been bound to some subset
-of its cpuset using the sched_setaffinity() call, the task will be
-allowed to run on any CPU allowed in its new cpuset, negating the
-affect of the prior sched_setaffinity() call.
+if a tasks pid is written to another cpusets 'tasks' file, then its
+allowed CPU placement is changed immediately. If such a task had been
+bound to some subset of its cpuset using the sched_setaffinity() call,
+the task will be allowed to run on any CPU allowed in its new cpuset,
+negating the effect of the prior sched_setaffinity() call.
 
 In summary, the memory placement of a task whose cpuset is changed is
 updated by the kernel, on the next allocation of a page for that task,
-but the processor placement is not updated, until that tasks pid is
-rewritten to the 'tasks' file of its cpuset. This is done to avoid
-impacting the scheduler code in the kernel with a check for changes
-in a tasks processor placement.
+and the processor placement is updated immediately.
 
 Normally, once a page is allocated (given a physical page
 of main memory) then that page stays on whatever node it

@@ -681,10 +686,14 @@ and then start a subshell 'sh' in that cpuset:
   # The next line should display '/Charlie'
   cat /proc/self/cpuset
 
-In the future, a C library interface to cpusets will likely be
-available. For now, the only way to query or modify cpusets is
-via the cpuset file system, using the various cd, mkdir, echo, cat,
-rmdir commands from the shell, or their equivalent from C.
+There are ways to query or modify cpusets:
+ - via the cpuset file system directly, using the various cd, mkdir, echo,
+   cat, rmdir commands from the shell, or their equivalent from C.
+ - via the C library libcpuset.
+ - via the C library libcgroup.
+   (http://sourceforge.net/projects/libcg/)
+ - via the python application cset.
+   (http://developer.novell.com/wiki/index.php/Cpuset)
 
 The sched_setaffinity calls can also be done at the shell prompt using
 SGI's runon or Robert Love's taskset. The mbind and set_mempolicy

@@ -756,7 +765,7 @@ mount -t cpuset X /dev/cpuset
 
 is equivalent to
 
-mount -t cgroup -ocpuset X /dev/cpuset
+mount -t cgroup -ocpuset,noprefix X /dev/cpuset
 echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent
 
 2.2 Adding/removing cpus
@@ -127,9 +127,11 @@ void unlock_device(struct device * dev);
 Attributes
 ~~~~~~~~~~
 struct device_attribute {
-    struct attribute    attr;
-    ssize_t (*show)(struct device * dev, char * buf, size_t count, loff_t off);
-    ssize_t (*store)(struct device * dev, const char * buf, size_t count, loff_t off);
+    struct attribute    attr;
+    ssize_t (*show)(struct device *dev, struct device_attribute *attr,
+            char *buf);
+    ssize_t (*store)(struct device *dev, struct device_attribute *attr,
+             const char *buf, size_t count);
 };
 
 Attributes of devices can be exported via drivers using a simple
@@ -2,8 +2,10 @@
 sysfs - _The_ filesystem for exporting kernel objects.
 
 Patrick Mochel	<mochel@osdl.org>
+Mike Murphy <mamurph@cs.clemson.edu>
 
-10 January 2003
+Revised:    22 February 2009
+Original:   10 January 2003
 
 
 What it is:

@@ -64,12 +66,13 @@ An attribute definition is simply:
 
 struct attribute {
     char            * name;
+    struct module       *owner;
     mode_t          mode;
 };
 
 
-int sysfs_create_file(struct kobject * kobj, struct attribute * attr);
-void sysfs_remove_file(struct kobject * kobj, struct attribute * attr);
+int sysfs_create_file(struct kobject * kobj, const struct attribute * attr);
+void sysfs_remove_file(struct kobject * kobj, const struct attribute * attr);
 
 
 A bare attribute contains no means to read or write the value of the

@@ -80,9 +83,11 @@ a specific object type.
 For example, the driver model defines struct device_attribute like:
 
 struct device_attribute {
-    struct attribute    attr;
-    ssize_t (*show)(struct device * dev, char * buf);
-    ssize_t (*store)(struct device * dev, const char * buf);
+    struct attribute    attr;
+    ssize_t (*show)(struct device *dev, struct device_attribute *attr,
+            char *buf);
+    ssize_t (*store)(struct device *dev, struct device_attribute *attr,
+             const char *buf, size_t count);
 };
 
 int device_create_file(struct device *, struct device_attribute *);
@@ -90,12 +95,8 @@ void device_remove_file(struct device *, struct device_attribute *);
 
 It also defines this helper for defining device attributes:
 
-#define DEVICE_ATTR(_name, _mode, _show, _store) \
-struct device_attribute dev_attr_##_name = { \
-    .attr = {.name = __stringify(_name) , .mode = _mode }, \
-    .show   = _show, \
-    .store  = _store, \
-};
+#define DEVICE_ATTR(_name, _mode, _show, _store) \
+struct device_attribute dev_attr_##_name = __ATTR(_name, _mode, _show, _store)
 
 For example, declaring
 

@@ -107,9 +108,9 @@ static struct device_attribute dev_attr_foo = {
     .attr   = {
         .name = "foo",
         .mode = S_IWUSR | S_IRUGO,
-        .show = show_foo,
-        .store = store_foo,
     },
+    .show = show_foo,
+    .store = store_foo,
 };
 
 

@@ -161,10 +162,12 @@ To read or write attributes, show() or store() methods must be
 specified when declaring the attribute. The method types should be as
 simple as those defined for device attributes:
 
-ssize_t (*show)(struct device * dev, char * buf);
-ssize_t (*store)(struct device * dev, const char * buf);
+ssize_t (*show)(struct device * dev, struct device_attribute * attr,
+                char * buf);
+ssize_t (*store)(struct device * dev, struct device_attribute * attr,
+                 const char * buf);
 
-IOW, they should take only an object and a buffer as parameters.
+IOW, they should take only an object, an attribute, and a buffer as parameters.
 
 
 sysfs allocates a buffer of size (PAGE_SIZE) and passes it to the
@@ -299,14 +302,16 @@ The following interface layers currently exist in sysfs:
 Structure:
 
 struct device_attribute {
-    struct attribute    attr;
-    ssize_t (*show)(struct device * dev, char * buf);
-    ssize_t (*store)(struct device * dev, const char * buf);
+    struct attribute    attr;
+    ssize_t (*show)(struct device *dev, struct device_attribute *attr,
+            char *buf);
+    ssize_t (*store)(struct device *dev, struct device_attribute *attr,
+             const char *buf, size_t count);
 };
 
 Declaring:
 
-DEVICE_ATTR(_name, _str, _mode, _show, _store);
+DEVICE_ATTR(_name, _mode, _show, _store);
 
 Creation/Removal:
 

@@ -342,7 +347,8 @@ Structure:
 struct driver_attribute {
     struct attribute    attr;
     ssize_t (*show)(struct device_driver *, char * buf);
-    ssize_t (*store)(struct device_driver *, const char * buf);
+    ssize_t (*store)(struct device_driver *, const char * buf,
+             size_t count);
 };
 
 Declaring:
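[Editor's note] The prototypes above mean every show()/store() method now also receives the struct device_attribute that triggered it. A minimal sketch of a driver-side attribute pair written against these 2.6.29-era signatures — the attribute name "foo" and its backing variable are hypothetical, not part of this patch:

  #include <linux/device.h>
  #include <linux/kernel.h>

  static int foo_value;   /* hypothetical state exposed via sysfs */

  static ssize_t show_foo(struct device *dev, struct device_attribute *attr,
                          char *buf)
  {
          /* sysfs hands us a PAGE_SIZE buffer; return the bytes used */
          return snprintf(buf, PAGE_SIZE, "%d\n", foo_value);
  }

  static ssize_t store_foo(struct device *dev, struct device_attribute *attr,
                           const char *buf, size_t count)
  {
          foo_value = simple_strtol(buf, NULL, 0);
          return count;   /* report the whole write as consumed */
  }

  /* Expands, via __ATTR(), to struct device_attribute dev_attr_foo */
  static DEVICE_ATTR(foo, S_IWUSR | S_IRUGO, show_foo, store_foo);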
Documentation/hwmon/hpfall.c | 101 (new file)

@@ -0,0 +1,101 @@
+/* Disk protection for HP machines.
+ *
+ * Copyright 2008 Eric Piel
+ * Copyright 2009 Pavel Machek <pavel@suse.cz>
+ *
+ * GPLv2.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <string.h>
+#include <stdint.h>
+#include <errno.h>
+#include <signal.h>
+
+void write_int(char *path, int i)
+{
+    char buf[1024];
+    int fd = open(path, O_RDWR);
+    if (fd < 0) {
+        perror("open");
+        exit(1);
+    }
+    sprintf(buf, "%d", i);
+    if (write(fd, buf, strlen(buf)) != strlen(buf)) {
+        perror("write");
+        exit(1);
+    }
+    close(fd);
+}
+
+void set_led(int on)
+{
+    write_int("/sys/class/leds/hp::hddprotect/brightness", on);
+}
+
+void protect(int seconds)
+{
+    write_int("/sys/block/sda/device/unload_heads", seconds*1000);
+}
+
+int on_ac(void)
+{
+//    /sys/class/power_supply/AC0/online
+}
+
+int lid_open(void)
+{
+//    /proc/acpi/button/lid/LID/state
+}
+
+void ignore_me(void)
+{
+    protect(0);
+    set_led(0);
+
+}
+
+int main(int argc, char* argv[])
+{
+    int fd, ret;
+
+    fd = open("/dev/freefall", O_RDONLY);
+    if (fd < 0) {
+        perror("open");
+        return EXIT_FAILURE;
+    }
+
+    signal(SIGALRM, ignore_me);
+
+    for (;;) {
+        unsigned char count;
+
+        ret = read(fd, &count, sizeof(count));
+        alarm(0);
+        if ((ret == -1) && (errno == EINTR)) {
+            /* Alarm expired, time to unpark the heads */
+            continue;
+        }
+
+        if (ret != sizeof(count)) {
+            perror("read");
+            break;
+        }
+
+        protect(21);
+        set_led(1);
+        if (1 || on_ac() || lid_open()) {
+            alarm(2);
+        } else {
+            alarm(20);
+        }
+    }
+
+    close(fd);
+    return EXIT_SUCCESS;
+}
@@ -33,6 +33,14 @@ rate - reports the sampling rate of the accelerometer device in HZ
 This driver also provides an absolute input class device, allowing
 the laptop to act as a pinball machine-esque joystick.
 
+Another feature of the driver is misc device called "freefall" that
+acts similar to /dev/rtc and reacts on free-fall interrupts received
+from the device. It supports blocking operations, poll/select and
+fasync operation modes. You must read 1 bytes from the device. The
+result is number of free-fall interrupts since the last successful
+read (or 255 if number of interrupts would not fit).
+
+
 Axes orientation
 ----------------
 
@@ -78,12 +78,10 @@ to view your kernel log and look for "mmiotrace has lost events" warning. If
 events were lost, the trace is incomplete. You should enlarge the buffers and
 try again. Buffers are enlarged by first seeing how large the current buffers
 are:
-$ cat /debug/tracing/trace_entries
+$ cat /debug/tracing/buffer_size_kb
 gives you a number. Approximately double this number and write it back, for
 instance:
-$ echo 0 > /debug/tracing/tracing_enabled
-$ echo 128000 > /debug/tracing/trace_entries
-$ echo 1 > /debug/tracing/tracing_enabled
+$ echo 128000 > /debug/tracing/buffer_size_kb
 Then start again from the top.
 
 If you are doing a trace for a driver project, e.g. Nouveau, you should also
MAINTAINERS | 29

@@ -692,6 +692,13 @@ M: kernel@wantstofly.org
 L:	linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
 S:	Maintained
 
+ARM/NUVOTON W90X900 ARM ARCHITECTURE
+P:	Wan ZongShun
+M:	mcuos.com@gmail.com
+L:	linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
+W:	http://www.mcuos.com
+S:	Maintained
+
 ARPD SUPPORT
 P:	Jonathan Layes
 L:	netdev@vger.kernel.org

@@ -1905,10 +1912,10 @@ W: http://gigaset307x.sourceforge.net/
 S:	Maintained
 
 HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER
-P:	Robert Love
-M:	rlove@rlove.org
-M:	linux-kernel@vger.kernel.org
-W:	http://www.kernel.org/pub/linux/kernel/people/rml/hdaps/
+P:	Frank Seidel
+M:	frank@f-seidel.de
+L:	lm-sensors@lm-sensors.org
+W:	http://www.kernel.org/pub/linux/kernel/people/fseidel/hdaps/
 S:	Maintained
 
 GSPCA FINEPIX SUBDRIVER

@@ -2001,7 +2008,7 @@ S: Maintained
 
 HIBERNATION (aka Software Suspend, aka swsusp)
 P:	Pavel Machek
-M:	pavel@suse.cz
+M:	pavel@ucw.cz
 P:	Rafael J. Wysocki
 M:	rjw@sisk.pl
 L:	linux-pm@lists.linux-foundation.org

@@ -3327,8 +3334,8 @@ P: Jeremy Fitzhardinge
 M:	jeremy@xensource.com
 P:	Chris Wright
 M:	chrisw@sous-sol.org
-P:	Zachary Amsden
-M:	zach@vmware.com
+P:	Alok Kataria
+M:	akataria@vmware.com
 P:	Rusty Russell
 M:	rusty@rustcorp.com.au
 L:	virtualization@lists.osdl.org

@@ -4172,7 +4179,7 @@ SUSPEND TO RAM
 P:	Len Brown
 M:	len.brown@intel.com
 P:	Pavel Machek
-M:	pavel@suse.cz
+M:	pavel@ucw.cz
 P:	Rafael J. Wysocki
 M:	rjw@sisk.pl
 L:	linux-pm@lists.linux-foundation.org

@@ -4924,11 +4931,11 @@ L: zd1211-devs@lists.sourceforge.net (subscribers-only)
 S:	Maintained
 
 ZR36067 VIDEO FOR LINUX DRIVER
-P:	Ronald Bultje
-M:	rbultje@ronald.bitfreak.net
 L:	mjpeg-users@lists.sourceforge.net
+L:	linux-media@vger.kernel.org
 W:	http://mjpeg.sourceforge.net/driver-zoran/
-S:	Maintained
+T:	Mercurial http://linuxtv.org/hg/v4l-dvb
+S:	Odd Fixes
 
 ZS DECSTATION Z85C30 SERIAL DRIVER
 P:	Maciej W. Rozycki
Makefile | 2

@@ -389,6 +389,7 @@ PHONY += outputmakefile
 # output directory.
 outputmakefile:
 ifneq ($(KBUILD_SRC),)
+	$(Q)ln -fsn $(srctree) source
 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkmakefile \
 	    $(srctree) $(objtree) $(VERSION) $(PATCHLEVEL)
 endif

@@ -947,7 +948,6 @@ ifneq ($(KBUILD_SRC),)
 	    mkdir -p include2; \
 	    ln -fsn $(srctree)/include/asm-$(SRCARCH) include2/asm; \
 	fi
-	ln -fsn $(srctree) source
 endif
 
 # prepare2 creates a makefile if using a separate output directory
README | 2

@@ -188,7 +188,7 @@ CONFIGURING the kernel:
 	values to random values.
 
 	You can find more information on using the Linux kernel config tools
-	in Documentation/kbuild/make-configs.txt.
+	in Documentation/kbuild/kconfig.txt.
 
 	NOTES on "make config":
 	- having unnecessary drivers will make the kernel bigger, and can
@@ -93,8 +93,8 @@ common_shutdown_1(void *generic_ptr)
     if (cpuid != boot_cpuid) {
         flags |= 0x00040000UL; /* "remain halted" */
         *pflags = flags;
-        cpu_clear(cpuid, cpu_present_map);
-        cpu_clear(cpuid, cpu_possible_map);
+        set_cpu_present(cpuid, false);
+        set_cpu_possible(cpuid, false);
         halt();
     }
 #endif

@@ -120,8 +120,8 @@ common_shutdown_1(void *generic_ptr)
 
 #ifdef CONFIG_SMP
     /* Wait for the secondaries to halt. */
-    cpu_clear(boot_cpuid, cpu_present_map);
-    cpu_clear(boot_cpuid, cpu_possible_map);
+    set_cpu_present(boot_cpuid, false);
+    set_cpu_possible(boot_cpuid, false);
     while (cpus_weight(cpu_present_map))
         barrier();
 #endif
@@ -120,12 +120,12 @@ void __cpuinit
 smp_callin(void)
 {
     int cpuid = hard_smp_processor_id();
-    cpumask_t mask = cpu_online_map;
 
-    if (cpu_test_and_set(cpuid, mask)) {
+    if (cpu_online(cpuid)) {
         printk("??, cpu 0x%x already present??\n", cpuid);
         BUG();
     }
+    set_cpu_online(cpuid, true);
 
     /* Turn on machine checks. */
     wrmces(7);

@@ -436,8 +436,8 @@ setup_smp(void)
             ((char *)cpubase + i*hwrpb->processor_size);
         if ((cpu->flags & 0x1cc) == 0x1cc) {
             smp_num_probed++;
-            cpu_set(i, cpu_possible_map);
-            cpu_set(i, cpu_present_map);
+            set_cpu_possible(i, true);
+            set_cpu_present(i, true);
             cpu->pal_revision = boot_cpu_palrev;
         }
 

@@ -470,8 +470,8 @@ smp_prepare_cpus(unsigned int max_cpus)
 
     /* Nothing to do on a UP box, or when told not to. */
     if (smp_num_probed == 1 || max_cpus == 0) {
-        cpu_possible_map = cpumask_of_cpu(boot_cpuid);
-        cpu_present_map = cpumask_of_cpu(boot_cpuid);
+        init_cpu_possible(cpumask_of(boot_cpuid));
+        init_cpu_present(cpumask_of(boot_cpuid));
         printk(KERN_INFO "SMP mode deactivated.\n");
         return;
     }
@@ -608,7 +608,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-CONFIG_AT91SAM9_WATCHDOG=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 
 #
 # USB-based Watchdog Cards

@@ -700,7 +700,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-CONFIG_AT91SAM9_WATCHDOG=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 
 #
 # USB-based Watchdog Cards

@@ -710,7 +710,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-CONFIG_AT91SAM9_WATCHDOG=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 
 #
 # USB-based Watchdog Cards

@@ -606,7 +606,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-CONFIG_AT91SAM9_WATCHDOG=y
+CONFIG_AT91SAM9X_WATCHDOG=y
 
 #
 # Sonics Silicon Backplane

@@ -727,7 +727,7 @@ CONFIG_WATCHDOG_NOWAYOUT=y
 # Watchdog Device Drivers
 #
 # CONFIG_SOFT_WATCHDOG is not set
-# CONFIG_AT91SAM9_WATCHDOG is not set
+# CONFIG_AT91SAM9X_WATCHDOG is not set
 
 #
 # USB-based Watchdog Cards
@@ -74,9 +74,9 @@ EXPORT_SYMBOL(elf_set_personality);
  */
 int arm_elf_read_implies_exec(const struct elf32_hdr *x, int executable_stack)
 {
-    if (executable_stack != EXSTACK_ENABLE_X)
+    if (executable_stack != EXSTACK_DISABLE_X)
         return 1;
-    if (cpu_architecture() <= CPU_ARCH_ARMv6)
+    if (cpu_architecture() < CPU_ARCH_ARMv6)
         return 1;
     return 0;
 }
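[Editor's note] Both changed conditions flip the default toward READ_IMPLIES_EXEC: it is now requested whenever the ELF headers do not explicitly disable an executable stack, and unconditionally on cores older than ARMv6, which cannot mark pages execute-never. A standalone rendering of the fixed decision — the constants stand in for the kernel's own headers and are illustrative only:

  /* Hedged sketch of the fixed policy; constant values mirror the
   * usual <linux/elf.h> and ARM cpu-architecture codes but are
   * assumptions here, not quoted from the patch. */
  enum { EXSTACK_DEFAULT, EXSTACK_DISABLE_X, EXSTACK_ENABLE_X };
  enum { CPU_ARCH_ARMv5 = 4, CPU_ARCH_ARMv6 = 8 };

  static int read_implies_exec(int executable_stack, int cpu_arch)
  {
          if (executable_stack != EXSTACK_DISABLE_X)
                  return 1;   /* no PT_GNU_STACK, or RWX requested */
          if (cpu_arch < CPU_ARCH_ARMv6)
                  return 1;   /* pre-v6 cores lack XN page protection */
          return 0;
  }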
@@ -697,7 +697,7 @@ static void __init at91_add_device_rtt(void)
  *  Watchdog
  * -------------------------------------------------------------------- */
 
-#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
+#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
 static struct platform_device at91cap9_wdt_device = {
     .name       = "at91_wdt",
     .id     = -1,

@@ -643,7 +643,7 @@ static void __init at91_add_device_rtt(void)
  *  Watchdog
  * -------------------------------------------------------------------- */
 
-#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
+#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
 static struct platform_device at91sam9260_wdt_device = {
     .name       = "at91_wdt",
     .id     = -1,

@@ -621,7 +621,7 @@ static void __init at91_add_device_rtt(void)
  *  Watchdog
  * -------------------------------------------------------------------- */
 
-#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
+#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
 static struct platform_device at91sam9261_wdt_device = {
     .name       = "at91_wdt",
     .id     = -1,

@@ -854,7 +854,7 @@ static void __init at91_add_device_rtt(void)
  *  Watchdog
  * -------------------------------------------------------------------- */
 
-#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
+#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
 static struct platform_device at91sam9263_wdt_device = {
     .name       = "at91_wdt",
     .id     = -1,

@@ -609,7 +609,7 @@ static void __init at91_add_device_rtt(void)
  *  Watchdog
  * -------------------------------------------------------------------- */
 
-#if defined(CONFIG_AT91SAM9_WATCHDOG) || defined(CONFIG_AT91SAM9_WATCHDOG_MODULE)
+#if defined(CONFIG_AT91SAM9X_WATCHDOG) || defined(CONFIG_AT91SAM9X_WATCHDOG_MODULE)
 static struct platform_device at91sam9rl_wdt_device = {
     .name       = "at91_wdt",
     .id     = -1,
@@ -490,7 +490,8 @@ postcore_initcall(at91_gpio_debugfs_init);
 
 /*--------------------------------------------------------------------------*/
 
-/* This lock class tells lockdep that GPIO irqs are in a different
+/*
+ * This lock class tells lockdep that GPIO irqs are in a different
  * category than their parents, so it won't report false recursion.
  */
 static struct lock_class_key gpio_lock_class;

@@ -509,9 +510,6 @@ void __init at91_gpio_irq_setup(void)
         unsigned    id = this->id;
         unsigned    i;
 
-        /* enable PIO controller's clock */
-        clk_enable(this->clock);
-
         __raw_writel(~0, this->regbase + PIO_IDR);
 
         for (i = 0, pin = this->chipbase; i < 32; i++, pin++) {

@@ -556,7 +554,14 @@ void __init at91_gpio_init(struct at91_gpio_bank *data, int nr_banks)
         data->chipbase = PIN_BASE + i * 32;
         data->regbase = data->offset + (void __iomem *)AT91_VA_BASE_SYS;
 
-        /* AT91SAM9263_ID_PIOCDE groups PIOC, PIOD, PIOE */
+        /* enable PIO controller's clock */
+        clk_enable(data->clock);
+
+        /*
+         * Some processors share peripheral ID between multiple GPIO banks.
+         *  SAM9263 (PIOC, PIOD, PIOE)
+         *  CAP9 (PIOA, PIOB, PIOC, PIOD)
+         */
         if (last && last->id == data->id)
             last->next = data;
     }
@@ -93,6 +93,7 @@ struct atmel_nand_data {
     u8      enable_pin;     /* chip enable */
     u8      det_pin;        /* card detect */
     u8      rdy_pin;        /* ready/busy */
+    u8      rdy_pin_active_low;    /* rdy_pin value is inverted */
     u8      ale;            /* address line number connected to ALE */
     u8      cle;            /* address line number connected to CLE */
     u8      bus_width_16;   /* buswidth is 16 bit */
@@ -1,3 +0,0 @@
-/*
- * arch/arm/mach-ep93xx/include/mach/gesbc9312.h
- */

@@ -10,7 +10,6 @@
 
 #include "platform.h"
 
-#include "gesbc9312.h"
 #include "ts72xx.h"
 
 #endif
@@ -42,7 +42,7 @@ void __init kirkwood_init_irq(void)
     writel(0, GPIO_EDGE_CAUSE(32));
 
     for (i = IRQ_KIRKWOOD_GPIO_START; i < NR_IRQS; i++) {
-        set_irq_chip(i, &orion_gpio_irq_level_chip);
+        set_irq_chip(i, &orion_gpio_irq_chip);
         set_irq_handler(i, handle_level_irq);
         irq_desc[i].status |= IRQ_LEVEL;
         set_irq_flags(i, IRQF_VALID);

@@ -40,7 +40,7 @@ void __init mv78xx0_init_irq(void)
     writel(0, GPIO_EDGE_CAUSE(0));
 
     for (i = IRQ_MV78XX0_GPIO_START; i < NR_IRQS; i++) {
-        set_irq_chip(i, &orion_gpio_irq_level_chip);
+        set_irq_chip(i, &orion_gpio_irq_chip);
         set_irq_handler(i, handle_level_irq);
         irq_desc[i].status |= IRQ_LEVEL;
         set_irq_flags(i, IRQF_VALID);
@@ -565,7 +565,7 @@ u32 omap2_clksel_to_divisor(struct clk *clk, u32 field_val)
  *
  * Given a struct clk of a rate-selectable clksel clock, and a clock divisor,
  * find the corresponding register field value.  The return register value is
- * the value before left-shifting.  Returns 0xffffffff on error
+ * the value before left-shifting.  Returns ~0 on error
  */
 u32 omap2_divisor_to_clksel(struct clk *clk, u32 div)
 {

@@ -577,7 +577,7 @@ u32 omap2_divisor_to_clksel(struct clk *clk, u32 div)
 
     clks = omap2_get_clksel_by_parent(clk, clk->parent);
     if (clks == NULL)
-        return 0;
+        return ~0;
 
     for (clkr = clks->rates; clkr->div; clkr++) {
         if ((clkr->flags & cpu_mask) && (clkr->div == div))

@@ -588,7 +588,7 @@ u32 omap2_divisor_to_clksel(struct clk *clk, u32 div)
         printk(KERN_ERR "clock: Could not find divisor %d for "
                "clock %s parent %s\n", div, clk->name,
                clk->parent->name);
-        return 0;
+        return ~0;
     }
 
     return clkr->val;

@@ -708,7 +708,7 @@ static u32 omap2_clksel_get_src_field(void __iomem **src_addr,
         return 0;
 
     for (clkr = clks->rates; clkr->div; clkr++) {
-        if (clkr->flags & (cpu_mask | DEFAULT_RATE))
+        if (clkr->flags & cpu_mask && clkr->flags & DEFAULT_RATE)
             break; /* Found the default rate for this platform */
     }
 

@@ -746,7 +746,7 @@ int omap2_clk_set_parent(struct clk *clk, struct clk *new_parent)
         return -EINVAL;
 
     if (clk->usecount > 0)
-        _omap2_clk_disable(clk);
+        omap2_clk_disable(clk);
 
     /* Set new source value (previous dividers if any in effect) */
     reg_val = __raw_readl(src_addr) & ~field_mask;

@@ -759,11 +759,11 @@ int omap2_clk_set_parent(struct clk *clk, struct clk *new_parent)
         wmb();
     }
 
-    if (clk->usecount > 0)
-        _omap2_clk_enable(clk);
-
     clk->parent = new_parent;
 
+    if (clk->usecount > 0)
+        omap2_clk_enable(clk);
+
     /* CLKSEL clocks follow their parents' rates, divided by a divisor */
     clk->rate = new_parent->rate;
 
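[Editor's note] The DEFAULT_RATE hunk above fixes a classic bitmask slip: `flags & (cpu_mask | DEFAULT_RATE)` is non-zero when *either* bit is set, whereas the loop wants entries valid for this CPU *and* flagged as the default rate. A compilable miniature with made-up bit values — the real masks live in the OMAP2 clock framework:

  #include <stdio.h>

  #define CPU_MASK        0x2   /* illustrative: "valid on this CPU" */
  #define DEFAULT_RATE    0x8   /* illustrative: "default rate entry" */

  int main(void)
  {
          unsigned int flags = CPU_MASK;  /* valid here, but not a default */

          /* old test matches -- spuriously */
          printf("old: %d\n", (flags & (CPU_MASK | DEFAULT_RATE)) != 0);
          /* new test requires both bits -- correctly rejects */
          printf("new: %d\n", (flags & CPU_MASK) && (flags & DEFAULT_RATE));
          return 0;
  }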
@@ -44,7 +44,7 @@ void __init orion5x_init_irq(void)
      * User can use set_type() if he wants to use edge types handlers.
      */
     for (i = IRQ_ORION5X_GPIO_START; i < NR_IRQS; i++) {
-        set_irq_chip(i, &orion_gpio_irq_level_chip);
+        set_irq_chip(i, &orion_gpio_irq_chip);
         set_irq_handler(i, handle_level_irq);
         irq_desc[i].status |= IRQ_LEVEL;
         set_irq_flags(i, IRQF_VALID);
@@ -693,7 +693,8 @@ static void __init sanity_check_meminfo(void)
          * Check whether this memory bank would entirely overlap
          * the vmalloc area.
          */
-        if (__va(bank->start) >= VMALLOC_MIN) {
+        if (__va(bank->start) >= VMALLOC_MIN ||
+            __va(bank->start) < PAGE_OFFSET) {
             printk(KERN_NOTICE "Ignoring RAM at %.8lx-%.8lx "
                    "(vmalloc region overlap).\n",
                    bank->start, bank->start + bank->size - 1);
@@ -265,51 +265,36 @@ EXPORT_SYMBOL(orion_gpio_set_blink);
  *        polarity    LEVEL          mask
  *
  ****************************************************************************/
-static void gpio_irq_edge_ack(u32 irq)
-{
-    int pin = irq_to_gpio(irq);
-
-    writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin));
+static void gpio_irq_ack(u32 irq)
+{
+    int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK;
+    if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) {
+        int pin = irq_to_gpio(irq);
+        writel(~(1 << (pin & 31)), GPIO_EDGE_CAUSE(pin));
+    }
 }
 
-static void gpio_irq_edge_mask(u32 irq)
+static void gpio_irq_mask(u32 irq)
 {
     int pin = irq_to_gpio(irq);
-    u32 u;
-
-    u = readl(GPIO_EDGE_MASK(pin));
+    int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK;
+    u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ?
+        GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin);
+    u32 u = readl(reg);
     u &= ~(1 << (pin & 31));
-    writel(u, GPIO_EDGE_MASK(pin));
+    writel(u, reg);
 }
 
-static void gpio_irq_edge_unmask(u32 irq)
+static void gpio_irq_unmask(u32 irq)
 {
     int pin = irq_to_gpio(irq);
-    u32 u;
-
-    u = readl(GPIO_EDGE_MASK(pin));
+    int type = irq_desc[irq].status & IRQ_TYPE_SENSE_MASK;
+    u32 reg = (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) ?
+        GPIO_EDGE_MASK(pin) : GPIO_LEVEL_MASK(pin);
+    u32 u = readl(reg);
     u |= 1 << (pin & 31);
-    writel(u, GPIO_EDGE_MASK(pin));
-}
-
-static void gpio_irq_level_mask(u32 irq)
-{
-    int pin = irq_to_gpio(irq);
-    u32 u;
-
-    u = readl(GPIO_LEVEL_MASK(pin));
-    u &= ~(1 << (pin & 31));
-    writel(u, GPIO_LEVEL_MASK(pin));
-}
-
-static void gpio_irq_level_unmask(u32 irq)
-{
-    int pin = irq_to_gpio(irq);
-    u32 u;
-
-    u = readl(GPIO_LEVEL_MASK(pin));
-    u |= 1 << (pin & 31);
-    writel(u, GPIO_LEVEL_MASK(pin));
+    writel(u, reg);
 }
 
 static int gpio_irq_set_type(u32 irq, u32 type)

@@ -331,9 +316,9 @@ static int gpio_irq_set_type(u32 irq, u32 type)
      * Set edge/level type.
      */
     if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING)) {
-        desc->chip = &orion_gpio_irq_edge_chip;
+        desc->handle_irq = handle_edge_irq;
     } else if (type & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW)) {
-        desc->chip = &orion_gpio_irq_level_chip;
+        desc->handle_irq = handle_level_irq;
     } else {
         printk(KERN_ERR "failed to set irq=%d (type=%d)\n", irq, type);
         return -EINVAL;

@@ -371,19 +356,11 @@ static int gpio_irq_set_type(u32 irq, u32 type)
     return 0;
 }
 
-struct irq_chip orion_gpio_irq_edge_chip = {
-    .name       = "orion_gpio_irq_edge",
-    .ack        = gpio_irq_edge_ack,
-    .mask       = gpio_irq_edge_mask,
-    .unmask     = gpio_irq_edge_unmask,
-    .set_type   = gpio_irq_set_type,
-};
-
-struct irq_chip orion_gpio_irq_level_chip = {
-    .name       = "orion_gpio_irq_level",
-    .mask       = gpio_irq_level_mask,
-    .mask_ack   = gpio_irq_level_mask,
-    .unmask     = gpio_irq_level_unmask,
+struct irq_chip orion_gpio_irq_chip = {
+    .name       = "orion_gpio",
+    .ack        = gpio_irq_ack,
+    .mask       = gpio_irq_mask,
+    .unmask     = gpio_irq_unmask,
     .set_type   = gpio_irq_set_type,
 };
 

@@ -31,8 +31,7 @@ void orion_gpio_set_blink(unsigned pin, int blink);
 /*
  * GPIO interrupt handling.
  */
-extern struct irq_chip orion_gpio_irq_edge_chip;
-extern struct irq_chip orion_gpio_irq_level_chip;
+extern struct irq_chip orion_gpio_irq_chip;
 void orion_gpio_irq_handler(int irqoff);
 
 
@@ -116,6 +116,7 @@ struct atmel_nand_data {
     int     enable_pin;     /* chip enable */
     int     det_pin;        /* card detect */
     int     rdy_pin;        /* ready/busy */
+    u8      rdy_pin_active_low;    /* rdy_pin value is inverted */
     u8      ale;            /* address line number connected to ALE */
     u8      cle;            /* address line number connected to CLE */
     u8      bus_width_16;   /* buswidth is 16 bit */
@@ -221,7 +221,11 @@ config IA64_HP_SIM
 
 config IA64_XEN_GUEST
 	bool "Xen guest"
+	select SWIOTLB
 	depends on XEN
+	help
+	  Build a kernel that runs on Xen guest domain. At this moment only
+	  16KB page size in supported.
 
 endchoice

@@ -479,8 +483,7 @@ config HOLES_IN_ZONE
 	default y if VIRTUAL_MEM_MAP
 
 config HAVE_ARCH_EARLY_PFN_TO_NID
-	def_bool y
-	depends on NEED_MULTIPLE_NODES
+	def_bool NUMA && SPARSEMEM
 
 config HAVE_ARCH_NODEDATA_EXTENSION
 	def_bool y

arch/ia64/configs/xen_domu_defconfig | 1601 (new file; diff suppressed because it is too large)
@@ -24,6 +24,10 @@
 #include <linux/types.h>
 #include <linux/ioctl.h>
 
+/* Select x86 specific features in <linux/kvm.h> */
+#define __KVM_HAVE_IOAPIC
+#define __KVM_HAVE_DEVICE_ASSIGNMENT
+
 /* Architectural interrupt line count. */
 #define KVM_NR_INTERRUPTS 256
 
@@ -31,10 +31,6 @@ static inline int pfn_to_nid(unsigned long pfn)
 #endif
 }
 
-#ifdef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
-extern int early_pfn_to_nid(unsigned long pfn);
-#endif
-
 #ifdef CONFIG_IA64_DIG /* DIG systems are small */
 # define MAX_PHYSNODE_ID	8
 # define NR_NODE_MEMBLKS	(MAX_NUMNODES * 8)
@@ -39,7 +39,7 @@
 /* BTE status register only supports 16 bits for length field */
 #define BTE_LEN_BITS (16)
 #define BTE_LEN_MASK ((1 << BTE_LEN_BITS) - 1)
-#define BTE_MAX_XFER ((1 << BTE_LEN_BITS) * L1_CACHE_BYTES)
+#define BTE_MAX_XFER (BTE_LEN_MASK << L1_CACHE_SHIFT)
 
 
 /* Define hardware */
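[Editor's note] The new definition trims an off-by-one: a 16-bit length field can encode at most BTE_LEN_MASK (0xffff) cache lines, not 0x10000. Worked numbers, assuming the usual 128-byte SN2 cache line (L1_CACHE_SHIFT = 7) — treat the concrete byte counts as an illustration for that assumption, not as authoritative for every configuration:

  /* old: (1 << 16) * 128 = 8388608 bytes -- one cache line more than
   *      the hardware length field can express
   * new: 0xffff << 7      = 8388480 bytes -- largest encodable transfer */
  #define BTE_LEN_BITS    (16)
  #define BTE_LEN_MASK    ((1 << BTE_LEN_BITS) - 1)
  #define L1_CACHE_SHIFT  7       /* assumption: 128-byte lines */
  #define BTE_MAX_XFER    (BTE_LEN_MASK << L1_CACHE_SHIFT)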
@@ -736,14 +736,15 @@ int __cpu_disable(void)
         return -EBUSY;
     }
 
+    cpu_clear(cpu, cpu_online_map);
+
     if (migrate_platform_irqs(cpu)) {
         cpu_set(cpu, cpu_online_map);
-        return (-EBUSY);
+        return -EBUSY;
     }
 
     remove_siblinginfo(cpu);
     fixup_irqs();
-    cpu_clear(cpu, cpu_online_map);
     local_flush_tlb_all();
     cpu_clear(cpu, cpu_callin_map);
     return 0;
@@ -1337,6 +1337,10 @@ static void kvm_release_vm_pages(struct kvm *kvm)
     }
 }
 
+void kvm_arch_sync_events(struct kvm *kvm)
+{
+}
+
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
     kvm_iommu_unmap_guest(kvm);

@@ -455,13 +455,18 @@ fpswa_ret_t vmm_fp_emulate(int fp_fault, void *bundle, unsigned long *ipsr,
     if (!vmm_fpswa_interface)
         return (fpswa_ret_t) {-1, 0, 0, 0};
 
-    /*
-     * Just let fpswa driver to use hardware fp registers.
-     * No fp register is valid in memory.
-     */
     memset(&fp_state, 0, sizeof(fp_state_t));
 
+    /*
+     * compute fp_state.  only FP registers f6 - f11 are used by the
+     * vmm, so set those bits in the mask and set the low volatile
+     * pointer to point to these registers.
+     */
+    fp_state.bitmask_low64 = 0xfc0;  /* bit6..bit11 */
+
+    fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6;
+
     /*
      * unsigned long (*EFI_FPSWA) (
      *    unsigned long trap_type,
      *    void *Bundle,

@@ -545,10 +550,6 @@ void reflect_interruption(u64 ifa, u64 isr, u64 iim,
         status = vmm_handle_fpu_swa(0, regs, isr);
         if (!status)
             return ;
-        else if (-EAGAIN == status) {
-            vcpu_decrement_iip(vcpu);
-            return ;
-        }
         break;
     }
 
@@ -58,7 +58,7 @@ paddr_to_nid(unsigned long paddr)
  * SPARSEMEM to allocate the SPARSEMEM sectionmap on the NUMA node where
  * the section resides.
  */
-int early_pfn_to_nid(unsigned long pfn)
+int __meminit __early_pfn_to_nid(unsigned long pfn)
 {
     int i, section = pfn >> PFN_SECTION_SHIFT, ssec, esec;
 

@@ -70,7 +70,7 @@ int early_pfn_to_nid(unsigned long pfn)
             return node_memblk[i].nid;
     }
 
-    return 0;
+    return -1;
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
@@ -97,9 +97,10 @@ bte_result_t bte_copy(u64 src, u64 dest, u64 len, u64 mode, void *notification)
         return BTE_SUCCESS;
     }
 
-    BUG_ON((len & L1_CACHE_MASK) ||
-           (src & L1_CACHE_MASK) || (dest & L1_CACHE_MASK));
-    BUG_ON(!(len < ((BTE_LEN_MASK + 1) << L1_CACHE_SHIFT)));
+    BUG_ON(len & L1_CACHE_MASK);
+    BUG_ON(src & L1_CACHE_MASK);
+    BUG_ON(dest & L1_CACHE_MASK);
+    BUG_ON(len > BTE_MAX_XFER);
 
     /*
      * Start with interface corresponding to cpu number
|
|||
depends on PARAVIRT && MCKINLEY && IA64_PAGE_SIZE_16KB && EXPERIMENTAL
|
||||
select XEN_XENCOMM
|
||||
select NO_IDLE_HZ
|
||||
|
||||
# those are required to save/restore.
|
||||
# followings are required to save/restore.
|
||||
select ARCH_SUSPEND_POSSIBLE
|
||||
select SUSPEND
|
||||
select PM_SLEEP
|
||||
|
|
|
@ -153,7 +153,7 @@ xen_post_smp_prepare_boot_cpu(void)
|
|||
xen_setup_vcpu_info_placement();
|
||||
}
|
||||
|
||||
static const struct pv_init_ops xen_init_ops __initdata = {
|
||||
static const struct pv_init_ops xen_init_ops __initconst = {
|
||||
.banner = xen_banner,
|
||||
|
||||
.reserve_memory = xen_reserve_memory,
|
||||
|
@ -337,7 +337,7 @@ xen_iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
|
|||
HYPERVISOR_physdev_op(PHYSDEVOP_apic_write, &apic_op);
|
||||
}
|
||||
|
||||
static const struct pv_iosapic_ops xen_iosapic_ops __initdata = {
|
||||
static const struct pv_iosapic_ops xen_iosapic_ops __initconst = {
|
||||
.pcat_compat_init = xen_pcat_compat_init,
|
||||
.__get_irq_chip = xen_iosapic_get_irq_chip,
|
||||
|
||||
|
|
|
@@ -187,8 +187,8 @@ __asm__ (__ALIGN_STR "\n" \
 "	jbra	ret_from_interrupt\n" \
 	 : : "i" (&kstat_cpu(0).irqs[n+8]), "i" (&irq_handler[n+8]), \
 	     "n" (PT_OFF_SR), "n" (n), \
-	     "i" (n & 8 ? (n & 16 ? &tt_mfp.int_mk_a : &mfp.int_mk_a) \
-		        : (n & 16 ? &tt_mfp.int_mk_b : &mfp.int_mk_b)), \
+	     "i" (n & 8 ? (n & 16 ? &tt_mfp.int_mk_a : &st_mfp.int_mk_a) \
+		        : (n & 16 ? &tt_mfp.int_mk_b : &st_mfp.int_mk_b)), \
 	     "m" (preempt_count()), "di" (HARDIRQ_OFFSET) \
 ); \
 	for (;;);	/* fake noreturn */ \

@@ -366,14 +366,14 @@ void __init atari_init_IRQ(void)
 	/* Initialize the MFP(s) */
 
 #ifdef ATARI_USE_SOFTWARE_EOI
-	mfp.vec_adr  = 0x48;	/* Software EOI-Mode */
+	st_mfp.vec_adr  = 0x48;	/* Software EOI-Mode */
 #else
-	mfp.vec_adr  = 0x40;	/* Automatic EOI-Mode */
+	st_mfp.vec_adr  = 0x40;	/* Automatic EOI-Mode */
 #endif
-	mfp.int_en_a = 0x00;	/* turn off MFP-Ints */
-	mfp.int_en_b = 0x00;
-	mfp.int_mk_a = 0xff;	/* no Masking */
-	mfp.int_mk_b = 0xff;
+	st_mfp.int_en_a = 0x00;	/* turn off MFP-Ints */
+	st_mfp.int_en_b = 0x00;
+	st_mfp.int_mk_a = 0xff;	/* no Masking */
+	st_mfp.int_mk_b = 0xff;
 
 	if (ATARIHW_PRESENT(TT_MFP)) {
 #ifdef ATARI_USE_SOFTWARE_EOI
@@ -609,10 +609,10 @@ int atari_keyb_init(void)
 		 ACIA_RHTID : 0);
 
 	/* make sure the interrupt line is up */
-	} while ((mfp.par_dt_reg & 0x10) == 0);
+	} while ((st_mfp.par_dt_reg & 0x10) == 0);
 
 	/* enable ACIA Interrupts */
-	mfp.active_edge &= ~0x10;
+	st_mfp.active_edge &= ~0x10;
 	atari_turnon_irq(IRQ_MFP_ACIA);
 
 	ikbd_self_test = 1;

@@ -258,7 +258,7 @@ void __init config_atari(void)
 			printk("STND_SHIFTER ");
 		}
 	}
-	if (hwreg_present(&mfp.par_dt_reg)) {
+	if (hwreg_present(&st_mfp.par_dt_reg)) {
 		ATARIHW_SET(ST_MFP);
 		printk("ST_MFP ");
 	}
|
|||
|
||||
static inline void ata_mfp_out(char c)
|
||||
{
|
||||
while (!(mfp.trn_stat & 0x80)) /* wait for tx buf empty */
|
||||
while (!(st_mfp.trn_stat & 0x80)) /* wait for tx buf empty */
|
||||
barrier();
|
||||
mfp.usart_dta = c;
|
||||
st_mfp.usart_dta = c;
|
||||
}
|
||||
|
||||
static void atari_mfp_console_write(struct console *co, const char *str,
|
||||
|
@ -91,7 +91,7 @@ static int ata_par_out(char c)
|
|||
/* This a some-seconds timeout in case no printer is connected */
|
||||
unsigned long i = loops_per_jiffy > 1 ? loops_per_jiffy : 10000000/HZ;
|
||||
|
||||
while ((mfp.par_dt_reg & 1) && --i) /* wait for BUSY == L */
|
||||
while ((st_mfp.par_dt_reg & 1) && --i) /* wait for BUSY == L */
|
||||
;
|
||||
if (!i)
|
||||
return 0;
|
||||
|
@ -131,9 +131,9 @@ static void atari_par_console_write(struct console *co, const char *str,
|
|||
#if 0
|
||||
int atari_mfp_console_wait_key(struct console *co)
|
||||
{
|
||||
while (!(mfp.rcv_stat & 0x80)) /* wait for rx buf filled */
|
||||
while (!(st_mfp.rcv_stat & 0x80)) /* wait for rx buf filled */
|
||||
barrier();
|
||||
return mfp.usart_dta;
|
||||
return st_mfp.usart_dta;
|
||||
}
|
||||
|
||||
int atari_scc_console_wait_key(struct console *co)
|
||||
|
@@ -175,12 +175,12 @@ static void __init atari_init_mfp_port(int cflag)
 		baud = B9600;	/* use default 9600bps for non-implemented rates */
 	baud -= B1200;	/* baud_table[] starts at 1200bps */
 
-	mfp.trn_stat &= ~0x01;	/* disable TX */
-	mfp.usart_ctr = parity | csize | 0x88;	/* 1:16 clk mode, 1 stop bit */
-	mfp.tim_ct_cd &= 0x70;	/* stop timer D */
-	mfp.tim_dt_d = baud_table[baud];
-	mfp.tim_ct_cd |= 0x01;	/* start timer D, 1:4 */
-	mfp.trn_stat |= 0x01;	/* enable TX */
+	st_mfp.trn_stat &= ~0x01;	/* disable TX */
+	st_mfp.usart_ctr = parity | csize | 0x88;	/* 1:16 clk mode, 1 stop bit */
+	st_mfp.tim_ct_cd &= 0x70;	/* stop timer D */
+	st_mfp.tim_dt_d = baud_table[baud];
+	st_mfp.tim_ct_cd |= 0x01;	/* start timer D, 1:4 */
+	st_mfp.trn_stat |= 0x01;	/* enable TX */
 }
 
 #define SCC_WRITE(reg, val) \
@@ -27,9 +27,9 @@ void __init
 atari_sched_init(irq_handler_t timer_routine)
 {
 	/* set Timer C data Register */
-	mfp.tim_dt_c = INT_TICKS;
+	st_mfp.tim_dt_c = INT_TICKS;
 	/* start timer C, div = 1:100 */
-	mfp.tim_ct_cd = (mfp.tim_ct_cd & 15) | 0x60;
+	st_mfp.tim_ct_cd = (st_mfp.tim_ct_cd & 15) | 0x60;
 	/* install interrupt service routine for MFP Timer C */
 	if (request_irq(IRQ_MFP_TIMC, timer_routine, IRQ_TYPE_SLOW,
 			"timer", timer_routine))

@@ -46,11 +46,11 @@ unsigned long atari_gettimeoffset (void)
 	unsigned long ticks, offset = 0;
 
 	/* read MFP timer C current value */
-	ticks = mfp.tim_dt_c;
+	ticks = st_mfp.tim_dt_c;
 	/* The probability of underflow is less than 2% */
 	if (ticks > INT_TICKS - INT_TICKS / 50)
 		/* Check for pending timer interrupt */
-		if (mfp.int_pn_b & (1 << 5))
+		if (st_mfp.int_pn_b & (1 << 5))
 			offset = TICK_SIZE;
 
 	ticks = INT_TICKS - ticks;
@@ -113,7 +113,7 @@ extern struct atari_hw_present atari_hw_present;
  * of nops on various machines. Somebody claimed that the tstb takes 600 ns.
  */
 #define	MFPDELAY() \
-	__asm__ __volatile__ ( "tstb %0" : : "m" (mfp.par_dt_reg) : "cc" );
+	__asm__ __volatile__ ( "tstb %0" : : "m" (st_mfp.par_dt_reg) : "cc" );
 
 /* Do cache push/invalidate for DMA read/write. This function obeys the
  * snooping on some machines (Medusa) and processors: The Medusa itself can

@@ -565,7 +565,7 @@ struct MFP
 	u_char char_dummy23;
 	u_char usart_dta;
 };
-# define mfp ((*(volatile struct MFP*)MFP_BAS))
+# define st_mfp ((*(volatile struct MFP*)MFP_BAS))
 
 /* TT's second MFP */
 
@@ -113,7 +113,7 @@ static inline int get_mfp_bit( unsigned irq, int type )
 {	unsigned char	mask, *reg;
 
 	mask = 1 << (irq & 7);
-	reg = (unsigned char *)&mfp.int_en_a + type*4 +
+	reg = (unsigned char *)&st_mfp.int_en_a + type*4 +
 		  ((irq & 8) >> 2) + (((irq-8) & 16) << 3);
 	return( *reg & mask );
 }

@@ -123,7 +123,7 @@ static inline void set_mfp_bit( unsigned irq, int type )
 {	unsigned char	mask, *reg;
 
 	mask = 1 << (irq & 7);
-	reg = (unsigned char *)&mfp.int_en_a + type*4 +
+	reg = (unsigned char *)&st_mfp.int_en_a + type*4 +
 		  ((irq & 8) >> 2) + (((irq-8) & 16) << 3);
 	__asm__ __volatile__ ( "orb %0,%1"
 			      : : "di" (mask), "m" (*reg) : "memory" );

@@ -134,7 +134,7 @@ static inline void clear_mfp_bit( unsigned irq, int type )
 {	unsigned char	mask, *reg;
 
 	mask = ~(1 << (irq & 7));
-	reg = (unsigned char *)&mfp.int_en_a + type*4 +
+	reg = (unsigned char *)&st_mfp.int_en_a + type*4 +
 		  ((irq & 8) >> 2) + (((irq-8) & 16) << 3);
 	if (type == MFP_PENDING || type == MFP_SERVICE)
 		__asm__ __volatile__ ( "moveb %0,%1"
@@ -7,6 +7,7 @@ mainmenu "Linux Kernel Configuration"
 
 config MN10300
 	def_bool y
+	select HAVE_OPROFILE
 
 config AM33
 	def_bool y
@@ -173,7 +173,7 @@ static int pci_ampci_write_config_byte(struct pci_bus *bus, unsigned int devfn,
 		BRIDGEREGB(where) = value;
 	} else {
 		if (bus->number == 0 &&
-		    (devfn == PCI_DEVFN(2, 0) && devfn == PCI_DEVFN(3, 0))
+		    (devfn == PCI_DEVFN(2, 0) || devfn == PCI_DEVFN(3, 0))
 		    )
 			__pcidebug("<= %02x", bus, devfn, where, value);
 		CONFIG_ADDRESS = CONFIG_CMD(bus, devfn, where);
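[Editor's note] The old condition compared one devfn against two different encodings with `&&`, which can never be true, so the __pcidebug() call was dead code; `||` restores the intent of matching either device. A self-contained check, with PCI_DEVFN shown in its standard encoding:

  /* PCI_DEVFN packs slot and function: (slot << 3) | func */
  #define PCI_DEVFN(slot, func)   ((((slot) & 0x1f) << 3) | ((func) & 0x07))

  static int matches(unsigned int devfn)
  {
          /* old: (devfn == 0x10 && devfn == 0x18) -- a contradiction, always 0 */
          /* new: true for either device 2 or device 3, function 0 */
          return devfn == PCI_DEVFN(2, 0) || devfn == PCI_DEVFN(3, 0);
  }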
@@ -60,7 +60,7 @@
 /* It should be preserving the high 48 bits and then specifically */
 /* preserving _PAGE_SECONDARY | _PAGE_GROUP_IX */
 #define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \
-			 _PAGE_HPTEFLAGS)
+			 _PAGE_HPTEFLAGS | _PAGE_SPECIAL)
 
 /* Bits to mask out from a PMD to get to the PTE page */
 #define PMD_MASKED_BITS		0

@@ -114,7 +114,7 @@ static inline struct subpage_prot_table *pgd_subpage_prot(pgd_t *pgd)
  * pgprot changes
  */
 #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
-			 _PAGE_ACCESSED)
+			 _PAGE_ACCESSED | _PAGE_SPECIAL)
 
 /* Bits to mask out from a PMD to get to the PTE page */
 #define PMD_MASKED_BITS		0x1ff

@@ -429,7 +429,8 @@ extern int icache_44x_need_flush;
 #define PMD_PAGE_SIZE(pmd)	bad_call_to_PMD_PAGE_SIZE()
 #endif
 
-#define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+#define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \
+			 _PAGE_SPECIAL)
 
 
 #define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
@@ -646,11 +646,16 @@ static int emulate_vsx(unsigned char __user *addr, unsigned int reg,
 			unsigned int areg, struct pt_regs *regs,
 			unsigned int flags, unsigned int length)
 {
-	char *ptr = (char *) &current->thread.TS_FPR(reg);
+	char *ptr;
 	int ret = 0;
 
 	flush_vsx_to_thread(current);
 
+	if (reg < 32)
+		ptr = (char *) &current->thread.TS_FPR(reg);
+	else
+		ptr = (char *) &current->thread.vr[reg - 32];
+
 	if (flags & ST)
 		ret = __copy_to_user(addr, ptr, length);
 	else {
@@ -125,6 +125,10 @@ static void kvmppc_free_vcpus(struct kvm *kvm)
 	}
 }
 
+void kvm_arch_sync_events(struct kvm *kvm)
+{
+}
+
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
 	kvmppc_free_vcpus(kvm);
@@ -19,6 +19,7 @@
 #include <linux/notifier.h>
 #include <linux/lmb.h>
 #include <linux/of.h>
+#include <linux/pfn.h>
 #include <asm/sparsemem.h>
 #include <asm/prom.h>
 #include <asm/system.h>

@@ -882,7 +883,7 @@ static void mark_reserved_regions_for_nid(int nid)
 		unsigned long physbase = lmb.reserved.region[i].base;
 		unsigned long size = lmb.reserved.region[i].size;
 		unsigned long start_pfn = physbase >> PAGE_SHIFT;
-		unsigned long end_pfn = ((physbase + size) >> PAGE_SHIFT);
+		unsigned long end_pfn = PFN_UP(physbase + size);
 		struct node_active_region node_ar;
 		unsigned long node_end_pfn = node->node_start_pfn +
 					     node->node_spanned_pages;

@@ -908,7 +909,7 @@ static void mark_reserved_regions_for_nid(int nid)
 			 */
 			if (end_pfn > node_ar.end_pfn)
 				reserve_size = (node_ar.end_pfn << PAGE_SHIFT)
-					- (start_pfn << PAGE_SHIFT);
+					- physbase;
 			/*
 			 * Only worry about *this* node, others may not
 			 * yet have valid NODE_DATA().
@@ -328,7 +328,7 @@ static int __init ps3_mm_add_memory(void)
 		return result;
 	}
 
-core_initcall(ps3_mm_add_memory);
+device_initcall(ps3_mm_add_memory);
 
 /*============================================================================*/
 /* dma routines */
@@ -145,7 +145,7 @@ cputime_to_timeval(const cputime_t cputime, struct timeval *value)
 	value->tv_usec = rp.subreg.even / 4096;
 	value->tv_sec = rp.subreg.odd;
 #else
-	value->tv_usec = cputime % 4096000000ULL;
+	value->tv_usec = (cputime % 4096000000ULL) / 4096;
 	value->tv_sec = cputime / 4096000000ULL;
 #endif
 }
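[Editor's note] The fix scales the remainder back to microseconds: s390 cputime accounting ticks are 1/4096 of a microsecond, so one second is 4096000000 ticks. A worked example under that assumption:

  #include <stdio.h>

  int main(void)
  {
          /* 2.5 s of cputime: 2.5 * 4096000000 ticks */
          unsigned long long cputime = 10240000000ULL;

          /* old result: 2048000000 -- not a valid tv_usec (>= 1000000) */
          printf("old tv_usec: %llu\n", cputime % 4096000000ULL);
          /* new result: 500000 -- 0.5 s correctly expressed in us */
          printf("new tv_usec: %llu\n", (cputime % 4096000000ULL) / 4096);
          return 0;
  }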
@@ -43,6 +43,8 @@ struct mem_chunk {
 
 extern struct mem_chunk memory_chunk[];
 extern unsigned long real_memory_size;
+extern int memory_end_set;
+extern unsigned long memory_end;
 
 void detect_memory_layout(struct mem_chunk chunk[]);
 
@@ -82,7 +82,9 @@ char elf_platform[ELF_PLATFORM_SIZE];
 
 struct mem_chunk __initdata memory_chunk[MEMORY_CHUNKS];
 volatile int __cpu_logical_map[NR_CPUS]; /* logical cpu to cpu address */
-static unsigned long __initdata memory_end;
+
+int __initdata memory_end_set;
+unsigned long __initdata memory_end;
 
 /*
  * This is set up by the setup-routine at boot-time

@@ -281,6 +283,7 @@ void (*pm_power_off)(void) = machine_power_off;
 static int __init early_parse_mem(char *p)
 {
 	memory_end = memparse(p, &p);
+	memory_end_set = 1;
 	return 0;
 }
 early_param("mem", early_parse_mem);

@@ -508,8 +511,10 @@ static void __init setup_memory_end(void)
 	int i;
 
 #if defined(CONFIG_ZFCPDUMP) || defined(CONFIG_ZFCPDUMP_MODULE)
-	if (ipl_info.type == IPL_TYPE_FCP_DUMP)
+	if (ipl_info.type == IPL_TYPE_FCP_DUMP) {
 		memory_end = ZFCPDUMP_HSA_SIZE;
+		memory_end_set = 1;
+	}
 #endif
 	memory_size = 0;
 	memory_end &= PAGE_MASK;
@@ -212,6 +212,10 @@ static void kvm_free_vcpus(struct kvm *kvm)
	}
 }
 
+void kvm_arch_sync_events(struct kvm *kvm)
+{
+}
+
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
	kvm_free_vcpus(kvm);
@@ -78,7 +78,7 @@ void vde_init_libstuff(struct vde_data *vpri, struct vde_init *init)
 {
	struct vde_open_args *args;
 
-	vpri->args = kmalloc(sizeof(struct vde_open_args), UM_GFP_KERNEL);
+	vpri->args = uml_kmalloc(sizeof(struct vde_open_args), UM_GFP_KERNEL);
	if (vpri->args == NULL) {
		printk(UM_KERN_ERR "vde_init_libstuff - vde_open_args "
		       "allocation failed");
@@ -91,8 +91,8 @@ void vde_init_libstuff(struct vde_data *vpri, struct vde_init *init)
	args->group = init->group;
	args->mode = init->mode ? init->mode : 0700;
 
-	args->port ? printk(UM_KERN_INFO "port %d", args->port) :
-		printk(UM_KERN_INFO "undefined port");
+	args->port ? printk("port %d", args->port) :
+		printk("undefined port");
 }
 
 int vde_user_read(void *conn, void *buf, int len)
@@ -175,28 +175,8 @@ config IOMMU_LEAK
	  Add a simple leak tracer to the IOMMU code. This is useful when you
	  are debugging a buggy device driver that leaks IOMMU mappings.
 
-config MMIOTRACE
-	bool "Memory mapped IO tracing"
-	depends on DEBUG_KERNEL && PCI
-	select TRACING
-	help
-	  Mmiotrace traces Memory Mapped I/O access and is meant for
-	  debugging and reverse engineering. It is called from the ioremap
-	  implementation and works via page faults. Tracing is disabled by
-	  default and can be enabled at run-time.
-
-	  See Documentation/tracers/mmiotrace.txt.
-	  If you are not helping to develop drivers, say N.
-
-config MMIOTRACE_TEST
-	tristate "Test module for mmiotrace"
-	depends on MMIOTRACE && m
-	help
-	  This is a dumb module for testing mmiotrace. It is very dangerous
-	  as it will write garbage to IO memory starting at a given address.
-	  However, it should be safe to use on e.g. unused portion of VRAM.
-
-	  Say N, unless you absolutely know what you are doing.
+config HAVE_MMIOTRACE_SUPPORT
+	def_bool y
 
 #
 # IO delay types:
@@ -9,6 +9,13 @@
 #include <linux/types.h>
 #include <linux/ioctl.h>
 
+/* Select x86 specific features in <linux/kvm.h> */
+#define __KVM_HAVE_PIT
+#define __KVM_HAVE_IOAPIC
+#define __KVM_HAVE_DEVICE_ASSIGNMENT
+#define __KVM_HAVE_MSI
+#define __KVM_HAVE_USER_NMI
+
 /* Architectural interrupt line count. */
 #define KVM_NR_INTERRUPTS 256
 
@@ -32,8 +32,6 @@ static inline void get_memcfg_numa(void)
	get_memcfg_numa_flat();
 }
 
-extern int early_pfn_to_nid(unsigned long pfn);
-
 extern void resume_map_numa_kva(pgd_t *pgd);
 
 #else /* !CONFIG_NUMA */
@@ -40,8 +40,6 @@ static inline __attribute__((pure)) int phys_to_nid(unsigned long addr)
 #define node_end_pfn(nid)	(NODE_DATA(nid)->node_start_pfn + \
				 NODE_DATA(nid)->node_spanned_pages)
 
-extern int early_pfn_to_nid(unsigned long pfn);
-
 #ifdef CONFIG_NUMA_EMU
 #define FAKE_NODE_MIN_SIZE	(64 * 1024 * 1024)
 #define FAKE_NODE_MIN_HASH_MASK	(~(FAKE_NODE_MIN_SIZE - 1UL))
@@ -13,7 +13,6 @@
	 * Hooray, we are in Long 64-bit mode (but still running in low memory)
	 */
 ENTRY(wakeup_long64)
-wakeup_long64:
	movq	saved_magic, %rax
	movq	$0x123456789abcdef0, %rdx
	cmpq	%rdx, %rax
@@ -34,16 +33,12 @@ wakeup_long64:
 
	movq	saved_rip, %rax
	jmp	*%rax
+ENDPROC(wakeup_long64)
 
 bogus_64_magic:
	jmp	bogus_64_magic
 
-.align 2
-.p2align 4,,15
-.globl do_suspend_lowlevel
-	.type	do_suspend_lowlevel,@function
-do_suspend_lowlevel:
-.LFB5:
+ENTRY(do_suspend_lowlevel)
	subq	$8, %rsp
	xorl	%eax, %eax
	call	save_processor_state
@@ -67,7 +62,7 @@ do_suspend_lowlevel:
	pushfq
	popq	pt_regs_flags(%rax)
 
-	movq	$.L97, saved_rip(%rip)
+	movq	$resume_point, saved_rip(%rip)
 
	movq	%rsp, saved_rsp
	movq	%rbp, saved_rbp
@@ -78,14 +73,12 @@ do_suspend_lowlevel:
	addq	$8, %rsp
	movl	$3, %edi
	xorl	%eax, %eax
-	jmp	acpi_enter_sleep_state
-
-.L97:
-	.p2align 4,,7
-.L99:
-	.align	4
-	movl	$24, %eax
-	movw	%ax, %ds
+	call	acpi_enter_sleep_state
+	/* in case something went wrong, restore the machine status and go on */
+	jmp	resume_point
+
+	.align 4
+resume_point:
	/* We don't restore %rax, it must be 0 anyway */
	movq	$saved_context, %rax
	movq	saved_context_cr4(%rax), %rbx
@@ -117,12 +110,9 @@ do_suspend_lowlevel:
	xorl	%eax, %eax
	addq	$8, %rsp
	jmp	restore_processor_state
-.LFE5:
-.Lfe5:
-	.size	do_suspend_lowlevel, .Lfe5-do_suspend_lowlevel
-
+ENDPROC(do_suspend_lowlevel)
 .data
 ALIGN
 ENTRY(saved_rbp)	.quad	0
 ENTRY(saved_rsi)	.quad	0
 ENTRY(saved_rdi)	.quad	0
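
The wakeup_64.S changes replace hand-rolled .globl/.type/.size boilerplate with the kernel's linkage macros. For reference, their definitions in include/linux/linkage.h of this era are roughly:

	#define ENTRY(name) \
		.globl name; \
		ALIGN; \
		name:

	#define END(name) \
		.size name, .-name

	#define ENDPROC(name) \
		.type name, @function; \
		END(name)

ENTRY emits a global, aligned label; ENDPROC additionally marks the symbol as a function and records its size, which is what the removed .LFB5/.Lfe5/.size lines were doing by hand.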
@@ -837,7 +837,7 @@ void clear_local_APIC(void)
	}
 
	/* lets not touch this if we didn't frob it */
-#if defined(CONFIG_X86_MCE_P4THERMAL) || defined(X86_MCE_INTEL)
+#if defined(CONFIG_X86_MCE_P4THERMAL) || defined(CONFIG_X86_MCE_INTEL)
	if (maxlvt >= 5) {
		v = apic_read(APIC_LVTTHMR);
		apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED);
@@ -1192,6 +1192,7 @@ static int suspend(int vetoable)
	device_suspend(PMSG_SUSPEND);
	local_irq_disable();
	device_power_down(PMSG_SUSPEND);
+	sysdev_suspend(PMSG_SUSPEND);
 
	local_irq_enable();
 
@@ -1208,6 +1209,7 @@ static int suspend(int vetoable)
	if (err != APM_SUCCESS)
		apm_error("suspend", err);
	err = (err == APM_SUCCESS) ? 0 : -EIO;
+	sysdev_resume();
	device_power_up(PMSG_RESUME);
	local_irq_enable();
	device_resume(PMSG_RESUME);
@@ -1228,6 +1230,7 @@ static void standby(void)
 
	local_irq_disable();
	device_power_down(PMSG_SUSPEND);
+	sysdev_suspend(PMSG_SUSPEND);
	local_irq_enable();
 
	err = set_system_power_state(APM_STATE_STANDBY);
@@ -1235,6 +1238,7 @@ static void standby(void)
		apm_error("standby", err);
 
	local_irq_disable();
+	sysdev_resume();
	device_power_up(PMSG_RESUME);
	local_irq_enable();
 }
@@ -1157,8 +1157,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
	data->cpu = pol->cpu;
	data->currpstate = HW_PSTATE_INVALID;
 
-	rc = powernow_k8_cpu_init_acpi(data);
-	if (rc) {
+	if (powernow_k8_cpu_init_acpi(data)) {
		/*
		 * Use the PSB BIOS structure. This is only availabe on
		 * an UP version, and is deprecated by AMD.
@@ -1176,17 +1175,20 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
			       "ACPI maintainers and complain to your BIOS "
			       "vendor.\n");
 #endif
-			goto err_out;
+			kfree(data);
+			return -ENODEV;
		}
		if (pol->cpu != 0) {
			printk(KERN_ERR FW_BUG PFX "No ACPI _PSS objects for "
			       "CPU other than CPU0. Complain to your BIOS "
			       "vendor.\n");
-			goto err_out;
+			kfree(data);
+			return -ENODEV;
		}
		rc = find_psb_table(data);
		if (rc) {
-			goto err_out;
+			kfree(data);
+			return -ENODEV;
		}
		/* Take a crude guess here.
		 * That guess was in microseconds, so multiply with 1000 */
@@ -295,11 +295,11 @@ void do_machine_check(struct pt_regs * regs, long error_code)
		 * If we know that the error was in user space, send a
		 * SIGBUS. Otherwise, panic if tolerance is low.
		 *
-		 * do_exit() takes an awful lot of locks and has a slight
+		 * force_sig() takes an awful lot of locks and has a slight
		 * risk of deadlocking.
		 */
		if (user_space) {
-			do_exit(SIGBUS);
+			force_sig(SIGBUS, current);
		} else if (panic_on_oops || tolerant < 2) {
			mce_panic("Uncorrected machine check",
				&panicm, mcestart);
@@ -490,7 +490,7 @@ static void __cpuinit mce_cpu_quirks(struct cpuinfo_x86 *c)
 
 }
 
-static void __cpuinit mce_cpu_features(struct cpuinfo_x86 *c)
+static void mce_cpu_features(struct cpuinfo_x86 *c)
 {
	switch (c->x86_vendor) {
	case X86_VENDOR_INTEL:
@@ -734,6 +734,7 @@ __setup("mce=", mcheck_enable);
 static int mce_resume(struct sys_device *dev)
 {
	mce_init(NULL);
+	mce_cpu_features(&current_cpu_data);
	return 0;
 }
 
@@ -121,7 +121,7 @@ static long threshold_restart_bank(void *_tr)
 }
 
 /* cpu init entry point, called from mce.c with preempt off */
-void __cpuinit mce_amd_feature_init(struct cpuinfo_x86 *c)
+void mce_amd_feature_init(struct cpuinfo_x86 *c)
 {
	unsigned int bank, block;
	unsigned int cpu = smp_processor_id();
@@ -31,7 +31,7 @@ asmlinkage void smp_thermal_interrupt(void)
	irq_exit();
 }
 
-static void __cpuinit intel_init_thermal(struct cpuinfo_x86 *c)
+static void intel_init_thermal(struct cpuinfo_x86 *c)
 {
	u32 l, h;
	int tm2 = 0;
@@ -85,7 +85,7 @@ static void __cpuinit intel_init_thermal(struct cpuinfo_x86 *c)
		return;
	}
 
-void __cpuinit mce_intel_feature_init(struct cpuinfo_x86 *c)
+void mce_intel_feature_init(struct cpuinfo_x86 *c)
 {
	intel_init_thermal(c);
 }
@@ -111,9 +111,6 @@ void cpu_idle(void)
			check_pgt_cache();
			rmb();
 
-			if (rcu_pending(cpu))
-				rcu_check_callbacks(cpu, 0);
-
			if (cpu_is_offline(cpu))
				play_dead();
 
@@ -1051,7 +1051,7 @@ void __init trap_init_hook(void)
 
 static struct irqaction irq0  = {
	.handler = timer_interrupt,
-	.flags = IRQF_DISABLED | IRQF_NOBALANCING | IRQF_IRQPOLL,
+	.flags = IRQF_DISABLED | IRQF_NOBALANCING | IRQF_IRQPOLL | IRQF_TIMER,
	.mask = CPU_MASK_NONE,
	.name = "timer"
 };
@@ -115,7 +115,7 @@ unsigned long __init calibrate_cpu(void)
 
 static struct irqaction irq0 = {
	.handler	= timer_interrupt,
-	.flags		= IRQF_DISABLED | IRQF_IRQPOLL | IRQF_NOBALANCING,
+	.flags		= IRQF_DISABLED | IRQF_IRQPOLL | IRQF_NOBALANCING | IRQF_TIMER,
	.mask		= CPU_MASK_NONE,
	.name		= "timer"
 };
@@ -202,7 +202,7 @@ static irqreturn_t vmi_timer_interrupt(int irq, void *dev_id)
 static struct irqaction vmi_clock_action  = {
	.name		= "vmi-timer",
	.handler	= vmi_timer_interrupt,
-	.flags		= IRQF_DISABLED | IRQF_NOBALANCING,
+	.flags		= IRQF_DISABLED | IRQF_NOBALANCING | IRQF_TIMER,
	.mask		= CPU_MASK_ALL,
 };
 
@@ -283,10 +283,13 @@ void __devinit vmi_time_ap_init(void)
 #endif
 
 /** vmi clocksource */
+static struct clocksource clocksource_vmi;
 
 static cycle_t read_real_cycles(void)
 {
-	return vmi_timer_ops.get_cycle_counter(VMI_CYCLES_REAL);
+	cycle_t ret = (cycle_t)vmi_timer_ops.get_cycle_counter(VMI_CYCLES_REAL);
+	return ret >= clocksource_vmi.cycle_last ?
+		ret : clocksource_vmi.cycle_last;
 }
 
 static struct clocksource clocksource_vmi = {
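
The read_real_cycles() change is a monotonicity clamp: a paravirtual cycle counter can appear to jump backwards (for example across VCPU migration), which would confuse the clocksource core. A simplified model of the clamp, with the last-returned value tracked locally rather than in struct clocksource:

	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t cycle_t;

	static cycle_t cycle_last;

	static cycle_t read_clamped(cycle_t hw)
	{
		/* never hand back a value older than the last one returned */
		if (hw >= cycle_last)
			cycle_last = hw;
		return cycle_last;
	}

	int main(void)
	{
		printf("%llu\n", (unsigned long long)read_clamped(100)); /* 100 */
		printf("%llu\n", (unsigned long long)read_clamped(90));  /* 100, not 90 */
		return 0;
	}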
@@ -207,7 +207,7 @@ static int __pit_timer_fn(struct kvm_kpit_state *ps)
		hrtimer_add_expires_ns(&pt->timer, pt->period);
		pt->scheduled = hrtimer_get_expires_ns(&pt->timer);
		if (pt->period)
-			ps->channels[0].count_load_time = hrtimer_get_expires(&pt->timer);
+			ps->channels[0].count_load_time = ktime_get();
 
	return (pt->period == 0 ? 0 : 1);
 }
@@ -87,13 +87,6 @@ void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_inject_pending_timer_irqs);
 
-void kvm_timer_intr_post(struct kvm_vcpu *vcpu, int vec)
-{
-	kvm_apic_timer_intr_post(vcpu, vec);
-	/* TODO: PIT, RTC etc. */
-}
-EXPORT_SYMBOL_GPL(kvm_timer_intr_post);
-
 void __kvm_migrate_timers(struct kvm_vcpu *vcpu)
 {
	__kvm_migrate_apic_timer(vcpu);
@@ -89,7 +89,6 @@ static inline int irqchip_in_kernel(struct kvm *kvm)
 
 void kvm_pic_reset(struct kvm_kpic_state *s);
 
-void kvm_timer_intr_post(struct kvm_vcpu *vcpu, int vec);
 void kvm_inject_pending_timer_irqs(struct kvm_vcpu *vcpu);
 void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu);
 void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu);
@@ -35,6 +35,12 @@
 #include "kvm_cache_regs.h"
 #include "irq.h"
 
+#ifndef CONFIG_X86_64
+#define mod_64(x, y) ((x) - (y) * div64_u64(x, y))
+#else
+#define mod_64(x, y) ((x) % (y))
+#endif
+
 #define PRId64 "d"
 #define PRIx64 "llx"
 #define PRIu64 "u"
@@ -511,52 +517,22 @@ static void apic_send_ipi(struct kvm_lapic *apic)
 
 static u32 apic_get_tmcct(struct kvm_lapic *apic)
 {
-	u64 counter_passed;
-	ktime_t passed, now;
+	ktime_t remaining;
+	s64 ns;
	u32 tmcct;
 
	ASSERT(apic != NULL);
 
-	now = apic->timer.dev.base->get_time();
-	tmcct = apic_get_reg(apic, APIC_TMICT);
-
	/* if initial count is 0, current count should also be 0 */
-	if (tmcct == 0)
+	if (apic_get_reg(apic, APIC_TMICT) == 0)
		return 0;
 
-	if (unlikely(ktime_to_ns(now) <=
-		ktime_to_ns(apic->timer.last_update))) {
-		/* Wrap around */
-		passed = ktime_add(( {
-			(ktime_t) {
-				.tv64 = KTIME_MAX -
-				(apic->timer.last_update).tv64}; }
-			), now);
-		apic_debug("time elapsed\n");
-	} else
-		passed = ktime_sub(now, apic->timer.last_update);
-
-	counter_passed = div64_u64(ktime_to_ns(passed),
-				   (APIC_BUS_CYCLE_NS * apic->timer.divide_count));
-
-	if (counter_passed > tmcct) {
-		if (unlikely(!apic_lvtt_period(apic))) {
-			/* one-shot timers stick at 0 until reset */
-			tmcct = 0;
-		} else {
-			/*
-			 * periodic timers reset to APIC_TMICT when they
-			 * hit 0. The while loop simulates this happening N
-			 * times. (counter_passed %= tmcct) would also work,
-			 * but might be slower or not work on 32-bit??
-			 */
-			while (counter_passed > tmcct)
-				counter_passed -= tmcct;
-			tmcct -= counter_passed;
-		}
-	} else {
-		tmcct -= counter_passed;
-	}
+	remaining = hrtimer_expires_remaining(&apic->timer.dev);
+	if (ktime_to_ns(remaining) < 0)
+		remaining = ktime_set(0, 0);
+
+	ns = mod_64(ktime_to_ns(remaining), apic->timer.period);
+	tmcct = div64_u64(ns, (APIC_BUS_CYCLE_NS * apic->timer.divide_count));
 
	return tmcct;
 }
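
The mod_64() helper added above exists because 32-bit x86 has no native 64-bit division; the macro builds a modulo out of div64_u64(). A self-contained check of the identity x % y == x - y * (x / y), with plain C division standing in for div64_u64():

	#include <stdio.h>
	#include <stdint.h>

	static uint64_t div64_u64(uint64_t x, uint64_t y) { return x / y; }
	#define mod_64(x, y) ((x) - (y) * div64_u64((x), (y)))

	int main(void)
	{
		uint64_t ns = 12345678901234ULL, period = 1000000ULL;
		printf("%llu\n", (unsigned long long)mod_64(ns, period)); /* 901234 */
		return 0;
	}

With that in place, apic_get_tmcct() can derive the current count directly from the hrtimer's remaining time instead of tracking last_update by hand, which is what lets the wrap-around and while-loop code be deleted.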
@@ -653,8 +629,6 @@ static void start_apic_timer(struct kvm_lapic *apic)
 {
	ktime_t now = apic->timer.dev.base->get_time();
 
-	apic->timer.last_update = now;
-
	apic->timer.period = apic_get_reg(apic, APIC_TMICT) *
		    APIC_BUS_CYCLE_NS * apic->timer.divide_count;
	atomic_set(&apic->timer.pending, 0);
@@ -1110,16 +1084,6 @@ void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu)
	}
 }
 
-void kvm_apic_timer_intr_post(struct kvm_vcpu *vcpu, int vec)
-{
-	struct kvm_lapic *apic = vcpu->arch.apic;
-
-	if (apic && apic_lvt_vector(apic, APIC_LVTT) == vec)
-		apic->timer.last_update = ktime_add_ns(
-				apic->timer.last_update,
-				apic->timer.period);
-}
-
 int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu)
 {
	int vector = kvm_apic_has_interrupt(vcpu);
@@ -12,7 +12,6 @@ struct kvm_lapic {
		atomic_t pending;
		s64 period;	/* unit: ns */
		u32 divide_count;
-		ktime_t last_update;
		struct hrtimer dev;
	} timer;
	struct kvm_vcpu *vcpu;
@@ -42,7 +41,6 @@ void kvm_set_apic_base(struct kvm_vcpu *vcpu, u64 data);
 void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu);
 int kvm_lapic_enabled(struct kvm_vcpu *vcpu);
 int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu);
-void kvm_apic_timer_intr_post(struct kvm_vcpu *vcpu, int vec);
 
 void kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
 void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
@@ -1698,8 +1698,13 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
	if (largepage)
		spte |= PT_PAGE_SIZE_MASK;
	if (mt_mask) {
-		mt_mask = get_memory_type(vcpu, gfn) <<
-			  kvm_x86_ops->get_mt_mask_shift();
+		if (!kvm_is_mmio_pfn(pfn)) {
+			mt_mask = get_memory_type(vcpu, gfn) <<
+				kvm_x86_ops->get_mt_mask_shift();
+			mt_mask |= VMX_EPT_IGMT_BIT;
+		} else
+			mt_mask = MTRR_TYPE_UNCACHABLE <<
+				kvm_x86_ops->get_mt_mask_shift();
		spte |= mt_mask;
	}
 
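
The set_spte() branch above picks memory-type bits per frame: RAM-backed guest pages take the guest-derived type and set the "ignore guest memory type" bit, while MMIO frames are forced uncachable. A simplified model of that decision (constants inlined; on EPT the type sits at bit 3 and IGMT at bit 6):

	#include <stdint.h>
	#include <stdio.h>

	#define EPT_MT_SHIFT 3
	#define EPT_IGMT_BIT (1ULL << 6)
	#define MTRR_TYPE_UNCACHABLE 0ULL

	static uint64_t memtype_bits(int is_mmio, uint64_t guest_type)
	{
		if (!is_mmio)
			return (guest_type << EPT_MT_SHIFT) | EPT_IGMT_BIT;
		/* device memory must never be cached */
		return MTRR_TYPE_UNCACHABLE << EPT_MT_SHIFT;
	}

	int main(void)
	{
		printf("ram:  %#llx\n", (unsigned long long)memtype_bits(0, 6)); /* WB */
		printf("mmio: %#llx\n", (unsigned long long)memtype_bits(1, 6));
		return 0;
	}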
@@ -1600,7 +1600,6 @@ static void svm_intr_assist(struct kvm_vcpu *vcpu)
	/* Okay, we can deliver the interrupt: grab it and update PIC state. */
	intr_vector = kvm_cpu_get_interrupt(vcpu);
	svm_inject_irq(svm, intr_vector);
-	kvm_timer_intr_post(vcpu, intr_vector);
 out:
	update_cr8_intercept(vcpu);
 }
@@ -903,6 +903,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
		data = vmcs_readl(GUEST_SYSENTER_ESP);
		break;
	default:
+		vmx_load_host_state(to_vmx(vcpu));
		msr = find_msr_entry(to_vmx(vcpu), msr_index);
		if (msr) {
			data = msr->data;
@@ -3285,7 +3286,6 @@ static void vmx_intr_assist(struct kvm_vcpu *vcpu)
	}
	if (vcpu->arch.interrupt.pending) {
		vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);
-		kvm_timer_intr_post(vcpu, vcpu->arch.interrupt.nr);
		if (kvm_cpu_has_interrupt(vcpu))
			enable_irq_window(vcpu);
	}
@@ -3687,8 +3687,7 @@ static int __init vmx_init(void)
	if (vm_need_ept()) {
		bypass_guest_pf = 0;
		kvm_mmu_set_base_ptes(VMX_EPT_READABLE_MASK |
-			VMX_EPT_WRITABLE_MASK |
-			VMX_EPT_IGMT_BIT);
+			VMX_EPT_WRITABLE_MASK);
		kvm_mmu_set_mask_ptes(0ull, 0ull, 0ull, 0ull,
				VMX_EPT_EXECUTABLE_MASK,
				VMX_EPT_DEFAULT_MT << VMX_EPT_MT_EPTE_SHIFT);
@@ -967,7 +967,6 @@ int kvm_dev_ioctl_check_extension(long ext)
	case KVM_CAP_MMU_SHADOW_CACHE_CONTROL:
	case KVM_CAP_SET_TSS_ADDR:
	case KVM_CAP_EXT_CPUID:
-	case KVM_CAP_CLOCKSOURCE:
	case KVM_CAP_PIT:
	case KVM_CAP_NOP_IO_DELAY:
	case KVM_CAP_MP_STATE:
@@ -992,6 +991,9 @@ int kvm_dev_ioctl_check_extension(long ext)
	case KVM_CAP_IOMMU:
		r = iommu_found();
		break;
+	case KVM_CAP_CLOCKSOURCE:
+		r = boot_cpu_has(X86_FEATURE_CONSTANT_TSC);
+		break;
	default:
		r = 0;
		break;
@@ -4127,9 +4129,13 @@ static void kvm_free_vcpus(struct kvm *kvm)
 
 }
 
-void kvm_arch_destroy_vm(struct kvm *kvm)
+void kvm_arch_sync_events(struct kvm *kvm)
 {
	kvm_free_all_assigned_devices(kvm);
+}
+
+void kvm_arch_destroy_vm(struct kvm *kvm)
+{
	kvm_iommu_unmap_guest(kvm);
	kvm_free_pit(kvm);
	kfree(kvm->arch.vpic);
@@ -57,7 +57,7 @@ void __init trap_init_hook(void)
 
 static struct irqaction irq0 = {
	.handler = timer_interrupt,
-	.flags = IRQF_DISABLED | IRQF_NOBALANCING | IRQF_IRQPOLL,
+	.flags = IRQF_DISABLED | IRQF_NOBALANCING | IRQF_IRQPOLL | IRQF_TIMER,
	.mask = CPU_MASK_NONE,
	.name = "timer"
 };
@@ -166,7 +166,7 @@ int __init compute_hash_shift(struct bootnode *nodes, int numnodes,
	return shift;
 }
 
-int early_pfn_to_nid(unsigned long pfn)
+int __meminit __early_pfn_to_nid(unsigned long pfn)
 {
	return phys_to_nid(pfn << PAGE_SHIFT);
 }
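
The __early_pfn_to_nid() rename pairs with a generic wrapper from the same patch series: architectures now return -1 for pfns that fall into memory holes, and the common code applies one fallback policy instead of every caller guessing. The wrapper merged into mm/page_alloc.c looks roughly like this:

	int __meminit early_pfn_to_nid(unsigned long pfn)
	{
		int nid;

		nid = __early_pfn_to_nid(pfn);
		if (nid < 0)
			nid = 0; /* memory hole: fall back to node 0 */
		return nid;
	}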
@@ -508,18 +508,13 @@ static int split_large_page(pte_t *kpte, unsigned long address)
 #endif
 
	/*
-	 * Install the new, split up pagetable. Important details here:
+	 * Install the new, split up pagetable.
	 *
-	 * On Intel the NX bit of all levels must be cleared to make a
-	 * page executable. See section 4.13.2 of Intel 64 and IA-32
-	 * Architectures Software Developer's Manual).
-	 *
-	 * Mark the entry present. The current mapping might be
-	 * set to not present, which we preserved above.
+	 * We use the standard kernel pagetable protections for the new
+	 * pagetable protections, the actual ptes set above control the
+	 * primary protection behavior:
	 */
-	ref_prot = pte_pgprot(pte_mkexec(pte_clrhuge(*kpte)));
-	pgprot_val(ref_prot) |= _PAGE_PRESENT;
-	__set_pmd_pte(kpte, address, mk_pte(base, ref_prot));
+	__set_pmd_pte(kpte, address, mk_pte(base, __pgprot(_KERNPG_TABLE)));
	base = NULL;
 
 out_unlock:
@@ -209,12 +209,19 @@ void blk_abort_queue(struct request_queue *q)
 {
	unsigned long flags;
	struct request *rq, *tmp;
+	LIST_HEAD(list);
 
	spin_lock_irqsave(q->queue_lock, flags);
 
	elv_abort_queue(q);
 
-	list_for_each_entry_safe(rq, tmp, &q->timeout_list, timeout_list)
+	/*
+	 * Splice entries to local list, to avoid deadlocking if entries
+	 * get readded to the timeout list by error handling
+	 */
+	list_splice_init(&q->timeout_list, &list);
+
+	list_for_each_entry_safe(rq, tmp, &list, timeout_list)
		blk_abort_request(rq);
 
	spin_unlock_irqrestore(q->queue_lock, flags);
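
The blk_abort_queue() change is a standard self-deadlock avoidance pattern: splice the shared list onto a private head under the lock, then walk the private copy, so handlers that re-add requests to the original list cannot be revisited in the same pass. A userspace miniature of the idea with a singly-linked list:

	#include <stdio.h>

	struct req { int id; struct req *next; };

	/* detach the whole pending list, leaving it empty for re-adds */
	static struct req *splice_init(struct req **pending)
	{
		struct req *local = *pending;
		*pending = NULL;
		return local;
	}

	int main(void)
	{
		struct req c = {3, NULL}, b = {2, &c}, a = {1, &b};
		struct req *pending = &a;
		for (struct req *r = splice_init(&pending); r; ) {
			struct req *next = r->next;
			printf("abort %d\n", r->id);
			/* a handler could now safely re-queue r onto pending */
			r = next;
		}
		return 0;
	}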
@@ -142,7 +142,7 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
 
	what |= ddir_act[rw & WRITE];
	what |= MASK_TC_BIT(rw, BARRIER);
-	what |= MASK_TC_BIT(rw, SYNC);
+	what |= MASK_TC_BIT(rw, SYNCIO);
	what |= MASK_TC_BIT(rw, AHEAD);
	what |= MASK_TC_BIT(rw, META);
	what |= MASK_TC_BIT(rw, DISCARD);
Some files were not shown because too many files have changed in this diff.