Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:

1) The addition of nftables. No longer will we need protocol-aware firewall filtering modules; it can all live in userspace.

   At the core of nftables is a, for lack of a better term, virtual machine that executes byte codes to inspect packets or metadata (arriving interface index, etc.) and make verdict decisions. Besides support for loading packet contents and comparing them, the interpreter supports lookups in various data structures as fundamental operations. For example, sets are supported, and therefore one could create a set of whitelist IP address entries which have ACCEPT verdicts attached to them, and use the appropriate byte codes to do such lookups.

   Since the interpreted code is composed in userspace, userspace can do things like optimize the code before giving it to the kernel.

   Another major improvement is the capability of atomically updating portions of the ruleset. In the existing netfilter implementation, one has to update the entire rule set in order to make a change, and this is very expensive.

   Userspace tools exist to create nftables rules from existing netfilter rule sets, but both kernel implementations will need to co-exist for quite some time as we transition from the old to the new stuff.

   Kudos to Patrick McHardy, Pablo Neira Ayuso, and others who have worked so hard on this.

2) Daniel Borkmann and Hannes Frederic Sowa made several improvements to our pseudo-random number generator, mostly used for things like UDP port randomization and netfilter. In particular, the taus88 generator is upgraded to taus113, and test cases are added.

3) Support 64-bit rates in HTB and TBF schedulers, from Eric Dumazet and Yang Yingliang.

4) Add support for new 577xx tigon3 chips to the tg3 driver, from Nithin Sujir.

5) Fix two fatal flaws in TCP dynamic right sizing, from Eric Dumazet, Neal Cardwell, and Yuchung Cheng.

6) Allow IP_TOS and IP_TTL to be specified in sendmsg() ancillary control message data, much like other socket option attributes. From Francesco Fusco.

7) Allow applications to specify a cap on the rate computed automatically by the kernel for pacing flows, via a new SO_MAX_PACING_RATE socket option. From Eric Dumazet.

8) Make the initial autotuned send buffer sizing in TCP more closely reflect actual needs, from Eric Dumazet.

9) Currently early socket demux only happens for TCP sockets, but we can do it for connected UDP sockets too. Implementation from Shawn Bohrer.

10) Refactor inet socket demux with the goal of improving hash demux performance for listening sockets, being able to use RCU lookups even on request sockets, and eliminating the listening lock contention. From Eric Dumazet.

11) The bonding layer has many demuxes in its fast path, and an RCU conversion was started back in 3.11; several changes here extend the RCU usage to even more locations. From Ding Tianhong and Wang Yufen, based upon suggestions by Nikolay Aleksandrov and Veaceslav Falico.

12) Allow stackability of segmentation offloads to, in particular, allow segmentation offloading over tunnels. From Eric Dumazet.

13) Significantly improve the handling of secret keys we input into the various hash functions in the inet hashtables, TCP fast open, as well as syncookies. From Hannes Frederic Sowa. The key fundamental operation is "net_get_random_once()", which uses static keys. Hannes even extended this to ipv4/ipv6 fragmentation handling and our generic flow dissector.

14) The generic driver layer now takes care to set the driver data to NULL on device removal, so drivers no longer need to explicitly set it to NULL themselves. Many drivers have been cleaned up in this way, from Jingoo Han.

15) Add a BPF based packet scheduler classifier, from Daniel Borkmann.

16) Improve CRC32 interfaces and generic SKB checksum iterators so that SCTP's checksumming can be handled more cleanly. Also from Daniel Borkmann.

17) Add a new PMTU discovery mode, IP_PMTUDISC_INTERFACE, which forces using the interface MTU value. This helps avoid PMTU attacks, particularly on DNS servers. From Hannes Frederic Sowa.

18) Use generic XPS for transmit queue steering rather than an internal (re-)implementation in virtio-net. From Jason Wang.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1622 commits)
  random32: add test cases for taus113 implementation
  random32: upgrade taus88 generator to taus113 from errata paper
  random32: move rnd_state to linux/random.h
  random32: add prandom_reseed_late() and call when nonblocking pool becomes initialized
  random32: add periodic reseeding
  random32: fix off-by-one in seeding requirement
  PHY: Add RTL8201CP phy_driver to realtek
  xtsonic: add missing platform_set_drvdata() in xtsonic_probe()
  macmace: add missing platform_set_drvdata() in mace_probe()
  ethernet/arc/arc_emac: add missing platform_set_drvdata() in arc_emac_probe()
  ipv6: protect for_each_sk_fl_rcu in mem_check with rcu_read_lock_bh
  vlan: Implement vlan_dev_get_egress_qos_mask as an inline.
  ixgbe: add warning when max_vfs is out of range.
  igb: Update link modes display in ethtool
  netfilter: push reasm skb through instead of original frag skbs
  ip6_output: fragment outgoing reassembled skb properly
  MAINTAINERS: mv643xx_eth: take over maintainership from Lennart
  net_sched: tbf: support of 64bit rates
  ixgbe: deleting dfwd stations out of order can cause null ptr deref
  ixgbe: fix build err, num_rx_queues is only available with CONFIG_RPS
  ...
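Two of the new userspace-visible knobs mentioned above (SO_MAX_PACING_RATE from item 7 and IP_PMTUDISC_INTERFACE from item 17) are plain setsockopt() options. The following is a minimal sketch of how an application might use them, not part of this commit; the fallback #defines are only needed when the libc headers predate this merge, and the numeric values are taken from the asm-generic/uapi headers touched by this diff.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef SO_MAX_PACING_RATE
    #define SO_MAX_PACING_RATE 47       /* asm-generic value; sparc/parisc differ */
    #endif
    #ifndef IP_PMTUDISC_INTERFACE
    #define IP_PMTUDISC_INTERFACE 4     /* from include/uapi/linux/in.h */
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            /* Item 7: cap the pacing rate the kernel computes for this flow,
             * in bytes per second (a 32-bit value at this point). */
            unsigned int max_pacing = 10 * 1000 * 1000;    /* ~10 MB/s */
            if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                           &max_pacing, sizeof(max_pacing)) < 0)
                    perror("SO_MAX_PACING_RATE");

            /* Item 17: always use the interface MTU for this socket, ignoring
             * ICMP-driven path MTU updates (useful e.g. for DNS servers). */
            int pmtu = IP_PMTUDISC_INTERFACE;
            if (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER,
                           &pmtu, sizeof(pmtu)) < 0)
                    perror("IP_PMTUDISC_INTERFACE");

            return 0;
    }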
commit 42a2d923cc
1331 changed files with 78918 additions and 32365 deletions
@ -1,13 +1,13 @@
|
|||
|
||||
What: /sys/class/net/<iface>/batman-adv/iface_status
|
||||
Date: May 2010
|
||||
Contact: Marek Lindner <lindner_marek@yahoo.de>
|
||||
Contact: Marek Lindner <mareklindner@neomailbox.ch>
|
||||
Description:
|
||||
Indicates the status of <iface> as it is seen by batman.
|
||||
|
||||
What: /sys/class/net/<iface>/batman-adv/mesh_iface
|
||||
Date: May 2010
|
||||
Contact: Marek Lindner <lindner_marek@yahoo.de>
|
||||
Contact: Marek Lindner <mareklindner@neomailbox.ch>
|
||||
Description:
|
||||
The /sys/class/net/<iface>/batman-adv/mesh_iface file
|
||||
displays the batman mesh interface this <iface>
|
||||
|
|
|
@ -1,22 +1,23 @@
|
|||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/aggregated_ogms
|
||||
Date: May 2010
|
||||
Contact: Marek Lindner <lindner_marek@yahoo.de>
|
||||
Contact: Marek Lindner <mareklindner@neomailbox.ch>
|
||||
Description:
|
||||
Indicates whether the batman protocol messages of the
|
||||
mesh <mesh_iface> shall be aggregated or not.
|
||||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/ap_isolation
|
||||
What: /sys/class/net/<mesh_iface>/mesh/<vlan_subdir>/ap_isolation
|
||||
Date: May 2011
|
||||
Contact: Antonio Quartulli <ordex@autistici.org>
|
||||
Contact: Antonio Quartulli <antonio@meshcoding.com>
|
||||
Description:
|
||||
Indicates whether the data traffic going from a
|
||||
wireless client to another wireless client will be
|
||||
silently dropped.
|
||||
silently dropped. <vlan_subdir> is empty when referring
|
||||
to the untagged lan.
|
||||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/bonding
|
||||
Date: June 2010
|
||||
Contact: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>
|
||||
Contact: Simon Wunderlich <sw@simonwunderlich.de>
|
||||
Description:
|
||||
Indicates whether the data traffic going through the
|
||||
mesh will be sent using multiple interfaces at the
|
||||
|
@ -24,7 +25,7 @@ Description:
|
|||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/bridge_loop_avoidance
|
||||
Date: November 2011
|
||||
Contact: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>
|
||||
Contact: Simon Wunderlich <sw@simonwunderlich.de>
|
||||
Description:
|
||||
Indicates whether the bridge loop avoidance feature
|
||||
is enabled. This feature detects and avoids loops
|
||||
|
@ -41,21 +42,21 @@ Description:
|
|||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/gw_bandwidth
|
||||
Date: October 2010
|
||||
Contact: Marek Lindner <lindner_marek@yahoo.de>
|
||||
Contact: Marek Lindner <mareklindner@neomailbox.ch>
|
||||
Description:
|
||||
Defines the bandwidth which is propagated by this
|
||||
node if gw_mode was set to 'server'.
|
||||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/gw_mode
|
||||
Date: October 2010
|
||||
Contact: Marek Lindner <lindner_marek@yahoo.de>
|
||||
Contact: Marek Lindner <mareklindner@neomailbox.ch>
|
||||
Description:
|
||||
Defines the state of the gateway features. Can be
|
||||
either 'off', 'client' or 'server'.
|
||||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/gw_sel_class
|
||||
Date: October 2010
|
||||
Contact: Marek Lindner <lindner_marek@yahoo.de>
|
||||
Contact: Marek Lindner <mareklindner@neomailbox.ch>
|
||||
Description:
|
||||
Defines the selection criteria this node will use
|
||||
to choose a gateway if gw_mode was set to 'client'.
|
||||
|
@ -77,25 +78,14 @@ Description:
|
|||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/orig_interval
|
||||
Date: May 2010
|
||||
Contact: Marek Lindner <lindner_marek@yahoo.de>
|
||||
Contact: Marek Lindner <mareklindner@neomailbox.ch>
|
||||
Description:
|
||||
Defines the interval in milliseconds in which batman
|
||||
sends its protocol messages.
|
||||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/routing_algo
|
||||
Date: Dec 2011
|
||||
Contact: Marek Lindner <lindner_marek@yahoo.de>
|
||||
Contact: Marek Lindner <mareklindner@neomailbox.ch>
|
||||
Description:
|
||||
Defines the routing procotol this mesh instance
|
||||
uses to find the optimal paths through the mesh.
|
||||
|
||||
What: /sys/class/net/<mesh_iface>/mesh/vis_mode
|
||||
Date: May 2010
|
||||
Contact: Marek Lindner <lindner_marek@yahoo.de>
|
||||
Description:
|
||||
Each batman node only maintains information about its
|
||||
own local neighborhood, therefore generating graphs
|
||||
showing the topology of the entire mesh is not easily
|
||||
feasible without having a central instance to collect
|
||||
the local topologies from all nodes. This file allows
|
||||
to activate the collecting (server) mode.
|
||||
|
|
|
@ -152,8 +152,8 @@
|
|||
!Finclude/net/cfg80211.h cfg80211_scan_request
|
||||
!Finclude/net/cfg80211.h cfg80211_scan_done
|
||||
!Finclude/net/cfg80211.h cfg80211_bss
|
||||
!Finclude/net/cfg80211.h cfg80211_inform_bss_frame
|
||||
!Finclude/net/cfg80211.h cfg80211_inform_bss
|
||||
!Finclude/net/cfg80211.h cfg80211_inform_bss_width_frame
|
||||
!Finclude/net/cfg80211.h cfg80211_inform_bss_width
|
||||
!Finclude/net/cfg80211.h cfg80211_unlink_bss
|
||||
!Finclude/net/cfg80211.h cfg80211_find_ie
|
||||
!Finclude/net/cfg80211.h ieee80211_bss_get_ie
|
||||
|
|
28
Documentation/devicetree/bindings/net/cpsw-phy-sel.txt
Normal file
|
@ -0,0 +1,28 @@
|
|||
TI CPSW Phy mode Selection Device Tree Bindings
|
||||
-----------------------------------------------
|
||||
|
||||
Required properties:
|
||||
- compatible : Should be "ti,am3352-cpsw-phy-sel"
|
||||
- reg : physical base address and size of the cpsw
|
||||
registers map
|
||||
- reg-names : names of the register map given in "reg" node
|
||||
|
||||
Optional properties:
|
||||
-rmii-clock-ext : If present, the driver will configure the RMII
|
||||
interface to external clock usage
|
||||
|
||||
Examples:
|
||||
|
||||
phy_sel: cpsw-phy-sel@44e10650 {
|
||||
compatible = "ti,am3352-cpsw-phy-sel";
|
||||
reg= <0x44e10650 0x4>;
|
||||
reg-names = "gmii-sel";
|
||||
};
|
||||
|
||||
(or)
|
||||
phy_sel: cpsw-phy-sel@44e10650 {
|
||||
compatible = "ti,am3352-cpsw-phy-sel";
|
||||
reg= <0x44e10650 0x4>;
|
||||
reg-names = "gmii-sel";
|
||||
rmii-clock-ext;
|
||||
};
|
|
@ -69,8 +69,7 @@ folder:
|
|||
# aggregated_ogms gw_bandwidth log_level
|
||||
# ap_isolation gw_mode orig_interval
|
||||
# bonding gw_sel_class routing_algo
|
||||
# bridge_loop_avoidance hop_penalty vis_mode
|
||||
# fragmentation
|
||||
# bridge_loop_avoidance hop_penalty fragmentation
|
||||
|
||||
|
||||
There is a special folder for debugging information:
|
||||
|
@ -78,7 +77,7 @@ There is a special folder for debugging information:
|
|||
# ls /sys/kernel/debug/batman_adv/bat0/
|
||||
# bla_backbone_table log transtable_global
|
||||
# bla_claim_table originators transtable_local
|
||||
# gateways socket vis_data
|
||||
# gateways socket
|
||||
|
||||
Some of the files contain all sort of status information regard-
|
||||
ing the mesh network. For example, you can view the table of
|
||||
|
@ -127,51 +126,6 @@ ously assigned to interfaces now used by batman advanced, e.g.
|
|||
# ifconfig eth0 0.0.0.0
|
||||
|
||||
|
||||
VISUALIZATION
|
||||
-------------
|
||||
|
||||
If you want topology visualization, at least one mesh node must
|
||||
be configured as VIS-server:
|
||||
|
||||
# echo "server" > /sys/class/net/bat0/mesh/vis_mode
|
||||
|
||||
Each node is either configured as "server" or as "client" (de-
|
||||
fault: "client"). Clients send their topology data to the server
|
||||
next to them, and server synchronize with other servers. If there
|
||||
is no server configured (default) within the mesh, no topology
|
||||
information will be transmitted. With these "synchronizing
|
||||
servers", there can be 1 or more vis servers sharing the same (or
|
||||
at least very similar) data.
|
||||
|
||||
When configured as server, you can get a topology snapshot of
|
||||
your mesh:
|
||||
|
||||
# cat /sys/kernel/debug/batman_adv/bat0/vis_data
|
||||
|
||||
This raw output is intended to be easily parsable and convertable
|
||||
with other tools. Have a look at the batctl README if you want a
|
||||
vis output in dot or json format for instance and how those out-
|
||||
puts could then be visualised in an image.
|
||||
|
||||
The raw format consists of comma separated values per entry where
|
||||
each entry is giving information about a certain source inter-
|
||||
face. Each entry can/has to have the following values:
|
||||
-> "mac" - mac address of an originator's source interface
|
||||
(each line begins with it)
|
||||
-> "TQ mac value" - src mac's link quality towards mac address
|
||||
of a neighbor originator's interface which
|
||||
is being used for routing
|
||||
-> "TT mac" - TT announced by source mac
|
||||
-> "PRIMARY" - this is a primary interface
|
||||
-> "SEC mac" - secondary mac address of source
|
||||
(requires preceding PRIMARY)
|
||||
|
||||
The TQ value has a range from 4 to 255 with 255 being the best.
|
||||
The TT entries are showing which hosts are connected to the mesh
|
||||
via bat0 or being bridged into the mesh network. The PRIMARY/SEC
|
||||
values are only applied on primary interfaces
|
||||
|
||||
|
||||
LOGGING/DEBUGGING
|
||||
-----------------
|
||||
|
||||
|
@ -245,5 +199,5 @@ Mailing-list: b.a.t.m.a.n@open-mesh.org (optional subscription
|
|||
|
||||
You can also contact the Authors:
|
||||
|
||||
Marek Lindner <lindner_marek@yahoo.de>
|
||||
Simon Wunderlich <siwu@hrz.tu-chemnitz.de>
|
||||
Marek Lindner <mareklindner@neomailbox.ch>
|
||||
Simon Wunderlich <sw@simonwunderlich.de>
|
||||
|
|
|
@ -639,6 +639,15 @@ num_unsol_na
|
|||
are generated by the ipv4 and ipv6 code and the numbers of
|
||||
repetitions cannot be set independently.
|
||||
|
||||
packets_per_slave
|
||||
|
||||
Specify the number of packets to transmit through a slave before
|
||||
moving to the next one. When set to 0 then a slave is chosen at
|
||||
random.
|
||||
|
||||
The valid range is 0 - 65535; the default value is 1. This option
|
||||
has effect only in balance-rr mode.
|
||||
|
||||
primary
|
||||
|
||||
A string (eth0, eth2, etc) specifying which slave is the
|
||||
|
@ -743,21 +752,16 @@ xmit_hash_policy
|
|||
protocol information to generate the hash.
|
||||
|
||||
Uses XOR of hardware MAC addresses and IP addresses to
|
||||
generate the hash. The IPv4 formula is
|
||||
generate the hash. The formula is
|
||||
|
||||
(((source IP XOR dest IP) AND 0xffff) XOR
|
||||
( source MAC XOR destination MAC ))
|
||||
modulo slave count
|
||||
hash = source MAC XOR destination MAC
|
||||
hash = hash XOR source IP XOR destination IP
|
||||
hash = hash XOR (hash RSHIFT 16)
|
||||
hash = hash XOR (hash RSHIFT 8)
|
||||
And then hash is reduced modulo slave count.
|
||||
|
||||
The IPv6 formula is
|
||||
|
||||
hash = (source ip quad 2 XOR dest IP quad 2) XOR
|
||||
(source ip quad 3 XOR dest IP quad 3) XOR
|
||||
(source ip quad 4 XOR dest IP quad 4)
|
||||
|
||||
(((hash >> 24) XOR (hash >> 16) XOR (hash >> 8) XOR hash)
|
||||
XOR (source MAC XOR destination MAC))
|
||||
modulo slave count
|
||||
If the protocol is IPv6 then the source and destination
|
||||
addresses are first hashed using ipv6_addr_hash.
|
||||
|
||||
This algorithm will place all traffic to a particular
|
||||
network peer on the same slave. For non-IP traffic,
|
||||
|
@ -779,21 +783,16 @@ xmit_hash_policy
|
|||
slaves, although a single connection will not span
|
||||
multiple slaves.
|
||||
|
||||
The formula for unfragmented IPv4 TCP and UDP packets is
|
||||
The formula for unfragmented TCP and UDP packets is
|
||||
|
||||
((source port XOR dest port) XOR
|
||||
((source IP XOR dest IP) AND 0xffff)
|
||||
modulo slave count
|
||||
hash = source port, destination port (as in the header)
|
||||
hash = hash XOR source IP XOR destination IP
|
||||
hash = hash XOR (hash RSHIFT 16)
|
||||
hash = hash XOR (hash RSHIFT 8)
|
||||
And then hash is reduced modulo slave count.
|
||||
|
||||
The formula for unfragmented IPv6 TCP and UDP packets is
|
||||
|
||||
hash = (source port XOR dest port) XOR
|
||||
((source ip quad 2 XOR dest IP quad 2) XOR
|
||||
(source ip quad 3 XOR dest IP quad 3) XOR
|
||||
(source ip quad 4 XOR dest IP quad 4))
|
||||
|
||||
((hash >> 24) XOR (hash >> 16) XOR (hash >> 8) XOR hash)
|
||||
modulo slave count
|
||||
If the protocol is IPv6 then the source and destination
|
||||
addresses are first hashed using ipv6_addr_hash.
|
||||
|
||||
For fragmented TCP or UDP packets and all other IPv4 and
|
||||
IPv6 protocol traffic, the source and destination port
|
||||
|
@ -801,10 +800,6 @@ xmit_hash_policy
|
|||
formula is the same as for the layer2 transmit hash
|
||||
policy.
|
||||
|
||||
The IPv4 policy is intended to mimic the behavior of
|
||||
certain switches, notably Cisco switches with PFC2 as
|
||||
well as some Foundry and IBM products.
|
||||
|
||||
This algorithm is not fully 802.3ad compliant. A
|
||||
single TCP or UDP conversation containing both
|
||||
fragmented and unfragmented packets will see packets
|
||||
|
@ -815,6 +810,26 @@ xmit_hash_policy
|
|||
conversations. Other implementations of 802.3ad may
|
||||
or may not tolerate this noncompliance.
|
||||
|
||||
encap2+3
|
||||
|
||||
This policy uses the same formula as layer2+3 but it
|
||||
relies on skb_flow_dissect to obtain the header fields
|
||||
which might result in the use of inner headers if an
|
||||
encapsulation protocol is used. For example this will
|
||||
improve the performance for tunnel users because the
|
||||
packets will be distributed according to the encapsulated
|
||||
flows.
|
||||
|
||||
encap3+4
|
||||
|
||||
This policy uses the same formula as layer3+4 but it
|
||||
relies on skb_flow_dissect to obtain the header fields
|
||||
which might result in the use of inner headers if an
|
||||
encapsulation protocol is used. For example this will
|
||||
improve the performance for tunnel users because the
|
||||
packets will be distributed according to the encapsulated
|
||||
flows.
|
||||
|
||||
The default value is layer2. This option was added in bonding
|
||||
version 2.6.3. In earlier versions of bonding, this parameter
|
||||
does not exist, and the layer2 policy is the only policy. The
|
||||
|
|
|
@ -25,6 +25,12 @@ This file contains
|
|||
4.1.5 RAW socket option CAN_RAW_FD_FRAMES
|
||||
4.1.6 RAW socket returned message flags
|
||||
4.2 Broadcast Manager protocol sockets (SOCK_DGRAM)
|
||||
4.2.1 Broadcast Manager operations
|
||||
4.2.2 Broadcast Manager message flags
|
||||
4.2.3 Broadcast Manager transmission timers
|
||||
4.2.4 Broadcast Manager message sequence transmission
|
||||
4.2.5 Broadcast Manager receive filter timers
|
||||
4.2.6 Broadcast Manager multiplex message receive filter
|
||||
4.3 connected transport protocols (SOCK_SEQPACKET)
|
||||
4.4 unconnected transport protocols (SOCK_DGRAM)
|
||||
|
||||
|
@ -593,6 +599,217 @@ solution for a couple of reasons:
|
|||
In order to receive such messages, CAN_RAW_RECV_OWN_MSGS must be set.
|
||||
|
||||
4.2 Broadcast Manager protocol sockets (SOCK_DGRAM)
|
||||
|
||||
The Broadcast Manager protocol provides a command based configuration
|
||||
interface to filter and send (e.g. cyclic) CAN messages in kernel space.
|
||||
|
||||
Receive filters can be used to down sample frequent messages; detect events
|
||||
such as message contents changes, packet length changes, and do time-out
|
||||
monitoring of received messages.
|
||||
|
||||
Periodic transmission tasks of CAN frames or a sequence of CAN frames can be
|
||||
created and modified at runtime; both the message content and the two
|
||||
possible transmit intervals can be altered.
|
||||
|
||||
A BCM socket is not intended for sending individual CAN frames using the
|
||||
struct can_frame as known from the CAN_RAW socket. Instead a special BCM
|
||||
configuration message is defined. The basic BCM configuration message used
|
||||
to communicate with the broadcast manager and the available operations are
|
||||
defined in the linux/can/bcm.h include. The BCM message consists of a
|
||||
message header with a command ('opcode') followed by zero or more CAN frames.
|
||||
The broadcast manager sends responses to user space in the same form:
|
||||
|
||||
struct bcm_msg_head {
|
||||
__u32 opcode; /* command */
|
||||
__u32 flags; /* special flags */
|
||||
__u32 count; /* run 'count' times with ival1 */
|
||||
struct timeval ival1, ival2; /* count and subsequent interval */
|
||||
canid_t can_id; /* unique can_id for task */
|
||||
__u32 nframes; /* number of can_frames following */
|
||||
struct can_frame frames[0];
|
||||
};
|
||||
|
||||
The aligned payload 'frames' uses the same basic CAN frame structure defined
|
||||
at the beginning of section 4 and in the include/linux/can.h include. All
|
||||
messages to the broadcast manager from user space have this structure.
|
||||
|
||||
Note a CAN_BCM socket must be connected instead of bound after socket
|
||||
creation (example without error checking):
|
||||
|
||||
int s;
|
||||
struct sockaddr_can addr;
|
||||
struct ifreq ifr;
|
||||
|
||||
s = socket(PF_CAN, SOCK_DGRAM, CAN_BCM);
|
||||
|
||||
strcpy(ifr.ifr_name, "can0");
|
||||
ioctl(s, SIOCGIFINDEX, &ifr);
|
||||
|
||||
addr.can_family = AF_CAN;
|
||||
addr.can_ifindex = ifr.ifr_ifindex;
|
||||
|
||||
connect(s, (struct sockaddr *)&addr, sizeof(addr))
|
||||
|
||||
(..)
|
||||
|
||||
The broadcast manager socket is able to handle any number of in flight
|
||||
transmissions or receive filters concurrently. The different RX/TX jobs are
|
||||
distinguished by the unique can_id in each BCM message. However additional
|
||||
CAN_BCM sockets are recommended to communicate on multiple CAN interfaces.
|
||||
When the broadcast manager socket is bound to 'any' CAN interface (=> the
|
||||
interface index is set to zero) the configured receive filters apply to any
|
||||
CAN interface unless the sendto() syscall is used to overrule the 'any' CAN
|
||||
interface index. When using recvfrom() instead of read() to retrieve BCM
|
||||
socket messages the originating CAN interface is provided in can_ifindex.
|
||||
|
||||
4.2.1 Broadcast Manager operations
|
||||
|
||||
The opcode defines the operation for the broadcast manager to carry out,
|
||||
or details the broadcast managers response to several events, including
|
||||
user requests.
|
||||
|
||||
Transmit Operations (user space to broadcast manager):
|
||||
|
||||
TX_SETUP: Create (cyclic) transmission task.
|
||||
|
||||
TX_DELETE: Remove (cyclic) transmission task, requires only can_id.
|
||||
|
||||
TX_READ: Read properties of (cyclic) transmission task for can_id.
|
||||
|
||||
TX_SEND: Send one CAN frame.
|
||||
|
||||
Transmit Responses (broadcast manager to user space):
|
||||
|
||||
TX_STATUS: Reply to TX_READ request (transmission task configuration).
|
||||
|
||||
TX_EXPIRED: Notification when counter finishes sending at initial interval
|
||||
'ival1'. Requires the TX_COUNTEVT flag to be set at TX_SETUP.
|
||||
|
||||
Receive Operations (user space to broadcast manager):
|
||||
|
||||
RX_SETUP: Create RX content filter subscription.
|
||||
|
||||
RX_DELETE: Remove RX content filter subscription, requires only can_id.
|
||||
|
||||
RX_READ: Read properties of RX content filter subscription for can_id.
|
||||
|
||||
Receive Responses (broadcast manager to user space):
|
||||
|
||||
RX_STATUS: Reply to RX_READ request (filter task configuration).
|
||||
|
||||
RX_TIMEOUT: Cyclic message is detected to be absent (timer ival1 expired).
|
||||
|
||||
RX_CHANGED: BCM message with updated CAN frame (detected content change).
|
||||
Sent on first message received or on receipt of revised CAN messages.
|
||||
|
||||
4.2.2 Broadcast Manager message flags
|
||||
|
||||
When sending a message to the broadcast manager the 'flags' element may
|
||||
contain the following flag definitions which influence the behaviour:
|
||||
|
||||
SETTIMER: Set the values of ival1, ival2 and count
|
||||
|
||||
STARTTIMER: Start the timer with the actual values of ival1, ival2
|
||||
and count. Starting the timer leads simultaneously to emit a CAN frame.
|
||||
|
||||
TX_COUNTEVT: Create the message TX_EXPIRED when count expires
|
||||
|
||||
TX_ANNOUNCE: A change of data by the process is emitted immediately.
|
||||
|
||||
TX_CP_CAN_ID: Copies the can_id from the message header to each
|
||||
subsequent frame in frames. This is intended as usage simplification. For
|
||||
TX tasks the unique can_id from the message header may differ from the
|
||||
can_id(s) stored for transmission in the subsequent struct can_frame(s).
|
||||
|
||||
RX_FILTER_ID: Filter by can_id alone, no frames required (nframes=0).
|
||||
|
||||
RX_CHECK_DLC: A change of the DLC leads to an RX_CHANGED.
|
||||
|
||||
RX_NO_AUTOTIMER: Prevent automatically starting the timeout monitor.
|
||||
|
||||
RX_ANNOUNCE_RESUME: If passed at RX_SETUP and a receive timeout occured, a
|
||||
RX_CHANGED message will be generated when the (cyclic) receive restarts.
|
||||
|
||||
TX_RESET_MULTI_IDX: Reset the index for the multiple frame transmission.
|
||||
|
||||
RX_RTR_FRAME: Send reply for RTR-request (placed in op->frames[0]).
|
||||
|
||||
4.2.3 Broadcast Manager transmission timers
|
||||
|
||||
Periodic transmission configurations may use up to two interval timers.
|
||||
In this case the BCM sends a number of messages ('count') at an interval
|
||||
'ival1', then continuing to send at another given interval 'ival2'. When
|
||||
only one timer is needed 'count' is set to zero and only 'ival2' is used.
|
||||
When SET_TIMER and START_TIMER flag were set the timers are activated.
|
||||
The timer values can be altered at runtime when only SET_TIMER is set.
|
||||
|
||||
4.2.4 Broadcast Manager message sequence transmission
|
||||
|
||||
Up to 256 CAN frames can be transmitted in a sequence in the case of a cyclic
|
||||
TX task configuration. The number of CAN frames is provided in the 'nframes'
|
||||
element of the BCM message head. The defined number of CAN frames are added
|
||||
as array to the TX_SETUP BCM configuration message.
|
||||
|
||||
/* create a struct to set up a sequence of four CAN frames */
|
||||
struct {
|
||||
struct bcm_msg_head msg_head;
|
||||
struct can_frame frame[4];
|
||||
} mytxmsg;
|
||||
|
||||
(..)
|
||||
mytxmsg.nframes = 4;
|
||||
(..)
|
||||
|
||||
write(s, &mytxmsg, sizeof(mytxmsg));
|
||||
|
||||
With every transmission the index in the array of CAN frames is increased
|
||||
and set to zero at index overflow.
|
||||
|
||||
4.2.5 Broadcast Manager receive filter timers
|
||||
|
||||
The timer values ival1 or ival2 may be set to non-zero values at RX_SETUP.
|
||||
When the SET_TIMER flag is set the timers are enabled:
|
||||
|
||||
ival1: Send RX_TIMEOUT when a received message is not received again within
|
||||
the given time. When START_TIMER is set at RX_SETUP the timeout detection
|
||||
is activated directly - even without a former CAN frame reception.
|
||||
|
||||
ival2: Throttle the received message rate down to the value of ival2. This
|
||||
is useful to reduce messages for the application when the signal inside the
|
||||
CAN frame is stateless as state changes within the ival2 periode may get
|
||||
lost.
|
||||
|
||||
4.2.6 Broadcast Manager multiplex message receive filter
|
||||
|
||||
To filter for content changes in multiplex message sequences an array of more
|
||||
than one CAN frames can be passed in a RX_SETUP configuration message. The
|
||||
data bytes of the first CAN frame contain the mask of relevant bits that
|
||||
have to match in the subsequent CAN frames with the received CAN frame.
|
||||
If one of the subsequent CAN frames is matching the bits in that frame data
|
||||
mark the relevant content to be compared with the previous received content.
|
||||
Up to 257 CAN frames (multiplex filter bit mask CAN frame plus 256 CAN
|
||||
filters) can be added as array to the TX_SETUP BCM configuration message.
|
||||
|
||||
/* usually used to clear CAN frame data[] - beware of endian problems! */
|
||||
#define U64_DATA(p) (*(unsigned long long*)(p)->data)
|
||||
|
||||
struct {
|
||||
struct bcm_msg_head msg_head;
|
||||
struct can_frame frame[5];
|
||||
} msg;
|
||||
|
||||
msg.msg_head.opcode = RX_SETUP;
|
||||
msg.msg_head.can_id = 0x42;
|
||||
msg.msg_head.flags = 0;
|
||||
msg.msg_head.nframes = 5;
|
||||
U64_DATA(&msg.frame[0]) = 0xFF00000000000000ULL; /* MUX mask */
|
||||
U64_DATA(&msg.frame[1]) = 0x01000000000000FFULL; /* data mask (MUX 0x01) */
|
||||
U64_DATA(&msg.frame[2]) = 0x0200FFFF000000FFULL; /* data mask (MUX 0x02) */
|
||||
U64_DATA(&msg.frame[3]) = 0x330000FFFFFF0003ULL; /* data mask (MUX 0x33) */
|
||||
U64_DATA(&msg.frame[4]) = 0x4F07FC0FF0000000ULL; /* data mask (MUX 0x4F) */
|
||||
|
||||
write(s, &msg, sizeof(msg));
|
||||
|
||||
4.3 connected transport protocols (SOCK_SEQPACKET)
|
||||
4.4 unconnected transport protocols (SOCK_DGRAM)
|
||||
|
||||
|
|
|
@ -267,17 +267,6 @@ tcp_max_orphans - INTEGER
|
|||
more aggressively. Let me to remind again: each orphan eats
|
||||
up to ~64K of unswappable memory.
|
||||
|
||||
tcp_max_ssthresh - INTEGER
|
||||
Limited Slow-Start for TCP with large congestion windows (cwnd) defined in
|
||||
RFC3742. Limited slow-start is a mechanism to limit growth of the cwnd
|
||||
on the region where cwnd is larger than tcp_max_ssthresh. TCP increases cwnd
|
||||
by at most tcp_max_ssthresh segments, and by at least tcp_max_ssthresh/2
|
||||
segments per RTT when the cwnd is above tcp_max_ssthresh.
|
||||
If TCP connection increased cwnd to thousands (or tens of thousands) segments,
|
||||
and thousands of packets were being dropped during slow-start, you can set
|
||||
tcp_max_ssthresh to improve performance for new TCP connection.
|
||||
Default: 0 (off)
|
||||
|
||||
tcp_max_syn_backlog - INTEGER
|
||||
Maximal number of remembered connection requests, which have not
|
||||
received an acknowledgment from connecting client.
|
||||
|
@ -451,7 +440,7 @@ tcp_fastopen - INTEGER
|
|||
connect() to perform a TCP handshake automatically.
|
||||
|
||||
The values (bitmap) are
|
||||
1: Enables sending data in the opening SYN on the client.
|
||||
1: Enables sending data in the opening SYN on the client w/ MSG_FASTOPEN.
|
||||
2: Enables TCP Fast Open on the server side, i.e., allowing data in
|
||||
a SYN packet to be accepted and passed to the application before
|
||||
3-way hand shake finishes.
|
||||
|
@ -464,7 +453,7 @@ tcp_fastopen - INTEGER
|
|||
different ways of setting max_qlen without the TCP_FASTOPEN socket
|
||||
option.
|
||||
|
||||
Default: 0
|
||||
Default: 1
|
||||
|
||||
Note that the client & server side Fast Open flags (1 and 2
|
||||
respectively) must be also enabled before the rest of flags can take
|
||||
|
|
|
@ -10,12 +10,12 @@ network devices.
|
|||
struct net_device allocation rules
|
||||
==================================
|
||||
Network device structures need to persist even after module is unloaded and
|
||||
must be allocated with kmalloc. If device has registered successfully,
|
||||
it will be freed on last use by free_netdev. This is required to handle the
|
||||
pathologic case cleanly (example: rmmod mydriver </sys/class/net/myeth/mtu )
|
||||
must be allocated with alloc_netdev_mqs() and friends.
|
||||
If device has registered successfully, it will be freed on last use
|
||||
by free_netdev(). This is required to handle the pathologic case cleanly
|
||||
(example: rmmod mydriver </sys/class/net/myeth/mtu )
|
||||
|
||||
There are routines in net_init.c to handle the common cases of
|
||||
alloc_etherdev, alloc_netdev. These reserve extra space for driver
|
||||
alloc_netdev_mqs()/alloc_netdev() reserve extra space for driver
|
||||
private data which gets freed when the network device is freed. If
|
||||
separately allocated data is attached to the network device
|
||||
(netdev_priv(dev)) then it is up to the module exit handler to free that.
|
||||
|
|
|
@ -100,6 +100,11 @@ static long ppb_to_scaled_ppm(int ppb)
|
|||
return (long) (ppb * 65.536);
|
||||
}
|
||||
|
||||
static int64_t pctns(struct ptp_clock_time *t)
|
||||
{
|
||||
return t->sec * 1000000000LL + t->nsec;
|
||||
}
|
||||
|
||||
static void usage(char *progname)
|
||||
{
|
||||
fprintf(stderr,
|
||||
|
@ -112,6 +117,8 @@ static void usage(char *progname)
|
|||
" -f val adjust the ptp clock frequency by 'val' ppb\n"
|
||||
" -g get the ptp clock time\n"
|
||||
" -h prints this message\n"
|
||||
" -k val measure the time offset between system and phc clock\n"
|
||||
" for 'val' times (Maximum 25)\n"
|
||||
" -p val enable output with a period of 'val' nanoseconds\n"
|
||||
" -P val enable or disable (val=1|0) the system clock PPS\n"
|
||||
" -s set the ptp clock time from the system time\n"
|
||||
|
@ -133,8 +140,12 @@ int main(int argc, char *argv[])
|
|||
struct itimerspec timeout;
|
||||
struct sigevent sigevent;
|
||||
|
||||
struct ptp_clock_time *pct;
|
||||
struct ptp_sys_offset *sysoff;
|
||||
|
||||
|
||||
char *progname;
|
||||
int c, cnt, fd;
|
||||
int i, c, cnt, fd;
|
||||
|
||||
char *device = DEVICE;
|
||||
clockid_t clkid;
|
||||
|
@ -144,14 +155,19 @@ int main(int argc, char *argv[])
|
|||
int extts = 0;
|
||||
int gettime = 0;
|
||||
int oneshot = 0;
|
||||
int pct_offset = 0;
|
||||
int n_samples = 0;
|
||||
int periodic = 0;
|
||||
int perout = -1;
|
||||
int pps = -1;
|
||||
int settime = 0;
|
||||
|
||||
int64_t t1, t2, tp;
|
||||
int64_t interval, offset;
|
||||
|
||||
progname = strrchr(argv[0], '/');
|
||||
progname = progname ? 1+progname : argv[0];
|
||||
while (EOF != (c = getopt(argc, argv, "a:A:cd:e:f:ghp:P:sSt:v"))) {
|
||||
while (EOF != (c = getopt(argc, argv, "a:A:cd:e:f:ghk:p:P:sSt:v"))) {
|
||||
switch (c) {
|
||||
case 'a':
|
||||
oneshot = atoi(optarg);
|
||||
|
@ -174,6 +190,10 @@ int main(int argc, char *argv[])
|
|||
case 'g':
|
||||
gettime = 1;
|
||||
break;
|
||||
case 'k':
|
||||
pct_offset = 1;
|
||||
n_samples = atoi(optarg);
|
||||
break;
|
||||
case 'p':
|
||||
perout = atoi(optarg);
|
||||
break;
|
||||
|
@ -376,6 +396,47 @@ int main(int argc, char *argv[])
|
|||
}
|
||||
}
|
||||
|
||||
if (pct_offset) {
|
||||
if (n_samples <= 0 || n_samples > 25) {
|
||||
puts("n_samples should be between 1 and 25");
|
||||
usage(progname);
|
||||
return -1;
|
||||
}
|
||||
|
||||
sysoff = calloc(1, sizeof(*sysoff));
|
||||
if (!sysoff) {
|
||||
perror("calloc");
|
||||
return -1;
|
||||
}
|
||||
sysoff->n_samples = n_samples;
|
||||
|
||||
if (ioctl(fd, PTP_SYS_OFFSET, sysoff))
|
||||
perror("PTP_SYS_OFFSET");
|
||||
else
|
||||
puts("system and phc clock time offset request okay");
|
||||
|
||||
pct = &sysoff->ts[0];
|
||||
for (i = 0; i < sysoff->n_samples; i++) {
|
||||
t1 = pctns(pct+2*i);
|
||||
tp = pctns(pct+2*i+1);
|
||||
t2 = pctns(pct+2*i+2);
|
||||
interval = t2 - t1;
|
||||
offset = (t2 + t1) / 2 - tp;
|
||||
|
||||
printf("system time: %ld.%ld\n",
|
||||
(pct+2*i)->sec, (pct+2*i)->nsec);
|
||||
printf("phc time: %ld.%ld\n",
|
||||
(pct+2*i+1)->sec, (pct+2*i+1)->nsec);
|
||||
printf("system time: %ld.%ld\n",
|
||||
(pct+2*i+2)->sec, (pct+2*i+2)->nsec);
|
||||
printf("system/phc clock time offset is %ld ns\n"
|
||||
"system clock time delay is %ld ns\n",
|
||||
offset, interval);
|
||||
}
|
||||
|
||||
free(sysoff);
|
||||
}
|
||||
|
||||
close(fd);
|
||||
return 0;
|
||||
}
|
||||
|
|
18
MAINTAINERS
|
@ -1667,9 +1667,9 @@ F: drivers/video/backlight/
|
|||
F: include/linux/backlight.h
|
||||
|
||||
BATMAN ADVANCED
|
||||
M: Marek Lindner <lindner_marek@yahoo.de>
|
||||
M: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>
|
||||
M: Antonio Quartulli <ordex@autistici.org>
|
||||
M: Marek Lindner <mareklindner@neomailbox.ch>
|
||||
M: Simon Wunderlich <sw@simonwunderlich.de>
|
||||
M: Antonio Quartulli <antonio@meshcoding.com>
|
||||
L: b.a.t.m.a.n@lists.open-mesh.org
|
||||
W: http://www.open-mesh.org/
|
||||
S: Maintained
|
||||
|
@ -1822,7 +1822,7 @@ F: drivers/net/ethernet/broadcom/bnx2.*
|
|||
F: drivers/net/ethernet/broadcom/bnx2_*
|
||||
|
||||
BROADCOM BNX2X 10 GIGABIT ETHERNET DRIVER
|
||||
M: Eilon Greenstein <eilong@broadcom.com>
|
||||
M: Ariel Elior <ariele@broadcom.com>
|
||||
L: netdev@vger.kernel.org
|
||||
S: Supported
|
||||
F: drivers/net/ethernet/broadcom/bnx2x/
|
||||
|
@ -5374,7 +5374,7 @@ S: Orphan
|
|||
F: drivers/net/wireless/libertas/
|
||||
|
||||
MARVELL MV643XX ETHERNET DRIVER
|
||||
M: Lennert Buytenhek <buytenh@wantstofly.org>
|
||||
M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
|
||||
L: netdev@vger.kernel.org
|
||||
S: Maintained
|
||||
F: drivers/net/ethernet/marvell/mv643xx_eth.*
|
||||
|
@ -6890,6 +6890,14 @@ L: linux-hexagon@vger.kernel.org
|
|||
S: Supported
|
||||
F: arch/hexagon/
|
||||
|
||||
QUALCOMM WCN36XX WIRELESS DRIVER
|
||||
M: Eugene Krasnikov <k.eugene.e@gmail.com>
|
||||
L: wcn36xx@lists.infradead.org
|
||||
W: http://wireless.kernel.org/en/users/Drivers/wcn36xx
|
||||
T: git git://github.com/KrasnikovEugene/wcn36xx.git
|
||||
S: Supported
|
||||
F: drivers/net/wireless/ath/wcn36xx/
|
||||
|
||||
QUICKCAM PARALLEL PORT WEBCAMS
|
||||
M: Hans Verkuil <hverkuil@xs4all.nl>
|
||||
L: linux-media@vger.kernel.org
|
||||
|
|
|
@ -81,6 +81,8 @@
|
|||
|
||||
#define SO_SELECT_ERR_QUEUE 45
|
||||
|
||||
#define SO_BUSY_POLL 46
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _UAPI_ASM_SOCKET_H */
|
||||
|
|
|
@ -670,6 +670,12 @@
|
|||
/* Filled in by U-Boot */
|
||||
mac-address = [ 00 00 00 00 00 00 ];
|
||||
};
|
||||
|
||||
phy_sel: cpsw-phy-sel@44e10650 {
|
||||
compatible = "ti,am3352-cpsw-phy-sel";
|
||||
reg= <0x44e10650 0x4>;
|
||||
reg-names = "gmii-sel";
|
||||
};
|
||||
};
|
||||
|
||||
ocmcram: ocmcram@40300000 {
|
||||
|
|
|
@ -76,4 +76,6 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* __ASM_AVR32_SOCKET_H */
|
||||
|
|
|
@ -78,6 +78,8 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _ASM_SOCKET_H */
|
||||
|
||||
|
||||
|
|
|
@ -76,5 +76,7 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _ASM_SOCKET_H */
|
||||
|
||||
|
|
|
@ -85,4 +85,6 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _ASM_IA64_SOCKET_H */
|
||||
|
|
|
@ -76,4 +76,6 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _ASM_M32R_SOCKET_H */
|
||||
|
|
|
@ -94,4 +94,6 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _UAPI_ASM_SOCKET_H */
|
||||
|
|
|
@ -76,4 +76,6 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _ASM_SOCKET_H */
|
||||
|
|
|
@ -75,6 +75,8 @@
|
|||
|
||||
#define SO_BUSY_POLL 0x4027
|
||||
|
||||
#define SO_MAX_PACING_RATE 0x4048
|
||||
|
||||
/* O_NONBLOCK clashes with the bits used for socket types. Therefore we
|
||||
* have to define SOCK_NONBLOCK to a different value here.
|
||||
*/
|
||||
|
|
|
@ -83,4 +83,6 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _ASM_POWERPC_SOCKET_H */
|
||||
|
|
|
@ -82,4 +82,6 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _ASM_SOCKET_H */
|
||||
|
|
|
@ -72,6 +72,8 @@
|
|||
|
||||
#define SO_BUSY_POLL 0x0030
|
||||
|
||||
#define SO_MAX_PACING_RATE 0x0031
|
||||
|
||||
/* Security levels - as per NRL IPv6 - don't actually do anything */
|
||||
#define SO_SECURITY_AUTHENTICATION 0x5001
|
||||
#define SO_SECURITY_ENCRYPTION_TRANSPORT 0x5002
|
||||
|
|
|
@ -42,15 +42,27 @@ static void __jump_label_transform(struct jump_entry *entry,
|
|||
int init)
|
||||
{
|
||||
union jump_code_union code;
|
||||
const unsigned char default_nop[] = { STATIC_KEY_INIT_NOP };
|
||||
const unsigned char *ideal_nop = ideal_nops[NOP_ATOMIC5];
|
||||
|
||||
if (type == JUMP_LABEL_ENABLE) {
|
||||
/*
|
||||
* We are enabling this jump label. If it is not a nop
|
||||
* then something must have gone wrong.
|
||||
*/
|
||||
if (unlikely(memcmp((void *)entry->code, ideal_nop, 5) != 0))
|
||||
bug_at((void *)entry->code, __LINE__);
|
||||
if (init) {
|
||||
/*
|
||||
* Jump label is enabled for the first time.
|
||||
* So we expect a default_nop...
|
||||
*/
|
||||
if (unlikely(memcmp((void *)entry->code, default_nop, 5)
|
||||
!= 0))
|
||||
bug_at((void *)entry->code, __LINE__);
|
||||
} else {
|
||||
/*
|
||||
* ...otherwise expect an ideal_nop. Otherwise
|
||||
* something went horribly wrong.
|
||||
*/
|
||||
if (unlikely(memcmp((void *)entry->code, ideal_nop, 5)
|
||||
!= 0))
|
||||
bug_at((void *)entry->code, __LINE__);
|
||||
}
|
||||
|
||||
code.jump = 0xe9;
|
||||
code.offset = entry->target -
|
||||
|
@ -63,7 +75,6 @@ static void __jump_label_transform(struct jump_entry *entry,
|
|||
* are converting the default nop to the ideal nop.
|
||||
*/
|
||||
if (init) {
|
||||
const unsigned char default_nop[] = { STATIC_KEY_INIT_NOP };
|
||||
if (unlikely(memcmp((void *)entry->code, default_nop, 5) != 0))
|
||||
bug_at((void *)entry->code, __LINE__);
|
||||
} else {
|
||||
|
|
|
@ -788,5 +788,7 @@ void bpf_jit_free(struct sk_filter *fp)
|
|||
if (fp->bpf_func != sk_run_filter) {
|
||||
INIT_WORK(&fp->work, bpf_jit_free_deferred);
|
||||
schedule_work(&fp->work);
|
||||
} else {
|
||||
kfree(fp);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -87,4 +87,6 @@
|
|||
|
||||
#define SO_BUSY_POLL 46
|
||||
|
||||
#define SO_MAX_PACING_RATE 47
|
||||
|
||||
#endif /* _XTENSA_SOCKET_H */
|
||||
|
|
|
@ -420,7 +420,6 @@ struct fs_transmit_config {
|
|||
#define RC_FLAGS_BFPS_BFP27 (0xd << 17)
|
||||
#define RC_FLAGS_BFPS_BFP47 (0xe << 17)
|
||||
|
||||
#define RC_FLAGS_BFPS (0x1 << 17)
|
||||
#define RC_FLAGS_BFPP (0x1 << 21)
|
||||
#define RC_FLAGS_TEVC (0x1 << 22)
|
||||
#define RC_FLAGS_TEP (0x1 << 23)
|
||||
|
|
|
@ -188,8 +188,11 @@ static int bcma_host_pci_probe(struct pci_dev *dev,
|
|||
pci_write_config_dword(dev, 0x40, val & 0xffff00ff);
|
||||
|
||||
/* SSB needed additional powering up, do we have any AMBA PCI cards? */
|
||||
if (!pci_is_pcie(dev))
|
||||
bcma_err(bus, "PCI card detected, report problems.\n");
|
||||
if (!pci_is_pcie(dev)) {
|
||||
bcma_err(bus, "PCI card detected, they are not supported.\n");
|
||||
err = -ENXIO;
|
||||
goto err_pci_release_regions;
|
||||
}
|
||||
|
||||
/* Map MMIO */
|
||||
err = -ENOMEM;
|
||||
|
@ -269,6 +272,7 @@ static SIMPLE_DEV_PM_OPS(bcma_pm_ops, bcma_host_pci_suspend,
|
|||
|
||||
static DEFINE_PCI_DEVICE_TABLE(bcma_pci_bridge_tbl) = {
|
||||
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x0576) },
|
||||
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4313) },
|
||||
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 43224) },
|
||||
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4331) },
|
||||
{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4353) },
|
||||
|
|
|
@ -30,3 +30,5 @@ hci_uart-$(CONFIG_BT_HCIUART_LL) += hci_ll.o
|
|||
hci_uart-$(CONFIG_BT_HCIUART_ATH3K) += hci_ath.o
|
||||
hci_uart-$(CONFIG_BT_HCIUART_3WIRE) += hci_h5.o
|
||||
hci_uart-objs := $(hci_uart-y)
|
||||
|
||||
ccflags-y += -D__CHECK_ENDIAN__
|
||||
|
|
|
@ -57,7 +57,7 @@ struct ath3k_version {
|
|||
unsigned char reserved[0x07];
|
||||
};
|
||||
|
||||
static struct usb_device_id ath3k_table[] = {
|
||||
static const struct usb_device_id ath3k_table[] = {
|
||||
/* Atheros AR3011 */
|
||||
{ USB_DEVICE(0x0CF3, 0x3000) },
|
||||
|
||||
|
@ -112,7 +112,7 @@ MODULE_DEVICE_TABLE(usb, ath3k_table);
|
|||
#define BTUSB_ATH3012 0x80
|
||||
/* This table is to load patch and sysconfig files
|
||||
* for AR3012 */
|
||||
static struct usb_device_id ath3k_blist_tbl[] = {
|
||||
static const struct usb_device_id ath3k_blist_tbl[] = {
|
||||
|
||||
/* Atheros AR3012 with sflash firmware*/
|
||||
{ USB_DEVICE(0x0CF3, 0x0036), .driver_info = BTUSB_ATH3012 },
|
||||
|
|
|
@ -42,7 +42,7 @@
|
|||
|
||||
static struct usb_driver bfusb_driver;
|
||||
|
||||
static struct usb_device_id bfusb_table[] = {
|
||||
static const struct usb_device_id bfusb_table[] = {
|
||||
/* AVM BlueFRITZ! USB */
|
||||
{ USB_DEVICE(0x057c, 0x2200) },
|
||||
|
||||
|
@ -318,7 +318,6 @@ static inline int bfusb_recv_block(struct bfusb_data *data, int hdr, unsigned ch
|
|||
return -ENOMEM;
|
||||
}
|
||||
|
||||
skb->dev = (void *) data->hdev;
|
||||
bt_cb(skb)->pkt_type = pkt_type;
|
||||
|
||||
data->reassembly = skb;
|
||||
|
@ -333,7 +332,7 @@ static inline int bfusb_recv_block(struct bfusb_data *data, int hdr, unsigned ch
|
|||
memcpy(skb_put(data->reassembly, len), buf, len);
|
||||
|
||||
if (hdr & 0x08) {
|
||||
hci_recv_frame(data->reassembly);
|
||||
hci_recv_frame(data->hdev, data->reassembly);
|
||||
data->reassembly = NULL;
|
||||
}
|
||||
|
||||
|
@ -465,26 +464,18 @@ static int bfusb_close(struct hci_dev *hdev)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int bfusb_send_frame(struct sk_buff *skb)
|
||||
static int bfusb_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
struct hci_dev *hdev = (struct hci_dev *) skb->dev;
|
||||
struct bfusb_data *data;
|
||||
struct bfusb_data *data = hci_get_drvdata(hdev);
|
||||
struct sk_buff *nskb;
|
||||
unsigned char buf[3];
|
||||
int sent = 0, size, count;
|
||||
|
||||
BT_DBG("hdev %p skb %p type %d len %d", hdev, skb, bt_cb(skb)->pkt_type, skb->len);
|
||||
|
||||
if (!hdev) {
|
||||
BT_ERR("Frame for unknown HCI device (hdev=NULL)");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
if (!test_bit(HCI_RUNNING, &hdev->flags))
|
||||
return -EBUSY;
|
||||
|
||||
data = hci_get_drvdata(hdev);
|
||||
|
||||
switch (bt_cb(skb)->pkt_type) {
|
||||
case HCI_COMMAND_PKT:
|
||||
hdev->stat.cmd_tx++;
|
||||
|
@ -544,11 +535,6 @@ static int bfusb_send_frame(struct sk_buff *skb)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int bfusb_ioctl(struct hci_dev *hdev, unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
static int bfusb_load_firmware(struct bfusb_data *data,
|
||||
const unsigned char *firmware, int count)
|
||||
{
|
||||
|
@ -699,11 +685,10 @@ static int bfusb_probe(struct usb_interface *intf, const struct usb_device_id *i
|
|||
hci_set_drvdata(hdev, data);
|
||||
SET_HCIDEV_DEV(hdev, &intf->dev);
|
||||
|
||||
hdev->open = bfusb_open;
|
||||
hdev->close = bfusb_close;
|
||||
hdev->flush = bfusb_flush;
|
||||
hdev->send = bfusb_send_frame;
|
||||
hdev->ioctl = bfusb_ioctl;
|
||||
hdev->open = bfusb_open;
|
||||
hdev->close = bfusb_close;
|
||||
hdev->flush = bfusb_flush;
|
||||
hdev->send = bfusb_send_frame;
|
||||
|
||||
if (hci_register_dev(hdev) < 0) {
|
||||
BT_ERR("Can't register HCI device");
|
||||
|
|
|
@ -399,7 +399,6 @@ static void bluecard_receive(bluecard_info_t *info, unsigned int offset)
|
|||
|
||||
if (info->rx_state == RECV_WAIT_PACKET_TYPE) {
|
||||
|
||||
info->rx_skb->dev = (void *) info->hdev;
|
||||
bt_cb(info->rx_skb)->pkt_type = buf[i];
|
||||
|
||||
switch (bt_cb(info->rx_skb)->pkt_type) {
|
||||
|
@ -477,7 +476,7 @@ static void bluecard_receive(bluecard_info_t *info, unsigned int offset)
|
|||
break;
|
||||
|
||||
case RECV_WAIT_DATA:
|
||||
hci_recv_frame(info->rx_skb);
|
||||
hci_recv_frame(info->hdev, info->rx_skb);
|
||||
info->rx_skb = NULL;
|
||||
break;
|
||||
|
||||
|
@ -659,17 +658,9 @@ static int bluecard_hci_close(struct hci_dev *hdev)
|
|||
}
|
||||
|
||||
|
||||
static int bluecard_hci_send_frame(struct sk_buff *skb)
|
||||
static int bluecard_hci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
bluecard_info_t *info;
|
||||
struct hci_dev *hdev = (struct hci_dev *)(skb->dev);
|
||||
|
||||
if (!hdev) {
|
||||
BT_ERR("Frame for unknown HCI device (hdev=NULL)");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
info = hci_get_drvdata(hdev);
|
||||
bluecard_info_t *info = hci_get_drvdata(hdev);
|
||||
|
||||
switch (bt_cb(skb)->pkt_type) {
|
||||
case HCI_COMMAND_PKT:
|
||||
|
@ -693,12 +684,6 @@ static int bluecard_hci_send_frame(struct sk_buff *skb)
|
|||
}
|
||||
|
||||
|
||||
static int bluecard_hci_ioctl(struct hci_dev *hdev, unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
|
||||
|
||||
/* ======================== Card services HCI interaction ======================== */
|
||||
|
||||
|
@ -734,11 +719,10 @@ static int bluecard_open(bluecard_info_t *info)
|
|||
hci_set_drvdata(hdev, info);
|
||||
SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
|
||||
|
||||
hdev->open = bluecard_hci_open;
|
||||
hdev->close = bluecard_hci_close;
|
||||
hdev->flush = bluecard_hci_flush;
|
||||
hdev->send = bluecard_hci_send_frame;
|
||||
hdev->ioctl = bluecard_hci_ioctl;
|
||||
hdev->open = bluecard_hci_open;
|
||||
hdev->close = bluecard_hci_close;
|
||||
hdev->flush = bluecard_hci_flush;
|
||||
hdev->send = bluecard_hci_send_frame;
|
||||
|
||||
id = inb(iobase + 0x30);
|
||||
|
||||
|
|
|
@ -37,7 +37,7 @@
|
|||
|
||||
#define VERSION "0.10"
|
||||
|
||||
static struct usb_device_id bpa10x_table[] = {
|
||||
static const struct usb_device_id bpa10x_table[] = {
|
||||
/* Tektronix BPA 100/105 (Digianswer) */
|
||||
{ USB_DEVICE(0x08fd, 0x0002) },
|
||||
|
||||
|
@ -129,8 +129,6 @@ static int bpa10x_recv(struct hci_dev *hdev, int queue, void *buf, int count)
|
|||
return -ENOMEM;
|
||||
}
|
||||
|
||||
skb->dev = (void *) hdev;
|
||||
|
||||
data->rx_skb[queue] = skb;
|
||||
|
||||
scb = (void *) skb->cb;
|
||||
|
@ -155,7 +153,7 @@ static int bpa10x_recv(struct hci_dev *hdev, int queue, void *buf, int count)
|
|||
data->rx_skb[queue] = NULL;
|
||||
|
||||
bt_cb(skb)->pkt_type = scb->type;
|
||||
hci_recv_frame(skb);
|
||||
hci_recv_frame(hdev, skb);
|
||||
}
|
||||
|
||||
count -= len; buf += len;
|
||||
|
@ -352,9 +350,8 @@ static int bpa10x_flush(struct hci_dev *hdev)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int bpa10x_send_frame(struct sk_buff *skb)
|
||||
static int bpa10x_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
struct hci_dev *hdev = (struct hci_dev *) skb->dev;
|
||||
struct bpa10x_data *data = hci_get_drvdata(hdev);
|
||||
struct usb_ctrlrequest *dr;
|
||||
struct urb *urb;
|
||||
|
@ -366,6 +363,8 @@ static int bpa10x_send_frame(struct sk_buff *skb)
|
|||
if (!test_bit(HCI_RUNNING, &hdev->flags))
|
||||
return -EBUSY;
|
||||
|
||||
skb->dev = (void *) hdev;
|
||||
|
||||
urb = usb_alloc_urb(0, GFP_ATOMIC);
|
||||
if (!urb)
|
||||
return -ENOMEM;
|
||||
|
|
|
@ -247,7 +247,6 @@ static void bt3c_receive(bt3c_info_t *info)
|
|||
|
||||
if (info->rx_state == RECV_WAIT_PACKET_TYPE) {
|
||||
|
||||
info->rx_skb->dev = (void *) info->hdev;
|
||||
bt_cb(info->rx_skb)->pkt_type = inb(iobase + DATA_L);
|
||||
inb(iobase + DATA_H);
|
||||
//printk("bt3c: PACKET_TYPE=%02x\n", bt_cb(info->rx_skb)->pkt_type);
|
||||
|
@ -318,7 +317,7 @@ static void bt3c_receive(bt3c_info_t *info)
|
|||
break;
|
||||
|
||||
case RECV_WAIT_DATA:
|
||||
hci_recv_frame(info->rx_skb);
|
||||
hci_recv_frame(info->hdev, info->rx_skb);
|
||||
info->rx_skb = NULL;
|
||||
break;
|
||||
|
||||
|
@ -416,19 +415,11 @@ static int bt3c_hci_close(struct hci_dev *hdev)
|
|||
}
|
||||
|
||||
|
||||
static int bt3c_hci_send_frame(struct sk_buff *skb)
|
||||
static int bt3c_hci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
bt3c_info_t *info;
|
||||
struct hci_dev *hdev = (struct hci_dev *)(skb->dev);
|
||||
bt3c_info_t *info = hci_get_drvdata(hdev);
|
||||
unsigned long flags;
|
||||
|
||||
if (!hdev) {
|
||||
BT_ERR("Frame for unknown HCI device (hdev=NULL)");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
info = hci_get_drvdata(hdev);
|
||||
|
||||
switch (bt_cb(skb)->pkt_type) {
|
||||
case HCI_COMMAND_PKT:
|
||||
hdev->stat.cmd_tx++;
|
||||
|
@ -455,12 +446,6 @@ static int bt3c_hci_send_frame(struct sk_buff *skb)
|
|||
}
|
||||
|
||||
|
||||
static int bt3c_hci_ioctl(struct hci_dev *hdev, unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
|
||||
|
||||
/* ======================== Card services HCI interaction ======================== */
|
||||
|
||||
|
@ -577,11 +562,10 @@ static int bt3c_open(bt3c_info_t *info)
|
|||
hci_set_drvdata(hdev, info);
|
||||
SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
|
||||
|
||||
hdev->open = bt3c_hci_open;
|
||||
hdev->close = bt3c_hci_close;
|
||||
hdev->flush = bt3c_hci_flush;
|
||||
hdev->send = bt3c_hci_send_frame;
|
||||
hdev->ioctl = bt3c_hci_ioctl;
|
||||
hdev->open = bt3c_hci_open;
|
||||
hdev->close = bt3c_hci_close;
|
||||
hdev->flush = bt3c_hci_flush;
|
||||
hdev->send = bt3c_hci_send_frame;
|
||||
|
||||
/* Load firmware */
|
||||
err = request_firmware(&firmware, "BT3CPCC.bin", &info->p_dev->dev);
|
||||
|
|
|
@ -23,6 +23,8 @@
|
|||
#include <linux/bitops.h>
|
||||
#include <linux/slab.h>
|
||||
#include <net/bluetooth/bluetooth.h>
|
||||
#include <linux/ctype.h>
|
||||
#include <linux/firmware.h>
|
||||
|
||||
#define BTM_HEADER_LEN 4
|
||||
#define BTM_UPLD_SIZE 2312
|
||||
|
@ -41,6 +43,8 @@ struct btmrvl_thread {
|
|||
struct btmrvl_device {
|
||||
void *card;
|
||||
struct hci_dev *hcidev;
|
||||
struct device *dev;
|
||||
const char *cal_data;
|
||||
|
||||
u8 dev_type;
|
||||
|
||||
|
@ -91,6 +95,7 @@ struct btmrvl_private {
|
|||
#define BT_CMD_HOST_SLEEP_CONFIG 0x59
|
||||
#define BT_CMD_HOST_SLEEP_ENABLE 0x5A
|
||||
#define BT_CMD_MODULE_CFG_REQ 0x5B
|
||||
#define BT_CMD_LOAD_CONFIG_DATA 0x61
|
||||
|
||||
/* Sub-commands: Module Bringup/Shutdown Request/Response */
|
||||
#define MODULE_BRINGUP_REQ 0xF1
|
||||
|
@ -116,11 +121,8 @@ struct btmrvl_private {
|
|||
#define PS_SLEEP 0x01
|
||||
#define PS_AWAKE 0x00
|
||||
|
||||
struct btmrvl_cmd {
|
||||
__le16 ocf_ogf;
|
||||
u8 length;
|
||||
u8 data[4];
|
||||
} __packed;
|
||||
#define BT_CMD_DATA_SIZE 32
|
||||
#define BT_CAL_DATA_SIZE 28
|
||||
|
||||
struct btmrvl_event {
|
||||
u8 ec; /* event counter */
|
||||
|
|
|
@ -57,8 +57,7 @@ bool btmrvl_check_evtpkt(struct btmrvl_private *priv, struct sk_buff *skb)
|
|||
ocf = hci_opcode_ocf(opcode);
|
||||
ogf = hci_opcode_ogf(opcode);
|
||||
|
||||
if (ocf == BT_CMD_MODULE_CFG_REQ &&
|
||||
priv->btmrvl_dev.sendcmdflag) {
|
||||
if (priv->btmrvl_dev.sendcmdflag) {
|
||||
priv->btmrvl_dev.sendcmdflag = false;
|
||||
priv->adapter->cmd_complete = true;
|
||||
wake_up_interruptible(&priv->adapter->cmd_wait_q);
|
||||
|
@ -116,7 +115,6 @@ int btmrvl_process_event(struct btmrvl_private *priv, struct sk_buff *skb)
|
|||
adapter->hs_state = HS_ACTIVATED;
|
||||
if (adapter->psmode)
|
||||
adapter->ps_state = PS_SLEEP;
|
||||
wake_up_interruptible(&adapter->cmd_wait_q);
|
||||
BT_DBG("HS ACTIVATED!");
|
||||
} else {
|
||||
BT_DBG("HS Enable failed");
|
||||
|
@ -168,45 +166,50 @@ exit:
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(btmrvl_process_event);
|
||||
|
||||
int btmrvl_send_module_cfg_cmd(struct btmrvl_private *priv, int subcmd)
|
||||
static int btmrvl_send_sync_cmd(struct btmrvl_private *priv, u16 cmd_no,
|
||||
const void *param, u8 len)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
struct btmrvl_cmd *cmd;
|
||||
int ret = 0;
|
||||
struct hci_command_hdr *hdr;
|
||||
|
||||
skb = bt_skb_alloc(sizeof(*cmd), GFP_ATOMIC);
|
||||
skb = bt_skb_alloc(HCI_COMMAND_HDR_SIZE + len, GFP_ATOMIC);
|
||||
if (skb == NULL) {
|
||||
BT_ERR("No free skb");
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
cmd = (struct btmrvl_cmd *) skb_put(skb, sizeof(*cmd));
|
||||
cmd->ocf_ogf = cpu_to_le16(hci_opcode_pack(OGF, BT_CMD_MODULE_CFG_REQ));
|
||||
cmd->length = 1;
|
||||
cmd->data[0] = subcmd;
|
||||
hdr = (struct hci_command_hdr *)skb_put(skb, HCI_COMMAND_HDR_SIZE);
|
||||
hdr->opcode = cpu_to_le16(hci_opcode_pack(OGF, cmd_no));
|
||||
hdr->plen = len;
|
||||
|
||||
if (len)
|
||||
memcpy(skb_put(skb, len), param, len);
|
||||
|
||||
bt_cb(skb)->pkt_type = MRVL_VENDOR_PKT;
|
||||
|
||||
skb->dev = (void *) priv->btmrvl_dev.hcidev;
|
||||
skb_queue_head(&priv->adapter->tx_queue, skb);
|
||||
|
||||
priv->btmrvl_dev.sendcmdflag = true;
|
||||
|
||||
priv->adapter->cmd_complete = false;
|
||||
|
||||
BT_DBG("Queue module cfg Command");
|
||||
|
||||
wake_up_interruptible(&priv->main_thread.wait_q);
|
||||
|
||||
if (!wait_event_interruptible_timeout(priv->adapter->cmd_wait_q,
|
||||
priv->adapter->cmd_complete,
|
||||
msecs_to_jiffies(WAIT_UNTIL_CMD_RESP))) {
|
||||
ret = -ETIMEDOUT;
|
||||
BT_ERR("module_cfg_cmd(%x): timeout: %d",
|
||||
subcmd, priv->btmrvl_dev.sendcmdflag);
|
||||
}
|
||||
msecs_to_jiffies(WAIT_UNTIL_CMD_RESP)))
|
||||
return -ETIMEDOUT;
|
||||
|
||||
BT_DBG("module cfg Command done");
|
||||
return 0;
|
||||
}
|
||||
|
||||
int btmrvl_send_module_cfg_cmd(struct btmrvl_private *priv, int subcmd)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = btmrvl_send_sync_cmd(priv, BT_CMD_MODULE_CFG_REQ, &subcmd, 1);
|
||||
if (ret)
|
||||
BT_ERR("module_cfg_cmd(%x) failed\n", subcmd);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -214,61 +217,36 @@ EXPORT_SYMBOL_GPL(btmrvl_send_module_cfg_cmd);
|
|||
|
||||
int btmrvl_send_hscfg_cmd(struct btmrvl_private *priv)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
struct btmrvl_cmd *cmd;
|
||||
int ret;
|
||||
u8 param[2];
|
||||
|
||||
skb = bt_skb_alloc(sizeof(*cmd), GFP_ATOMIC);
|
||||
if (!skb) {
|
||||
BT_ERR("No free skb");
|
||||
return -ENOMEM;
|
||||
}
|
||||
param[0] = (priv->btmrvl_dev.gpio_gap & 0xff00) >> 8;
|
||||
param[1] = (u8) (priv->btmrvl_dev.gpio_gap & 0x00ff);
|
||||
|
||||
cmd = (struct btmrvl_cmd *) skb_put(skb, sizeof(*cmd));
|
||||
cmd->ocf_ogf = cpu_to_le16(hci_opcode_pack(OGF,
|
||||
BT_CMD_HOST_SLEEP_CONFIG));
|
||||
cmd->length = 2;
|
||||
cmd->data[0] = (priv->btmrvl_dev.gpio_gap & 0xff00) >> 8;
|
||||
cmd->data[1] = (u8) (priv->btmrvl_dev.gpio_gap & 0x00ff);
|
||||
BT_DBG("Sending HSCFG Command, gpio=0x%x, gap=0x%x",
|
||||
param[0], param[1]);
|
||||
|
||||
bt_cb(skb)->pkt_type = MRVL_VENDOR_PKT;
|
||||
ret = btmrvl_send_sync_cmd(priv, BT_CMD_HOST_SLEEP_CONFIG, param, 2);
|
||||
if (ret)
|
||||
BT_ERR("HSCFG command failed\n");
|
||||
|
||||
skb->dev = (void *) priv->btmrvl_dev.hcidev;
|
||||
skb_queue_head(&priv->adapter->tx_queue, skb);
|
||||
|
||||
BT_DBG("Queue HSCFG Command, gpio=0x%x, gap=0x%x", cmd->data[0],
|
||||
cmd->data[1]);
|
||||
|
||||
return 0;
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(btmrvl_send_hscfg_cmd);
|
||||
|
||||
int btmrvl_enable_ps(struct btmrvl_private *priv)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
struct btmrvl_cmd *cmd;
|
||||
|
||||
skb = bt_skb_alloc(sizeof(*cmd), GFP_ATOMIC);
|
||||
if (skb == NULL) {
|
||||
BT_ERR("No free skb");
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
cmd = (struct btmrvl_cmd *) skb_put(skb, sizeof(*cmd));
|
||||
cmd->ocf_ogf = cpu_to_le16(hci_opcode_pack(OGF,
|
||||
BT_CMD_AUTO_SLEEP_MODE));
|
||||
cmd->length = 1;
|
||||
int ret;
|
||||
u8 param;
|
||||
|
||||
if (priv->btmrvl_dev.psmode)
|
||||
cmd->data[0] = BT_PS_ENABLE;
|
||||
param = BT_PS_ENABLE;
|
||||
else
|
||||
cmd->data[0] = BT_PS_DISABLE;
|
||||
param = BT_PS_DISABLE;
|
||||
|
||||
bt_cb(skb)->pkt_type = MRVL_VENDOR_PKT;
|
||||
|
||||
skb->dev = (void *) priv->btmrvl_dev.hcidev;
|
||||
skb_queue_head(&priv->adapter->tx_queue, skb);
|
||||
|
||||
BT_DBG("Queue PSMODE Command:%d", cmd->data[0]);
|
||||
ret = btmrvl_send_sync_cmd(priv, BT_CMD_AUTO_SLEEP_MODE, &param, 1);
|
||||
if (ret)
|
||||
BT_ERR("PSMODE command failed\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -276,37 +254,11 @@ EXPORT_SYMBOL_GPL(btmrvl_enable_ps);
|
|||
|
||||
int btmrvl_enable_hs(struct btmrvl_private *priv)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
struct btmrvl_cmd *cmd;
|
||||
int ret = 0;
|
||||
int ret;
|
||||
|
||||
skb = bt_skb_alloc(sizeof(*cmd), GFP_ATOMIC);
|
||||
if (skb == NULL) {
|
||||
BT_ERR("No free skb");
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
cmd = (struct btmrvl_cmd *) skb_put(skb, sizeof(*cmd));
|
||||
cmd->ocf_ogf = cpu_to_le16(hci_opcode_pack(OGF, BT_CMD_HOST_SLEEP_ENABLE));
|
||||
cmd->length = 0;
|
||||
|
||||
bt_cb(skb)->pkt_type = MRVL_VENDOR_PKT;
|
||||
|
||||
skb->dev = (void *) priv->btmrvl_dev.hcidev;
|
||||
skb_queue_head(&priv->adapter->tx_queue, skb);
|
||||
|
||||
BT_DBG("Queue hs enable Command");
|
||||
|
||||
wake_up_interruptible(&priv->main_thread.wait_q);
|
||||
|
||||
if (!wait_event_interruptible_timeout(priv->adapter->cmd_wait_q,
|
||||
priv->adapter->hs_state,
|
||||
msecs_to_jiffies(WAIT_UNTIL_HS_STATE_CHANGED))) {
|
||||
ret = -ETIMEDOUT;
|
||||
BT_ERR("timeout: %d, %d,%d", priv->adapter->hs_state,
|
||||
priv->adapter->ps_state,
|
||||
priv->adapter->wakeup_tries);
|
||||
}
|
||||
ret = btmrvl_send_sync_cmd(priv, BT_CMD_HOST_SLEEP_ENABLE, NULL, 0);
|
||||
if (ret)
|
||||
BT_ERR("Host sleep enable command failed\n");
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -403,26 +355,12 @@ static void btmrvl_free_adapter(struct btmrvl_private *priv)
|
|||
priv->adapter = NULL;
|
||||
}
|
||||
|
||||
static int btmrvl_ioctl(struct hci_dev *hdev,
|
||||
unsigned int cmd, unsigned long arg)
|
||||
static int btmrvl_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
static int btmrvl_send_frame(struct sk_buff *skb)
|
||||
{
|
||||
struct hci_dev *hdev = (struct hci_dev *) skb->dev;
|
||||
struct btmrvl_private *priv = NULL;
|
||||
struct btmrvl_private *priv = hci_get_drvdata(hdev);
|
||||
|
||||
BT_DBG("type=%d, len=%d", skb->pkt_type, skb->len);
|
||||
|
||||
if (!hdev) {
|
||||
BT_ERR("Frame for unknown HCI device");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
priv = hci_get_drvdata(hdev);
|
||||
|
||||
if (!test_bit(HCI_RUNNING, &hdev->flags)) {
|
||||
BT_ERR("Failed testing HCI_RUNING, flags=%lx", hdev->flags);
|
||||
print_hex_dump_bytes("data: ", DUMP_PREFIX_OFFSET,
|
||||
|
@ -479,6 +417,137 @@ static int btmrvl_open(struct hci_dev *hdev)
|
|||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* This function parses provided calibration data input. It should contain
|
||||
* hex bytes separated by space or new line character. Here is an example.
|
||||
* 00 1C 01 37 FF FF FF FF 02 04 7F 01
|
||||
* CE BA 00 00 00 2D C6 C0 00 00 00 00
|
||||
* 00 F0 00 00
|
||||
*/
|
||||
static int btmrvl_parse_cal_cfg(const u8 *src, u32 len, u8 *dst, u32 dst_size)
|
||||
{
|
||||
const u8 *s = src;
|
||||
u8 *d = dst;
|
||||
int ret;
|
||||
u8 tmp[3];
|
||||
|
||||
tmp[2] = '\0';
|
||||
while ((s - src) <= len - 2) {
|
||||
if (isspace(*s)) {
|
||||
s++;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (isxdigit(*s)) {
|
||||
if ((d - dst) >= dst_size) {
|
||||
BT_ERR("calibration data file too big!!!");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
memcpy(tmp, s, 2);
|
||||
|
||||
ret = kstrtou8(tmp, 16, d++);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
s += 2;
|
||||
} else {
|
||||
return -EINVAL;
|
||||
}
|
||||
}
|
||||
if (d == dst)
|
||||
return -EINVAL;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int btmrvl_load_cal_data(struct btmrvl_private *priv,
|
||||
u8 *config_data)
|
||||
{
|
||||
int i, ret;
|
||||
u8 data[BT_CMD_DATA_SIZE];
|
||||
|
||||
data[0] = 0x00;
|
||||
data[1] = 0x00;
|
||||
data[2] = 0x00;
|
||||
data[3] = BT_CMD_DATA_SIZE - 4;
|
||||
|
||||
/* Swap cal-data bytes. Each four bytes are swapped. Considering 4
|
||||
* byte SDIO header offset, mapping of input and output bytes will be
|
||||
* {3, 2, 1, 0} -> {0+4, 1+4, 2+4, 3+4},
|
||||
* {7, 6, 5, 4} -> {4+4, 5+4, 6+4, 7+4} */
|
||||
for (i = 4; i < BT_CMD_DATA_SIZE; i++)
|
||||
data[i] = config_data[(i / 4) * 8 - 1 - i];
|
||||
|
||||
print_hex_dump_bytes("Calibration data: ",
|
||||
DUMP_PREFIX_OFFSET, data, BT_CMD_DATA_SIZE);
|
||||
|
||||
ret = btmrvl_send_sync_cmd(priv, BT_CMD_LOAD_CONFIG_DATA, data,
|
||||
BT_CMD_DATA_SIZE);
|
||||
if (ret)
|
||||
BT_ERR("Failed to download caibration data\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
btmrvl_process_cal_cfg(struct btmrvl_private *priv, u8 *data, u32 size)
|
||||
{
|
||||
u8 cal_data[BT_CAL_DATA_SIZE];
|
||||
int ret;
|
||||
|
||||
ret = btmrvl_parse_cal_cfg(data, size, cal_data, sizeof(cal_data));
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = btmrvl_load_cal_data(priv, cal_data);
|
||||
if (ret) {
|
||||
BT_ERR("Fail to load calibrate data");
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int btmrvl_cal_data_config(struct btmrvl_private *priv)
|
||||
{
|
||||
const struct firmware *cfg;
|
||||
int ret;
|
||||
const char *cal_data = priv->btmrvl_dev.cal_data;
|
||||
|
||||
if (!cal_data)
|
||||
return 0;
|
||||
|
||||
ret = request_firmware(&cfg, cal_data, priv->btmrvl_dev.dev);
|
||||
if (ret < 0) {
|
||||
BT_DBG("Failed to get %s file, skipping cal data download",
|
||||
cal_data);
|
||||
return 0;
|
||||
}
|
||||
|
||||
ret = btmrvl_process_cal_cfg(priv, (u8 *)cfg->data, cfg->size);
|
||||
release_firmware(cfg);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int btmrvl_setup(struct hci_dev *hdev)
|
||||
{
|
||||
struct btmrvl_private *priv = hci_get_drvdata(hdev);
|
||||
|
||||
btmrvl_send_module_cfg_cmd(priv, MODULE_BRINGUP_REQ);
|
||||
|
||||
if (btmrvl_cal_data_config(priv))
|
||||
BT_ERR("Set cal data failed");
|
||||
|
||||
priv->btmrvl_dev.psmode = 1;
|
||||
btmrvl_enable_ps(priv);
|
||||
|
||||
priv->btmrvl_dev.gpio_gap = 0xffff;
|
||||
btmrvl_send_hscfg_cmd(priv);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* This function handles the event generated by firmware, rx data
|
||||
* received from firmware, and tx data sent from kernel.
|
||||
|
@ -566,14 +635,12 @@ int btmrvl_register_hdev(struct btmrvl_private *priv)
|
|||
priv->btmrvl_dev.hcidev = hdev;
|
||||
hci_set_drvdata(hdev, priv);
|
||||
|
||||
hdev->bus = HCI_SDIO;
|
||||
hdev->open = btmrvl_open;
|
||||
hdev->bus = HCI_SDIO;
|
||||
hdev->open = btmrvl_open;
|
||||
hdev->close = btmrvl_close;
|
||||
hdev->flush = btmrvl_flush;
|
||||
hdev->send = btmrvl_send_frame;
|
||||
hdev->ioctl = btmrvl_ioctl;
|
||||
|
||||
btmrvl_send_module_cfg_cmd(priv, MODULE_BRINGUP_REQ);
|
||||
hdev->send = btmrvl_send_frame;
|
||||
hdev->setup = btmrvl_setup;
|
||||
|
||||
hdev->dev_type = priv->btmrvl_dev.dev_type;
|
||||
|
||||
|
|
|
@ -18,7 +18,6 @@
|
|||
* this warranty disclaimer.
|
||||
**/
|
||||
|
||||
#include <linux/firmware.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
#include <linux/mmc/sdio_ids.h>
|
||||
|
@ -102,6 +101,7 @@ static const struct btmrvl_sdio_card_reg btmrvl_reg_88xx = {
|
|||
static const struct btmrvl_sdio_device btmrvl_sdio_sd8688 = {
|
||||
.helper = "mrvl/sd8688_helper.bin",
|
||||
.firmware = "mrvl/sd8688.bin",
|
||||
.cal_data = NULL,
|
||||
.reg = &btmrvl_reg_8688,
|
||||
.sd_blksz_fw_dl = 64,
|
||||
};
|
||||
|
@ -109,6 +109,7 @@ static const struct btmrvl_sdio_device btmrvl_sdio_sd8688 = {
|
|||
static const struct btmrvl_sdio_device btmrvl_sdio_sd8787 = {
|
||||
.helper = NULL,
|
||||
.firmware = "mrvl/sd8787_uapsta.bin",
|
||||
.cal_data = NULL,
|
||||
.reg = &btmrvl_reg_87xx,
|
||||
.sd_blksz_fw_dl = 256,
|
||||
};
|
||||
|
@ -116,6 +117,7 @@ static const struct btmrvl_sdio_device btmrvl_sdio_sd8787 = {
|
|||
static const struct btmrvl_sdio_device btmrvl_sdio_sd8797 = {
|
||||
.helper = NULL,
|
||||
.firmware = "mrvl/sd8797_uapsta.bin",
|
||||
.cal_data = "mrvl/sd8797_caldata.conf",
|
||||
.reg = &btmrvl_reg_87xx,
|
||||
.sd_blksz_fw_dl = 256,
|
||||
};
|
||||
|
@ -123,6 +125,7 @@ static const struct btmrvl_sdio_device btmrvl_sdio_sd8797 = {
|
|||
static const struct btmrvl_sdio_device btmrvl_sdio_sd8897 = {
|
||||
.helper = NULL,
|
||||
.firmware = "mrvl/sd8897_uapsta.bin",
|
||||
.cal_data = NULL,
|
||||
.reg = &btmrvl_reg_88xx,
|
||||
.sd_blksz_fw_dl = 256,
|
||||
};
|
||||
|
@ -597,15 +600,14 @@ static int btmrvl_sdio_card_to_host(struct btmrvl_private *priv)
|
|||
case HCI_SCODATA_PKT:
|
||||
case HCI_EVENT_PKT:
|
||||
bt_cb(skb)->pkt_type = type;
|
||||
skb->dev = (void *)hdev;
|
||||
skb_put(skb, buf_len);
|
||||
skb_pull(skb, SDIO_HEADER_LEN);
|
||||
|
||||
if (type == HCI_EVENT_PKT) {
|
||||
if (btmrvl_check_evtpkt(priv, skb))
|
||||
hci_recv_frame(skb);
|
||||
hci_recv_frame(hdev, skb);
|
||||
} else {
|
||||
hci_recv_frame(skb);
|
||||
hci_recv_frame(hdev, skb);
|
||||
}
|
||||
|
||||
hdev->stat.byte_rx += buf_len;
|
||||
|
@ -613,12 +615,11 @@ static int btmrvl_sdio_card_to_host(struct btmrvl_private *priv)
|
|||
|
||||
case MRVL_VENDOR_PKT:
|
||||
bt_cb(skb)->pkt_type = HCI_VENDOR_PKT;
|
||||
skb->dev = (void *)hdev;
|
||||
skb_put(skb, buf_len);
|
||||
skb_pull(skb, SDIO_HEADER_LEN);
|
||||
|
||||
if (btmrvl_process_event(priv, skb))
|
||||
hci_recv_frame(skb);
|
||||
hci_recv_frame(hdev, skb);
|
||||
|
||||
hdev->stat.byte_rx += buf_len;
|
||||
break;
|
||||
|
@ -1006,6 +1007,7 @@ static int btmrvl_sdio_probe(struct sdio_func *func,
|
|||
struct btmrvl_sdio_device *data = (void *) id->driver_data;
|
||||
card->helper = data->helper;
|
||||
card->firmware = data->firmware;
|
||||
card->cal_data = data->cal_data;
|
||||
card->reg = data->reg;
|
||||
card->sd_blksz_fw_dl = data->sd_blksz_fw_dl;
|
||||
}
|
||||
|
@ -1034,6 +1036,8 @@ static int btmrvl_sdio_probe(struct sdio_func *func,
|
|||
}
|
||||
|
||||
card->priv = priv;
|
||||
priv->btmrvl_dev.dev = &card->func->dev;
|
||||
priv->btmrvl_dev.cal_data = card->cal_data;
|
||||
|
||||
/* Initialize the interface specific function pointers */
|
||||
priv->hw_host_to_card = btmrvl_sdio_host_to_card;
|
||||
|
@ -1046,12 +1050,6 @@ static int btmrvl_sdio_probe(struct sdio_func *func,
|
|||
goto disable_host_int;
|
||||
}
|
||||
|
||||
priv->btmrvl_dev.psmode = 1;
|
||||
btmrvl_enable_ps(priv);
|
||||
|
||||
priv->btmrvl_dev.gpio_gap = 0xffff;
|
||||
btmrvl_send_hscfg_cmd(priv);
|
||||
|
||||
return 0;
|
||||
|
||||
disable_host_int:
|
||||
|
@ -1222,4 +1220,5 @@ MODULE_FIRMWARE("mrvl/sd8688_helper.bin");
|
|||
MODULE_FIRMWARE("mrvl/sd8688.bin");
|
||||
MODULE_FIRMWARE("mrvl/sd8787_uapsta.bin");
|
||||
MODULE_FIRMWARE("mrvl/sd8797_uapsta.bin");
|
||||
MODULE_FIRMWARE("mrvl/sd8797_caldata.conf");
|
||||
MODULE_FIRMWARE("mrvl/sd8897_uapsta.bin");
|
||||
|
|
|
@ -85,6 +85,7 @@ struct btmrvl_sdio_card {
|
|||
u32 ioport;
|
||||
const char *helper;
|
||||
const char *firmware;
|
||||
const char *cal_data;
|
||||
const struct btmrvl_sdio_card_reg *reg;
|
||||
u16 sd_blksz_fw_dl;
|
||||
u8 rx_unit;
|
||||
|
@ -94,6 +95,7 @@ struct btmrvl_sdio_card {
|
|||
struct btmrvl_sdio_device {
|
||||
const char *helper;
|
||||
const char *firmware;
|
||||
const char *cal_data;
|
||||
const struct btmrvl_sdio_card_reg *reg;
|
||||
u16 sd_blksz_fw_dl;
|
||||
};
|
||||
|
|
|
@@ -157,10 +157,9 @@ static int btsdio_rx_packet(struct btsdio_data *data)

data->hdev->stat.byte_rx += len;

skb->dev = (void *) data->hdev;
bt_cb(skb)->pkt_type = hdr[3];

err = hci_recv_frame(skb);
err = hci_recv_frame(data->hdev, skb);
if (err < 0)
return err;

@@ -255,9 +254,8 @@ static int btsdio_flush(struct hci_dev *hdev)
return 0;
}

static int btsdio_send_frame(struct sk_buff *skb)
static int btsdio_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_dev *hdev = (struct hci_dev *) skb->dev;
struct btsdio_data *data = hci_get_drvdata(hdev);

BT_DBG("%s", hdev->name);

@ -198,7 +198,6 @@ static void btuart_receive(btuart_info_t *info)
|
|||
|
||||
if (info->rx_state == RECV_WAIT_PACKET_TYPE) {
|
||||
|
||||
info->rx_skb->dev = (void *) info->hdev;
|
||||
bt_cb(info->rx_skb)->pkt_type = inb(iobase + UART_RX);
|
||||
|
||||
switch (bt_cb(info->rx_skb)->pkt_type) {
|
||||
|
@ -265,7 +264,7 @@ static void btuart_receive(btuart_info_t *info)
|
|||
break;
|
||||
|
||||
case RECV_WAIT_DATA:
|
||||
hci_recv_frame(info->rx_skb);
|
||||
hci_recv_frame(info->hdev, info->rx_skb);
|
||||
info->rx_skb = NULL;
|
||||
break;
|
||||
|
||||
|
@ -424,17 +423,9 @@ static int btuart_hci_close(struct hci_dev *hdev)
|
|||
}
|
||||
|
||||
|
||||
static int btuart_hci_send_frame(struct sk_buff *skb)
|
||||
static int btuart_hci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
btuart_info_t *info;
|
||||
struct hci_dev *hdev = (struct hci_dev *)(skb->dev);
|
||||
|
||||
if (!hdev) {
|
||||
BT_ERR("Frame for unknown HCI device (hdev=NULL)");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
info = hci_get_drvdata(hdev);
|
||||
btuart_info_t *info = hci_get_drvdata(hdev);
|
||||
|
||||
switch (bt_cb(skb)->pkt_type) {
|
||||
case HCI_COMMAND_PKT:
|
||||
|
@ -458,12 +449,6 @@ static int btuart_hci_send_frame(struct sk_buff *skb)
|
|||
}
|
||||
|
||||
|
||||
static int btuart_hci_ioctl(struct hci_dev *hdev, unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
|
||||
|
||||
/* ======================== Card services HCI interaction ======================== */
|
||||
|
||||
|
@ -495,11 +480,10 @@ static int btuart_open(btuart_info_t *info)
|
|||
hci_set_drvdata(hdev, info);
|
||||
SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
|
||||
|
||||
hdev->open = btuart_hci_open;
|
||||
hdev->close = btuart_hci_close;
|
||||
hdev->flush = btuart_hci_flush;
|
||||
hdev->send = btuart_hci_send_frame;
|
||||
hdev->ioctl = btuart_hci_ioctl;
|
||||
hdev->open = btuart_hci_open;
|
||||
hdev->close = btuart_hci_close;
|
||||
hdev->flush = btuart_hci_flush;
|
||||
hdev->send = btuart_hci_send_frame;
|
||||
|
||||
spin_lock_irqsave(&(info->lock), flags);
|
||||
|
||||
|
|
|
@ -50,7 +50,7 @@ static struct usb_driver btusb_driver;
|
|||
#define BTUSB_ATH3012 0x80
|
||||
#define BTUSB_INTEL 0x100
|
||||
|
||||
static struct usb_device_id btusb_table[] = {
|
||||
static const struct usb_device_id btusb_table[] = {
|
||||
/* Generic Bluetooth USB device */
|
||||
{ USB_DEVICE_INFO(0xe0, 0x01, 0x01) },
|
||||
|
||||
|
@ -121,7 +121,7 @@ static struct usb_device_id btusb_table[] = {
|
|||
|
||||
MODULE_DEVICE_TABLE(usb, btusb_table);
|
||||
|
||||
static struct usb_device_id blacklist_table[] = {
|
||||
static const struct usb_device_id blacklist_table[] = {
|
||||
/* CSR BlueCore devices */
|
||||
{ USB_DEVICE(0x0a12, 0x0001), .driver_info = BTUSB_CSR },
|
||||
|
||||
|
@ -716,9 +716,8 @@ static int btusb_flush(struct hci_dev *hdev)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int btusb_send_frame(struct sk_buff *skb)
|
||||
static int btusb_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
struct hci_dev *hdev = (struct hci_dev *) skb->dev;
|
||||
struct btusb_data *data = hci_get_drvdata(hdev);
|
||||
struct usb_ctrlrequest *dr;
|
||||
struct urb *urb;
|
||||
|
@ -730,6 +729,8 @@ static int btusb_send_frame(struct sk_buff *skb)
|
|||
if (!test_bit(HCI_RUNNING, &hdev->flags))
|
||||
return -EBUSY;
|
||||
|
||||
skb->dev = (void *) hdev;
|
||||
|
||||
switch (bt_cb(skb)->pkt_type) {
|
||||
case HCI_COMMAND_PKT:
|
||||
urb = usb_alloc_urb(0, GFP_ATOMIC);
|
||||
|
@ -774,7 +775,7 @@ static int btusb_send_frame(struct sk_buff *skb)
|
|||
break;
|
||||
|
||||
case HCI_SCODATA_PKT:
|
||||
if (!data->isoc_tx_ep || hdev->conn_hash.sco_num < 1)
|
||||
if (!data->isoc_tx_ep || hci_conn_num(hdev, SCO_LINK) < 1)
|
||||
return -ENODEV;
|
||||
|
||||
urb = usb_alloc_urb(BTUSB_MAX_ISOC_FRAMES, GFP_ATOMIC);
|
||||
|
@ -833,8 +834,8 @@ static void btusb_notify(struct hci_dev *hdev, unsigned int evt)
|
|||
|
||||
BT_DBG("%s evt %d", hdev->name, evt);
|
||||
|
||||
if (hdev->conn_hash.sco_num != data->sco_num) {
|
||||
data->sco_num = hdev->conn_hash.sco_num;
|
||||
if (hci_conn_num(hdev, SCO_LINK) != data->sco_num) {
|
||||
data->sco_num = hci_conn_num(hdev, SCO_LINK);
|
||||
schedule_work(&data->work);
|
||||
}
|
||||
}
|
||||
|
@ -889,7 +890,7 @@ static void btusb_work(struct work_struct *work)
|
|||
int new_alts;
|
||||
int err;
|
||||
|
||||
if (hdev->conn_hash.sco_num > 0) {
|
||||
if (data->sco_num > 0) {
|
||||
if (!test_bit(BTUSB_DID_ISO_RESUME, &data->flags)) {
|
||||
err = usb_autopm_get_interface(data->isoc ? data->isoc : data->intf);
|
||||
if (err < 0) {
|
||||
|
@ -903,9 +904,9 @@ static void btusb_work(struct work_struct *work)
|
|||
|
||||
if (hdev->voice_setting & 0x0020) {
|
||||
static const int alts[3] = { 2, 4, 5 };
|
||||
new_alts = alts[hdev->conn_hash.sco_num - 1];
|
||||
new_alts = alts[data->sco_num - 1];
|
||||
} else {
|
||||
new_alts = hdev->conn_hash.sco_num;
|
||||
new_alts = data->sco_num;
|
||||
}
|
||||
|
||||
if (data->isoc_altsetting != new_alts) {
|
||||
|
@ -1628,7 +1629,6 @@ static struct usb_driver btusb_driver = {
|
|||
#ifdef CONFIG_PM
|
||||
.suspend = btusb_suspend,
|
||||
.resume = btusb_resume,
|
||||
.reset_resume = btusb_resume,
|
||||
#endif
|
||||
.id_table = btusb_table,
|
||||
.supports_autosuspend = 1,
|
||||
|
|
|
@@ -108,10 +108,8 @@ static long st_receive(void *priv_data, struct sk_buff *skb)
return -EFAULT;
}

skb->dev = (void *) lhst->hdev;

/* Forward skb to HCI core layer */
err = hci_recv_frame(skb);
err = hci_recv_frame(lhst->hdev, skb);
if (err < 0) {
BT_ERR("Unable to push skb to HCI core(%d)", err);
return err;

@@ -253,14 +251,11 @@ static int ti_st_close(struct hci_dev *hdev)
return err;
}

static int ti_st_send_frame(struct sk_buff *skb)
static int ti_st_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_dev *hdev;
struct ti_st *hst;
long len;

hdev = (struct hci_dev *)skb->dev;

if (!test_bit(HCI_RUNNING, &hdev->flags))
return -EBUSY;

@ -256,9 +256,8 @@ static void dtl1_receive(dtl1_info_t *info)
|
|||
case 0x83:
|
||||
case 0x84:
|
||||
/* send frame to the HCI layer */
|
||||
info->rx_skb->dev = (void *) info->hdev;
|
||||
bt_cb(info->rx_skb)->pkt_type &= 0x0f;
|
||||
hci_recv_frame(info->rx_skb);
|
||||
hci_recv_frame(info->hdev, info->rx_skb);
|
||||
break;
|
||||
default:
|
||||
/* unknown packet */
|
||||
|
@ -383,20 +382,12 @@ static int dtl1_hci_close(struct hci_dev *hdev)
|
|||
}
|
||||
|
||||
|
||||
static int dtl1_hci_send_frame(struct sk_buff *skb)
|
||||
static int dtl1_hci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
dtl1_info_t *info;
|
||||
struct hci_dev *hdev = (struct hci_dev *)(skb->dev);
|
||||
dtl1_info_t *info = hci_get_drvdata(hdev);
|
||||
struct sk_buff *s;
|
||||
nsh_t nsh;
|
||||
|
||||
if (!hdev) {
|
||||
BT_ERR("Frame for unknown HCI device (hdev=NULL)");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
info = hci_get_drvdata(hdev);
|
||||
|
||||
switch (bt_cb(skb)->pkt_type) {
|
||||
case HCI_COMMAND_PKT:
|
||||
hdev->stat.cmd_tx++;
|
||||
|
@ -438,12 +429,6 @@ static int dtl1_hci_send_frame(struct sk_buff *skb)
|
|||
}
|
||||
|
||||
|
||||
static int dtl1_hci_ioctl(struct hci_dev *hdev, unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
|
||||
|
||||
/* ======================== Card services HCI interaction ======================== */
|
||||
|
||||
|
@ -477,11 +462,10 @@ static int dtl1_open(dtl1_info_t *info)
|
|||
hci_set_drvdata(hdev, info);
|
||||
SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
|
||||
|
||||
hdev->open = dtl1_hci_open;
|
||||
hdev->close = dtl1_hci_close;
|
||||
hdev->flush = dtl1_hci_flush;
|
||||
hdev->send = dtl1_hci_send_frame;
|
||||
hdev->ioctl = dtl1_hci_ioctl;
|
||||
hdev->open = dtl1_hci_open;
|
||||
hdev->close = dtl1_hci_close;
|
||||
hdev->flush = dtl1_hci_flush;
|
||||
hdev->send = dtl1_hci_send_frame;
|
||||
|
||||
spin_lock_irqsave(&(info->lock), flags);
|
||||
|
||||
|
|
|
@@ -522,7 +522,7 @@ static void bcsp_complete_rx_pkt(struct hci_uart *hu)
memcpy(skb_push(bcsp->rx_skb, HCI_EVENT_HDR_SIZE), &hdr, HCI_EVENT_HDR_SIZE);
bt_cb(bcsp->rx_skb)->pkt_type = HCI_EVENT_PKT;

hci_recv_frame(bcsp->rx_skb);
hci_recv_frame(hu->hdev, bcsp->rx_skb);
} else {
BT_ERR ("Packet for unknown channel (%u %s)",
bcsp->rx_skb->data[1] & 0x0f,

@@ -536,7 +536,7 @@ static void bcsp_complete_rx_pkt(struct hci_uart *hu)
/* Pull out BCSP hdr */
skb_pull(bcsp->rx_skb, 4);

hci_recv_frame(bcsp->rx_skb);
hci_recv_frame(hu->hdev, bcsp->rx_skb);
}

bcsp->rx_state = BCSP_W4_PKT_DELIMITER;

@@ -655,7 +655,6 @@ static int bcsp_recv(struct hci_uart *hu, void *data, int count)
bcsp->rx_count = 0;
return 0;
}
bcsp->rx_skb->dev = (void *) hu->hdev;
break;
}
break;

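On the receive side the same treatment is applied to hci_recv_frame(), which now takes the destination hci_dev explicitly, so drivers no longer stash it in skb->dev first. A sketch of the resulting call pattern (assuming a driver that keeps its hci_dev in a private struct; names are illustrative):

	/* no skb->dev assignment needed anymore */
	bt_cb(skb)->pkt_type = pkt_type;
	err = hci_recv_frame(priv->hdev, skb);
	if (err < 0)
		BT_ERR("Can't pass frame to HCI core (%d)", err);
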
@@ -124,30 +124,6 @@ static int h4_enqueue(struct hci_uart *hu, struct sk_buff *skb)
return 0;
}

static inline int h4_check_data_len(struct h4_struct *h4, int len)
{
int room = skb_tailroom(h4->rx_skb);

BT_DBG("len %d room %d", len, room);

if (!len) {
hci_recv_frame(h4->rx_skb);
} else if (len > room) {
BT_ERR("Data length is too large");
kfree_skb(h4->rx_skb);
} else {
h4->rx_state = H4_W4_DATA;
h4->rx_count = len;
return len;
}

h4->rx_state = H4_W4_PACKET_TYPE;
h4->rx_skb = NULL;
h4->rx_count = 0;

return 0;
}

/* Recv data */
static int h4_recv(struct hci_uart *hu, void *data, int count)
{

@@ -340,7 +340,7 @@ static void h5_complete_rx_pkt(struct hci_uart *hu)
/* Remove Three-wire header */
skb_pull(h5->rx_skb, 4);

hci_recv_frame(h5->rx_skb);
hci_recv_frame(hu->hdev, h5->rx_skb);
h5->rx_skb = NULL;

break;

@@ -234,21 +234,13 @@ static int hci_uart_close(struct hci_dev *hdev)
}

/* Send frames from HCI layer */
static int hci_uart_send_frame(struct sk_buff *skb)
static int hci_uart_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_dev* hdev = (struct hci_dev *) skb->dev;
struct hci_uart *hu;

if (!hdev) {
BT_ERR("Frame for unknown device (hdev=NULL)");
return -ENODEV;
}
struct hci_uart *hu = hci_get_drvdata(hdev);

if (!test_bit(HCI_RUNNING, &hdev->flags))
return -EBUSY;

hu = hci_get_drvdata(hdev);

BT_DBG("%s: type %d len %d", hdev->name, bt_cb(skb)->pkt_type, skb->len);

hu->proto->enqueue(hu, skb);

@ -110,7 +110,6 @@ static int send_hcill_cmd(u8 cmd, struct hci_uart *hu)
|
|||
/* prepare packet */
|
||||
hcill_packet = (struct hcill_cmd *) skb_put(skb, 1);
|
||||
hcill_packet->cmd = cmd;
|
||||
skb->dev = (void *) hu->hdev;
|
||||
|
||||
/* send packet */
|
||||
skb_queue_tail(&ll->txq, skb);
|
||||
|
@ -346,14 +345,14 @@ static int ll_enqueue(struct hci_uart *hu, struct sk_buff *skb)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static inline int ll_check_data_len(struct ll_struct *ll, int len)
|
||||
static inline int ll_check_data_len(struct hci_dev *hdev, struct ll_struct *ll, int len)
|
||||
{
|
||||
int room = skb_tailroom(ll->rx_skb);
|
||||
|
||||
BT_DBG("len %d room %d", len, room);
|
||||
|
||||
if (!len) {
|
||||
hci_recv_frame(ll->rx_skb);
|
||||
hci_recv_frame(hdev, ll->rx_skb);
|
||||
} else if (len > room) {
|
||||
BT_ERR("Data length is too large");
|
||||
kfree_skb(ll->rx_skb);
|
||||
|
@ -395,7 +394,7 @@ static int ll_recv(struct hci_uart *hu, void *data, int count)
|
|||
switch (ll->rx_state) {
|
||||
case HCILL_W4_DATA:
|
||||
BT_DBG("Complete data");
|
||||
hci_recv_frame(ll->rx_skb);
|
||||
hci_recv_frame(hu->hdev, ll->rx_skb);
|
||||
|
||||
ll->rx_state = HCILL_W4_PACKET_TYPE;
|
||||
ll->rx_skb = NULL;
|
||||
|
@ -406,7 +405,7 @@ static int ll_recv(struct hci_uart *hu, void *data, int count)
|
|||
|
||||
BT_DBG("Event header: evt 0x%2.2x plen %d", eh->evt, eh->plen);
|
||||
|
||||
ll_check_data_len(ll, eh->plen);
|
||||
ll_check_data_len(hu->hdev, ll, eh->plen);
|
||||
continue;
|
||||
|
||||
case HCILL_W4_ACL_HDR:
|
||||
|
@ -415,7 +414,7 @@ static int ll_recv(struct hci_uart *hu, void *data, int count)
|
|||
|
||||
BT_DBG("ACL header: dlen %d", dlen);
|
||||
|
||||
ll_check_data_len(ll, dlen);
|
||||
ll_check_data_len(hu->hdev, ll, dlen);
|
||||
continue;
|
||||
|
||||
case HCILL_W4_SCO_HDR:
|
||||
|
@ -423,7 +422,7 @@ static int ll_recv(struct hci_uart *hu, void *data, int count)
|
|||
|
||||
BT_DBG("SCO header: dlen %d", sh->dlen);
|
||||
|
||||
ll_check_data_len(ll, sh->dlen);
|
||||
ll_check_data_len(hu->hdev, ll, sh->dlen);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
@ -494,7 +493,6 @@ static int ll_recv(struct hci_uart *hu, void *data, int count)
|
|||
return -ENOMEM;
|
||||
}
|
||||
|
||||
ll->rx_skb->dev = (void *) hu->hdev;
|
||||
bt_cb(ll->rx_skb)->pkt_type = type;
|
||||
}
|
||||
|
||||
|
|
|
@ -24,6 +24,7 @@
|
|||
*/
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <asm/unaligned.h>
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/init.h>
|
||||
|
@ -39,17 +40,17 @@
|
|||
#include <net/bluetooth/bluetooth.h>
|
||||
#include <net/bluetooth/hci_core.h>
|
||||
|
||||
#define VERSION "1.3"
|
||||
#define VERSION "1.4"
|
||||
|
||||
static bool amp;
|
||||
|
||||
struct vhci_data {
|
||||
struct hci_dev *hdev;
|
||||
|
||||
unsigned long flags;
|
||||
|
||||
wait_queue_head_t read_wait;
|
||||
struct sk_buff_head readq;
|
||||
|
||||
struct delayed_work open_timeout;
|
||||
};
|
||||
|
||||
static int vhci_open_dev(struct hci_dev *hdev)
|
||||
|
@ -80,35 +81,73 @@ static int vhci_flush(struct hci_dev *hdev)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int vhci_send_frame(struct sk_buff *skb)
|
||||
static int vhci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
|
||||
{
|
||||
struct hci_dev* hdev = (struct hci_dev *) skb->dev;
|
||||
struct vhci_data *data;
|
||||
|
||||
if (!hdev) {
|
||||
BT_ERR("Frame for unknown HCI device (hdev=NULL)");
|
||||
return -ENODEV;
|
||||
}
|
||||
struct vhci_data *data = hci_get_drvdata(hdev);
|
||||
|
||||
if (!test_bit(HCI_RUNNING, &hdev->flags))
|
||||
return -EBUSY;
|
||||
|
||||
data = hci_get_drvdata(hdev);
|
||||
|
||||
memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1);
|
||||
skb_queue_tail(&data->readq, skb);
|
||||
|
||||
wake_up_interruptible(&data->read_wait);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int vhci_create_device(struct vhci_data *data, __u8 dev_type)
|
||||
{
|
||||
struct hci_dev *hdev;
|
||||
struct sk_buff *skb;
|
||||
|
||||
skb = bt_skb_alloc(4, GFP_KERNEL);
|
||||
if (!skb)
|
||||
return -ENOMEM;
|
||||
|
||||
hdev = hci_alloc_dev();
|
||||
if (!hdev) {
|
||||
kfree_skb(skb);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
data->hdev = hdev;
|
||||
|
||||
hdev->bus = HCI_VIRTUAL;
|
||||
hdev->dev_type = dev_type;
|
||||
hci_set_drvdata(hdev, data);
|
||||
|
||||
hdev->open = vhci_open_dev;
|
||||
hdev->close = vhci_close_dev;
|
||||
hdev->flush = vhci_flush;
|
||||
hdev->send = vhci_send_frame;
|
||||
|
||||
if (hci_register_dev(hdev) < 0) {
|
||||
BT_ERR("Can't register HCI device");
|
||||
hci_free_dev(hdev);
|
||||
data->hdev = NULL;
|
||||
kfree_skb(skb);
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
bt_cb(skb)->pkt_type = HCI_VENDOR_PKT;
|
||||
|
||||
*skb_put(skb, 1) = 0xff;
|
||||
*skb_put(skb, 1) = dev_type;
|
||||
put_unaligned_le16(hdev->id, skb_put(skb, 2));
|
||||
skb_queue_tail(&data->readq, skb);
|
||||
|
||||
wake_up_interruptible(&data->read_wait);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline ssize_t vhci_get_user(struct vhci_data *data,
|
||||
const char __user *buf, size_t count)
|
||||
const char __user *buf, size_t count)
|
||||
{
|
||||
struct sk_buff *skb;
|
||||
__u8 pkt_type, dev_type;
|
||||
int ret;
|
||||
|
||||
if (count > HCI_MAX_FRAME_SIZE)
|
||||
if (count < 2 || count > HCI_MAX_FRAME_SIZE)
|
||||
return -EINVAL;
|
||||
|
||||
skb = bt_skb_alloc(count, GFP_KERNEL);
|
||||
|
@ -120,27 +159,69 @@ static inline ssize_t vhci_get_user(struct vhci_data *data,
|
|||
return -EFAULT;
|
||||
}
|
||||
|
||||
skb->dev = (void *) data->hdev;
|
||||
bt_cb(skb)->pkt_type = *((__u8 *) skb->data);
|
||||
pkt_type = *((__u8 *) skb->data);
|
||||
skb_pull(skb, 1);
|
||||
|
||||
hci_recv_frame(skb);
|
||||
switch (pkt_type) {
|
||||
case HCI_EVENT_PKT:
|
||||
case HCI_ACLDATA_PKT:
|
||||
case HCI_SCODATA_PKT:
|
||||
if (!data->hdev) {
|
||||
kfree_skb(skb);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
return count;
|
||||
bt_cb(skb)->pkt_type = pkt_type;
|
||||
|
||||
ret = hci_recv_frame(data->hdev, skb);
|
||||
break;
|
||||
|
||||
case HCI_VENDOR_PKT:
|
||||
if (data->hdev) {
|
||||
kfree_skb(skb);
|
||||
return -EBADFD;
|
||||
}
|
||||
|
||||
cancel_delayed_work_sync(&data->open_timeout);
|
||||
|
||||
dev_type = *((__u8 *) skb->data);
|
||||
skb_pull(skb, 1);
|
||||
|
||||
if (skb->len > 0) {
|
||||
kfree_skb(skb);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
kfree_skb(skb);
|
||||
|
||||
if (dev_type != HCI_BREDR && dev_type != HCI_AMP)
|
||||
return -EINVAL;
|
||||
|
||||
ret = vhci_create_device(data, dev_type);
|
||||
break;
|
||||
|
||||
default:
|
||||
kfree_skb(skb);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return (ret < 0) ? ret : count;
|
||||
}
|
||||
|
||||
static inline ssize_t vhci_put_user(struct vhci_data *data,
|
||||
struct sk_buff *skb, char __user *buf, int count)
|
||||
struct sk_buff *skb,
|
||||
char __user *buf, int count)
|
||||
{
|
||||
char __user *ptr = buf;
|
||||
int len, total = 0;
|
||||
int len;
|
||||
|
||||
len = min_t(unsigned int, skb->len, count);
|
||||
|
||||
if (copy_to_user(ptr, skb->data, len))
|
||||
return -EFAULT;
|
||||
|
||||
total += len;
|
||||
if (!data->hdev)
|
||||
return len;
|
||||
|
||||
data->hdev->stat.byte_tx += len;
|
||||
|
||||
|
@ -148,21 +229,19 @@ static inline ssize_t vhci_put_user(struct vhci_data *data,
|
|||
case HCI_COMMAND_PKT:
|
||||
data->hdev->stat.cmd_tx++;
|
||||
break;
|
||||
|
||||
case HCI_ACLDATA_PKT:
|
||||
data->hdev->stat.acl_tx++;
|
||||
break;
|
||||
|
||||
case HCI_SCODATA_PKT:
|
||||
data->hdev->stat.sco_tx++;
|
||||
break;
|
||||
}
|
||||
|
||||
return total;
|
||||
return len;
|
||||
}
|
||||
|
||||
static ssize_t vhci_read(struct file *file,
|
||||
char __user *buf, size_t count, loff_t *pos)
|
||||
char __user *buf, size_t count, loff_t *pos)
|
||||
{
|
||||
struct vhci_data *data = file->private_data;
|
||||
struct sk_buff *skb;
|
||||
|
@ -185,7 +264,7 @@ static ssize_t vhci_read(struct file *file,
|
|||
}
|
||||
|
||||
ret = wait_event_interruptible(data->read_wait,
|
||||
!skb_queue_empty(&data->readq));
|
||||
!skb_queue_empty(&data->readq));
|
||||
if (ret < 0)
|
||||
break;
|
||||
}
|
||||
|
@ -194,7 +273,7 @@ static ssize_t vhci_read(struct file *file,
|
|||
}
|
||||
|
||||
static ssize_t vhci_write(struct file *file,
|
||||
const char __user *buf, size_t count, loff_t *pos)
|
||||
const char __user *buf, size_t count, loff_t *pos)
|
||||
{
|
||||
struct vhci_data *data = file->private_data;
|
||||
|
||||
|
@ -213,10 +292,17 @@ static unsigned int vhci_poll(struct file *file, poll_table *wait)
|
|||
return POLLOUT | POLLWRNORM;
|
||||
}
|
||||
|
||||
static void vhci_open_timeout(struct work_struct *work)
|
||||
{
|
||||
struct vhci_data *data = container_of(work, struct vhci_data,
|
||||
open_timeout.work);
|
||||
|
||||
vhci_create_device(data, amp ? HCI_AMP : HCI_BREDR);
|
||||
}
|
||||
|
||||
static int vhci_open(struct inode *inode, struct file *file)
|
||||
{
|
||||
struct vhci_data *data;
|
||||
struct hci_dev *hdev;
|
||||
|
||||
data = kzalloc(sizeof(struct vhci_data), GFP_KERNEL);
|
||||
if (!data)
|
||||
|
@ -225,35 +311,13 @@ static int vhci_open(struct inode *inode, struct file *file)
|
|||
skb_queue_head_init(&data->readq);
|
||||
init_waitqueue_head(&data->read_wait);
|
||||
|
||||
hdev = hci_alloc_dev();
|
||||
if (!hdev) {
|
||||
kfree(data);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
data->hdev = hdev;
|
||||
|
||||
hdev->bus = HCI_VIRTUAL;
|
||||
hci_set_drvdata(hdev, data);
|
||||
|
||||
if (amp)
|
||||
hdev->dev_type = HCI_AMP;
|
||||
|
||||
hdev->open = vhci_open_dev;
|
||||
hdev->close = vhci_close_dev;
|
||||
hdev->flush = vhci_flush;
|
||||
hdev->send = vhci_send_frame;
|
||||
|
||||
if (hci_register_dev(hdev) < 0) {
|
||||
BT_ERR("Can't register HCI device");
|
||||
kfree(data);
|
||||
hci_free_dev(hdev);
|
||||
return -EBUSY;
|
||||
}
|
||||
INIT_DELAYED_WORK(&data->open_timeout, vhci_open_timeout);
|
||||
|
||||
file->private_data = data;
|
||||
nonseekable_open(inode, file);
|
||||
|
||||
schedule_delayed_work(&data->open_timeout, msecs_to_jiffies(1000));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -262,8 +326,12 @@ static int vhci_release(struct inode *inode, struct file *file)
|
|||
struct vhci_data *data = file->private_data;
|
||||
struct hci_dev *hdev = data->hdev;
|
||||
|
||||
hci_unregister_dev(hdev);
|
||||
hci_free_dev(hdev);
|
||||
cancel_delayed_work_sync(&data->open_timeout);
|
||||
|
||||
if (hdev) {
|
||||
hci_unregister_dev(hdev);
|
||||
hci_free_dev(hdev);
|
||||
}
|
||||
|
||||
file->private_data = NULL;
|
||||
kfree(data);
|
||||
|
@ -309,3 +377,4 @@ MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>");
|
|||
MODULE_DESCRIPTION("Bluetooth virtual HCI driver ver " VERSION);
|
||||
MODULE_VERSION(VERSION);
|
||||
MODULE_LICENSE("GPL");
|
||||
MODULE_ALIAS("devname:vhci");
|
||||
|
|
|
@@ -603,8 +603,11 @@ retry:

if (!r->initialized && nbits > 0) {
r->entropy_total += nbits;
if (r->entropy_total > 128)
if (r->entropy_total > 128) {
r->initialized = 1;
if (r == &nonblocking_pool)
prandom_reseed_late();
}
}

trace_credit_entropy_bits(r->name, nbits, entropy_count,

@ -1848,6 +1848,26 @@ static int cma_resolve_iw_route(struct rdma_id_private *id_priv, int timeout_ms)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int iboe_tos_to_sl(struct net_device *ndev, int tos)
|
||||
{
|
||||
int prio;
|
||||
struct net_device *dev;
|
||||
|
||||
prio = rt_tos2priority(tos);
|
||||
dev = ndev->priv_flags & IFF_802_1Q_VLAN ?
|
||||
vlan_dev_real_dev(ndev) : ndev;
|
||||
|
||||
if (dev->num_tc)
|
||||
return netdev_get_prio_tc_map(dev, prio);
|
||||
|
||||
#if IS_ENABLED(CONFIG_VLAN_8021Q)
|
||||
if (ndev->priv_flags & IFF_802_1Q_VLAN)
|
||||
return (vlan_dev_get_egress_qos_mask(ndev, prio) &
|
||||
VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
|
||||
#endif
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
|
||||
{
|
||||
struct rdma_route *route = &id_priv->id.route;
|
||||
|
@ -1888,11 +1908,7 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
|
|||
route->path_rec->reversible = 1;
|
||||
route->path_rec->pkey = cpu_to_be16(0xffff);
|
||||
route->path_rec->mtu_selector = IB_SA_EQ;
|
||||
route->path_rec->sl = netdev_get_prio_tc_map(
|
||||
ndev->priv_flags & IFF_802_1Q_VLAN ?
|
||||
vlan_dev_real_dev(ndev) : ndev,
|
||||
rt_tos2priority(id_priv->tos));
|
||||
|
||||
route->path_rec->sl = iboe_tos_to_sl(ndev, id_priv->tos);
|
||||
route->path_rec->mtu = iboe_get_mtu(ndev->mtu);
|
||||
route->path_rec->rate_selector = IB_SA_EQ;
|
||||
route->path_rec->rate = iboe_get_rate(ndev);
|
||||
|
@ -2294,7 +2310,7 @@ static int cma_alloc_any_port(struct idr *ps, struct rdma_id_private *id_priv)
|
|||
int low, high, remaining;
|
||||
unsigned int rover;
|
||||
|
||||
inet_get_local_port_range(&low, &high);
|
||||
inet_get_local_port_range(&init_net, &low, &high);
|
||||
remaining = (high - low) + 1;
|
||||
rover = net_random() % remaining + low;
|
||||
retry:
|
||||
|
|
|
@ -177,18 +177,18 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
|
|||
|
||||
props->max_mr_size = ~0ull;
|
||||
props->page_size_cap = dev->dev->caps.page_size_cap;
|
||||
props->max_qp = dev->dev->caps.num_qps - dev->dev->caps.reserved_qps;
|
||||
props->max_qp = dev->dev->quotas.qp;
|
||||
props->max_qp_wr = dev->dev->caps.max_wqes - MLX4_IB_SQ_MAX_SPARE;
|
||||
props->max_sge = min(dev->dev->caps.max_sq_sg,
|
||||
dev->dev->caps.max_rq_sg);
|
||||
props->max_cq = dev->dev->caps.num_cqs - dev->dev->caps.reserved_cqs;
|
||||
props->max_cq = dev->dev->quotas.cq;
|
||||
props->max_cqe = dev->dev->caps.max_cqes;
|
||||
props->max_mr = dev->dev->caps.num_mpts - dev->dev->caps.reserved_mrws;
|
||||
props->max_mr = dev->dev->quotas.mpt;
|
||||
props->max_pd = dev->dev->caps.num_pds - dev->dev->caps.reserved_pds;
|
||||
props->max_qp_rd_atom = dev->dev->caps.max_qp_dest_rdma;
|
||||
props->max_qp_init_rd_atom = dev->dev->caps.max_qp_init_rdma;
|
||||
props->max_res_rd_atom = props->max_qp_rd_atom * props->max_qp;
|
||||
props->max_srq = dev->dev->caps.num_srqs - dev->dev->caps.reserved_srqs;
|
||||
props->max_srq = dev->dev->quotas.srq;
|
||||
props->max_srq_wr = dev->dev->caps.max_srq_wqes - 1;
|
||||
props->max_srq_sge = dev->dev->caps.max_srq_sge;
|
||||
props->max_fast_reg_page_list_len = MLX4_MAX_FAST_REG_PAGES;
|
||||
|
@ -526,7 +526,6 @@ static int mlx4_ib_modify_device(struct ib_device *ibdev, int mask,
|
|||
if (IS_ERR(mailbox))
|
||||
return 0;
|
||||
|
||||
memset(mailbox->buf, 0, 256);
|
||||
memcpy(mailbox->buf, props->node_desc, 64);
|
||||
mlx4_cmd(to_mdev(ibdev)->dev, mailbox->dma, 1, 0,
|
||||
MLX4_CMD_SET_NODE, MLX4_CMD_TIME_CLASS_A, MLX4_CMD_NATIVE);
|
||||
|
@ -547,8 +546,6 @@ static int mlx4_SET_PORT(struct mlx4_ib_dev *dev, u8 port, int reset_qkey_viols,
|
|||
if (IS_ERR(mailbox))
|
||||
return PTR_ERR(mailbox);
|
||||
|
||||
memset(mailbox->buf, 0, 256);
|
||||
|
||||
if (dev->dev->flags & MLX4_FLAG_OLD_PORT_CMDS) {
|
||||
*(u8 *) mailbox->buf = !!reset_qkey_viols << 6;
|
||||
((__be32 *) mailbox->buf)[2] = cpu_to_be32(cap_mask);
|
||||
|
@ -879,8 +876,6 @@ static int __mlx4_ib_create_flow(struct ib_qp *qp, struct ib_flow_attr *flow_att
|
|||
struct mlx4_ib_dev *mdev = to_mdev(qp->device);
|
||||
struct mlx4_cmd_mailbox *mailbox;
|
||||
struct mlx4_net_trans_rule_hw_ctrl *ctrl;
|
||||
size_t rule_size = sizeof(struct mlx4_net_trans_rule_hw_ctrl) +
|
||||
(sizeof(struct _rule_hw) * flow_attr->num_of_specs);
|
||||
|
||||
static const u16 __mlx4_domain[] = {
|
||||
[IB_FLOW_DOMAIN_USER] = MLX4_DOMAIN_UVERBS,
|
||||
|
@ -905,7 +900,6 @@ static int __mlx4_ib_create_flow(struct ib_qp *qp, struct ib_flow_attr *flow_att
|
|||
mailbox = mlx4_alloc_cmd_mailbox(mdev->dev);
|
||||
if (IS_ERR(mailbox))
|
||||
return PTR_ERR(mailbox);
|
||||
memset(mailbox->buf, 0, rule_size);
|
||||
ctrl = mailbox->buf;
|
||||
|
||||
ctrl->prio = cpu_to_be16(__mlx4_domain[domain] |
|
||||
|
|
|
@@ -481,7 +481,7 @@ void __inline__ outpp(void __iomem *addr, word p)
int diva_os_register_irq(void *context, byte irq, const char *name)
{
int result = request_irq(irq, diva_os_irq_wrapper,
IRQF_DISABLED | IRQF_SHARED, name, context);
IRQF_SHARED, name, context);
return (result);
}

@@ -288,9 +288,9 @@ int divas_um_idi_delete_entity(int adapter_nr, void *entity)
cleanup_entity(e);
diva_os_free(0, e->os_context);
memset(e, 0x00, sizeof(*e));
diva_os_free(0, e);

DBG_LOG(("A(%d) remove E:%08x", adapter_nr, e));
diva_os_free(0, e);

return (0);
}

@@ -1580,8 +1580,7 @@ icn_addcard(int port, char *id1, char *id2)
}
if (!(card2 = icn_initcard(port, id2))) {
printk(KERN_INFO
"icn: (%s) half ICN-4B, port 0x%x added\n",
card2->interface.id, port);
"icn: (%s) half ICN-4B, port 0x%x added\n", id2, port);
return 0;
}
card->doubleS0 = 1;

@@ -336,7 +336,7 @@ static int __init sc_init(void)
*/
sc_adapter[cinst]->interrupt = irq[b];
if (request_irq(sc_adapter[cinst]->interrupt, interrupt_handler,
IRQF_DISABLED, interface->id,
0, interface->id,
(void *)(unsigned long) cinst))
{
kfree(sc_adapter[cinst]->channel);

@@ -4,7 +4,7 @@

obj-$(CONFIG_BONDING) += bonding.o

bonding-objs := bond_main.o bond_3ad.o bond_alb.o bond_sysfs.o bond_debugfs.o
bonding-objs := bond_main.o bond_3ad.o bond_alb.o bond_sysfs.o bond_debugfs.o bond_netlink.o bond_options.o

proc-$(CONFIG_PROC_FS) += bond_procfs.o
bonding-objs += $(proc-y)

@ -135,41 +135,6 @@ static inline struct bonding *__get_bond_by_port(struct port *port)
|
|||
return bond_get_bond_by_slave(port->slave);
|
||||
}
|
||||
|
||||
/**
|
||||
* __get_first_port - get the first port in the bond
|
||||
* @bond: the bond we're looking at
|
||||
*
|
||||
* Return the port of the first slave in @bond, or %NULL if it can't be found.
|
||||
*/
|
||||
static inline struct port *__get_first_port(struct bonding *bond)
|
||||
{
|
||||
struct slave *first_slave = bond_first_slave(bond);
|
||||
|
||||
return first_slave ? &(SLAVE_AD_INFO(first_slave).port) : NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* __get_next_port - get the next port in the bond
|
||||
* @port: the port we're looking at
|
||||
*
|
||||
* Return the port of the slave that is next in line of @port's slave in the
|
||||
* bond, or %NULL if it can't be found.
|
||||
*/
|
||||
static inline struct port *__get_next_port(struct port *port)
|
||||
{
|
||||
struct bonding *bond = __get_bond_by_port(port);
|
||||
struct slave *slave = port->slave, *slave_next;
|
||||
|
||||
// If there's no bond for this port, or this is the last slave
|
||||
if (bond == NULL)
|
||||
return NULL;
|
||||
slave_next = bond_next_slave(bond, slave);
|
||||
if (!slave_next || bond_is_first_slave(bond, slave_next))
|
||||
return NULL;
|
||||
|
||||
return &(SLAVE_AD_INFO(slave_next).port);
|
||||
}
|
||||
|
||||
/**
|
||||
* __get_first_agg - get the first aggregator in the bond
|
||||
* @bond: the bond we're looking at
|
||||
|
@ -190,28 +155,6 @@ static inline struct aggregator *__get_first_agg(struct port *port)
|
|||
return first_slave ? &(SLAVE_AD_INFO(first_slave).aggregator) : NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* __get_next_agg - get the next aggregator in the bond
|
||||
* @aggregator: the aggregator we're looking at
|
||||
*
|
||||
* Return the aggregator of the slave that is next in line of @aggregator's
|
||||
* slave in the bond, or %NULL if it can't be found.
|
||||
*/
|
||||
static inline struct aggregator *__get_next_agg(struct aggregator *aggregator)
|
||||
{
|
||||
struct slave *slave = aggregator->slave, *slave_next;
|
||||
struct bonding *bond = bond_get_bond_by_slave(slave);
|
||||
|
||||
// If there's no bond for this aggregator, or this is the last slave
|
||||
if (bond == NULL)
|
||||
return NULL;
|
||||
slave_next = bond_next_slave(bond, slave);
|
||||
if (!slave_next || bond_is_first_slave(bond, slave_next))
|
||||
return NULL;
|
||||
|
||||
return &(SLAVE_AD_INFO(slave_next).aggregator);
|
||||
}
|
||||
|
||||
/*
|
||||
* __agg_has_partner
|
||||
*
|
||||
|
@ -755,16 +698,15 @@ static u32 __get_agg_bandwidth(struct aggregator *aggregator)
|
|||
*/
|
||||
static struct aggregator *__get_active_agg(struct aggregator *aggregator)
|
||||
{
|
||||
struct aggregator *retval = NULL;
|
||||
struct bonding *bond = aggregator->slave->bond;
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
|
||||
for (; aggregator; aggregator = __get_next_agg(aggregator)) {
|
||||
if (aggregator->is_active) {
|
||||
retval = aggregator;
|
||||
break;
|
||||
}
|
||||
}
|
||||
bond_for_each_slave(bond, slave, iter)
|
||||
if (SLAVE_AD_INFO(slave).aggregator.is_active)
|
||||
return &(SLAVE_AD_INFO(slave).aggregator);
|
||||
|
||||
return retval;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1274,12 +1216,17 @@ static void ad_port_selection_logic(struct port *port)
|
|||
{
|
||||
struct aggregator *aggregator, *free_aggregator = NULL, *temp_aggregator;
|
||||
struct port *last_port = NULL, *curr_port;
|
||||
struct list_head *iter;
|
||||
struct bonding *bond;
|
||||
struct slave *slave;
|
||||
int found = 0;
|
||||
|
||||
// if the port is already Selected, do nothing
|
||||
if (port->sm_vars & AD_PORT_SELECTED)
|
||||
return;
|
||||
|
||||
bond = __get_bond_by_port(port);
|
||||
|
||||
// if the port is connected to other aggregator, detach it
|
||||
if (port->aggregator) {
|
||||
// detach the port from its former aggregator
|
||||
|
@ -1320,8 +1267,8 @@ static void ad_port_selection_logic(struct port *port)
|
|||
}
|
||||
}
|
||||
// search on all aggregators for a suitable aggregator for this port
|
||||
for (aggregator = __get_first_agg(port); aggregator;
|
||||
aggregator = __get_next_agg(aggregator)) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
aggregator = &(SLAVE_AD_INFO(slave).aggregator);
|
||||
|
||||
// keep a free aggregator for later use(if needed)
|
||||
if (!aggregator->lag_ports) {
|
||||
|
@ -1515,19 +1462,23 @@ static int agg_device_up(const struct aggregator *agg)
|
|||
static void ad_agg_selection_logic(struct aggregator *agg)
|
||||
{
|
||||
struct aggregator *best, *active, *origin;
|
||||
struct bonding *bond = agg->slave->bond;
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
struct port *port;
|
||||
|
||||
origin = agg;
|
||||
active = __get_active_agg(agg);
|
||||
best = (active && agg_device_up(active)) ? active : NULL;
|
||||
|
||||
do {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
agg = &(SLAVE_AD_INFO(slave).aggregator);
|
||||
|
||||
agg->is_active = 0;
|
||||
|
||||
if (agg->num_of_ports && agg_device_up(agg))
|
||||
best = ad_agg_selection_test(best, agg);
|
||||
|
||||
} while ((agg = __get_next_agg(agg)));
|
||||
}
|
||||
|
||||
if (best &&
|
||||
__get_agg_selection_mode(best->lag_ports) == BOND_AD_STABLE) {
|
||||
|
@ -1565,8 +1516,8 @@ static void ad_agg_selection_logic(struct aggregator *agg)
|
|||
best->lag_ports, best->slave,
|
||||
best->slave ? best->slave->dev->name : "NULL");
|
||||
|
||||
for (agg = __get_first_agg(best->lag_ports); agg;
|
||||
agg = __get_next_agg(agg)) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
agg = &(SLAVE_AD_INFO(slave).aggregator);
|
||||
|
||||
pr_debug("Agg=%d; P=%d; a k=%d; p k=%d; Ind=%d; Act=%d\n",
|
||||
agg->aggregator_identifier, agg->num_of_ports,
|
||||
|
@ -1614,13 +1565,7 @@ static void ad_agg_selection_logic(struct aggregator *agg)
|
|||
}
|
||||
}
|
||||
|
||||
if (origin->slave) {
|
||||
struct bonding *bond;
|
||||
|
||||
bond = bond_get_bond_by_slave(origin->slave);
|
||||
if (bond)
|
||||
bond_3ad_set_carrier(bond);
|
||||
}
|
||||
bond_3ad_set_carrier(bond);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1969,6 +1914,9 @@ void bond_3ad_unbind_slave(struct slave *slave)
|
|||
struct port *port, *prev_port, *temp_port;
|
||||
struct aggregator *aggregator, *new_aggregator, *temp_aggregator;
|
||||
int select_new_active_agg = 0;
|
||||
struct bonding *bond = slave->bond;
|
||||
struct slave *slave_iter;
|
||||
struct list_head *iter;
|
||||
|
||||
// find the aggregator related to this slave
|
||||
aggregator = &(SLAVE_AD_INFO(slave).aggregator);
|
||||
|
@ -1998,14 +1946,16 @@ void bond_3ad_unbind_slave(struct slave *slave)
|
|||
// reason to search for new aggregator, and that we will find one
|
||||
if ((aggregator->lag_ports != port) || (aggregator->lag_ports->next_port_in_aggregator)) {
|
||||
// find new aggregator for the related port(s)
|
||||
new_aggregator = __get_first_agg(port);
|
||||
for (; new_aggregator; new_aggregator = __get_next_agg(new_aggregator)) {
|
||||
bond_for_each_slave(bond, slave_iter, iter) {
|
||||
new_aggregator = &(SLAVE_AD_INFO(slave_iter).aggregator);
|
||||
// if the new aggregator is empty, or it is connected to our port only
|
||||
if (!new_aggregator->lag_ports
|
||||
|| ((new_aggregator->lag_ports == port)
|
||||
&& !new_aggregator->lag_ports->next_port_in_aggregator))
|
||||
break;
|
||||
}
|
||||
if (!slave_iter)
|
||||
new_aggregator = NULL;
|
||||
// if new aggregator found, copy the aggregator's parameters
|
||||
// and connect the related lag_ports to the new aggregator
|
||||
if ((new_aggregator) && ((!new_aggregator->lag_ports) || ((new_aggregator->lag_ports == port) && !new_aggregator->lag_ports->next_port_in_aggregator))) {
|
||||
|
@ -2056,15 +2006,17 @@ void bond_3ad_unbind_slave(struct slave *slave)
|
|||
pr_info("%s: Removing an active aggregator\n",
|
||||
slave->bond->dev->name);
|
||||
// select new active aggregator
|
||||
ad_agg_selection_logic(__get_first_agg(port));
|
||||
temp_aggregator = __get_first_agg(port);
|
||||
if (temp_aggregator)
|
||||
ad_agg_selection_logic(temp_aggregator);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pr_debug("Unbinding port %d\n", port->actor_port_number);
|
||||
// find the aggregator that this port is connected to
|
||||
temp_aggregator = __get_first_agg(port);
|
||||
for (; temp_aggregator; temp_aggregator = __get_next_agg(temp_aggregator)) {
|
||||
bond_for_each_slave(bond, slave_iter, iter) {
|
||||
temp_aggregator = &(SLAVE_AD_INFO(slave_iter).aggregator);
|
||||
prev_port = NULL;
|
||||
// search the port in the aggregator's related ports
|
||||
for (temp_port = temp_aggregator->lag_ports; temp_port;
|
||||
|
@ -2111,19 +2063,24 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
|
|||
{
|
||||
struct bonding *bond = container_of(work, struct bonding,
|
||||
ad_work.work);
|
||||
struct port *port;
|
||||
struct aggregator *aggregator;
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
struct port *port;
|
||||
|
||||
read_lock(&bond->lock);
|
||||
|
||||
//check if there are any slaves
|
||||
if (list_empty(&bond->slave_list))
|
||||
if (!bond_has_slaves(bond))
|
||||
goto re_arm;
|
||||
|
||||
// check if agg_select_timer timer after initialize is timed out
|
||||
if (BOND_AD_INFO(bond).agg_select_timer && !(--BOND_AD_INFO(bond).agg_select_timer)) {
|
||||
slave = bond_first_slave(bond);
|
||||
port = slave ? &(SLAVE_AD_INFO(slave).port) : NULL;
|
||||
|
||||
// select the active aggregator for the bond
|
||||
if ((port = __get_first_port(bond))) {
|
||||
if (port) {
|
||||
if (!port->slave) {
|
||||
pr_warning("%s: Warning: bond's first port is uninitialized\n",
|
||||
bond->dev->name);
|
||||
|
@ -2137,7 +2094,8 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
|
|||
}
|
||||
|
||||
// for each port run the state machines
|
||||
for (port = __get_first_port(bond); port; port = __get_next_port(port)) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
port = &(SLAVE_AD_INFO(slave).port);
|
||||
if (!port->slave) {
|
||||
pr_warning("%s: Warning: Found an uninitialized port\n",
|
||||
bond->dev->name);
|
||||
|
@ -2382,9 +2340,12 @@ int __bond_3ad_get_active_agg_info(struct bonding *bond,
|
|||
struct ad_info *ad_info)
|
||||
{
|
||||
struct aggregator *aggregator = NULL;
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
struct port *port;
|
||||
|
||||
for (port = __get_first_port(bond); port; port = __get_next_port(port)) {
|
||||
bond_for_each_slave_rcu(bond, slave, iter) {
|
||||
port = &(SLAVE_AD_INFO(slave).port);
|
||||
if (port->aggregator && port->aggregator->is_active) {
|
||||
aggregator = port->aggregator;
|
||||
break;
|
||||
|
@ -2408,25 +2369,25 @@ int bond_3ad_get_active_agg_info(struct bonding *bond, struct ad_info *ad_info)
|
|||
{
|
||||
int ret;
|
||||
|
||||
read_lock(&bond->lock);
|
||||
rcu_read_lock();
|
||||
ret = __bond_3ad_get_active_agg_info(bond, ad_info);
|
||||
read_unlock(&bond->lock);
|
||||
rcu_read_unlock();
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int bond_3ad_xmit_xor(struct sk_buff *skb, struct net_device *dev)
|
||||
{
|
||||
struct slave *slave, *start_at;
|
||||
struct bonding *bond = netdev_priv(dev);
|
||||
int slave_agg_no;
|
||||
int slaves_in_agg;
|
||||
int agg_id;
|
||||
int i;
|
||||
struct slave *slave, *first_ok_slave;
|
||||
struct aggregator *agg;
|
||||
struct ad_info ad_info;
|
||||
struct list_head *iter;
|
||||
int slaves_in_agg;
|
||||
int slave_agg_no;
|
||||
int res = 1;
|
||||
int agg_id;
|
||||
|
||||
read_lock(&bond->lock);
|
||||
if (__bond_3ad_get_active_agg_info(bond, &ad_info)) {
|
||||
pr_debug("%s: Error: __bond_3ad_get_active_agg_info failed\n",
|
||||
dev->name);
|
||||
|
@ -2437,20 +2398,28 @@ int bond_3ad_xmit_xor(struct sk_buff *skb, struct net_device *dev)
|
|||
agg_id = ad_info.aggregator_id;
|
||||
|
||||
if (slaves_in_agg == 0) {
|
||||
/*the aggregator is empty*/
|
||||
pr_debug("%s: Error: active aggregator is empty\n", dev->name);
|
||||
goto out;
|
||||
}
|
||||
|
||||
slave_agg_no = bond->xmit_hash_policy(skb, slaves_in_agg);
|
||||
slave_agg_no = bond_xmit_hash(bond, skb, slaves_in_agg);
|
||||
first_ok_slave = NULL;
|
||||
|
||||
bond_for_each_slave(bond, slave) {
|
||||
struct aggregator *agg = SLAVE_AD_INFO(slave).port.aggregator;
|
||||
bond_for_each_slave_rcu(bond, slave, iter) {
|
||||
agg = SLAVE_AD_INFO(slave).port.aggregator;
|
||||
if (!agg || agg->aggregator_identifier != agg_id)
|
||||
continue;
|
||||
|
||||
if (agg && (agg->aggregator_identifier == agg_id)) {
|
||||
if (slave_agg_no >= 0) {
|
||||
if (!first_ok_slave && SLAVE_IS_OK(slave))
|
||||
first_ok_slave = slave;
|
||||
slave_agg_no--;
|
||||
if (slave_agg_no < 0)
|
||||
break;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (SLAVE_IS_OK(slave)) {
|
||||
res = bond_dev_queue_xmit(bond, skb, slave->dev);
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -2460,23 +2429,12 @@ int bond_3ad_xmit_xor(struct sk_buff *skb, struct net_device *dev)
|
|||
goto out;
|
||||
}
|
||||
|
||||
start_at = slave;
|
||||
|
||||
bond_for_each_slave_from(bond, slave, i, start_at) {
|
||||
int slave_agg_id = 0;
|
||||
struct aggregator *agg = SLAVE_AD_INFO(slave).port.aggregator;
|
||||
|
||||
if (agg)
|
||||
slave_agg_id = agg->aggregator_identifier;
|
||||
|
||||
if (SLAVE_IS_OK(slave) && agg && (slave_agg_id == agg_id)) {
|
||||
res = bond_dev_queue_xmit(bond, skb, slave->dev);
|
||||
break;
|
||||
}
|
||||
}
|
||||
/* we couldn't find any suitable slave after the agg_no, so use the
|
||||
* first suitable found, if found. */
|
||||
if (first_ok_slave)
|
||||
res = bond_dev_queue_xmit(bond, skb, first_ok_slave->dev);
|
||||
|
||||
out:
|
||||
read_unlock(&bond->lock);
|
||||
if (res) {
|
||||
/* no suitable interface, frame not sent */
|
||||
kfree_skb(skb);
|
||||
|
@@ -2515,11 +2473,12 @@ int bond_3ad_lacpdu_recv(const struct sk_buff *skb, struct bonding *bond,
void bond_3ad_update_lacp_rate(struct bonding *bond)
{
struct port *port = NULL;
struct list_head *iter;
struct slave *slave;
int lacp_fast;

lacp_fast = bond->params.lacp_fast;
bond_for_each_slave(bond, slave) {
bond_for_each_slave(bond, slave, iter) {
port = &(SLAVE_AD_INFO(slave).port);
__get_state_machine_lock(port);
if (lacp_fast)

@ -223,13 +223,14 @@ static long long compute_gap(struct slave *slave)
|
|||
static struct slave *tlb_get_least_loaded_slave(struct bonding *bond)
|
||||
{
|
||||
struct slave *slave, *least_loaded;
|
||||
struct list_head *iter;
|
||||
long long max_gap;
|
||||
|
||||
least_loaded = NULL;
|
||||
max_gap = LLONG_MIN;
|
||||
|
||||
/* Find the slave with the largest gap */
|
||||
bond_for_each_slave(bond, slave) {
|
||||
bond_for_each_slave_rcu(bond, slave, iter) {
|
||||
if (SLAVE_IS_OK(slave)) {
|
||||
long long gap = compute_gap(slave);
|
||||
|
||||
|
@ -382,30 +383,64 @@ out:
|
|||
static struct slave *rlb_next_rx_slave(struct bonding *bond)
|
||||
{
|
||||
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
|
||||
struct slave *rx_slave, *slave, *start_at;
|
||||
int i = 0;
|
||||
struct slave *before = NULL, *rx_slave = NULL, *slave;
|
||||
struct list_head *iter;
|
||||
bool found = false;
|
||||
|
||||
if (bond_info->next_rx_slave)
|
||||
start_at = bond_info->next_rx_slave;
|
||||
else
|
||||
start_at = bond_first_slave(bond);
|
||||
|
||||
rx_slave = NULL;
|
||||
|
||||
bond_for_each_slave_from(bond, slave, i, start_at) {
|
||||
if (SLAVE_IS_OK(slave)) {
|
||||
if (!rx_slave) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
if (!SLAVE_IS_OK(slave))
|
||||
continue;
|
||||
if (!found) {
|
||||
if (!before || before->speed < slave->speed)
|
||||
before = slave;
|
||||
} else {
|
||||
if (!rx_slave || rx_slave->speed < slave->speed)
|
||||
rx_slave = slave;
|
||||
} else if (slave->speed > rx_slave->speed) {
|
||||
rx_slave = slave;
|
||||
}
|
||||
}
|
||||
if (slave == bond_info->rx_slave)
|
||||
found = true;
|
||||
}
|
||||
/* we didn't find anything after the current or we have something
|
||||
* better before and up to the current slave
|
||||
*/
|
||||
if (!rx_slave || (before && rx_slave->speed < before->speed))
|
||||
rx_slave = before;
|
||||
|
||||
if (rx_slave) {
|
||||
slave = bond_next_slave(bond, rx_slave);
|
||||
bond_info->next_rx_slave = slave;
|
||||
if (rx_slave)
|
||||
bond_info->rx_slave = rx_slave;
|
||||
|
||||
return rx_slave;
|
||||
}
|
||||
|
||||
/* Caller must hold rcu_read_lock() for read */
|
||||
static struct slave *__rlb_next_rx_slave(struct bonding *bond)
|
||||
{
|
||||
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
|
||||
struct slave *before = NULL, *rx_slave = NULL, *slave;
|
||||
struct list_head *iter;
|
||||
bool found = false;
|
||||
|
||||
bond_for_each_slave_rcu(bond, slave, iter) {
|
||||
if (!SLAVE_IS_OK(slave))
|
||||
continue;
|
||||
if (!found) {
|
||||
if (!before || before->speed < slave->speed)
|
||||
before = slave;
|
||||
} else {
|
||||
if (!rx_slave || rx_slave->speed < slave->speed)
|
||||
rx_slave = slave;
|
||||
}
|
||||
if (slave == bond_info->rx_slave)
|
||||
found = true;
|
||||
}
|
||||
/* we didn't find anything after the current or we have something
|
||||
* better before and up to the current slave
|
||||
*/
|
||||
if (!rx_slave || (before && rx_slave->speed < before->speed))
|
||||
rx_slave = before;
|
||||
|
||||
if (rx_slave)
|
||||
bond_info->rx_slave = rx_slave;
|
||||
|
||||
return rx_slave;
|
||||
}
|
||||
|
@ -626,12 +661,14 @@ static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bon
|
|||
{
|
||||
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
|
||||
struct arp_pkt *arp = arp_pkt(skb);
|
||||
struct slave *assigned_slave;
|
||||
struct slave *assigned_slave, *curr_active_slave;
|
||||
struct rlb_client_info *client_info;
|
||||
u32 hash_index = 0;
|
||||
|
||||
_lock_rx_hashtbl(bond);
|
||||
|
||||
curr_active_slave = rcu_dereference(bond->curr_active_slave);
|
||||
|
||||
hash_index = _simple_hash((u8 *)&arp->ip_dst, sizeof(arp->ip_dst));
|
||||
client_info = &(bond_info->rx_hashtbl[hash_index]);
|
||||
|
||||
|
@ -656,14 +693,14 @@ static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bon
|
|||
* that the new client can be assigned to this entry.
|
||||
*/
|
||||
if (bond->curr_active_slave &&
|
||||
client_info->slave != bond->curr_active_slave) {
|
||||
client_info->slave = bond->curr_active_slave;
|
||||
client_info->slave != curr_active_slave) {
|
||||
client_info->slave = curr_active_slave;
|
||||
rlb_update_client(client_info);
|
||||
}
|
||||
}
|
||||
}
|
||||
/* assign a new slave */
|
||||
assigned_slave = rlb_next_rx_slave(bond);
|
||||
assigned_slave = __rlb_next_rx_slave(bond);
|
||||
|
||||
if (assigned_slave) {
|
||||
if (!(client_info->assigned &&
|
||||
|
@ -726,7 +763,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
|
|||
/* Don't modify or load balance ARPs that do not originate locally
|
||||
* (e.g.,arrive via a bridge).
|
||||
*/
|
||||
if (!bond_slave_has_mac(bond, arp->mac_src))
|
||||
if (!bond_slave_has_mac_rcu(bond, arp->mac_src))
|
||||
return NULL;
|
||||
|
||||
if (arp->op_code == htons(ARPOP_REPLY)) {
|
||||
|
@ -1019,7 +1056,7 @@ static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[])
|
|||
|
||||
/* loop through vlans and send one packet for each */
|
||||
rcu_read_lock();
|
||||
netdev_for_each_upper_dev_rcu(bond->dev, upper, iter) {
|
||||
netdev_for_each_all_upper_dev_rcu(bond->dev, upper, iter) {
|
||||
if (upper->priv_flags & IFF_802_1Q_VLAN)
|
||||
alb_send_lp_vid(slave, mac_addr,
|
||||
vlan_dev_vlan_id(upper));
|
||||
|
@ -1172,10 +1209,11 @@ static void alb_change_hw_addr_on_detach(struct bonding *bond, struct slave *sla
|
|||
*/
|
||||
static int alb_handle_addr_collision_on_attach(struct bonding *bond, struct slave *slave)
|
||||
{
|
||||
struct slave *tmp_slave1, *free_mac_slave = NULL;
|
||||
struct slave *has_bond_addr = bond->curr_active_slave;
|
||||
struct slave *tmp_slave1, *free_mac_slave = NULL;
|
||||
struct list_head *iter;
|
||||
|
||||
if (list_empty(&bond->slave_list)) {
|
||||
if (!bond_has_slaves(bond)) {
|
||||
/* this is the first slave */
|
||||
return 0;
|
||||
}
|
||||
|
@ -1196,7 +1234,7 @@ static int alb_handle_addr_collision_on_attach(struct bonding *bond, struct slav
|
|||
/* The slave's address is equal to the address of the bond.
|
||||
* Search for a spare address in the bond for this slave.
|
||||
*/
|
||||
bond_for_each_slave(bond, tmp_slave1) {
|
||||
bond_for_each_slave(bond, tmp_slave1, iter) {
|
||||
if (!bond_slave_has_mac(bond, tmp_slave1->perm_hwaddr)) {
|
||||
/* no slave has tmp_slave1's perm addr
|
||||
* as its curr addr
|
||||
|
@ -1246,15 +1284,16 @@ static int alb_handle_addr_collision_on_attach(struct bonding *bond, struct slav
|
|||
*/
|
||||
static int alb_set_mac_address(struct bonding *bond, void *addr)
|
||||
{
|
||||
char tmp_addr[ETH_ALEN];
|
||||
struct slave *slave;
|
||||
struct slave *slave, *rollback_slave;
|
||||
struct list_head *iter;
|
||||
struct sockaddr sa;
|
||||
char tmp_addr[ETH_ALEN];
|
||||
int res;
|
||||
|
||||
if (bond->alb_info.rlb_enabled)
|
||||
return 0;
|
||||
|
||||
bond_for_each_slave(bond, slave) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
/* save net_device's current hw address */
|
||||
memcpy(tmp_addr, slave->dev->dev_addr, ETH_ALEN);
|
||||
|
||||
|
@ -1274,10 +1313,12 @@ unwind:
|
|||
sa.sa_family = bond->dev->type;
|
||||
|
||||
/* unwind from head to the slave that failed */
|
||||
bond_for_each_slave_continue_reverse(bond, slave) {
|
||||
memcpy(tmp_addr, slave->dev->dev_addr, ETH_ALEN);
|
||||
dev_set_mac_address(slave->dev, &sa);
|
||||
memcpy(slave->dev->dev_addr, tmp_addr, ETH_ALEN);
|
||||
bond_for_each_slave(bond, rollback_slave, iter) {
|
||||
if (rollback_slave == slave)
|
||||
break;
|
||||
memcpy(tmp_addr, rollback_slave->dev->dev_addr, ETH_ALEN);
|
||||
dev_set_mac_address(rollback_slave->dev, &sa);
|
||||
memcpy(rollback_slave->dev->dev_addr, tmp_addr, ETH_ALEN);
|
||||
}
|
||||
|
||||
return res;
|
||||
|
@ -1337,11 +1378,6 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
|
|||
skb_reset_mac_header(skb);
|
||||
eth_data = eth_hdr(skb);
|
||||
|
||||
/* make sure that the curr_active_slave do not change during tx
|
||||
*/
|
||||
read_lock(&bond->lock);
|
||||
read_lock(&bond->curr_slave_lock);
|
||||
|
||||
switch (ntohs(skb->protocol)) {
|
||||
case ETH_P_IP: {
|
||||
const struct iphdr *iph = ip_hdr(skb);
|
||||
|
@ -1423,12 +1459,12 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
|
|||
|
||||
if (!tx_slave) {
|
||||
/* unbalanced or unassigned, send through primary */
|
||||
tx_slave = bond->curr_active_slave;
|
||||
tx_slave = rcu_dereference(bond->curr_active_slave);
|
||||
bond_info->unbalanced_load += skb->len;
|
||||
}
|
||||
|
||||
if (tx_slave && SLAVE_IS_OK(tx_slave)) {
|
||||
if (tx_slave != bond->curr_active_slave) {
|
||||
if (tx_slave != rcu_dereference(bond->curr_active_slave)) {
|
||||
memcpy(eth_data->h_source,
|
||||
tx_slave->dev->dev_addr,
|
||||
ETH_ALEN);
|
||||
|
@ -1443,8 +1479,6 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
|
|||
}
|
||||
}
|
||||
|
||||
read_unlock(&bond->curr_slave_lock);
|
||||
read_unlock(&bond->lock);
|
||||
if (res) {
|
||||
/* no suitable interface, frame not sent */
|
||||
kfree_skb(skb);
|
||||
|
@ -1458,11 +1492,12 @@ void bond_alb_monitor(struct work_struct *work)
|
|||
struct bonding *bond = container_of(work, struct bonding,
|
||||
alb_work.work);
|
||||
struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
|
||||
read_lock(&bond->lock);
|
||||
|
||||
if (list_empty(&bond->slave_list)) {
|
||||
if (!bond_has_slaves(bond)) {
|
||||
bond_info->tx_rebalance_counter = 0;
|
||||
bond_info->lp_counter = 0;
|
||||
goto re_arm;
|
||||
|
@ -1480,7 +1515,7 @@ void bond_alb_monitor(struct work_struct *work)
|
|||
*/
|
||||
read_lock(&bond->curr_slave_lock);
|
||||
|
||||
bond_for_each_slave(bond, slave)
|
||||
bond_for_each_slave(bond, slave, iter)
|
||||
alb_send_learning_packets(slave, slave->dev->dev_addr);
|
||||
|
||||
read_unlock(&bond->curr_slave_lock);
|
||||
|
@ -1493,7 +1528,7 @@ void bond_alb_monitor(struct work_struct *work)
|
|||
|
||||
read_lock(&bond->curr_slave_lock);
|
||||
|
||||
bond_for_each_slave(bond, slave) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
tlb_clear_slave(bond, slave, 1);
|
||||
if (slave == bond->curr_active_slave) {
|
||||
SLAVE_TLB_INFO(slave).load =
|
||||
|
@ -1599,13 +1634,13 @@ int bond_alb_init_slave(struct bonding *bond, struct slave *slave)
|
|||
*/
|
||||
void bond_alb_deinit_slave(struct bonding *bond, struct slave *slave)
|
||||
{
|
||||
if (!list_empty(&bond->slave_list))
|
||||
if (bond_has_slaves(bond))
|
||||
alb_change_hw_addr_on_detach(bond, slave);
|
||||
|
||||
tlb_clear_slave(bond, slave, 0);
|
||||
|
||||
if (bond->alb_info.rlb_enabled) {
|
||||
bond->alb_info.next_rx_slave = NULL;
|
||||
bond->alb_info.rx_slave = NULL;
|
||||
rlb_clear_slave(bond, slave);
|
||||
}
|
||||
}
|
||||
|
@ -1669,7 +1704,7 @@ void bond_alb_handle_active_change(struct bonding *bond, struct slave *new_slave
|
|||
swap_slave = bond->curr_active_slave;
|
||||
rcu_assign_pointer(bond->curr_active_slave, new_slave);
|
||||
|
||||
if (!new_slave || list_empty(&bond->slave_list))
|
||||
if (!new_slave || !bond_has_slaves(bond))
|
||||
return;
|
||||
|
||||
/* set the new curr_active_slave to the bonds mac address
|
||||
|
@ -1692,6 +1727,23 @@ void bond_alb_handle_active_change(struct bonding *bond, struct slave *new_slave
|
|||
|
||||
ASSERT_RTNL();
|
||||
|
||||
/* in TLB mode, the slave might flip down/up with the old dev_addr,
|
||||
* and thus filter bond->dev_addr's packets, so force bond's mac
|
||||
*/
|
||||
if (bond->params.mode == BOND_MODE_TLB) {
|
||||
struct sockaddr sa;
|
||||
u8 tmp_addr[ETH_ALEN];
|
||||
|
||||
memcpy(tmp_addr, new_slave->dev->dev_addr, ETH_ALEN);
|
||||
|
||||
memcpy(sa.sa_data, bond->dev->dev_addr, bond->dev->addr_len);
|
||||
sa.sa_family = bond->dev->type;
|
||||
/* we don't care if it can't change its mac, best effort */
|
||||
dev_set_mac_address(new_slave->dev, &sa);
|
||||
|
||||
memcpy(new_slave->dev->dev_addr, tmp_addr, ETH_ALEN);
|
||||
}
|
||||
|
||||
/* curr_active_slave must be set before calling alb_swap_mac_addr */
|
||||
if (swap_slave) {
|
||||
/* swap mac address */
|
||||
|
|
|
@ -154,9 +154,7 @@ struct alb_bond_info {
|
|||
u8 rx_ntt; /* flag - need to transmit
|
||||
* to all rx clients
|
||||
*/
|
||||
struct slave *next_rx_slave;/* next slave to be assigned
|
||||
* to a new rx client for
|
||||
*/
|
||||
struct slave *rx_slave;/* last slave to xmit from */
|
||||
u8 primary_is_promisc; /* boolean */
|
||||
u32 rlb_promisc_timeout_counter;/* counts primary
|
||||
* promiscuity time
|
||||
|
|
File diff suppressed because it is too large

131	drivers/net/bonding/bond_netlink.c (new file)
@ -0,0 +1,131 @@
|
|||
/*
|
||||
* drivers/net/bond/bond_netlink.c - Netlink interface for bonding
|
||||
* Copyright (c) 2013 Jiri Pirko <jiri@resnulli.us>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
* the Free Software Foundation; either version 2 of the License, or
|
||||
* (at your option) any later version.
|
||||
*/
|
||||
|
||||
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <linux/errno.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include <linux/etherdevice.h>
|
||||
#include <linux/if_link.h>
|
||||
#include <linux/if_ether.h>
|
||||
#include <net/netlink.h>
|
||||
#include <net/rtnetlink.h>
|
||||
#include "bonding.h"
|
||||
|
||||
static const struct nla_policy bond_policy[IFLA_BOND_MAX + 1] = {
|
||||
[IFLA_BOND_MODE] = { .type = NLA_U8 },
|
||||
[IFLA_BOND_ACTIVE_SLAVE] = { .type = NLA_U32 },
|
||||
};
|
||||
|
||||
static int bond_validate(struct nlattr *tb[], struct nlattr *data[])
|
||||
{
|
||||
if (tb[IFLA_ADDRESS]) {
|
||||
if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN)
|
||||
return -EINVAL;
|
||||
if (!is_valid_ether_addr(nla_data(tb[IFLA_ADDRESS])))
|
||||
return -EADDRNOTAVAIL;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int bond_changelink(struct net_device *bond_dev,
|
||||
struct nlattr *tb[], struct nlattr *data[])
|
||||
{
|
||||
struct bonding *bond = netdev_priv(bond_dev);
|
||||
int err;
|
||||
|
||||
if (data && data[IFLA_BOND_MODE]) {
|
||||
int mode = nla_get_u8(data[IFLA_BOND_MODE]);
|
||||
|
||||
err = bond_option_mode_set(bond, mode);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
if (data && data[IFLA_BOND_ACTIVE_SLAVE]) {
|
||||
int ifindex = nla_get_u32(data[IFLA_BOND_ACTIVE_SLAVE]);
|
||||
struct net_device *slave_dev;
|
||||
|
||||
if (ifindex == 0) {
|
||||
slave_dev = NULL;
|
||||
} else {
|
||||
slave_dev = __dev_get_by_index(dev_net(bond_dev),
|
||||
ifindex);
|
||||
if (!slave_dev)
|
||||
return -ENODEV;
|
||||
}
|
||||
err = bond_option_active_slave_set(bond, slave_dev);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int bond_newlink(struct net *src_net, struct net_device *bond_dev,
|
||||
struct nlattr *tb[], struct nlattr *data[])
|
||||
{
|
||||
int err;
|
||||
|
||||
err = bond_changelink(bond_dev, tb, data);
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
return register_netdevice(bond_dev);
|
||||
}
|
||||
|
||||
static size_t bond_get_size(const struct net_device *bond_dev)
|
||||
{
|
||||
return nla_total_size(sizeof(u8)) + /* IFLA_BOND_MODE */
|
||||
nla_total_size(sizeof(u32)); /* IFLA_BOND_ACTIVE_SLAVE */
|
||||
}
|
||||
|
||||
static int bond_fill_info(struct sk_buff *skb,
|
||||
const struct net_device *bond_dev)
|
||||
{
|
||||
struct bonding *bond = netdev_priv(bond_dev);
|
||||
struct net_device *slave_dev = bond_option_active_slave_get(bond);
|
||||
|
||||
if (nla_put_u8(skb, IFLA_BOND_MODE, bond->params.mode) ||
|
||||
(slave_dev &&
|
||||
nla_put_u32(skb, IFLA_BOND_ACTIVE_SLAVE, slave_dev->ifindex)))
|
||||
goto nla_put_failure;
|
||||
return 0;
|
||||
|
||||
nla_put_failure:
|
||||
return -EMSGSIZE;
|
||||
}
|
||||
|
||||
struct rtnl_link_ops bond_link_ops __read_mostly = {
|
||||
.kind = "bond",
|
||||
.priv_size = sizeof(struct bonding),
|
||||
.setup = bond_setup,
|
||||
.maxtype = IFLA_BOND_MAX,
|
||||
.policy = bond_policy,
|
||||
.validate = bond_validate,
|
||||
.newlink = bond_newlink,
|
||||
.changelink = bond_changelink,
|
||||
.get_size = bond_get_size,
|
||||
.fill_info = bond_fill_info,
|
||||
.get_num_tx_queues = bond_get_num_tx_queues,
|
||||
.get_num_rx_queues = bond_get_num_tx_queues, /* Use the same number
|
||||
as for TX queues */
|
||||
};
|
||||
|
||||
int __init bond_netlink_init(void)
|
||||
{
|
||||
return rtnl_link_register(&bond_link_ops);
|
||||
}
|
||||
|
||||
void bond_netlink_fini(void)
|
||||
{
|
||||
rtnl_link_unregister(&bond_link_ops);
|
||||
}
|
||||
|
||||
MODULE_ALIAS_RTNL_LINK("bond");
|
142	drivers/net/bonding/bond_options.c (new file)
@ -0,0 +1,142 @@
|
|||
/*
|
||||
* drivers/net/bond/bond_options.c - bonding options
|
||||
* Copyright (c) 2013 Jiri Pirko <jiri@resnulli.us>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
* the Free Software Foundation; either version 2 of the License, or
|
||||
* (at your option) any later version.
|
||||
*/
|
||||
|
||||
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
|
||||
|
||||
#include <linux/errno.h>
|
||||
#include <linux/if.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include <linux/rwlock.h>
|
||||
#include <linux/rcupdate.h>
|
||||
#include "bonding.h"
|
||||
|
||||
static bool bond_mode_is_valid(int mode)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; bond_mode_tbl[i].modename; i++);
|
||||
|
||||
return mode >= 0 && mode < i;
|
||||
}
|
||||
|
||||
int bond_option_mode_set(struct bonding *bond, int mode)
|
||||
{
|
||||
if (!bond_mode_is_valid(mode)) {
|
||||
pr_err("invalid mode value %d.\n", mode);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (bond->dev->flags & IFF_UP) {
|
||||
pr_err("%s: unable to update mode because interface is up.\n",
|
||||
bond->dev->name);
|
||||
return -EPERM;
|
||||
}
|
||||
|
||||
if (bond_has_slaves(bond)) {
|
||||
pr_err("%s: unable to update mode because bond has slaves.\n",
|
||||
bond->dev->name);
|
||||
return -EPERM;
|
||||
}
|
||||
|
||||
if (BOND_MODE_IS_LB(mode) && bond->params.arp_interval) {
|
||||
pr_err("%s: %s mode is incompatible with arp monitoring.\n",
|
||||
bond->dev->name, bond_mode_tbl[mode].modename);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* don't cache arp_validate between modes */
|
||||
bond->params.arp_validate = BOND_ARP_VALIDATE_NONE;
|
||||
bond->params.mode = mode;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct net_device *__bond_option_active_slave_get(struct bonding *bond,
|
||||
struct slave *slave)
|
||||
{
|
||||
return USES_PRIMARY(bond->params.mode) && slave ? slave->dev : NULL;
|
||||
}
|
||||
|
||||
struct net_device *bond_option_active_slave_get_rcu(struct bonding *bond)
|
||||
{
|
||||
struct slave *slave = rcu_dereference(bond->curr_active_slave);
|
||||
|
||||
return __bond_option_active_slave_get(bond, slave);
|
||||
}
|
||||
|
||||
struct net_device *bond_option_active_slave_get(struct bonding *bond)
|
||||
{
|
||||
return __bond_option_active_slave_get(bond, bond->curr_active_slave);
|
||||
}
|
||||
|
||||
int bond_option_active_slave_set(struct bonding *bond,
|
||||
struct net_device *slave_dev)
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
if (slave_dev) {
|
||||
if (!netif_is_bond_slave(slave_dev)) {
|
||||
pr_err("Device %s is not bonding slave.\n",
|
||||
slave_dev->name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (bond->dev != netdev_master_upper_dev_get(slave_dev)) {
|
||||
pr_err("%s: Device %s is not our slave.\n",
|
||||
bond->dev->name, slave_dev->name);
|
||||
return -EINVAL;
|
||||
}
|
||||
}
|
||||
|
||||
if (!USES_PRIMARY(bond->params.mode)) {
|
||||
pr_err("%s: Unable to change active slave; %s is in mode %d\n",
|
||||
bond->dev->name, bond->dev->name, bond->params.mode);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
block_netpoll_tx();
|
||||
read_lock(&bond->lock);
|
||||
write_lock_bh(&bond->curr_slave_lock);
|
||||
|
||||
/* check to see if we are clearing active */
|
||||
if (!slave_dev) {
|
||||
pr_info("%s: Clearing current active slave.\n",
|
||||
bond->dev->name);
|
||||
rcu_assign_pointer(bond->curr_active_slave, NULL);
|
||||
bond_select_active_slave(bond);
|
||||
} else {
|
||||
struct slave *old_active = bond->curr_active_slave;
|
||||
struct slave *new_active = bond_slave_get_rtnl(slave_dev);
|
||||
|
||||
BUG_ON(!new_active);
|
||||
|
||||
if (new_active == old_active) {
|
||||
/* do nothing */
|
||||
pr_info("%s: %s is already the current active slave.\n",
|
||||
bond->dev->name, new_active->dev->name);
|
||||
} else {
|
||||
if (old_active && (new_active->link == BOND_LINK_UP) &&
|
||||
IS_UP(new_active->dev)) {
|
||||
pr_info("%s: Setting %s as active slave.\n",
|
||||
bond->dev->name, new_active->dev->name);
|
||||
bond_change_active_slave(bond, new_active);
|
||||
} else {
|
||||
pr_err("%s: Could not set %s as active slave; either %s is down or the link is down.\n",
|
||||
bond->dev->name, new_active->dev->name,
|
||||
new_active->dev->name);
|
||||
ret = -EINVAL;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
write_unlock_bh(&bond->curr_slave_lock);
|
||||
read_unlock(&bond->lock);
|
||||
unblock_netpoll_tx();
|
||||
return ret;
|
||||
}
|
|
@ -10,8 +10,9 @@ static void *bond_info_seq_start(struct seq_file *seq, loff_t *pos)
|
|||
__acquires(&bond->lock)
|
||||
{
|
||||
struct bonding *bond = seq->private;
|
||||
loff_t off = 0;
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
loff_t off = 0;
|
||||
|
||||
/* make sure the bond won't be taken away */
|
||||
rcu_read_lock();
|
||||
|
@ -20,7 +21,7 @@ static void *bond_info_seq_start(struct seq_file *seq, loff_t *pos)
|
|||
if (*pos == 0)
|
||||
return SEQ_START_TOKEN;
|
||||
|
||||
bond_for_each_slave(bond, slave)
|
||||
bond_for_each_slave(bond, slave, iter)
|
||||
if (++off == *pos)
|
||||
return slave;
|
||||
|
||||
|
@ -30,17 +31,25 @@ static void *bond_info_seq_start(struct seq_file *seq, loff_t *pos)
|
|||
static void *bond_info_seq_next(struct seq_file *seq, void *v, loff_t *pos)
|
||||
{
|
||||
struct bonding *bond = seq->private;
|
||||
struct slave *slave = v;
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
bool found = false;
|
||||
|
||||
++*pos;
|
||||
if (v == SEQ_START_TOKEN)
|
||||
return bond_first_slave(bond);
|
||||
|
||||
if (bond_is_last_slave(bond, slave))
|
||||
if (bond_is_last_slave(bond, v))
|
||||
return NULL;
|
||||
slave = bond_next_slave(bond, slave);
|
||||
|
||||
return slave;
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
if (found)
|
||||
return slave;
|
||||
if (slave == v)
|
||||
found = true;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void bond_info_seq_stop(struct seq_file *seq, void *v)
|
||||
|
|
|
@ -40,6 +40,7 @@
|
|||
#include <net/net_namespace.h>
|
||||
#include <net/netns/generic.h>
|
||||
#include <linux/nsproxy.h>
|
||||
#include <linux/reciprocal_div.h>
|
||||
|
||||
#include "bonding.h"
|
||||
|
||||
|
@ -159,41 +160,6 @@ static const struct class_attribute class_attr_bonding_masters = {
|
|||
.store = bonding_store_bonds,
|
||||
};
|
||||
|
||||
int bond_create_slave_symlinks(struct net_device *master,
|
||||
struct net_device *slave)
|
||||
{
|
||||
char linkname[IFNAMSIZ+7];
|
||||
int ret = 0;
|
||||
|
||||
/* first, create a link from the slave back to the master */
|
||||
ret = sysfs_create_link(&(slave->dev.kobj), &(master->dev.kobj),
|
||||
"master");
|
||||
if (ret)
|
||||
return ret;
|
||||
/* next, create a link from the master to the slave */
|
||||
sprintf(linkname, "slave_%s", slave->name);
|
||||
ret = sysfs_create_link(&(master->dev.kobj), &(slave->dev.kobj),
|
||||
linkname);
|
||||
|
||||
/* free the master link created earlier in case of error */
|
||||
if (ret)
|
||||
sysfs_remove_link(&(slave->dev.kobj), "master");
|
||||
|
||||
return ret;
|
||||
|
||||
}
|
||||
|
||||
void bond_destroy_slave_symlinks(struct net_device *master,
|
||||
struct net_device *slave)
|
||||
{
|
||||
char linkname[IFNAMSIZ+7];
|
||||
|
||||
sysfs_remove_link(&(slave->dev.kobj), "master");
|
||||
sprintf(linkname, "slave_%s", slave->name);
|
||||
sysfs_remove_link(&(master->dev.kobj), linkname);
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* Show the slaves in the current bond.
|
||||
*/
|
||||
|
@ -201,11 +167,14 @@ static ssize_t bonding_show_slaves(struct device *d,
|
|||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct bonding *bond = to_bond(d);
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
int res = 0;
|
||||
|
||||
read_lock(&bond->lock);
|
||||
bond_for_each_slave(bond, slave) {
|
||||
if (!rtnl_trylock())
|
||||
return restart_syscall();
|
||||
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
if (res > (PAGE_SIZE - IFNAMSIZ)) {
|
||||
/* not enough space for another interface name */
|
||||
if ((PAGE_SIZE - res) > 10)
|
||||
|
@ -215,7 +184,9 @@ static ssize_t bonding_show_slaves(struct device *d,
|
|||
}
|
||||
res += sprintf(buf + res, "%s ", slave->dev->name);
|
||||
}
|
||||
read_unlock(&bond->lock);
|
||||
|
||||
rtnl_unlock();
|
||||
|
||||
if (res)
|
||||
buf[res-1] = '\n'; /* eat the leftover space */
|
||||
|
||||
|
@ -304,50 +275,26 @@ static ssize_t bonding_store_mode(struct device *d,
|
|||
struct device_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
int new_value, ret = count;
|
||||
int new_value, ret;
|
||||
struct bonding *bond = to_bond(d);
|
||||
|
||||
if (!rtnl_trylock())
|
||||
return restart_syscall();
|
||||
|
||||
if (bond->dev->flags & IFF_UP) {
|
||||
pr_err("unable to update mode of %s because interface is up.\n",
|
||||
bond->dev->name);
|
||||
ret = -EPERM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!list_empty(&bond->slave_list)) {
|
||||
pr_err("unable to update mode of %s because it has slaves.\n",
|
||||
bond->dev->name);
|
||||
ret = -EPERM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
new_value = bond_parse_parm(buf, bond_mode_tbl);
|
||||
if (new_value < 0) {
|
||||
pr_err("%s: Ignoring invalid mode value %.*s.\n",
|
||||
bond->dev->name, (int)strlen(buf) - 1, buf);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
return -EINVAL;
|
||||
}
|
||||
if ((new_value == BOND_MODE_ALB ||
|
||||
new_value == BOND_MODE_TLB) &&
|
||||
bond->params.arp_interval) {
|
||||
pr_err("%s: %s mode is incompatible with arp monitoring.\n",
|
||||
bond->dev->name, bond_mode_tbl[new_value].modename);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
if (!rtnl_trylock())
|
||||
return restart_syscall();
|
||||
|
||||
ret = bond_option_mode_set(bond, new_value);
|
||||
if (!ret) {
|
||||
pr_info("%s: setting mode to %s (%d).\n",
|
||||
bond->dev->name, bond_mode_tbl[new_value].modename,
|
||||
new_value);
|
||||
ret = count;
|
||||
}
|
||||
|
||||
/* don't cache arp_validate between modes */
|
||||
bond->params.arp_validate = BOND_ARP_VALIDATE_NONE;
|
||||
bond->params.mode = new_value;
|
||||
bond_set_mode_ops(bond, bond->params.mode);
|
||||
pr_info("%s: setting mode to %s (%d).\n",
|
||||
bond->dev->name, bond_mode_tbl[new_value].modename,
|
||||
new_value);
|
||||
out:
|
||||
rtnl_unlock();
|
||||
return ret;
|
||||
}
|
||||
|
@ -383,7 +330,6 @@ static ssize_t bonding_store_xmit_hash(struct device *d,
|
|||
ret = -EINVAL;
|
||||
} else {
|
||||
bond->params.xmit_policy = new_value;
|
||||
bond_set_mode_ops(bond, bond->params.mode);
|
||||
pr_info("%s: setting xmit hash policy to %s (%d).\n",
|
||||
bond->dev->name,
|
||||
xmit_hashtype_tbl[new_value].modename, new_value);
|
||||
|
@ -513,7 +459,7 @@ static ssize_t bonding_store_fail_over_mac(struct device *d,
|
|||
if (!rtnl_trylock())
|
||||
return restart_syscall();
|
||||
|
||||
if (!list_empty(&bond->slave_list)) {
|
||||
if (bond_has_slaves(bond)) {
|
||||
pr_err("%s: Can't alter fail_over_mac with slaves in bond.\n",
|
||||
bond->dev->name);
|
||||
ret = -EPERM;
|
||||
|
@ -647,11 +593,15 @@ static ssize_t bonding_store_arp_targets(struct device *d,
|
|||
const char *buf, size_t count)
|
||||
{
|
||||
struct bonding *bond = to_bond(d);
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
__be32 newtarget, *targets;
|
||||
unsigned long *targets_rx;
|
||||
int ind, i, j, ret = -EINVAL;
|
||||
|
||||
if (!rtnl_trylock())
|
||||
return restart_syscall();
|
||||
|
||||
targets = bond->params.arp_targets;
|
||||
newtarget = in_aton(buf + 1);
|
||||
/* look for adds */
|
||||
|
@ -679,7 +629,7 @@ static ssize_t bonding_store_arp_targets(struct device *d,
|
|||
&newtarget);
|
||||
/* not to race with bond_arp_rcv */
|
||||
write_lock_bh(&bond->lock);
|
||||
bond_for_each_slave(bond, slave)
|
||||
bond_for_each_slave(bond, slave, iter)
|
||||
slave->target_last_arp_rx[ind] = jiffies;
|
||||
targets[ind] = newtarget;
|
||||
write_unlock_bh(&bond->lock);
|
||||
|
@ -705,7 +655,7 @@ static ssize_t bonding_store_arp_targets(struct device *d,
|
|||
&newtarget);
|
||||
|
||||
write_lock_bh(&bond->lock);
|
||||
bond_for_each_slave(bond, slave) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
targets_rx = slave->target_last_arp_rx;
|
||||
j = ind;
|
||||
for (; (j < BOND_MAX_ARP_TARGETS-1) && targets[j+1]; j++)
|
||||
|
@ -725,6 +675,7 @@ static ssize_t bonding_store_arp_targets(struct device *d,
|
|||
|
||||
ret = count;
|
||||
out:
|
||||
rtnl_unlock();
|
||||
return ret;
|
||||
}
|
||||
static DEVICE_ATTR(arp_ip_target, S_IRUGO | S_IWUSR , bonding_show_arp_targets, bonding_store_arp_targets);
|
||||
|
@ -1102,6 +1053,7 @@ static ssize_t bonding_store_primary(struct device *d,
|
|||
const char *buf, size_t count)
|
||||
{
|
||||
struct bonding *bond = to_bond(d);
|
||||
struct list_head *iter;
|
||||
char ifname[IFNAMSIZ];
|
||||
struct slave *slave;
|
||||
|
||||
|
@ -1129,7 +1081,7 @@ static ssize_t bonding_store_primary(struct device *d,
|
|||
goto out;
|
||||
}
|
||||
|
||||
bond_for_each_slave(bond, slave) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
if (strncmp(slave->dev->name, ifname, IFNAMSIZ) == 0) {
|
||||
pr_info("%s: Setting %s as primary slave.\n",
|
||||
bond->dev->name, slave->dev->name);
|
||||
|
@ -1259,13 +1211,13 @@ static ssize_t bonding_show_active_slave(struct device *d,
|
|||
char *buf)
|
||||
{
|
||||
struct bonding *bond = to_bond(d);
|
||||
struct slave *curr;
|
||||
struct net_device *slave_dev;
|
||||
int count = 0;
|
||||
|
||||
rcu_read_lock();
|
||||
curr = rcu_dereference(bond->curr_active_slave);
|
||||
if (USES_PRIMARY(bond->params.mode) && curr)
|
||||
count = sprintf(buf, "%s\n", curr->dev->name);
|
||||
slave_dev = bond_option_active_slave_get_rcu(bond);
|
||||
if (slave_dev)
|
||||
count = sprintf(buf, "%s\n", slave_dev->name);
|
||||
rcu_read_unlock();
|
||||
|
||||
return count;
|
||||
|
@ -1275,80 +1227,33 @@ static ssize_t bonding_store_active_slave(struct device *d,
|
|||
struct device_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct slave *slave, *old_active, *new_active;
|
||||
int ret;
|
||||
struct bonding *bond = to_bond(d);
|
||||
char ifname[IFNAMSIZ];
|
||||
struct net_device *dev;
|
||||
|
||||
if (!rtnl_trylock())
|
||||
return restart_syscall();
|
||||
|
||||
old_active = new_active = NULL;
|
||||
block_netpoll_tx();
|
||||
read_lock(&bond->lock);
|
||||
write_lock_bh(&bond->curr_slave_lock);
|
||||
|
||||
if (!USES_PRIMARY(bond->params.mode)) {
|
||||
pr_info("%s: Unable to change active slave; %s is in mode %d\n",
|
||||
bond->dev->name, bond->dev->name, bond->params.mode);
|
||||
goto out;
|
||||
}
|
||||
|
||||
sscanf(buf, "%15s", ifname); /* IFNAMSIZ */
|
||||
|
||||
/* check to see if we are clearing active */
|
||||
if (!strlen(ifname) || buf[0] == '\n') {
|
||||
pr_info("%s: Clearing current active slave.\n",
|
||||
bond->dev->name);
|
||||
rcu_assign_pointer(bond->curr_active_slave, NULL);
|
||||
bond_select_active_slave(bond);
|
||||
goto out;
|
||||
}
|
||||
|
||||
bond_for_each_slave(bond, slave) {
|
||||
if (strncmp(slave->dev->name, ifname, IFNAMSIZ) == 0) {
|
||||
old_active = bond->curr_active_slave;
|
||||
new_active = slave;
|
||||
if (new_active == old_active) {
|
||||
/* do nothing */
|
||||
pr_info("%s: %s is already the current"
|
||||
" active slave.\n",
|
||||
bond->dev->name,
|
||||
slave->dev->name);
|
||||
goto out;
|
||||
} else {
|
||||
if ((new_active) &&
|
||||
(old_active) &&
|
||||
(new_active->link == BOND_LINK_UP) &&
|
||||
IS_UP(new_active->dev)) {
|
||||
pr_info("%s: Setting %s as active"
|
||||
" slave.\n",
|
||||
bond->dev->name,
|
||||
slave->dev->name);
|
||||
bond_change_active_slave(bond,
|
||||
new_active);
|
||||
} else {
|
||||
pr_info("%s: Could not set %s as"
|
||||
" active slave; either %s is"
|
||||
" down or the link is down.\n",
|
||||
bond->dev->name,
|
||||
slave->dev->name,
|
||||
slave->dev->name);
|
||||
}
|
||||
goto out;
|
||||
}
|
||||
dev = NULL;
|
||||
} else {
|
||||
dev = __dev_get_by_name(dev_net(bond->dev), ifname);
|
||||
if (!dev) {
|
||||
ret = -ENODEV;
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
pr_info("%s: Unable to set %.*s as active slave.\n",
|
||||
bond->dev->name, (int)strlen(buf) - 1, buf);
|
||||
out:
|
||||
write_unlock_bh(&bond->curr_slave_lock);
|
||||
read_unlock(&bond->lock);
|
||||
unblock_netpoll_tx();
|
||||
ret = bond_option_active_slave_set(bond, dev);
|
||||
if (!ret)
|
||||
ret = count;
|
||||
|
||||
out:
|
||||
rtnl_unlock();
|
||||
|
||||
return count;
|
||||
return ret;
|
||||
|
||||
}
|
||||
static DEVICE_ATTR(active_slave, S_IRUGO | S_IWUSR,
|
||||
|
@ -1484,14 +1389,14 @@ static ssize_t bonding_show_queue_id(struct device *d,
|
|||
char *buf)
|
||||
{
|
||||
struct bonding *bond = to_bond(d);
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
int res = 0;
|
||||
|
||||
if (!rtnl_trylock())
|
||||
return restart_syscall();
|
||||
|
||||
read_lock(&bond->lock);
|
||||
bond_for_each_slave(bond, slave) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
if (res > (PAGE_SIZE - IFNAMSIZ - 6)) {
|
||||
/* not enough space for another interface_name:queue_id pair */
|
||||
if ((PAGE_SIZE - res) > 10)
|
||||
|
@ -1502,9 +1407,9 @@ static ssize_t bonding_show_queue_id(struct device *d,
|
|||
res += sprintf(buf + res, "%s:%d ",
|
||||
slave->dev->name, slave->queue_id);
|
||||
}
|
||||
read_unlock(&bond->lock);
|
||||
if (res)
|
||||
buf[res-1] = '\n'; /* eat the leftover space */
|
||||
|
||||
rtnl_unlock();
|
||||
|
||||
return res;
|
||||
|
@ -1520,6 +1425,7 @@ static ssize_t bonding_store_queue_id(struct device *d,
|
|||
{
|
||||
struct slave *slave, *update_slave;
|
||||
struct bonding *bond = to_bond(d);
|
||||
struct list_head *iter;
|
||||
u16 qid;
|
||||
int ret = count;
|
||||
char *delim;
|
||||
|
@ -1552,11 +1458,9 @@ static ssize_t bonding_store_queue_id(struct device *d,
|
|||
if (!sdev)
|
||||
goto err_no_cmd;
|
||||
|
||||
read_lock(&bond->lock);
|
||||
|
||||
/* Search for thes slave and check for duplicate qids */
|
||||
update_slave = NULL;
|
||||
bond_for_each_slave(bond, slave) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
if (sdev == slave->dev)
|
||||
/*
|
||||
* We don't need to check the matching
|
||||
|
@ -1564,23 +1468,20 @@ static ssize_t bonding_store_queue_id(struct device *d,
|
|||
*/
|
||||
update_slave = slave;
|
||||
else if (qid && qid == slave->queue_id) {
|
||||
goto err_no_cmd_unlock;
|
||||
goto err_no_cmd;
|
||||
}
|
||||
}
|
||||
|
||||
if (!update_slave)
|
||||
goto err_no_cmd_unlock;
|
||||
goto err_no_cmd;
|
||||
|
||||
/* Actually set the qids for the slave */
|
||||
update_slave->queue_id = qid;
|
||||
|
||||
read_unlock(&bond->lock);
|
||||
out:
|
||||
rtnl_unlock();
|
||||
return ret;
|
||||
|
||||
err_no_cmd_unlock:
|
||||
read_unlock(&bond->lock);
|
||||
err_no_cmd:
|
||||
pr_info("invalid input for queue_id set for %s.\n",
|
||||
bond->dev->name);
|
||||
|
@ -1610,8 +1511,12 @@ static ssize_t bonding_store_slaves_active(struct device *d,
|
|||
{
|
||||
struct bonding *bond = to_bond(d);
|
||||
int new_value, ret = count;
|
||||
struct list_head *iter;
|
||||
struct slave *slave;
|
||||
|
||||
if (!rtnl_trylock())
|
||||
return restart_syscall();
|
||||
|
||||
if (sscanf(buf, "%d", &new_value) != 1) {
|
||||
pr_err("%s: no all_slaves_active value specified.\n",
|
||||
bond->dev->name);
|
||||
|
@ -1631,8 +1536,7 @@ static ssize_t bonding_store_slaves_active(struct device *d,
|
|||
goto out;
|
||||
}
|
||||
|
||||
read_lock(&bond->lock);
|
||||
bond_for_each_slave(bond, slave) {
|
||||
bond_for_each_slave(bond, slave, iter) {
|
||||
if (!bond_is_active_slave(slave)) {
|
||||
if (new_value)
|
||||
slave->inactive = 0;
|
||||
|
@ -1640,8 +1544,8 @@ static ssize_t bonding_store_slaves_active(struct device *d,
|
|||
slave->inactive = 1;
|
||||
}
|
||||
}
|
||||
read_unlock(&bond->lock);
|
||||
out:
|
||||
rtnl_unlock();
|
||||
return ret;
|
||||
}
|
||||
static DEVICE_ATTR(all_slaves_active, S_IRUGO | S_IWUSR,
|
||||
|
@ -1728,6 +1632,53 @@ out:
|
|||
static DEVICE_ATTR(lp_interval, S_IRUGO | S_IWUSR,
|
||||
bonding_show_lp_interval, bonding_store_lp_interval);
|
||||
|
||||
static ssize_t bonding_show_packets_per_slave(struct device *d,
|
||||
struct device_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct bonding *bond = to_bond(d);
|
||||
int packets_per_slave = bond->params.packets_per_slave;
|
||||
|
||||
if (packets_per_slave > 1)
|
||||
packets_per_slave = reciprocal_value(packets_per_slave);
|
||||
|
||||
return sprintf(buf, "%d\n", packets_per_slave);
|
||||
}
|
||||
|
||||
static ssize_t bonding_store_packets_per_slave(struct device *d,
|
||||
struct device_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct bonding *bond = to_bond(d);
|
||||
int new_value, ret = count;
|
||||
|
||||
if (sscanf(buf, "%d", &new_value) != 1) {
|
||||
pr_err("%s: no packets_per_slave value specified.\n",
|
||||
bond->dev->name);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
if (new_value < 0 || new_value > USHRT_MAX) {
|
||||
pr_err("%s: packets_per_slave must be between 0 and %u\n",
|
||||
bond->dev->name, USHRT_MAX);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
if (bond->params.mode != BOND_MODE_ROUNDROBIN)
|
||||
pr_warn("%s: Warning: packets_per_slave has effect only in balance-rr mode\n",
|
||||
bond->dev->name);
|
||||
if (new_value > 1)
|
||||
bond->params.packets_per_slave = reciprocal_value(new_value);
|
||||
else
|
||||
bond->params.packets_per_slave = new_value;
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
static DEVICE_ATTR(packets_per_slave, S_IRUGO | S_IWUSR,
|
||||
bonding_show_packets_per_slave,
|
||||
bonding_store_packets_per_slave);
|
||||
|
||||
static struct attribute *per_bond_attrs[] = {
|
||||
&dev_attr_slaves.attr,
|
||||
&dev_attr_mode.attr,
|
||||
|
@ -1759,6 +1710,7 @@ static struct attribute *per_bond_attrs[] = {
|
|||
&dev_attr_resend_igmp.attr,
|
||||
&dev_attr_min_links.attr,
|
||||
&dev_attr_lp_interval.attr,
|
||||
&dev_attr_packets_per_slave.attr,
|
||||
NULL,
|
||||
};
|
||||
|
||||
|
|
|
@ -58,6 +58,11 @@
|
|||
#define TX_QUEUE_OVERRIDE(mode) \
|
||||
(((mode) == BOND_MODE_ACTIVEBACKUP) || \
|
||||
((mode) == BOND_MODE_ROUNDROBIN))
|
||||
|
||||
#define BOND_MODE_IS_LB(mode) \
|
||||
(((mode) == BOND_MODE_TLB) || \
|
||||
((mode) == BOND_MODE_ALB))
|
||||
|
||||
/*
|
||||
* Less bad way to call ioctl from within the kernel; this needs to be
|
||||
* done some other way to get the call out of interrupt context.
|
||||
|
@ -72,63 +77,37 @@
|
|||
res; })
|
||||
|
||||
/* slave list primitives */
|
||||
#define bond_to_slave(ptr) list_entry(ptr, struct slave, list)
|
||||
#define bond_slave_list(bond) (&(bond)->dev->adj_list.lower)
|
||||
|
||||
#define bond_has_slaves(bond) !list_empty(bond_slave_list(bond))
|
||||
|
||||
/* IMPORTANT: bond_first/last_slave can return NULL in case of an empty list */
|
||||
#define bond_first_slave(bond) \
|
||||
list_first_entry_or_null(&(bond)->slave_list, struct slave, list)
|
||||
(bond_has_slaves(bond) ? \
|
||||
netdev_adjacent_get_private(bond_slave_list(bond)->next) : \
|
||||
NULL)
|
||||
#define bond_last_slave(bond) \
|
||||
(list_empty(&(bond)->slave_list) ? NULL : \
|
||||
bond_to_slave((bond)->slave_list.prev))
|
||||
(bond_has_slaves(bond) ? \
|
||||
netdev_adjacent_get_private(bond_slave_list(bond)->prev) : \
|
||||
NULL)
|
||||
|
||||
#define bond_is_first_slave(bond, pos) ((pos)->list.prev == &(bond)->slave_list)
|
||||
#define bond_is_last_slave(bond, pos) ((pos)->list.next == &(bond)->slave_list)
|
||||
|
||||
/* Since bond_first/last_slave can return NULL, these can return NULL too */
|
||||
#define bond_next_slave(bond, pos) \
|
||||
(bond_is_last_slave(bond, pos) ? bond_first_slave(bond) : \
|
||||
bond_to_slave((pos)->list.next))
|
||||
|
||||
#define bond_prev_slave(bond, pos) \
|
||||
(bond_is_first_slave(bond, pos) ? bond_last_slave(bond) : \
|
||||
bond_to_slave((pos)->list.prev))
|
||||
|
||||
/**
|
||||
* bond_for_each_slave_from - iterate the slaves list from a starting point
|
||||
* @bond: the bond holding this list.
|
||||
* @pos: current slave.
|
||||
* @cnt: counter for max number of moves
|
||||
* @start: starting point.
|
||||
*
|
||||
* Caller must hold bond->lock
|
||||
*/
|
||||
#define bond_for_each_slave_from(bond, pos, cnt, start) \
|
||||
for (cnt = 0, pos = start; pos && cnt < (bond)->slave_cnt; \
|
||||
cnt++, pos = bond_next_slave(bond, pos))
|
||||
#define bond_is_first_slave(bond, pos) (pos == bond_first_slave(bond))
|
||||
#define bond_is_last_slave(bond, pos) (pos == bond_last_slave(bond))
|
||||
|
||||
/**
* bond_for_each_slave - iterate over all slaves
* @bond: the bond holding this list
* @pos: current slave
* @iter: list_head * iterator
*
* Caller must hold bond->lock
*/
#define bond_for_each_slave(bond, pos) \
list_for_each_entry(pos, &(bond)->slave_list, list)
#define bond_for_each_slave(bond, pos, iter) \
netdev_for_each_lower_private((bond)->dev, pos, iter)

/* Caller must have rcu_read_lock */
#define bond_for_each_slave_rcu(bond, pos) \
list_for_each_entry_rcu(pos, &(bond)->slave_list, list)

/**
* bond_for_each_slave_reverse - iterate in reverse from a given position
* @bond: the bond holding this list
* @pos: slave to continue from
*
* Caller must hold bond->lock
*/
#define bond_for_each_slave_continue_reverse(bond, pos) \
list_for_each_entry_continue_reverse(pos, &(bond)->slave_list, list)
#define bond_for_each_slave_rcu(bond, pos, iter) \
netdev_for_each_lower_private_rcu((bond)->dev, pos, iter)

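The walkers above carry the whole conversion: iteration state moves out of slave->list into an explicit list_head cursor supplied by the caller, backed by the netdev adjacency lists. A minimal sketch of a caller using the new macro follows; the helper name and the assumption that the caller already holds RTNL (or bond->lock) are illustrative, not part of this patch:

/* Hypothetical helper, for illustration only: count slaves whose link
 * is up.  Assumes the caller holds a lock that pins the slave list
 * (RTNL or bond->lock), as the non-RCU walker requires.
 */
static int bond_count_up_slaves(struct bonding *bond)
{
	struct list_head *iter;	/* cursor consumed by the new macro */
	struct slave *slave;
	int up = 0;

	bond_for_each_slave(bond, slave, iter)
		if (slave->link == BOND_LINK_UP)
			up++;

	return up;
}

Under rcu_read_lock() the same walk would use bond_for_each_slave_rcu(bond, slave, iter) instead.
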
#ifdef CONFIG_NET_POLL_CONTROLLER
|
||||
extern atomic_t netpoll_block_tx;
|
||||
|
@ -177,6 +156,7 @@ struct bond_params {
|
|||
int all_slaves_active;
|
||||
int resend_igmp;
|
||||
int lp_interval;
|
||||
int packets_per_slave;
|
||||
};
|
||||
|
||||
struct bond_parm_tbl {
|
||||
|
@ -188,7 +168,6 @@ struct bond_parm_tbl {
|
|||
|
||||
struct slave {
|
||||
struct net_device *dev; /* first - useful for panic debug */
|
||||
struct list_head list;
|
||||
struct bonding *bond; /* our master */
|
||||
int delay;
|
||||
unsigned long jiffies;
|
||||
|
@ -228,7 +207,6 @@ struct slave {
|
|||
*/
|
||||
struct bonding {
|
||||
struct net_device *dev; /* first - useful for panic debug */
|
||||
struct list_head slave_list;
|
||||
struct slave *curr_active_slave;
|
||||
struct slave *current_arp_slave;
|
||||
struct slave *primary_slave;
|
||||
|
@ -245,8 +223,7 @@ struct bonding {
|
|||
char proc_file_name[IFNAMSIZ];
|
||||
#endif /* CONFIG_PROC_FS */
|
||||
struct list_head bond_list;
|
||||
int (*xmit_hash_policy)(struct sk_buff *, int);
|
||||
u16 rr_tx_counter;
|
||||
u32 rr_tx_counter;
|
||||
struct ad_bond_info ad_info;
|
||||
struct alb_bond_info alb_info;
|
||||
struct bond_params params;
|
||||
|
@ -276,13 +253,7 @@ struct bonding {
|
|||
static inline struct slave *bond_get_slave_by_dev(struct bonding *bond,
|
||||
struct net_device *slave_dev)
|
||||
{
|
||||
struct slave *slave = NULL;
|
||||
|
||||
bond_for_each_slave(bond, slave)
|
||||
if (slave->dev == slave_dev)
|
||||
return slave;
|
||||
|
||||
return NULL;
|
||||
return netdev_lower_dev_get_private(bond->dev, slave_dev);
|
||||
}
|
||||
|
||||
static inline struct bonding *bond_get_bond_by_slave(struct slave *slave)
|
||||
|
@ -294,8 +265,7 @@ static inline struct bonding *bond_get_bond_by_slave(struct slave *slave)
|
|||
|
||||
static inline bool bond_is_lb(const struct bonding *bond)
|
||||
{
|
||||
return (bond->params.mode == BOND_MODE_TLB ||
|
||||
bond->params.mode == BOND_MODE_ALB);
|
||||
return BOND_MODE_IS_LB(bond->params.mode);
|
||||
}
|
||||
|
||||
static inline void bond_set_active_slave(struct slave *slave)
|
||||
|
@ -432,21 +402,18 @@ static inline bool slave_can_tx(struct slave *slave)
|
|||
struct bond_net;
|
||||
|
||||
int bond_arp_rcv(const struct sk_buff *skb, struct bonding *bond, struct slave *slave);
|
||||
struct vlan_entry *bond_next_vlan(struct bonding *bond, struct vlan_entry *curr);
|
||||
int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb, struct net_device *slave_dev);
|
||||
void bond_xmit_slave_id(struct bonding *bond, struct sk_buff *skb, int slave_id);
|
||||
int bond_create(struct net *net, const char *name);
|
||||
int bond_create_sysfs(struct bond_net *net);
|
||||
void bond_destroy_sysfs(struct bond_net *net);
|
||||
void bond_prepare_sysfs_group(struct bonding *bond);
|
||||
int bond_create_slave_symlinks(struct net_device *master, struct net_device *slave);
|
||||
void bond_destroy_slave_symlinks(struct net_device *master, struct net_device *slave);
|
||||
int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev);
|
||||
int bond_release(struct net_device *bond_dev, struct net_device *slave_dev);
|
||||
void bond_mii_monitor(struct work_struct *);
|
||||
void bond_loadbalance_arp_mon(struct work_struct *);
|
||||
void bond_activebackup_arp_mon(struct work_struct *);
|
||||
void bond_set_mode_ops(struct bonding *bond, int mode);
|
||||
int bond_xmit_hash(struct bonding *bond, struct sk_buff *skb, int count);
|
||||
int bond_parse_parm(const char *mode_arg, const struct bond_parm_tbl *tbl);
|
||||
void bond_select_active_slave(struct bonding *bond);
|
||||
void bond_change_active_slave(struct bonding *bond, struct slave *new_active);
|
||||
|
@ -456,6 +423,14 @@ void bond_debug_register(struct bonding *bond);
|
|||
void bond_debug_unregister(struct bonding *bond);
|
||||
void bond_debug_reregister(struct bonding *bond);
|
||||
const char *bond_mode_name(int mode);
|
||||
void bond_setup(struct net_device *bond_dev);
|
||||
unsigned int bond_get_num_tx_queues(void);
|
||||
int bond_netlink_init(void);
|
||||
void bond_netlink_fini(void);
|
||||
int bond_option_mode_set(struct bonding *bond, int mode);
|
||||
int bond_option_active_slave_set(struct bonding *bond, struct net_device *slave_dev);
|
||||
struct net_device *bond_option_active_slave_get_rcu(struct bonding *bond);
|
||||
struct net_device *bond_option_active_slave_get(struct bonding *bond);
|
||||
|
||||
struct bond_net {
|
||||
struct net * net; /* Associated network namespace */
|
||||
|
@@ -492,9 +467,24 @@ static inline void bond_destroy_proc_dir(struct bond_net *bn)
static inline struct slave *bond_slave_has_mac(struct bonding *bond,
const u8 *mac)
{
struct list_head *iter;
struct slave *tmp;

bond_for_each_slave(bond, tmp)
bond_for_each_slave(bond, tmp, iter)
if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr))
return tmp;

return NULL;
}

/* Caller must hold rcu_read_lock() for read */
static inline struct slave *bond_slave_has_mac_rcu(struct bonding *bond,
const u8 *mac)
{
struct list_head *iter;
struct slave *tmp;

bond_for_each_slave_rcu(bond, tmp, iter)
if (ether_addr_equal_64bits(mac, tmp->dev->dev_addr))
return tmp;

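A short usage sketch for the new RCU variant; the wrapper function below is hypothetical and only illustrates the locking contract stated in the comment above it:

/* Hypothetical wrapper, for illustration only: check whether a source
 * MAC belongs to one of this bond's slaves from a context that holds
 * no other lock, by bracketing the new helper with rcu_read_lock().
 */
static bool bond_owns_mac(struct bonding *bond, const u8 *mac)
{
	bool ours;

	rcu_read_lock();
	ours = bond_slave_has_mac_rcu(bond, mac) != NULL;
	rcu_read_unlock();

	return ours;
}

rlb_arp_xmit() in the bond_alb.c hunk earlier in this diff switches to the _rcu variant for the same reason, with the transmit path expected to provide the RCU protection itself.
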
|
@ -528,4 +518,7 @@ extern const struct bond_parm_tbl fail_over_mac_tbl[];
|
|||
extern const struct bond_parm_tbl pri_reselect_tbl[];
|
||||
extern struct bond_parm_tbl ad_select_tbl[];
|
||||
|
||||
/* exported from bond_netlink.c */
|
||||
extern struct rtnl_link_ops bond_link_ops;
|
||||
|
||||
#endif /* _LINUX_BONDING_H */
|
||||
|
|
|
@ -1347,7 +1347,7 @@ static int at91_can_probe(struct platform_device *pdev)
|
|||
priv->reg_base = addr;
|
||||
priv->devtype_data = *devtype_data;
|
||||
priv->clk = clk;
|
||||
priv->pdata = pdev->dev.platform_data;
|
||||
priv->pdata = dev_get_platdata(&pdev->dev);
|
||||
priv->mb0_id = 0x7ff;
|
||||
|
||||
netif_napi_add(dev, &priv->napi, at91_poll, get_mb_rx_num(priv));
|
||||
|
|
|
@ -539,7 +539,7 @@ static int bfin_can_probe(struct platform_device *pdev)
|
|||
struct resource *res_mem, *rx_irq, *tx_irq, *err_irq;
|
||||
unsigned short *pdata;
|
||||
|
||||
pdata = pdev->dev.platform_data;
|
||||
pdata = dev_get_platdata(&pdev->dev);
|
||||
if (!pdata) {
|
||||
dev_err(&pdev->dev, "No platform data provided!\n");
|
||||
err = -EINVAL;
|
||||
|
|
|
@ -160,7 +160,6 @@ static int c_can_pci_probe(struct pci_dev *pdev,
|
|||
return 0;
|
||||
|
||||
out_free_c_can:
|
||||
pci_set_drvdata(pdev, NULL);
|
||||
free_c_can_dev(dev);
|
||||
out_iounmap:
|
||||
pci_iounmap(pdev, addr);
|
||||
|
@ -181,7 +180,6 @@ static void c_can_pci_remove(struct pci_dev *pdev)
|
|||
|
||||
unregister_c_can_dev(dev);
|
||||
|
||||
pci_set_drvdata(pdev, NULL);
|
||||
free_c_can_dev(dev);
|
||||
|
||||
pci_iounmap(pdev, priv->base);
|
||||
|
|
|
@ -322,7 +322,7 @@ static struct platform_driver c_can_plat_driver = {
|
|||
.driver = {
|
||||
.name = KBUILD_MODNAME,
|
||||
.owner = THIS_MODULE,
|
||||
.of_match_table = of_match_ptr(c_can_of_table),
|
||||
.of_match_table = c_can_of_table,
|
||||
},
|
||||
.probe = c_can_plat_probe,
|
||||
.remove = c_can_plat_remove,
|
||||
|
|
|
@ -152,7 +152,7 @@ static int cc770_get_platform_data(struct platform_device *pdev,
|
|||
struct cc770_priv *priv)
|
||||
{
|
||||
|
||||
struct cc770_platform_data *pdata = pdev->dev.platform_data;
|
||||
struct cc770_platform_data *pdata = dev_get_platdata(&pdev->dev);
|
||||
|
||||
priv->can.clock.freq = pdata->osc_freq;
|
||||
if (priv->cpu_interface & CPUIF_DSC)
|
||||
|
@ -203,7 +203,7 @@ static int cc770_platform_probe(struct platform_device *pdev)
|
|||
|
||||
if (pdev->dev.of_node)
|
||||
err = cc770_get_of_node_data(pdev, priv);
|
||||
else if (pdev->dev.platform_data)
|
||||
else if (dev_get_platdata(&pdev->dev))
|
||||
err = cc770_get_platform_data(pdev, priv);
|
||||
else
|
||||
err = -ENODEV;
|
||||
|
|
|
@@ -645,19 +645,6 @@ static int can_changelink(struct net_device *dev,
 	/* We need synchronization with dev->stop() */
 	ASSERT_RTNL();
 
-	if (data[IFLA_CAN_CTRLMODE]) {
-		struct can_ctrlmode *cm;
-
-		/* Do not allow changing controller mode while running */
-		if (dev->flags & IFF_UP)
-			return -EBUSY;
-		cm = nla_data(data[IFLA_CAN_CTRLMODE]);
-		if (cm->flags & ~priv->ctrlmode_supported)
-			return -EOPNOTSUPP;
-		priv->ctrlmode &= ~cm->mask;
-		priv->ctrlmode |= cm->flags;
-	}
-
 	if (data[IFLA_CAN_BITTIMING]) {
 		struct can_bittiming bt;
 
@@ -680,6 +667,19 @@ static int can_changelink(struct net_device *dev,
 		}
 	}
 
+	if (data[IFLA_CAN_CTRLMODE]) {
+		struct can_ctrlmode *cm;
+
+		/* Do not allow changing controller mode while running */
+		if (dev->flags & IFF_UP)
+			return -EBUSY;
+		cm = nla_data(data[IFLA_CAN_CTRLMODE]);
+		if (cm->flags & ~priv->ctrlmode_supported)
+			return -EOPNOTSUPP;
+		priv->ctrlmode &= ~cm->mask;
+		priv->ctrlmode |= cm->flags;
+	}
+
 	if (data[IFLA_CAN_RESTART_MS]) {
 		/* Do not allow changing restart delay while running */
 		if (dev->flags & IFF_UP)
@@ -702,17 +702,17 @@ static int can_changelink(struct net_device *dev,
 static size_t can_get_size(const struct net_device *dev)
 {
 	struct can_priv *priv = netdev_priv(dev);
-	size_t size;
+	size_t size = 0;
 
-	size = nla_total_size(sizeof(u32));   /* IFLA_CAN_STATE */
-	size += nla_total_size(sizeof(struct can_ctrlmode));	/* IFLA_CAN_CTRLMODE */
-	size += nla_total_size(sizeof(u32));	/* IFLA_CAN_RESTART_MS */
-	size += nla_total_size(sizeof(struct can_bittiming));	/* IFLA_CAN_BITTIMING */
-	size += nla_total_size(sizeof(struct can_clock));	/* IFLA_CAN_CLOCK */
-	if (priv->do_get_berr_counter)		/* IFLA_CAN_BERR_COUNTER */
-		size += nla_total_size(sizeof(struct can_berr_counter));
-	if (priv->bittiming_const)		/* IFLA_CAN_BITTIMING_CONST */
+	size += nla_total_size(sizeof(struct can_bittiming));	/* IFLA_CAN_BITTIMING */
+	if (priv->bittiming_const)				/* IFLA_CAN_BITTIMING_CONST */
 		size += nla_total_size(sizeof(struct can_bittiming_const));
+	size += nla_total_size(sizeof(struct can_clock));	/* IFLA_CAN_CLOCK */
+	size += nla_total_size(sizeof(u32));			/* IFLA_CAN_STATE */
+	size += nla_total_size(sizeof(struct can_ctrlmode));	/* IFLA_CAN_CTRLMODE */
+	size += nla_total_size(sizeof(u32));			/* IFLA_CAN_RESTART_MS */
+	if (priv->do_get_berr_counter)				/* IFLA_CAN_BERR_COUNTER */
+		size += nla_total_size(sizeof(struct can_berr_counter));
 
 	return size;
 }
@@ -726,23 +726,20 @@ static int can_fill_info(struct sk_buff *skb, const struct net_device *dev)
 
 	if (priv->do_get_state)
 		priv->do_get_state(dev, &state);
-	if (nla_put_u32(skb, IFLA_CAN_STATE, state) ||
-	    nla_put(skb, IFLA_CAN_CTRLMODE, sizeof(cm), &cm) ||
-	    nla_put_u32(skb, IFLA_CAN_RESTART_MS, priv->restart_ms) ||
-	    nla_put(skb, IFLA_CAN_BITTIMING,
+
+	if (nla_put(skb, IFLA_CAN_BITTIMING,
 		    sizeof(priv->bittiming), &priv->bittiming) ||
-	    nla_put(skb, IFLA_CAN_CLOCK, sizeof(cm), &priv->clock) ||
-	    (priv->do_get_berr_counter &&
-	     !priv->do_get_berr_counter(dev, &bec) &&
-	     nla_put(skb, IFLA_CAN_BERR_COUNTER, sizeof(bec), &bec)) ||
 	    (priv->bittiming_const &&
 	     nla_put(skb, IFLA_CAN_BITTIMING_CONST,
-		     sizeof(*priv->bittiming_const), priv->bittiming_const)))
-		goto nla_put_failure;
+		     sizeof(*priv->bittiming_const), priv->bittiming_const)) ||
+	    nla_put(skb, IFLA_CAN_CLOCK, sizeof(cm), &priv->clock) ||
+	    nla_put_u32(skb, IFLA_CAN_STATE, state) ||
+	    nla_put(skb, IFLA_CAN_CTRLMODE, sizeof(cm), &cm) ||
+	    nla_put_u32(skb, IFLA_CAN_RESTART_MS, priv->restart_ms) ||
+	    (priv->do_get_berr_counter &&
+	     !priv->do_get_berr_counter(dev, &bec) &&
+	     nla_put(skb, IFLA_CAN_BERR_COUNTER, sizeof(bec), &bec)))
+		return -EMSGSIZE;
+
 	return 0;
-
-nla_put_failure:
-	return -EMSGSIZE;
 }
 
 static size_t can_get_xstats_size(const struct net_device *dev)
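The can_get_size()/can_fill_info() rework above keeps the size estimate and the netlink fill routine enumerating the same attributes in the same order, so everything the fill path may emit is accounted for in the reserved length. A generic sketch of that size/fill pairing, with hypothetical IFLA_FOO_* attributes, foo_cfg and foo_priv standing in for the real CAN types:

	#include <linux/netdevice.h>
	#include <net/netlink.h>

	/* Hypothetical attributes and private state, for illustration only. */
	enum { IFLA_FOO_STATE = 1, IFLA_FOO_CFG = 2 };
	struct foo_cfg { u32 mode; };
	struct foo_priv { u32 state; struct foo_cfg cfg; };

	static size_t foo_get_size(const struct net_device *dev)
	{
		size_t size = 0;

		size += nla_total_size(sizeof(u32));		/* IFLA_FOO_STATE */
		size += nla_total_size(sizeof(struct foo_cfg));	/* IFLA_FOO_CFG */

		return size;
	}

	static int foo_fill_info(struct sk_buff *skb, const struct net_device *dev)
	{
		struct foo_priv *priv = netdev_priv(dev);

		/* same attributes, same order as foo_get_size() */
		if (nla_put_u32(skb, IFLA_FOO_STATE, priv->state) ||
		    nla_put(skb, IFLA_FOO_CFG, sizeof(priv->cfg), &priv->cfg))
			return -EMSGSIZE;

		return 0;
	}
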
@@ -1068,7 +1068,7 @@ static int flexcan_probe(struct platform_device *pdev)
 	priv->dev = dev;
 	priv->clk_ipg = clk_ipg;
 	priv->clk_per = clk_per;
-	priv->pdata = pdev->dev.platform_data;
+	priv->pdata = dev_get_platdata(&pdev->dev);
 	priv->devtype_data = devtype_data;
 
 	priv->reg_xceiver = devm_regulator_get(&pdev->dev, "xceiver");
@@ -1769,7 +1769,7 @@ static int ican3_probe(struct platform_device *pdev)
 	struct device *dev;
 	int ret;
 
-	pdata = pdev->dev.platform_data;
+	pdata = dev_get_platdata(&pdev->dev);
 	if (!pdata)
 		return -ENXIO;
 
@@ -999,7 +999,7 @@ static int mcp251x_can_probe(struct spi_device *spi)
 {
 	struct net_device *net;
 	struct mcp251x_priv *priv;
-	struct mcp251x_platform_data *pdata = spi->dev.platform_data;
+	struct mcp251x_platform_data *pdata = dev_get_platdata(&spi->dev);
 	int ret = -ENODEV;
 
 	if (!pdata)
@@ -297,8 +297,8 @@ struct mscan_priv {
 	struct napi_struct napi;
 };
 
-extern struct net_device *alloc_mscandev(void);
-extern int register_mscandev(struct net_device *dev, int mscan_clksrc);
-extern void unregister_mscandev(struct net_device *dev);
+struct net_device *alloc_mscandev(void);
+int register_mscandev(struct net_device *dev, int mscan_clksrc);
+void unregister_mscandev(struct net_device *dev);
 
 #endif /* __MSCAN_H__ */
@@ -964,7 +964,6 @@ static void pch_can_remove(struct pci_dev *pdev)
 	pci_disable_msi(priv->dev);
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
-	pci_set_drvdata(pdev, NULL);
 	pch_can_reset(priv);
 	pci_iounmap(pdev, priv->regs);
 	free_candev(priv->ndev);
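This and the similar hunks below drop the explicit pci_set_drvdata(pdev, NULL) from remove and error paths; they appear to rely on the driver core clearing the drvdata pointer itself once the device is unbound, which makes the manual NULL store redundant. A sketch of what a typical PCI .remove() looks like after the cleanup (foo_remove() is hypothetical):

	#include <linux/netdevice.h>
	#include <linux/pci.h>

	static void foo_remove(struct pci_dev *pdev)
	{
		struct net_device *dev = pci_get_drvdata(pdev);

		unregister_netdev(dev);
		free_netdev(dev);
		pci_release_regions(pdev);
		pci_disable_device(pdev);
		/* no pci_set_drvdata(pdev, NULL): the core clears it on unbind */
	}
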
@@ -207,7 +207,6 @@ static void ems_pci_del_card(struct pci_dev *pdev)
 	kfree(card);
 
 	pci_disable_device(pdev);
-	pci_set_drvdata(pdev, NULL);
 }
 
 static void ems_pci_card_reset(struct ems_pci_card *card)
@@ -387,7 +387,6 @@ static void kvaser_pci_remove_one(struct pci_dev *pdev)
 
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
-	pci_set_drvdata(pdev, NULL);
 }
 
 static struct pci_driver kvaser_pci_driver = {
@@ -744,8 +744,6 @@ static void peak_pci_remove(struct pci_dev *pdev)
 	pci_iounmap(pdev, cfg_base);
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
-
-	pci_set_drvdata(pdev, NULL);
 }
 
 static struct pci_driver peak_pci_driver = {
@@ -477,7 +477,6 @@ static void plx_pci_del_card(struct pci_dev *pdev)
 	kfree(card);
 
 	pci_disable_device(pdev);
-	pci_set_drvdata(pdev, NULL);
 }
 
 /*
@@ -76,7 +76,7 @@ static int sp_probe(struct platform_device *pdev)
 	struct resource *res_mem, *res_irq;
 	struct sja1000_platform_data *pdata;
 
-	pdata = pdev->dev.platform_data;
+	pdata = dev_get_platdata(&pdev->dev);
 	if (!pdata) {
 		dev_err(&pdev->dev, "No platform data provided!\n");
 		err = -ENODEV;
@@ -71,34 +71,34 @@ struct softing {
 	} id;
 };
 
-extern int softing_default_output(struct net_device *netdev);
+int softing_default_output(struct net_device *netdev);
 
-extern ktime_t softing_raw2ktime(struct softing *card, u32 raw);
+ktime_t softing_raw2ktime(struct softing *card, u32 raw);
 
-extern int softing_chip_poweron(struct softing *card);
+int softing_chip_poweron(struct softing *card);
 
-extern int softing_bootloader_command(struct softing *card, int16_t cmd,
-		const char *msg);
+int softing_bootloader_command(struct softing *card, int16_t cmd,
+			       const char *msg);
 
 /* Load firmware after reset */
-extern int softing_load_fw(const char *file, struct softing *card,
-		__iomem uint8_t *virt, unsigned int size, int offset);
+int softing_load_fw(const char *file, struct softing *card,
+		    __iomem uint8_t *virt, unsigned int size, int offset);
 
 /* Load final application firmware after bootloader */
-extern int softing_load_app_fw(const char *file, struct softing *card);
+int softing_load_app_fw(const char *file, struct softing *card);
 
 /*
  * enable or disable irq
  * only called with fw.lock locked
  */
-extern int softing_enable_irq(struct softing *card, int enable);
+int softing_enable_irq(struct softing *card, int enable);
 
 /* start/stop 1 bus on card */
-extern int softing_startstop(struct net_device *netdev, int up);
+int softing_startstop(struct net_device *netdev, int up);
 
 /* netif_rx() */
-extern int softing_netdev_rx(struct net_device *netdev,
-		const struct can_frame *msg, ktime_t ktime);
+int softing_netdev_rx(struct net_device *netdev, const struct can_frame *msg,
+		      ktime_t ktime);
 
 /* SOFTING DPRAM mappings */
 #define DPRAM_RX 0x0000
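The header cleanups in this series all follow the same rule: "extern" carries no meaning on a function declaration, so it is simply dropped, while it stays on object declarations (for example the ei_netdev_ops/eip_netdev_ops tables later in this diff), where it is still required. In sketch form, with a hypothetical foo_start() and foo_netdev_ops:

	/* foo.h -- sketch only */
	extern int foo_start(struct net_device *dev);	/* old style */
	int foo_start(struct net_device *dev);		/* same meaning, preferred */

	extern const struct net_device_ops foo_netdev_ops;	/* objects still need extern */
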
@@ -768,7 +768,7 @@ static int softing_pdev_remove(struct platform_device *pdev)
 
 static int softing_pdev_probe(struct platform_device *pdev)
 {
-	const struct softing_platform_data *pdat = pdev->dev.platform_data;
+	const struct softing_platform_data *pdat = dev_get_platdata(&pdev->dev);
 	struct softing *card;
 	struct net_device *netdev;
 	struct softing_priv *priv;
@@ -286,15 +286,6 @@ static inline u32 hecc_get_bit(struct ti_hecc_priv *priv, int reg, u32 bit_mask)
 	return (hecc_read(priv, reg) & bit_mask) ? 1 : 0;
 }
 
-static int ti_hecc_get_state(const struct net_device *ndev,
-			     enum can_state *state)
-{
-	struct ti_hecc_priv *priv = netdev_priv(ndev);
-
-	*state = priv->can.state;
-	return 0;
-}
-
 static int ti_hecc_set_btc(struct ti_hecc_priv *priv)
 {
 	struct can_bittiming *bit_timing = &priv->can.bittiming;

@@ -894,7 +885,7 @@ static int ti_hecc_probe(struct platform_device *pdev)
 	void __iomem *addr;
 	int err = -ENODEV;
 
-	pdata = pdev->dev.platform_data;
+	pdata = dev_get_platdata(&pdev->dev);
 	if (!pdata) {
 		dev_err(&pdev->dev, "No platform data\n");
 		goto probe_exit;

@@ -940,7 +931,6 @@ static int ti_hecc_probe(struct platform_device *pdev)
 
 	priv->can.bittiming_const = &ti_hecc_bittiming_const;
 	priv->can.do_set_mode = ti_hecc_do_set_mode;
-	priv->can.do_get_state = ti_hecc_get_state;
 	priv->can.do_get_berr_counter = ti_hecc_get_berr_counter;
 	priv->can.ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES;
 
@@ -35,7 +35,7 @@ config EL3
 
 config 3C515
 	tristate "3c515 ISA \"Fast EtherLink\""
-	depends on (ISA || EISA) && ISA_DMA_API
+	depends on ISA && ISA_DMA_API
 	---help---
 	  If you have a 3Com ISA EtherLink XL "Corkscrew" 3c515 Fast Ethernet
 	  network card, say Y and read the Ethernet-HOWTO, available from

@@ -70,7 +70,7 @@ config VORTEX
 	select MII
 	---help---
 	  This option enables driver support for a large number of 10Mbps and
-	  10/100Mbps EISA, PCI and PCMCIA 3Com network cards:
+	  10/100Mbps EISA, PCI and Cardbus 3Com network cards:
 
 	  "Vortex" (Fast EtherLink 3c590/3c592/3c595/3c597) EISA and PCI
 	  "Boomerang" (EtherLink XL 3c900 or 3c905) PCI
@@ -2525,7 +2525,6 @@ typhoon_remove_one(struct pci_dev *pdev)
 	pci_release_regions(pdev);
 	pci_clear_mwi(pdev);
 	pci_disable_device(pdev);
-	pci_set_drvdata(pdev, NULL);
 	free_netdev(dev);
 }
 
@@ -28,42 +28,42 @@ extern int ei_debug;
 #endif
 
 #ifdef CONFIG_NET_POLL_CONTROLLER
-extern void ei_poll(struct net_device *dev);
-extern void eip_poll(struct net_device *dev);
+void ei_poll(struct net_device *dev);
+void eip_poll(struct net_device *dev);
 #endif
 
 
 /* Without I/O delay - non ISA or later chips */
-extern void NS8390_init(struct net_device *dev, int startp);
-extern int ei_open(struct net_device *dev);
-extern int ei_close(struct net_device *dev);
-extern irqreturn_t ei_interrupt(int irq, void *dev_id);
-extern void ei_tx_timeout(struct net_device *dev);
-extern netdev_tx_t ei_start_xmit(struct sk_buff *skb, struct net_device *dev);
-extern void ei_set_multicast_list(struct net_device *dev);
-extern struct net_device_stats *ei_get_stats(struct net_device *dev);
+void NS8390_init(struct net_device *dev, int startp);
+int ei_open(struct net_device *dev);
+int ei_close(struct net_device *dev);
+irqreturn_t ei_interrupt(int irq, void *dev_id);
+void ei_tx_timeout(struct net_device *dev);
+netdev_tx_t ei_start_xmit(struct sk_buff *skb, struct net_device *dev);
+void ei_set_multicast_list(struct net_device *dev);
+struct net_device_stats *ei_get_stats(struct net_device *dev);
 
 extern const struct net_device_ops ei_netdev_ops;
 
-extern struct net_device *__alloc_ei_netdev(int size);
+struct net_device *__alloc_ei_netdev(int size);
 static inline struct net_device *alloc_ei_netdev(void)
 {
 	return __alloc_ei_netdev(0);
 }
 
 /* With I/O delay form */
-extern void NS8390p_init(struct net_device *dev, int startp);
-extern int eip_open(struct net_device *dev);
-extern int eip_close(struct net_device *dev);
-extern irqreturn_t eip_interrupt(int irq, void *dev_id);
-extern void eip_tx_timeout(struct net_device *dev);
-extern netdev_tx_t eip_start_xmit(struct sk_buff *skb, struct net_device *dev);
-extern void eip_set_multicast_list(struct net_device *dev);
-extern struct net_device_stats *eip_get_stats(struct net_device *dev);
+void NS8390p_init(struct net_device *dev, int startp);
+int eip_open(struct net_device *dev);
+int eip_close(struct net_device *dev);
+irqreturn_t eip_interrupt(int irq, void *dev_id);
+void eip_tx_timeout(struct net_device *dev);
+netdev_tx_t eip_start_xmit(struct sk_buff *skb, struct net_device *dev);
+void eip_set_multicast_list(struct net_device *dev);
+struct net_device_stats *eip_get_stats(struct net_device *dev);
 
 extern const struct net_device_ops eip_netdev_ops;
 
-extern struct net_device *__alloc_eip_netdev(int size);
+struct net_device *__alloc_eip_netdev(int size);
 static inline struct net_device *alloc_eip_netdev(void)
 {
 	return __alloc_eip_netdev(0);
@@ -702,7 +702,7 @@ static int ax_init_dev(struct net_device *dev)
 		for (i = 0; i < 16; i++)
 			SA_prom[i] = SA_prom[i+i];
 
-		memcpy(dev->dev_addr, SA_prom, 6);
+		memcpy(dev->dev_addr, SA_prom, ETH_ALEN);
 	}
 
 #ifdef CONFIG_AX88796_93CX6
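Here and in the atarilance, au1000 and pcnet32 hunks below, the magic number 6 in MAC-address copies gives way to ETH_ALEN from <linux/if_ether.h>. A small sketch of the idiom, with a hypothetical foo_set_mac() helper:

	#include <linux/etherdevice.h>
	#include <linux/if_ether.h>
	#include <linux/netdevice.h>
	#include <linux/string.h>

	static void foo_set_mac(struct net_device *dev, const u8 *addr)
	{
		if (is_valid_ether_addr(addr))
			memcpy(dev->dev_addr, addr, ETH_ALEN);	/* not the literal 6 */
		else
			eth_hw_addr_random(dev);	/* no valid address provided */
	}
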
@@ -389,9 +389,7 @@ err_out_free_netdev:
 	free_netdev (dev);
 err_out_free_res:
 	release_region (ioaddr, NE_IO_EXTENT);
-	pci_set_drvdata (pdev, NULL);
 	return -ENODEV;
-
 }
 
 /*

@@ -655,7 +653,6 @@ static void ne2k_pci_remove_one(struct pci_dev *pdev)
 	release_region(dev->base_addr, NE_IO_EXTENT);
 	free_netdev(dev);
 	pci_disable_device(pdev);
-	pci_set_drvdata(pdev, NULL);
 }
 
 #ifdef CONFIG_PM
@@ -835,7 +835,6 @@ static int starfire_init_one(struct pci_dev *pdev,
 	return 0;
 
 err_out_cleardev:
-	pci_set_drvdata(pdev, NULL);
 	iounmap(base);
 err_out_free_res:
 	pci_release_regions (pdev);

@@ -2012,7 +2011,6 @@ static void starfire_remove_one(struct pci_dev *pdev)
 	iounmap(np->base);
 	pci_release_regions(pdev);
 
-	pci_set_drvdata(pdev, NULL);
 	free_netdev(dev);	/* Will also free np!! */
 }
 
@@ -104,6 +104,6 @@ struct bfin_mac_local {
 #endif
 };
 
-extern int bfin_get_ether_addr(char *addr);
+int bfin_get_ether_addr(char *addr);
 
 #endif
@@ -242,13 +242,13 @@ struct lance_private
 #define LANCE_ADDR(x) ((int)(x) & ~0xff000000)
 
 /* Now the prototypes we export */
-extern int lance_open(struct net_device *dev);
-extern int lance_close (struct net_device *dev);
-extern int lance_start_xmit (struct sk_buff *skb, struct net_device *dev);
-extern void lance_set_multicast (struct net_device *dev);
-extern void lance_tx_timeout(struct net_device *dev);
+int lance_open(struct net_device *dev);
+int lance_close (struct net_device *dev);
+int lance_start_xmit (struct sk_buff *skb, struct net_device *dev);
+void lance_set_multicast (struct net_device *dev);
+void lance_tx_timeout(struct net_device *dev);
 #ifdef CONFIG_NET_POLL_CONTROLLER
-extern void lance_poll(struct net_device *dev);
+void lance_poll(struct net_device *dev);
 #endif
 
 #endif /* ndef _7990_H */
@@ -1711,7 +1711,6 @@ static void amd8111e_remove_one(struct pci_dev *pdev)
 		free_netdev(dev);
 		pci_release_regions(pdev);
 		pci_disable_device(pdev);
-		pci_set_drvdata(pdev, NULL);
 	}
 }
 static void amd8111e_config_ipg(struct net_device* dev)

@@ -1967,7 +1966,6 @@ err_free_reg:
 
 err_disable_pdev:
 	pci_disable_device(pdev);
-	pci_set_drvdata(pdev, NULL);
 	return err;
 
 }
@@ -586,10 +586,10 @@ static unsigned long __init lance_probe1( struct net_device *dev,
 	switch( lp->cardtype ) {
 	  case OLD_RIEBL:
 		/* No ethernet address! (Set some default address) */
-		memcpy( dev->dev_addr, OldRieblDefHwaddr, 6 );
+		memcpy(dev->dev_addr, OldRieblDefHwaddr, ETH_ALEN);
 		break;
 	  case NEW_RIEBL:
-		lp->memcpy_f( dev->dev_addr, RIEBL_HWADDR_ADDR, 6 );
+		lp->memcpy_f(dev->dev_addr, RIEBL_HWADDR_ADDR, ETH_ALEN);
 		break;
 	  case PAM_CARD:
 		i = IO->eeprom;
@@ -1138,7 +1138,7 @@ static int au1000_probe(struct platform_device *pdev)
 		aup->phy1_search_mac0 = 1;
 	} else {
 		if (is_valid_ether_addr(pd->mac)) {
-			memcpy(dev->dev_addr, pd->mac, 6);
+			memcpy(dev->dev_addr, pd->mac, ETH_ALEN);
 		} else {
 			/* Set a random MAC since no valid provided by platform_data. */
 			eth_hw_addr_random(dev);
@@ -344,8 +344,8 @@ static void cp_to_buf(const int type, void *to, const void *from, int len)
 	}
 
 	clen = len & 1;
-	rtp = tp;
-	rfp = fp;
+	rtp = (unsigned char *)tp;
+	rfp = (const unsigned char *)fp;
 	while (clen--) {
 		*rtp++ = *rfp++;
 	}

@@ -372,8 +372,8 @@ static void cp_to_buf(const int type, void *to, const void *from, int len)
 	 * do the rest, if any.
 	 */
 	clen = len & 15;
-	rtp = (unsigned char *) tp;
-	rfp = (unsigned char *) fp;
+	rtp = (unsigned char *)tp;
+	rfp = (const unsigned char *)fp;
 	while (clen--) {
 		*rtp++ = *rfp++;
 	}

@@ -403,8 +403,8 @@ static void cp_from_buf(const int type, void *to, const void *from, int len)
 
 	clen = len & 1;
 
-	rtp = tp;
-	rfp = fp;
+	rtp = (unsigned char *)tp;
+	rfp = (const unsigned char *)fp;
 
 	while (clen--) {
 		*rtp++ = *rfp++;

@@ -433,8 +433,8 @@ static void cp_from_buf(const int type, void *to, const void *from, int len)
 	 * do the rest, if any.
 	 */
 	clen = len & 15;
-	rtp = (unsigned char *) tp;
-	rfp = (unsigned char *) fp;
+	rtp = (unsigned char *)tp;
+	rfp = (const unsigned char *)fp;
 	while (clen--) {
 		*rtp++ = *rfp++;
 	}
@@ -754,7 +754,7 @@ lance_open(struct net_device *dev)
 	int i;
 
 	if (dev->irq == 0 ||
-		request_irq(dev->irq, lance_interrupt, 0, lp->name, dev)) {
+		request_irq(dev->irq, lance_interrupt, 0, dev->name, dev)) {
 		return -EAGAIN;
 	}
 
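Passing dev->name instead of the driver-private lp->name to request_irq() presumably ties the IRQ line's label in /proc/interrupts to the actual network interface name rather than to a fixed chip label. A sketch of that usage, with a hypothetical foo_interrupt() handler:

	#include <linux/interrupt.h>
	#include <linux/netdevice.h>

	static irqreturn_t foo_interrupt(int irq, void *dev_id);

	static int foo_open(struct net_device *dev)
	{
		/* label the IRQ with the interface name, e.g. "eth0" */
		if (dev->irq == 0 ||
		    request_irq(dev->irq, foo_interrupt, 0, dev->name, dev))
			return -EAGAIN;

		return 0;
	}
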
@@ -1675,7 +1675,7 @@ pcnet32_probe1(unsigned long ioaddr, int shared, struct pci_dev *pdev)
 			pr_cont(" warning: CSR address invalid,\n");
 			pr_info(" using instead PROM address of");
 		}
-		memcpy(dev->dev_addr, promaddr, 6);
+		memcpy(dev->dev_addr, promaddr, ETH_ALEN);
 	}
 }
 
@@ -2818,7 +2818,6 @@ static void pcnet32_remove_one(struct pci_dev *pdev)
 				    lp->init_block, lp->init_dma_addr);
 		free_netdev(dev);
 		pci_disable_device(pdev);
-		pci_set_drvdata(pdev, NULL);
 	}
 }
 